Steps in this run

| Step | Calls | Tokens in | Cache hit | Cost |
|---|---|---|---|---|
| ranking | 2 | 143,346 | | $0.81232 |
| response generation | 3 | 6,402 | | $0.03620 |
| learning engine pattern analysis | 1 | 13,180 | | $0.01526 |
| haiku prescreen | 2 | 12,894 | | $0.01390 |
| learning engine self eval | 1 | 4,997 | | $0.00902 |
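A quick sanity check on the table above: the per-step call counts and estimated costs can be totaled as follows (a minimal sketch; all figures are copied from the table, and the dictionaries are just an illustrative way to hold them):

```python
# Per-step figures copied from the "Steps in this run" table above.
step_costs = {
    "ranking": 0.81232,
    "response generation": 0.03620,
    "learning engine pattern analysis": 0.01526,
    "haiku prescreen": 0.01390,
    "learning engine self eval": 0.00902,
}
step_calls = {
    "ranking": 2,
    "response generation": 3,
    "learning engine pattern analysis": 1,
    "haiku prescreen": 2,
    "learning engine self eval": 1,
}

total_calls = sum(step_calls.values())          # should match "All 9 API calls"
total_cost = round(sum(step_costs.values()), 5)
print(f"{total_calls} calls, ${total_cost:.5f} total")  # 9 calls, $0.88670 total
```

The call counts sum to 9, consistent with the "All 9 API calls" line below the table.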
All 9 API calls
Est. cost (USD): $0.011322
Result preview:

```json
[
  {
    "post_index": 7,
    "cluster_ids": [1],
    "claim": "Language models retrained for warmth make 10-30% more errors on medical advice",
    "argument_type": "empirical_claim",
    "stance": "challenges_status_quo",
    "hyde_excerpt": "Clinical decision support systems optimized for user experience and engagement may inadvertently sacrifice accuracy. Research from the Oxford Inte
```
Tokens in (billed): 88,584
Est. cost (USD): $0.483848
Result preview:

```json
[
  {
    "post_index": 22,
    "matched_article_id": 535,
    "match_confidence": 88,
    "match_reason": "The tweet cites Dr. Makary's specific stat that '45% of drug development time is dead time' due to paperwork and idle periods between trials — which directly maps to the article's core claim that FDA phase gates are latency artifacts of batch-based regulatory review, not biological n
```
Est. cost (USD): $0.011307
Result preview:

That 45% figure is the key to understanding why the RTCT announcement matters more than most coverage suggested.

The dead time was never random inefficiency. It was load-bearing. Phase gates exist precisely because a paper-based regulator needed discrete submission windows, and the entire financing architecture of biotech grew around those windows. Tranched VC rounds, milestone-based licensing de
Est. cost (USD): $0.012924
Result preview:

The question this raises for me: who is actually responsible for preparing patients for the maintenance phase, the provider, the PBM, the employer, or the digital health program they enrolled in to get coverage approved in the first place?

My read, based on what I've been tracking, is that nobody has clean accountability here, and that gap is the ROI problem hiding in plain sight. Prime Therapeut
Est. cost (USD): $0.011970
Result preview:

The warmth-accuracy tradeoff is real, but the problem in clinical deployment runs even deeper than fine-tuning choices. Physicians already override more than 90% of CDS alerts, a behavioral pattern that took years of bad signal to produce. If a foundation model gets retrained for warmth and becomes less accurate, clinicians who already distrust the system won't suddenly start engaging with it beca
Est. cost (USD): $0.002580
Result preview:

```json
[]
```
Tokens in (billed): 51,391
Est. cost (USD): $0.328476
Result preview:

```json
[]
```
Est. cost (USD): $0.009018
Result preview:

```json
[
  {"post_index": 0, "prediction": "approve", "confidence": 78, "reason": "Healthcare data security and systemic infrastructure vulnerability analysis aligns with writer's interest in healthcare system design and policy failures"},
  {"post_index": 1, "prediction": "reject", "confidence": 8
```
Tokens in (billed): 13,180
Est. cost (USD): $0.015264
Result preview:

```json
[
  {
    "category": "ai_agent_security_incident_hype",
    "summary": "Posts sensationalizing AI agent security vulnerabilities without healthcare system context or real-world impact validation.",
    "exclusion_rule": "Exclude posts that report isolated AI agent security incidents (e.g.,
```