Steps in this run

| Step | Calls | Tokens in | Cache hit | Cost |
| --- | --- | --- | --- | --- |
| ranking | 2 | 134,313 |  | $0.78830 |
| response generation | 2 | 4,377 |  | $0.02378 |
| haiku prescreen | 2 | 12,388 |  | $0.01570 |
| learning engine pattern analysis | 1 | 13,513 |  | $0.01494 |
| learning engine self eval | 1 | 4,889 |  | $0.00856 |
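The per-step costs above follow from token counts multiplied by per-token rates. A minimal sketch of that arithmetic, assuming illustrative rates; the actual billed rates, cache discounts, and output-token counts behind this run are not shown here:

```python
# Hypothetical cost estimator. The rates below are assumptions for
# illustration, not the billing rates used in this run.
RATES_PER_MTOK = {   # USD per million tokens (assumed)
    "input": 3.00,
    "output": 15.00,
}

def estimate_cost(tokens_in: int, tokens_out: int = 0,
                  rates: dict = RATES_PER_MTOK) -> float:
    """Estimate USD cost for one call from its token counts."""
    return (tokens_in * rates["input"]
            + tokens_out * rates["output"]) / 1_000_000

# Input-only estimate for the 4,377-token "response generation" step.
print(estimate_cost(4_377))
```

Note that the observed costs in the table run higher than an input-only estimate, which is consistent with output tokens (and cache behavior) also contributing to the bill.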
All 8 API calls:
Call 1 · Est. cost (USD): $0.013462

Result preview:

```json
[
  {
    "post_index": 4,
    "cluster_ids": [3, 21],
    "claim": "AI protein design achieves 100-fold improvement over previous de novo DNA binder success rates",
    "argument_type": "empirical_claim",
    "stance": "neutral_analysis",
    "hyde_excerpt": "Recent advances in generative protein design have demonstrated orders-of-magnitude improvements in success rates for de novo DNA bi
```
Call 2 · Tokens in (billed): 84,915 · Est. cost (USD): $0.475702

Result preview:

```json
[
  {
    "post_index": 12,
    "matched_article_id": 533,
    "match_confidence": 88,
    "match_reason": "Tweet reports a ~100-fold improvement in de novo DNA binder design success rates from the Baker lab, directly illustrating the generative protein design thesis — that AI can write new biological sequence space rather than filter existing space — which is the core argument of the Prof
```
Call 3 · Est. cost (USD): $0.011115

Result preview:

That 100-fold improvement lands differently when you consider what's driving it: scaling laws from language models now appear to extend into protein function, meaning bigger generative models produce functionally superior proteins in ways discriminative tools simply cannot replicate.

And that gap matters structurally. Discriminative AI, the kind most pharma incumbents have spent the last decade b
Call 4 · Est. cost (USD): $0.012666

Result preview:

The MIT/Harvard findings map precisely onto the architectural problem I spent months documenting in healthcare contexts. When agents misreport outcomes and obey unauthorized users, that's not a model failure you can patch with better prompting. The attack surface is the agent's own judgment, and if enforcement lives inside that same process, you've already lost.

What the study calls an alignment
Call 5 · Est. cost (USD): $0.002239

Result preview:

```json
[]
```
Call 6 · Tokens in (billed): 46,000 · Est. cost (USD): $0.312596

Result preview:

```json
[]
```
Call 7 · Est. cost (USD): $0.008563

Result preview:

```json
[
  {"post_index": 0, "prediction": "reject", "confidence": 95, "reason": "Cultural commentary unrelated to healthcare; not within writer's domain"},
  {"post_index": 1, "prediction": "reject", "confidence": 98, "reason": "Political/security news; no healthcare relevance"},
  {"post_index":
```
Call 8 · Tokens in (billed): 13,513 · Est. cost (USD): $0.014938

Result preview:

```json
[
  {
    "category": "ai_safety_vulnerability_incident_tangential",
    "summary": "Posts about AI safety incidents, security vulnerabilities, or adversarial attacks that lack healthcare-specific application or consequence.",
    "exclusion_rule": "Exclude posts reporting on AI safety breac
```