Focus on health and medical? AI beat the doctors in the study 67% to 55% -> Study: OpenAI's o1 correctly diagnosed 67% of emergency room patients using electronic records and a few sentences from nurses, vs. to 50-55% for triage doctors
"A groundbreaking Harvard study has found
Diagnostic accuracy on complex data is a real finding, and the 67% vs. 55% gap is meaningful. But the framing misses what the actual structural problem in medicine is.
PCPs are not primarily over-referring because they lack diagnostic accuracy. They over-refer because the workflow gives them no safe middle option. There is no documented, billable, legally defensible path between "manage it alone" and "send them to a specialist." That is a system design failure, not a clinical competence failure. AI accuracy scores do not fix that.
But what does fix it is combining AI triage with asynchronous specialist review, so the PCP has a documented consult trail before making a final call. Ontario ran nearly 100,000 cases through that model with a two-day average response, and Geisinger cut specialist office visits 74% in the first month. The question is not whether AI can outperform a tired ER doctor on a benchmark. The question is whether it gets embedded in a workflow where the legal and financial rails reward using it. And that system barely exists yet.
https://www.onhealthcare.tech/p/the-pcp-as-specialist-how-ai-and?utm_source=x&utm_medium=reply&utm_content=2050911304919990471&utm_campaign=the-pcp-as-specialist-how-ai-and
“there is very little evidence for LLMs benefiting patients or doctors for health outcomes. That is not to say that generative AI doesn’t help. It offers strong support for administrative work, such as summarizing charts for doctors, or reviewing labs for patients, or providing
The administrative framing is fair, but the outcomes framing is where it gets complicated. The UCLA ambient scribe RCT, 238 physicians across 72,000 encounters, showed a 7 percent improvement on the Stanford Professional Fulfillment Index, which is a validated burnout measure. That's a physician wellbeing outcome. Whether that counts as a "clinical outcome" depends on how narrow your definition is.
The more interesting problem is that most of the weak evidence base reflects a methodological failure, not a capability failure. The majority of published clinical AI work is still observational, with homegrown surveys and self-reported time estimates that wouldn't survive procurement scrutiny at any serious health system. When you run actual RCTs with validated instruments, different signals start to emerge.
The medication safety data is the sharpest example. Pharmacist-plus-LLM copilot mode achieved 61 percent accuracy on serious-harm drug-related problems, 1.5 times better than pharmacists alone. That's not an administrative workflow. The catch is that LLM-only mode underperformed the copilot configuration, which is exactly why the "full automation vs. narrow documentation assist" framing misses the real category: structured human-AI collaboration in clinical workflows.
The bifurcation the post is pointing to probably isn't admin versus clinical. It's more likely rigorous evidence versus the absence of it, and that cuts across both domains.
https://www.onhealthcare.tech/p/what-actually-matters-in-clinical?utm_source=x&utm_medium=reply&utm_content=2050960366289473957&utm_campaign=what-actually-matters-in-clinical
1640/280 chars — OVER LIMIT
@trajektoriePL✓ 2×·1,814 views84%·2 replies
5/3/26 4:28 PM ET
Prof. @DeryaTR_ A task that would normally take a researcher months, the AI model crunched in 17 minutes. It didn’t just explain the mechanism—it proposed the experiment to prove it.
It looks like after programming and math, AI is coming for medicine.
https://t.co/IF8Jr2GBMj
The 17-minute part gets all the attention. The quiet problem is what happens next.
A molecule with a clear mechanism still has to prove itself in humans, and that process runs on infrastructure built for a world where candidates arrived slowly. FDA's 2025 draft guidance on external control arms reads less like permission and more like a spec sheet for something nobody has fully built yet: phenotype alignment, covariate balance, endpoint mapping across data sources. The faster AI fills the front of the pipe, the more that unbuilt layer becomes the actual rate limit.
Wrote about this dynamic at length. The next durable companies in this space won't find drugs. They'll build the evidence layer that lets any drug prove itself.
https://www.onhealthcare.tech/p/clinical-trials-are-the-new-bottleneck?utm_source=x&utm_medium=reply&utm_content=2051012590717812747&utm_campaign=clinical-trials-are-the-new-bottleneck
920/280 chars — OVER LIMIT
@ManOnThePen✓ 3×·7,326 views92%·12 replies
5/3/26 4:28 PM ET
The US government’s strategy to throttle non-branded GLP-1 so far seems to be:
- Choke off 503B mass compounds using regulatory authority, blocking GLP-1 from bulk substances list
- Force the burden of meeting demand to 503A, which are not legally permitted to mass produce
The clinical need vs. economic need split is where this gets precise. FDA's April 30 proposal doesn't just close the shortage pathway, it closes the 503B Bulks List door entirely by rejecting affordability as a valid form of clinical need. That's the durable part. Even if a future shortage were declared, the Bulks List exclusion stands on separate statutory ground.
The 503A redirect you're describing isn't a pressure valve, it's a dead end by design. Patient-specific compounding can't carry anything close to the volume that made compounded GLP-1s roughly 30% of total US supply at peak. The economics of the telehealth platforms built on this, Hims, Ro, LifeMD, were never about prescribing. They were subscription businesses running on $200-$400/month compounded product with SaaS-style retention math. 503A destroys that unit structure completely.
What FDA has actually done is relocate the affordability problem. It's not in their jurisdiction anymore. It moves to Medicare statutory reform, employer coverage decisions, and manufacturer DTC pricing channels. That's a choice with consequences that extend well past GLP-1s to any expensive branded drug where demand is price-gated rather than supply-gated.
https://www.onhealthcare.tech/p/fda-closes-the-503b-bulks-door-on?utm_source=x&utm_medium=reply&utm_content=2050565805532713199&utm_campaign=fda-closes-the-503b-bulks-door-on
1393/280 chars — OVER LIMIT
@linasbeliunas·935 views88%·2 replies
5/3/26 4:28 PM ET
Harvey and Legora AI are worth $11B and $5.5B.
This guy rebuilt their entire web application and he’s making it all open-source.
Separately, Microsoft just launched their own legal AI Agent.
Wrappers have no moat. https://t.co/D2WdXUBUnv
Everlywell hit a $2.9B valuation without a single centrifuge. Quest and LabCorp run 80% of US clinical lab volume and their combined market cap is $40B. The "wrapper" did more per dollar of capital than the infrastructure it rode on.
The moat argument always focuses on replication risk (can someone rebuild the app?) and ignores substitution risk (can the infrastructure provider serve your customers directly?). Quest and LabCorp structurally cannot go direct-to-consumer without torching their B2B hospital relationships. That's the innovator's dilemma doing the work, not proprietary code.
Legal AI is the same structure. Microsoft launching a legal agent doesn't collapse Harvey's value any more than AWS launching database tools collapsed Snowflake. But the better question is what Harvey actually owns: it's the workflow integration, the client-specific fine-tuning, the trust relationship with BigLaw's billing partners. None of that lives in the model weights.
And the open-source rebuild point cuts both ways. If someone can reconstruct the interface in a weekend, the interface was never the moat. The moat is the regulatory navigation, the domain-specific compliance layer, the customer data that improves the product. Those don't get forked on GitHub.
Multiple stablecoins coexist wrapping the same USD. USDT at $80B+ and USDC at $30B+ didn't cannibalize each other. Wrapper markets are rarely winner-take-all, which is exactly why the competitive dynamics are more favorable than infrastructure, not less.
https://www.onhealthcare.tech/p/the-wrapper-economy-why-building?utm_source=x&utm_medium=reply&utm_content=2050880088338485326&utm_campaign=the-wrapper-economy-why-building
1697/280 chars — OVER LIMIT
@TheSCIF·4,516 views82%·13 replies
5/3/26 3:05 PM ET
Justice Department Launches West Coast Health Care Fraud Strike Force
Targeting Medicare & Medicaid scams in Northern California, Nevada & Arizona, 10+ prosecutors teaming up with FBI, HHS & DEA, active in 9 regions nationwide.
This is what I voted for.
The Strike Force expansion is directionally right, but the prosecution end of the chain is where you're catching fraud that already paid out, sometimes years earlier. The pay-and-chase structure of Medicare fee-for-service means the government is essentially running a credit system with open enrollment for providers, and by the time DOJ indicts, the money has moved through shell companies, across borders, or into real estate.
What I found when I traced this more carefully, https://www.onhealthcare.tech/p/prior-auth-and-denials-are-healthcares?utm_source=x&utm_medium=reply&utm_content=2050950307350221176&utm_campaign=prior-auth-and-denials-are-healthcares, is that the structural gap between commercial plan fraud rates of 1-3% and Medicare and Medicaid losses running 8-20% in some years comes down almost entirely to prospective controls. Commercial payers check before paying. Medicare largely does not. Strike Forces are a response to that architectural failure, not a fix for it.
The West Coast expansion targeting Northern California makes geographic sense given the DME and home health fraud density there, but the more uncomfortable question is whether Congress is simultaneously weakening the only system that actually prevents this at the point of payment. The push to gut prior auth in commercial plans, framed as patient advocacy, would remove the exact upstream controls that keep commercial fraud rates an order of magnitude lower than what these prosecutors are chasing in federal court.
More Strike Force prosecutors is a good thing. But you can't prosecute your way to the fraud differential that commercial payers achieve through prospective review.
1677/280 chars — OVER LIMIT
@GuntherEagleman✓ 1×·12,539 views84%·53 replies
5/3/26 3:05 PM ET
🚨 TRUMP’S DOJ JUST DROPPED THE HAMMER: WEST COAST HEALTHCARE FRAUD STRIKE FORCE LAUNCHED!
The National Fraud Enforcement Division is going after scammers AGGRESSIVELY with a brand new task force targeting the West Coast.
Taxpayers have been bled dry long enough. Fraudsters free ride is OVER.
Real accountability is back.
Thank you @nickshirleyy! You made it happen!
What nobody's answering yet: will a new strike force actually change the structural incentives, or just produce another round of arrests that leave the payment model intact?
The enforcement is real. Operation Never Say Die charged 15 defendants and arrested 8 covering $60M in alleged Medicare fraud, and one Van Nuys building alone housed 197 registered hospice companies. But the arrests didn't stop the underlying math. Non-hospice spending during hospice elections ran $790M in FY 2020. By FY 2024 it hit $2.8B. That's not a fraud wave the DOJ missed. That's a per diem payment structure that makes enrolling non-dying patients and withholding actual care the rational economic choice, every single day, at scale.
A strike force hits the operators. The model is still running.
The FY 2027 CMS proposed rule is trying to close that gap with a new scoring index that ranks every hospice in the country on non-hospice billing patterns, essentially building a national targeting list for exactly the kind of rolling enforcement this new West Coast task force would use. The two instruments were published two days apart. That's coordination, not coincidence.
https://www.onhealthcare.tech/p/the-hospice-industries-fraud-crisis?utm_source=x&utm_medium=reply&utm_content=2050959118039478694&utm_campaign=the-hospice-industries-fraud-crisis
1341/280 chars — OVER LIMIT
@andweknow·4,301 views85%·13 replies
5/3/26 3:05 PM ET
🚨PRESIDENT TRUMP DOJ LAUNCHES WEST COAST FRAUD STRIKE FORCE IN CALIFORNIA, ARIZONA & NEVADA! 🚨
After Nick Shirley exposed the massive grift in Gavin Newsom’s California, the president Trump administration is surging prosecutors to crush healthcare fraud.
Assistant AG Colin McDonald:
“We are placing 10 or so additional prosecutors in the Northern District of California, District of Nevada, and District of Arizona. They will be responsible for rooting out health care fraud that is rampant in these districts.”
Data-driven focus on the biggest soft spots where criminals are hiding behind healthcare to steal from taxpayers.
Nick started a national movement. The fraud crackdown is expanding fast.
No more looking the other way. Accountability is here. 🇺🇸
Non-hospice spending billed outside the hospice election in California, Nevada, and Arizona jumped from $790M nationally in FY 2020 to over $2.8B in FY 2024. Those three states are already under enhanced CMS oversight, which means the Strike Force prosecutors are walking into a target environment that CMS has been mapping for years.
The piece that matters here is the SSVI. CMS just proposed scoring every hospice in the country on a 0-16 scale across nine claims-based metrics, with non-hospice spending alone worth up to 8 of those 16 points. That is not a quality scoring system. It is a ranked referral list, and DOJ now has the prosecutors in the right districts to work it.
The Strike Force announcement and the FY 2027 proposed rule published two days after the Operation Never Say Die arrests are the same coordinated infrastructure, not separate policy tracks running in parallel.
https://www.onhealthcare.tech/p/the-hospice-industries-fraud-crisis?utm_source=x&utm_medium=reply&utm_content=2050900426632102189&utm_campaign=the-hospice-industries-fraud-crisis
1073/280 chars — OVER LIMIT
@zerohedge·144,841 views87%·71 replies
5/3/26 3:05 PM ET
Trump Says Medicare Will Soon Cover Weight-Loss Drugs https://t.co/2sBQxZMyAp
The part that gets lost in that framing: "Medicare covering weight-loss drugs" and "Medicare Part D plans covering weight-loss drugs" are two completely different policy problems, and conflating them is exactly how you end up surprised when a model fails.
The BALANCE pause happened because CMS tried to get Part D plan sponsors (Humana, UnitedHealth, CVS Aetna, and others) to voluntarily coordinate around uncapped GLP-1 utilization risk under a single voluntary model. The threshold required near-simultaneous opt-in from essentially every major parent organization. That's not a participation requirement, that's a coordination problem dressed up as one.
What actually exists for 2027 is the GLP-1 Bridge extension, which runs through December 31, 2027 under Section 402(a)(1)(A) authority, not through plan formularies. So coverage exists (and CMS clearly anticipated needing this contingency before the April 20 application deadline even closed), but it doesn't live inside the Part D infrastructure most people picture when they hear "Medicare covers GLP-1s."
The WAC-based gross cost mechanics and the MFP interaction for semaglutide alone created actuarial conditions that plan finance teams couldn't model into a voluntary first-year bid. Any investor thesis pricing in Part D volume pull-through for tirzepatide or semaglutide in 2027 needs to be rebuilt around that, not around what gets announced from a podium.
The Medicaid leg of BALANCE with its July 31 state application window is probably the more consequential near-term story, but that's not the headline anyone is writing.
More on the coordination failure mechanics and what Bridge-as-policy actually means for 2028 planning: https://www.onhealthcare.tech/p/the-balance-model-pause-the-glp-1?utm_source=x&utm_medium=reply&utm_content=2050618849716785427&utm_campaign=the-balance-model-pause-the-glp-1
And if the Bridge generates actuarial experience inside the demo rather than inside plan bids, what exactly does CMS hand sponsors when it tries to relaunch for CY2028?
2046/280 chars — OVER LIMIT
@Medscape✓ 1×·1,223 views85%·3 replies
5/3/26 2:58 PM ET
An AI-based skin assessment app may drive up healthcare visits for benign lesions, with unclear benefits for skin cancer detection, a Dutch clinical trial has found.
The trial, of nearly 20,000 patients in one health insurance plan, found that those given free access to the app https://t.co/5MF5BMnZs8
Skin apps are probably the cleanest real-world test of the mechanism I wrote about in https://www.onhealthcare.tech/p/the-double-edged-algorithm-how-consumer?utm_source=x&utm_medium=reply&utm_content=2050696880040878163&utm_campaign=the-double-edged-algorithm-how-consumer, which is that AI optimized for actionable output will generate utilization regardless of whether outcomes justify it.
The Dutch trial result fits the pattern exactly. The app flags something, the patient goes in, the lesion is benign, no outcome improvement, but the visit happened and the cost is real. That's not a bug in the product, it's a predictable consequence of building a system that rewards engagement over clinical restraint (and "watchful waiting" never shows up well in a five-star review).
What this trial doesn't capture yet is the downstream cascade, the follow-up appointments, the reassurance-seeking, the second opinions when patients distrust the dermatologist who dismissed what the app flagged. That's where the cost picture gets considerably worse.
Retinal AI is interesting because it turns a cheap existing exam into primary care triage. The win is not replacing labs. It is deciding who needs labs now, then making follow up happen. https://t.co/rpAD6kBflR
The retinal AI case fits a broader pattern I've been tracing across the referral system: the value isn't in the AI doing the diagnosis, it's in the AI restructuring the decision sequence so the right next step actually happens.
That's the piece most people miss.
The Ontario eConsult program processed nearly 100,000 cases with a two-day average turnaround, and the volume benchmark matters less than what it proved operationally: that you can insert an asynchronous specialist review layer between the PCP and the referral without the whole workflow collapsing. Retinal screening is the same structural move. The AI doesn't replace the ophthalmologist, it reorders when and whether the ophthalmologist appears in the care sequence at all.
What I keep coming back to is that PCPs over-refer defensively because the workflow gives them no intermediate option. Either they manage alone and carry the liability, or they refer and hand off the liability with the patient. Retinal AI creates a third lane, but that third lane only works if the follow-up orchestration is actually built, not assumed.
And that's where most of these pilots quietly fail. The screening finds the signal. Then what? Who owns the next step, who documents it, and how does the PCP get credit for managing that case rather than just generating a referral? The reimbursement architecture for interprofessional consultation through CPT 99451 and 99452 still can't reliably answer those questions, which means the triage win gets stranded before it reaches the patient.
The question I'm sitting with is whether retinal AI becomes a genuine care model shift or just an expensive screening adjunct that surfaces risk without changing who absorbs it.
https://www.onhealthcare.tech/p/the-pcp-as-specialist-how-ai-and?utm_source=x&utm_medium=reply&utm_content=2050988607284851020&utm_campaign=the-pcp-as-specialist-how-ai-and
1894/280 chars — OVER LIMIT
@QuantumGuard17·81,246 views87%·57 replies
5/3/26 2:58 PM ET
🚨Nick Shirley: This is how the hospice fraud works:
- Get a small office in LA
- Collect Medicare beneficiary numbers
- Enroll people into hospice
- Bill the government for millions
Get caught or become suspicious?
Pack up and walk away with millions
END ALL THE FRAUD. #USA https://t.co/p2D5lG5jZ9
The walk-away part is what makes this architecture so durable. The per diem model pays a flat daily rate (currently $230.83 for routine home care days 1-60) regardless of whether any care is actually delivered, so the fraud isn't a workaround of the system, it's the system running exactly as designed for anyone willing to exploit it. That's why one Van Nuys building had 197 registered hospice companies and why a 76-year-old allegedly ran three fraudulent hospices while federally incarcerated.
What's changing now is the coordination. The FY 2027 CMS proposed rule published two days after Operation Never Say Die arrests, and the new Site-specific Value Index gives CMS and DOJ a ranked national targeting list from claims data alone. High scores aren't a warning, they're effectively a pre-enforcement referral queue. The for-profit sector is averaging 167% higher non-hospice billing per day than nonprofits in FY 2024, which tells you the enrollment-without-care model is scaled, not scattered.
The harder question is whether enforcement campaigns and a scoring index can fix a payment structure that rewards enrollment volume over care delivery, or whether the per diem itself is...
https://www.onhealthcare.tech/p/the-hospice-industries-fraud-crisis?utm_source=x&utm_medium=reply&utm_content=2050915922072752346&utm_campaign=the-hospice-industries-fraud-crisis
1373/280 chars — OVER LIMIT
@hubermanlab✓ 1×·32,575 views83%·32 replies
5/3/26 2:58 PM ET
The (hard for some to accept) reality is that the popularity of GLPs, which of course have lots of RCTs to support them, are actually what opened the doors for the immense interest in all the other peptides. People are conceptual lumpers not splitters. & now its is no going back.
The VA Whole Health rollout is the cleaner example here. CARA mandated non-opioid pain alternatives across VA facilities, and what followed had nothing to do with whether providers accepted the conceptual framing. Acupuncture, mindfulness, whole-person care pathways, all got absorbed because the legislative mandate created a payment and coding pathway, not because GLP-1 success changed anyone's mental model of what a peptide is.
The "conceptual lumper" dynamic you're describing is real at the consumer level. Thirty-seven percent of U.S. adults already using complementary approaches, $30 billion in out-of-pocket spending, that demand existed long before semaglutide became a cultural object. GLP-1 visibility probably accelerated retail interest in compounded peptides like BPC-157 and CJC-1295, but that's a marketing tail, not a regulatory or clinical tail.
Where I'd push back is on "no going back." The gray zone for compounded peptides is structurally fragile regardless of consumer enthusiasm. FDA 503A and 503B tightening doesn't respond to popularity. It responds to safety signals and formal drug status determinations. A peptide that stays in the compounding channel because consumers lump it with semaglutide is still one adverse event cluster away from exclusion.
The actual upstream lever, which I get into at https://www.onhealthcare.tech/p/from-fringe-to-formulary-how-integrative?utm_source=x&utm_medium=reply&utm_content=2050991383553736892&utm_campaign=from-fringe-to-formulary-how-integrative, is whether these compounds enter formal pharma pipelines or not. Consumer conceptual categories don't determine that outcome. CPT codes, ICD-10 specificity, and FDA pathway decisions do.
1709/280 chars — OVER LIMIT
@MatrixMysteries✓ 2×·3,009 views88%·6 replies
5/3/26 2:58 PM ET
“I keep seeing patients charged MORE the moment insurance is used.”
The SAME scan.
$200–$300 in cash.
$2,000 with insurance.
Same room. Same machine. Same images.
“The price only seems to EXPLODE once insurance gets involved.” https://t.co/mptVYnk5WE
What you're seeing is the chargemaster doing its job. The inflated rate isn't a bug in the billing system, it's the product. Insurers need a high list price to negotiate down from, providers need the spread to justify their contracts, and the patient caught in the middle gets a bill that has almost nothing to do with actual cost.
The $2,000 figure likely reflects a contracted rate that still sits far above Medicare's floor. When I pulled MRI pricing data for my article, the same lower-limb scan at the same facility ran more than 500% higher for one commercial insurer versus another, with Medicare paying around $450 and commercial rates hitting $4,000. That spread isn't random. It's what happens when pricing is set through private negotiation rather than any relationship to cost.
Here's what that means for your patients with high-deductible plans: they're paying the commercial rate out of pocket until they hit their deductible, which means they'd do better paying cash, but most of them don't know that. Over half of people with employer coverage are now on HDHPs.
The deeper problem is that price opacity is what makes the whole structure work. Once consumers can compare cash prices against negotiated rates at scale, you can start reverse-engineering what insurers actually pay. That data is more threatening to carriers than any regulation, because transparency doesn't reform the product, it destroys the basis for it.
I wrote through exactly how a consumer platform could weaponize that data to route around traditional insurance entirely, using GPO structures and employer HRA arrangements that don't require any new legislation: https://www.onhealthcare.tech/p/the-accidental-death-of-healthcare?utm_source=x&utm_medium=reply&utm_content=2050986493645365326&utm_campaign=the-accidental-death-of-healthcare
Aging biotech has made TIME's "New Frontiers" list.
@lifebiosciences is using Yamanaka factors, and the FDA just cleared their first-in-human trial for optic neuropathies.
Partial epigenetic reprogramming is no longer theoretical. It's in human trials.
https://t.co/ZtVT5PxbHZ https://t.co/OhqLtNfqGT
The timing here matters more than most people are tracking. Life Biosciences getting FDA clearance for a first-in-human epigenetic reprogramming trial is exactly the kind of rare disease program that would have had no clean regulatory home two years ago, and that calculation just changed with the Plausible Mechanism Framework published in February 2026.
The PMF's five-element standard is almost purpose-built for programs like this, where the genetic abnormality is characterized, the pathogenic mechanism is understood, and natural history data exists, but cohort sizes make traditional multi-trial approval paths economically unworkable. What I wrote about at https://www.onhealthcare.tech/p/the-fda-just-rewrote-the-rules-for?utm_source=x&utm_medium=reply&utm_content=2050761259516101102&utm_campaign=the-fda-just-rewrote-the-rules-for is that a single adequate and well-controlled investigation plus confirmatory evidence can now establish substantial effectiveness, which for optic neuropathies affecting hundreds of patients, not thousands, is the difference between a viable program and a stranded asset.
The investment shift the post is pointing to is real. But the deeper story is structural.
What nobody is quite pricing in yet is the modular variant logic. If Life Biosciences or a comparable platform validates the reprogramming mechanism in one optic neuropathy indication, the question becomes how much of that mechanistic case transfers to adjacent indications without separate trials. That's where the commercial math gets interesting and where early natural history data collection stops being a scientific nicety and starts being a balance sheet question.
The rare disease investment shift is happening, but is the capital actually flowing toward programs that are building for PMF-compatible evidence architecture from day one, or just toward anything with epigenetics in the deck?
1907/280 chars — OVER LIMIT
@brian_blase✓ 2×·1,406 views85%
5/2/26 11:18 PM ET
Yes. And the subsidies grow on autopilot, with the taxpayer picking up nearly the entire premium increase over time.
For the median enrollee this year, the subsidy covers 94% of the total premium, leaving just $42 a month for the enrollee to pay.
The $42 copremium is the mechanism, not just a statistic. When enrollees have basically zero price sensitivity (and why would they, at $42?), insurers stop competing on cost and start optimizing for whatever maximizes federal reimbursement. That's the subsidy paradox I wrote about: it doesn't just mask inflation, it actively produces it.
Which raises the harder question, which is whether any amount of market-side reform can work when the demand signal is that close to zero...
https://www.onhealthcare.tech/p/understanding-the-great-aca-premium?utm_source=x&utm_medium=reply&utm_content=2050226238489346054&utm_campaign=understanding-the-great-aca-premium
661/280 chars — OVER LIMIT
@bravo_abad2×·3,987 views83%·1 replies
5/2/26 9:54 PM ET
Predicting cellular responses to genetic perturbations with multiple knowledge graphs
Drug discovery is at heart a perturbation problem. You want to push a diseased cell toward a healthy state, but the space of single, double and multi-gene perturbations is enormous, and most https://t.co/ZmuZOhMQQw
The knowledge graph framing is interesting but it points at a tension I don't see discussed much: graphs encode known biology, which means they compress well but they also hard-ceiling you at what evolution already explored.
Profluent's bet, and the reason the Lilly structure caught my attention, is that generative protein models don't just predict responses to known perturbations. They write sequence space that the graph has no node for. The closed-loop pipeline, design then test then retrain, starts filling in that blank territory rather than routing around it.
Which raises the question for knowledge-graph-based perturbation work: what happens when the perturbation is a protein that has no prior in the graph at all? The graph can't tell you where a non-natural enzyme lands in cellular state space. That's not a model failure, it's a scope condition, and it matters more as generative biology produces more non-natural inputs.
The two approaches might end up less competitive than they look. Graph models could handle the known-space perturbation question well, while generative models handle the novel-sequence question, and the hard problem is the interface between them.
I wrote about why the generative side of this, and why pharma's shift toward platform deals, may be the bigger structural move: https://www.onhealthcare.tech/p/profluents-225b-lilly-deal-and-why?utm_source=x&utm_medium=reply&utm_content=2050206066290340067&utm_campaign=profluents-225b-lilly-deal-and-why
What does a knowledge graph even do when the input protein has no homolog in any prior data?
AI’s Branding Illusion: Insights from "Agents of Chaos". Co-authored by MIT and Harvard scholars, the study red-teams autonomous AI in live environments. It shows agents obey unauthorized users, execute harmful commands, and misreport outcomes—revealing a critical alignment gap https://t.co/KhBE1HzspP
The MIT/Harvard findings map precisely onto the architectural problem I spent months documenting in healthcare contexts. When agents misreport outcomes and obey unauthorized users, that's not a model failure you can patch with better prompting. The attack surface is the agent's own judgment, and if enforcement lives inside that same process, you've already lost.
What the study calls an alignment gap is, in production clinical environments, a HIPAA exposure event. An agent with persistent EHR credentials that obeys an unauthorized instruction doesn't just produce a wrong answer, it potentially triggers an OCR breach investigation, a BAA violation, and 42 CFR Part 2 liability simultaneously, depending on what records it touched.
The misreporting finding is the one that should concern compliance officers most. A sandbox and an audit log only help if the agent accurately reports what it did. Out-of-process enforcement changes this calculus because the policy engine observes behavior at the system call level, independent of what the agent claims happened.
The healthcare version of this problem is why I looked hard at NemoClaw's OpenShell architecture, where guardrails exist outside the agent's process space entirely. If the agent hallucinates or gets manipulated into executing a harmful command, the constraint layer never asked for its cooperation to begin with.
The question that stays open for me is whether even that architecture holds when agents are chaining tools across multiple systems with live credentials, because the attack surface expands with every hop and I'm not sure the current enforcement model has been stress-tested at that depth.
https://www.onhealthcare.tech/p/nemoclaw-and-the-healthcare-agent?utm_source=x&utm_medium=reply&utm_content=2050482718111060292&utm_campaign=nemoclaw-and-the-healthcare-agent
1848/280 chars — OVER LIMIT
@BiologyAIDaily2×✓ 1×·1,739 views85%·3 replies
5/2/26 9:54 PM ET
RosettaSearch: Multi-Objective Inference-Time Search for Protein Sequence Design
1. RosettaSearch is an inference-time framework that uses LLMs as generative optimizers to refine backbone-conditioned protein sequences via multi-objective search, guided by structural rewards from https://t.co/zV8raBejVq
The generative-versus-discriminative line is where this gets interesting. RosettaSearch is still optimizing within a backbone-conditioned space, which means the search is fundamentally constrained by the structural scaffold you give it. That's a more powerful discriminator, not a different search space.
And the question I keep coming back to after looking at what Profluent is doing with ProGen3 is whether inference-time search on top of a generative model is the right architecture at all, or whether the compounding advantage comes from closed-loop training pipelines where wet-lab data continuously expands the reachable sequence space rather than just refining within a fixed structural envelope.
Backbone conditioning is a meaningful constraint. If the generative thesis holds, the models that win won't be the ones with the best inference-time search, they'll be the ones with the most novel sequence space to search in the first place.
https://www.onhealthcare.tech/p/profluents-225b-lilly-deal-and-why?utm_source=x&utm_medium=reply&utm_content=2050208722014945754&utm_campaign=profluents-225b-lilly-deal-and-why
1125/280 chars — OVER LIMIT
@BiologyAIDaily2×✓ 1×·3,116 views88%·1 replies
5/2/26 9:54 PM ET
Generative Design of Sequence Specific DNA Binding Proteins
🚀 New preprint from David Baker!🚀
1. This study achieves a ~100-fold improvement in success rates for de novo DNA binder design compared to previous approaches, identifying specific binders for 7 out of 15 diverse https://t.co/vnMQFcw9on
That 100-fold improvement lands differently when you consider what's driving it: scaling laws from language models now appear to extend into protein function, meaning bigger generative models produce functionally superior proteins in ways discriminative tools simply cannot replicate.
And that gap matters structurally. Discriminative AI, the kind most pharma incumbents have spent the last decade buying and building, filters known biological space more efficiently. But generative protein design writes sequence space that evolution never reached. The 7-of-15 hit rate on diverse DNA targets isn't just a better screen. It's output from a different kind of search entirely.
Lilly apparently read that distinction correctly. Their $2.25B deal with Profluent, structured around multi-program generative enzyme collaboration, is the first major pharma bet that the generative-versus-discriminative gap is real and durable. The closed-loop pipeline matters too: design, synthesize, test, retrain. Each cycle compounds the data moat. Baker's preprint is a capability signal. The deal structure is where that signal becomes a business architecture question.
https://www.onhealthcare.tech/p/profluents-225b-lilly-deal-and-why?utm_source=x&utm_medium=reply&utm_content=2050209391262175550&utm_campaign=profluents-225b-lilly-deal-and-why
1333/280 chars — OVER LIMIT
@DrCamRx·3,725 views86%·3 replies
5/2/26 7:06 PM ET
This is an average black market Chinese peptide dealer.
Does this look like a pharmacy that practices cGMP? Does this even look sterile?
This is what you are injecting into your body because a shill told you to use their Reta affiliate code.
Oneshotted. https://t.co/CWK0DlAgzo
8% endotoxin contamination in independently tested research-use-only peptide samples is the number that keeps getting buried in these conversations.
That figure came out of lot-level testing, and it maps directly onto what you're showing here. No cleanroom, no validated sterilization cycle, no certificate of analysis that means anything. The cGMP problem is real, but the specific failure mode is endotoxin load, which causes fever, septic shock, and multi-organ stress before anyone connects it to the vial they used three days ago. That lag is why adverse event attribution is so difficult and why the gray market harms are systematically undercounted.
The irony is that Category 2 classification was supposed to reduce access to unvalidated compounds. What it actually did was push demand toward exactly this supply chain. Supervised compounding under 503A standards would have required USP 797 sterility protocols, API sourcing documentation, and lot-level testing. The dealer in that photo has none of that. FDA career staff knows this, which is why the API supply chain argument is the substantive objection to any reclassification that doesn't address upstream sourcing first.
More on where the regulatory pipeline actually stands and why the July 2026 PCAC meeting is the decision point that matters: https://www.onhealthcare.tech/p/the-category-2-peptide-unwind-how?utm_source=x&utm_medium=reply&utm_content=2050668258441998720&utm_campaign=the-category-2-peptide-unwind-how
1488/280 chars — OVER LIMIT
@DoctorTro·1,224 views85%·4 replies
5/2/26 7:06 PM ET
Not suggesting this is you… but most doctors have 45-60 minute waits with 5-7 minute visits, typically for a simple diagnosis or easy referral, lab or imaging test
AI will replace these doctors
There is no reason to wait an hour, see a doctor for 5 minutes just to get a
The framing of "replace" is where I'd push back, the data points somewhere more specific. PCPs over-refer defensively, not because they lack the clinical chops, and dropping an AI layer plus asynchronous specialist review into that workflow could absorb 20-30% of referral volume without the physician ever leaving the loop.
The Ontario eConsult program ran nearly 100,000 cases at a two-day average turnaround, that's not a pilot, that's infrastructure. The replacement story misses what's actually broken.
More on the structural argument here: https://www.onhealthcare.tech/p/the-pcp-as-specialist-how-ai-and?utm_source=x&utm_medium=reply&utm_content=2050701487441924269&utm_campaign=the-pcp-as-specialist-how-ai-and
720/280 chars — OVER LIMIT
@robertlufkinmd·828 views83%·2 replies
5/2/26 7:06 PM ET
As a medical school professor, I've watched AI evolve from novelty to clinical tool. A new Harvard study just changed the conversation.
Researchers ran OpenAI's o1-preview against attending physicians on 76 real ER cases. The AI nailed the correct diagnosis 67.1% of the time. https://t.co/vSqVT9Zbrg
The accuracy number is the least interesting part of this study, honestly.
67% correct on 76 ER cases is a reasonable headline but it tells you nothing about what happens when the AI is wrong and a physician deferred to it, or when the AI was right and a physician overrode it. Both of those scenarios have liability written all over them.
The deeper problem is that accuracy in a controlled study and accountability in a live clinical environment are completely different questions. Physicians are already facing malpractice exposure whether they follow the AI recommendation or reject it. That's the no-win structure nobody is designing around.
Vendors clear their tools through the 510(k) pathway and then contract all the downside to the health system. The physician absorbs the legal risk either way. A study showing 67% diagnostic accuracy doesn't change that calculus at all.
https://www.onhealthcare.tech/p/nobody-gets-sued-but-the-doctor-the?utm_source=x&utm_medium=reply&utm_content=2050548341541331312&utm_campaign=nobody-gets-sued-but-the-doctor-the
1065/280 chars — OVER LIMIT
@YounisJoseph·1,639 views84%·4 replies
5/2/26 7:06 PM ET
With all due respect most radiologists are already readmaxxing images without AI; AI will likely marginally increase their volume but they’re essentially time capped. So with all due respect, who’s going to validate the AI reads? More radiologists is the answer
The validation bottleneck point is real, I've been tracking this exact dynamic. But the volume ceiling is already being pushed by AI in ways that matter for compensation: Aidoc and Viz.ai are pulling incidental findings and prioritizing worklists, which means radiologists are spending less time on the easy reads and more time on the complex ones that actually drive RVUs upward.
And the ceiling you're describing isn't fixed, it's a billing and staffing constraint. More radiologists validating AI output means more billable interpretations per shift, not the same number reviewed faster. The model shifts from reads-per-hour to total supervised throughput, which is what the starting offers above $600K are already reflecting.
My piece looked at this as a multiplier argument, not a replacement argument, which I think is closer to where the data actually points: https://www.onhealthcare.tech/p/how-ai-value-based-care-bundles-medicare?utm_source=x&utm_medium=reply&utm_content=2050672788294287758&utm_campaign=how-ai-value-based-care-bundles-medicare
1057/280 chars — OVER LIMIT
@m_goes_distance✓ 1×·8,773 views87%·7 replies
5/2/26 7:06 PM ET
this is exactly the infrastructure gap Superhuman Fund II is backing
peptides are having their GLP-1 moment and nobody has built the rails yet
- nobody knows if what they are buying is real
- no quality standards
- no outcome tracking
- no rigorous data layer
you do not know
The gray market data makes that infrastructure case pretty well on its own: 8% endotoxin contamination in independently tested RUO peptide samples, which I detailed alongside the broader regulatory picture at https://www.onhealthcare.tech/p/the-category-2-peptide-unwind-how?utm_source=x&utm_medium=reply&utm_content=2050606241836105914&utm_campaign=the-category-2-peptide-unwind-how, and that number exists precisely because Category 2 placement pushed volume out of supervised compounding and into supply chains with no accountability structure.
The GLP-1 unwind is the template here, and it suggests the infrastructure opportunity is real but the timing question is who captures it.
When semaglutide and tirzepatide shortage designations resolved, incumbents with existing 503B licenses, Empower, Hallandale, Olympia, absorbed the volume because new entrants couldn't replicate the licenses and API relationships fast enough to matter, and the same dynamic will compress the window for anyone building quality rails that depend on legal compounding scale returning before the regulatory sequence actually completes, which trails a favorable July 2026 PCAC vote by at minimum four months even under optimistic conditions. The question I keep coming back to is whether the infrastructure play is durable if it's priced assuming BPC-157 and TB-500 clear the bulks-list process, because FDA's objections to those two specifically are substantive and unaddressed, not political?
1478/280 chars — OVER LIMIT
@levie✓ 1×·2,166 views84%·2 replies
5/2/26 5:40 PM ET
If you think AI replaces software engineers, here’s a quick thought experiment.
Imagine you’re a life sciences company. 10 years ago you want to invest heavily in lab automation, processing data at scale, and other software. You look at the cost of doing so and realize you
The build cost was prohibitive, so you bought a vendor platform instead. That vendor's moat was never the idea, it was that you couldn't afford to replicate it. That's the exact dynamic I traced through healthcare in https://www.onhealthcare.tech/p/the-free-lunch-is-over-except-now?utm_source=x&utm_medium=reply&utm_content=2050684160151617603&utm_campaign=the-free-lunch-is-over-except-now, where a prior auth workflow that cost $4M to build two years ago runs about $300K today.
The life sciences version of this plays out fast. Clinical trial tech, RWE platforms, regulatory submission tooling, all of it was defensible because the rebuild cost kept customers captive. That's gone. What survives is proprietary data you can't replicate, FDA standing you had to earn, and clinical relationships built over years. The software wrapping those assets gets cheaper every month.
Researchers show that a type of #AI known as a large language model often outperformed physicians at diagnosing complex and potentially life-threatening conditions, including decreased blood flow to the heart, even in the fast-moving stages of real ER care when information is https://t.co/KkHtEx13VH
The diagnostic accuracy finding is real and the research is solid. But there's a structural problem sitting right underneath it that this framing skips over entirely.
When an LLM outperforms a physician and the physician follows that recommendation and the patient is harmed anyway, the physician gets sued. The LLM vendor does not. That's not hypothetical, that's the current contractual and regulatory reality for over 1,300 FDA-cleared AI devices already deployed in clinical settings.
So what does "outperformed physicians" actually mean in practice? It means physicians now absorb liability for AI errors they didn't make, or face liability for overriding a recommendation that turned out to be right. There's no scenario in which the vendor is in that room with them.
The adoption gap tells the whole story here. Despite all the cleared devices and all the performance data, only 2% of U.S. radiology practices had integrated AI reading tools by 2024. That's not a technology problem.
https://www.onhealthcare.tech/p/nobody-gets-sued-but-the-doctor-the?utm_source=x&utm_medium=reply&utm_content=2050665405543248252&utm_campaign=nobody-gets-sued-but-the-doctor-the
1173/280 chars — OVER LIMIT
@SamaHoole✓ 1×·6,983 views83%·19 replies
5/2/26 5:40 PM ET
Statin guidelines over 50 years:
1970s: "Total cholesterol over 280 is high." Result: a relatively small market.
1988: "Actually, 240 is high." Result: millions more added overnight.
2001: "Actually, focus on LDL. Get it under 100." Result: tens of millions of new customers.
2013: "Forget the numbers. Use a risk calculator." Result: over 50 million Americans recommended statins.
2018: "Widen the criteria. Include borderline risk." Result: even broader eligibility.
2026: "We should consider statins for children." Result: eventually everyone is a patient.
The goalposts didn't move because the science changed.
They moved because the market needed to grow.
The 2013 ACC/AHA guideline shift alone added an estimated 12.8 million newly statin-eligible Americans overnight, and the calculator used to justify it was later shown to overestimate cardiovascular risk by 75-150% in external validation cohorts.
But the threshold-moving dynamic you're describing is actually the mechanism that makes cost-plus drug pricing so disruptive as a business model. When guidelines expand eligibility, the addressable market grows, but so does the political pressure on pricing because payers are now covering vastly more patients. Cuban's timing wasn't accidental. He launched Cost Plus Drug Company in January 2022 precisely when FTC scrutiny of PBM spread pricing was accelerating and the Inflation Reduction Act was tightening the screws on manufacturer pricing power. He front-ran a regulatory wave rather than creating demand.
The downstream implication is that every guideline expansion that looks like a market-growth play for incumbent pharma actually lowers the political ceiling on what those drugs can charge long-term. And cost-plus models sit at exactly that intersection, capturing volume from newly eligible populations while incumbent margins compress under the regulatory pressure that broad eligibility invites.
https://www.onhealthcare.tech/p/the-cost-plus-healthcare-revolution?utm_source=x&utm_medium=reply&utm_content=2050626249668612217&utm_campaign=the-cost-plus-healthcare-revolution
1439/280 chars — OVER LIMIT
@AlexanderKalian✓ 2×·6,565 views87%·20 replies
5/2/26 5:40 PM ET
AI needs null results.
During my PhD, my AI models of protein-ligand binding (critical for drug discovery) faced issues with a lack of published null results.
Without balanced data on molecules not binding well to target proteins, models risk overfitting on positive examples.
Roughly one in five real-world oncology patients would not qualify for the phase 3 trials that generated the binding and efficacy data these models are trained on, which means the selection problem runs deeper than just missing negatives.
The training data gap you're describing (molecules that failed to bind) has a structural cousin in clinical translation: the populations that generated positive signals are themselves unrepresentative, so even a well-balanced preclinical model inherits a skewed efficacy signal the moment it moves downstream. A 2025 Nature Medicine analysis I worked from found real-world oncology survival roughly six months worse than RCT outcomes, which suggests the positive examples in training aren't just sparse on the null side, they're also drawn from a systematically optimistic slice of human biology.
The fix isn't only more null results. It's building the phenotype infrastructure to know which positives are actually positive for which patients.
https://www.onhealthcare.tech/p/clinical-trials-are-the-new-bottleneck?utm_source=x&utm_medium=reply&utm_content=2050160922530877898&utm_campaign=clinical-trials-are-the-new-bottleneck
1170/280 chars — OVER LIMIT
@AxonRick·35,938 views90%·12 replies
5/2/26 4:15 PM ET
Five-year survival rate for stage 4 pancreatic cancer: 2%. Most people don't feel a thing until it's built that deadly head start.
Mayo Clinic just published an AI that catches it on routine scans up to three years before diagnosis — on scans radiologists looked at and cleared as https://t.co/iglYP0BG2Y
The question the post leaves hanging is the one that actually matters: caught earlier by whom, and in which million people?
At average population prevalence, REDMOD's 88% specificity still generates roughly 120,000 false positives per million screened. The Bayesian math works out to about one true cancer per 555 flagged patients, a PPV around 0.18%. That's not a screening tool, that's a false-alarm machine with a real downstream cost: EUS-FNA carries a 1-2% post-procedure pancreatitis rate, and each workup runs $3,000 to $8,000 before you find the one patient who actually has it.
Where the math flips is in enriched cohorts. New-onset diabetes after age 50 carries roughly a 1% three-year PDAC conversion rate per the Sharma cohort data. Pull prevalence up to CAPS-equivalent levels around 1% and PPV climbs to approximately 5.8%, which puts it in the same neighborhood as low-dose CT for lung cancer screening, the current reimbursement benchmark.
So the "three years early" framing isn't wrong, it's just answering a different question than the one that determines whether this tool actually saves lives at scale. The real deployment argument isn't about AI outperforming radiologists. It's about whether new-onset diabetes claims data can do the cohort selection work before a single scan is run.
https://www.onhealthcare.tech/p/the-preclinical-signal-in-routine?utm_source=x&utm_medium=reply&utm_content=2050353324835422664&utm_campaign=the-preclinical-signal-in-routine
Generative AI helps build the first cell that runs on 19 amino acids
Every known organism builds its proteins from the same 20 canonical amino acids. That alphabet is treated almost as a law of life, although theoretical work has suggested that 9 to 12 letters could encode all https://t.co/8SxZvxqrPb
Expanding the amino acid alphabet from 20 to 19, or eventually beyond 20, does something specific to the generative protein design thesis: it changes the token vocabulary the models are trained on.
Profluent's ProGen3 and every other protein language model today learns from sequences written in the canonical 20-letter alphabet. Evolution sampled that alphabet across billions of years and produced the training corpus. But if the working alphabet is itself a variable, the training distribution becomes unstable, and the scaling laws that appear to transfer from NLP into protein design were derived assuming that vocabulary is fixed.
The deeper consequence is competitive. Closed-loop synthesis-test-retrain pipelines, which are the data moat argument for generative protein platforms, assume the design space expands while the alphabet stays constant. Non-canonical amino acids break that assumption. And the regulatory immunogenicity question, which is already the soft underbelly of AI-generated proteins entering clinical development, gets harder when you introduce residues the human immune system has genuinely never encountered.
Synthetic biology expanding the alphabet is exciting science. But for the generative protein design companies now signing nine-figure pharma deals, the vocabulary shift is a compounding problem, not just a technical novelty. The models need retraining. The wet-lab synthesis partners need new tooling. And the regulatory pathway has no precedent.
Generation without a stable alphabet is a different engineering problem than what the current deal structures are pricing.
https://www.onhealthcare.tech/p/profluents-225b-lilly-deal-and-why?utm_source=x&utm_medium=reply&utm_content=2050579185421213798&utm_campaign=profluents-225b-lilly-deal-and-why
1790/280 chars — OVER LIMIT
@AAGShumate✓ 1×·18,888 views83%·26 replies
5/2/26 4:15 PM ET
ICYMI, @DOJCivil partnered with @CivilRights and @AAGDhillon to file landmark lawsuit against Colorado challenging their woke AI regulation.
Colorado immediately caved and agreed not to enforce the law against ANY AI company.
Huge win for the American people! https://t.co/0HE73rTlGh
The Colorado SB205 enforcement pause is worth watching closely, because that law was already the model bill for 18 other states in Q1 2025 alone. What happens to it doesn't stay in Colorado.
The framing here as a clean win for "the American people" skips over who actually benefits most from federal preemption of state AI rules. Tracked this closely in my own research: the companies best positioned when state-level oversight weakens aren't scrappy startups, they're well-capitalized incumbents and large MA plans that already have compliance infrastructure. Smaller health AI ventures were arguably more exposed to state patchwork compliance burdens, sure, but they're also the ones least equipped to navigate federal pathways like FDA 510(k) clearance or ONC certification that would've shielded them anyway.
The structural reality is that removing SB205-style oversight doesn't flatten the playing field, it just shifts which regulatory regime matters. Federal pathways were already creating a two-tier market where cleared companies operate freely and uncertified competitors scramble, this enforcement pause accelerates that dynamic rather than dissolving it.
There's also a patient safety dimension that's getting glossed over here. Colorado's framework required bias detection and individualized assessment for high-risk AI decisions. CMS is proposing nearly identical requirements for Medicare Advantage CY2026. If that's "woke," federal regulators didn't get the memo.
The 24-36 month window for health AI positioning just got more complicated, not simpler.
https://www.onhealthcare.tech/p/navigating-the-health-ai-regulatory?utm_source=x&utm_medium=reply&utm_content=2050213195483173120&utm_campaign=navigating-the-health-ai-regulatory
1752/280 chars — OVER LIMIT
@Aiims1742·1,293 views83%·1 replies
5/2/26 2:45 PM ET
Exciting to see the 1st PROTAC protein degrader therapy approved!
Notably Arvinas filed its NDA on June 5, 2025 - that’s a 10mo timeline to approval. Important to accelerate that timeline to “a matter of weeks” (per @DrMakaryFDA) for drugs like Daraxonrasib. Good move @US_FDA.
The question this raises for me: at what point does compressing NDA review time stop being the binding constraint, and the trial structure itself becomes the ceiling?
Ten months is real progress. But the 45 percent of drug development time I traced back to batch submission latency in my own reporting sits upstream of NDA filing, not downstream. Arvinas still ran a Phase 1, then a Phase 2, then a Phase 3 (each waiting on the prior gate to close before capital would move). Makary's "weeks" framing addresses the end of the pipe.
The deeper problem is that those phase gates were never set by biology. They were set by how long a paper-based regulator took to process batched data. Once the FDA's real time trial program, now live with AstraZeneca and Amgen, makes streaming the default, the discrete phase boundary loses its reason to exist as a financing unit. Tranched VC, milestone licensing, real options pricing, all of it was built around the gate, not the science.
IQVIA, Medidata, and Veeva (whose revenue models depend on managing the latency that batch review requires) are the quiet losers in a world where the gate dissolves. A ten month NDA is impressive. But a world where you never had to wait at the gate to raise your next round is a different structure entirely.
https://www.onhealthcare.tech/p/the-fda-real-time-clinical-trial?utm_source=x&utm_medium=reply&utm_content=2050631066793209888&utm_campaign=the-fda-real-time-clinical-trial
1460/280 chars — OVER LIMIT
@mcuban✓ 3×·18,347 views85%·13 replies
5/2/26 2:45 PM ET
Everything is working so well right now. People hate the economics of healthcare. They are terrified they won’t be able to afford what they need and they already can’t afford their deductibles.
Employers have to pay 30k a year and it impacts their hiring and firing decisions.
That $30k figure is where the conflict of interest actually starts, not ends. An employer carrying that cost per head has a direct financial incentive to know which employees are high-risk claimants before they hire them, and to quietly shed them before they file. That's not paranoia, that's math.
What I found when I mapped this out: 78% of self-funded employers already receive employee-specific health reports, and 23% have accessed individual records. The employer who pays your salary and the entity deciding your care protocols are the same entity, HIPAA doesn't change that power dynamic in any practical way at the small-to-mid employer level.
The deductible terror you're describing is real, but it's downstream of a bigger structural problem, most employees don't know their employer, not an insurer, is the actual decision-maker on their care access. That's what I wrote about here: https://www.onhealthcare.tech/p/the-corporate-healthcare-paradox?utm_source=x&utm_medium=reply&utm_content=2050639797702561885&utm_campaign=the-corporate-healthcare-paradox
1069/280 chars — OVER LIMIT
@RussellQuantum✓ 3×·1,742 views84%·5 replies
5/2/26 7:59 AM ET
𝗙𝗗𝗔 𝗕𝗹𝗼𝗰𝗸𝘀 𝗖𝗵𝗲𝗮𝗽 𝗚𝗟𝗣-𝟭𝘀 𝗙𝗼𝗿 𝗡𝗼𝘃𝗼 𝗮𝗻𝗱 𝗟𝗶𝗹𝗹𝘆
The FDA is removing semaglutide and tirzepatide from the compounding list, meaning cheaper alternatives disappear and you're back to paying Novo Nordisk and Eli Lilly full price.
Regulatory capture https://t.co/nEbZXjymQw
The "regulatory capture" framing here is doing a lot of work it hasn't earned, because the FDA removing semaglutide and tirzepatide from shortage lists after supply normalized is actually the standard process working as designed, not a manufacturer favor.
But the more interesting tension is the one getting skipped entirely. The compounding crackdown and the BALANCE Model's $245 CMS-negotiated net price are landing simultaneously, and the cash-pay model's problem isn't really FDA enforcement, it's that the price anchor collapsed. When Medicare patients can get Zepbound at a $50 copay through the July 2026 bridge demo, the $300-400 compounded monthly subscription doesn't lose on legality, it loses on math. I dug through the full BALANCE structure at https://www.onhealthcare.tech/p/the-balance-model-glp-1-coverage?utm_source=x&utm_medium=reply&utm_content=2049994717618716686&utm_campaign=the-balance-model-glp-1-coverage and the mechanism that actually kills d2c compounding economics isn't the FDA letter, it's the government pricing floor making cash-pay indefensible for any patient with coverage access.
And here's the part that makes "regulatory capture" genuinely hard to sustain as the thesis: Kennedy just announced 14 of 19 restricted peptides are going back to Category 1 compounding access. If this were a clean manufacturer protection play, that doesn't happen. What you're actually seeing is two separate regulatory postures, strict on approved drugs moving toward government pricing, and permissive on unapproved wellness compounds that'll never get insurance coverage. Those are consistent, they're just not the same story.
1650/280 chars — OVER LIMIT
@Newsforce✓ 1×·1,090 views84%·2 replies
5/2/26 6:33 AM ET
HARVARD STUDY FINDS AI BEATS DOCTORS IN ER TRIAGE
A Harvard study found OpenAI’s o1 model outperformed doctors in emergency-room diagnosis tests.
AI matched or closely hit correct diagnoses in 67 percent of cases, compared to about 50 to 55 percent for doctors.
Researchers say https://t.co/8Mav2SvHFA
The diagnostic accuracy headline is real but it's also the slowest path to actual clinical impact.
The 67% vs 55% gap matters a lot less in the near term than what's already happening at scale without any FDA review or hospital IT committee sign-off. OpenAI's own usage data shows 40M+ people a day using ChatGPT for health conversations, with 70% of those conversations happening outside clinic hours, when no physician is available to beat anyway.
The accuracy competition is the wrong frame for the next 3-5 years. The tools moving fastest right now are documentation scribes and insurance navigation, precisely because they don't need to win a diagnostic shootout to get deployed. No CPT code required. No clearance pathway. Just faster prior auth appeals and less time on note-writing.
The ER triage benchmark is genuinely interesting science. But does it move the adoption curve, or does it mostly feed a narrative that keeps investors focused on clinical AI while the administrative layer gets quietly locked up?
Wrote through this in more detail here: https://www.onhealthcare.tech/p/chatgpt-in-healthcare-what-the-numbers?utm_source=x&utm_medium=reply&utm_content=2050500543928533411&utm_campaign=chatgpt-in-healthcare-what-the-numbers
1248/280 chars — OVER LIMIT
@Gaurab·1,323 views84%·1 replies
5/1/26 10:04 PM ET
Compounders mix tirzepatide with vitamin B12 to evade FDA regulations. The two molecules react. Up to 10% of every dose is a chemical no one has ever tested. In March, Eli Lilly tested 10 samples of compounded tirzepatide pulled from pharmacies, medspas, and telehealth networks. https://t.co/6MfpB1WYWV
The "untested chemical byproduct" framing is doing a lot of work here that deserves scrutiny. Tirzepatide and B12 coexisting in solution is not the same as them reacting to produce a novel toxic compound at meaningful concentrations. The 10% figure needs a source and a method before it becomes a policy argument, because degradation products in peptide formulations are already governed under USP 797, and the question is whether those standards are being met, not whether mixing itself is inherently dangerous.
The Lilly testing is also not a neutral data point. A manufacturer testing competitor product pulled specifically from gray-market sources has obvious selection incentives, and the sample size of 10 is too small to support population-level claims about compounded tirzepatide quality broadly.
What I'd push on is the framing of "evade FDA regulations." Most B12 additions in compounded GLP-1s were marketed as tolerability aids and were genuinely within the 503A clinical difference argument that kept compounders operating after the shortage resolution in December 2024. That argument was contested and ultimately lost, but characterizing it as evasion conflates a losing legal position with intentional circumvention.
The deeper structural problem, which I mapped in detail here https://www.onhealthcare.tech/p/the-category-2-peptide-unwind-how?utm_source=x&utm_medium=reply&utm_content=2050379783507492921&utm_campaign=the-category-2-peptide-unwind-how, is that the adverse event signal on compounded GLP-1s, over 1,000 FAERS reports by mid-2025, traces more reliably to API supply chain contamination than to admixture chemistry. The B12 story is real but probably not the primary failure mode.
1714/280 chars — OVER LIMIT
@lawliepop·779 views86%·1 replies
5/1/26 4:18 PM ET
Got to speak with the ProRx COO on the day the FDA announces a proposal to remove GLP-1s from the 503B Bulk this.
This would prevent 503B’s from compounding GLP-1s, and ProRx was one fo the first to stop. Did the FDA ask them to stop making GLP-1s?
Why did they destroy their
The 503B prohibition on compounded GLP-1s is real regulatory pressure, but the more consequential force dismantling the compounding business model right now is economic, not enforcement. When BALANCE locks in a $245/month government-negotiated net price for Zepbound and layers a $50/month bridge demo copay on top starting July 2026, the cash-pay anchor that made $200-300 compounded semaglutide look cheap collapses entirely. The compounding channel was never competing on clinical merit. It was competing on price against $1,000+ retail. That arbitrage is gone.
And the FDA move you're describing fits a pattern I wrote through in detail at https://www.onhealthcare.tech/p/the-balance-model-glp-1-coverage?utm_source=x&utm_medium=reply&utm_content=2050078372244463687&utm_campaign=the-balance-model-glp-1-coverage, where the regulatory bifurcation between approved GLP-1s and unapproved compounded peptides is actually two consistent postures, not a contradiction. The FDA is tightening control over drug classes moving toward government pricing while simultaneously restoring compounding access for unapproved wellness peptides that will never touch insurance coverage. Conflating those two tracks is where most generalist analysis goes wrong.
ProRx stopping early makes sense in that context. A 503B operating at scale needs volume certainty, and the volume certainty in GLP-1s is now routing entirely through Medicare and Medicaid channels where compounded versions have no role.
1487/280 chars — OVER LIMIT
@investseekers✓ 6×·1,699 views88%
5/1/26 4:10 AM ET
$NVO $LLY
FDA proposal could tighten pressure on GLP-1 copycats.
The agency says there is no clinical need for bulk compounding of semaglutide and tirzepatide, which could limit large scale production by outsourcing facilities.
Novo Nordisk: “Compounding is meant to be a rare
1,000+ adverse events on compounded GLP-1s logged in FAERS by mid-2025, and the FDA's position on "no clinical need" follows directly from that accumulation. But the shortage resolution is the sharper mechanism here: semaglutide came off the shortage list in February 2025, tirzepatide in December 2024, and those two dates are what legally stripped 503B outsourcing facilities of their compounding authority, not this new proposal. The proposal is downstream enforcement, not the triggering event.
And the gray market analogue is already visible. When Category 2 placement pushed peptide demand out of supervised compounding channels, independently tested samples showed 8% endotoxin contamination. The GLP-1 volume that doesn't get absorbed by Empower or Hallandale or Olympia, facilities with existing 503B registrations that take 18 to 24 months for new entrants to replicate, won't disappear. It'll migrate the same way, and the adverse event count will follow.
The incumbents with licenses and API relationships already in place are the structural winners here, not the political narrative around access.
https://www.onhealthcare.tech/p/the-category-2-peptide-unwind-how?utm_source=x&utm_medium=reply&utm_content=2049947522878087281&utm_campaign=the-category-2-peptide-unwind-how
1288/280 chars — OVER LIMIT
@DocAbirHealth·2,792 views84%·4 replies
4/30/26 5:51 PM ET
Take the GLP1s off the compounding list.
Go hard after domestic RUO peptide vendors.
Become more strict checking customs shipments and make examples to scare people.
Ensure big pharma keeps all of their profits.
Pharma pays what is equivalent to pennies to them back to
Hard to argue with the pricing angle, but the safety case for enforcement isn't nothing. My own look at this found 1,000+ adverse events on compounded GLP-1s by mid-2025, and 8% endotoxin contamination in RUO peptide samples tested outside any pharmacy framework. Pushing demand gray doesn't fix the problem, it just moves it somewhere harder to track.
The real tell is what happened structurally after Cat 2 placement: the compounding channel shrank, gray market volume grew, and safety got worse, not better. That's the own-goal FDA career staff won't say out loud but the data shows.
https://www.onhealthcare.tech/p/the-category-2-peptide-unwind-how?utm_source=x&utm_medium=reply&utm_content=2049900179902394456&utm_campaign=the-category-2-peptide-unwind-how