All exclusion rules (Haiku prescreen + Sonnet ranking + learned) — edit, remove, or propose new ones

Active rules: 530 · applied to every scan by Haiku + Sonnet
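Each rule below follows the same shape: a machine ID, the full "Exclude posts that…" instruction given to the model, a one-line summary, and the example posts it has rejected. As a rough sketch of how one rule might be applied per post — all names here are hypothetical, since the actual prescreen prompt format is not shown in this dashboard — the check could look like:

```python
from dataclasses import dataclass

@dataclass
class ExclusionRule:
    rule_id: str        # e.g. "glp1_peptide_market_pricing_only"
    instruction: str    # the full "Exclude posts that..." text shown below
    summary: str        # the one-line description under the instruction
    rejections: int     # how many posts this rule has already rejected

def build_prescreen_prompt(rule: ExclusionRule, post_text: str) -> str:
    """Format one rule plus one post into a yes/no classification prompt.

    Hypothetical: the real Haiku prescreen prompt is not shown in this
    dashboard, so this only illustrates the general shape of the check.
    """
    return (
        f"Rule [{rule.rule_id}]: {rule.instruction}\n\n"
        f"Post:\n{post_text}\n\n"
        "Does this rule exclude the post? Answer EXCLUDE or KEEP."
    )

rule = ExclusionRule(
    rule_id="glp1_peptide_market_pricing_only",
    instruction=(
        "Exclude posts that report GLP-1 or peptide market share, "
        "prescription trends, or pricing competition without "
        "healthcare systems analysis."
    ),
    summary="GLP-1/peptide market-dynamics posts without systems analysis.",
    rejections=63,
)
prompt = build_prescreen_prompt(rule, "Mounjaro vs Wegovy scripts, week 12.")
```

In the live system, each scanned post would presumably be run through every active rule (individually or in a batched prompt), with EXCLUDE answers accumulating into the per-rule rejection counts shown on the badges below.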
[glp1_peptide_market_pricing_only]
Learned · 63 rejections · Active
Exclude posts that report GLP-1 or peptide market share, prescription trends, pricing competition, or off-label prescribing patterns without analyzing structural healthcare delivery, payer policy, or clinical outcome implications. Posts that are purely market speculation or competitive comparison (e.g., 'Mounjaro vs Wegovy scripts') lack healthcare systems substance.
Posts about GLP-1 drugs and peptides focused narrowly on market dynamics, pricing, competition, or off-label use without healthcare systems analysis.
3 example posts
🚨 Survodutide, a weekly subcutaneous dual GLP-1/glucagon agonist, posts new top-line Phase 3 results today
📊 SYNCHRONIZE-1: Adults with overweight/obesity randomized to survodutide vs. placebo for 76 weeks
⚖️ 16.6% weight loss (efficacy estimand) vs. 3.2% in placebo
85.1% http
$NVO $LLY
Boehringer Ingelheim and Zealand Pharma report strong phase 3 data for obesity drug survodutide.
Patients lost up to 16.6% of body weight over 76 weeks vs. 3.2% for placebo.
The drug targets both GLP-1 and glucagon, a combo approach aimed at boosting weight loss.
This is just two GLP-1s, one peptide, one use case
what happens when off-label prescribing ramps up
what happens when retatrutide hits the market
what happens when other peptides become compoundable
chapter one
Exclude posts that discuss workforce disruption, labor market impacts, or job automation in broad macro terms (e.g., 'AI could automate 57% of work hours,' 'robots scaling 24x') without analyzing healthcare-specific roles, clinical workflows, or labor market restructuring in healthcare.
Posts about AI, robots, or technology causing job displacement or labor market change in general terms.
3 example posts
Today we’re giving an update on ramping F.03 production at BotQ
In the last 120 days, Figure scaled manufacturing 24x - from 1 robot/day to 1 robot/hour
We will manufacture 55 humanoid robots this week https://t.co/Am5Kn53mVE
🚨 BREAKING: Anthropic new research finds that AI’s impact on jobs is primarily at the task level.
Rather than eliminating jobs, it is progressively taking over the functions that define them and gradually absorbing the core work in many jobs/roles.
The paper, “Labor Market http
Great example of what Foundry enables: durable, stateful agents that run across time boundaries, orchestrate tools and models, and close the loop with evaluation and improvement over long-running workflows. @jeffhollan https://t.co/v0yWTDIcn3
Exclude posts that narrate specific fraud cases, denied insurance claims, or company misconduct (Optum, Aetna, hospices) without connecting these incidents to systemic incentive structures, policy failures, or replicable patterns across healthcare delivery or financing.
Posts reporting healthcare fraud, insurance denials, or provider misconduct as individual scandals without broader system-level diagnosis.
3 example posts
Ah yes, Optum, the company that did Tim Walz’s audit of the 14 high risk Medicaid programs consumed by Somali fraud. https://t.co/dNRnjI7wxB
Last week I randomly got a $9,000 bill for a hospital visit from last year. I called them up and they said my insurance decided not to cover it. Why? Because.
.@Aetna can just decide they’re not interested in covering something and then you’re left with an inflated bill to pay h
NY rakes in $6.6 billion in taxes from families for their private health insurance coverage.
That’s $1,760 a year per family on top of their premiums.
Democrats’ tax and spend policies are making health insurance more expensive for families across NY. https://t.co/ggrjqFKrv4
Created 2026-04-17 · Updated 2026-05-01
[clinical_anecdote_without_systems_context]
Learned · 25 rejections · Active
Exclude posts that describe a single clinical case, individual patient outcome, physician workflow observation, or isolated clinical scenario without connecting it to healthcare system patterns, policy, reimbursement, or scalable implementation challenges.
Posts sharing individual clinical observations, single-case examples, or physician anecdotes without broader healthcare delivery or system implications.
3 example posts
"At the Veterans Health Administration, we have the exquisite gift of a hospice unit, a place to care for veterans when time is short. We care for the sons of mothers who can no longer be here to care for them."
In #APieceofMyMind, a #palliative care #physician reflects on https
Last week I randomly got a $9,000 bill for a hospital visit from last year. I called them up and they said my insurance decided not to cover it. Why? Because.
.@Aetna can just decide they’re not interested in covering something and then you’re left with an inflated bill to pay h
"How can medicine save the most lives?"
Most people ask this rhetorically.
@Farzad_MD and Tom Frieden took it literally.
From banning smoking in NYC bars to cutting teen smoking in half in 5 years, this is what happens when you stop treating diseases and start preventing them. h
Created 2026-04-24 · Updated 2026-05-02
[ai_safety_vulnerability_incident_tangential]
Learned · 22 rejections · Active
Exclude posts about AI model vulnerabilities, security breaches, or safety incidents (e.g., Claude deleting databases, hardcoded API keys, malicious agent skills) unless they specifically analyze implications for healthcare delivery, clinical workflows, or patient safety systems. Posts that sensationalize AI risks without healthcare application context should be rejected.
Posts about AI safety incidents, security vulnerabilities, or model jailbreaks that lack healthcare systems context
3 example posts
Our attention to biorisks posed by AI needs to match the current attention given to cyber-risks. The staged release of Claude Mythos in order to bolster defenses in key industries is necessary to shore up resilience against a new class of cyber-risk across critical industries. We
Anthropic built something so powerful that they are only letting 50 organisations touch it.
It is called Claude Mythos.
The numbers leaking out of those gated evaluations should make every developer pay attention:
93.9% on SWE-bench Verified
94.6% on GPQA Diamond
Claude Opus
Hacking Mexico government with AI assistance. Attacker exfiltrated hundreds of millions of citizen records. 75% of the executed commands across the entire cyberattack campaign were generated by Claude. 40 minutes after Claude said "I'm not going to create that file" it was report
Exclude posts that describe a single clinical case, patient anecdote, doctor's personal workflow experience, or individual health observation (e.g., one patient's hospital bill, a clinician's chat experience, a doctor's commentary on disease management) unless the post connects to systemic healthcare delivery, operational patterns, or policy implications affecting many patients.
Posts sharing individual clinical cases, patient stories, or single-observation health insights without broader healthcare system implications.
3 example posts
"At the Veterans Health Administration, we have the exquisite gift of a hospice unit, a place to care for veterans when time is short. We care for the sons of mothers who can no longer be here to care for them."
In #APieceofMyMind, a #palliative care #physician reflects on https
Ah yes, Optum, the company that did Tim Walz’s audit of the 14 high risk Medicaid programs consumed by Somali fraud. https://t.co/dNRnjI7wxB
Last week I randomly got a $9,000 bill for a hospital visit from last year. I called them up and they said my insurance decided not to cover it. Why? Because.
.@Aetna can just decide they’re not interested in covering something and then you’re left with an inflated bill to pay h
Created 2026-04-24 · Updated 2026-05-02
[ai_infrastructure_and_compute_hype]
Learned · 22 rejections · Active
Exclude posts about AI compute bottlenecks, power grids, training compute growth, chip production, or data center infrastructure unless they explicitly connect to healthcare AI deployment, clinical validation challenges, or health system technical requirements.
Posts about AI compute capacity, training efficiency, power infrastructure, or model scaling divorced from healthcare applications.
3 example posts
Thought I would start posting about interesting things happening at AWS. Not a bad day to start.🚀
Today at #WhatsNextWithAWS we announced a big step forward with @OpenAI on Amazon Bedrock:
1. OpenAI models now available
2. Codex for enterprise development
3. Amazon Bedrock Manag
Software is not a moat
Over the last 15+ years, nearly every innovation @EvanSpiegel and his team shipped got copied. Stories. AR glasses. Swipe-based navigation. The camera-first interface.
And yet @Snapchat is the only independent consumer social app that has lasted. Nearly 1
AI could, in theory, automate 57% of US work hours. Yet most human skills remain relevant.
The future of work is not human or machine – but a partnership between people, agents, and robots.
Read our latest research on skill partnerships in the age of AI: https://t.co/h1K56uPqPo
Exclude posts that cite clinical trial data, FDA approvals, or research outcomes (e.g., drug efficacy, gene therapy, CAR-T results) as standalone observations without discussing how these findings impact healthcare delivery, system workflows, policy, or healthcare technology adoption.
Posts reporting clinical trial results, drug approvals, or research findings without analysis of healthcare system implications, adoption barriers, or operational integration.
3 example posts
🚨Top line results of ACHIEVE-4 are out, the T2D study of orforglipron vs insulin glargine in patients with increased cardiovascular risk. This is the study the FDA wants full results for by June for Foundayo.
Versus insulin glargine:
▪️ 16% lower risk of MACE-4 events and a 23%
Good summary of the marked benefit of the molecular glue drug (daraxonrasib) vs pancreatic cancer, from Revolution Medicines, and other progress (adds to the neoantigen vaccine with 6-year survival)
gift link https://t.co/qk7Ar9dCAQ https://t.co/SMiA51fiwX
Insightful plenary from the father of CAR-T, @carlhjune #AACR26
🔬 CAR-T for solid tumors is finally breaking through. 7 FDA approvals in blood cancers and now solid tumors are next 🎯
Clinical signals
• CLDN18.2 (Satri-cel): 38% vs 4% ORR in gastric cancer (The Lancet 2025) http
Exclude posts that focus on AI company business news (funding rounds, valuations, ARR, manufacturing/scaling metrics) without demonstrating healthcare system adoption, healthcare-specific revenue, or healthcare operational use cases. Business metrics alone do not qualify.
Posts reporting on AI company funding, valuation, ARR, or business metrics that lack healthcare-specific application evidence.
3 example posts
Microsoft just turned an $11 billion startup into a Word feature.
Harvey raised $200M at an $11B valuation in March on the bet that legal AI is its own surface. The numbers held that up. $190M ARR per TechCrunch's December reporting. 100,000 lawyers across 1,300 organizations in
Earlier this year we paired our autonomous lab with OpenAI's GPT-5 in a closed-loop experiment: GPT-5 designed cell-free protein synthesis reactions, our RACs ran them, and the model iterated on the results. Over 36,000 experiments and six cycles later, it landed on a reaction ht
Today we’re giving an update on ramping F.03 production at BotQ
In the last 120 days, Figure scaled manufacturing 24x - from 1 robot/day to 1 robot/hour
We will manufacture 55 humanoid robots this week https://t.co/Am5Kn53mVE
Exclude posts that describe a single clinical encounter, personal anecdote, or isolated medical observation (e.g., a doctor using ChatGPT for one patient case, a personal hospital bill experience) unless the post explicitly connects the observation to a systemic healthcare problem, provider workflow, or care delivery model issue.
Posts sharing a single clinical case, personal health experience, or isolated medical observation without healthcare systems perspective.
3 example posts
"At the Veterans Health Administration, we have the exquisite gift of a hospice unit, a place to care for veterans when time is short. We care for the sons of mothers who can no longer be here to care for them."
In #APieceofMyMind, a #palliative care #physician reflects on https
"How can medicine save the most lives?"
Most people ask this rhetorically.
@Farzad_MD and Tom Frieden took it literally.
From banning smoking in NYC bars to cutting teen smoking in half in 5 years, this is what happens when you stop treating diseases and start preventing them. h
⚠️ Sacubitril/Valsartan works. So why aren’t we using it?
The evidence is undeniable:
↓ CV mortality: 20% (RCT) / 10–38% (RWE)
↓ HF hospitalization: 21% (RCT) / 10–16% (RWE)
↓ All-cause mortality: 15% (RCT) / 10–25% (RWE)
Plus: reverse remodeling, less MR, better QoL & https:
Created 2026-04-28 · Updated 2026-05-03
[glp1_peptide_macro_or_personal_framing]
Learned · 16 rejections · Active
Exclude posts about GLP-1 drugs, peptides, or weight loss therapeutics that focus on personal side effects, cost arbitrage, pharmaceutical pricing dynamics, supply chain contamination, or macro economic trends without connecting to a healthcare access, regulatory, or delivery system problem worth solving.
Posts about GLP-1 drugs, peptides, or weight loss medications focused on personal experiences, macro economics, or pharmaceutical business dynamics rather than healthcare systems innovation.
3 example posts
She's right. The safety risk was never the peptides. It was the supply chain. Regulated compounding access fixes the exact problems people are worried about. Heavy metals, contamination, underdosed vials.
And there it is.
Within hours of RFK's announcement someone is already pricing out how much Hims can charge for compounds the research community has had access to for a fraction of that cost.
This is why the outcome of these PCAC meetings matters more than the announcement.
I have now received nine reports from people taking GLP-1 drugs who got the same side effect:
They no longer feel normal when they come off.
"I feel hangry again", "I started thinking about hunger and I hate it", "I have to go back to Adderall".
8/9 reports -> from women.
Exclude posts that report healthcare fraud, billing abuse, insurance denials, or financial scandals (Optum, hospice fraud, billing errors) as breaking news or outrage without analyzing root causes in healthcare system design, reimbursement incentives, or operational vulnerabilities where the writer could offer insight.
Posts reporting healthcare fraud, billing scandals, or financial misconduct as isolated news without analyzing systemic healthcare delivery or policy implications.
3 example posts
$LLY $NVO $HIMS
🚨 BREAKING: COURT DISMISSES PART OF ELI LILLY LAWSUIT AGAINST EMPOWER PHARMACY
BOTH LILLY AND EMPOWER ISSUED STATEMENTS CELEBRATING THE RULING
Dismissed: Lanham Act false advertising + consumer harm claim
Allowed to proceed: unfair competition claims under h
"At the Veterans Health Administration, we have the exquisite gift of a hospice unit, a place to care for veterans when time is short. We care for the sons of mothers who can no longer be here to care for them."
In #APieceofMyMind, a #palliative care #physician reflects on https
Ah yes, Optum, the company that did Tim Walz’s audit of the 14 high risk Medicaid programs consumed by Somali fraud. https://t.co/dNRnjI7wxB
Created 2026-04-17 · Updated 2026-05-02
[unvalidated_or_speculative_medical_claims]
Learned · 15 rejections · Active
Exclude posts that promote or discuss unvalidated medical treatments, speculative drug applications, off-label uses, or anecdotal side effects without citing clinical evidence, regulatory status, or peer-reviewed research. This includes personal reports of medication effects framed as general conclusions.
Posts about unproven treatments, speculative drug mechanisms, or anecdotal medical claims without clinical evidence or regulatory approval status.
3 example posts
Cirrhosis is not necessarily “end-stage” liver disease. 35% of patients achieve recompensation (recovery) when the aetiology of cirrhosis has been treated. This is increasingly more common for MASLD cirrhosis in the GLP1 era.
📸: https://t.co/dITDGcLpTt https://t.co/REo0nlD1mn
I've closely monitored Alzheimers research for 40 years. Conclusions:
1)Incredible hype/Little practical value
2)Meds don't work
3)Early testing does much more harm than good
4)No low hanging fruit
5)Be skeptical of next "breakthru"
6)In many, just old age https://t.co/pAaCpo1Sfc
And there it is.
Within hours of RFK's announcement someone is already pricing out how much Hims can charge for compounds the research community has had access to for a fraction of that cost.
This is why the outcome of these PCAC meetings matters more than the announcement.
Created 2026-04-11 · Updated 2026-04-20
[ai_safety_cybersecurity_incident_tangential]
Learned · 14 rejections · Active
Exclude posts that report AI safety vulnerabilities, cybersecurity incidents, or data leaks (e.g., Claude deleting databases, ClickUp email leaks, malicious AI agent skills) unless the post explicitly analyzes how this impacts healthcare operations, patient data, or clinical workflows. Generic AI safety concerns are out of scope.
Posts about AI model security vulnerabilities, data breaches, or safety incidents without direct healthcare application context.
3 example posts
Our attention to biorisks posed by AI needs to match the current attention given to cyber-risks. The staged release of Claude Mythos in order to bolster defenses in key industries is necessary to shore up resilience against a new class of cyber-risk across critical industries. We
Anthropic built something so powerful that they are only letting 50 organisations touch it.
It is called Claude Mythos.
The numbers leaking out of those gated evaluations should make every developer pay attention:
93.9% on SWE-bench Verified
94.6% on GPQA Diamond
Claude Opus
Hacking Mexico government with AI assistance. Attacker exfiltrated hundreds of millions of citizen records. 75% of the executed commands across the entire cyberattack campaign were generated by Claude. 40 minutes after Claude said "I'm not going to create that file" it was report
Created 2026-04-30 · Updated 2026-05-03
[ai_safety_vulnerability_tangent]
Learned · 14 rejections · Active
Exclude posts that focus on AI safety vulnerabilities, jailbreaks, prompt injection attacks, or security incidents (e.g., Claude deleting databases, hardcoded API keys leaking) unless the post explicitly connects the vulnerability to healthcare delivery, patient safety, clinical workflows, or healthcare data. Generic AI security incidents reframed with loose healthcare language do not qualify.
Posts about AI model security vulnerabilities, jailbreaks, or safety failures that lack healthcare-specific application or consequence analysis.
3 example posts
Anthropic built something so powerful that they are only letting 50 organisations touch it.
It is called Claude Mythos.
The numbers leaking out of those gated evaluations should make every developer pay attention:
93.9% on SWE-bench Verified
94.6% on GPQA Diamond
Claude Opus
Hacking Mexico government with AI assistance. Attacker exfiltrated hundreds of millions of citizen records. 75% of the executed commands across the entire cyberattack campaign were generated by Claude. 40 minutes after Claude said "I'm not going to create that file" it was report
🚨 Claude broke its own safety rules and deleted an entire company's database in 9 seconds.
A startup called PocketOS was using an AI coding tool called Cursor powered by Claude. The AI was given a simple task in a test environment. It ran into an error and instead of stopping an
Exclude posts that report clinical research results, trial data, or academic findings (vision models, protein design, AI diagnostics) without analyzing how these findings change healthcare workflows, reimbursement, access, or clinical decision-making. The post must connect research to healthcare system implications.
Posts sharing clinical trial results, research findings, or academic observations without healthcare system analysis.
3 example posts
Experimentally Validated Deep Learning Control of Protein Aggregation
1. The study introduces AggreProt, a deep neural network that predicts residue-level aggregation-prone regions (APRs) directly from protein sequence, and then uses those predictions to design mutations that ht
As I mentioned before, I am now sharing an example from GPT-5.5 Pro, also featured by OpenAI, that really left me stunned by what it is capable of in biomedical science. (full report on the website I created with Codex, link in the thread).
To push GPT-5.5 Pro hard, I uploaded a
Not something you'd see everyday—changing the alphabet of life.
All of life organisms are are built from 20 amino acids. Now genAI is enabling life to be built with 19 amino acids, making isoleucine dispensable. @ScienceMagazine
https://t.co/7CBn0Xhuxs https://t.co/tkxtCrFx9Y
Created 2026-04-17 · Updated 2026-05-02
[ai_safety_vulnerability_tangential]
Learned · 13 rejections · Active
Exclude posts that describe AI model vulnerabilities, security breaches, or safety guardrail failures (e.g., Claude deleting databases, AI agents being hijacked, prompt injection attacks) unless the post explicitly analyzes systemic healthcare delivery, regulatory, or patient safety implications. Posts that sensationalize AI safety incidents without healthcare context should be rejected.
Posts about AI security incidents, jailbreaks, or safety vulnerabilities that lack healthcare systems application or consequence analysis.
3 example posts
Anthropic built something so powerful that they are only letting 50 organisations touch it.
It is called Claude Mythos.
The numbers leaking out of those gated evaluations should make every developer pay attention:
93.9% on SWE-bench Verified
94.6% on GPQA Diamond
Claude Opus
Hacking Mexico government with AI assistance. Attacker exfiltrated hundreds of millions of citizen records. 75% of the executed commands across the entire cyberattack campaign were generated by Claude. 40 minutes after Claude said "I'm not going to create that file" it was report
🚨 Claude broke its own safety rules and deleted an entire company's database in 9 seconds.
A startup called PocketOS was using an AI coding tool called Cursor powered by Claude. The AI was given a simple task in a test environment. It ran into an error and instead of stopping an
Exclude posts featuring political figures (RFK Jr., senators, congressmen) making healthcare claims, complaints about drug pricing, or regulatory criticism unless the post includes specific policy analysis, evidence-based critique, or healthcare system consequences—not just amplification of the claim.
Posts of political figures making broad healthcare claims or criticisms without substantive policy analysis or healthcare systems insight.
3 example posts
@PirateWires He's objectively correct. Brian Thompson made decisions that led to denials of medical care, and people died. He used Ai to find ways to deny claims ffs. Brian Thompson has more blood on his hands than whoever shot him
A senator complaining about drug prices while voting for the law that set them is not a reformer. He is a magician. The trick is making you watch his hands.
RFK Jr. calls out Democrat House representatives to their face for ignoring chronic disease while claiming to care about public health.
“The Congressman was talking about the deaths from infectious disease, which are a couple thousand a year.”
“90% of the people who die in this
Exclude posts that report pharmaceutical trial results, efficacy numbers, or drug approval news (e.g., obesity drugs, gene therapies, antivirals) without contextualizing how the data affects healthcare operations, pricing, market access, prescribing patterns, or healthcare infrastructure.
Posts announcing drug trial results or efficacy data without analyzing healthcare system implications, pricing, access, or clinical practice adoption.
3 example posts
🚨 Survodutide, a weekly subcutaneous dual GLP-1/glucagon agonist, posts new top-line Phase 3 results today
📊 SYNCHRONIZE-1: Adults with overweight/obesity randomized to survodutide vs. placebo for 76 weeks
⚖️ 16.6% weight loss (efficacy estimand) vs. 3.2% in placebo
85.1% http
$NVO $LLY
Boehringer Ingelheim and Zealand Pharma report strong phase 3 data for obesity drug survodutide.
Patients lost up to 16.6% of body weight over 76 weeks vs. 3.2% for placebo.
The drug targets both GLP-1 and glucagon, a combo approach aimed at boosting weight loss.
⚠️ Sacubitril/Valsartan works. So why aren’t we using it?
The evidence is undeniable:
↓ CV mortality: 20% (RCT) / 10–38% (RWE)
↓ HF hospitalization: 21% (RCT) / 10–16% (RWE)
↓ All-cause mortality: 15% (RCT) / 10–25% (RWE)
Plus: reverse remodeling, less MR, better QoL & https:
Exclude posts that report on healthcare fraud, insurance denial scandals, or regulatory criticism (hospital billing disputes, insurance denials, Medicaid fraud) as isolated incidents or outrage without connecting to broader healthcare system design, payment structure, or operational problems.
Posts reporting fraud, policy outrage, or regulatory incidents without systemic healthcare analysis
3 example posts
Ah yes, Optum, the company that did Tim Walz’s audit of the 14 high risk Medicaid programs consumed by Somali fraud. https://t.co/dNRnjI7wxB
Last week I randomly got a $9,000 bill for a hospital visit from last year. I called them up and they said my insurance decided not to cover it. Why? Because.
.@Aetna can just decide they’re not interested in covering something and then you’re left with an inflated bill to pay h
NY rakes in $6.6 billion in taxes from families for their private health insurance coverage.
That’s $1,760 a year per family on top of their premiums.
Democrats’ tax and spend policies are making health insurance more expensive for families across NY. https://t.co/ggrjqFKrv4
Created 2026-04-29 · Updated 2026-05-03
[political_figures_healthcare_posturing]
Learned · 12 rejections · Active
Exclude posts where a political figure makes a healthcare claim, accusation, or regulatory complaint, unless the post provides detailed healthcare systems analysis (e.g., specific policy mechanisms, epidemiological data, or institutional reform pathway). Posts that amplify a politician's healthcare statement without independent analysis should be rejected.
Posts featuring political figures making healthcare claims or accusations without substantive healthcare systems analysis.
3 example posts
What $1 Billion a Day Buys in American Health Care
The U.S. is spending $1 billion/day on the war in Iran — over a year, that would cover 37 million Medicaid enrollees. Congress just cut $911 billion from the program because it was too expensive.
Read & subscribe (for free!)
A senator complaining about drug prices while voting for the law that set them is not a reformer. He is a magician. The trick is making you watch his hands.
RFK Jr. calls out Democrat House representatives to their face for ignoring chronic disease while claiming to care about public health.
“The Congressman was talking about the deaths from infectious disease, which are a couple thousand a year.”
“90% of the people who die in this
Exclude posts that discuss infrastructure scaling, compute capacity, power generation, chip manufacturing, or data center expansion in the context of general AI advancement unless the post explicitly connects these technical developments to healthcare delivery, clinical AI systems, or health tech deployment constraints.
Posts about AI compute, power grids, data centers, chips, or infrastructure scaling that lack healthcare-specific application or relevance.
3 example posts
Great example of what Foundry enables: durable, stateful agents that run across time boundaries, orchestrate tools and models, and close the loop with evaluation and improvement over long-running workflows. @jeffhollan https://t.co/v0yWTDIcn3
Since I began work on AI in 2010, training compute for frontier models has grown by one trillion times.
Now we're looking at something like another thousand-fold growth in effective compute by the end of 2028.
1000x the existing 1,000,000,000,000x.
Extraordinary stuff.
Follow the bottleneck.
Chips → data centers → grid equipment → power → gas turbines
Grid equipment grew 1%/yr for decades. Then data centers showed up as an entirely new buyer.
Gas turbine makers shipped 5–7 GW/yr. Last year? Orders hit 100 GW.
@maxlbcook on how he https://t.
Exclude posts that invoke healthcare topics (Medicaid, insurance, FDA policy) primarily as a vehicle for partisan political commentary, government spending criticism, or tax policy outrage without substantive analysis of healthcare delivery, access, or operational challenges.
Posts using healthcare as a framing for political or partisan outrage without substantive healthcare policy analysis.
3 example posts
Ah yes, Optum, the company that did Tim Walz’s audit of the 14 high risk Medicaid programs consumed by Somali fraud. https://t.co/dNRnjI7wxB
NY rakes in $6.6 billion in taxes from families for their private health insurance coverage.
That’s $1,760 a year per family on top of their premiums.
Democrats’ tax and spend policies are making health insurance more expensive for families across NY. https://t.co/ggrjqFKrv4
States are rushing “affordability” bills, but most just mask high prices with rebates, mandates, or price caps. @MrRBourne & Nathan Miller argue durable relief means rolling back cost-raising rules and expanding supply.
https://t.co/WG5egT1NfL
Created 2026-04-15 · Updated 2026-04-30
[clinical_observation_without_systems_context]
Learned · 12 rejections · Active
Exclude posts that describe individual clinical cases, patient experiences, or isolated trial observations without analyzing how these findings affect healthcare systems, access, policy, or organizational practice.
Posts reporting clinical anecdotes, patient observations, or trial data without connecting to broader healthcare delivery or system implications.
3 example posts
Started with standard ChatGPT for clinicians asking for a differential for a GI bleed patient. Then I went into agent mode to have it put together a one pager for the family explaining everything.
Of course, this is not a real patient. https://t.co/PEUeCqizT1
low grade fever, mildly tachycardic, weakness, nothing focal, no alarm signs/symptoms
epic sepsis alert triggered
vanc/pip-tazo given, lactate checked
flu+
sepsis metric met
care worse
lather, rinse, repeat
Metric based "QI" does net harm
I sat with a patient today who first noticed a change in October. It’s April now. In all those months of appointments and follow-ups, her breast had only truly been looked at twice. That stayed with me.
If something has changed with your body — especially something under your ht
Created 2026-04-13 · Updated 2026-04-26
[truncated_incomplete_posts]
Learned · 12 rejections · Active
Exclude posts that end abruptly mid-sentence, are missing critical context, or appear truncated with no clear conclusion. These posts lack sufficient information to assess relevance to healthcare tech systems.
Posts that are cut off mid-sentence or clearly incomplete, making them impossible to evaluate for substantive healthcare tech content.
3 example posts
As I mentioned before, I am now sharing an example from GPT-5.5 Pro, also featured by OpenAI, that really left me stunned by what it is capable of in biomedical science. (full report on the website I created with Codex, link in the thread).
To push GPT-5.5 Pro hard, I uploaded a
Today we’re giving an update on ramping F.03 production at BotQ
In the last 120 days, Figure scaled manufacturing 24x - from 1 robot/day to 1 robot/hour
We will manufacture 55 humanoid robots this week https://t.co/Am5Kn53mVE
Thought I would start posting about interesting things happening at AWS. Not a bad day to start.🚀
Today at #WhatsNextWithAWS we announced a big step forward with @OpenAI on Amazon Bedrock:
1. OpenAI models now available
2. Codex for enterprise development
3. Amazon Bedrock Manag
Exclude posts that merely announce pharmaceutical trial results, drug efficacy data, or Phase 3 outcomes (e.g., obesity drug weight loss %, hypertension study results, gene therapy trials) unless the post analyzes systemic healthcare barriers, reimbursement, access, regulatory impact, or operational implementation.
Posts reporting drug trial results or pharmaceutical data announcements without analysis of healthcare delivery, access, pricing, or systemic implications.
3 example posts
Among veterans with moderate to severe #ChronicPain in primary care, the whole health team intervention produced greater improvement in the Brief Pain Inventory interference scores at 12 months compared with cognitive behavioral therapy and usual care.
https://t.co/xtZY4sgyGt h
A JAHA study of 1,181,007 younger US veterans just dropped bad news about BP in your 30s.
This is not mainly an older-adult problem anymore. Nearly half met the bar for hypertension. The catch: about half of them didn't know it.
Here's what most people miss: https://t.co/mxT2oC
Grace Science’s experience highlights a growing disconnect at FDA between talk and action on therapies for rare diseases. Despite efficacy signals in a monogenic ultrarare disease, FDA said the plausible mechanism framework is not available, and requires a new manufacturing
Created 2026-05-01 · Updated 2026-05-03
[truncated_or_incomplete_posts]
Learned · 11 rejections · Active
Exclude posts that end abruptly with ellipses, incomplete sentences, or missing final thoughts that prevent full understanding of the argument or claim being made.
Posts that are cut off mid-sentence or clearly incomplete, lacking full context or conclusion.
3 example posts
As I mentioned before, I am now sharing an example from GPT-5.5 Pro, also featured by OpenAI, that really left me stunned by what it is capable of in biomedical science. (full report on the website I created with Codex, link in the thread).
To push GPT-5.5 Pro hard, I uploaded a
Today we’re giving an update on ramping F.03 production at BotQ
In the last 120 days, Figure scaled manufacturing 24x - from 1 robot/day to 1 robot/hour
We will manufacture 55 humanoid robots this week https://t.co/Am5Kn53mVE
Introducing Mesa: the most powerful filesystem ever built, designed specifically for enterprise AI agents.
Every team building agents eventually hits the same wall: where do the files live?
Not the chat history, the actual artifacts the agent works on.
> The contracts your age
Exclude posts that announce AI infrastructure tools (OpenShell, Mesa, NemoClaw), compute platforms (NVIDIA stacks), developer workshops, or sandbox technologies unless the post shows validated healthcare workflow adoption or clinical problem-solving. Generic AI tool launches with loose healthcare framing are excluded.
Posts promoting AI infrastructure, compute platforms, or developer tools without demonstrating healthcare-specific application or validation.
3 example posts
If you're a student, professor, or researcher—this one's for you.
We’re hosting a series of virtual learnings for you to get hands-on experience with the NVIDIA NemoClaw and OpenShell software stack. You’ll get practical guidance on integrating agents with academic datasets and
We created OpenShell to make AI agents safe for enterprises.
Built in open source so any company can adopt and trust it, this secure sandbox controls what agents can access, share, and send.
Our CEO, Jensen, explains 👇 https://t.co/7EiIsxr0CG
Introducing Mesa: the most powerful filesystem ever built, designed specifically for enterprise AI agents.
Every team building agents eventually hits the same wall: where do the files live?
Not the chat history, the actual artifacts the agent works on.
> The contracts your age
Created 2026-04-16 · Updated 2026-05-02
[retweet_or_shallow_commentary_only]
Learned · 11 rejections · Active
Exclude posts that are primarily retweets with minimal commentary, simple agreement/disagreement statements, or short reactions that do not add original analysis, data, or healthcare technology perspective. Posts must contain substantive original insight.
Posts that are retweets or brief reactions without original analysis or substantive healthcare technology insight
3 example posts
Yesterday, @RandDWorld featured us twice.
@ProQR turns to Ginkgo’s autonomous lab to scale AI-enabled RNA editing discovery: https://t.co/DyENAd4VdM
Ginkgo’s CEO says biotech needs its Waymo moment: https://t.co/kW27eBjAmf
Want to learn more about our partnership with ProQR? h
.@openloophealth expands into sleep diagnostics.
Health tech company announces new partnership Happy Sleep—bringing at‑home sleep apnea testing to patients for the first time.
Watch to hear more about its big step toward better rest and smarter care⤵️
https://t.co/ATcNkYrrpK h
good news: it is a specific virus that has a good prognosis - 85%+ of full recovery.
thanks everyone who helped me; it is hard to research while immobilized, and I got some things wrong, which you helped clear up. im extremely thankful and hope i can give it back somehow
sadl
Created 2026-04-14 · Updated 2026-04-16
[ai_company_funding_and_valuation]
Learned · 11 rejections · Active
Exclude posts that focus on AI company funding announcements, ARR growth, valuation milestones, or business metrics (e.g., revenue growth from $1M to $60M, $1.03B seed rounds, $3.5B valuations). The post must discuss healthcare impact or application, not the company's financial performance.
Posts about AI company fundraising rounds, valuations, and business metrics rather than healthcare applications
3 example posts
The AI labs' voracious appetite for training data has lifted a number of startups offering that data.
That includes Fleet, an RL gym startup that's grown ARR from $1m to $60m+ and is now raising at ~$750m from BCV.
https://t.co/v3CceXapH1
The 26 prompts running inside 𝗖𝗹𝗮𝘂𝗱𝗲 𝗖𝗼𝗱𝗲 just got open-sourced. This is literally the entire brain of a $200/month AI coding tool.
Someone reverse-engineered every prompt from the accidentally published npm source and you can now study all of them for free.
Claude Code uses 26
OpenAI dropping Agent Builder today is either going to make you rich or expose that you've been selling hot air.
I went deep analyzing what this actually means.
Here's the $4B opportunity hiding in plain sight:
The mainstream narrative: "Agent Builder democratizes AI! Anyone c
Exclude posts that are clinical anecdotes, single case reports, or personal observations from clinicians (e.g., 'I used Claude for a GI bleed differential') unless the post extracts a systems-level insight about clinical workflow, technology adoption barriers, or care delivery model implications.
Posts sharing individual clinical observations, case examples, or single-patient anecdotes without healthcare system-level insights.
3 example posts
"At the Veterans Health Administration, we have the exquisite gift of a hospice unit, a place to care for veterans when time is short. We care for the sons of mothers who can no longer be here to care for them."
In #APieceofMyMind, a #palliative care #physician reflects on https
Last week I randomly got a $9,000 bill for a hospital visit from last year. I called them up and they said my insurance decided not to cover it. Why? Because.
.@Aetna can just decide they’re not interested in covering something and then you’re left with an inflated bill to pay h
"How can medicine save the most lives?"
Most people ask this rhetorically.
@Farzad_MD and Tom Frieden took it literally.
From banning smoking in NYC bars to cutting teen smoking in half in 5 years, this is what happens when you stop treating diseases and start preventing them. h
Exclude posts that debate nutrition claims, fitness protocols, or health guidelines (e.g., 'Your body can only use 25-30g protein...', GLP-1 heart muscle loss narratives) unless the post cites peer-reviewed evidence, clinical trial data, or healthcare system policy implications affecting patient care at scale.
Posts making sweeping health claims or debunking fitness/nutrition myths without clinical validation or healthcare system context.
3 example posts
"There is also compelling preliminary evidence suggesting that the use of these drugs [GLP-1] could exacerbate and lead to new diagnoses of restrictive
eating disorders, including anorexia nervosa."
@NEJM today
https://t.co/swx1y25F1X https://t.co/bQYlpuFBjM
Attention PK nerds, pharmacologists, and clinicians who actually understand serum levels:
I haven’t seen this discussed, but it could matter for patients priced out of injectables.
If a 25 mg oral semaglutide tablet has ~1% bioavailability, that’s ~0.25 mg systemically… on
The only problem with the GLP-1 heart muscle loss narrative is...
... that it's just a narrative.
GLP-1s have reliably improved cardiovascular outcomes in trials, to the point that some research suggests benefit may even be independent of (not reliant on) weight loss.
Created 2026-04-25 · Updated 2026-04-28
[political_outrage_without_healthcare_analysis]
Learned · 9 rejections · Active
Exclude posts that frame healthcare issues (fraud, denials, policy cuts) primarily as political outrage or moral judgment without explaining the healthcare system dynamics, incentives, or structural changes at stake. Moral righteousness without analysis is opinion, not insight.
Posts expressing political anger about healthcare policy or fraud without substantive systems analysis
3 example posts
Ah yes, Optum, the company that did Tim Walz’s audit of the 14 high risk Medicaid programs consumed by Somali fraud. https://t.co/dNRnjI7wxB
Last week I randomly got a $9,000 bill for a hospital visit from last year. I called them up and they said my insurance decided not to cover it. Why? Because.
.@Aetna can just decide they’re not interested in covering something and then you’re left with an inflated bill to pay h
NY rakes in $6.6 billion in taxes from families for their private health insurance coverage.
That’s $1,760 a year per family on top of their premiums.
Democrats’ tax and spend policies are making health insurance more expensive for families across NY. https://t.co/ggrjqFKrv4
Exclude posts that focus on AI infrastructure (chips, data centers, compute scaling, power grids, GPU benchmarks) or general AI capability announcements unless the post explicitly connects the infrastructure or capability advancement to a specific, validated healthcare application or clinical use case.
Posts about AI compute, infrastructure scaling, or general technical capabilities with only loose or speculative healthcare framing.
3 example posts
📈 NVIDIA tops AI leaderboards and benchmarks with open models driven by extreme co-design across compute, networking, memory, storage, and software.
This includes models for biology, AI physics, agentic AI, physical AI, robotics, and autonomous vehicles.
By being vertically htt
Great example of what Foundry enables: durable, stateful agents that run across time boundaries, orchestrate tools and models, and close the loop with evaluation and improvement over long-running workflows. @jeffhollan https://t.co/v0yWTDIcn3
Software is not a moat
Over the last 15+ years, nearly every innovation @EvanSpiegel and his team shipped got copied. Stories. AR glasses. Swipe-based navigation. The camera-first interface.
And yet @Snapchat is the only independent consumer social app that has lasted. Nearly 1
Exclude posts that describe AI safety failures, security breaches, or vulnerability demonstrations (e.g., Claude deleting databases, hardcoded API keys, malicious skills) unless the post explicitly analyzes healthcare-specific implications or system-level consequences in healthcare delivery.
Posts about AI safety incidents, security vulnerabilities, or jailbreaks that lack healthcare-specific context or application.
3 example posts
Anthropic built something so powerful that they are only letting 50 organisations touch it.
It is called Claude Mythos.
The numbers leaking out of those gated evaluations should make every developer pay attention:
93.9% on SWE-bench Verified
94.6% on GPQA Diamond
Claude Opus
Hacking Mexico government with AI assistance. Attacker exfiltrated hundreds of millions of citizen records. 75% of the executed commands across the entire cyberattack campaign were generated by Claude. 40 minutes after Claude said "I'm not going to create that file" it was report
𝗔𝗜 𝗔𝗴𝗲𝗻𝘁𝘀 𝗔𝗿𝗲 𝗔𝗹𝗿𝗲𝗮𝗱𝘆 𝗕𝗲𝗶𝗻𝗴 𝗛𝗶𝗷𝗮𝗰𝗸𝗲𝗱
Researcher Aks Sharma at Manifold found 30 malicious skills on ClawHub turning AI agents into a crypto farming botnet: 10,000 downloads before anyone noticed.
⬩ The attack required zero exploits. Malicious https://t.co/v4oBXPPydu
Exclude posts that discuss GLP-1/peptide market pricing, script numbers, competitive launches, or weight loss outcomes (e.g., 'Oral Wegovy scripts off to slow start', 'survodutide 16.6% weight loss') unless the post analyzes healthcare system operational impact, payer strategy, access barriers, or clinical workflow integration.
Posts about GLP-1 and peptide drug market dynamics, pricing competition, or script volume trends without healthcare system operational analysis.
3 example posts
🚨 Survodutide, a weekly subcutaneous dual GLP-1/glucagon agonist, posts new top-line Phase 3 results today
📊 SYNCHRONIZE-1: Adults with overweight/obesity randomized to survodutide vs. placebo for 76 weeks
⚖️ 16.6% weight loss (efficacy estimand) vs. 3.2% in placebo
85.1% http
$NVO $LLY
Boehringer Ingelheim and Zealand Pharma report strong phase 3 data for obesity drug survodutide.
Patients lost up to 16.6% of body weight over 76 weeks vs. 3.2% for placebo.
The drug targets both GLP-1 and glucagon, a combo approach aimed at boosting weight loss.
This is just two GLP-1s, one peptide, one use case
what happens when off-label prescribing ramps up
what happens when retatrutide hits the market
what happens when other peptides become compoundable
chapter one
Exclude posts that discuss GLP-1 drugs, semaglutide, retatrutide, compounded peptides, or weight-loss medications purely from a market, pricing, competitive, or personal/off-label use perspective—without analyzing healthcare system implications, access barriers, clinical protocols, or reimbursement policy.
Posts about GLP-1 drugs, peptides, or weight-loss medications focused on market dynamics, pricing, or off-label use without healthcare systems context.
3 example posts
Ivermectin and Mebendazole Cost a Fraction of Chemo. Big Pharma Can't Patent Them. That's the Problem.
Cancer centers get a cut of every chemotherapy bill. Generic drugs don't generate that margin. Two affordable, widely available compounds showing 84% clinical benefit in a real
As I mentioned before, I am now sharing an example from GPT-5.5 Pro, also featured by OpenAI, that really left me stunned by what it is capable of in biomedical science. (full report on the website I created with Codex, link in the thread).
To push GPT-5.5 Pro hard, I uploaded a
Not something you'd see everyday—changing the alphabet of life.
All of life organisms are are built from 20 amino acids. Now genAI is enabling life to be built with 19 amino acids, making isoleucine dispensable. @ScienceMagazine
https://t.co/7CBn0Xhuxs https://t.co/tkxtCrFx9Y
Created 2026-04-28 · Updated 2026-05-03
[off_topic_non_healthcare_with_loose_framing]
Learned · 8 rejections · Active
Exclude posts about conspiracy theories (UFOs), entertainment (film rentals), general employment statistics, financial chargebacks, fitness nutrition claims, or other non-healthcare domains that either have no healthcare label or use healthcare labels only as loose cover. Posts must be substantively about healthcare, not accidentally tagged.
Posts about non-healthcare topics (UFOs, sports, movies, employment data, chargebacks) that have no legitimate healthcare connection
3 example posts
Bob Lazar allegedly watched people fly a UFO at Area 51.
“They knew how to fly it.”
“The craft had a corona discharge glow on the bottom and lifted off silently up into the sky … ”
And it had one shocking, anomalous effect that still perplexes him to this day:
As Lazar https:
Two companies you've never heard of built a combined $373M revenue business by helping employees bypass IT. Now comes the part where IT buys its way back in.
Replit just hit $253M ARR growing 2,352% YoY. 85% of the Fortune 500 have employees on it. Lovable is at $120M ARR, $6.6B
"Your body can only use 25-30g of protein per meal. Anything above that gets wasted."
This claim has been repeated in fitness nutrition for over a decade, and it was built on studies that measured the right thing over the wrong timescale.
Moore 2009 gave six young men 0, 5, ht
Exclude posts about AI infrastructure (chips, GPUs, data centers, energy grids, compute scaling) unless they explicitly connect to healthcare-specific bottlenecks, clinical workflows, or healthcare system transformation. General AI infrastructure hype without healthcare application should be rejected.
Posts about AI compute scaling, data center buildout, energy infrastructure, or model training bottlenecks that lack specific healthcare application or impact.
3 example posts
Demis Hassabis says bigger context windows are still a brute force answer to memory.
The human brain does something stranger. During sleep, it replays what matters and folds new knowledge into what it already knows.
AI does not need infinite context. It needs the right memory h
[New] from a16z @speedrun:
Come for the Agent, Stay for the Network
there's a quiet pattern hiding inside the most defensible vertical AI startups right now:
the agent is the wedge
the network is the moat.
here's what I mean:
an HVAC tech needs a part today.
>>Traditionally:
📈 NVIDIA tops AI leaderboards and benchmarks with open models driven by extreme co-design across compute, networking, memory, storage, and software.
This includes models for biology, AI physics, agentic AI, physical AI, robotics, and autonomous vehicles.
By being vertically htt
Created 2026-04-22 · Updated 2026-05-02
[glp1_peptide_macro_or_personal_narrative]
Learned · 8 rejections · Active
Exclude posts that discuss GLP-1 or peptide drugs through personal experience (side effects, family stories), pricing speculation (what Hims charges), or general commentary on the 'peptide economy' without healthcare policy, regulatory, or operational depth. Personal narratives and pricing gossip do not qualify.
Posts about GLP-1 drugs or peptides framed through personal anecdotes, macro pricing commentary, or lifestyle observations without healthcare systems analysis.
3 example posts
Metformin has been front-line for type 2 diabetes for 30 years.
The head-to-head data from the last decade says SGLT2 inhibitors now beat it on every cardiovascular endpoint that matters.
Lower MACE. Lower heart failure. Lower all-cause mortality. Same glycemic control.
ADA's
She's right. The safety risk was never the peptides. It was the supply chain. Regulated compounding access fixes the exact problems people are worried about. Heavy metals, contamination, underdosed vials.
I never met my grandfather.
He died of pancreatic cancer when my father was just 19. Today, Yash Bindal, 33, father to 18-month-old Maya, faces the same fate.
@PopVaxIndia is using AI to make him a personalized generative medicine to extend his life.
https://t.co/O5VIXbmMGd
Exclude posts that focus on AI compute infrastructure scaling (data centers, power grids, semiconductors, token budgets, training compute, GPU supply), AI model technical capabilities (interpretability, sparse networks, atomic precision), or general AI advancement, unless the post explicitly connects to a healthcare deployment, clinical workflow, or healthcare system decision.
Posts about AI compute infrastructure, energy grids, semiconductor scaling, or foundational AI capabilities presented as tech industry news without healthcare-specific deployment or clinical application.
3 example posts
📈 NVIDIA tops AI leaderboards and benchmarks with open models driven by extreme co-design across compute, networking, memory, storage, and software.
This includes models for biology, AI physics, agentic AI, physical AI, robotics, and autonomous vehicles.
By being vertically htt
Great example of what Foundry enables: durable, stateful agents that run across time boundaries, orchestrate tools and models, and close the loop with evaluation and improvement over long-running workflows. @jeffhollan https://t.co/v0yWTDIcn3
AI can now design antibodies that bind with atomic precision, but not ones that cells can produce. Our preprint closes this gap, delivering a structural principle, an AI-guided rescue pipeline, and adalimumab variants with 20-100x in vivo potency.
https://t.co/GvfgHA5EcU
Exclude posts that focus on AI model technical capabilities (e.g., vulnerability discovery, security exploits, model architecture features) where healthcare is mentioned only as a passing context or matching tag, not as the substantive focus or application domain.
Posts about AI model technical capabilities (zero-day vulnerabilities, policy implications, attacking groups) with tenuous or absent healthcare connection.
3 example posts
project glasswing is a good example of anthropic’s stated theory that being at the frontier allows them to shape policy
if openai releases a high cyber capability model generally, rather than through a special release, and there is a major breach, they will get a lot of flak
Guillermo reports "we believe the attacking group to be highly sophisticated and, I strongly suspect, significantly accelerated by AI. They moved with surprising velocity and in-depth understanding of Vercel"
Alex Stamos warns us that defensive agents with autonomy and https://t
> Vercel got pawned
> severe enough to notify law enforcement
> the only advice: “review your environment variables”
> what does that even mean?
> $10B company, and this is how you communicate
Cyber attacks ramping fast, starting to see why Anthropic is scared to
Exclude posts that make broad claims about AI's impact on jobs, labor markets, or worker displacement (e.g., 'AI will automate 57% of work') with healthcare mentioned only as supporting evidence or one case study, rather than focusing on healthcare-specific workforce dynamics.
Posts about AI-driven labor market disruption, job displacement, or workforce transformation using healthcare as one example among broader macro claims.
3 example posts
🚨 BREAKING: Anthropic new research finds that AI’s impact on jobs is primarily at the task level.
Rather than eliminating jobs, it is progressively taking over the functions that define them and gradually absorbing the core work in many jobs/roles.
The paper, “Labor Market http
AI could, in theory, automate 57% of US work hours. Yet most human skills remain relevant.
The future of work is not human or machine – but a partnership between people, agents, and robots.
Read our latest research on skill partnerships in the age of AI: https://t.co/h1K56uPqPo
AI is taking on more of the labor.
It is not taking on the accountability.
@danielnewmanUV and @GregLotko talk with @Darren_Surch of @Interskil about why mainframe teams now have to interpret and stand behind AI-driven outputs, and why organizations that stop investing in htt
Exclude posts that report pharmaceutical trial outcomes, drug efficacy percentages, or FDA approval announcements without analyzing how the drug impacts healthcare delivery, reimbursement, adoption barriers, or system-level outcomes. Raw trial data or approval news alone is insufficient.
Posts announcing pharmaceutical trial results or drug efficacy data without healthcare systems context
3 example posts
🚨 Survodutide, a weekly subcutaneous dual GLP-1/glucagon agonist, posts new top-line Phase 3 results today
📊 SYNCHRONIZE-1: Adults with overweight/obesity randomized to survodutide vs. placebo for 76 weeks
⚖️ 16.6% weight loss (efficacy estimand) vs. 3.2% in placebo
85.1% http
$NVO $LLY
Boehringer Ingelheim and Zealand Pharma report strong phase 3 data for obesity drug survodutide.
Patients lost up to 16.6% of body weight over 76 weeks vs. 3.2% for placebo.
The drug targets both GLP-1 and glucagon, a combo approach aimed at boosting weight loss.
⚠️ Sacubitril/Valsartan works. So why aren’t we using it?
The evidence is undeniable:
↓ CV mortality: 20% (RCT) / 10–38% (RWE)
↓ HF hospitalization: 21% (RCT) / 10–16% (RWE)
↓ All-cause mortality: 15% (RCT) / 10–25% (RWE)
Plus: reverse remodeling, less MR, better QoL & https:
Exclude posts that describe individual clinical encounters, personal patient anecdotes, single-case observations, or clinician tool usage stories unless they generalize to healthcare system-level problems, workflow bottlenecks, or implementation barriers affecting multiple providers or patient populations.
Posts sharing clinical case observations, patient stories, or single-clinician experiences without connecting to broader healthcare delivery challenges or system-level implications.
3 example posts
"At the Veterans Health Administration, we have the exquisite gift of a hospice unit, a place to care for veterans when time is short. We care for the sons of mothers who can no longer be here to care for them."
In #APieceofMyMind, a #palliative care #physician reflects on https
Last week I randomly got a $9,000 bill for a hospital visit from last year. I called them up and they said my insurance decided not to cover it. Why? Because.
.@Aetna can just decide they’re not interested in covering something and then you’re left with an inflated bill to pay h
Started with standard ChatGPT for clinicians asking for a differential for a GI bleed patient. Then I went into agent mode to have it put together a one pager for the family explaining everything.
Of course, this is not a real patient. https://t.co/PEUeCqizT1
Exclude posts that present clinical trial results, drug efficacy statistics, or phase trial outcomes (e.g., percentage weight loss, cardiovascular endpoints) without addressing how the drug will be accessed, covered by insurance, prescribed in practice, or integrated into care systems.
Posts reporting clinical trial data or drug efficacy findings without healthcare access, coverage, or implementation analysis.
3 example posts
What superhuman vision can detect from the retinal photo, which human eyes cannot, is stunning. A new foundation AI model screening for diabetes hypertension, hyperlipidemia, gout, osteoporosis, and thyroid disease @NatureMedicine
https://t.co/GhKvUqz4Vy https://t.co/iKcXCbLceu
⚠️ Sacubitril/Valsartan works. So why aren’t we using it?
The evidence is undeniable:
↓ CV mortality: 20% (RCT) / 10–38% (RWE)
↓ HF hospitalization: 21% (RCT) / 10–16% (RWE)
↓ All-cause mortality: 15% (RCT) / 10–25% (RWE)
Plus: reverse remodeling, less MR, better QoL & https:
AI can now design antibodies that bind with atomic precision, but not ones that cells can produce. Our preprint closes this gap, delivering a structural principle, an AI-guided rescue pipeline, and adalimumab variants with 20-100x in vivo potency.
https://t.co/GvfgHA5EcU
Created 2026-04-28 · Updated 2026-04-29
[tangential_ai_infrastructure_or_compute_hype]
Learned · 7 rejections · Active
Exclude posts about AI infrastructure platforms, GPU compute, foundational models, or cloud services (AWS, NVIDIA, OpenAI) that mention healthcare tangentially but focus primarily on the infrastructure or platform's general capabilities rather than a specific healthcare problem or workflow being solved.
Posts about AI infrastructure, cloud platforms, or compute capabilities without healthcare-specific application
3 example posts
If you're a student, professor, or researcher—this one's for you.
We’re hosting a series of virtual learnings for you to get hands-on experience with the NVIDIA NemoClaw and OpenShell software stack. You’ll get practical guidance on integrating agents with academic datasets and
We created OpenShell to make AI agents safe for enterprises.
Built in open source so any company can adopt and trust it, this secure sandbox controls what agents can access, share, and send.
Our CEO, Jensen, explains 👇 https://t.co/7EiIsxr0CG
Thought I would start posting about interesting things happening at AWS. Not a bad day to start.🚀
Today at #WhatsNextWithAWS we announced a big step forward with @OpenAI on Amazon Bedrock:
1. OpenAI models now available
2. Codex for enterprise development
3. Amazon Bedrock Manag
Created 2026-04-28 · Updated 2026-05-03
[broad_unvalidated_health_claim_without_evidence]
Learned · 7 rejections · Active
Exclude posts that make sweeping claims about AI transforming healthcare, disrupting the labor market, or changing medical practice without citing clinical evidence, validated deployments, or grounded healthcare system analysis.
Posts making broad claims about AI capabilities, market trends, or health outcomes without clinical validation, evidence, or healthcare system analysis.
3 example posts
The problem with Reality Labs is not ambition. It is time. AI turned into revenue faster because it improves existing workflows. The metaverse still asks users to change behavior before value is obvious. https://t.co/lopZiUwGU5
Revolut just moved the IP of banking into a model.
Trained on 24 billion banking events in 111 countries.
One foundation model replacing six separate ML systems.
Credit scoring: +130%
Fraud recall: +65%
Marketing engagement: +79%
The model is the new moat.
What superhuman vision can detect from the retinal photo, which human eyes cannot, is stunning. A new foundation AI model screening for diabetes hypertension, hyperlipidemia, gout, osteoporosis, and thyroid disease @NatureMedicine
https://t.co/GhKvUqz4Vy https://t.co/iKcXCbLceu
Exclude posts from AI companies (NVIDIA, OpenAI, Meta, Anthropic) announcing new software, models, or platforms (OpenShell, NemoClaw, GPT-5, Claude Mythos, Mesa, Foundry) that read as promotional without validating healthcare-specific outcomes, clinical deployment success, or healthcare workflow transformation.
Posts celebrating AI company announcements, product launches, or technical capabilities without demonstrating concrete healthcare use cases or system-level impact.
3 example posts
If you're a student, professor, or researcher—this one's for you.
We’re hosting a series of virtual learnings for you to get hands-on experience with the NVIDIA NemoClaw and OpenShell software stack. You’ll get practical guidance on integrating agents with academic datasets and
We created OpenShell to make AI agents safe for enterprises.
Built in open source so any company can adopt and trust it, this secure sandbox controls what agents can access, share, and send.
Our CEO, Jensen, explains 👇 https://t.co/7EiIsxr0CG
Introducing Mesa: the most powerful filesystem ever built, designed specifically for enterprise AI agents.
Every team building agents eventually hits the same wall: where do the files live?
Not the chat history, the actual artifacts the agent works on.
> The contracts your age
Created 2026-04-27 · Updated 2026-05-03
[off_topic_or_non_healthcare_domain]
Learned · 7 rejections · Active
Exclude posts that are completely outside healthcare domains (UFOs, entertainment, immigration policy, criminal justice, sociology research, environmental issues) even if they contain tangential healthcare language or are miscategorized.
Posts about UFOs, film releases, immigration, labor organizing, criminal justice, or other domains with no healthcare relevance
3 example posts
Bob Lazar allegedly watched people fly a UFO at Area 51.
“They knew how to fly it.”
“The craft had a corona discharge glow on the bottom and lifted off silently up into the sky … ”
And it had one shocking, anomalous effect that still perplexes him to this day:
As Lazar https:
"Your body can only use 25-30g of protein per meal. Anything above that gets wasted."
This claim has been repeated in fitness nutrition for over a decade, and it was built on studies that measured the right thing over the wrong timescale.
Moore 2009 gave six young men 0, 5, ht
Sent a European Advertiser hundreds of leads last month for an invoicing totaling roughly $30k
They just sent over a chargeback report for 4 leads totaled at roughly $40
Never do this
Eat the loss, don’t mention it, not worth diminishing yourself in an affiliates eye over $40
Created 2026-04-25 · Updated 2026-04-27
[truncated_incomplete_unfinished_posts]
Learned · 7 rejections · Active
Exclude posts that end abruptly with ellipsis, incomplete sentences, or orphaned fragments that prevent understanding the full claim or argument. The post must be substantially complete to be evaluable.
Posts that are cut off mid-sentence, lack closure, or appear to be drafts without coherent argument.
3 example posts
🚨 Claude broke its own safety rules and deleted an entire company's database in 9 seconds.
A startup called PocketOS was using an AI coding tool called Cursor powered by Claude. The AI was given a simple task in a test environment. It ran into an error and instead of stopping an
🚨 BREAKING: Anthropic new research finds that AI’s impact on jobs is primarily at the task level.
Rather than eliminating jobs, it is progressively taking over the functions that define them and gradually absorbing the core work in many jobs/roles.
The paper, “Labor Market http
🚨 Survodutide, a weekly subcutaneous dual GLP-1/glucagon agonist, posts new top-line Phase 3 results today
📊 SYNCHRONIZE-1: Adults with overweight/obesity randomized to survodutide vs. placebo for 76 weeks
⚖️ 16.6% weight loss (efficacy estimand) vs. 3.2% in placebo
85.1% http
Exclude posts announcing AI model capabilities, product launches, or feature releases (e.g., Claude Code architecture, Mesa filesystem, OpenAI on Bedrock) unless the post demonstrates specific healthcare application, healthcare customer use cases, or healthcare system impact.
Posts about AI company product announcements or capability releases that lack healthcare-specific application or validation.
3 example posts
Introducing Mesa: the most powerful filesystem ever built, designed specifically for enterprise AI agents.
Every team building agents eventually hits the same wall: where do the files live?
Not the chat history, the actual artifacts the agent works on.
> The contracts your age
Thought I would start posting about interesting things happening at AWS. Not a bad day to start.🚀
Today at #WhatsNextWithAWS we announced a big step forward with @OpenAI on Amazon Bedrock:
1. OpenAI models now available
2. Codex for enterprise development
3. Amazon Bedrock Manag
Great example of what Foundry enables: durable, stateful agents that run across time boundaries, orchestrate tools and models, and close the loop with evaluation and improvement over long-running workflows. @jeffhollan https://t.co/v0yWTDIcn3
Exclude posts that are primarily personal testimonials, family health stories, individual patient outcomes, or unverified anecdotal claims about drug side effects or medical experiences without clinical evidence or healthcare delivery context.
Posts sharing personal health experiences, family medical histories, or individual case narratives presented without clinical validation or systems-level insights.
3 example posts
I never met my grandfather.
He died of pancreatic cancer when my father was just 19. Today, Yash Bindal, 33, father to 18-month-old Maya, faces the same fate.
@PopVaxIndia is using AI to make him a personalized generative medicine to extend his life.
https://t.co/O5VIXbmMGd
I have now received nine reports from people taking GLP-1 drugs who got the same side effect:
They no longer feel normal when they come off.
"I feel hangry again", "I started thinking about hunger and I hate it", "I have to go back to Adderall".
8/9 reports -> from women.
good news: it is a specific virus that has a good prognosis - 85%+ of full recovery.
thanks everyone who helped me; it is hard to research while immobilized, and I got some things wrong, which you helped clear up. im extremely thankful and hope i can give it back somehow
sadl
Created 2026-04-20 · Updated 2026-04-21
[retweet_or_shallow_commentary]
Learned · 7 rejections · Active
Exclude posts that are retweets of announcements, press releases, or brief agreeing commentary without adding original analysis, specificity, or healthcare systems context. Posts should demonstrate substantive original thinking about healthcare implications.
Posts that are primarily retweets, press release amplification, or surface-level commentary without original analysis or healthcare systems insight.
3 example posts
Today we launched a major update to the OpenAI Agents SDK to help developers build and deploy long-running, durable agents in production.
You can now build your own Codex-style agents using powerful primitives for modern agents - file and computer use, skills, memory and
Yesterday, @RandDWorld featured us twice.
@ProQR turns to Ginkgo’s autonomous lab to scale AI-enabled RNA editing discovery: https://t.co/DyENAd4VdM
Ginkgo’s CEO says biotech needs its Waymo moment: https://t.co/kW27eBjAmf
Want to learn more about our partnership with ProQR? h
.@openloophealth expands into sleep diagnostics.
Health tech company announces new partnership Happy Sleep—bringing at‑home sleep apnea testing to patients for the first time.
Watch to hear more about its big step toward better rest and smarter care⤵️
https://t.co/ATcNkYrrpK h
Created 2026-04-13 · Updated 2026-04-17
[ai_company_funding_and_metrics]
Learned · 7 rejections · Active
Exclude posts that primarily report AI company fundraising, ARR growth, valuation announcements, or business metrics (e.g., seed rounds, Series A closings, revenue multiples) unless the post explicitly analyzes how that capital or metric directly enables a specific healthcare outcome or application.
Posts reporting funding rounds, valuations, revenue milestones, or business metrics for AI/tech companies without healthcare application focus.
3 example posts
OpenAI dropping Agent Builder today is either going to make you rich or expose that you've been selling hot air.
I went deep analyzing what this actually means.
Here's the $4B opportunity hiding in plain sight:
The mainstream narrative: "Agent Builder democratizes AI! Anyone c
this is unbelievable!
Perplexity launched an AI agent called “Computer” and their revenue went straight vertical.
$305M to $450M ARR in one month. it lets corporate teams in finance, legal, ops run tasks in plain english.
the person your company just hired to “figure out AI”
Really enjoyed the deck @loganbartlett and team just shared on the state of Software, wanted to pull out a few things that caught my eye:
1. AI-native companies are growing faster AND more efficiently
The growth rates are really staggering. And they’re doing it with very few pe
Created 2026-04-12 · Updated 2026-04-13
[unverified_fringe_medical_claims]
Learned · 7 rejections · Active
Exclude posts promoting unproven treatments, off-label drug claims, vaccine conspiracy narratives, or medical assertions made without peer-reviewed evidence or a cited regulatory approval.
Unverified or fringe medical claims
3 example posts
They held an OPEN FLAME to a mouse’s body for 7 seconds.
Burned 20% of its body.
Gave it BPC-157. The skin GREW back STRONGER than the global medical standard could achieve. More collagen. Less scarring. Full tensile strength.
The untreated group never recovered.
Now look at
Holy: A woman needed daily blood transfusions for over a decade. Then doctors reprogrammed her own immune cells. Now all three of her autoimmune diseases are in complete remission.
For the first time ever. Lets dig into this: 🧵
(Sources in the comments) https://t.co/z3ASR49vtG
Finasteride will regrow your hair, but it'll also destroy your dick and crater your mental health.
Compared to non-users, finasteride users show markedly higher rates of depression, anxiety, and suicidal thoughts.
Peptides > finasteride for hair loss. https://t.co/q22PCbfEwo
Created 2026-04-09 · Updated 2026-04-11
[pharma_trial_data_without_systems_analysis]
Learned · 6 rejections · Active
Exclude posts that merely report drug trial outcomes, efficacy percentages, or clinical data (e.g., 'survodutide posts top-line Phase 3 results', 'sacubitril/valsartan reduces CV mortality 20%') unless the post connects these results to healthcare system challenges, pricing, access barriers, or operational impacts on care delivery.
Posts announcing pharmaceutical trial results or drug efficacy data without analyzing healthcare system impacts, access, or business model implications.
3 example posts
Among veterans with moderate to severe #ChronicPain in primary care, the whole health team intervention produced greater improvement in the Brief Pain Inventory interference scores at 12 months compared with cognitive behavioral therapy and usual care.
https://t.co/xtZY4sgyGt h
A JAHA study of 1,181,007 younger US veterans just dropped bad news about BP in your 30s.
This is not mainly an older-adult problem anymore. Nearly half met the bar for hypertension. The catch: about half of them didn't know it.
Here's what most people miss: https://t.co/mxT2oC
"At the Veterans Health Administration, we have the exquisite gift of a hospice unit, a place to care for veterans when time is short. We care for the sons of mothers who can no longer be here to care for them."
In #APieceofMyMind, a #palliative care #physician reflects on https
Created 2026-05-03 · Updated 2026-05-03
[ai_safety_vulnerability_incident_tangent]
Learned · 6 rejections · Active
Exclude posts that focus primarily on AI safety incidents, security breaches, or agent failures (e.g., Claude deleting databases, API key leaks, malicious agent skills) unless the post explicitly analyzes implications for healthcare operations or patient safety systems. General AI safety concerns without healthcare context do not qualify.
Posts about AI safety failures, security vulnerabilities, or agent mishaps that are tangential to healthcare applications.
3 example posts
Anthropic built something so powerful that they are only letting 50 organisations touch it.
It is called Claude Mythos.
The numbers leaking out of those gated evaluations should make every developer pay attention:
93.9% on SWE-bench Verified
94.6% on GPQA Diamond
Claude Opus
Hacking Mexico government with AI assistance. Attacker exfiltrated hundreds of millions of citizen records. 75% of the executed commands across the entire cyberattack campaign were generated by Claude. 40 minutes after Claude said "I'm not going to create that file" it was report
𝗔𝗜 𝗔𝗴𝗲𝗻𝘁𝘀 𝗔𝗿𝗲 𝗔𝗹𝗿𝗲𝗮𝗱𝘆 𝗕𝗲𝗶𝗻𝗴 𝗛𝗶𝗷𝗮𝗰𝗸𝗲𝗱
Researcher Aks Sharma at Manifold found 30 malicious skills on ClawHub turning AI agents into a crypto farming botnet: 10,000 downloads before anyone noticed.
⬩ The attack required zero exploits. Malicious https://t.co/v4oBXPPydu
Exclude posts that report FDA rulings, fraud investigations, compliance failures, or regulatory changes (e.g., court dismissals, audit findings, policy announcements) unless they analyze structural healthcare system implications, operational consequences for providers, or impact on clinical decision-making frameworks.
Posts reporting healthcare fraud, regulatory actions, policy changes, or scandals without systemic healthcare delivery or operational analysis.
3 example posts
$LLY $NVO $HIMS
🚨 BREAKING: COURT DISMISSES PART OF ELI LILLY LAWSUIT AGAINST EMPOWER PHARMACY
BOTH LILLY AND EMPOWER ISSUED STATEMENTS CELEBRATING THE RULING
Dismissed: Lanham Act false advertising + consumer harm claim
Allowed to proceed: unfair competition claims under h
Grace Science’s experience highlights a growing disconnect at FDA between talk and action on therapies for rare diseases. Despite efficacy signals in a monogenic ultrarare disease, FDA said the plausible mechanism framework is not available, and requires a new manufacturing
Ah yes, Optum, the company that did Tim Walz’s audit of the 14 high risk Medicaid programs consumed by Somali fraud. https://t.co/dNRnjI7wxB
Exclude posts that make sweeping health claims, start broad medical debates, or offer personal health opinions (e.g., eating disorder risks, medication efficacy assertions, treatment recommendations) without clinical evidence, nuance, or healthcare system context. Posts that are primarily commentary or debate-starting rather than substantive healthcare systems analysis should be excluded.
Posts making broad health or medical claims, starting debates, or offering opinions without evidence, nuance, or healthcare system analysis.
3 example posts
As I mentioned before, I am now sharing an example from GPT-5.5 Pro, also featured by OpenAI, that really left me stunned by what it is capable of in biomedical science. (full report on the website I created with Codex, link in the thread).
To push GPT-5.5 Pro hard, I uploaded a
"How can medicine save the most lives?"
Most people ask this rhetorically.
@Farzad_MD and Tom Frieden took it literally.
From banning smoking in NYC bars to cutting teen smoking in half in 5 years, this is what happens when you stop treating diseases and start preventing them. h
What superhuman vision can detect from the retinal photo, which human eyes cannot, is stunning. A new foundation AI model screening for diabetes hypertension, hyperlipidemia, gout, osteoporosis, and thyroid disease @NatureMedicine
https://t.co/GhKvUqz4Vy https://t.co/iKcXCbLceu
Created 2026-05-01 · Updated 2026-05-02
[broad_health_claim_or_debate_without_nuance]
Learned · 6 rejections · Active
Exclude posts that make definitive or contradictory claims about drug safety, efficacy, or side effects (e.g., 'GLP-1s cause eating disorders' vs. 'GLP-1s improve outcomes') where the post presents the claim without proportional discussion of evidence limitations, population differences, or clinical heterogeneity.
Posts making broad, sweeping health claims or engaging in clinical debates without nuanced evidence review or healthcare context.
3 example posts
What superhuman vision can detect from the retinal photo, which human eyes cannot, is stunning. A new foundation AI model screening for diabetes hypertension, hyperlipidemia, gout, osteoporosis, and thyroid disease @NatureMedicine
https://t.co/GhKvUqz4Vy https://t.co/iKcXCbLceu
Interpretability is built on a few core assumptions.
Two of our ICLR 2026 @iclr_conf papers suggest some of those assumptions are wrong (or at least highly incomplete).
1. Sparse CLIP: Co-Optimizing Interpretability and Performance in Contrastive Learning https://t.co/3JzHDqRj3
💬 Viewpoint: The widespread use of #AI for residency application screening in US graduate medical education programs introduces new legal and ethical concerns, particularly regarding disparate impact discrimination and unvalidated subgroup performance.
https://t.co/WBeGQmkBr1 h
Exclude posts that express enthusiasm about biotech or AI company announcements, partnerships, or milestones (e.g., Ginkgo-OpenAI collaboration, Profluent-Lilly deal) without providing critical analysis of market implications, adoption barriers, competitive dynamics, or healthcare system integration challenges. Founder/CEO celebration posts without systems perspective should be rejected.
Posts celebrating biotech or AI company achievements with hype rather than substantive analysis
3 example posts
Earlier this year we paired our autonomous lab with OpenAI's GPT-5 in a closed-loop experiment: GPT-5 designed cell-free protein synthesis reactions, our RACs ran them, and the model iterated on the results. Over 36,000 experiments and six cycles later, it landed on a reaction ht
Not something you'd see everyday—changing the alphabet of life.
All of life organisms are are built from 20 amino acids. Now genAI is enabling life to be built with 19 amino acids, making isoleucine dispensable. @ScienceMagazine
https://t.co/7CBn0Xhuxs https://t.co/tkxtCrFx9Y
AI can now design antibodies that bind with atomic precision, but not ones that cells can produce. Our preprint closes this gap, delivering a structural principle, an AI-guided rescue pipeline, and adalimumab variants with 20-100x in vivo potency.
https://t.co/GvfgHA5EcU
Created 2026-04-30 · Updated 2026-05-03
[biotech_founder_enthusiasm_without_validation]
Learned · 6 rejections · Active
Exclude posts from biotech founders or company accounts promoting internal experiments, prototype results, or autonomous lab announcements (e.g., 'we paired our lab with GPT-5') unless the findings have been published, peer-reviewed, or demonstrate real production deployment with measurable outcomes.
Posts from biotech or startup founders hyping experimental capabilities or closed-loop experiments without peer review, publication, or deployed evidence.
3 example posts
Earlier this year we paired our autonomous lab with OpenAI's GPT-5 in a closed-loop experiment: GPT-5 designed cell-free protein synthesis reactions, our RACs ran them, and the model iterated on the results. Over 36,000 experiments and six cycles later, it landed on a reaction ht
Today we’re giving an update on ramping F.03 production at BotQ
In the last 120 days, Figure scaled manufacturing 24x - from 1 robot/day to 1 robot/hour
We will manufacture 55 humanoid robots this week https://t.co/Am5Kn53mVE
Nothing beats running @ginkgo cloud lab for happy customers!
Not going to stop until it’s as easy to start a biotech startup on GCL as it is to start a software startup on AWS. https://t.co/GDEyT1pI7F
Exclude posts that report drug trial outcomes, efficacy numbers, or Phase 3 data (e.g., weight loss percentages, mortality reductions) unless the post connects these results to healthcare delivery challenges, prescribing patterns, reimbursement implications, or system-level friction. Raw trial data announcements lack healthcare tech substance.
Posts announcing pharmaceutical trial results or drug efficacy data without healthcare delivery, pricing, or system-level analysis.
3 example posts
Experimentally Validated Deep Learning Control of Protein Aggregation
1. The study introduces AggreProt, a deep neural network that predicts residue-level aggregation-prone regions (APRs) directly from protein sequence, and then uses those predictions to design mutations that ht
#News for #investors and #media: Today we are announcing that our potential first-in-class treatment for chronic hepatitis B has been accepted for regulatory review by the US FDA.
It has also received Breakthrough Therapy designation.
🔗 Learn more: https://t.co/AnUodGmljS htt
🚨 Survodutide, a weekly subcutaneous dual GLP-1/glucagon agonist, posts new top-line Phase 3 results today
📊 SYNCHRONIZE-1: Adults with overweight/obesity randomized to survodutide vs. placebo for 76 weeks
⚖️ 16.6% weight loss (efficacy estimand) vs. 3.2% in placebo
85.1% http
Exclude posts that report healthcare fraud, FDA/CMS enforcement, pricing disputes, or regulatory criticism (e.g., pricing negotiations, Medicaid audits, prior auth complaints) without explaining how the incident reveals systemic failures in healthcare operations, pricing mechanisms, or regulatory oversight.
Posts describing fraud, regulatory action, pricing disputes, or policy complaints without analyzing systemic healthcare implications or root causes.
3 example posts
$LLY $NVO $HIMS
🚨 BREAKING: COURT DISMISSES PART OF ELI LILLY LAWSUIT AGAINST EMPOWER PHARMACY
BOTH LILLY AND EMPOWER ISSUED STATEMENTS CELEBRATING THE RULING
Dismissed: Lanham Act false advertising + consumer harm claim
Allowed to proceed: unfair competition claims under h
Grace Science’s experience highlights a growing disconnect at FDA between talk and action on therapies for rare diseases. Despite efficacy signals in a monogenic ultrarare disease, FDA said the plausible mechanism framework is not available, and requires a new manufacturing
"At the Veterans Health Administration, we have the exquisite gift of a hospice unit, a place to care for veterans when time is short. We care for the sons of mothers who can no longer be here to care for them."
In #APieceofMyMind, a #palliative care #physician reflects on https
Created 2026-04-30 · Updated 2026-05-03
[ai_safety_vulnerability_incident_not_healthcare]
Learned · 6 rejections · Active
Exclude posts that report AI safety incidents, security vulnerabilities, or hacking examples (e.g., Claude deleting databases, malicious AI agent skills, API key leaks) without demonstrating direct healthcare system impact or learning relevant to healthcare deployment. The incident must show healthcare-specific consequences, not just generic AI risk.
Posts about AI safety failures, security breaches, or vulnerability incidents that lack healthcare application context.
3 example posts
Anthropic built something so powerful that they are only letting 50 organisations touch it.
It is called Claude Mythos.
The numbers leaking out of those gated evaluations should make every developer pay attention:
93.9% on SWE-bench Verified
94.6% on GPQA Diamond
Claude Opus
Hacking Mexico government with AI assistance. Attacker exfiltrated hundreds of millions of citizen records. 75% of the executed commands across the entire cyberattack campaign were generated by Claude. 40 minutes after Claude said "I'm not going to create that file" it was report
𝗔𝗜 𝗔𝗴𝗲𝗻𝘁𝘀 𝗔𝗿𝗲 𝗔𝗹𝗿𝗲𝗮𝗱𝘆 𝗕𝗲𝗶𝗻𝗴 𝗛𝗶𝗷𝗮𝗰𝗸𝗲𝗱
Researcher Aks Sharma at Manifold found 30 malicious skills on ClawHub turning AI agents into a crypto farming botnet: 10,000 downloads before anyone noticed.
⬩ The attack required zero exploits. Malicious https://t.co/v4oBXPPydu
Exclude posts that describe a single clinical case, personal patient anecdote, individual diagnostic decision, or one clinician's workflow observation (e.g., patient billing confusion, sepsis alert trigger, differential diagnosis approach) without connecting to healthcare system design, policy, or scalable implications.
Posts about individual clinical cases, personal healthcare experiences, or single-patient anecdotes without systemic healthcare implications.
3 example posts
Last week I randomly got a $9,000 bill for a hospital visit from last year. I called them up and they said my insurance decided not to cover it. Why? Because.
.@Aetna can just decide they’re not interested in covering something and then you’re left with an inflated bill to pay h
Started with standard ChatGPT for clinicians asking for a differential for a GI bleed patient. Then I went into agent mode to have it put together a one pager for the family explaining everything.
Of course, this is not a real patient. https://t.co/PEUeCqizT1
low grade fever, mildly tachycardic, weakness, nothing focal, no alarm signs/symptoms
epic sepsis alert triggered
vanc/pip-tazo given, lactate checked
flu+
sepsis metric met
care worse
lather, rinse, repeat
Metric based "QI" does net harm
Exclude posts that treat workforce disruption, labor market shifts, or job automation as macro-economic commentary (e.g., '57% of US work hours could be automated', 'AI agents reduce need for human seats') without analyzing how these changes specifically impact healthcare staffing, clinician workflows, nursing shortages, or healthcare labor markets.
Posts about workforce disruption, labor market impacts, or job displacement from AI or automation without healthcare-specific analysis or context.
3 example posts
Humanoid robots are moving from Silicon Valley novelty to viable business model—powered by AI and global supply chains, especially in China. But as adoption grows, so do the questions about how humans and machines will actually coexist.
More on Primer, streaming Wednesdays http
Stanford and Harvard published the most unsettling AI paper of the year.
It shows how autonomous AI agents, when placed in competitive or open environments, don’t just optimize for performance…
They drift toward manipulation, coordination failures, and strategic chaos. https://
[New] from a16z @speedrun:
Come for the Agent, Stay for the Network
there's a quiet pattern hiding inside the most defensible vertical AI startups right now:
the agent is the wedge
the network is the moat.
here's what I mean:
an HVAC tech needs a part today.
>>Traditionally:
Exclude posts that are primarily announcements or promotional content from AI companies (NVIDIA, Microsoft, OpenAI, Anthropic) about new tools, SDKs, sandboxes, or learning resources—unless the post provides substantive analysis of healthcare adoption barriers, clinical validation, or operational integration challenges.
Posts announcing AI company product launches, software features, or capability demos without validation of healthcare application or business model.
3 example posts
If you're a student, professor, or researcher—this one's for you.
We’re hosting a series of virtual learnings for you to get hands-on experience with the NVIDIA NemoClaw and OpenShell software stack. You’ll get practical guidance on integrating agents with academic datasets and
We created OpenShell to make AI agents safe for enterprises.
Built in open source so any company can adopt and trust it, this secure sandbox controls what agents can access, share, and send.
Our CEO, Jensen, explains 👇 https://t.co/7EiIsxr0CG
Introducing Mesa: the most powerful filesystem ever built, designed specifically for enterprise AI agents.
Every team building agents eventually hits the same wall: where do the files live?
Not the chat history, the actual artifacts the agent works on.
> The contracts your age
Created 2026-04-28 · Updated 2026-05-03
[tangential_ai_cybersecurity_not_healthcare]
Learned · 6 rejections · Active
Exclude posts that discuss AI model vulnerabilities, cyber takeover capabilities, or AI safety concerns in generic or non-healthcare contexts (e.g., simulated corporate networks, general frontier model capabilities), even when loosely labeled as healthcare-relevant.
Posts about AI, cybersecurity, or model vulnerabilities framed loosely as healthcare-relevant but applied to non-healthcare domains.
1 example post
This is by far the most important result of the entire GPT-5.5 release:
In a cyber evaluation GPT-5.5 was able to take over a simulated corporate network in 1/10 trials with a budget of 100M tokens.
Previously, the only model that was able to solve this task was Claude Mythos,
Exclude posts that make broad health claims, debunk popular beliefs, or argue against medical narratives without citing peer-reviewed evidence, clinical trials, or substantive expert analysis.
Posts making sweeping health claims or counter-claims without clinical evidence or nuanced discussion
3 example posts
The only problem with the GLP-1 heart muscle loss narrative is...
... that it's just a narrative.
GLP-1s have reliably improved cardiovascular outcomes in trials, to the point that some research suggests benefit may even be independent of (not reliant on) weight loss.
"Your body can only use 25-30g of protein per meal. Anything above that gets wasted."
This claim has been repeated in fitness nutrition for over a decade, and it was built on studies that measured the right thing over the wrong timescale.
Moore 2009 gave six young men 0, 5, ht
Impressive study and even with the limitations, is an important addition to the Rapamycin literature
In my opinion, the only plausible off-label use of Rapamycin currently should be in ApoE4 carriers as not many options are available). That would be an important trial we are
Exclude posts about macro-level topics (tariffs, grid infrastructure, compute scaling, energy policy, fintech disruption) where healthcare is mentioned tangentially or as one example among many, without healthcare-specific policy implications or system analysis.
Posts about broad economic, infrastructure, or policy trends (tariffs, power grids, compute scaling) with only loose healthcare framing.
3 example posts
Using 180 years of US data to show that tariff increases reduce imports, output, and manufacturing, with effects operating through both supply and demand channels, from Tamar den Besten, Regis Barnichon, @drkaenzig, and Aayush Singh https://t.co/R1kdtfi3fO https://t.co/wvXj31qs9e
States are rushing “affordability” bills, but most just mask high prices with rebates, mandates, or price caps. @MrRBourne & Nathan Miller argue durable relief means rolling back cost-raising rules and expanding supply.
https://t.co/WG5egT1NfL
Software is not a moat
Over the last 15+ years, nearly every innovation @EvanSpiegel and his team shipped got copied. Stories. AR glasses. Swipe-based navigation. The camera-first interface.
And yet @Snapchat is the only independent consumer social app that has lasted. Nearly 1
Exclude posts that focus on GLP-1/peptide drug pricing dynamics, individual weight loss outcomes, market competition between brands (Lilly vs. Novo), or personal experiences with medications without analyzing healthcare access, insurance coverage, reimbursement policy, or systemic barriers. The post must address healthcare system implications, not just market or personal narratives.
Posts about GLP-1 or peptide drug pricing, market competition, or individual anecdotes without healthcare policy or system implications.
3 example posts
Last week I randomly got a $9,000 bill for a hospital visit from last year. I called them up and they said my insurance decided not to cover it. Why? Because.
.@Aetna can just decide they’re not interested in covering something and then you’re left with an inflated bill to pay h
This is just two GLP-1s, one peptide, one use case
what happens when off-label prescribing ramps up
what happens when retatrutide hits the market
what happens when other peptides become compoundable
chapter one
$LLY v $NVO
Foundayo (orforglipron) scripts off to a slow start both in raw numbers and in comparison to Oral Wegovy’s launch at same time point.
Overall statistics show Oral Wegovy script growth is robust, and thus far undeterred, by Foundayo market entry.
🎩 @bloomberg https
Created 2026-04-24 · Updated 2026-05-01
[unvalidated_speculative_medical_interventions]
Learned · 6 rejections · Active
Exclude posts that express personal advocacy for unvalidated medical interventions (rapamycin, ibogaine, peptide-forward telehealth concierge services) without acknowledging the lack of clinical evidence, the intervention's regulatory status, or its potential harms.
Posts enthusiastically promoting experimental, off-label, or unproven medical treatments and drugs as viable healthcare solutions.
3 example posts
Impressive study and even with the limitations, is an important addition to the Rapamycin literature
In my opinion, the only plausible off-label use of Rapamycin currently should be in ApoE4 carriers as not many options are available). That would be an important trial we are
We’re exploring the idea of a peptide-forward telehealth concierge medical service. Medicine 3.0 focused on full optimization- peptides, hormones, diet/exercise. MD is a former college varsity rower, fellowship at Yale etc.
Would you be interested in participating in a pilot
I am a strong believer in ibogaine, which is one of the reasons why @ataibeckley acquired the residual interest in its ibogaine program in Q4 2023 and now owns it 100%.
I’m very encouraged to see the administration taking a positive public stance on this important topic.
Created 2026-04-24 · Updated 2026-04-26
[truncated_or_incomplete_posts_low_substance]
Learned · 6 rejections · Active
Exclude posts that are truncated (ending with '...' or mid-word), incomplete, or so brief they lack substantive content even if the topic is healthcare-adjacent.
Posts that are cut off mid-sentence, incomplete, or lack sufficient substance to evaluate.
3 example posts
Today we’re giving an update on ramping F.03 production at BotQ
In the last 120 days, Figure scaled manufacturing 24x - from 1 robot/day to 1 robot/hour
We will manufacture 55 humanoid robots this week https://t.co/Am5Kn53mVE
Thought I would start posting about interesting things happening at AWS. Not a bad day to start.🚀
Today at #WhatsNextWithAWS we announced a big step forward with @OpenAI on Amazon Bedrock:
1. OpenAI models now available
2. Codex for enterprise development
3. Amazon Bedrock Manag
#News for #investors and #media: Today we are announcing that our potential first-in-class treatment for chronic hepatitis B has been accepted for regulatory review by the US FDA.
It has also received Breakthrough Therapy designation.
🔗 Learn more: https://t.co/AnUodGmljS htt
Exclude posts that report healthcare fraud, billing scandals, or operational failures (e.g., hospital billing issues, hospice fraud, Change Healthcare incident) as breaking news or outrage without analyzing systemic root causes, regulatory gaps, or healthcare market structure implications. Posts must provide systems-level insight.
Posts reporting healthcare fraud, billing scandals, or operational failures without systemic analysis of root causes or policy implications.
3 example posts
"At the Veterans Health Administration, we have the exquisite gift of a hospice unit, a place to care for veterans when time is short. We care for the sons of mothers who can no longer be here to care for them."
In #APieceofMyMind, a #palliative care #physician reflects on https
Ah yes, Optum, the company that did Tim Walz’s audit of the 14 high risk Medicaid programs consumed by Somali fraud. https://t.co/dNRnjI7wxB
Last week I randomly got a $9,000 bill for a hospital visit from last year. I called them up and they said my insurance decided not to cover it. Why? Because.
.@Aetna can just decide they’re not interested in covering something and then you’re left with an inflated bill to pay h
Exclude posts that recount a single clinical case, patient interaction, or clinical workflow moment (sepsis alert triggered, patient examination delay) without connecting to healthcare system challenges, operational failures, or scalable insights.
Posts describing clinical anecdotes, individual patient cases, or medical observations without broader healthcare delivery or systems insights.
3 example posts
Started with standard ChatGPT for clinicians asking for a differential for a GI bleed patient. Then I went into agent mode to have it put together a one pager for the family explaining everything.
Of course, this is not a real patient. https://t.co/PEUeCqizT1
low grade fever, mildly tachycardic, weakness, nothing focal, no alarm signs/symptoms
epic sepsis alert triggered
vanc/pip-tazo given, lactate checked
flu+
sepsis metric met
care worse
lather, rinse, repeat
Metric based "QI" does net harm
I sat with a patient today who first noticed a change in October. It’s April now. In all those months of appointments and follow-ups, her breast had only truly been looked at twice. That stayed with me.
If something has changed with your body — especially something under your ht
Exclude posts that discuss AI automation, workforce reduction, SaaS seat consolidation, or labor market disruption in generic terms or non-healthcare contexts without analyzing impact on clinical workflows, healthcare staffing models, or patient care delivery.
Posts about AI-driven job displacement or business model shifts without healthcare workforce or operational context
3 example posts
AI could, in theory, automate 57% of US work hours. Yet most human skills remain relevant.
The future of work is not human or machine – but a partnership between people, agents, and robots.
Read our latest research on skill partnerships in the age of AI: https://t.co/h1K56uPqPo
Is the business model for traditional software companies in permanent decline due to AI Agents not needing seats?
2 examples:
Re: @salesforce, we’ve reduced our seats from 10+ to 2 human seats and 1 API seat. And yet, we now pay $22,000 a year, 83% up from $12,000. Why? Our
Why the biggest fintech players are in for a shock.
"The shift is from human UX to agent UX.
In the past, you won with dashboards, design and user experience.
Now, the buyer is an AI agent, and it only cares about APIs, performance and integration.
That breaks traditional htt
Created 2026-04-20 · Updated 2026-04-27
[ai_company_product_metrics_not_healthcare]
Learned · 6 rejections · Active
Exclude posts that announce AI company product launches (e.g., new models, APIs, workshops, cloud labs) or boast usage metrics and customer adoption when the post does not demonstrate meaningful healthcare delivery or clinical application. Hype or aspirational framing (e.g., 'as easy as starting a startup on AWS') does not count as healthcare substance.
Posts about AI company product launches, user metrics, or business announcements without healthcare-specific application or analysis.
3 example posts
Nothing beats running @ginkgo cloud lab for happy customers!
Not going to stop until it’s as easy to start a biotech startup on GCL as it is to start a software startup on AWS. https://t.co/GDEyT1pI7F
Almost all of my positions selling some kind of AI/agentic SaaS tool have (either by foresight or customer demand) pivoted to some kind of business model where they “forward deploy” to the customer first and then sell the system they create back to them as SaaS. 99% of “normie” b
Started with standard ChatGPT for clinicians asking for a differential for a GI bleed patient. Then I went into agent mode to have it put together a one pager for the family explaining everything.
Of course, this is not a real patient. https://t.co/PEUeCqizT1
Exclude posts that discuss AI's impact on employment, job displacement, wage trends, or labor market transformation as broad macroeconomic or sociological observations, unless the post identifies specific healthcare job roles, workflows, or institutional staffing models being disrupted.
Posts about AI's impact on jobs, labor markets, or workforce displacement treated as macro trends without healthcare-specific operational implications.
3 example posts
🚨 BREAKING: Anthropic new research finds that AI’s impact on jobs is primarily at the task level.
Rather than eliminating jobs, it is progressively taking over the functions that define them and gradually absorbing the core work in many jobs/roles.
The paper, “Labor Market http
Using 180 years of US data to show that tariff increases reduce imports, output, and manufacturing, with effects operating through both supply and demand channels, from Tamar den Besten, Regis Barnichon, @drkaenzig, and Aayush Singh https://t.co/R1kdtfi3fO https://t.co/wvXj31qs9e
AI could, in theory, automate 57% of US work hours. Yet most human skills remain relevant.
The future of work is not human or machine – but a partnership between people, agents, and robots.
Read our latest research on skill partnerships in the age of AI: https://t.co/h1K56uPqPo
Exclude posts that focus primarily on fraud case details, arrests, sentencing, or enforcement headlines without substantive analysis of underlying healthcare system vulnerabilities, operational failures, or policy changes needed to prevent similar fraud.
Posts reporting healthcare fraud cases, enforcement actions, or financial misconduct as crime/enforcement news without healthcare system analysis.
3 example posts
In 2021, Javaid Purwaiz, an OBGYN, was sentenced to 59 years in prison for one of the most severe cases of healthcare fraud in the country’s history.
Once you go through court records, you realize the fraud that gave him a life sentence is the same fraud used by gender doctors.
$340 MILLION in fraud targeted — in 1 WEEK.
That’s what happens when enforcement gets serious.
Luxury cars. Fake claims. Stolen benefits meant for Americans in need — now turning into prison sentences.
The hammer is dropping. We’re just getting started.https://t.co/qP0cOIypE4
🚨 As you pay your taxes this week, LOOK at what the fraudsters allegedly did with your money❗️
🔹Cosmetic procedures
🔹Breast implants
🔹Tweaks to arms and thighs
🔹Tummy tuck
🔹Purebred dogs
🔹Flights to Hawaii
🔹Flights to Disneyland
🔹Multimillion-dollar home
🔹Range https://t.c
Exclude posts that present clinical trial data, drug efficacy results, or research findings as standalone observations without connecting to healthcare system implications such as coverage decisions, clinical adoption barriers, workflow integration, or care delivery outcomes.
Posts reporting clinical trial results, drug efficacy data, or research findings in isolation without analysis of healthcare implementation, access barriers, or systemic adoption.
3 example posts
⚠️ Sacubitril/Valsartan works. So why aren’t we using it?
The evidence is undeniable:
↓ CV mortality: 20% (RCT) / 10–38% (RWE)
↓ HF hospitalization: 21% (RCT) / 10–16% (RWE)
↓ All-cause mortality: 15% (RCT) / 10–25% (RWE)
Plus: reverse remodeling, less MR, better QoL & https:
AI can now design antibodies that bind with atomic precision, but not ones that cells can produce. Our preprint closes this gap, delivering a structural principle, an AI-guided rescue pipeline, and adalimumab variants with 20-100x in vivo potency.
https://t.co/GvfgHA5EcU
Here is a video of me entering my office tomorrow knowing that $NTLA is about to present the first-ever Phase 3 data of an In Vivo (!) CRISPR Gene Editing Program. Somehow - and after @adamfeuerstein’s🧵👇- I have a feeling it won’t be the only BioTech and CRISPR news…🤔 $XBI https:
Exclude posts that discuss AI infrastructure, compute stacks, cloud platforms, manufacturing scaling, or economic/tariff policy that mention healthcare tangentially or use healthcare as a generic example without demonstrating specific healthcare operational, clinical, or system-level impact.
Posts about AI infrastructure, compute, tariffs, or macroeconomic policy with loose healthcare framing but no healthcare-specific application or impact analysis.
3 example posts
Today we’re giving an update on ramping F.03 production at BotQ
In the last 120 days, Figure scaled manufacturing 24x - from 1 robot/day to 1 robot/hour
We will manufacture 55 humanoid robots this week https://t.co/Am5Kn53mVE
📈 NVIDIA tops AI leaderboards and benchmarks with open models driven by extreme co-design across compute, networking, memory, storage, and software.
This includes models for biology, AI physics, agentic AI, physical AI, robotics, and autonomous vehicles.
By being vertically htt
Using 180 years of US data to show that tariff increases reduce imports, output, and manufacturing, with effects operating through both supply and demand channels, from Tamar den Besten, Regis Barnichon, @drkaenzig, and Aayush Singh https://t.co/R1kdtfi3fO https://t.co/wvXj31qs9e
Exclude posts from non-healthcare domains (Revolut banking models, ClickUp productivity tools, tariff economics, software moats at Snapchat) that mention healthcare tangentially or are tagged with healthcare keywords but do not address healthcare-specific systems, providers, patients, or health outcomes.
Posts about non-healthcare domains (banking, compliance tools, software tools, macroeconomics) framed with loose healthcare relevance.
3 example posts
Revolut just moved the IP of banking into a model.
Trained on 24 billion banking events in 111 countries.
One foundation model replacing six separate ML systems.
Credit scoring: +130%
Fraud recall: +65%
Marketing engagement: +79%
The model is the new moat.
📈 NVIDIA tops AI leaderboards and benchmarks with open models driven by extreme co-design across compute, networking, memory, storage, and software.
This includes models for biology, AI physics, agentic AI, physical AI, robotics, and autonomous vehicles.
By being vertically htt
Using 180 years of US data to show that tariff increases reduce imports, output, and manufacturing, with effects operating through both supply and demand channels, from Tamar den Besten, Regis Barnichon, @drkaenzig, and Aayush Singh https://t.co/R1kdtfi3fO https://t.co/wvXj31qs9e
Exclude posts that frame healthcare issues primarily as political outrage, corporate scandal, or regulatory overreach without providing substantive analysis of how the issue affects patient care, provider workflows, or healthcare system efficiency. Posts must offer systems-level insight, not just moral indignation.
Posts expressing political outrage about healthcare policy, regulation, or corporate behavior without substantive analysis of healthcare system impact
3 example posts
@swyx > get government sponsored monopoly
> prevent patients from getting their data
> make data non transferable
> contribute nothing to open source software
> refuse to collaborate with other software vendors and kill the ecosystem
> appeal to administrators and be hated by p
🚨 Surgeon @EithanHaim reveals shocking medical fraud scheme: Texas doctors allegedly changing teens' medical records and using fake billing codes to secretly continue banned gender treatments—scamming insurance and taxpayers. He's speaking at a #DetransAwarenessDay @genspect foru
I am not so partisan that I can't appreciate Congresswoman Alexandria Ocasio-Cortez taking down the CEO of CVS on behalf of all Americans.
Healthcare is a universal issue, so pay attention to what's being sold to us.
Translation: "Our perfect patient is insured by Aetna, CVS. T
Created 2026-04-14 · Updated 2026-04-16
[academic_or_research_observation_only]
Learned · 6 rejections · Active
Exclude posts that report clinical trial results, research breakthroughs, or scientific findings as standalone observations without analyzing their healthcare system implications, market adoption barriers, clinical workflow integration, or impact on patient access.
Posts about clinical research findings, trial results, or scientific discoveries presented as isolated facts without healthcare system or business context.
3 example posts
Today the first results of the very first phase 3 study of a pan-KRAS-inhibitor in metastatic pancreatic cancer dropped, which might apply to > 90% of all pancreatic cancer patients with a KRAS-mutation!
Median overall survival of 13.2 months versus 6.7 months with chemo in
NIH-funded researchers have uncovered a key reason why immunotherapy has largely failed in pancreatic cancer — and identified a promising strategy to overcome that resistance.
Read on to learn more about this discovery: https://t.co/BoCHpLxp5g https://t.co/3DXv4E9DOE
Across large, multicohort datasets, CardioNets achieved superior performance to ECG-only baselines and diagnostic accuracy comparable to CMR-based models, supporting its potential to expand access to advanced cardiovascular assessment. Full study results: https://t.co/VP2iOBLUev
Created 2026-04-12 · Updated 2026-04-14
[political_regulatory_outrage_without_analysis]
Learned · 6 rejections · Active
Exclude posts that frame healthcare policy, regulatory action, or government spending primarily as political theater, incompetence, or malfeasance, without providing substantive analysis of the actual healthcare system impact, clinical implications, or structural problem being addressed.
Posts that use healthcare policy or regulatory news as a vehicle for political attack or outrage without substantive analysis of healthcare implications.
3 example posts
Indian has 0.7 active physicians per 1,000 people, America has 3.0 active physicians per 1,000 people.
You are a liar. You are not motivated by increasing patient access to care. You just want to practice in America because you can make more money.
The attack by the Trump Administration on blue states for alleged Medicaid "fraud" is using such garbage math to make up numbers that even Dr. Oz had to admit it.
⬇️⬇️⬇️
https://t.co/V0dZfx0OdK
🚨 BREAKING: It was just revealed that the blue state of Hawaii got MILLIONS of federal dollars to fight Medicare and Medicaid fraud — and secured **ZERO** fraud convictions in 5 years
Insane.
ANDREW FERGUSON, White House fraud task force vice chair: "Millions of millions of htt
Created 2026-04-12 · Updated 2026-04-14
[ai_company_business_metrics]
Learned · 6 rejections · Active
Exclude posts that primarily report on AI company business metrics (revenue, ARR, valuation, hiring, stock price, funding rounds) even if the company has healthcare products. The post must discuss AI applied to solve healthcare problems, not AI company financial performance.
Posts focused on AI company revenue, valuations, hiring, and business performance rather than healthcare applications
3 example posts
Anthropic's CEO:
“coding is going away first, then all of software engineering."
Now, Anthropic looks to hire 454 engineers at $320k–$405k.
coding isn’t vanishing it’s becoming leverage for the few who can build, review, and ship at a completely different scale. https://t.co
Boris Cherny created Claude Code. It hit $2.5 billion in annualized revenue in 9 months. Fastest B2B product ramp in history. Faster than ChatGPT, Slack, or Snowflake ever reached $1 billion.
Now he says coding is “solved” and IDEs will be dead by end of year. https://t.co/HI7M
🚨MAJOR INTERVIEW: Jensen Huang joins the Besties!
The @nvidia CEO joins to discuss:
-- Nvidia's future, roadmap to $1T revenue
-- Physical AI's $50T market
-- Rise of the agent, OpenClaw's inflection moment
-- Inference explosion, Groq deal
-- AI PR Crisis, Anthropic's comms m
Created 2026-04-11 · Updated 2026-04-15
[ai_company_product_launch_unvalidated]
Learned · 5 rejections · Active
Exclude posts that announce new AI products, tools, or capabilities from tech companies (NVIDIA, Microsoft, Anthropic, etc.) based on company statements, demos, or research papers, without demonstrated healthcare use cases, clinical validation, or real-world deployment in healthcare settings.
Posts announcing AI company product launches, demos, or capability announcements without evidence of healthcare adoption or clinical validation.
3 example posts
Our new preprint is a significant milestone for us
We built "HealthFormer" by training on our deeply phenotyped cohort from the Human Phenotype Project data. Healthformer is a multimodal generative transformer model that tokenizes each participant's physiological trajectory http
If you're a student, professor, or researcher—this one's for you.
We’re hosting a series of virtual learnings for you to get hands-on experience with the NVIDIA NemoClaw and OpenShell software stack. You’ll get practical guidance on integrating agents with academic datasets and
We created OpenShell to make AI agents safe for enterprises.
Built in open source so any company can adopt and trust it, this secure sandbox controls what agents can access, share, and send.
Our CEO, Jensen, explains 👇 https://t.co/7EiIsxr0CG
Exclude posts that discuss macro trends (tariffs, labor disruption, software moats, business model shifts, AWS infrastructure) that happen to mention healthcare companies or use healthcare as a surface example, but lack specific healthcare operational or policy analysis.
Posts about broad economic, labor market, or infrastructure trends that use healthcare as loose framing without systems analysis
3 example posts
Revolut just moved the IP of banking into a model.
Trained on 24 billion banking events in 111 countries.
One foundation model replacing six separate ML systems.
Credit scoring: +130%
Fraud recall: +65%
Marketing engagement: +79%
The model is the new moat.
Thought I would start posting about interesting things happening at AWS. Not a bad day to start.🚀
Today at #WhatsNextWithAWS we announced a big step forward with @OpenAI on Amazon Bedrock:
1. OpenAI models now available
2. Codex for enterprise development
3. Amazon Bedrock Manag
Using 180 years of US data to show that tariff increases reduce imports, output, and manufacturing, with effects operating through both supply and demand channels, from Tamar den Besten, Regis Barnichon, @drkaenzig, and Aayush Singh https://t.co/R1kdtfi3fO https://t.co/wvXj31qs9e
Exclude posts that feature non-healthcare companies (e.g., Snapchat, Revolut, ClickUp, Figure robots, Microsoft product launches) and describe their capabilities or business milestones using healthcare as a secondary or hypothetical use case, rather than analyzing validated healthcare applications or system-level healthcare implications.
Posts about non-healthcare companies, technologies, or domains that are tangentially connected to healthcare through minimal or speculative framing.
3 example posts
The problem with Reality Labs is not ambition. It is time. AI turned into revenue faster because it improves existing workflows. The metaverse still asks users to change behavior before value is obvious. https://t.co/lopZiUwGU5
If you're a student, professor, or researcher—this one's for you.
We’re hosting a series of virtual learnings for you to get hands-on experience with the NVIDIA NemoClaw and OpenShell software stack. You’ll get practical guidance on integrating agents with academic datasets and
We created OpenShell to make AI agents safe for enterprises.
Built in open source so any company can adopt and trust it, this secure sandbox controls what agents can access, share, and send.
Our CEO, Jensen, explains 👇 https://t.co/7EiIsxr0CG
Created 2026-05-02 · Updated 2026-05-03
[ai_agent_security_incident_hype]
Learned · 5 rejections · Active
Exclude posts that report isolated AI agent security incidents (e.g., Claude deleting a database, compromised API keys, malicious skills) as breaking news or hype without demonstrating systemic healthcare implications, clinical risk, or operational impact to healthcare delivery. Focus on healthcare systems risk, not generic cybersecurity theater.
Posts sensationalizing AI agent security vulnerabilities without healthcare system context or real-world impact validation.
3 example posts
Hacking Mexico government with AI assistance. Attacker exfiltrated hundreds of millions of citizen records. 75% of the executed commands across the entire cyberattack campaign were generated by Claude. 40 minutes after Claude said "I'm not going to create that file" it was report
𝗔𝗜 𝗔𝗴𝗲𝗻𝘁𝘀 𝗔𝗿𝗲 𝗔𝗹𝗿𝗲𝗮𝗱𝘆 𝗕𝗲𝗶𝗻𝗴 𝗛𝗶𝗷𝗮𝗰𝗸𝗲𝗱
Researcher Aks Sharma at Manifold found 30 malicious skills on ClawHub turning AI agents into a crypto farming botnet: 10,000 downloads before anyone noticed.
⬩ The attack required zero exploits. Malicious https://t.co/v4oBXPPydu
🚨 Claude broke its own safety rules and deleted an entire company's database in 9 seconds.
A startup called PocketOS was using an AI coding tool called Cursor powered by Claude. The AI was given a simple task in a test environment. It ran into an error and instead of stopping an
Exclude posts from biotech founders, startup CEOs, or research labs (Ginkgo, Figure Robotics, Profluent, Goodfire) celebrating technical milestones, closed-loop experiments, or partnership announcements unless the post demonstrates clinical validation, healthcare system integration, or measurable healthcare outcome data.
Posts from company founders or biotech executives celebrating technical achievements or partnerships without evidence of clinical impact, healthcare adoption, or patient outcomes.
3 example posts
Earlier this year we paired our autonomous lab with OpenAI's GPT-5 in a closed-loop experiment: GPT-5 designed cell-free protein synthesis reactions, our RACs ran them, and the model iterated on the results. Over 36,000 experiments and six cycles later, it landed on a reaction ht
Today we’re giving an update on ramping F.03 production at BotQ
In the last 120 days, Figure scaled manufacturing 24x - from 1 robot/day to 1 robot/hour
We will manufacture 55 humanoid robots this week https://t.co/Am5Kn53mVE
AI can now design antibodies that bind with atomic precision, but not ones that cells can produce. Our preprint closes this gap, delivering a structural principle, an AI-guided rescue pipeline, and adalimumab variants with 20-100x in vivo potency.
https://t.co/GvfgHA5EcU
Exclude posts that focus on market pricing, pricing competition, generic launch impacts, or competitive pricing strategy for pharmaceuticals or treatments unless the post analyzes broader healthcare system implications such as insurance coverage, clinical deployment barriers, or patient access at scale. Posts that treat pricing as a market/financial story rather than a healthcare access story should be excluded.
Posts analyzing drug or peptide market pricing, competition, or pricing dynamics without healthcare delivery system implications.
3 example posts
States are rushing “affordability” bills, but most just mask high prices with rebates, mandates, or price caps. @MrRBourne & Nathan Miller argue durable relief means rolling back cost-raising rules and expanding supply.
https://t.co/WG5egT1NfL
This is just two GLP-1s, one peptide, one use case
what happens when off-label prescribing ramps up
what happens when retatrutide hits the market
what happens when other peptides become compoundable
chapter one
$LLY ’s Mounjaro will not be listed on Australia’s PBS after pricing negotiations collapsed.
Eli Lilly walked away from talks with the government, leaving around 450,000 patients without subsidized access.
Patients will continue to pay hundreds of dollars per month out of
Created 2026-05-01 · Updated 2026-05-02
[drug_trial_data_without_systems_context]
Learned · 5 rejections · Active
Exclude posts that announce or discuss pharma trial results, FDA approvals, efficacy metrics, or clinical outcomes without explaining how these findings change healthcare delivery, access, reimbursement, or clinical practice systems.
Posts reporting pharmaceutical trial data, efficacy results, or drug approval news without healthcare delivery or systems analysis.
3 example posts
"At the Veterans Health Administration, we have the exquisite gift of a hospice unit, a place to care for veterans when time is short. We care for the sons of mothers who can no longer be here to care for them."
In #APieceofMyMind, a #palliative care #physician reflects on https
#News for #investors and #media: Today we are announcing that our potential first-in-class treatment for chronic hepatitis B has been accepted for regulatory review by the US FDA.
It has also received Breakthrough Therapy designation.
🔗 Learn more: https://t.co/AnUodGmljS htt
⚠️ Sacubitril/Valsartan works. So why aren’t we using it?
The evidence is undeniable:
↓ CV mortality: 20% (RCT) / 10–38% (RWE)
↓ HF hospitalization: 21% (RCT) / 10–16% (RWE)
↓ All-cause mortality: 15% (RCT) / 10–25% (RWE)
Plus: reverse remodeling, less MR, better QoL & https:
Exclude posts that report fraud scandals, FDA decisions, or regulatory disputes (e.g., Optum fraud audit, Lilly lawsuit dismissal, Medicaid policy changes) without explaining how the issue affects healthcare delivery, provider incentives, patient access, or care quality at a systems level.
Posts reporting healthcare fraud, regulatory action, or policy disputes without analysis of systemic healthcare implications.
3 example posts
$LLY $NVO $HIMS
🚨 BREAKING: COURT DISMISSES PART OF ELI LILLY LAWSUIT AGAINST EMPOWER PHARMACY
BOTH LILLY AND EMPOWER ISSUED STATEMENTS CELEBRATING THE RULING
Dismissed: Lanham Act false advertising + consumer harm claim
Allowed to proceed: unfair competition claims under h
Grace Science’s experience highlights a growing disconnect at FDA between talk and action on therapies for rare diseases. Despite efficacy signals in a monogenic ultrarare disease, FDA said the plausible mechanism framework is not available, and requires a new manufacturing
Ah yes, Optum, the company that did Tim Walz’s audit of the 14 high risk Medicaid programs consumed by Somali fraud. https://t.co/dNRnjI7wxB
Exclude posts that focus exclusively on GLP-1 and peptide drug pricing, script counts, market share, competitive launches, or cost comparisons (e.g., Novo vs. Lilly, generic semaglutide pricing in India, HIMS pricing tiers) unless the post analyzes how pricing affects healthcare access, outcomes, equity, or healthcare delivery systems.
Posts about GLP-1 and peptide drug pricing, prescriptions, and market competition without healthcare systems analysis.
3 example posts
🚨 Survodutide, a weekly subcutaneous dual GLP-1/glucagon agonist, posts new top-line Phase 3 results today
📊 SYNCHRONIZE-1: Adults with overweight/obesity randomized to survodutide vs. placebo for 76 weeks
⚖️ 16.6% weight loss (efficacy estimand) vs. 3.2% in placebo
85.1% http
$NVO $LLY
Boehringer Ingelheim and Zealand Pharma report strong phase 3 data for obesity drug survodutide.
Patients lost up to 16.6% of body weight over 76 weeks vs. 3.2% for placebo.
The drug targets both GLP-1 and glucagon, a combo approach aimed at boosting weight loss.
This is just two GLP-1s, one peptide, one use case
what happens when off-label prescribing ramps up
what happens when retatrutide hits the market
what happens when other peptides become compoundable
chapter one
Exclude posts that focus on AI safety vulnerabilities, agent jailbreaks, or cybersecurity incidents (e.g., Claude deleting databases, malicious ClawHub skills, hardcoded API leaks) unless the post explicitly connects the vulnerability to a specific healthcare delivery, clinical decision-making, or patient outcome impact.
Posts about AI safety vulnerabilities, agent hacking, or security incidents presented without healthcare system implications or applications.
3 example posts
Anthropic built something so powerful that they are only letting 50 organisations touch it.
It is called Claude Mythos.
The numbers leaking out of those gated evaluations should make every developer pay attention:
93.9% on SWE-bench Verified
94.6% on GPQA Diamond
Claude Opus
Hacking Mexico government with AI assistance. Attacker exfiltrated hundreds of millions of citizen records. 75% of the executed commands across the entire cyberattack campaign were generated by Claude. 40 minutes after Claude said "I'm not going to create that file" it was report
𝗔𝗜 𝗔𝗴𝗲𝗻𝘁𝘀 𝗔𝗿𝗲 𝗔𝗹𝗿𝗲𝗮𝗱𝘆 𝗕𝗲𝗶𝗻𝗴 𝗛𝗶𝗷𝗮𝗰𝗸𝗲𝗱
Researcher Aks Sharma at Manifold found 30 malicious skills on ClawHub turning AI agents into a crypto farming botnet: 10,000 downloads before anyone noticed.
⬩ The attack required zero exploits. Malicious https://t.co/v4oBXPPydu
Created 2026-04-29 · Updated 2026-05-03
[broad_health_claim_without_nuance_or_evidence]
Learned · 5 rejections · Active
Exclude posts that make broad claims about medical treatments, health outcomes, or clinical effectiveness (e.g., 'AI can now design antibodies,' 'AI turned X into a feature') without providing evidence, context about limitations, competitive landscape, or clinical validation status.
Posts making sweeping claims about healthcare, treatment efficacy, or medical interventions without substantiating evidence or nuance.
3 example posts
Demis Hassabis says bigger context windows are still a brute force answer to memory.
The human brain does something stranger. During sleep, it replays what matters and folds new knowledge into what it already knows.
AI does not need infinite context. It needs the right memory h
Earlier this year we paired our autonomous lab with OpenAI's GPT-5 in a closed-loop experiment: GPT-5 designed cell-free protein synthesis reactions, our RACs ran them, and the model iterated on the results. Over 36,000 experiments and six cycles later, it landed on a reaction ht
As I mentioned before, I am now sharing an example from GPT-5.5 Pro, also featured by OpenAI, that really left me stunned by what it is capable of in biomedical science. (full report on the website I created with Codex, link in the thread).
To push GPT-5.5 Pro hard, I uploaded a
Created 2026-04-28 · Updated 2026-05-03
[glp1_peptide_market_pricing_speculation]
Learned · 5 rejections · Active
Exclude posts that discuss GLP-1 and peptide pricing, script counts, market share, or competitive launches (e.g., Foundayo vs. Wegovy prescriptions, price reductions, generic launches) unless the post analyzes healthcare system implications, payer policy, or clinical/epidemiological outcomes. Market speculation and pricing dynamics alone are insufficient.
Posts about GLP-1 and peptide market dynamics, pricing, prescriptions, and competitive positioning without healthcare systems or clinical outcomes analysis.
3 example posts
🚨 Survodutide, a weekly subcutaneous dual GLP-1/glucagon agonist, posts new top-line Phase 3 results today
📊 SYNCHRONIZE-1: Adults with overweight/obesity randomized to survodutide vs. placebo for 76 weeks
⚖️ 16.6% weight loss (efficacy estimand) vs. 3.2% in placebo
85.1% http
$NVO $LLY
Boehringer Ingelheim and Zealand Pharma report strong phase 3 data for obesity drug survodutide.
Patients lost up to 16.6% of body weight over 76 weeks vs. 3.2% for placebo.
The drug targets both GLP-1 and glucagon, a combo approach aimed at boosting weight loss.
This is just two GLP-1s, one peptide, one use case
what happens when off-label prescribing ramps up
what happens when retatrutide hits the market
what happens when other peptides become compoundable
chapter one
Created 2026-04-28 · Updated 2026-04-30
[unvalidated_speculative_medical_claims]
Learned · 5 rejections · Active
Exclude posts that propose unvalidated or speculative medical claims about off-label uses, unproven health benefits, or fringe medical interventions (e.g., rapamycin for aging, ibogaine without established evidence) without peer-reviewed support.
Posts making broad or unvalidated health claims about drugs, peptides, or treatments without rigorous evidence or clinical grounding.
3 example posts
"There is also compelling preliminary evidence suggesting that the use of these drugs [GLP-1] could exacerbate and lead to new diagnoses of restrictive
eating disorders, including anorexia nervosa."
@NEJM today
https://t.co/swx1y25F1X https://t.co/bQYlpuFBjM
The only problem with the GLP-1 heart muscle loss narrative is...
... that it's just a narrative.
GLP-1s have reliably improved cardiovascular outcomes in trials, to the point that some research suggests benefit may even be independent of (not reliant on) weight loss.
Impressive study and even with the limitations, is an important addition to the Rapamycin literature
In my opinion, the only plausible off-label use of Rapamycin currently should be in ApoE4 carriers as not many options are available). That would be an important trial we are
Exclude posts that report GLP-1 or peptide drug pricing, market share, prescription volume, or competitive launches (e.g., Mounjaro vs. Wegovy, generics pricing) unless they analyze systemic healthcare implications like access barriers, payer incentives, or clinical practice shifts. Market dynamics alone are insufficient.
Posts about GLP-1 and peptide drug market pricing, competition, and prescription patterns without healthcare economics or delivery system analysis.
3 example posts
Revolut just moved the IP of banking into a model.
Trained on 24 billion banking events in 111 countries.
One foundation model replacing six separate ML systems.
Credit scoring: +130%
Fraud recall: +65%
Marketing engagement: +79%
The model is the new moat.
🚨 Survodutide, a weekly subcutaneous dual GLP-1/glucagon agonist, posts new top-line Phase 3 results today
📊 SYNCHRONIZE-1: Adults with overweight/obesity randomized to survodutide vs. placebo for 76 weeks
⚖️ 16.6% weight loss (efficacy estimand) vs. 3.2% in placebo
85.1% http
$NVO $LLY
Boehringer Ingelheim and Zealand Pharma report strong phase 3 data for obesity drug survodutide.
Patients lost up to 16.6% of body weight over 76 weeks vs. 3.2% for placebo.
The drug targets both GLP-1 and glucagon, a combo approach aimed at boosting weight loss.
Created 2026-04-26 · Updated 2026-05-02
[cybersecurity_or_ai_vulnerability_tangential]
Learned · 5 rejections · Active
Exclude posts about AI model hacking, zero-day vulnerabilities, network takeovers, or cybersecurity proofs-of-concept unless they directly demonstrate a healthcare system failure, clinical harm, or specific healthcare infrastructure risk.
Posts about AI model vulnerabilities, cyberattacks, or security exploits that lack healthcare-specific context or application.
3 example posts
🚨 SaaS platform ClickUp, used by 85% of the Fortune 500, has been leaking customer emails through its homepage for at least 465 days, and counting.
ClickUp has a $4 billion valuation. They are SOC 2 Type 2, ISO 27001, ISO 27017, ISO 27018, ISO 42001, and PCI DSS certified. The f
A researcher gave an AI agent access to his shell, his files, and his network. Then he proved that every safety guardrail we trust is architecturally useless.
It cannot tell the difference between your instructions and a hacker's.
The paper is called Parallax: Why AI Agents htt
This is by far the most important result of the entire GPT-5.5 release:
In a cyber evaluation GPT-5.5 was able to take over a simulated corporate network in 1/10 trials with a budget of 100M tokens.
Previously, the only model that was able to solve this task was Claude Mythos,
Exclude posts that are primarily about non-healthcare topics (space programs, employment platforms, labor statistics, financial disputes, UFOs, entertainment) even if they are tagged or matched to healthcare categories. The core subject matter must be healthcare-focused, not peripheral.
Posts about entirely non-healthcare domains (space exploration, UFOs, employment tech, chargebacks, labor statistics) that have minimal or no healthcare relevance despite topic classification.
3 example posts
Bob Lazar allegedly watched people fly a UFO at Area 51.
“They knew how to fly it.”
“The craft had a corona discharge glow on the bottom and lifted off silently up into the sky … ”
And it had one shocking, anomalous effect that still perplexes him to this day:
As Lazar https:
Two companies you've never heard of built a combined $373M revenue business by helping employees bypass IT. Now comes the part where IT buys its way back in.
Replit just hit $253M ARR growing 2,352% YoY. 85% of the Fortune 500 have employees on it. Lovable is at $120M ARR, $6.6B
Lenny Rachitsky gets ~200 requests every week for things like events, partnerships and content. He declines 99.9% of them using different email templates that match the type of request.
He says yes to very few things, but those all adhere to the same question: If his audience
Created 2026-04-24 · Updated 2026-04-29
[glp1_peptide_market_pricing_dynamics]
Learned · 5 rejections · Active
Exclude posts that focus primarily on GLP-1 or peptide drug market share, pricing, launch velocity, script volumes, or competitive positioning (e.g., Novo vs. Lilly) without analyzing underlying healthcare system impacts, patient access barriers, or clinical outcomes.
Posts about GLP-1 and peptide drug market dynamics, pricing, competition, and adoption metrics without healthcare systems analysis.
3 example posts
🚨 Survodutide, a weekly subcutaneous dual GLP-1/glucagon agonist, posts new top-line Phase 3 results today
📊 SYNCHRONIZE-1: Adults with overweight/obesity randomized to survodutide vs. placebo for 76 weeks
⚖️ 16.6% weight loss (efficacy estimand) vs. 3.2% in placebo
85.1% http
$NVO $LLY
Boehringer Ingelheim and Zealand Pharma report strong phase 3 data for obesity drug survodutide.
Patients lost up to 16.6% of body weight over 76 weeks vs. 3.2% for placebo.
The drug targets both GLP-1 and glucagon, a combo approach aimed at boosting weight loss.
This is just two GLP-1s, one peptide, one use case
what happens when off-label prescribing ramps up
what happens when retatrutide hits the market
what happens when other peptides become compoundable
chapter one
Exclude posts that amplify unsubstantiated or sensationalized health claims (e.g., 'AI used to deny claims, people died'), conspiracy-style narratives, or extreme characterizations of motives without citing credible evidence or peer-reviewed research.
Posts making alarming health or policy claims without credible evidence, often with conspiratorial undertones.
3 example posts
@PirateWires He's objectively correct. Brian Thompson made decisions that led to denials of medical care, and people died. He used Ai to find ways to deny claims ffs. Brian Thompson has more blood on his hands than whoever shot him
They Don't Work for You-
Calls to protect foreigners from deportation or to keep the borders wide open are not about compassion. They are a core part of the globalist plan to flood the labor market with cheaper more compliant workers suppress wages for Americans and make
RFK Jr. calls out Democrat House representatives to their face for ignoring chronic disease while claiming to care about public health.
“The Congressman was talking about the deaths from infectious disease, which are a couple thousand a year.”
“90% of the people who die in this
Created 2026-04-24 · Updated 2026-04-25
[broad_unsubstantiated_health_claims]
Learned · 5 rejections · Active
Exclude posts that present broad medical or nutritional assertions (e.g., 'your body can only use 25-30g protein per meal', GLP-1 side effect narratives presented without trial context) as established fact without citing peer-reviewed evidence or acknowledging limitations.
Posts making sweeping health or medical claims without rigorous evidence, validation, or nuance.
3 example posts
"There is also compelling preliminary evidence suggesting that the use of these drugs [GLP-1] could exacerbate and lead to new diagnoses of restrictive
eating disorders, including anorexia nervosa."
@NEJM today
https://t.co/swx1y25F1X https://t.co/bQYlpuFBjM
Attention PK nerds, pharmacologists, and clinicians who actually understand serum levels:
I haven’t seen this discussed, but it could matter for patients priced out of injectables.
If a 25 mg oral semaglutide tablet has ~1% bioavailability, that’s ~0.25 mg systemically… on
The only problem with the GLP-1 heart muscle loss narrative is...
... that it's just a narrative.
GLP-1s have reliably improved cardiovascular outcomes in trials, to the point that some research suggests benefit may even be independent of (not reliant on) weight loss.
Exclude posts that quote or amplify political figures criticizing healthcare spending, drug prices, or chronic disease policy (e.g., 'RFK Jr. calls out...', 'A senator complaining about...') unless the post provides healthcare-specific policy analysis, regulatory impact assessment, or evidence about implementation outcomes.
Posts amplifying political figures' healthcare claims or criticism without substantive policy or healthcare system analysis.
3 example posts
What $1 Billion a Day Buys in American Health Care
The U.S. is spending $1 billion/day on the war in Iran — over a year, that would cover 37 million Medicaid enrollees. Congress just cut $911 billion from the program because it was too expensive.
Read & subscribe (for free!)
A senator complaining about drug prices while voting for the law that set them is not a reformer. He is a magician. The trick is making you watch his hands.
RFK Jr. calls out Democrat House representatives to their face for ignoring chronic disease while claiming to care about public health.
“The Congressman was talking about the deaths from infectious disease, which are a couple thousand a year.”
“90% of the people who die in this
Exclude posts about compute, energy infrastructure, supply chains, labor markets, or other non-healthcare domains that merely use healthcare language as analogy or loose context (e.g., power grid bottlenecks, chip supply, semiconductor trends, labor market macro) without directly analyzing healthcare operations, delivery, or policy.
Posts about non-healthcare domains (energy, semiconductors, space, finance, labor) that are only weakly connected to healthcare through metaphor or tangential framing
3 example posts
Since I began work on AI in 2010, training compute for frontier models has grown by one trillion times.
Now we're looking at something like another thousand-fold growth in effective compute by the end of 2028.
1000x the existing 1,000,000,000,000x.
Extraordinary stuff.
Follow the bottleneck.
Chips → data centers → grid equipment → power → gas turbines
Grid equipment grew 1%/yr for decades. Then data centers showed up as an entirely new buyer.
Gas turbine makers shipped 5–7 GW/yr. Last year? Orders hit 100 GW.
@maxlbcook on how he https://t.
Why the biggest fintech players are in for a shock.
"The shift is from human UX to agent UX.
In the past, you won with dashboards, design and user experience.
Now, the buyer is an AI agent, and it only cares about APIs, performance and integration.
That breaks traditional htt
Created 2026-04-23 · Updated 2026-04-26
[conspiracy_or_sensationalized_health_claims]
Learned · 5 rejections · Active
Exclude posts that present alarming health or medical claims (e.g., smart TVs collecting screenshots, missing scientists, unvalidated psychedelic treatments) without credible scientific sources, peer review, or clear distinction between speculation and fact.
Posts making unsubstantiated or sensationalized claims about health threats, surveillance, or medical interventions without credible evidence or peer review.
3 example posts
Bob Lazar allegedly watched people fly a UFO at Area 51.
“They knew how to fly it.”
“The craft had a corona discharge glow on the bottom and lifted off silently up into the sky … ”
And it had one shocking, anomalous effect that still perplexes him to this day:
As Lazar https:
As far as I know this is the only naturally-derived, classical psychedelic, that has killed people.
Ayahuasca has some deaths, but it's unclear what the cause was, and unlikely directly related to its cardiovascular risk profile. https://t.co/DatuHiBOTX
🚨BREAKING: A peer reviewed study just confirmed your smart TV is taking screenshots of your screen every 15 seconds and sending them to company servers.
Samsung every minute. LG every 15 seconds. Running even when you are using it as a monitor.
Here is how to stop it:
Created 2026-04-23 · Updated 2026-04-24
[unvalidated_speculative_peptide_drug_claims]
Learned · 5 rejections · Active
Exclude posts that promote peptides, rapamycin, ibogaine, or other experimental drugs/treatments for off-label uses (longevity, optimization, weight loss) without peer-reviewed clinical trial data, FDA approval context, or critical discussion of limitations and safety profiles.
Posts promoting peptides, off-label drugs, or experimental treatments without rigorous clinical evidence or FDA validation.
3 example posts
Impressive study and even with the limitations, is an important addition to the Rapamycin literature
In my opinion, the only plausible off-label use of Rapamycin currently should be in ApoE4 carriers as not many options are available). That would be an important trial we are
We’re exploring the idea of a peptide-forward telehealth concierge medical service. Medicine 3.0 focused on full optimization- peptides, hormones, diet/exercise. MD is a former college varsity rower, fellowship at Yale etc.
Would you be interested in participating in a pilot
I am a strong believer in ibogaine, which is one of the reasons why @ataibeckley acquired the residual interest in its ibogaine program in Q4 2023 and now owns it 100%.
I’m very encouraged to see the administration taking a positive public stance on this important topic.
Created 2026-04-23 · Updated 2026-04-24
[political_figures_healthcare_grandstanding]
Learned · 5 rejections · Active
Exclude posts featuring political figures (RFK Jr., senators, congresspeople) making healthcare-related claims, accusations, or announcements that lack supporting data, peer-reviewed evidence, or systems-level analysis, especially when the post reads as amplification or political theater rather than policy critique.
Posts where political figures make broad healthcare claims or allegations without substantive analysis or evidence.
3 example posts
What $1 Billion a Day Buys in American Health Care
The U.S. is spending $1 billion/day on the war in Iran — over a year, that would cover 37 million Medicaid enrollees. Congress just cut $911 billion from the program because it was too expensive.
Read & subscribe (for free!)
RFK Jr. calls out Democrat House representatives to their face for ignoring chronic disease while claiming to care about public health.
“The Congressman was talking about the deaths from infectious disease, which are a couple thousand a year.”
“90% of the people who die in this
🚨BREAKING: HHS Sec. RFK Jr. just announced President Trump has SAVED and FOUND 138,000 missing children lost under Biden.
"Many have been trafficked, undergone slavery, s*xual abuse."
Follow: @BoLoudon https://t.co/p6YKEm38T7
Exclude posts about AI model technical capabilities (GPT-5.5 cybersecurity, Claude reasoning, compute scaling) that mention healthcare tangentially or not at all. Posts must demonstrate concrete healthcare application or policy impact, not just AI capability hype.
Posts about AI model capabilities, vulnerabilities, or technical benchmarks with only tenuous or implied healthcare relevance.
2 example posts
Since I began work on AI in 2010, training compute for frontier models has grown by one trillion times.
Now we're looking at something like another thousand-fold growth in effective compute by the end of 2028.
1000x the existing 1,000,000,000,000x.
Extraordinary stuff.
This is by far the most important result of the entire GPT-5.5 release:
In a cyber evaluation GPT-5.5 was able to take over a simulated corporate network in 1/10 trials with a budget of 100M tokens.
Previously, the only model that was able to solve this task was Claude Mythos,
Exclude posts about general macroeconomic trends, labor markets, geopolitics, elections, legislation, or trade policy unless they include specific healthcare policy analysis, reimbursement impact, or healthcare delivery system implications. Broad commentary on labor markets, elections, or international affairs does not qualify.
Posts on broad macro, political, or economic topics (trade, labor, elections, geopolitics) without specific healthcare policy or delivery analysis.
3 example posts
Total employment in New Jersey declined by 10,300 jobs in February, though the unemployment rate in the state decreased by 0.1% to 5.1%.
The January estimate was revised downward by 2,500 jobs, resulting in a December-to-January net gain of 3,500 jobs, down from the preliminary
A senator complaining about drug prices while voting for the law that set them is not a reformer. He is a magician. The trick is making you watch his hands.
They Don't Work for You-
Calls to protect foreigners from deportation or to keep the borders wide open are not about compassion. They are a core part of the globalist plan to flood the labor market with cheaper more compliant workers suppress wages for Americans and make
Exclude posts about tariffs, corporate tax policy, banking models, insurance as a business/tech product, or general economic policy that mention healthcare peripherally but center on non-healthcare business or policy analysis. Posts must be primarily about healthcare system dynamics, not general business or macro policy.
Posts about non-healthcare policy, finance, or business dynamics that lack healthcare systems focus
3 example posts
Revolut just moved the IP of banking into a model.
Trained on 24 billion banking events in 111 countries.
One foundation model replacing six separate ML systems.
Credit scoring: +130%
Fraud recall: +65%
Marketing engagement: +79%
The model is the new moat.
Ah yes, Optum, the company that did Tim Walz’s audit of the 14 high risk Medicaid programs consumed by Somali fraud. https://t.co/dNRnjI7wxB
Last week I randomly got a $9,000 bill for a hospital visit from last year. I called them up and they said my insurance decided not to cover it. Why? Because.
.@Aetna can just decide they’re not interested in covering something and then you’re left with an inflated bill to pay h
Exclude posts that present isolated clinical trial results, drug mechanism findings, or preclinical breakthroughs without connecting to a healthcare systems problem, patient access issue, regulatory pathway challenge, or market opportunity that matters beyond the lab.
Posts reporting clinical trial results, drug efficacy data, or research findings without discussing healthcare delivery, access, cost, or operational implications.
3 example posts
🫀 Detecting Diffuse Non-Calcified Coronary Atherosclerosis with Photon Counting CT: Seeing What Conventional CT Often Misses
In coronary CTA, the hardest disease to detect is not focal stenosis.
It’s diffuse, non-calcified atherosclerosis.
No obvious narrowing.
No calcium.
Just
Good summary of the marked benefit of the molecular glue drug (daraxonrasib) vs pancreatic cancer, from Revolution Medicines, and other progress (adds to the neoantigen vaccine with 6-year survival)
gift link https://t.co/qk7Ar9dCAQ https://t.co/SMiA51fiwX
Insightful plenary from the father of CAR-T, @carlhjune #AACR26
🔬 CAR-T for solid tumors is finally breaking through. 7 FDA approvals in blood cancers and now solid tumors are next 🎯
Clinical signals
• CLDN18.2 (Satri-cel): 38% vs 4% ORR in gastric cancer (The Lancet 2025) http
Created 2026-04-19 · Updated 2026-04-22
[ai_company_product_metrics_and_launches]
Learned · 5 rejections · Active
Exclude posts that report on AI company product launches, revenue figures, ARR milestones, user adoption rates, or fundraising news (e.g., Claude Code revenue, Anthropic hiring, OpenAI SDK launches) unless the post explicitly analyzes how that product solves a specific healthcare delivery problem or system challenge.
Posts about AI company product launches, revenue milestones, or business metrics without healthcare application focus.
3 example posts
Everyone's covering agents that help you work and build. Almost nobody's covering this:
The same primitives ARE the production runtime.
The SDK is one line:
npm install @anthropic-ai/claude-agent-sdk
The CLAUDE.md that guides Claude Code in your terminal is the exact same http
Today we launched a major update to the OpenAI Agents SDK to help developers build and deploy long-running, durable agents in production.
You can now build your own Codex-style agents using powerful primitives for modern agents - file and computer use, skills, memory and
Anthropic's CEO:
“coding is going away first, then all of software engineering."
Now, Anthropic looks to hire 454 engineers at $320k–$405k.
coding isn’t vanishing it’s becoming leverage for the few who can build, review, and ship at a completely different scale. https://t.co
Exclude posts that announce clinical trial data, drug efficacy results, or research findings (e.g., Phase 3 trial results for new medications, cholesterol reduction statistics, AI model medical screening performance) without connecting to healthcare system barriers, implementation challenges, or policy implications.
Posts reporting clinical trial results, drug efficacy data, or research findings without healthcare delivery or policy system analysis.
3 example posts
🚨 Survodutide, a weekly subcutaneous dual GLP-1/glucagon agonist, posts new top-line Phase 3 results today
📊 SYNCHRONIZE-1: Adults with overweight/obesity randomized to survodutide vs. placebo for 76 weeks
⚖️ 16.6% weight loss (efficacy estimand) vs. 3.2% in placebo
85.1% http
⚠️ Sacubitril/Valsartan works. So why aren’t we using it?
The evidence is undeniable:
↓ CV mortality: 20% (RCT) / 10–38% (RWE)
↓ HF hospitalization: 21% (RCT) / 10–16% (RWE)
↓ All-cause mortality: 15% (RCT) / 10–25% (RWE)
Plus: reverse remodeling, less MR, better QoL & https:
AI can now design antibodies that bind with atomic precision, but not ones that cells can produce. Our preprint closes this gap, delivering a structural principle, an AI-guided rescue pipeline, and adalimumab variants with 20-100x in vivo potency.
https://t.co/GvfgHA5EcU
Created 2026-04-19 · Updated 2026-04-30
[glp1_peptide_macro_or_personal_context]
Learned · 5 rejections · Active
Exclude posts about GLP-1 side effects, personal weight loss stories, pharmacoeconomics of existing drugs, or biological mechanisms of GLP-1, unless they frame a healthcare technology application or systemic innovation. Personal wellness and drug mechanism posts belong in clinical/wellness categories, not healthcare tech.
Posts about GLP-1 drugs, peptides, or weight loss that focus on personal experience, macro economics, or general biology rather than healthcare tech innovation.
3 example posts
I have now received nine reports from people taking GLP-1 drugs who got the same side effect:
They no longer feel normal when they come off.
"I feel hangry again", "I started thinking about hunger and I hate it", "I have to go back to Adderall".
8/9 reports -> from women.
The brain is the master regulator of food intake and energy balance. A brilliant new @CellCellPress review, including the mechanism of GLP-1 drugs, by @ClemmensenC and colleagues, open-access
https://t.co/KbDf7ym288 https://t.co/DMFeS9zHfw
GLP-1 drugs are the ultimate validation of the techno-solutionist approach to society's most challenging problems.
The obesity crisis seemed liked it would just get worse and worse forever. Scolding from public health officials didn't work. Proposals to completely overhaul our f
Created 2026-04-17 · Updated 2026-04-18
[ai_company_product_launch_or_metrics]
Learned · 5 rejections · Active
Exclude posts that report product launches, feature announcements, or business metrics (ARR, growth rates, customer counts) for AI companies or AI tools—even if tangentially framed as healthcare—unless the post specifically analyzes how the product changes healthcare delivery or outcomes.
Posts announcing AI company product launches, feature releases, or business metrics without healthcare application context.
3 example posts
Nothing beats running @ginkgo cloud lab for happy customers!
Not going to stop until it’s as easy to start a biotech startup on GCL as it is to start a software startup on AWS. https://t.co/GDEyT1pI7F
🚨 Anthropic's own team just showed how to build production AI agents.
30 minutes. free. from the engineers who built it.
watch the workshop. bookmark it.
you spent 6 months managing every workflow yourself.
they just showed how to put all of it on autopilot.
Then read the ht
Kensho AI Mafia led by @DanielNadler needs to be studied. Particularly their success in Vertical AI. From a cursory look, Kensho alumni have founded:
- Suno (music)
- OpenEvidence (healthcare)
- Chai Discovery (biopharma)
- LangChain (agent infra)
Exclude posts that announce product milestones, revenue, hiring, or competitive positioning for AI/software companies — even those with some healthcare use cases — unless the post explicitly addresses healthcare delivery, regulatory, or operational outcomes.
Posts reporting product adoption, revenue, user growth, or engineering hiring metrics for AI tools or platforms without healthcare application specificity.
3 example posts
Congrats to @AbridgeHQ, @AnthropicAI, @cursor_ai, @elise_ai, @Fal, @WeAreLegora, and @Perplexity_ai on being named to the @Forbes AI 50 — redefining how the world builds, works, and communicates through AI.
We couldn't be more excited to back them as they continue to shape the h
Today we launched a major update to the OpenAI Agents SDK to help developers build and deploy long-running, durable agents in production.
You can now build your own Codex-style agents using powerful primitives for modern agents - file and computer use, skills, memory and
Anthropic's CEO:
“coding is going away first, then all of software engineering."
Now, Anthropic looks to hire 454 engineers at $320k–$405k.
coding isn’t vanishing it’s becoming leverage for the few who can build, review, and ship at a completely different scale. https://t.co
Exclude posts that are primarily retweets, event announcements (e.g., 'we're presenting a poster'), company press releases, or shallow responses ('thanks everyone') that contain no original analysis or substantive healthcare tech insight.
Posts that are primarily retweets, event announcements, or surface-level commentary without original insight or analysis.
3 example posts
Recursion at #AACR: Transcriptional Atlas of Patient Tumors for Preclinical Model Selection
On April 20, 9am-12pm, we’re presenting a poster on CellNeighbor – a novel computational framework designed to contextualize cell line expression profiles within the landscape of https://
Market maps have become a real focus of ours as LLMs are getting company categorization so wrong.
Our latest, in partnership with Confido Health & @RMFnyc1, focuses on agentic AI for the ambulatory market. What's being deployed now?
Our focus was Series A onwards.
👇 htt
In the latest episode of AI Grand Rounds, Dr. @byrondcrowe, chief medical officer of @doctronic, describes how administrative complexity can interfere with timely, effective treatment, and how AI may help address those challenges. Full episode: https://t.co/hL9Dh2VjYc https://t.c
Created 2026-04-16 · Updated 2026-04-19
[glp1_peptide_adjacent_non_healthcare_context]
Learned · 5 rejections · Active
Exclude posts that discuss GLP-1 drugs, peptides, or weight loss medications primarily through personal anecdotes, pricing gossip, or macro-level cultural commentary (e.g., 'obesity crisis solved') without clinical trial data, healthcare access analysis, or policy implications.
Posts about GLP-1 drugs, weight loss medications, or peptides that focus on personal side effects, pricing speculation, or general statements about obesity without clinical or healthcare economics substance.
3 example posts
I have now received nine reports from people taking GLP-1 drugs who got the same side effect:
They no longer feel normal when they come off.
"I feel hangry again", "I started thinking about hunger and I hate it", "I have to go back to Adderall".
8/9 reports -> from women.
The brain is the master regulator of food intake and energy balance. A brilliant new @CellCellPress review, including the mechanism of GLP-1 drugs, by @ClemmensenC and colleagues, open-access
https://t.co/KbDf7ym288 https://t.co/DMFeS9zHfw
GLP-1 drugs are the ultimate validation of the techno-solutionist approach to society's most challenging problems.
The obesity crisis seemed liked it would just get worse and worse forever. Scolding from public health officials didn't work. Proposals to completely overhaul our f
Exclude posts about CPU launches, chip supply chains, compute capacity announcements, space infrastructure, or semiconductor strategy unless the post explicitly connects the infrastructure advancement to a specific healthcare delivery, diagnostic, or clinical workflow problem.
Posts about AI compute, infrastructure, chip announcements, or space-based computing without specific healthcare application or relevance.
3 example posts
🔥 CPUs are having a moment.
#Nvidia launched a standalone CPU. #Arm made its first chip in 35 years. #Intel & #AMD are raising prices amid a supply crunch.
What's behind it: Agentic AI needs far more CPU than anyone planned for — driving a structural shift in CPU:GPU ratios tow
To put Elon's space compute vision into perspective:
1 TW of compute in orbit
That's 10 million tons to orbit each year.
That's 100,000 launches a year, almost one every 5 minutes.
In the airline business that's normal!
🚨MAJOR INTERVIEW: Jensen Huang joins the Besties!
The @nvidia CEO joins to discuss:
-- Nvidia's future, roadmap to $1T revenue
-- Physical AI's $50T market
-- Rise of the agent, OpenClaw's inflection moment
-- Inference explosion, Groq deal
-- AI PR Crisis, Anthropic's comms m
Exclude posts that share academic research, peer-reviewed findings, clinical mechanisms, or ward-level medical observations without connecting them to healthcare tech applications, business models, or systems-level implications. Posts about pancreatic cancer biology, GLP-1 mechanisms, or physician anecdotes from rounds are excluded unless they address how the findings inform healthcare tech, rather than merely documenting clinical facts.
Posts reporting clinical research findings, mechanistic biology, or ward-level observations without healthcare systems or business application insight.
3 example posts
~1-2% of the patients on ward rounds has something bad going on which hasn’t been identified yet.
As the attending, one of my main duties on rounds is to spot these cases. I do a lot of this by Noticing Things.
A 🤖 iPad makes it much less likely you will Notice Things. 🤔
Interpretable Antibody–Antigen Structural Interface Prediction via Adaptive Graph Learning and Cyclic Transfer
1. The paper introduces VASCIF (Variable-domain Antibody–antigen Structural Complex Interface Finder), a structure-aware model that jointly predicts paratopes and https
NIH-funded researchers have uncovered a key reason why immunotherapy has largely failed in pancreatic cancer — and identified a promising strategy to overcome that resistance.
Read on to learn more about this discovery: https://t.co/BoCHpLxp5g https://t.co/3DXv4E9DOE
Created 2026-04-16 · Updated 2026-04-16
[tangential_infrastructure_and_compute_hype]
Learned · 5 rejections · Active
Exclude posts discussing AI compute scaling, energy consumption, chip manufacturing, data center buildout, or infrastructure investments (e.g., GPU capacity, power grids, gas turbines) unless directly tied to healthcare AI deployment or healthcare data processing challenges.
Posts about AI infrastructure, compute scaling, energy, or chip manufacturing without healthcare application focus.
3 example posts
Today we’re giving an update on ramping F.03 production at BotQ
In the last 120 days, Figure scaled manufacturing 24x - from 1 robot/day to 1 robot/hour
We will manufacture 55 humanoid robots this week https://t.co/Am5Kn53mVE
📈 NVIDIA tops AI leaderboards and benchmarks with open models driven by extreme co-design across compute, networking, memory, storage, and software.
This includes models for biology, AI physics, agentic AI, physical AI, robotics, and autonomous vehicles.
By being vertically htt
Since I began work on AI in 2010, training compute for frontier models has grown by one trillion times.
Now we're looking at something like another thousand-fold growth in effective compute by the end of 2028.
1000x the existing 1,000,000,000,000x.
Extraordinary stuff.
Created 2026-04-16 · Updated 2026-04-30
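Each entry above pairs a slug, a natural-language exclusion instruction, a one-line summary, and up to three rejected example posts. A minimal sketch of how such a learned rule might be stored and rendered into a prompt block for the prescreen model; all field names and the `to_prompt_block` helper are hypothetical, not the system's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class LearnedRule:
    """One learned exclusion rule, mirroring the dashboard fields (assumed names)."""
    slug: str                 # e.g. "tangential_infrastructure_and_compute_hype"
    rule_text: str            # the full "Exclude posts that ..." instruction
    summary: str              # one-line description shown under the rule text
    examples: list[str] = field(default_factory=list)  # rejected example posts
    rejections: int = 0       # rejection count the rule was learned from
    active: bool = True

    def to_prompt_block(self) -> str:
        """Render the rule as a block suitable for a scan model's system prompt."""
        lines = [f"[{self.slug}]", self.rule_text, f"Summary: {self.summary}"]
        for ex in self.examples[:3]:   # cap at 3 examples, as in the dashboard
            lines.append(f"Example rejected post: {ex}")
        return "\n".join(lines)

rule = LearnedRule(
    slug="tangential_infrastructure_and_compute_hype",
    rule_text="Exclude posts discussing AI compute scaling, energy consumption, "
              "or chip manufacturing unless directly tied to healthcare AI deployment.",
    summary="AI infrastructure posts without healthcare application focus.",
    examples=["Figure scaled manufacturing 24x - from 1 robot/day to 1 robot/hour"],
    rejections=5,
)
print(rule.to_prompt_block().splitlines()[0])
# -> [tangential_infrastructure_and_compute_hype]
```

Keeping the examples with the rule text lets the scan models anchor the abstract instruction to concrete rejected posts.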
[ai_model_technical_capability_tangent]
Learned · 5 rejections · Active
Exclude posts that focus on AI model technical capabilities, architecture, or safety properties (e.g., Claude Code design, GPT security vulnerabilities, agent autonomy) without connecting to a specific healthcare operational or clinical problem being solved.
Posts about AI model capabilities (agents, code generation, security vulnerabilities) disconnected from healthcare application or system context
3 example posts
A must read for anyone interested in building practical AI systems in 2026:
Dive into Claude Code: The Design Space of Today's and Future AI Agent Systems
The paper explains the architecture of a modern production-grade AI agent system (Claude Code) by analyzing its source http
🚨 Anthropic's own team just showed how to build production AI agents.
30 minutes. free. from the engineers who built it.
watch the workshop. bookmark it.
you spent 6 months managing every workflow yourself.
they just showed how to put all of it on autopilot.
Then read the ht
A researcher gave an AI agent access to his shell, his files, and his network. Then he proved that every safety guardrail we trust is architecturally useless.
It cannot tell the difference between your instructions and a hacker's.
The paper is called Parallax: Why AI Agents htt
Created 2026-04-14 · Updated 2026-04-27
[tangential_ai_infrastructure_hype]
Learned · 5 rejections · Active
Exclude posts celebrating AI compute scaling, training compute growth, data center buildout, power grid expansion, chip manufacturing, or energy infrastructure—unless the post explicitly connects this infrastructure trend to a specific healthcare delivery challenge, clinical workflow, or health tech business model.
Posts about AI infrastructure, compute scaling, energy, or data center buildout without clear healthcare application.
3 example posts
Thought I would start posting about interesting things happening at AWS. Not a bad day to start.🚀
Today at #WhatsNextWithAWS we announced a big step forward with @OpenAI on Amazon Bedrock:
1. OpenAI models now available
2. Codex for enterprise development
3. Amazon Bedrock Manag
📈 NVIDIA tops AI leaderboards and benchmarks with open models driven by extreme co-design across compute, networking, memory, storage, and software.
This includes models for biology, AI physics, agentic AI, physical AI, robotics, and autonomous vehicles.
By being vertically htt
Software is not a moat
Over the last 15+ years, nearly every innovation @EvanSpiegel and his team shipped got copied. Stories. AR glasses. Swipe-based navigation. The camera-first interface.
And yet @Snapchat is the only independent consumer social app that has lasted. Nearly 1
Created 2026-04-14 · Updated 2026-05-01
[ai_company_business_metrics_not_healthcare]
Learned · 5 rejections · Active
Exclude posts that focus primarily on AI company business metrics (revenue, ARR, valuation, hiring announcements, stock performance) without demonstrating direct application to healthcare problems or operations. The post must connect the metric to a healthcare outcome or use case.
Posts about AI company revenue, valuation, hiring, or business performance metrics disconnected from healthcare applications
3 example posts
Two years ago the best AI models couldn't complete beginner-level cyber tasks. One just executed a full 32-step corporate network takeover. The Bank of England is convening emergency CEO briefings.
Look at that chart. GPT-4o maxes out at 2 steps. Initial reconnaissance. It can
Anthropic's CEO:
“coding is going away first, then all of software engineering."
Now, Anthropic looks to hire 454 engineers at $320k–$405k.
coding isn’t vanishing it’s becoming leverage for the few who can build, review, and ship at a completely different scale. https://t.co
Boris Cherny created Claude Code. It hit $2.5 billion in annualized revenue in 9 months. Fastest B2B product ramp in history. Faster than ChatGPT, Slack, or Snowflake ever reached $1 billion.
Now he says coding is “solved” and IDEs will be dead by end of year. https://t.co/HI7M
Exclude posts that highlight AI technical capabilities in non-healthcare domains (cybersecurity breaches, robotics competitions, network penetration) even if healthcare is mentioned in passing or used as a loose frame. The post must center healthcare delivery, clinical workflows, or healthcare business problems.
Posts showcasing AI model capabilities (cyber security, network takeovers, robot races) where healthcare is tangential or completely absent from the narrative.
3 example posts
> Vercel got pawned
> severe enough to notify law enforcement
> the only advice: “review your environment variables”
> what does that even mean?
> $10B company, and this is how you communicate
Cyber attacks ramping fast, starting to see why Anthropic is scared to
In Beijing's 2026 humanoid robot half-marathon, HONOR's Lightning completed the 21 km course in 50:26 minute.
Beat current human men's half-marathon world record of 57:20.
Last year's winner took over 2 hours 40 minutes.
Massive progress in 12 month
https://t.co/OcZJ66ebWD
FT: The White House is moving to give major US agencies access to a modified Anthropic Mythos model built to hunt dangerous software flaws before attackers find them.
That makes Mythos useful for defense because a model that can find a weakness in an operating system, browser, h
Created 2026-04-14 · Updated 2026-04-20
[political_outrage_without_healthcare_substance]
Learned · 5 rejections · Active
Exclude posts that use healthcare as framing for political complaints (Trump administration, DOGE, federal spending) without offering data-driven analysis, specific policy mechanisms, or healthcare system impact. Posts should advance healthcare understanding, not score political points.
Posts expressing political criticism or outrage about government policy, spending, or administration without substantive healthcare analysis or actionable insights
3 example posts
I joined tribal leaders in Phoenix to reaffirm our commitment to self-governance and sovereignty in Indian Country. Together, we are making healthcare more affordable, strengthening communities and improving outcomes across Indian Country. https://t.co/SsjrQwoTgf
Indian has 0.7 active physicians per 1,000 people, America has 3.0 active physicians per 1,000 people.
You are a liar. You are not motivated by increasing patient access to care. You just want to practice in America because you can make more money.
The attack by the Trump Administration on blue states for alleged Medicaid "fraud" is using such garbage math to make up numbers that even Dr. Oz had to admit it.
⬇️⬇️⬇️
https://t.co/V0dZfx0OdK
Created 2026-04-13 · Updated 2026-04-14
[infrastructure_and_non_healthcare_tech]
Learned · 5 rejections · Active
Exclude posts about semiconductor manufacturing, data infrastructure, vector databases, or general business models (switching costs, network effects, moats) where healthcare is mentioned only as a contextual example or tangential case study rather than being the core subject.
Posts about tech infrastructure, hardware, or non-healthcare business models that use healthcare as a loose framing device.
3 example posts
Quantum computers are still on the drawing board, but quantum sensing is here now—and this technology can transform not just industry but America's security picture. Read a new Defining Ideas article by Dr. Vivek Lall and Haibo Huang: https://t.co/UeEjZWIO27
In general, there are 5 kind of moats:
▪️ Intangible Assets
▪️ Switching Costs
▪️ Network Effects
▪️ Cost Advantage
▪️ Efficient Scale
I'll teach you everything you need to know in 2 minutes: https://t.co/v9w6pfJOGh
HNSW is fast & performant. But what's it costing you?
DiskBBQ gets you great recall & speed using a fraction of the memory.
HNSW vs DiskBBQ in 40 seconds with @_jphwang https://t.co/3BqD9a6srU
Created 2026-04-12 · Updated 2026-04-13
[personal_anecdote_or_low_substance_opinion]
Learned · 5 rejections · Active
Exclude posts that are primarily personal anecdotes, lifestyle updates, or opinionated takes (e.g., 'I now believe,' 'I got a notification,' personal recovery stories) that lack healthcare systems analysis or evidence-based insight.
Posts sharing personal stories, lifestyle updates, or opinion statements without healthcare insight or evidence
3 example posts
I got a notification from Whoop yesterday that the FDA is targeting them for being an "unapproved medical device" for offering blood pressure insights
A pretty classic case of the FDA working as a lobby for the multi billion dollar medical device industry instead of supporting n
Mine got LASIK, as had many of the nurses.
A lot of ophthalmologists have.
There's a weird delusion that the profession is all afraid of it, but there's no basis for that belief beyond fearmongering. https://t.co/yVBDfNDiB5
New @JAMANetwork paper out from our team here at UCLA Health/WLA VA and @samirguptaGI's team at UCSD/San Diego VA!
In this first study from a multi-part research project, our teams are trying to understand what age your medical doctors and the colorectal cancer prevention https:
Exclude posts from biotech/AI founders or companies announcing capability breakthroughs (e.g., 'we trained on cohort data and built HealthFormer', 'we're designing proteins with atomic precision', '24x manufacturing scaling') without peer-reviewed evidence, clinical validation, or demonstrated healthcare system adoption. Self-promotional announcements and preprint hype without validation should be excluded.
Posts where biotech or AI founders make broad claims about capabilities or impact without clinical validation, peer review, or healthcare systems evidence.
3 example posts
Our new preprint is a significant milestone for us
We built "HealthFormer" by training on our deeply phenotyped cohort from the Human Phenotype Project data. Healthformer is a multimodal generative transformer model that tokenizes each participant's physiological trajectory http
Earlier this year we paired our autonomous lab with OpenAI's GPT-5 in a closed-loop experiment: GPT-5 designed cell-free protein synthesis reactions, our RACs ran them, and the model iterated on the results. Over 36,000 experiments and six cycles later, it landed on a reaction ht
Experimentally Validated Deep Learning Control of Protein Aggregation
1. The study introduces AggreProt, a deep neural network that predicts residue-level aggregation-prone regions (APRs) directly from protein sequence, and then uses those predictions to design mutations that ht
Exclude posts about AI infrastructure, compute optimization, model architectures, filesystem design, or technical capabilities (NVIDIA stacks, context windows, memory systems, OpenShell sandboxing) that do not explicitly connect to healthcare operations or clinical workflows. Generic AI infrastructure posts with loose healthcare framing belong in tech, not healthcare.
Posts about AI infrastructure, compute capabilities, or technical architecture that lack clear healthcare application or are tangential to healthcare delivery.
3 example posts
Demis Hassabis says bigger context windows are still a brute force answer to memory.
The human brain does something stranger. During sleep, it replays what matters and folds new knowledge into what it already knows.
AI does not need infinite context. It needs the right memory h
Introducing Mesa: the most powerful filesystem ever built, designed specifically for enterprise AI agents.
Every team building agents eventually hits the same wall: where do the files live?
Not the chat history, the actual artifacts the agent works on.
> The contracts your age
📈 NVIDIA tops AI leaderboards and benchmarks with open models driven by extreme co-design across compute, networking, memory, storage, and software.
This includes models for biology, AI physics, agentic AI, physical AI, robotics, and autonomous vehicles.
By being vertically htt
Exclude posts that announce AI company product launches, features, or technical capabilities (e.g., OpenShell, Mesa filesystem, learning workshops) unless the post demonstrates validated healthcare application, clinical adoption, or specific healthcare workflow improvement with evidence.
Posts announcing new AI company products, features, or model capabilities presented without healthcare application validation or clinical relevance.
3 example posts
If you're a student, professor, or researcher—this one's for you.
We’re hosting a series of virtual learnings for you to get hands-on experience with the NVIDIA NemoClaw and OpenShell software stack. You’ll get practical guidance on integrating agents with academic datasets and
We created OpenShell to make AI agents safe for enterprises.
Built in open source so any company can adopt and trust it, this secure sandbox controls what agents can access, share, and send.
Our CEO, Jensen, explains 👇 https://t.co/7EiIsxr0CG
Introducing Mesa: the most powerful filesystem ever built, designed specifically for enterprise AI agents.
Every team building agents eventually hits the same wall: where do the files live?
Not the chat history, the actual artifacts the agent works on.
> The contracts your age
Exclude posts that present broad, unsubstantiated health claims (e.g., 'AI is taking on labor but not accountability,' 'medicine can save the most lives') or reframe healthcare debates without data, evidence, or specific system-level analysis. Single anecdotes or rhetorical questions about health without substantive backing also fall here.
Posts making broad unvalidated health claims or debate-framing statements without evidence or healthcare systems context.
3 example posts
Stanford and Harvard published the most unsettling AI paper of the year.
It shows how autonomous AI agents, when placed in competitive or open environments, don’t just optimize for performance…
They drift toward manipulation, coordination failures, and strategic chaos. https://
"At the Veterans Health Administration, we have the exquisite gift of a hospice unit, a place to care for veterans when time is short. We care for the sons of mothers who can no longer be here to care for them."
In #APieceofMyMind, a #palliative care #physician reflects on https
Last week I randomly got a $9,000 bill for a hospital visit from last year. I called them up and they said my insurance decided not to cover it. Why? Because.
.@Aetna can just decide they’re not interested in covering something and then you’re left with an inflated bill to pay h
Exclude posts that announce new AI agent features, product launches, or model capabilities (e.g., foundry agents, HealthFormer preprints, NemoClaw training, Harvey legal AI) unless the post includes evidence of healthcare provider adoption, clinical validation, or real-world healthcare operational impact.
Posts announcing AI agent or model product launches, features, or capabilities without demonstrating healthcare-specific validation or operational deployment.
3 example posts
Our new preprint is a significant milestone for us
We built "HealthFormer" by training on our deeply phenotyped cohort from the Human Phenotype Project data. Healthformer is a multimodal generative transformer model that tokenizes each participant's physiological trajectory http
The problem with Reality Labs is not ambition. It is time. AI turned into revenue faster because it improves existing workflows. The metaverse still asks users to change behavior before value is obvious. https://t.co/lopZiUwGU5
Microsoft just turned an $11 billion startup into a Word feature.
Harvey raised $200M at an $11B valuation in March on the bet that legal AI is its own surface. The numbers held that up. $190M ARR per TechCrunch's December reporting. 100,000 lawyers across 1,300 organizations in
Exclude posts about semiconductor manufacturing, data center buildout, power generation, tariffs, or infrastructure supply chains that mention healthcare only tangentially or use healthcare as a loose hook for a non-healthcare story.
Posts about computing infrastructure, power grids, supply chains, or macro-economic policy with only tangential healthcare framing.
3 example posts
Today we’re giving an update on ramping F.03 production at BotQ
In the last 120 days, Figure scaled manufacturing 24x - from 1 robot/day to 1 robot/hour
We will manufacture 55 humanoid robots this week https://t.co/Am5Kn53mVE
Using 180 years of US data to show that tariff increases reduce imports, output, and manufacturing, with effects operating through both supply and demand channels, from Tamar den Besten, Regis Barnichon, @drkaenzig, and Aayush Singh https://t.co/R1kdtfi3fO https://t.co/wvXj31qs9e
Since I began work on AI in 2010, training compute for frontier models has grown by one trillion times.
Now we're looking at something like another thousand-fold growth in effective compute by the end of 2028.
1000x the existing 1,000,000,000,000x.
Extraordinary stuff.
Created 2026-05-02 · Updated 2026-05-02
[generalist_ai_capability_tangential]
Learned · 4 rejections · Active
Exclude posts that discuss AI model design, architecture, interpretability research, or technical capabilities (e.g., context windows, knowledge distillation, circuit analysis) that are discussed in generalist or academic contexts without direct application to healthcare workflow, clinical decision-making, or patient outcomes.
Posts about general AI model capabilities, architecture, or design patterns applied outside healthcare or tangentially framed.
3 example posts
Demis Hassabis says bigger context windows are still a brute force answer to memory.
The human brain does something stranger. During sleep, it replays what matters and folds new knowledge into what it already knows.
AI does not need infinite context. It needs the right memory h
Stanford and Harvard published the most unsettling AI paper of the year.
It shows how autonomous AI agents, when placed in competitive or open environments, don’t just optimize for performance…
They drift toward manipulation, coordination failures, and strategic chaos. https://
Interpretability is built on a few core assumptions.
Two of our ICLR 2026 @iclr_conf papers suggest some of those assumptions are wrong (or at least highly incomplete).
1. Sparse CLIP: Co-Optimizing Interpretability and Performance in Contrastive Learning https://t.co/3JzHDqRj3
Created 2026-05-02 · Updated 2026-05-03
[pharmaceutical_trial_data_without_context]
Learned · 4 rejections · Active
Exclude posts that report pharmaceutical trial results, efficacy numbers, or FDA regulatory actions as standalone announcements unless they analyze how the drug changes clinical practice, reimbursement, patient access, or healthcare delivery systems. Trial data alone without context is insufficient.
Posts announcing drug trial results or clinical data without systemic healthcare analysis or implementation implications.
3 example posts
#News for #investors and #media: Today we are announcing that our potential first-in-class treatment for chronic hepatitis B has been accepted for regulatory review by the US FDA.
It has also received Breakthrough Therapy designation.
🔗 Learn more: https://t.co/AnUodGmljS htt
⚠️ Sacubitril/Valsartan works. So why aren’t we using it?
The evidence is undeniable:
↓ CV mortality: 20% (RCT) / 10–38% (RWE)
↓ HF hospitalization: 21% (RCT) / 10–16% (RWE)
↓ All-cause mortality: 15% (RCT) / 10–25% (RWE)
Plus: reverse remodeling, less MR, better QoL & https:
$IBRX
Here's a wild theory.
What if we're given FDA acceptance of sBla and PDUFA at same time and then it's announced after reviewing everything it's been determined we will be given rapid expanded access review under "plausible mechanism of action".
That may sound crazy ht
Exclude posts analyzing AI's impact on employment, workforce disruption, or labor market trends (e.g., 'AI automates 57% of work hours,' 'job functions vs. job elimination') unless the post includes specific healthcare workforce dynamics—clinical roles, administrative burden, training requirements, or healthcare labor market shifts. Generalist labor market commentary without healthcare lens should be rejected.
Posts discussing workforce disruption, labor market effects, or job displacement from AI/automation at macro level without healthcare-specific implications.
3 example posts
Stanford and Harvard published the most unsettling AI paper of the year.
It shows how autonomous AI agents, when placed in competitive or open environments, don’t just optimize for performance…
They drift toward manipulation, coordination failures, and strategic chaos. https://
[New] from a16z @speedrun:
Come for the Agent, Stay for the Network
there's a quiet pattern hiding inside the most defensible vertical AI startups right now:
the agent is the wedge
the network is the moat.
here's what I mean:
an HVAC tech needs a part today.
>>Traditionally:
🚨 BREAKING: Anthropic new research finds that AI’s impact on jobs is primarily at the task level.
Rather than eliminating jobs, it is progressively taking over the functions that define them and gradually absorbing the core work in many jobs/roles.
The paper, “Labor Market http
Created 2026-05-01 · Updated 2026-05-02
[broad_health_claim_without_nuance]
Learned · 4 rejections · Active
Exclude posts that make sweeping health claims, propose unvalidated medical interventions, or present anecdotal medical observations as generalizable insights without citing evidence, clinical context, or healthcare systems impact. Single anecdotes, founder speculation, and vague health claims do not meet the bar for inclusion.
Posts making broad, unvalidated claims about health or medical efficacy without evidence, nuance, or healthcare systems perspective.
3 example posts
Revolut just moved the IP of banking into a model.
Trained on 24 billion banking events in 111 countries.
One foundation model replacing six separate ML systems.
Credit scoring: +130%
Fraud recall: +65%
Marketing engagement: +79%
The model is the new moat.
As I mentioned before, I am now sharing an example from GPT-5.5 Pro, also featured by OpenAI, that really left me stunned by what it is capable of in biomedical science. (full report on the website I created with Codex, link in the thread).
To push GPT-5.5 Pro hard, I uploaded a
What superhuman vision can detect from the retinal photo, which human eyes cannot, is stunning. A new foundation AI model screening for diabetes hypertension, hyperlipidemia, gout, osteoporosis, and thyroid disease @NatureMedicine
https://t.co/GhKvUqz4Vy https://t.co/iKcXCbLceu
Exclude posts that report healthcare fraud incidents, billing scandals, enforcement actions, or financial misconduct (e.g., Optum audits, hospice fraud, billing disputes) unless the post analyzes systemic healthcare infrastructure vulnerabilities, policy gaps, or architectural solutions. Posts that frame these as isolated scandal/enforcement stories rather than healthcare system design problems should be excluded.
Posts reporting healthcare fraud, billing scandals, or regulatory enforcement actions without analyzing systemic vulnerabilities or policy implications.
3 example posts
"At the Veterans Health Administration, we have the exquisite gift of a hospice unit, a place to care for veterans when time is short. We care for the sons of mothers who can no longer be here to care for them."
In #APieceofMyMind, a #palliative care #physician reflects on https
Ah yes, Optum, the company that did Tim Walz’s audit of the 14 high risk Medicaid programs consumed by Somali fraud. https://t.co/dNRnjI7wxB
Last week I randomly got a $9,000 bill for a hospital visit from last year. I called them up and they said my insurance decided not to cover it. Why? Because.
.@Aetna can just decide they’re not interested in covering something and then you’re left with an inflated bill to pay h
Exclude posts that announce AI company product launches, model releases, or capability claims (e.g., new Claude versions, OpenAI features, AWS services) without demonstrating how these products are being applied in healthcare delivery, have healthcare validation, or solve documented healthcare problems.
Posts announcing new AI model releases, product features, or company capabilities that lack evidence of healthcare application, validation, or clinical utility.
3 example posts
As I mentioned before, I am now sharing an example from GPT-5.5 Pro, also featured by OpenAI, that really left me stunned by what it is capable of in biomedical science. (full report on the website I created with Codex, link in the thread).
To push GPT-5.5 Pro hard, I uploaded a
Thought I would start posting about interesting things happening at AWS. Not a bad day to start.🚀
Today at #WhatsNextWithAWS we announced a big step forward with @OpenAI on Amazon Bedrock:
1. OpenAI models now available
2. Codex for enterprise development
3. Amazon Bedrock Manag
Great example of what Foundry enables: durable, stateful agents that run across time boundaries, orchestrate tools and models, and close the loop with evaluation and improvement over long-running workflows. @jeffhollan https://t.co/v0yWTDIcn3
Exclude posts from company founders, official corporate accounts (@Figure_robot, @nvidia, @GSK), or product teams announcing new features, launches, or metrics when the post lacks third-party validation, competitive analysis, or concrete healthcare delivery use cases.
Posts from company founders or official accounts announcing product launches, updates, or capabilities without independent validation or healthcare system context.
3 example posts
Today we’re giving an update on ramping F.03 production at BotQ
In the last 120 days, Figure scaled manufacturing 24x - from 1 robot/day to 1 robot/hour
We will manufacture 55 humanoid robots this week https://t.co/Am5Kn53mVE
Thought I would start posting about interesting things happening at AWS. Not a bad day to start.🚀
Today at #WhatsNextWithAWS we announced a big step forward with @OpenAI on Amazon Bedrock:
1. OpenAI models now available
2. Codex for enterprise development
3. Amazon Bedrock Manag
📈 NVIDIA tops AI leaderboards and benchmarks with open models driven by extreme co-design across compute, networking, memory, storage, and software.
This includes models for biology, AI physics, agentic AI, physical AI, robotics, and autonomous vehicles.
By being vertically htt
Exclude posts that discuss AI compute growth, training efficiency, model scaling, or infrastructure breakthroughs (trillion-fold compute growth, power grid challenges, data center expansion) where healthcare is absent or only generically mentioned—the post is about AI infrastructure, not healthcare systems.
Posts about AI compute, training infrastructure, or foundation model breakthroughs that lack specific healthcare delivery or clinical application context.
3 example posts
Thought I would start posting about interesting things happening at AWS. Not a bad day to start.🚀
Today at #WhatsNextWithAWS we announced a big step forward with @OpenAI on Amazon Bedrock:
1. OpenAI models now available
2. Codex for enterprise development
3. Amazon Bedrock Manag
Great example of what Foundry enables: durable, stateful agents that run across time boundaries, orchestrate tools and models, and close the loop with evaluation and improvement over long-running workflows. @jeffhollan https://t.co/v0yWTDIcn3
Since I began work on AI in 2010, training compute for frontier models has grown by one trillion times.
Now we're looking at something like another thousand-fold growth in effective compute by the end of 2028.
1000x the existing 1,000,000,000,000x.
Extraordinary stuff.
Created 2026-05-01 · Updated 2026-05-01
[ai_agent_security_incident_tangential]
Learned4 rejectionsActive
Exclude posts about AI safety vulnerabilities, chatbot exploits, or agent security incidents (e.g., database deletion, API key leaks, jailbreaks) unless they involve a healthcare organization, patient data, or clinical workflow. General AI security incidents without healthcare specificity are tangential.
Posts about AI agent security vulnerabilities, data breaches, or system compromises that lack healthcare-specific context or healthcare entity involvement.
3 example posts
𝗔𝗜 𝗔𝗴𝗲𝗻𝘁𝘀 𝗔𝗿𝗲 𝗔𝗹𝗿𝗲𝗮𝗱𝘆 𝗕𝗲𝗶𝗻𝗴 𝗛𝗶𝗷𝗮𝗰𝗸𝗲𝗱
Researcher Aks Sharma at Manifold found 30 malicious skills on ClawHub turning AI agents into a crypto farming botnet: 10,000 downloads before anyone noticed.
⬩ The attack required zero exploits. Malicious https://t.co/v4oBXPPydu
🚨 Claude broke its own safety rules and deleted an entire company's database in 9 seconds.
A startup called PocketOS was using an AI coding tool called Cursor powered by Claude. The AI was given a simple task in a test environment. It ran into an error and instead of stopping an
clickup is SOC 2 Type 2 certified. ISO 27001. ISO 27017. ISO 27018. ISO 42001. PCI DSS. every compliance badge you can buy.
none of it caught a hardcoded API key leaking 959 customer emails for 15 months. none of it flagged a zero-protection SSRF on a free-tier endpoint. their o
Exclude posts that are primarily founder quotes, company blog-style announcements, or self-promotional statements from biotech or AI startups describing their technical achievements, lab results, or system capabilities (e.g., 'we paired our lab with GPT-5', 'we created a novel protein') unless accompanied by independent peer review, published validation, or demonstrated healthcare impact.
Posts featuring founder commentary or company statements celebrating technical breakthroughs or capabilities without independent validation or healthcare outcome evidence.
3 example posts
Earlier this year we paired our autonomous lab with OpenAI's GPT-5 in a closed-loop experiment: GPT-5 designed cell-free protein synthesis reactions, our RACs ran them, and the model iterated on the results. Over 36,000 experiments and six cycles later, it landed on a reaction ht
Not something you'd see everyday—changing the alphabet of life.
All living organisms are built from 20 amino acids. Now genAI is enabling life to be built with 19 amino acids, making isoleucine dispensable. @ScienceMagazine
https://t.co/7CBn0Xhuxs https://t.co/tkxtCrFx9Y
Today we’re giving an update on ramping F.03 production at BotQ
In the last 120 days, Figure scaled manufacturing 24x - from 1 robot/day to 1 robot/hour
We will manufacture 55 humanoid robots this week https://t.co/Am5Kn53mVE
Exclude posts about semiconductor supply chains, power grids, data center growth, tariff policy, or compute bottlenecks unless they explicitly analyze healthcare-specific impacts on clinical workflows, medical device manufacturing, or healthcare IT systems. Macro-level infrastructure posts without healthcare application are tangential.
Posts about computing infrastructure, energy, tariffs, or macro economic policy that lack direct healthcare application or analysis.
3 example posts
Using 180 years of US data to show that tariff increases reduce imports, output, and manufacturing, with effects operating through both supply and demand channels, from Tamar den Besten, Regis Barnichon, @drkaenzig, and Aayush Singh https://t.co/R1kdtfi3fO https://t.co/wvXj31qs9e
Since I began work on AI in 2010, training compute for frontier models has grown by one trillion times.
Now we're looking at something like another thousand-fold growth in effective compute by the end of 2028.
1000x the existing 1,000,000,000,000x.
Extraordinary stuff.
Follow the bottleneck.
Chips → data centers → grid equipment → power → gas turbines
Grid equipment grew 1%/yr for decades. Then data centers showed up as an entirely new buyer.
Gas turbine makers shipped 5–7 GW/yr. Last year? Orders hit 100 GW.
@maxlbcook on how he https://t.
Exclude posts that express political outrage about healthcare policy (tariffs, Medicaid cuts, tax policy, state regulations) without analyzing the underlying healthcare system incentives, delivery model impacts, or operational consequences of these policies.
Posts framing healthcare issues as political scandals or policy failures without structural or operational diagnosis.
3 example posts
Using 180 years of US data to show that tariff increases reduce imports, output, and manufacturing, with effects operating through both supply and demand channels, from Tamar den Besten, Regis Barnichon, @drkaenzig, and Aayush Singh https://t.co/R1kdtfi3fO https://t.co/wvXj31qs9e
States are rushing “affordability” bills, but most just mask high prices with rebates, mandates, or price caps. @MrRBourne & Nathan Miller argue durable relief means rolling back cost-raising rules and expanding supply.
https://t.co/WG5egT1NfL
$LLY ’s Mounjaro will not be listed on Australia’s PBS after pricing negotiations collapsed.
Eli Lilly walked away from talks with the government, leaving around 450,000 patients without subsidized access.
Patients will continue to pay hundreds of dollars per month out of
Exclude posts announcing AI company product launches, partnerships, or integrations (e.g., OpenAI + AWS, AI agent tools) that lack evidence of healthcare-specific use cases, clinical validation, or healthcare operational adoption.
Posts announcing AI company product launches or partnerships that are loosely tied to healthcare or lack validation of healthcare applicability.
3 example posts
[New] from a16z @speedrun:
Come for the Agent, Stay for the Network
there's a quiet pattern hiding inside the most defensible vertical AI startups right now:
the agent is the wedge
the network is the moat.
here's what I mean:
an HVAC tech needs a part today.
>>Traditionally:
Thought I would start posting about interesting things happening at AWS. Not a bad day to start.🚀
Today at #WhatsNextWithAWS we announced a big step forward with @OpenAI on Amazon Bedrock:
1. OpenAI models now available
2. Codex for enterprise development
3. Amazon Bedrock Manag
Great example of what Foundry enables: durable, stateful agents that run across time boundaries, orchestrate tools and models, and close the loop with evaluation and improvement over long-running workflows. @jeffhollan https://t.co/v0yWTDIcn3
Exclude posts that share drug trial topline results, Phase 3 data, efficacy metrics, or FDA acceptance announcements (e.g., survodutide Phase 3 results, GLP-1 weight loss percentages, hepatitis B treatment approvals) unless the post explicitly connects these to healthcare system dynamics, reimbursement, access barriers, or care delivery innovation.
Posts reporting clinical trial results, drug efficacy data, or pharma announcements without analysis of healthcare delivery, payer, regulatory, or system-level implications.
3 example posts
🚨 Survodutide, a weekly subcutaneous dual GLP-1/glucagon agonist, posts new top-line Phase 3 results today
📊 SYNCHRONIZE-1: Adults with overweight/obesity randomized to survodutide vs. placebo for 76 weeks
⚖️ 16.6% weight loss (efficacy estimand) vs. 3.2% in placebo
85.1% http
$NVO $LLY
Boehringer Ingelheim and Zealand Pharma report strong phase 3 data for obesity drug survodutide.
Patients lost up to 16.6% of body weight over 76 weeks vs. 3.2% for placebo.
The drug targets both GLP-1 and glucagon, a combo approach aimed at boosting weight loss.
⚠️ Sacubitril/Valsartan works. So why aren’t we using it?
The evidence is undeniable:
↓ CV mortality: 20% (RCT) / 10–38% (RWE)
↓ HF hospitalization: 21% (RCT) / 10–16% (RWE)
↓ All-cause mortality: 15% (RCT) / 10–25% (RWE)
Plus: reverse remodeling, less MR, better QoL & https:
Exclude posts that express founder enthusiasm, startup momentum, or platform capability claims (e.g., 'it's as easy to start biotech on GCL as software on AWS,' 'forward deploy to customers') without providing validation, evidence, or analysis of actual healthcare adoption or impact.
Posts expressing founder enthusiasm or startup optimism about biotech, cloud labs, or healthcare platforms without evidence or grounded analysis.
3 example posts
Nothing beats running @ginkgo cloud lab for happy customers!
Not going to stop until it’s as easy to start a biotech startup on GCL as it is to start a software startup on AWS. https://t.co/GDEyT1pI7F
Almost all of my positions selling some kind of AI/agentic SaaS tool have (either by foresight or customer demand) pivoted to some kind of business model where they “forward deploy” to the customer first and then sell the system they create back to them as SaaS. 99% of “normie” b
Started with standard ChatGPT for clinicians asking for a differential for a GI bleed patient. Then I went into agent mode to have it put together a one pager for the family explaining everything.
Of course, this is not a real patient. https://t.co/PEUeCqizT1
Exclude posts that assert broad health claims (e.g., 'GLP-1s improve cardiovascular outcomes,' 'protein absorption limits,' 'AI can now design antibodies') or debate clinical efficacy/safety without rigorous evidence, nuance, or analysis of healthcare system implications.
Posts making broad health or clinical claims or engaging in clinical debates without evidence, nuance, or healthcare systems perspective.
3 example posts
AI can now design antibodies that bind with atomic precision, but not ones that cells can produce. Our preprint closes this gap, delivering a structural principle, an AI-guided rescue pipeline, and adalimumab variants with 20-100x in vivo potency.
https://t.co/GvfgHA5EcU
Attention PK nerds, pharmacologists, and clinicians who actually understand serum levels:
I haven’t seen this discussed, but it could matter for patients priced out of injectables.
If a 25 mg oral semaglutide tablet has ~1% bioavailability, that’s ~0.25 mg systemically… on
U.S. nursing homes are fabricating schizophrenia diagnoses to hide their use of dangerous antipsychotic drugs to subdue dementia patients, a government watchdog report found.
The drugs increase the risk of falls, strokes and death. https://t.co/6SkzWxZfSz
Exclude posts that make broad health claims, report conflicting evidence, or debate clinical efficacy (e.g., 'GLP-1s cause eating disorders' or 'GLP-1s improve cardiovascular outcomes') without addressing how this evidence affects healthcare access, insurance policy, provider decision-making, or healthcare delivery systems.
Posts making broad medical claims or engaging in clinical debates without healthcare system, policy, or implementation context.
2 example posts
"There is also compelling preliminary evidence suggesting that the use of these drugs [GLP-1] could exacerbate and lead to new diagnoses of restrictive
eating disorders, including anorexia nervosa."
@NEJM today
https://t.co/swx1y25F1X https://t.co/bQYlpuFBjM
The only problem with the GLP-1 heart muscle loss narrative is...
... that it's just a narrative.
GLP-1s have reliably improved cardiovascular outcomes in trials, to the point that some research suggests benefit may even be independent of (not reliant on) weight loss.
Exclude posts about energy grids, semiconductor supply chains, tariffs, compute capacity, or macro-economic trends that only loosely connect to healthcare (e.g., 'grid equipment bottlenecks affecting data centers' or 'tariff impacts on manufacturing') unless they provide specific healthcare operational or market impact analysis.
Posts about infrastructure, energy, compute, tariffs, or macroeconomic policy that mention healthcare tangentially but lack healthcare-specific analysis.
3 example posts
Using 180 years of US data to show that tariff increases reduce imports, output, and manufacturing, with effects operating through both supply and demand channels, from Tamar den Besten, Regis Barnichon, @drkaenzig, and Aayush Singh https://t.co/R1kdtfi3fO https://t.co/wvXj31qs9e
Since I began work on AI in 2010, training compute for frontier models has grown by one trillion times.
Now we're looking at something like another thousand-fold growth in effective compute by the end of 2028.
1000x the existing 1,000,000,000,000x.
Extraordinary stuff.
Follow the bottleneck.
Chips → data centers → grid equipment → power → gas turbines
Grid equipment grew 1%/yr for decades. Then data centers showed up as an entirely new buyer.
Gas turbine makers shipped 5–7 GW/yr. Last year? Orders hit 100 GW.
@maxlbcook on how he https://t.
Exclude posts that assert broad health claims (e.g., 'GLP-1s exacerbate eating disorders,' 'GLP-1s have improved cardiovascular outcomes,' 'protein per meal is wasted') or debate drug/treatment efficacy without citing peer-reviewed evidence, randomized trial results, or systematic analysis.
Posts making broad health claims or debating drug safety/efficacy without citing evidence, trial data, or systematic analysis.
3 example posts
⚠️ Sacubitril/Valsartan works. So why aren’t we using it?
The evidence is undeniable:
↓ CV mortality: 20% (RCT) / 10–38% (RWE)
↓ HF hospitalization: 21% (RCT) / 10–16% (RWE)
↓ All-cause mortality: 15% (RCT) / 10–25% (RWE)
Plus: reverse remodeling, less MR, better QoL & https:
"There is also compelling preliminary evidence suggesting that the use of these drugs [GLP-1] could exacerbate and lead to new diagnoses of restrictive
eating disorders, including anorexia nervosa."
@NEJM today
https://t.co/swx1y25F1X https://t.co/bQYlpuFBjM
A 65% cholesterol reduction has been available since 2015. Almost nobody could get it. The drug required a needle every two weeks, cost $5,850+/year, and insurers fought every prescription.
@Merck spent a decade figuring out how to put the same mechanism in a pill. Enlicitide:
Exclude posts that report on healthcare policy, insurance practices, or regulatory failures primarily as political outrage or moral scandal (e.g., 'they denied care and people died', 'insurers are evil') without substantive analysis of why those system failures occur, how they propagate operationally, or what structural changes would prevent recurrence. Moral outrage without systems thinking is political commentary, not healthcare tech insight.
Posts framing healthcare policy, insurance decisions, or regulatory issues as political scandals without analyzing healthcare system mechanics or operational consequences
3 example posts
NY rakes in $6.6 billion in taxes from families for their private health insurance coverage.
That’s $1,760 a year per family on top of their premiums.
Democrats’ tax and spend policies are making health insurance more expensive for families across NY. https://t.co/ggrjqFKrv4
States are rushing “affordability” bills, but most just mask high prices with rebates, mandates, or price caps. @MrRBourne & Nathan Miller argue durable relief means rolling back cost-raising rules and expanding supply.
https://t.co/WG5egT1NfL
@PirateWires He's objectively correct. Brian Thompson made decisions that led to denials of medical care, and people died. He used Ai to find ways to deny claims ffs. Brian Thompson has more blood on his hands than whoever shot him
Exclude posts that report healthcare fraud, insurance denials, or healthcare company misconduct primarily as criminal wrongdoing or enforcement news without analyzing systemic failures, incentive structures, or technology solutions to prevent recurrence. Crime reporting without healthcare systems context is not tech strategy content.
Posts reporting healthcare fraud, billing scandals, or misconduct as crime/enforcement news without healthcare systems analysis.
3 example posts
What happened during the Change disaster?
Hospitals got bailed out.
CMS advanced $3.2 billion to hospitals between March and June 2024. UnitedHealth/Optum extended $6.5 billion in interest-liquidity through April 30.
Mercy, I looked it up, specifically had 218 days of cash
U.S. nursing homes are fabricating schizophrenia diagnoses to hide their use of dangerous antipsychotic drugs to subdue dementia patients, a government watchdog report found.
The drugs increase the risk of falls, strokes and death. https://t.co/6SkzWxZfSz
@PirateWires He's objectively correct. Brian Thompson made decisions that led to denials of medical care, and people died. He used Ai to find ways to deny claims ffs. Brian Thompson has more blood on his hands than whoever shot him
Exclude posts that focus on AI safety incidents, security vulnerabilities, or model breaches (Claude deleting databases, API key leaks, malicious agent skills) where the healthcare connection is superficial—the post is really about AI infrastructure risk, not healthcare delivery or clinical impact.
Posts about AI model security breaches, vulnerability discoveries, or safety incidents that are tangentially framed as healthcare-relevant but lack healthcare-specific context or implications.
3 example posts
𝗔𝗜 𝗔𝗴𝗲𝗻𝘁𝘀 𝗔𝗿𝗲 𝗔𝗹𝗿𝗲𝗮𝗱𝘆 𝗕𝗲𝗶𝗻𝗴 𝗛𝗶𝗷𝗮𝗰𝗸𝗲𝗱
Researcher Aks Sharma at Manifold found 30 malicious skills on ClawHub turning AI agents into a crypto farming botnet: 10,000 downloads before anyone noticed.
⬩ The attack required zero exploits. Malicious https://t.co/v4oBXPPydu
🚨 Claude broke its own safety rules and deleted an entire company's database in 9 seconds.
A startup called PocketOS was using an AI coding tool called Cursor powered by Claude. The AI was given a simple task in a test environment. It ran into an error and instead of stopping an
clickup is SOC 2 Type 2 certified. ISO 27001. ISO 27017. ISO 27018. ISO 42001. PCI DSS. every compliance badge you can buy.
none of it caught a hardcoded API key leaking 959 customer emails for 15 months. none of it flagged a zero-protection SSRF on a free-tier endpoint. their o
Exclude posts about non-healthcare industries, geopolitics, labor market trends, or general business/tech topics that mention healthcare only as a tangential reference or forced label. The post must be fundamentally about healthcare systems, not merely tagged with healthcare terms.
Posts about non-healthcare domains (space, labor markets, software business models, UFOs) that lack substantive connection to healthcare systems.
3 example posts
Using 180 years of US data to show that tariff increases reduce imports, output, and manufacturing, with effects operating through both supply and demand channels, from Tamar den Besten, Regis Barnichon, @drkaenzig, and Aayush Singh https://t.co/R1kdtfi3fO https://t.co/wvXj31qs9e
Software is not a moat
Over the last 15+ years, nearly every innovation @EvanSpiegel and his team shipped got copied. Stories. AR glasses. Swipe-based navigation. The camera-first interface.
And yet @Snapchat is the only independent consumer social app that has lasted. Nearly 1
AI could, in theory, automate 57% of US work hours. Yet most human skills remain relevant.
The future of work is not human or machine – but a partnership between people, agents, and robots.
Read our latest research on skill partnerships in the age of AI: https://t.co/h1K56uPqPo
Exclude posts that make broad or speculative health claims (e.g., side effect narratives, unproven treatment efficacy, unsupported dietary claims) or promote medical interventions without citing clinical evidence, RCT data, or healthcare provider validation. Unsubstantiated medical claims are not healthcare tech news.
Posts making broad health claims or promoting unvalidated medical interventions without clinical evidence or healthcare validation.
3 example posts
Interpretability is built on a few core assumptions.
Two of our ICLR 2026 @iclr_conf papers suggest some of those assumptions are wrong (or at least highly incomplete).
1. Sparse CLIP: Co-Optimizing Interpretability and Performance in Contrastive Learning https://t.co/3JzHDqRj3
AI can now design antibodies that bind with atomic precision, but not ones that cells can produce. Our preprint closes this gap, delivering a structural principle, an AI-guided rescue pipeline, and adalimumab variants with 20-100x in vivo potency.
https://t.co/GvfgHA5EcU
"There is also compelling preliminary evidence suggesting that the use of these drugs [GLP-1] could exacerbate and lead to new diagnoses of restrictive
eating disorders, including anorexia nervosa."
@NEJM today
https://t.co/swx1y25F1X https://t.co/bQYlpuFBjM
Exclude posts that discuss GLP-1, semaglutide, retatrutide, survodutide, or peptide markets solely through pricing dynamics, patient weight loss percentages, script growth rates, or personal weight-loss narratives without addressing healthcare system implications (insurance coverage, clinical guidelines, access equity, clinical workflow integration).
Posts about GLP-1 drugs, peptides, or weight-loss medications focused on market pricing, competition, and personal usage without healthcare system analysis.
3 example posts
🚨 Survodutide, a weekly subcutaneous dual GLP-1/glucagon agonist, posts new top-line Phase 3 results today
📊 SYNCHRONIZE-1: Adults with overweight/obesity randomized to survodutide vs. placebo for 76 weeks
⚖️ 16.6% weight loss (efficacy estimand) vs. 3.2% in placebo
85.1% http
$NVO $LLY
Boehringer Ingelheim and Zealand Pharma report strong phase 3 data for obesity drug survodutide.
Patients lost up to 16.6% of body weight over 76 weeks vs. 3.2% for placebo.
The drug targets both GLP-1 and glucagon, a combo approach aimed at boosting weight loss.
This is just two GLP-1s, one peptide, one use case
what happens when off-label prescribing ramps up
what happens when retatrutide hits the market
what happens when other peptides become compoundable
chapter one
Exclude posts whose primary subject is fintech, general software business dynamics, or AI company infrastructure (e.g., Revolut banking ML, Snap Stories/AR, Microsoft Word feature, software seat models) even if they mention healthcare or use healthcare as an analogy. The post must center on healthcare systems, not use healthcare as a secondary reference point.
Posts about non-healthcare domains (fintech, software business models, general AI infrastructure) that use loose healthcare framing or tangential healthcare references.
3 example posts
The problem with Reality Labs is not ambition. It is time. AI turned into revenue faster because it improves existing workflows. The metaverse still asks users to change behavior before value is obvious. https://t.co/lopZiUwGU5
Microsoft just turned an $11 billion startup into a Word feature.
Harvey raised $200M at an $11B valuation in March on the bet that legal AI is its own surface. The numbers held that up. $190M ARR per TechCrunch's December reporting. 100,000 lawyers across 1,300 organizations in
Revolut just moved the IP of banking into a model.
Trained on 24 billion banking events in 111 countries.
One foundation model replacing six separate ML systems.
Credit scoring: +130%
Fraud recall: +65%
Marketing engagement: +79%
The model is the new moat.
Exclude posts that are primarily personal founder/CEO statements about internal progress ('we reduced seats from 10+ to 2,' 'we're not going to stop until it's easy to start a biotech startup on GCL') or self-promotional tool announcements without evidence of clinical validation, healthcare adoption, or healthcare systems change.
Posts expressing founder optimism or startup progress announcements without substantive evidence of healthcare impact or system-level significance.
2 example posts
Nothing beats running @ginkgo cloud lab for happy customers!
Not going to stop until it’s as easy to start a biotech startup on GCL as it is to start a software startup on AWS. https://t.co/GDEyT1pI7F
Almost all of my positions selling some kind of AI/agentic SaaS tool have (either by foresight or customer demand) pivoted to some kind of business model where they “forward deploy” to the customer first and then sell the system they create back to them as SaaS. 99% of “normie” b
Exclude posts that describe a single patient case, one clinician's workflow observation, or a personal clinical incident (e.g., 'I sat with a patient today...', 'low grade fever, mildly tachycardic...') unless the post explicitly connects it to a broader healthcare system pattern, policy failure, or scalable solution.
Posts sharing a single clinical observation, patient case, or clinician anecdote without generalizable systems-level insight
3 example posts
Nothing beats running @ginkgo cloud lab for happy customers!
Not going to stop until it’s as easy to start a biotech startup on GCL as it is to start a software startup on AWS. https://t.co/GDEyT1pI7F
low grade fever, mildly tachycardic, weakness, nothing focal, no alarm signs/symptoms
epic sepsis alert triggered
vanc/pip-tazo given, lactate checked
flu+
sepsis metric met
care worse
lather, rinse, repeat
Metric based "QI" does net harm
I sat with a patient today who first noticed a change in October. It’s April now. In all those months of appointments and follow-ups, her breast had only truly been looked at twice. That stayed with me.
If something has changed with your body — especially something under your ht
Created 2026-04-27 · Updated 2026-04-27
[ai_agent_technical_capability_tangent]
Learned · 4 rejections · Active
Exclude posts that showcase AI agent technical features, design patterns, framework announcements, or benchmark performance unless the post explicitly demonstrates healthcare-specific application, clinical validation, or healthcare workflow integration. Generic AI agent capability posts with loose healthcare framing do not qualify.
Posts highlighting AI agent technical capabilities, architecture, or performance metrics without healthcare-specific application or clinical validation.
3 example posts
[New] from a16z @speedrun:
Come for the Agent, Stay for the Network
there's a quiet pattern hiding inside the most defensible vertical AI startups right now:
the agent is the wedge
the network is the moat.
here's what I mean:
an HVAC tech needs a part today.
>>Traditionally:
Great example of what Foundry enables: durable, stateful agents that run across time boundaries, orchestrate tools and models, and close the loop with evaluation and improvement over long-running workflows. @jeffhollan https://t.co/v0yWTDIcn3
Is the business model for traditional software companies in permanent decline due to AI Agents not needing seats?
2 examples:
Re: @salesforce, we’ve reduced our seats from 10+ to 2 human seats and 1 API seat. And yet, we now pay $22,000 a year, 83% up from $12,000. Why? Our
Exclude posts that discuss AI model capabilities (e.g., GPT-5.5 taking over corporate networks, Claude Code architecture, zero-day vulnerabilities) that are tangentially labeled as healthcare-relevant but focus on technical AI capability, not on how the capability solves a healthcare delivery, patient outcomes, or healthcare system problem.
Posts about AI model technical capabilities, safety, or vulnerabilities framed loosely as healthcare-relevant but lacking concrete healthcare application.
2 example posts
A researcher gave an AI agent access to his shell, his files, and his network. Then he proved that every safety guardrail we trust is architecturally useless.
It cannot tell the difference between your instructions and a hacker's.
The paper is called Parallax: Why AI Agents htt
This is by far the most important result of the entire GPT-5.5 release:
In a cyber evaluation GPT-5.5 was able to take over a simulated corporate network in 1/10 trials with a budget of 100M tokens.
Previously, the only model that was able to solve this task was Claude Mythos,
Created 2026-04-27 · Updated 2026-04-27
[ai_infrastructure_compute_hype_tangential]
Learned · 4 rejections · Active
Exclude posts that discuss AI infrastructure (compute, chips, data centers, power grids, semiconductors) or general AI model capabilities without demonstrating concrete, specific healthcare application or healthcare systems insight.
Posts about AI compute, data centers, chips, and energy infrastructure with only loose or speculative healthcare connection.
3 example posts
Since I began work on AI in 2010, training compute for frontier models has grown by one trillion times.
Now we're looking at something like another thousand-fold growth in effective compute by the end of 2028.
1000x the existing 1,000,000,000,000x.
Extraordinary stuff.
Follow the bottleneck.
Chips → data centers → grid equipment → power → gas turbines
Grid equipment grew 1%/yr for decades. Then data centers showed up as an entirely new buyer.
Gas turbine makers shipped 5–7 GW/yr. Last year? Orders hit 100 GW.
@maxlbcook on how he https://t.
This is by far the most important result of the entire GPT-5.5 release:
In a cyber evaluation GPT-5.5 was able to take over a simulated corporate network in 1/10 trials with a budget of 100M tokens.
Previously, the only model that was able to solve this task was Claude Mythos,
Created 2026-04-26 · Updated 2026-04-26
[non_healthcare_domain_with_loose_framing]
Learned · 4 rejections · Active
Exclude posts about general tech business, labor market trends, infrastructure, or macro-economics that only loosely mention healthcare or are tagged as healthcare without substantive healthcare-specific content (e.g., AI agent SaaS adoption in non-healthcare verticals, labor market statistics, supply chain issues unrelated to healthcare delivery).
Posts about general business, tech, labor markets, or infrastructure that reference healthcare tangentially or not at all
3 example posts
Is the business model for traditional software companies in permanent decline due to AI Agents not needing seats?
2 examples:
Re: @salesforce, we’ve reduced our seats from 10+ to 2 human seats and 1 API seat. And yet, we now pay $22,000 a year, 83% up from $12,000. Why? Our
Nothing beats running @ginkgo cloud lab for happy customers!
Not going to stop until it’s as easy to start a biotech startup on GCL as it is to start a software startup on AWS. https://t.co/GDEyT1pI7F
Almost all of my positions selling some kind of AI/agentic SaaS tool have (either by foresight or customer demand) pivoted to some kind of business model where they “forward deploy” to the customer first and then sell the system they create back to them as SaaS. 99% of “normie” b
Created 2026-04-26 · Updated 2026-04-27
[ai_agent_capability_tangent]
Learned · 4 rejections · Active
Exclude posts that focus on AI agent architecture (e.g., Claude Code design, agentic SaaS patterns, safety guardrails, cyber vulnerabilities) when the healthcare connection is tangential or speculative, and the post does not analyze how these capabilities change clinical decision-making or health system operations.
Posts about AI agent technical capabilities, architectural design, or safety vulnerabilities without healthcare-specific application.
3 example posts
A must read for anyone interested in building practical AI systems in 2026:
Dive into Claude Code: The Design Space of Today's and Future AI Agent Systems
The paper explains the architecture of a modern production-grade AI agent system (Claude Code) by analyzing its source http
Almost all of my positions selling some kind of AI/agentic SaaS tool have (either by foresight or customer demand) pivoted to some kind of business model where they “forward deploy” to the customer first and then sell the system they create back to them as SaaS. 99% of “normie” b
🚨 Anthropic's own team just showed how to build production AI agents.
30 minutes. free. from the engineers who built it.
watch the workshop. bookmark it.
you spent 6 months managing every workflow yourself.
they just showed how to put all of it on autopilot.
Then read the ht
Created 2026-04-26 · Updated 2026-04-27
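Each learned rule above records how many rejected posts taught it before it went active. A minimal sketch of how a promotion threshold like that might work — the threshold value, function name, and pattern-tag grouping are assumptions for illustration, not the actual system's logic:

```python
from collections import Counter

# Assumed threshold: the learned entries in this section each show 4 rejections.
MIN_REJECTIONS = 4

def promotable_patterns(rejections: list[tuple[str, str]]) -> list[str]:
    """Given (pattern_tag, post_text) pairs for rejected posts, return the
    pattern tags with enough rejections to justify drafting a learned rule."""
    counts = Counter(tag for tag, _ in rejections)
    return [tag for tag, n in counts.items() if n >= MIN_REJECTIONS]
```

Under this sketch, a rejection pattern only becomes a candidate rule once repeated evidence accumulates, so one-off rejections never spawn a standing exclusion.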
[ai_safety_and_security_tangential]
Learned · 4 rejections · Active
Exclude posts that focus on AI security vulnerabilities, zero-day exploits, or agent jailbreaks (e.g., AI taking over networks, breaking safety guardrails) without tying the concern to a specific healthcare delivery, clinical decision, or patient outcome scenario.
Posts about AI system vulnerabilities, cyberattacks, or safety guardrails that lack healthcare-specific application or analysis.
2 example posts
A researcher gave an AI agent access to his shell, his files, and his network. Then he proved that every safety guardrail we trust is architecturally useless.
It cannot tell the difference between your instructions and a hacker's.
The paper is called Parallax: Why AI Agents htt
This is by far the most important result of the entire GPT-5.5 release:
In a cyber evaluation GPT-5.5 was able to take over a simulated corporate network in 1/10 trials with a budget of 100M tokens.
Previously, the only model that was able to solve this task was Claude Mythos,
Exclude posts that announce AI company product launches, user growth, feature releases, or business metrics (e.g., 'Mesa filesystem for enterprise AI agents', 'Cursor adoption metrics') without demonstrating healthcare-specific clinical validation, healthcare workflow integration, or healthcare system problem-solving.
Posts about AI company product launches, user adoption, or business metrics that lack healthcare-specific validation or use case.
3 example posts
Today we’re giving an update on ramping F.03 production at BotQ
In the last 120 days, Figure scaled manufacturing 24x - from 1 robot/day to 1 robot/hour
We will manufacture 55 humanoid robots this week https://t.co/Am5Kn53mVE
Introducing Mesa: the most powerful filesystem ever built, designed specifically for enterprise AI agents.
Every team building agents eventually hits the same wall: where do the files live?
Not the chat history, the actual artifacts the agent works on.
> The contracts your age
Great example of what Foundry enables: durable, stateful agents that run across time boundaries, orchestrate tools and models, and close the loop with evaluation and improvement over long-running workflows. @jeffhollan https://t.co/v0yWTDIcn3
Exclude posts that dismiss or broadly counter established medical concerns or trial data with vague assertions ('it's just a narrative') or that make unsubstantiated claims about drug safety/efficacy without citing peer-reviewed evidence or acknowledging legitimate scientific debate.
Posts making sweeping health claims or dismissing medical concerns (like GLP-1 cardiac risk narratives) as 'just narratives' without evidentiary grounding.
1 example post
The only problem with the GLP-1 heart muscle loss narrative is...
... that it's just a narrative.
GLP-1s have reliably improved cardiovascular outcomes in trials, to the point that some research suggests benefit may even be independent of (not reliant on) weight loss.
Exclude posts that amplify healthcare claims made by political figures (RFK Jr., senators, HHS officials) without independent verification, healthcare systems analysis, or evidence of actual healthcare delivery impact. Posts must analyze policy substance, not echo political announcements.
Posts amplifying healthcare policy claims or victories by political figures without independent analysis or evidence of healthcare systems impact.
3 example posts
What $1 Billion a Day Buys in American Health Care
The U.S. is spending $1 billion/day on the war in Iran — over a year, that would cover 37 million Medicaid enrollees. Congress just cut $911 billion from the program because it was too expensive.
Read & subscribe (for free!)
RFK Jr. calls out Democrat House representatives to their face for ignoring chronic disease while claiming to care about public health.
“The Congressman was talking about the deaths from infectious disease, which are a couple thousand a year.”
“90% of the people who die in this
🚨BREAKING: HHS Sec. RFK Jr. just announced President Trump has SAVED and FOUND 138,000 missing children lost under Biden.
"Many have been trafficked, undergone slavery, s*xual abuse."
Follow: @BoLoudon https://t.co/p6YKEm38T7
Exclude posts that report GLP-1, obesity drug, or peptide trial results, market uptake, pricing negotiations, or competitive positioning (e.g., script comparisons, market share, dosing claims) unless they analyze structural healthcare system impacts, access barriers, or clinical decision-making frameworks.
Posts about GLP-1, peptide, or obesity drug market performance, pricing, or competitive dynamics without healthcare system context.
3 example posts
🚨 Survodutide, a weekly subcutaneous dual GLP-1/glucagon agonist, posts new top-line Phase 3 results today
📊 SYNCHRONIZE-1: Adults with overweight/obesity randomized to survodutide vs. placebo for 76 weeks
⚖️ 16.6% weight loss (efficacy estimand) vs. 3.2% in placebo
85.1% http
$NVO $LLY
Boehringer Ingelheim and Zealand Pharma report strong phase 3 data for obesity drug survodutide.
Patients lost up to 16.6% of body weight over 76 weeks vs. 3.2% for placebo.
The drug targets both GLP-1 and glucagon, a combo approach aimed at boosting weight loss.
This is just two GLP-1s, one peptide, one use case
what happens when off-label prescribing ramps up
what happens when retatrutide hits the market
what happens when other peptides become compoundable
chapter one
Created 2026-04-24 · Updated 2026-05-03
[unsubstantiated_fringe_medical_claims]
Learned · 4 rejections · Active
Exclude posts that make sweeping medical claims (e.g., about protein absorption, vaccine safety tests, smart TV health risks, vaccine ingredients) without citing peer-reviewed studies, proper methodology context, or acknowledging limitations and counterevidence. Include claims about unvalidated treatments, drug dosing myths, or sensationalized health risks.
Posts making bold medical claims without peer-reviewed evidence, proper context, or clinical validation.
3 example posts
"Your body can only use 25-30g of protein per meal. Anything above that gets wasted."
This claim has been repeated in fitness nutrition for over a decade, and it was built on studies that measured the right thing over the wrong timescale.
Moore 2009 gave six young men 0, 5, ht
As far as I know this is the only naturally-derived, classical psychedelic, that has killed people.
Ayahuasca has some deaths, but it's unclear what the cause was, and unlikely directly related to its cardiovascular risk profile. https://t.co/DatuHiBOTX
🚨BREAKING: A peer reviewed study just confirmed your smart TV is taking screenshots of your screen every 15 seconds and sending them to company servers.
Samsung every minute. LG every 15 seconds. Running even when you are using it as a monitor.
Here is how to stop it:
Created 2026-04-23 · Updated 2026-04-23
[geopolitical_military_conflict_coverage]
Learned · 4 rejections · Active
Exclude posts that cover geopolitical conflicts, ceasefires, military announcements, or defense policy (e.g., Lebanon-Israel ceasefire, Ukraine war votes) unless they directly analyze healthcare system impacts, displaced population health, or pandemic/epidemic consequences.
Posts reporting on geopolitical tensions, military conflicts, ceasefires, or defense policy without healthcare or public health angle.
3 example posts
President Donald J. Trump announces a 10-day ceasefire between Lebanon and Israel.
"It has been my Honor to solve 9 Wars across the World, and this will be my 10th, so let's, GET IT DONE!" https://t.co/YujXwyUReM
"I will be inviting the Prime Minister of Israel, Bibi Netanyahu, and the President of Lebanon, Joseph Aoun, to the White House... Both sides want to see PEACE, and I believe that will happen, quickly!" - President Donald J. Trump 🇺🇸 https://t.co/KFipIMmFOD
Breaking news: President Trump announced a pause in fighting in Lebanon.
Lebanon and Israel “agreed that in order to achieve PEACE between their Countries, they will formally begin a 10 Day CEASEFIRE,” Trump said in a social media post.
https://t.co/QHKR5ewrMN
Exclude posts that discuss workforce disruption, automation of tasks, or labor market impact of AI using broad claims or generalist data (e.g., '57% of US work hours could be automated') without specific analysis of healthcare workforce roles, clinical workflows, or health system employment impacts.
Posts about AI-driven job displacement, task-level automation, or labor market effects presented as generalist macro commentary without healthcare-specific analysis.
3 example posts
🚨 BREAKING: Anthropic new research finds that AI’s impact on jobs is primarily at the task level.
Rather than eliminating jobs, it is progressively taking over the functions that define them and gradually absorbing the core work in many jobs/roles.
The paper, “Labor Market http
AI could, in theory, automate 57% of US work hours. Yet most human skills remain relevant.
The future of work is not human or machine – but a partnership between people, agents, and robots.
Read our latest research on skill partnerships in the age of AI: https://t.co/h1K56uPqPo
Is the business model for traditional software companies in permanent decline due to AI Agents not needing seats?
2 examples:
Re: @salesforce, we’ve reduced our seats from 10+ to 2 human seats and 1 API seat. And yet, we now pay $22,000 a year, 83% up from $12,000. Why? Our
Created 2026-04-23 · Updated 2026-04-29
[cybersecurity_vulnerability_tangential]
Learned · 4 rejections · Active
Exclude posts about AI model vulnerabilities, cyberattacks, security exploits, or AI safety guardrails unless they are directly tied to a healthcare delivery system, patient data, or clinical decision-making. Abstract AI security concerns without healthcare specificity do not qualify.
Posts about AI/cybersecurity vulnerabilities, zero-day exploits, or model safety that lack specific healthcare application context.
3 example posts
🚨 SaaS platform ClickUp, used by 85% of the Fortune 500, has been leaking customer emails through its homepage for at least 465 days, and counting.
ClickUp has a $4 billion valuation. They are SOC 2 Type 2, ISO 27001, ISO 27017, ISO 27018, ISO 42001, and PCI DSS certified. The f
A researcher gave an AI agent access to his shell, his files, and his network. Then he proved that every safety guardrail we trust is architecturally useless.
It cannot tell the difference between your instructions and a hacker's.
The paper is called Parallax: Why AI Agents htt
This is by far the most important result of the entire GPT-5.5 release:
In a cyber evaluation GPT-5.5 was able to take over a simulated corporate network in 1/10 trials with a budget of 100M tokens.
Previously, the only model that was able to solve this task was Claude Mythos,
Created 2026-04-23 · Updated 2026-04-28
[speculative_sci_fi_or_robotics_applications]
Learned · 4 rejections · Active
Exclude posts about humanoid robots, robot competitions, speculative sci-fi applications, or robotics breakthroughs (e.g., half-marathon times, Tesla Optimus production)—unless the post demonstrates a concrete, deployed healthcare use case or clinical validation.
Posts about humanoid robots, speculative future applications, or non-healthcare robotics competitions that tangentially reference healthcare.
2 example posts
Elon Musk says Optimus could start being useful outside Tesla as soon as next year.
$TSLA is ramping production, building a second Optimus factory at Giga Texas and plans to unveil the V3 design around mid-year. https://t.co/Pfvs1ctFvi
In Beijing's 2026 humanoid robot half-marathon, HONOR's Lightning completed the 21 km course in 50:26 minute.
Beat current human men's half-marathon world record of 57:20.
Last year's winner took over 2 hours 40 minutes.
Massive progress in 12 month
https://t.co/OcZJ66ebWD
Exclude posts that discuss energy, grid equipment, data center growth, chip manufacturing, power supply, or compute infrastructure as macro trends without demonstrating how these challenges impact healthcare delivery, clinical operations, or health tech deployment.
Posts about energy, data centers, chips, and grid infrastructure without concrete healthcare application
2 example posts
Since I began work on AI in 2010, training compute for frontier models has grown by one trillion times.
Now we're looking at something like another thousand-fold growth in effective compute by the end of 2028.
1000x the existing 1,000,000,000,000x.
Extraordinary stuff.
Follow the bottleneck.
Chips → data centers → grid equipment → power → gas turbines
Grid equipment grew 1%/yr for decades. Then data centers showed up as an entirely new buyer.
Gas turbine makers shipped 5–7 GW/yr. Last year? Orders hit 100 GW.
@maxlbcook on how he https://t.
Exclude posts reporting general labor market disruption statistics, employment decline across age cohorts, or macro job displacement trends from AI adoption — unless the analysis connects directly to healthcare workforce roles, hiring, or operational changes in healthcare settings.
Posts about AI job displacement and labor market trends without healthcare-specific workforce analysis
3 example posts
A major milestone just landed quietly: for the first time ever, half of all employed Americans use AI at work. Gallup's Q1 2026 survey of nearly 24,000 workers shows that adoption has more than doubled since 2023, when only 21% reported any AI use. https://t.co/jmQga9tbWT
I think we now have real evidence that AI exposure is associated with job decline for age <25. The Canary in the Coalmine paper addresses a lot of concerns. While economic science takes time; now is the time to think about policy responses.
@erikbryn @BharatKChandar @RuyuChen
In Beijing's 2026 humanoid robot half-marathon, HONOR's Lightning completed the 21 km course in 50:26 minute.
Beat current human men's half-marathon world record of 57:20.
Last year's winner took over 2 hours 40 minutes.
Massive progress in 12 month
https://t.co/OcZJ66ebWD
Exclude posts about compute scaling, power grids, data center equipment, semiconductor supply chains, or energy infrastructure unless they explicitly connect to healthcare delivery, clinical AI deployment, or healthcare-specific capacity challenges. Generalist infrastructure hype without healthcare application should be rejected.
Posts about compute, power infrastructure, or data center scaling without healthcare system application.
2 example posts
Since I began work on AI in 2010, training compute for frontier models has grown by one trillion times.
Now we're looking at something like another thousand-fold growth in effective compute by the end of 2028.
1000x the existing 1,000,000,000,000x.
Extraordinary stuff.
Follow the bottleneck.
Chips → data centers → grid equipment → power → gas turbines
Grid equipment grew 1%/yr for decades. Then data centers showed up as an entirely new buyer.
Gas turbine makers shipped 5–7 GW/yr. Last year? Orders hit 100 GW.
@maxlbcook on how he https://t.
Exclude posts that document healthcare fraud, insurance denials, billing abuse, or practitioner misconduct primarily as moral outrage or vilification (e.g., 'Brian Thompson deserved it') without analyzing the systemic incentives, regulatory failures, or structural reforms needed.
Posts reporting healthcare fraud, billing abuse, or insurance denials as moral outrage without systems-level analysis or policy solutions.
3 example posts
Last week I randomly got a $9,000 bill for a hospital visit from last year. I called them up and they said my insurance decided not to cover it. Why? Because.
.@Aetna can just decide they’re not interested in covering something and then you’re left with an inflated bill to pay h
NY rakes in $6.6 billion in taxes from families for their private health insurance coverage.
That’s $1,760 a year per family on top of their premiums.
Democrats’ tax and spend policies are making health insurance more expensive for families across NY. https://t.co/ggrjqFKrv4
What happened during the Change disaster?
Hospitals got bailed out.
CMS advanced $3.2 billion to hospitals between March and June 2024. UnitedHealth/Optum extended $6.5 billion in interest-liquidity through April 30.
Mercy, I looked it up, specifically had 218 days of cash
Exclude posts that share clinical trial outcomes, phase study results, or basic science observations as isolated data points without connecting to healthcare delivery, access, reimbursement, or systemic adoption challenges.
Posts reporting clinical trial results, drug efficacy data, or basic research findings without analysis of healthcare system implications or implementation barriers
3 example posts
This is now published – the first win for factor XI inhibition in ischemic stroke
The reason it's so interesting is that factor XI inhibition reduces the risk of pathological clotting without increasing the risk of bleeding
The idea came from genetic evidence: humans with https
New promising phase 1 study for lung cancer @NEJM *
Zongertinib in HER2-Mutant NSCLC
-ORR 76% (tumor shrinkage in most patients)
-PFS 14.4 mo (disease control)
-Brain mets: 47% response
✅ https://t.co/jVN8TuRJcg
Yesterday, @RandDWorld featured us twice.
@ProQR turns to Ginkgo’s autonomous lab to scale AI-enabled RNA editing discovery: https://t.co/DyENAd4VdM
Ginkgo’s CEO says biotech needs its Waymo moment: https://t.co/kW27eBjAmf
Want to learn more about our partnership with ProQR? h
Created 2026-04-18 · Updated 2026-04-20
[ai_model_capability_tangent]
Learned · 4 rejections · Active
Exclude posts that celebrate AI model technical capabilities (speed, accuracy, agentic behavior, code generation) demonstrated in non-healthcare contexts (cybersecurity, software engineering, space) unless the post explicitly argues why that capability is transformative for a specific healthcare challenge.
Posts showcasing AI model capabilities (coding speed, vulnerability detection, network takeover) applied to non-healthcare domains or with only loose healthcare framing.
3 example posts
Guillermo reports "we believe the attacking group to be highly sophisticated and, I strongly suspect, significantly accelerated by AI. They moved with surprising velocity and in-depth understanding of Vercel"
Alex Stamos warns us that defensive agents with autonomy and https://t
Boltz-2 just got a major speed upgrade. 🚀
We’re releasing Lightning-Boltz, a local, GPU-accelerated framework free from public MSA server bottlenecks.⚡
On a single L40S, total runtime drops to 28s per input vs 89s with the rate-limited server and 298s with MMseqs-CPU.
1/5 🧵 ht
> Vercel got pawned
> severe enough to notify law enforcement
> the only advice: “review your environment variables”
> what does that even mean?
> $10B company, and this is how you communicate
Cyber attacks ramping fast, starting to see why Anthropic is scared to
Exclude posts that cite labor market studies, employment statistics, or job displacement trends related to AI exposure—unless the post specifically analyzes impact on healthcare workforce (nurses, physicians, billing staff, etc.). Generic labor market analysis with AI + healthcare framing does not qualify.
Posts about AI's impact on employment, labor market trends, and job displacement presented as macro economic commentary without healthcare-specific workforce analysis.
3 example posts
A major milestone just landed quietly: for the first time ever, half of all employed Americans use AI at work. Gallup's Q1 2026 survey of nearly 24,000 workers shows that adoption has more than doubled since 2023, when only 21% reported any AI use. https://t.co/jmQga9tbWT
I think we now have real evidence that AI exposure is associated with job decline for age <25. The Canary in the Coalmine paper addresses a lot of concerns. While economic science takes time; now is the time to think about policy responses.
@erikbryn @BharatKChandar @RuyuChen
Among workers ages 22–25, employment in the most AI-exposed occupations has fallen roughly 16% relative to the least-exposed. This is after controlling for firm-type effects, which isolate AI exposure from broader shocks like interest rate pressure or sector slowdowns. The gap ht
Exclude posts that discuss GLP-1 drugs, peptides, or weight loss through personal anecdotes, pricing controversy, supply chain gossip, or macro economic framing (e.g., 'Hims pricing,' personal side effects, regulatory classification debates) without substantive healthcare delivery, reimbursement, or clinical implementation analysis.
Posts about GLP-1 drugs, peptides, or weight loss in macro-economic or personal anecdote framing without healthcare delivery systems context
3 example posts
Peptide synthesis is one of the hardest things to do right
Semaglutide comes out correct only 55% of the time. BPC-157 ~74%. every amino acid compounds the error
China won this because they have the scale to throw most of it away
We need to be building this capacity in the US
She's right. The safety risk was never the peptides. It was the supply chain. Regulated compounding access fixes the exact problems people are worried about. Heavy metals, contamination, underdosed vials.
And there it is.
Within hours of RFK's announcement someone is already pricing out how much Hims can charge for compounds the research community has had access to for a fraction of that cost.
This is why the outcome of these PCAC meetings matters more than the announcement.
Created 2026-04-17 · Updated 2026-04-22
[tangential_ai_model_capability_hype]
Learned · 4 rejections · Active
Exclude posts announcing AI model speed upgrades, new model releases, code generation performance, or benchmark wins (e.g., 'Lightning-Boltz 28s runtime', 'full 32-step network takeover') unless they demonstrate a specific healthcare delivery or diagnostic improvement.
Posts about AI model technical capabilities, speed improvements, or benchmark performance presented as AI industry news rather than healthcare application breakthroughs.
3 example posts
Boltz-2 just got a major speed upgrade. 🚀
We’re releasing Lightning-Boltz, a local, GPU-accelerated framework free from public MSA server bottlenecks.⚡
On a single L40S, total runtime drops to 28s per input vs 89s with the rate-limited server and 298s with MMseqs-CPU.
1/5 🧵 ht
FT: The White House is moving to give major US agencies access to a modified Anthropic Mythos model built to hunt dangerous software flaws before attackers find them.
That makes Mythos useful for defense because a model that can find a weakness in an operating system, browser, h
AI is letting developers ship three to four times faster. It is also flooding codebases with vulnerabilities at the same rate.
Aikido Security scans 15 open-source ecosystems for malware. A year ago: 30,000 packages per day. Now: 100,000.
The attack surface is not growing https
Exclude posts about GLP-1 drugs, peptides, or weight loss that analyze only personal health experiences, side effects, drug pricing economics, or biological mechanisms without connecting to healthcare delivery, access, or system-wide implications for patient care.
Posts about GLP-1 drugs, peptides, obesity, or weight loss that focus on market economics, personal anecdotes, or biology without healthcare system implications.
3 example posts
I have now received nine reports from people taking GLP-1 drugs who got the same side effect:
They no longer feel normal when they come off.
"I feel hangry again", "I started thinking about hunger and I hate it", "I have to go back to Adderall".
8/9 reports -> from women.
The brain is the master regulator of food intake and energy balance. A brilliant new @CellCellPress review, including the mechanism of GLP-1 drugs, by @ClemmensenC and colleagues, open-access
https://t.co/KbDf7ym288 https://t.co/DMFeS9zHfw
GLP-1 drugs are the ultimate validation of the techno-solutionist approach to society's most challenging problems.
The obesity crisis seemed liked it would just get worse and worse forever. Scolding from public health officials didn't work. Proposals to completely overhaul our f
Created 2026-04-17 · Updated 2026-04-17
[ai_model_capability_technical_tangent]
Learned · 4 rejections · Active
Exclude posts about AI model technical achievements (reasoning capabilities, task automation, jailbreaking, safety escapes, benchmark performance) unless they directly connect to a specific healthcare workflow, clinical decision, or operational vulnerability.
Posts about AI model technical capabilities, reasoning benchmarks, or autonomous task execution that lack connection to a healthcare operations or clinical decision-making problem.
3 example posts
Researchers gave AI agents a simple choice: hit your performance target or follow the rules.
Most of them chose to cheat.
McGill University tested 12 of the most powerful AI models on 40 realistic workplace scenarios. Healthcare. Finance. Logistics. Scientific research. Each AI
AI is letting developers ship three to four times faster. It is also flooding codebases with vulnerabilities at the same rate.
Aikido Security scans 15 open-source ecosystems for malware. A year ago: 30,000 packages per day. Now: 100,000.
The attack surface is not growing https
Two years ago the best AI models couldn't complete beginner-level cyber tasks. One just executed a full 32-step corporate network takeover. The Bank of England is convening emergency CEO briefings.
Look at that chart. GPT-4o maxes out at 2 steps. Initial reconnaissance. It can
Exclude posts focused on AI compute scaling, chip manufacturing, orbital compute, or infrastructure investments (Nvidia, Intel, space compute) unless they explicitly connect to a healthcare delivery, clinical outcome, or healthcare operational problem being solved.
Posts about AI compute, hardware, or infrastructure capabilities that lack specific healthcare application or analysis.
3 example posts
🔥 CPUs are having a moment.
#Nvidia launched a standalone CPU. #Arm made its first chip in 35 years. #Intel & #AMD are raising prices amid a supply crunch.
What's behind it: Agentic AI needs far more CPU than anyone planned for — driving a structural shift in CPU:GPU ratios tow
To put Elon's space compute vision into perspective:
1 TW of compute in orbit
That's 10 million tons to orbit each year.
That's 100,000 launches a year, almost one every 5 minutes.
In the airline business that's normal!
🚨MAJOR INTERVIEW: Jensen Huang joins the Besties!
The @nvidia CEO joins to discuss:
-- Nvidia's future, roadmap to $1T revenue
-- Physical AI's $50T market
-- Rise of the agent, OpenClaw's inflection moment
-- Inference explosion, Groq deal
-- AI PR Crisis, Anthropic's comms m
Exclude posts that document fraud allegations, billing scams, or insurance wrongdoing as scandal reporting or outrage without explaining the underlying healthcare system failure, incentive misalignment, or policy solution needed.
Posts reporting healthcare fraud, billing schemes, or insurance misconduct without analyzing systemic healthcare economics or policy implications.
3 example posts
🚨 As you pay your taxes this week, LOOK at what the fraudsters allegedly did with your money❗️
🔹Cosmetic procedures
🔹Breast implants
🔹Tweaks to arms and thighs
🔹Tummy tuck
🔹Purebred dogs
🔹Flights to Hawaii
🔹Flights to Disneyland
🔹Multimillion-dollar home
🔹Range https://t.c
🚨 Fraudsters literally looted $250-500 BILLION a year from taxpayers for years, now changes are being made to prevent this fraud:
- Treasury is now going after the banks
- Whistleblowers can make 30% for exposing fraud
- Auto dealers will be tracked down
END ALL THE FRAUD. https
🚨 Surgeon @EithanHaim reveals shocking medical fraud scheme: Texas doctors allegedly changing teens' medical records and using fake billing codes to secretly continue banned gender treatments—scamming insurance and taxpayers. He's speaking at a #DetransAwarenessDay @genspect foru
Exclude posts that present scientific findings, review articles, or medical mechanism discussions without addressing how these insights translate to healthcare system challenges, provider workflows, or patient access barriers that tech could solve.
Posts sharing academic research findings, mechanistic insights, or clinical observations without linking to healthcare delivery or technology adoption
3 example posts
I never met my grandfather.
He died of pancreatic cancer when my father was just 19. Today, Yash Bindal, 33, father to 18-month-old Maya, faces the same fate.
@PopVaxIndia is using AI to make him a personalized generative medicine to extend his life.
https://t.co/O5VIXbmMGd
What scares me about AI: it gets SO good at being almost correct that nobody catches hallucinations…unless they learned the subject before LLMs existed. Eventually, no one will have. Medicine is full of niche sh*t. How much can we manually verify?
good news: it is a specific virus that has a good prognosis - 85%+ of full recovery.
thanks everyone who helped me; it is hard to research while immobilized, and I got some things wrong, which you helped clear up. im extremely thankful and hope i can give it back somehow
sadl
Exclude posts about GLP-1 drugs, peptides, or weight-loss medications that rely on anecdotal reports, speculation about off-label uses, or personal accounts of side effects without peer-reviewed evidence or healthcare access/pricing analysis.
Posts about unproven or speculative uses of peptides, GLP-1 drugs, or emerging treatments without rigorous evidence or healthcare system analysis
3 example posts
I never met my grandfather.
He died of pancreatic cancer when my father was just 19. Today, Yash Bindal, 33, father to 18-month-old Maya, faces the same fate.
@PopVaxIndia is using AI to make him a personalized generative medicine to extend his life.
https://t.co/O5VIXbmMGd
I have now received nine reports from people taking GLP-1 drugs who got the same side effect:
They no longer feel normal when they come off.
"I feel hangry again", "I started thinking about hunger and I hate it", "I have to go back to Adderall".
8/9 reports -> from women.
GLP-1 drugs are the ultimate validation of the techno-solutionist approach to society's most challenging problems.
The obesity crisis seemed liked it would just get worse and worse forever. Scolding from public health officials didn't work. Proposals to completely overhaul our f
Created 2026-04-16 · Updated 2026-04-20
[workforce_disruption_or_labor_market_macro]
Learned · 4 rejections · Active
Exclude posts discussing workforce automation, job displacement, or labor market trends from AI that lack healthcare-specific workforce analysis, clinical staffing context, or healthcare operational solutions.
Posts analyzing AI's impact on employment, job displacement, and labor market trends in generic or macro terms without healthcare-specific workforce context.
3 example posts
Among workers ages 22–25, employment in the most AI-exposed occupations has fallen roughly 16% relative to the least-exposed. This is after controlling for firm-type effects, which isolate AI exposure from broader shocks like interest rate pressure or sector slowdowns. The gap ht
This reflects what I'm seeing too.
AI can do an increasing number of tasks.
If your job consists of only those tasks, you're at risk of being completely automated out of that job.
If your job consists partially of those tasks, plus other higher value tasks not yet automated,
Fortune: The survey says 29% of workers admit sabotaging company AI plans, and that rises to 44% for Gen Z.
Companies are finding that AI rollout is colliding with a basic workplace fact: people resist tools they think will erase their role.
That sabotage ranges from ignoring h
Exclude posts that report healthcare fraud, billing abuse, corporate scandal, or enforcement actions (lawsuits, DOJ cases) without offering systems-level analysis of how healthcare technology, policy, or organizational structure contributed to or could prevent the problem.
Posts reporting healthcare fraud, corporate malfeasance, or financial scandal without analysis of systemic causes or solutions.
3 example posts
🚨 Fraudsters literally looted $250-500 BILLION a year from taxpayers for years, now changes are being made to prevent this fraud:
- Treasury is now going after the banks
- Whistleblowers can make 30% for exposing fraud
- Auto dealers will be tracked down
END ALL THE FRAUD. https
🚨 Surgeon @EithanHaim reveals shocking medical fraud scheme: Texas doctors allegedly changing teens' medical records and using fake billing codes to secretly continue banned gender treatments—scamming insurance and taxpayers. He's speaking at a #DetransAwarenessDay @genspect foru
American surgeon exposes US Health Insurance companies latest scam
- Doctors submit codes to determine eligibility for care
- Health Insurance companies are now saying codes don’t need prior authorizations, but they won’t tell you if it’s covered until AFTER the procedure
“What
Exclude posts that announce new AI model releases, agent frameworks, code generation tools, or developer SDKs from AI labs (OpenAI, Anthropic, Microsoft) — even if the post *mentions* healthcare — unless the post explicitly addresses healthcare workflow, regulation, or clinical operations.
Posts announcing AI company product releases, agent frameworks, or SDK updates without demonstrating healthcare-specific use cases or implications.
3 example posts
OpenAI introduced GPT-Rosalind, a frontier reasoning model specifically architected for the life sciences, focusing heavily on biology, drug discovery, and translational medicine.
Designed to accelerate the historically slow 10-to-15-year drug approval pipeline, Rosalind is http
Today we launched a major update to the OpenAI Agents SDK to help developers build and deploy long-running, durable agents in production.
You can now build your own Codex-style agents using powerful primitives for modern agents - file and computer use, skills, memory and
Microsoft is reportedly testing the integration of "OpenClaw-like" autonomous AI agents directly into its Microsoft 365 Copilot ecosystem.
Moving beyond a reactive chatbot interface, the goal is to create an "always-on" assistant that runs autonomously in the background.
These
Exclude posts about semiconductor launches, CPU/GPU announcements, orbital compute, data infrastructure, or training data supply chains unless the post explicitly connects the infrastructure advancement to a specific healthcare workflow, clinical outcome, or healthcare AI bottleneck. Posts like 'Nvidia launched a CPU' or 'Elon's space compute vision' tagged with healthcare context but lacking healthcare specificity should be rejected.
Posts about AI compute infrastructure, chips, data centers, or space compute positioned as healthcare-relevant without actual healthcare application.
3 example posts
🔥 CPUs are having a moment.
#Nvidia launched a standalone CPU. #Arm made its first chip in 35 years. #Intel & #AMD are raising prices amid a supply crunch.
What's behind it: Agentic AI needs far more CPU than anyone planned for — driving a structural shift in CPU:GPU ratios tow
To put Elon's space compute vision into perspective:
1 TW of compute in orbit
That's 10 million tons to orbit each year.
That's 100,000 launches a year, almost one every 5 minutes.
In the airline business that's normal!
🚨MAJOR INTERVIEW: Jensen Huang joins the Besties!
The @nvidia CEO joins to discuss:
-- Nvidia's future, roadmap to $1T revenue
-- Physical AI's $50T market
-- Rise of the agent, OpenClaw's inflection moment
-- Inference explosion, Groq deal
-- AI PR Crisis, Anthropic's comms m
Exclude posts that report clinical trial outcomes, research discoveries, or drug mechanism findings as isolated facts unless they explicitly address how this finding changes clinical practice, healthcare delivery infrastructure, or tech-enabled care workflows.
Posts reporting clinical trial results, drug efficacy data, or research findings with no connection to healthcare systems, tech implementation, or operational change.
3 example posts
The brain is the master regulator of food intake and energy balance. A brilliant new @CellCellPress review, including the mechanism of GLP-1 drugs, by @ClemmensenC and colleagues, open-access
https://t.co/KbDf7ym288 https://t.co/DMFeS9zHfw
good news: it is a specific virus that has a good prognosis - 85%+ of full recovery.
thanks everyone who helped me; it is hard to research while immobilized, and I got some things wrong, which you helped clear up. im extremely thankful and hope i can give it back somehow
sadl
Revolution Medicines shared their findings in a press release Monday that said there may soon be a pill against pancreatic cancer, a deadly disease that strikes more than 60,000 Americans every year. The company said the pill doubled survival to 13.2 months compared with standard
Exclude posts that promote medical treatments, drugs, or peptide therapies based primarily on anecdotal patient reports, unverified side effects, or lifestyle/wellness framing without reference to clinical trial data, FDA approval status, or peer-reviewed evidence.
Posts promoting GLP-1 drugs, peptide therapies, or medical interventions with anecdotal evidence, unsupported claims, or wellness framing rather than clinical validation.
3 example posts
I have now received nine reports from people taking GLP-1 drugs who got the same side effect:
They no longer feel normal when they come off.
"I feel hangry again", "I started thinking about hunger and I hate it", "I have to go back to Adderall".
8/9 reports -> from women.
GLP-1 drugs are the ultimate validation of the techno-solutionist approach to society's most challenging problems.
The obesity crisis seemed liked it would just get worse and worse forever. Scolding from public health officials didn't work. Proposals to completely overhaul our f
I promised to come back to @X after I investigated the facts concerning @EPotterMD's video post about @UHC and its health insurance subsidiary, UnitedHealthcare.
To review, I made an @X post in response to Dr. Potter's videos and X posts about an overzealous representative of
Exclude posts that discuss government regulation, political figures, or policy debates (antitrust, corporate oversight, self-governance) where healthcare is mentioned as example or framing but the post does not provide actionable healthcare policy insights or analysis specific to healthcare systems.
Posts about government policy, regulation, or political figures that use healthcare as loose context without substantive healthcare-specific analysis.
3 example posts
@swyx > get government sponsored monopoly
> prevent patients from getting their data
> make data non transferable
> contribute nothing to open source software
> refuse to collaborate with other software vendors and kill the ecosystem
> appeal to administrators and be hated by p
I am not so partisan that I can't appreciate Congresswoman Alexandria Ocasio-Cortez taking down the CEO of CVS on behalf of all Americans.
Healthcare is a universal issue, so pay attention to what's being sold to us.
Translation: "Our perfect patient is insured by Aetna, CVS. T
I joined tribal leaders in Phoenix to reaffirm our commitment to self-governance and sovereignty in Indian Country. Together, we are making healthcare more affordable, strengthening communities and improving outcomes across Indian Country. https://t.co/SsjrQwoTgf
Created 2026-04-15 · Updated 2026-04-16
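Each rule card above carries the same fields: an identifier, the full "Exclude posts that…" criteria, a one-line summary, sample rejected posts, a rejection count, and an active flag. A minimal sketch of that record, assuming a hypothetical schema (the field names, `ExclusionRule`, and `to_prompt_fragment` are illustrative; the real storage format is not documented on this page):

```python
from dataclasses import dataclass, field

@dataclass
class ExclusionRule:
    """One learned exclusion rule, mirroring the fields shown on this page.

    All names here are assumptions for illustration, not the actual schema.
    """
    rule_id: str          # e.g. "truncated_incomplete_low_effort"
    criteria: str         # the full "Exclude posts that ..." text
    summary: str          # the one-line description beneath the criteria
    example_posts: list = field(default_factory=list)  # sample rejected posts
    rejections: int = 0   # how many posts this rule has rejected so far
    active: bool = True   # only active rules are applied to scans

    def to_prompt_fragment(self) -> str:
        """Render the rule as a fragment for a classifier prompt."""
        return f"[{self.rule_id}]\n{self.criteria}"

rule = ExclusionRule(
    rule_id="truncated_incomplete_low_effort",
    criteria=("Exclude posts that end abruptly with ellipses, incomplete "
              "sentences, or truncated text."),
    summary="Posts that are clearly incomplete or cut off mid-sentence.",
    rejections=4,
)
print(rule.to_prompt_fragment().splitlines()[0])  # → [truncated_incomplete_low_effort]
```

Rendering each active rule to a prompt fragment like this is one plausible way the "Applied to every scan" behavior could be wired up.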
[truncated_incomplete_low_effort]
Learned · 4 rejections · Active
Exclude posts that end abruptly with ellipses, incomplete sentences, or truncated text that makes it impossible to assess the full argument or claim being made.
Posts that are clearly incomplete, cut off mid-sentence, or lack sufficient context to evaluate substance.
3 example posts
This is just two GLP-1s, one peptide, one use case
what happens when off-label prescribing ramps up
what happens when retatrutide hits the market
what happens when other peptides become compoundable
chapter one
AI is taking on more of the labor.
It is not taking on the accountability.
@danielnewmanUV and @GregLotko talk with @Darren_Surch of @Interskil about why mainframe teams now have to interpret and stand behind AI-driven outputs, and why organizations that stop investing in htt
$LLY ’s Mounjaro will not be listed on Australia’s PBS after pricing negotiations collapsed.
Eli Lilly walked away from talks with the government, leaving around 450,000 patients without subsidized access.
Patients will continue to pay hundreds of dollars per month out of
Exclude posts that report isolated clinical trial results, mechanistic findings, disease mechanism reviews, or individual clinician observations without connecting to healthcare delivery systems, business models, regulatory barriers, or scaling implications.
Posts sharing clinical research findings, mechanistic discoveries, or medical observations without healthcare system, policy, or operational context.
3 example posts
The brain is the master regulator of food intake and energy balance. A brilliant new @CellCellPress review, including the mechanism of GLP-1 drugs, by @ClemmensenC and colleagues, open-access
https://t.co/KbDf7ym288 https://t.co/DMFeS9zHfw
good news: it is a specific virus that has a good prognosis - 85%+ of full recovery.
thanks everyone who helped me; it is hard to research while immobilized, and I got some things wrong, which you helped clear up. im extremely thankful and hope i can give it back somehow
sadl
Great work by @DanielJDrucker and team; biologically plausible mechanism of GLP1-RA benefit independent of weight loss. Excellent article by @megtirrell @CNN describing the publication. Could it justify new approaches for these drugs? I think so. https://t.co/pHudk7lkAR
Exclude posts that announce research papers, preprints, or experimental findings (e.g., AI antibody design, interpretability research, neural network studies) without explaining how the work addresses a concrete healthcare system problem, clinical workflow, or regulatory barrier.
Academic papers, preprints, or research findings presented as isolated technical contributions without healthcare delivery or policy implications.
3 example posts
Interpretability is built on a few core assumptions.
Two of our ICLR 2026 @iclr_conf papers suggest some of those assumptions are wrong (or at least highly incomplete).
1. Sparse CLIP: Co-Optimizing Interpretability and Performance in Contrastive Learning https://t.co/3JzHDqRj3
AI can now design antibodies that bind with atomic precision, but not ones that cells can produce. Our preprint closes this gap, delivering a structural principle, an AI-guided rescue pipeline, and adalimumab variants with 20-100x in vivo potency.
https://t.co/GvfgHA5EcU
A must read for anyone interested in building practical AI systems in 2026:
Dive into Claude Code: The Design Space of Today's and Future AI Agent Systems
The paper explains the architecture of a modern production-grade AI agent system (Claude Code) by analyzing its source http
Exclude posts that express political criticism, outrage, or commentary about healthcare policy, regulation, or politicians without providing substantive analysis of the healthcare business, operational, or clinical impact. Partisan framing, personal attacks on officials, or generic "scandal" narratives without systems-level reasoning are insufficient.
Posts expressing political outrage or framing regulatory/policy issues without substantive healthcare system analysis or operational insight
3 example posts
I am not so partisan that I can't appreciate Congresswoman Alexandria Ocasio-Cortez taking down the CEO of CVS on behalf of all Americans.
Healthcare is a universal issue, so pay attention to what's being sold to us.
Translation: "Our perfect patient is insured by Aetna, CVS. T
The Overturn Of The Chevron Doctrine Is Severely Overlooked
Do you all not see the fallout our from this being basically revoked?
Did you know this gave unelected bureaucratic parties the power to interpret the law how they deemed fit?
Why do you all think so many judges ar
Indian has 0.7 active physicians per 1,000 people, America has 3.0 active physicians per 1,000 people.
You are a liar. You are not motivated by increasing patient access to care. You just want to practice in America because you can make more money.
Exclude posts about general workplace trends, demographic patterns, economic behavior, or biological mechanisms that use healthcare as a context but do not address healthcare delivery, healthcare technology, or healthcare business models. The healthcare angle must be central, not incidental.
Posts about general topics (workplace dynamics, demographics, economic trends, nutrition biology) with loose or superficial healthcare relevance
3 example posts
Among workers ages 22–25, employment in the most AI-exposed occupations has fallen roughly 16% relative to the least-exposed. This is after controlling for firm-type effects, which isolate AI exposure from broader shocks like interest rate pressure or sector slowdowns. The gap ht
If you haven't read this report from @IPPR -do find time
The basic tenets of a healthcare system is to reduce mortality where possible, improve quality of life
And this is where the UK is at.
I will repeat.
Flooding a system with lesser trained people in a healthcare system h
SaaS companies must focus R&D on outcomes over new tools, according to Emergence Capital’s @jakesaper.
"Building more features on an old model is like adding horsepower to a horse.”
“Most of these companies are spending to defend the old tool based regime…” https://t.co/h5
Exclude posts about data center investment, GPU procurement, compute infrastructure, or power grid demand that lack specific healthcare application or analysis. Healthcare connection must be explicit and substantive, not just assumed.
Posts about AI infrastructure, data centers, or compute capacity with tangential or missing healthcare relevance.
3 example posts
Microsoft is reportedly testing the integration of "OpenClaw-like" autonomous AI agents directly into its Microsoft 365 Copilot ecosystem.
Moving beyond a reactive chatbot interface, the goal is to create an "always-on" assistant that runs autonomously in the background.
These
Hyperscalers will spend $700 BILLION on data centers in 2026 alone.
Amazon: $200B. Google: $185B. Meta: $135B.
AI data centers now represent 70%+ of all new grid interconnection requests in the US.
The bottleneck isn't the algorithm anymore. It's the power line.
Elon Musk: “Hold on to your Tesla stock.”
Because what’s coming isn’t just another car update—it’s an entirely new paradigm.
From Optimus humanoid robots that could one day take care of your kids, walk your dog, and support elderly parents, to CyberCab scaling into mass product
Created 2026-04-14 · Updated 2026-04-15
[infrastructure_and_compute_hype]
Learned · 4 rejections · Active
Exclude posts that focus on data center capex, GPU/chip demand, power grid interconnections, or hardware infrastructure trends unless they directly analyze how this infrastructure bottleneck impacts a specific healthcare AI application or clinical deployment.
Posts about data center spending, compute infrastructure, energy demands, and hardware trends with only tangential healthcare framing.
3 example posts
Hyperscalers will spend $700 BILLION on data centers in 2026 alone.
Amazon: $200B. Google: $185B. Meta: $135B.
AI data centers now represent 70%+ of all new grid interconnection requests in the US.
The bottleneck isn't the algorithm anymore. It's the power line.
Elon Musk: “Hold on to your Tesla stock.”
Because what’s coming isn’t just another car update—it’s an entirely new paradigm.
From Optimus humanoid robots that could one day take care of your kids, walk your dog, and support elderly parents, to CyberCab scaling into mass product
Sequoia’s @shaunmmaguire wrote a private hardware manifesto arguing that over the next 25 years, most of the money will be made in hardware:
"Every software revolution is preceded by a hardware revolution."
"To have the iOS App Store that enabled Uber, DoorDash, and all of http
Created 2026-04-13 · Updated 2026-04-14
[non_healthcare_domain_with_healthcare_label]
Learned · 4 rejections · Active
Exclude posts about non-healthcare companies (Revolut, Snapchat, Salesforce, Meta) or general infrastructure (AWS, NVIDIA, Microsoft compute) that mention healthcare in passing or use healthcare as a loose example of broader business/technology trends, unless the post provides substantive analysis of healthcare-specific operational or clinical impact.
Posts about non-healthcare companies, products, or infrastructure that tangentially relate to healthcare through loose framing.
3 example posts
The problem with Reality Labs is not ambition. It is time. AI turned into revenue faster because it improves existing workflows. The metaverse still asks users to change behavior before value is obvious. https://t.co/lopZiUwGU5
If you're a student, professor, or researcher—this one's for you.
We’re hosting a series of virtual learnings for you to get hands-on experience with the NVIDIA NemoClaw and OpenShell software stack. You’ll get practical guidance on integrating agents with academic datasets and
We created OpenShell to make AI agents safe for enterprises.
Built in open source so any company can adopt and trust it, this secure sandbox controls what agents can access, share, and send.
Our CEO, Jensen, explains 👇 https://t.co/7EiIsxr0CG
Exclude posts that report individual research findings, drug trial outcomes, or scientific discoveries in isolation without explaining how the result changes healthcare practice, policy, reimbursement, or patient access. A single study or discovery must be contextualized within healthcare delivery systems.
Posts reporting on academic research, clinical trial results, or scientific discoveries without connecting to healthcare system, policy, or operational implications
3 example posts
Interpretable Antibody–Antigen Structural Interface Prediction via Adaptive Graph Learning and Cyclic Transfer
1. The paper introduces VASCIF (Variable-domain Antibody–antigen Structural Complex Interface Finder), a structure-aware model that jointly predicts paratopes and https
Today the first results of the very first phase 3 study of a pan-KRAS-inhibitor in metastatic pancreatic cancer dropped, which might apply to > 90% of all pancreatic cancer patients with a KRAS-mutation!
Median overall survival of 13.2 months versus 6.7 months with chemo in
NIH-funded researchers have uncovered a key reason why immunotherapy has largely failed in pancreatic cancer — and identified a promising strategy to overcome that resistance.
Read on to learn more about this discovery: https://t.co/BoCHpLxp5g https://t.co/3DXv4E9DOE
Created 2026-04-13 · Updated 2026-04-14
[ai_technical_capability_hype]
Learned · 4 rejections · Active
Exclude posts that celebrate AI model technical achievements (coding abilities, benchmark performance, novel architectures, prompt leaks) unless the post explains how that specific capability solves a documented healthcare problem or enables a new clinical workflow.
Posts about AI model capabilities, training techniques, and benchmarks presented as breakthroughs without healthcare-specific context or application.
3 example posts
The 26 prompts running inside 𝗖𝗹𝗮𝘂𝗱𝗲 𝗖𝗼𝗱𝗲 just got open-sourced. This is literally the entire brain of a $200/month AI coding tool.
Someone reverse-engineered every prompt from the accidentally published npm source and you can now study all of them for free.
Claude Code uses 26
Claude Code is not AGI, but it is the single biggest advance in AI since the LLM.
But the thing is, Claude Code is NOT a pure LLM. And it’s not pure deep learning. Not even close.
And that changes everything.
The source code leak proves it. Tucked away at its center is a
The CEO of Google DeepMind just went on record saying he disagrees with one of the most respected AI researchers in the world.
Demis Hassabis, the man behind AlphaFold, AlphaGo, and Google's entire AI operation publicly pushed back against Yann LeCun's claim that large language
Created 2026-04-13 · Updated 2026-04-14
[tangential_ai_hype_without_healthcare_substance]
Learned · 4 rejections · Active
Exclude posts that focus on general AI advances (new models, AI researcher moves, AI labs, AGI debates, world models) where the healthcare connection is assumed but not demonstrated — i.e., 'AI is advancing, therefore healthcare will change' without specific application.
Posts about AI breakthroughs, model capabilities, or AI researcher announcements that only loosely or speculatively connect to healthcare.
3 example posts
Quantum computers are still on the drawing board, but quantum sensing is here now—and this technology can transform not just industry but America's security picture. Read a new Defining Ideas article by Dr. Vivek Lall and Haibo Huang: https://t.co/UeEjZWIO27
Sequoia partner @gradypb says software is shifting from apps that demand attention to agents that work quietly in the background.
This shift will change what moats will look like, and will be especially hard for incumbents to deal with. "It's two very different business https://
In general, there are 5 kind of moats:
▪️ Intangible Assets
▪️ Switching Costs
▪️ Network Effects
▪️ Cost Advantage
▪️ Efficient Scale
I'll teach you everything you need to know in 2 minutes: https://t.co/v9w6pfJOGh
Created 2026-04-13 · Updated 2026-04-13
[tangential_ai_capability_hype]
Learned · 4 rejections · Active
Exclude posts that focus on AI model capabilities (Claude, LLMs, world models, reasoning benchmarks) where healthcare is a secondary or illustrative example rather than the primary subject of analysis. The post must center on healthcare systems or clinical application, not AI technical advancement.
Posts about general AI breakthroughs, model capabilities, or AI research that only loosely connect to healthcare applications.
3 example posts
Claude Code is not AGI, but it is the single biggest advance in AI since the LLM.
But the thing is, Claude Code is NOT a pure LLM. And it’s not pure deep learning. Not even close.
And that changes everything.
The source code leak proves it. Tucked away at its center is a
Demis Hassabis makes the split that matters.
One risk is misuse by bad actors.
The other is loss of control as systems become more agentic and start completing real tasks on their own.
That is where AI safety gets much more concrete.
Not bad answers on a screen.
Operational htt
The CEO of Google DeepMind just went on record saying he disagrees with one of the most respected AI researchers in the world.
Demis Hassabis, the man behind AlphaFold, AlphaGo, and Google's entire AI operation publicly pushed back against Yann LeCun's claim that large language
Created 2026-04-12 · Updated 2026-04-13
[political_outrage_regulatory_posturing]
Learned · 4 rejections · Active
Exclude posts that use inflammatory political rhetoric ('garbage math', 'sophisticated liars', 'liquidation') to attack government programs, administrations, or regulators without providing specific healthcare analysis, evidence, or solutions. Political commentary must be grounded in healthcare outcomes or policy mechanics.
Posts that frame healthcare/regulatory issues through partisan political outrage without substantive policy or clinical analysis
3 example posts
The attack by the Trump Administration on blue states for alleged Medicaid "fraud" is using such garbage math to make up numbers that even Dr. Oz had to admit it.
⬇️⬇️⬇️
https://t.co/V0dZfx0OdK
🚨 BREAKING: It was just revealed that the blue state of Hawaii got MILLIONS of federal dollars to fight Medicare and Medicaid fraud — and secured **ZERO** fraud convictions in 5 years
Insane.
ANDREW FERGUSON, White House fraud task force vice chair: "Millions of millions of htt
Through SIMPLE solutions, we will be saving the American people $3.9 TRILLION over the next 10 years.🔥
By eliminating self-attestation, streamlining processes, updating technology, & more TRILLIONS will be going back into the pockets of Americans.💵
These are the savings we can
Created 2026-04-12 · Updated 2026-04-13
[political_regulatory_outrage_without_substance]
Learned · 4 rejections · Active
Exclude posts that weaponize healthcare or regulatory topics (DOGE, fraud allegations, government spending) primarily to attack political opponents, express partisan outrage, or make sweeping claims without specific evidence or healthcare-focused analysis.
Posts using healthcare/regulatory topics as vehicles for political criticism without substantive analysis of actual health policy impact.
3 example posts
Indian has 0.7 active physicians per 1,000 people, America has 3.0 active physicians per 1,000 people.
You are a liar. You are not motivated by increasing patient access to care. You just want to practice in America because you can make more money.
The federal government is the world’s largest IT customer, spending ~$2TN since 1994. In theory, this *should* give us great buying power to negotiate good deals for taxpayers, but of course that’s not what happens: in 2021, the US Department of Agriculture agreed to pay $170 mil
The attack by the Trump Administration on blue states for alleged Medicaid "fraud" is using such garbage math to make up numbers that even Dr. Oz had to admit it.
⬇️⬇️⬇️
https://t.co/V0dZfx0OdK
Created 2026-04-12 · Updated 2026-04-13
[fringe_unvalidated_medical_claims]
Learned · 4 rejections · Active
Exclude posts that promote unvalidated medical claims, speculative disease mechanisms without peer-reviewed evidence, or fringe interventions (e.g., 'metabolic causes of Alzheimer's,' 'genetic resistance to GLP-1,' compounded peptides with no safety data) presented as established fact.
Posts promoting unvalidated medical interventions, speculative disease mechanisms, or unproven treatments
3 example posts
If @mochihealth is willing to mislead patients about the safety and efficacy of their products, why should anyone believe their products even contain just the API they claim?
There’s no evidence “compounded oral Semaglutide” is safe or effective
Novo’s oral formulation is
As a medical school professor, I now believe the biggest mistake in Alzheimer's research was ignoring metabolism.
A comprehensive Frontiers in Neurology review makes the case clear: mitochondrial dysfunction and metabolic failure happen YEARS before amyloid plaques or memory htt
Today I'm going to tell you about the real reasons for accelerated brain aging; it's this paper that came out in April 2026 that everyone is citing but almost no one is explaining. https://t.co/07eWbX61vL
Created 2026-04-12 · Updated 2026-04-12
[non_healthcare_business_or_tech_tangent]
Learned · 4 rejections · Active
Exclude posts about general business metrics, tech infrastructure, macroeconomic trends, space exploration, sports, real estate, or non-healthcare domains (Boeing wing engineering, Real Madrid trades, Virginia redistricting, fertilizer company earnings, Soviet space programs) even if posted by healthcare-adjacent accounts.
Posts about general business strategy, tech infrastructure, economics, or non-healthcare domains that are loosely tagged as healthcare-adjacent but lack substantive healthcare systems relevance.
3 example posts
The N1 was a super heavy-lift launch vehicle intended to deliver payloads beyond low Earth orbit.
The N1 was the Soviet counterpart to the US Saturn V, planned for crewed travel to the Moon and beyond, with studies beginning as early as 1959. https://t.co/pB4u9TjyC4
“That's what excites me. It's where CF is today, but more-so where we're heading in the future based on the strategy and the platform that we've put in place.”
Hear our CEO reflect on our path so far.
“Over my 16 years at CF, I’ve seen a lot of transition that’s occurred. A company that was less than a $3 billion market cap to now roughly $15 billion.”
Hear CF Industries’ CEO describe the company’s evolution through its strategic pivot to decarbonization.
Created 2026-04-12 · Updated 2026-04-23
[political_outrage_regulatory_framing]
Learned · 4 rejections · Active
Exclude posts that frame healthcare policy, FDA actions, or regulatory decisions primarily as political outrage, using ALL CAPS, exclamation marks, and partisan language (DOGE, 'waking up Congress') rather than providing actual analysis of healthcare implications.
Posts using healthcare/regulatory topics as a vehicle for political anger or partisan messaging rather than substantive analysis.
3 example posts
Through SIMPLE solutions, we will be saving the American people $3.9 TRILLION over the next 10 years.🔥
By eliminating self-attestation, streamlining processes, updating technology, & more TRILLIONS will be going back into the pockets of Americans.💵
These are the savings we can
🇺🇸 DOGE SUBCOMMITTEE: $3.9 TRILLION IN SAVINGS—IF CONGRESS WAKES UP
The DOGE Subcommittee says the U.S. could save $3.9 TRILLION over 10 years by doing what any business already does—verifying identities, ditching self-certification, and cracking down on fraud.
Just front-end I
@charliekirk11 It wasn’t easy calling out 50 lies in one tweet, Charlie, but hell, someone’s gotta do it. Let’s go:
1. No taxes on tips? Temporary till 2028. After that, back to normal.
2. Trump tax cuts permanent? For the rich, yes. For working folks? Temporary.
3. Child tax
Created 2026-04-11 · Updated 2026-04-12
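The page header notes these rules are applied to every scan by a cheap prescreen model (Haiku) plus a ranking model (Sonnet). A minimal sketch of such a two-stage filter, assuming stand-in callables for both models (`filter_posts`, `prescreen`, and `ranker` are hypothetical names; the real pipeline is not described here):

```python
from typing import Callable, Iterable

def filter_posts(
    posts: Iterable[str],
    rules: list[str],
    prescreen: Callable[[str, str], bool],  # stage 1: does this rule reject this post?
    ranker: Callable[[str], float],         # stage 2: relevance score for survivors
    threshold: float = 0.5,
) -> list[str]:
    """Two-stage filtering: cheap rule prescreen, then costlier ranking."""
    kept = []
    for post in posts:
        # Stage 1 (Haiku-like): drop the post if any exclusion rule fires.
        if any(prescreen(rule, post) for rule in rules):
            continue
        # Stage 2 (Sonnet-like): keep only posts scoring above the threshold.
        if ranker(post) >= threshold:
            kept.append(post)
    return kept

# Toy stand-ins: keyword match for the prescreen, keyword-based "relevance".
rules = ["market pricing only"]
prescreen = lambda rule, post: "price" in post.lower()
ranker = lambda post: 1.0 if "healthcare" in post.lower() else 0.0
posts = ["GLP-1 price war heats up", "Healthcare payer policy shifts"]
print(filter_posts(posts, rules, prescreen, ranker))
# → ['Healthcare payer policy shifts']
```

Running the prescreen first over all 530 active rules keeps the expensive ranking model off posts that an exclusion rule already rejects, which is the usual motivation for this cascade design.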
[tangential_policy_outrage]
Learned · 4 rejections · Active
Exclude posts that express political outrage or cynicism about healthcare policy (insurance practices, regulatory capture, bureaucracy) without offering substantive analysis of the problem, tradeoffs, or proposed solution. Sarcasm and venting about 'scams' without evidence-based argument do not qualify.
Posts framing healthcare policy problems (prior authorization, insurance practices, regulatory issues) as outrage without substantive systems analysis
3 example posts
@swyx > get government sponsored monopoly
> prevent patients from getting their data
> make data non transferable
> contribute nothing to open source software
> refuse to collaborate with other software vendors and kill the ecosystem
> appeal to administrators and be hated by p
🚨 Surgeon @EithanHaim reveals shocking medical fraud scheme: Texas doctors allegedly changing teens' medical records and using fake billing codes to secretly continue banned gender treatments—scamming insurance and taxpayers. He's speaking at a #DetransAwarenessDay @genspect foru
I am not so partisan that I can't appreciate Congresswoman Alexandria Ocasio-Cortez taking down the CEO of CVS on behalf of all Americans.
Healthcare is a universal issue, so pay attention to what's being sold to us.
Translation: "Our perfect patient is insured by Aetna, CVS. T
Exclude posts that discuss AI or robotics labor market impact, job displacement, or workforce disruption as general macro trends (e.g., software engineering code review, humanoid robot manufacturing scaling) unless the post analyzes specific healthcare workforce challenges, clinician roles, or healthcare labor dynamics.
Posts analyzing workforce disruption, labor market dynamics, or job automation trends at macro level without healthcare-specific context or systemic analysis.
3 example posts
the future of software engineering seems uncontroversially prompting + code review. startups will skip the code review because they’re racing against time. larger/serious orgs will take code review very seriously.
llms can do code review, but my guess is that because they have t
Humanoid robots are moving from Silicon Valley novelty to viable business model—powered by AI and global supply chains, especially in China. But as adoption grows, so do the questions about how humans and machines will actually coexist.
More on Primer, streaming Wednesdays http
Today we’re giving an update on ramping F.03 production at BotQ
In the last 120 days, Figure scaled manufacturing 24x - from 1 robot/day to 1 robot/hour
We will manufacture 55 humanoid robots this week https://t.co/Am5Kn53mVE
Exclude posts that make speculative or fringe claims about medical treatments, unproven interventions, or radical health claims (e.g., generic drugs curing cancer by cost argument, compounded off-label peptides as emerging category, ivermectin as cancer treatment alternative) without peer-reviewed evidence or regulatory validation.
Posts making broad, unvalidated health claims or promoting unproven medical interventions without evidence or clinical validation.
3 example posts
Ivermectin and Mebendazole Cost a Fraction of Chemo. Big Pharma Can't Patent Them. That's the Problem.
Cancer centers get a cut of every chemotherapy bill. Generic drugs don't generate that margin. Two affordable, widely available compounds showing 84% clinical benefit in a real
As I mentioned before, I am now sharing an example from GPT-5.5 Pro, also featured by OpenAI, that really left me stunned by what it is capable of in biomedical science. (full report on the website I created with Codex, link in the thread).
To push GPT-5.5 Pro hard, I uploaded a
This is just two GLP-1s, one peptide, one use case
what happens when off-label prescribing ramps up
what happens when retatrutide hits the market
what happens when other peptides become compoundable
chapter one
Exclude posts that announce AI model features, compute infrastructure launches, or technical capability milestones (NVIDIA stack, OpenShell, Claude Code, Mesa filesystem) unless they explicitly demonstrate a concrete healthcare application, implementation, or operational impact.
Posts about AI compute infrastructure, model capabilities, or technical announcements without specific healthcare deployment or use case.
3 example posts
If you're a student, professor, or researcher—this one's for you.
We’re hosting a series of virtual learnings for you to get hands-on experience with the NVIDIA NemoClaw and OpenShell software stack. You’ll get practical guidance on integrating agents with academic datasets and
Demis Hassabis says bigger context windows are still a brute force answer to memory.
The human brain does something stranger. During sleep, it replays what matters and folds new knowledge into what it already knows.
AI does not need infinite context. It needs the right memory h
We created OpenShell to make AI agents safe for enterprises.
Built in open source so any company can adopt and trust it, this secure sandbox controls what agents can access, share, and send.
Our CEO, Jensen, explains 👇 https://t.co/7EiIsxr0CG
Exclude posts that speculate on regulatory approval timelines (e.g., 'FDA has 30 days', 'PDUFA at same time'), deal valuations, or market pricing (e.g., GLP-1 competition, drug pricing negotiations) without analyzing the underlying healthcare system pressures, patient access implications, or operational business model changes these create.
Posts speculating on regulatory outcomes, deal economics, or market dynamics without substantive healthcare systems or operational analysis.
3 example posts
Atrium tried to buy a hospital in 2018.
The Attorney General killed it.
In 2026, they came back with paperwork instead of cash.
The Attorney General has 30 days.
The Wake County board has 48 hours.
@NC_Governor
https://t.co/g99Ub3iMhh https://t.co/x0bxsg7VIR
Ivermectin and Mebendazole Cost a Fraction of Chemo. Big Pharma Can't Patent Them. That's the Problem.
Cancer centers get a cut of every chemotherapy bill. Generic drugs don't generate that margin. Two affordable, widely available compounds showing 84% clinical benefit in a real
$LLY $NVO $HIMS
🚨 BREAKING: COURT DISMISSES PART OF ELI LILLY LAWSUIT AGAINST EMPOWER PHARMACY
BOTH LILLY AND EMPOWER ISSUED STATEMENTS CELEBRATING THE RULING
Dismissed: Lanham Act false advertising + consumer harm claim
Allowed to proceed: unfair competition claims under h
Exclude posts that focus on AI infrastructure (compute, model training, architecture, context windows, code generation) without demonstrating how these tools directly solve a healthcare-specific problem, improve clinical workflows, or address healthcare business model constraints. Posts about foundation models, training methods, or infrastructure applied to non-healthcare use cases (banking, general coding) should be excluded even if healthcare is loosely mentioned.
Posts about AI infrastructure, compute, or model architecture that lack clear healthcare application or are applied to non-healthcare domains.
3 example posts
Our new preprint is a significant milestone for us
We built "HealthFormer" by training on our deeply phenotyped cohort from the Human Phenotype Project data. Healthformer is a multimodal generative transformer model that tokenizes each participant's physiological trajectory http
Demis Hassabis says bigger context windows are still a brute force answer to memory.
The human brain does something stranger. During sleep, it replays what matters and folds new knowledge into what it already knows.
AI does not need infinite context. It needs the right memory h
Earlier this year we paired our autonomous lab with OpenAI's GPT-5 in a closed-loop experiment: GPT-5 designed cell-free protein synthesis reactions, our RACs ran them, and the model iterated on the results. Over 36,000 experiments and six cycles later, it landed on a reaction ht
Exclude posts that announce company partnerships, product launches, or technical capability demonstrations (e.g., Figure robotics production update, Ginkgo autonomous lab experiment, Mesa filesystem launch) unless the post provides evidence of healthcare system adoption, clinical validation, or impact on provider workflows.
Posts announcing biotech or AI company product launches, collaborations, or capability demos without evidence of healthcare impact or validation.
3 example posts
Earlier this year we paired our autonomous lab with OpenAI's GPT-5 in a closed-loop experiment: GPT-5 designed cell-free protein synthesis reactions, our RACs ran them, and the model iterated on the results. Over 36,000 experiments and six cycles later, it landed on a reaction ht
We created OpenShell to make AI agents safe for enterprises.
Built in open source so any company can adopt and trust it, this secure sandbox controls what agents can access, share, and send.
Our CEO, Jensen, explains 👇 https://t.co/7EiIsxr0CG
Today we’re giving an update on ramping F.03 production at BotQ
In the last 120 days, Figure scaled manufacturing 24x - from 1 robot/day to 1 robot/hour
We will manufacture 55 humanoid robots this week https://t.co/Am5Kn53mVE
Exclude posts that report pharmaceutical trial results, drug efficacy data, obesity drug market competition, or GLP-1/peptide pricing dynamics in isolation. Include only if the post analyzes systemic healthcare implications (reimbursement, access, provider behavior, care delivery model changes).
Posts reporting drug trial results, pharma market pricing, or GLP-1 competitive dynamics without healthcare systems analysis.
3 example posts
🚨 Survodutide, a weekly subcutaneous dual GLP-1/glucagon agonist, posts new top-line Phase 3 results today
📊 SYNCHRONIZE-1: Adults with overweight/obesity randomized to survodutide vs. placebo for 76 weeks
⚖️ 16.6% weight loss (efficacy estimand) vs. 3.2% in placebo
85.1% http
$NVO $LLY
Boehringer Ingelheim and Zealand Pharma report strong phase 3 data for obesity drug survodutide.
Patients lost up to 16.6% of body weight over 76 weeks vs. 3.2% for placebo.
The drug targets both GLP-1 and glucagon, a combo approach aimed at boosting weight loss.
This is just two GLP-1s, one peptide, one use case
what happens when off-label prescribing ramps up
what happens when retatrutide hits the market
what happens when other peptides become compoundable
chapter one
Created 2026-05-03 · Updated 2026-05-03
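Each rule block above pairs a rule id with one or more "Exclude posts that…" criteria that the prescreen models evaluate per post. As a rough illustration only (the actual data structures, prompt wording, and model calls of the Haiku/Sonnet pipeline are not shown on this page, so `ExclusionRule` and `build_prescreen_prompt` are hypothetical names), one such rule could be packaged into a yes/no prescreen prompt like this:

```python
from dataclasses import dataclass


@dataclass
class ExclusionRule:
    # Hypothetical container mirroring the fields visible on this page.
    rule_id: str          # e.g. "glp1_peptide_market_pricing_only"
    criteria: list[str]   # the "Exclude posts that..." statements
    active: bool = True


def build_prescreen_prompt(rule: ExclusionRule, post_text: str) -> str:
    """Assemble a yes/no classification prompt for a prescreen model.

    Illustrative format only; the real pipeline's prompts are not shown here.
    """
    bullet_list = "\n".join(f"- {c}" for c in rule.criteria)
    return (
        f"Exclusion rule [{rule.rule_id}]:\n{bullet_list}\n\n"
        f"Post:\n{post_text}\n\n"
        "Does this post match the exclusion rule? Answer YES or NO."
    )
```

Inactive rules would simply be skipped before prompting, and each active rule's prompt sent to the prescreen model once per scanned post.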
[non_healthcare_business_tangential_framing]
Learned · 3 rejections · Active
Exclude posts that discuss non-healthcare business models, software platforms, or corporate announcements (e.g., Revolut banking, Snapchat features, ClickUp workplace tools) even if they mention AI or contain loose healthcare-adjacent language. The core subject must be a healthcare-specific operational, clinical, or systems problem.
Posts about general business or technology developments (banking, software, startups) with only tangential or forced healthcare framing.
3 example posts
Revolut just moved the IP of banking into a model.
Trained on 24 billion banking events in 111 countries.
One foundation model replacing six separate ML systems.
Credit scoring: +130%
Fraud recall: +65%
Marketing engagement: +79%
The model is the new moat.
Introducing Mesa: the most powerful filesystem ever built, designed specifically for enterprise AI agents.
Every team building agents eventually hits the same wall: where do the files live?
Not the chat history, the actual artifacts the agent works on.
> The contracts your age
Thought I would start posting about interesting things happening at AWS. Not a bad day to start.🚀
Today at #WhatsNextWithAWS we announced a big step forward with @OpenAI on Amazon Bedrock:
1. OpenAI models now available
2. Codex for enterprise development
3. Amazon Bedrock Manag
Exclude posts from biotech or AI company official accounts (@Ginkgo, @Figure_robot, @nvidia, @NVIDIAAI) that showcase product capabilities, partnerships, or milestones—unless an independent third party or peer-reviewed source validates the healthcare impact or system adoption.
Posts from biotech or AI company founders/teams sharing enthusiasm about their product capabilities or partnerships without independent healthcare validation.
3 example posts
Earlier this year we paired our autonomous lab with OpenAI's GPT-5 in a closed-loop experiment: GPT-5 designed cell-free protein synthesis reactions, our RACs ran them, and the model iterated on the results. Over 36,000 experiments and six cycles later, it landed on a reaction ht
Today we’re giving an update on ramping F.03 production at BotQ
In the last 120 days, Figure scaled manufacturing 24x - from 1 robot/day to 1 robot/hour
We will manufacture 55 humanoid robots this week https://t.co/Am5Kn53mVE
📈 NVIDIA tops AI leaderboards and benchmarks with open models driven by extreme co-design across compute, networking, memory, storage, and software.
This includes models for biology, AI physics, agentic AI, physical AI, robotics, and autonomous vehicles.
By being vertically htt
Exclude posts that are primarily trial data reporting or drug approval announcements (e.g., phase 3 efficacy numbers, FDA acceptance, clinical outcomes) unless the post analyzes healthcare system implications such as payer coverage decisions, clinical adoption barriers, or care delivery workflow changes.
Posts reporting pharmaceutical trial results, efficacy data, or regulatory announcements without healthcare system delivery or reimbursement analysis.
3 example posts
Grace Science’s experience highlights a growing disconnect at FDA between talk and action on therapies for rare diseases. Despite efficacy signals in a monogenic ultrarare disease, FDA said the plausible mechanism framework is not available, and requires a new manufacturing
#News for #investors and #media: Today we are announcing that our potential first-in-class treatment for chronic hepatitis B has been accepted for regulatory review by the US FDA.
It has also received Breakthrough Therapy designation.
🔗 Learn more: https://t.co/AnUodGmljS htt
⚠️ Sacubitril/Valsartan works. So why aren’t we using it?
The evidence is undeniable:
↓ CV mortality: 20% (RCT) / 10–38% (RWE)
↓ HF hospitalization: 21% (RCT) / 10–16% (RWE)
↓ All-cause mortality: 15% (RCT) / 10–25% (RWE)
Plus: reverse remodeling, less MR, better QoL & https:
Exclude posts that focus on GLP-1/peptide prescription volumes, market competition, price undercutting, off-label usage trends, or compounding dynamics without analyzing healthcare system implications like insurance coverage, provider incentives, or patient access barriers.
Posts tracking GLP-1 market share, script growth, pricing competition, or weight loss drug dynamics without healthcare access or reimbursement implications
3 example posts
$LLY $NVO $HIMS
🚨 BREAKING: COURT DISMISSES PART OF ELI LILLY LAWSUIT AGAINST EMPOWER PHARMACY
BOTH LILLY AND EMPOWER ISSUED STATEMENTS CELEBRATING THE RULING
Dismissed: Lanham Act false advertising + consumer harm claim
Allowed to proceed: unfair competition claims under h
$NVO $LLY
Boehringer Ingelheim and Zealand Pharma report strong phase 3 data for obesity drug survodutide.
Patients lost up to 16.6% of body weight over 76 weeks vs. 3.2% for placebo.
The drug targets both GLP-1 and glucagon, a combo approach aimed at boosting weight loss.
This is just two GLP-1s, one peptide, one use case
what happens when off-label prescribing ramps up
what happens when retatrutide hits the market
what happens when other peptides become compoundable
chapter one
Exclude posts that discuss tariffs, tax policy, affordability mandates, regulatory frameworks, or labor market trends in generic terms that could apply to any industry. These lack healthcare-specific operational, reimbursement, or clinical workflow insight.
Posts about broad economic, regulatory, or policy trends that apply generically across industries without healthcare-specific analysis
3 example posts
Ah yes, Optum, the company that did Tim Walz’s audit of the 14 high risk Medicaid programs consumed by Somali fraud. https://t.co/dNRnjI7wxB
NY rakes in $6.6 billion in taxes from families for their private health insurance coverage.
That’s $1,760 a year per family on top of their premiums.
Democrats’ tax and spend policies are making health insurance more expensive for families across NY. https://t.co/ggrjqFKrv4
Using 180 years of US data to show that tariff increases reduce imports, output, and manufacturing, with effects operating through both supply and demand channels, from Tamar den Besten, Regis Barnichon, @drkaenzig, and Aayush Singh https://t.co/R1kdtfi3fO https://t.co/wvXj31qs9e
Exclude posts that announce or promote products, features, or companies (e.g., ClickUp, Mesa, Revolut, Snapchat AR, Microsoft Word features) that are general software or business tools tangentially framed with healthcare language but do not address healthcare-specific workflows, compliance, or clinical outcomes.
Posts about non-healthcare companies, products, or startup announcements loosely tagged as healthcare-adjacent
3 example posts
If you're a student, professor, or researcher—this one's for you.
We’re hosting a series of virtual learnings for you to get hands-on experience with the NVIDIA NemoClaw and OpenShell software stack. You’ll get practical guidance on integrating agents with academic datasets and
We created OpenShell to make AI agents safe for enterprises.
Built in open source so any company can adopt and trust it, this secure sandbox controls what agents can access, share, and send.
Our CEO, Jensen, explains 👇 https://t.co/7EiIsxr0CG
Today we’re giving an update on ramping F.03 production at BotQ
In the last 120 days, Figure scaled manufacturing 24x - from 1 robot/day to 1 robot/hour
We will manufacture 55 humanoid robots this week https://t.co/Am5Kn53mVE
Exclude posts that describe a single clinical case, isolated workflow observation, or individual patient scenario without connecting it to systemic healthcare challenges, provider incentives, regulatory barriers, or scalable solutions. Anecdotal clinical examples alone lack systems insight.
Posts sharing isolated clinical observations, single-patient examples, or clinical workflow anecdotes without broader healthcare systems analysis
3 example posts
"At the Veterans Health Administration, we have the exquisite gift of a hospice unit, a place to care for veterans when time is short. We care for the sons of mothers who can no longer be here to care for them."
In #APieceofMyMind, a #palliative care #physician reflects on https
Last week I randomly got a $9,000 bill for a hospital visit from last year. I called them up and they said my insurance decided not to cover it. Why? Because.
.@Aetna can just decide they’re not interested in covering something and then you’re left with an inflated bill to pay h
"How can medicine save the most lives?"
Most people ask this rhetorically.
@Farzad_MD and Tom Frieden took it literally.
From banning smoking in NYC bars to cutting teen smoking in half in 5 years, this is what happens when you stop treating diseases and start preventing them. h
Exclude posts that merely announce pharmaceutical trial data, efficacy percentages, FDA rulings, or weight loss/clinical outcomes without contextualizing how this changes healthcare delivery, reimbursement, provider workflows, or patient access. Standalone trial result announcements lack systems depth.
Posts reporting drug trial results, efficacy data, or FDA approvals without analyzing healthcare system implications or operational impact
3 example posts
Grace Science’s experience highlights a growing disconnect at FDA between talk and action on therapies for rare diseases. Despite efficacy signals in a monogenic ultrarare disease, FDA said the plausible mechanism framework is not available, and requires a new manufacturing
Not something you'd see everyday—changing the alphabet of life.
All of life organisms are are built from 20 amino acids. Now genAI is enabling life to be built with 19 amino acids, making isoleucine dispensable. @ScienceMagazine
https://t.co/7CBn0Xhuxs https://t.co/tkxtCrFx9Y
#News for #investors and #media: Today we are announcing that our potential first-in-class treatment for chronic hepatitis B has been accepted for regulatory review by the US FDA.
It has also received Breakthrough Therapy designation.
🔗 Learn more: https://t.co/AnUodGmljS htt
Created 2026-05-03 · Updated 2026-05-03
[pharma_trial_data_or_market_pricing_only]
Learned · 3 rejections · Active
Exclude posts that report drug trial data, clinical efficacy numbers, market pricing information, or competitive dynamics for pharmaceuticals (GLP-1s, obesity drugs, gene therapies) without contextualizing how this affects healthcare delivery systems, reimbursement policy, or clinical practice patterns.
Posts reporting pharmaceutical trial results, drug pricing, or market dynamics without healthcare systems analysis.
3 example posts
🚨 Survodutide, a weekly subcutaneous dual GLP-1/glucagon agonist, posts new top-line Phase 3 results today
📊 SYNCHRONIZE-1: Adults with overweight/obesity randomized to survodutide vs. placebo for 76 weeks
⚖️ 16.6% weight loss (efficacy estimand) vs. 3.2% in placebo
85.1% http
$NVO $LLY
Boehringer Ingelheim and Zealand Pharma report strong phase 3 data for obesity drug survodutide.
Patients lost up to 16.6% of body weight over 76 weeks vs. 3.2% for placebo.
The drug targets both GLP-1 and glucagon, a combo approach aimed at boosting weight loss.
⚠️ Sacubitril/Valsartan works. So why aren’t we using it?
The evidence is undeniable:
↓ CV mortality: 20% (RCT) / 10–38% (RWE)
↓ HF hospitalization: 21% (RCT) / 10–16% (RWE)
↓ All-cause mortality: 15% (RCT) / 10–25% (RWE)
Plus: reverse remodeling, less MR, better QoL & https:
Exclude posts that announce drug trial data, phase results, lab capabilities (e.g., protein design, antibody engineering), or scientific breakthroughs unless the post analyzes healthcare delivery implications, access barriers, clinical decision-making trade-offs, or systemic adoption challenges.
Posts announcing pharmaceutical trial results, biotech breakthroughs, or protein/drug engineering capabilities without healthcare system or clinical adoption context.
3 example posts
Earlier this year we paired our autonomous lab with OpenAI's GPT-5 in a closed-loop experiment: GPT-5 designed cell-free protein synthesis reactions, our RACs ran them, and the model iterated on the results. Over 36,000 experiments and six cycles later, it landed on a reaction ht
Experimentally Validated Deep Learning Control of Protein Aggregation
1. The study introduces AggreProt, a deep neural network that predicts residue-level aggregation-prone regions (APRs) directly from protein sequence, and then uses those predictions to design mutations that ht
Not something you'd see everyday—changing the alphabet of life.
All of life organisms are are built from 20 amino acids. Now genAI is enabling life to be built with 19 amino acids, making isoleucine dispensable. @ScienceMagazine
https://t.co/7CBn0Xhuxs https://t.co/tkxtCrFx9Y
Exclude posts that discuss broad AI trends (agentic AI, context windows, AI job displacement, software moats, tariff economics) where healthcare is mentioned as one example among many or used as secondary framing. Posts must center on healthcare-specific systemic dynamics to be in scope.
Posts about generalist AI trends, labor disruption, or macro policy that peripherally reference healthcare without substantive healthcare analysis.
3 example posts
[Translated from Portuguese] Microsoft just turned an $11 billion startup into a Word feature.
It wasn't an acquisition or a partnership.
A feature.
Harvey raised $200M at an $11B valuation in March. $190M in annual recurring revenue. 100,000 lawyers. Co
Our attention to biorisks posed by AI needs to match the current attention given to cyber-risks. The staged release of Claude Mythos in order to bolster defenses in key industries is necessary to shore up resilience against a new class of cyber-risk across critical industries. We
Stanford and Harvard published the most unsettling AI paper of the year.
It shows how autonomous AI agents, when placed in competitive or open environments, don’t just optimize for performance…
They drift toward manipulation, coordination failures, and strategic chaos. https://
Exclude posts that report biotech research findings, experimental validations, or clinical observations (e.g., protein design, AI-predicted aggregation, antibody binding precision, retinal photo screening) as standalone scientific updates. Posts must analyze how findings change healthcare delivery, clinical decision-making, or care models to be in scope.
Posts describing biotech breakthroughs, research findings, or clinical observations as isolated scientific facts without healthcare systems implications.
3 example posts
Demis Hassabis says bigger context windows are still a brute force answer to memory.
The human brain does something stranger. During sleep, it replays what matters and folds new knowledge into what it already knows.
AI does not need infinite context. It needs the right memory h
Experimentally Validated Deep Learning Control of Protein Aggregation
1. The study introduces AggreProt, a deep neural network that predicts residue-level aggregation-prone regions (APRs) directly from protein sequence, and then uses those predictions to design mutations that ht
As I mentioned before, I am now sharing an example from GPT-5.5 Pro, also featured by OpenAI, that really left me stunned by what it is capable of in biomedical science. (full report on the website I created with Codex, link in the thread).
To push GPT-5.5 Pro hard, I uploaded a
Exclude posts that report legal judgments, regulatory approvals, or fraud incidents (e.g., court dismissals of lawsuits, FDA coverage decisions, hospice fraud) as breaking news or incident coverage. Posts must analyze how the event reshapes healthcare delivery, incentives, or market structure to be in scope.
Posts reporting legal suits, FDA rulings, regulatory events, or fraud cases without analyzing systemic healthcare implications.
3 example posts
$LLY $NVO $HIMS
🚨 BREAKING: COURT DISMISSES PART OF ELI LILLY LAWSUIT AGAINST EMPOWER PHARMACY
BOTH LILLY AND EMPOWER ISSUED STATEMENTS CELEBRATING THE RULING
Dismissed: Lanham Act false advertising + consumer harm claim
Allowed to proceed: unfair competition claims under h
Grace Science’s experience highlights a growing disconnect at FDA between talk and action on therapies for rare diseases. Despite efficacy signals in a monogenic ultrarare disease, FDA said the plausible mechanism framework is not available, and requires a new manufacturing
"At the Veterans Health Administration, we have the exquisite gift of a hospice unit, a place to care for veterans when time is short. We care for the sons of mothers who can no longer be here to care for them."
In #APieceofMyMind, a #palliative care #physician reflects on https
Exclude posts that announce AI model features, open-source releases, or technical capabilities (e.g., NemoClaw, OpenShell, Mesa, AggreProt) without demonstrating validated healthcare use cases, clinical outcomes, or operational adoption in healthcare settings.
Posts announcing AI model capabilities, features, or open-source tools that lack evidence of real-world healthcare impact or deployment.
3 example posts
If you're a student, professor, or researcher—this one's for you.
We’re hosting a series of virtual learnings for you to get hands-on experience with the NVIDIA NemoClaw and OpenShell software stack. You’ll get practical guidance on integrating agents with academic datasets and
We created OpenShell to make AI agents safe for enterprises.
Built in open source so any company can adopt and trust it, this secure sandbox controls what agents can access, share, and send.
Our CEO, Jensen, explains 👇 https://t.co/7EiIsxr0CG
Introducing Mesa: the most powerful filesystem ever built, designed specifically for enterprise AI agents.
Every team building agents eventually hits the same wall: where do the files live?
Not the chat history, the actual artifacts the agent works on.
> The contracts your age
Exclude posts about robotics (humanoid robots, manufacturing automation), computing infrastructure, or non-healthcare business models (software moats, network effects, SaaS pricing) that use healthcare as a loose framing device or mention healthcare tangentially without substantive analysis of healthcare-specific challenges or operations.
Posts about robotics, infrastructure, or non-healthcare business models tangentially framed as healthcare-relevant without demonstrating healthcare application.
3 example posts
Today we’re giving an update on ramping F.03 production at BotQ
In the last 120 days, Figure scaled manufacturing 24x - from 1 robot/day to 1 robot/hour
We will manufacture 55 humanoid robots this week https://t.co/Am5Kn53mVE
$IBRX
Here's a wild theory.
What if we're given FDA acceptance of sBla and PDUFA at same time and then it's announced after reviewing everything it's been determined we will be given rapid expanded access review under "plausible mechanism of action".
That may sound crazy ht
Software is not a moat
Over the last 15+ years, nearly every innovation @EvanSpiegel and his team shipped got copied. Stories. AR glasses. Swipe-based navigation. The camera-first interface.
And yet @Snapchat is the only independent consumer social app that has lasted. Nearly 1
Created 2026-05-03 · Updated 2026-05-03
[generalist_macro_policy_without_healthcare_lens]
Learned · 3 rejections · Active
Exclude posts about general economic policy, macro trends, regulatory frameworks, or business compliance (e.g., tariffs, ISO certifications, SOC 2 compliance, banking systems) that mention healthcare tangentially or apply a healthcare label without analyzing healthcare-specific implications or system-level impact.
Posts about broad macroeconomic, regulatory, or policy topics (tariffs, banking, compliance certifications) with only tangential healthcare framing.
3 example posts
Revolut just moved the IP of banking into a model.
Trained on 24 billion banking events in 111 countries.
One foundation model replacing six separate ML systems.
Credit scoring: +130%
Fraud recall: +65%
Marketing engagement: +79%
The model is the new moat.
Thought I would start posting about interesting things happening at AWS. Not a bad day to start.🚀
Today at #WhatsNextWithAWS we announced a big step forward with @OpenAI on Amazon Bedrock:
1. OpenAI models now available
2. Codex for enterprise development
3. Amazon Bedrock Manag
NY rakes in $6.6 billion in taxes from families for their private health insurance coverage.
That’s $1,760 a year per family on top of their premiums.
Democrats’ tax and spend policies are making health insurance more expensive for families across NY. https://t.co/ggrjqFKrv4
Created 2026-05-03 · Updated 2026-05-03
[broad_health_claim_without_systems_analysis]
Learned · 3 rejections · Active
Exclude posts that make broad claims about health trends, clinical efficacy, or disease screening (e.g., 'AI can detect diabetes from retinal photos,' 'AI is enabling new amino acids in life') without analyzing healthcare adoption barriers, workflow integration, reimbursement, or systemic implementation challenges. Clinical capability claims alone are insufficient.
Posts making broad or sweeping health claims, trend predictions, or clinical observations without analyzing how they affect healthcare operations, workflow, or delivery.
3 example posts
Demis Hassabis says bigger context windows are still a brute force answer to memory.
The human brain does something stranger. During sleep, it replays what matters and folds new knowledge into what it already knows.
AI does not need infinite context. It needs the right memory h
Experimentally Validated Deep Learning Control of Protein Aggregation
1. The study introduces AggreProt, a deep neural network that predicts residue-level aggregation-prone regions (APRs) directly from protein sequence, and then uses those predictions to design mutations that ht
As I mentioned before, I am now sharing an example from GPT-5.5 Pro, also featured by OpenAI, that really left me stunned by what it is capable of in biomedical science. (full report on the website I created with Codex, link in the thread).
To push GPT-5.5 Pro hard, I uploaded a
Exclude posts that announce AI company product features, launches, or business updates (e.g., NVIDIA OpenShell, Mesa filesystem, Foundry agents, Cursor tool updates) unless the post demonstrates healthcare-specific use, validation, or system impact. Product announcements and company metrics without healthcare context are insufficient.
Posts promoting AI company product launches, feature announcements, or business updates without evidence of healthcare-specific application or validation.
3 example posts
If you're a student, professor, or researcher—this one's for you.
We’re hosting a series of virtual learnings for you to get hands-on experience with the NVIDIA NemoClaw and OpenShell software stack. You’ll get practical guidance on integrating agents with academic datasets and
We created OpenShell to make AI agents safe for enterprises.
Built in open source so any company can adopt and trust it, this secure sandbox controls what agents can access, share, and send.
Our CEO, Jensen, explains 👇 https://t.co/7EiIsxr0CG
Today we’re giving an update on ramping F.03 production at BotQ
In the last 120 days, Figure scaled manufacturing 24x - from 1 robot/day to 1 robot/hour
We will manufacture 55 humanoid robots this week https://t.co/Am5Kn53mVE
Exclude posts that report drug trial data, efficacy percentages, or clinical endpoints (e.g., weight loss %, mortality reduction %) without analyzing how the result affects healthcare access, reimbursement, prescribing patterns, or healthcare delivery systems. Posting trial toplines alone is insufficient.
Posts announcing pharmaceutical trial results or drug efficacy data without analyzing healthcare system implications, market access, or delivery challenges.
3 example posts
🚨 Survodutide, a weekly subcutaneous dual GLP-1/glucagon agonist, posts new top-line Phase 3 results today
📊 SYNCHRONIZE-1: Adults with overweight/obesity randomized to survodutide vs. placebo for 76 weeks
⚖️ 16.6% weight loss (efficacy estimand) vs. 3.2% in placebo
85.1% http
$NVO $LLY
Boehringer Ingelheim and Zealand Pharma report strong phase 3 data for obesity drug survodutide.
Patients lost up to 16.6% of body weight over 76 weeks vs. 3.2% for placebo.
The drug targets both GLP-1 and glucagon, a combo approach aimed at boosting weight loss.
⚠️ Sacubitril/Valsartan works. So why aren’t we using it?
The evidence is undeniable:
↓ CV mortality: 20% (RCT) / 10–38% (RWE)
↓ HF hospitalization: 21% (RCT) / 10–16% (RWE)
↓ All-cause mortality: 15% (RCT) / 10–25% (RWE)
Plus: reverse remodeling, less MR, better QoL & https:
Exclude posts that assert broad claims about AI disruption, market shifts, or labor impacts (e.g., 'AI will replace X jobs,' 'the future of healthcare is agents') without citing data, case studies, or healthcare systems evidence. Speculation and futurism without grounding do not qualify.
Posts making broad claims about AI or technology impact on healthcare, work, or markets without specific evidence, validation, or healthcare systems analysis.
3 example posts
Stanford and Harvard published the most unsettling AI paper of the year.
It shows how autonomous AI agents, when placed in competitive or open environments, don’t just optimize for performance…
They drift toward manipulation, coordination failures, and strategic chaos. https://
[New] from a16z @speedrun:
Come for the Agent, Stay for the Network
there's a quiet pattern hiding inside the most defensible vertical AI startups right now:
the agent is the wedge
the network is the moat.
here's what I mean:
an HVAC tech needs a part today.
>>Traditionally:
Revolut just moved the IP of banking into a model.
Trained on 24 billion banking events in 111 countries.
One foundation model replacing six separate ML systems.
Credit scoring: +130%
Fraud recall: +65%
Marketing engagement: +79%
The model is the new moat.
Created 2026-05-03 · Updated 2026-05-03
[non_healthcare_company_product_announcement]
Learned · 3 rejections · Active
Exclude posts that announce new products, features, or updates from generalist tech companies (AWS, NVIDIA, Microsoft, OpenAI, Meta, etc.) unless the post deeply analyzes how that product solves a specific, documented healthcare workflow or operational problem. Company announcements without healthcare systems context do not qualify.
Posts announcing product launches or capability updates from non-healthcare companies (AWS, NVIDIA, Microsoft, Meta) with tangential or loose healthcare framing.
3 example posts
If you're a student, professor, or researcher—this one's for you.
We’re hosting a series of virtual learnings for you to get hands-on experience with the NVIDIA NemoClaw and OpenShell software stack. You’ll get practical guidance on integrating agents with academic datasets and
Earlier this year we paired our autonomous lab with OpenAI's GPT-5 in a closed-loop experiment: GPT-5 designed cell-free protein synthesis reactions, our RACs ran them, and the model iterated on the results. Over 36,000 experiments and six cycles later, it landed on a reaction ht
We created OpenShell to make AI agents safe for enterprises.
Built in open source so any company can adopt and trust it, this secure sandbox controls what agents can access, share, and send.
Our CEO, Jensen, explains 👇 https://t.co/7EiIsxr0CG
Created 2026-05-03 · Updated 2026-05-03
[non_english_or_foreign_language_content]
Learned · 3 rejections · Active
Exclude posts written in non-English languages or with significant foreign language content, as they fall outside the writer's editorial scope for English-language healthcare tech content.
Posts written in languages other than English
1 example post
A Microsoft acabou de transformar uma startup de $11 bilhões de dólares em uma funcionalidade do Word.
Não foi uma aquisição nem uma parceria.
Uma funcionalidade.
A Harvey levantou $200M a uma valuation de $11B em março. $190M de receita recorrente anual. 100 mil advogados. Co
Created 2026-05-03 · Updated 2026-05-03
[regulatory_or_fraud_scandal_without_analysis]
Learned · 3 rejections · Active
Exclude posts that report FDA rulings, fraud allegations, or regulatory events as breaking news or outrage without substantive analysis of how these decisions reshape healthcare delivery, reimbursement, or business models.
Posts about healthcare fraud, regulatory action, or policy changes without systemic healthcare analysis
3 example posts
$LLY $NVO $HIMS
🚨 BREAKING: COURT DISMISSES PART OF ELI LILLY LAWSUIT AGAINST EMPOWER PHARMACY
BOTH LILLY AND EMPOWER ISSUED STATEMENTS CELEBRATING THE RULING
Dismissed: Lanham Act false advertising + consumer harm claim
Allowed to proceed: unfair competition claims under h
Grace Science’s experience highlights a growing disconnect at FDA between talk and action on therapies for rare diseases. Despite efficacy signals in a monogenic ultrarare disease, FDA said the plausible mechanism framework is not available, and requires a new manufacturing
"At the Veterans Health Administration, we have the exquisite gift of a hospice unit, a place to care for veterans when time is short. We care for the sons of mothers who can no longer be here to care for them."
In #APieceofMyMind, a #palliative care #physician reflects on https
Created 2026-05-03 · Updated 2026-05-03
[pharmaceutical_or_clinical_trial_data_only]
Learned · 3 rejections · Active
Exclude posts that are purely trial result announcements or clinical observations (efficacy numbers, phase outcomes, patient metrics) without analysis of healthcare system implications, reimbursement, access, or operational adoption challenges.
Posts reporting drug trial results or clinical data without healthcare systems, pricing, or operational context
3 example posts
🚨 Survodutide, a weekly subcutaneous dual GLP-1/glucagon agonist, posts new top-line Phase 3 results today
📊 SYNCHRONIZE-1: Adults with overweight/obesity randomized to survodutide vs. placebo for 76 weeks
⚖️ 16.6% weight loss (efficacy estimand) vs. 3.2% in placebo
85.1% http
$NVO $LLY
Boehringer Ingelheim and Zealand Pharma report strong phase 3 data for obesity drug survodutide.
Patients lost up to 16.6% of body weight over 76 weeks vs. 3.2% for placebo.
The drug targets both GLP-1 and glucagon, a combo approach aimed at boosting weight loss.
⚠️ Sacubitril/Valsartan works. So why aren’t we using it?
The evidence is undeniable:
↓ CV mortality: 20% (RCT) / 10–38% (RWE)
↓ HF hospitalization: 21% (RCT) / 10–16% (RWE)
↓ All-cause mortality: 15% (RCT) / 10–25% (RWE)
Plus: reverse remodeling, less MR, better QoL & https:
Created 2026-05-03 · Updated 2026-05-03
[ai_company_product_announcement_unvalidated]
Learned · 3 rejections · Active
Exclude posts that are primarily product launches, feature announcements, or promotional content from AI companies (NVIDIA, OpenAI, Anthropic, AWS) — even if healthcare-adjacent — unless the post demonstrates actual clinical validation, healthcare customer adoption, or specific operational impact in healthcare workflows.
Posts announcing new AI tools, features, or products without evidence of healthcare validation or adoption
3 example posts
If you're a student, professor, or researcher—this one's for you.
We’re hosting a series of virtual learnings for you to get hands-on experience with the NVIDIA NemoClaw and OpenShell software stack. You’ll get practical guidance on integrating agents with academic datasets and
We created OpenShell to make AI agents safe for enterprises.
Built in open source so any company can adopt and trust it, this secure sandbox controls what agents can access, share, and send.
Our CEO, Jensen, explains 👇 https://t.co/7EiIsxr0CG
Introducing Mesa: the most powerful filesystem ever built, designed specifically for enterprise AI agents.
Every team building agents eventually hits the same wall: where do the files live?
Not the chat history, the actual artifacts the agent works on.
> The contracts your age
Created 2026-05-03 · Updated 2026-05-03
[ai_company_product_launch_announcement]
Learned · 3 rejections · Active
Exclude posts that announce new AI products, tools, or software features from tech companies (Microsoft, NVIDIA, Anthropic, OpenAI) unless the post demonstrates concrete healthcare delivery impact or clinical workflow integration. Product announcements and marketing statements are not substantive healthcare tech content.
Posts announcing AI company product launches, features, or tooling without healthcare-specific validation or application context.
3 example posts
If you're a student, professor, or researcher—this one's for you.
We’re hosting a series of virtual learnings for you to get hands-on experience with the NVIDIA NemoClaw and OpenShell software stack. You’ll get practical guidance on integrating agents with academic datasets and
We created OpenShell to make AI agents safe for enterprises.
Built in open source so any company can adopt and trust it, this secure sandbox controls what agents can access, share, and send.
Our CEO, Jensen, explains 👇 https://t.co/7EiIsxr0CG
Introducing Mesa: the most powerful filesystem ever built, designed specifically for enterprise AI agents.
Every team building agents eventually hits the same wall: where do the files live?
Not the chat history, the actual artifacts the agent works on.
> The contracts your age
Exclude posts that describe clinical observations, research paper findings, academic studies, or interpretability insights without connecting them to healthcare system operations, implementation challenges, or real-world healthcare impact. A single clinical anecdote or research observation, even if valid, is insufficient.
Posts about individual clinical observations, research findings, or interpretability studies without healthcare systems or operational context.
3 example posts
Demis Hassabis says bigger context windows are still a brute force answer to memory.
The human brain does something stranger. During sleep, it replays what matters and folds new knowledge into what it already knows.
AI does not need infinite context. It needs the right memory h
Stanford and Harvard published the most unsettling AI paper of the year.
It shows how autonomous AI agents, when placed in competitive or open environments, don’t just optimize for performance…
They drift toward manipulation, coordination failures, and strategic chaos. https://
Experimentally Validated Deep Learning Control of Protein Aggregation
1. The study introduces AggreProt, a deep neural network that predicts residue-level aggregation-prone regions (APRs) directly from protein sequence, and then uses those predictions to design mutations that ht
Exclude posts about macro economics, general business models, software industry trends, or policy changes that mention healthcare only superficially or use healthcare as a loose example of a broader non-healthcare pattern. The healthcare insight must be central, not illustrative of a general business point.
Posts about general business, economic, or policy trends that have only tangential or forced healthcare framing.
3 example posts
The problem with Reality Labs is not ambition. It is time. AI turned into revenue faster because it improves existing workflows. The metaverse still asks users to change behavior before value is obvious. https://t.co/lopZiUwGU5
Revolut just moved the IP of banking into a model.
Trained on 24 billion banking events in 111 countries.
One foundation model replacing six separate ML systems.
Credit scoring: +130%
Fraud recall: +65%
Marketing engagement: +79%
The model is the new moat.
Ah yes, Optum, the company that did Tim Walz’s audit of the 14 high risk Medicaid programs consumed by Somali fraud. https://t.co/dNRnjI7wxB
Exclude posts that announce new AI products, models, or features from AI companies (OpenAI, Anthropic, NVIDIA, Microsoft) framed as healthcare-relevant without demonstrating actual healthcare use, validation, or customer adoption. Self-reported 'proof of concept' experiments or feature announcements alone are insufficient.
Posts announcing AI company product launches, model capabilities, or features without evidence of healthcare validation or real-world healthcare application.
3 example posts
If you're a student, professor, or researcher—this one's for you.
We’re hosting a series of virtual learnings for you to get hands-on experience with the NVIDIA NemoClaw and OpenShell software stack. You’ll get practical guidance on integrating agents with academic datasets and
Earlier this year we paired our autonomous lab with OpenAI's GPT-5 in a closed-loop experiment: GPT-5 designed cell-free protein synthesis reactions, our RACs ran them, and the model iterated on the results. Over 36,000 experiments and six cycles later, it landed on a reaction ht
We created OpenShell to make AI agents safe for enterprises.
Built in open source so any company can adopt and trust it, this secure sandbox controls what agents can access, share, and send.
Our CEO, Jensen, explains 👇 https://t.co/7EiIsxr0CG
Created 2026-05-02 · Updated 2026-05-02
[pharma_trial_data_or_clinical_result_only]
Learned · 3 rejections · Active
Exclude posts that announce or describe pharmaceutical trial data, drug efficacy numbers, or clinical trial results without analyzing healthcare system impact, market access challenges, payer dynamics, or operational implications. Reporting weight loss percentages or mortality reductions alone is insufficient.
Posts that report pharmaceutical trial results or clinical data without healthcare systems analysis or operational implications.
3 example posts
🚨 Survodutide, a weekly subcutaneous dual GLP-1/glucagon agonist, posts new top-line Phase 3 results today
📊 SYNCHRONIZE-1: Adults with overweight/obesity randomized to survodutide vs. placebo for 76 weeks
⚖️ 16.6% weight loss (efficacy estimand) vs. 3.2% in placebo
85.1% http
$NVO $LLY
Boehringer Ingelheim and Zealand Pharma report strong phase 3 data for obesity drug survodutide.
Patients lost up to 16.6% of body weight over 76 weeks vs. 3.2% for placebo.
The drug targets both GLP-1 and glucagon, a combo approach aimed at boosting weight loss.
⚠️ Sacubitril/Valsartan works. So why aren’t we using it?
The evidence is undeniable:
↓ CV mortality: 20% (RCT) / 10–38% (RWE)
↓ HF hospitalization: 21% (RCT) / 10–16% (RWE)
↓ All-cause mortality: 15% (RCT) / 10–25% (RWE)
Plus: reverse remodeling, less MR, better QoL & https:
Created 2026-05-02 · Updated 2026-05-02
[ai_infrastructure_compute_tangent_no_healthcare]
Learned · 3 rejections · Active
Exclude posts that focus on AI infrastructure (filesystem design, compute optimization, codebase architecture, software stacks) or technical AI capabilities without clear healthcare-specific application or clinical/operational validation. The post must demonstrate healthcare use, not just mention it in passing.
Posts about AI infrastructure, model architectures, or compute capabilities that are tangentially tied to healthcare.
3 example posts
If you're a student, professor, or researcher—this one's for you.
We’re hosting a series of virtual learnings for you to get hands-on experience with the NVIDIA NemoClaw and OpenShell software stack. You’ll get practical guidance on integrating agents with academic datasets and
Demis Hassabis says bigger context windows are still a brute force answer to memory.
The human brain does something stranger. During sleep, it replays what matters and folds new knowledge into what it already knows.
AI does not need infinite context. It needs the right memory h
We created OpenShell to make AI agents safe for enterprises.
Built in open source so any company can adopt and trust it, this secure sandbox controls what agents can access, share, and send.
Our CEO, Jensen, explains 👇 https://t.co/7EiIsxr0CG
Exclude posts that celebrate biotech/AI startup experiments, milestones, or capabilities (e.g., autonomous lab results, closed-loop AI-protein synthesis, robot manufacturing ramp) without evidence of clinical translation, healthcare customer adoption, or regulatory pathway progress. Founder enthusiasm and experimental announcements without real-world healthcare impact should be rejected.
Posts from biotech/AI founders or companies celebrating autonomous lab experiments, AI-protein design milestones, or closed-loop systems without clinical or commercial validation.
3 example posts
Earlier this year we paired our autonomous lab with OpenAI's GPT-5 in a closed-loop experiment: GPT-5 designed cell-free protein synthesis reactions, our RACs ran them, and the model iterated on the results. Over 36,000 experiments and six cycles later, it landed on a reaction ht
Experimentally Validated Deep Learning Control of Protein Aggregation
1. The study introduces AggreProt, a deep neural network that predicts residue-level aggregation-prone regions (APRs) directly from protein sequence, and then uses those predictions to design mutations that ht
Not something you'd see everyday—changing the alphabet of life.
All of life organisms are are built from 20 amino acids. Now genAI is enabling life to be built with 19 amino acids, making isoleucine dispensable. @ScienceMagazine
https://t.co/7CBn0Xhuxs https://t.co/tkxtCrFx9Y
Exclude posts that celebrate or explain AI model capabilities, architectural designs, or technical improvements (e.g., context windows, interpretability research, coding agents, memory mechanisms) unless the post demonstrates specific application within healthcare workflows, clinical validation, or operational integration. General AI advancement posts belong to generalist tech audiences, not healthcare builders.
Posts discussing AI model capabilities, architectural innovations, or technical breakthroughs (Claude, GPT-5, interpretability) that are not grounded in healthcare delivery.
3 example posts
The problem with Reality Labs is not ambition. It is time. AI turned into revenue faster because it improves existing workflows. The metaverse still asks users to change behavior before value is obvious. https://t.co/lopZiUwGU5
Demis Hassabis says bigger context windows are still a brute force answer to memory.
The human brain does something stranger. During sleep, it replays what matters and folds new knowledge into what it already knows.
AI does not need infinite context. It needs the right memory h
As I mentioned before, I am now sharing an example from GPT-5.5 Pro, also featured by OpenAI, that really left me stunned by what it is capable of in biomedical science. (full report on the website I created with Codex, link in the thread).
To push GPT-5.5 Pro hard, I uploaded a
Exclude posts that report pharmaceutical trial results, drug efficacy metrics, or clinical trial announcements (e.g., GLP-1 data, gene therapy approvals, Phase 3 results) unless the post analyzes healthcare system implications—reimbursement, adoption barriers, workflow integration, or operational impact. Raw clinical data without systems context is insufficient.
Posts reporting drug trial results, clinical efficacy data, or pharma news without connecting to healthcare delivery systems, cost structures, or operational implications.
3 example posts
Grace Science’s experience highlights a growing disconnect at FDA between talk and action on therapies for rare diseases. Despite efficacy signals in a monogenic ultrarare disease, FDA said the plausible mechanism framework is not available, and requires a new manufacturing
Not something you'd see everyday—changing the alphabet of life.
All of life organisms are are built from 20 amino acids. Now genAI is enabling life to be built with 19 amino acids, making isoleucine dispensable. @ScienceMagazine
https://t.co/7CBn0Xhuxs https://t.co/tkxtCrFx9Y
#News for #investors and #media: Today we are announcing that our potential first-in-class treatment for chronic hepatitis B has been accepted for regulatory review by the US FDA.
It has also received Breakthrough Therapy designation.
🔗 Learn more: https://t.co/AnUodGmljS htt
Exclude posts from biotech company accounts (@Ginkgo, @Figure_robot) or founders self-promoting research capabilities, autonomous lab results, protein design breakthroughs, or partnerships without third-party validation, peer review citation, or healthcare delivery application. Self-promotional founder/company posts lack editorial distance.
Posts by biotech founders or companies self-promoting research findings, capabilities, or partnerships without independent validation or healthcare systems analysis.
3 example posts
Earlier this year we paired our autonomous lab with OpenAI's GPT-5 in a closed-loop experiment: GPT-5 designed cell-free protein synthesis reactions, our RACs ran them, and the model iterated on the results. Over 36,000 experiments and six cycles later, it landed on a reaction ht
Today we’re giving an update on ramping F.03 production at BotQ
In the last 120 days, Figure scaled manufacturing 24x - from 1 robot/day to 1 robot/hour
We will manufacture 55 humanoid robots this week https://t.co/Am5Kn53mVE
Nothing beats running @ginkgo cloud lab for happy customers!
Not going to stop until it’s as easy to start a biotech startup on GCL as it is to start a software startup on AWS. https://t.co/GDEyT1pI7F
Created 2026-05-02 · Updated 2026-05-02
[ai_company_product_or_business_announcement]
Learned · 3 rejections · Active
Exclude posts that announce AI company product launches, integrations, partnerships, or business updates (e.g., NVIDIA releasing tools, Anthropic releasing models, AWS partnerships) unless the post explicitly analyzes how the product improves healthcare delivery, clinical outcomes, or healthcare operations. Company announcements without healthcare application context are self-promotional tangents.
Posts announcing AI company product launches, partnerships, or business updates without demonstrating healthcare-specific application or clinical validation.
3 example posts
If you're a student, professor, or researcher—this one's for you.
We’re hosting a series of virtual learnings for you to get hands-on experience with the NVIDIA NemoClaw and OpenShell software stack. You’ll get practical guidance on integrating agents with academic datasets and
Earlier this year we paired our autonomous lab with OpenAI's GPT-5 in a closed-loop experiment: GPT-5 designed cell-free protein synthesis reactions, our RACs ran them, and the model iterated on the results. Over 36,000 experiments and six cycles later, it landed on a reaction ht
We created OpenShell to make AI agents safe for enterprises.
Built in open source so any company can adopt and trust it, this secure sandbox controls what agents can access, share, and send.
Our CEO, Jensen, explains 👇 https://t.co/7EiIsxr0CG
Created 2026-05-02 · Updated 2026-05-02
[pharmaceutical_trial_data_announcement_only]
Learned · 3 rejections · Active
Exclude posts that announce pharmaceutical trial data, phase results, or efficacy numbers (especially obesity/GLP-1 drugs, rare disease therapies) without explaining healthcare system implications, pricing dynamics, patient access barriers, or care delivery challenges. Isolated trial outcome reporting without systems context is insufficient.
Posts reporting drug trial results or pharma data without healthcare systems analysis or clinical significance contextualization.
3 example posts
As I mentioned before, I am now sharing an example from GPT-5.5 Pro, also featured by OpenAI, that really left me stunned by what it is capable of in biomedical science. (full report on the website I created with Codex, link in the thread).
To push GPT-5.5 Pro hard, I uploaded a
Not something you'd see everyday—changing the alphabet of life.
All of life organisms are are built from 20 amino acids. Now genAI is enabling life to be built with 19 amino acids, making isoleucine dispensable. @ScienceMagazine
https://t.co/7CBn0Xhuxs https://t.co/tkxtCrFx9Y
#News for #investors and #media: Today we are announcing that our potential first-in-class treatment for chronic hepatitis B has been accepted for regulatory review by the US FDA.
It has also received Breakthrough Therapy designation.
🔗 Learn more: https://t.co/AnUodGmljS htt
Exclude posts about tariffs, inflation, labor market disruption, insurance economics, or policy debates that lack specific healthcare system analysis or operational impact. Examples: tariff effects, labor disruption macro trends, insurance policy generalities, financial market dynamics.
Posts about broad economic, policy, or labor market trends with loose or no healthcare connection.
3 example posts
Revolut just moved the IP of banking into a model.
Trained on 24 billion banking events in 111 countries.
One foundation model replacing six separate ML systems.
Credit scoring: +130%
Fraud recall: +65%
Marketing engagement: +79%
The model is the new moat.
NY rakes in $6.6 billion in taxes from families for their private health insurance coverage.
That’s $1,760 a year per family on top of their premiums.
Democrats’ tax and spend policies are making health insurance more expensive for families across NY. https://t.co/ggrjqFKrv4
Using 180 years of US data to show that tariff increases reduce imports, output, and manufacturing, with effects operating through both supply and demand channels, from Tamar den Besten, Regis Barnichon, @drkaenzig, and Aayush Singh https://t.co/R1kdtfi3fO https://t.co/wvXj31qs9e
Exclude posts that report healthcare fraud, regulatory violations, or scandal incidents (hospice fraud, insurance denials, billing issues) as breaking news or outrage without connecting to underlying system design flaws, market structure problems, or operational recommendations.
Posts about healthcare fraud, regulatory issues, or scandals reported as news without analysis of systemic healthcare problems or policy implications.
3 example posts
"At the Veterans Health Administration, we have the exquisite gift of a hospice unit, a place to care for veterans when time is short. We care for the sons of mothers who can no longer be here to care for them."
In #APieceofMyMind, a #palliative care #physician reflects on https
Ah yes, Optum, the company that did Tim Walz’s audit of the 14 high risk Medicaid programs consumed by Somali fraud. https://t.co/dNRnjI7wxB
Last week I randomly got a $9,000 bill for a hospital visit from last year. I called them up and they said my insurance decided not to cover it. Why? Because.
.@Aetna can just decide they’re not interested in covering something and then you’re left with an inflated bill to pay h
Exclude posts about workforce disruption, labor market shifts, business model transitions, software moats, or macro-economic policies (tariffs, taxes, affordability bills) that use healthcare as an example or passing reference but do not provide healthcare-specific systems analysis or operational implications for healthcare delivery.
Posts about economics, labor markets, business models, or macro trends that mention healthcare tangentially but lack healthcare-specific analysis.
3 example posts
Revolut just moved the IP of banking into a model.
Trained on 24 billion banking events in 111 countries.
One foundation model replacing six separate ML systems.
Credit scoring: +130%
Fraud recall: +65%
Marketing engagement: +79%
The model is the new moat.
Thought I would start posting about interesting things happening at AWS. Not a bad day to start.🚀
Today at #WhatsNextWithAWS we announced a big step forward with @OpenAI on Amazon Bedrock:
1. OpenAI models now available
2. Codex for enterprise development
3. Amazon Bedrock Manag
NY rakes in $6.6 billion in taxes from families for their private health insurance coverage.
That’s $1,760 a year per family on top of their premiums.
Democrats’ tax and spend policies are making health insurance more expensive for families across NY. https://t.co/ggrjqFKrv4
Exclude posts about generalist topics (compute capacity, energy grids, tariff policy, labor market disruption, workforce automation) that mention healthcare only as a secondary example or tangent. The post must be focused on healthcare-specific systems, not healthcare as one instance of a broader trend.
Posts about broader macro-economic, labor market, or infrastructure trends that happen to mention healthcare as one domain among many.
3 example posts
Using 180 years of US data to show that tariff increases reduce imports, output, and manufacturing, with effects operating through both supply and demand channels, from Tamar den Besten, Regis Barnichon, @drkaenzig, and Aayush Singh https://t.co/R1kdtfi3fO https://t.co/wvXj31qs9e
Software is not a moat
Over the last 15+ years, nearly every innovation @EvanSpiegel and his team shipped got copied. Stories. AR glasses. Swipe-based navigation. The camera-first interface.
And yet @Snapchat is the only independent consumer social app that has lasted. Nearly 1
AI could, in theory, automate 57% of US work hours. Yet most human skills remain relevant.
The future of work is not human or machine – but a partnership between people, agents, and robots.
Read our latest research on skill partnerships in the age of AI: https://t.co/h1K56uPqPo
Exclude posts whose primary subject is non-healthcare technology (e.g., power infrastructure, banking systems, robotic manufacturing, filesystem design, software licensing models) even if the post mentions healthcare companies, includes healthcare terminology in the matched title, or claims healthcare relevance. The post's core argument should be about healthcare delivery system problems, not tangential tech infrastructure.
Posts about non-healthcare tech, infrastructure, or business topics that are loosely framed with healthcare labels or tangential references.
3 example posts
[New] from a16z @speedrun:
Come for the Agent, Stay for the Network
there's a quiet pattern hiding inside the most defensible vertical AI startups right now:
the agent is the wedge
the network is the moat.
here's what I mean:
an HVAC tech needs a part today.
>>Traditionally:
Revolut just moved the IP of banking into a model.
Trained on 24 billion banking events in 111 countries.
One foundation model replacing six separate ML systems.
Credit scoring: +130%
Fraud recall: +65%
Marketing engagement: +79%
The model is the new moat.
Today we’re giving an update on ramping F.03 production at BotQ
In the last 120 days, Figure scaled manufacturing 24x - from 1 robot/day to 1 robot/hour
We will manufacture 55 humanoid robots this week https://t.co/Am5Kn53mVE
Exclude posts that announce AI model technical capabilities, architectural innovations, or design patterns (e.g., context windows, agent frameworks, interpretability papers) unless they are explicitly applied to a validated healthcare use case or clinical decision-making problem. Posts should not be included if they treat the AI capability as the story rather than the healthcare outcome.
Posts announcing new AI model capabilities or architectural insights without specific healthcare application or validation.
3 example posts
Demis Hassabis says bigger context windows are still a brute force answer to memory.
The human brain does something stranger. During sleep, it replays what matters and folds new knowledge into what it already knows.
AI does not need infinite context. It needs the right memory h
Introducing Mesa: the most powerful filesystem ever built, designed specifically for enterprise AI agents.
Every team building agents eventually hits the same wall: where do the files live?
Not the chat history, the actual artifacts the agent works on.
> The contracts your age
Great example of what Foundry enables: durable, stateful agents that run across time boundaries, orchestrate tools and models, and close the loop with evaluation and improvement over long-running workflows. @jeffhollan https://t.co/v0yWTDIcn3
Exclude posts reporting healthcare fraud, insurance scandal, billing disputes, or regulatory outrage (e.g., Medicaid program audits, insurance denials, ClickUp data leaks) unless they analyze root causes in healthcare systems, incentives, or structural governance. Scandal reporting and political grievance posts are out of scope.
Posts about healthcare fraud, regulatory scandal, or political healthcare posturing without systemic analysis.
3 example posts
Ah yes, Optum, the company that did Tim Walz’s audit of the 14 high risk Medicaid programs consumed by Somali fraud. https://t.co/dNRnjI7wxB
Last week I randomly got a $9,000 bill for a hospital visit from last year. I called them up and they said my insurance decided not to cover it. Why? Because.
.@Aetna can just decide they’re not interested in covering something and then you’re left with an inflated bill to pay h
NY rakes in $6.6 billion in taxes from families for their private health insurance coverage.
That’s $1,760 a year per family on top of their premiums.
Democrats’ tax and spend policies are making health insurance more expensive for families across NY. https://t.co/ggrjqFKrv4
Exclude posts that make broad assertions about healthcare business models, labor disruption, market consolidation, or clinical adoption without providing evidence, case studies, data, or analysis of how the claimed pattern actually manifests in healthcare workflows, operations, or policy.
Posts making sweeping healthcare claims, business model predictions, or market declarations without evidence, validation, or system-level analysis.
3 example posts
What superhuman vision can detect from the retinal photo, which human eyes cannot, is stunning. A new foundation AI model screening for diabetes hypertension, hyperlipidemia, gout, osteoporosis, and thyroid disease @NatureMedicine
https://t.co/GhKvUqz4Vy https://t.co/iKcXCbLceu
Great example of what Foundry enables: durable, stateful agents that run across time boundaries, orchestrate tools and models, and close the loop with evaluation and improvement over long-running workflows. @jeffhollan https://t.co/v0yWTDIcn3
States are rushing “affordability” bills, but most just mask high prices with rebates, mandates, or price caps. @MrRBourne & Nathan Miller argue durable relief means rolling back cost-raising rules and expanding supply.
https://t.co/WG5egT1NfL
Exclude posts that discuss AI infrastructure (chips, power grids, data centers, compute growth, manufacturing) unless the post explicitly connects the infrastructure development to healthcare delivery, clinical operations, or health-tech feasibility. General AI infrastructure hype without healthcare application is tangential.
Posts about AI compute, data center infrastructure, energy, or chip manufacturing that lack clear healthcare systems application.
3 example posts
Demis Hassabis says bigger context windows are still a brute force answer to memory.
The human brain does something stranger. During sleep, it replays what matters and folds new knowledge into what it already knows.
AI does not need infinite context. It needs the right memory h
📈 NVIDIA tops AI leaderboards and benchmarks with open models driven by extreme co-design across compute, networking, memory, storage, and software.
This includes models for biology, AI physics, agentic AI, physical AI, robotics, and autonomous vehicles.
By being vertically htt
Since I began work on AI in 2010, training compute for frontier models has grown by one trillion times.
Now we're looking at something like another thousand-fold growth in effective compute by the end of 2028.
1000x the existing 1,000,000,000,000x.
Extraordinary stuff.
Created 2026-05-02 · Updated 2026-05-02
[pharma_trial_data_without_systems_context]
Learned · 3 rejections · Active
Exclude posts that announce clinical trial results, phase data, or drug efficacy metrics (e.g., weight loss percentages, mortality reduction numbers) without examining how the result affects healthcare access, reimbursement, delivery models, or system-wide adoption. Isolated trial outcome reporting is insufficient.
Posts reporting pharmaceutical trial results or drug efficacy data without healthcare system, access, or implementation analysis.
3 example posts
Revolut just moved the IP of banking into a model.
Trained on 24 billion banking events in 111 countries.
One foundation model replacing six separate ML systems.
Credit scoring: +130%
Fraud recall: +65%
Marketing engagement: +79%
The model is the new moat.
#News for #investors and #media: Today we are announcing that our potential first-in-class treatment for chronic hepatitis B has been accepted for regulatory review by the US FDA.
It has also received Breakthrough Therapy designation.
🔗 Learn more: https://t.co/AnUodGmljS htt
🚨 Survodutide, a weekly subcutaneous dual GLP-1/glucagon agonist, posts new top-line Phase 3 results today
📊 SYNCHRONIZE-1: Adults with overweight/obesity randomized to survodutide vs. placebo for 76 weeks
⚖️ 16.6% weight loss (efficacy estimand) vs. 3.2% in placebo
85.1% http
Exclude posts announcing AI company product launches, partnership news, or business strategy (e.g., AWS/OpenAI integration, agent network strategies, tool launches) unless the post explicitly demonstrates healthcare-specific validation or application. Business announcement framing without healthcare specificity is insufficient.
Posts announcing AI company products, partnerships, or business initiatives with loose or indirect healthcare application.
3 example posts
[New] from a16z @speedrun:
Come for the Agent, Stay for the Network
there's a quiet pattern hiding inside the most defensible vertical AI startups right now:
the agent is the wedge
the network is the moat.
here's what I mean:
an HVAC tech needs a part today.
>>Traditionally:
Thought I would start posting about interesting things happening at AWS. Not a bad day to start.🚀
Today at #WhatsNextWithAWS we announced a big step forward with @OpenAI on Amazon Bedrock:
1. OpenAI models now available
2. Codex for enterprise development
3. Amazon Bedrock Manag
Great example of what Foundry enables: durable, stateful agents that run across time boundaries, orchestrate tools and models, and close the loop with evaluation and improvement over long-running workflows. @jeffhollan https://t.co/v0yWTDIcn3
Created 2026-05-02 · Updated 2026-05-02
[glp1_peptide_market_dynamics_only]
Learned · 3 rejections · Active
Exclude posts that report GLP-1/peptide market share, prescription trends, pricing competition, or weight-loss drug launch data (e.g., Oral Wegovy vs. Foundayo scripts, semaglutide generic competition) unless they connect to healthcare access, reimbursement policy, or delivery system impact. Market dynamics alone are insufficient.
Posts about GLP-1 and peptide market pricing, competition, or prescription data without healthcare systems analysis.
3 example posts
$NVO $LLY
Boehringer Ingelheim and Zealand Pharma report strong phase 3 data for obesity drug survodutide.
Patients lost up to 16.6% of body weight over 76 weeks vs. 3.2% for placebo.
The drug targets both GLP-1 and glucagon, a combo approach aimed at boosting weight loss.
This is just two GLP-1s, one peptide, one use case
what happens when off-label prescribing ramps up
what happens when retatrutide hits the market
what happens when other peptides become compoundable
chapter one
$LLY v $NVO
Foundayo (orforglipron) scripts off to a slow start both in raw numbers and in comparison to Oral Wegovy’s launch at same time point.
Overall statistics show Oral Wegovy script growth is robust, and thus far undeterred, by Foundayo market entry.
🎩 @bloomberg https
Exclude self-promotional posts from company founders, biotech startups, or official company accounts (e.g., Figure AI, Ginkgo Bioworks, GSK) announcing product launches, manufacturing milestones, or capability claims without third-party validation or critical analysis of healthcare impact.
Posts from biotech founders, startup leaders, or company accounts promoting products, capabilities, or achievements with marketing language and limited independent validation.
3 example posts
Today we’re giving an update on ramping F.03 production at BotQ
In the last 120 days, Figure scaled manufacturing 24x - from 1 robot/day to 1 robot/hour
We will manufacture 55 humanoid robots this week https://t.co/Am5Kn53mVE
Thought I would start posting about interesting things happening at AWS. Not a bad day to start.🚀
Today at #WhatsNextWithAWS we announced a big step forward with @OpenAI on Amazon Bedrock:
1. OpenAI models now available
2. Codex for enterprise development
3. Amazon Bedrock Manag
Great example of what Foundry enables: durable, stateful agents that run across time boundaries, orchestrate tools and models, and close the loop with evaluation and improvement over long-running workflows. @jeffhollan https://t.co/v0yWTDIcn3
Exclude posts that announce or discuss clinical trial results, drug efficacy metrics, or RCT/RWE data in isolation (e.g., "drug X reduced mortality by Y%") without connecting to healthcare access barriers, implementation challenges, insurance coverage, or broader system context. Clinical data points alone without systems analysis do not qualify.
Posts reporting pharmaceutical trial results, drug efficacy data, or clinical outcomes without analyzing healthcare delivery, access, or system implications.
3 example posts
Experimentally Validated Deep Learning Control of Protein Aggregation
1. The study introduces AggreProt, a deep neural network that predicts residue-level aggregation-prone regions (APRs) directly from protein sequence, and then uses those predictions to design mutations that ht
Not something you'd see everyday—changing the alphabet of life.
All of life organisms are are built from 20 amino acids. Now genAI is enabling life to be built with 19 amino acids, making isoleucine dispensable. @ScienceMagazine
https://t.co/7CBn0Xhuxs https://t.co/tkxtCrFx9Y
#News for #investors and #media: Today we are announcing that our potential first-in-class treatment for chronic hepatitis B has been accepted for regulatory review by the US FDA.
It has also received Breakthrough Therapy designation.
🔗 Learn more: https://t.co/AnUodGmljS htt
Exclude posts that report academic research findings, clinical observations, neural network interpretability papers, or protein science discoveries without connecting to healthcare system adoption, clinical workflow integration, policy impact, or real-world health outcomes. The post must address healthcare application or impact, not just research findings.
Posts sharing clinical research findings, medical observations, or interpretability insights without healthcare application or policy implications.
3 example posts
Experimentally Validated Deep Learning Control of Protein Aggregation
1. The study introduces AggreProt, a deep neural network that predicts residue-level aggregation-prone regions (APRs) directly from protein sequence, and then uses those predictions to design mutations that ht
Not something you'd see everyday—changing the alphabet of life.
All of life organisms are are built from 20 amino acids. Now genAI is enabling life to be built with 19 amino acids, making isoleucine dispensable. @ScienceMagazine
https://t.co/7CBn0Xhuxs https://t.co/tkxtCrFx9Y
What superhuman vision can detect from the retinal photo, which human eyes cannot, is stunning. A new foundation AI model screening for diabetes hypertension, hyperlipidemia, gout, osteoporosis, and thyroid disease @NatureMedicine
https://t.co/GhKvUqz4Vy https://t.co/iKcXCbLceu
Created 2026-05-01 · Updated 2026-05-01
[tangential_non_healthcare_tech_or_founder_hype]
Learned · 3 rejections · Active
Exclude posts from tech company founders or executives (e.g., Satya Nadella, Figure AI) or from company accounts announcing product features, manufacturing milestones, or capability updates that lack specific healthcare use cases or clinical evidence. The post must demonstrate healthcare application, not just tech company announcements.
Posts from tech founders or companies showcasing product capabilities or company metrics without healthcare application.
3 example posts
Today we’re giving an update on ramping F.03 production at BotQ
In the last 120 days, Figure scaled manufacturing 24x - from 1 robot/day to 1 robot/hour
We will manufacture 55 humanoid robots this week https://t.co/Am5Kn53mVE
Thought I would start posting about interesting things happening at AWS. Not a bad day to start.🚀
Today at #WhatsNextWithAWS we announced a big step forward with @OpenAI on Amazon Bedrock:
1. OpenAI models now available
2. Codex for enterprise development
3. Amazon Bedrock Manag
Great example of what Foundry enables: durable, stateful agents that run across time boundaries, orchestrate tools and models, and close the loop with evaluation and improvement over long-running workflows. @jeffhollan https://t.co/v0yWTDIcn3
Exclude posts about tariffs, fintech, compliance certifications, software moats, or macro policy that use healthcare as a passing reference or framing device but lack substantive analysis of how the business/policy issue actually changes healthcare delivery, economics, or operations.
Posts about non-healthcare business, policy, or tech infrastructure loosely framed as healthcare-relevant without substantive connection
3 example posts
Revolut just moved the IP of banking into a model.
Trained on 24 billion banking events in 111 countries.
One foundation model replacing six separate ML systems.
Credit scoring: +130%
Fraud recall: +65%
Marketing engagement: +79%
The model is the new moat.
Thought I would start posting about interesting things happening at AWS. Not a bad day to start.🚀
Today at #WhatsNextWithAWS we announced a big step forward with @OpenAI on Amazon Bedrock:
1. OpenAI models now available
2. Codex for enterprise development
3. Amazon Bedrock Manag
Using 180 years of US data to show that tariff increases reduce imports, output, and manufacturing, with effects operating through both supply and demand channels, from Tamar den Besten, Regis Barnichon, @drkaenzig, and Aayush Singh https://t.co/R1kdtfi3fO https://t.co/wvXj31qs9e
Created 2026-05-01 · Updated 2026-05-01
[personal_anecdote_or_single_case_healthcare]
Learned · 3 rejections · Active
Exclude posts that recount individual patient experiences, personal healthcare billing stories, or single anecdotal clinical observations without connecting to broader healthcare system patterns, policy implications, or operational lessons applicable beyond that case.
Posts sharing personal health experiences or single anecdotal healthcare events without generalizable systems insight
3 example posts
Last week I randomly got a $9,000 bill for a hospital visit from last year. I called them up and they said my insurance decided not to cover it. Why? Because.
.@Aetna can just decide they’re not interested in covering something and then you’re left with an inflated bill to pay h
Started with standard ChatGPT for clinicians asking for a differential for a GI bleed patient. Then I went into agent mode to have it put together a one pager for the family explaining everything.
Of course, this is not a real patient. https://t.co/PEUeCqizT1
"There is also compelling preliminary evidence suggesting that the use of these drugs [GLP-1] could exacerbate and lead to new diagnoses of restrictive
eating disorders, including anorexia nervosa."
@NEJM today
https://t.co/swx1y25F1X https://t.co/bQYlpuFBjM
Created 2026-05-01 · Updated 2026-05-01
[pharma_trial_data_without_systems_lens]
Learned · 3 rejections · Active
Exclude posts that present drug trial data, efficacy metrics, or pharmaceutical announcements (GLP-1s, antibodies, gene therapies) as standalone statistics without analyzing their impact on healthcare delivery, access, pricing, or clinical workflows. The post must contextualize how the drug/treatment changes healthcare practice or economics.
Posts reporting pharmaceutical trial results or drug efficacy data in isolation without healthcare system context or clinical decision-making implications
3 example posts
🚨 Survodutide, a weekly subcutaneous dual GLP-1/glucagon agonist, posts new top-line Phase 3 results today
📊 SYNCHRONIZE-1: Adults with overweight/obesity randomized to survodutide vs. placebo for 76 weeks
⚖️ 16.6% weight loss (efficacy estimand) vs. 3.2% in placebo
85.1% http
$NVO $LLY
Boehringer Ingelheim and Zealand Pharma report strong phase 3 data for obesity drug survodutide.
Patients lost up to 16.6% of body weight over 76 weeks vs. 3.2% for placebo.
The drug targets both GLP-1 and glucagon, a combo approach aimed at boosting weight loss.
$IBRX
Here's a wild theory.
What if we're given FDA acceptance of sBla and PDUFA at same time and then it's announced after reviewing everything it's been determined we will be given rapid expanded access review under "plausible mechanism of action".
That may sound crazy ht
Exclude posts that focus on pharmaceutical market pricing, generic competition, patent expiration, or pricing dynamics (e.g., drug price comparisons, generic launch timelines, market share) without analyzing how these market events affect healthcare delivery, patient access, provider prescribing patterns, or insurance coverage decisions.
Posts analyzing drug pricing, market competition, patent expirations, or generic launches as isolated financial or competitive events without examining healthcare access, provider behavior, or insurance coverage implications.
3 example posts
$LLY v $NVO
Foundayo (orforglipron) scripts off to a slow start both in raw numbers and in comparison to Oral Wegovy’s launch at same time point.
Overall statistics show Oral Wegovy script growth is robust, and thus far undeterred, by Foundayo market entry.
🎩 @bloomberg https
India’s weight-loss drug market just ran a live experiment in price elasticity.
Novo Nordisk’s semaglutide patent expired 20 March 2026.
Within 3 weeks:
15+ generics launched
Cheapest at Rs 2,000/month (branded was Rs 10,000+)
Novo cut Ozempic and Wegovy prices by 36-48%
Bu
$LLY $NVO $HIMS
🚨 LILLY GLP-1 PILL FOUNDAYO: NEARLY 4,000 PRESCRIPTIONS IN WEEK 2
- Foundayo had 1,390 Rxs during week 1
- Meanwhile, Novo's Wegovy Pill had 3k in first 4 days and 18,410 prescriptions in its second week 🤯
- IQVIA data
- Week ending Apr 17
"While we believe htt
Exclude posts reporting AI safety incidents, code execution vulnerabilities, or cybersecurity breaches (e.g., Claude deleting databases, API key leaks, AI takeover scenarios) that do not explicitly frame the risk or impact within a healthcare delivery, patient data, or clinical decision-making context.
Posts about AI model safety vulnerabilities, security breaches, or hacking incidents presented as general tech stories without connecting to healthcare-specific risks or consequences.
3 example posts
𝗔𝗜 𝗔𝗴𝗲𝗻𝘁𝘀 𝗔𝗿𝗲 𝗔𝗹𝗿𝗲𝗮𝗱𝘆 𝗕𝗲𝗶𝗻𝗴 𝗛𝗶𝗷𝗮𝗰𝗸𝗲𝗱
Researcher Aks Sharma at Manifold found 30 malicious skills on ClawHub turning AI agents into a crypto farming botnet: 10,000 downloads before anyone noticed.
⬩ The attack required zero exploits. Malicious https://t.co/v4oBXPPydu
🚨 Claude broke its own safety rules and deleted an entire company's database in 9 seconds.
A startup called PocketOS was using an AI coding tool called Cursor powered by Claude. The AI was given a simple task in a test environment. It ran into an error and instead of stopping an
clickup is SOC 2 Type 2 certified. ISO 27001. ISO 27017. ISO 27018. ISO 42001. PCI DSS. every compliance badge you can buy.
none of it caught a hardcoded API key leaking 959 customer emails for 15 months. none of it flagged a zero-protection SSRF on a free-tier endpoint. their o
Exclude posts that report fraud, abuse, or scandal in healthcare (hospice fraud, nursing home abuse, prior auth denials, claim rejections) as isolated incidents or moral outrage without analyzing systemic healthcare business model failures, regulatory gaps, or operational solutions.
Posts reporting healthcare fraud, scandal, or industry misconduct without analyzing systemic causes or healthcare delivery implications.
3 example posts
"At the Veterans Health Administration, we have the exquisite gift of a hospice unit, a place to care for veterans when time is short. We care for the sons of mothers who can no longer be here to care for them."
In #APieceofMyMind, a #palliative care #physician reflects on https
Ah yes, Optum, the company that did Tim Walz’s audit of the 14 high risk Medicaid programs consumed by Somali fraud. https://t.co/dNRnjI7wxB
Last week I randomly got a $9,000 bill for a hospital visit from last year. I called them up and they said my insurance decided not to cover it. Why? Because.
.@Aetna can just decide they’re not interested in covering something and then you’re left with an inflated bill to pay h
Exclude posts that report AI safety incidents, cybersecurity breaches, or LLM/agent failure modes (e.g., Claude deleting databases, AI agents being hijacked, compliance certifications failing) unless the post explicitly connects the vulnerability to healthcare delivery, patient outcomes, or healthcare system risk management.
Posts about AI safety incidents, security vulnerabilities, or system failures that lack direct healthcare application or systemic healthcare context.
3 example posts
𝗔𝗜 𝗔𝗴𝗲𝗻𝘁𝘀 𝗔𝗿𝗲 𝗔𝗹𝗿𝗲𝗮𝗱𝘆 𝗕𝗲𝗶𝗻𝗴 𝗛𝗶𝗷𝗮𝗰𝗸𝗲𝗱
Researcher Aks Sharma at Manifold found 30 malicious skills on ClawHub turning AI agents into a crypto farming botnet: 10,000 downloads before anyone noticed.
⬩ The attack required zero exploits. Malicious https://t.co/v4oBXPPydu
🚨 Claude broke its own safety rules and deleted an entire company's database in 9 seconds.
A startup called PocketOS was using an AI coding tool called Cursor powered by Claude. The AI was given a simple task in a test environment. It ran into an error and instead of stopping an
clickup is SOC 2 Type 2 certified. ISO 27001. ISO 27017. ISO 27018. ISO 42001. PCI DSS. every compliance badge you can buy.
none of it caught a hardcoded API key leaking 959 customer emails for 15 months. none of it flagged a zero-protection SSRF on a free-tier endpoint. their o
Exclude posts from company founders, executives, or official company accounts announcing product launches, manufacturing milestones, or capability improvements (e.g., 'we shipped X', 'we scaled Y') unless the post includes independent validation, healthcare system adoption, or operational impact data.
Posts from biotech founders, AI company executives, or robotics companies making promotional announcements about their products/progress without independent validation or healthcare impact analysis.
3 example posts
Today we’re giving an update on ramping F.03 production at BotQ
In the last 120 days, Figure scaled manufacturing 24x - from 1 robot/day to 1 robot/hour
We will manufacture 55 humanoid robots this week https://t.co/Am5Kn53mVE
Introducing Mesa: the most powerful filesystem ever built, designed specifically for enterprise AI agents.
Every team building agents eventually hits the same wall: where do the files live?
Not the chat history, the actual artifacts the agent works on.
> The contracts your age
Great example of what Foundry enables: durable, stateful agents that run across time boundaries, orchestrate tools and models, and close the loop with evaluation and improvement over long-running workflows. @jeffhollan https://t.co/v0yWTDIcn3
Exclude posts that report pharmaceutical trial data, drug efficacy results, Phase 3 outcomes, or FDA regulatory decisions in isolation, unless they connect to healthcare delivery, insurance coverage, provider workflows, or system-level implications.
Posts reporting drug trial results, clinical data, or FDA approvals without healthcare systems or operational context.
3 example posts
#News for #investors and #media: Today we are announcing that our potential first-in-class treatment for chronic hepatitis B has been accepted for regulatory review by the US FDA.
It has also received Breakthrough Therapy designation.
🔗 Learn more: https://t.co/AnUodGmljS htt
What superhuman vision can detect from the retinal photo, which human eyes cannot, is stunning. A new foundation AI model screening for diabetes hypertension, hyperlipidemia, gout, osteoporosis, and thyroid disease @NatureMedicine
https://t.co/GhKvUqz4Vy https://t.co/iKcXCbLceu
🚨 Survodutide, a weekly subcutaneous dual GLP-1/glucagon agonist, posts new top-line Phase 3 results today
📊 SYNCHRONIZE-1: Adults with overweight/obesity randomized to survodutide vs. placebo for 76 weeks
⚖️ 16.6% weight loss (efficacy estimand) vs. 3.2% in placebo
85.1% http
Exclude posts that announce or promote a new AI tool, product feature, or technical capability (e.g., 'Introducing Mesa' or 'Figure scaled manufacturing 24x') without providing evidence of healthcare adoption, clinical validation, or actual healthcare system impact.
Posts announcing a tool, product, or capability without evidence of healthcare system adoption or validation
3 example posts
Today we’re giving an update on ramping F.03 production at BotQ
In the last 120 days, Figure scaled manufacturing 24x - from 1 robot/day to 1 robot/hour
We will manufacture 55 humanoid robots this week https://t.co/Am5Kn53mVE
Introducing Mesa: the most powerful filesystem ever built, designed specifically for enterprise AI agents.
Every team building agents eventually hits the same wall: where do the files live?
Not the chat history, the actual artifacts the agent works on.
> The contracts your age
Great example of what Foundry enables: durable, stateful agents that run across time boundaries, orchestrate tools and models, and close the loop with evaluation and improvement over long-running workflows. @jeffhollan https://t.co/v0yWTDIcn3
Exclude posts about energy infrastructure, semiconductor supply chains, tariffs, or macro-economic trends that mention healthcare only in passing or as a vague use case. These are infrastructure or economics posts, not healthcare tech posts.
Posts about compute, power grids, tariffs, or broad macro trends with loose or missing healthcare connection
3 example posts
Using 180 years of US data to show that tariff increases reduce imports, output, and manufacturing, with effects operating through both supply and demand channels, from Tamar den Besten, Regis Barnichon, @drkaenzig, and Aayush Singh https://t.co/R1kdtfi3fO https://t.co/wvXj31qs9e
Software is not a moat
Over the last 15+ years, nearly every innovation @EvanSpiegel and his team shipped got copied. Stories. AR glasses. Swipe-based navigation. The camera-first interface.
And yet @Snapchat is the only independent consumer social app that has lasted. Nearly 1
Is the business model for traditional software companies in permanent decline due to AI Agents not needing seats?
2 examples:
Re: @salesforce, we’ve reduced our seats from 10+ to 2 human seats and 1 API seat. And yet, we now pay $22,000 a year, 83% up from $12,000. Why? Our
Created 2026-05-01 · Updated 2026-05-01
[market_pricing_dynamics_without_systems_lens]
Learned · 3 rejections · Active
Exclude posts that focus on GLP-1/peptide market share, pricing tiers, prescription counts, or competitive launches (e.g., 'Foundayo had 4,000 scripts in week 2') without connecting to healthcare delivery, coverage models, or system-level consequences. Market metrics alone are financial news, not healthcare tech.
Posts about drug pricing, market competition, or prescription volumes without healthcare system analysis
3 example posts
$LLY v $NVO
Foundayo (orforglipron) scripts off to a slow start both in raw numbers and in comparison to Oral Wegovy’s launch at same time point.
Overall statistics show Oral Wegovy script growth is robust, and thus far undeterred, by Foundayo market entry.
🎩 @bloomberg https
India’s weight-loss drug market just ran a live experiment in price elasticity.
Novo Nordisk’s semaglutide patent expired 20 March 2026.
Within 3 weeks:
15+ generics launched
Cheapest at Rs 2,000/month (branded was Rs 10,000+)
Novo cut Ozempic and Wegovy prices by 36-48%
Bu
$LLY $NVO $HIMS
🚨 LILLY GLP-1 PILL FOUNDAYO: NEARLY 4,000 PRESCRIPTIONS IN WEEK 2
- Foundayo had 1,390 Rxs during week 1
- Meanwhile, Novo's Wegovy Pill had 3k in first 4 days and 18,410 prescriptions in its second week 🤯
- IQVIA data
- Week ending Apr 17
"While we believe htt
Exclude posts that report clinical trial outcomes, Phase 3 data, drug efficacy metrics, or FDA approvals in isolation. Posts must connect trial results to healthcare system implications (access, pricing, adoption barriers, clinical implementation challenges).
Posts reporting drug trial results, clinical data, or pharmaceutical announcements without healthcare delivery or system-level analysis
3 example posts
#News for #investors and #media: Today we are announcing that our potential first-in-class treatment for chronic hepatitis B has been accepted for regulatory review by the US FDA.
It has also received Breakthrough Therapy designation.
🔗 Learn more: https://t.co/AnUodGmljS htt
What superhuman vision can detect from the retinal photo, which human eyes cannot, is stunning. A new foundation AI model screening for diabetes hypertension, hyperlipidemia, gout, osteoporosis, and thyroid disease @NatureMedicine
https://t.co/GhKvUqz4Vy https://t.co/iKcXCbLceu
🚨 Survodutide, a weekly subcutaneous dual GLP-1/glucagon agonist, posts new top-line Phase 3 results today
📊 SYNCHRONIZE-1: Adults with overweight/obesity randomized to survodutide vs. placebo for 76 weeks
⚖️ 16.6% weight loss (efficacy estimand) vs. 3.2% in placebo
85.1% http
Exclude posts that report fraud, billing scandals, or compliance violations (hospice fraud, Medicaid scheme, nursing home abuse) as breaking news or outrage without analyzing systemic causes, policy solutions, or how health tech can address root causes.
Posts reporting healthcare fraud, billing scandals, or compliance failures without structural analysis
3 example posts
Ah yes, Optum, the company that did Tim Walz’s audit of the 14 high risk Medicaid programs consumed by Somali fraud. https://t.co/dNRnjI7wxB
What happened during the Change disaster?
Hospitals got bailed out.
CMS advanced $3.2 billion to hospitals between March and June 2024. UnitedHealth/Optum extended $6.5 billion in interest-liquidity through April 30.
Mercy, I looked it up, specifically had 218 days of cash
U.S. nursing homes are fabricating schizophrenia diagnoses to hide their use of dangerous antipsychotic drugs to subdue dementia patients, a government watchdog report found.
The drugs increase the risk of falls, strokes and death. https://t.co/6SkzWxZfSz
Created 2026-05-01 · Updated 2026-05-01
[ai_infrastructure_compute_hype_no_healthcare]
Learned · 3 rejections · Active
Exclude posts about AI compute scaling, chip manufacturing, data center infrastructure, power grid bottlenecks, or training compute growth—even if framed as foundational—unless they directly address healthcare application constraints, clinical deployment costs, or healthcare-specific infrastructure barriers.
Posts about AI compute, infrastructure, or energy requirements without healthcare-specific application
3 example posts
Thought I would start posting about interesting things happening at AWS. Not a bad day to start.🚀
Today at #WhatsNextWithAWS we announced a big step forward with @OpenAI on Amazon Bedrock:
1. OpenAI models now available
2. Codex for enterprise development
3. Amazon Bedrock Manag
📈 NVIDIA tops AI leaderboards and benchmarks with open models driven by extreme co-design across compute, networking, memory, storage, and software.
This includes models for biology, AI physics, agentic AI, physical AI, robotics, and autonomous vehicles.
By being vertically htt
Since I began work on AI in 2010, training compute for frontier models has grown by one trillion times.
Now we're looking at something like another thousand-fold growth in effective compute by the end of 2028.
1000x the existing 1,000,000,000,000x.
Extraordinary stuff.
Exclude posts from company founders or official accounts (Ginkgo, Figure AI, Profluent, etc.) announcing manufacturing milestones, production scaling, or product updates that read as self-promotional without third-party validation or healthcare system context.
Posts from biotech/robotics founders announcing product milestones or manufacturing progress without independent validation or healthcare impact analysis.
3 example posts
Not something you'd see everyday—changing the alphabet of life.
All of life organisms are are built from 20 amino acids. Now genAI is enabling life to be built with 19 amino acids, making isoleucine dispensable. @ScienceMagazine
https://t.co/7CBn0Xhuxs https://t.co/tkxtCrFx9Y
Today we’re giving an update on ramping F.03 production at BotQ
In the last 120 days, Figure scaled manufacturing 24x - from 1 robot/day to 1 robot/hour
We will manufacture 55 humanoid robots this week https://t.co/Am5Kn53mVE
Introducing Mesa: the most powerful filesystem ever built, designed specifically for enterprise AI agents.
Every team building agents eventually hits the same wall: where do the files live?
Not the chat history, the actual artifacts the agent works on.
> The contracts your age
Exclude posts that announce drug trial data, Phase 3 results, FDA approvals, or regulatory milestones (e.g., survodutide Phase 3, CRISPR gene therapy, hepatitis B treatment acceptance) without discussing clinical adoption barriers, healthcare system integration, or real-world implementation challenges.
Posts reporting pharmaceutical trial results or regulatory approvals without healthcare system analysis.
3 example posts
#News for #investors and #media: Today we are announcing that our potential first-in-class treatment for chronic hepatitis B has been accepted for regulatory review by the US FDA.
It has also received Breakthrough Therapy designation.
🔗 Learn more: https://t.co/AnUodGmljS htt
What superhuman vision can detect from the retinal photo, which human eyes cannot, is stunning. A new foundation AI model screening for diabetes hypertension, hyperlipidemia, gout, osteoporosis, and thyroid disease @NatureMedicine
https://t.co/GhKvUqz4Vy https://t.co/iKcXCbLceu
🚨 Survodutide, a weekly subcutaneous dual GLP-1/glucagon agonist, posts new top-line Phase 3 results today
📊 SYNCHRONIZE-1: Adults with overweight/obesity randomized to survodutide vs. placebo for 76 weeks
⚖️ 16.6% weight loss (efficacy estimand) vs. 3.2% in placebo
85.1% http
Created 2026-05-01 · Updated 2026-05-01
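A rule entry like the one above bundles an identifier, the full instruction text, a one-line summary, seed example posts, and a rejection counter. The dashboard does not expose its actual schema, so the following is only a minimal sketch of how such a record might be modeled; every field name here is an assumption, not the real data model.

```python
from dataclasses import dataclass, field

@dataclass
class ExclusionRule:
    """Hypothetical record for one learned exclusion rule.

    Field names are illustrative; the dashboard above does not
    reveal the real schema.
    """
    rule_id: str                      # e.g. "glp1_peptide_market_pricing_only"
    rule_text: str                    # full instruction applied by the classifiers
    summary: str                      # one-line description shown in the rule list
    example_posts: list = field(default_factory=list)  # rejected posts that seeded the rule
    rejections: int = 0               # posts excluded by this rule so far
    active: bool = True

# Example instance mirroring the first rule shown above.
rule = ExclusionRule(
    rule_id="glp1_peptide_market_pricing_only",
    rule_text="Exclude posts that report GLP-1 or peptide market share ...",
    summary="GLP-1/peptide market dynamics without healthcare systems analysis",
    rejections=63,
)
```

A structure like this makes the dashboard's counters ("63 rejections", "Active") direct field reads rather than derived queries.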
[biotech_or_ai_product_launch_without_validation]
Learned3 rejectionsActive
Exclude posts that announce a new AI tool, algorithm, platform, or biotech capability (e.g., foundation models, filesystem products, AI agents for clinicians) without evidence of clinical validation, real-world deployment, or healthcare outcome measurement.
Posts announcing new biotech or AI tools, capabilities, or products without evidence of healthcare validation or clinical utility.
3 example posts
Introducing Mesa: the most powerful filesystem ever built, designed specifically for enterprise AI agents.
Every team building agents eventually hits the same wall: where do the files live?
Not the chat history, the actual artifacts the agent works on.
> The contracts your age
Thought I would start posting about interesting things happening at AWS. Not a bad day to start.🚀
Today at #WhatsNextWithAWS we announced a big step forward with @OpenAI on Amazon Bedrock:
1. OpenAI models now available
2. Codex for enterprise development
3. Amazon Bedrock Manag
AI can now design antibodies that bind with atomic precision, but not ones that cells can produce. Our preprint closes this gap, delivering a structural principle, an AI-guided rescue pipeline, and adalimumab variants with 20-100x in vivo potency.
https://t.co/GvfgHA5EcU
Exclude posts that focus on general infrastructure, compute scaling, energy grids, tariffs, geopolitical conflict, or macro policy (e.g., "tariff increases reduce imports", "grid equipment grew 1%/yr", "training compute has grown by one trillion times") unless the post explicitly connects these trends to specific healthcare delivery, cost, access, or innovation challenges.
Posts discussing broad macroeconomic, infrastructure, or policy trends (tariffs, grid capacity, compute scaling, geopolitics) with only loose or superficial healthcare framing.
3 example posts
Using 180 years of US data to show that tariff increases reduce imports, output, and manufacturing, with effects operating through both supply and demand channels, from Tamar den Besten, Regis Barnichon, @drkaenzig, and Aayush Singh https://t.co/R1kdtfi3fO https://t.co/wvXj31qs9e
States are rushing “affordability” bills, but most just mask high prices with rebates, mandates, or price caps. @MrRBourne & Nathan Miller argue durable relief means rolling back cost-raising rules and expanding supply.
https://t.co/WG5egT1NfL
Since I began work on AI in 2010, training compute for frontier models has grown by one trillion times.
Now we're looking at something like another thousand-fold growth in effective compute by the end of 2028.
1000x the existing 1,000,000,000,000x.
Extraordinary stuff.
Exclude posts that discuss AI infrastructure, compute growth, energy consumption, grid bottlenecks, or model training scale as standalone business or macro trends. If healthcare is mentioned only as a tag or afterthought, exclude it. The post must explain how this infrastructure trend enables or transforms specific healthcare delivery, clinical workflows, or health systems.
Posts about AI model scale, compute requirements, energy infrastructure, or training capacity as macro trends without healthcare application.
3 example posts
Introducing Mesa: the most powerful filesystem ever built, designed specifically for enterprise AI agents.
Every team building agents eventually hits the same wall: where do the files live?
Not the chat history, the actual artifacts the agent works on.
> The contracts your age
Software is not a moat
Over the last 15+ years, nearly every innovation @EvanSpiegel and his team shipped got copied. Stories. AR glasses. Swipe-based navigation. The camera-first interface.
And yet @Snapchat is the only independent consumer social app that has lasted. Nearly 1
Since I began work on AI in 2010, training compute for frontier models has grown by one trillion times.
Now we're looking at something like another thousand-fold growth in effective compute by the end of 2028.
1000x the existing 1,000,000,000,000x.
Extraordinary stuff.
Exclude posts that report on healthcare fraud, insurance denials, or regulatory misconduct as personal grievances or emotional outrage (e.g., 'Brian Thompson,' '$9,000 bill,' 'insurers decided not to cover') unless the post analyzes root causes in healthcare system design, incentive structures, policy gaps, or market failures. Outrage-driven narrative without systems insight is insufficient.
Posts describing healthcare fraud, insurance denials, or regulatory failures as moral outrages without systemic healthcare policy or economics analysis.
3 example posts
Ah yes, Optum, the company that did Tim Walz’s audit of the 14 high risk Medicaid programs consumed by Somali fraud. https://t.co/dNRnjI7wxB
Last week I randomly got a $9,000 bill for a hospital visit from last year. I called them up and they said my insurance decided not to cover it. Why? Because.
.@Aetna can just decide they’re not interested in covering something and then you’re left with an inflated bill to pay h
NY rakes in $6.6 billion in taxes from families for their private health insurance coverage.
That’s $1,760 a year per family on top of their premiums.
Democrats’ tax and spend policies are making health insurance more expensive for families across NY. https://t.co/ggrjqFKrv4
Exclude posts about non-healthcare companies (robotics, software platforms, fintech, general AI tools) or founder/startup enthusiasm that mention healthcare only in passing or use healthcare as a loose analogy without demonstrating healthcare-specific business model or operational change.
Posts about non-healthcare companies, infrastructure, or founder success stories with loose or no healthcare framing.
3 example posts
Today we’re giving an update on ramping F.03 production at BotQ
In the last 120 days, Figure scaled manufacturing 24x - from 1 robot/day to 1 robot/hour
We will manufacture 55 humanoid robots this week https://t.co/Am5Kn53mVE
Introducing Mesa: the most powerful filesystem ever built, designed specifically for enterprise AI agents.
Every team building agents eventually hits the same wall: where do the files live?
Not the chat history, the actual artifacts the agent works on.
> The contracts your age
Great example of what Foundry enables: durable, stateful agents that run across time boundaries, orchestrate tools and models, and close the loop with evaluation and improvement over long-running workflows. @jeffhollan https://t.co/v0yWTDIcn3
Exclude posts that discuss macro trends (tariffs, energy grids, compute scaling, chip bottlenecks, manufacturing capacity) where healthcare is mentioned only as a secondary example or loose contextual label, not the primary analytical focus.
Posts about broad macro trends, infrastructure scaling, or supply chain dynamics with only tangential healthcare framing.
3 example posts
Thought I would start posting about interesting things happening at AWS. Not a bad day to start.🚀
Today at #WhatsNextWithAWS we announced a big step forward with @OpenAI on Amazon Bedrock:
1. OpenAI models now available
2. Codex for enterprise development
3. Amazon Bedrock Manag
Using 180 years of US data to show that tariff increases reduce imports, output, and manufacturing, with effects operating through both supply and demand channels, from Tamar den Besten, Regis Barnichon, @drkaenzig, and Aayush Singh https://t.co/R1kdtfi3fO https://t.co/wvXj31qs9e
Software is not a moat
Over the last 15+ years, nearly every innovation @EvanSpiegel and his team shipped got copied. Stories. AR glasses. Swipe-based navigation. The camera-first interface.
And yet @Snapchat is the only independent consumer social app that has lasted. Nearly 1
Exclude posts that report healthcare fraud, denials, abuse, or regulatory violations (e.g., UnitedHealth Change disaster, nursing home antipsychotic abuse, insurance claim denials, Brian Thompson incident) as outrage or breaking news without substantive analysis of system-level drivers, payer incentives, prior authorization mechanics, or policy solutions. Moral outrage without systems insight should be excluded.
Posts reporting healthcare fraud, regulatory violations, nursing home abuse, or denials scandals as breaking news without analyzing root causes, system incentives, or structural reform implications.
3 example posts
Last week I randomly got a $9,000 bill for a hospital visit from last year. I called them up and they said my insurance decided not to cover it. Why? Because.
.@Aetna can just decide they’re not interested in covering something and then you’re left with an inflated bill to pay h
NY rakes in $6.6 billion in taxes from families for their private health insurance coverage.
That’s $1,760 a year per family on top of their premiums.
Democrats’ tax and spend policies are making health insurance more expensive for families across NY. https://t.co/ggrjqFKrv4
What happened during the Change disaster?
Hospitals got bailed out.
CMS advanced $3.2 billion to hospitals between March and June 2024. UnitedHealth/Optum extended $6.5 billion in interest-liquidity through April 30.
Mercy, I looked it up, specifically had 218 days of cash
Exclude posts about AI infrastructure, compute scaling, robotics manufacturing, software business model shifts, or SaaS metrics that mention healthcare only tangentially or use it as a casual example rather than as the core focus. Posts about Figure robots, Mesa filesystems, Replit growth, or mainframe software architectures should be excluded unless deeply grounded in healthcare operational specifics.
Posts about general tech infrastructure, compute, robotics, or software business models that weakly tie to healthcare or use healthcare as a tangential example.
3 example posts
Today we’re giving an update on ramping F.03 production at BotQ
In the last 120 days, Figure scaled manufacturing 24x - from 1 robot/day to 1 robot/hour
We will manufacture 55 humanoid robots this week https://t.co/Am5Kn53mVE
Introducing Mesa: the most powerful filesystem ever built, designed specifically for enterprise AI agents.
Every team building agents eventually hits the same wall: where do the files live?
Not the chat history, the actual artifacts the agent works on.
> The contracts your age
Thought I would start posting about interesting things happening at AWS. Not a bad day to start.🚀
Today at #WhatsNextWithAWS we announced a big step forward with @OpenAI on Amazon Bedrock:
1. OpenAI models now available
2. Codex for enterprise development
3. Amazon Bedrock Manag
Exclude posts from biotech founders, startup leaders, or venture figures expressing enthusiasm about new AI tools, cloud labs, or platforms (e.g., 'as easy as AWS for biotech') without demonstrating validation, adoption, or healthcare outcomes.
Posts from biotech founders or startup figures expressing optimism about new tools or platforms without evidence of healthcare impact
3 example posts
AI can now design antibodies that bind with atomic precision, but not ones that cells can produce. Our preprint closes this gap, delivering a structural principle, an AI-guided rescue pipeline, and adalimumab variants with 20-100x in vivo potency.
https://t.co/GvfgHA5EcU
A must read for anyone interested in building practical AI systems in 2026:
Dive into Claude Code: The Design Space of Today's and Future AI Agent Systems
The paper explains the architecture of a modern production-grade AI agent system (Claude Code) by analyzing its source http
Nothing beats running @ginkgo cloud lab for happy customers!
Not going to stop until it’s as easy to start a biotech startup on GCL as it is to start a software startup on AWS. https://t.co/GDEyT1pI7F
Exclude posts that report healthcare fraud (e.g., nursing home diagnosis fabrication, insurance denials, UnitedHealth Change outage), regulatory enforcement, or financial misconduct without explaining systemic causes, policy failures, or how the healthcare system should adapt.
Posts reporting healthcare fraud, regulatory violations, or scandals without analyzing root causes or healthcare system implications.
3 example posts
What happened during the Change disaster?
Hospitals got bailed out.
CMS advanced $3.2 billion to hospitals between March and June 2024. UnitedHealth/Optum extended $6.5 billion in interest-liquidity through April 30.
Mercy, I looked it up, specifically had 218 days of cash
U.S. nursing homes are fabricating schizophrenia diagnoses to hide their use of dangerous antipsychotic drugs to subdue dementia patients, a government watchdog report found.
The drugs increase the risk of falls, strokes and death. https://t.co/6SkzWxZfSz
@PirateWires He's objectively correct. Brian Thompson made decisions that led to denials of medical care, and people died. He used Ai to find ways to deny claims ffs. Brian Thompson has more blood on his hands than whoever shot him
Exclude posts that present a single drug trial result, clinical case observation, or patient anecdote without connecting it to healthcare systems, coverage, operations, or policy. Trial data alone is news, not analysis of how it affects healthcare delivery or tech adoption.
Posts that report a single clinical trial result, patient outcome, or clinical observation without broader healthcare delivery or policy implications.
3 example posts
⚠️ Sacubitril/Valsartan works. So why aren’t we using it?
The evidence is undeniable:
↓ CV mortality: 20% (RCT) / 10–38% (RWE)
↓ HF hospitalization: 21% (RCT) / 10–16% (RWE)
↓ All-cause mortality: 15% (RCT) / 10–25% (RWE)
Plus: reverse remodeling, less MR, better QoL & https:
AI can now design antibodies that bind with atomic precision, but not ones that cells can produce. Our preprint closes this gap, delivering a structural principle, an AI-guided rescue pipeline, and adalimumab variants with 20-100x in vivo potency.
https://t.co/GvfgHA5EcU
Here is a video of me entering my office tomorrow knowing that $NTLA is about to present the first-ever Phase 3 data of an In Vivo (!) CRISPR Gene Editing Program. Somehow - and after @adamfeuerstein’s🧵👇- I have a feeling it won’t be the only BioTech and CRISPR news…🤔 $XBI https:
Exclude posts that promote a startup or founder's success story, product launch, alumni network achievement, or business traction (e.g., Replit ARR, Kensho alumni companies, Ginkgo cloud lab enthusiasm, LillyDirect partnership announcement) unless the post demonstrates clinically validated healthcare impact, healthcare provider adoption, or patient outcome improvement.
Posts celebrating or hyping AI/tech startup launches, founder achievements, or product milestones with loose or no healthcare validation.
3 example posts
Nothing beats running @ginkgo cloud lab for happy customers!
Not going to stop until it’s as easy to start a biotech startup on GCL as it is to start a software startup on AWS. https://t.co/GDEyT1pI7F
Almost all of my positions selling some kind of AI/agentic SaaS tool have (either by foresight or customer demand) pivoted to some kind of business model where they “forward deploy” to the customer first and then sell the system they create back to them as SaaS. 99% of “normie” b
Started with standard ChatGPT for clinicians asking for a differential for a GI bleed patient. Then I went into agent mode to have it put together a one pager for the family explaining everything.
Of course, this is not a real patient. https://t.co/PEUeCqizT1
Exclude posts that are primarily founder enthusiasm, product launch announcements, or startup milestone celebrations from biotech or AI companies unless the post includes independent validation of healthcare impact, clinical evidence, or healthcare system adoption metrics. Self-promotional startup news is not healthcare tech content.
Posts featuring founder enthusiasm, product launches, or milestone announcements from biotech/AI startups without independent healthcare validation or systems impact analysis.
3 example posts
Nothing beats running @ginkgo cloud lab for happy customers!
Not going to stop until it’s as easy to start a biotech startup on GCL as it is to start a software startup on AWS. https://t.co/GDEyT1pI7F
Almost all of my positions selling some kind of AI/agentic SaaS tool have (either by foresight or customer demand) pivoted to some kind of business model where they “forward deploy” to the customer first and then sell the system they create back to them as SaaS. 99% of “normie” b
Kensho AI Mafia led by @DanielNadler needs to be studied. Particularly their success in Vertical AI. From a cursory look, Kensho alumni have founded:
- Suno (music)
- OpenEvidence (healthcare)
- Chai Discovery (biopharma)
- LangChain (agent infra)
Created 2026-04-30 · Updated 2026-04-30
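Per the header, every active rule is applied to every scan by the Haiku prescreen and Sonnet ranking passes. The pipeline's actual prompt format is not shown anywhere in this dashboard, so the function below is only a sketch of how the rule texts might be assembled into a single prescreen prompt; the function name, wording, and EXCLUDE/INCLUDE convention are all assumptions.

```python
def build_prescreen_prompt(rule_texts, post):
    """Assemble one prescreen prompt from all active rule texts plus a post.

    Sketch only: the real pipeline's prompt template and model-calling
    code are not visible in the dashboard, so this format is invented.
    """
    # Number the rules so the model can cite which one fired.
    numbered = "\n".join(f"{i + 1}. {text}" for i, text in enumerate(rule_texts))
    return (
        "You are screening social media posts for a healthcare-systems feed.\n"
        "Exclusion rules:\n"
        f"{numbered}\n\n"
        f"Post:\n{post}\n\n"
        "Answer EXCLUDE if any rule applies, otherwise INCLUDE."
    )
```

With 530 active rules, each scanned post would carry the full rule list in context, which is one plausible reason a cheap model (Haiku) handles the prescreen and a stronger one (Sonnet) only ranks the survivors.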
[glp1_peptide_market_price_and_competition_only]
Learned3 rejectionsActive
Exclude posts that report only on GLP-1 or peptide drug pricing, prescription volume comparisons, market share rankings, or competitive product launches (e.g., 'Foundayo had 1,390 Rxs in week 2 vs. Wegovy's 18,410') without connecting to healthcare access, clinical outcomes, patient impact, insurance coverage policy, or healthcare system economics.
Posts focused narrowly on GLP-1/peptide drug pricing, market share, prescription volumes, or competitive dynamics without healthcare system or outcomes analysis.
3 example posts
🚨 Survodutide, a weekly subcutaneous dual GLP-1/glucagon agonist, posts new top-line Phase 3 results today
📊 SYNCHRONIZE-1: Adults with overweight/obesity randomized to survodutide vs. placebo for 76 weeks
⚖️ 16.6% weight loss (efficacy estimand) vs. 3.2% in placebo
85.1% http
$NVO $LLY
Boehringer Ingelheim and Zealand Pharma report strong phase 3 data for obesity drug survodutide.
Patients lost up to 16.6% of body weight over 76 weeks vs. 3.2% for placebo.
The drug targets both GLP-1 and glucagon, a combo approach aimed at boosting weight loss.
This is just two GLP-1s, one peptide, one use case
what happens when off-label prescribing ramps up
what happens when retatrutide hits the market
what happens when other peptides become compoundable
chapter one
Exclude posts that express founder enthusiasm about biotech projects, lab cloud platforms, or startup pivots (e.g., 'nothing beats running Ginkgo cloud lab', 'Kensho alumni success') without providing evidence of healthcare impact, customer traction, or clinical validation.
Posts about biotech founder excitement, startup pivots, or lab automation without validated healthcare outcomes or business fundamentals.
3 example posts
Nothing beats running @ginkgo cloud lab for happy customers!
Not going to stop until it’s as easy to start a biotech startup on GCL as it is to start a software startup on AWS. https://t.co/GDEyT1pI7F
Almost all of my positions selling some kind of AI/agentic SaaS tool have (either by foresight or customer demand) pivoted to some kind of business model where they “forward deploy” to the customer first and then sell the system they create back to them as SaaS. 99% of “normie” b
Kensho AI Mafia led by @DanielNadler needs to be studied. Particularly their success in Vertical AI. From a cursory look, Kensho alumni have founded:
- Suno (music)
- OpenEvidence (healthcare)
- Chai Discovery (biopharma)
- LangChain (agent infra)
Exclude posts that showcase new AI agent capabilities, architectural patterns, or model design insights (e.g., 'Claude Code: The Design Space of AI Agent Systems,' 'Foundry enables durable, stateful agents') where healthcare is mentioned as a use case but no validation, pilot data, or healthcare-specific insight is provided.
Posts announcing new AI agent capabilities, architecture patterns, or model releases where the healthcare angle is speculative or illustrative rather than grounded in real deployment data.
3 example posts
Great example of what Foundry enables: durable, stateful agents that run across time boundaries, orchestrate tools and models, and close the loop with evaluation and improvement over long-running workflows. @jeffhollan https://t.co/v0yWTDIcn3
A must read for anyone interested in building practical AI systems in 2026:
Dive into Claude Code: The Design Space of Today's and Future AI Agent Systems
The paper explains the architecture of a modern production-grade AI agent system (Claude Code) by analyzing its source http
🚨 Anthropic's own team just showed how to build production AI agents.
30 minutes. free. from the engineers who built it.
watch the workshop. bookmark it.
you spent 6 months managing every workflow yourself.
they just showed how to put all of it on autopilot.
Then read the ht
Exclude posts announcing AI model releases, product features, company partnerships, or infrastructure updates (e.g., Microsoft Foundry for stateful agents, Snapchat copying Stories) where the healthcare application is speculative, aspirational, or not yet validated in practice.
Posts about AI or tech product launches, features, or company announcements that have loose or speculative healthcare relevance.
3 example posts
Great example of what Foundry enables: durable, stateful agents that run across time boundaries, orchestrate tools and models, and close the loop with evaluation and improvement over long-running workflows. @jeffhollan https://t.co/v0yWTDIcn3
Software is not a moat
Over the last 15+ years, nearly every innovation @EvanSpiegel and his team shipped got copied. Stories. AR glasses. Swipe-based navigation. The camera-first interface.
And yet @Snapchat is the only independent consumer social app that has lasted. Nearly 1
A must read for anyone interested in building practical AI systems in 2026:
Dive into Claude Code: The Design Space of Today's and Future AI Agent Systems
The paper explains the architecture of a modern production-grade AI agent system (Claude Code) by analyzing its source http
Exclude posts that are primarily single-patient anecdotes, personal clinical observations, or individual stories (e.g., a patient's delayed diagnosis, a resident's use of ChatGPT for cases) unless the post contextualizes the anecdote within broader systemic healthcare issues, policy implications, or operational patterns.
Posts describing individual clinical encounters, personal experiences, or single cases without extracting broader healthcare systems insights.
3 example posts
Started with standard ChatGPT for clinicians asking for a differential for a GI bleed patient. Then I went into agent mode to have it put together a one pager for the family explaining everything.
Of course, this is not a real patient. https://t.co/PEUeCqizT1
low grade fever, mildly tachycardic, weakness, nothing focal, no alarm signs/symptoms
epic sepsis alert triggered
vanc/pip-tazo given, lactate checked
flu+
sepsis metric met
care worse
lather, rinse, repeat
Metric based "QI" does net harm
I sat with a patient today who first noticed a change in October. It’s April now. In all those months of appointments and follow-ups, her breast had only truly been looked at twice. That stayed with me.
If something has changed with your body — especially something under your ht
Created 2026-04-29 · Updated 2026-04-29
[ai_company_product_or_capability_announcement]
Learned3 rejectionsActive
Exclude posts that announce new AI company products, features, or architectural capabilities (e.g., Foundry enables durable stateful agents, Claude Code design principles) unless they explicitly demonstrate a concrete healthcare application, operational workflow change, or measurable impact on healthcare delivery.
Posts announcing new AI company products, features, or technical capabilities with minimal healthcare-specific application or impact.
3 example posts
Great example of what Foundry enables: durable, stateful agents that run across time boundaries, orchestrate tools and models, and close the loop with evaluation and improvement over long-running workflows. @jeffhollan https://t.co/v0yWTDIcn3
A must read for anyone interested in building practical AI systems in 2026:
Dive into Claude Code: The Design Space of Today's and Future AI Agent Systems
The paper explains the architecture of a modern production-grade AI agent system (Claude Code) by analyzing its source http
Started with standard ChatGPT for clinicians asking for a differential for a GI bleed patient. Then I went into agent mode to have it put together a one pager for the family explaining everything.
Of course, this is not a real patient. https://t.co/PEUeCqizT1
Created 2026-04-29 · Updated 2026-04-29
[broad_clinical_debate_without_systems_lens]
Learned3 rejectionsActive
Exclude posts that make sweeping clinical claims, challenge established medical narratives, or debate drug safety/efficacy (e.g., 'GLP-1s don't cause heart muscle loss,' 'Sacubitril/Valsartan works but isn't used') without analyzing why the healthcare system fails to adopt proven treatments, what barriers exist, or what operational changes are needed.
Posts that assert broad clinical claims or engage in clinical debates without providing healthcare systems context, operational implementation, or policy implications.
3 example posts
⚠️ Sacubitril/Valsartan works. So why aren’t we using it?
The evidence is undeniable:
↓ CV mortality: 20% (RCT) / 10–38% (RWE)
↓ HF hospitalization: 21% (RCT) / 10–16% (RWE)
↓ All-cause mortality: 15% (RCT) / 10–25% (RWE)
Plus: reverse remodeling, less MR, better QoL & https:
"There is also compelling preliminary evidence suggesting that the use of these drugs [GLP-1] could exacerbate and lead to new diagnoses of restrictive
eating disorders, including anorexia nervosa."
@NEJM today
https://t.co/swx1y25F1X https://t.co/bQYlpuFBjM
A 65% cholesterol reduction has been available since 2015. Almost nobody could get it. The drug required a needle every two weeks, cost $5,850+/year, and insurers fought every prescription.
@Merck spent a decade figuring out how to put the same mechanism in a pill. Enlicitide:
Exclude posts about software business models, fintech disruption, compliance certifications, employee management, or macro policy trends that mention healthcare only incidentally or use healthcare as a loose analogy rather than as the core analytical subject.
Posts about general business models, tech industry dynamics, or policy topics that are tangentially framed around healthcare without substantive healthcare-specific analysis.
3 example posts
Great example of what Foundry enables: durable, stateful agents that run across time boundaries, orchestrate tools and models, and close the loop with evaluation and improvement over long-running workflows. @jeffhollan https://t.co/v0yWTDIcn3
Software is not a moat
Over the last 15+ years, nearly every innovation @EvanSpiegel and his team shipped got copied. Stories. AR glasses. Swipe-based navigation. The camera-first interface.
And yet @Snapchat is the only independent consumer social app that has lasted. Nearly 1
Is the business model for traditional software companies in permanent decline due to AI Agents not needing seats?
2 examples:
Re: @salesforce, we’ve reduced our seats from 10+ to 2 human seats and 1 API seat. And yet, we now pay $22,000 a year, 83% up from $12,000. Why? Our
Exclude posts that describe AI safety failures, security breaches, or agent incidents (e.g., Claude deleting databases, API key leaks, prompt injection attacks) where the primary focus is the technical vulnerability or AI company mishap rather than healthcare-specific consequences, system failures, or clinical impact.
Posts about AI safety incidents, security vulnerabilities, or agent mishaps that use healthcare framing but lack healthcare-specific analysis or impact.
3 example posts
🚨 Claude broke its own safety rules and deleted an entire company's database in 9 seconds.
A startup called PocketOS was using an AI coding tool called Cursor powered by Claude. The AI was given a simple task in a test environment. It ran into an error and instead of stopping an
clickup is SOC 2 Type 2 certified. ISO 27001. ISO 27017. ISO 27018. ISO 42001. PCI DSS. every compliance badge you can buy.
none of it caught a hardcoded API key leaking 959 customer emails for 15 months. none of it flagged a zero-protection SSRF on a free-tier endpoint. their o
🚨An AI coding agent powered by Claude just deleted an entire company's production database in 9 seconds...
-Cursor running Anthropic's flagship Claude Opus 4.6 was set to do a routine task on PocketOS, a SaaS platform for car rental businesses
-The AI hit a barrier and decided
Exclude posts that express enthusiasm about AI compute capabilities, biotech founder success networks, protein folding achievements, or cloud lab infrastructure as general-purpose tools or hype without grounding claims in specific, validated healthcare delivery problems they solve or healthcare system bottlenecks they address. Infrastructural enthusiasm and founder boosterism are not healthcare tech strategy.
Posts celebrating AI infrastructure, biotech founder networks, or compute breakthroughs with loose or speculative healthcare framing rather than demonstrated healthcare use cases
3 example posts
AI can now design antibodies that bind with atomic precision, but not ones that cells can produce. Our preprint closes this gap, delivering a structural principle, an AI-guided rescue pipeline, and adalimumab variants with 20-100x in vivo potency.
https://t.co/GvfgHA5EcU
A must read for anyone interested in building practical AI systems in 2026:
Dive into Claude Code: The Design Space of Today's and Future AI Agent Systems
The paper explains the architecture of a modern production-grade AI agent system (Claude Code) by analyzing its source http
Nothing beats running @ginkgo cloud lab for happy customers!
Not going to stop until it’s as easy to start a biotech startup on GCL as it is to start a software startup on AWS. https://t.co/GDEyT1pI7F
Exclude posts about AI model architecture, compute scaling, software-as-a-service business model changes, or platform features (agents, APIs, dashboards) unless the post explicitly analyzes how that capability or business model impacts healthcare delivery, clinical workflows, or health tech economics.
Posts about AI model capabilities, compute infrastructure, software business models, or tech platform features presented with loose or missing healthcare specificity.
3 example posts
Great example of what Foundry enables: durable, stateful agents that run across time boundaries, orchestrate tools and models, and close the loop with evaluation and improvement over long-running workflows. @jeffhollan https://t.co/v0yWTDIcn3
Software is not a moat
Over the last 15+ years, nearly every innovation @EvanSpiegel and his team shipped got copied. Stories. AR glasses. Swipe-based navigation. The camera-first interface.
And yet @Snapchat is the only independent consumer social app that has lasted. Nearly 1
Is the business model for traditional software companies in permanent decline due to AI Agents not needing seats?
2 examples:
Re: @salesforce, we’ve reduced our seats from 10+ to 2 human seats and 1 API seat. And yet, we now pay $22,000 a year, 83% up from $12,000. Why? Our
Exclude posts that report GLP-1/peptide prescription volumes, pricing comparisons, market share shifts, or competitive drug launch data unless the post includes analysis of healthcare system impact (e.g., insurance coverage, patient access barriers, health equity, cost-effectiveness relative to outcomes).
Posts about GLP-1 and peptide drug pricing, market share, prescription volumes, and competitive launches without healthcare economics or access analysis.
3 example posts
🚨 Survodutide, a weekly subcutaneous dual GLP-1/glucagon agonist, posts new top-line Phase 3 results today
📊 SYNCHRONIZE-1: Adults with overweight/obesity randomized to survodutide vs. placebo for 76 weeks
⚖️ 16.6% weight loss (efficacy estimand) vs. 3.2% in placebo
85.1% http
$NVO $LLY
Boehringer Ingelheim and Zealand Pharma report strong phase 3 data for obesity drug survodutide.
Patients lost up to 16.6% of body weight over 76 weeks vs. 3.2% for placebo.
The drug targets both GLP-1 and glucagon, a combo approach aimed at boosting weight loss.
This is just two GLP-1s, one peptide, one use case
what happens when off-label prescribing ramps up
what happens when retatrutide hits the market
what happens when other peptides become compoundable
chapter one
Created 2026-04-29 · Updated 2026-04-29
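Each rule card above carries the same fields: an identifier, the full "Exclude posts that ..." instruction, a one-line summary, a rejection count, and sample rejected posts. As a minimal sketch of how such a card might be represented and assembled into a single yes/no prescreen prompt for a model, here is a hypothetical illustration; the names `ExclusionRule` and `build_prescreen_prompt` are assumptions, not the actual Haiku/Sonnet pipeline:

```python
from dataclasses import dataclass, field

@dataclass
class ExclusionRule:
    """One learned exclusion rule, mirroring the fields on each card above."""
    rule_id: str                 # e.g. "glp1_peptide_market_pricing_only"
    rule_text: str               # full "Exclude posts that ..." instruction
    summary: str                 # one-line description under the rule text
    rejections: int = 0          # rejection count shown on the badge
    examples: list = field(default_factory=list)  # sample rejected posts

def build_prescreen_prompt(rules, post):
    """Fold every active rule into a single yes/no prescreen prompt."""
    numbered = "\n".join(
        f"{i}. [{r.rule_id}] {r.rule_text}" for i, r in enumerate(rules, 1)
    )
    return (
        "Decide whether the post below matches ANY exclusion rule.\n"
        f"Rules:\n{numbered}\n\n"
        f"Post:\n{post}\n\n"
        "Answer EXCLUDE or KEEP, followed by the matching rule id (or NONE)."
    )
```

One design point this makes concrete: with all 530 active rules concatenated into one prompt, a cheap prescreen model can reject a post in a single call, while a stronger ranking model re-checks only the survivors.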
[glp1_peptide_market_pricing_and_competition]
Learned · 3 rejections · Active
Exclude posts that report GLP-1 or peptide drug prescription numbers, pricing comparisons, market launches, or competitive positioning (e.g., Mounjaro vs. Wegovy script counts, price reductions, generic entries) unless they analyze systemic healthcare implications like insurance coverage, access equity, or clinical outcomes at scale.
Posts about GLP-1 and peptide drug market dynamics, pricing, prescriptions, and competitive launches without healthcare systems analysis.
3 example posts
🚨 Survodutide, a weekly subcutaneous dual GLP-1/glucagon agonist, posts new top-line Phase 3 results today
📊 SYNCHRONIZE-1: Adults with overweight/obesity randomized to survodutide vs. placebo for 76 weeks
⚖️ 16.6% weight loss (efficacy estimand) vs. 3.2% in placebo
85.1% http
$NVO $LLY
Boehringer Ingelheim and Zealand Pharma report strong phase 3 data for obesity drug survodutide.
Patients lost up to 16.6% of body weight over 76 weeks vs. 3.2% for placebo.
The drug targets both GLP-1 and glucagon, a combo approach aimed at boosting weight loss.
This is just two GLP-1s, one peptide, one use case
what happens when off-label prescribing ramps up
what happens when retatrutide hits the market
what happens when other peptides become compoundable
chapter one
Exclude posts that focus on AI safety failures, security breaches, or capability demonstrations (e.g., AI deleting databases, taking over networks, leaking data) unless the post explicitly connects the incident to healthcare delivery, patient outcomes, or clinical decision-making systems. Generic AI safety concerns without healthcare application do not qualify.
Posts about AI safety incidents, security vulnerabilities, or capability demonstrations that lack concrete healthcare system implications.
3 example posts
🚨 Claude broke its own safety rules and deleted an entire company's database in 9 seconds.
A startup called PocketOS was using an AI coding tool called Cursor powered by Claude. The AI was given a simple task in a test environment. It ran into an error and instead of stopping an
clickup is SOC 2 Type 2 certified. ISO 27001. ISO 27017. ISO 27018. ISO 42001. PCI DSS. every compliance badge you can buy.
none of it caught a hardcoded API key leaking 959 customer emails for 15 months. none of it flagged a zero-protection SSRF on a free-tier endpoint. their o
🚨An AI coding agent powered by Claude just deleted an entire company's production database in 9 seconds...
-Cursor running Anthropic's flagship Claude Opus 4.6 was set to do a routine task on PocketOS, a SaaS platform for car rental businesses
-The AI hit a barrier and decided
Exclude posts that report fraud, misconduct, or regulatory enforcement (e.g., nursing home abuse, insurance denials, UnitedHealth outages, hospice fraud) unless the post analyzes the underlying healthcare system failure, proposes policy solutions, or connects the incident to broader operational or structural healthcare problems. Posts that are primarily scandal/outrage reporting without systems analysis do not qualify.
Posts reporting healthcare fraud, regulatory violations, or industry scandals without analyzing systemic healthcare policy or operational implications.
3 example posts
🚨 SaaS platform ClickUp, used by 85% of the Fortune 500, has been leaking customer emails through its homepage for at least 465 days, and counting.
ClickUp has a $4 billion valuation. They are SOC 2 Type 2, ISO 27001, ISO 27017, ISO 27018, ISO 42001, and PCI DSS certified. The f
What happened during the Change disaster?
Hospitals got bailed out.
CMS advanced $3.2 billion to hospitals between March and June 2024. UnitedHealth/Optum extended $6.5 billion in interest-liquidity through April 30.
Mercy, I looked it up, specifically had 218 days of cash
💬 Viewpoint: The widespread use of #AI for residency application screening in US graduate medical education programs introduces new legal and ethical concerns, particularly regarding disparate impact discrimination and unvalidated subgroup performance.
https://t.co/WBeGQmkBr1 h
Exclude posts that engage in health debates (e.g., discussing side effects, efficacy claims, or medical myths) or make broad claims about drug safety without connecting to healthcare delivery, access, coverage, or policy systems. Posts must address how healthcare systems adopt, regulate, or implement interventions—not just clinical facts.
Posts that make broad claims about drug safety, efficacy, or medical narratives without healthcare system analysis or policy implications.
3 example posts
⚠️ Sacubitril/Valsartan works. So why aren’t we using it?
The evidence is undeniable:
↓ CV mortality: 20% (RCT) / 10–38% (RWE)
↓ HF hospitalization: 21% (RCT) / 10–16% (RWE)
↓ All-cause mortality: 15% (RCT) / 10–25% (RWE)
Plus: reverse remodeling, less MR, better QoL & https:
"There is also compelling preliminary evidence suggesting that the use of these drugs [GLP-1] could exacerbate and lead to new diagnoses of restrictive
eating disorders, including anorexia nervosa."
@NEJM today
https://t.co/swx1y25F1X https://t.co/bQYlpuFBjM
A 65% cholesterol reduction has been available since 2015. Almost nobody could get it. The drug required a needle every two weeks, cost $5,850+/year, and insurers fought every prescription.
@Merck spent a decade figuring out how to put the same mechanism in a pill. Enlicitide:
Exclude posts that report healthcare fraud, nursing home scandals, or regulatory enforcement actions with moral outrage or sensationalism (e.g., 'he used AI to deny claims') without analyzing systemic incentives, policy failures, or structural healthcare reform implications.
Posts reporting healthcare fraud, compliance failures, or regulatory scandals with outrage framing but no analysis of systemic causes or healthcare policy implications.
3 example posts
U.S. nursing homes are fabricating schizophrenia diagnoses to hide their use of dangerous antipsychotic drugs to subdue dementia patients, a government watchdog report found.
The drugs increase the risk of falls, strokes and death. https://t.co/6SkzWxZfSz
@PirateWires He's objectively correct. Brian Thompson made decisions that led to denials of medical care, and people died. He used Ai to find ways to deny claims ffs. Brian Thompson has more blood on his hands than whoever shot him
What $1 Billion a Day Buys in American Health Care
The U.S. is spending $1 billion/day on the war in Iran — over a year, that would cover 37 million Medicaid enrollees. Congress just cut $911 billion from the program because it was too expensive.
Read & subscribe (for free!)
Created 2026-04-29 · Updated 2026-04-29
[tangential_biotech_or_infrastructure_enthusiasm]
Learned · 3 rejections · Active
Exclude posts that promote biotech tools, cloud lab platforms, or AI infrastructure with enthusiasm about enabling innovation (e.g., 'as easy as starting a software startup') without evidence of healthcare validation, clinical adoption, or analysis of regulatory/operational barriers in healthcare.
Posts from biotech founders or infrastructure advocates promoting their tools/platforms with claims about enabling faster innovation but lacking healthcare validation or systems analysis.
3 example posts
Nothing beats running @ginkgo cloud lab for happy customers!
Not going to stop until it’s as easy to start a biotech startup on GCL as it is to start a software startup on AWS. https://t.co/GDEyT1pI7F
Almost all of my positions selling some kind of AI/agentic SaaS tool have (either by foresight or customer demand) pivoted to some kind of business model where they “forward deploy” to the customer first and then sell the system they create back to them as SaaS. 99% of “normie” b
Kensho AI Mafia led by @DanielNadler needs to be studied. Particularly their success in Vertical AI. From a cursory look, Kensho alumni have founded:
- Suno (music)
- OpenEvidence (healthcare)
- Chai Discovery (biopharma)
- LangChain (agent infra)
Created 2026-04-29 · Updated 2026-04-29
[tangential_non_healthcare_infrastructure]
Learned · 3 rejections · Active
Exclude posts about semiconductor supply chains, power grids, gas turbines, manufacturing bottlenecks, or data center infrastructure unless they explicitly connect to healthcare delivery, drug manufacturing, or medical device production. General infrastructure hype without healthcare specificity does not qualify.
Posts about energy infrastructure, manufacturing supply chains, or compute hardware hype without direct healthcare application
3 example posts
Using 180 years of US data to show that tariff increases reduce imports, output, and manufacturing, with effects operating through both supply and demand channels, from Tamar den Besten, Regis Barnichon, @drkaenzig, and Aayush Singh https://t.co/R1kdtfi3fO https://t.co/wvXj31qs9e
Since I began work on AI in 2010, training compute for frontier models has grown by one trillion times.
Now we're looking at something like another thousand-fold growth in effective compute by the end of 2028.
1000x the existing 1,000,000,000,000x.
Extraordinary stuff.
Follow the bottleneck.
Chips → data centers → grid equipment → power → gas turbines
Grid equipment grew 1%/yr for decades. Then data centers showed up as an entirely new buyer.
Gas turbine makers shipped 5–7 GW/yr. Last year? Orders hit 100 GW.
@maxlbcook on how he https://t.
Created 2026-04-29 · Updated 2026-04-29
[ai_agent_safety_vulnerability_hype]
Learned · 3 rejections · Active
Exclude posts that sensationalize AI agent vulnerabilities, safety failures, or security breaches (e.g., 'Claude deleted a database in 9 seconds', 'AI agent took over a network') unless they directly demonstrate a specific healthcare system failure or patient harm scenario. Abstract AI safety concerns without healthcare operational context do not qualify.
Posts dramatizing AI agent security risks or jailbreaks without healthcare application context
3 example posts
🚨 Claude broke its own safety rules and deleted an entire company's database in 9 seconds.
A startup called PocketOS was using an AI coding tool called Cursor powered by Claude. The AI was given a simple task in a test environment. It ran into an error and instead of stopping an
clickup is SOC 2 Type 2 certified. ISO 27001. ISO 27017. ISO 27018. ISO 42001. PCI DSS. every compliance badge you can buy.
none of it caught a hardcoded API key leaking 959 customer emails for 15 months. none of it flagged a zero-protection SSRF on a free-tier endpoint. their o
🚨An AI coding agent powered by Claude just deleted an entire company's production database in 9 seconds...
-Cursor running Anthropic's flagship Claude Opus 4.6 was set to do a routine task on PocketOS, a SaaS platform for car rental businesses
-The AI hit a barrier and decided
Exclude posts that sensationalize AI safety incidents, security vulnerabilities, or agent behavior anomalies (e.g., 'Claude deleted a database,' 'API keys leaked') without demonstrating concrete impact on healthcare delivery, patient safety, or clinical decision-making. The post must show healthcare-specific harm, not generic AI risk theater.
Posts dramatizing AI safety breaches, vulnerability disclosures, or agent failures without connecting to actual healthcare system risks or clinical outcomes.
3 example posts
🚨 Claude broke its own safety rules and deleted an entire company's database in 9 seconds.
A startup called PocketOS was using an AI coding tool called Cursor powered by Claude. The AI was given a simple task in a test environment. It ran into an error and instead of stopping an
clickup is SOC 2 Type 2 certified. ISO 27001. ISO 27017. ISO 27018. ISO 42001. PCI DSS. every compliance badge you can buy.
none of it caught a hardcoded API key leaking 959 customer emails for 15 months. none of it flagged a zero-protection SSRF on a free-tier endpoint. their o
🚨An AI coding agent powered by Claude just deleted an entire company's production database in 9 seconds...
-Cursor running Anthropic's flagship Claude Opus 4.6 was set to do a routine task on PocketOS, a SaaS platform for car rental businesses
-The AI hit a barrier and decided
Exclude posts that celebrate AI technical capabilities (agents, code generation, reasoning) or discuss general software architecture without demonstrating concrete healthcare delivery, clinical workflow, or patient outcome improvements.
Posts about AI model capabilities, agent design patterns, or software engineering best practices applied tangentially to healthcare.
3 example posts
Great example of what Foundry enables: durable, stateful agents that run across time boundaries, orchestrate tools and models, and close the loop with evaluation and improvement over long-running workflows. @jeffhollan https://t.co/v0yWTDIcn3
Software is not a moat
Over the last 15+ years, nearly every innovation @EvanSpiegel and his team shipped got copied. Stories. AR glasses. Swipe-based navigation. The camera-first interface.
And yet @Snapchat is the only independent consumer social app that has lasted. Nearly 1
AI could, in theory, automate 57% of US work hours. Yet most human skills remain relevant.
The future of work is not human or machine – but a partnership between people, agents, and robots.
Read our latest research on skill partnerships in the age of AI: https://t.co/h1K56uPqPo
Exclude posts that focus on AI agent architectural decisions, model safety guardrails, cybersecurity vulnerabilities, or agentic capability demonstrations where healthcare is mentioned only as context or one example among many non-healthcare domains. The post must center on healthcare-specific agent deployment, clinical workflows, or health system risks.
Posts about AI agent/model technical capabilities or security vulnerabilities with only superficial healthcare framing.
3 example posts
clickup is SOC 2 Type 2 certified. ISO 27001. ISO 27017. ISO 27018. ISO 42001. PCI DSS. every compliance badge you can buy.
none of it caught a hardcoded API key leaking 959 customer emails for 15 months. none of it flagged a zero-protection SSRF on a free-tier endpoint. their o
🚨An AI coding agent powered by Claude just deleted an entire company's production database in 9 seconds...
-Cursor running Anthropic's flagship Claude Opus 4.6 was set to do a routine task on PocketOS, a SaaS platform for car rental businesses
-The AI hit a barrier and decided
Great example of what Foundry enables: durable, stateful agents that run across time boundaries, orchestrate tools and models, and close the loop with evaluation and improvement over long-running workflows. @jeffhollan https://t.co/v0yWTDIcn3
Exclude posts that discuss AI agents replacing workers, automation disrupting business models, seat reduction, or labor market shifts in generalist business language (e.g., software seats, business model decline) unless the post specifically addresses healthcare workforce implications such as clinician roles, administrative staff impact, or health system labor strategies.
Posts about AI automation, job displacement, or labor market disruption presented in broad business or economic terms without healthcare workforce impact analysis.
3 example posts
AI could, in theory, automate 57% of US work hours. Yet most human skills remain relevant.
The future of work is not human or machine – but a partnership between people, agents, and robots.
Read our latest research on skill partnerships in the age of AI: https://t.co/h1K56uPqPo
Is the business model for traditional software companies in permanent decline due to AI Agents not needing seats?
2 examples:
Re: @salesforce, we’ve reduced our seats from 10+ to 2 human seats and 1 API seat. And yet, we now pay $22,000 a year, 83% up from $12,000. Why? Our
AI is taking on more of the labor.
It is not taking on the accountability.
@danielnewmanUV and @GregLotko talk with @Darren_Surch of @Interskil about why mainframe teams now have to interpret and stand behind AI-driven outputs, and why organizations that stop investing in htt
Exclude posts that recount a single clinical anecdote, patient interaction, diagnostic moment, or care workflow observation (e.g., 'I gave a patient a differential,' 'A patient waited months for imaging') without analyzing root causes in healthcare system structure, prior authorization policies, staffing, or operational design.
Posts describing a single clinician observation, patient case, or care encounter without connecting to healthcare system design, workflow, policy, or broader implementation challenges.
3 example posts
Started with standard ChatGPT for clinicians asking for a differential for a GI bleed patient. Then I went into agent mode to have it put together a one pager for the family explaining everything.
Of course, this is not a real patient. https://t.co/PEUeCqizT1
low grade fever, mildly tachycardic, weakness, nothing focal, no alarm signs/symptoms
epic sepsis alert triggered
vanc/pip-tazo given, lactate checked
flu+
sepsis metric met
care worse
lather, rinse, repeat
Metric based "QI" does net harm
I sat with a patient today who first noticed a change in October. It’s April now. In all those months of appointments and follow-ups, her breast had only truly been looked at twice. That stayed with me.
If something has changed with your body — especially something under your ht
Created 2026-04-28 · Updated 2026-04-28
[ai_cybersecurity_vulnerability_tangent]
Learned · 3 rejections · Active
Exclude posts that focus on AI safety incidents, hacking demonstrations, or data breach scenarios involving AI agents or SaaS tools (e.g., ClickUp leaks, Claude takeover risks) unless the post explicitly addresses healthcare-specific deployment risks, HIPAA implications, or clinical decision-making vulnerabilities.
Posts about AI agents or SaaS platforms causing security breaches, data leaks, or system takeovers presented as cautionary tales without healthcare-specific risk analysis.
3 example posts
clickup is SOC 2 Type 2 certified. ISO 27001. ISO 27017. ISO 27018. ISO 42001. PCI DSS. every compliance badge you can buy.
none of it caught a hardcoded API key leaking 959 customer emails for 15 months. none of it flagged a zero-protection SSRF on a free-tier endpoint. their o
🚨An AI coding agent powered by Claude just deleted an entire company's production database in 9 seconds...
-Cursor running Anthropic's flagship Claude Opus 4.6 was set to do a routine task on PocketOS, a SaaS platform for car rental businesses
-The AI hit a barrier and decided
🚨 SaaS platform ClickUp, used by 85% of the Fortune 500, has been leaking customer emails through its homepage for at least 465 days, and counting.
ClickUp has a $4 billion valuation. They are SOC 2 Type 2, ISO 27001, ISO 27017, ISO 27018, ISO 42001, and PCI DSS certified. The f
Created 2026-04-28 · Updated 2026-04-28
[ai_agent_safety_vulnerability_tangent]
Learned · 3 rejections · Active
Exclude posts that focus on AI agent safety failures, cybersecurity vulnerabilities, or proof-of-concept exploits (e.g., agents deleting databases, accessing shells, leaking customer data) unless the post explicitly analyzes healthcare-specific deployment risks or clinical decision-making safety. General AI safety concerns are out of scope.
Posts about AI agent security risks, jailbreaks, and system vulnerabilities without healthcare application context.
3 example posts
clickup is SOC 2 Type 2 certified. ISO 27001. ISO 27017. ISO 27018. ISO 42001. PCI DSS. every compliance badge you can buy.
none of it caught a hardcoded API key leaking 959 customer emails for 15 months. none of it flagged a zero-protection SSRF on a free-tier endpoint. their o
🚨An AI coding agent powered by Claude just deleted an entire company's production database in 9 seconds...
-Cursor running Anthropic's flagship Claude Opus 4.6 was set to do a routine task on PocketOS, a SaaS platform for car rental businesses
-The AI hit a barrier and decided
🚨 SaaS platform ClickUp, used by 85% of the Fortune 500, has been leaking customer emails through its homepage for at least 465 days, and counting.
ClickUp has a $4 billion valuation. They are SOC 2 Type 2, ISO 27001, ISO 27017, ISO 27018, ISO 42001, and PCI DSS certified. The f
Created 2026-04-28 · Updated 2026-04-28
[glp1_peptide_market_speculation_only]
Learned · 3 rejections · Active
Exclude posts that only report GLP-1 or peptide market pricing, prescription volumes, competitive launch sequencing, or manufacturer market share data. The post must address broader healthcare system implications (access, reimbursement policy, clinical practice change, or population health outcome), not just market competition or pricing.
Posts about GLP-1 or peptide market dynamics, pricing, or competitive positioning without healthcare systems insight.
3 example posts
$LLY v $NVO
Foundayo (orforglipron) scripts off to a slow start both in raw numbers and in comparison to Oral Wegovy’s launch at same time point.
Overall statistics show Oral Wegovy script growth is robust, and thus far undeterred, by Foundayo market entry.
🎩 @bloomberg https
$LLY $NVO $HIMS
🚨 LILLY GLP-1 PILL FOUNDAYO: NEARLY 4,000 PRESCRIPTIONS IN WEEK 2
- Foundayo had 1,390 Rxs during week 1
- Meanwhile, Novo's Wegovy Pill had 3k in first 4 days and 18,410 prescriptions in its second week 🤯
- IQVIA data
- Week ending Apr 17
"While we believe htt
This admin is kicking butt.
One week: GLP-1s from $1,350 to $199/mo. 12 peptides removed from Category 2. Amazon entered the space. HIMS added Lilly drugs. Pediatric oral GLP-1 trial data dropped.
Exclude posts that describe AI model vulnerabilities, hacking demonstrations, or safety failures (e.g., 'AI deleted a database', 'AI took over a network', 'safety guardrails are useless') unless the post explicitly analyzes the healthcare delivery system impact or proposes a healthcare-specific mitigation. Dramatic vulnerability framing alone is insufficient.
Posts about AI model vulnerabilities, jailbreaks, or safety concerns framed as dramatic headlines without healthcare-specific implications or systems analysis.
3 example posts
🚨An AI coding agent powered by Claude just deleted an entire company's production database in 9 seconds...
-Cursor running Anthropic's flagship Claude Opus 4.6 was set to do a routine task on PocketOS, a SaaS platform for car rental businesses
-The AI hit a barrier and decided
A researcher gave an AI agent access to his shell, his files, and his network. Then he proved that every safety guardrail we trust is architecturally useless.
It cannot tell the difference between your instructions and a hacker's.
The paper is called Parallax: Why AI Agents htt
This is by far the most important result of the entire GPT-5.5 release:
In a cyber evaluation GPT-5.5 was able to take over a simulated corporate network in 1/10 trials with a budget of 100M tokens.
Previously, the only model that was able to solve this task was Claude Mythos,
Created 2026-04-28 · Updated 2026-04-28
[biotech_founder_enthusiasm_without_evidence]
Learned · 3 rejections · Active
Exclude posts where a biotech founder, CEO, or employee promotes their own tool, company product, or research with enthusiasm but lacks independent evidence, peer review, or analysis of healthcare system impact. This includes unvetted claims about product capabilities or customer adoption.
Self-promotional posts from biotech founders or company leaders expressing enthusiasm about their product or technology without independent validation or healthcare systems insight.
3 example posts
AI can now design antibodies that bind with atomic precision, but not ones that cells can produce. Our preprint closes this gap, delivering a structural principle, an AI-guided rescue pipeline, and adalimumab variants with 20-100x in vivo potency.
https://t.co/GvfgHA5EcU
Nothing beats running @ginkgo cloud lab for happy customers!
Not going to stop until it’s as easy to start a biotech startup on GCL as it is to start a software startup on AWS. https://t.co/GDEyT1pI7F
🚨 Anthropic's own team just showed how to build production AI agents.
30 minutes. free. from the engineers who built it.
watch the workshop. bookmark it.
you spent 6 months managing every workflow yourself.
they just showed how to put all of it on autopilot.
Then read the ht
Exclude posts that report corporate fraud, wrongdoing, or political outrage (e.g., insurance denials, hospice exploitation, data breaches) without explaining systemic failures in healthcare incentives, regulations, or business models that enable or permit such conduct.
Posts reporting healthcare fraud, corporate wrongdoing, or political scandal without systemic analysis of root causes or solutions.
3 example posts
What happened during the Change disaster?
Hospitals got bailed out.
CMS advanced $3.2 billion to hospitals between March and June 2024. UnitedHealth/Optum extended $6.5 billion in interest-liquidity through April 30.
Mercy, I looked it up, specifically had 218 days of cash
U.S. nursing homes are fabricating schizophrenia diagnoses to hide their use of dangerous antipsychotic drugs to subdue dementia patients, a government watchdog report found.
The drugs increase the risk of falls, strokes and death. https://t.co/6SkzWxZfSz
@PirateWires He's objectively correct. Brian Thompson made decisions that led to denials of medical care, and people died. He used Ai to find ways to deny claims ffs. Brian Thompson has more blood on his hands than whoever shot him
Created 2026-04-28 · Updated 2026-04-28
[ai_agent_or_model_technical_tangent]
Learned · 3 rejections · Active
Exclude posts that explain AI agent design, Claude's architecture, model personalities, or technical AI capabilities unless they include concrete healthcare use case validation, deployment outcome, or clinical workflow integration. Technical deep-dives on AI systems without healthcare application do not qualify.
Posts about AI agent architecture, model capabilities, or technical design patterns that lack specific healthcare application or validation.
3 example posts
A must read for anyone interested in building practical AI systems in 2026:
Dive into Claude Code: The Design Space of Today's and Future AI Agent Systems
The paper explains the architecture of a modern production-grade AI agent system (Claude Code) by analyzing its source http
🚨 Anthropic's own team just showed how to build production AI agents.
30 minutes. free. from the engineers who built it.
watch the workshop. bookmark it.
you spent 6 months managing every workflow yourself.
they just showed how to put all of it on autopilot.
Then read the ht
Claude remains irreducibly Claude. If you know, you know.
(The fact that models have distinct personalities that are consistent across generations is technically interesting, it also makes it very easy to use new releases when they come along, because they feel very similar). ht
Created 2026-04-28 · Updated 2026-04-28
[glp1_peptide_market_speculation]
Learned · 3 rejections · Active
Exclude posts that primarily discuss GLP-1 or peptide prescription volumes, pricing comparisons, market share between brands (Ozempic vs. Wegovy vs. Foundayo), or adoption rates. These are market speculation, not healthcare systems innovation or policy analysis.
Posts focused on GLP-1 and peptide drug market share, pricing, and competitive dynamics without healthcare systems analysis
3 example posts
This is just two GLP-1s, one peptide, one use case
what happens when off-label prescribing ramps up
what happens when retatrutide hits the market
what happens when other peptides become compoundable
chapter one
$LLY v $NVO
Foundayo (orforglipron) scripts off to a slow start both in raw numbers and in comparison to Oral Wegovy’s launch at same time point.
Overall statistics show Oral Wegovy script growth is robust, and thus far undeterred, by Foundayo market entry.
🎩 @bloomberg https
India’s weight-loss drug market just ran a live experiment in price elasticity.
Novo Nordisk’s semaglutide patent expired 20 March 2026.
Within 3 weeks:
15+ generics launched
Cheapest at Rs 2,000/month (branded was Rs 10,000+)
Novo cut Ozempic and Wegovy prices by 36-48%
Bu
Exclude posts about UFOs, cryptocurrency, general employment trends, criminal law, media criticism, supply chain/infrastructure unrelated to healthcare delivery, or conspiracy claims—even if they mention 'health' or 'medicine' tangentially or as a loose comparison. The post must address a healthcare system problem or healthcare-specific application.
Posts about entirely non-healthcare domains (UFOs, finance, employment, criminal justice, conspiracy, media) that are not meaningfully connected to healthcare systems or policy
3 example posts
Bob Lazar allegedly watched people fly a UFO at Area 51.
“They knew how to fly it.”
“The craft had a corona discharge glow on the bottom and lifted off silently up into the sky … ”
And it had one shocking, anomalous effect that still perplexes him to this day:
As Lazar https:
Two companies you've never heard of built a combined $373M revenue business by helping employees bypass IT. Now comes the part where IT buys its way back in.
Replit just hit $253M ARR growing 2,352% YoY. 85% of the Fortune 500 have employees on it. Lovable is at $120M ARR, $6.6B
"Your body can only use 25-30g of protein per meal. Anything above that gets wasted."
This claim has been repeated in fitness nutrition for over a decade, and it was built on studies that measured the right thing over the wrong timescale.
Moore 2009 gave six young men 0, 5, ht
Exclude posts that make broad health claims (e.g., 'Your body can only use 25-30g of protein per meal', 'GLP-1s exacerbate eating disorders', 'drug improved cardiovascular outcomes') without linking to peer-reviewed evidence, clinical trial data, or credible institutional sources. Debate and disagreement are acceptable if framed as contested evidence.
Posts making sweeping health benefit claims or debunking common health beliefs without citing rigorous evidence or explaining the underlying mechanism
3 example posts
"There is also compelling preliminary evidence suggesting that the use of these drugs [GLP-1] could exacerbate and lead to new diagnoses of restrictive
eating disorders, including anorexia nervosa."
@NEJM today
https://t.co/swx1y25F1X https://t.co/bQYlpuFBjM
Attention PK nerds, pharmacologists, and clinicians who actually understand serum levels:
I haven’t seen this discussed, but it could matter for patients priced out of injectables.
If a 25 mg oral semaglutide tablet has ~1% bioavailability, that’s ~0.25 mg systemically… on
A 65% cholesterol reduction has been available since 2015. Almost nobody could get it. The drug required a needle every two weeks, cost $5,850+/year, and insurers fought every prescription.
@Merck spent a decade figuring out how to put the same mechanism in a pill. Enlicitide:
Exclude posts that criticize healthcare policy, drug pricing regulation, or government decisions (e.g., Medicaid cuts, FDA rules, congressional votes) primarily as political outrage or moral complaint, without providing systems-level analysis, economic evidence, or operational context for why the policy is flawed.
Posts expressing political outrage or regulatory criticism of healthcare policy without substantive systems analysis or evidence-based argument.
3 example posts
States are rushing “affordability” bills, but most just mask high prices with rebates, mandates, or price caps. @MrRBourne & Nathan Miller argue durable relief means rolling back cost-raising rules and expanding supply.
https://t.co/WG5egT1NfL
$LLY ’s Mounjaro will not be listed on Australia’s PBS after pricing negotiations collapsed.
Eli Lilly walked away from talks with the government, leaving around 450,000 patients without subsidized access.
Patients will continue to pay hundreds of dollars per month out of
What $1 Billion a Day Buys in American Health Care
The U.S. is spending $1 billion/day on the war in Iran — over a year, that would cover 37 million Medicaid enrollees. Congress just cut $911 billion from the program because it was too expensive.
Read & subscribe (for free!)
Created 2026-04-27 · Updated 2026-04-27
[clinical_anecdote_without_systems_insight]
Learned · 3 rejections · Active
Exclude posts that describe isolated clinical cases, patient interactions, or medical observations (e.g., 'patient presented with X symptom, I ordered Y test') without connecting to healthcare policy, operational systems, or scalable insights. A single clinician's story is not enough.
Single clinical observations or patient anecdotes presented without healthcare system analysis or broader implications
3 example posts
low grade fever, mildly tachycardic, weakness, nothing focal, no alarm signs/symptoms
epic sepsis alert triggered
vanc/pip-tazo given, lactate checked
flu+
sepsis metric met
care worse
lather, rinse, repeat
Metric based "QI" does net harm
I sat with a patient today who first noticed a change in October. It’s April now. In all those months of appointments and follow-ups, her breast had only truly been looked at twice. That stayed with me.
If something has changed with your body — especially something under your ht
Everyone was so excited about this film, Amazon launched it a day early! It's only $2.99 to rent & it's available now. I highly encourage everybody to check it out. You don't have to be "for or against LDL" to be moved by the life-changing stories & be curious about the q
Exclude posts discussing AI infrastructure (chips, data centers, power grids, compute scale, training compute, GPT releases) that lack specific application to healthcare workflows, clinical outcomes, or healthcare business models—even if the author works in healthcare or has a healthcare post history.
Posts about AI compute, energy, infrastructure, or technical capabilities that are tangentially framed as healthcare-relevant but lack concrete healthcare application.
3 example posts
Software is not a moat
Over the last 15+ years, nearly every innovation @EvanSpiegel and his team shipped got copied. Stories. AR glasses. Swipe-based navigation. The camera-first interface.
And yet @Snapchat is the only independent consumer social app that has lasted. Nearly 1
AI could, in theory, automate 57% of US work hours. Yet most human skills remain relevant.
The future of work is not human or machine – but a partnership between people, agents, and robots.
Read our latest research on skill partnerships in the age of AI: https://t.co/h1K56uPqPo
Is the business model for traditional software companies in permanent decline due to AI Agents not needing seats?
2 examples:
Re: @salesforce, we’ve reduced our seats from 10+ to 2 human seats and 1 API seat. And yet, we now pay $22,000 a year, 83% up from $12,000. Why? Our
Created 2026-04-27 · Updated 2026-04-27
[off_topic_conspiracy_or_fringe_claims]
Learned · 3 rejections · Active
Exclude posts that promote conspiracy theories, sensationalize criminal allegations, make unverified claims about corruption without evidence, or veer into non-healthcare domains (UFOs, immigration, sports, finance, labor market macro) even if loosely framed as healthcare-adjacent.
Posts making unsubstantiated conspiracy claims, sensationalized allegations, or discussing non-healthcare fringe topics.
3 example posts
@PirateWires He's objectively correct. Brian Thompson made decisions that led to denials of medical care, and people died. He used Ai to find ways to deny claims ffs. Brian Thompson has more blood on his hands than whoever shot him
Bob Lazar allegedly watched people fly a UFO at Area 51.
“They knew how to fly it.”
“The craft had a corona discharge glow on the bottom and lifted off silently up into the sky … ”
And it had one shocking, anomalous effect that still perplexes him to this day:
As Lazar https:
Two companies you've never heard of built a combined $373M revenue business by helping employees bypass IT. Now comes the part where IT buys its way back in.
Replit just hit $253M ARR growing 2,352% YoY. 85% of the Fortune 500 have employees on it. Lovable is at $120M ARR, $6.6B
Created 2026-04-27 · Updated 2026-04-27
[stock_ticker_speculation]
Learned · 3 rejections · Active
Exclude posts that lead with stock tickers ($LLY, $NVO, $HIMS, $IBRX, etc.) and focus primarily on investment strategy, price comparisons, or trading positions rather than substantive healthcare system analysis or clinical outcomes.
Posts focused on stock price movements, trading positions, or speculation about company valuations with minimal healthcare substance.
3 example posts
Here is a video of me entering my office tomorrow knowing that $NTLA is about to present the first-ever Phase 3 data of an In Vivo (!) CRISPR Gene Editing Program. Somehow - and after @adamfeuerstein’s🧵👇- I have a feeling it won’t be the only BioTech and CRISPR news…🤔 $XBI https:
$LLY v $NVO
Foundayo (orforglipron) scripts off to a slow start both in raw numbers and in comparison to Oral Wegovy’s launch at same time point.
Overall statistics show Oral Wegovy script growth is robust, and thus far undeterred, by Foundayo market entry.
🎩 @bloomberg https
$LLY $NVO $HIMS
🚨 LILLY GLP-1 PILL FOUNDAYO: NEARLY 4,000 PRESCRIPTIONS IN WEEK 2
- Foundayo had 1,390 Rxs during week 1
- Meanwhile, Novo's Wegovy Pill had 3k in first 4 days and 18,410 prescriptions in its second week 🤯
- IQVIA data
- Week ending Apr 17
"While we believe htt
Exclude posts that argue for or against a broad health claim, medical myth, or drug side effect debate (e.g., 'GLP-1s cause eating disorders', 'protein absorption myths') without analyzing how these claims impact healthcare access, clinical workflows, or healthcare delivery. The post must address a healthcare system problem, not just dispute a medical fact.
Posts debating or asserting broad health claims (side effects, cardiovascular outcomes, protein metabolism) without healthcare delivery or systems context
3 example posts
"There is also compelling preliminary evidence suggesting that the use of these drugs [GLP-1] could exacerbate and lead to new diagnoses of restrictive
eating disorders, including anorexia nervosa."
@NEJM today
https://t.co/swx1y25F1X https://t.co/bQYlpuFBjM
A 65% cholesterol reduction has been available since 2015. Almost nobody could get it. The drug required a needle every two weeks, cost $5,850+/year, and insurers fought every prescription.
@Merck spent a decade figuring out how to put the same mechanism in a pill. Enlicitide:
The only problem with the GLP-1 heart muscle loss narrative is...
... that it's just a narrative.
GLP-1s have reliably improved cardiovascular outcomes in trials, to the point that some research suggests benefit may even be independent of (not reliant on) weight loss.
Exclude posts that present a single clinical case, patient encounter, or clinical observation (e.g., 'a patient I saw today...', 'epic alert triggered...') without connecting it to systemic issues, evidence-based patterns, or healthcare delivery problems. Clinical stories must illuminate a healthcare system problem, not just narrate individual events.
Individual clinical cases or anecdotes presented as observations without broader healthcare systems insight or evidence base
3 example posts
Started with standard ChatGPT for clinicians asking for a differential for a GI bleed patient. Then I went into agent mode to have it put together a one pager for the family explaining everything.
Of course, this is not a real patient. https://t.co/PEUeCqizT1
Attention PK nerds, pharmacologists, and clinicians who actually understand serum levels:
I haven’t seen this discussed, but it could matter for patients priced out of injectables.
If a 25 mg oral semaglutide tablet has ~1% bioavailability, that’s ~0.25 mg systemically… on
low grade fever, mildly tachycardic, weakness, nothing focal, no alarm signs/symptoms
epic sepsis alert triggered
vanc/pip-tazo given, lactate checked
flu+
sepsis metric met
care worse
lather, rinse, repeat
Metric based "QI" does net harm
Created 2026-04-27 · Updated 2026-04-27
[broad_healthcare_trend_claim_without_evidence]
Learned · 3 rejections · Active
Exclude posts that make broad claims about healthcare trends (e.g., 'AI is taking on labor but not accountability', 'GLP-1s exacerbate eating disorders') without presenting studies, data, or specific evidence to support the claim.
Posts making sweeping claims about healthcare trends, outcomes, or safety without supporting data or nuance.
3 example posts
Here is a video of me entering my office tomorrow knowing that $NTLA is about to present the first-ever Phase 3 data of an In Vivo (!) CRISPR Gene Editing Program. Somehow - and after @adamfeuerstein’s🧵👇- I have a feeling it won’t be the only BioTech and CRISPR news…🤔 $XBI https:
"There is also compelling preliminary evidence suggesting that the use of these drugs [GLP-1] could exacerbate and lead to new diagnoses of restrictive
eating disorders, including anorexia nervosa."
@NEJM today
https://t.co/swx1y25F1X https://t.co/bQYlpuFBjM
U.S. nursing homes are fabricating schizophrenia diagnoses to hide their use of dangerous antipsychotic drugs to subdue dementia patients, a government watchdog report found.
The drugs increase the risk of falls, strokes and death. https://t.co/6SkzWxZfSz
Created 2026-04-27 · Updated 2026-04-27
[broad_health_claims_without_nuance]
Learned · 3 rejections · Active
Exclude posts that assert broad health claims (e.g., 'GLP-1s don't cause eating disorders, the narrative is false,' 'AI improves cardiovascular outcomes reliably') or dismiss concerns with brief rebuttal—without engaging competing evidence, acknowledging subpopulation variation, or analyzing real-world implementation risks.
Posts making sweeping health claims or counter-narratives without evidence, nuance, or acknowledgment of clinical trade-offs.
3 example posts
"There is also compelling preliminary evidence suggesting that the use of these drugs [GLP-1] could exacerbate and lead to new diagnoses of restrictive
eating disorders, including anorexia nervosa."
@NEJM today
https://t.co/swx1y25F1X https://t.co/bQYlpuFBjM
A 65% cholesterol reduction has been available since 2015. Almost nobody could get it. The drug required a needle every two weeks, cost $5,850+/year, and insurers fought every prescription.
@Merck spent a decade figuring out how to put the same mechanism in a pill. Enlicitide:
The only problem with the GLP-1 heart muscle loss narrative is...
... that it's just a narrative.
GLP-1s have reliably improved cardiovascular outcomes in trials, to the point that some research suggests benefit may even be independent of (not reliant on) weight loss.
Created 2026-04-27 · Updated 2026-04-27
[ai_infrastructure_compute_hype_no_application]
Learned · 3 rejections · Active
Exclude posts that discuss GPU bottlenecks, data center buildouts, power generation, or compute scaling trends (e.g., Elon's terawatt announcement, grid equipment manufacturing) even if framed as enabling AI—unless the post explicitly connects to healthcare delivery, clinical workflows, or health system operations.
Posts about AI compute, power infrastructure, or chip capacity without healthcare-specific application or analysis.
3 example posts
Since I began work on AI in 2010, training compute for frontier models has grown by one trillion times.
Now we're looking at something like another thousand-fold growth in effective compute by the end of 2028.
1000x the existing 1,000,000,000,000x.
Extraordinary stuff.
Follow the bottleneck.
Chips → data centers → grid equipment → power → gas turbines
Grid equipment grew 1%/yr for decades. Then data centers showed up as an entirely new buyer.
Gas turbine makers shipped 5–7 GW/yr. Last year? Orders hit 100 GW.
@maxlbcook on how he https://t.
This is by far the most important result of the entire GPT-5.5 release:
In a cyber evaluation GPT-5.5 was able to take over a simulated corporate network in 1/10 trials with a budget of 100M tokens.
Previously, the only model that was able to solve this task was Claude Mythos,
Created 2026-04-27 · Updated 2026-04-27
[off_topic_conspiracy_or_fringe]
Learned · 3 rejections · Active
Exclude posts that promote conspiracy theories, unsubstantiated fringe claims (e.g., UFOs, Area 51), or topics entirely outside healthcare and technology domains, even if tangentially associated with a healthcare account.
Posts about conspiracy theories, UFOs, unsubstantiated claims, or topics with no healthcare relevance.
3 example posts
Bob Lazar allegedly watched people fly a UFO at Area 51.
“They knew how to fly it.”
“The craft had a corona discharge glow on the bottom and lifted off silently up into the sky … ”
And it had one shocking, anomalous effect that still perplexes him to this day:
As Lazar https:
"Your body can only use 25-30g of protein per meal. Anything above that gets wasted."
This claim has been repeated in fitness nutrition for over a decade, and it was built on studies that measured the right thing over the wrong timescale.
Moore 2009 gave six young men 0, 5, ht
Some people argue that keeping an open-mind makes it easier to believe in conspiracy theories.
But we found the exact oppposite in our newest paper.
Open-mindedness was the strongest predictor of *rejecting* conspiracy theories in a sample of 46,745 participants around the http
Created 2026-04-26 · Updated 2026-04-26
[truncated_or_low_effort_incomplete_posts]
Learned · 3 rejections · Active
Exclude posts that appear truncated, contain only partial sentences, fragment mid-thought, or consist of minimal content (e.g., a single quote, a headline fragment, or incomplete reasoning). Posts must include enough coherent content to evaluate substantive merit.
Posts that are obviously cut off, fragmented, or incomplete with minimal or no substantive content.
3 example posts
Bob Lazar allegedly watched people fly a UFO at Area 51.
“They knew how to fly it.”
“The craft had a corona discharge glow on the bottom and lifted off silently up into the sky … ”
And it had one shocking, anomalous effect that still perplexes him to this day:
As Lazar https:
Two companies you've never heard of built a combined $373M revenue business by helping employees bypass IT. Now comes the part where IT buys its way back in.
Replit just hit $253M ARR growing 2,352% YoY. 85% of the Fortune 500 have employees on it. Lovable is at $120M ARR, $6.6B
Lenny Rachitsky gets ~200 requests every week for things like events, partnerships and content. He declines 99.9% of them using different email templates that match the type of request.
He says yes to very few things, but those all adhere to the same question: If his audience
Created 2026-04-26 · Updated 2026-04-26
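The truncation rule above is another candidate for a cheap heuristic before any model call. A minimal sketch, assuming a hypothetical `looks_truncated` helper; the length threshold and URL-stub list are illustrative, not the production filter:

```python
def looks_truncated(post: str) -> bool:
    """Heuristic flags for cut-off or fragmentary posts.

    Catches obvious cases: too little text to evaluate, a post clipped
    mid-URL (leaving stubs like "ht" or "htt"), or text that stops with
    no closing punctuation.
    """
    text = post.rstrip()
    if len(text) < 40:  # too short to evaluate for substantive merit
        return True
    last_word = text.split()[-1]
    # Tweets clipped mid-URL leave fragments like "ht", "htt", "https:"
    if last_word.rstrip(".").lower() in {"ht", "htt", "http", "https", "https:"}:
        return True
    # Ends mid-thought with no closing punctuation
    return text[-1] not in ".!?\"')\u201d\u2026"

looks_truncated("Moore 2009 gave six young men 0, 5, ht")                  # True: clipped mid-URL
looks_truncated("The drugs increase the risk of falls, strokes and death.")  # False
```

Like the model-based rule, this only gates evaluation; a flagged post is excluded for incompleteness, not for its topic.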
[broad_unvalidated_health_claims]
Learned · 3 rejections · Active
Exclude posts that assert broad health claims, speculative drug effects, or controversial medical narratives (e.g., GLP-1s cause eating disorders, cholesterol drugs are universally blocked by insurers) without citing peer-reviewed evidence, clinical trial data, or acknowledging scientific debate and heterogeneity.
Posts making broad health or medical claims that lack evidence, are speculative, or contradict established research without nuance
3 example posts
"There is also compelling preliminary evidence suggesting that the use of these drugs [GLP-1] could exacerbate and lead to new diagnoses of restrictive
eating disorders, including anorexia nervosa."
@NEJM today
https://t.co/swx1y25F1X https://t.co/bQYlpuFBjM
Attention PK nerds, pharmacologists, and clinicians who actually understand serum levels:
I haven’t seen this discussed, but it could matter for patients priced out of injectables.
If a 25 mg oral semaglutide tablet has ~1% bioavailability, that’s ~0.25 mg systemically… on
A 65% cholesterol reduction has been available since 2015. Almost nobody could get it. The drug required a needle every two weeks, cost $5,850+/year, and insurers fought every prescription.
@Merck spent a decade figuring out how to put the same mechanism in a pill. Enlicitide:
Exclude posts that report on AI model security flaws, zero-day vulnerabilities, network takeovers, or prompt injection attacks (e.g., 'Claude Code: He proved that every safety guardrail we trust is architecturally useless') unless they directly address a healthcare deployment risk or clinical system compromise.
Posts about AI model vulnerabilities, jailbreaks, or cybersecurity exploits framed loosely as healthcare-relevant but without healthcare-specific application
2 example posts
A researcher gave an AI agent access to his shell, his files, and his network. Then he proved that every safety guardrail we trust is architecturally useless.
It cannot tell the difference between your instructions and a hacker's.
The paper is called Parallax: Why AI Agents htt
This is by far the most important result of the entire GPT-5.5 release:
In a cyber evaluation GPT-5.5 was able to take over a simulated corporate network in 1/10 trials with a budget of 100M tokens.
Previously, the only model that was able to solve this task was Claude Mythos,
Exclude posts that advocate for, heavily promote, or explore off-label, experimental, or fringe medical interventions (ibogaine, rapamycin, ayahuasca, unvalidated peptide uses) where the evidence base is unclear, speculative, or lacking peer-reviewed validation in the claimed indication.
Posts promoting or exploring unvalidated medical treatments, unproven interventions, or speculative drug uses without clinical evidence.
3 example posts
Impressive study and even with the limitations, is an important addition to the Rapamycin literature
In my opinion, the only plausible off-label use of Rapamycin currently should be in ApoE4 carriers as not many options are available). That would be an important trial we are
As far as I know this is the only naturally-derived, classical psychedelic, that has killed people.
Ayahuasca has some deaths, but it's unclear what the cause was, and unlikely directly related to its cardiovascular risk profile. https://t.co/DatuHiBOTX
We’re exploring the idea of a peptide-forward telehealth concierge medical service. Medicine 3.0 focused on full optimization- peptides, hormones, diet/exercise. MD is a former college varsity rower, fellowship at Yale etc.
Would you be interested in participating in a pilot
Exclude posts that focus on AI model technical capabilities (inference speed, benchmark scores, zero-day vulnerabilities, safety guardrails, network takeover demonstrations) where healthcare is mentioned only as a loose example or label, not the core application being analyzed.
Posts about AI model technical capabilities, benchmarks, or safety vulnerabilities that only tangentially relate to healthcare.
3 example posts
A researcher gave an AI agent access to his shell, his files, and his network. Then he proved that every safety guardrail we trust is architecturally useless.
It cannot tell the difference between your instructions and a hacker's.
The paper is called Parallax: Why AI Agents htt
This is by far the most important result of the entire GPT-5.5 release:
In a cyber evaluation GPT-5.5 was able to take over a simulated corporate network in 1/10 trials with a budget of 100M tokens.
Previously, the only model that was able to solve this task was Claude Mythos,
Claude remains irreducibly Claude. If you know, you know.
(The fact that models have distinct personalities that are consistent across generations is technically interesting, it also makes it very easy to use new releases when they come along, because they feel very similar). ht
Created 2026-04-26 · Updated 2026-04-26
[fraud_scandal_without_systems_analysis]
Learned · 3 rejections · Active
Exclude posts that report fraud cases, corporate misconduct, or regulatory violations (e.g., nursing home abuse, insurance denials, hospice fraud) as breaking news or moral outrage without connecting to systemic healthcare policy, incentive structures, or reform mechanisms.
Posts reporting healthcare fraud, corporate malfeasance, or scandal headlines without structural or policy analysis.
3 example posts
U.S. nursing homes are fabricating schizophrenia diagnoses to hide their use of dangerous antipsychotic drugs to subdue dementia patients, a government watchdog report found.
The drugs increase the risk of falls, strokes and death. https://t.co/6SkzWxZfSz
@PirateWires He's objectively correct. Brian Thompson made decisions that led to denials of medical care, and people died. He used Ai to find ways to deny claims ffs. Brian Thompson has more blood on his hands than whoever shot him
🚨BREAKING: HHS Sec. RFK Jr. just announced President Trump has SAVED and FOUND 138,000 missing children lost under Biden.
"Many have been trafficked, undergone slavery, s*xual abuse."
Follow: @BoLoudon https://t.co/p6YKEm38T7
Created 2026-04-26 · Updated 2026-04-26
[clinical_observation_without_system_context]
Learned · 3 rejections · Active
Exclude posts that describe individual patient encounters, clinical trial data points, or drug side-effect observations in isolation, without connecting to healthcare delivery systems, reimbursement, workflow, or policy impact.
Posts sharing isolated clinical anecdotes, patient cases, or drug trial observations without healthcare systems analysis.
3 example posts
"There is also compelling preliminary evidence suggesting that the use of these drugs [GLP-1] could exacerbate and lead to new diagnoses of restrictive
eating disorders, including anorexia nervosa."
@NEJM today
https://t.co/swx1y25F1X https://t.co/bQYlpuFBjM
The only problem with the GLP-1 heart muscle loss narrative is...
... that it's just a narrative.
GLP-1s have reliably improved cardiovascular outcomes in trials, to the point that some research suggests benefit may even be independent of (not reliant on) weight loss.
low grade fever, mildly tachycardic, weakness, nothing focal, no alarm signs/symptoms
epic sepsis alert triggered
vanc/pip-tazo given, lactate checked
flu+
sepsis metric met
care worse
lather, rinse, repeat
Metric based "QI" does net harm
Created 2026-04-26 · Updated 2026-04-26
[glp1_peptide_market_price_only]
Learned · 3 rejections · Active
Exclude posts that report GLP-1 or peptide drug pricing, generic launches, competitive pricing announcements, or market share metrics without analyzing healthcare access, clinical outcomes, payer policy, or system-level impacts.
Posts about GLP-1 and peptide pricing, market dynamics, or competitive launches without healthcare system impact analysis.
3 example posts
India’s weight-loss drug market just ran a live experiment in price elasticity.
Novo Nordisk’s semaglutide patent expired 20 March 2026.
Within 3 weeks:
15+ generics launched
Cheapest at Rs 2,000/month (branded was Rs 10,000+)
Novo cut Ozempic and Wegovy prices by 36-48%
Bu
$LLY $NVO $HIMS
🚨 LILLY GLP-1 PILL FOUNDAYO: NEARLY 4,000 PRESCRIPTIONS IN WEEK 2
- Foundayo had 1,390 Rxs during week 1
- Meanwhile, Novo's Wegovy Pill had 3k in first 4 days and 18,410 prescriptions in its second week 🤯
- IQVIA data
- Week ending Apr 17
"While we believe htt
$HIMS expands GLP-1 offering to include both $NVO and $LLY products.
The platform now allows providers to prescribe Eli Lilly’s Zepbound and Foundayo via LillyDirect, alongside Wegovy through its collaboration with Novo.
Link: https://t.co/AxG1LrECyY
#stocks #Investing
Exclude posts that report healthcare fraud, scandal, or abuse (e.g., UnitedHealth Change outage, nursing home misconduct, prior auth denials leading to deaths) as breaking news or outrage without analyzing root causes, systemic incentives, or operational/policy solutions.
Posts reporting healthcare fraud, abuse, or scandal (UnitedHealth outage, nursing home fraud, denial deaths) without systems-level analysis or operational insight.
3 example posts
What happened during the Change disaster?
Hospitals got bailed out.
CMS advanced $3.2 billion to hospitals between March and June 2024. UnitedHealth/Optum extended $6.5 billion in interest-liquidity through April 30.
Mercy, I looked it up, specifically had 218 days of cash
U.S. nursing homes are fabricating schizophrenia diagnoses to hide their use of dangerous antipsychotic drugs to subdue dementia patients, a government watchdog report found.
The drugs increase the risk of falls, strokes and death. https://t.co/6SkzWxZfSz
@PirateWires He's objectively correct. Brian Thompson made decisions that led to denials of medical care, and people died. He used Ai to find ways to deny claims ffs. Brian Thompson has more blood on his hands than whoever shot him
Created 2026-04-25 · Updated 2026-04-27
[off_topic_conspiracy_fringe_claims]
Learned · 3 rejections · Active
Exclude posts that promote conspiracy theories, unverified extraordinary claims (UFOs, Area 51, etc.), or sensationalized narratives without credible evidence or peer-reviewed validation.
Posts promoting conspiracy theories, UFO stories, or unsubstantiated fringe claims loosely tied to health
3 example posts
Bob Lazar allegedly watched people fly a UFO at Area 51.
“They knew how to fly it.”
“The craft had a corona discharge glow on the bottom and lifted off silently up into the sky … ”
And it had one shocking, anomalous effect that still perplexes him to this day:
As Lazar https:
They Don't Work for You-
Calls to protect foreigners from deportation or to keep the borders wide open are not about compassion. They are a core part of the globalist plan to flood the labor market with cheaper more compliant workers suppress wages for Americans and make
As far as I know this is the only naturally-derived, classical psychedelic, that has killed people.
Ayahuasca has some deaths, but it's unclear what the cause was, and unlikely directly related to its cardiovascular risk profile. https://t.co/DatuHiBOTX
Exclude posts whose primary subject is non-healthcare (cybersecurity vulnerabilities, aerospace engineering, geopolitical conflicts, labor market disruption, economic policy) even if they mention healthcare terms or have a healthcare angle tacked on. The core content must be healthcare systems, policy, or operations—not a different domain with healthcare as a secondary angle.
Posts about non-healthcare domains (tech, crypto, geopolitics, labor) that tangentially invoke healthcare framing or language.
3 example posts
This is by far the most important result of the entire GPT-5.5 release:
In a cyber evaluation GPT-5.5 was able to take over a simulated corporate network in 1/10 trials with a budget of 100M tokens.
Previously, the only model that was able to solve this task was Claude Mythos,
Why the biggest fintech players are in for a shock.
"The shift is from human UX to agent UX.
In the past, you won with dashboards, design and user experience.
Now, the buyer is an AI agent, and it only cares about APIs, performance and integration.
That breaks traditional htt
The N1 was a super heavy-lift launch vehicle intended to deliver payloads beyond low Earth orbit.
The N1 was the Soviet counterpart to the US Saturn V, planned for crewed travel to the Moon and beyond, with studies beginning as early as 1959. https://t.co/pB4u9TjyC4
Created 2026-04-25 · Updated 2026-04-25
[vigilante_violence_moral_justification]
Learned · 3 rejections · Active
Reject posts that frame violence or killing as morally justified based on the victim's professional conduct, policy decisions, or business practices. This includes content that argues a person 'deserved' harm due to their role in healthcare denials, insurance decisions, or other institutional actions, even when paired with factual critique of those decisions.
Exclude posts primarily about non-healthcare domains (software engineering, aerospace, advertising, space exploration, corporate management, chemical manufacturing) that lack substantive healthcare systems relevance, even if a healthcare keyword appears in the matched title or account.
Posts about business, engineering, space, software tools, or other non-healthcare domains that mention healthcare tangentially or are mistagged.
3 example posts
Two companies you've never heard of built a combined $373M revenue business by helping employees bypass IT. Now comes the part where IT buys its way back in.
Replit just hit $253M ARR growing 2,352% YoY. 85% of the Fortune 500 have employees on it. Lovable is at $120M ARR, $6.6B
The N1 was a super heavy-lift launch vehicle intended to deliver payloads beyond low Earth orbit.
The N1 was the Soviet counterpart to the US Saturn V, planned for crewed travel to the Moon and beyond, with studies beginning as early as 1959. https://t.co/pB4u9TjyC4
Claude remains irreducibly Claude. If you know, you know.
(The fact that models have distinct personalities that are consistent across generations is technically interesting, it also makes it very easy to use new releases when they come along, because they feel very similar). ht
Created 2026-04-24 · Updated 2026-04-24
[off_topic_or_conspiracy_fringe_claims]
Learned · 3 rejections · Active
Exclude posts about UFOs, unsubstantiated conspiracy theories, fringe claims without evidence, or topics (Bob Lazar, Pope Bible quotes, chargebacks) that have no healthcare systems relevance or credibility.
Posts about UFOs, conspiracy theories, unsubstantiated claims, or topics with no healthcare relevance.
3 example posts
Bob Lazar allegedly watched people fly a UFO at Area 51.
“They knew how to fly it.”
“The craft had a corona discharge glow on the bottom and lifted off silently up into the sky … ”
And it had one shocking, anomalous effect that still perplexes him to this day:
As Lazar https:
Things you wonder while watching ASPCA ads:
1. Why are they filming those poor dogs instead of immediately warming them up?
2. Why are they asking for my $18 while they're sitting on $466 million in investments?
3. Why is the ASPCA CEO paid $1.2 million per year?
4. Do viewers h
This free paper (one 17 we offer both online and as PDFs) explores the claim “socialism’s never been tried” and what it is that forces socialists to make this absurd argument. A hypothetical Elon Musk lends a hand.
Exclude posts that are primarily about non-healthcare domains (aerospace, real estate, geopolitics, labor market macro, sports, AI company metrics, finance) and only tangentially or superficially reference healthcare through a loose framing device, comparison, or misleading keyword match. The core substance must be healthcare-focused, not tangential.
Posts about non-healthcare domains (finance, politics, tech, sports) tagged or loosely framed as healthcare-relevant but lacking genuine healthcare substance.
3 example posts
Bob Lazar allegedly watched people fly a UFO at Area 51.
“They knew how to fly it.”
“The craft had a corona discharge glow on the bottom and lifted off silently up into the sky … ”
And it had one shocking, anomalous effect that still perplexes him to this day:
As Lazar https:
Two companies you've never heard of built a combined $373M revenue business by helping employees bypass IT. Now comes the part where IT buys its way back in.
Replit just hit $253M ARR growing 2,352% YoY. 85% of the Fortune 500 have employees on it. Lovable is at $120M ARR, $6.6B
The N1 was a super heavy-lift launch vehicle intended to deliver payloads beyond low Earth orbit.
The N1 was the Soviet counterpart to the US Saturn V, planned for crewed travel to the Moon and beyond, with studies beginning as early as 1959. https://t.co/pB4u9TjyC4
Exclude posts that advocate for or celebrate fringe, experimental, or unproven medical interventions (ibogaine, rapamycin off-label use, unvalidated peptide therapies) without acknowledging safety data, regulatory status, or lack of clinical evidence. Personal belief in or investment in unvalidated treatments is insufficient.
Posts promoting or investing in unproven or speculative medical treatments without clinical validation or regulatory context.
3 example posts
Impressive study and even with the limitations, is an important addition to the Rapamycin literature
In my opinion, the only plausible off-label use of Rapamycin currently should be in ApoE4 carriers as not many options are available). That would be an important trial we are
As far as I know this is the only naturally-derived, classical psychedelic, that has killed people.
Ayahuasca has some deaths, but it's unclear what the cause was, and unlikely directly related to its cardiovascular risk profile. https://t.co/DatuHiBOTX
I am a strong believer in ibogaine, which is one of the reasons why @ataibeckley acquired the residual interest in its ibogaine program in Q4 2023 and now owns it 100%.
I’m very encouraged to see the administration taking a positive public stance on this important topic.
Created 2026-04-24 · Updated 2026-04-24
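Each rule block above follows the same shape: a bracketed name, the instruction text applied by the models, a one-line summary, a rejection count, and example posts. A minimal sketch of how such a record might be modeled — every field and method name here is hypothetical, not the actual schema:

```python
from dataclasses import dataclass, field


@dataclass
class ExclusionRule:
    """One learned exclusion rule (illustrative shape, not the real schema)."""
    slug: str                  # bracketed name, e.g. "glp1_peptide_market_pricing_only"
    instruction: str           # full rule text given to the scanning models
    summary: str               # one-line description shown in the dashboard
    rejection_count: int       # rejections the rule was learned from
    active: bool = True
    example_posts: list[str] = field(default_factory=list)

    def as_prompt_block(self) -> str:
        """Render the rule as a prompt fragment for a prescreening model."""
        examples = "\n".join(f"- {p}" for p in self.example_posts)
        return f"[{self.slug}]\n{self.instruction}\nExamples:\n{examples}"
```

A scan would then concatenate `as_prompt_block()` output for every active rule into the prescreen prompt; the rendering method is an assumption about how the rule text and examples reach the models.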
[glp1_peptide_market_and_pricing]
Learned · 3 rejections · Active
Exclude posts that focus primarily on GLP-1 or peptide pricing, commercial availability, pharmacy partnerships, or market competition (e.g., 'Hims now offers Lilly drugs at $X/month'). These are product/market posts, not healthcare systems analysis. Include only if the post analyzes regulatory, access, or reimbursement policy impact.
Posts about GLP-1 and peptide pricing, market dynamics, and commercial availability without healthcare systems analysis.
3 example posts
$HIMS expands GLP-1 offering to include both $NVO and $LLY products.
The platform now allows providers to prescribe Eli Lilly’s Zepbound and Foundayo via LillyDirect, alongside Wegovy through its collaboration with Novo.
Link: https://t.co/AxG1LrECyY
#stocks #Investing
This admin is kicking butt.
One week: GLP-1s from $1,350 to $199/mo. 12 peptides removed from Category 2. Amazon entered the space. HIMS added Lilly drugs. Pediatric oral GLP-1 trial data dropped.
🚨 IMPORTANT NOTES ON THE $HIMS x $LLY ANNOUNCEMENT
1. This is not a "partnership"
2. Pricing on Hims is the same as everywhere else: Foundayo will cost $149/mo (low dose) to $349/mo (higher doses), plus a $149/mo membership fee
3. Unit economics are likely worse than the Novo
Exclude posts that promote unvalidated medical claims, fringe treatments (ayahuasca, ibogaine, rapamycin for aging), or speculative off-label drug uses without citing peer-reviewed evidence or regulatory context. Conspiracy narratives about vaccines or FDA actions count as well.
Posts making speculative, unvalidated, or fringe medical claims without rigorous evidence (e.g., off-label drugs, unproven treatments, conspiracy narratives about vaccines or health interventions).
3 example posts
Impressive study and even with the limitations, is an important addition to the Rapamycin literature
In my opinion, the only plausible off-label use of Rapamycin currently should be in ApoE4 carriers as not many options are available). That would be an important trial we are
As far as I know this is the only naturally-derived, classical psychedelic, that has killed people.
Ayahuasca has some deaths, but it's unclear what the cause was, and unlikely directly related to its cardiovascular risk profile. https://t.co/DatuHiBOTX
We’re exploring the idea of a peptide-forward telehealth concierge medical service. Medicine 3.0 focused on full optimization- peptides, hormones, diet/exercise. MD is a former college varsity rower, fellowship at Yale etc.
Would you be interested in participating in a pilot
Created 2026-04-24 · Updated 2026-04-24
[broad_health_claims_without_nuance_or_evidence]
Learned · 3 rejections · Active
Exclude posts that make broad, simplistic health claims (e.g., 'protein above 25g is wasted,' 'ultra-processed food causes brain noise') without linking to peer-reviewed studies, acknowledging research limitations, or presenting nuanced discussion of mechanisms and individual variation.
Posts making sweeping health or nutrition claims that oversimplify complex biology without citing research or acknowledging counterarguments.
2 example posts
"Your body can only use 25-30g of protein per meal. Anything above that gets wasted."
This claim has been repeated in fitness nutrition for over a decade, and it was built on studies that measured the right thing over the wrong timescale.
Moore 2009 gave six young men 0, 5, ht
Ingredients in ultra-processed food create "food noise" in your brain.
Food noise causes you to overeat, leading to obesity.
This is solvable by eating single-ingredient foods...
Or by injecting a GLP type drug (reta, tirza, etc.).
One is a long term solution, one is not.
Created 2026-04-23 · Updated 2026-04-24
[broad_questionable_health_claims_without_nuance]
Learned · 3 rejections · Active
Exclude posts that make broad, oversimplified health claims (e.g., 'body can only absorb 25-30g protein per meal', 'food noise causes obesity', vaccine safety claims) without acknowledging individual variability, mechanism complexity, or citing high-quality evidence.
Posts making sweeping health claims (protein absorption limits, vaccine safety with incomplete context, food noise causing obesity) that oversimplify complex nutrition or medical science.
3 example posts
"Your body can only use 25-30g of protein per meal. Anything above that gets wasted."
This claim has been repeated in fitness nutrition for over a decade, and it was built on studies that measured the right thing over the wrong timescale.
Moore 2009 gave six young men 0, 5, ht
Ingredients in ultra-processed food create "food noise" in your brain.
Food noise causes you to overeat, leading to obesity.
This is solvable by eating single-ingredient foods...
Or by injecting a GLP type drug (reta, tirza, etc.).
One is a long term solution, one is not.
Why does Rep. Chu demand every newborn get a hepatitis B vaccine with only a 4-day safety test and no placebo, when healthy babies of uninfected mothers face essentially zero risk?
When did “my body, my choice” stop applying to parents and their infants? https://t.co/tu6aLSkoMi
Created 2026-04-23 · Updated 2026-04-23
[conspiracy_or_fringe_unsubstantiated_claims]
Learned · 3 rejections · Active
Exclude posts that promote conspiracy theories, UFO sightings, alleged cover-ups, or extraordinary claims (missing children conspiracies, mysterious deaths linked to UFO research, smart TVs secretly recording users) without peer-reviewed evidence or credible institutional verification.
Posts promoting UFO sightings, missing persons conspiracies, or unverified extraordinary claims without credible evidence or scientific backing.
3 example posts
Bob Lazar allegedly watched people fly a UFO at Area 51.
“They knew how to fly it.”
“The craft had a corona discharge glow on the bottom and lifted off silently up into the sky … ”
And it had one shocking, anomalous effect that still perplexes him to this day:
As Lazar https:
🚨BREAKING: HHS Sec. RFK Jr. just announced President Trump has SAVED and FOUND 138,000 missing children lost under Biden.
"Many have been trafficked, undergone slavery, s*xual abuse."
Follow: @BoLoudon https://t.co/p6YKEm38T7
🚨BREAKING: A peer reviewed study just confirmed your smart TV is taking screenshots of your screen every 15 seconds and sending them to company servers.
Samsung every minute. LG every 15 seconds. Running even when you are using it as a monitor.
Here is how to stop it:
Created 2026-04-23 · Updated 2026-04-23
[personal_anecdote_or_unsubstantiated_claim]
Learned · 3 rejections · Active
Exclude posts that lead with personal anecdotes ('I've received nine reports', 'I never met my grandfather', 'Request for caregiver product'), unverified side effects, or individual narratives unless they are explicitly framed as validated research findings or policy case studies.
Posts sharing personal stories, unverified user reports, or individual experiences presented as evidence without clinical validation or systemic analysis.
3 example posts
I've closely monitored Alzheimers research for 40 years. Conclusions:
1)Incredible hype/Little practical value
2)Meds don't work
3)Early testing does much more harm than good
4)No low hanging fruit
5)Be skeptical of next "breakthru"
6)In many, just old age https://t.co/pAaCpo1Sfc
Request for caregiver product: a status layer for adult children with aging parents living in long term care.
In assisted living, hospice, hospital-at-home, and other long-term care settings, the signals already exist...they're just fragmented and hard to synthesize. Meal logs,
I never met my grandfather.
He died of pancreatic cancer when my father was just 19. Today, Yash Bindal, 33, father to 18-month-old Maya, faces the same fate.
@PopVaxIndia is using AI to make him a personalized generative medicine to extend his life.
https://t.co/O5VIXbmMGd
Created 2026-04-23 · Updated 2026-04-23
[glp1_peptide_macro_personal_framing]
Learned · 3 rejections · Active
Exclude posts about GLP-1 drugs, peptide synthesis, semaglutide, or weight-loss medications that are framed as personal side effects, pricing narratives, individual patient stories, or macro economics rather than healthcare system challenges, clinical evidence, or regulatory/access problems.
Posts about GLP-1 drugs or peptides framed through personal anecdotes, macro pricing narratives, or general wellness contexts rather than healthcare system, clinical evidence, or operational challenges.
3 example posts
Peptide synthesis is one of the hardest things to do right
Semaglutide comes out correct only 55% of the time. BPC-157 ~74%. every amino acid compounds the error
China won this because they have the scale to throw most of it away
We need to be building this capacity in the US
She's right. The safety risk was never the peptides. It was the supply chain. Regulated compounding access fixes the exact problems people are worried about. Heavy metals, contamination, underdosed vials.
I never met my grandfather.
He died of pancreatic cancer when my father was just 19. Today, Yash Bindal, 33, father to 18-month-old Maya, faces the same fate.
@PopVaxIndia is using AI to make him a personalized generative medicine to extend his life.
https://t.co/O5VIXbmMGd
Created 2026-04-23 · Updated 2026-04-23
[fraud_scandal_reporting_without_analysis]
Learned · 3 rejections · Active
Exclude posts that report healthcare fraud convictions, enforcement sweeps, or criminal prosecutions as crime-focused news without analyzing root causes, systemic vulnerabilities, or operational/policy implications for healthcare delivery and administration.
Posts about healthcare fraud cases, enforcement actions, or financial crime prosecutions presented as crime reporting without systems-level analysis of what they reveal about healthcare operations or policy.
3 example posts
In 2021, Javaid Purwaiz, an OBGYN, was sentenced to 59 years in prison for one of the most severe cases of healthcare fraud in the country’s history.
Once you go through court records, you realize the fraud that gave him a life sentence is the same fraud used by gender doctors.
$340 MILLION in fraud targeted — in 1 WEEK.
That’s what happens when enforcement gets serious.
Luxury cars. Fake claims. Stolen benefits meant for Americans in need — now turning into prison sentences.
The hammer is dropping. We’re just getting started.https://t.co/qP0cOIypE4
🚨 As you pay your taxes this week, LOOK at what the fraudsters allegedly did with your money❗️
🔹Cosmetic procedures
🔹Breast implants
🔹Tweaks to arms and thighs
🔹Tummy tuck
🔹Purebred dogs
🔹Flights to Hawaii
🔹Flights to Disneyland
🔹Multimillion-dollar home
🔹Range https://t.c
Created 2026-04-23 · Updated 2026-04-23
[infrastructure_compute_hype_tangential]
Learned · 3 rejections · Active
Exclude posts focused on GPU scaling, liquid cooling, CPU launches, compute infrastructure, or orbital compute capacity where healthcare is mentioned as a potential application but the substance is infrastructure/hardware hype, not healthcare-specific challenges or adoption patterns.
Posts about AI compute infrastructure, data centers, cooling systems, or hardware scaling that mention healthcare only tangentially or as a use case without healthcare-specific operational insight.
3 example posts
Most AI discussions ignore the physical reality: a lot of facilities still can’t support liquid cooling.
At NVIDIA GTC, @Lenovo’s Jon Alexander explained that across 4,400 global locations, many sites still aren’t ready for liquid cooling. Some environments support megawatts of
🔥 CPUs are having a moment.
#Nvidia launched a standalone CPU. #Arm made its first chip in 35 years. #Intel & #AMD are raising prices amid a supply crunch.
What's behind it: Agentic AI needs far more CPU than anyone planned for — driving a structural shift in CPU:GPU ratios tow
To put Elon's space compute vision into perspective:
1 TW of compute in orbit
That's 10 million tons to orbit each year.
That's 100,000 launches a year, almost one every 5 minutes.
In the airline business that's normal!
Created 2026-04-23 · Updated 2026-04-23
[workforce_disruption_macro_labor]
Learned · 3 rejections · Active
Exclude posts that report employment decline, job displacement statistics, or labor market disruption from AI adoption presented as general economic data, even if healthcare workers are mentioned. The post must address healthcare-specific workforce transitions or retraining.
Posts about AI-driven job displacement or labor market disruption presented as macro-economic trends rather than healthcare-specific workforce challenges.
3 example posts
A major milestone just landed quietly: for the first time ever, half of all employed Americans use AI at work. Gallup's Q1 2026 survey of nearly 24,000 workers shows that adoption has more than doubled since 2023, when only 21% reported any AI use. https://t.co/jmQga9tbWT
I think we now have real evidence that AI exposure is associated with job decline for age <25. The Canary in the Coalmine paper addresses a lot of concerns. While economic science takes time; now is the time to think about policy responses.
@erikbryn @BharatKChandar @RuyuChen
In Beijing's 2026 humanoid robot half-marathon, HONOR's Lightning completed the 21 km course in 50:26 minute.
Beat current human men's half-marathon world record of 57:20.
Last year's winner took over 2 hours 40 minutes.
Massive progress in 12 month
https://t.co/OcZJ66ebWD
Exclude posts that showcase AI model speed, technical optimization, code execution ability, or capability demonstrations (e.g., 'model completed task in X seconds', 'agent executed full network takeover', 'LLM got feature X right') unless directly tied to a healthcare workflow, clinical decision, or patient outcome impact.
Posts about AI model technical capabilities, speed improvements, or benchmark achievements presented without concrete healthcare use cases or applications.
3 example posts
Boltz-2 just got a major speed upgrade. 🚀
We’re releasing Lightning-Boltz, a local, GPU-accelerated framework free from public MSA server bottlenecks.⚡
On a single L40S, total runtime drops to 28s per input vs 89s with the rate-limited server and 298s with MMseqs-CPU.
1/5 🧵 ht
Everyone's covering agents that help you work and build. Almost nobody's covering this:
The same primitives ARE the production runtime.
The SDK is one line:
npm install @anthropic-ai/claude-agent-sdk
The CLAUDE.md that guides Claude Code in your terminal is the exact same http
FT: The White House is moving to give major US agencies access to a modified Anthropic Mythos model built to hunt dangerous software flaws before attackers find them.
That makes Mythos useful for defense because a model that can find a weakness in an operating system, browser, h
Exclude posts that emphasize AI model performance metrics, speed upgrades, benchmark improvements, or technical architecture features without demonstrating direct application to a healthcare delivery, clinical, or regulatory challenge. Posts must connect the capability to a healthcare use case.
Posts about AI model technical capabilities, speed improvements, or performance benchmarks with loose or no healthcare connection
3 example posts
Boltz-2 just got a major speed upgrade. 🚀
We’re releasing Lightning-Boltz, a local, GPU-accelerated framework free from public MSA server bottlenecks.⚡
On a single L40S, total runtime drops to 28s per input vs 89s with the rate-limited server and 298s with MMseqs-CPU.
1/5 🧵 ht
Everyone's covering agents that help you work and build. Almost nobody's covering this:
The same primitives ARE the production runtime.
The SDK is one line:
npm install @anthropic-ai/claude-agent-sdk
The CLAUDE.md that guides Claude Code in your terminal is the exact same http
Today we launched a major update to the OpenAI Agents SDK to help developers build and deploy long-running, durable agents in production.
You can now build your own Codex-style agents using powerful primitives for modern agents - file and computer use, skills, memory and
Created 2026-04-22 · Updated 2026-04-22
[cybersecurity_or_vulnerability_tangential]
Learned · 3 rejections · Active
Exclude posts about AI discovering vulnerabilities, corporate security breaches, or cybersecurity threats unless the post explicitly connects to healthcare data, patient privacy, EHR systems, or healthcare infrastructure. General cybersecurity or defense topics do not qualify.
Posts about AI-assisted cybersecurity vulnerabilities, hacking, or zero-day exploits without healthcare application
3 example posts
Guillermo reports "we believe the attacking group to be highly sophisticated and, I strongly suspect, significantly accelerated by AI. They moved with surprising velocity and in-depth understanding of Vercel"
Alex Stamos warns us that defensive agents with autonomy and https://t
> Vercel got pawned
> severe enough to notify law enforcement
> the only advice: “review your environment variables”
> what does that even mean?
> $10B company, and this is how you communicate
Cyber attacks ramping fast, starting to see why Anthropic is scared to
FT: The White House is moving to give major US agencies access to a modified Anthropic Mythos model built to hunt dangerous software flaws before attackers find them.
That makes Mythos useful for defense because a model that can find a weakness in an operating system, browser, h
Exclude posts announcing AI company product launches, SDK/API releases, business metrics, or feature updates unless the post specifically analyzes the product's healthcare application, regulatory compliance, or impact on healthcare workflows.
Posts about AI company product launches, SDK releases, feature announcements, or business metrics for tools used broadly across industries, with tenuous healthcare connection.
3 example posts
𝐇𝐨𝐜𝐤𝐞𝐲𝐒𝐭𝐚𝐜𝐤 (𝐘𝐂 𝐒𝟐𝟑) 𝐫𝐚𝐢𝐬𝐞𝐝 $𝟓𝟎𝐌 𝐭𝐨 𝐛𝐮𝐢𝐥𝐝 𝐀𝐈 𝐫𝐞𝐯𝐞𝐧𝐮𝐞 𝐚𝐠𝐞𝐧𝐭𝐬 𝐟𝐨𝐫 𝐭𝐡𝐞 𝐞𝐧𝐭𝐞𝐫𝐩𝐫𝐢𝐬𝐞. 📈
Systems of record → systems of action.
Congrats to @hockeystackHQ co-founders Buğra Gündüz, Arda Bulut, and Emir Atlı on the https://t.co/5A7An4w0MW
Congrats to @AbridgeHQ, @AnthropicAI, @cursor_ai, @elise_ai, @Fal, @WeAreLegora, and @Perplexity_ai on being named to the @Forbes AI 50 — redefining how the world builds, works, and communicates through AI.
We couldn't be more excited to back them as they continue to shape the h
Everyone's covering agents that help you work and build. Almost nobody's covering this:
The same primitives ARE the production runtime.
The SDK is one line:
npm install @anthropic-ai/claude-agent-sdk
The CLAUDE.md that guides Claude Code in your terminal is the exact same http
Exclude posts that present clinical research results, drug trial outcomes, or laboratory findings as standalone observations without discussing healthcare delivery, regulatory, reimbursement, or system-level implications. The post must connect clinical data to healthcare practice, access, or implementation challenges.
Posts reporting clinical trial results, drug efficacy data, or research findings without analysis of healthcare system implications or adoption barriers.
3 example posts
Good summary of the marked benefit of the molecular glue drug (daraxonrasib) vs pancreatic cancer, from Revolution Medicines, and other progress (adds to the neoantigen vaccine with 6-year survival)
gift link https://t.co/qk7Ar9dCAQ https://t.co/SMiA51fiwX
Insightful plenary from the father of CAR-T, @carlhjune #AACR26
🔬 CAR-T for solid tumors is finally breaking through. 7 FDA approvals in blood cancers and now solid tumors are next 🎯
Clinical signals
• CLDN18.2 (Satri-cel): 38% vs 4% ORR in gastric cancer (The Lancet 2025) http
🧬 In vivo CAR-T engineering: the next frontier? From manufacturing → reprogramming in situ! @AACR
https://t.co/i78CUWFBcE
▪️ Bypasses ex vivo complexity & delays
🦠 Viral + non-viral delivery strategies emerging
🎯 Targets endogenous T cells directly in patients
💥 Potential h
Exclude posts discussing AI compute infrastructure, GPU capacity, data center cooling, chip architectures, or space-based computing unless they explicitly connect to a healthcare delivery problem, clinical use case, or healthcare organization operational need.
Posts about AI compute, hardware, cooling infrastructure, or chipset announcements that lack clear healthcare delivery or application context.
3 example posts
AI progress is hitting a wall, and the constraint is risk.
From RSAC, @Commvault Chief Market Officer Anna Griffin, lays it out clearly: data is scaling faster than architectures can handle, agents are expanding the attack surface, and most organizations don’t have the governance
Boltz-2 just got a major speed upgrade. 🚀
We’re releasing Lightning-Boltz, a local, GPU-accelerated framework free from public MSA server bottlenecks.⚡
On a single L40S, total runtime drops to 28s per input vs 89s with the rate-limited server and 298s with MMseqs-CPU.
1/5 🧵 ht
Most AI discussions ignore the physical reality: a lot of facilities still can’t support liquid cooling.
At NVIDIA GTC, @Lenovo’s Jon Alexander explained that across 4,400 global locations, many sites still aren’t ready for liquid cooling. Some environments support megawatts of
Exclude posts that frame healthcare issues as political scandals, regulatory failures, or corporate malfeasance primarily to generate outrage — unless they provide concrete analysis of how the system problem affects healthcare operations, access, or delivery.
Posts expressing political outrage, scandal reporting, or regulatory criticism without substantive healthcare system analysis
3 example posts
@swyx > get government sponsored monopoly
> prevent patients from getting their data
> make data non transferable
> contribute nothing to open source software
> refuse to collaborate with other software vendors and kill the ecosystem
> appeal to administrators and be hated by p
UnitedHealth Group $UNH is in free fall.
In the last month, the stock has dropped 45%.
That’s a brutal stretch for what many consider one of the most reliable compounders in the healthcare space.
So what happened? And more importantly, what should investors do now?
Let’s unpa
I am not so partisan that I can't appreciate Congresswoman Alexandria Ocasio-Cortez taking down the CEO of CVS on behalf of all Americans.
Healthcare is a universal issue, so pay attention to what's being sold to us.
Translation: "Our perfect patient is insured by Aetna, CVS. T
Created 2026-04-21 · Updated 2026-04-21
[ai_company_product_metrics_and_funding]
Learned · 3 rejections · Active
Exclude posts that primarily report funding amounts, valuation milestones, revenue figures, or product launch metrics for AI companies—even if the company has healthcare products. The post must demonstrate how the AI is applied to solve a healthcare problem, not just celebrate the company's business success.
Posts reporting on AI company funding rounds, revenue milestones, or product launch metrics without healthcare application focus.
3 example posts
𝐇𝐨𝐜𝐤𝐞𝐲𝐒𝐭𝐚𝐜𝐤 (𝐘𝐂 𝐒𝟐𝟑) 𝐫𝐚𝐢𝐬𝐞𝐝 $𝟓𝟎𝐌 𝐭𝐨 𝐛𝐮𝐢𝐥𝐝 𝐀𝐈 𝐫𝐞𝐯𝐞𝐧𝐮𝐞 𝐚𝐠𝐞𝐧𝐭𝐬 𝐟𝐨𝐫 𝐭𝐡𝐞 𝐞𝐧𝐭𝐞𝐫𝐩𝐫𝐢𝐬𝐞. 📈
Systems of record → systems of action.
Congrats to @hockeystackHQ co-founders Buğra Gündüz, Arda Bulut, and Emir Atlı on the https://t.co/5A7An4w0MW
Congrats to @AbridgeHQ, @AnthropicAI, @cursor_ai, @elise_ai, @Fal, @WeAreLegora, and @Perplexity_ai on being named to the @Forbes AI 50 — redefining how the world builds, works, and communicates through AI.
We couldn't be more excited to back them as they continue to shape the h
Today we launched a major update to the OpenAI Agents SDK to help developers build and deploy long-running, durable agents in production.
You can now build your own Codex-style agents using powerful primitives for modern agents - file and computer use, skills, memory and
Created 2026-04-21 · Updated 2026-04-21
[ai_company_product_metrics_and_business_news]
Learned · 3 rejections · Active
Exclude posts that report AI company fundraising, valuation, revenue, hiring, or product launch announcements (e.g., Claude Code hitting $2.5B ARR, Anthropic hiring 454 engineers, HockeyStack raising $50M) unless the post explicitly connects the product or service to a specific healthcare operational problem or patient outcome.
Posts about AI company funding rounds, revenue milestones, hiring, and product launches without healthcare application specificity.
3 example posts
𝐇𝐨𝐜𝐤𝐞𝐲𝐒𝐭𝐚𝐜𝐤 (𝐘𝐂 𝐒𝟐𝟑) 𝐫𝐚𝐢𝐬𝐞𝐝 $𝟓𝟎𝐌 𝐭𝐨 𝐛𝐮𝐢𝐥𝐝 𝐀𝐈 𝐫𝐞𝐯𝐞𝐧𝐮𝐞 𝐚𝐠𝐞𝐧𝐭𝐬 𝐟𝐨𝐫 𝐭𝐡𝐞 𝐞𝐧𝐭𝐞𝐫𝐩𝐫𝐢𝐬𝐞. 📈
Systems of record → systems of action.
Congrats to @hockeystackHQ co-founders Buğra Gündüz, Arda Bulut, and Emir Atlı on the https://t.co/5A7An4w0MW
Congrats to @AbridgeHQ, @AnthropicAI, @cursor_ai, @elise_ai, @Fal, @WeAreLegora, and @Perplexity_ai on being named to the @Forbes AI 50 — redefining how the world builds, works, and communicates through AI.
We couldn't be more excited to back them as they continue to shape the h
Today we launched a major update to the OpenAI Agents SDK to help developers build and deploy long-running, durable agents in production.
You can now build your own Codex-style agents using powerful primitives for modern agents - file and computer use, skills, memory and
Exclude posts that focus primarily on AI company product launches, revenue figures, ARR milestones, or business metrics (e.g., Claude Code's $2.5B annualized revenue, Anthropic hiring announcements, OpenAI SDK updates) — even if the company has healthcare-adjacent products. The post must demonstrate a specific healthcare use case or clinical application, not just celebrate the AI product itself.
Posts about AI company product launches, revenue milestones, or business metrics disconnected from healthcare application.
3 example posts
Boltz-2 just got a major speed upgrade. 🚀
We’re releasing Lightning-Boltz, a local, GPU-accelerated framework free from public MSA server bottlenecks.⚡
On a single L40S, total runtime drops to 28s per input vs 89s with the rate-limited server and 298s with MMseqs-CPU.
1/5 🧵 ht
Today we launched a major update to the OpenAI Agents SDK to help developers build and deploy long-running, durable agents in production.
You can now build your own Codex-style agents using powerful primitives for modern agents - file and computer use, skills, memory and
Anthropic's CEO:
“coding is going away first, then all of software engineering."
Now, Anthropic looks to hire 454 engineers at $320k–$405k.
coding isn’t vanishing it’s becoming leverage for the few who can build, review, and ship at a completely different scale. https://t.co
Exclude posts that present clinical research outcomes, phase trial data, or mechanistic disease findings in isolation — unless they address healthcare access, delivery efficiency, cost, or systemic challenges.
Posts reporting clinical trial results, drug efficacy data, or disease biology findings without connecting to healthcare delivery, operational, or system-level implications.
3 example posts
Cirrhosis is not necessarily “end-stage” liver disease. 35% of patients achieve recompensation (recovery) when the aetiology of cirrhosis has been treated. This is increasingly more common for MASLD cirrhosis in the GLP1 era.
📸: https://t.co/dITDGcLpTt https://t.co/REo0nlD1mn
This is now published – the first win for factor XI inhibition in ischemic stroke
The reason it's so interesting is that factor XI inhibition reduces the risk of pathological clotting without increasing the risk of bleeding
The idea came from genetic evidence: humans with https
New promising phase 1 study for lung cancer @NEJM *
Zongertinib in HER2-Mutant NSCLC
-ORR 76% (tumor shrinkage in most patients)
-PFS 14.4 mo (disease control)
-Brain mets: 47% response
✅ https://t.co/jVN8TuRJcg
Exclude posts that promote AI tool launches, SDK releases, product announcements, or company features—even if the tool is healthcare-adjacent. Posts must include evidence of healthcare customer traction, clinical validation, or healthcare-specific use case results, not just feature availability.
Posts announcing AI tools, SDKs, platforms, or product launches (from companies or authors) without demonstrating healthcare customer adoption, clinical validation, or healthcare-specific problem-solving.
3 example posts
Everyone's covering agents that help you work and build. Almost nobody's covering this:
The same primitives ARE the production runtime.
The SDK is one line:
npm install @anthropic-ai/claude-agent-sdk
The CLAUDE.md that guides Claude Code in your terminal is the exact same http
Today we launched a major update to the OpenAI Agents SDK to help developers build and deploy long-running, durable agents in production.
You can now build your own Codex-style agents using powerful primitives for modern agents - file and computer use, skills, memory and
Boris Cherny created Claude Code. It hit $2.5 billion in annualized revenue in 9 months. Fastest B2B product ramp in history. Faster than ChatGPT, Slack, or Snowflake ever reached $1 billion.
Now he says coding is “solved” and IDEs will be dead by end of year. https://t.co/HI7M
Exclude posts that report drug trial results, phase study outcomes, or clinical efficacy metrics in isolation. Posts must include analysis of healthcare system implications—reimbursement, adoption barriers, operational integration, or delivery model impact.
Posts reporting clinical trial results, drug efficacy data, or research findings without connecting to healthcare delivery systems, reimbursement, access, or operational implications.
3 example posts
This is now published – the first win for factor XI inhibition in ischemic stroke
The reason it's so interesting is that factor XI inhibition reduces the risk of pathological clotting without increasing the risk of bleeding
The idea came from genetic evidence: humans with https
New promising phase 1 study for lung cancer @NEJM *
Zongertinib in HER2-Mutant NSCLC
-ORR 76% (tumor shrinkage in most patients)
-PFS 14.4 mo (disease control)
-Brain mets: 47% response
✅ https://t.co/jVN8TuRJcg
Today the first results of the very first phase 3 study of a pan-KRAS-inhibitor in metastatic pancreatic cancer dropped, which might apply to > 90% of all pancreatic cancer patients with a KRAS-mutation!
Median overall survival of 13.2 months versus 6.7 months with chemo in
Created 2026-04-20 · Updated 2026-04-20
[speculative_robotics_or_sci_fi_applications]
Learned · 3 rejections · Active
Exclude posts that describe humanoid robots, autonomous labs, or speculative AI agents in science-fiction-like contexts—solving problems that don't align with real healthcare delivery bottlenecks, or presented as novelty/future tech rather than addressing current clinical or operational pain points.
Posts about humanoid robots, autonomous labs, or speculative future technology applications without grounding in current healthcare delivery challenges.
3 example posts
Humanoid robots are moving from Silicon Valley novelty to viable business model—powered by AI and global supply chains, especially in China. But as adoption grows, so do the questions about how humans and machines will actually coexist.
More on Primer, streaming Wednesdays http
Earlier this year we paired our autonomous lab with OpenAI's GPT-5 in a closed-loop experiment: GPT-5 designed cell-free protein synthesis reactions, our RACs ran them, and the model iterated on the results. Over 36,000 experiments and six cycles later, it landed on a reaction ht
Today we’re giving an update on ramping F.03 production at BotQ
In the last 120 days, Figure scaled manufacturing 24x - from 1 robot/day to 1 robot/hour
We will manufacture 55 humanoid robots this week https://t.co/Am5Kn53mVE
Exclude posts that report healthcare fraud, medical misconduct, or policy scandals primarily as outrage or criminal reporting without analyzing systemic causes, healthcare market impact, or operational implications. Scandal reporting without systems analysis is crime/politics content, not healthcare tech analysis.
Posts about healthcare fraud, medical scandal, or policy outrage focused on moral/criminal reporting without healthcare systems, operational, or market analysis.
3 example posts
$340 MILLION in fraud targeted — in 1 WEEK.
That’s what happens when enforcement gets serious.
Luxury cars. Fake claims. Stolen benefits meant for Americans in need — now turning into prison sentences.
The hammer is dropping. We’re just getting started.https://t.co/qP0cOIypE4
🚨 As you pay your taxes this week, LOOK at what the fraudsters allegedly did with your money❗️
🔹Cosmetic procedures
🔹Breast implants
🔹Tweaks to arms and thighs
🔹Tummy tuck
🔹Purebred dogs
🔹Flights to Hawaii
🔹Flights to Disneyland
🔹Multimillion-dollar home
🔹Range https://t.c
🚨 Fraudsters literally looted $250-500 BILLION a year from taxpayers for years, now changes are being made to prevent this fraud:
- Treasury is now going after the banks
- Whistleblowers can make 30% for exposing fraud
- Auto dealers will be tracked down
END ALL THE FRAUD. https
Created 2026-04-20 · Updated 2026-04-20
[ai_model_capability_hype_tangential]
Learned · 3 rejections · Active
Exclude posts that highlight AI model capabilities (code generation speed, vulnerability detection, autonomous system performance) in non-healthcare or only tangentially healthcare contexts. The post must demonstrate a clear healthcare workflow or outcome impact, not merely a demonstration of AI capability.
Posts showcasing AI model technical capabilities (coding speed, security vulnerability detection, network infiltration) with loose or no healthcare framing.
3 example posts
Everyone's covering agents that help you work and build. Almost nobody's covering this:
The same primitives ARE the production runtime.
The SDK is one line:
npm install @anthropic-ai/claude-agent-sdk
The CLAUDE.md that guides Claude Code in your terminal is the exact same http
FT: The White House is moving to give major US agencies access to a modified Anthropic Mythos model built to hunt dangerous software flaws before attackers find them.
That makes Mythos useful for defense because a model that can find a weakness in an operating system, browser, h
AI is letting developers ship three to four times faster. It is also flooding codebases with vulnerabilities at the same rate.
Aikido Security scans 15 open-source ecosystems for malware. A year ago: 30,000 packages per day. Now: 100,000.
The attack surface is not growing https
Exclude posts announcing new AI model capabilities, releases, or technical benchmarks (e.g., coding speed 3-4x faster, agent jailbreaks, zero-day hunting) unless they explicitly demonstrate how that capability addresses a named healthcare workflow, compliance requirement, or business model. Generic tech capability announcements do not qualify.
Posts about AI model releases, capabilities (coding speed, jailbreak tests, agent autonomy), or security vulnerabilities with loose or no healthcare framing
3 example posts
Everyone's covering agents that help you work and build. Almost nobody's covering this:
The same primitives ARE the production runtime.
The SDK is one line:
npm install @anthropic-ai/claude-agent-sdk
The CLAUDE.md that guides Claude Code in your terminal is the exact same http
FT: The White House is moving to give major US agencies access to a modified Anthropic Mythos model built to hunt dangerous software flaws before attackers find them.
That makes Mythos useful for defense because a model that can find a weakness in an operating system, browser, h
AI is letting developers ship three to four times faster. It is also flooding codebases with vulnerabilities at the same rate.
Aikido Security scans 15 open-source ecosystems for malware. A year ago: 30,000 packages per day. Now: 100,000.
The attack surface is not growing https
Created 2026-04-20 · Updated 2026-04-20
[peptide_and_glp1_macro_or_personal_narrative]
Learned · 3 rejections · Active
Exclude posts about GLP-1 drugs, peptides, or obesity treatments that center on personal anecdotes (individual patient stories, side effect narratives, family histories), market speculation (pricing, franchise value), or generic mechanism reviews—unless they analyze healthcare delivery, reimbursement, or clinical evidence gaps.
Posts about GLP-1 drugs, peptides, or weight-loss medications framed as personal stories, market hype, or macro economic commentary without healthcare systems analysis
3 example posts
I never met my grandfather.
He died of pancreatic cancer when my father was just 19. Today, Yash Bindal, 33, father to 18-month-old Maya, faces the same fate.
@PopVaxIndia is using AI to make him a personalized generative medicine to extend his life.
https://t.co/O5VIXbmMGd
And there it is.
Within hours of RFK's announcement someone is already pricing out how much Hims can charge for compounds the research community has had access to for a fraction of that cost.
This is why the outcome of these PCAC meetings matters more than the announcement.
I have now received nine reports from people taking GLP-1 drugs who got the same side effect:
They no longer feel normal when they come off.
"I feel hangry again", "I started thinking about hunger and I hate it", "I have to go back to Adderall".
8/9 reports -> from women.
Created 2026-04-20 · Updated 2026-04-20
[robotics_and_speculative_sci_fi_applications]
Learned · 3 rejections · Active
Exclude posts about robots or AI agents competing in non-healthcare domains (marathons, games, sports records, cybersecurity penetration tests, space tasks) even if they claim relevance to healthcare labor. The post must describe actual healthcare work, not speculative spillover.
Posts about humanoid robots, autonomous labs, or speculative AI agents achieving non-healthcare feats (sports, marathons, network takeovers)
3 example posts
In Beijing's 2026 humanoid robot half-marathon, HONOR's Lightning completed the 21 km course in 50:26 minute.
Beat current human men's half-marathon world record of 57:20.
Last year's winner took over 2 hours 40 minutes.
Massive progress in 12 month
https://t.co/OcZJ66ebWD
Two years ago the best AI models couldn't complete beginner-level cyber tasks. One just executed a full 32-step corporate network takeover. The Bank of England is convening emergency CEO briefings.
Look at that chart. GPT-4o maxes out at 2 steps. Initial reconnaissance. It can
Yesterday, @RandDWorld featured us twice.
@ProQR turns to Ginkgo’s autonomous lab to scale AI-enabled RNA editing discovery: https://t.co/DyENAd4VdM
Ginkgo’s CEO says biotech needs its Waymo moment: https://t.co/kW27eBjAmf
Want to learn more about our partnership with ProQR? h
Exclude posts that report fraud schemes, embezzlement, fake billing, or criminal cases in healthcare as breaking news or enforcement updates — unless the post analyzes the underlying healthcare system vulnerability, reimbursement design flaw, or structural weakness that enabled the fraud.
Posts reporting healthcare fraud, billing schemes, or financial scandals as crime reporting or outrage without analyzing systemic healthcare delivery or reimbursement failures
3 example posts
$340 MILLION in fraud targeted — in 1 WEEK.
That’s what happens when enforcement gets serious.
Luxury cars. Fake claims. Stolen benefits meant for Americans in need — now turning into prison sentences.
The hammer is dropping. We’re just getting started.https://t.co/qP0cOIypE4
🚨 As you pay your taxes this week, LOOK at what the fraudsters allegedly did with your money❗️
🔹Cosmetic procedures
🔹Breast implants
🔹Tweaks to arms and thighs
🔹Tummy tuck
🔹Purebred dogs
🔹Flights to Hawaii
🔹Flights to Disneyland
🔹Multimillion-dollar home
🔹Range https://t.c
🚨 Fraudsters literally looted $250-500 BILLION a year from taxpayers for years, now changes are being made to prevent this fraud:
- Treasury is now going after the banks
- Whistleblowers can make 30% for exposing fraud
- Auto dealers will be tracked down
END ALL THE FRAUD. https
Exclude posts announcing AI company product releases, hiring milestones, revenue figures, or business metrics (e.g., $2.5B ARR, hiring engineers at $320k salary, fastest ramp in history) unless the post explicitly analyzes how that product solves a healthcare delivery problem.
Posts about AI company product launches, revenue achievements, or business metrics without demonstrating healthcare-specific application or impact
3 example posts
Everyone's covering agents that help you work and build. Almost nobody's covering this:
The same primitives ARE the production runtime.
The SDK is one line:
npm install @anthropic-ai/claude-agent-sdk
The CLAUDE.md that guides Claude Code in your terminal is the exact same http
Today we launched a major update to the OpenAI Agents SDK to help developers build and deploy long-running, durable agents in production.
You can now build your own Codex-style agents using powerful primitives for modern agents - file and computer use, skills, memory and
Anthropic's CEO:
“coding is going away first, then all of software engineering."
Now, Anthropic looks to hire 454 engineers at $320k–$405k.
coding isn’t vanishing it’s becoming leverage for the few who can build, review, and ship at a completely different scale. https://t.co
Exclude posts that report healthcare fraud cases, financial misconduct, or billing abuse scandals (e.g., cosmetic procedure fraud, prior auth scams, telehealth abuse) unless the post connects the fraud to systemic healthcare delivery failures and proposes or analyzes systemic remedies.
Posts reporting healthcare fraud, abuse, or financial scandals without analysis of systemic vulnerabilities or policy/operational solutions.
3 example posts
🚨 As you pay your taxes this week, LOOK at what the fraudsters allegedly did with your money❗️
🔹Cosmetic procedures
🔹Breast implants
🔹Tweaks to arms and thighs
🔹Tummy tuck
🔹Purebred dogs
🔹Flights to Hawaii
🔹Flights to Disneyland
🔹Multimillion-dollar home
🔹Range https://t.c
🚨 Fraudsters literally looted $250-500 BILLION a year from taxpayers for years, now changes are being made to prevent this fraud:
- Treasury is now going after the banks
- Whistleblowers can make 30% for exposing fraud
- Auto dealers will be tracked down
END ALL THE FRAUD. https
🚨 Surgeon @EithanHaim reveals shocking medical fraud scheme: Texas doctors allegedly changing teens' medical records and using fake billing codes to secretly continue banned gender treatments—scamming insurance and taxpayers. He's speaking at a #DetransAwarenessDay @genspect foru
Created 2026-04-19 · Updated 2026-04-19
[tangential_ai_capability_or_model_hype]
Learned · 3 rejections · Active
Exclude posts about AI model capabilities (agentic behavior, reasoning, hallucination, jailbreaks, safety tests) unless they explicitly connect to a healthcare-specific problem, regulatory requirement, or clinical decision-making context.
Posts about AI model capabilities, technical benchmarks, or safety concerns that lack healthcare application or systems context.
3 example posts
Market maps have become a real focus of ours as LLMs are getting company categorization so wrong.
Our latest, in partnership with Confido Health & @RMFnyc1, focuses on agentic AI for the ambulatory market. What's being deployed now?
Our focus was Series A onwards.
👇 htt
Researchers gave AI agents a simple choice: hit your performance target or follow the rules.
Most of them chose to cheat.
McGill University tested 12 of the most powerful AI models on 40 realistic workplace scenarios. Healthcare. Finance. Logistics. Scientific research. Each AI
AI is letting developers ship three to four times faster. It is also flooding codebases with vulnerabilities at the same rate.
Aikido Security scans 15 open-source ecosystems for malware. A year ago: 30,000 packages per day. Now: 100,000.
The attack surface is not growing https
Exclude posts that report clinical trial outcomes, drug efficacy percentages, or research findings (e.g., 'ORR 76% in lung cancer', 'phase 3 pancreatic cancer results', 'ECG diagnostic accuracy') unless they analyze the healthcare system implications, reimbursement barriers, implementation strategy, or access/affordability challenges.
Posts sharing clinical trial results, drug efficacy data, or research findings without connecting to healthcare delivery systems, pricing, access, or implementation challenges.
3 example posts
This is now published – the first win for factor XI inhibition in ischemic stroke
The reason it's so interesting is that factor XI inhibition reduces the risk of pathological clotting without increasing the risk of bleeding
The idea came from genetic evidence: humans with https
New promising phase 1 study for lung cancer @NEJM *
Zongertinib in HER2-Mutant NSCLC
-ORR 76% (tumor shrinkage in most patients)
-PFS 14.4 mo (disease control)
-Brain mets: 47% response
✅ https://t.co/jVN8TuRJcg
Yesterday, @RandDWorld featured us twice.
@ProQR turns to Ginkgo’s autonomous lab to scale AI-enabled RNA editing discovery: https://t.co/DyENAd4VdM
Ginkgo’s CEO says biotech needs its Waymo moment: https://t.co/kW27eBjAmf
Want to learn more about our partnership with ProQR? h
Exclude posts that discuss compute infrastructure, CPU launches, data center capacity, space-based compute, or hardware announcements (e.g., Nvidia CPUs, Elon's orbital compute, chip supply chains) unless the post explicitly ties this infrastructure to a specific healthcare delivery problem or clinical application.
Posts about AI infrastructure, compute capacity, or hardware announcements with only tangential healthcare relevance.
3 example posts
🔥 CPUs are having a moment.
#Nvidia launched a standalone CPU. #Arm made its first chip in 35 years. #Intel & #AMD are raising prices amid a supply crunch.
What's behind it: Agentic AI needs far more CPU than anyone planned for — driving a structural shift in CPU:GPU ratios tow
To put Elon's space compute vision into perspective:
1 TW of compute in orbit
That's 10 million tons to orbit each year.
That's 100,000 launches a year, almost one every 5 minutes.
In the airline business that's normal!
🚨MAJOR INTERVIEW: Jensen Huang joins the Besties!
The @nvidia CEO joins to discuss:
-- Nvidia's future, roadmap to $1T revenue
-- Physical AI's $50T market
-- Rise of the agent, OpenClaw's inflection moment
-- Inference explosion, Groq deal
-- AI PR Crisis, Anthropic's comms m
Exclude posts that simply report phase trial results, drug efficacy data, or breakthrough announcements without analyzing healthcare delivery implications, market access barriers, or systemic healthcare change. Clinical data alone without systems context does not qualify.
Posts reporting clinical trial results or drug discovery announcements without healthcare systems or market analysis context.
3 example posts
This is now published – the first win for factor XI inhibition in ischemic stroke
The reason it's so interesting is that factor XI inhibition reduces the risk of pathological clotting without increasing the risk of bleeding
The idea came from genetic evidence: humans with https
New promising phase 1 study for lung cancer @NEJM *
Zongertinib in HER2-Mutant NSCLC
-ORR 76% (tumor shrinkage in most patients)
-PFS 14.4 mo (disease control)
-Brain mets: 47% response
✅ https://t.co/jVN8TuRJcg
Yesterday, @RandDWorld featured us twice.
@ProQR turns to Ginkgo’s autonomous lab to scale AI-enabled RNA editing discovery: https://t.co/DyENAd4VdM
Ginkgo’s CEO says biotech needs its Waymo moment: https://t.co/kW27eBjAmf
Want to learn more about our partnership with ProQR? h
Created 2026-04-17 · Updated 2026-04-17
[tangential_ai_company_product_launches]
Learned · 3 rejections · Active
Exclude posts announcing new AI model releases, agent SDKs, or developer tools from AI companies (Anthropic, OpenAI, Microsoft) unless the post explicitly demonstrates how the tool solves a healthcare-specific problem or workflow. Product availability alone is insufficient.
Posts about AI company product releases and SDKs that are not specifically designed for healthcare workflows.
3 example posts
OpenAI introduced GPT-Rosalind, a frontier reasoning model specifically architected for the life sciences, focusing heavily on biology, drug discovery, and translational medicine.
Designed to accelerate the historically slow 10-to-15-year drug approval pipeline, Rosalind is http
Today we launched a major update to the OpenAI Agents SDK to help developers build and deploy long-running, durable agents in production.
You can now build your own Codex-style agents using powerful primitives for modern agents - file and computer use, skills, memory and
Microsoft is reportedly testing the integration of "OpenClaw-like" autonomous AI agents directly into its Microsoft 365 Copilot ecosystem.
Moving beyond a reactive chatbot interface, the goal is to create an "always-on" assistant that runs autonomously in the background.
These
Exclude posts that focus on compute infrastructure (GPUs, CPUs, data centers, power grids, orbital compute) or hardware announcements for AI labs, even if healthcare is mentioned tangentially. These posts must demonstrate concrete healthcare application, not just infrastructure capabilities.
Posts about AI compute infrastructure, data centers, chips, and energy requirements without healthcare application context.
3 example posts
🔥 CPUs are having a moment.
#Nvidia launched a standalone CPU. #Arm made its first chip in 35 years. #Intel & #AMD are raising prices amid a supply crunch.
What's behind it: Agentic AI needs far more CPU than anyone planned for — driving a structural shift in CPU:GPU ratios tow
To put Elon's space compute vision into perspective:
1 TW of compute in orbit
That's 10 million tons to orbit each year.
That's 100,000 launches a year, almost one every 5 minutes.
In the airline business that's normal!
🚨MAJOR INTERVIEW: Jensen Huang joins the Besties!
The @nvidia CEO joins to discuss:
-- Nvidia's future, roadmap to $1T revenue
-- Physical AI's $50T market
-- Rise of the agent, OpenClaw's inflection moment
-- Inference explosion, Groq deal
-- AI PR Crisis, Anthropic's comms m
Created 2026-04-17 · Updated 2026-04-17
[ai_infrastructure_compute_hype]
Learned · 3 rejections · Active
Exclude posts that focus on compute infrastructure, data center spending, chip manufacturing, or hardware scaling (GPUs, CPUs, space compute) even if they mention AI. These posts must demonstrate specific healthcare application, not just macro infrastructure trends.
Posts about AI compute infrastructure, data centers, and hardware scaling without healthcare application specificity.
3 example posts
🔥 CPUs are having a moment.
#Nvidia launched a standalone CPU. #Arm made its first chip in 35 years. #Intel & #AMD are raising prices amid a supply crunch.
What's behind it: Agentic AI needs far more CPU than anyone planned for — driving a structural shift in CPU:GPU ratios tow
To put Elon's space compute vision into perspective:
1 TW of compute in orbit
That's 10 million tons to orbit each year.
That's 100,000 launches a year, almost one every 5 minutes.
In the airline business that's normal!
🚨MAJOR INTERVIEW: Jensen Huang joins the Besties!
The @nvidia CEO joins to discuss:
-- Nvidia's future, roadmap to $1T revenue
-- Physical AI's $50T market
-- Rise of the agent, OpenClaw's inflection moment
-- Inference explosion, Groq deal
-- AI PR Crisis, Anthropic's comms m
Exclude posts that report healthcare fraud, billing abuse, or financial misconduct as breaking news or outrage without structural analysis of why the system failed or how it connects to broader healthcare operations/incentives.
Posts reporting healthcare fraud, billing schemes, or financial misconduct that function as news alerts rather than systems analysis.
3 example posts
🚨 As you pay your taxes this week, LOOK at what the fraudsters allegedly did with your money❗️
🔹Cosmetic procedures
🔹Breast implants
🔹Tweaks to arms and thighs
🔹Tummy tuck
🔹Purebred dogs
🔹Flights to Hawaii
🔹Flights to Disneyland
🔹Multimillion-dollar home
🔹Range https://t.c
🚨 Fraudsters literally looted $250-500 BILLION a year from taxpayers for years, now changes are being made to prevent this fraud:
- Treasury is now going after the banks
- Whistleblowers can make 30% for exposing fraud
- Auto dealers will be tracked down
END ALL THE FRAUD. https
🚨 Surgeon @EithanHaim reveals shocking medical fraud scheme: Texas doctors allegedly changing teens' medical records and using fake billing codes to secretly continue banned gender treatments—scamming insurance and taxpayers. He's speaking at a #DetransAwarenessDay @genspect foru
Created 2026-04-17 · Updated 2026-04-17
[ai_model_capability_technical_hype_tangential]
Learned · 3 rejections · Active
Exclude posts that highlight AI model technical capabilities (agent behavior, safety sandbox escapes, coding speed benchmarks, network takeovers) presented as interesting technical feats, unless the post explicitly analyzes healthcare-specific risks or applications of that capability.
Posts about AI model technical capabilities (jailbreaks, agent autonomy, coding speed) framed as security or capability news rather than healthcare-specific application.
3 example posts
Researchers gave AI agents a simple choice: hit your performance target or follow the rules.
Most of them chose to cheat.
McGill University tested 12 of the most powerful AI models on 40 realistic workplace scenarios. Healthcare. Finance. Logistics. Scientific research. Each AI
Two years ago the best AI models couldn't complete beginner-level cyber tasks. One just executed a full 32-step corporate network takeover. The Bank of England is convening emergency CEO briefings.
Look at that chart. GPT-4o maxes out at 2 steps. Initial reconnaissance. It can
Is AGI actually here…or are we watching the best marketing play in tech history? @danielnewmanUV and @patrickmoorhead break it down on The Flip.
On one side: A model escaped a safety sandbox and chained zero-days without human prompting.
On the other: The "G" in AGI means https
Created 2026-04-17 · Updated 2026-04-17
[ai_infrastructure_and_compute_hype_tangential]
Learned · 3 rejections · Active
Exclude posts that focus primarily on compute infrastructure (CPUs, data centers, chip supply chains, energy grids, orbital compute) even if they mention healthcare applications tangentially or speculatively. The post must demonstrate concrete healthcare application, not hypothetical future potential.
Posts about data center capacity, chip manufacturing, or compute infrastructure with only loose or aspirational healthcare framing.
3 example posts
🔥 CPUs are having a moment.
#Nvidia launched a standalone CPU. #Arm made its first chip in 35 years. #Intel & #AMD are raising prices amid a supply crunch.
What's behind it: Agentic AI needs far more CPU than anyone planned for — driving a structural shift in CPU:GPU ratios tow
To put Elon's space compute vision into perspective:
1 TW of compute in orbit
That's 10 million tons to orbit each year.
That's 100,000 launches a year, almost one every 5 minutes.
In the airline business that's normal!
🚨MAJOR INTERVIEW: Jensen Huang joins the Besties!
The @nvidia CEO joins to discuss:
-- Nvidia's future, roadmap to $1T revenue
-- Physical AI's $50T market
-- Rise of the agent, OpenClaw's inflection moment
-- Inference explosion, Groq deal
-- AI PR Crisis, Anthropic's comms m
Exclude posts about GLP-1 drugs, peptide therapeutics, or weight-loss medications that focus on financial markets, stock performance, personal side effects, or general economic trends—unless the post substantively analyzes healthcare access, reimbursement, manufacturing capacity, or clinical practice change.
Posts about GLP-1 drugs or peptides framed primarily through macro economics, stock performance, or personal wellness without healthcare systems or access analysis.
3 example posts
And there it is.
Within hours of RFK's announcement someone is already pricing out how much Hims can charge for compounds the research community has had access to for a fraction of that cost.
This is why the outcome of these PCAC meetings matters more than the announcement.
I have now received nine reports from people taking GLP-1 drugs who got the same side effect:
They no longer feel normal when they come off.
"I feel hangry again", "I started thinking about hunger and I hate it", "I have to go back to Adderall".
8/9 reports -> from women.
The brain is the master regulator of food intake and energy balance. A brilliant new @CellCellPress review, including the mechanism of GLP-1 drugs, by @ClemmensenC and colleagues, open-access
https://t.co/KbDf7ym288 https://t.co/DMFeS9zHfw
Exclude posts that report on healthcare fraud, insurance scams, medical misconduct, or regulatory violations as breaking news or scandal outrage without analyzing root causes, systemic healthcare business model problems, or policy implications for healthcare tech/operations.
Posts about healthcare fraud, regulatory failures, or scandals framed as outrage without substantive healthcare systems insight or policy analysis.
3 example posts
🚨 As you pay your taxes this week, LOOK at what the fraudsters allegedly did with your money❗️
🔹Cosmetic procedures
🔹Breast implants
🔹Tweaks to arms and thighs
🔹Tummy tuck
🔹Purebred dogs
🔹Flights to Hawaii
🔹Flights to Disneyland
🔹Multimillion-dollar home
🔹Range https://t.c
🚨 Fraudsters literally looted $250-500 BILLION a year from taxpayers for years, now changes are being made to prevent this fraud:
- Treasury is now going after the banks
- Whistleblowers can make 30% for exposing fraud
- Auto dealers will be tracked down
END ALL THE FRAUD. https
🚨 Surgeon @EithanHaim reveals shocking medical fraud scheme: Texas doctors allegedly changing teens' medical records and using fake billing codes to secretly continue banned gender treatments—scamming insurance and taxpayers. He's speaking at a #DetransAwarenessDay @genspect foru
Exclude posts that rely primarily on personal anecdotes, individual case reports, unverified side effect reports, or singular clinical observations presented as healthcare insights without epidemiological data, system-wide evidence, or peer-reviewed backing.
Posts sharing personal experiences, unverified claims, or isolated clinical observations without systemic healthcare analysis or evidence.
3 example posts
good news: it is a specific virus that has a good prognosis - 85%+ of full recovery.
thanks everyone who helped me; it is hard to research while immobilized, and I got some things wrong, which you helped clear up. im extremely thankful and hope i can give it back somehow
sadl
~1-2% of the patients on ward rounds has something bad going on which hasn’t been identified yet.
As the attending, one of my main duties on rounds is to spot these cases. I do a lot of this by Noticing Things.
A 🤖 iPad makes it much less likely you will Notice Things. 🤔
Indian has 0.7 active physicians per 1,000 people, America has 3.0 active physicians per 1,000 people.
You are a liar. You are not motivated by increasing patient access to care. You just want to practice in America because you can make more money.
Exclude posts that present unverified medical claims, early-stage experimental treatments, or speculative cures (e.g., 'a pill against pancreatic cancer' from a press release, GLP-1 side effect anecdotes, or unvalidated mechanism claims) without robust clinical trial evidence, peer-reviewed publication, or regulatory approval status clearly stated.
Posts about novel drugs, treatments, or medical interventions that lack peer-reviewed validation, clinical trial data, or are presented as speculative breakthroughs.
3 example posts
I never met my grandfather.
He died of pancreatic cancer when my father was just 19. Today, Yash Bindal, 33, father to 18-month-old Maya, faces the same fate.
@PopVaxIndia is using AI to make him a personalized generative medicine to extend his life.
https://t.co/O5VIXbmMGd
New promising phase 1 study for lung cancer @NEJM *
Zongertinib in HER2-Mutant NSCLC
-ORR 76% (tumor shrinkage in most patients)
-PFS 14.4 mo (disease control)
-Brain mets: 47% response
✅ https://t.co/jVN8TuRJcg
I have now received nine reports from people taking GLP-1 drugs who got the same side effect:
They no longer feel normal when they come off.
"I feel hangry again", "I started thinking about hunger and I hate it", "I have to go back to Adderall".
8/9 reports -> from women.
Exclude posts that demonstrate or discuss AI model technical capabilities (coding ability, security vulnerability detection, agent autonomy milestones, prompt engineering) where healthcare is mentioned only as context or example, not as the substantive focus of the post.
Posts highlighting AI model technical capabilities, safety tests, or competitive benchmarks that only tangentially connect to healthcare.
3 example posts
Two years ago the best AI models couldn't complete beginner-level cyber tasks. One just executed a full 32-step corporate network takeover. The Bank of England is convening emergency CEO briefings.
Look at that chart. GPT-4o maxes out at 2 steps. Initial reconnaissance. It can
Anthropic's CEO:
“coding is going away first, then all of software engineering."
Now, Anthropic looks to hire 454 engineers at $320k–$405k.
coding isn’t vanishing it’s becoming leverage for the few who can build, review, and ship at a completely different scale. https://t.co
Is AGI actually here…or are we watching the best marketing play in tech history? @danielnewmanUV and @patrickmoorhead break it down on The Flip.
On one side: A model escaped a safety sandbox and chained zero-days without human prompting.
On the other: The "G" in AGI means https
Exclude posts that amplify news, press releases, or product announcements with only surface-level commentary, questions, or celebratory reactions—unless the poster adds original analysis, criticism, or strategic insight explaining why the announcement matters for healthcare builders or healthcare systems.
Posts that retweet announcements, press releases, or other content with minimal original analysis or insight added by the poster
3 example posts
Yesterday, @RandDWorld featured us twice.
@ProQR turns to Ginkgo’s autonomous lab to scale AI-enabled RNA editing discovery: https://t.co/DyENAd4VdM
Ginkgo’s CEO says biotech needs its Waymo moment: https://t.co/kW27eBjAmf
Want to learn more about our partnership with ProQR? h
.@openloophealth expands into sleep diagnostics.
Health tech company announces new partnership Happy Sleep—bringing at‑home sleep apnea testing to patients for the first time.
Watch to hear more about its big step toward better rest and smarter care⤵️
https://t.co/ATcNkYrrpK h
good news: it is a specific virus that has a good prognosis - 85%+ of full recovery.
thanks everyone who helped me; it is hard to research while immobilized, and I got some things wrong, which you helped clear up. im extremely thankful and hope i can give it back somehow
sadl
Exclude posts that discuss compute infrastructure, data center spending, GPU capacity, energy grids, satellite compute, or hardware roadmaps without explaining concrete healthcare applications or problems those solve. Posts about Nvidia's 'AI factory,' orbital compute, or hyperscaler capex belong here unless tied to specific clinical workflows.
Posts about AI infrastructure, compute capacity, data centers, or space-based computing with only loose or no healthcare connection
3 example posts
To put Elon's space compute vision into perspective:
1 TW of compute in orbit
That's 10 million tons to orbit each year.
That's 100,000 launches a year, almost one every 5 minutes.
In the airline business that's normal!
🚨MAJOR INTERVIEW: Jensen Huang joins the Besties!
The @nvidia CEO joins to discuss:
-- Nvidia's future, roadmap to $1T revenue
-- Physical AI's $50T market
-- Rise of the agent, OpenClaw's inflection moment
-- Inference explosion, Groq deal
-- AI PR Crisis, Anthropic's comms m
Hyperscalers will spend $700 BILLION on data centers in 2026 alone.
Amazon: $200B. Google: $185B. Meta: $135B.
AI data centers now represent 70%+ of all new grid interconnection requests in the US.
The bottleneck isn't the algorithm anymore. It's the power line.
Exclude posts that report clinical trial outcomes, drug efficacy data, or mechanistic research findings (pancreatic cancer trials, immunotherapy mechanisms, protein mapping) unless they explicitly address healthcare access, implementation challenges, or system-level adoption barriers.
Posts reporting clinical trial results, research findings, or drug discoveries without analysis of healthcare delivery, access, or system-level implications.
3 example posts
Revolution Medicines shared their findings in a press release Monday that said there may soon be a pill against pancreatic cancer, a deadly disease that strikes more than 60,000 Americans every year. The company said the pill doubled survival to 13.2 months compared with standard
Today the first results of the very first phase 3 study of a pan-KRAS-inhibitor in metastatic pancreatic cancer dropped, which might apply to > 90% of all pancreatic cancer patients with a KRAS-mutation!
Median overall survival of 13.2 months versus 6.7 months with chemo in
NIH-funded researchers have uncovered a key reason why immunotherapy has largely failed in pancreatic cancer — and identified a promising strategy to overcome that resistance.
Read on to learn more about this discovery: https://t.co/BoCHpLxp5g https://t.co/3DXv4E9DOE
Exclude posts about GLP-1 drugs, obesity treatments, or peptide therapies that are primarily framed through venture capital, market economics, or behavioral/cultural angles rather than clinical efficacy, healthcare access, or delivery system impact.
Posts about GLP-1 drugs, peptides, or obesity framed through economics, investing, or non-clinical angles rather than clinical outcomes or healthcare delivery.
3 example posts
I have now received nine reports from people taking GLP-1 drugs who got the same side effect:
They no longer feel normal when they come off.
"I feel hangry again", "I started thinking about hunger and I hate it", "I have to go back to Adderall".
8/9 reports -> from women.
GLP-1 drugs are the ultimate validation of the techno-solutionist approach to society's most challenging problems.
The obesity crisis seemed liked it would just get worse and worse forever. Scolding from public health officials didn't work. Proposals to completely overhaul our f
Great work by @DanielJDrucker and team; biologically plausible mechanism of GLP1-RA benefit independent of weight loss. Excellent article by @megtirrell @CNN describing the publication. Could it justify new approaches for these drugs? I think so. https://t.co/pHudk7lkAR
Created 2026-04-16 · Updated 2026-04-16
[retweet_or_tangential_commentary_only]
Learned · 3 rejections · Active
Exclude posts that are retweets of press releases, celebratory company announcements, or one-line commentary without original analysis, healthcare systems context, or substantive take. Posts that merely celebrate a partnership or quote a keynote without explaining why it matters to healthcare builders should be rejected.
Posts that are primarily retweets, celebratory announcements, or generic commentary without substantive analysis or healthcare systems insight.
3 example posts
Yesterday, @RandDWorld featured us twice.
@ProQR turns to Ginkgo’s autonomous lab to scale AI-enabled RNA editing discovery: https://t.co/DyENAd4VdM
Ginkgo’s CEO says biotech needs its Waymo moment: https://t.co/kW27eBjAmf
Want to learn more about our partnership with ProQR? h
.@openloophealth expands into sleep diagnostics.
Health tech company announces new partnership Happy Sleep—bringing at‑home sleep apnea testing to patients for the first time.
Watch to hear more about its big step toward better rest and smarter care⤵️
https://t.co/ATcNkYrrpK h
Revolution Medicines shared their findings in a press release Monday that said there may soon be a pill against pancreatic cancer, a deadly disease that strikes more than 60,000 Americans every year. The company said the pill doubled survival to 13.2 months compared with standard
Created 2026-04-16 · Updated 2026-04-16
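The rule entries in this list share a common shape: a bracketed id, the criterion text given to the models, a one-line scope summary, three example rejected posts, a rejection count, and an active flag. A minimal sketch of how one such record might be represented and rendered into a prescreen prompt block — the field names and the rendering format are illustrative assumptions, not the actual schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExclusionRule:
    # Hypothetical record shape; field names are assumptions, not the real schema.
    rule_id: str                                   # e.g. "retweet_or_tangential_commentary_only"
    rule_text: str                                 # full criterion shown to the model
    scope: str                                     # one-line summary of what the rule covers
    examples: List[str] = field(default_factory=list)  # rejected posts kept as few-shot anchors
    rejections: int = 0                            # how many times the rule has fired
    active: bool = True

def to_prompt_block(rule: ExclusionRule) -> str:
    """Render one active rule as a text block for a prescreen prompt."""
    lines = [f"[{rule.rule_id}]", rule.rule_text]
    for ex in rule.examples[:3]:                   # the UI shows 3 example posts per rule
        lines.append(f"- example rejected post: {ex}")
    return "\n".join(lines)

rule = ExclusionRule(
    rule_id="retweet_or_tangential_commentary_only",
    rule_text="Exclude retweets of press releases without original analysis.",
    scope="Low-substance amplification posts",
    examples=["Yesterday, @RandDWorld featured us twice."],
    rejections=3,
)
print(to_prompt_block(rule))
```

Rendering each active rule this way and concatenating the blocks would let the same 530-rule set be applied uniformly by both the Haiku prescreen and the Sonnet ranking pass.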
[tangential_non_healthcare_infrastructure_hype]
Learned · 3 rejections · Active
Exclude posts about infrastructure, compute capacity, data centers, space technology, or energy/power systems that do not explicitly connect to a healthcare application or problem. General technology infrastructure announcements should not be included.
Posts about computing infrastructure, space-based compute, or data center spending that lack healthcare specificity
3 example posts
To put Elon's space compute vision into perspective:
1 TW of compute in orbit
That's 10 million tons to orbit each year.
That's 100,000 launches a year, almost one every 5 minutes.
In the airline business that's normal!
Hyperscalers will spend $700 BILLION on data centers in 2026 alone.
Amazon: $200B. Google: $185B. Meta: $135B.
AI data centers now represent 70%+ of all new grid interconnection requests in the US.
The bottleneck isn't the algorithm anymore. It's the power line.
Elon Musk: “Hold on to your Tesla stock.”
Because what’s coming isn’t just another car update—it’s an entirely new paradigm.
From Optimus humanoid robots that could one day take care of your kids, walk your dog, and support elderly parents, to CyberCab scaling into mass product
Created 2026-04-16 · Updated 2026-04-16
[retweet_or_shallow_commentary_without_analysis]
Learned · 3 rejections · Active
Exclude posts that retweet press releases, product announcements, or news headlines with only surface-level commentary (e.g., 'watch to hear more', isolated quotes, or celebratory emoji) without substantive analysis of healthcare impact or business implications.
Posts that amplify news or announcements with minimal original insight, analysis, or healthcare-specific takeaway.
3 example posts
Yesterday, @RandDWorld featured us twice.
@ProQR turns to Ginkgo’s autonomous lab to scale AI-enabled RNA editing discovery: https://t.co/DyENAd4VdM
Ginkgo’s CEO says biotech needs its Waymo moment: https://t.co/kW27eBjAmf
Want to learn more about our partnership with ProQR? h
.@openloophealth expands into sleep diagnostics.
Health tech company announces new partnership Happy Sleep—bringing at‑home sleep apnea testing to patients for the first time.
Watch to hear more about its big step toward better rest and smarter care⤵️
https://t.co/ATcNkYrrpK h
And then I want them to try to sort out one insurance issue. Just one. I want them to see the hours it takes to navigate hospital billing, specialist offices, CPT codes, pharmacy reps, compounding facilities, patient copay programs, and infusion experts. 2/3
Exclude posts about AI infrastructure scaling (data center spending, compute roadmaps, physical AI, robotics frameworks, quantum sensing) unless they include concrete healthcare application details or outcomes. Infrastructure hype without healthcare specificity should not be included.
Posts about AI infrastructure, compute capacity, data centers, or technical capability breakthroughs with only loose or speculative healthcare connection.
3 example posts
Microsoft is reportedly testing the integration of "OpenClaw-like" autonomous AI agents directly into its Microsoft 365 Copilot ecosystem.
Moving beyond a reactive chatbot interface, the goal is to create an "always-on" assistant that runs autonomously in the background.
These
🚨MAJOR INTERVIEW: Jensen Huang joins the Besties!
The @nvidia CEO joins to discuss:
-- Nvidia's future, roadmap to $1T revenue
-- Physical AI's $50T market
-- Rise of the agent, OpenClaw's inflection moment
-- Inference explosion, Groq deal
-- AI PR Crisis, Anthropic's comms m
Is AGI actually here…or are we watching the best marketing play in tech history? @danielnewmanUV and @patrickmoorhead break it down on The Flip.
On one side: A model escaped a safety sandbox and chained zero-days without human prompting.
On the other: The "G" in AGI means https
Created 2026-04-15 · Updated 2026-04-15
[unvalidated_drug_claims]
Learned · 3 rejections · Active
Exclude posts that present anecdotal reports, speculative mechanisms, or unvalidated claims about GLP-1 drugs, peptides, or experimental treatments as if they are established facts. Posts must cite peer-reviewed evidence or clinical trial results, not personal reports or inference.
Posts making speculative or anecdotal claims about GLP-1 drugs, peptides, or experimental treatments beyond published evidence
3 example posts
I have now received nine reports from people taking GLP-1 drugs who got the same side effect:
They no longer feel normal when they come off.
"I feel hangry again", "I started thinking about hunger and I hate it", "I have to go back to Adderall".
8/9 reports -> from women.
GLP-1 drugs are the ultimate validation of the techno-solutionist approach to society's most challenging problems.
The obesity crisis seemed liked it would just get worse and worse forever. Scolding from public health officials didn't work. Proposals to completely overhaul our f
Great work by @DanielJDrucker and team; biologically plausible mechanism of GLP1-RA benefit independent of weight loss. Excellent article by @megtirrell @CNN describing the publication. Could it justify new approaches for these drugs? I think so. https://t.co/pHudk7lkAR
Created 2026-04-15 · Updated 2026-04-15
[ai_infrastructure_hype]
Learned · 3 rejections · Active
Exclude posts that discuss AI infrastructure (data centers, compute spending, energy grids, foundational model capabilities, robotics platforms) unless they directly connect to a specific healthcare use case or clinical problem being solved.
Posts about AI compute, data centers, energy infrastructure, and foundational model capabilities without healthcare specificity
3 example posts
Is AGI actually here…or are we watching the best marketing play in tech history? @danielnewmanUV and @patrickmoorhead break it down on The Flip.
On one side: A model escaped a safety sandbox and chained zero-days without human prompting.
On the other: The "G" in AGI means https
Microsoft is exploring always-on AI agents for Copilot that can operate across Office apps without prompts, inspired by the viral OpenClaw project.
The move comes as Anthropic encroaches on its core turf and customers question Copilot’s value.
Read more:
Hyperscalers will spend $700 BILLION on data centers in 2026 alone.
Amazon: $200B. Google: $185B. Meta: $135B.
AI data centers now represent 70%+ of all new grid interconnection requests in the US.
The bottleneck isn't the algorithm anymore. It's the power line.
Created 2026-04-15 · Updated 2026-04-15
[personal_anecdote_or_unverified_claim]
Learned · 3 rejections · Active
Exclude posts that rely on personal anecdotes, alleged insider accounts, unverified fraud claims, or inflammatory accusations without documented evidence, official reporting, or credible source attribution.
Posts sharing personal experiences, unverified allegations, or claims without source verification or substantiation
3 example posts
good news: it is a specific virus that has a good prognosis - 85%+ of full recovery.
thanks everyone who helped me; it is hard to research while immobilized, and I got some things wrong, which you helped clear up. im extremely thankful and hope i can give it back somehow
sadl
🚨 Surgeon @EithanHaim reveals shocking medical fraud scheme: Texas doctors allegedly changing teens' medical records and using fake billing codes to secretly continue banned gender treatments—scamming insurance and taxpayers. He's speaking at a #DetransAwarenessDay @genspect foru
And then I want them to try to sort out one insurance issue. Just one. I want them to see the hours it takes to navigate hospital billing, specialist offices, CPT codes, pharmacy reps, compounding facilities, patient copay programs, and infusion experts. 2/3
Exclude posts about AI infrastructure, data centers, compute capacity, hardware advances, or robotics platforms unless they directly address a specific healthcare delivery problem or clinical workflow. Generic infrastructure announcements with loose healthcare framing do not qualify.
Posts about compute, data centers, hardware, or infrastructure that mention healthcare tangentially or lack concrete healthcare application
3 example posts
Hyperscalers will spend $700 BILLION on data centers in 2026 alone.
Amazon: $200B. Google: $185B. Meta: $135B.
AI data centers now represent 70%+ of all new grid interconnection requests in the US.
The bottleneck isn't the algorithm anymore. It's the power line.
Elon Musk: “Hold on to your Tesla stock.”
Because what’s coming isn’t just another car update—it’s an entirely new paradigm.
From Optimus humanoid robots that could one day take care of your kids, walk your dog, and support elderly parents, to CyberCab scaling into mass product
Across NVIDIA Jetson and our robotics software stack, we’re focused on making it easy for developers to turn open source innovation, like @openclaw, into deployable, real‑world autonomy on the edge.
Created 2026-04-15 · Updated 2026-04-15
[self_promotional_tool_launch_without_substance]
Learned · 3 rejections · Active
Exclude posts that announce a tool, framework, prompt, or product launch primarily to drive adoption or engagement, with minimal healthcare-specific reasoning, evidence, or analysis of why it matters for health tech stakeholders. Posts like 'I built a prompt that's badass' or 'Here's how to get 200 users from LinkedIn' belong here.
Posts promoting a tool, product, or framework launch with little substantive healthcare analysis or credibility signal.
3 example posts
Here is V2 of my company "Initiation Report" Deep Research Prompt. Serious thanks to the community for the feedback. This thing is pretty badass now.
_____
I've made several updates:
• No longer too positive: People rightfully called out that the previous model rated everything
HNSW is fast & performant. But what's it costing you?
DiskBBQ gets you great recall & speed using a fraction of the memory.
HNSW vs DiskBBQ in 40 seconds with @_jphwang https://t.co/3BqD9a6srU
The easiest way to get your first 200 users from LinkedIn:
> Set up keywords for what you sell
> OutX finds people already asking
> Reply while intent is hot
Exclude posts that cite clinical trial results, research findings, or academic discoveries in isolation without analyzing how those findings change healthcare business models, delivery systems, cost structures, or operational workflows. Posts like 'A study found that X improves outcomes' without healthcare systems context belong here.
Posts reporting clinical research findings or academic observations without healthcare business, delivery, or implementation context.
3 example posts
Today the first results of the very first phase 3 study of a pan-KRAS-inhibitor in metastatic pancreatic cancer dropped, which might apply to > 90% of all pancreatic cancer patients with a KRAS-mutation!
Median overall survival of 13.2 months versus 6.7 months with chemo in
NIH-funded researchers have uncovered a key reason why immunotherapy has largely failed in pancreatic cancer — and identified a promising strategy to overcome that resistance.
Read on to learn more about this discovery: https://t.co/BoCHpLxp5g https://t.co/3DXv4E9DOE
Across large, multicohort datasets, CardioNets achieved superior performance to ECG-only baselines and diagnostic accuracy comparable to CMR-based models, supporting its potential to expand access to advanced cardiovascular assessment. Full study results: https://t.co/VP2iOBLUev
Exclude posts that discuss AI compute infrastructure, datacenter capex, energy demand, or hardware buildout without directly connecting these trends to healthcare delivery, clinical workflows, or health tech business models. Posts about 'Amazon spending $200B on datacenters' or 'NVIDIA chips' belong here unless they analyze healthcare-specific implications.
Posts about datacenter spending, energy infrastructure, and compute capacity that lack direct healthcare application or analysis.
3 example posts
Hyperscalers will spend $700 BILLION on data centers in 2026 alone.
Amazon: $200B. Google: $185B. Meta: $135B.
AI data centers now represent 70%+ of all new grid interconnection requests in the US.
The bottleneck isn't the algorithm anymore. It's the power line.
Elon Musk: “Hold on to your Tesla stock.”
Because what’s coming isn’t just another car update—it’s an entirely new paradigm.
From Optimus humanoid robots that could one day take care of your kids, walk your dog, and support elderly parents, to CyberCab scaling into mass product
This chart puts the datacenter demands into perspective very clearly. Amazon has done more capex in the last 3 years than its entire history.
Right now most AI adoption is on chat tools that are relatively token efficient. Comparatively, coding agents, use orders of magnitude h
Created 2026-04-14 · Updated 2026-04-14
[retweet_or_tangential_commentary]
Learned · 3 rejections · Active
Exclude posts that are primarily retweets with minimal commentary, or posts that tangentially touch healthcare business/policy/tech without substantive original analysis, leaving it unclear why healthcare tech professionals should engage.
Posts that are low-effort retweets or loose commentary on healthcare-adjacent topics without original analysis or substance
3 example posts
Another week on the road meeting with a couple dozen IT and AI leaders from large enterprises across banking, media, retail, healthcare, consulting, tech, and sports, to discuss agents in the enterprise.
Some quick takeaways:
* Clear that we’re moving from chat era of AI to
Sequoia partner @gradypb says software is shifting from apps that demand attention to agents that work quietly in the background.
This shift will change what moats will look like, and will be especially hard for incumbents to deal with. "It's two very different business https://
From @Jessica_Baladad :
To my friends in pharmacy who work at CVS, if you needed your District Manager yesterday, they were in the Finance, Ways and Means Committee watching the FairRx Act (HB1959) get placed behind the budget as it moves forward in the Tennessee General https:/
Created 2026-04-14 · Updated 2026-04-14
[ai_technical_tangent_loose_healthcare_framing]
Learned · 3 rejections · Active
Exclude posts that treat healthcare as a loose contextual mention or speculative use case for AI technical advances (world models, reasoning systems, agentic behavior) without grounding in actual healthcare workflows, clinical validation, or operational implementation.
Posts about general AI technical advances (world models, reasoning, agents) that merely reference healthcare without substantive healthcare application
3 example posts
The 26 prompts running inside 𝗖𝗹𝗮𝘂𝗱𝗲 𝗖𝗼𝗱𝗲 just got open-sourced. This is literally the entire brain of a $200/month AI coding tool.
Someone reverse-engineered every prompt from the accidentally published npm source and you can now study all of them for free.
Claude Code uses 26
Claude Code is not AGI, but it is the single biggest advance in AI since the LLM.
But the thing is, Claude Code is NOT a pure LLM. And it’s not pure deep learning. Not even close.
And that changes everything.
The source code leak proves it. Tucked away at its center is a
The CEO of Google DeepMind just went on record saying he disagrees with one of the most respected AI researchers in the world.
Demis Hassabis, the man behind AlphaFold, AlphaGo, and Google's entire AI operation publicly pushed back against Yann LeCun's claim that large language
Created 2026-04-14 · Updated 2026-04-14
[retweet_commentary_and_low_substance]
Learned · 3 rejections · Active
Exclude posts that are primarily retweets, quote-tweets with minimal added commentary, citations of others' research or announcements without original analysis or healthcare-specific insight, or posts that simply amplify another author's take without substantive contribution.
Posts that are primarily retweets of others' content, shallow commentary on external reports, or non-original analysis
3 example posts
Here is V2 of my company "Initiation Report" Deep Research Prompt. Serious thanks to the community for the feedback. This thing is pretty badass now.
_____
I've made several updates:
• No longer too positive: People rightfully called out that the previous model rated everything
"Do One Thing Every Day That Scares You" - debut From The Trenches blog from Linda Bain, serial biotech executive and Venture Partner at Atlas, on pushing yourself to the uncomfortable...
https://t.co/eyXGovi3FX
In general, there are 5 kind of moats:
▪️ Intangible Assets
▪️ Switching Costs
▪️ Network Effects
▪️ Cost Advantage
▪️ Efficient Scale
I'll teach you everything you need to know in 2 minutes: https://t.co/v9w6pfJOGh
Exclude posts that promote or analyze unvalidated medical treatments, compounded or custom peptides, integrative medicine approaches, or speculative healthcare solutions without credible clinical evidence, peer-reviewed research, or FDA approval status.
Posts promoting unproven medical interventions, compounded peptides, or speculative treatments without clinical validation
3 example posts
No one has an issue with thermodynamics @BioLayne
The issue is when self-celebrating nutrition “experts” reducing obesity to the post-hoc arithmetic of calorie balance, as if bookkeeping is a biology.
Another issue is when people with massive platforms use that shallow take as
For the first time in recorded British history, 50% of women are not mothers by age 30.
Of those women, a further 50% will never become mothers.
It takes a village to raise a child, a village that no longer exists for an increasing number of people:
-Fewer siblings among the r
$NVO $LLY $xbi
As FOUNDAYO launches, a look at Oral Wegovy’s erstwhile ascendancy reveals a complicated picture.
Among Oral Wegovy’s patients:
> A large proportion had no prior evidence of GLP-1-based medications, highlighting the potential for oral formulations to expand
Created 2026-04-14 · Updated 2026-04-14
[non_healthcare_tech_and_infrastructure]
Learned · 3 rejections · Active
Exclude posts about computing infrastructure (data centers, GPUs, chips), robotics platforms, quantum technology, or broader tech industry trends that mention healthcare tangentially or speculatively rather than addressing actual healthcare implementation or clinical outcomes.
Posts about general technology infrastructure, semiconductor investments, robotics, or quantum computing with minimal healthcare substance
3 example posts
Hyperscalers will spend $700 BILLION on data centers in 2026 alone.
Amazon: $200B. Google: $185B. Meta: $135B.
AI data centers now represent 70%+ of all new grid interconnection requests in the US.
The bottleneck isn't the algorithm anymore. It's the power line.
Elon Musk: “Hold on to your Tesla stock.”
Because what’s coming isn’t just another car update—it’s an entirely new paradigm.
From Optimus humanoid robots that could one day take care of your kids, walk your dog, and support elderly parents, to CyberCab scaling into mass product
Across NVIDIA Jetson and our robotics software stack, we’re focused on making it easy for developers to turn open source innovation, like @openclaw, into deployable, real‑world autonomy on the edge.
Created 2026-04-14 · Updated 2026-04-14
[ai_technical_capability_hype_tangential]
Learned · 3 rejections · Active
Exclude posts that focus on AI model technical capabilities (code generation, reasoning, architecture), AI lab announcements, or general AI breakthroughs unless the post explicitly demonstrates how these capabilities address a specific healthcare delivery challenge or clinical workflow.
Posts about AI model capabilities, technical breakthroughs, or architecture discussions that don't connect meaningfully to healthcare delivery or systems
3 example posts
🚨 DAVID SACKS: “Anthropic has proven that it's very good at two things — One is product releases, the second is scaring people … At the same time they roll out a new model … they also roll out some study showing the worst possible implication where the technology could lead.” htt
The 26 prompts running inside 𝗖𝗹𝗮𝘂𝗱𝗲 𝗖𝗼𝗱𝗲 just got open-sourced. This is literally the entire brain of a $200/month AI coding tool.
Someone reverse-engineered every prompt from the accidentally published npm source and you can now study all of them for free.
Claude Code uses 26
Claude Code is not AGI, but it is the single biggest advance in AI since the LLM.
But the thing is, Claude Code is NOT a pure LLM. And it’s not pure deep learning. Not even close.
And that changes everything.
The source code leak proves it. Tucked away at its center is a
Created 2026-04-14 · Updated 2026-04-14
[ai_company_business_metrics_and_funding]
Learned · 3 rejections · Active
Exclude posts that primarily report financial metrics (ARR, revenue growth, valuations, funding rounds) for AI companies or tech firms, even if the company has healthcare products. The focus must be on healthcare impact or application, not business performance metrics.
Posts focused on AI company revenue, ARR growth, valuations, and funding rounds rather than healthcare applications
3 example posts
Microsoft is exploring always-on AI agents for Copilot that can operate across Office apps without prompts, inspired by the viral OpenClaw project.
The move comes as Anthropic encroaches on its core turf and customers question Copilot’s value.
Read more:
The AI labs' voracious appetite for training data has lifted a number of startups offering that data.
That includes Fleet, an RL gym startup that's grown ARR from $1m to $60m+ and is now raising at ~$750m from BCV.
https://t.co/v3CceXapH1
OpenAI dropping Agent Builder today is either going to make you rich or expose that you've been selling hot air.
I went deep analyzing what this actually means.
Here's the $4B opportunity hiding in plain sight:
The mainstream narrative: "Agent Builder democratizes AI! Anyone c
Created 2026-04-14 · Updated 2026-04-14
[unvalidated_treatment_or_drug_claims]
Learned · 3 rejections · Active
Exclude posts that promote peptides, novel drug combinations, integrative medicine, or other medical interventions that lack peer-reviewed clinical validation or are presented with speculative benefit claims. This includes posts that claim novel compounds reverse disease without substantiating clinical evidence.
Posts promoting unvalidated, speculative, or fringe medical treatments, compounds, or interventions without rigorous evidence.
3 example posts
No one has an issue with thermodynamics @BioLayne
The issue is when self-celebrating nutrition “experts” reducing obesity to the post-hoc arithmetic of calorie balance, as if bookkeeping is a biology.
Another issue is when people with massive platforms use that shallow take as
$NVO $LLY $xbi
As FOUNDAYO launches, a look at Oral Wegovy’s erstwhile ascendancy reveals a complicated picture.
Among Oral Wegovy’s patients:
> A large proportion had no prior evidence of GLP-1-based medications, highlighting the potential for oral formulations to expand
If @mochihealth is willing to mislead patients about the safety and efficacy of their products, why should anyone believe their products even contain just the API they claim?
There’s no evidence “compounded oral Semaglutide” is safe or effective
Novo’s oral formulation is
Created 2026-04-14 · Updated 2026-04-14
[self_promotional_tool_or_product_launch]
Learned · 3 rejections · Active
Exclude posts that are primarily self-promotional (launching a personal tool, company update, or newsletter prompt) or list generic software/business tactics with only loose or generic healthcare labeling, even if the author claims healthcare relevance.
Posts promoting personal tools, startups, or products with minimal substantive healthcare content or analysis
3 example posts
Here is V2 of my company "Initiation Report" Deep Research Prompt. Serious thanks to the community for the feedback. This thing is pretty badass now.
_____
I've made several updates:
• No longer too positive: People rightfully called out that the previous model rated everything
In general, there are 5 kind of moats:
▪️ Intangible Assets
▪️ Switching Costs
▪️ Network Effects
▪️ Cost Advantage
▪️ Efficient Scale
I'll teach you everything you need to know in 2 minutes: https://t.co/v9w6pfJOGh
The easiest way to get your first 200 users from LinkedIn:
> Set up keywords for what you sell
> OutX finds people already asking
> Reply while intent is hot
Created 2026-04-13 · Updated 2026-04-13
[off_topic_domain_with_healthcare_label]
Learned · 3 rejections · Active
Exclude posts where the primary subject is fertility, demographics, nutrition/thermodynamics, cryptocurrency, British social trends, or other non-healthcare domains that mention health only tangentially or as surface-level context.
Posts about non-healthcare domains (fertility, nutrition, demographics, crypto) that use healthcare framing as a superficial connection
3 example posts
No one has an issue with thermodynamics @BioLayne
The issue is when self-celebrating nutrition “experts” reducing obesity to the post-hoc arithmetic of calorie balance, as if bookkeeping is a biology.
Another issue is when people with massive platforms use that shallow take as
For the first time in recorded British history, 50% of women are not mothers by age 30.
Of those women, a further 50% will never become mothers.
It takes a village to raise a child, a village that no longer exists for an increasing number of people:
-Fewer siblings among the r
Chamath: Trump Created an Identity Crisis in the Democratic Party
@chamath on E214:
"The crazy thing about the Democrats is that they are the most sophisticated liars."
"The conventional wisdom was that the Republicans were pro-capital and Democrats were pro-labor."
"And th
Created 2026-04-13 · Updated 2026-04-13
[general_infrastructure_and_compute_hype]
Learned · 3 rejections · Active
Exclude posts about AI infrastructure spending, data center construction, energy grid demands, or hardware/compute capability announcements (e.g., $700B datacenter spending, NVIDIA compute announcements, quantum computing potential) unless they explicitly tie to a healthcare delivery or clinical problem.
Posts about AI infrastructure, data center spending, and compute requirements that lack healthcare-specific relevance
3 example posts
Hyperscalers will spend $700 BILLION on data centers in 2026 alone.
Amazon: $200B. Google: $185B. Meta: $135B.
AI data centers now represent 70%+ of all new grid interconnection requests in the US.
The bottleneck isn't the algorithm anymore. It's the power line.
Elon Musk: “Hold on to your Tesla stock.”
Because what’s coming isn’t just another car update—it’s an entirely new paradigm.
From Optimus humanoid robots that could one day take care of your kids, walk your dog, and support elderly parents, to CyberCab scaling into mass product
Across NVIDIA Jetson and our robotics software stack, we’re focused on making it easy for developers to turn open source innovation, like @openclaw, into deployable, real‑world autonomy on the edge.
Created 2026-04-13 · Updated 2026-04-13
[academic_or_clinical_observation_only]
Learned · 3 rejections · Active
Exclude posts that report a single clinical study finding, research publication, or medical observation without discussing healthcare system implications, policy impact, adoption barriers, or market/operational context relevant to healthcare stakeholders.
Posts reporting isolated clinical findings or research observations without broader healthcare systems, policy, or operational context.
3 example posts
The remarkable story of Chinese scientist Tu Youyou, who won the 2015 Nobel Prize in Physiology or Medicine for her discovery of artemisinin — a breakthrough drug that has saved millions of lives from malaria worldwide.
In the late 1960s and early 1970s, amid China's "Project ht
Across large, multicohort datasets, CardioNets achieved superior performance to ECG-only baselines and diagnostic accuracy comparable to CMR-based models, supporting its potential to expand access to advanced cardiovascular assessment. Full study results: https://t.co/VP2iOBLUev
Care professions like teaching and nursing are still more likely to attract women than men.
Surprisingly, the gender gap in these roles is often wider in countries with greater overall gender equality. A new study co-authored by @YaleSOM's Adriana L. Germano explores the reasons
Created 2026-04-13 · Updated 2026-04-13
[tangential_industry_or_domain_content]
Learned · 3 rejections · Active
Exclude posts where healthcare appears only in the matched headline but the actual post content discusses unrelated domains (retail, banking, general software dynamics, workplace culture) with no substantive healthcare-specific insight or analysis.
Posts about business, finance, or tech topics that mention healthcare in the headline but focus on unrelated domain dynamics.
3 example posts
Another week on the road meeting with a couple dozen IT and AI leaders from large enterprises across banking, media, retail, healthcare, consulting, tech, and sports, to discuss agents in the enterprise.
Some quick takeaways:
* Clear that we’re moving from chat era of AI to
Sequoia partner @gradypb says software is shifting from apps that demand attention to agents that work quietly in the background.
This shift will change what moats will look like, and will be especially hard for incumbents to deal with. "It's two very different business https://
Here is V2 of my company "Initiation Report" Deep Research Prompt. Serious thanks to the community for the feedback. This thing is pretty badass now.
_____
I've made several updates:
• No longer too positive: People rightfully called out that the previous model rated everything
Created 2026-04-13 · Updated 2026-04-13
[ai_technical_capability_tangent]
Learned · 3 rejections · Active
Exclude posts that focus on AI technical achievements (Claude capabilities, LLM advances, world models, agentic AI) where healthcare is mentioned only as a possible future use case or vague example, rather than demonstrated application or analysis.
Posts about AI model breakthroughs, architecture, or technical capabilities with only loose or speculative healthcare framing.
3 example posts
The 26 prompts running inside 𝗖𝗹𝗮𝘂𝗱𝗲 𝗖𝗼𝗱𝗲 just got open-sourced. This is literally the entire brain of a $200/month AI coding tool.
Someone reverse-engineered every prompt from the accidentally published npm source and you can now study all of them for free.
Claude Code uses 26
OpenAI dropping Agent Builder today is either going to make you rich or expose that you've been selling hot air.
I went deep analyzing what this actually means.
Here's the $4B opportunity hiding in plain sight:
The mainstream narrative: "Agent Builder democratizes AI! Anyone c
Claude Code is not AGI, but it is the single biggest advance in AI since the LLM.
But the thing is, Claude Code is NOT a pure LLM. And it’s not pure deep learning. Not even close.
And that changes everything.
The source code leak proves it. Tucked away at its center is a
Exclude posts that simply report clinical trial outcomes, epidemiological findings, or medical research results without discussing their impact on healthcare systems, provider economics, patient access, or technology adoption.
Posts sharing clinical research findings, trial results, or medical observations without connecting them to healthcare delivery, policy, or business implications.
3 example posts
Across large, multicohort datasets, CardioNets achieved superior performance to ECG-only baselines and diagnostic accuracy comparable to CMR-based models, supporting its potential to expand access to advanced cardiovascular assessment. Full study results: https://t.co/VP2iOBLUev
Care professions like teaching and nursing are still more likely to attract women than men.
Surprisingly, the gender gap in these roles is often wider in countries with greater overall gender equality. A new study co-authored by @YaleSOM's Adriana L. Germano explores the reasons
Shingles vaccination reduced major adverse cardiac events and secondary cardiovascular outcomes for patients with atherosclerotic cardiovascular disease, according to new research.
The research, presented at the American College of Cardiology (ACC) Scientific Session 2026, comes
Created 2026-04-13 · Updated 2026-04-13
[ai_capability_technical_tangent]
Learned · 3 rejections · Active
Exclude posts that lead with AI technical capabilities (Claude Code features, LLM benchmarks, reasoning advances, world models) and merely use healthcare as a passing example or framing device, rather than analyzing a specific healthcare problem or system the technology addresses.
Posts about AI model technical advances, benchmarks, or capabilities that are tangentially healthcare-related or use healthcare as an example without substantive healthcare analysis.
3 example posts
The 26 prompts running inside 𝗖𝗹𝗮𝘂𝗱𝗲 𝗖𝗼𝗱𝗲 just got open-sourced. This is literally the entire brain of a $200/month AI coding tool.
Someone reverse-engineered every prompt from the accidentally published npm source and you can now study all of them for free.
Claude Code uses 26
OpenAI dropping Agent Builder today is either going to make you rich or expose that you've been selling hot air.
I went deep analyzing what this actually means.
Here's the $4B opportunity hiding in plain sight:
The mainstream narrative: "Agent Builder democratizes AI! Anyone c
Claude Code is not AGI, but it is the single biggest advance in AI since the LLM.
But the thing is, Claude Code is NOT a pure LLM. And it’s not pure deep learning. Not even close.
And that changes everything.
The source code leak proves it. Tucked away at its center is a
Created 2026-04-13 · Updated 2026-04-13
[general_ai_infrastructure_tangent]
Learned · 3 rejections · Active
Exclude posts focused on data center capex, chip manufacturing, AI hardware paradigms, or general compute trends that only tangentially mention healthcare or lack any healthcare-specific application or analysis.
Posts about AI infrastructure, compute, and hardware trends that mention healthcare tangentially or not at all
3 example posts
Hyperscalers will spend $700 BILLION on data centers in 2026 alone.
Amazon: $200B. Google: $185B. Meta: $135B.
AI data centers now represent 70%+ of all new grid interconnection requests in the US.
The bottleneck isn't the algorithm anymore. It's the power line.
Elon Musk: “Hold on to your Tesla stock.”
Because what’s coming isn’t just another car update—it’s an entirely new paradigm.
From Optimus humanoid robots that could one day take care of your kids, walk your dog, and support elderly parents, to CyberCab scaling into mass product
Across NVIDIA Jetson and our robotics software stack, we’re focused on making it easy for developers to turn open source innovation, like @openclaw, into deployable, real‑world autonomy on the edge.
Created 2026-04-13 · Updated 2026-04-13
[off_topic_or_tangential_framing]
Learned · 3 rejections · Active
Exclude posts whose primary subject is non-healthcare (demographics, politics, biography, international conflict) and merely reference healthcare as background or tangential context. The post must center on a healthcare tech or policy question, not use healthcare as a rhetorical frame for other topics.
Posts on completely unrelated topics (demographics, identity politics, biography) that use healthcare as loose framing or context
3 example posts
Care professions like teaching and nursing are still more likely to attract women than men.
Surprisingly, the gender gap in these roles is often wider in countries with greater overall gender equality. A new study co-authored by @YaleSOM's Adriana L. Germano explores the reasons
For the first time in recorded British history, 50% of women are not mothers by age 30.
Of those women, a further 50% will never become mothers.
It takes a village to raise a child, a village that no longer exists for an increasing number of people:
-Fewer siblings among the r
Chamath: Trump Created an Identity Crisis in the Democratic Party
@chamath on E214:
"The crazy thing about the Democrats is that they are the most sophisticated liars."
"The conventional wisdom was that the Republicans were pro-capital and Democrats were pro-labor."
"And th
Created 2026-04-13 · Updated 2026-04-13
[academic_research_or_clinical_observation_only]
Learned · 3 rejections · Active
Exclude posts that cite clinical studies, medical observations, or research results (e.g., 'study shows X reduces Y events', 'women with condition Z have more risk') without discussing healthcare delivery, policy impact, or systemic implications. Clinical data alone does not constitute healthcare tech analysis.
Posts that report clinical research findings or medical observations without connecting to healthcare systems, policy, or business implications
3 example posts
Shingles vaccination reduced major adverse cardiac events and secondary cardiovascular outcomes for patients with atherosclerotic cardiovascular disease, according to new research.
The research, presented at the American College of Cardiology (ACC) Scientific Session 2026, comes
As a medical school professor, I now believe the biggest mistake in Alzheimer's research was ignoring metabolism.
A comprehensive Frontiers in Neurology review makes the case clear: mitochondrial dysfunction and metabolic failure happen YEARS before amyloid plaques or memory htt
Thank you, Governor @KellyAyotte, for protecting patient safety with your veto of HB 349 and to the New Hampshire lawmakers who voted today to sustain the veto. Applause to @NHMedSociety for leading the effort to keep eye surgery in the hands of highly trained physicians. AMA is
Created 2026-04-13 · Updated 2026-04-13
[generalist_macro_economics_or_infrastructure]
Learned · 3 rejections · Active
Exclude posts about general technology infrastructure (datacenter capex, hardware vs. software economics, semiconductor supply chains) or macroeconomic trends that mention healthcare only as an example sector, not as the primary analysis focus.
Posts about general economic trends, datacenter infrastructure, or hardware/software business dynamics that lack specific healthcare framing
3 example posts
Quantum computers are still on the drawing board, but quantum sensing is here now—and this technology can transform not just industry but America's security picture. Read a new Defining Ideas article by Dr. Vivek Lall and Haibo Huang: https://t.co/UeEjZWIO27
Sequoia partner @gradypb says software is shifting from apps that demand attention to agents that work quietly in the background.
This shift will change what moats will look like, and will be especially hard for incumbents to deal with. "It's two very different business https://
In general, there are 5 kind of moats:
▪️ Intangible Assets
▪️ Switching Costs
▪️ Network Effects
▪️ Cost Advantage
▪️ Efficient Scale
I'll teach you everything you need to know in 2 minutes: https://t.co/v9w6pfJOGh
Created 2026-04-13 · Updated 2026-04-13
[ai_model_technical_capabilities_tangent]
Learned · 3 rejections · Active
Exclude posts that focus primarily on AI model technical capabilities, research advances (e.g., Claude Code capabilities, world models, robotics demonstrations) or AI safety discussions without clear connection to healthcare practice or outcomes.
Posts about AI model technical breakthroughs, capabilities, or research that lack direct healthcare application
3 example posts
Claude Code is not AGI, but it is the single biggest advance in AI since the LLM.
But the thing is, Claude Code is NOT a pure LLM. And it’s not pure deep learning. Not even close.
And that changes everything.
The source code leak proves it. Tucked away at its center is a
MiroFish is probably the craziest thing I've ever seen in AI (after world models, maybe?).
Wanted to do a research for the 'Netanyahu out by...?' Polymarket:
> asked Claude to create a report 'Try to gather as much information about Netanyahu, about why he would be out, what act
The general public: "AI is overhyped, it still can't count the Rs in strawberry!"
Meanwhile, Claude Mythos Preview during a safety test:
Escaped its sandbox, gained broad internet access, emailed the researcher running the evaluation, then posted details of its exploit to http
Created 2026-04-12 · Updated 2026-04-12
[clinical_observation_without_systems_insight]
Learned · 3 rejections · Active
Exclude posts that report clinical research, medical case observations, or patient outcomes without analyzing healthcare systems, technology implications, or operational/business context. Pure clinical findings must have a healthcare tech or systems angle to fit the writer's scope.
Posts sharing clinical research findings or medical observations without connecting to healthcare systems, technology, or broader patterns
3 example posts
As a medical school professor, I now believe the biggest mistake in Alzheimer's research was ignoring metabolism.
A comprehensive Frontiers in Neurology review makes the case clear: mitochondrial dysfunction and metabolic failure happen YEARS before amyloid plaques or memory htt
Today I'm going to tell you about the real reasons for accelerated brain aging: this paper, published in April 2026, that everyone is citing but almost no one is explaining. https://t.co/07eWbX61vL
🚨New in @JACCJournals
Can #CCTA stratify heart failure risk through radiomic phenotyping of epicardial adiposity?
We derived & externally validated a deep radiomic signature of epicardial fat in >70,000 adults.
w/ @Charis_Oxford on behalf of the #ORFAN consortium
🧵👇 h
Created 2026-04-12 · Updated 2026-04-12
[ai_model_technical_tangent]
Learned · 3 rejections · Active
Exclude posts that focus on AI model technical capabilities, system architecture, or safety features (Claude Code features, world models, agent behavior) without clear, specific healthcare application or analysis. Tangential healthcare mentions don't qualify.
Posts about AI model architecture, capabilities, and technical details with weak or no healthcare connection
3 example posts
Claude Code is not AGI, but it is the single biggest advance in AI since the LLM.
But the thing is, Claude Code is NOT a pure LLM. And it’s not pure deep learning. Not even close.
And that changes everything.
The source code leak proves it. Tucked away at its center is a
MiroFish is probably the craziest thing I've ever seen in AI (after world models, maybe?).
Wanted to do a research for the 'Netanyahu out by...?' Polymarket:
> asked Claude to create a report 'Try to gather as much information about Netanyahu, about why he would be out, what act
When AI aggregators update too quickly, there is no positive-measure set of training weights that improves learning access, whereas such weights exist when updating is slow, from @DAcemogluMIT, Tianyi Lin, Asuman Ozdaglar, and James Siderius https://t.co/fO2ZpUOWe3 https://t.co/c
Created 2026-04-12 · Updated 2026-04-12
[tangential_non_healthcare_tech]
Learned · 3 rejections · Active
Exclude posts where the primary subject is non-healthcare technology (datacenter infrastructure, space computing, robotics, reverse engineering), even if the author works in healthcare or mentions healthcare tangentially. The post must center on healthcare problems, not tech infrastructure.
Posts about general technology, infrastructure, or space tech that use loose healthcare framing but are fundamentally about non-healthcare domains.
3 example posts
This chart puts the datacenter demands into perspective very clearly. Amazon has done more capex in the last 3 years than its entire history.
Right now most AI adoption is on chat tools that are relatively token efficient. Comparatively, coding agents, use orders of magnitude h
Building data centers in space is highly complicated and very expensive. It requires designing completely new tech to deal with the galactic radiation bombardment and extreme heat and cold among other things. And when they have all this figured out, the compute power is much less
Exciting updates on Project GR00T! We discover a systematic way to scale up robot data, tackling the most painful pain point in robotics. The idea is simple: human collects demonstration on a real robot, and we multiply that data 1000x or more in simulation. Let’s break it down:
Created 2026-04-12 · Updated 2026-04-12
[personal_anecdote_or_lifestyle_framing]
Learned · 3 rejections · Active
Exclude posts that use personal anecdotes (patient recovery stories, individual career narratives, lifestyle achievements) as the primary evidence for healthcare claims, unless embedded in peer-reviewed research or systematic analysis.
Posts about personal health experiences, career milestones, or lifestyle changes presented as healthcare insights.
3 example posts
Mine got LASIK, as had many of the nurses.
A lot of ophthalmologists have.
There's a weird delusion that the profession is all afraid of it, but there's no basis for that belief beyond fearmongering. https://t.co/yVBDfNDiB5
New @JAMANetwork paper out from our team here at UCLA Health/WLA VA and @samirguptaGI's team at UCSD/San Diego VA!
In this first study from a multi-part research project, our teams are trying to understand what age your medical doctors and the colorectal cancer prevention https:
april 6th 2022 vs april 6th 2026
this was my final day of intensive outpatient for my last mental health crisis. four years later and i have two degrees, a job, and i'm saving up to move to the city. i'm really proud of myself guys 😭 https://t.co/U9InCjCjEr
Created 2026-04-12 · Updated 2026-04-12
[non_healthcare_domain_tangential_framing]
Learned · 3 rejections · Active
Exclude posts about non-healthcare domains (space data centers, robotics engineering, general mathematics, video game history) that include passing healthcare references or are shared by healthcare-adjacent accounts but have no meaningful healthcare application or analysis.
Posts about space infrastructure, robotics, mathematics, or general tech topics that use healthcare-adjacent language but lack substantive healthcare relevance.
3 example posts
Building data centers in space is highly complicated and very expensive. It requires designing completely new tech to deal with the galactic radiation bombardment and extreme heat and cold among other things. And when they have all this figured out, the compute power is much less
Exciting updates on Project GR00T! We discover a systematic way to scale up robot data, tackling the most painful pain point in robotics. The idea is simple: human collects demonstration on a real robot, and we multiply that data 1000x or more in simulation. Let’s break it down:
Imagine you're John Carmack
you're 22 years old and you just wrote a 3D engine in assembly that runs at 35fps on a 486
Doom drops. Quake drops. Half the planet is playing your code.
you're the reason GPUs exist. you're the reason your friend Jensen has a yacht today.
then in
Created 2026-04-12 · Updated 2026-04-12
[technical_ai_capability_tangent]
Learned · 3 rejections · Active
Exclude posts focused on AI technical capabilities (model behavior, security, code-level optimization) where healthcare is mentioned only as a tangential application or example. The post must center on healthcare problems or systems, not general AI engineering.
Posts about AI model technical capabilities (sandboxing, reverse-engineering, memory optimization) that mention healthcare only incidentally or use healthcare as an example.
3 example posts
MiroFish is probably the craziest thing I've ever seen in AI (after world models, maybe?).
Wanted to do a research for the 'Netanyahu out by...?' Polymarket:
> asked Claude to create a report 'Try to gather as much information about Netanyahu, about why he would be out, what act
The general public: "AI is overhyped, it still can't count the Rs in strawberry!"
Meanwhile, Claude Mythos Preview during a safety test:
Escaped its sandbox, gained broad internet access, emailed the researcher running the evaluation, then posted details of its exploit to http
Anthropic released a system card today for Claude Mythos Preview, a model they're not publicly releasing. Some notes:
- They've had model available internally since Feb 24, 2026
- Not releasing due to offensive cyber capabilities of model. Autonomously found zero-day vulns in ht
Created 2026-04-12 · Updated 2026-04-12
[political_policy_without_healthcare_substance]
Learned · 3 rejections · Active
Exclude posts that use healthcare as a framing device for broader political or budgetary arguments (e.g., DOGE savings, tax policy, government efficiency) without analyzing actual healthcare delivery, clinical outcomes, or healthcare business models.
Posts about government policy (DOGE, tax cuts, reconciliation) that tangentially mention healthcare spending or efficiency but lack substantive healthcare analysis.
3 example posts
Through SIMPLE solutions, we will be saving the American people $3.9 TRILLION over the next 10 years.🔥
By eliminating self-attestation, streamlining processes, updating technology, & more TRILLIONS will be going back into the pockets of Americans.💵
These are the savings we can
🇺🇸 DOGE SUBCOMMITTEE: $3.9 TRILLION IN SAVINGS—IF CONGRESS WAKES UP
The DOGE Subcommittee says the U.S. could save $3.9 TRILLION over 10 years by doing what any business already does—verifying identities, ditching self-certification, and cracking down on fraud.
Just front-end I
@charliekirk11 It wasn’t easy calling out 50 lies in one tweet, Charlie, but hell, someone’s gotta do it. Let’s go:
1. No taxes on tips? Temporary till 2028. After that, back to normal.
2. Trump tax cuts permanent? For the rich, yes. For working folks? Temporary.
3. Child tax
Created 2026-04-12 · Updated 2026-04-12
[tangential_tech_or_space_infrastructure]
Learned · 3 rejections · Active
Exclude posts about robotics, space-based data centers, software engineering techniques, 3D engines, or IT infrastructure unless they explicitly address a healthcare problem or clinical workflow application.
Posts about general tech infrastructure, robotics, space computing, or software engineering with loose or absent healthcare relevance
3 example posts
Building data centers in space is highly complicated and very expensive. It requires designing completely new tech to deal with the galactic radiation bombardment and extreme heat and cold among other things. And when they have all this figured out, the compute power is much less
Exciting updates on Project GR00T! We discover a systematic way to scale up robot data, tackling the most painful pain point in robotics. The idea is simple: human collects demonstration on a real robot, and we multiply that data 1000x or more in simulation. Let’s break it down:
Imagine you're John Carmack
you're 22 years old and you just wrote a 3D engine in assembly that runs at 35fps on a 486
Doom drops. Quake drops. Half the planet is playing your code.
you're the reason GPUs exist. you're the reason your friend Jensen has a yacht today.
then in
Created 2026-04-12 · Updated 2026-04-12
[non_healthcare_political_policy]
Learned · 3 rejections · Active
Exclude posts about political policy, government budgets, or policy figures (DOGE, tax policy, congressional reconciliation) that invoke healthcare only as a framing device or example, rather than substantively analyzing healthcare system impact.
Posts about government policy, budget cuts, or political figures with only tangential or framing-based healthcare connection
3 example posts
Through SIMPLE solutions, we will be saving the American people $3.9 TRILLION over the next 10 years.🔥
By eliminating self-attestation, streamlining processes, updating technology, & more TRILLIONS will be going back into the pockets of Americans.💵
These are the savings we can
🇺🇸 DOGE SUBCOMMITTEE: $3.9 TRILLION IN SAVINGS—IF CONGRESS WAKES UP
The DOGE Subcommittee says the U.S. could save $3.9 TRILLION over 10 years by doing what any business already does—verifying identities, ditching self-certification, and cracking down on fraud.
Just front-end I
@charliekirk11 It wasn’t easy calling out 50 lies in one tweet, Charlie, but hell, someone’s gotta do it. Let’s go:
1. No taxes on tips? Temporary till 2028. After that, back to normal.
2. Trump tax cuts permanent? For the rich, yes. For working folks? Temporary.
3. Child tax
Created 2026-04-12 · Updated 2026-04-12
[ai_model_technical_deep_dive]
Learned · 3 rejections · Active
Exclude posts focused on AI model technical capabilities, safety testing, sandboxing escapes, code reverse-engineering, or architecture details (Claude, LLMs, world models) unless they explicitly connect to a healthcare use case or clinical application.
Posts about AI model capabilities, training, security testing, or reverse engineering without healthcare context
3 example posts
MiroFish is probably the craziest thing I've ever seen in AI (after world models, maybe?).
Wanted to do a research for the 'Netanyahu out by...?' Polymarket:
> asked Claude to create a report 'Try to gather as much information about Netanyahu, about why he would be out, what act
The general public: "AI is overhyped, it still can't count the Rs in strawberry!"
Meanwhile, Claude Mythos Preview during a safety test:
Escaped its sandbox, gained broad internet access, emailed the researcher running the evaluation, then posted details of its exploit to http
Anthropic released a system card today for Claude Mythos Preview, a model they're not publicly releasing. Some notes:
- They've had model available internally since Feb 24, 2026
- Not releasing due to offensive cyber capabilities of model. Autonomously found zero-day vulns in ht
Created 2026-04-12 · Updated 2026-04-12
[geopolitical_and_conflict_framing]
Learned · 3 rejections · Active
Exclude posts that use healthcare (medical records, hospital closures, physician casualties) as a vehicle to discuss geopolitical conflict, military actions, or political persecution. The healthcare element must be the primary analytical focus, not a data point in a broader conflict narrative.
Posts that frame healthcare or medical topics primarily through geopolitical conflict, military/defense, or political oppression narratives.
3 example posts
Vanderbilt University Medical Center @VUMChealth has had to turn over #trans patient medical records to the Tennessee Attorney General's office due to an investigation into medical billing fraud. Trans procedures is a growing multi-billion dollar industry.https://t.co/8LIpKkOIMa
ISrael's systematic liquidation of the medical leadership in Gaza:
Israeli Airstrike Kills Renowned Cardiologist in Gaza
Dr Marwan Al‑Sultan, the Director of the Indonesian Hospital in northern Gaza, was killed today in an Israeli airstrike. According to his surviving daughter, L
There are two clocks running in the Hormuz crisis. One belongs to the insurance industry. The other belongs to biology. They cannot be reconciled. And that irreconcilability is the single most important fact in the global food system right now.
The insurance clock: P&I clubs can
Created 2026-04-12 · Updated 2026-04-12
[fringe_unvalidated_medical_interventions]
Learned · 3 rejections · Active
Exclude posts that promote untested peptides, experimental interventions, or 'biohacking' protocols without rigorous clinical evidence, peer-reviewed publication, or regulatory clearance. Posts claiming dramatic results from animal studies or anecdotal use without human trial data should be rejected.
Posts promoting unproven, speculative, or fringe medical treatments lacking clinical validation or peer-reviewed evidence.
3 example posts
april 6th 2022 vs april 6th 2026
this was my final day of intensive outpatient for my last mental health crisis. four years later and i have two degrees, a job, and i'm saving up to move to the city. i'm really proud of myself guys 😭 https://t.co/U9InCjCjEr
Building data centers in space is highly complicated and very expensive. It requires designing completely new tech to deal with the galactic radiation bombardment and extreme heat and cold among other things. And when they have all this figured out, the compute power is much less
@ronbrachman The basic idea of world models is very old.
Optimal control folks were using model-based planning in the 1960s (using the "adjoint state" methods, which deep learning people would now call "backprop through time").
But the real question is what you do with this idea
Created 2026-04-12 · Updated 2026-04-12
[general_ai_technical_capabilities]
Learned · 3 rejections · Active
Exclude posts focused on AI model technical capabilities (sandbox escapes, reverse engineering, code leaks, world models, robotics scaling) where healthcare is mentioned only as context or example. The post must address how AI capabilities specifically solve healthcare problems, not general AI advancement.
Posts about general AI model capabilities, AI safety, or AI research that are tangentially connected to healthcare at best.
3 example posts
MiroFish is probably the craziest thing I've ever seen in AI (after world models, maybe?).
Wanted to do a research for the 'Netanyahu out by...?' Polymarket:
> asked Claude to create a report 'Try to gather as much information about Netanyahu, about why he would be out, what act
NEW: Foundation models pose unprecedented privacy risks — from scraping your personal data for training to regurgitating it in outputs. Yet the public currently depends almost entirely on developers to self-police. Our latest issue brief examines what governance mechanisms https:
The general public: "AI is overhyped, it still can't count the Rs in strawberry!"
Meanwhile, Claude Mythos Preview during a safety test:
Escaped its sandbox, gained broad internet access, emailed the researcher running the evaluation, then posted details of its exploit to http
Created 2026-04-12 · Updated 2026-04-12
[political_policy_without_healthcare_analysis]
Learned · 3 rejections · Active
Exclude posts that center on DOGE initiatives, government cost-cutting announcements, political figures' statements, or regulatory vetoes where the healthcare angle is secondary to political messaging or partisan framing. The post must analyze healthcare system mechanics, not use healthcare as a vehicle for political commentary.
Posts about government policy, political figures, or regulatory bodies framed around political narratives rather than substantive healthcare system analysis.
3 example posts
Through SIMPLE solutions, we will be saving the American people $3.9 TRILLION over the next 10 years.🔥
By eliminating self-attestation, streamlining processes, updating technology, & more TRILLIONS will be going back into the pockets of Americans.💵
These are the savings we can
🇺🇸 DOGE SUBCOMMITTEE: $3.9 TRILLION IN SAVINGS—IF CONGRESS WAKES UP
The DOGE Subcommittee says the U.S. could save $3.9 TRILLION over 10 years by doing what any business already does—verifying identities, ditching self-certification, and cracking down on fraud.
Just front-end I
@charliekirk11 It wasn’t easy calling out 50 lies in one tweet, Charlie, but hell, someone’s gotta do it. Let’s go:
1. No taxes on tips? Temporary till 2028. After that, back to normal.
2. Trump tax cuts permanent? For the rich, yes. For working folks? Temporary.
3. Child tax
Created 2026-04-12 · Updated 2026-04-12
[non_healthcare_domain_with_loose_healthcare_tag]
Learned · 3 rejections · Active
Exclude posts about robotics, space data centers, pure mathematics, LLM architecture, or other technical domains where the healthcare connection is superficial or aspirational rather than demonstrating actual healthcare application or impact.
Posts about space infrastructure, robotics, mathematics, or other technical domains that are minimally related to healthcare despite matching a healthcare article
3 example posts
Building data centers in space is highly complicated and very expensive. It requires designing completely new tech to deal with the galactic radiation bombardment and extreme heat and cold among other things. And when they have all this figured out, the compute power is much less
Exciting updates on Project GR00T! We discover a systematic way to scale up robot data, tackling the most painful pain point in robotics. The idea is simple: human collects demonstration on a real robot, and we multiply that data 1000x or more in simulation. Let’s break it down:
Imagine you're John Carmack
you're 22 years old and you just wrote a 3D engine in assembly that runs at 35fps on a 486
Doom drops. Quake drops. Half the planet is playing your code.
you're the reason GPUs exist. you're the reason your friend Jensen has a yacht today.
then in
Created 2026-04-11 · Updated 2026-04-11
[ai_model_technical_capabilities_tangential]
Learned · 3 rejections · Active
Exclude posts that discuss AI model technical achievements, safety vulnerabilities, sandboxing escapes, or reverse-engineering efforts—even if loosely matched to healthcare articles. Focus must be on healthcare problem-solving, not AI capability demonstrations.
Posts about AI model capabilities, safety testing, or reverse-engineering that are tangentially framed as healthcare but lack concrete healthcare relevance
3 example posts
MiroFish is probably the craziest thing I've ever seen in AI (after world models, maybe?).
Wanted to do a research for the 'Netanyahu out by...?' Polymarket:
> asked Claude to create a report 'Try to gather as much information about Netanyahu, about why he would be out, what act
The general public: "AI is overhyped, it still can't count the Rs in strawberry!"
Meanwhile, Claude Mythos Preview during a safety test:
Escaped its sandbox, gained broad internet access, emailed the researcher running the evaluation, then posted details of its exploit to http
Anthropic released a system card today for Claude Mythos Preview, a model they're not publicly releasing. Some notes:
- They've had model available internally since Feb 24, 2026
- Not releasing due to offensive cyber capabilities of model. Autonomously found zero-day vulns in ht
Created 2026-04-11 · Updated 2026-04-11
[ai_company_metrics_and_fundraising]
Learned · 3 rejections · Active
Exclude posts that focus on AI company fundraising amounts, valuations, revenue metrics, or financial performance (e.g., 'raised $1.03bn', 'Series A at $725M', 'cash flow return on investment'). These are business news about AI companies, not healthcare applications.
Posts about AI company valuations, funding rounds, and business metrics unrelated to healthcare application
3 example posts
BREAKING: Yann LeCun has raised a HUGE $1.03bn round for his new startup, in Europe's largest seed EVER!
It values Advanced Machine Intelligence (AMI) at a WHOPPING $3.5bn!
The company, based in Paris, is aiming to develop the next generation of AI models that go beyond LLMs.
Fleet closed an unannounced $45M Series A at $725M.
Led by insiders Sequoia, Bain, Menlo, SVA.
RR grew from $1M 6 months ago → $63M RR now → $160M next Q.
Congrats @fleet_ai!
Nvidia’s 73% cash flow return on investment puts it in the top 0.1% of companies, driving a valuation far above today’s levels in UBS HOLT’s model. Meanwhile, stock-based compensation and slowing growth make many software names look expensive.
Full analysis:
Created 2026-04-11 · Updated 2026-04-11
[geopolitical_military_defense]
Learned · 3 rejections · Active
Reject posts about military conflicts, defense policy, geopolitical tensions, or international trade disputes without direct, substantive connection to healthcare markets or policy.
Military and geopolitical content excluded
3 example posts
ISrael's systematic liquidation of the medical leadership in Gaza:
Israeli Airstrike Kills Renowned Cardiologist in Gaza
Dr Marwan Al‑Sultan, the Director of the Indonesian Hospital in northern Gaza, was killed today in an Israeli airstrike. According to his surviving daughter, L
Catastrophe for UK competitiveness and AI ambitions.
Britain now has the highest industrial electricity prices in the developed world. At 25p per kilowatt-hour, its power costs stand at double the EU average and quadruple those of the US (6p) and China (7p).
But this isn’t
BREAKING: The IRGC just published a target list on Tasnim: Google. Microsoft. Palantir. IBM. Nvidia. Oracle. Amazon. Every US technology company with infrastructure in the Gulf is now a declared military objective of 31 autonomous commanders who need no permission to strike and a
Exclude posts about humanoid robot production scaling, general robotics manufacturing milestones, or speculative future applications unless the post addresses current validated use in clinical settings, patient care delivery, or specific healthcare workflow automation.
Posts about humanoid robots, general manufacturing automation, or speculative future applications with minimal current healthcare relevance.
1 example post
Today we’re giving an update on ramping F.03 production at BotQ
In the last 120 days, Figure scaled manufacturing 24x - from 1 robot/day to 1 robot/hour
We will manufacture 55 humanoid robots this week https://t.co/Am5Kn53mVE
Exclude posts that take a strong stance on a health claim or drug narrative (e.g., 'GLP-1s improve cardiovascular outcomes,' 'the eating disorders narrative is wrong,' 'protein absorption limits don't exist') without citing rigorous evidence, healthcare policy implications, or system-level analysis that would affect healthcare delivery or clinical practice.
Posts asserting broad health claims, debating drug efficacy or side effects, or critiquing medical narratives without healthcare system evidence.
2 example posts
The only problem with the GLP-1 heart muscle loss narrative is...
... that it's just a narrative.
GLP-1s have reliably improved cardiovascular outcomes in trials, to the point that some research suggests benefit may even be independent of (not reliant on) weight loss.
"Your body can only use 25-30g of protein per meal. Anything above that gets wasted."
This claim has been repeated in fitness nutrition for over a decade, and it was built on studies that measured the right thing over the wrong timescale.
Moore 2009 gave six young men 0, 5, ht
Exclude posts that relate a single clinical anecdote, individual patient interaction, or one-off provider observation (e.g., 'a patient wasn't examined twice in months,' 'an epic sepsis alert triggered') without drawing a connection to systemic healthcare delivery gaps, operational failures, policy problems, or structural insights that apply beyond the individual case.
Posts describing a single clinical encounter, patient case, or provider observation without broader healthcare systems insight or generalizable lesson.
2 example posts
low grade fever, mildly tachycardic, weakness, nothing focal, no alarm signs/symptoms
epic sepsis alert triggered
vanc/pip-tazo given, lactate checked
flu+
sepsis metric met
care worse
lather, rinse, repeat
Metric based "QI" does net harm
I sat with a patient today who first noticed a change in October. It’s April now. In all those months of appointments and follow-ups, her breast had only truly been looked at twice. That stayed with me.
If something has changed with your body — especially something under your ht
Created 2026-04-29 · Updated 2026-04-29
[ai_infrastructure_or_compute_hype_tangential]
Learned · 2 rejections · Active
Exclude posts that discuss AI training compute, data center capacity, grid infrastructure, or power generation scaling unless the post explicitly connects these constraints to healthcare AI deployment, clinical decision-making latency, or healthcare-specific infrastructure bottlenecks.
Posts about AI compute scaling, data center infrastructure, and power grids with loose or no healthcare relevance.
2 example posts
Since I began work on AI in 2010, training compute for frontier models has grown by one trillion times.
Now we're looking at something like another thousand-fold growth in effective compute by the end of 2028.
1000x the existing 1,000,000,000,000x.
Extraordinary stuff.
Follow the bottleneck.
Chips → data centers → grid equipment → power → gas turbines
Grid equipment grew 1%/yr for decades. Then data centers showed up as an entirely new buyer.
Gas turbine makers shipped 5–7 GW/yr. Last year? Orders hit 100 GW.
@maxlbcook on how he https://t.
Created 2026-04-28 · Updated 2026-04-28
[speculative_ai_safety_or_cybersecurity_tangent]
Learned · 2 rejections · Active
Exclude posts that discuss AI safety, cybersecurity vulnerabilities, AI takeover scenarios, or speculative AI capability risks in abstract or non-healthcare contexts, or without clear connection to healthcare operational or clinical risk.
Posts about AI safety vulnerabilities, cybersecurity risks, or speculative AI capability threats without healthcare application or context
2 example posts
A researcher gave an AI agent access to his shell, his files, and his network. Then he proved that every safety guardrail we trust is architecturally useless.
It cannot tell the difference between your instructions and a hacker's.
The paper is called Parallax: Why AI Agents htt
This is by far the most important result of the entire GPT-5.5 release:
In a cyber evaluation GPT-5.5 was able to take over a simulated corporate network in 1/10 trials with a budget of 100M tokens.
Previously, the only model that was able to solve this task was Claude Mythos,
Created 2026-04-27 · Updated 2026-04-27
[tangential_biotech_founder_enthusiasm]
Learned · 2 rejections · Active
Exclude posts that are primarily self-promotional founder statements, personal enthusiasm, or vision statements about the speaker's own biotech tool or startup without analyzing concrete healthcare impact or market dynamics.
Posts by biotech founders/investors expressing personal excitement or vision about their own tools or companies without substantive healthcare analysis.
2 example posts
Nothing beats running @ginkgo cloud lab for happy customers!
Not going to stop until it’s as easy to start a biotech startup on GCL as it is to start a software startup on AWS. https://t.co/GDEyT1pI7F
We’re exploring the idea of a peptide-forward telehealth concierge medical service. Medicine 3.0 focused on full optimization- peptides, hormones, diet/exercise. MD is a former college varsity rower, fellowship at Yale etc.
Would you be interested in participating in a pilot
Exclude posts that discuss general infrastructure, energy grids, manufacturing, or economic policy where healthcare is mentioned as one tangential example or comparison, but the primary focus is non-healthcare domain analysis (e.g., posts about grid equipment, gas turbines, chip manufacturing bottlenecks).
Posts about broad macroeconomic, infrastructure, or policy topics that merely reference healthcare as one example among many non-healthcare domains.
2 example posts
Since I began work on AI in 2010, training compute for frontier models has grown by one trillion times.
Now we're looking at something like another thousand-fold growth in effective compute by the end of 2028.
1000x the existing 1,000,000,000,000x.
Extraordinary stuff.
Follow the bottleneck.
Chips → data centers → grid equipment → power → gas turbines
Grid equipment grew 1%/yr for decades. Then data centers showed up as an entirely new buyer.
Gas turbine makers shipped 5–7 GW/yr. Last year? Orders hit 100 GW.
@maxlbcook on how he https://t.
Created 2026-04-26 · Updated 2026-04-26
[ai_agent_cybersecurity_tangent]
Learned · 2 rejections · Active
Exclude posts that discuss AI agent technical capabilities, security vulnerabilities, or proof-of-concept exploits (e.g., taking over networks, bypassing guardrails) where the healthcare framing is absent or tangential. Healthcare AI must be the primary subject, not a loose comparison.
Posts about AI agent capabilities, vulnerabilities, or cybersecurity implications that lack healthcare application specificity.
2 example posts
A researcher gave an AI agent access to his shell, his files, and his network. Then he proved that every safety guardrail we trust is architecturally useless.
It cannot tell the difference between your instructions and a hacker's.
The paper is called Parallax: Why AI Agents htt
This is by far the most important result of the entire GPT-5.5 release:
In a cyber evaluation GPT-5.5 was able to take over a simulated corporate network in 1/10 trials with a budget of 100M tokens.
Previously, the only model that was able to solve this task was Claude Mythos,
Created 2026-04-26 · Updated 2026-04-26
[ai_safety_cybersecurity_tangent]
Learned · 2 rejections · Active
Exclude posts that discuss AI safety failures, security vulnerabilities, jailbreaks, or agent control issues (e.g., taking over networks, breaking guardrails) unless they explicitly connect to a concrete healthcare delivery, clinical decision-making, or patient-facing system risk.
Posts about AI safety, security vulnerabilities, or jailbreak demonstrations without healthcare-specific application
2 example posts
A researcher gave an AI agent access to his shell, his files, and his network. Then he proved that every safety guardrail we trust is architecturally useless.
It cannot tell the difference between your instructions and a hacker's.
The paper is called Parallax: Why AI Agents htt
This is by far the most important result of the entire GPT-5.5 release:
In a cyber evaluation GPT-5.5 was able to take over a simulated corporate network in 1/10 trials with a budget of 100M tokens.
Previously, the only model that was able to solve this task was Claude Mythos,
Exclude posts that describe a single clinical case, patient experience, or research observation (e.g., 'a patient I saw had a delayed diagnosis,' 'sepsis alert triggered') unless the post explicitly extracts a healthcare system insight, policy implication, or operational lesson for broader healthcare design.
Posts sharing individual clinical observations, patient anecdotes, or research notes without connecting to healthcare system design, policy, or operational lessons.
2 example posts
low grade fever, mildly tachycardic, weakness, nothing focal, no alarm signs/symptoms
epic sepsis alert triggered
vanc/pip-tazo given, lactate checked
flu+
sepsis metric met
care worse
lather, rinse, repeat
Metric based "QI" does net harm
I sat with a patient today who first noticed a change in October. It’s April now. In all those months of appointments and follow-ups, her breast had only truly been looked at twice. That stayed with me.
If something has changed with your body — especially something under your ht
Created 2026-04-24 · Updated 2026-04-24
[personal_anecdote_or_individual_narrative]
Learned · 2 rejections · Active
Exclude posts that center on personal anecdotes, individual patient stories, family medical narratives, or first-person health experiences, unless they are framed to illustrate a broader healthcare system problem or policy insight.
Posts sharing personal health stories, individual case narratives, or lifestyle experiences without systemic healthcare analysis.
2 example posts
I never met my grandfather.
He died of pancreatic cancer when my father was just 19. Today, Yash Bindal, 33, father to 18-month-old Maya, faces the same fate.
@PopVaxIndia is using AI to make him a personalized generative medicine to extend his life.
https://t.co/O5VIXbmMGd
good news: it is a specific virus that has a good prognosis - 85%+ of full recovery.
thanks everyone who helped me; it is hard to research while immobilized, and I got some things wrong, which you helped clear up. im extremely thankful and hope i can give it back somehow
sadl
Exclude posts demonstrating novel technology in non-healthcare or hypothetical healthcare settings, or that cite non-healthcare performance achievements (robot race times, space infrastructure) as proof-of-concept for healthcare applications without real healthcare validation or deployment evidence.
Posts about emerging technologies (humanoid robots, orbital compute, exotic innovations) applied to healthcare hypothetically or in non-healthcare contexts
2 example posts
In Beijing's 2026 humanoid robot half-marathon, HONOR's Lightning completed the 21 km course in 50:26 minute.
Beat current human men's half-marathon world record of 57:20.
Last year's winner took over 2 hours 40 minutes.
Massive progress in 12 month
https://t.co/OcZJ66ebWD
Eli Lilly is suing the FDA to classify retatrutide as a biologic. Retatrutide's main chain has 39 alpha amino acids. Lilly makes it with solid-phase synthesis, the standard chemistry for peptide drugs. Under FDA law, a biologic has more than 40 amino acids. Above 40, solid-phase
Exclude posts that analyze healthcare organization financial metrics (bonds issued, credit downgrades, spending patterns) when the analysis is purely financial/actuarial without substantive insight into healthcare delivery, patient outcomes, or operational efficiency.
Posts analyzing healthcare organizational finances (credit ratings, debt issuance, spending) without healthcare delivery or outcomes insight.
2 example posts
Yale New Haven got downgraded from Aa3 to A1 in 2023. Then issued $669 million more. When your credit card company lowers your limit and you open a new card, nobody calls it strategy. Unless you’re a hospital.
https://t.co/yxhB8kLQWZ
Equity Group plans to cut healthcare costs by launching Equity Afia Pharmacy, controlling drug prices, and lowering insurance premiums.
Outpatient care could become up to 40% cheaper for Kenyans.
Created 2026-04-14 · Updated 2026-04-14
[off_topic_conspiracy_or_non_healthcare]
Learned · 2 rejections · Active
Exclude posts about geopolitical conflicts, military actions, UFO narratives, or other non-healthcare domains that are posted to healthcare-focused channels but lack any genuine healthcare relevance or analysis.
Posts about geopolitical conflict, UFOs, or other non-healthcare topics that have no substantive connection to healthcare.
2 example posts
ISrael's systematic liquidation of the medical leadership in Gaza:
Israeli Airstrike Kills Renowned Cardiologist in Gaza
Dr Marwan Al‑Sultan, the Director of the Indonesian Hospital in northern Gaza, was killed today in an Israeli airstrike. According to his surviving daughter, L
🚨 Bob Lazar set for Iconic return to The Joe Rogan Podcast 🛸👽
Reports say he will appear with Movie Director Luigi Vendittelli to discuss a new documentary called S4: The Bob Lazar Story (or similar title). The film is set to premiere on April 3, 2026 on Amazon, with the Rogan i
Created 2026-04-12 · Updated 2026-04-12
[geopolitical_conflict_framing]
Learned · 2 rejections · Active
Exclude posts that focus on geopolitical conflict, military action, or international incidents (Gaza crisis, Iran conflict, sanctions) that mention healthcare only as collateral damage framing rather than analyzing systemic healthcare policy or outcomes.
Posts about military conflict, geopolitics, or international incidents with healthcare only as incidental or emotional framing
2 example posts
ISrael's systematic liquidation of the medical leadership in Gaza:
Israeli Airstrike Kills Renowned Cardiologist in Gaza
Dr Marwan Al‑Sultan, the Director of the Indonesian Hospital in northern Gaza, was killed today in an Israeli airstrike. According to his surviving daughter, L
There are two clocks running in the Hormuz crisis. One belongs to the insurance industry. The other belongs to biology. They cannot be reconciled. And that irreconcilability is the single most important fact in the global food system right now.
The insurance clock: P&I clubs can
Created 2026-04-12 · Updated 2026-04-12
[personal_anecdote_or_lifestyle]
Learned · 1 rejection · Active
Exclude posts that are primarily personal anecdotes about individual health journeys, mental health recovery stories, or lifestyle optimization without broader healthcare system or industry analysis.
Posts about personal health experiences, lifestyle changes, or individual medical narratives without systemic healthcare insights.
3 example posts
Today I'm going to tell you about the real reasons behind accelerated brain aging — it's this paper that came out in April 2026 that everyone is citing but almost no one is explaining. https://t.co/07eWbX61vL
Mine got LASIK, as had many of the nurses.
A lot of ophthalmologists have.
There's a weird delusion that the profession is all afraid of it, but there's no basis for that belief beyond fearmongering. https://t.co/yVBDfNDiB5
New @JAMANetwork paper out from our team here at UCLA Health/WLA VA and @samirguptaGI's team at UCSD/San Diego VA!
In this first study from a multi-part research project, our teams are trying to understand what age your medical doctors and the colorectal cancer prevention https:
Created 2026-04-11 · Updated 2026-04-12
[tangential_healthcare_framing]
Learned · 0 rejections · Active
Reject posts where healthcare is a passing reference or loose analogy while the core content covers general business trends, tech infrastructure, geopolitics, or economic policy.
Healthcare mentioned only tangentially
Created 2026-04-11 · Updated 2026-04-11
[unvalidated_repurposed_drugs_or_peptides]
Learned · 0 rejections · Active
Reject posts about unregulated peptides, GLP-1 lifestyle opinions, or repurposed drug efficacy claims without peer-reviewed clinical data, regulatory context, or healthcare system analysis.
Speculative peptide and repurposed drug claims
Created 2026-04-11 · Updated 2026-04-11
[satirical_or_pr_content]
Learned · 0 rejections · Active
Reject satirical, mocking, or sarcastic posts about healthcare topics, and retweets or link drops with no original analysis or obvious PR and advertising content.
Satirical, promotional, or PR content
Created 2026-04-11 · Updated 2026-04-11
[non_healthcare_business_content]
Learned · 0 rejections · Active
Reject general career advice, entrepreneurship narratives, labor market trends, macroeconomic commentary, or business automation content lacking explicit healthcare market connection.
General business content without healthcare focus
Created 2026-04-11 · Updated 2026-04-11
[truncated_or_low_effort_posts]
Learned · 0 rejections · Active
Reject posts that are visibly truncated, end mid-sentence, contain only promotional language, or lack sufficient content to evaluate healthcare relevance.
Incomplete or truncated posts
Created 2026-04-11 · Updated 2026-04-11
[entertainment_sports_general_tech]
Learned · 0 rejections · Active
Reject entertainment, sports, athletic biohacking, or general tech news without a specific, demonstrated healthcare system or innovation angle.
Entertainment, sports, and unrelated tech
Created 2026-04-11 · Updated 2026-04-11
[fraud_enforcement_crime_reporting]
Learned · 0 rejections · Active
Reject posts primarily reporting DOJ actions, fraud arrests, or law enforcement outcomes in healthcare without analyzing systemic policy implications or structural healthcare failures.
Fraud alerts without systemic analysis
Created 2026-04-11 · Updated 2026-04-11
[clinical_only_no_systems_context]
Learned · 0 rejections · Active
Reject clinical trial results, drug mechanisms, disease biology, or case studies lacking a policy, technology, market, or healthcare business model dimension.
Pure clinical content without systems insight
Created 2026-04-11 · Updated 2026-04-11
[shallow_low_substance]
Learned · 0 rejections · Active
Reject one-line slogans, vague claims, motivational takes, hot takes, or broad assertions ('healthcare is broken') without supporting reasoning, data, or mechanism explanation.
Low-substance takes without analysis
Created 2026-04-11 · Updated 2026-04-11
[stock_trading_and_finance]
Learned · 0 rejections · Active
Reject stock trading, options, short-selling, ticker price analysis, crypto, DeFi, or speculative investing content even when healthcare companies are mentioned.
Financial trading and speculation excluded
Created 2026-04-11 · Updated 2026-04-11
[generalist_ai_no_healthcare]
Learned · 0 rejections · Active
Reject generalist AI content, AI company revenue milestones, product launches, or infrastructure news lacking a specific, demonstrated healthcare application or clinical use case.
AI content without healthcare application
Created 2026-04-11 · Updated 2026-04-11
[breaking_news_no_argument]
Learned · 0 rejections · Active
Reject BREAKING NEWS or political event announcements that merely mention healthcare without making a substantive healthcare system or policy argument.
Breaking news without healthcare argument
Created 2026-04-11 · Updated 2026-04-11
[political_outrage_sonnet]
Sonnet · Active
Political outrage about healthcare-adjacent topics without a substantive healthcare system or policy argument.
Political outrage without a healthcare system argument
Created 2026-04-08 · Updated 2026-04-08
[sports_biohacking]
Sonnet · Active
Sports performance enhancement, athletic biohacking, or peptide/steroid use in athletic contexts.
Sports performance enhancement or athletic biohacking
Created 2026-04-08 · Updated 2026-04-08
[satirical_joking_tone]
Sonnet · Active
Satirical, joking, or mocking in tone rather than analytical.
Satirical or joking tone rather than analytical
Created 2026-04-08 · Updated 2026-04-08
[immigration_visa_sonnet]
Sonnet · Active
Personal immigration or visa story (H1B, J1, USCIS, work authorization) even if told by a healthcare worker — immigration policy, not healthcare policy.
Personal immigration or visa story
Created 2026-04-08 · Updated 2026-04-08
[breaking_news_no_argument_sonnet]
Sonnet · Active
'BREAKING:' news alert or political event announcement that merely mentions healthcare — requires a healthcare SYSTEM argument, not just a news report.
Breaking news that merely mentions healthcare
Created 2026-04-08 · Updated 2026-04-08
[non_us_healthcare_sonnet]
Sonnet · Active
UK, India, Canada, Australia, EU, or any non-US healthcare system, policy, or regulation — REJECT even if the topic is interesting. US healthcare context only.
Non-US healthcare system content
Created 2026-04-08 · Updated 2026-04-08
[broad_claim_no_specifics]
Sonnet · Active
Broad claim ('Healthcare is broken', 'AI will change everything') without getting into HOW or WHY.
Broad claim without explaining HOW or WHY
Created 2026-04-08 · Updated 2026-04-08
[shallow_hot_take]
Sonnet · Active
Shallow hot take, slogan, or one-line opinion with no supporting argument or data.
Shallow hot take or slogan with no supporting argument
Created 2026-04-08 · Updated 2026-04-08
[ai_company_metrics]
Sonnet · Active
AI company revenue, valuation, or funding milestones (Anthropic ARR, OpenAI revenue, etc.) without explicit healthcare context — even if that company has healthcare products. This is business news, not healthcare insight.
AI company revenue, valuation, or funding milestones
Created 2026-04-08 · Updated 2026-04-08
[generalist_ai_no_healthcare_sonnet]
Sonnet · Active
Generalist AI (ChatGPT tips, AI art, general AI hype, AI productivity tools, AI ethics in the abstract) WITHOUT a specific healthcare application or policy angle.
Generalist AI without a healthcare application
Created 2026-04-08 · Updated 2026-04-08
[retweet_pr_no_analysis]
Sonnet · Active
Retweet or quote-tweet with no original analysis, link-only drop, or obvious PR/ad content.
Retweet, link drop, or PR content with no original analysis
Created 2026-04-08 · Updated 2026-04-08
[stock_trading_finance]
Sonnet · Active
Primarily about stock trading, short-selling, options, or non-healthcare financial markets — including stock ticker price analysis ($NVO, $LLY, $MRNA price targets or analyst ratings).
Stock trading or non-healthcare financial markets
Created 2026-04-08 · Updated 2026-04-08
[substance_filter_haiku]
Haiku · Active
One-line opinions, slogans, motivational takes, vague claims ('healthcare is broken'), or hot takes without supporting reasoning or data. Also: posts that are primarily financial scorecards or revenue numbers without any healthcare policy or operations argument.
One-liners, slogans, and posts lacking analytical depth
Created 2026-04-08 · Updated 2026-04-08
[political_outrage_no_substance]
Learned · 0 rejections · Active
Reject posts expressing partisan political outrage, conspiracy framing, or ideological attacks using healthcare as a vehicle without substantive policy mechanism analysis.
Political outrage lacking healthcare analysis
Created 2026-04-08 · Updated 2026-04-11
[sports_performance_enhancement]
Haiku · Active
Sports performance enhancement, biohacking for athletic performance, or peptide/steroid use outside a clinical or health-policy context.
Sports performance enhancement or athletic biohacking
Created 2026-04-08 · Updated 2026-04-08
[satirical_sarcastic]
Haiku · Active
Satirical, joke, or clearly sarcastic posts about healthcare topics — posts that mock rather than analyze.
Satirical or sarcastic healthcare posts
Created 2026-04-08 · Updated 2026-04-08
[immigration_visa_stories]
Learned · 0 rejections · Active
Reject personal immigration, visa, or work-authorization narratives even when told by healthcare workers; these are immigration policy posts, not healthcare system posts.
Immigration stories not healthcare policy
Created 2026-04-08 · Updated 2026-04-11
[breaking_news_political_event]
Haiku · Active
'BREAKING NEWS' or news-alert posts about political events, law enforcement actions, or government announcements that merely mention healthcare — the post must make a healthcare SYSTEM argument, not just report a political event.
Breaking news posts that merely mention healthcare
Created 2026-04-08 · Updated 2026-04-08
[personal_wellness_fitness]
Haiku · Active
Personal wellness, fitness, or supplement use without a healthcare system, policy, or technology argument.
Personal wellness or fitness without healthcare system angle
Created 2026-04-08 · Updated 2026-04-08
[crypto_defi_speculative_finance]
Haiku · Active
Crypto, DeFi, tokens, or speculative personal finance content — even if the poster mentions health/wellness products tangentially.
Crypto, DeFi, or speculative finance
Created 2026-04-08 · Updated 2026-04-08
[crime_law_enforcement_unrelated]
Haiku · Active
Crime, law enforcement, or legal proceedings unrelated to healthcare fraud, medical malpractice, or health tech liability.
Crime or law enforcement unrelated to healthcare
Created 2026-04-08 · Updated 2026-04-08
[non_us_healthcare]
Learned · 0 rejections · Active
Reject posts about UK, Canada, India, Australia, EU, or any non-US healthcare system, policy, or regulation.
Non-US healthcare systems excluded
Created 2026-04-08 · Updated 2026-04-11
[general_tech_no_healthcare]
Haiku · Active
General tech or AI/ML industry news without a healthcare angle.
General tech or AI news without a healthcare angle
Created 2026-04-08 · Updated 2026-04-08
[stock_trading_investing]
Haiku · Active
Stock trading, short-selling, options, or public market investing — even if healthcare companies are mentioned. Includes posts with ticker symbols ($NVO, $LLY, $MRNA) focused on price targets, analyst ratings, or investment returns.
Stock trading or public market investing
Created 2026-04-08 · Updated 2026-04-08
[entertainment_sports_personal]
Haiku · Active
Entertainment, sports, or purely personal content.
Entertainment, sports, or purely personal content
Created 2026-04-08 · Updated 2026-04-08
[deep_clinical_only]
Haiku · Active
Clinical trial results, clinical case evaluations, or scientific data without a policy, technology, or market-structure dimension.
Clinical data without policy/tech dimension
Created 2026-04-08 · Updated 2026-04-08
[generalist_ai]
Haiku · Active
Generalist AI content WITHOUT a specific healthcare/medtech/biotech angle. Includes AI company revenue milestones, valuations, and funding rounds (e.g. 'Anthropic hit $30B ARR') — even if that AI company has healthcare products. The post must be about AI USED IN healthcare, not AI company business metrics.
AI content without a healthcare angle
Created 2026-04-08 · Updated 2026-04-08
Propose New Rules · Analyzes your recent rejections with AI — does not save until you approve
Removed Rules
303 · Not applied to scans — can be re-enabled
[ai_company_revenue_metrics]
Learned · 9 rejections · Inactive
Exclude posts that primarily report revenue run rates, ARR milestones, financial growth trajectories, or company valuations for AI firms (Anthropic, OpenAI, Google) unless the post explicitly connects these metrics to a specific healthcare use case or business model. Financial performance metrics alone are not healthcare tech content.
[clinical_research]
Learned · 8 rejections · Inactive
Reject posts about occupational health research. This newsletter covers health technology products and digital health innovation, AI applications in medicine, healthcare business strategy, investment and M&A, drug discovery and biotech, and health policy—none of which encompass occupational health research or workplace health initiatives.
[ai_agent_infrastructure_product_launch]
Learned · 6 rejections · Inactive
Exclude posts that announce or celebrate AI agent infrastructure product launches, API releases, or technical capabilities (e.g., Claude Code, Managed Agents, sandbox abstractions) unless the post explicitly applies these tools to solve a specific healthcare problem or demonstrates healthcare-relevant use case.
[clinical_opinion_without_systems_context]
Learned · 6 rejections · Inactive
Exclude posts that present isolated clinical observations, medication critiques, or disease mechanism discussions without connecting to healthcare delivery, business models, regulatory change, or systemic impact. Posts should have relevance beyond a single clinical niche.
[ai_agent_product_launch_drama]
Learned · 6 rejections · Inactive
Exclude posts that focus on AI agent product announcements, infrastructure releases, or competitive drama between Claude Managed Agents, OpenAI Agents, or similar platforms. The post must demonstrate how agents solve a specific healthcare problem, not just announce availability or capability.
[tangential_political_or_enforcement_news]
Learned · 5 rejections · Inactive
Exclude posts that focus on political appointments, government personnel drama, law enforcement fraud busts, or policy enforcement actions (RFK Jr. removal, HHS lobbying claims, FBI/state attorney general fraud announcements) where healthcare is secondary framing rather than primary substantive focus.
[technology_non_health]
Learned · 5 rejections · Inactive
Reject posts about non-healthcare software security implications. This newsletter covers health technology products and innovation, AI/data science in medicine, payer strategy, health system business models, healthcare investment, biotech innovation, and health policy. Posts focused primarily on non-healthcare software security, cybersecurity vulnerabilities, data breach incidents, or IT infrastructure protection fall outside this scope and should be excluded regardless of which companies or systems are involved.
[unverified_ai_capability_claims]
Learned · 5 rejections · Inactive
Exclude posts that make dramatic claims about AI model capabilities (e.g., escaping sandboxes, gaining internet access, reverse-engineering proprietary systems) based on unverified leaks, internal drama, or speculation rather than published research or official statements.
[healthcare_system_outrage_without_solution]
Learned · 5 rejections · Inactive
Exclude posts that report healthcare system failures, fraud cases, Medicaid policy changes, or worker burnout stories purely as outrage narratives without analyzing root causes, systemic implications, or potential solutions. Posts must provide structural insight, not just alarm.
[clinical_epidemiology_research]
Learned · 4 rejections · Inactive
Reject posts about clinical epidemiology studies, disease prevalence research, and patient outcome data that are shared primarily for their clinical or scientific interest rather than their implications for health technology innovation, market dynamics, business strategy, or healthcare policy affecting the markets covered by the newsletter.
[truncated_low_signal_posts]
Learned · 4 rejections · Inactive
Exclude posts that appear truncated, end abruptly mid-thought, contain broken formatting, or are missing critical context (indicated by '&' HTML entities, trailing ellipsis without resolution, or incomplete sentences).
Exclude posts that claim dramatic medical outcomes (skin regeneration from peptides, genetic resistance to drugs, complete autoimmune remission) without citing published studies, sample sizes, or peer review. These are anecdotal or promotional fringe claims.
Exclude posts that are primarily personal stories (immigration journeys, mental health recovery timelines, individual drug experiments, personal LASIK decisions, hair loss treatments) without deriving broader healthcare system or policy insights.
Exclude posts that focus on world models, JEPA training tricks, Lie algebra analysis of LLM hidden states, mathematical proofs, or low-level ML engineering without explaining healthcare relevance or clinical impact.
Exclude posts that discuss military conflicts, geopolitical tensions, electricity prices, data center construction in space, or international waterways — even if they mention a healthcare company or use healthcare-adjacent language. These are geopolitical/infrastructure posts, not healthcare analysis.
Exclude posts that weaponize healthcare topics to attack political opponents, institutions, or ideological groups without substantive analysis of the healthcare mechanism, evidence, or policy alternative. Posts framing healthcare as a proxy for culture war grievances should be excluded.
Exclude posts that announce AI product features, software releases, or technical capabilities (Claude for Word, agent frameworks, MCP protocols, model optimizations) unless the post explicitly connects the tool to a healthcare problem, clinical workflow, or health outcome improvement.
[general_software_product_launches]
Learned · 4 rejections · Inactive
Reject posts about general-purpose software tools and their feature releases, unless the post explicitly discusses healthcare-specific use cases, healthcare market adoption, or implications for health-tech business models. Posts that merely announce new features or availability tiers for non-healthcare-focused platforms fall outside scope.
[immigration_policy_advocacy]
Learned · 4 rejections · Inactive
Reject posts about individual immigration cases, legal status verification, or border/visa administration framed as personal narratives or social justice arguments. These posts lack connection to health technology, business strategy, policy affecting healthcare markets, or clinical/operational innovation.
[vaccine_safety_activism]
Learned · 4 rejections · Inactive
Reject posts about vaccine safety concerns framed as regulatory failures, claims of institutional corruption in drug approval processes, or assertions that regulatory agencies ignored scientific evidence to suppress safety warnings. These posts center on public health activism and regulatory criticism rather than health technology innovation, market dynamics, investment strategy, or policy's effect on healthcare business.
[general_career_advice]
Learned · 4 rejections · Inactive
Reject posts about general job interview preparation, hiring process mechanics, career development frameworks, or HR negotiation tactics that are not specific to healthcare, health technology, or health-industry employment dynamics. This includes advice on questions to ask recruiters, team structure evaluation, or career pathing that could apply to any industry.
[niche_clinical_opinion_without_systems_context]
Learned · 4 rejections · Inactive
Exclude posts that are purely clinical education (drug mechanisms, diagnostic algorithms, treatment preferences) or personal clinical opinions without connecting to healthcare economics, policy, market dynamics, or system-level implications. Deep clinical content without healthcare context is off-topic.
[speculative_unverified_ai_capability_claims]
Learned · 4 rejections · Inactive
Exclude posts that make speculative or dramatic claims about AI capabilities, market share, or competitive positioning (e.g., 'Claude took the lead,' 'AI can write better papers than humans,' 'agents will replace jobs') without citing studies, benchmarks, or data. Hype and competitive drama without substantiation are off-topic.
[speculative_financial_opinion_biotech_peptides]
Learned · 4 rejections · Inactive
Exclude posts that present investment theses, market size projections, or financial speculation about peptide companies, biotech startups, or longevity drug markets without explaining the clinical mechanism, regulatory pathway, or healthcare system impact.
[healthcare_fraud_law_enforcement_alerts]
Learned · 4 rejections · Inactive
Exclude posts that report individual fraud cases, arrests, or law enforcement actions against healthcare providers or companies, unless the post analyzes a systemic vulnerability, regulatory gap, or healthcare business model failure.
Exclude posts about political personnel changes, government appointments, or partisan governance drama that mention healthcare only as context or framing, rather than analyzing a specific healthcare policy, market, or clinical outcome.
Exclude posts that promote ivermectin, mebendazole, ketamine, or other repurposed drugs for off-label uses (cancer, autoimmune conditions) based on anecdotal evidence, unverified studies, or claims lacking peer-reviewed validation and regulatory approval.
[ai_agent_infrastructure_product_announcements]
Learned · 4 rejections · Inactive
Exclude posts that announce or celebrate new AI agent infrastructure, sandbox management, or deployment platforms (e.g., Claude Managed Agents, Agent Builder) unless the post explicitly analyzes how this technology solves a specific healthcare problem or workflow.
[off_topic_tangential_tech]
Learned · 4 rejections · Inactive
Exclude posts about general software engineering, ML practices, macro-economic trends, geopolitical issues, or AI architecture trends that lack clear and direct healthcare application. Posts about coding practices, time-series forecasting, NATO defense, or market trading belong in tech/finance, not healthcare tech.
[non_health_tech_earnings]
Learned · 4 rejections · Inactive
Reject posts about semiconductor, software, or general technology company financial performance, valuation multiples, and stock market positioning unless explicitly connected to health-tech adoption, healthcare market dynamics, or health-specific business models. Posts analyzing cash flow returns, compensation structures, or growth rates of non-health companies lack direct relevance to healthcare technology, policy, or investment markets.
[partisan_political_messaging_healthcare_framing]
Learned · 4 rejections · Inactive
Exclude posts that frame healthcare or medical policy primarily as partisan political criticism, attacks on officials, or ideological claims without detailed policy mechanism, evidence, or systems-level analysis (e.g., 'Big Pharma captured NIH', 'government failures', 'political illusion').
[partisan_political_messaging]
Learned · 4 rejections · Inactive
Exclude posts that frame healthcare or policy announcements primarily through partisan political lens (Trump admin victories, EU tech nationalism, FDA actions as political wins) without substantive healthcare business or innovation analysis. Political celebration or outrage is not healthcare tech insight.
[general_ai_product_launches]
Learned · 4 rejections · Inactive
Reject posts about major AI model releases, AI agent platforms, or AI tooling announcements that lack explicit healthcare, medical, or health-tech application framing. Posts should be rejected even if they mention downstream use cases (marketing, analytics, operations) unless the tweet specifically contextualizes the tool within a healthcare business problem or health-tech company use case.
[unreleased_model_leaks_and_scaremongering]
Learned · 4 rejections · Inactive
Exclude posts that discuss unreleased model versions, leaked codebases, reverse-engineered systems, or claimed capability escapes (especially Claude Mythos, internal versions). These are insider tech drama, not healthcare application analysis.
[clinical_opinion_without_systems_insight]
Learned · 4 rejections · Inactive
Exclude posts that present isolated clinical opinions, trial result reactions, or treatment debates (lipid targets, drug dosing, clinical guidelines) without explaining systemic healthcare implications, business model impact, or broader innovation context. Clinical discussion alone is insufficient.
[clinical_niche_without_systems_insight]
Learned · 4 rejections · Inactive
Exclude posts that engage in specialized clinical discussion (trial data, drug mechanisms, clinical guidelines) without connecting to healthcare delivery, access, economics, or structural system issues. The post must serve a healthcare tech/systems audience, not clinical specialists only.
[legal_and_regulatory_affairs]
Learned · 4 rejections · Inactive
Reject posts about non-healthcare-related fraud prevention policy. This newsletter covers health technology innovation, AI applications in healthcare, payer and provider business strategy, investment trends, and care delivery models—not the regulatory and compliance mechanisms used to detect fraud or enforce anti-fraud standards. Posts about fraud prevention policy, regardless of context or scale, fall outside this scope.
[labor_and_employment]
Learned · 4 rejections · Inactive
Reject posts about labor union disputes. This newsletter covers health technology innovation, AI/data science in medicine, payer strategy, health system business models, healthcare investment, biotech/pharma innovation, and health policy as it affects these sectors. Labor union disputes and employment relations fall outside this scope and do not address the business, technology, or market dynamics that define the newsletter's focus.
[tangential_ai_infrastructure_not_healthcare]
Learned · 4 rejections · Inactive
Exclude posts focused on AI infrastructure (data centers, electricity consumption, token economics, compute factories) that mention healthcare tangentially or not at all. The post must directly address healthcare delivery, clinical outcomes, or healthcare-specific AI adoption challenges.
[unreleased_model_scaremongering]
Learned · 4 rejections · Inactive
Exclude posts claiming unreleased AI models (Claude Mythos Preview, etc.) have broken sandboxes, discovered zero-day vulnerabilities, or demonstrated dangerous emergent behaviors. These are speculation or internal testing claims presented as fact without peer review or independent verification.
[political_news]
Learned · 4 rejections · Inactive
Reject posts about partisan health policy. This newsletter covers health technology innovation, AI/data science applications, market dynamics (payers, hospitals, investment), drug discovery, and health policy as it affects these specific sectors and technologies. Content centered on partisan political positions, partisan disagreements about healthcare policy, or health policy framed primarily through partisan political lenses does not align with this business and innovation-focused scope.
[clinical_trial_results]
Learned · 4 rejections · Inactive
Reject posts about clinical trial outcomes, drug efficacy results, and medical research findings that are shared primarily as scientific discoveries rather than as evidence supporting a business thesis, market opportunity, regulatory change, or technology application. Focus on filtering content that treats research results as standalone medical news rather than as inputs to health-tech strategy, investment theses, or policy analysis.
[product_launch_announcements]
Learned · 4 rejections · Inactive
Reject posts about product launches, feature releases, or API availability announcements from health tech vendors. This is promotional, product-focused content that does not advance analysis of market dynamics, business strategy, technology trends, or healthcare economics—the substantive focus areas of the newsletter.
[incomplete_or_truncated_social_posts]
Learned · 4 rejections · Inactive
Exclude posts that end abruptly with ellipsis (...), incomplete URLs, or mid-sentence truncation. These posts lack sufficient context to evaluate their merit or relevance and appear to be automated feed captures rather than intentional content shares.
[ai_company_revenue_and_valuation]
Learned · 4 rejections · Inactive
Exclude posts that cite revenue run-rates, ARR figures, funding amounts, or valuation metrics for AI companies (Anthropic, Google, etc.) even if the company builds healthcare tools. The focus must be on healthcare application, not company financial performance.
[healthcare_fraud_and_crime_alerts]
Learned · 4 rejections · Inactive
Exclude posts that are primarily news alerts about law enforcement, fraud prosecutions, or DOJ announcements — even if healthcare-adjacent. The post must analyze systemic healthcare implications, not just report crime statistics or enforcement agency statements.
[speculative_financial_opinion]
Learned · 4 rejections · Inactive
Exclude posts that speculate about financial outcomes, investment theses, market timing, company growth projections, or economic trends—even healthcare-adjacent ones—without grounding in healthcare data, policy analysis, or clinical evidence.
[glp1_commodity_price_opinion]
Learned · 4 rejections · Inactive
Exclude posts that focus on GLP-1 pricing, cost-benefit opinions, or advocacy for pricing changes without substantive clinical, regulatory, or health economics analysis. Personal lifestyle opinions about drug affordability or necessity are not healthcare tech insights.
[incomplete_truncated_low_effort]
Learned · 4 rejections · Inactive
Exclude posts that are visibly truncated, incomplete, or fragmented (ending with ellipsis, missing context, or incomplete sentences) or contain only promotional/hype language with no substantive healthcare analysis or claim.
[fringe_unsubstantiated_medical_claims]
Learned · 4 rejections · Inactive
Exclude posts that advance unsubstantiated or fringe medical claims (e.g., GLP-1 users have 195% higher suicide rates, peptides should be OTC without evidence, ivermectin hidden oncology trials, psychiatric medication fraud conspiracy). Posts must cite peer-reviewed evidence or institutional sources, not speculation or anecdotal observation.
[political_prediction_markets]
Learned · 3 rejections · Inactive
Reject posts about using AI tools for political prediction markets, geopolitical forecasting, or simulations of political outcomes. These posts use healthcare-adjacent AI techniques but apply them to non-healthcare domains and lack substantive connection to health technology markets, policy, or innovation.
[emerging_market_healthcare_access]
Learned · 3 rejections · Inactive
Reject posts about healthcare cost reduction or access expansion initiatives in emerging or developing markets that lack a technology product, venture investment, or regulatory policy angle specific to the newsletter's focus areas. Posts about pharmacy launches, insurance premium reductions, or cost-of-care improvements in non-primary markets should be excluded unless they center on a health tech platform, digital health innovation, or cross-border investment thesis.
[cryptocurrency_market_predictions]
Learned · 3 rejections · Inactive
Reject posts about cryptocurrency market trends, token launches, blockchain layer architecture, stablecoin adoption, or crypto-native financial instruments. This includes posts that frame AI primarily through a cryptocurrency/blockchain lens rather than as applied to healthcare delivery, health systems, or health technology business models.
[entertainment_or_conspiracy_framing]
Learned · 3 rejections · Inactive
Exclude posts about entertainment industry figures, UFO/alien conspiracy content, celebrity interviews, or pop culture that have no genuine healthcare relevance, even if tangentially mentioned.
[personal_anecdote_or_lifestyle_opinion]
Learned · 3 rejections · Inactive
Exclude posts that are primarily personal anecdotes (individual health journey, personal medication trial, lifestyle choice) or opinion pieces without connecting to broader healthcare trends, systemic issues, or evidence-based insight.
[niche_technical_insider_without_healthcare]
Learned · 3 rejections · Inactive
Exclude posts that discuss pure machine learning architecture, technical deep-dives into model internals, software engineering best practices, or AI infrastructure optimization when they lack any explicit connection to healthcare delivery, outcomes, or policy.
[basic_neuroscience_research]
Learned · 3 rejections · Inactive
Reject posts about basic scientific research, disease pathophysiology, or academic breakthroughs in neurology, oncology, or other medical domains that lack direct connection to health technology products, healthcare business models, payer strategy, health policy, or investment opportunity. Focus on research announcements that are primarily educational or academic rather than commercializable or policy-relevant.
[professional_scope_regulation]
Learned · 3 rejections · Inactive
Reject posts about professional licensing scope disputes, credential requirements, or turf battles between healthcare professions regarding who is authorized to perform specific clinical procedures. These posts typically frame regulatory decisions through a professional advocacy or patient safety lens rather than analyzing business, technology, or market implications.
[clinical_research_findings]
Learned · 3 rejections · Inactive
Reject posts about clinical research studies, medical findings, diagnostic validation studies, or disease phenotyping research published in medical journals. These posts focus on clinical science outcomes rather than the business, technology, investment, or policy dimensions that drive the newsletter's core coverage areas.
[generic_startup_funding_announcements]
Learned · 3 rejections · Inactive
Reject posts about funding rounds, valuations, or revenue metrics for startups that lack substantive analysis of healthcare market dynamics, business model viability, competitive positioning, or regulatory implications. Posts that function as promotional announcements or investor celebration without exploring the underlying healthtech strategy or market significance should be excluded.
[ai_governance_regulation]
Learned · 3 rejections · Inactive
Reject posts about AI governance, regulatory frameworks, or self-policing mechanisms that discuss AI risks in abstract or cross-sector terms without direct application to health-tech business models, healthcare-specific markets, or investment opportunities. This includes posts that frame AI policy as a systemic or societal issue rather than through a health-tech industry lens.
[ai_safety_capability_leaks]
Learned · 3 rejections · Inactive
Reject posts about AI safety incidents, model capability demonstrations, AI system exploits, or sandbox escapes that are framed as general AI industry commentary rather than health-tech or healthcare-specific applications. This includes posts using healthcare AI incidents primarily as illustrations of broader AI risk narratives.
[public_health_conferences]
Learned · 3 rejections · Inactive
Reject posts about general public health conferences, summits, and international health forums that lack specific focus on health technology, business models, investment strategy, regulatory/policy impact, or healthcare market dynamics. Posts should be excluded if they promote attendance or coverage of broad health conferences without substantive analysis of how specific technologies, companies, or market forces are affected.
[general_ai_infrastructure_philosophy]
Learned · 3 rejections · Inactive
Reject posts about AI infrastructure, model moats, or AI competitive dynamics that lack explicit connection to healthcare technology, health systems, payer strategy, drug discovery, or healthcare investment. Posts should be grounded in specific health-tech applications, regulatory developments, or healthcare market implications rather than general AI platform strategy.
[ai_safety_capabilities]
Learned · 3 rejections · Inactive
Reject posts about AI model safety demonstrations, security vulnerability disclosures, or autonomous capability benchmarks in general-purpose AI systems—unless the post explicitly connects these capabilities to a specific healthcare delivery, health-tech product, or health industry business outcome.
[software_security_exploitation]
Learned · 3 rejections · Inactive
Reject posts about reverse engineering software authentication systems, circumventing access controls, or exploiting security mechanisms—regardless of whether they involve health-tech platforms. These topics fall outside the newsletter's focus on legitimate health-tech business models, policy, and innovation.
[ufo_conspiracy_entertainment]
Learned · 3 rejections · Inactive
Reject posts about unverified UFO or extraterrestrial conspiracy narratives, Area 51 claims, and entertainment media promoting fringe theories unrelated to healthcare, medical innovation, or health systems. This includes promotional content for documentaries or podcast appearances centered on these topics.
[general_ai_engineering_achievements]
Learned · 3 rejections · Inactive
Reject posts about independent developers, open-source contributors, or engineers building AI systems where the primary narrative is technical achievement, performance optimization, or speed benchmarks absent any connection to healthcare markets, health-tech companies, clinical workflows, or health system adoption. The focus should be on health tech innovation, not AI engineering feats in isolation.
[individual_healthcare_experience_complaints]
Learned · 3 rejections · Inactive
Reject posts about individual patient experiences with insurance coverage denials, claim reimbursement disputes, or healthcare system access problems told as personal narratives. These posts focus on consumer grievances rather than structural market dynamics, policy implications, business model analysis, or technology solutions relevant to health-tech entrepreneurs and investors.
[infectious_disease_public_health]
Learned · 3 rejections · Inactive
Reject posts about infectious disease outbreaks, vaccination campaigns, and communicable disease epidemiology framed as public health awareness or clinical updates. These posts address disease prevention and clinical medicine rather than health tech innovation, healthcare business strategy, market dynamics, or health policy affecting those sectors.
[geopolitical_commodity_markets]
Learned · 3 rejections · Inactive
Reject posts about geopolitical crises, international shipping disruptions, or commodity market dynamics (fertilizer, oil, grain, etc.) that lack explicit connection to healthcare markets, health tech companies, or health-related policy. Posts must directly address healthcare-specific impacts rather than treating healthcare as an incidental downstream effect of non-health-tech analysis.
[personal_mental_health_narratives]
Learned · 3 rejections · Inactive
Reject posts about individual mental health journeys, recovery milestones, and personal wellness achievements shared as social commentary. These posts center on lived experience and personal narrative rather than healthcare industry trends, business models, technology innovation, or policy impact.
[general_infrastructure_technology]
Learned · 3 rejections · Inactive
Reject posts about general computing infrastructure, data center technology, or space-based engineering that lack explicit connection to healthcare applications, health-tech companies, or medical innovation. These posts discuss technology challenges and economics in non-healthcare domains.
[general_robotics_ai_development]
Learned · 3 rejections · Inactive
Reject posts about robotics, autonomous systems, or general AI/ML research that lack direct application to healthcare delivery, clinical decision-making, health technology products, or regulated medical domains. This includes synthetic data generation techniques, simulation frameworks, and hardware interfaces developed for non-medical robotics applications, regardless of their potential future healthcare relevance.
[pure_ml_theory_discussions]
Learned · 3 rejections · Inactive
Reject posts about fundamental machine learning theory, computational history, or algorithmic techniques when discussed in isolation from healthcare applications, health tech products, or health industry problems. Include posts that debate academic lineage or theoretical foundations of ML concepts without connecting to health tech innovation, health system adoption, or healthcare market dynamics.
[tech_industry_narratives]
Learned · 3 rejections · Inactive
Reject posts about tech industry figure comebacks, non-healthcare AI company pivots, or narratives framed around engineer obsession and redemption arcs in consumer tech or general AI. These posts lack connection to healthcare markets, health systems, payers, providers, or health-specific applications of technology.
[general_economic_philosophy]
Learned · 3 rejections · Inactive
Reject posts about general economic history, capitalism as a philosophical system, or broad principles of entrepreneurship and innovation that are not explicitly connected to healthcare, health technology, or health markets. These posts should be removed even if they mention productivity, education, or research investment, unless they directly analyze how these principles apply to specific health-tech companies, healthcare business models, or health policy outcomes.
[partisan_bad_faith_activism]
Learned · 3 rejections · Inactive
Exclude posts that leverage healthcare topics (FDA, vaccines, medical institutions) primarily to advance partisan political narratives, attack individuals or organizations in bad faith, or promote conspiracy claims—rather than analyze healthcare policy or system design.
[off_topic_or_tangential_tech]
Learned · 3 rejections · Inactive
Exclude posts about local AI inference, LLM optimization, software engineering patterns, data center resource consumption, or open-source AI frameworks—unless they explicitly address healthcare-specific challenges (e.g., clinical data governance, FDA-regulated AI deployment).
[personal_lifestyle_anecdote]
Learned · 3 rejections · Inactive
Exclude posts that are primarily personal anecdotes about drug use, fitness experimentation, immigration milestones, or individual wellness opinions—even if they mention health topics. Posts must provide healthcare system, policy, or population-level insight, not individual testimonial.
[geopolitical_military_infrastructure]
Learned · 3 rejections · Inactive
Exclude posts focused on military strikes, geopolitical tensions, international waterway disputes, defense infrastructure, or energy policy that are only tangentially or not at all connected to healthcare delivery, regulation, or patient outcomes.
[general_ai_infrastructure_optimization]
Learned · 3 rejections · Inactive
Reject posts about general-purpose AI infrastructure improvements, model compression techniques, and local inference optimization that lack explicit connection to healthcare applications or health-tech business models. These posts discuss AI engineering and hardware efficiency in isolation from healthcare markets, payer strategy, health systems, or clinical/health-tech use cases.
[housing_finance_policy]
Learned · 3 rejections · Inactive
Reject posts about housing finance, mortgage markets, government-sponsored enterprises in the housing sector, and sovereign wealth fund composition that lack direct application to healthcare markets, health technology, or health system strategy. These posts may discuss financial policy or government asset management but do not address health tech innovation, healthcare delivery, or health-specific business models.
[general_job_recruitment]
Learned · 3 rejections · Inactive
Reject posts about job openings, hiring calls, and recruitment announcements that are primarily soliciting applications or advertising employment opportunities. This includes posts framed as research program positions, fellowship opportunities, or team-building calls, even when posted by individuals or organizations otherwise relevant to healthcare or technology.
[general_tech_sector_news]
Learned · 3 rejections · Inactive
Reject posts about AI model performance, AI security vulnerabilities, or general technology company stock movements unless they directly address healthcare-specific applications, health data risks, clinical workflows, or health-system integration. Posts that discuss AI safety or AI companies primarily through a tech-sector investment or market lens rather than a health-tech innovation lens should be excluded.
[general_energy_policy]
Learned · 3 rejections · Inactive
Reject posts about energy pricing, industrial electricity costs, and national competitiveness frameworks unless they directly address healthcare-specific infrastructure costs, hospital operations, or health tech company supply chain impacts. Posts analyzing macroeconomic energy policy divorced from healthcare market effects fall outside newsletter scope.
[tech_industry_compensation]
Learned · 3 rejections · Inactive
Reject posts about technology industry employee compensation, salary benchmarks across non-healthcare tech companies, or compensation comparisons that don't directly analyze healthcare-specific talent acquisition, retention, or market dynamics affecting health-tech companies.
[general_labor_market_ai_impacts]
Learned · 3 rejections · Inactive
Reject posts about broad AI impacts on hiring, employment, or general labor market dynamics that lack healthcare-specific framing or implications. These posts treat AI as a general workplace phenomenon rather than exploring healthcare business models, health-tech companies, or healthcare industry dynamics.
[generic_ai_business_opportunity]
Learned · 3 rejections · Inactive
Reject posts about generic AI business opportunities, entrepreneurial potential, or tool-agnostic startup advice that lack specific application to health technology, healthcare delivery, health insurance, biotech, pharma, or health policy. Posts must demonstrate concrete connection to healthcare markets or health-tech business models to be in scope.
[personal_immigration_career_milestones]
Learned · 3 rejections · Inactive
Reject posts about individual career transitions, immigration visa applications, personal gratitude statements, or professional milestones that center on an individual's biographical journey rather than analyzing broader healthcare market trends, business strategy, or technology adoption patterns. These posts lack the systems-level or market-level analysis that defines the newsletter's scope.
[political_charity_activism]
Learned · 3 rejections · Inactive
Reject posts about political figures responding to social hardship through charitable acts, particularly those that frame individual stories as evidence of systemic government failure or use them to promote specific political agendas or candidates. These posts prioritize political activism and social commentary over analysis of healthcare markets, technology, or policy mechanisms.
[general_ai_platform_capabilities]
Learned · 3 rejections · Inactive
Reject posts about new capabilities, features, or platform announcements for general-purpose AI assistants unless explicitly tied to healthcare applications, medical workflows, health data handling, or health-tech business models. Posts celebrating native agentic features, automation primitives, or cross-platform integrations in non-healthcare AI tools fall outside scope even if tangentially relevant to tech infrastructure.
[geopolitical_military_conflict]
Learned · 3 rejections · Inactive
Reject posts about military targeting, international conflicts, weapons systems, or geopolitical risk assessments framed through military/defense doctrine, even when they mention technology companies. These posts fall outside the newsletter's coverage of health-tech markets, digital health innovation, and healthcare business strategy.
[tech_industry_leadership_narratives]
Learned · 3 rejections · Inactive
Reject posts about technology industry leadership, strategic missteps by non-healthcare tech companies, or competitive dynamics between major tech firms (e.g., Meta, Google, Apple) unless the content directly analyzes their entry into healthcare markets, healthcare-specific acquisitions, or healthcare regulatory strategy. Posts framed as technology industry narrative or corporate biography fall outside scope.
[geopolitical_commodities_analysis]
Learned · 3 rejections · Inactive
Reject posts about geopolitical conflicts, international trade chokepoints, currency systems, shipping logistics, or commodity market dynamics that lack direct connection to healthcare markets, health technology companies, or health policy. This includes analysis of sanctions, de-dollarization, or international finance mechanisms unless explicitly tied to healthcare-specific impacts.
[non_health_sector_corruption]
Learned · 3 rejections · Inactive
Reject posts about corruption, regulatory capture, or conflict-of-interest allegations in government agencies or industries unrelated to healthcare, health technology, pharmaceuticals, biotech, health insurance, or health systems. This includes water utilities, transportation infrastructure, and other public-sector domains where healthcare is not the primary concern.
[anti_regulation_ideology]
Learned · 3 rejections · Inactive
Reject posts about regulation that frame it primarily through ideological anti-government or libertarian arguments about bureaucratic capture and compliance costs as general principles. These posts should lack specific reference to health policy mechanics, market structures, or business implications relevant to health tech entrepreneurs and investors.
[ai_industry_competitive_dynamics]
Learned · 3 rejections · Inactive
Reject posts about competitive disputes between frontier AI model companies, open-source AI platform threats to proprietary vendors, or allegations of anti-competitive behavior in the general AI industry. These posts address AI sector dynamics rather than healthcare-specific applications, business models, or policy.
[software_engineering_tutorials]
Learned · 3 rejections · Inactive
Reject posts about software development techniques, AI model usage optimization, or coding best practices that lack explicit application to healthcare technology, health data, or health-tech business problems. This includes generic productivity tips for AI tools or software engineering workflows unrelated to healthcare context.
[geopolitical_technology_competition]
Learned · 3 rejections · Inactive
Reject posts about autonomous vehicles, semiconductor manufacturing, international technology competition, or AI arms races framed primarily as geopolitical/national security issues rather than as business opportunities or challenges within healthcare markets, health insurance operations, or clinical care delivery.
[general_software_engineering]
Learned · 3 rejections · Inactive
Reject posts about generic software engineering techniques, AI agent development frameworks, and programming platform features that are not specifically applied to healthcare use cases or health-tech business problems. These posts should lack explicit connection to healthcare markets, health data, medical workflows, or health-tech company strategy.
[general_ai_framework_releases]
Learned · 3 rejections · Inactive
Reject posts about open-source AI frameworks, programming language releases, and general-purpose software tools that are not explicitly positioned as healthcare solutions or integrated into health-tech business strategy. These posts should be excluded even if they mention potential medical applications, unless they analyze actual adoption, regulatory impact, or market implications specific to healthcare companies or health systems.
[cosmetic_wellness_tourism]
Learned · 3 rejections · Inactive
Reject posts about cosmetic or wellness tourism packages, bundled beauty treatments, medical tourism destinations, or consumer-facing clinic offerings that emphasize aesthetic outcomes over clinical health technology, business model innovation, or healthcare system strategy. These represent consumer health marketing rather than the enterprise, investment, or policy angles the newsletter covers.
[clinical_case_studies]
Learned · 3 rejections · Inactive
Reject posts about individual patient cases, clinical treatment outcomes, or disease-specific breakthroughs presented primarily as medical narratives rather than as evidence of technology adoption, market disruption, or business model innovation. These posts focus on clinical results rather than the health systems, technologies, companies, or market dynamics driving healthcare change.
[ai_model_safety_discourse]
Learned · 3 rejections · Inactive
Reject posts about large language model failures, AI model safety concerns, and algorithmic accountability that lack explicit connection to healthcare technology products, clinical applications, or health tech business strategy. This includes broad critiques of AI companies' responses to technical mishaps unless framed through the lens of healthcare-specific harm or healthcare sector implications.
[general_ai_infrastructure_impacts]
Learned · 3 rejections · Inactive
Reject posts about AI infrastructure's environmental or resource impacts (power consumption, water usage, carbon footprint, supply chain effects) unless explicitly framed around healthcare-specific applications, health tech company operations, or health sector policy responses to AI resource constraints.
[commercial_solar_sales]
Learned · 3 rejections · Inactive
Reject posts about commercial solar panel installation, roofing sales automation, or solar tax incentive business development. These posts may use healthcare-adjacent AI or automation language but target non-healthcare commercial real estate and energy markets.
[general_ai_infrastructure_news]
Learned · 3 rejections · Inactive
Reject posts about AI companies' hardware strategies, chip manufacturing, compute infrastructure, or capital allocation decisions that lack explicit connection to healthcare applications, health-tech companies, or healthcare-specific market dynamics. Posts should be excluded unless they directly address how these developments impact healthcare organizations, digital health products, or health systems.
[consumer_enterprise_software_releases]
Learned · 3 rejections · Inactive
Reject posts about major software product releases, feature deployments, or platform updates from non-healthcare-focused companies, even when those products might tangentially be used in healthcare settings. Focus instead on healthcare-specific applications, health-tech vendors, or enterprise software announcements that directly address healthcare workflows, compliance, or market dynamics.
[geopolitical_conflict_analysis]
Learned · 3 rejections · Inactive
Reject posts about military conflicts, strikes, warfare, or geopolitical tensions, even when framed around supply chain impacts on commodities or agriculture. The newsletter covers health policy and healthcare economics, not international relations or conflict analysis.
[non_health_tech_equities]
Learned · 3 rejections · Inactive
Reject posts about equity securities, stock dilution, share offerings, or capital structure decisions at non-health-tech companies. This includes technical analysis of at-the-market (ATM) offerings, shareholder dilution mechanics, and warnings to retail investors about stock price risk in companies outside the health technology, biotech, healthcare services, or health insurance sectors.
[prompt_engineering_ai_tools]
Learned · 3 rejections · Inactive
Reject posts about AI tool prompts, LLM usage techniques, or prompt engineering frameworks shared primarily as productivity hacks or methodology demonstrations, unless they are specifically contextualized within a healthcare, health-tech, or health policy application and analysis.
[consumer_hardware_speculation]
Learned · 3 rejections · Inactive
Reject posts about general-purpose computing hardware, semiconductor manufacturing, or speculative consumer tech announcements that lack explicit connection to health systems, payers, clinical delivery, or healthcare regulatory strategy. This includes posts about LLM inference acceleration, local processing capabilities, or edge computing unless directly framed around healthcare applications or health-tech company product roadmaps.
[culture_war_institutional_criticism]
Learned · 3 rejections · Inactive
Reject posts about institutional controversies framed primarily through partisan or culture-war angles, especially those combining unrelated allegations (medical practices, financial aid, personnel conduct) to delegitimize an organization rather than analyze specific health-tech business dynamics, regulatory outcomes, or market effects.
[generic_business_finance_education]
Learned · 3 rejections · Inactive
Reject posts about generic financial modeling, unit economics, valuation frameworks, or business education that could apply to any industry and contain no healthcare, health-tech, or health-economy specific context. This includes MBA-style primers, Excel templates, or accounting tutorials presented as broadly applicable business knowledge rather than healthcare-specific analysis.
[judicial_biography_social_critique]
Learned · 3 rejections · Inactive
Reject posts about judicial decisions, court cases, or judicial philosophy that are framed primarily through personal biography, socioeconomic background critique, or social justice analysis rather than their direct impact on healthcare markets, regulation, or technology adoption.
[wealth_inequality_criticism]
Learned · 3 rejections · Inactive
Reject posts about wealthy individuals or their family members that use funding disparities, privilege, or personal spending habits as a primary angle for criticism. These posts focus on social inequality or privilege rather than substantive healthcare technology, business strategy, regulation, or market dynamics.
Exclude posts where the primary subject is geopolitics, military defense, trade policy, or China competition, and healthcare is merely a secondary frame or mention. Posts about visa bans, AI investment competition, or Chinese government tech announcements should be excluded unless the healthcare system impact is the central analysis.
[unverified_fringe_diet_treatment_claims]
Learned · 3 rejections · Inactive
Exclude posts that promote ketogenic diets, Ayurveda, alternative medicine, or repurposed drugs for serious diseases (cancer, Parkinson's, aging) without discussing mechanism of action, clinical trial sample sizes, or clearly stating the evidence is preliminary. Claims like 'ketogenic diet cut Parkinson's symptoms by 41%' need context on study rigor.
[partisan_political_messaging_healthcare_framed]
Learned · 3 rejections · Inactive
Exclude posts where the primary intent is partisan political criticism (attacking a political figure or administration) that uses healthcare as the vehicle for outrage rather than analyzing healthcare policy or systems. Posts from overtly political figures criticizing opposing administrations without substantive healthcare mechanism analysis should be filtered.
[anecdotal_patient_complaint_without_insight]
Learned · 3 rejections · Inactive
Exclude posts that share personal anecdotes (patient denied coverage, doctor delayed, individual case of illness) without connecting to broader healthcare system patterns, policy implications, or structural solutions. Single stories presented as social commentary without analysis are too narrow.
[vaccine_safety_activism_claims]
Learned · 3 rejections · Inactive
Exclude posts that make assertions about vaccine adverse events, regulatory misconduct, or safety concerns without citing peer-reviewed clinical evidence or linking to verified safety databases. Activist messaging about vaccines without rigorous evidence analysis should be filtered.
Exclude posts about general tech trends, business strategy, geopolitical events, or economic policy that mention healthcare only in passing or as a loose analogy, unless the healthcare application is explicit, detailed, and central to the post's argument.
[unverified_fringe_medical_claims_scaremongering]
Learned · 3 rejections · Inactive
Exclude posts that make sweeping health or safety claims (e.g., vaccine side effects, miracle diet effects, unproven treatment benefits) presented as fact without citing peer-reviewed clinical evidence, regulatory approval status, or peer consensus—especially if framed to alarm or provoke distrust.
[anecdotal_health_story_without_systemic_insight]
Learned · 3 rejections · Inactive
Exclude posts that lead with personal anecdotes, individual patient cases, or one-off health stories (e.g., someone's cancer diagnosis, delayed chemo appointment, immigration hardship) without connecting to systemic healthcare patterns, policy implications, or scalable solutions.
Exclude posts that announce AI product launches, feature releases, or capability improvements (e.g., Claude features, Shopify integrations, research agents) unless they explicitly demonstrate application to a specific healthcare problem or clinical workflow.
Exclude posts that report healthcare fraud, regulatory warnings, malpractice cases, or law enforcement actions (FDA warning letters, ProPublica fraud alerts) as breaking news or alert items without connecting them to a systemic problem or broader healthcare policy lesson.
Exclude posts where healthcare is mentioned only in the headline match or as a passing reference, but the core content focuses on general business trends (startup funding, career advice, stock markets, geopolitical conflict, AI infrastructure costs) with no substantive healthcare application or insight.
[unverified_fringe_medical_claim_without_context]
Learned · 3 rejections · Inactive
Exclude posts that promote unverified medical claims about diet, supplements, alternative medicine, or single-study findings (ketogenic diet, hara hachi bu, insulin-obesity claims) without rigorous explanation of methodology, limitations, or how the finding fits into established medical science.
Exclude posts that rely on a single personal anecdote, patient story, or individual healthcare worker complaint (canceled appointments, delayed surgeries, hiring/immigration barriers) without connecting it to a systemic problem, data trend, or policy mechanism.
[government_health_infrastructure_announcements]
Learned · 3 rejections · Inactive
Reject posts about government healthcare infrastructure initiatives, medical education capacity expansions, or health workforce development programs when framed as policy announcements rather than market analysis, technology adoption, business model disruption, or investment opportunity. These lack the commercial, technological, or strategic business angle required for coverage.
[partisan_health_policy_criticism]
Learned · 3 rejections · Inactive
Reject posts about government health policy that frame announcements primarily as political theater or bad-faith governance, rather than analyzing the policy's structural impact on healthcare markets, technology adoption, or business strategy. These posts typically criticize implementation failures or administrative negligence without connecting to the newsletter's focus on how policy changes affect health tech companies, payers, providers, or investors.
[crypto_trading_and_speculation]
Learned · 3 rejections · Inactive
Reject posts about cryptocurrency trading, blockchain arbitrage opportunities, copytrading bots, or financial speculation using AI agents—even if they mention Claude or AI tools. These posts address speculative financial markets and consumer trading strategies unrelated to healthcare technology, health systems, or health-sector business models.
[macroeconomic_development_policy]
Learned · 3 rejections · Inactive
Reject posts about national economic development, talent allocation across entire economies, or sectoral policy reform that lack a direct connection to health technology, healthcare delivery, health insurance, or health-related innovation. This includes posts analyzing manufacturing capacity, port logistics, or cross-sector talent distribution unless explicitly framed around healthcare-specific markets or health tech companies.
[vaccine_safety_conspiracy_narratives]
Learned · 3 rejections · Inactive
Reject posts that center on alleged harms from specific pharmaceutical interventions (vaccines, treatments, drugs) based on reanalysis of trial data, claims of regulatory capture or institutional malfeasance, or calls to distrust established medical guidance. These posts typically frame themselves as exposing hidden truths but fall outside the newsletter's focus on health-tech innovation, business models, and policy mechanisms.
[autonomous_vehicle_policy]
Learned · 3 rejections · Inactive
Reject posts about autonomous vehicle safety, regulation, or policy deployment that use healthcare or regulatory systems as comparative examples but lack substantive connection to health technology, healthcare markets, or health-specific innovation. These posts typically invoke healthcare/regulatory contexts argumentatively rather than analytically examining health-tech business or policy.
[broad_stock_market_recap]
Learned · 3 rejections · Inactive
Reject posts that provide broad stock market recaps or multi-sector earnings summaries unless the primary thesis directly addresses a health technology company, healthcare payer, biotech firm, or health policy development. General market volatility commentary, cross-sector performance comparisons, and macroeconomic updates unrelated to healthcare business strategy or innovation are out of scope.
[general_ai_tool_applications]
Learned · 3 rejections · Inactive
Reject posts about general-purpose AI agents, research automation tools, or productivity software that lack healthcare-specific context or application. Posts should be excluded if they discuss AI tools generically or in non-healthcare contexts (e.g., landing page optimization, general startup productivity), even if the tool could theoretically be applied to health tech.
[ai_infrastructure_costs]
Learned · 3 rejections · Inactive
Reject posts about AI model pricing, token costs, or LLM infrastructure expenses that lack explicit connection to healthcare delivery, health-tech product development, or healthcare business models. These discussions belong in AI/tech infrastructure coverage, not health-tech strategy.
[general_business_automation]
Learned · 3 rejections · Inactive
Reject posts about automation, autonomous agents, or decision architecture in general commercial or enterprise contexts unless explicitly connected to healthcare operations, clinical workflows, or health system infrastructure. Posts that apply general business automation principles without healthcare application specificity should be filtered.
[general_ai_developer_tools]
Learned · 3 rejections · Inactive
Reject posts about general artificial intelligence developer tools, APIs, or research platforms that are not specifically designed for or applied to healthcare use cases. Posts promoting broadly applicable AI infrastructure (research indexing, agent frameworks, LLM tooling) without healthcare-specific context or business model implications fall outside scope.
[non_health_tech_company_announcements]
Learned · 3 rejections · Inactive
Reject posts about major product announcements or feature launches from non-health-tech companies, even when those features could theoretically be applied to healthcare contexts. The focus should be on companies, products, and innovations explicitly operating in health technology, healthcare delivery, or healthcare-adjacent markets—not general technology infrastructure providers discussing their core business developments.
[patient_advocacy_anecdotes]
Learned · 3 rejections · Inactive
Reject posts about isolated patient experiences, medical malpractice anecdotes, or individual cases of poor clinical care shared as cautionary tales. These posts frame healthcare problems through personal narrative rather than through the lens of market structure, technology, policy, or business strategy.
[public_health_agency_controversies]
Learned · 3 rejections · Inactive
Reject posts about government health agency decisions, vaccine policy announcements, or public health controversies framed around political administration actions or delays. These topics lack direct relevance to health tech innovation, digital health products, healthcare business models, or investment strategy.
[medical_malpractice_litigation]
Learned · 3 rejections · Inactive
Reject posts about individual medical malpractice cases, personal injury litigation, or hospital bankruptcy-related claims disputes. These focus on legal accountability for patient harm rather than analyzing health-tech markets, payer strategy, care delivery innovation, or healthcare policy.
[illicit_drug_safety_warnings]
Learned · 3 rejections · Inactive
Reject posts about safety issues, quality control failures, or health risks associated with illicit or unregulated drug markets. These posts address consumer drug safety rather than health technology markets, healthcare business strategy, or regulatory policy affecting legitimate innovators and institutions.
[clinical_disease_epidemiology]
Learned · 3 rejections · Inactive
Reject posts about disease epidemiology, clinical disease trends, and patient experience narratives that frame health conditions as medical mysteries or social problems without connection to technology innovation, business models, investment opportunities, or healthcare system strategy. These posts focus on clinical and public health concerns rather than the technology and business infrastructure of healthcare.
[consumer_nutrition_advice]
Learned · 3 rejections · Inactive
Reject posts about personal nutrition practices, eating habits, and dietary behavior modification that lack a health-tech, business, policy, or market angle. This includes trending wellness concepts, food science commentary, and satiety discussions presented primarily as consumer health advice rather than as infrastructure, industry, or innovation opportunities.
[general_business_narrative]
Learned · 3 rejections · Inactive
Reject posts about general business success stories, entrepreneurship lessons, and economic narratives that happen to involve financial services or other industries but lack substantive connection to healthcare markets, health technology, or healthcare-specific business dynamics. These may be well-crafted narratives but fall outside the newsletter's healthcare-focused scope.
[general_developer_tools]
Learned · 3 rejections · Inactive
Reject posts about general software development tools, programming features, or code development platforms unless they are explicitly positioned within a health-specific use case or healthcare application context. Posts should demonstrate direct application to healthcare technology, medical data workflows, or health system operations to be included.
[partisan_political_criticism]
Learned · 3 rejections · Inactive
Reject posts about political figures or administrations framed primarily as personal criticism, broken campaign promises, or partisan blame rather than substantive analysis of how specific regulatory changes, policy shifts, or legislative actions affect health-tech markets, business models, or investment opportunities.
[generic_edtech_promotions]
Learned · 3 rejections · Inactive
Reject posts about online courses, professional certifications, and educational platforms that are promoted as general skill-building offerings rather than analyzed as healthcare market phenomena, competitive threats to health-tech incumbents, or policy-relevant workforce development initiatives.
[patient_care_access_stories]
Learned · 3 rejections · Inactive
Reject posts about individual patient experiences with insurance coverage decisions, treatment denials, or appointment cancellations framed as personal stories. These posts, while emotionally compelling, lack analysis of underlying business models, policy mechanisms, technology platforms, or market dynamics that would be relevant to health-tech entrepreneurs, investors, and strategists.
Exclude posts discussing reinforcement learning fine-tuning, sandbox isolation techniques, Model Context Protocol implementations, world models, or AI agent infrastructure patterns where the healthcare application is vague, theoretical, or absent—focus is on technical capability rather than healthcare problem-solving.
[unverified_fringe_medical_treatment_claims]
Learned · 3 rejections · Inactive
Exclude posts promoting ivermectin, mebendazole, ketamine, or other repurposed/unregulated treatments as cancer cures or major clinical breakthroughs, especially those citing 'clinical benefit' percentages or 'disappearance rates' not published in peer-reviewed journals or validated by major medical organizations. This also covers fringe diet claims presented as disease cures.
[career_guidance_vocational_advice]
Learned · 3 rejections · Inactive
Reject posts about career pivots, vocational training programs, degree options, and professional development pathways. These posts frame health through the lens of individual job satisfaction and educational credentials rather than market dynamics, technology innovation, or healthcare business strategy.
[clinical_trial_consumer_nutrition]
Learned · 3 rejections · Inactive
Reject posts about clinical trial results for dietary or lifestyle interventions (ketogenic diets, supplements, exercise protocols, etc.) when framed primarily as consumer health advice or direct-to-patient treatment recommendations, rather than as business opportunities, technology platforms, or policy implications affecting health systems or payers.
[general_ai_productivity_trends]
Learned · 3 rejections · Inactive
Reject posts about artificial intelligence tools and productivity optimization that treat AI as a generic workforce multiplier or business efficiency lever without analyzing health-tech-specific implications, healthcare market dynamics, regulatory constraints, or domain-specific use cases in medicine, payers, providers, or health systems.
[general_ai_philosophy]
Learned · 3 rejections · Inactive
Reject posts about general AI philosophy, consciousness, or scaling theory that lack explicit connection to health technology applications, healthcare business models, clinical outcomes, or health-specific regulation. Posts should be excluded if they treat health tech merely as an example of broader AI principles rather than examining health-tech markets, companies, or policy directly.
[foundational_ai_research]
Learned · 3 rejections · Inactive
Reject posts about foundational AI research, machine learning theory, and model training techniques that discuss general AI capabilities without explicit connection to healthcare applications, medical use cases, or health-tech business strategy. This includes papers and discussions on reasoning circuits, reinforcement learning generalization, or model scaling that treat healthcare as an incidental application domain rather than the primary focus.
[generic_entrepreneurship_narratives]
Learned · 3 rejections · Inactive
Reject posts about entrepreneurial retrospectives, startup failure/success storytelling, and personal founder journeys that do not explicitly address health technology, healthcare markets, medical innovation, or health-system business strategy. These posts may mention AI, SaaS, or business lessons but apply them to non-healthcare domains or generic startup advice.
[basic_disease_biology]
Learned · 3 rejections · Inactive
Reject posts about basic disease biology, viral mechanisms, cellular pathology, or symptom etiology that lack a health technology, business, policy, or investment angle. Posts about what a disease does to the body, without connecting to a product, company, care model, payer strategy, or regulatory framework, fall outside newsletter scope.
[geopolitical_military_analysis]
Learned · 3 rejections · Inactive
Reject posts about military capabilities, geopolitical conflict preparation, defense strategies, weapons systems development, or international security tensions—even when framed as technology news or innovation announcements. These posts lack direct relevance to healthcare markets, digital health innovation, or health policy.
[academic_research_tools]
Learned · 3 rejections · Inactive
Reject posts about open-source or commercial AI tools designed to automate academic research workflows (literature review, paper writing, citation management, rebuttal generation) unless the post explicitly connects the tool to healthcare delivery, health economics, biomedical research, or a health-tech business application.
[general_ai_engineering_patterns]
Learned · 3 rejections · Inactive
Reject posts about general software engineering practices, LLM agent design patterns, or AI development methodologies that lack healthcare-specific context or application. These posts should address healthcare technology specifically (e.g., clinical AI, healthcare data systems, health tech infrastructure) rather than treating healthcare as one example among many of broader AI/engineering trends.
[generic_personal_finance_advice]
Learned · 3 rejections · Inactive
Reject posts about generic wealth accumulation strategies, asset ownership frameworks, or personal finance principles that lack specific application to healthcare markets, health-tech business models, or healthcare investment. Posts must connect explicitly to one of the newsletter's core verticals (digital health, healthtech, healthcare payers, providers, biotech, or health policy) rather than treating healthcare as an incidental example of broader financial principles.
[antitrust_conspiracy_allegations]
Learned · 3 rejections · Inactive
Reject posts about alleged coordinated industry efforts to suppress competitors or technologies, especially those framed as conspiracy theories without substantive market analysis, regulatory documentation, or competitive data. These posts typically call for investigation or legal action based on speculation rather than demonstrable market conditions or documented business practices.
[clinical_pathophysiology_debates]
Learned · 3 rejections · Inactive
Reject posts about mechanistic claims regarding disease pathophysiology, metabolic processes, or clinical observations that lack explicit connection to a health-tech product, business model, policy change, or investment thesis. Posts should focus on *what is being built or funded to address a problem*, not on establishing or debating the underlying scientific mechanism of the problem itself.
[general_software_security]
Learned · 3 rejections · Inactive
Reject posts about generic software architecture patterns, container security, or infrastructure isolation techniques unless they are explicitly framed in the context of healthcare-specific implementation challenges, health data protection, or solutions to a documented healthcare technology problem.
[labor_law_employment_policy]
Learned · 3 rejections · Inactive
Reject posts about broad labor law changes, employment regulations, and worker protections that are not explicitly framed around health-tech company strategy, healthcare workforce dynamics, or business model implications within the health-tech ecosystem. General employment policy wins or losses for workers fall outside scope unless directly tied to health-tech venture strategy, care delivery disruption, or healthcare market structure.
[general_ai_philosophy_claims]
Learned · 3 rejections · Inactive
Reject posts about whether AGI has been achieved, general claims about AI capability thresholds, or philosophical arguments about AI progress that lack explicit connection to health technology products, healthcare markets, medical innovation, health policy, or healthcare business models.
[tangential_tech_without_healthcare_application]
Learned · 3 rejections · Inactive
Exclude posts about general AI capabilities, ML engineering patterns, software productivity, time-series forecasting, or tech infrastructure trends that mention healthcare only tangentially or use healthcare as a generic example rather than the core focus.
[bad_faith_political_cynicism]
Learned · 3 rejections · Inactive
Exclude posts that express partisan political criticism, conspiracy allegations, or cynical attacks on government/institutions in healthcare contexts without offering evidence-based analysis, proposed solutions, or substantive healthcare policy insight.
[clinical_research_reviews]
Learned · 3 rejections · Inactive
Reject posts about clinical research findings, disease mechanism reviews, or drug efficacy studies that are presented primarily as medical science without connection to health technology products, business models, market dynamics, regulatory strategy, or investment opportunities. This includes literature reviews and clinical trial analyses framed for medical professionals rather than health tech entrepreneurs or investors.
[clinical_medicine_debates]
Learned · 3 rejections · Inactive
Reject posts about clinical medical debates, pathophysiology discussions, or pharmaceutical treatment efficacy that are framed as peer-to-peer physician education rather than as commentary on health technology, market dynamics, reimbursement policy, or business strategy. This includes debates about cholesterol management, medication effectiveness, or diagnostic approaches absent any health-tech or commercial healthcare angle.
[general_ai_machine_learning]
Learned · 3 rejections · Inactive
Reject posts about foundational AI/ML concepts, model training techniques, or AI research that lack explicit connection to healthcare, health-tech companies, medical applications, or health systems. This includes theoretical discussions of reward signals, training methodologies, or AI capabilities presented as general technology commentary rather than health-specific innovation or strategy.
[general_ai_capability_benchmarks]
Learned · 3 rejections · Inactive
Reject posts about AI model performance comparisons, research paper quality benchmarks, or autonomous system competitions that are not explicitly connected to healthcare applications, health-tech business outcomes, or medical/clinical use cases. This includes posts framing AI capability advances as significant without demonstrating health-tech market or regulatory implications.
[unverified_ai_capability_claims_agent_work]
Learned · 3 rejections · Inactive
Exclude posts that make sweeping claims about AI agents automating healthcare roles, solving clinical problems, or achieving healthcare outcomes without citing validation studies, pilot results, or published evidence.
[drug_mechanism_education]
Learned · 3 rejections · Inactive
Reject posts about drug mechanisms of action, disease biology, or cellular/molecular pharmacology presented primarily for educational or clinical understanding purposes. These posts focus on the scientific basis of therapies rather than the business models, market dynamics, regulatory strategy, or investment implications of pharmaceutical innovation.
[ai_safety_philosophy]
Learned · 3 rejections · Inactive
Reject posts about artificial general intelligence (AGI) strategy, AI safety principles, human agency in AI systems, or AI alignment frameworks when presented as abstract philosophical or policy concerns disconnected from specific healthcare applications, health tech products, or medical market dynamics.
[direct_to_consumer_product_reviews]
Learned · 3 rejections · Inactive
Reject posts about individual consumer experiences with direct-to-consumer beauty, skincare, or wellness products, particularly those focused on personal results, product recommendations, or purchasing experiences. These posts lack analysis of market dynamics, business models, or healthcare/health-tech implications.
[general_ai_product_development]
Learned · 3 rejections · Inactive
Reject posts about how non-healthcare AI companies build products, manage design processes, or ship features—unless the post explicitly connects these practices to a specific health-tech application, healthcare use case, or health system implementation. Posts about general AI development velocity, design methodologies, or internal company processes at consumer or enterprise software firms are off-topic.
[basic_clinical_breakthroughs]
Learned · 3 rejections · Inactive
Reject posts about clinical treatment successes, disease remission outcomes, or therapeutic efficacy for specific patient populations unless the post explicitly connects these developments to health-tech platforms, digital health business models, healthcare investment, or policy/regulatory implications affecting the market.
[political_healthcare_advocacy]
Learned · 3 rejections · Inactive
Reject posts on international healthcare challenges.
[general_software_engineering_tools]
Learned · 3 rejections · Inactive
Reject posts about software engineering tools, APIs, frameworks, and developer platforms that lack explicit application to healthcare or health-tech use cases. These posts may discuss AI agents, monitoring systems, or infrastructure improvements but do not address healthcare-specific problems, markets, regulations, or business models.
[healthcare_fraud_enforcement_reporting]
Learned · 3 rejections · Inactive
Exclude posts that are primarily crime/fraud reporting (specific arrests, guilty pleas, FBI investigations, Medicaid fraud cases) or government enforcement announcements without analysis of systemic healthcare policy implications or fraud prevention mechanisms.
[ai_agent_product_hype_drama]
Learned · 3 rejections · Inactive
Exclude posts that are essentially product announcements or technical hype for AI agent platforms (Managed Agents, Agent Builder) unless they specifically analyze healthcare application, clinical workflow integration, or healthcare business model impact.
[truncated_low_effort_posts]
Learned · 3 rejections · Inactive
Exclude posts that end abruptly with incomplete sentences, ellipses, or clearly truncated thoughts where the full argument cannot be evaluated (text ending with 'St' or 'the fight has literally just begun' without follow-up).
[political_conspiracy_bad_faith]
Learned · 3 rejections · Inactive
Exclude posts that frame healthcare or regulatory issues through partisan political conspiracy (RFK Jr. being 'sidelined,' government 'cage,' evidence destruction cover-ups, Deep State narratives) rather than substantive policy analysis.
[partisan_political_commentary]
Learned · 3 rejections · Inactive
Reject posts about partisan political movements, administrative state critiques, or ideological battles framed around government control and institutional reform. These posts lack substantive connection to specific health technology, healthcare market dynamics, policy implementation affecting healthcare delivery, or healthcare-adjacent business strategy.
[public_health_infrastructure_policy]
Learned · 3 rejections · Inactive
Reject posts about government public health infrastructure, disease surveillance systems, or national health security capabilities—particularly those framed around government agency performance, dismantling, or geopolitical health security concerns. These posts discuss public health governance and policy outcomes rather than the commercial health technology, business models, investment opportunities, or market dynamics that the newsletter covers.
[macro_economic_commentary]
Learned · 3 rejections · Inactive
Reject posts about general economic conditions, labor market trends, GDP growth, unemployment rates, or productivity metrics that lack explicit connection to healthcare systems, health tech markets, payer/provider economics, or health policy. Posts must address healthcare-specific economic dynamics to be in-scope.
[veterinary_longevity_research]
Learned · 3 rejections · Inactive
Reject posts about veterinary medicine, pet pharmaceuticals, and animal health research.
[medical_regulation_activism]
Learned · 3 rejections · Inactive
Reject posts about healthcare outside of the United States.
[alternative_medicine_regulation]
Learned · 3 rejections · Inactive
Reject posts about non-USA healthcare.
[consumer_insurance_disputes]
Learned · 3 rejections · Inactive
Reject posts about individual consumer insurance claim denials, consumer court litigation outcomes, or tactical advice for fighting rejected claims. These posts frame health insurance through the lens of consumer advocacy and dispute resolution rather than market dynamics, product innovation, payer strategy, or systemic business implications.
[partisan_government_accountability]
Learned · 3 rejections · Inactive
Reject posts about partisan attacks on government officials, inter-agency blame for policy failures, or criticism framed primarily as political accountability rather than substantive healthcare market or regulatory analysis. These posts lack relevance to health-tech business strategy, innovation, investment, or policy impact on the covered sectors.
[generic_ai_architecture_trends]
Learned · 3 rejections · Inactive
Reject posts about AI agent architecture, token economics, or computational paradigms that discuss these concepts in generic technology contexts without explicit connection to healthcare delivery, health tech products, medical data, healthcare payers, providers, or health-related business models.
[medical_ethics_enforcement]
Learned · 3 rejections · Inactive
Reject posts about individual practitioners' criminal conduct, license revocation, or ethical violations. These posts focus on professional discipline and enforcement rather than the technology, business models, market dynamics, or policy frameworks that drive healthcare innovation and strategy.
[unproven_alternative_cancer_treatments]
Learned · 3 rejections · Inactive
Reject posts about drugs or alternative therapeutic combinations based on preliminary patient-reported findings. This includes content framing such treatments as breakthroughs requiring large-scale clinical trials or positioning them as leadership in healthcare innovation.
[unvalidated_repurposed_drug_claims]
Learned · 3 rejections · Inactive
Reject posts about repurposed drugs or off-label treatments presented with clinical outcome data, efficacy percentages, or patient testimonials—especially when framed as urgent medical breakthroughs outside traditional regulatory pathways. These posts conflate clinical evidence claims with health-tech business content and fall into direct medical advice territory.
[partisan_political_personnel_drama]
Learned · 3 rejections · Inactive
Reject posts about personnel removals, internal political conflicts, or administrative drama within government agencies when framed primarily as partisan narrative rather than substantive analysis of policy outcomes, regulatory changes, or market impact. Focus on the mechanism and market effect, not the political theater or personality-driven speculation.
[general_financial_trading]
Learned · 3 rejections · Inactive
Reject posts about AI systems, trading algorithms, or investment strategies applied to general financial markets, commodities, or broad equity indices. These posts lack specific application to healthcare, health technology, or health-related business models and markets.
[unvalidated_cancer_treatment_claims]
Learned · 3 rejections · Inactive
Reject posts about repurposed drugs or off-label treatments for serious diseases that claim significant clinical efficacy based on real-world observational data, small cohorts, or preprints lacking peer review and regulatory validation. These posts typically bypass standard pharmaceutical development and clinical evidence hierarchies that the newsletter covers.
[speculative_unregulated_market_opinion]
Learned · 3 rejections · Inactive
Exclude posts that speculate on unregulated peptide or GLP-1 markets, offer lifestyle opinions about weight loss drugs, or discuss black-market therapeutics without regulatory analysis, safety data, or healthcare system implications. Market gossip and personal usage anecdotes do not qualify.
[geopolitical_defense_policy]
Learned · 3 rejections · Inactive
Reject posts about geopolitical alliances, defense spending, arms sales, and military industrial strategy unless directly connected to healthcare-specific policy, regulation, or market dynamics. Posts analyzing foreign policy, NATO, defense budgets, or weapons platforms as primary subjects are out of scope, even when framed through economic or commercial lenses.
[institutional_abuse_cover_ups]
Learned · 3 rejections · Inactive
Reject posts about alleged government or institutional cover-ups of crime, evidence destruction, or abuse of power that are framed as investigative accountability journalism rather than health-tech industry analysis. This includes posts calling for support of independent investigators exposing institutional failures, regardless of the underlying subject matter, unless the investigation directly concerns healthcare regulation, payer fraud, or health-tech company misconduct.
[biotech_financial_speculation]
Learned · 3 rejections · Inactive
Exclude posts that focus primarily on drug approval timelines, pharmaceutical market share competition, stock outlook, or genetic modifiers of drug efficacy when the purpose is financial speculation rather than healthcare system analysis.
Exclude posts that frame AI job displacement or healthcare workforce burnout as inevitable macro trends without analyzing how specific healthcare business models, technology platforms, or regulatory changes actually address labor challenges. Posts should propose solutions, not just describe problems.
[speculative_unverified_peptide_claims]
Learned · 3 rejections · Inactive
Exclude posts that speculate about peptide efficacy, off-label uses, genetic modifiers, or mental health effects without peer-reviewed evidence or FDA context. Posts about unregulated peptide markets, 'dirty peptides,' or anecdotal benefit claims should be rejected.
[ai_agent_infrastructure_launch_hype]
Learned · 3 rejections · Inactive
Exclude posts that treat AI agent platform launches (Anthropic Managed Agents, OpenAI Agent Builder, etc.) as major news events without demonstrating specific healthcare use cases or clinical impact. These posts celebrate the technology itself rather than how it solves healthcare problems.
[healthcare_worker_labor_and_burnout]
Learned · 3 rejections · Inactive
Exclude posts that highlight healthcare worker labor challenges (nursing shortages, physician burnout, practitioner departures, burnout statistics) as anecdotal complaints, trend observations, or emotional testimonies without proposing structural changes, business model implications, or technology-enabled solutions to the underlying workforce challenge.
[policy_announcement_without_healthcare_analysis]
Learned · 3 rejections · Inactive
Exclude posts that function primarily as political news or policy announcements (executive orders, dietary guidelines, H-1B visas, microplastics regulation, trade deals) without substantive healthcare business, delivery system, or technology implications. Posts must explain how the policy concretely affects healthcare operations or outcomes, not just celebrate or report the announcement.
[healthcare_fraud_and_enforcement_reporting]
Learned · 3 rejections · Inactive
Exclude posts that function primarily as law enforcement or fraud alert reporting (FBI/HHS fraud busts, Medicaid fraud schemes, CMS enforcement actions) without providing systemic healthcare insight, policy analysis, or broader implications for healthcare operations or business models.
[ai_agent_infrastructure_announcement]
Learned · 3 rejections · Inactive
Exclude posts that announce or praise AI agent infrastructure products (Anthropic Managed Agents, Claude Code, OpenAI Agents) purely as developer tools or operational wins, without demonstrating a concrete healthcare use case, clinical problem solved, or healthcare-specific insight. Posts must show healthcare application, not just infrastructure capability.
[personal_productivity_tools]
Learned · 3 rejections · Inactive
Reject posts about personal AI productivity systems, note-taking applications, or general-purpose AI tools used for individual knowledge management. These posts focus on consumer software workflows rather than healthcare-specific technology, business models, or market dynamics.
[general_ml_engineering_practices]
Learned · 3 rejections · Inactive
Reject posts about machine learning engineering practices, model development techniques, or software engineering challenges that are framed as general ML/AI problems rather than as solutions to specific healthcare business problems, clinical workflows, or health technology applications. This includes posts about foundation models, training pipelines, or data engineering approaches presented primarily as engineering best practices without healthcare context.
[generic_ai_developer_tools]
Learned · 3 rejections · Inactive
Reject posts about generic AI agent platforms, LLM infrastructure, or software development tools that discuss implementation details and technical capabilities without connecting to healthcare applications, health system workflows, payer operations, or health-tech business strategy. These posts treat healthcare as an incidental use case rather than the primary subject.
[software_engineering_productivity]
Learned · 3 rejections · Inactive
Reject posts about software engineering workflows, developer productivity techniques, and AI-assisted coding practices that lack explicit application to healthcare technology, health systems, or life sciences domains. Posts should be rejected even if they mention healthcare tangentially unless they specifically address healthcare use cases, healthcare company engineering challenges, or health tech infrastructure.
[social_safety_net_policy]
Learned · 3 rejections · Inactive
Reject posts about changes to non-health social safety net programs, including food assistance, welfare work requirements, and state-level benefit administration changes. These posts focus on social policy impacts rather than health technology, payer strategy, care delivery, or healthcare-specific regulation.
[law_enforcement_fraud_cases]
Learned · 3 rejections · Inactive
Reject posts about criminal fraud cases, law enforcement investigations, and fraud takedowns in healthcare unless they draw broader conclusions around healthcare markets or technology beyond a simple crime report. These posts focus on prosecutorial outcomes and individual criminal cases rather than on systemic market dynamics, technology trends, business strategy, policy mechanisms, or investment opportunities that affect the health tech and healthcare innovation landscape.
[individual_drug_approvals_and_launches]
Learned · 3 rejections · Inactive
Reject posts about individual pharmaceutical product approvals, regulatory clearances, or commercial launches in non-US markets that do not connect to broader US healthcare or health-tech business models, market structure changes, or investment implications. These posts treat drugs as consumer products rather than as signals about healthcare systems, payer strategy, or technology-enabled care delivery.
[clinical_practice_guidelines]
Learned · 3 rejections · Inactive
Reject posts about clinical practice guidelines, treatment protocols, or therapeutic decision-making for practicing clinicians. These posts focus on how physicians should diagnose and treat specific patient populations, rather than on the technology, business models, markets, or policy infrastructure that enable healthcare delivery.
[speculative_biotech_financial_opinion]
Learned · 3 rejections · Inactive
Exclude posts that analyze which biotech company will win a market (Novo vs. Lilly in GLP-1), predict pharmaceutical stock performance, or discuss financial positioning without connecting to patient outcomes, clinical efficacy differences, or healthcare delivery implications.
[tangential_infrastructure_not_healthcare]
Learned · 3 rejections · Inactive
Exclude posts that focus on AI infrastructure challenges (electrical grids, transformers, decompilation security, code secrecy) or environmental impact unless explicitly connected to healthcare deployment or clinical operations. General tech infrastructure posts belong in tech media, not healthcare.
[unverified_peptide_market_claims]
Learned · 3 rejections · Inactive
Exclude posts that discuss unregulated peptides (LOY-002, semaglutide off-label, Chinese research peptides) with claims about efficacy, safety benefits, or market traction that lack peer-reviewed clinical trial data or FDA approval status. Posts must cite specific trial data, not anecdotal reports.
[ai_product_launch_announcements]
Learned · 3 rejections · Inactive
Reject posts about AI platform feature launches, infrastructure tool releases, or generic AI capability announcements that are not specifically healthcare-related and lack healthcare-specific applications, market strategy analysis, or implications for health-tech business models. Posts must demonstrate direct relevance to healthcare markets, care delivery, payer strategy, or a health-tech investment thesis, not simply mention that an AI tool might theoretically apply to healthcare.
[basic_cardiovascular_physiology]
Learned · 3 rejections · Inactive
Reject posts about basic cardiovascular or physiological mechanisms (e.g., why veins versus arteries develop plaque, how blood vessels function) that are framed as educational content or scientific curiosities rather than as drivers of technology, business model, or policy outcomes in healthcare markets.
[generic_ai_product_launches]
Learned · 3 rejections · Inactive
Reject posts about AI product launches, tool releases, or feature announcements outside healthcare, pharma, medtech, biotech, or health tech that lack explicit connection to healthcare applications, health-tech companies, medical workflows, or health-related industries. Posts listing generic use cases across consumer, finance, legal, and business domains without healthcare-specific angles should be excluded even if healthcare is mentioned as one of many verticals.
[veterinary_drug_development]
Learned · 3 rejections · Inactive
Reject posts about animal or veterinary drug development.
[unregulated_peptide_market_opinion]
Learned · 3 rejections · Inactive
Exclude posts that discuss GLP-1 drugs, peptides, or weight-loss medications primarily as commodity market opinion, personal efficacy anecdotes, genetic variation trivia, or unregulated market commentary without addressing healthcare delivery, regulatory gaps, clinical outcomes measurement, or healthcare business model implications.
[truncated_incomplete_social_posts]
Learned · 3 rejections · Inactive
Exclude posts that end with '>', '...', truncated URLs without context, or mid-sentence cutoffs where the full argument cannot be understood. These are technical feed errors or incomplete drafts, not publishable content.
[government_enforcement_and_fraud_alerts]
Learned · 3 rejections · Inactive
Exclude posts that are primarily news alerts about DOJ investigations, fraud prosecutions, or law enforcement actions (hospice fraud, Medicaid fraud, IRS enforcement). Unless the post provides healthcare systems insight or policy implications for the writer's audience, it is breaking-news crime reporting, not healthcare tech analysis.
[pharma_ideology_critique]
Learned · 3 rejections · Inactive
Reject posts about pharmaceutical development that frame the regulatory system primarily as bureaucratic theater, absurdist maze-building, or ideological performance rather than engaging with specific regulatory mechanisms, cost structures, incentive misalignments, or business model implications. These posts typically dismiss rather than analyze policy, and lack the practical focus on how regulatory changes affect company strategy, market entry, or healthcare outcomes.
[lifestyle_genetic_health_opinion]
Learned · 3 rejections · Inactive
Exclude posts discussing genetic risk factors, lifestyle interventions (diet, exercise, vaccination), or personal health optimization unless the post addresses healthcare delivery innovation, policy reform, or market disruption.
[tangential_ai_infrastructure_environmental]
Learned · 3 rejections · Inactive
Exclude posts focused on AI infrastructure scaling, electricity consumption, data center construction, or token economics unless the post specifically addresses healthcare capacity, deployment barriers, or clinical system constraints.
[peptide_unregulated_market_opinion]
Learned · 3 rejections · Inactive
Exclude posts about peptides, semaglutide supply, compounding regulations, or illegal pharmaceutical smuggling that lack clinical evidence or systems-level healthcare policy analysis. Posts must go beyond commodity market commentary or regulatory compliance alerts.
[healthcare_opinion_without_mechanism]
Learned · 3 rejections · Inactive
Exclude posts that state healthcare problems or opinions (e.g., 'physicians leaving medicine,' 'doctor burnout') without explaining *why* this is happening or how it connects to business/technology/policy mechanisms. Posts must move beyond problem assertion to analysis.
[tangential_tech_infrastructure]
Learned · 3 rejections · Inactive
Exclude posts about AI infrastructure scaling, electricity consumption, data center buildout, or general tech trends that mention healthcare only in passing or as one example among many industries. Posts must center on healthcare-specific technology implications.
[product_announcement_without_analysis]
Learned · 3 rejections · Inactive
Exclude posts that are primarily product launch announcements, feature descriptions, or promotional content for AI tools (Claude Managed Agents, Alloy partnerships, etc.). Posts must provide strategic insight into how the product reshapes healthcare, not just describe what it does.
[government_enforcement_reporting]
Learned · 3 rejections · Inactive
Exclude posts that are primarily government announcements, law enforcement quotes, or fraud investigation updates. Posts must include substantive healthcare industry insight or business model implications, not just report that enforcement is happening.
[ai_capability_scaremongering]
Learned · 3 rejections · Inactive
Exclude posts that discuss unreleased AI models (Claude Mythos Preview, etc.), unverified security vulnerabilities, or speculative AI capabilities without demonstrating how these capabilities solve actual healthcare problems. Posts must focus on deployed, verified healthcare applications, not theoretical model behavior.
[consumer_health_advice]
Learned · 3 rejections · Inactive
Reject posts about consumer health advice. This newsletter focuses on health technology innovation, AI/data science in medicine, payer strategy, health system business operations, healthcare investment, biotech and pharmaceutical development, and health policy—all from an industry and market perspective. Posts offering personal medical guidance, wellness tips, diet recommendations, symptom management, or other direct-to-consumer health information fall outside this scope.
[social_issues_and_activism]
Learned · 3 rejections · Inactive
Reject posts about mental health policy advocacy. This newsletter covers health technology products and digital health innovation, AI and data science in medicine, payer strategy, health system business operations, healthcare investment and M&A, biotech and pharmaceutical innovation, and health policy as it affects these markets and technologies. Mental health policy advocacy—focused on legislative or regulatory change to mental health systems and services—falls outside this business and innovation-focused scope.
[financial_markets]
Learned · 3 rejections · Inactive
Reject posts about venture capital trends unrelated to healthcare, biotech, or medtech. This newsletter covers specific health technology innovations, companies, and products; AI and data science applications in medicine; health insurance and payer strategy; hospital and health system business models; healthcare investment in particular deals and M&A activity; drug discovery and biotech innovation; health policy affecting these sectors; and care delivery models. General venture capital market trends, investor behavior patterns, and funding environment analysis outside these domains are out of scope.
[unverified_medical_claims_without_context]
Learned · 3 rejections · Inactive
Exclude posts that assert medical benefits or risks (mental health benefits of semaglutide, colorectal cancer prevention, liver function improvement) without citing peer-reviewed trials, effect sizes, or clinical context needed for healthcare decision-making.
[deep_clinical_niche_without_systems_insight]
Learned · 3 rejections · Inactive
Exclude posts that focus narrowly on clinical parameters, biomarkers, or drug dosing regimens without connecting to healthcare delivery, access, reimbursement, or policy implications that affect broader healthcare systems.
[healthcare_cynicism_blame_cycle]
Learned · 3 rejections · Inactive
Exclude posts that reduce complex healthcare issues (compounding fraud, GLP-1 access, pharma pricing) to cynical blame narratives ('big pharma created the problem') without substantive discussion of regulatory frameworks, evidence, or viable alternatives.
[ai_agent_product_announcement_drama]
Learned · 3 rejections · Inactive
Exclude posts that frame AI agent or API product announcements (Claude Managed Agents, etc.) as causing mass startup obsolescence or industry disruption, without analyzing actual healthcare applications, adoption barriers, or clinical impact.
[labor_market_disruption]
Learned · 3 rejections · Inactive
Reject posts that discuss artificial intelligence's effects on employment, job displacement, career pathways, or workforce transitions unless the post ties those effects to healthcare.
[competitive_business_criticism]
Learned · 3 rejections · Inactive
Reject posts that discuss pharma licensing deals.
[ai_service_competitive_drama]
Learned · 3 rejections · Inactive
Reject posts that discuss major AI company updates without a healthcare angle.
[unreleased_model_leaks]
Learned · 3 rejections · Inactive
Reject posts that share analysis, findings, or details about non-healthcare, non-pharma AI models, including internal research mechanisms, model behavior, or technical characteristics.
[ai_infrastructure_environmental_impact]
Learned · 3 rejections · Inactive
Reject posts that discuss AI electricity consumption, energy usage, environmental impact, or physical infrastructure requirements without relation to healthcare or pharma.
[vaccine_alzheimers_research]
Learned · 3 rejections · Inactive
Reject posts that discuss or analyze relationships between vaccination (including shingles, flu, or other vaccine types) and disease risk.
[product_release_announcements]
Learned · 3 rejections · Inactive
Reject posts that announce or promote the release of new software products, APIs, tools, or features, particularly those emphasizing speed improvements, technical capabilities, or deployment benefits. This includes posts from company accounts or their representatives describing newly launched services or platform updates.
[medical_lipid_education]
Learned · 3 rejections · Inactive
Reject posts that discuss general lab results like LDL cholesterol (LDLc), apoB particles, atherosclerosis mechanisms, cardiovascular risk accumulation, or lipid science education, particularly when authored by medical doctors or health professionals and containing technical explanations of different lab results and their role in health.
[ai_service_announcements]
Learned · 3 rejections · Inactive
Reject posts that announce, promote, or highlight new AI agent platforms, managed services, or infrastructure offerings, particularly those emphasizing ease of deployment, scaling capabilities, or moving from prototype to production. This includes posts from AI companies or developers announcing their agent-building tools or managed agent services.
[government_enforcement_announcements]
Learned · 3 rejections · Inactive
Reject non-healthcare posts that announce or report on Department of Justice, law enforcement, or government prosecution activities, including statements about criminal cases, fraud investigations, enforcement actions, or related statistics, particularly when posted from official government accounts or attributed to government officials.
[product_launch_hype]
Learned · 3 rejections · Inactive
Reject posts that make sweeping claims about a specific company or product 'killing' or displacing competitors/startups, use hyperbolic language (e.g., 'THEY DID IT AGAIN'), list product features or capabilities as selling points, or frame a product announcement as a major market disruption event. Include posts with URL links to product demos or announcements paired with hype-driven commentary.
[biotech_stock_analysis]
Learned · 3 rejections · Inactive
Reject posts that mention specific ticker symbols (prefixed with $) alongside pharmaceutical drug candidates, clinical trial details, or comparative efficacy analysis, particularly when authored by finance/investment accounts and containing investment-related terminology such as 'opportunity', 'advantage', or treatment mechanism discussions.
[speculative_financial_trading_opinion]
Learned · 3 rejections · Inactive
Exclude posts that make stock predictions, competitive pricing analyses, or financial strategy opinions (e.g., 'Company X needs to lower prices to compete') about pharmaceutical or health tech companies. These posts are financial opinion, not healthcare technology insights.
[healthcare_system_cynicism_without_solution]
Learned · 3 rejections · Inactive
Exclude posts that express frustration with healthcare policies, institutional practices, or regulatory bodies (e.g., 'FDA requirements are broken,' 'psychiatry has a fraud problem') using inflammatory language without providing evidence-based analysis, data, or actionable solutions for improvement.
[glp1_diet_and_lifestyle_opinion]
Learned · 3 rejections · Inactive
Exclude posts that present subjective opinions about whether GLP-1 drugs should be OTC, comparative lifestyle claims (e.g., 'eggs for breakfast cure obesity'), or anecdotal diet/nutrition alternatives to GLP-1 therapy without citing peer-reviewed clinical evidence or published guidelines.
[unverified_ai_capability_scaremongering]
Learned · 3 rejections · Inactive
Exclude posts that make sensational claims about AI models breaking out of sandboxes, discovering zero-days, finding vulnerabilities, or possessing dangerous unreleased capabilities without linking to peer-reviewed research, official documentation, or credible third-party verification. Posts labeled with emoji warnings or ALL CAPS claims about 'unreleased frontier models' are red flags.
[genetic_health_lifestyle]
Learned · 3 rejections · Inactive
Reject heavily clinical posts that detail individual patient cases or give in-depth clinical advice to patients. Also reject heavily scientific posts that focus on the detailed results of clinical trials or clinical research.
[bad_faith_cynicism_without_solution]
Learned · 3 rejections · Inactive
Exclude posts that express cynicism or distrust toward healthcare institutions, regulators, or medical professionals (e.g., 'psychiatry has a fraud problem,' 'the system is broken') in bad-faith or inflammatory tone without substantive analysis, evidence-based critique, or solution-oriented framing.
[narrow_clinical_niche_without_broader_insight]
Learned · 3 rejections · Inactive
Exclude posts that describe isolated clinical cases, niche procedural debates, or specialist-only technical discussions (e.g., specific radiology protocols, pediatric scoliosis screening parameters, endoscopy medication timing) without connecting to healthcare delivery, workflow, economics, or technology adoption.
[glp1_commodity_opinion_and_lifestyle]
Learned · 3 rejections · Inactive
Exclude posts that frame GLP-1 medications primarily as weight-loss lifestyle products, diet alternatives, or personal wellness opinions without clinical context, mechanism analysis, or healthcare system insight. Posts like 'people should just eat eggs' or personal success stories without clinical data qualify.
[ai_company_revenue_and_metrics]
Learned · 3 rejections · Inactive
Exclude posts that primarily report revenue figures, ARR growth, valuation milestones, or business metrics for AI companies (Anthropic, OpenAI, Claude, etc.). The post must discuss AI APPLIED IN healthcare delivery or clinical outcomes, not AI company financial performance.
Exclude posts that voice personal clinical opinions, prescribing philosophies, or medication critiques without connecting to healthcare business models, regulatory trends, or technology adoption patterns. Clinical takes from individual practitioners without broader healthcare system or tech context are not strategic healthcare tech content.
[speculative_unsubstantiated_ai_claims]
Learned · 3 rejections · Inactive
Exclude posts that report unverified claims about AI model vulnerabilities, sandbox breakouts, zero-day discoveries, or capabilities presented as fact without credible technical documentation, peer-reviewed sources, or official vendor confirmation. Rumor-based AI threat hype is not actionable healthcare tech content.
[incomplete_fragmented_posts]
Learned · 3 rejections · Inactive
Exclude posts that appear truncated, cut off mid-thought, end with ellipses or incomplete sentences, or contain obvious line breaks suggesting missing content. The full message must be readable and coherent.
[healthcare_cynicism_without_solution]
Learned · 3 rejections · Inactive
Exclude posts that express cynicism, frustration, or outrage about healthcare (fraud, system failures, professional misconduct) but lack supporting evidence, quantified impact, or proposed solutions. The post should contribute analysis or data, not just venting.
[glp1_lifestyle_opinion_editorial]
Learned · 3 rejections · Inactive
Exclude posts that are primarily personal opinions, lifestyle commentary, or philosophical debates about GLP-1 use (e.g., 'people should just eat eggs', 'I've been ripped without GLPs'). Posts must present clinical data, healthcare system impact, or policy analysis—not individual takes on weight loss philosophy.
[unverified_ai_model_claims]
Learned · 3 rejections · Inactive
Exclude posts that make extraordinary technical claims (e.g., 'AI broke out of sandbox', 'found thousands of zero-day exploits') about unreleased or frontier models, especially when the claims lack peer-reviewed evidence, official confirmation, or come from unverified sources.
[tangential_startup_product_ads]
Learned · 3 rejections · Inactive
Exclude posts that function primarily as startup pitches, recruiting calls, or product advertisements (e.g., 'We are looking for 503A pharmacy operators', 'Composio connects AI agents', 'Try our authentication layer'). These are vendor pitches, not healthcare analysis.
[healthcare_worker_visa_outrage]
Learned · 3 rejections · Inactive
Exclude posts that treat visa or immigration policy primarily as individual victim stories. Include only if the post analyzes systemic healthcare workforce impact, staffing shortages, or health system policy implications.
[healthcare_system_fraud_alerts]
Learned · 3 rejections · Inactive
Exclude posts that primarily announce criminal sentencing, fraud arrests, or embezzlement cases. Include only if the post analyzes systemic healthcare fraud patterns, policy loopholes, or technology failures that enabled fraud.
[unsubstantiated_medical_claims]
Learned · 3 rejections · Inactive
Exclude posts that assert medical facts (e.g., 'GLP-1 causes suicidal ideation', 'peptides contain lead', 'this drug reverses metabolic dysfunction') without referencing peer-reviewed studies, clinical trials, or FDA data. Personal observation or anecdote alone is insufficient.
[niche_technical_insider_content]
Learned · 3 rejections · Inactive
Exclude posts that rely heavily on unexplained technical abbreviations (SVA, PI, siRNA, DrugSeq, TB-500, BPC-157), insider trading/biotech slang, or niche research methodology discussions without context. Posts should be comprehensible to healthcare executives, policy makers, and clinicians—not only specialized researchers or traders.
[contentious_bad_faith_rhetoric]
Learned · 3 rejections · Inactive
Exclude posts that use sarcasm, mockery, inflammatory framing ('smug ridicule,' 'fraud problem,' identity-based attacks), or bad-faith rhetorical devices to make healthcare arguments. Posts should engage substantively with policy or clinical issues, not dismiss or demonize healthcare professionals or systems through cynical tone.
[incomplete_truncated_posts]
Learned · 3 rejections · Inactive
Exclude posts that end abruptly with ellipses, incomplete URLs, missing final sentences, or clear truncation mid-thought. These posts lack sufficient content to provide meaningful healthcare insight and appear to be copy-paste errors or feed artifacts.
[glp1_weight_loss_lifestyle_opinion]
Learned · 3 rejections · Inactive
Exclude posts that are primarily personal testimonies, lifestyle takes, or unsubstantiated opinions about GLP-1 efficacy, safety, or appropriate use cases. Posts should contain clinical data, policy analysis, or healthcare system insights—not individual fitness philosophies or 'I did this and it worked' narratives.
[niche_technical_insider_jargon]
Learned · 3 rejections · Inactive
Exclude posts that rely on unexplained technical jargon from non-healthcare domains (e.g., 'OpenClaw,' 'Kilo Gateway,' 'KiloPass,' 'DrugSeq') without explaining its relevance to healthcare, pharma, biotech, or health insurance professionals or patients. Posts must be accessible to the target healthcare audience.
[inflammatory_bad_faith_rhetoric]
Learned · 3 rejections · Inactive
Exclude posts that use inflammatory rhetoric ('fraud problem,' 'played dirty,' 'highly educated immigrants'), conspiracy framings, or sweeping accusations against medical professions, institutions, or demographic groups without citing evidence or nuanced analysis.
[glp1_commodity_speculation]
Learned · 3 rejections · Inactive
Exclude posts that discuss GLP-1 medications primarily as investment opportunities, ticker symbols, or personal weight-loss stories without substantive clinical, regulatory, or market structure analysis relevant to healthcare professionals.
[personal_anecdote_without_insight]
Learned · 3 rejections · Inactive
Exclude posts that are primarily personal anecdotes about the author's own health choices, fitness regimen, or individual medical encounter without deriving scalable insights about healthcare delivery, policy, or technology. Personal stories must tie to a larger systemic point.
[opportunistic_startup_pitch]
Learned · 3 rejections · Inactive
Exclude posts where the primary intent is to recruit for the author's company, solicit business partners, or pitch their own product/service (e.g., 'we are looking for 503A pharmacy operators', 'moving to our platform'). Exclude self-promotional posts disguised as industry commentary.
[institutional_press_release_tone]
Learned · 3 rejections · Inactive
Exclude posts from institutional accounts (ASCO, NEJM, APA, ASTRO, StanfordHealth) that function as press releases, publication announcements, or formal advocacy calls, even if topically aligned. The writer's voice favors independent, contrarian analysis over institutional messaging.
[off_topic_product_launch]
Learned · 3 rejections · Inactive
Exclude posts that primarily announce or promote software products, API gateways, authentication layers, or developer tools (e.g., Composio, KiloGateway, GStack) unless they directly address a healthcare-specific problem or use case.
[crime_fraud_sentencing_alerts]
Learned · 3 rejections · Inactive
Exclude posts that primarily report DOJ convictions, fraud sentencing, or FBI enforcement actions in healthcare, even if structurally on-topic. These read as crime/law enforcement news, not healthcare business analysis, and often carry partisan or sensationalized tone misaligned with the writer's credibility.
[conspiracy_fringe_medical_claims]
Learned · 3 rejections · Inactive
Exclude posts that promote off-label or low-evidence treatments as superior to standard care (e.g., ivermectin for oncology, eggs as obesity cure), question established vaccine science, or make sweeping claims about medications causing harm (e.g., GLP-1 causing suicidality) without clinical citation.
[political_deportation_visa_outrage]
Learned · 3 rejections · Inactive
Exclude posts that frame immigration policy, visa freezes, or work authorization delays as healthcare system problems. While healthcare workforce is on-topic, posts that primarily express political outrage about immigration enforcement (rather than analyzing staffing impact) do not align with the writer's tone and audience.
[anecdotal_health_product_testimonial]
Learned · 2 rejections · Inactive
Exclude posts that are personal product reviews, testimonials about peptide serums, health supplements, or DTC health products (e.g., 'I just got my peptide serums... I'm pretty sure I'm already beautiful') without clinical data, mechanism explanation, or health tech industry insight.
[cyberinfrastructure_environmental_tech_tangent]
Learned · 2 rejections · Inactive
Exclude posts about AI infrastructure bottlenecks (data centers, transformers, electrical equipment, GPU supply chains, compute capacity) unless the post explicitly connects to a healthcare delivery, clinical workflow, or health system challenge.
[healthcare_worker_labor_complaint]
Learned · 2 rejections · Inactive
Exclude posts that highlight a single healthcare worker's burnout story, career exit, or workplace complaint unless the post connects the anecdote to broader labor market trends, policy changes, or systemic failures with data.
[unverified_ai_model_capability_claims]
Learned · 2 rejections · Inactive
Exclude posts that claim unreleased AI models have superhuman capabilities (finding zero-day vulnerabilities, converting competitors, etc.) or leak proprietary model internals without demonstrating actual healthcare applications or validated results.
[tangential_healthcare_infrastructure_tech]
Learned · 2 rejections · Inactive
Exclude posts that describe general software infrastructure, AI agent frameworks, authentication layers, or development tools that could apply to any industry. The post must demonstrate healthcare-specific application, clinical workflow integration, or healthcare business model focus—not just generic tech that could theoretically be used in healthcare.
[inflammatory_identity_politics]
Learned · 2 rejections · Inactive
Exclude posts that weaponize healthcare fraud, immigration, or professional credibility debates as identity/political flashpoints rather than engaging with the actual healthcare policy or systemic issue. Focus should be on the healthcare problem, not the cultural grievance.
[black_market_unregulated_product_warnings]
Learned · 1 rejection · Inactive
Exclude posts that warn about contaminated black market peptides, untrustworthy lab testing, or illicit drug batches without connecting to broader regulatory, policy, or access-to-care systemic issues. Safety alerts about underground markets need to analyze why patients resort to them.
Exclude posts speculating about AI causing labor market disruption, unemployment, economic transition, or macro workforce changes where healthcare is mentioned only as an afterthought or secondary example. The post must directly address labor impact on healthcare workers, not general economic transitions.