
Stanford says AI is outrunning everything (and someone tried to kill Sam Altman)

Daniel
BullCity AI
AI news that actually matters
Issue #017 • April 15, 2026
Stanford released its annual AI report card. Someone tried to kill Sam Altman. Twice. Anthropic passed OpenAI in revenue and then couldn't keep the servers running. And the courts split on whether the Pentagon can blacklist an AI company for having opinions. Let's get into it.
📊 Stanford Says AI Is Outrunning Everything. The Data Backs It Up.
Stanford HAI published the 2026 AI Index this week. Over 400 pages of data on where AI actually stands. I read the whole thing. Five numbers that matter:

SWE-bench Verified went from 60% to near 100% in a single year. That's a coding benchmark built from real GitHub issues. On Humanity's Last Exam, a benchmark of the hardest questions domain experts could write, accuracy jumped from 8.8% to over 50%. The plateau everyone predicted hasn't shown up.

Generative AI hit 53% population adoption in three years. Faster than the personal computer. Faster than the internet. Organizational adoption hit 88%. Four out of five university students use it for coursework. This is mainstream.

China closed the model performance gap. U.S. and Chinese models traded the lead multiple times since early 2025. The U.S. still dominates capital ($285.9B in private investment vs. China's $12.4B) and infrastructure (5,400+ data centers). But China leads publications, patents, and robotics.

The Foundation Model Transparency Index dropped from 58 to 40. The most capable models share the least about how they're built. Over 90% of notable models come from private companies. Training code, dataset sizes, parameter counts. All withheld.

AI researcher migration to the U.S. dropped 89% since 2017. An 80% decline in just the last year. Only 31% of Americans trust their government to regulate AI. Lowest of any country surveyed.
My take:
The number I keep coming back to: estimated consumer value of generative AI tools hit $172 billion annually. The median value per user tripled between 2025 and 2026. People aren't just trying these tools. They're getting measurable value. That's the data point that settles the bubble question for me. Full report →
🚨 Someone Tried to Kill Sam Altman. Then It Happened Again.
At 3:37 a.m. on April 10, a 20-year-old from Texas named Daniel Moreno-Gama threw a Molotov cocktail at Sam Altman's San Francisco home. A security guard extinguished the fire. Moreno-Gama then went to OpenAI's headquarters, smashed a chair against the glass doors, and told arriving officers he wanted to "burn it down and kill anyone inside."

Police found a document on him titled "Your Last Warning." It listed names and addresses of AI company CEOs, board members, and investors. A second section warned of humanity's "impending extinction" from AI. He closed with a letter directly to Altman: "if by some miracle you live, then I would take this as a sign from the divine to redeem yourself."

He's been charged with attempted murder and attempted arson at the state level, plus federal charges for an unregistered firearm and attempted property destruction with explosives. Domestic terrorism charges are also being considered.

Two days later, a second attack. A car stopped outside Altman's home early Sunday morning. Someone fired a gun at the house. Two individuals were arrested.

Altman wrote afterward that he'd "underestimated the power of words and narratives" and called for toning down "the rhetoric and tactics" across the AI industry.
My take:
Anti-AI marches in London a few weeks ago. A kill list now. Two attacks on one CEO's home in 48 hours. The fear about AI is real, and some of it is warranted. But we've crossed a line when that fear turns into firebombs and gunfire. The AI industry should take Altman's post seriously: the gap between real risks and imagined ones is where violent radicalization lives, and it's getting wider.
โš–๏ธ The Pentagon Case Just Split the Courts (And Got More Ironic)
Two courts. Two opposite conclusions. Same case.

On April 8, the D.C. Circuit denied Anthropic's request to block the Pentagon's supply-chain risk designation. The three-judge panel said the "equitable balance cuts in favor of the government" because the case involves "vital AI technology during an active military conflict." Two of the three judges were Trump appointees.

Meanwhile, Judge Rita Lin's San Francisco injunction from March 26 still stands. Her ruling called the Pentagon's actions "Orwellian" and found the designation was retaliation for Anthropic's public criticism.

Bottom line: Anthropic is locked out of new Pentagon contracts but can still serve every other federal agency. Both sides agreed to fast-track the full case.

And then Project Glasswing happened. The same company the Pentagon blacklisted for wanting weapons and surveillance guardrails just built the most capable cybersecurity tool in history and started briefing CISA and Commerce on its capabilities. The Washington Post editorial board called the ban "shortsighted." That might be underselling it.

Axios coverage →    Defense startup boom (Federal Times) →
💰 Anthropic Passed OpenAI in Revenue. Then the Servers Started Crashing.
Anthropic's annualized revenue hit $30 billion. Up from $9 billion at the end of 2025. Up from $1 billion fifteen months ago. They've passed OpenAI's run rate of about $24 billion.

Enterprise drives 80% of it. Over 1,000 companies spend $1M+ per year. Claude Code alone generates $2.5 billion annualized. Eight of the Fortune 10 are customers. The Pentagon ban, the Super Bowl ads, the ChatGPT-to-Claude migration wave. All of it fed the surge.

The catch: the infrastructure can't keep up. Users are reporting degraded performance. Five major outages in March. Claude Code users burning through 5-hour sessions in 90 minutes. Anthropic quietly reduced default "effort" levels to save tokens, and heavy users noticed immediately.

An OpenAI memo obtained by CNBC claimed Anthropic made a "strategic misstep" by not securing enough compute and is "operating on a meaningfully smaller curve." Marc Andreessen publicly questioned whether Mythos is being withheld because of safety concerns or because Anthropic can't afford to run it at scale.

Meanwhile, OpenAI is projecting $14 billion in losses for 2026. Anthropic projects positive cash flow by 2027. The company that most people couldn't name two years ago now out-earns the company that started the consumer AI category.
My take:
$30B ARR is real. The compute crunch is also real. Growing this fast with this little infrastructure headroom is a problem money can solve, but not overnight. Anthropic's new deal with Google and Broadcom for multiple gigawatts of TPU capacity doesn't come online until 2027. Between now and then, they need to keep the best product in AI running on servers that weren't sized for this many users. That's the actual test.
⚡ Quick Hits
ASML raised its 2026 sales forecast today. Now projecting €36-40 billion, up from €34-39 billion. The company that makes the machines that make AI chips still can't keep up with demand. Bloomberg →

Three-quarters of AI's economic gains captured by 20% of companies. PwC study. The difference between leaders and everyone else: leaders point AI at growth and new markets, not cost reduction. Report →

OpenAI projecting $100B in annual ad revenue by 2030. ChatGPT ad pilot hit $100M annualized in two months. Also projecting $14B in losses this year. Only 5.5% of 900 million weekly users pay. Details →

Software developer employment down 20% for workers aged 22-25. Stanford data. McKinsey says a third of organizations expect to shrink engineering teams. Entry-level is getting hit first. Stanford →

Pentagon's Anthropic ban creating a defense startup boom. Small companies like Smack Technologies and EdgeRunner AI report surging interest from generals. A Space Force contract stuck in procurement for a year got signed in weeks. Security clearance timelines compressing from 18 months to 3. Federal Times →
🧠 One Thing I'm Thinking About
Read the Stanford report and the coverage of the Altman attacks back to back. One says AI is being adopted faster than any technology in history. The other says someone made a kill list of AI executives and acted on it.

Stanford put a number on it: 31% of Americans trust their government to handle AI. Lowest of any country surveyed. Adoption at 53%. Trust at 31%. That gap is where the fear lives.

Last week Anthropic released Mythos and the security community started losing sleep. This week a 20-year-old threw a firebomb at a CEO's front door. Anti-AI marches through London. Congressional hearings dominated by industry lobbyists while academic voices disappear from the witness lists.

The people building this stuff have a responsibility. To build safely. And to talk straight about what these systems can and can't do. The gap between real risks and imagined ones is where radicalization grows. That gap is widening. And nobody with the reach to close it is trying hard enough.
📅 What's Coming
April 17: Anthropic OpenClaw credit redemption deadline
June-July: FIFA World Cup across North America (AI analytics for all 48 teams)
Q4 2026: Anthropic IPO (reported target)
Q4 2026: OpenAI IPO (reported target)
H2 2026: NVIDIA Vera Rubin platform ships
H2 2026: OpenAI "Sweetpea" consumer hardware unveil
๐Ÿ“ Meanwhile in the Triangle
NC Treasurer Brad Briner went all-in on AI this week, expanding tools across his entire department after a pilot at NCCU in Durham showed 10% productivity gains. NC Central also launched the Institute for Artificial Intelligence with a $1M Google.org grant. First HBCU with a program like it.

But entry-level tech hiring is falling. Epic laid off 1,000+ in March, 200+ in Cary. CS enrollment in NC public universities is up 31% since 2019. The Stanford report says developer employment for 22-25 year olds dropped 20%. The question for Triangle universities: are we training students for the jobs that exist, or the ones that existed two years ago?

Forward this to someone building with AI in the Triangle. Or subscribe at bullcity.ai →
See you next Wednesday.
