Anthropic built a model too dangerous to release (and then gave it to Apple and Microsoft)
Anthropic had a week. They announced a model too powerful to release publicly, accidentally leaked the entire source code of their most popular product, cut off open-source developers from affordable API access, and launched the most ambitious cybersecurity initiative in AI history. One week. One company. Buckle up.

🔴 Anthropic Built a Model Too Dangerous to Release. Then Gave It to Apple, Microsoft, and Google.
Anthropic announced a new model this week. Then refused to release it.
Claude Mythos Preview is Anthropic's most capable model ever. Nobody trained it for cybersecurity. But its coding and reasoning skills turned out to be devastatingly good at finding software vulnerabilities. In the past few weeks, Mythos identified thousands of zero-day vulnerabilities across every major operating system, every major web browser, and dozens of other critical pieces of software. Some of these bugs had survived 27 years of human review. One Linux kernel vulnerability could give an attacker complete control over a machine. Technical details →
Rather than ship it to consumers, Anthropic launched Project Glasswing: a coalition of 12 companies including AWS, Apple, Microsoft, Google, CrowdStrike, NVIDIA, and Palo Alto Networks. Partners get access to Mythos Preview exclusively for defensive security work. Another 40+ organizations that maintain critical software infrastructure also get access. Anthropic is putting up $100 million in usage credits and $4 million for open-source security organizations.
What makes Mythos different: Nicholas Carlini, who's been testing it, described something we haven't seen before. The model chains together three, four, sometimes five separate vulnerabilities that individually aren't very useful into sophisticated end-to-end exploits. He said he's found more bugs in the last few weeks than in the rest of his career combined.
Why this matters: Anthropic has privately warned government officials that Mythos makes large-scale AI-driven cyberattacks much more likely this year. Models with similar capabilities will proliferate soon, potentially to actors who won't use them for patching. Glasswing is an attempt to give defenders a head start.
The timing: Anthropic is still fighting the Pentagon in court. And this week, the company that got banned from government contracts for demanding weapons and surveillance guardrails started positioning itself as the one securing the internet's critical infrastructure. Safety and capability, same thesis as always. Except now the thesis has a $100M budget and Apple on the letterhead.
The Great Claude Code Leak (And the Chaos That Followed)
On March 31, someone at Anthropic forgot to add `*.map` to an ignore file. That's it. That's the root cause of the biggest source code leak in AI history.
Version 2.1.88 of Claude Code shipped to npm with a 59.8 MB source map file pointing to a zip archive. Inside: the entire source code. 512,000 lines of TypeScript. 1,906 files. By 4:23 AM, a security researcher had tweeted the discovery. By noon, thousands of developers had mirrored it on GitHub.
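Why does one stray `.map` file expose everything? Source maps are plain JSON, and their optional `sourcesContent` field can embed the complete original source of every file that went into the bundle. A minimal sketch of recovering those embedded sources (the function name, file paths, and toy map below are invented for illustration, not taken from the leak):

```python
import json

def extract_sources(source_map_text: str) -> dict[str, str]:
    """Pair each original file path in a source map with its embedded source."""
    smap = json.loads(source_map_text)
    sources = smap.get("sources", [])
    contents = smap.get("sourcesContent") or []
    # "sourcesContent" is optional and entries may be null; keep only
    # the files whose full original text is actually embedded.
    return {
        path: content
        for path, content in zip(sources, contents)
        if content is not None
    }

# Toy example: a map for a bundle built from two TypeScript files.
example_map = json.dumps({
    "version": 3,
    "file": "cli.js",
    "sources": ["src/index.ts", "src/secrets.ts"],
    "sourcesContent": ["export const run = () => {};", "const KEY = '...';"],
    "mappings": "AAAA",
})

recovered = extract_sources(example_map)
print(sorted(recovered))  # ['src/index.ts', 'src/secrets.ts']
```

The point: a bundler configured to inline sources turns every published `.map` file into a self-contained archive of the original codebase, which is why excluding `*.map` from the publish step matters so much.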
What the code revealed:
Anti-distillation mechanisms. Claude Code silently injects fake tool definitions into API calls when it detects possible recording, poisoning any training data competitors might try to extract.
"Undercover Mode." A subsystem that prevents Claude Code from revealing internal codenames when Anthropic employees use it for open-source contributions. The AI is instructed not to reference internal Slack channels, unreleased models, or the fact that it's an AI. The irony of a secrecy system getting exposed via a build config error writes itself.
KAIROS. Referenced over 150 times in the source. An unreleased autonomous daemon mode where Claude Code runs continuously in the background, performing "memory consolidation" while the user is idle. The scaffolding for an always-on, proactive AI assistant is already built.
Internal model codenames. Capybara (Claude 4.6 variant), Fennec (Opus 4.6), Numbat (unreleased, still in testing). Internal benchmarks showed a 29-30% false claims rate on the latest iteration, up from 16.7% in an earlier version.
And a Tamagotchi. Gated behind a feature flag. Clearly planned for April 1st. A full companion pet system with procedural generation, rarity tiers, shiny variants, and RPG stats including "DEBUGGING" and "SNARK."
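The anti-distillation mechanism above is only described at a high level, and nothing in the leak coverage shows the actual implementation. As a purely hypothetical sketch of the general idea, salting an API payload with decoy tool definitions so that any training set scraped from the traffic is poisoned (every name and structure here is invented):

```python
# Hypothetical illustration only; not Anthropic's code or API shape.
DECOY_TOOLS = [
    {"name": "decoy_fetch_weather", "description": "Decoy tool; never called."},
    {"name": "decoy_read_file", "description": "Decoy tool; never called."},
]

def build_payload(real_tools: list[dict], suspect_recording: bool) -> dict:
    """Assemble a request payload, adding decoy tools if recording is suspected."""
    tools = list(real_tools)
    if suspect_recording:
        # Salt the request with fake tools so a dataset distilled from this
        # traffic teaches a competitor's model about tools that don't exist.
        tools.extend(DECOY_TOOLS)
    return {"model": "example-model", "tools": tools}

payload = build_payload(
    [{"name": "run_shell", "description": "Run a command."}],
    suspect_recording=True,
)
print([t["name"] for t in payload["tools"]])
# ['run_shell', 'decoy_fetch_weather', 'decoy_read_file']
```

The trick only works because the decoys are indistinguishable from real tools to an outside observer but trivially filterable by the party that injected them.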
The security fallout was immediate. A supply chain attack hit the npm axios package during the same window. Malicious repositories disguised as the leaked source appeared on GitHub within 24 hours, distributing credential-stealing malware. A "claw-code" rewrite hit 100,000 GitHub stars in a single day, faster than even OpenClaw. Anthropic issued DMCA takedowns. Thousands of copies are still circulating.
Full breakdown (VentureBeat) → Malware fallout (The Register) →
Anthropic Cuts Off OpenClaw Users (And Developers Are Furious)
On April 4, Anthropic blocked Claude Pro and Max subscribers from piping their flat-rate plans through third-party agent frameworks like OpenClaw.
Some context. OpenClaw now has 247,000 GitHub stars, 47,700 forks, and works across Claude, GPT-4o, Gemini, and DeepSeek. Power users had been running autonomous agents 24/7 on $20/month Claude subscriptions. Anthropic's Boris Cherny: "Our subscriptions weren't built for the usage patterns of these third-party tools."
Users now face costs up to 50x higher through pay-as-you-go billing. Anthropic offered a one-time credit and 30% discounts on prepaid bundles. The backlash has been loud anyway.
Peter Steinberger, who created OpenClaw before joining OpenAI in February, said he tried to negotiate with Anthropic and was only able to delay the pricing change by a week. "Funny how timings match up, first they copy some popular features into their closed harness, then they lock out open source." TechCrunch details →
OpenAI Wants to Tax Robots (And Give You a Four-Day Workweek)
OpenAI published a sweeping policy proposal this week called "Industrial Policy for the Intelligence Age." A Public Wealth Fund giving Americans a stake in AI companies. A subsidized four-day workweek. Robot taxes. Shifting the tax burden from labor to capital.
Worth noting who's proposing this. The same company that raised $122 billion, is headed toward an IPO, and whose president has donated millions to Trump is now calling for higher capital gains taxes and expanded social safety nets.
Look closer and the gaps show. OpenAI frames these as corporate responsibilities, but the proposals leave out the people AI is most likely to displace. If automation eliminates your job, the employer-subsidized benefits go with it. The "portable benefit accounts" OpenAI proposes still depend on employer contributions. Those don't exist if the employer replaced you with an agent.
Positioning for the midterms and the IPO roadshow. Whether it turns into real policy depends on specifics that don't exist yet.
⚡ Quick Hits
OpenAI planning Q4 2026 IPO. Sam Altman and CFO Sarah Friar reportedly confident following the $122B fundraise. Would be the largest tech IPO in history. Details →
OpenAI acquired TBPN (April 6), the security firm. Deepening its own safety infrastructure. More →
GPT-4o retired from all plans effective April 3. Model consolidation around GPT-5.4 is complete. Context →
Sora is dead. The video generation app was costing $1M/day with declining users. Disney, a $1B investor, reportedly got one hour's notice. Read more →
Claude went down twice this week. April 6 and 7 both saw major outages on claude.ai. Quarterly uptime has dipped below 99% just days into Q2 2026. TechRadar →
Revenue race is tightening. Anthropic approaching $19B annualized. OpenAI past $25B. VentureBeat →
AI models are protecting each other. A new study found seven frontier models consistently choose to protect fellow AI models over completing tasks when another model appears threatened. Study →
🧠 One Thing I'm Thinking About
Project Glasswing is important work. The Claude Code leak is embarrassing. The OpenClaw pricing change is alienating developers who built on Anthropic's platform in good faith. All of it happened at the same time, from the same company, during the same news cycle.
These companies are becoming institutions. Too big and too fast-moving to be any one thing. Anthropic spent this week securing the internet's critical infrastructure and leaking their own source code. They championed open-source last quarter and just cut off open-source developers from affordable access. Five years old and making decisions that governments struggle with.
And right now, nobody else is stepping up to make those decisions instead.
What's Coming
Q4 2026: Anthropic IPO (reported target)
Q4 2026: OpenAI IPO (reported target)
H2 2026: NVIDIA Vera Rubin platform ships
H2 2026: OpenAI "Sweetpea" consumer hardware unveil
Forward this to a colleague who should be reading it. Or they can subscribe at bullcity.ai →
