
A lobster broke the internet (and Anthropic made it molt)

Daniel

This week a lobster broke the internet. Also, Big Tech is about to report earnings and Wall Street wants to know if $475 billion in AI spending is actually going to pay off.

Let's get into it.


🦞 The Big Story: The Clawdbot Craze (Now Moltbot)

Clawdbot is an open-source, self-hosted AI assistant created by Peter Steinberger, the founder of PSPDFKit. It runs on your own hardware (Mac Mini, VPS, Raspberry Pi) and connects to messaging apps you already use: WhatsApp, Telegram, Discord, Slack, Signal, iMessage. You talk to it like you'd text a friend, and it actually does things.

It can control your browser, manage email, execute terminal commands, book flights, check you in, control smart home devices, and proactively reach out with morning briefings. One user documented how Clawdbot helped them buy a car by searching dealerships, filling out inquiry forms, and negotiating prices - they saved $4,200.

The project exploded: from 5,000 GitHub stars to 30,000 in days. The Discord community hit 8,900+ members. Cloudflare stock jumped 10%.

Then Anthropic sent a trademark request: the name "Clawdbot" (a play on Claude, the lobster mascot from Claude Code) was too close. The email arrived at 5am. By 6:14am Steinberger had decided: "fuck it, let's go with moltbot." The lobster had molted.

The chaos that followed: Bots sniped the @clawdbot handle, posting crypto wallet addresses. Steinberger accidentally renamed his personal GitHub account in the panic. Bots sniped that too. Fake tokens, pump-and-dump schemes, security researchers found hundreds of users running instances with exposed API keys.

Why this matters beyond the drama: Moltbot represents the first viral open-source AI agent that regular developers can actually run. People want AI that acts, not just AI that chats. The question is who builds the version that's actually safe to run.

💰 Big Tech Earnings: The $475 Billion Question

Earnings season kicks off for tech's biggest names. Microsoft, Meta, and Tesla report Wednesday; Apple reports Thursday; Alphabet and Amazon follow next week.

The number: $475 billion. That's how much Microsoft, Meta, Alphabet, and Amazon are collectively expected to spend on AI infrastructure in 2026, up from $350B in 2025 and $230B in 2024.

The tension: 2025 was the year Wall Street accepted massive AI spending. 2026 is the year they want to see returns. Meta lifted its capex guidance to $70-72 billion without explaining ROI - stock dropped 11% the next day.

Capex guidance matters more than earnings this quarter:

  • Microsoft: fiscal 2026 capex projected at $99 billion
  • Amazon: $134 billion
  • Meta: could hit $95 billion
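A quick back-of-the-envelope check on the figures above. Note that no Alphabet number is cited, so its share here is inferred as the remainder of the $475B total, an assumption rather than a reported figure:

```python
# Back-of-the-envelope check on the cited 2026 capex figures
# (billions of USD). Alphabet's number isn't given in the piece,
# so we infer it as the remainder -- an assumption, not a report.
cited = {"Microsoft": 99, "Amazon": 134, "Meta": 95}
total_expected = 475

cited_sum = sum(cited.values())
implied_alphabet = total_expected - cited_sum

print(f"Sum of cited figures: ${cited_sum}B")            # $328B
print(f"Implied Alphabet share: ~${implied_alphabet}B")  # ~$147B
```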

My take: This earnings season will show whether companies doing the spending are starting to see returns. Watch Azure growth numbers, AWS AI workload commentary, and anything Meta says about ad targeting improvements.

🎯 Quick Hits

  • Anthropic's MCP becomes industry standard - Model Context Protocol hit 97 million monthly SDK downloads and 10,000 active servers. OpenAI, Google, Microsoft all support it. Donated to Linux Foundation. The "USB-C for AI agents" moment.
  • Samsung nearing HBM4 production - Next-gen high-bandwidth memory chips. Could ease AI compute constraints.
  • EU opens formal DMA proceedings against Google - Whether AI services can remain tightly coupled with search and Android.
  • AI coding hits mainstream - Stack Overflow 2025: 65% of developers use AI coding tools at least weekly.
  • Vention raises $110M for AI robotics - 25,000+ robots deployed across 4,000 factories.

💭 One Thing I'm Thinking About

The Moltbot saga reveals the real tension: execution vs. safety. Everyone wants AI that does things. But the moment you give an AI agent real capabilities (system access, credentials, autonomy), you create attack surfaces that didn't exist before.

Moltbot users had API keys, Telegram bot tokens, Slack OAuth credentials, and months of conversation histories exposed. Not because the project was careless, but because the security model for autonomous AI systems is fundamentally immature.
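Part of the fix is mundane hygiene: credentials should live in the environment, not in config files or source, so an exposed instance or leaked repo doesn't leak the token with it. A minimal sketch (a hypothetical helper, not Moltbot's actual code):

```python
import os

def load_secret(name: str) -> str:
    """Read a credential from the environment and fail fast if missing.

    Hypothetical helper -- not Moltbot's actual code. The point is that
    tokens never sit in config files or source, so a leaked repo or an
    exposed instance doesn't hand over the credential along with it.
    """
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set; refusing to start")
    return value

# e.g. token = load_secret("TELEGRAM_BOT_TOKEN")
```

Fail-fast matters here: an agent that silently starts without its credentials tends to get them pasted into chat or config later, which is exactly how keys end up exposed.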

We're building agents faster than we're building guardrails. The companies that figure out how to ship capable agents with enterprise-grade security will own this market.
