The AI moment everyone missed (and why it matters)
Reasoning models just changed everything. Plus: the tools worth your time this week.
Welcome to the first issue of BullCity AI
You're reading this because you're curious about AI beyond the hype cycle. Good. That's exactly where the interesting stuff happens.
Here's the plan: Every week, I'm cutting through the noise to bring you the AI developments that actually matter, the tools worth your time, and the insights that'll help you think differently about what's coming.
No buzzwords. No FOMO. Just signal. Let's jump in.
🔥 The Big Story: The Reasoning Models Are Here (And They're Weirder Than Expected)
OpenAI's o1 and o3 models don't just answer questions faster - they think out loud. They show their work. And that changes everything.
Why this matters: We've spent years teaching AI to be confident. Now we're teaching it to be uncertain, to reason, to check its work. It's like watching a very smart person think through a problem in real time instead of just giving you the answer.
The practical impact? These models are crushing benchmarks in math, coding, and science. But more importantly, they're exposing the reasoning process itself. You can see where they get stuck, where they correct themselves, what assumptions they make.
The thing nobody's talking about: This makes AI explainable by default. When a model shows you its thought process, "black box AI" becomes a lot less black.
📰 What Else Happened This Week
Google's Gemini 2.0 Flash Thinking Mode - They're not just following OpenAI - they're making reasoning models free. Gemini 2.0 Flash with experimental thinking is available in AI Studio right now, no waitlist. It's fast, it thinks, and it costs nothing. The race to commoditize reasoning has officially started.
Anthropic's Computer Use Hits General Availability - Claude can now actually use your computer. Click buttons, fill forms, navigate websites. It's clunky right now, but this is the first real step toward AI that doesn't just tell you how to do something - it does it for you.
Meta's Open Source Play Gets Serious - Llama 4 rumors suggest it'll match GPT-4 level performance. If true, the gap between open and closed models just got a lot smaller. What happens when state-of-the-art AI is free to download and run?
🛠️ Tools Worth Your Time
- Cursor - If you write code and haven't tried this yet, stop reading and go download it. It's VSCode but with AI that actually understands your entire codebase. Tab to accept suggestions, Cmd+K to edit with AI. It's the first coding tool that feels like the future.
- NotebookLM - Google's sleeper hit. Upload your documents and it generates a genuinely good podcast where two AI hosts discuss your content. Bizarre? Yes. Useful for processing long documents? Also yes.
- Napkin AI - Turn text into visual diagrams instantly. It's not perfect, but it's perfect enough for brainstorming sessions and quick presentations.
💭 One Thing I'm Thinking About
Everyone's focused on what AI can do today. I'm more interested in what happens when these capabilities compound. When you combine reasoning models + computer use + multimodal understanding + agents, you don't get "better chatbots." You get something fundamentally different.
The interesting question isn't "Can AI write code?" anymore. It's "What happens when AI can reason about a problem, research solutions, write code, test it, and deploy it - all while you're asleep?" We're about to find out.
🎯 Quick Hits
- Character AI hit 20M daily users - more than ChatGPT at its peak.
- Apple's on-device AI models are surprisingly good. The M-series chips were always about this moment.
- AI search is eating Google faster than expected. Perplexity just raised at a $9B valuation. ChatGPT search launched. The 10 blue links are dying in real time.
📍 Local Angle
If you're in the Triangle, keep an eye on what's happening at Duke and UNC. The AI research coming out of these labs is world-class, but most of it never makes it past academic papers.
