From Dragon-Slayer Skeptic to AI Convert

In astronomy, there’s a concept called proper motion — stars that appear fixed are actually drifting against the sky relative to each other. It only becomes obvious when measuring carefully with reference points many decades apart. What we see today is a temporary snapshot of a constantly moving firmament — an ocean of drifting stars across the abyss. Eventually our current constellations will be unrecognizable, but it will take tens or hundreds of thousands of years for that to be the case.

Until recently, my mental model for artificial intelligence was that we wouldn’t see dramatic improvements within a single career. I was confident that LLM technologies would progress with the same gradualness as constellations changing shape: iterative, requiring painstaking measurement. Now, I view some of the technical advances of agentic AI in late 2025 and early 2026 as more historic than the ChatGPT reveal in November 2022.

Back in late 2023, I wrote a post called “AGI is a Dragon,” in which I argued — with some confidence — that general intelligence in machines was a mythological figure: beautiful to imagine, impossible to build. The compute costs alone made it implausible. I wasn’t wrong exactly. But after a year of using these tools every day, of getting more done with less cognitive effort than I expected, the ground has moved under my argument. Pretending otherwise would be dishonest.

We tend to anchor ourselves to the snapshot we happen to be standing in, and mistake it for permanence.


My skepticism wasn’t unfounded. It rested on two specific problems that I didn’t see a path through. The first was hallucination — the tendency for LLMs to generate plausible-sounding nonsense with absolute confidence. If the machine couldn’t be trusted to know what it didn’t know, it would always need a human standing over its shoulder. The second was context pollution: the longer a conversation went, the more the model drifted, tripping over its own earlier outputs and losing the thread. These weren’t edge cases. They were structural limitations.

These limitations are now largely solved problems, both for working teams building practical solutions and for researchers tackling theoretical ones.

Practically speaking, LLMs now have a harness that offers a structured process: an interview to establish context, a plan you can review and comment on, checkpoints at every stage where you can steer, correct, or push back. Hallucination matters a lot less when you’re reviewing a plan in plain English before anything gets built. Context pollution matters a lot less when the tool is designed to keep resetting around your input rather than drifting on its own.
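The checkpointed workflow described above can be sketched as a plain review loop. This is an illustrative sketch only, not the internals of any real tool; every function name here is hypothetical.

```python
# Hypothetical sketch of the "harness" pattern: interview -> plan ->
# human review loop -> build. No real tool's API is being modeled here.

def harness(goal, interview, draft_plan, review, build):
    """Run the checkpointed workflow until the reviewer has no objections."""
    context = interview(goal)            # establish context up front
    plan = draft_plan(goal, context)
    while True:
        comments = review(plan)          # human checkpoint: steer or push back
        if not comments:                 # no objections left -> proceed
            break
        # Re-plan around the reviewer's input instead of drifting on its own.
        plan = draft_plan(goal, context + comments)
    return build(plan)
```

The point of the structure is that the human never has to trust an unreviewed output: each iteration resets the plan around explicit comments, which is why hallucination and context drift matter less in practice.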

Meanwhile, teams at the forefront of agentic AI tooling can generate millions of lines of code in a week, building some of the most complicated kinds of software there are, and their systems can solve Olympiad-level math problems.


Here’s what changed my mind in practice. I’ve been using Claude Code — a command-line AI coding assistant — on a real project. The workflow goes something like this: I describe the goals and the expected behaviors, and the tool builds up artifacts — documents about the architecture, the data model, the flows. I learn tons of properly contextualized details about what I need to consider further. A clarity emerges that would normally take significantly more time to reach.

Once everything is contextualized, it produces a plan. I read the plan, leave comments, push back where I need to. It revises. I comment again. Eventually I run out of things to reconsider. Then it builds the thing. And it pretty much nails it.

I want to be clear about what’s happening here. I’m not typing code. But I’m not vibing in absentia, either. The creative and strategic thinking — what should this do, who is it for, what trade-offs matter — that’s still entirely mine, and I’m still being thorough. What’s been removed is the friction. The hours spent translating an idea in my head into syntax on a screen. The tedious scaffolding. The looking-things-up. The boilerplate.

The technical load has dropped. The creative load hasn’t.


This is for the creative people working in schools who have always had ideas but felt stymied by computers. For those who thought “I wish there was a tool that did exactly this” but couldn’t justify the budget, couldn’t find a developer, couldn’t learn to code on top of everything else.

That barrier is dissolving. It is now possible to write software just for you. Not enterprise software. Not something that needs a procurement process. A small, specific tool that solves your specific problem, built in an afternoon by describing what you need in plain language.

Are you horrified at the idea? Yeah, me too — two years ago.


The constellations have shifted faster than I expected. The dragons in the night sky are looking a little less mythological. I’m still not sure we’ll get to AGI — I’m not sure anyone can define it well enough to know — but I no longer think the distance is as vast as I once argued.

And for creative people in schools, that uncertainty doesn’t matter. What matters is that right now, today, the friction between having an idea and making it real is lower than it has ever been.

That’s not a dragon. That’s a tool. Pick it up.