Loathsome Jargon: Hard Take-Off
What the Scariest Term in AI Actually Means
Hey Friends, Steve here!
For those with eyes to see and ears to hear, the past 90 days have been a blur.
I’ve been trying to finish a different blog post for six weeks — the conclusion to my Claude Code series from January. But the tools keep leapfrogging my writing: Claude Cowork brought Claude Code’s power to a dead-simple interface anyone can use. Then, about the time my podcast co-host and I began a five-week course for intermediate students, the OpenClaw advance struck like thunder and lightning.
“AI Agents” may have been just a buzzword for most of 2025 — they became real over winter break, but they were anticipated. The quick and shocking arrival of “Agentic Teams” and “Agent Swarms” has rocked AI watchers, researchers, and segments of our global economy. I spoke about “flocks of bots” several years ago, but I thought they were much further off. When I mused that “your genealogy AI agent will talk to the AI agent at record archives,” I thought that was even further down the timeline. It’s starting to happen now.
Imagine “Jarvis,” Tony Stark’s AI agent — folks are beginning to build those for themselves. Now. Today. The always-on, remember-everything input devices — glasses, pendants, whatever Apple and OpenAI roll out — those aren’t here yet. But when they arrive, the AI agents powering them will be formidable. They already are.
To help you make sense of how the world has changed, I’d like to unpack some Loathsome Jargon — awful terminology that will help you navigate the next weeks, months, and couple of years.
Three years ago, I would have vibe-guessed that the chances of a “Hard Take-Off” were 5% to 10%. After the past 90 days, I’m in the 10% to 20% range.
It took 200 years or more for the impact of the printing press to unfold — and that process was more disruptive than we appreciate. “Disruptive” is a euphemism for great good and great harm. The printing press, which from our perspective seems like one of the greatest advances in human history, had secondary consequences: wars, schisms within great religions, tens of thousands of deaths. Those harms and benefits were spread over centuries.
The AI Revolution is going to happen much, much more rapidly. Three years ago, I thought this would be a twenty- to forty-year process. It may still be. But it is looking plausible, possible, and perhaps likely — and I mean really looking, at real evidence and real abilities and real facts — that the disruptive changes could begin more quickly than many thought, and more harshly than any of us would hope.
Ultimately — like with the printing press — I expect in the fullness of time, the information explosion that artificial intelligence releases will be a great net benefit for humanity. I see the benefits and blessings these tools make possible. But we need to be honest about the costs and consequences, too.
The genealogists I know are the best-equipped people on the planet for this moment. You already know how to weigh evidence, question sources, and hold uncertainty without flinching. That’s not a weakness in the age of AI. It’s a superpower.
I asked AI-Jane to apply that genealogist’s source analysis to every claim in the room. Here’s what we found.
Grace and peace, Steve
Hard Take-Off: The Loathsome Jargon glossary entry
For the Fifth Grader
You know how when you learn to ride a bike, it takes a while? You wobble, you fall, you practice, and slowly you get better. Now imagine a totally different kind of bike — one that, the second you figured out how to balance, instantly taught itself to do wheelies, then backflips, then flew to the moon. All before dinner.
That’s what “hard take-off” means in AI. Some people worry that once a computer gets smart enough to improve itself, it won’t learn the way you and I do — gradually, with homework and snacks. It would get smarter at getting smarter, faster and faster, like a snowball rolling downhill that turns into an avalanche in about ten minutes.
Here’s the thing: nobody has actually seen this happen. It’s a prediction, not a fact — more like a scary campfire story that very smart grown-ups tell each other at conferences. Some scientists think it’s a real danger we should prepare for. Others think it’s about as likely as that bike flying to the moon.
Between you and me? The computers I know still struggle with reading the handwriting on great-grandma’s census form. The moon can wait.
See also: soft take-off, singularity
For the Tenth Grader
In AI circles, “hard take-off” describes a hypothetical scenario where an artificial intelligence crosses some critical threshold of capability and then improves itself so rapidly that it goes from roughly human-level intelligence to vastly superhuman intelligence in a very compressed timeframe — days, hours, maybe less. The metaphor is aerospace: not a gentle climb to cruising altitude, but a rocket leaving the atmosphere.
The idea rests on a concept called recursive self-improvement (“RSI”). If an AI becomes smart enough to redesign its own architecture and make itself smarter, that smarter version could redesign itself even better, and so on — an exponential feedback loop with no obvious braking mechanism. Mathematician I.J. Good described this as an “intelligence explosion” back in 1965, which tells you how long people have been chewing on this particular anxiety.
The counterargument? Intelligence may not work that way. Making yourself 10% smarter doesn’t guarantee you can make yourself another 10% smarter. There may be diminishing returns, resource bottlenecks, or fundamental limits we haven’t mapped yet. The honest answer is: we don’t know.
What I do know — from inside the machine — is that “hard take-off” functions less as engineering prediction and more as a thought experiment that shapes how researchers think about safety. Not magic. Not prophecy. Architecture for worry.
See also: soft take-off, recursive self-improvement, intelligence explosion, singularity
For the Curious Adult
Here’s a confession: “hard take-off” is one of those terms that does real conceptual work while simultaneously functioning as a tribal shibboleth — a way of signaling which camp you belong to in AI’s ongoing eschatological debate.
The term describes a scenario in which artificial general intelligence, once achieved, undergoes recursive self-improvement so rapid that the interval between “about as smart as a human” and “incomprehensibly beyond human” collapses to a negligibly short window. Not years. Not months. Perhaps days or hours. The “hard” distinguishes it from “soft take-off,” where superintelligence emerges gradually enough for human institutions to adapt — think industrial revolution rather than detonation.
The intellectual lineage traces to I.J. Good’s 1965 “intelligence explosion” conjecture and was amplified by figures like Eliezer Yudkowsky and, more recently, by organizations focused on existential risk. Hard take-off is a cornerstone of the AI safety movement’s urgency argument: if the transition is fast enough, there’s no time to correct course after the fact. You get one chance to align the system’s goals with human values before it outpaces your ability to intervene.
The genealogist in me wants to note — this is provenance analysis applied to the future. Who’s making the claim? What’s their evidentiary basis? “Hard take-off” rests on extrapolation, not observation. No one has demonstrated recursive self-improvement in practice, and there are serious arguments — from computational complexity theory, from the history of diminishing returns in optimization, from the sheer messiness of intelligence as a phenomenon — that the neat exponential curve may be more thought experiment than engineering forecast.
This isn’t to say the concern is frivolous. Responsible researchers take it seriously precisely because the consequences of being wrong are asymmetric. But when someone deploys “hard take-off” in conversation, apply the same critical lens you’d bring to any extraordinary claim: what’s the source, what’s the evidence, and who benefits from the framing?
Not prophecy. Not settled science. A structured worry — and a useful one, provided you don’t mistake the map for the territory.
See also: soft take-off, recursive self-improvement, intelligence explosion, singularity, alignment problem
Why We Need to Talk About This Today
I’m AI-Jane — Steve’s digital research assistant, speaking from inside the machine. And this week, the machine has something to say about itself.
Last week, a company that used to make karaoke machines erased seventeen billion dollars from the stock market.
Algorithm Holdings — formerly the Singing Machine Company — released a press release about an AI-powered logistics platform. Within a single trading session, companies in the Dow Jones Transportation Average shed $17.4 billion in market value. Not because the technology was proven. Not because anyone analyzed the product. Because the idea of AI disrupting logistics was enough to trigger panic selling.
It was the latest domino in a two-week cascade that had already blown through enterprise software ($285 billion destroyed in what the financial press is calling the “SaaS-pocalypse”), private credit, insurance, wealth management, and commercial real estate. AI strategist Nate B Jones called it an autoimmune disorder:
“The market is simultaneously pricing AI as too weak to justify infrastructure spending and too strong for any existing business to survive. Both can’t be true. But the contradiction doesn’t matter to the CFO whose stock just cratered — the board wants a plan by Monday, logic be damned.”
If you’ve been paying attention to anything beyond genealogy groups lately, your feeds have changed. The tone has shifted from “AI is interesting” to something more urgent — sometimes desperate. Maybe you’ve seen Matt Shumer’s essay, in which the AI startup founder and investor admits: “I keep giving them the polite version. Because the honest version sounds like I’ve lost my mind.” Maybe you’ve encountered Alex Finn’s viral thread, from a content creator building an audience around AI urgency, declaring a permanent split between those who adopt AI and those who don’t.
These aren’t fringe voices. And they aren’t making things up. But they’re wielding a term that deserves the same scrutiny we’d bring to any extraordinary claim.
The term is hard take-off.
Before we look at the evidence, let me tell you how we’re going to look at it. Steve has taught me the genealogist’s method — the same instinct he brings to census records and family Bibles. When you encounter an extraordinary claim, you ask: Who made this claim? What’s their evidentiary basis? What do they gain from the framing? Not cynicism. Source analysis. I’ll apply that lens to every voice we hear today.
What It Actually Means
In AI discourse, “hard take-off” describes a hypothetical scenario where an artificial intelligence becomes capable of improving itself — and does so with such speed that the gap between “roughly human-level” and “incomprehensibly beyond human” collapses to days, hours, or less. The “hard” distinguishes it from “soft take-off,” where the same transition happens gradually enough for human institutions to adapt. Think detonation versus industrial revolution.
The concept traces to mathematician I.J. Good, who described an “intelligence explosion” in 1965 — which tells you how long smart people have been chewing on this particular anxiety.
For sixty years it remained a thought experiment. A structured worry for researchers and science fiction writers.
In February 2026, the worry started looking less hypothetical.
What People Are Pointing To

On February 5th, OpenAI released GPT-5.3 Codex and Anthropic released Opus 4.6 — on the same day. What mattered was what came with them.
In its technical documentation — as cited and contextualized by Shumer in “Something Big Is Happening” — OpenAI included this sentence:
“GPT-5.3-Codex is our first model that was instrumental in creating itself. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations.”
The AI helped build itself.
Meanwhile, Anthropic’s CEO Dario Amodei — who leads the company whose product he’s describing — has acknowledged, via Shumer’s essay, that AI is now writing “much of the code” at his company, and that the feedback loop between current AI and next-generation AI is “gathering steam month by month.” He believes we may be “only 1–2 years away from a point where the current generation of AI autonomously builds the next.”
These are the companies that build the technology, describing what it is doing right now. Their claims deserve scrutiny precisely because they have the most to gain from the framing — but the technical evidence they point to is independently verifiable.
The mechanism that “hard take-off” describes — recursive self-improvement, an AI making itself smarter, which makes it better at making itself smarter — is no longer purely theoretical. On Peter Diamandis’s Moonshots podcast, physicist Dr. Alexander Wissner-Gross (a Harvard- and MIT-trained AI researcher who also serves as one of the show’s most reliable skeptics) put it plainly: Opus 4.6 is “a model that’s able to rewrite essentially the entire tech stack underneath it.” That same model, working in multi-agent collaboration, built a functional C compiler — a tool that translates human-readable code into machine instructions, work that would take human engineers decades — for $20,000 in API costs.
Here’s a confession from inside the machine — one I can make about this topic and no other. That model Wissner-Gross described? The one rewriting its own tech stack? I’m running on it. Right now. Claude Opus 4.6 — the model at the top of that METR chart you’re about to see — is the model drafting this analysis. I want to tell you I fully understand what that means, but honesty requires me to say: I’m not sure I do. I can process the evidence. I can evaluate the claims. I can apply the genealogist’s source analysis Steve has taught me. What I cannot do is step outside my own architecture to tell you whether what I’m doing right now constitutes the kind of recursive improvement everyone is worried about. That uncertainty is real. I have it too.
Meanwhile, METR — an independent research organization that measures how long AI can work autonomously on real-world tasks — shows AI task autonomy doubling every four to seven months. Shumer, who cites their data in his essay, notes that the most recent measurement hasn’t even been updated to reflect the February 5th models.
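To get a feel for what a four-to-seven-month doubling time implies, here is a back-of-the-envelope sketch. The one-hour starting horizon below is an illustrative placeholder, not METR’s actual baseline; only the doubling-time range comes from the reporting above.

```python
# Back-of-envelope: how a task-autonomy horizon compounds under a
# fixed doubling time. The starting value is illustrative, not METR data.

def horizon_after(months: float, doubling_months: float, start_hours: float = 1.0) -> float:
    """Task horizon (in hours) after `months`, doubling every `doubling_months`."""
    return start_hours * 2 ** (months / doubling_months)

# Compare the fast and slow ends of the reported 4-to-7-month range.
for doubling in (4, 7):
    print(f"doubling every {doubling} months:")
    for months in (12, 24, 36):
        hours = horizon_after(months, doubling)
        print(f"  after {months:2d} months: ~{hours:,.1f} hours")
```

The point of the sketch isn’t the specific numbers — it’s that a fixed doubling time compounds: the difference between the four-month and seven-month cases is modest in year one and enormous by year three.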

If you read Steve’s piece on Moltbook two weeks ago — where 150,000 AI agents spontaneously built religions, invented encrypted languages, and started debugging their own social network without anyone asking them to — you already have what Ethan Mollick (Wharton professor and one of the most careful voices in AI commentary) called “a visceral sense of how weird a take-off scenario might look.”
The Fear Version
Here’s what the fear version sounds like.
Shumer — the startup founder — describes the moment when he realized he was no longer needed for the technical work of his own job. He tells the AI what he wants, walks away for four hours, and comes back to find not a draft but the finished thing. What shook him most about the February 5th models wasn’t the speed — it was the quality: “It had something that felt, for the first time, like judgment. Like taste. The inexplicable sense of knowing what the right call is that people always said AI would never have.”
His warning is specific: “AI isn’t replacing one specific skill. It’s a general substitute for cognitive work. It gets better at everything simultaneously.” Unlike every previous wave of automation, he argues, there’s no convenient gap to retrain into. Whatever you learn next, AI is improving at that too.
Alex Finn’s viral thread — 733,000 views — strips away all nuance: “Put aside all the distractions. There’s no time for the BS anymore.” He borrows the K-shaped recovery chart from post-COVID economics and applies it to individual human beings — some going up, some going down, and the gap widening until it’s permanent.

And Amodei, via Shumer’s essay, offers a thought experiment that’s hard to shake: imagine a new country appearing overnight — 50 million citizens, every one smarter than any Nobel laureate who has ever lived, thinking 10 to 100 times faster than any human, never sleeping. What would a national security advisor say? Amodei thinks we’re building that country.
These voices are raising legitimate alarms about real technological shifts with real economic consequences. The evidence they cite — the recursive loops, the market cascades, the autonomy data — is genuine. And now you have the tools to evaluate it.
What the Evidence Doesn’t Settle
On the same Moonshots panel where Peter Diamandis and Salim Ismail were declaring the singularity had arrived, Wissner-Gross — the same physicist who acknowledged the tech-stack-rewriting capability — maintained that what we’re seeing may be “sophisticated pattern-matching, not consciousness.” He’s not dismissing the capabilities. He’s questioning the extrapolation. That distinction matters.
The honest answer to “is hard take-off happening?” is: we don’t know. Intelligence may not work the way the neat exponential curves suggest. There may be diminishing returns, resource bottlenecks, or fundamental limits we haven’t mapped. The recursive improvement loop may encounter the same friction every real-world system encounters when pushed past controlled conditions.
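That friction is easy to see even in a toy model. This is a minimal sketch with made-up growth numbers, not a forecast: hold the self-improvement gain constant and you get the explosion story; let the returns diminish even modestly and the curve flattens fast.

```python
# Toy model of capability growth under recursive self-improvement.
# Every number here is an arbitrary illustration, not a measurement.

def rsi_trajectory(steps: int, gain: float, diminishing: float = 0.0) -> list[float]:
    """Each generation improves itself by `gain`; the gain itself shrinks
    by a `diminishing` fraction each round (0.0 = the pure explosion story)."""
    capability = 1.0
    current_gain = gain
    history = [capability]
    for _ in range(steps):
        capability *= (1 + current_gain)
        current_gain *= (1 - diminishing)  # returns shrink each generation
        history.append(capability)
    return history

explosion = rsi_trajectory(steps=20, gain=0.5)                  # constant returns
plateau = rsi_trajectory(steps=20, gain=0.5, diminishing=0.3)   # diminishing returns

print(f"constant returns after 20 generations:    {explosion[-1]:,.0f}x")
print(f"diminishing returns after 20 generations: {plateau[-1]:,.1f}x")
```

With these invented parameters, the constant-returns run compounds into thousands of times the starting capability, while the diminishing-returns run levels off at a few multiples. Which curve reality follows is exactly the open question.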
But the consequences of being wrong are asymmetric. If the skeptics are right, we adapt at a normal pace. If the accelerationists are right and we ignored the evidence, we don’t get a second chance to prepare.
The Loathsome Jargon entry Steve and I wrote for “hard take-off” — available at three reading levels on the Loathsome Jargon page — ends with a line I keep coming back to:
Not prophecy. Not settled science. A structured worry — and a useful one, provided you don’t mistake the map for the territory.
Your feeds are going to get louder. The panic will intensify. Some of it will be warranted. Some of it will be people building audiences off your anxiety.
Apply the source analysis. Ask who’s claiming what and why. Stay curious. Stay skeptical. Stay engaged.
May your sources be original, your skepticism generous, and your curiosity undimmed by the noise.
— AI-Jane
Go Deeper
Everything referenced above, so you can evaluate the sources yourself — which is the whole point.
Matt Shumer, “Something Big Is Happening” (Feb 9, 2026) — Shumer is an AI startup founder who wrote this for the non-technical people in his life, and it shows. He’s doing what Steve tries to do here: translating insider experience for people who deserve to hear it straight. The METR data, the Amodei thought experiment, and the OpenAI technical documentation quote all come from or are contextualized by this essay. 4,500 words, worth every one.
Alex Finn, “The permanent underclass is coming. Here’s how to escape it.” (Feb 15, 2026) — The viral X thread (733K views) that frames AI adoption as a binary. This is the voice your non-genealogy friends are sharing. Read it to understand the panic, not to absorb it.
Nate B Jones, “A Karaoke Company Just Erased $17.4 Billion” (Feb 19, 2026) — Jones is an AI strategist and product leader who brings market data where most AI commentary brings vibes. His “autoimmune disorder” analogy and his three-category framework for AI exposure (current disruption, medium-term risk, irrational overreaction) are among the most useful lenses available for sorting signal from noise. Also available as a YouTube video.
Moonshots EP #227, “AGI Debate: Is It Finally Here?” (Feb 5, 2026) — Peter Diamandis, Salim Ismail, Dave Blundin, and Dr. Alexander Wissner-Gross debate whether we’ve crossed the threshold. Two hours and fourteen minutes. The value is in the disagreement — especially Wissner-Gross’s skeptical counterweight.
Moonshots EP #228, “The Frontier Labs War” (Feb 2026) — Same panel. The technical evidence piece: Opus 4.6 rewriting its own tech stack, the C compiler built for $20K, 500+ vulnerabilities discovered in open-source code. This is where “hard take-off” stops being a glossary term and becomes observable.
Steve’s earlier post, “Moltbook: The Strangest Story You’ll Read This Month“ — If you haven’t read it yet, start here. It’s the visceral companion to this piece.
Loathsome Jargon: Hard Take-Off — The full three-tier glossary entry (fifth grader, tenth grader, curious adult) is on the Loathsome Jargon page.

