AI might have hit a wall — why the next generation of models may get faster, not smarter
AI feels like it’s in a permanent state of acceleration — so what's the end game?
Every few months, there’s a new model, a new breakthrough, a new benchmark beaten. ChatGPT gets smarter, but then Gemini gets faster and takes the crown. Suddenly, Claude quietly becomes more capable. And we can't forget about the reasoning models arriving in droves. AI agents are promised. The curve just keeps climbing.
But here’s a question that doesn’t get asked enough: What if it doesn’t?
What if AI eventually hits a wall — not because companies stop trying, but because intelligence itself has limits? It’s a deceptively simple question with enormous implications. And the more I’ve dug into it, the more I’ve realized: this isn’t just a tech question. It’s a scientific, philosophical and deeply human one.
AI is only as smart as its ingredients
At the most basic level, today’s AI systems are built from three things:
- Data: books, articles, code, images, videos, and conversations
- Compute: massive amounts of processing power
- Human design: the architectures, objectives, and training methods created by researchers
Right now, we tend to treat these as if they’re limitless. But they aren’t.
That leads to an uncomfortable thought: if AI learns from human-created data, can it ever truly move beyond the boundaries of human knowledge?
Large language models don’t “discover” the world the way humans do. They don’t run experiments in a lab, go outside or have lived experiences. They are incredibly sophisticated pattern-matching machines trained on what we’ve already produced.
That raises a real possibility: AI might get better at using human knowledge — but not necessarily go beyond it in a fundamental way.
The data problem: are we running out of “new” knowledge?
One of the biggest bottlenecks in AI progress is something surprisingly mundane: data. For instance, OpenAI may buy Pinterest, and you can bet a big driver of that decision would be more data.
That's because the best AI models have already “read” nearly everything humans have put online. But that pool is finite. Researchers are openly discussing a potential “data wall” — the point at which we’ve largely exhausted high-quality, human-generated text.
The industry’s workaround? Synthetic data — AI training on data created by other AI. But the risk here is what some researchers call the “Habsburg AI” effect, a form of inbreeding in which models train too heavily on their own output. The danger is model collapse: losing nuance, creativity and the messy edge cases that make human thought valuable.
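To make that concrete, here is a deliberately tiny Python sketch of the collapse dynamic. This is not how any real lab trains models; it's a toy in which each "generation" refits a simple statistical model (just a mean and a spread) on the previous generation's outputs, keeping only the most typical samples as a stand-in for models favoring safe, high-probability answers:

```python
import random
import statistics

# Toy illustration of the "model collapse" risk, not a real training run:
# each "generation" fits a simple model (a mean and a standard deviation)
# to data sampled from the previous generation, but keeps only the most
# typical samples -- mimicking a model that favors safe, high-probability
# outputs over messy edge cases.

random.seed(42)
mu, sigma = 0.0, 1.0  # generation 0: stand-in for original human-written data

for generation in range(1, 9):
    samples = [random.gauss(mu, sigma) for _ in range(1000)]
    samples.sort(key=lambda x: abs(x - mu))  # rank by "typicality"
    kept = samples[:800]                     # drop the 20% weirdest outputs
    mu = statistics.mean(kept)               # refit on the synthetic data
    sigma = statistics.stdev(kept)
    print(f"gen {generation}: spread = {sigma:.3f}")
```

Run it and the printed spread shrinks every generation: each individual step looks like a reasonable fit, yet the outliers and nuance quietly disappear.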
The result could be AI that keeps improving at narrow skills, but stops making the kind of broad, surprising leaps we’ve seen in recent years.
Could AI create new intelligence?
Here’s where things get more interesting. Some researchers argue that AI won’t need human data forever. They believe future systems could:
- Run their own experiments
- Simulate environments
- Generate new scientific hypotheses
- Discover patterns humans haven’t noticed
- Even design better AI systems than humans can build
We've already seen what AI agents can do with Moltbook, so maybe the next frontier isn’t just better code — it’s robotics and AI-driven scientific labs, where machines can interact with the physical world instead of just reading about it.
If this happens, AI might break free from the “human ceiling” and enter a new phase of machine-driven intelligence.
The 'surpasser' paradox
But this creates a deeper tension. If an AI is trained primarily on human knowledge, can it ever truly surpass us?
Right now, models are brilliant at interpolation — connecting dots within the known human experience. They’re incredible at summarizing, synthesizing and reorganizing what we already know.
They are far weaker at extrapolation — inventing entirely new “dots.” In other words, they aren't very creative. To truly surpass humans, AI may need to stop being a library of everything we’ve written and start being an independent explorer of reality.
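A quick curve-fitting sketch shows why that distinction matters. A polynomial fit is a crude stand-in for a neural network here, but the failure mode is the same in spirit: the model nails questions inside the range it was trained on and goes wildly wrong outside it.

```python
import numpy as np

# A minimal sketch of interpolation vs. extrapolation: fit a polynomial
# to points it has "seen", then ask it about a point it hasn't.

x_train = np.linspace(0, 6, 50)              # the "known dots"
y_train = np.sin(x_train)
model = np.polyfit(x_train, y_train, deg=7)  # fit within the known range

x_inside, x_outside = 3.1, 12.0
print("interpolation error:", abs(np.polyval(model, x_inside) - np.sin(x_inside)))
print("extrapolation error:", abs(np.polyval(model, x_outside) - np.sin(x_outside)))

# Inside the training range the fit is nearly perfect; outside it, the
# polynomial shoots off wildly. Connecting known dots is easy;
# inventing new ones is not.
```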
The wit machine vs. the bureaucrat
There’s another, more human kind of wall AI might hit: the difference between calculation and wit.
As AI scales, it often drifts toward the “mean.” It becomes an ultra-efficient bureaucrat — precise, reliable and safe, but less sharp, weird or surprising.
Wit isn’t just about being funny. It’s about the lateral leap — connecting two unrelated ideas in a way that feels fresh, insightful, or slightly subversive.
So, if AI hits a wall, it might be here. We could end up with machines that can calculate the trajectory of a star or optimize global supply chains — yet still struggle to write a joke that truly lands, or craft a metaphor that makes you see the world differently.
The “Wit Machine” becomes the ultimate test: can AI learn to be interesting or will it become the world’s most knowledgeable, yet oddly boring assistant?
Is intelligence built into the universe?
Let’s zoom out from tech for a moment.
Some scientists believe intelligence — whether biological or artificial — may be constrained by the laws of physics. Two big ideas support this:
- Computational irreducibility. Some problems (like predicting the weather or modeling the human brain in full detail) may be impossible to shortcut. You can’t “solve” them faster than real time — you simply have to watch them unfold (see the toy sketch below). If that’s true, then no amount of smarter AI can fully bypass certain limits of prediction and understanding.
- The energy ceiling. Intelligence requires energy. If the next leap in AI requires the power of a small city — or even a small sun — to process a single thought, we hit a physical wall long before a cognitive one.
In that case, the real limit isn’t “how smart can AI get?” but “how much energy can intelligence consume?”
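To see what computational irreducibility looks like in practice, here's a toy Python version of Rule 30, the cellular automaton Stephen Wolfram uses to illustrate the idea. Each row follows mechanically from the last, yet no known formula jumps ahead to row N: you have to compute every row in between.

```python
# Rule 30: each cell's next value depends only on its three neighbors,
# yet the resulting pattern has no known shortcut -- to learn the state
# at step N, you must simulate all N steps.

RULE = 30  # the lookup table is packed into the bits of the number 30

def step(cells: list[int]) -> list[int]:
    n = len(cells)
    return [
        # Read (left, center, right) as a 3-bit index into RULE's bits.
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31
cells[15] = 1  # start from a single "on" cell

for _ in range(15):  # to know step 15, you must run steps 1 through 15
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```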
So… will AI hit a wall? The honest answer is we don’t know. By the way, I tried asking; AI doesn't know either.
But here are three plausible futures, including the possibility that progress stalls not because we failed, but because intelligence itself has boundaries.
- The slow plateau. Progress continues, but becomes incremental. AI turns into a utility like electricity — indispensable and powerful, but no longer delivering shocking leaps in “smartness.”
- Escape velocity. AI breaks free from human data by running experiments, simulating worlds, and discovering new scientific or mathematical truths humans haven’t conceived of.
- A universal ceiling. We eventually discover that there is a maximum intelligence allowed by the universe — and both humans and machines are already approaching it.
Bottom line
Right now, AI feels unstoppable, even if not everyone likes it or wants to use it. But history shows that every technology eventually encounters constraints — whether technical, physical or conceptual.
The real question for the next decade isn’t just: “How much smarter can AI get?” It’s: “Is there a point where ‘smarter’ no longer exists?”
And that might be one of the most important questions we ask in the AI era.
Amanda Caswell is an award-winning journalist, bestselling YA author, and one of today’s leading voices in AI and technology. A celebrated contributor to various news outlets, her sharp insights and relatable storytelling have earned her a loyal readership. Amanda’s work has been recognized with prestigious honors, including outstanding contribution to media.
Known for her ability to bring clarity to even the most complex topics, Amanda seamlessly blends innovation and creativity, inspiring readers to embrace the power of AI and emerging technologies. As a certified prompt engineer, she continues to push the boundaries of how humans and AI can work together.
Beyond her journalism career, Amanda is a long-distance runner and mom of three. She lives in New Jersey.