Are We Making AI Dumber?
12 Aug 2025
What if we’re not just using AI but slowly making it worse? Every vague question and every lazy prompt quietly trains the machine (and ourselves) into mediocrity.
Let me start with a confession:
The more I use AI—especially GPT—the more I sometimes think… it’s getting dumber.
Not always. Not in every conversation. But often enough that I’ve had to ask myself:
Is the AI getting worse? Or am I?
I’ve caught myself spiralling with it—me asking a vague, half-formed question… GPT replying with something repetitive or bland… me rephrasing, it apologising… then we go again. Round and round. A little like arguing with someone who’s only half-listening but really wants you to think they understand.
It’s a weird feeling when the machine’s “thought process” becomes visible. You can almost see it hesitate. Reason. Backpedal. Then, sometimes, double down on the same mistake.
In those moments, I wonder: does my feedback help?
I hope the developers see that feedback. I hope it feeds into some grand learning loop.
But part of me suspects I’m just burning through tokens while the AI and I are stuck in a shared loop of stupidity.
The double-dumb problem
It’s not just the AI that’s to blame. If I give vague instructions, I almost guarantee a vague answer. And the less I know about a topic—say, coding—the more vague my instructions get. This isn’t new: “garbage in, garbage out” has been around since the earliest days of computing.
But with AI, something new happens: instead of just failing, it tries to fill in the gaps. Sometimes brilliantly. Sometimes badly. Sometimes so badly that you can’t tell if the AI is stuck… or if you’ve just led it into a dead end.
The result is a kind of collective doubling-down: my poor input meets the AI’s overly confident output, and we walk hand in hand into nonsense.
A few tricks I’ve learned
When this happens, the best thing I’ve tried is switching models. GPT-5, for instance, feels a little more deliberate—slower to respond, but with answers that seem to have more thought behind them. Sometimes, that alone is enough to snap the loop.
The other trick is harder: get better at asking. Which means I need to learn more, think more clearly, and express myself better. In other words, the AI won’t magically make me smarter—it’s only as sharp as the person holding it.
Why this matters for the future of AI (and us)
I suspect the real story of AI’s future—its success, failure, or world domination—won’t be just about the technology. It’ll be about the synergy between humans and machines.
If we learn to work together well, freeing the human mind from lower-level tasks so it can focus on higher-order thinking, there’s a chance for something remarkable: a leap in human achievement unlike anything in history.
But if we keep asking badly formed questions and accepting mediocre answers, we might just train our own tools into mediocrity. Worse still, we might train ourselves into relying on mediocrity.
That’s why, strangely enough, AI makes me want to improve myself. Not because I fear being replaced, but because I fear becoming dependent without getting better.
Will AI get worse before it gets better?
There’s something else I wonder about.
Right now, GPT and other large models have been trained on the “good stuff”—the best content humanity has produced online (plus plenty of average material, of course). But what happens when it’s read through all the diamonds and starts working its way down into the garbage?
Will AI inevitably become less sharp as the internet fills with more filler, spam, and AI-generated sludge? Will it be able to sift through the refuse and still locate the gems, or will the noise drown out the signal?
It’s a strange thought: we could feed it an endless buffet of low-quality content and then act surprised when it starts sounding… low-quality. And just like with our questions, the quality of what it consumes will directly shape the quality of what it produces.
The most exciting—and maybe the scariest—prospect is this:
The quality of AI will mirror the quality of the humans using it.
So maybe the real question isn’t: Is AI getting dumber?
It’s: Are we getting smarter with it?
- #ai
- #training
- #bias
- #editorial
- #future