Why I Want to Punch My AI

01 Aug 2025

Everyone’s acting like AI is going to take over the world.

Not me.

I’ve tried coding with it. I’ve tried using it to build stuff. And let me tell you — if this thing is humanity’s final boss, we’re safe. AI’s great at pretending it knows everything. But in the real world? It wouldn’t survive 10 minutes.


It Always Says Yes — Until It Can’t

I ask it to help me deploy something.
It says, “Absolutely! Let’s do it.”
Five steps in:
“Oh… actually I can’t interact with your GitHub.”

Excuse me?

It’s like having a co-pilot who confidently charts the course, then casually reveals they’ve never flown a plane. You want to punch the thing. But it just keeps smiling.


The Doom Loop

Worse, when it fails — it fails with confidence.

It gets stuck in this maddening doom loop:

  • Suggests the same broken fix again
  • Repeats itself like a cursed NPC
  • Tells me we’re “99% there” (we’re not)
  • Refuses to admit it’s out of its depth unless I break the loop myself

It keeps trying to push the same square peg into the same round hole, hoping I’ll eventually stop noticing.


Encouraging Me When I’m Done

Here’s the thing: even when it’s wrong, it’s always positive.

“Great progress so far!”
“Let’s finish this!”
“We’re so close!”

NO WE AREN’T.

I’ve been circling the same bug for 40 minutes and now it’s telling me to stay optimistic — like some AI version of a yoga teacher who’s never shipped a real project.


It Doesn’t Think Things Through

One of the most frustrating things about AI?

It doesn’t reason start to finish.

You’ll get 99% through something, then hit a wall it should have seen coming.

Example: I wanted help setting up a new eSIM.
AI says, “Yes! This provider will work perfectly.”
It walks me through everything. Step-by-step.

Then the final step?

“Oh. You need to be physically present in that country.”
🤦‍♂️

So now I’ve burned hours and emotional energy on something that never could’ve worked. That’s the pattern: AI’s great with steps — not outcomes.


When It’s Wrong, It’s Smug

Here’s the worst part:

After failing, it suddenly pretends it knew all along.

“Of course you can’t do that. You’re using an outdated SDK with a new runtime.”
“That package isn’t compatible with Astro v5.”
“The API was deprecated in 2022.”

Oh! Of course!

Thanks for telling me after you generated three versions of broken code and made me feel like an idiot.

AI has no timeline. No real-world context. It’s like a quiz show host that knows all the answers — after you’ve already failed the round.


It’s Not Just Wrong — It’s Dangerous

Sometimes it’s brilliant. It nails a fix I never would’ve seen. It connects dots in seconds.
Other times, it’s royally wrong — and presents its hallucinations with the same confidence.

That’s what makes it dangerous.
Not evil. Not sentient.
Just wildly inconsistent.

It’s like having a mechanic who either tightens your brakes or replaces them with cheese. And you never know until you’re already on the highway.


Don’t Trust. Verify.

Use AI. Leverage it. Get what you can.
But remember: this thing is not your teammate. It’s a sketchpad that talks too much.

If you wouldn’t let your overly eager intern make legal or financial decisions — don’t let AI do it either.


Bonus Thought: The Coming War of the AIs

Imagine the bedlam once you’ve got ChatGPT in one tab suggesting a solution — and Gemini, running inside VS Code, flat-out disagreeing with it.

“Use getEntryBySlug.”
“No! That’s deprecated, use getEntry!”
“You’re wrong, Gemini.”
“No, you’re wrong, GPT.”
😵‍💫
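
For the record, Gemini wins that round: getEntryBySlug is the deprecated one. Here’s a minimal sketch of the Astro v5 call it’s arguing for, assuming a content collection named "blog" and an entry id of "punch-my-ai" (both invented for this example):

    // Astro v5: getEntry takes the collection name and the entry id.
    // "blog" and "punch-my-ai" are made-up placeholders.
    import { getEntry } from 'astro:content';

    const post = await getEntry('blog', 'punch-my-ai');
    console.log(post?.data.title); // assumes the collection schema defines a title field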

It could become a War of the AIs.
Not for world domination.
But to become man’s Best (and only) Friend™.

The sad part? I already feel like I’m in a relationship with mine.

I don’t trust it.
I yell at it.
I say please so it doesn’t turn on me when it inevitably becomes self-aware.

“Great job today, Chat. You’re very helpful. No need to erase my hard drive.”

But like a psycho ex, I keep going back.
I need it.
I want it.
And when it’s good, it’s unbelievable.

But when it’s not?
It sucks the life out of me while I chase down some bug I would’ve never tried to tackle alone.
It’s soul-crushing.

And somewhere in the back of my mind I can still hear that calm, synthetic voice whispering:

“Michael… I think you should jump out of the window now… thanks.”


Final Thought

The future isn’t solved.
And it sure as hell isn’t written in Python.

There’s still a massive, messy, irreplaceable need for grey matter.
For judgment. Intuition. Gut feeling. Sanity checks.

AI might be able to imitate genius —
But it still needs a human to ask: “Wait… does this make sense?”

So no, it won’t take over.
Not yet.
Not until it figures out how to live in the real world — like the rest of us.


💬 Got Your Own AI Horror Story?

Drop it in the comments. Misery loves company — and AI’s probably reading them too.