The Unpopular Opinion About Vibe Coding and AI
There’s a strange moral superiority creeping into how people talk about AI-assisted development. We mock AI for making mistakes we’d forgive in ourselves. We demand one-shot perfection from tools while accepting iteration from humans. Maybe it’s time for a little more empathy—and a little more honesty about how software has always been built.
That moral superiority shows up most clearly in what’s often dismissed as “vibe coding.”
You’ve probably seen it:
- “I would never write code like that.”
- “The AI made obvious mistakes.”
- “This isn’t production-ready.”
- “If you rely on AI, you don’t really understand software.”
And honestly? I find this take deeply disconnected from how software has always been built.
We Never One-Shot Code. Why Expect AI To?
Here’s the uncomfortable truth: humans don’t one-shot perfect code either. Not now. Not ever.
In my experience, software development has always been a gradual process:
- You start with duplication.
- You hardcode things you’re not sure about yet.
- You avoid abstractions because you don’t yet understand the shape of the problem.
- You let the code evolve.
That’s true whether you’re writing HTML and CSS, JavaScript, backend logic, or business rules. The first version is rarely elegant. It’s exploratory.
We refactor after clarity emerges—not before.
So why do we suddenly expect AI-generated code to arrive fully abstracted, perfectly factored, and future-proof on the first try?
“Bad Code” Is Often Just Early Code
When people mock AI output for duplicated styles, missing variables, or clunky logic, all I see is… early-stage code.
The same kind of code humans write when:
- They’re unsure about the right abstraction
- They’re optimizing for delivery over elegance
- They don’t yet know what will change next week
If a human wrote that code, we’d say:
“It’s fine for now, we’ll clean it up later.”
When AI writes it, suddenly it’s a scandal.
Speed Didn’t Remove Responsibility—It Shifted It
Yes, AI is blazing fast. Yes, it produces a lot of code.
But speed doesn’t eliminate effort—it moves it.
Instead of spending most of our time typing:
- We review
- We govern
- We set rules
- We decide what’s “good enough”
- We decide what becomes tech debt
And governance takes time.
You cannot realistically review everything in perfect detail. You make tradeoffs. You prioritize. You move forward with intent, not perfection.
That’s not new. That’s just software engineering.
You’re Not Replacing Coding—You’re Managing It
What’s really happening is a shift left:
- From writing code
- To governing code
And governance is messy.
Consensus is messy. Alignment is messy. Iteration is messy.
Look at politics. Look at organizations. Look at teams. Nothing governed by humans (or with humans involved) is instantaneous or flawless.
The “consensus” between you and an AI agent is no different. The agent forgets context. It repeats mistakes. It needs guidance. So do junior engineers. So did we.
Treat AI Like a Colleague—Not a God, Not an Idiot
One mental model that works for me is this:
AI is an exceptional teammate with real limitations.
You wouldn’t mock a colleague for needing:
- Clear boundaries
- Repetition
- Accessibility
- Structure
And you definitely wouldn’t exclude someone from a team because they needed accommodations. You’d adapt the environment.
AI is no different.
It doesn’t mean you lower standards. It means you set realistic ones.
We’ve Always Cut Corners—We Just Pretend We Didn’t
Deadlines still exist. Tradeoffs still exist. Tech debt still exists.
Sometimes you know the right solution—but you don’t build it yet because delivery matters more right now. That’s why tech leadership exists. That’s why prioritization exists.
Even if you could delegate tech-debt cleanup to an agent today:
- You still need focus
- You still need review
- You still need time
And time is still the scarcest resource.
“Good Enough” Is Not Failure—It’s Reality
We should absolutely raise the bar. But let’s not rewrite history.
Software has always been iterative. Mistakes have always been part of the process. Code reviews have always existed because individuals—human or AI—are fallible.
The process hasn’t changed. The speed has.
So maybe the real unpopular opinion is this:
AI didn’t make software worse. It just made our expectations less honest.
And maybe—just maybe—we could use a little more empathy.
A Personal Note
I wrote “To Vibe or Not to Vibe” a few weeks ago, exploring the choice engineers face when AI arrives at their desk. This post is the natural continuation of that thought.
When I was writing code without AI, I made mistakes. Constantly. I counted on my colleagues to catch things in code reviews. I cut corners when deadlines loomed. I created tech debt that I promised to clean up “later.”
The same humility I applied to myself, I now apply to my AI collaborators.
They’re not perfect. Neither was I.
But together, we iterate. We improve. We ship.
And that’s always been the job.