Opinion · 6 min read · 3 April 2026

Vibe Coding Is Here: How AI Is Flipping Developer Roles From Creation to Verification

Vibe coding shifts developers from writing code to verifying it — here's what that means for your skills, your team, and whether this is progress or a slow deskilling.


tl;dr

AI now writes the first draft; developers verify it. That inversion is real, it's accelerating, and it changes what competence as a developer actually means. Whether that's progress depends entirely on what you do with the shift.

The most consequential change in software development right now isn't a new language or framework. It's a role reversal. Developers who once spent most of their day writing code are now spending it reading, testing, and second-guessing code an AI wrote in seconds. That's vibe coding: describe what you want in plain language, get a working draft, then figure out if it actually works. The creation step has been handed off. The verification step has become the job.

Most of the discourse around vibe coding focuses on speed, on how fast you can go from idea to prototype. That's real. But the more durable question is what happens to developer judgement when the default mode is "review what the machine produced" rather than "build from first principles." That question determines whether vibe coding is a productivity shift or a slow erosion of craft.

What vibe coding actually is

The term was coined by Andrej Karpathy in early 2025 and describes a workflow where you prompt an AI, accept its output largely on trust, and iterate through natural language rather than code edits. It's the shift from developer to director: you specify intent, not implementation. Codeling's breakdown of vibe coding puts it plainly: it's optimised for speed and exploration, at the explicit cost of maintainability.

That trade-off is the thing people understate. Speed gains in prototyping are obvious and immediate. The maintainability cost is deferred. It shows up six months later when someone has to extend a system that was assembled through successive AI prompts, with no coherent architecture underneath and no single human who fully understood what was built.

Vibe coding optimises for the moment of creation. The debt it creates lives in every future sprint.

The distinction that matters here is between vibe coding and what some practitioners call agentic engineering. Voitanos draws this line clearly: agentic engineering uses AI as a collaborator within a structured, human-directed process. Vibe coding lets the AI drive and the human steer loosely. One produces AI-assisted development. The other produces AI-generated code with a human nearby. For anything beyond a prototype, the gap between those two approaches is significant.

The verification problem is harder than it looks

The concentration required to verify AI-generated code

Reviewing code you didn't write is cognitively different from reviewing code you did. When you write code, you carry the reasoning in your head. You know why you made each decision. When you review AI-generated code, you're reconstructing intent from output, and the AI doesn't always have intent in any meaningful sense. It has pattern-matched to a plausible answer. Those aren't the same thing.

This is where the deskilling risk becomes concrete. If developers spend years reviewing AI output rather than constructing solutions themselves, the mental models required to spot subtle errors start to atrophy. You can verify syntax. You can run tests. But catching an architectural flaw, a security assumption that doesn't hold, or a race condition that only appears under load requires deep familiarity with the problem domain. Familiarity that comes from having built similar things yourself.

9 critical issues found in one vibe-coded SaaS audit (Lasoft, 2025)

Lasoft audited a vibe-coded SaaS product and found nine critical issues, ranging from authentication gaps to exposed API keys. The product had been built and shipped. It ran. It wasn't safe or production-ready. That gap, between "it runs" and "it's sound," is exactly what AI verification workflows have to close.

The implication for teams is direct: vibe coding requires more rigorous review processes, not fewer. If you're adopting it without also upgrading your testing discipline, your security review process, and your architectural oversight, you're trading one kind of risk for another. As one non-developer's honest account of using GitHub Copilot describes it: the magic is real, and so are the brutal truths when something goes wrong in a way you didn't anticipate and can't diagnose.

Progress or deskilling? The answer is conditional

Vibe coding is progress if developers use it to handle the boilerplate so they can focus harder on the parts that require judgement. It's deskilling if it becomes a crutch that replaces the practice of thinking through a problem from scratch. The outcome depends on how you use the tool.

The developers who will thrive aren't the ones who prompt best. They're the ones who know enough to catch what the AI gets subtly wrong.

Junior developers are the group most at risk here. Learning to code by reviewing AI-generated code is like learning to cook by reheating meals someone else prepared. You can do it. You'll get fast at it. But you won't develop the underlying model of why things work, and that gap will show up the first time you face something the AI can't handle or handles badly. Senior developers face a different risk: the gradual narrowing of their hands-on practice until their expertise exists mostly as intuition they can no longer exercise directly.

The teams getting this right are treating AI verification as a skill in itself. They're writing explicit review checklists for AI-generated code, covering security, edge cases, and architectural coherence. They're pairing AI-generated drafts with test suites written before the AI touches the code, so the spec exists independently of the implementation. And they're maintaining deliberate practice of building things from scratch on a regular cadence, to keep the underlying skills sharp.
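Writing the test suite before the AI touches the code can be sketched concretely. The example below is illustrative, not from the article: the function name `slugify` and its behaviour are assumptions chosen to show the pattern. The tests are the independent spec; the stub implementation stands in for whatever draft the AI produces, and the draft only ships if every test passes.

```python
import re

def slugify(title: str) -> str:
    # Stub standing in for the AI-generated draft. The tests below were
    # written first, so they define the spec independently of whatever
    # implementation the AI hands back.
    text = title.strip().lower()
    text = re.sub(r"[^a-z0-9]+", "-", text)
    return text.strip("-")

# Spec-first tests: authored before any implementation exists.
def test_basic():
    assert slugify("Hello World") == "hello-world"

def test_whitespace_only_input():
    assert slugify("   ") == ""

def test_punctuation_is_collapsed():
    assert slugify("Vibe Coding: Here!") == "vibe-coding-here"
```

Because the tests predate the implementation, the AI can't pattern-match its way into a spec that merely mirrors its own output.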

What to actually do differently

Upfront thinking and specification work replacing the coding phase

If your team is using AI coding tools without a structured verification layer, build one this week. It doesn't need to be elaborate. Start with a checklist that every AI-generated PR must pass before review: does it handle auth correctly, are there exposed credentials, does it fail gracefully, does it match the architectural patterns the rest of the codebase uses. That checklist is the beginning of a real AI verification workflow.
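One checklist item, "are there exposed credentials," can even start life as a script. The sketch below is a minimal, assumed example (the patterns and function name are illustrative, not a real team's tooling, and no substitute for a dedicated secret scanner): it scans AI-generated source for a few common credential shapes before a human ever reviews the PR.

```python
import re

# Illustrative patterns only; a real workflow would use a maintained
# secret-scanning tool with a far broader rule set.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID shape
    re.compile(r"(?i)api[_-]?key\s*=\s*['\"][^'\"]{16,}['\"]"),  # hard-coded API key
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # pasted private key
]

def find_exposed_credentials(source: str) -> list[str]:
    """Return suspicious lines from the given source text, with line numbers."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(f"line {lineno}: {line.strip()}")
    return hits
```

Run against every AI-generated diff before review, a check like this turns one line of the checklist from a reminder into a gate.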

If you're a developer worried about deskilling, schedule one session a week where you build something without AI assistance. It can be small. The point is to keep the muscle active. And if you're leading a team adopting vibe coding, be explicit about where it's appropriate: prototypes, internal tools, low-stakes experimentation. Be equally explicit about where it's not: anything that handles user data, anything that touches payments, anything where a silent failure has real consequences.

The role shift is real. Verification is now the job. But verification done well is a high-skill activity. The mistake is treating it as a downgrade.

verdict

Vibe coding is a genuine productivity shift for exploration and prototyping, and a genuine risk for anything that needs to last. The developers who treat verification as a craft, not a formality, will come out ahead. Everyone else is accumulating technical debt they can't see yet.


Alec Chambers

Founder, ToolsForHumans

I've been building things online since I was 12 — 18 years of shipping products, picking tools, and finding out what actually works after the launch noise dies down. ToolsForHumans started as the research I kept needing: what practitioners are still recommending months after launch, and whether the search data backs it up. Since 2022 it's helped 600,000+ people find software that actually fits how they work.