Collins Dictionary named "vibe coding" its Word of the Year for 2025. Three months later, the man who coined the term declared it "passé" and replaced it with "agentic engineering." Between those two events, a Swedish startup called Lovable hit $400M ARR selling vibe-coded apps, Amazon lost 6.3 million orders to an outage caused by an AI-assisted deployment, and CodeRabbit found that AI-written code produces 1.7x more major bugs than human code.
Same technology. Wildly different outcomes. The difference isn't the AI. It's how you use it. And that difference is about to define which developers thrive and which become obsolete.
The Two Terms, One Person
Both "vibe coding" and "agentic engineering" come from Andrej Karpathy -- co-founder of OpenAI, former AI director at Tesla, one of the most respected voices in machine learning.
February 2, 2025: Karpathy tweets:
"There's a new kind of coding I call 'vibe coding', where you fully give in to the vibes, embrace exponentials, and forget that the code even exists... I 'Accept All' always, I don't read the diffs anymore."
February 8, 2026: Karpathy declares vibe coding "passé" and introduces "agentic engineering":
"'Agentic' because the new default is that you are not writing the code directly 99% of the time, you are orchestrating agents who do and acting as oversight -- 'engineering' to emphasize that there is an art & science and expertise to it."
One year. Same person. A complete philosophical reversal -- from "forget the code exists" to "there is an art and science and expertise to it." That arc tells you everything about where AI coding is headed.
The Data: Speed vs. Wreckage
Let's get the numbers on the table. They tell two stories simultaneously.
The Speed Story
Apiiro measured 3-4x more code shipped per AI-assisted developer at Fortune 50 enterprises. Lovable generates 100,000+ new projects daily. Karpathy himself flipped from 80% manual coding to 80% agent coding in a single month. The speed gains are real and large.
The Wreckage Story
| Metric | Data | Source |
|---|---|---|
| AI code with OWASP Top 10 vulns | 45% | Veracode |
| More vulnerabilities in AI code vs human | 2.74x | Apiiro |
| More major bugs in AI code | 1.7x | CodeRabbit |
| More privilege escalation paths | 322% | Apiiro |
| Experienced devs slower with AI tools | 19% | METR |
| Developers who trust AI tools | 29% | Stack Overflow 2025 |
Both stories are true. AI makes you faster at producing code and worse at producing correct code. The question is whether you treat that tradeoff as acceptable.
What Vibe Coding Actually Is (And Isn't)
Simon Willison (Django co-creator) drew the critical distinction that most articles miss: "Vibe coding is NOT the same thing as writing code with the help of LLMs."
Vibe coding is a specific practice: you describe what you want in natural language, the AI generates code, and you accept it without review. You don't read the diffs. You don't understand the implementation. You trust the vibes.
This is different from using Copilot for autocomplete. It's different from asking Claude to explain a function. It's different from using an AI agent to implement a feature you've spec'd out and will review. Vibe coding means abdicating understanding.
Karpathy was explicit about this in his original tweet: "I 'Accept All' always, I don't read the diffs anymore." That's the defining characteristic. Not AI assistance. Blind acceptance.
Where Vibe Coding Genuinely Works
- Throwaway prototypes. You need a demo for a pitch meeting tomorrow. Quality doesn't matter. Speed does.
- Personal tools. A script that scrapes data for your own use. If it breaks, only you care.
- Learning projects. You're exploring a new framework. The code is disposable.
- MVPs where you'll rewrite everything. You're testing market fit, not building infrastructure.
Lovable's $400M ARR proves this. People want to build apps quickly. Lovable generates 100,000+ new projects daily. Most of those projects don't need production-grade code. They need something that works now.
Where Vibe Coding Destroys You
The disaster list is long and growing. Here are the highlights from 2025-2026:
- Amazon (March 2026): AI-assisted deployment caused a 6-hour outage affecting 99% of U.S. order volume. ~6.3 million lost orders.
- DataTalks.Club (March 2026): Claude Code executed `terraform destroy`. 1.94 million database rows lost. 100,000+ students affected. 2.5 years of production data gone.
- Lovable itself (May 2025): CVE-2025-48757 -- missing Row Level Security. 170+ production apps exposed. The company selling vibe-coded apps had vibe-coded its own security.
- npm ecosystem (Aug-Oct 2025): 126 malicious packages exploiting AI-hallucinated package names. 86,000+ downloads. Credential theft at scale.
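The slopsquatting defense is mechanical: never install a dependency just because a model emitted its name. A minimal sketch in Python, assuming a vetted allowlist (the package names below are illustrative; in practice this would be your lockfile or an internal registry):

```python
# Hypothetical guard against hallucinated package names. The allowlist
# is an illustrative assumption, not a real vetting policy.
VETTED = {"requests", "numpy", "pandas", "flask"}

def safe_to_install(package: str) -> bool:
    # Exact match only: near-miss names like "reqeusts" are exactly
    # the hallucinated-package attack surface.
    return package.lower() in VETTED

print(safe_to_install("requests"))  # True
print(safe_to_install("reqeusts"))  # False -- likely hallucinated or typosquatted
```

The same check belongs in CI, so an agent-authored dependency bump can't slip an unvetted name into the build.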
The pattern is always the same: the code looks correct, passes basic tests, and hides a vulnerability that a human reviewer would have caught in minutes.
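Here's a hypothetical illustration of that pattern -- a lookup function that reads fine and passes its happy-path test while being trivially injectable, the kind of flaw a reviewer catches at a glance:

```python
import sqlite3

def find_user(db, username):
    # Looks correct and passes the basic check below, but interpolating
    # input straight into SQL allows injection; a parameterized query
    # (db.execute("... WHERE name = ?", (username,))) would not.
    cur = db.execute(f"SELECT name FROM users WHERE name = '{username}'")
    return [row[0] for row in cur]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT)")
db.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

print(find_user(db, "alice"))         # ['alice'] -- the basic test passes
print(find_user(db, "x' OR '1'='1"))  # ['alice', 'bob'] -- every row leaks
```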
Apiiro's study of Fortune 50 enterprises found that AI-assisted developers produced 3-4x more code but also 10x more security issues. By June 2025, their monitored enterprises were generating 10,000+ new security findings per month -- a 10x spike in six months.
A separate study of iterative AI code generation found that after just five rounds of revision, AI-generated code contained 37% more critical vulnerabilities than the initial generation. The code gets less secure the more you iterate on it with AI. That's terrifying.
What Agentic Engineering Actually Is
Karpathy's shift from vibe coding to agentic engineering wasn't just a rebranding. It reflected a fundamental change in his workflow:
- November 2025: 80% manual coding + autocomplete, 20% agents
- December 2025: Flipped to 80% agent coding, 20% human edits
- February 2026: Coined "agentic engineering"
The key difference:
| Dimension | Vibe Coding | Agentic Engineering |
|---|---|---|
| Who writes the code | AI, unreviewed | AI, with human oversight |
| Developer's role | Describe the vibe | Architect, reviewer, QA |
| Code understanding | "Forget the code exists" | Deep understanding of what agents produce |
| Testing | Minimal or none | Rigorous, automated |
| Best for | Prototypes, throwaway code | Production systems, team codebases |
| Career risk | High (skill atrophy) | Low (skill amplification) |
In agentic engineering, you don't write the code. But you write the specs. You review the PRs. You design the architecture. You catch the bugs the AI introduces. You're a tech lead managing AI agents instead of junior developers.
Addy Osmani (Google) summarized the five principles:
- Plan before prompting -- write specs and design docs first
- Direct with precision -- well-scoped tasks, not open-ended vibes
- Review rigorously -- treat AI output like any human PR
- Test relentlessly -- the primary differentiator from vibe coding
- Own the system -- docs, version control, CI/CD, monitoring
That fifth point is the career-defining one. Vibe coders don't own the system. They can't, because they don't understand it. Agentic engineers own everything -- they just didn't type most of it.
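The "review rigorously" principle can be made mechanical. A sketch of a CI gate in Python -- the path prefixes and the `human-reviewed` label are illustrative assumptions, not a standard:

```python
# Fail the build when a change touches sensitive paths without a
# human-review sign-off. Paths and label name are my own examples.
SENSITIVE = ("auth/", "billing/", "migrations/")

def needs_human_review(changed_files, labels):
    touches_sensitive = any(f.startswith(SENSITIVE) for f in changed_files)
    return touches_sensitive and "human-reviewed" not in labels

print(needs_human_review(["auth/login.py"], []))                  # True
print(needs_human_review(["auth/login.py"], ["human-reviewed"]))  # False
print(needs_human_review(["docs/readme.md"], []))                 # False
```

The point isn't this particular check; it's that "treat AI output like a human PR" can be enforced by tooling instead of willpower.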
The tooling split maps directly onto the vibe-vs-agentic divide.
Platforms like Lovable, Bolt, Replit Agent, and v0 are designed for vibe coding. You describe what you want, and you get a working app. No IDE. No terminal. No git. Just... a product.
These are excellent for what they're designed for. They're dangerous when used beyond their scope.
| Tool | SWE-bench Pro | Key Strength | Monthly Cost |
|---|---|---|---|
| OpenAI Codex CLI | 57.0% | Highest benchmark score, token-efficient | $20 + usage |
| Claude Code | 55.4% | Multi-agent, git integration, MCP | $20-200 |
| Cursor (Agent Mode) | 50.2% | Best autocomplete, VS Code familiarity | $16-20 |
| Windsurf | -- | Persistent context (Cascade), IDE plugins | $15-200 |
| Devin 2.0 | -- | Most autonomous, sandboxed environment | $20 + $2.25/ACU |
Source: Scale Labs SWE-Bench Pro Leaderboard
These tools don't generate apps from vibes. They integrate into professional workflows -- terminal, git, CI/CD, code review. They're designed for developers who understand what the AI is doing and can course-correct when it's wrong.
The distinction matters. Cursor's Agent Mode lets you review every change. Claude Code commits to git with proper messages. Codex CLI runs in a sandbox. These are guardrails that vibe coding platforms deliberately remove.
The Career Bifurcation
Here's where this gets personal.
The Junior Developer Crisis
Entry-level developer opportunities have plummeted ~67% since 2022. New graduates represent only 7% of Big Tech hires -- down from 32% in 2019. Over half of engineering leaders plan to hire fewer juniors because AI copilots let seniors handle more.
The logic is straightforward: why hire a junior for $90K when Copilot costs $10/month and a senior with AI tools can do the junior's work? The answer -- that juniors become seniors, and without juniors you have no pipeline -- is correct but doesn't show up in quarterly planning.
The Senior Premium
AI engineers earn 25% more than general tech roles. The average AI engineer salary jumped to $206,000 in 2025. AI-related job postings grew 74% year-over-year.
The market is bifurcating. On one side: developers who can architect systems, review AI output, and build production infrastructure. On the other: developers whose skills are indistinguishable from what AI can do.
Vibe coding puts you on the second side. Agentic engineering puts you on the first.
The Trust Gap
Only 29% of developers trust AI tools -- down 11 points from 2024 -- yet 84% are using them. That's a workforce that knows the tools are unreliable but uses them anyway, because the productivity pressure is real.
The developers who'll succeed are the ones who channel that distrust into rigor. Not refusing to use AI. Not blindly accepting its output. Using it extensively while reviewing every line that matters.
The Productivity Illusion
One study cuts through the hype better than any other.
METR's randomized controlled trial tested 16 experienced open-source developers on 246 real issues, paying them $150/hour. The result: developers with AI tools took 19% longer than without them.
The kicker: those same developers predicted they'd be 24% faster. Even after the study, they believed they'd been 20% faster. The perception-reality gap is staggering. People feel faster with AI tools even when they're measurably slower.
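Those percentages are worth converting carefully, because "19% longer" is not the same as "19% slower":

```python
# Converting METR's result from task time to throughput.
measured_time = 1.19           # tasks took 19% longer with AI tools
actual_speed = 1 / measured_time
print(round(actual_speed, 2))  # 0.84 -- about 16% less work per hour

predicted_speed = 1.24         # developers expected to be 24% faster
print(round(predicted_speed - actual_speed, 2))  # 0.4 -- a ~40-point gap
```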
Other studies show similar patterns: AI accelerates coding, but coding is a fraction of engineering time. Planning, understanding, debugging, reviewing, communicating -- AI doesn't accelerate those. It sometimes slows them down, because you're now reviewing AI output on top of everything else.
A Practical Framework: When to Vibe, When to Engineer
Not all code deserves the same rigor. Here's my decision framework:
Vibe Code When:
- Stakes are zero. Personal scripts, throwaway prototypes, learning experiments.
- Lifespan is short. Demo for tomorrow's meeting, hackathon project, POC.
- Security is irrelevant. No user data, no authentication, no external access.
- You'll rewrite it. You're testing an idea, not building infrastructure.
Agentic Engineer When:
- Users depend on it. Production systems, customer-facing products.
- Data is involved. Databases, authentication, API keys, PII.
- It will live more than a week. Anything with maintenance requirements.
- A team will touch it. Code that others need to understand and modify.
- Money flows through it. Payments, billing, financial calculations.
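The framework above can be sketched as a checklist function. The field names and the one-week lifespan threshold are my own shorthand for the criteria, not a formal rule:

```python
# Decision sketch: vibe coding is defensible only when every
# high-stakes criterion is absent.
def coding_mode(stakes_zero, lifespan_days, handles_data,
                team_touches_it, money_flows):
    if stakes_zero and lifespan_days <= 7 and not (
            handles_data or team_touches_it or money_flows):
        return "vibe"
    return "engineer"

print(coding_mode(True, 1, False, False, False))  # vibe: demo for tomorrow
print(coding_mode(False, 365, True, True, True))  # engineer: production system
```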
The Skill Checklist for Agentic Engineering
- Learn to write specs before prompts. Detailed requirements documents, not vibes. The quality of AI output is directly proportional to the quality of your specs.
- Treat AI output like a junior dev's PR. Review every line that touches auth, data, money, or infrastructure. Accept AI's speed for boilerplate; demand human-level scrutiny for anything critical.
- Build automated test suites first. This is the single highest-leverage habit. If your tests are comprehensive, AI-generated code that passes them is probably fine. If they're not, nothing is safe.
- Learn context engineering. Karpathy endorsed this: "The delicate art and science of filling the context window with just the right information for the next step." This is the skill that separates 10% productivity gains from 50%.
- Own the architecture. Let AI write implementations. Never let it decide architecture. The moment you don't understand your own system's design, you've lost.
What I Actually Think
Vibe coding is the most dangerous idea in software engineering right now. Not because it doesn't work -- it works incredibly well for prototypes and demos. Because it teaches developers that understanding code is optional. And that lesson, internalized by a generation of new engineers, will cost the industry billions.
Here's what the data actually shows. AI-generated code has 2.74x more vulnerabilities. 45% of it contains OWASP Top 10 security flaws. It gets worse with iteration, not better. And the developers using it think they're faster when they're actually slower.
That combination -- invisible quality degradation plus false confidence -- is exactly how systemic failures happen. Not in one dramatic crash, but in thousands of small security holes, logic errors, and architectural decisions that compound over time.
The industry is already feeling it. 88% of developers report AI has negatively impacted technical debt. Analysts project $1.5 trillion in technical debt by 2027 from AI-generated code. CVE entries traced to AI-generated code went from 6 in January 2026 to 35+ in March. The curve is exponential.
Karpathy was right to evolve. His shift from "forget the code exists" to "there is an art and science and expertise to it" in exactly 12 months mirrors what every serious developer learns: AI coding tools are extraordinarily powerful if you understand what they're doing. The moment you stop understanding, they become liability generators.
The career implications are stark. Companies aren't hiring fewer developers because AI replaces them. They're hiring fewer junior developers (-67% since 2022) and paying more for senior developers who can review, architect, and correct AI output. The developers who embraced vibe coding and never learned the fundamentals are exactly the ones being squeezed.
I don't think vibe coding will disappear. It's too useful for prototyping. But I think the distinction between "person who vibe codes" and "engineer who uses AI agents" will become one of the most important career differentiators in tech. The first is a commodity. The second is increasingly rare, increasingly valuable, and increasingly well-compensated.
The question isn't whether you use AI to write code. Everyone does. The question is whether you understand what it wrote.
Sources
- Karpathy -- Original vibe coding tweet (Feb 2025)
- The New Stack -- Vibe Coding Is Passé: Karpathy on Agentic Engineering
- CNN -- Vibe coding named Collins Dictionary Word of the Year
- TechCrunch -- 25% of YC W25 startups have 95%+ AI-generated codebases
- TechCrunch -- Lovable raises $330M at $6.6B valuation
- TechCrunch -- Lovable crosses $100M ARR in 8 months
- CodeRabbit -- State of AI vs Human Code Generation Report
- Veracode -- GenAI Code Security Report
- Apiiro -- 4x Velocity, 10x Vulnerabilities
- METR -- AI Tools Made Experienced Developers 19% Slower
- METR -- 2026 Update
- Faros.ai -- The AI Productivity Paradox
- Google Cloud -- DORA Report 2024
- Stack Overflow -- 2025 Developer Survey: AI
- Stack Overflow -- Developers Willing but Reluctant to Use AI
- Simon Willison -- Not All AI-Assisted Programming Is Vibe Coding
- Addy Osmani -- Agentic Engineering
- Karpathy -- Context Engineering tweet
- Crackr.dev -- Vibe Coding Failures
- ArXiv -- Security Degradation in Iterative AI Code Generation
- The Hacker News -- CVE Entries from AI-Generated Code
- InfoQ -- AI-Generated Code Creates New Wave of Technical Debt
- Fastly -- Senior Developers Ship 2.5x More AI Code
- Scale Labs -- SWE-Bench Pro Leaderboard
- MorphLLM -- SWE-Bench Explained
- Index.dev -- AI Developer Salary Trends
- FinalRoundAI -- Software Engineering Job Market 2026
- Hakia -- Junior Developer Hiring Collapse: 67%
- CIO -- Demand for Junior Developers Softens
- DX Newsletter -- AI Productivity Gains Are 10%, Not 10x
- GitHub Blog -- Does Copilot Improve Code Quality?
- Panto -- GitHub Copilot Statistics 2026
- Anthropic -- How AI Is Transforming Work at Anthropic