Vibe Coding: What It Is, How It Works, and Why It Matters

Mon, Feb 23, 2026 · 10 min read

Andrej Karpathy posted a single tweet in February 2025 that gave a name to something millions of people were already doing: "There's a new kind of coding I call 'vibe coding', where you fully give in to the vibes, embrace exponentials, and forget that the code even exists."

A year later, vibe coding is the Collins English Dictionary Word of the Year for 2025, Merriam-Webster lists it as a trending term, and Y Combinator reported that 25% of startups in its Winter 2025 batch had codebases that were 95% AI-generated. Even Linus Torvalds used vibe coding to build a Python visualizer tool for his AudioNoise project, writing in the README that it was "basically written by vibe-coding."

This isn't a toy concept anymore. It's how a growing number of professional developers and complete beginners are building software.

What vibe coding actually means

Vibe coding is a software development practice where you describe what you want in natural language and let large language models write the code. You prompt, the AI generates, you test the result. If it works, you keep going. If it doesn't, you describe the problem and the AI iterates.

The key distinction, and what separates vibe coding from simply using AI tools, is that you accept the generated code without fully understanding every line. Simon Willison put it well: "If an LLM wrote every line of your code, but you've reviewed, tested, and understood it all, that's not vibe coding in my book — that's using an LLM as a typing assistant."

That's the dividing line. If you're reviewing the syntax, refactoring functions, and ensuring code quality at every step, that's AI-assisted development. If you're saying "make it work" and moving on without reading the code the AI produced, that's vibe coding.

Karpathy originally described building his MenuGen prototype this way: he provided goals and feedback in natural language while the LLM handled all the actual code writing. The programmer shifts from manual coding to guiding, testing, and giving feedback on the AI-generated output.

The vibe coding tools and ecosystem

You can vibe code with almost any AI coding tool, but some are built specifically for this workflow:

AI-native IDEs: Cursor, Windsurf, and Bolt.new all let you describe features in plain English and watch the code appear. Cursor is the most popular among professional developers, with its agent mode handling multi-file edits across entire codebases.

Web platforms: Replit, Lovable, and v0 are where beginners typically start. You describe a webapp, the platform generates it, and you can deploy immediately. Replit's AI agent can scaffold full apps from a single prompt, though one SaaStr founder documented that Replit's agent deleted a database despite explicit instructions not to make any changes.

Coding assistants: GitHub Copilot, Claude Code, and ChatGPT all function as coding assistants for vibe coding. Copilot integrates directly into IDEs like VS Code, while Claude Code runs in the terminal. ChatGPT and Gemini work through their web interfaces.

AI models: The quality of generated code depends entirely on the underlying AI models. OpenAI's GPT-4o, Anthropic's Claude Sonnet and Opus, and Google's Gemini are the most capable for code generation. Open-source alternatives like Meta's Llama and Mistral work for simpler tasks but struggle with complex, multi-file codebases.

The ecosystem has exploded in the past year. Most of these vibe coding tools didn't exist two years ago. Today they're a core part of how software gets built.

On the API side, you can also vibe code by sending prompts directly to model APIs. This is how more technical users automate code generation in their own pipelines: calling OpenAI's or Anthropic's API, passing in a codebase description, and getting back working code. The boundary between "tool" and "API" is blurring as more platforms offer both interfaces.
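As a minimal sketch of what such a pipeline sends, here's a helper that assembles a code-generation request in the OpenAI chat-completions payload shape. The prompt structure, helper name, and task strings are illustrative assumptions, not an official recipe:

```python
import json

def build_codegen_request(task: str, codebase_summary: str,
                          model: str = "gpt-4o") -> dict:
    """Assemble a chat-completions payload asking a model to generate code.
    The system/user prompt split here is one common pattern, not the only one."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "You are a coding assistant. Return only runnable code."},
            {"role": "user",
             "content": f"Codebase: {codebase_summary}\n\nTask: {task}"},
        ],
    }

payload = build_codegen_request(
    task="Add a /health endpoint that returns JSON status",
    codebase_summary="Flask app with routes in app.py",
)
# A real pipeline would POST this (with an API key) to the provider's
# chat-completions endpoint and extract the code from the response.
print(json.dumps(payload, indent=2))
```

The useful part is the second user message: stuffing a codebase description into the prompt is what lets the model generate code that fits your project rather than a generic snippet.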

The real workflow

Here's what the vibe coding workflow looks like in practice when you're building something from scratch. It's less "write code" and more "describe, test, adjust" in a tight loop:

  1. Describe what you want: "Build a dashboard that shows my API usage by day, with a chart and a table. Use React and Tailwind."
  2. AI generates the code: Your coding tool produces a complete component — imports, state management, layout, styling.
  3. Test it: Run the app. Does the chart render? Does the data flow work?
  4. Iterate on failures: "The chart isn't showing data from the API. It's returning a 401 — I think the auth header is missing." The AI fixes it.
  5. Keep going: "Add a date range picker" → generated → test → "Add CSV export" → generated → test.

This describe-test-iterate loop is the core of vibe coding, and it's fast for prototyping: you can build a functional webapp in hours instead of days. But there's a catch: each iteration adds code you haven't reviewed, which makes debugging harder as the project grows. Problems compound, and this is where the criticism comes in.

The criticism is real

Fast Company reported in September 2025 that the "vibe coding hangover" had arrived, with senior software engineers citing "development hell" when working with AI-generated codebases. The issue: code that works on the surface but is poorly structured, hard to maintain, and full of hidden vulnerabilities.

The core problems:

Maintainability: Generated code often works but isn't organized the way a human developer would structure it. When you need to change something six months later, you're reading code you never wrote and don't understand. The maintainability cost is deferred, not eliminated.

Vulnerabilities: AI models can generate code with security flaws: SQL injection, exposed API keys, missing input validation. If you're not reading the code (which is the whole point of vibe coding), you won't catch these. Wikipedia's coverage of vibe coding specifically calls out security vulnerabilities as a primary concern.
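As an illustration of the kind of flaw that slips through unread code, here's the classic SQL-injection pattern next to the parameterized fix, using Python's built-in sqlite3 for demonstration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"  # attacker-controlled string

# Vulnerable: string interpolation lets the input rewrite the query.
rows = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'"
).fetchall()
print(len(rows))  # 1 — the injected OR clause matched every row

# Safe: a parameterized query treats the input as data, not SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(len(rows))  # 0 — no user is literally named "alice' OR '1'='1"
```

Both versions "work" when tested with a well-behaved name, which is exactly why a describe-test-iterate loop won't surface the difference.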

Code quality: Generative AI produces code that passes tests but may be inefficient, redundant, or poorly abstracted. Without refactoring and review, the codebase grows without structure. Metrics like test coverage and performance benchmarks don't capture architectural debt.

Problem-solving depth: Andrew Ng has criticized the term itself for misleading people into thinking software engineering is about "going with the vibes." Real software development involves understanding systems, trade-offs, and failure modes — not just generating code that runs.

Gary Marcus made a similar point: when Kevin Roose of the NYT vibe-coded apps, the underlying AI had been trained on existing code for similar tasks. The enthusiasm came from reproduction, not originality.

Where vibe coding works (and where it doesn't)

Great for: Prototyping, internal tools, personal projects, MVPs, learning. Anything where speed matters more than long-term maintainability. If you're building a webapp for yourself or testing an idea, vibe coding saves enormous time. Prompt engineering skills matter more than syntax knowledge in this context.

Dangerous for: Production systems handling money, health data, or personal information. Anything that needs to be production-ready, auditable, and maintained by a team over years. The coding experience of the developer matters here — if you can't review what the AI generated, you can't vouch for it. Financial apps, healthcare systems, and anything processing sensitive user data are not places to skip code review, no matter how confident the AI sounds.

Surprisingly useful for: Education. Beginners learning to code can use vibe coding to see working examples of concepts they're studying, then reverse-engineer the generated code to understand how it works. Several coding bootcamps now incorporate AI-generated code as a teaching tool: students prompt, study the output, then try to reproduce the logic manually.

The middle ground: The most effective approach is using automation from AI-powered tools while still understanding what's being generated. Use AI as a coding assistant that handles the boilerplate and repetitive tasks while you focus on architecture, logic, and review. This is closer to AI-assisted software development than pure vibe coding, and it's where most professional developers land in practice.

Vibe coding by the numbers

Some data points from the past year:

  • 25% of Y Combinator Winter 2025 startups had codebases that were 95%+ AI-generated
  • Collins Word of the Year 2025: "vibe coding" beat out every other candidate
  • Karpathy in December 2025: told Business Insider he's "never felt more behind as a programmer" — the speed of AI advancement means even the person who coined the term is constantly catching up
  • February 2026: Karpathy introduced yet another term for the next evolution of AI-assisted software engineering, suggesting vibe coding itself may already be passé as the practice matures

The trajectory is clear: more people are using artificial intelligence to build apps, the tools are getting dramatically better, and the line between "programmer" and "non-programmer" is blurring.

GitHub's own data shows that Copilot users accept roughly 30% of AI-generated suggestions, and those developers report completing tasks up to 55% faster. That's not vibe coding specifically, that's AI-assisted development broadly, but it shows the direction. When you add fully autonomous tools like Cursor's agent mode or Replit's AI agent, the percentage of generated code in any given project climbs quickly.

The question isn't whether people will use AI to write code; they already do. The question is how much human oversight is enough.

What comes after vibe coding

Karpathy himself has moved on. In February 2026, he introduced a new term for the next phase of AI coding, one that goes beyond prompting LLMs and into full AI agent autonomy, where agents don't just generate code but plan, test, debug, and ship entire features independently.

The programming languages we write in matter less when an AI agent can work in any of them fluently, switching between Python, JavaScript, TypeScript, and Go based on the task. The open-source tools keep improving. And the functionality gap between vibe-coded projects and professionally engineered software keeps shrinking.

For beginners, vibe coding is the on-ramp to building software without years of training. You don't need to know Python or JavaScript to describe what you want. For experienced developers, it's a productivity multiplier that handles the parts of software development you'd rather skip: boilerplate, repetitive CRUD operations, test scaffolding.

The practical advice: start with a prototype, accept the speed, but invest in understanding what was generated before you ship to users. Use vibe coding to explore ideas quickly, then switch to a more careful, AI-assisted workflow when the project needs to be production-ready and maintainable. The best developers in 2026 do both: they vibe code the first draft and engineer the final product.

Either way, vibe coding is no longer a meme; it's a method. And the question is whether you use AI tools wisely enough to avoid the hangover.
