Coding with AI: A Practical Workflow from Planning to Code Review

Coding with AI is a skill. Not the "prompt engineering" buzzword kind — the practical kind. Knowing when to use AI, when to ignore it, and how to structure your workflow so that AI-assisted coding actually makes your code better instead of worse.
This is the hands-on guide. We'll walk through the complete workflow: planning → scaffolding → implementation → debugging → testing → code review. Each step with real examples, real tools, and honest advice on where AI helps and where it hurts.
Whether you're a beginner just getting started or a veteran programmer integrating AI into an established workflow, the process is the same. The tools just scale with you.
Step 1: Planning with AI
Most coders skip straight to prompting. Don't. The best AI-assisted coding starts with planning — and AI is surprisingly useful here.
Use AI to explore the problem space
Before you write code, use a chat-based AI model to think through your approach. Describe the feature or bug in natural language and ask for architectural suggestions.
You: I need to build a notification system for a SaaS app.
Users should get email, in-app, and push notifications.
They need per-channel preferences. The backend is
Python/FastAPI with PostgreSQL. What's a good architecture?
Claude: [outlines event-driven architecture with a notification
service, preference store, and per-channel adapters]
This isn't about generating code yet. It's about using the LLM as a thinking partner. Ask follow-up questions: "What are the tradeoffs of a queue-based approach vs. direct dispatch?" "How does this scale to 100K users?" The AI model has seen thousands of similar system designs and can surface patterns you might not have considered.
Planning tools
- ChatGPT — Good for broad architectural discussions. GPT-5 is strong at generating multiple design options with tradeoffs.
- Claude (Anthropic) — Excellent at maintaining long, detailed conversations about system design. Anthropic's AI coding assistant particularly shines at understanding constraints and iterating on plans.
- Gemini — Google's model with a massive context window. Useful when you need to paste existing code and plan changes around it.
Key principle: AI is great at suggesting patterns. It's bad at knowing which pattern fits your specific situation. You still make the decision.
Step 2: Scaffolding and prototyping
Now you generate code. This is where AI code generation shines brightest — creating the skeleton of your project from natural language prompts.
Generate project structure
Use an agentic coding assistant to scaffold your entire project:
# Claude Code example
claude "Create a new Next.js 14 app with:
- TypeScript
- Tailwind CSS
- Prisma ORM with PostgreSQL
- NextAuth.js for authentication
- A /api/users endpoint with CRUD operations
- Basic middleware for rate limiting
Set up the project structure, config files, and a
working docker-compose for local development."
This generates hundreds of lines of code in minutes — project config, database schema, API routes, authentication flow, Docker setup. For a prototype, this gets you from zero to running app faster than any manual approach.
Scaffolding tools
- Claude Code — Terminal-based agent. Give it a detailed prompt and it'll create files, install dependencies, and set up config. Costs ~$6/developer/day on average via the API.
- Cursor — IDE-based agent. Create a new workspace, describe what you want in the composer, and it generates the full project. Pricing starts at $20/mo.
- GitHub Copilot — Best for scaffolding within an existing codebase. The agent mode can create new files and set up modules based on your description. Free tier available; Pro at $10/mo.
- OpenAI Codex CLI — OpenAI's terminal agent. Similar to Claude Code but backed by GPT models. Good for teams already invested in the OpenAI ecosystem.
What to watch for: Generated scaffolding often includes outdated dependency versions. Check your package.json / requirements.txt against current releases. Also verify that AI-generated code doesn't include placeholder secrets or insecure defaults.
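A quick automated pass can catch the placeholder-secret case before it lands in a repo. Here's a minimal sketch — the patterns, file extensions, and the `scan_for_suspect_defaults` name are illustrative assumptions, not a standard tool:

```python
import re
from pathlib import Path

# Patterns that often show up in AI-generated scaffolding and should
# never reach a real repo. Illustrative, not exhaustive.
SUSPECT_PATTERNS = [
    re.compile(r"""(secret|password|api_key)\s*[:=]\s*['"](changeme|password|secret|123)""", re.I),
    re.compile(r"DEBUG\s*=\s*True"),        # insecure default
    re.compile(r"AKIA[0-9A-Z]{16}"),        # AWS access key ID shape
]

SCANNED_SUFFIXES = {".py", ".ts", ".js", ".env", ".yml", ".yaml", ".json"}

def scan_for_suspect_defaults(root: str):
    """Return (file, line number, line) for every suspicious match under root."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in SCANNED_SUFFIXES:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if any(p.search(line) for p in SUSPECT_PATTERNS):
                hits.append((str(path), lineno, line.strip()))
    return hits
```

Run it over the freshly scaffolded directory before the first commit; anything it flags is either a real secret or an insecure default that needs a value from your environment instead.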
Step 3: Implementation with AI-assisted coding
This is the core of the workflow — writing the actual function logic, business rules, and feature code. Here's where the interplay between you and the coding assistant matters most.
Use inline completion for flow
When you're in the zone, inline autocomplete is your best friend. You write the function signature and a comment describing intent. The AI completes the body.
def validate_notification_preferences(preferences: dict) -> dict:
    """Validate user notification preferences.

    Ensures all channels have valid settings and applies
    defaults for missing channels.
    """
    # AI completes from here:
    valid_channels = {"email", "push", "in_app"}
    validated = {}
    for channel in valid_channels:
        if channel in preferences:
            setting = preferences[channel]
            if isinstance(setting, bool):
                validated[channel] = setting
            else:
                validated[channel] = True  # Default to enabled
        else:
            validated[channel] = True  # Default to enabled
    return validated
The key to high-quality generated code from inline completion: write clear function signatures and docstrings first. The more context you give the AI model, the better the code suggestions. This applies whether you're using Copilot, Cursor Tab, or any other coding assistant.
Use chat for complex logic
When the function is non-trivial — custom algorithms, complex state machines, tricky JavaScript async flows — switch to chat mode. Describe what you need in detail:
You: Write a sliding window rate limiter in Python that:
- Uses Redis for distributed state
- Supports configurable windows (1min, 1hr, 1day)
- Returns remaining quota and reset time
- Handles Redis connection failures gracefully (fail open)
- Is async-compatible for FastAPI
This produces a complete module that you then review, test, and integrate. AI-generated code from chat tends to be more complete than inline completions because you can specify edge cases upfront.
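To make the shape concrete, here's a simplified, in-memory version of what such a prompt might produce. A real module would keep the timestamps in Redis (e.g., a sorted set per key) so state is shared across workers; the class and method names here are illustrative:

```python
import time
from collections import defaultdict, deque
from typing import Deque, Dict, Optional, Tuple

class SlidingWindowLimiter:
    """In-memory sliding window rate limiter (single process only).

    A production version would store the per-key timestamps in a Redis
    sorted set; the algorithm below is the same.
    """

    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window = window_seconds
        self._events: Dict[str, Deque[float]] = defaultdict(deque)

    def check(self, key: str, now: Optional[float] = None) -> Tuple[bool, int, float]:
        """Return (allowed, remaining quota, seconds until the window resets)."""
        now = time.monotonic() if now is None else now
        events = self._events[key]
        # Drop timestamps that have fallen out of the window.
        while events and now - events[0] >= self.window:
            events.popleft()
        if len(events) >= self.limit:
            return False, 0, self.window - (now - events[0])
        events.append(now)
        remaining = self.limit - len(events)
        return True, remaining, self.window - (now - events[0])
```

The "fail open" requirement from the prompt would wrap the Redis calls in a try/except that returns `(True, limit, 0.0)` when the connection is down — better to briefly over-serve than to take the API down with the rate limiter.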
Implementation tips for coders
- Write code in small chunks. Don't ask the AI to generate an entire feature at once. Break it into functions and modules. Review each piece before moving on.
- Keep your codebase well-organized. AI tools index your workspace to provide context-aware suggestions. Clean code structure = better AI suggestions. Good syntax, clear naming, modular design — these help the AI help you.
- Use plugins to connect AI to your tools. VS Code extensions, JetBrains plugins, and CLI integrations all improve context. The more your AI coding tools know about your project, the better they perform.
- Iterate fast. If the first generation isn't right, don't start over — refine. Say "make the error handling more robust" or "refactor this to use dependency injection." AI tools are built to iterate. Use them that way.
Step 4: Debugging with AI
Debugging is one of AI's strongest use cases. The pattern is simple: give the model the error, the context, and let it diagnose.
Error diagnosis
You: I'm getting this error in production:
TypeError: Cannot read properties of undefined
(reading 'preferences')
at NotificationService.send (notification.ts:47)
at UserController.update (user.ts:123)
Here's the relevant code: [paste notification.ts]
AI: The issue is on line 47 where you access
user.settings.preferences, but user.settings can be
undefined when a user hasn't configured their profile yet.
Add an optional chain: user.settings?.preferences ?? {}
This works because LLMs have been trained on millions of error patterns and their fixes. For common bugs — null reference errors, off-by-one issues, async race conditions — AI is faster than Stack Overflow.
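The same class of bug appears in Python as an exception on missing nested data, and the suggested fix maps directly. A small sketch — the `user` dict shape mirrors the example above and is illustrative:

```python
def get_preferences(user: dict) -> dict:
    """Python equivalent of `user.settings?.preferences ?? {}`.

    The buggy version, `user["settings"]["preferences"]`, raises a
    KeyError when "settings" is missing and a TypeError (the Python
    analog of the error above) when it is None.
    """
    return (user.get("settings") or {}).get("preferences", {})
```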
Debugging tools and workflow
- Claude Code: Paste the error directly in your terminal session. Claude can read the relevant files, trace the execution path, and suggest fixes in-place.
- ChatGPT: Good for explaining unfamiliar errors. The generative AI model excels at translating cryptic error messages into plain-English explanations.
- Cursor inline chat: Highlight the problematic code, hit Cmd+K, and ask "why is this failing?" The IDE gives the model full file context.
When AI debugging fails
AI struggles with bugs that require understanding runtime state, distributed system interactions, or race conditions across services. If the bug requires reproducing a specific sequence of events, you need real-time monitoring and logging, not an AI's prediction. Machine learning models also struggle with bugs in their own generated code — a weird meta-problem where the model's blind spots create the bugs it can't find.
Step 5: Testing with AI
Test generation is one of the highest-ROI applications of AI-assisted coding. AI can analyze a function and produce comprehensive test suites that would take a human programmer significant time to write manually.
Generating test suites
# Using Claude Code
claude "Write comprehensive pytest tests for the
NotificationService class in src/services/notification.py.
Cover: successful sends for each channel, preference
filtering, error handling for failed deliveries,
rate limiting behavior, and edge cases with empty
preferences."
The model reads your implementation, understands the function signatures and dependencies, and generates tests that actually test meaningful behavior — not just "does it not crash."
Test quality checklist
AI-generated tests need review. Check for:
- Coverage of edge cases — Does it test empty inputs, None values, maximum sizes?
- Meaningful assertions — Is it checking behavior or just that code ran without error?
- Test isolation — Are tests independent? Do they mock external dependencies?
- Realistic test data — Are the test fixtures representative of real-world data?
Don't blindly trust the quality of generated tests. A common failure mode: the AI generates tests that mirror the implementation logic rather than testing behavior. These tests pass but catch nothing.
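Here's what that failure mode looks like in practice, using a condensed version of the preference-validation function from Step 3 (the test names are illustrative):

```python
def validate_notification_preferences(preferences: dict) -> dict:
    valid_channels = {"email", "push", "in_app"}
    return {ch: preferences[ch] if isinstance(preferences.get(ch), bool) else True
            for ch in valid_channels}

# Mirror test: re-implements the same logic to compute the expected
# value, so it can never disagree with the code it's supposed to check.
def test_mirrors_implementation():
    prefs = {"email": False}
    expected = {ch: prefs[ch] if isinstance(prefs.get(ch), bool) else True
                for ch in {"email", "push", "in_app"}}
    assert validate_notification_preferences(prefs) == expected

# Behavior tests: state the contract in independent, concrete terms.
def test_missing_channels_default_to_enabled():
    result = validate_notification_preferences({"email": False})
    assert result == {"email": False, "push": True, "in_app": True}

def test_non_bool_values_are_coerced_to_enabled():
    result = validate_notification_preferences({"push": "yes"})
    assert result["push"] is True
```

When the AI writes both the implementation and the test, a bug in the logic gets copied into the mirror test's expected value and everything passes. The behavior tests pin the expected output down independently, so they'd fail.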
Integration testing
For integration tests — testing how modules work together — AI needs more context. Provide your API contracts, database schema, and expected data flow. Tools like Claude Code that can read your entire codebase do better here because they understand the full system.
Step 6: Code review with AI
The final step: review everything. This includes both AI reviewing your code and you reviewing the AI's code.
AI-powered code review
Several AI coding tools now offer automated code review:
- Cursor Bugbot — Reviews pull requests on GitHub, flags bugs, suggests fixes. Free tier includes limited reviews; Pro at $40/user/mo.
- GitHub Copilot — Native code review in pull requests with AI-powered suggestions.
- Amazon Q Developer — Security scanning that "outperforms leading publicly benchmarkable tools on detection" across popular programming languages.
These catch mechanical issues — security vulnerabilities, unused variables, inconsistent naming, potential null pointer exceptions. They don't catch architectural problems or business logic errors.
Reviewing AI-generated code (the most important step)
Every line of generated code needs human review. Here's what to check:
- Does it actually solve the problem? AI can produce elegant code that doesn't do what you asked.
- Security vulnerabilities — SQL injection, XSS, hardcoded secrets, insecure defaults. These are the most common AI-generated code problems.
- Performance — AI tends toward correctness over optimization. Check for unnecessary loops, N+1 queries, and missing indexes.
- Maintainability — Will another developer understand this in 6 months? Is it using your project's patterns and conventions?
- Dependencies — Did the AI import a library you don't want? Is it using deprecated APIs?
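The N+1 query item from the checklist deserves a concrete illustration: AI-generated loops often issue one query per record when a single batched query would do. A sketch with a fake query counter — `fetch_email`, `fetch_emails`, and `FAKE_DB` are stand-ins for real ORM calls:

```python
query_count = 0

FAKE_DB = {1: "alice@example.com", 2: "bob@example.com", 3: "carol@example.com"}

def fetch_email(user_id):
    global query_count
    query_count += 1          # one round trip per call
    return FAKE_DB[user_id]

def fetch_emails(user_ids):
    global query_count
    query_count += 1          # one round trip for the whole batch
    return {uid: FAKE_DB[uid] for uid in user_ids}

user_ids = [1, 2, 3]

# N+1 pattern AI often generates: one query per user.
emails_slow = [fetch_email(uid) for uid in user_ids]   # 3 queries

# Batched version: one query total.
emails_fast = fetch_emails(user_ids)                   # 1 query
```

With a real ORM, the batched form corresponds to an eager load or a single `WHERE id IN (...)` query — same result, a fraction of the round trips.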
# Common AI-generated anti-pattern: over-engineering
# AI might generate this:
class NotificationStrategyFactory:
    def create_strategy(self, channel_type: str):
        strategies = {
            "email": EmailNotificationStrategy(),
            "push": PushNotificationStrategy(),
            "in_app": InAppNotificationStrategy(),
        }
        return strategies.get(channel_type)

# When this is all you need:
def send_notification(channel: str, message: str):
    senders = {"email": send_email, "push": send_push, "in_app": send_in_app}
    senders[channel](message)
The refactor instinct matters. AI tends to produce verbose, pattern-heavy code. Your job is to simplify it.
Tool pricing and recommendations
Here's what the AI-assisted coding toolkit costs in 2026:
| Tool | Free Tier | Paid | Best For |
|---|---|---|---|
| GitHub Copilot | Yes | $10-39/mo | Beginners, broad IDE support |
| Cursor | Limited | $20-200/mo | Power users, model flexibility |
| Claude Code | — | ~$6/day (API) | Terminal workflow, deep codebase work |
| ChatGPT | Yes | $20-200/mo | Planning, debugging, one-off generation |
| Amazon Q | Yes (50 chats/mo) | Paid tiers | AWS-heavy workflows |
For beginners: start with GitHub Copilot Free. It works in VS Code, is powered by strong language models, and the community around it is massive — tons of tutorials and tips.
For experienced coders: Cursor Pro ($20/mo) gives you the best balance of model access, IDE integration, and agentic capabilities. Or Claude Code if you prefer terminal-based workflows powered by Anthropic's models.
The workflow in practice
Here's what a real coding with AI session looks like:
- 10 minutes planning — Chat with Claude about architecture. Get a clear picture of what you're building.
- 15 minutes scaffolding — Use an agent to generate the project structure, configs, and boilerplate.
- 2 hours implementing — Write code with inline completion. Drop into chat mode for complex functions. Review each module as you go.
- 30 minutes debugging — Fix issues with AI assistance. Paste errors, get fixes, verify manually.
- 30 minutes testing — Generate test suites. Review them for quality. Run them and fix failures.
- 20 minutes code review — Run AI review on your changes. Then manually review all AI-generated code. Check for security, performance, and maintainability.
That's a 4-hour session that would have taken 8-12 hours of pure manual software development. The productivity gain is real — but only if you don't skip the review steps.
Common mistakes to avoid
Don't use AI without understanding the code. If you can't explain what the generated code does, don't ship it. This is especially critical for beginners — use AI to learn, not to avoid learning.
Don't skip tests on generated code. AI-generated code has bugs. Always. The upside is that AI also helps you write those tests faster.
Don't paste sensitive data into public AI services. Use self-hosted or enterprise tiers with privacy guarantees for proprietary codebases. Check your tool's data policies.
Don't fight the AI. If your prompt isn't working after 3 attempts, the problem is probably your prompt. Be more specific. Provide more context. Show examples of what you want.
Don't use one tool for everything. The best workflow combines inline completion (Copilot/Cursor) for writing, chat (Claude/ChatGPT) for thinking, and agents (Codex/Claude Code) for automation. Different AI agents excel at different parts of the workflow.
The bottom line
Coding with AI isn't about replacing your skills. It's about amplifying them. The write code → review → iterate loop gets tighter. The boring parts get automated. The high-quality, creative parts of software development — architecture, design, judgment — become a bigger share of your day.
Start with one tool. Learn it deeply. Build the workflow step by step. And always, always review the code before you ship it.
Related reading
- Best AI coding assistant — pick the right tools for each step of this workflow
- Claude Code tutorial — hands-on with the terminal coding agent
- AI code generation — how the generation step actually works under the hood
- AI code review tools — automate the review step with the right tool
- AI pair programming — the stats and adoption data behind AI coding
- Vibe coding — what happens when you skip the review steps entirely





