How to Use Claude Code: The Power User's Deep Dive

You've installed Claude Code. You've used it to write some functions, fix a bug, maybe create a pull request. But the gap between casual usage and expert-level output is enormous — and it comes down to a handful of techniques that most developers never learn.
This isn't a tutorial for beginners. If you need installation help, start with our Claude Code tutorial. This is a deep dive for developers who already use Claude Code and want to iterate faster, manage costs, and automate complex tasks across their entire codebase.
Anthropic's own engineering team and power users like Boris Cherny, author of a popular advanced Claude Code guide, have shared the workflows that actually matter. Here's the step-by-step breakdown.
Context window management: the skill that changes everything
Claude Code's context window is its working memory. Every message, every file read, every bash output accumulates in a shared 200K token budget (1M in beta with Opus 4.6). And here's what most people miss: performance degrades as context fills. A session at 90% capacity isn't just slower — it's dumber. Instructions get buried, earlier context gets lost, and the AI-powered coding assistant starts making mistakes it wouldn't make with a clean window.
Monitor with /context: Run this at any point to see where your tokens are going. MCP servers consume tokens just by being available — their tool definitions load on every request whether you use them or not. A few MCP servers can eat 30%+ of your window before you type a single prompt.
Reset with /clear: This wipes conversation history while keeping your CLAUDE.md and file access. Use it between distinct tasks. Finished a feature? /clear. Moving to a different repo? /clear. Stuck in a loop where Claude keeps making the same mistake? Definitely /clear. Stale context full of failed approaches actively hurts the next attempt.
Compress with /compact: A middle ground that summarizes the conversation history, reducing token count while preserving key decisions:
/compact Focus on the authentication implementation
Claude auto-compacts at ~95% capacity, but by then you've been operating in degraded territory. Manual compaction at 70-80% gives better results. Validate the summary after compacting — make sure it captured the important context.
Subagents and code agents: parallel execution without context pollution
Subagents are specialized AI agents that run in isolated context windows. Each one gets a custom system prompt, specific tool access, and independent permissions. When Claude encounters a matching task, it delegates to the subagent, which works independently and returns results — without polluting your main conversation.
Claude Code includes three built-in subagents:
- Explore — runs on Haiku (fast, cheap), read-only. Used for codebase search and analysis.
- Plan — research agent for Plan Mode. Gathers context before presenting a plan.
- General-purpose — full tool access for complex tasks. Handles multi-step operations.
You can create custom subagents with the /agents slash command:
/agents
→ Create new agent
→ User-level (available in all projects)
→ Generate with Claude
→ "A code reviewer that checks for security vulnerabilities,
performance issues, and suggests improvements"
→ Select read-only tools
→ Model: Sonnet
→ Save
Subagent definition files are Markdown files with YAML frontmatter:
---
name: security-reviewer
description: Reviews code for security vulnerabilities
tools: Read, Glob, Grep
model: sonnet
---
You are a security code reviewer. Analyze code for:
- SQL injection, XSS, CSRF vulnerabilities
- Hardcoded secrets or credentials
- Insecure authentication patterns
- Missing input validation
Report each finding with severity, location, and fix.
Save this to ~/.claude/agents/security-reviewer.md for user-level access or .claude/agents/ for project-level. The function of subagents is twofold: they preserve your main context window AND they enforce constraints through limited tool access. This is what makes them different from just having a longer conversation — isolated context means better accuracy.
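Once the file is saved, you can delegate to it explicitly from the main conversation. The prompt below is illustrative, and the directory name is a hypothetical:

```
> Use the security-reviewer subagent to audit everything under src/auth/
```

The review runs inside the subagent's isolated window; only the findings come back into your main context.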
For tasks that need sustained parallelism, Claude Code now offers Agent Teams (research preview) — multiple code agents with independent contexts coordinating across separate sessions.
Advanced slash commands and shortcuts
Beyond the basics, these commands and shortcuts separate power users from everyone else:
| Command | What it does |
|---|---|
| /context | Token usage breakdown — find what's eating your window |
| /agents | Manage custom subagents |
| /statusline | Configure persistent status display |
| /review | AI code review of your changes |
| /pr | Generate a pull request with description |
| /commit | Create a git commit with AI-generated message |
| Shift+Tab (2x) | Cycle through modes: Normal → Plan → Auto-accept |
| Esc (2x) | Toggle auto-accept (skip approval prompts) |
| ? | Show all keyboard shortcuts |
Plan Mode is the workflow Boris Cherny uses himself: start in Plan Mode (Shift+Tab twice), go back and forth with Claude until the plan is right, then switch to Auto-Accept and let Claude execute. In Plan Mode, Claude can only read files, search, and ask questions — it cannot write or modify anything. This forces the agent to think before it acts.
Session management is one of Claude Code's most underused features:
# Continue most recent session (fast — no picker)
claude --continue
# Continue with a new prompt
claude --continue "now add unit tests"
# Browse and select a specific session
claude --resume
# Name sessions for easy retrieval
/rename payment-integration
Sessions are stored locally with full conversation history, tool usage, and results. Use --continue for speed, --resume for precision.
GitHub Actions: automate code review and CI/CD
The official Claude Code GitHub Action lets you tag @claude on any issue or PR and get AI-powered implementation, debugging, or code review. Here's the production setup:
# .github/workflows/claude.yml
name: Claude Code
on:
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]
  pull_request:
    types: [opened, synchronize]

jobs:
  claude:
    runs-on: ubuntu-latest
    steps:
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          claude_args: "--max-turns 10 --model claude-sonnet-4-6"
The quick setup: run /install-github-app inside Claude Code. It walks you through installing the GitHub App and configuring secrets. You need admin permissions on the repository.
Real use cases in CI/CD:
- Automated code review on every PR: Claude reviews changes against your CLAUDE.md standards
- Issue-to-PR automation: Comment @claude implement this on an issue and Claude creates a working pull request
- Daily reports: Schedule a cron job that summarizes yesterday's commits and open issues
- Lint and fix: Automate running lint, fixing issues, and committing the results
Set --max-turns to prevent runaway jobs. 5-10 turns is enough for most reviews. Each run consumes GitHub Actions minutes AND API tokens based on task complexity.
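The daily-report pattern from the list above can be wired up with a scheduled workflow. This is a sketch, not the canonical setup — the cron time and prompt text are assumptions, and the input names (`prompt`, `claude_args`) follow the PR-review example above:

```yaml
# .github/workflows/daily-report.yml (illustrative)
name: Daily report
on:
  schedule:
    - cron: "0 7 * * 1-5"   # weekday mornings, UTC

jobs:
  report:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: "Summarize yesterday's commits and list open issues by priority"
          claude_args: "--max-turns 5 --model claude-sonnet-4-6"
```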
Cost optimization: the pricing math that matters
Claude Code pricing depends on your subscription path:
| Plan | Monthly cost | Best for |
|---|---|---|
| Pro (Claude subscription) | $20 | Light usage, individual devs |
| Max 5x | $100 | Daily usage, single project |
| Max 20x | $200 | Heavy usage, multiple projects |
| Console API | Per-token | CI/CD automation, variable loads |
The API path (Console) charges per token: $15/$75 per million tokens for Opus, $3/$15 for Sonnet. For heavy users, a Claude subscription is almost always cheaper than API pricing.
Model routing: Not every task needs Opus. Subagents can run on Sonnet (cheaper, faster) for exploration and code review. Reserve Opus for complex tasks that need deep reasoning. The built-in Explore subagent already runs on Haiku — the cheapest model — for search and analysis. See our Sonnet vs Opus comparison for the full benchmark breakdown on when each model is worth it.
Context hygiene: Every token in your context window costs money when it gets sent with the next request. /clear between tasks doesn't just improve quality — it reduces cost. A 200K context window being sent repeatedly vs. a 20K clean window is a 10x token difference.
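The arithmetic behind that claim, as a quick sketch (Sonnet input pricing from the table above; the 50-turn session length is a hypothetical):

```python
SONNET_INPUT_PER_MTOK = 3.00  # $ per million input tokens (Sonnet, from the pricing above)

def input_cost(context_tokens: int, turns: int) -> float:
    """Dollar cost of re-sending the same context on every turn of a session."""
    return context_tokens * turns * SONNET_INPUT_PER_MTOK / 1_000_000

full = input_cost(200_000, 50)   # near-full window, 50-turn session
clean = input_cost(20_000, 50)   # cleared window, same session length
print(f"${full:.2f} vs ${clean:.2f}")  # $30.00 vs $3.00, a 10x gap
```

The real numbers vary with caching and output tokens, but the ratio holds: context size multiplies into every subsequent request.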
Caching: The API supports prompt caching with 90% cost reduction on cached content. If you're running Claude Code in automation (CI/CD), structure prompts to maximize cache hits.
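In API automation, caching is opted into per content block via Anthropic's `cache_control` marker. A minimal sketch of wrapping a large, stable system prompt so it can be cached (the prompt text is a placeholder):

```python
def cacheable_system(text: str) -> list[dict]:
    """Wrap a large, stable system prompt so the API can cache it as a prefix."""
    return [{
        "type": "text",
        "text": text,
        "cache_control": {"type": "ephemeral"},  # marks this block for prompt caching
    }]

system = cacheable_system("Project standards: ... (e.g. the contents of CLAUDE.md)")
```

Keep the cacheable prefix byte-identical across requests — any change to it invalidates the cache.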
Batch mode: For non-urgent tasks, the API offers 50% discounts on batch processing. This works well for scheduled reviews or docs generation.
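A batch submission is just a list of independent requests. A sketch of the payload shape, assuming the Message Batches format (`custom_id` plus `params`); the model id and prompts are placeholders:

```python
def batch_request(custom_id: str, prompt: str) -> dict:
    """One entry in a message-batch submission (the 50%-discount tier)."""
    return {
        "custom_id": custom_id,  # your key for matching results back to inputs
        "params": {
            "model": "claude-sonnet-4-6",  # placeholder model id
            "max_tokens": 1024,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

# e.g. nightly docs generation for three modules
requests = [batch_request(f"docs-{i}", f"Document module {i}") for i in range(3)]
```

Results come back asynchronously (within 24 hours), keyed by `custom_id`, which is why this fits scheduled jobs rather than interactive sessions.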
Advanced workflows: prototyping to production
Here are the workflows power users rely on daily across their use cases:
Test-driven development with Claude:
- Ask Claude to write tests from your specs (emphasize: no mocking code that doesn't exist yet)
- Verify the tests fail — validate your assertions are correct
- Commit the tests
- Ask Claude to write code that passes all tests without modifying the test suite
- Deploy a subagent to verify the implementation doesn't overfit
- Commit
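The loop above compresses into a terminal session along these lines (`claude -p` runs a single non-interactive prompt; the prompts and file names are illustrative):

```
claude -p "Write failing tests for the rate-limiter spec. Don't mock modules that don't exist yet."
npm test                 # confirm the new tests fail for the right reasons
git add tests/ && git commit -m "Add failing rate-limiter tests"
claude -p "Make every test pass without modifying anything under tests/"
npm test                 # green before the final commit
```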
Boris Cherny argues this improves output quality 2-3x. The feedback loop is the key — you're giving Claude a way to validate its own work.
Multi-file refactor with Plan Mode:
[Plan Mode] I need to migrate from Express to Fastify across
the entire project. Analyze every file that imports Express,
map out the dependency chain, and create a migration plan.
Review the plan. Iterate until it's right. Then switch to Normal mode and let Claude execute. For a large refactor across dozens of files, this prevents the AI coding assistant from making changes that break other parts of the codebase.
Hooks for automated quality gates:
Hooks let you run custom commands at specific lifecycle points. The most powerful is the Stop hook — it runs every time Claude finishes responding:
{
  "hooks": {
    "Stop": [{
      "hooks": [{
        "type": "command",
        "command": "npm run lint && npm test || exit 2"
      }]
    }]
  }
}
Add this to .claude/settings.json. A Stop hook that exits with code 2 blocks Claude from finishing and feeds the failure output back to it, so lint errors and test failures get fixed before the task is considered complete. No human intervention needed: the agent self-corrects.
IDE and desktop app integration
Claude Code isn't just a CLI tool. You can also use Claude Code through:
- VS Code extension: Adds a Claude panel alongside your editor. Good for visual diff reviews.
- JetBrains plugin: Available for IntelliJ, PyCharm, WebStorm.
- Desktop app: Standalone interface, useful for non-terminal workflows and Slack integration.
- Web at claude.ai: Browser-based access, works for prototyping on any machine.
The terminal remains the most powerful interface. The desktop app and IDE extensions are convenience layers — the same LLM, the same AI tools, just different UIs.
Troubleshooting common issues
Claude keeps making the same mistake: Your context is polluted with failed attempts. Run /clear and start fresh with a cleaner prompt. Take screenshots of the error before clearing so you can reference them.
Hitting rate limits on the Pro plan: Anthropic introduced rate limits to curb heavy background usage. If you're hitting caps, upgrade to Max or switch to the Console API for variable loads.
MCP servers eating your context: Run /context to audit. Disable any MCP servers you're not actively using. Each server's tool definitions load on every request.
Claude Code works poorly on large repos: Create a .claudeignore file (like .gitignore) to exclude generated files, node_modules, and build artifacts. Smaller codebase = better understanding.
Plugin or VS Code extension not loading: Restart the IDE. Claude Code loads subagents and plugins at session start. If you added a plugin file manually, use /agents to reload.
What OpenAI and other AI tool alternatives don't have
Claude Code's competitive advantage is its agentic architecture. OpenAI's Codex runs in the terminal but doesn't have the same subagent system, hooks framework, or depth of GitHub Actions integration. Cursor is an excellent IDE but it's a VS Code fork — you're locked into that editor. GitHub Copilot has the widest adoption but lacks the deep codebase understanding that comes with a 200K+ context window and agentic execution.
Claude Code works best when you treat it as a development environment, not a chatbot. Give it a CLAUDE.md, manage your context, use subagents for isolation, automate with hooks and GitHub Actions, and route models by complexity. That's the difference between using Claude Code casually and being a power user who ships faster.
Related reading
- Claude Code tutorial: from zero to your first pull request — the beginner guide if you're just getting started
- Claude Code — the full overview of what Claude Code is and how it works
- Claude Code pricing — every plan, API cost, and optimization strategy
- Claude Sonnet vs Opus — which model to route to for which tasks
- AI terminal tools — how Claude Code fits into the broader terminal AI ecosystem
- Best AI coding assistant — Claude Code vs Copilot vs Cursor vs the rest
