How I Use Claude Code to Ship Better Code Faster
My actual workflow with AI pair programming. Custom skills, autonomous agents, and the specific ways Claude Code has changed how I work.
I've been using Claude Code since its early beta, around May 2025. By then, Anthropic had already shipped the upgraded Claude 3.5 Sonnet (October 2024), which raised its SWE-bench Verified score from 33% to 49%. The model improvements made the tooling genuinely useful.
Now I reach for a coding agent instinctively. Not for everything - but for the kinds of tasks where it saves real time.
This isn't a post about AI hype. It's about what actually works, what doesn't, and how I've configured things to make these tools genuinely useful for real development work.
The Tools
I've tested several AI coding tools. The space is moving fast and the terminology is becoming interchangeable - coding agents, AI pair programmers, agentic IDEs. They all follow similar patterns.
The ones I've spent real time with:
- Claude Code - Anthropic's terminal CLI, pioneered many of the patterns others now use
- OpenCode - Open source alternative I'm currently using for most work. Lets me swap models - I can use Claude, or switch to z.ai GLM 4.7
- Pi Agent - Mario Zechner's minimal coding agent. YOLO by default - no permission prompts, just reads, writes, and runs commands. Minimal system prompt, only four tools
If you want to understand how these tools actually work under the hood, Zechner's blog post on building Pi Agent is excellent. No magic - just filesystem access, command execution, and iteration on errors.
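To make that concrete, here's a toy version of that loop in TypeScript. Everything here is illustrative - callModel is a placeholder for whatever LLM API you'd wire up, and real agents add permission checks, context truncation, and proper tool-call parsing:

```typescript
// Toy agent loop: the model proposes tool calls, we execute them,
// and the results go back into the conversation until it's done.
import { readFileSync, writeFileSync } from "node:fs";
import { execSync } from "node:child_process";

type ToolCall =
  | { tool: "read_file"; path: string }
  | { tool: "write_file"; path: string; content: string }
  | { tool: "run_command"; command: string };

type ModelReply = { done: true; text: string } | ({ done: false } & ToolCall);

// Placeholder - wire up your provider's SDK here.
async function callModel(history: string[]): Promise<ModelReply> {
  throw new Error(`connect a real model API (history: ${history.length} turns)`);
}

function runTool(call: ToolCall): string {
  try {
    if (call.tool === "read_file") return readFileSync(call.path, "utf8");
    if (call.tool === "write_file") {
      writeFileSync(call.path, call.content);
      return `wrote ${call.path}`;
    }
    return execSync(call.command, { encoding: "utf8" });
  } catch (err) {
    // Errors are fed back to the model, which iterates on them.
    return `error: ${String(err)}`;
  }
}

async function agent(task: string): Promise<string> {
  const history = [task];
  while (true) {
    const reply = await callModel(history);
    if (reply.done) return reply.text; // final answer
    history.push(JSON.stringify(reply), runTool(reply));
  }
}
```

That's essentially it. The sophistication lives in the model and the prompt, not the harness.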
The config files are converging too - CLAUDE.md, AGENTS.md, similar concepts. Project-specific instructions that teach the agent your codebase conventions. Vercel's AI SDK repository has both files, a good example of how teams are documenting codebases for AI assistants.
The workflows I describe below apply to any of these tools. The patterns matter more than the specific implementation.
The Setup
Claude Code runs in your terminal. You start it, give it context about your project, and it can read files, write code, run tests, make commits. It has access to your actual codebase, not just snippets you paste in.
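Getting started is a one-liner (the package name below is current as of this writing - check Anthropic's docs if it's moved):

```bash
npm install -g @anthropic-ai/claude-code
cd your-project
claude
```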
The default installation is fine, but the real power comes from customization. I've built a system of custom skills, project-specific instructions, and workflow patterns that make Claude genuinely useful rather than just another autocomplete.
Custom Skills: Teaching Claude How I Work
Skills are reusable prompts that teach Claude specific workflows. Instead of explaining my code review process every time, I load a skill and Claude knows the drill.
Here are the ones I use most:
Code simplifier. After I finish a feature, I run /code-simplifier on the changed files. It looks for DRY violations, unnecessary complexity, dead code, and over-engineering. Often catches things I missed when I was in the weeds.
Code review. Before I commit or open a PR, /code-review audits for bugs, security issues, error handling gaps. It's like having a second pair of eyes that never gets tired or rushes because it's Friday afternoon.
Humanizer. For blog posts and documentation. Identifies AI-sounding patterns and rewrites them to sound more natural. Based on Wikipedia's "Signs of AI writing" guide.
Ironic to use AI to remove AI patterns? Maybe. But it catches the "tapestry of solutions" and "it's not just X, it's Y" constructions that creep in.
Dev workflow. My meta-skill that defines how I work with Claude on projects. It includes documentation conventions, commit message formats, and the plan-execute-simplify-review-test-commit loop I follow.
Building skills is straightforward. Each one is a markdown file with instructions that get loaded into Claude's context when you invoke them. The key is being specific about what you want.
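As an illustration, a simplifier skill can be as small as this (the frontmatter follows Claude Code's skill format; the instructions are a condensed sketch, not my full skill):

```markdown
---
name: code-simplifier
description: Review changed files for unnecessary complexity
---

Review the files I point you at and report, in order of impact:

1. DRY violations - duplicated logic that should be consolidated
2. Dead code - unused exports, unreachable branches
3. Over-engineering - abstractions with a single caller

For each finding, show the location and a suggested simplification.
Do not rewrite anything until I confirm.
```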
Project-Specific Instructions
Every project gets a CLAUDE.md file in the root. This is where I put:
- Essential commands (pnpm dev, pnpm test, etc.)
- Architecture decisions and conventions
- File structure overview
- Common patterns and antipatterns for that codebase
Claude reads this automatically when I start working in a project. Instead of explaining "we use Nuxt 4 with Tailwind v4 and prefer composition API" every session, it's already in context.
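A condensed example of what mine look like (commands and paths are project-specific, obviously):

```markdown
# CLAUDE.md

## Commands
- pnpm dev - start the dev server
- pnpm test - run unit tests
- pnpm lint - ESLint and type checks

## Conventions
- Nuxt 4 with the Composition API; no Options API
- Tailwind v4 utility classes; avoid custom CSS where possible
- One component per file under app/components/
```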
For larger projects, I also keep workstream docs - plans for specific features broken into tasks. Claude can read these, understand what I'm trying to accomplish, and help execute the tasks systematically.
The Actual Workflow
Here's how a typical feature development goes:
1. Planning. I describe what I want to build. Claude asks clarifying questions or I provide more context. We agree on an approach. This conversation matters - taking time here prevents wasted work later.
2. Execution. Claude writes code, I review it. Sometimes I accept it directly. Sometimes I ask for changes. Sometimes I write parts myself and let Claude handle the boilerplate. It's collaborative, not "AI does everything."
3. Simplification. After a chunk of related changes, I run the code simplifier. This catches duplication I introduced, overly clever solutions, and opportunities to consolidate.
4. Review. Once the main work is done, the code review skill looks at everything with fresh eyes: security issues, edge cases I forgot, error handling gaps.
5. Testing. Run the actual tests. Fix what breaks. Claude helps debug failures quickly because it can see the test output and the code simultaneously.
6. Commit. Claude can make commits, but I review the staged changes and message before confirming. Atomic commits, semantic prefixes, one logical change per commit.
This loop repeats. Plan, execute, simplify, review, test, commit. Each cycle produces a small, verified piece of progress.
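The "semantic prefixes" in step 6 are conventional-commit style. A few examples of the shape I aim for:

```
feat(search): add debounced query composable
fix(api): handle empty result sets
refactor(auth): extract token refresh into a utility
```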
What Claude Is Good At
Boilerplate and repetition. Writing the 15th API endpoint that follows the same pattern as the other 14? Claude handles this perfectly. Set up the pattern once, let Claude replicate it.
Exploring unfamiliar codebases. "What does this function do?" "Where is X defined?" "How does the auth flow work?" Claude can read the code and explain it. Faster than grepping around myself.
Refactoring. "Rename this function everywhere" or "Extract this logic into a utility" - Claude can make changes across multiple files consistently.
Test writing. Especially for existing code. Claude can read the implementation and generate reasonable test cases. Not perfect, but a solid starting point.
Documentation. Given code, Claude writes decent docs. Given a task, Claude can update relevant documentation. I still edit the output, but it's faster than starting from blank.
Research and context. "What's the Nuxt 4 way to do X?" Claude knows a lot and can explain trade-offs. Saves trips to Stack Overflow.
What Claude Isn't Good At
Architecture decisions. Claude will confidently suggest approaches that don't fit your constraints. It doesn't know your team, your infrastructure, your maintenance budget. I make architectural calls myself and use Claude to implement them.
Understanding business context. Why does this feature exist? What are the actual user needs? Claude doesn't know. I have to provide that context explicitly.
Knowing when to stop. Claude will happily over-engineer a solution, adding abstractions you don't need. The simplifier skill helps catch this, but I have to actively resist letting Claude build more than necessary.
Catching its own mistakes. Claude doesn't inherently know when it's wrong. The code might run but be subtly incorrect. Tests and review are essential.
Staying current. Training data has a cutoff. For very new frameworks or recent changes, Claude might suggest outdated patterns. I double-check anything I'm not sure about.
Practical Tips
Be specific about what you want. "Make this better" gets vague results. "Reduce the complexity of this function by extracting the validation logic" gets useful results.
Review everything. Don't just accept code because Claude wrote it. Read it. Understand it. You're responsible for what ships.
Commit frequently. Small commits make it easy to revert if Claude goes in a wrong direction. I commit after each successful change rather than letting work accumulate.
Use files over chat. For complex context, write it in a markdown file and tell Claude to read it. Better than trying to explain everything in conversation.
Break big tasks into small ones. Claude handles focused tasks well. "Build the entire user authentication system" is too big. "Create the login form component" is right-sized.
Keep a CLAUDE.md. Project-specific instructions compound over time. Every convention you document is one less thing you have to explain later.
Session Documentation: Memory Across Sessions
One problem with AI assistants is they don't remember previous sessions. You close the terminal, context is gone. Next time you start, Claude doesn't know what you worked on yesterday or what decisions you made.
I solve this by committing session documentation directly to the repo. Every project has a docs/ folder with:
docs/
├── WORKSTREAMS_OVERVIEW.md # Status dashboard - what's done, in progress, pending
├── workstreams/
│ └── feature-name.md # Plans: goal, tasks, decisions, questions
└── sessions/
└── 2026-01-21-feature.md # Logs: what was done, commits, next steps
Workstream docs are plans. They define what we're building, break it into tasks, and capture decisions. When I start a new feature, I create a workstream doc first. Claude reads it and understands the goal.
Session docs are logs. At the end of each working session, I update the session doc with what was accomplished, which commits were made, and what's next. This takes 5 minutes but saves 20 minutes of context-setting next time.
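Session docs don't need much structure. A sketch of what one looks like (contents are illustrative):

```markdown
# 2026-01-21 - Search feature

## Done
- Debounced search composable (commit a1b2c3d)
- Empty-state handling in the results list (commit d4e5f6a)

## Decisions
- Client-side filtering for now; revisit if the dataset grows

## Next
- Keyboard navigation for results
- Update WORKSTREAMS_OVERVIEW.md once search ships
```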
The overview is a quick-reference dashboard. One glance shows what's complete, what's in progress, and what's blocked. I update it at the end of each session.
When I start a new session, Claude reads the overview and recent session docs. It knows where we left off. No re-explaining, no lost context. The documentation becomes project memory.
This also creates a useful audit trail. Months later, I can read the session docs and understand why certain decisions were made. It's like git history but for thought process.
The Honest Assessment
AI pair programming has changed how I work. I'm faster on certain tasks - boilerplate, refactoring, test writing, exploring unfamiliar code. The quality baseline is higher because I have code review built into my workflow.
But the bottleneck has shifted. Writing code is no longer the slow part. Reviewing code, testing properly, and making product decisions - that's where the time goes now.
This creates a tension. The tooling lets me ship "good enough" code faster than ever. But "good enough" compounds. Each quick decision becomes part of the codebase I'll maintain for years. The old calculus of "hand-craft everything" doesn't make sense when AI can write decent implementations in seconds. The new calculus of "ship fast, iterate" doesn't account for the review burden when AI outputs need human validation.
I'm still figuring out the right balance. Some days I let Claude run ahead and clean up afterward. Other days I slow down and think through architecture before writing a line. The right approach depends on what I'm building - throwaway prototype or long-term product.
What's clear is that AI hasn't replaced thinking. Understanding requirements, making design decisions, catching subtle bugs, maintaining code over time - these still require human judgment. Claude is a very capable assistant, not a replacement developer.
If you're a developer who hasn't tried AI coding tools seriously, it's worth experimenting. Start with something low-stakes, build up your prompting skills, and find the patterns that work for you. The tools are getting better fast, and the developers who learn to work with them effectively will have an advantage.
Related
If you're interested in development workflows, I've written about component-driven development and frontend best practices - both areas where having Claude as a second pair of eyes has been particularly useful.
The session documentation approach I mentioned is part of a larger system I use for knowledge management. I wrote about the full setup - including Obsidian integration, daily notes, and automated workflows - in my developer second brain post.