Boris Cherny, the creator of Claude Code, has shared insights into his development process at Anthropic. His workflow emphasizes parallel operations, continuous learning, and rigorous verification, aiming to boost productivity over time.
Key Takeaways
- Cherny runs multiple Claude Code sessions simultaneously, both locally and remotely.
- He prioritizes Claude Opus 4.5 for its quality, even if it appears slower initially.
- Teams use a `CLAUDE.md` file to document errors and best practices.
- Planning and iterative refinement are crucial before automated editing.
- Verification through feedback loops significantly improves code quality.
Running Parallel Instances for Efficiency
Cherny does not typically customize Claude Code, finding its default settings highly effective. He operates numerous sessions concurrently to maximize his output. This includes running five instances directly in his MacBook's terminal and an additional 5-10 sessions on Anthropic's internal website.
To prevent conflicts between local sessions, each one uses its own distinct Git checkout; this sidesteps issues that can arise from sharing branches or worktrees. Remote sessions are launched from the CLI with `&`, and Cherny frequently uses `--teleport` to move them between environments. However, about 10-20% of these remote sessions end up abandoned due to unexpected issues.
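The one-checkout-per-session setup can be sketched in shell. Everything below is illustrative, not Cherny's actual layout: a local bare repository stands in for the real remote origin so the sketch is self-contained, and the paths and session count are placeholders.

```shell
#!/bin/sh
set -e

# A local bare repo stands in for the real remote origin (illustrative).
base=/tmp/cc-checkouts-demo
rm -rf "$base" && mkdir -p "$base"
git init -q --bare "$base/origin.git"

# One full, independent clone per local session, so parallel
# Claude Code instances never contend over a shared worktree.
for i in 1 2 3; do
    git clone -q "$base/origin.git" "$base/session-$i"
done

ls "$base"
```

Because each clone has its own `.git` directory and worktree, one session's uncommitted edits or branch switches can never interfere with another's.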
Fact Check
Boris Cherny runs up to 15 parallel Claude Code sessions at any given time, split between local terminal instances and Anthropic's website.
Prioritizing Quality with Claude Opus 4.5
Cherny consistently opts for Claude Opus 4.5, specifically the version that includes 'thinking' capabilities, for all his coding tasks. He values its superior quality and reliability over the Sonnet model, despite Opus's slower initial processing speed. He has found that Opus is also more adept at tool use, which ultimately makes it faster overall than its smaller counterpart.
"If my goal is to write a Pull Request, I will use Plan mode, and go back and forth with Claude until I like its plan. From there, I switch into auto-accept edits mode and Claude can usually 1-shot it. A good plan is really important!"
Documenting Learnings and Best Practices
Each development team at Anthropic maintains a dedicated `CLAUDE.md` file in its Git repository. This document serves as a central record of past mistakes Claude Code has made, giving the model the context it needs to avoid repeating them. It also stores team best practices, including style conventions, design guidelines, and pull request templates.
Cherny actively contributes to this knowledge base. He frequently uses the `@.claude` tag on coworkers' pull requests. This action ensures that important learnings from each code review are captured and preserved in the `CLAUDE.md` file, preventing similar issues in the future. The current `CLAUDE.md` file is approximately 2,500 tokens in size.
Context: Tokens in AI
In AI, a 'token' is a unit of text that a language model processes: a word, part of a word, or even a single character. At roughly 2,500 tokens, the `CLAUDE.md` file corresponds to about 1,500-2,000 English words of documented knowledge.
Planning and Iterative Refinement
A core element of Cherny's workflow involves starting every task with a clear plan. He emphasizes refining this plan iteratively with Claude Code before moving into an auto-editing phase. This initial planning stage is critical for achieving high-quality results.
Once the plan is satisfactory, Cherny switches to an auto-accept edits mode. In this mode, Claude can often complete the task in a single attempt, demonstrating the importance of thorough upfront planning. This structured approach helps streamline the development process significantly.
Automating Workflows with Slash Commands
Cherny uses slash commands extensively to automate daily development tasks such as commits, pull requests, code simplification, and verification. These commands activate specific sub-agents designed to handle particular functions. All commands are stored in the `.claude/commands/` directory, which also reduces the need for explicit prompting.
For example, Cherny and Claude regularly use a `/commit-push-pr` slash command multiple times a day. This command incorporates inline Bash scripts to quickly pre-compute information like Git status, minimizing back-and-forth interactions with the model and speeding up the overall process.
Common Slash Commands:
- `/commit-push-pr`: Automates commit, push, and pull request creation.
- `/simplify`: Initiates the code simplification sub-agent.
- `/verify`: Triggers verification tasks.
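Custom slash commands are plain Markdown files, and per the Claude Code documentation a `!`-prefixed backticked command runs in Bash and has its output inlined before the prompt is sent, which is how the pre-computation described above works. The file below is a hypothetical sketch of a command like `/commit-push-pr`, not Cherny's actual file; the frontmatter fields follow the documented schema, but the wording and tool list are assumptions.

```markdown
---
allowed-tools: Bash(git add:*), Bash(git commit:*), Bash(git push:*), Bash(gh pr create:*)
description: Commit current work, push, and open a pull request
---

## Pre-computed context
- Status: !`git status --short`
- Diff: !`git diff HEAD --stat`
- Branch: !`git branch --show-current`

## Task
Write a concise commit message for the changes above, commit them, push the
current branch, and open a pull request with `gh pr create`.
```

Saved as `.claude/commands/commit-push-pr.md`, this becomes available as `/commit-push-pr` in any session within the repository, with the Git context already resolved before the model sees the prompt.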
Ensuring Code Quality and Security
While Claude's generated code is generally well-formatted, occasional inconsistencies can lead to Continuous Integration (CI) failures. To prevent this, Cherny employs a `PostToolUse` hook. This hook automatically runs a formatting command, such as `bun run format || true`, to clean up the code after it has been written or edited, ensuring adherence to style guidelines.
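Hooks live in Claude Code's settings file. A sketch of how such a `PostToolUse` hook might be wired up in `.claude/settings.json` follows; the field names reflect the documented hooks schema, but the matcher and command values here are illustrative assumptions.

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "bun run format || true"
          }
        ]
      }
    ]
  }
}
```

The `|| true` matters: formatting is advisory cleanup, so a formatter failure should never abort the hook or block Claude's edit.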
For security, Cherny rarely uses the `--dangerously-skip-permissions` flag. Instead, he explicitly enables commonly used and safe Bash commands via the `/permissions` setting. This approach avoids unnecessary permission prompts for commands like `bun run build:*`, `bun run test:*`, and `cc:*`. The only exception is for long-running tasks executed within a sandbox environment, where skipping permissions prevents Claude from repeatedly stopping.
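The `/permissions` setting persists into the same settings file. An illustrative allowlist covering the commands named above might look like the following; the `permissions.allow` structure follows the documented settings schema, and the rule patterns are taken from the commands mentioned in this section.

```json
{
  "permissions": {
    "allow": [
      "Bash(bun run build:*)",
      "Bash(bun run test:*)",
      "Bash(cc:*)"
    ]
  }
}
```

Each rule pre-approves one command prefix, so Claude can run builds and tests without prompting while anything outside the allowlist still requires explicit confirmation.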
The Power of Feedback Loops
The most crucial aspect of Cherny's workflow is implementing a robust feedback loop to allow Claude to verify its own work. This involves giving the AI a mechanism to test its output, such as running a Bash command, executing a test suite, or even testing the application through a browser or simulator.
This verification process can improve the quality of the final result by a factor of 2-3. For instance, Claude tests every change made to `claude.ai/code` using a Chrome extension. It opens a browser, tests the user interface, and iterates on the code until both functionality and user experience are satisfactory. This self-correction mechanism allows the team to focus on higher-level tasks like code review and strategic steering, knowing that the code is already in excellent shape when presented.