Boris Cherny's parallel Claude Code workflow and the gap between power users and average developers

The Parallel Sessions Gap: Why Boris Cherny’s Workflow Reveals the Real AI Productivity Divide

There is a version of AI-assisted development where you open one chat window, type a question, wait, copy some code, and close the tab. A lot of developers are still living there. Then there is Boris Cherny's version, where 10 to 15 Claude sessions run simultaneously, split across terminal and web instances, shipping code from all of them at once. Cherny built Claude Code at Anthropic. The gap between those two pictures is the most interesting thing happening in software development right now.

The Fleet, Not the Tool

When Cherny shared his actual workflow, the detail that stuck with me was not the session count. It was the CLAUDE.md file. Every time Claude makes a mistake on his team’s codebase, they add a rule so it cannot repeat that mistake. His instruction to teammates is specific: “After every correction, end with: Update your CLAUDE.md so you don’t make that mistake again.” The model writes its own corrective rules. The system compounds over time.

That is not AI as autocomplete. That is AI as institutional memory. Most developers have not started thinking in those terms yet.
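To make the pattern concrete, here is a sketch of what such a file might contain. The rules below are invented for illustration; they are not taken from Cherny's actual CLAUDE.md:

```markdown
# CLAUDE.md

## Project conventions
- Use the existing database query helpers; never write raw SQL inline.
- New UI components live in `src/components/`, one file per component.

## Corrections (appended after mistakes)
- Do not mock the auth client in integration tests; use the shared test fixture.
- Run the linter before declaring a task complete.
```

The "Corrections" section is the compounding part: each entry exists because the model got something wrong once, and the rule makes the mistake structurally harder to repeat.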

One number puts that in context: Claude Code now accounts for 4% of all public GitHub commits. That is not a rounding error. That is a measurable shift in how code is reaching production.

The Mental Shift Is The Hard Part

I have been running parallel sessions myself for a few months. My honest take is that the tooling is not the barrier. The mental model is. You have to stop treating each AI interaction as a discrete transaction where you ask something and get something back. Parallel sessions only make sense if you are thinking about your work as a set of independent tasks that can run concurrently without blocking each other. That is a different way of decomposing a problem.

Most developers were trained to work sequentially. Write a function, test it, move on. AI-native workflows require you to think more like a project manager than a solo coder, identifying what can proceed in parallel, assigning context appropriately across sessions, and reviewing output rather than writing from scratch.
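The decomposition this describes can be sketched in a few lines. `run_session` here is a hypothetical stand-in for dispatching a prompt to one AI session; the point is the shape of the workflow, fanning out independent tasks instead of running them in order:

```python
from concurrent.futures import ThreadPoolExecutor

def run_session(task: str) -> str:
    # Hypothetical stand-in for handing a task to one Claude session.
    # In practice this would shell out to a CLI or call an API; here it
    # just echoes the task so the decomposition pattern stays visible.
    return f"done: {task}"

# The project-manager move: identify tasks that do not block each other.
tasks = [
    "add input validation to the signup form",
    "write tests for the billing module",
    "migrate the logging config",
]

# Fan them out concurrently rather than working through them one by one.
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    results = list(pool.map(run_session, tasks))

for r in results:
    print(r)
```

Your job in this picture is the parts the sketch leaves out: choosing a split where the tasks really are independent, giving each session the context it needs, and reviewing what comes back.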

That shift is uncomfortable. It also feels like the actual skill being selected for right now.

What the Benchmark Debate Is Missing

The AI conversation in developer circles is dominated by benchmark comparisons. Which model passes HumanEval at what percentage, which one handles a 200k context window better. Those numbers matter at the margins. But Cherny has not written a single line of SQL in over six months because Claude pulls BigQuery data directly via CLI on his behalf. That is not a benchmark achievement. That is a workflow architecture decision.

The productivity ceiling for most developers is not the model they are using. It is the workflow they have built around it. A better model plugged into a single-session, one-question-at-a-time pattern will not get you anywhere close to what a moderately capable model running across 15 parallel sessions with a well-maintained CLAUDE.md file will produce.

The Power User Divergence Is Already Here

What Cherny’s workflow makes visible is a divergence that is already happening and widening fast. There is a group of developers who have fundamentally reorganized how they work around these tools, treating them as infrastructure rather than features. They are pulling ahead on velocity in ways that are hard to see from the outside because the output just looks like they are very productive people.

The average developer is still evaluating whether AI coding tools are worth adding to their existing workflow. That framing is the problem. The question is not whether to add AI to your workflow. The question is whether you are willing to build a new one around it entirely.

That is a harder ask, and most tooling documentation does not acknowledge it honestly. The CLAUDE.md pattern, the parallel session architecture, the habit of having Claude correct its own future behavior. None of that appears in a getting-started guide. It comes from someone who built the thing and then figured out how to actually use it at scale.

If you are still treating Claude Code as a smarter tab-complete, you are not behind on tooling. You are behind on thinking.

#ClaudeCode #AIEngineering #DeveloperProductivity #SoftwareDevelopment #Anthropic
