
Claude Code Creator Reveals His Workflow and Developers Are Losing Their Minds: The Boris Cherny Method

Hello HaWkers, a viral thread is taking over developer networks this week. Boris Cherny, the creator and head of Claude Code at Anthropic, casually shared his terminal setup and development workflow. What started as a simple post has transformed into a massive discussion about the future of software development.

If you work in development and have not yet seriously experimented with AI-assisted programming, this article will change your perspective.

What Boris Cherny Revealed

The main revelation was surprisingly simple, but its implications are profound for the developer community.

The Creator's Configuration

Boris shared that he exclusively uses the Opus 4.5 model with thinking enabled for all his development work:

Revealed configuration:

  • Model: Claude Opus 4.5 (the most capable model in the lineup, and also the slowest)
  • Thinking mode: Always active
  • Context: Maximum allowed
  • Approach: Extensive delegation

💡 Boris quote: "I use Opus 4.5 with thinking for everything. It's the best coding model I've ever used."
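Outside of Claude Code itself, the same setup maps onto Anthropic's Messages API, where extended thinking is enabled via a `thinking` block on the request. Here is a minimal sketch of what that request shape looks like; the model id and token budgets are assumptions for illustration, so check Anthropic's API docs for current values:

```typescript
// Sketch of a Messages API request with extended thinking enabled.
// Assumption: the model id "claude-opus-4-5" and both token budgets are
// illustrative values, not official recommendations.
interface ThinkingConfig {
  type: "enabled";
  budget_tokens: number; // reasoning budget; must stay below max_tokens
}

interface MessageRequest {
  model: string;
  max_tokens: number;
  thinking?: ThinkingConfig;
  messages: { role: "user" | "assistant"; content: string }[];
}

const request: MessageRequest = {
  model: "claude-opus-4-5", // assumption: Opus 4.5 model id
  max_tokens: 16_000,
  thinking: { type: "enabled", budget_tokens: 8_000 },
  messages: [{ role: "user", content: "Refactor this module for testability." }],
};
```

The key detail is that the thinking budget is carved out of the response allowance, which is why it must be smaller than `max_tokens`.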

Why Opus 4.5 and Not Sonnet?

The choice may seem counterintuitive. Sonnet is faster and cheaper. But Boris explained the logic:

Opus advantages for code:

  • Deeper reasoning about architecture
  • Fewer errors on first attempt
  • Better understanding of complex context
  • More elegant and maintainable solutions

The cost-benefit:

  • Time saved fixing errors > extra model cost
  • Fewer iterations needed
  • Better quality code on first version

The Workflow That Is Going Viral

More than the model choice, the workflow revealed by Boris is generating intense discussions about how developers should interact with AI.

Extensive Delegation

The core philosophy of the workflow is to delegate as much as possible to the model, but in a structured way:

Method principles:

  1. Complete context first: Before asking for any code, provide extensive context about the project, architecture, and constraints

  2. Self-contained tasks: Each interaction should be a complete unit of work, not fragments

  3. Critical review, not micromanagement: Focus on reviewing the final result, not each line during generation

  4. Iteration by refinement: Instead of correcting line by line, request rewrite with specific feedback

Prompt Structure

Boris shared the general structure he uses for development tasks:

Components of an effective prompt:

  • Project context and stack
  • Specific task objective
  • Technical and business constraints
  • Examples of existing code when relevant
  • Clear success criteria
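The components above can be assembled mechanically. Here is a minimal sketch of a prompt builder; the section names and layout are illustrative, not a template Boris published:

```typescript
// Builds a structured task prompt from the components listed above.
// The section layout is a hypothetical illustration, not an official format.
interface TaskPrompt {
  context: string;         // project context and stack
  objective: string;       // specific task objective
  constraints: string[];   // technical and business constraints
  examples?: string;       // existing code, when relevant
  successCriteria: string[];
}

function buildPrompt(p: TaskPrompt): string {
  const sections = [
    `## Context\n${p.context}`,
    `## Objective\n${p.objective}`,
    `## Constraints\n${p.constraints.map((c) => `- ${c}`).join("\n")}`,
  ];
  if (p.examples) sections.push(`## Existing code\n${p.examples}`);
  sections.push(
    `## Success criteria\n${p.successCriteria.map((c) => `- ${c}`).join("\n")}`,
  );
  return sections.join("\n\n");
}

const prompt = buildPrompt({
  context: "React 18 + TypeScript 5 SPA, Zustand for global state",
  objective: "Add optimistic updates to the cart store",
  constraints: ["No new dependencies", "Keep bundle size flat"],
  successCriteria: ["Existing tests pass", "New behavior covered by tests"],
});
```

Keeping the structure in a small helper like this makes the "complete context first" principle a habit rather than an act of discipline on every prompt.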

The Role of Thinking Mode

Thinking mode (Anthropic's extended thinking feature) is central to the workflow:

How Boris uses thinking:

  • For complex architectural decisions
  • When facing hard-to-reproduce bugs
  • For refactorings that affect multiple files
  • When there are important technical trade-offs

Community Reactions

The thread generated thousands of responses and heated debates about the future of the profession.

The Enthusiasts

Many developers reported similar experiences:

Positive feedback:

  • "My productivity tripled since I adopted a similar approach"
  • "I finally understood how to use AI for real code"
  • "The secret is in the quality of context, not the quantity of prompts"

The Skeptics

Others raised important concerns:

Questions raised:

  • Opus 4.5 cost for intensive use
  • Excessive dependence on AI tools
  • Junior developers losing learning opportunities
  • Security of AI-generated code

The Cost Debate

Opus 4.5 is significantly more expensive than alternatives. Boris responded to this criticism:

Cost-benefit analysis:

  • Senior developer time: $100-200/hour
  • Savings of 1-2 hours per day: $100-400/day
  • Extra Opus vs Sonnet cost: ~$20-50/day for intensive use
  • ROI: Clearly positive for experienced professionals
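The arithmetic above can be sanity-checked directly. A quick sketch using the thread's own figures; all inputs are estimates from the discussion, not measured data:

```typescript
// Back-of-the-envelope daily ROI check using the figures quoted above.
// All inputs are the thread's estimates, not measurements.
function dailyRoi(
  hourlyRate: number,           // senior developer rate, $/hour
  hoursSavedPerDay: number,
  extraModelCostPerDay: number, // Opus over Sonnet, $/day
): number {
  return hourlyRate * hoursSavedPerDay - extraModelCostPerDay;
}

// Worst case from the source: $100/h, 1 h saved, $50/day extra model cost.
const worstCase = dailyRoi(100, 1, 50); // 50
// Best case: $200/h, 2 h saved, $20/day extra model cost.
const bestCase = dailyRoi(200, 2, 20);  // 380
```

Even the pessimistic combination of inputs comes out positive, which is the core of Boris's response to the cost criticism.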

Practical Lessons To Apply Today

Regardless of the model you use, there are applicable lessons from Boris's workflow.

1. Invest in Context

Before asking for code, explain:

```markdown
## Project Context
- Stack: React 18, TypeScript 5, Tailwind CSS
- Architecture: Component-based with custom hooks
- State: Zustand for global, React Query for server state
- Tests: Vitest + React Testing Library

## Conventions
- Functional components only
- Props typed with interfaces (not types)
- Custom hooks prefixed with use
- Tests co-located with components
```

2. Ask for Complete Units

Instead of:

"Give me a hook for data fetching"

Prefer:

"Create a useUserData hook that: fetches user data from API /users/:id, implements cache with stale-while-revalidate, handles loading/error/success states, includes unit tests, follows our code conventions."
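For illustration, here is a framework-free sketch of one requirement from that prompt: a stale-while-revalidate cache that serves the cached value immediately and refreshes stale entries in the background. The class and its API are hypothetical, not Boris's code or actual Claude output:

```typescript
// Framework-free sketch of a stale-while-revalidate cache.
// Hypothetical illustration of one requirement from the prompt above.
type Entry<T> = { value: T; fetchedAt: number };

class SwrCache<T> {
  private store = new Map<string, Entry<T>>();
  private fetcher: (key: string) => Promise<T>;
  private staleMs: number; // age after which an entry counts as stale

  constructor(fetcher: (key: string) => Promise<T>, staleMs: number) {
    this.fetcher = fetcher;
    this.staleMs = staleMs;
  }

  // Serve the cached value immediately; if it is stale, revalidate in the
  // background so the next caller sees fresh data.
  async get(key: string): Promise<T> {
    const entry = this.store.get(key);
    if (entry) {
      if (Date.now() - entry.fetchedAt > this.staleMs) {
        void this.refresh(key); // fire-and-forget revalidation
      }
      return entry.value; // never block on a revalidation
    }
    return this.refresh(key); // cold cache: wait for the first fetch
  }

  private async refresh(key: string): Promise<T> {
    const value = await this.fetcher(key);
    this.store.set(key, { value, fetchedAt: Date.now() });
    return value;
  }
}
```

In a real React hook the loading/error/success states would wrap this logic, but the caching behavior is the part worth spelling out in the prompt because it is where vague requests produce the most divergent results.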

3. Review Strategically

Don't micromanage generation. Instead:

Effective review process:

  • Run the generated code
  • Check if it meets functional requirements
  • Review critical security points
  • Request specific refinements if necessary

Implications For The Future

Boris's workflow represents a paradigm shift in how experienced developers work.

Skills That Gain Value

Architecture and design:

  • Understanding complex systems
  • Making trade-off decisions
  • Communicating context effectively

Review and curation:

  • Identifying problems in generated code
  • Evaluating quality and maintainability
  • Integrating solutions into existing systems

Effective prompting:

  • Structuring clear requests
  • Providing relevant context
  • Iterating based on results

Skills That Lose Relevance

Syntax and memorization:

  • Memorizing specific APIs
  • Remembering boilerplate patterns
  • Mastering syntax of multiple languages

Mechanical coding:

  • Writing repetitive code
  • Implementing well-documented patterns
  • Low cognitive complexity tasks

How To Start Experimenting

If you want to test a similar workflow, here is a practical roadmap.

Week 1: Fundamentals

Objectives:

  • Set up Claude Code or similar
  • Experiment with small tasks
  • Document what works and what doesn't

Exercises:

  • Request test generation for existing code
  • Refactor complex function with assistance
  • Debug problem with full context

Week 2: Scale

Objectives:

  • Increase task complexity
  • Develop context templates
  • Measure productivity impact

Exercises:

  • Complete feature with AI
  • Integration between multiple files
  • Systematic critical review

Week 3: Refinement

Objectives:

  • Identify patterns that work for you
  • Optimize prompts based on experience
  • Define when to use and when not to use AI

Final Reflection

The workflow revealed by Boris Cherny is not about replacing developers with AI. It's about increasing the capacity of experienced developers to deliver value.

Key takeaways:

  • Quality of context beats quantity of prompts
  • More capable models can have better ROI despite cost
  • Effective delegation requires clarity and structure
  • Critical review remains human responsibility

The era of the developer who types code line by line is evolving into the era of the developer who orchestrates intelligent systems. Those who adapt to this new reality will have significant competitive advantage.

If you want to explore more tools and productivity techniques for developers, I recommend the article AI Tools For Developers in 2026, where you will find the best available options.

Let's go! 🦅
