
How I Plan Complex Projects With Claude Code (Open-Source)

Robin | 5 min read
Last updated: February 17, 2026
Tags: claude-code, planning, open-source, productivity, workflow

My Plans Kept Failing at the Worst Possible Moment

Three months into using Claude Code full-time, I noticed a pattern. My plans looked solid on paper. Clear phases, reasonable timelines, specific outcomes. Then halfway through execution, something would surface that invalidated the entire approach. A dependency I did not check. An assumption that was wrong. A simpler solution I never considered.

Once I started tracking it, the count passed a hundred. Not a hundred plans total - a hundred plans where the critical insight arrived AFTER I started building. The fix was never in the plan itself. It was in what happened before the plan.

Want the complete system? The free 3-pattern guide covers memory, delegation, and planning - the foundations behind my Claude Code setup.

The Root Cause: Discovery Happens Too Late

Traditional planning follows a straight line: Goal, Approach, Steps, Execute. The problem is not bad plans. The problem is that discovery and planning happen in the same step.

You sit down, think about what to build, write phases, estimate time, and start coding. But you have not checked whether someone already solved 80% of this. You have not verified that the API you are counting on actually supports what you need. You have not asked whether there is a fundamentally different approach.

Traditional Planning
  • Goal → Steps → Execute → Discover problems
  • Discovery happens during execution
  • Critical gaps found after commitment
  • Replanning feels like failure

Discovery-First Planning
  • Discover → Constrain → Assume → THEN Plan
  • Discovery happens before any code
  • Gaps found before they cost time
  • Replanning is a built-in trigger

The insight was simple: separate discovery from planning completely. Run discovery first, then plan based on what you actually know.

The Framework: 4 Stages

I formalized this into a framework that Claude Code runs automatically before every non-trivial project. After hundreds of handoffs and iterations, it stabilized into four stages - the original three plus an adversarial hardening pass between Stage 1 and Stage 2.

Universal Planning Framework

  Stage 0: Discovery & Sparring
    12 checks before writing a single line of plan
      ↓
  Stage 1: The Plan
    5 core + 18 conditional sections
      ↓
  Stage 1.5: Adversarial Hardening
    6 perspectives stress-test the draft
      ↓
  Stage 2: Meta Review
    7 final checks + 21 anti-patterns

Stage 0: Before the Plan

This is the most important stage. It runs 12 discovery checks before you write anything. Not all 12 every time - Claude decides which ones matter based on context.

Three checks run on every non-trivial project:

  1. Existing Work Audit - What already exists? I have lost count of how many times Claude found that my codebase already had 70% of what I was about to build.
  2. Feasibility Check - Is this even possible as described? Catches impossible scope before you invest hours.
  3. The AHA Check - Is there a fundamentally better approach? This single check has saved me more time than everything else combined.

The AHA check is where the magic happens. You want a custom CMS? "Considered Strapi? 90% of what you need, 10% of the effort." You want 50 blog posts? "5 deep pillar posts plus derivatives might outperform 50 shallow ones." It forces you to challenge your first idea before committing.
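The selection logic is easy to picture. Here is a minimal sketch of how Stage 0 check selection might be modeled - the check names and context flags are illustrative stand-ins, not the framework's actual identifiers:

```python
# Hypothetical sketch of Stage 0 check selection. Three checks always run
# on non-trivial projects; the rest activate based on project context.
ALWAYS_RUN = ["existing_work_audit", "feasibility_check", "aha_check"]

# Invented flag -> check pairs for illustration only.
CONDITIONAL_CHECKS = {
    "touches_external_api": "api_capability_check",
    "has_deadline": "timeline_reality_check",
    "multi_agent": "delegation_readiness_check",
}

def select_stage0_checks(context: dict) -> list[str]:
    """Core checks first, then any conditional checks the context flags."""
    checks = list(ALWAYS_RUN)
    for flag, check in CONDITIONAL_CHECKS.items():
        if context.get(flag):
            checks.append(check)
    return checks
```

Calling `select_stage0_checks({"touches_external_api": True})` would return the three core checks plus the API capability check.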

Stage 1: The Actual Plan

Five core sections, always required. No exceptions.

  1. Context and Why - The problem in 3 sentences. Not "improve X" but WHY improve X.
  2. Success Criteria - What DONE looks like, including what FAILED looks like. If you cannot write a failure condition, your criteria are too vague.
  3. Assumptions - What you are betting on being true, with a validation method and what happens if you are wrong.
  4. Phases - Work broken into 3-4 hour chunks with binary gates: pass or fail, nothing in between.
  5. Verification - Split into automated and manual. If neither exists, you are shipping blind.

Then 18 conditional sections activate based on domain. Software projects get rollback, risk, and dependencies. Multi-agent systems add delegation and security. Business projects add timeline and stakeholder communication. Claude detects the domain automatically.
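A sketch of how that activation could work, assuming invented domain and section names (the real framework has 18 conditional sections; this shows only the union behavior for multi-domain projects):

```python
# Illustrative mapping only - not the framework's real section catalog.
CONDITIONAL_SECTIONS = {
    "software": {"rollback", "risk", "dependencies"},
    "multi_agent": {"delegation", "security"},
    "business": {"timeline", "stakeholder_communication"},
}

def sections_for(domains: list[str]) -> set[str]:
    """Multi-domain projects get the union of all relevant sections."""
    active: set[str] = set()
    for domain in domains:
        active |= CONDITIONAL_SECTIONS.get(domain, set())
    return active
```

A project detected as both software and business would get rollback, risk, dependencies, timeline, and stakeholder communication.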

Stage 1.5: Adversarial Hardening

Between Stage 1 and Stage 2, the planner agent runs six adversarial perspectives on the draft plan: Outside Observer, Pessimistic Risk Assessor, Pedantic Lawyer, Skeptical Implementer, The Manager, and Devil's Advocate. Each one attacks a different weakness - vague goals, cascade risks, ambiguous gates, first-day blockers, unrealistic scope, invalid core assumptions. The UPF + DSV deep-dive covers the full stress-test pipeline.

Stage 2: The Meta Check

After hardening, seven final checks catch what the plan still missed. Delegation strategy assigns the right model and agent to each phase. Research needs flags which phases need web research during execution. Review gates confirm where to stop and validate. And an anti-pattern scan catches 21 cataloged planning failures split across three groups: 12 core anti-patterns, 5 AI-specific, and 4 quality.

The Deadliest Anti-Patterns

I cataloged the patterns that kept destroying my plans. The three deadliest:

Vague Success - Words like "improve", "better", "enhance" without numbers. If you cannot measure it, you cannot verify it. Fix: add specific thresholds.

Skipping Stage 0 - The most expensive plans I ever wrote skipped discovery entirely. They looked complete but missed something fundamental. The framework now flags any plan that starts at Phase 1.

Zombie Projects - Plans with no failure conditions and no timeout. They never officially fail, they just drain time. Fix: define what FAILED looks like and when to kill it.
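The Vague Success pattern is mechanical enough to sketch as a check. A minimal illustration - the real scan covers 21 cataloged patterns, and the regexes here are my own simplification:

```python
import re

# Vague verbs without any measurable number = unverifiable criteria.
VAGUE_TERMS = re.compile(r"\b(improve|better|enhance)\b", re.IGNORECASE)
HAS_NUMBER = re.compile(r"\d")

def flags_vague_success(success_criteria: str) -> bool:
    """Flag success criteria that use vague verbs with no number to measure."""
    return bool(VAGUE_TERMS.search(success_criteria)) and not HAS_NUMBER.search(success_criteria)
```

"Improve onboarding experience" gets flagged; "Reduce signup time to under 30 seconds" passes because it carries a threshold.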

Quality Grades

  Red Flags: Vague criteria, no Stage 0, assumptions untested
  C - Viable: All 5 core sections + at least 1 conditional section
  B - Solid: Stage 0 done + failure conditions + delegation strategy
  A - Excellent: Stage 0 + sparring + alternatives + gates on all phases
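As pseudologic, the grading ladder reads top-down. A hedged sketch - the field names are invented stand-ins for the framework's actual checklist:

```python
# Hypothetical grading sketch mirroring the quality-grade ladder above.
def grade_plan(plan: dict) -> str:
    """Grade a plan dict; keys are illustrative, not the real schema."""
    if plan.get("core_sections", 0) < 5:
        return "Red Flags"
    if (plan.get("stage0_done") and plan.get("sparring_done")
            and plan.get("alternatives_considered") and plan.get("all_phases_gated")):
        return "A - Excellent"
    if plan.get("stage0_done") and plan.get("failure_conditions") and plan.get("delegation_strategy"):
        return "B - Solid"
    if plan.get("conditional_sections", 0) >= 1:
        return "C - Viable"
    return "Red Flags"
```

A plan with all five core sections and one conditional section grades C; adding Stage 0, failure conditions, and a delegation strategy lifts it to B.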

Results After 6 Months

Before the framework: roughly 40% of my plans needed significant replanning mid-execution. After: that dropped to under 10%. The plans that do need changes hit a built-in replanning trigger instead of a crisis.

The biggest shift was not the plan quality - it was the discovery phase. Stage 0 kills bad ideas early. It finds existing solutions before you reinvent them. It forces you to challenge your first approach. The 30 minutes you spend on discovery saves hours of execution on the wrong thing.

This integrates directly with my memory system for tracking progress across sessions, my CLAUDE.md patterns for project-specific rules, and my hook automations for enforcing quality gates automatically.

It is Open-Source Now

I extracted the framework from my system and published it as a standalone Claude Code package on GitHub. Drop it into any project and Claude runs it automatically for every non-trivial plan.

Install in 30 seconds:

```shell
git clone https://github.com/primeline-ai/universal-planning-framework
cp -r universal-planning-framework/.claude/* your-project/.claude/
```

This gives you the planning rule, four slash commands (/plan-new, /interview-plan, /plan-review, /plan-refine), and a planner agent. Works with any project type - software, content, business, infrastructure.

The framework is domain-agnostic. I use it for feature development, blog content calendars, context management, system migrations, and security audits. The conditional sections adapt automatically.

Evolving Lite integrates this framework with memory, delegation, and hooks into a complete autonomous development system. Free and open source.

This lives in primeline-ai/universal-planning-framework - the planning framework I use. Free, MIT, no build step.

FAQ

Does the Universal Planning Framework work with any Claude Code project?
Yes. The framework is domain-agnostic and detects your project type automatically. It includes conditional sections for software development, AI systems, business strategy, content marketing, and infrastructure. Multi-domain projects get the union of all relevant sections.
How long does Stage 0 discovery take?
Usually 10 to 30 minutes depending on complexity. For simple projects under 3 phases and 2 hours of effort, you can skip Stage 0 entirely. The time investment pays back by catching bad assumptions before you write any code.
Can I use just parts of the framework instead of all three stages?
Yes. The minimal install option gives you just the core planning rule. You can run individual checks from Stage 0 without the full framework, or use only the anti-pattern scan from Stage 2 to review existing plans.
How is this different from other AI planning approaches?
Most approaches focus on better prompts or templates. This framework focuses on what happens BEFORE the plan - the discovery phase that catches gaps early. It evolved from analyzing over a hundred real plans and nearly two hundred session handoffs, not from theory.
