Vibecoding Blueprint by Claude
Complete vibecoding framework - all 13 files from the claude_blueprint directory.
0-overview.md
# Vibecoding: An Overview
## What Is It? Why Does It Matter?
**Version**: 2.1
**TL;DR**: A structured methodology for building software with AI, treating AI models as specialized team members with different roles.
---
## The Problem Vibecoding Solves
Building with AI today is like playing telephone:
- 🔇 **Context loss**: By step 15, the model "forgets" decisions from step 3
- 🎯 **Scope creep**: Vague specs lead to endless back-and-forth
- ❌ **Failed generations**: Models generate code that doesn't integrate
- 💸 **Cost surprises**: No visibility into token spending
- 🔒 **Security risks**: Secrets leak into prompts, no systematic review
- 📉 **No learning**: Each project, same mistakes repeated
Vibecoding fixes this with **structure, discipline, and automation**.
---
## The Core Insight
Good AI work requires:
1. **Crystal clear specs** (no ambiguity)
2. **Right-sized steps** (1-3 hours each, not 5-day marathons)
3. **Regular code reviews** (catch issues early, not at the end)
4. **Context management** (regenerate snapshot every 5 steps)
5. **Role-appropriate models** (don't use $20/MTok model for boilerplate)
6. **Security gates** (review before every sensitive prompt)
7. **Learning loops** (retrospective captures lessons for next project)
Vibecoding is a **framework that enforces all of these**.
---
## How It Works: The 4-Phase Workflow
Phase 1: SPEC (1-2 days) → Phase 2: BLUEPRINT (1 day) → Phase 3: IMPLEMENTATION (3-14 days) → Phase 4: DEPLOY (1 day)
Each phase has specific outputs and success criteria documented in the detailed guides below.
---
## Model Roles (Not Specific Models)
- **Reasoning Model**: o1-pro, Opus 4.5, Gemini 2.0 Thinking
- **Fast Coder**: Sonnet 4.5, GPT-4o, Flash
- **UI Specialist**: Opus 4.5, GPT-4o
- **Code Reviewer**: Gemini 2.0 Pro, Opus 4.5
- **Budget Model**: Haiku 3.5, GPT-4o-mini, Flash
Match the model to the task, not the other way around.
1-model-strategy.md
# Model Strategy & Configuration
## Choosing and Assigning AI Models by Role
**Version**: 2.1
**Key Concept**: Use the right model for each job, not the same model for everything.
---
## Role-Based Model Assignment
| Role | Capability | Example Models | Best For | Cost |
|------|------------|----------------|----------|------|
| **Reasoning Model** | Deep thinking, planning | o1-pro, Opus 4.5 | Spec, architecture | $$$ |
| **Fast Coder** | Rapid code generation | Sonnet 4.5, GPT-4o | Daily implementation | $$ |
| **UI Specialist** | Design sensibility | Opus 4.5, GPT-4o | Frontend, components | $$ |
| **Code Reviewer** | Analytical, thorough | Gemini 2.0 Pro, Opus 4.5 | Bug hunting, testing | $$ |
| **Budget Model** | Cost-effective, fast | Haiku 3.5, GPT-4o-mini | Simple tasks, boilerplate | $ |
## When to Use Each Role
**Reasoning Model**: Spec creation, blueprint generation, complex algorithms, architecture decisions
**Fast Coder**: 95% of implementation steps, standard features, boilerplate, integration
**UI Specialist**: Frontend steps, component design, visual polish, styling
**Code Reviewer**: Code reviews every 3-5 steps, debugging, testing, edge cases
**Budget Model**: Simple formatting, boilerplate, summaries, cost control
## Model Configuration
Create `.vibecoding/model-config.json` with your model choices, costs, and max tokens for each role.
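A minimal sketch of what that file might look like. The field names and zero-value placeholders are illustrative assumptions, not a prescribed schema; fill in your chosen models and your provider's actual per-MTok pricing:

```json
{
  "roles": {
    "reasoning":    { "model": "<your-reasoning-model>", "maxTokens": 8192, "inputPerMTok": 0.0, "outputPerMTok": 0.0 },
    "fastCoder":    { "model": "<your-fast-coder>",      "maxTokens": 4096, "inputPerMTok": 0.0, "outputPerMTok": 0.0 },
    "uiSpecialist": { "model": "<your-ui-model>",        "maxTokens": 4096, "inputPerMTok": 0.0, "outputPerMTok": 0.0 },
    "codeReviewer": { "model": "<your-reviewer>",        "maxTokens": 8192, "inputPerMTok": 0.0, "outputPerMTok": 0.0 },
    "budget":       { "model": "<your-budget-model>",    "maxTokens": 2048, "inputPerMTok": 0.0, "outputPerMTok": 0.0 }
  }
}
```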
## Cost Management Strategy
Budget allocation (example for 15-step project):
- Spec: 1 step, Reasoning (~$0.90)
- Blueprint: 1 step, Reasoning (~$1.20)
- Implementation: 12 steps, Fast Coder (~$1.92)
- Reviews: 3 steps, Code Reviewer (~$0.90)
- Debugging: variable, Fast Coder (~$0.40)
- **Total**: ~$5.32
## Switching Models Mid-Project
Signs it's time to switch:
- "This code doesn't integrate" → Try Code Reviewer
- "The UI looks terrible" → Switch to UI Specialist
- "I can't figure out this bug" → Switch to Code Reviewer
- "We're over budget" → Switch to Budget Model
- "This step is way too hard" → Switch to Reasoning Model
2-token-budget.md
# Token Budget & Cost Management
## Tracking Tokens, Planning Budgets, Staying in Control
**Version**: 2.1
**Key Concept**: Visibility into token usage prevents surprises.
---
## Why Track Tokens?
AI development is cheap but needs monitoring:
- Spec phase: ~$1 (tiny, don't worry)
- Blueprint phase: ~$2 (planning pays off)
- Implementation: $3-10 (varies wildly)
- Reviews/debugging: $1-5 (can accumulate quickly)
Without tracking: $50 spent on a $10 project (5x over budget)
With tracking: you know costs upfront and can optimize intelligently
---
## Budget Template (budget.md)
Create `.vibecoding/budget.md` with:
- Project summary (name, dates, expected duration, total budget)
- Budget allocation (by phase and task)
- Cost per model (with pricing details)
- Daily/session tracking (log after each work session)
- Cost optimization log (document savings)
- Cost alerts & thresholds (green/yellow/red zones)
## Token Estimation Guide
**Spec creation**: 12,000-20,000 tokens
**Blueprint generation**: 12,000-25,000 tokens
**Implementation per step**: 1,500-8,000 tokens (depends on complexity)
**Code review**: 3,000-8,000 tokens per review
**Debugging/fixes**: 1,000-6,000 tokens per bug
**Context regeneration**: 2,000-3,000 tokens every 5 steps
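The per-task figures above can feed a quick planning script. A minimal sketch, assuming the common chars/4 heuristic; `estimate_step_cost` is a hypothetical helper, and real tokenizer counts and prices vary by provider:

```python
# Rough token estimator: ~4 characters per token is a common rule of
# thumb for English text and code. Real tokenizers vary, so treat this
# as a planning estimate, not an exact count.

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def estimate_step_cost(prompt: str, expected_output_tokens: int,
                       input_per_mtok: float, output_per_mtok: float) -> float:
    """Estimate USD cost for one step given per-million-token prices."""
    input_tokens = estimate_tokens(prompt)
    return (input_tokens * input_per_mtok +
            expected_output_tokens * output_per_mtok) / 1_000_000

prompt = "Implement the login endpoint per the spec." * 40
print(estimate_tokens(prompt))                      # planning estimate only
print(round(estimate_step_cost(prompt, 2000, 3.0, 15.0), 4))
```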
## Token Optimization Strategies
1. **Context compression** - Save 30-40% tokens in handoffs
2. **Right-size your model** - Don't use expensive model for simple tasks
3. **Batch similar tasks** - Reduce 20-30% overhead
4. **Reuse expensive outputs** - Save 50%+ by not regenerating
5. **Limit context size** - Save 20-40% by summarizing
## When You're Over Budget
**Tier 1**: Optimization (no quality loss)
- Use Budget Model for remaining simple tasks
- Compress context more aggressively
- Batch remaining tasks
**Tier 2**: Scope reduction (what to cut)
- Drop non-critical tests
- Skip nice-to-have UI polish
- Defer optimization work
**Tier 3**: Plan adjustment (discuss with team)
- Extend timeline to reduce daily token burn
- Split feature into multiple phases
- Use cheaper models (accept some quality loss)
3-security-protocol.md
# Security Protocol
## Preventing Breaches Before They Happen
**Version**: 2.1
**Key Principle**: Security is not an afterthought. Review before every sensitive prompt.
---
## The Core Rule
⛔ **NEVER send these to AI models:**
- Actual API keys or tokens
- Real passwords or credentials
- Production database connection strings
- Private keys or certificates
- Customer/user PII
- Internal company secrets
- Credit card info or payment credentials
**If you see these in your code, REPLACE before pasting.**
---
## Security Checklist (security-checklist.md)
Create `.vibecoding/security-checklist.md` with sections:
- Pre-implementation gates (no secrets in test code)
- Secrets management (no hardcoded credentials)
- Input validation (server-side validation, parameterized queries)
- Authentication & authorization (hashing, JWT, CORS)
- Data protection (encryption at rest, HTTPS, TLS)
- Dependencies & third-party (no vulnerabilities, pinned versions)
- Error handling (no stack traces to users)
- Deployment & operations (prod vs dev, WAF, DDoS)
- Compliance & standards (GDPR, CCPA, SOC 2, PCI)
- Post-deployment (penetration testing, bug bounty)
## Security Gate Steps
Add security reviews at:
- After authentication implementation
- After API endpoint creation
- After database schema changes
- Before deployment to any environment
- After third-party integration
## How to Work with Security Prompts
**DO**: Describe requirements, use env var references, test with sandboxed data
**DON'T**: Paste actual API keys, hardcoded credentials, production passwords
## Secret Detection Checklist
Before pasting code, grep for:
- `password\s*=`
- `api[_-]key\s*=`
- `secret\s*=`
- `token\s*=`
- `"sk_"`
- `Bearer `
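The grep patterns above can be wrapped in a small pre-paste scanner. A minimal sketch; `find_secrets` is a hypothetical helper, and you should tune the pattern list to your own stack:

```python
import re

# Patterns from the checklist above; extend for your own stack.
SECRET_PATTERNS = [
    r"password\s*=",
    r"api[_-]key\s*=",
    r"secret\s*=",
    r"token\s*=",
    r'"sk_',
    r"Bearer ",
]

def find_secrets(code: str) -> list[tuple[int, str]]:
    """Return (line_number, pattern) pairs for suspicious lines."""
    hits = []
    for lineno, line in enumerate(code.splitlines(), start=1):
        for pattern in SECRET_PATTERNS:
            if re.search(pattern, line, flags=re.IGNORECASE):
                hits.append((lineno, pattern))
    return hits

# Demo string, not a real key.
snippet = 'api_key = "sk_live_xxx"\nurl = "https://example.com"'
print(find_secrets(snippet))  # flags line 1; replace before pasting
```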
## Handling Breaches (If Secrets Leaked)
1. IMMEDIATELY revoke the secret
2. Tell your security team
3. Track in decisions.md
4. Prevent recurrence with process updates
4-testing-strategy.md
# Testing Strategy
## Patterns, Pyramid, and Practices
**Version**: 2.1
**Key Principle**: Test in the same step as code, not at the end.
---
## Testing Pyramid
```
          /\
         /  \       E2E Tests (few)
        /----\      - Critical user journeys
       /      \
      /--------\    Integration Tests (some)
     /          \   - API endpoints, DB operations
    /------------\
   /              \   Unit Tests (many)
  /----------------\  - Pure functions, business logic
```
Many unit tests (fast, isolated)
Some integration tests (API, DB)
Few E2E tests (complete flows)
---
## By Step Type
| Step Type | Unit Tests | Integration | E2E | Coverage |
|-----------|-----------|------------|-----|----------|
| Utility function | ✅ Required | ❌ No | ❌ No | 100% |
| API endpoint | ✅ Required | ✅ Required | ❌ No | 90%+ |
| Database model | ✅ Required | ✅ Required | ❌ No | 90%+ |
| UI component | ✅ Required | ⚠️ Snapshot | ❌ No | 80%+ |
| User flow | ❌ No | ✅ Required | ✅ Required | End-to-end |
## Unit Tests
Pattern: Arrange, Act, Assert
- Arrange: Set up test data
- Act: Call the function
- Assert: Verify the result
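The Arrange-Act-Assert pattern in a minimal, self-contained example; `apply_discount` is a hypothetical function used only for illustration:

```python
# Arrange-Act-Assert applied to a small pure function.
# apply_discount is a hypothetical example, not part of any project.

def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # Arrange: set up test data
    price, percent = 80.0, 25.0
    # Act: call the function
    result = apply_discount(price, percent)
    # Assert: verify the result
    assert result == 60.0

def test_apply_discount_rejects_bad_input():
    # Edge case: out-of-range input should raise, not silently pass
    try:
        apply_discount(10.0, 150.0)
        assert False, "expected ValueError"
    except ValueError:
        pass

test_apply_discount()
test_apply_discount_rejects_bad_input()
print("all tests passed")
```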
Coverage targets:
- Business logic: 80%+ line coverage
- Utilities: 90%+ line coverage
- UI components: 70%+ coverage
## Integration Tests
Use a real test database, not mocks
Test API endpoints against that (test) database
Test multiple components together
## E2E Tests
Critical user journeys only
Flow: login → action → verify
Run before deploying to production
## Edge Cases to Test
- Empty input
- Null/undefined
- Boundary values
- Very long input
- Special characters
- Negative numbers
- Concurrent operations
- Network errors
- Permission denied
## Cleanup
Always clean up after tests:
- Reset database before each test
- Clean up temp files after
- Don't leave test data behind
5-phases.md
# The 4 Phases: Detailed Breakdown
## Complete Guide to Spec, Blueprint, Implementation, and Review
**Version**: 2.1
---
## Phase 1: Specification (1-2 Days)
**Goal**: Create spec.md - unambiguous blueprint
**Model Role**: Reasoning Model
**Prompt Template**: Ask ONE question at a time, iterate until spec is crystal clear
**Deliverables**:
- spec.md (source of truth)
- budget.md (initialized with estimates)
- security-checklist.md (initialized for later)
- model-config.json (your model choices)
**Spec Quality Checklist**:
- Clear tech stack choices
- File structure skeleton
- Naming conventions
- State management approach
- Authentication/authorization requirements
- API design (endpoints, formats)
- Data validation rules
- Error handling philosophy
- Security requirements
- Testing framework chosen
- Deployment strategy
- MVP clearly flagged
- Edge cases identified
---
## Phase 2: Blueprint & Planning (1 Day)
**Goal**: Convert spec into 10-20 concrete implementation steps
**Model Role**: Reasoning Model
**Process**:
1. Draft high-level implementation plan
2. Break into iterative chunks (10-20 steps)
3. Review and break down further if needed
4. Ensure each step is 1-3 hours max
**Deliverables**:
- blueprint.md (step-by-step plan)
- prompts.md (ready-to-use generation prompts)
- todo.md (progress checklist)
- context.md (initial state)
- decisions.md (ADR tracking)
- manifest.json (updated)
---
## Phase 3: Implementation (3-14 Days)
**Goal**: Execute each step from prompts.md, test, integrate, commit
**Model Roles**:
- Fast Coder (95% of steps)
- UI Specialist (frontend steps)
- Code Reviewer (reviews every 3-5 steps)
- Budget Model (if cost control needed)
**Step Execution Workflow**:
1. Pre-flight: verify previous step working
2. Prepare: open prompts.md, read success criteria
3. Execute: copy prompt, choose model, run
4. Validate: run tests, verify integration
5. Integrate & commit: git add, git commit
6. Update tracking: check off todo.md, update manifest.json
**Every 5 Steps**:
- Regenerate context.md
**Every 3-5 Steps**:
- Run code review gate with Code Reviewer
**Deliverables**:
- Working implementation (all tests passing)
- Code integrated into app
- Updated context.md and review.md
- Updated todo.md and budget.md
- Git history (one commit per step)
---
## Phase 4: Polish & Deploy (1 Day)
**Goal**: Final checks, deploy, capture learnings
**Steps**:
1. Full test suite (80%+ coverage target)
2. Security audit (complete checklist)
3. Code quality (lint, format)
4. Performance checks
5. Documentation (README, CHANGELOG)
6. Deploy to staging then production
7. Complete retrospective.md
**Deliverables**:
- All tests passing
- Security checklist complete
- Deployed to production
- retrospective.md filled out
- Metrics logged in ~/.vibecoding/metrics.csv
6-automation.md
# Automation & Agent Integration
## Making Coding Agents Handle the Busywork
**Version**: 2.1
---
## Agent-Friendly File Formats
**todo.md**: Agents update checkboxes
```markdown
- [x] Task complete <!-- completed: 2026-01-31 14:30 -->
- [ ] Task pending
- [ ] Task in progress <!-- started: 2026-01-31 15:00 -->
```
**budget.md**: Agents append rows
```markdown
<!-- AGENT-MANAGED: Append row after each session -->
| 2026-02-04 | 5 | Step 3 | Fast Coder | 2,150 | 1,850 | 4,000 | $0.08 | API endpoint |
```
**manifest.json**: Agent-readable project state
```json
{
  "project": "Project Name",
  "currentStep": 1,
  "totalSteps": 15,
  "phase": "implementation",
  "status": "in-progress",
  "lastUpdated": "2026-01-31T14:30:00Z",
  "nextAction": "Execute Step 1 from prompts.md"
}
```
## Agent Commands
Tell your agent once, and it follows the pattern:
"After each step, please:
1. Update todo.md: check off completed step with timestamp
2. Update manifest.json: increment currentStep, update lastUpdated
3. Append row to budget.md with token usage
4. Git commit with message 'Step [N]: [Step Name]'"
## Helper Scripts
Create `.vibecoding/scripts/`:
- `update-manifest.sh` - Update manifest with current state
- `check-step-complete.sh` - Verify step completion criteria
- `generate-context.sh` - Auto-generate context.md
- `token-estimate.py` - Estimate tokens for a file
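As an illustration of what `update-manifest.sh` might do, here is a hypothetical equivalent sketched in Python rather than shell, assuming the manifest.json fields shown in this guide:

```python
import json
from datetime import datetime, timezone

# Hypothetical Python equivalent of update-manifest.sh: bump
# currentStep and refresh lastUpdated in manifest.json.

def advance_step(path: str = ".vibecoding/manifest.json") -> dict:
    with open(path) as f:
        manifest = json.load(f)
    manifest["currentStep"] += 1
    manifest["lastUpdated"] = datetime.now(timezone.utc).isoformat()
    manifest["nextAction"] = f"Execute Step {manifest['currentStep']} from prompts.md"
    with open(path, "w") as f:
        json.dump(manifest, f, indent=2)
    return manifest
```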
---
7-error-recovery.md
# Error Recovery & Debugging
## What to Do When Things Break
**Version**: 2.1
**Key Principle**: Errors are data. Learn from them.
---
## Failure Types & Recovery
**Type 1: Syntax Errors (Easy)**
- Symptoms: Code doesn't run, parser error
- Recovery: Quick fix request, provide error and code
- Prevention: Run immediately, use linter
**Type 2: Logic Errors (Medium)**
- Symptoms: Code runs but behavior is wrong
- Recovery: Describe expected vs actual, provide failing test
- Prevention: Write tests, test edge cases, use Code Reviewer
**Type 3: Integration Failures (Harder)**
- Symptoms: Step works alone but breaks existing code
- Recovery: Show what broke, steps to reproduce, previous code
- Prevention: Run full test suite after every step, integrate immediately
**Type 4: Model Hallucinations (Worst)**
- Symptoms: References non-existent files/functions
- Recovery: Provide accurate file structure, existing code
- Prevention: Provide context header with file list
---
## Debugging Workflow
1. Identify failure type (syntax/logic/integration/hallucination)
2. Capture evidence (error message, minimal repro)
3. Recovery request (use appropriate template)
4. Validate fix (run tests, verify old code still works)
5. Document (update decisions.md)
---
## When to Rollback
Rollback if:
- Same step failing 2+ times
- Fix is more complex than redoing step
- Accumulated too many workarounds
Process: `git revert HEAD` or `git reset --hard [commit]`
---
## Common Error Patterns
| Error | Cause | Fix |
|-------|-------|-----|
| undefined is not a function | Non-existent function | Provide context with file structure |
| Cannot read property X | Object doesn't exist | Add null checks, update ADR |
| Port already in use | Server still running | Kill process or use different port |
| Cannot find module X | Dependency not installed | npm install, update package.json |
| Test timeout | Async not awaited | Add await, return Promise, increase timeout |
| 401 Unauthorized | Token missing/invalid | Check token generation, refresh logic |
8-pivot-protocol.md
# Pivot & Course Correction
## When to Change Direction and How
**Version**: 2.1
**Key Question**: Are we still building the right thing?
---
## Red Flags for Pivoting
- Same step failing 3+ times
- Workarounds accumulating
- Features taking 3x expected effort
- "This would be easier if we had..."
- User/stakeholder feedback contradicts spec
- New dependencies that change everything
---
## Pivot Assessment
Create pivot assessment document:
- Current status (steps done, time invested, budget used)
- Problem identified (what's not working)
- Root cause analysis (spec incomplete? Architecture wrong?)
- Options analysis (continue as-is, partial pivot, full restart)
- Decision (chosen option and rationale)
- Action items (what to update)
---
## Partial Pivot (Most Common)
When some work is good, some needs changing:
1. Tag current state: `git tag pre-pivot-[date]`
2. Document what's working: update context.md
3. Extract reusable code: move to /lib or /utils
4. Update spec.md sections that changed
5. Regenerate blueprint for affected steps
6. Keep code that still applies
7. Update prompts.md
8. Run full test suite
9. Commit: `Pivot: [description]`
10. Update todo.md and manifest.json
---
## Full Restart (Rare)
Only when the current path is fundamentally wrong (30%+ of the code needs redoing)
Process:
1. Tag: `git tag abandoned-[date]`
2. Create new branch: `git checkout -b fresh-start`
3. Write spec.md.v2 with learnings
4. New blueprint, new prompts
5. Communicate to stakeholders
---
## Sunk Cost Recognition
Don't continue bad path because you've invested time.
It's okay to:
- Throw away code that isn't working
- Change direction if better approach found
- Restart if foundations are wrong
- Accept some loss
It's not okay to:
- Keep piling on workarounds
- Ignore major warning signs
- Commit to shipping bad code
- Blame the AI instead of the process
9-retrospective.md
# Retrospective & Learning
## Capturing Lessons for Next Time
**Version**: 2.1
**Timing**: Complete at end of project
---
## Retrospective Template
Create `.vibecoding/retrospective.md` with:
**Executive Summary**: 1-2 paragraph overview
**Metrics Summary**:
| Metric | Planned | Actual | Variance |
|--------|---------|--------|----------|
| Total Steps | 15 | 14 | -1 |
| Duration | 2 weeks | 10 days | -4 days |
| Token Budget | $10 | $8.50 | -15% |
| Pivots | 0 | 1 | +1 |
**What Went Well ✅**:
- Name accomplishment
- Impact
- Learning
- Reusable pattern
**What Didn't Go Well ❌**:
- Name problem
- Impact
- Root cause
- Prevention for next time
**Model Performance Ratings**:
- Reasoning Model: ⭐⭐⭐⭐⭐
- Fast Coder: ⭐⭐⭐⭐
- UI Specialist: ⭐⭐⭐⭐⭐
- Code Reviewer: ⭐⭐⭐⭐⭐
- Budget Model: ⭐⭐⭐
**Process Improvements**:
- For Spec Phase
- For Blueprint Phase
- For Implementation Phase
- For Deployment Phase
**Effective Prompt Patterns**: Patterns that worked
**Patterns to Avoid**: Patterns that didn't work
**Knowledge Gained**:
- Technical insights
- Process insights
- Tool insights
**Recommendations for Similar Projects**:
1. Do thorough spec
2. Isolate risk early
3. Review frequently
4. Security from start
5. Cost tracking
6. Context discipline
**If You Were to Do This Again**:
- Same: keep these approaches
- Different: change these approaches
**Metrics for Next Project**: Log to ~/.vibecoding/metrics.csv
**Learning Library**: Save reusable prompts, specs, blueprints
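Logging to metrics.csv takes only a few lines. A minimal sketch, in which the column names are assumptions; the demo writes to a temp file, while in practice you would point it at `~/.vibecoding/metrics.csv`:

```python
import csv
import os
import tempfile
from datetime import date

# Append one project's metrics row to the cross-project log.
# The column names are illustrative; keep whatever schema your
# metrics.csv already uses.

FIELDS = ["date", "project", "steps", "duration_days", "cost_usd", "pivots"]

def log_project(row: dict, path: str) -> None:
    """Append a row, writing the header only when the file is new."""
    new_file = not os.path.exists(path)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(row)

# Demo against a temp file; use ~/.vibecoding/metrics.csv in practice.
demo_path = os.path.join(tempfile.mkdtemp(), "metrics.csv")
log_project({"date": date.today().isoformat(), "project": "demo",
             "steps": 14, "duration_days": 10, "cost_usd": 8.50, "pivots": 1},
            path=demo_path)
print(open(demo_path).read().splitlines()[0])  # header row
```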
---
## Post-Project Checklist
1. Archive project files
2. Update personal metrics
3. Update model preferences
4. Share learnings
5. Close project with git tag
10-best-practices.md
# Best Practices & Anti-Patterns
## Do's, Don'ts, and Why
**Version**: 2.1
---
## Golden Rules
### Rule 1: Invest Time in Spec
- **❌ DON'T**: "Build me a chat app" (vague, rework inevitable)
- **✅ DO**: Spec with features, persistence, typing, read receipts, offline (clear, testable)
- **Why**: Spec phase ROI is 5-10x. 2 days in spec saves 5-10 days in implementation.
### Rule 2: Break Down Steps Ruthlessly
- **❌ DON'T**: "Implement authentication" (too big, missing things)
- **✅ DO**: Create model → signup endpoint → login endpoint → middleware → logout → tests (each 1-3 hours)
- **Why**: Smaller steps = higher success, easier debugging, more satisfaction
### Rule 3: Test After Every Step
- **❌ DON'T**: Implement steps 1-10, then test (bugs everywhere)
- **✅ DO**: Implement feature + tests per step (bug linked to one step, easy fix)
- **Why**: Compound problems. One broken step breaks all downstream.
### Rule 4: Regenerate Context Every 5 Steps
- **❌ DON'T**: Never regenerate (model forgets Step 3 decisions by Step 10)
- **✅ DO**: Regenerate at step 5, 10, 15 (models stay aligned)
- **Why**: Memory limits. Regeneration resets context at critical moments.
### Rule 5: Review Every 3-5 Steps
- **❌ DON'T**: Implement all code, review at end (overwhelming)
- **✅ DO**: Review every 3-5 steps (catch issues early, easier fixes)
- **Why**: Early detection. One issue is quick to fix. Fifteen are a nightmare.
---
## Best Practices by Phase
**Spec Phase DO**: Ask clarifying questions, document assumptions, prioritize features, define success metrics, test the spec, say no to scope creep
**Spec Phase DON'T**: Rush, leave ambiguity, add features as afterthoughts, ignore edge cases, change spec mid-project
**Blueprint Phase DO**: Review sizing, break down complex steps, build from foundation, integrate always, test throughout, security first
**Blueprint Phase DON'T**: Create 100 tiny steps, create 5 huge steps, skip setup, plan too far ahead, ignore integration
**Implementation DO**: Run tests immediately, commit after every successful step, provide context headers, use right model, keep PRs small, document decisions
**Implementation DON'T**: Batch steps, skip commits, hoard context, use expensive model for simple tasks, accept broken tests
**Security DO**: Review .env before sensitive prompt, use parameterized queries, hash passwords, expire tokens, validate input, use HTTPS, gate reviews
**Security DON'T**: Paste API keys, hardcode credentials, trust user input, use weak passwords, disable CORS, expose stack traces, skip security review
---
## Recovery Best Practices
**DO**: Fix failed tests, rollback if 2+ failures, document lessons, extract learning, test the fix, commit with clear message
**DON'T**: Ignore failing tests, accumulate broken code, blame the model, skip rollbacks, repeat mistakes
---
## Common Mistakes & Fixes
| Mistake | Why | Fix | Prevention |
|---------|-----|-----|-----------|
| Vague spec | Eager to code | Spend 1-2 days iterating | Schedule spec, protect time |
| Too-big steps | Impatience | Break to 1-3 hours | Review sizing 2x |
| Skipped tests | "Later" mentality | Run immediately | Tests = definition of done |
| Context drift | Model forgets | Regenerate every 5 steps | Calendar reminder |
| Over budget | Underestimated | Track daily | Check every session |
| Hallucinations | Insufficient context | Provide file structure | Paste relevant code |
| Integration failures | Isolated testing | Full suite after step | Automation |
| Secrets leaked | Forgot to redact | Grep before prompt | Pre-commit scanning |
workflow_SOP.md
# Vibecoding Workflow SOP (Standard Operating Procedure)
## Complete Guide to Using the Blueprint Files
**Version**: 2.1
---
## 🚀 Project Workflow Sequence
### Phase 1: Planning & Specification (Days 1-2)
1. Create project directory structure
2. Read 0-overview.md and 1-model-strategy.md
3. Copy model-config.json and manifest.json templates
4. Initialize spec.md, budget.md, security-checklist.md
5. Run specification phase with Reasoning Model
6. Validate against Spec Quality Checklist
**✅ Phase 1 Complete**: spec.md, budget.md, security-checklist.md, model-config.json
---
### Phase 2: Architecture & Planning (Day 2-3)
1. Read planning guides from 5-phases.md
2. Generate blueprint with Reasoning Model
3. Review step sizing (1-3 hours each)
4. Generate prompts.md and todo.md
5. Initialize tracking files (context.md, decisions.md, review.md, retrospective.md)
6. Create manifest.json
7. Git commit
**✅ Phase 2 Complete**: spec.md, blueprint.md, prompts.md, todo.md, tracking files
---
### Phase 3: Implementation (Days 3-N)
For each step:
1. Pre-flight: verify previous step working
2. Read prompt from prompts.md
3. Check if context regeneration point (every 5 steps)
4. Run prompt with appropriate model role
5. Test and validate
6. Integrate and commit
7. Update tracking files
8. Every 5 steps: regenerate context.md
9. Every 3-5 steps: run code review gate
**✅ Phase 3 Complete**: All steps implemented, tested, reviewed
---
### Phase 4: Polish & Deploy (Final days)
1. Full test suite (80%+ coverage)
2. Security audit (complete checklist)
3. Final review (no critical issues in review.md)
4. Documentation (README, CHANGELOG)
5. Deploy to staging then production
6. Complete retrospective.md
**✅ Project Complete**: Shipped and learned!
---
## 📚 Key Files
| File | Created | Purpose |
|------|---------|---------|
| spec.md | Phase 1 | Source of truth |
| blueprint.md | Phase 2 | Implementation plan |
| prompts.md | Phase 2 | Code generation prompts |
| todo.md | Phase 2 | Progress tracking |
| context.md | Phase 2+ | Project state snapshot |
| budget.md | Phase 1 | Token/cost tracking |
| security-checklist.md | Phase 1 | Security gates |
| decisions.md | Phase 2+ | ADR records |
| review.md | Phase 3 | Quality reviews |
| retrospective.md | Phase 4 | Learning capture |
| manifest.json | Phase 2 | Agent-readable state |
| model-config.json | Phase 1 | Model assignments |
---
## ⚠️ Common Mistakes to Avoid
- Skip spec phase → scope creep
- Make steps too large → generation fails
- Use mocked data → hides integration issues
- Ignore failing tests → debt accumulates
- Send secrets to models → security breach
- Skip reviews → big surprises
- Let context drift → model forgets
---
## 🔄 Troubleshooting Quick Links
| Problem | Read |
|---------|------|
| "A step failed" | 7-error-recovery.md |
| "Spec is wrong" | 8-pivot-protocol.md |
| "Tests failing" | 4-testing-strategy.md |
| "Over budget" | 2-token-budget.md |
| "Security issue" | 3-security-protocol.md |
| "Model output bad" | 1-model-strategy.md |
---
## 💡 Tips for Success
1. Invest time in spec (the best 1-2 days you'll spend)
2. Break down steps (smaller is always better)
3. Test after every step (don't accumulate debt)
4. Review frequently (catch issues early)
5. Track tokens (know your costs)
6. Regenerate context (keep models aligned)
7. Commit often (git is safety net)
8. Read security checklist (before every prompt)
9. Use right model role (don't overpay)
10. Learn from retrospective (improve next time)
INDEX.md
# Vibecoding Blueprint v2.1 - Complete Index
## 📚 Quick Navigation
**First time?** → Start with [`0-overview.md`](0-overview.md)
**Ready to start a project?** → Read [`workflow_SOP.md`](workflow_SOP.md)
**Need to reference something?** → Find it below
---
## 📖 All Files (What Each Does)
### Core Navigation
| File | Purpose | Read Time | When |
|------|---------|-----------|------|
| **INDEX.md** | You are here | 5 min | Anytime - use to find things |
| **workflow_SOP.md** | Step-by-step execution guide | 15 min | Start of project |
| **0-overview.md** | What is vibecoding? | 5 min | First introduction |
### Learning Guides
| File | Purpose | Read Time | When |
|------|---------|-----------|------|
| **1-model-strategy.md** | Choose AI models by role | 10 min | Before Phase 1 |
| **2-token-budget.md** | Track tokens and costs | 10 min | Before Phase 1 |
| **3-security-protocol.md** | Security best practices | 15 min | Before Phase 1 |
| **4-testing-strategy.md** | Testing patterns | 15 min | Before Phase 3 |
| **5-phases.md** | All 4 phases detailed | 30 min | Reference during project |
### How-To Guides
| File | Purpose | Read Time | When |
|------|---------|-----------|------|
| **6-automation.md** | Agent integration | 10 min | Before Phase 3 |
| **7-error-recovery.md** | Debugging strategies | 15 min | When things break |
| **8-pivot-protocol.md** | Course correction | 10 min | If you need to pivot |
### Retrospectives & Learning
| File | Purpose | Read Time | When |
|------|---------|-----------|------|
| **9-retrospective.md** | Learning template | 10 min | End of project |
| **10-best-practices.md** | Do's and don'ts | 10 min | Reference anytime |
---
## 🎯 Reading Sequences
**Brand New to Vibecoding (First Time)**:
1. 0-overview.md (5 min)
2. workflow_SOP.md (15 min)
3. 1-model-strategy.md (10 min)
4. 5-phases.md (30 min)
5. Start your first project!
**Starting a New Project**:
1. workflow_SOP.md → Phase 1 section
2. 1-model-strategy.md (model config)
3. 2-token-budget.md (create budget.md)
4. 3-security-protocol.md (create checklist)
5. Begin Phase 1: Specification
**In the Middle of a Project**:
1. Check workflow_SOP.md → Your current phase
2. Check manifest.json for current step
3. Read next prompt from prompts.md
4. Check 5-phases.md for context
5. Execute step, continue
**Something Broke**:
1. 7-error-recovery.md (15 min)
2. Identify failure type
3. Follow recovery template
4. Implement fix
5. Document in decisions.md
---
## 🚨 Emergency Links
- Something's broken → [`7-error-recovery.md`](7-error-recovery.md)
- Need to change direction → [`8-pivot-protocol.md`](8-pivot-protocol.md)
- Over budget → [`2-token-budget.md`](2-token-budget.md) → Token Optimization
- Security issue → [`3-security-protocol.md`](3-security-protocol.md)
- Test failing → [`4-testing-strategy.md`](4-testing-strategy.md)
- Can't figure out what to do → [`workflow_SOP.md`](workflow_SOP.md)
---
## ✅ Pre-Project Checklist
- [ ] Read 0-overview.md
- [ ] Read workflow_SOP.md
- [ ] Chosen models and filled model-config.json
- [ ] Created .vibecoding/ folder
- [ ] Initialized git
- [ ] Ready to start spec with Reasoning Model
If YES to all → You're good to go! 🚀
Vibecoding v2.1 - Building software with AI reliably and repeatably