AI-Assisted Development Framework by Claude

Here's the complete framework: three layers that cover planning (gstack), disciplined execution (Superpowers), and session-long context (GSD).


Install the three layers first

Layer 1 — GSD (already done ✓)

Layer 2 — Superpowers (already installed via OpenCode/Gemini, but install for Claude Code explicitly):

# Check if it's already available in Claude Code
ls ~/.claude/skills/ | grep superpowers

# If not, install via the plugin
# In Claude Code: /plugin install superpowers
# Or via npx:
npx superpower-skills --claude --global

Layer 3 — gstack:

git clone --single-branch --depth 1 \
  https://github.com/garrytan/gstack.git \
  ~/.claude/skills/gstack
cd ~/.claude/skills/gstack && ./setup

The mental model

Now the detailed workflow. Note that the layers run here in the reverse of their install order: gstack plans first, Superpowers executes, and GSD wraps the whole session.

Layer 1 — gstack: decide what to build

This runs before any code is written. Its job is to stress-test your idea the way a real engineering org would.

New project:

/office-hours        → Socratic Q&A: what are we building, who for, why now?
/plan-ceo-review     → CEO lens: is this worth building at all?
/plan-eng-review     → Staff engineer lens: architecture, risks, hidden complexity
/plan-design-review  → Design lens: UX, AI slop detection, visual consistency

Existing feature:

/investigate      → Root cause analysis before touching code
/office-hours     → Scope the change, surface assumptions
/plan-eng-review  → Architecture check: will this fit cleanly?

The output is a clear spec. gstack’s /plan-ceo-review is the most underrated feature — it forces the “what are we actually building?” conversation before anyone touches code. Don’t skip it on anything you’ll spend more than 2 hours on.


Layer 2 — Superpowers: execute with discipline

Superpowers owns the build phase. It enforces a rigid seven-phase, TDD-first pipeline spanning brainstorming, planning, testing, implementing, and reviewing, governed by what creator Jesse Vincent calls the “1% Rule”: if there’s even a 1% chance a skill applies, the agent must invoke it.

In practice for your Python/TypeScript stack:

1. Let Superpowers brainstorm the implementation approach
2. It writes tests FIRST (pytest / vitest; see the sketch after this list)
3. Then implements to make tests pass
4. Auto-reviews before handing back to you
5. /qa runs real browser tests if it's a UI
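
For instance, in a Python project the first artifact is a failing test, not implementation code. Here's a minimal red/green sketch of that test-first step, with a hypothetical slugify() as the unit under test (this is illustrative, not output Superpowers itself generates):

# A pytest file showing the red/green order: the tests are authored
# first and fail, then slugify() is written to make them pass.
# pytest runs this file as-is.
import re

def test_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

def test_strips_punctuation():
    assert slugify("What's new?") == "whats-new"

def slugify(text: str) -> str:
    # The implementation, written only after the tests above existed.
    text = re.sub(r"[^\w\s-]", "", text.lower())
    return re.sub(r"\s+", "-", text).strip("-")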

You don’t invoke Superpowers explicitly, and thanks to its hooks you don’t need to think about it: it intercepts Claude’s normal coding flow and enforces the pipeline automatically.


Layer 3 — GSD: maintain context across the whole session

GSD prevents the quality degradation that kills long sessions. Use it as the outer wrapper:

New project:

/gsd:new-project          # interviews you, spawns research agents, creates spec
/gsd:discuss-phase 1      # clarify requirements for phase 1
/gsd:plan-phase 1         # creates PLAN.md — atomic tasks, each fits in ~50% context
/gsd:execute-phase 1      # spawns fresh subagents per task, atomic commits
/gsd:verify-work 1        # goal-backward verification: what must be TRUE?
/gsd:complete-milestone   # archives, tags release

Quick task (no full workflow needed):

/gsd:quick "add dark mode toggle to settings"

Existing project, new feature:

/gsd:map-codebase         # always run this first on brownfield projects
/gsd:new-milestone        # start next version

The combined workflow for a real feature

Here’s exactly how the three layers interact for something like “build a webhook handler for Stripe events” (a sketch of the resulting handler follows the list):

1. /office-hours
   → "What events? What should fail silently vs loudly? Idempotency?"

2. /plan-eng-review
   → "Signature verification before parsing. Queue vs synchronous.
      Replay attacks. Rate limiting."

3. /gsd:new-project  (or /gsd:quick for small features)
   → Creates PROJECT.md with requirements traced to phases

4. /gsd:plan-phase 1
   → Breaks into atomic tasks: schema, handler, tests, retry logic

5. /gsd:execute-phase 1
   → Each task runs in a fresh 200k context window
   → Superpowers enforces: tests written before implementation
   → Atomic git commit per task

6. /gsd:verify-work 1
   → "Does the handler reject invalid signatures? Does it deduplicate
      events? Does it return 200 before processing?"

7. /qa  (gstack)
   → Real browser test of the webhook dashboard UI

8. /review  (gstack)
   → Staff engineer code review

9. /ship  (gstack)
   → PR, changelog, deploy
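
To make the verification targets in step 6 concrete, here's a minimal sketch of the handler shape this workflow converges on. It's illustrative only, not the output of any of these tools; the Flask app, the in-memory dedupe set, and the enqueue_for_processing stub are all assumptions (production would use a durable store and a real queue):

# webhook.py: illustrative sketch of the three properties verified in step 6.
import stripe
from flask import Flask, request

app = Flask(__name__)
ENDPOINT_SECRET = "whsec_..."  # signing secret from the Stripe dashboard

seen_event_ids = set()  # stand-in for a durable store (DB / Redis)

def enqueue_for_processing(event):
    # Stand-in for a real background queue (Celery, SQS, etc.)
    print("queued", event["id"])

@app.post("/stripe/webhook")
def handle_stripe_webhook():
    payload = request.get_data()
    signature = request.headers.get("Stripe-Signature", "")
    try:
        # 1. Reject invalid signatures before trusting the payload
        event = stripe.Webhook.construct_event(payload, signature, ENDPOINT_SECRET)
    except (ValueError, stripe.error.SignatureVerificationError):
        return "invalid signature", 400

    # 2. Deduplicate replayed / retried events by Stripe's event id
    if event["id"] in seen_event_ids:
        return "", 200
    seen_event_ids.add(event["id"])

    # 3. Hand off to a queue and return 200 immediately;
    #    the real work happens asynchronously
    enqueue_for_processing(event)
    return "", 200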

When to skip layers

Task size                        What to use
------------------------------   --------------------------
Typo fix, rename                 Just Claude — no framework
Quick script / automation        /gsd:quick only
New feature (1–2 days)           Superpowers + GSD
New project or complex feature   All 3 layers
One honest caveat: GSD is token-heavy — users have reported a 4:1 overhead ratio. The Max plan is strongly recommended for regular full-workflow GSD usage. Use /gsd:quick liberally for anything under ~4 hours of work, and save the full discuss → plan → execute → verify cycle for things you’re serious about shipping clean.

Tags: AI Development, Claude, GSD, Coding, AI-Assisted Coding, AI-Assisted Development, Framework