chore: initial backup of Claude Code configuration
Includes: CLAUDE.md, settings.json, agents, commands, rules, skills, hooks, contexts, evals, get-shit-done, plugin configs (installed list and marketplace sources). Excludes credentials, runtime caches, telemetry, session data, and plugin binary cache.
1
get-shit-done/VERSION
Normal file
@@ -0,0 +1 @@
1.26.0
722
get-shit-done/bin/gsd-tools.cjs
Normal file
@@ -0,0 +1,722 @@
#!/usr/bin/env node

/**
 * GSD Tools — CLI utility for GSD workflow operations
 *
 * Replaces repetitive inline bash patterns across ~50 GSD command/workflow/agent files.
 * Centralizes: config parsing, model resolution, phase lookup, git commits, summary verification.
 *
 * Usage: node gsd-tools.cjs <command> [args] [--raw]
 *
 * Atomic Commands:
 *   state load                          Load project config + state
 *   state json                          Output STATE.md frontmatter as JSON
 *   state update <field> <value>        Update a STATE.md field
 *   state get [section]                 Get STATE.md content or section
 *   state patch --field val ...         Batch update STATE.md fields
 *   state begin-phase --phase N --name S --plans C          Update STATE.md for new phase start
 *   state signal-waiting --type T --question Q --options "A|B" --phase P   Write WAITING.json signal
 *   state signal-resume                 Remove WAITING.json signal
 *   resolve-model <agent-type>          Get model for agent based on profile
 *   find-phase <phase>                  Find phase directory by number
 *   commit <message> [--files f1 f2]    Commit planning docs
 *   verify-summary <path>               Verify a SUMMARY.md file
 *   generate-slug <text>                Convert text to URL-safe slug
 *   current-timestamp [format]          Get timestamp (full|date|filename)
 *   list-todos [area]                   Count and enumerate pending todos
 *   verify-path-exists <path>           Check file/directory existence
 *   config-ensure-section               Initialize .planning/config.json
 *   history-digest                      Aggregate all SUMMARY.md data
 *   summary-extract <path> [--fields]   Extract structured data from SUMMARY.md
 *   state-snapshot                      Structured parse of STATE.md
 *   phase-plan-index <phase>            Index plans with waves and status
 *   websearch <query>                   Search web via Brave API (if configured)
 *     [--limit N] [--freshness day|week|month]
 *
 * Phase Operations:
 *   phase next-decimal <phase>          Calculate next decimal phase number
 *   phase add <description>             Append new phase to roadmap + create dir
 *   phase insert <after> <description>  Insert decimal phase after existing
 *   phase remove <phase> [--force]      Remove phase, renumber all subsequent
 *   phase complete <phase>              Mark phase done, update state + roadmap
 *
 * Roadmap Operations:
 *   roadmap get-phase <phase>           Extract phase section from ROADMAP.md
 *   roadmap analyze                     Full roadmap parse with disk status
 *   roadmap update-plan-progress <N>    Update progress table row from disk (PLAN vs SUMMARY counts)
 *
 * Requirements Operations:
 *   requirements mark-complete <ids>    Mark requirement IDs as complete in REQUIREMENTS.md
 *     Accepts: REQ-01,REQ-02 or REQ-01 REQ-02 or [REQ-01, REQ-02]
 *
 * Milestone Operations:
 *   milestone complete <version>        Archive milestone, create MILESTONES.md
 *     [--name <name>]
 *     [--archive-phases]                Move phase dirs to milestones/vX.Y-phases/
 *
 * Validation:
 *   validate consistency                Check phase numbering, disk/roadmap sync
 *   validate health [--repair]          Check .planning/ integrity, optionally repair
 *
 * Progress:
 *   progress [json|table|bar]           Render progress in various formats
 *
 * Todos:
 *   todo complete <filename>            Move todo from pending to completed
 *
 * Scaffolding:
 *   scaffold context --phase <N>        Create CONTEXT.md template
 *   scaffold uat --phase <N>            Create UAT.md template
 *   scaffold verification --phase <N>   Create VERIFICATION.md template
 *   scaffold phase-dir --phase <N>      Create phase directory
 *     --name <name>
 *
 * Frontmatter CRUD:
 *   frontmatter get <file> [--field k]  Extract frontmatter as JSON
 *   frontmatter set <file> --field k    Update single frontmatter field
 *     --value jsonVal
 *   frontmatter merge <file>            Merge JSON into frontmatter
 *     --data '{json}'
 *   frontmatter validate <file>         Validate required fields
 *     --schema plan|summary|verification
 *
 * Verification Suite:
 *   verify plan-structure <file>        Check PLAN.md structure + tasks
 *   verify phase-completeness <phase>   Check all plans have summaries
 *   verify references <file>            Check @-refs + paths resolve
 *   verify commits <h1> [h2] ...        Batch verify commit hashes
 *   verify artifacts <plan-file>        Check must_haves.artifacts
 *   verify key-links <plan-file>        Check must_haves.key_links
 *
 * Template Fill:
 *   template fill summary --phase N     Create pre-filled SUMMARY.md
 *     [--plan M] [--name "..."]
 *     [--fields '{json}']
 *   template fill plan --phase N        Create pre-filled PLAN.md
 *     [--plan M] [--type execute|tdd]
 *     [--wave N] [--fields '{json}']
 *   template fill verification          Create pre-filled VERIFICATION.md
 *     --phase N [--fields '{json}']
 *
 * State Progression:
 *   state advance-plan                  Increment plan counter
 *   state record-metric --phase N       Record execution metrics
 *     --plan M --duration Xmin
 *     [--tasks N] [--files N]
 *   state update-progress               Recalculate progress bar
 *   state add-decision --summary "..."  Add decision to STATE.md
 *     [--phase N] [--rationale "..."]
 *     [--summary-file path] [--rationale-file path]
 *   state add-blocker --text "..."      Add blocker
 *     [--text-file path]
 *   state resolve-blocker --text "..."  Remove blocker
 *   state record-session                Update session continuity
 *     --stopped-at "..."
 *     [--resume-file path]
 *
 * Compound Commands (workflow-specific initialization):
 *   init execute-phase <phase>          All context for execute-phase workflow
 *   init plan-phase <phase>             All context for plan-phase workflow
 *   init new-project                    All context for new-project workflow
 *   init new-milestone                  All context for new-milestone workflow
 *   init quick <description>            All context for quick workflow
 *   init resume                         All context for resume-project workflow
 *   init verify-work <phase>            All context for verify-work workflow
 *   init phase-op <phase>               Generic phase operation context
 *   init todos [area]                   All context for todo workflows
 *   init milestone-op                   All context for milestone operations
 *   init map-codebase                   All context for map-codebase workflow
 *   init progress                       All context for progress workflow
 */

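// Example invocations (illustrative only; commands and flags as documented
// above — paths and arguments here are hypothetical):
//   node gsd-tools.cjs state json --raw
//   node gsd-tools.cjs find-phase 3
//   node gsd-tools.cjs commit "docs: update roadmap" --files .planning/ROADMAP.md
//   node gsd-tools.cjs init execute-phase 2 --cwd /path/to/project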
const fs = require('fs');
const path = require('path');
const { error } = require('./lib/core.cjs');
const state = require('./lib/state.cjs');
const phase = require('./lib/phase.cjs');
const roadmap = require('./lib/roadmap.cjs');
const verify = require('./lib/verify.cjs');
const config = require('./lib/config.cjs');
const template = require('./lib/template.cjs');
const milestone = require('./lib/milestone.cjs');
const commands = require('./lib/commands.cjs');
const init = require('./lib/init.cjs');
const frontmatter = require('./lib/frontmatter.cjs');
const profilePipeline = require('./lib/profile-pipeline.cjs');
const profileOutput = require('./lib/profile-output.cjs');

// ─── CLI Router ───────────────────────────────────────────────────────────────

async function main() {
  const args = process.argv.slice(2);

  // Optional cwd override for sandboxed subagents running outside project root.
  let cwd = process.cwd();
  const cwdEqArg = args.find(arg => arg.startsWith('--cwd='));
  const cwdIdx = args.indexOf('--cwd');
  if (cwdEqArg) {
    const value = cwdEqArg.slice('--cwd='.length).trim();
    if (!value) error('Missing value for --cwd');
    args.splice(args.indexOf(cwdEqArg), 1);
    cwd = path.resolve(value);
  } else if (cwdIdx !== -1) {
    const value = args[cwdIdx + 1];
    if (!value || value.startsWith('--')) error('Missing value for --cwd');
    args.splice(cwdIdx, 2);
    cwd = path.resolve(value);
  }

  if (!fs.existsSync(cwd) || !fs.statSync(cwd).isDirectory()) {
    error(`Invalid --cwd: ${cwd}`);
  }

  const rawIndex = args.indexOf('--raw');
  const raw = rawIndex !== -1;
  if (rawIndex !== -1) args.splice(rawIndex, 1);

  const command = args[0];

  if (!command) {
    error('Usage: gsd-tools <command> [args] [--raw] [--cwd <path>]\nCommands: state, resolve-model, find-phase, commit, verify-summary, verify, frontmatter, template, generate-slug, current-timestamp, list-todos, verify-path-exists, config-ensure-section, init');
  }

  switch (command) {
    case 'state': {
      const subcommand = args[1];
      if (subcommand === 'json') {
        state.cmdStateJson(cwd, raw);
      } else if (subcommand === 'update') {
        state.cmdStateUpdate(cwd, args[2], args[3]);
      } else if (subcommand === 'get') {
        state.cmdStateGet(cwd, args[2], raw);
      } else if (subcommand === 'patch') {
        const patches = {};
        for (let i = 2; i < args.length; i += 2) {
          const key = args[i].replace(/^--/, '');
          const value = args[i + 1];
          if (key && value !== undefined) {
            patches[key] = value;
          }
        }
        state.cmdStatePatch(cwd, patches, raw);
      } else if (subcommand === 'advance-plan') {
        state.cmdStateAdvancePlan(cwd, raw);
      } else if (subcommand === 'record-metric') {
        const phaseIdx = args.indexOf('--phase');
        const planIdx = args.indexOf('--plan');
        const durationIdx = args.indexOf('--duration');
        const tasksIdx = args.indexOf('--tasks');
        const filesIdx = args.indexOf('--files');
        state.cmdStateRecordMetric(cwd, {
          phase: phaseIdx !== -1 ? args[phaseIdx + 1] : null,
          plan: planIdx !== -1 ? args[planIdx + 1] : null,
          duration: durationIdx !== -1 ? args[durationIdx + 1] : null,
          tasks: tasksIdx !== -1 ? args[tasksIdx + 1] : null,
          files: filesIdx !== -1 ? args[filesIdx + 1] : null,
        }, raw);
      } else if (subcommand === 'update-progress') {
        state.cmdStateUpdateProgress(cwd, raw);
      } else if (subcommand === 'add-decision') {
        const phaseIdx = args.indexOf('--phase');
        const summaryIdx = args.indexOf('--summary');
        const summaryFileIdx = args.indexOf('--summary-file');
        const rationaleIdx = args.indexOf('--rationale');
        const rationaleFileIdx = args.indexOf('--rationale-file');
        state.cmdStateAddDecision(cwd, {
          phase: phaseIdx !== -1 ? args[phaseIdx + 1] : null,
          summary: summaryIdx !== -1 ? args[summaryIdx + 1] : null,
          summary_file: summaryFileIdx !== -1 ? args[summaryFileIdx + 1] : null,
          rationale: rationaleIdx !== -1 ? args[rationaleIdx + 1] : '',
          rationale_file: rationaleFileIdx !== -1 ? args[rationaleFileIdx + 1] : null,
        }, raw);
      } else if (subcommand === 'add-blocker') {
        const textIdx = args.indexOf('--text');
        const textFileIdx = args.indexOf('--text-file');
        state.cmdStateAddBlocker(cwd, {
          text: textIdx !== -1 ? args[textIdx + 1] : null,
          text_file: textFileIdx !== -1 ? args[textFileIdx + 1] : null,
        }, raw);
      } else if (subcommand === 'resolve-blocker') {
        const textIdx = args.indexOf('--text');
        state.cmdStateResolveBlocker(cwd, textIdx !== -1 ? args[textIdx + 1] : null, raw);
      } else if (subcommand === 'record-session') {
        const stoppedIdx = args.indexOf('--stopped-at');
        const resumeIdx = args.indexOf('--resume-file');
        state.cmdStateRecordSession(cwd, {
          stopped_at: stoppedIdx !== -1 ? args[stoppedIdx + 1] : null,
          resume_file: resumeIdx !== -1 ? args[resumeIdx + 1] : 'None',
        }, raw);
      } else if (subcommand === 'begin-phase') {
        const phaseIdx = args.indexOf('--phase');
        const nameIdx = args.indexOf('--name');
        const plansIdx = args.indexOf('--plans');
        state.cmdStateBeginPhase(
          cwd,
          phaseIdx !== -1 ? args[phaseIdx + 1] : null,
          nameIdx !== -1 ? args[nameIdx + 1] : null,
          plansIdx !== -1 ? parseInt(args[plansIdx + 1], 10) : null,
          raw
        );
      } else if (subcommand === 'signal-waiting') {
        const typeIdx = args.indexOf('--type');
        const qIdx = args.indexOf('--question');
        const optIdx = args.indexOf('--options');
        const phaseIdx = args.indexOf('--phase');
        state.cmdSignalWaiting(
          cwd,
          typeIdx !== -1 ? args[typeIdx + 1] : null,
          qIdx !== -1 ? args[qIdx + 1] : null,
          optIdx !== -1 ? args[optIdx + 1] : null,
          phaseIdx !== -1 ? args[phaseIdx + 1] : null,
          raw
        );
      } else if (subcommand === 'signal-resume') {
        state.cmdSignalResume(cwd, raw);
      } else {
        state.cmdStateLoad(cwd, raw);
      }
      break;
    }

    case 'resolve-model': {
      commands.cmdResolveModel(cwd, args[1], raw);
      break;
    }

    case 'find-phase': {
      phase.cmdFindPhase(cwd, args[1], raw);
      break;
    }

    case 'commit': {
      const amend = args.includes('--amend');
      const filesIndex = args.indexOf('--files');
      // Collect all positional args between command name and first flag,
      // then join them — handles both quoted ("multi word msg") and
      // unquoted (multi word msg) invocations from different shells
      const endIndex = filesIndex !== -1 ? filesIndex : args.length;
      const messageArgs = args.slice(1, endIndex).filter(a => !a.startsWith('--'));
      const message = messageArgs.join(' ') || undefined;
      const files = filesIndex !== -1 ? args.slice(filesIndex + 1).filter(a => !a.startsWith('--')) : [];
      commands.cmdCommit(cwd, message, files, raw, amend);
      break;
    }

    case 'verify-summary': {
      const summaryPath = args[1];
      const countIndex = args.indexOf('--check-count');
      const checkCount = countIndex !== -1 ? parseInt(args[countIndex + 1], 10) : 2;
      verify.cmdVerifySummary(cwd, summaryPath, checkCount, raw);
      break;
    }

    case 'template': {
      const subcommand = args[1];
      if (subcommand === 'select') {
        template.cmdTemplateSelect(cwd, args[2], raw);
      } else if (subcommand === 'fill') {
        const templateType = args[2];
        const phaseIdx = args.indexOf('--phase');
        const planIdx = args.indexOf('--plan');
        const nameIdx = args.indexOf('--name');
        const typeIdx = args.indexOf('--type');
        const waveIdx = args.indexOf('--wave');
        const fieldsIdx = args.indexOf('--fields');
        template.cmdTemplateFill(cwd, templateType, {
          phase: phaseIdx !== -1 ? args[phaseIdx + 1] : null,
          plan: planIdx !== -1 ? args[planIdx + 1] : null,
          name: nameIdx !== -1 ? args[nameIdx + 1] : null,
          type: typeIdx !== -1 ? args[typeIdx + 1] : 'execute',
          wave: waveIdx !== -1 ? args[waveIdx + 1] : '1',
          fields: fieldsIdx !== -1 ? JSON.parse(args[fieldsIdx + 1]) : {},
        }, raw);
      } else {
        error('Unknown template subcommand. Available: select, fill');
      }
      break;
    }

    case 'frontmatter': {
      const subcommand = args[1];
      const file = args[2];
      if (subcommand === 'get') {
        const fieldIdx = args.indexOf('--field');
        frontmatter.cmdFrontmatterGet(cwd, file, fieldIdx !== -1 ? args[fieldIdx + 1] : null, raw);
      } else if (subcommand === 'set') {
        const fieldIdx = args.indexOf('--field');
        const valueIdx = args.indexOf('--value');
        frontmatter.cmdFrontmatterSet(cwd, file, fieldIdx !== -1 ? args[fieldIdx + 1] : null, valueIdx !== -1 ? args[valueIdx + 1] : undefined, raw);
      } else if (subcommand === 'merge') {
        const dataIdx = args.indexOf('--data');
        frontmatter.cmdFrontmatterMerge(cwd, file, dataIdx !== -1 ? args[dataIdx + 1] : null, raw);
      } else if (subcommand === 'validate') {
        const schemaIdx = args.indexOf('--schema');
        frontmatter.cmdFrontmatterValidate(cwd, file, schemaIdx !== -1 ? args[schemaIdx + 1] : null, raw);
      } else {
        error('Unknown frontmatter subcommand. Available: get, set, merge, validate');
      }
      break;
    }

    case 'verify': {
      const subcommand = args[1];
      if (subcommand === 'plan-structure') {
        verify.cmdVerifyPlanStructure(cwd, args[2], raw);
      } else if (subcommand === 'phase-completeness') {
        verify.cmdVerifyPhaseCompleteness(cwd, args[2], raw);
      } else if (subcommand === 'references') {
        verify.cmdVerifyReferences(cwd, args[2], raw);
      } else if (subcommand === 'commits') {
        verify.cmdVerifyCommits(cwd, args.slice(2), raw);
      } else if (subcommand === 'artifacts') {
        verify.cmdVerifyArtifacts(cwd, args[2], raw);
      } else if (subcommand === 'key-links') {
        verify.cmdVerifyKeyLinks(cwd, args[2], raw);
      } else {
        error('Unknown verify subcommand. Available: plan-structure, phase-completeness, references, commits, artifacts, key-links');
      }
      break;
    }

    case 'generate-slug': {
      commands.cmdGenerateSlug(args[1], raw);
      break;
    }

    case 'current-timestamp': {
      commands.cmdCurrentTimestamp(args[1] || 'full', raw);
      break;
    }

    case 'list-todos': {
      commands.cmdListTodos(cwd, args[1], raw);
      break;
    }

    case 'verify-path-exists': {
      commands.cmdVerifyPathExists(cwd, args[1], raw);
      break;
    }

    case 'config-ensure-section': {
      config.cmdConfigEnsureSection(cwd, raw);
      break;
    }

    case 'config-set': {
      config.cmdConfigSet(cwd, args[1], args[2], raw);
      break;
    }

    case 'config-set-model-profile': {
      config.cmdConfigSetModelProfile(cwd, args[1], raw);
      break;
    }

    case 'config-get': {
      config.cmdConfigGet(cwd, args[1], raw);
      break;
    }

    case 'history-digest': {
      commands.cmdHistoryDigest(cwd, raw);
      break;
    }

    case 'phases': {
      const subcommand = args[1];
      if (subcommand === 'list') {
        const typeIndex = args.indexOf('--type');
        const phaseIndex = args.indexOf('--phase');
        const options = {
          type: typeIndex !== -1 ? args[typeIndex + 1] : null,
          phase: phaseIndex !== -1 ? args[phaseIndex + 1] : null,
          includeArchived: args.includes('--include-archived'),
        };
        phase.cmdPhasesList(cwd, options, raw);
      } else {
        error('Unknown phases subcommand. Available: list');
      }
      break;
    }

    case 'roadmap': {
      const subcommand = args[1];
      if (subcommand === 'get-phase') {
        roadmap.cmdRoadmapGetPhase(cwd, args[2], raw);
      } else if (subcommand === 'analyze') {
        roadmap.cmdRoadmapAnalyze(cwd, raw);
      } else if (subcommand === 'update-plan-progress') {
        roadmap.cmdRoadmapUpdatePlanProgress(cwd, args[2], raw);
      } else {
        error('Unknown roadmap subcommand. Available: get-phase, analyze, update-plan-progress');
      }
      break;
    }

    case 'requirements': {
      const subcommand = args[1];
      if (subcommand === 'mark-complete') {
        milestone.cmdRequirementsMarkComplete(cwd, args.slice(2), raw);
      } else {
        error('Unknown requirements subcommand. Available: mark-complete');
      }
      break;
    }

    case 'phase': {
      const subcommand = args[1];
      if (subcommand === 'next-decimal') {
        phase.cmdPhaseNextDecimal(cwd, args[2], raw);
      } else if (subcommand === 'add') {
        phase.cmdPhaseAdd(cwd, args.slice(2).join(' '), raw);
      } else if (subcommand === 'insert') {
        phase.cmdPhaseInsert(cwd, args[2], args.slice(3).join(' '), raw);
      } else if (subcommand === 'remove') {
        const forceFlag = args.includes('--force');
        phase.cmdPhaseRemove(cwd, args[2], { force: forceFlag }, raw);
      } else if (subcommand === 'complete') {
        phase.cmdPhaseComplete(cwd, args[2], raw);
      } else {
        error('Unknown phase subcommand. Available: next-decimal, add, insert, remove, complete');
      }
      break;
    }

    case 'milestone': {
      const subcommand = args[1];
      if (subcommand === 'complete') {
        const nameIndex = args.indexOf('--name');
        const archivePhases = args.includes('--archive-phases');
        // Collect --name value (everything after --name until next flag or end)
        let milestoneName = null;
        if (nameIndex !== -1) {
          const nameArgs = [];
          for (let i = nameIndex + 1; i < args.length; i++) {
            if (args[i].startsWith('--')) break;
            nameArgs.push(args[i]);
          }
          milestoneName = nameArgs.join(' ') || null;
        }
        milestone.cmdMilestoneComplete(cwd, args[2], { name: milestoneName, archivePhases }, raw);
      } else {
        error('Unknown milestone subcommand. Available: complete');
      }
      break;
    }

    case 'validate': {
      const subcommand = args[1];
      if (subcommand === 'consistency') {
        verify.cmdValidateConsistency(cwd, raw);
      } else if (subcommand === 'health') {
        const repairFlag = args.includes('--repair');
        verify.cmdValidateHealth(cwd, { repair: repairFlag }, raw);
      } else {
        error('Unknown validate subcommand. Available: consistency, health');
      }
      break;
    }

    case 'progress': {
      const subcommand = args[1] || 'json';
      commands.cmdProgressRender(cwd, subcommand, raw);
      break;
    }

    case 'stats': {
      const subcommand = args[1] || 'json';
      commands.cmdStats(cwd, subcommand, raw);
      break;
    }

    case 'todo': {
      const subcommand = args[1];
      if (subcommand === 'complete') {
        commands.cmdTodoComplete(cwd, args[2], raw);
      } else {
        error('Unknown todo subcommand. Available: complete');
      }
      break;
    }

    case 'scaffold': {
      const scaffoldType = args[1];
      const phaseIndex = args.indexOf('--phase');
      const nameIndex = args.indexOf('--name');
      const scaffoldOptions = {
        phase: phaseIndex !== -1 ? args[phaseIndex + 1] : null,
        name: nameIndex !== -1 ? args.slice(nameIndex + 1).join(' ') : null,
      };
      commands.cmdScaffold(cwd, scaffoldType, scaffoldOptions, raw);
      break;
    }

    case 'init': {
      const workflow = args[1];
      switch (workflow) {
        case 'execute-phase':
          init.cmdInitExecutePhase(cwd, args[2], raw);
          break;
        case 'plan-phase':
          init.cmdInitPlanPhase(cwd, args[2], raw);
          break;
        case 'new-project':
          init.cmdInitNewProject(cwd, raw);
          break;
        case 'new-milestone':
          init.cmdInitNewMilestone(cwd, raw);
          break;
        case 'quick':
          init.cmdInitQuick(cwd, args.slice(2).join(' '), raw);
          break;
        case 'resume':
          init.cmdInitResume(cwd, raw);
          break;
        case 'verify-work':
          init.cmdInitVerifyWork(cwd, args[2], raw);
          break;
        case 'phase-op':
          init.cmdInitPhaseOp(cwd, args[2], raw);
          break;
        case 'todos':
          init.cmdInitTodos(cwd, args[2], raw);
          break;
        case 'milestone-op':
          init.cmdInitMilestoneOp(cwd, raw);
          break;
        case 'map-codebase':
          init.cmdInitMapCodebase(cwd, raw);
          break;
        case 'progress':
          init.cmdInitProgress(cwd, raw);
          break;
        default:
          error(`Unknown init workflow: ${workflow}\nAvailable: execute-phase, plan-phase, new-project, new-milestone, quick, resume, verify-work, phase-op, todos, milestone-op, map-codebase, progress`);
      }
      break;
    }

    case 'phase-plan-index': {
      phase.cmdPhasePlanIndex(cwd, args[1], raw);
      break;
    }

    case 'state-snapshot': {
      state.cmdStateSnapshot(cwd, raw);
      break;
    }

    case 'summary-extract': {
      const summaryPath = args[1];
      const fieldsIndex = args.indexOf('--fields');
      const fields = fieldsIndex !== -1 ? args[fieldsIndex + 1].split(',') : null;
      commands.cmdSummaryExtract(cwd, summaryPath, fields, raw);
      break;
    }

    case 'websearch': {
      const query = args[1];
      const limitIdx = args.indexOf('--limit');
      const freshnessIdx = args.indexOf('--freshness');
      await commands.cmdWebsearch(query, {
        limit: limitIdx !== -1 ? parseInt(args[limitIdx + 1], 10) : 10,
        freshness: freshnessIdx !== -1 ? args[freshnessIdx + 1] : null,
      }, raw);
      break;
    }

    // ─── Profiling Pipeline ────────────────────────────────────────────────

    case 'scan-sessions': {
      const pathIdx = args.indexOf('--path');
      const sessionsPath = pathIdx !== -1 ? args[pathIdx + 1] : null;
      const verboseFlag = args.includes('--verbose');
      const jsonFlag = args.includes('--json');
      await profilePipeline.cmdScanSessions(sessionsPath, { verbose: verboseFlag, json: jsonFlag }, raw);
      break;
    }

    case 'extract-messages': {
      const sessionIdx = args.indexOf('--session');
      const sessionId = sessionIdx !== -1 ? args[sessionIdx + 1] : null;
      const limitIdx = args.indexOf('--limit');
      const limit = limitIdx !== -1 ? parseInt(args[limitIdx + 1], 10) : null;
      const pathIdx = args.indexOf('--path');
      const sessionsPath = pathIdx !== -1 ? args[pathIdx + 1] : null;
      const projectArg = args[1];
      if (!projectArg || projectArg.startsWith('--')) {
        error('Usage: gsd-tools extract-messages <project> [--session <id>] [--limit N] [--path <dir>]\nRun scan-sessions first to see available projects.');
      }
      await profilePipeline.cmdExtractMessages(projectArg, { sessionId, limit }, raw, sessionsPath);
      break;
    }

    case 'profile-sample': {
      const pathIdx = args.indexOf('--path');
      const sessionsPath = pathIdx !== -1 ? args[pathIdx + 1] : null;
      const limitIdx = args.indexOf('--limit');
      const limit = limitIdx !== -1 ? parseInt(args[limitIdx + 1], 10) : 150;
      const maxPerIdx = args.indexOf('--max-per-project');
      const maxPerProject = maxPerIdx !== -1 ? parseInt(args[maxPerIdx + 1], 10) : null;
      const maxCharsIdx = args.indexOf('--max-chars');
      const maxChars = maxCharsIdx !== -1 ? parseInt(args[maxCharsIdx + 1], 10) : 500;
      await profilePipeline.cmdProfileSample(sessionsPath, { limit, maxPerProject, maxChars }, raw);
      break;
    }

    // ─── Profile Output ──────────────────────────────────────────────────

    case 'write-profile': {
      const inputIdx = args.indexOf('--input');
      const inputPath = inputIdx !== -1 ? args[inputIdx + 1] : null;
      if (!inputPath) error('--input <analysis-json-path> is required');
      const outputIdx = args.indexOf('--output');
      const outputPath = outputIdx !== -1 ? args[outputIdx + 1] : null;
      profileOutput.cmdWriteProfile(cwd, { input: inputPath, output: outputPath }, raw);
      break;
    }

    case 'profile-questionnaire': {
      const answersIdx = args.indexOf('--answers');
      const answers = answersIdx !== -1 ? args[answersIdx + 1] : null;
      profileOutput.cmdProfileQuestionnaire({ answers }, raw);
      break;
    }

    case 'generate-dev-preferences': {
      const analysisIdx = args.indexOf('--analysis');
      const analysisPath = analysisIdx !== -1 ? args[analysisIdx + 1] : null;
      const outputIdx = args.indexOf('--output');
      const outputPath = outputIdx !== -1 ? args[outputIdx + 1] : null;
      const stackIdx = args.indexOf('--stack');
      const stack = stackIdx !== -1 ? args[stackIdx + 1] : null;
      profileOutput.cmdGenerateDevPreferences(cwd, { analysis: analysisPath, output: outputPath, stack }, raw);
      break;
    }

    case 'generate-claude-profile': {
      const analysisIdx = args.indexOf('--analysis');
      const analysisPath = analysisIdx !== -1 ? args[analysisIdx + 1] : null;
      const outputIdx = args.indexOf('--output');
      const outputPath = outputIdx !== -1 ? args[outputIdx + 1] : null;
      const globalFlag = args.includes('--global');
      profileOutput.cmdGenerateClaudeProfile(cwd, { analysis: analysisPath, output: outputPath, global: globalFlag }, raw);
      break;
    }

    case 'generate-claude-md': {
      const outputIdx = args.indexOf('--output');
      const outputPath = outputIdx !== -1 ? args[outputIdx + 1] : null;
      const autoFlag = args.includes('--auto');
      const forceFlag = args.includes('--force');
      profileOutput.cmdGenerateClaudeMd(cwd, { output: outputPath, auto: autoFlag, force: forceFlag }, raw);
      break;
    }

    default:
      error(`Unknown command: ${command}`);
  }
}

main();
709
get-shit-done/bin/lib/commands.cjs
Normal file
@@ -0,0 +1,709 @@
/**
|
||||
* Commands — Standalone utility commands
|
||||
*/
|
||||
const fs = require('fs');
|
||||
const path = require('path');
|
||||
const { execSync } = require('child_process');
|
||||
const { safeReadFile, loadConfig, isGitIgnored, execGit, normalizePhaseName, comparePhaseNum, getArchivedPhaseDirs, generateSlugInternal, getMilestoneInfo, getMilestonePhaseFilter, resolveModelInternal, stripShippedMilestones, extractCurrentMilestone, toPosixPath, output, error, findPhaseInternal } = require('./core.cjs');
|
||||
const { extractFrontmatter } = require('./frontmatter.cjs');
|
||||
const { MODEL_PROFILES } = require('./model-profiles.cjs');
|
||||
|
||||
function cmdGenerateSlug(text, raw) {
|
||||
if (!text) {
|
||||
error('text required for slug generation');
|
||||
}
|
||||
|
||||
const slug = text
|
||||
.toLowerCase()
|
||||
.replace(/[^a-z0-9]+/g, '-')
|
||||
.replace(/^-+|-+$/g, '');
|
||||
|
||||
const result = { slug };
|
||||
output(result, raw, slug);
|
||||
}
|
||||
|
||||
function cmdCurrentTimestamp(format, raw) {
|
||||
const now = new Date();
|
||||
let result;
|
||||
|
||||
switch (format) {
|
||||
case 'date':
|
||||
result = now.toISOString().split('T')[0];
|
||||
break;
|
||||
case 'filename':
|
||||
result = now.toISOString().replace(/:/g, '-').replace(/\..+/, '');
|
||||
break;
|
||||
case 'full':
|
||||
default:
|
||||
result = now.toISOString();
|
||||
break;
|
||||
}
|
||||
|
||||
output({ timestamp: result }, raw, result);
|
||||
}
|
||||
|
||||
function cmdListTodos(cwd, area, raw) {
  const pendingDir = path.join(cwd, '.planning', 'todos', 'pending');

  let count = 0;
  const todos = [];

  try {
    const files = fs.readdirSync(pendingDir).filter(f => f.endsWith('.md'));

    for (const file of files) {
      try {
        const content = fs.readFileSync(path.join(pendingDir, file), 'utf-8');
        const createdMatch = content.match(/^created:\s*(.+)$/m);
        const titleMatch = content.match(/^title:\s*(.+)$/m);
        const areaMatch = content.match(/^area:\s*(.+)$/m);

        const todoArea = areaMatch ? areaMatch[1].trim() : 'general';

        // Apply area filter if specified
        if (area && todoArea !== area) continue;

        count++;
        todos.push({
          file,
          created: createdMatch ? createdMatch[1].trim() : 'unknown',
          title: titleMatch ? titleMatch[1].trim() : 'Untitled',
          area: todoArea,
          path: toPosixPath(path.join('.planning', 'todos', 'pending', file)),
        });
      } catch {}
    }
  } catch {}

  const result = { count, todos };
  output(result, raw, count.toString());
}
function cmdVerifyPathExists(cwd, targetPath, raw) {
  if (!targetPath) {
    error('path required for verification');
  }

  const fullPath = path.isAbsolute(targetPath) ? targetPath : path.join(cwd, targetPath);

  try {
    const stats = fs.statSync(fullPath);
    const type = stats.isDirectory() ? 'directory' : stats.isFile() ? 'file' : 'other';
    const result = { exists: true, type };
    output(result, raw, 'true');
  } catch {
    const result = { exists: false, type: null };
    output(result, raw, 'false');
  }
}
function cmdHistoryDigest(cwd, raw) {
  const phasesDir = path.join(cwd, '.planning', 'phases');
  const digest = { phases: {}, decisions: [], tech_stack: new Set() };

  // Collect all phase directories: archived + current
  const allPhaseDirs = [];

  // Add archived phases first (oldest milestones first)
  const archived = getArchivedPhaseDirs(cwd);
  for (const a of archived) {
    allPhaseDirs.push({ name: a.name, fullPath: a.fullPath, milestone: a.milestone });
  }

  // Add current phases
  if (fs.existsSync(phasesDir)) {
    try {
      const currentDirs = fs.readdirSync(phasesDir, { withFileTypes: true })
        .filter(e => e.isDirectory())
        .map(e => e.name)
        .sort();
      for (const dir of currentDirs) {
        allPhaseDirs.push({ name: dir, fullPath: path.join(phasesDir, dir), milestone: null });
      }
    } catch {}
  }

  if (allPhaseDirs.length === 0) {
    digest.tech_stack = [];
    output(digest, raw);
    return;
  }

  try {
    for (const { name: dir, fullPath: dirPath } of allPhaseDirs) {
      const summaries = fs.readdirSync(dirPath).filter(f => f.endsWith('-SUMMARY.md') || f === 'SUMMARY.md');

      for (const summary of summaries) {
        try {
          const content = fs.readFileSync(path.join(dirPath, summary), 'utf-8');
          const fm = extractFrontmatter(content);

          const phaseNum = fm.phase || dir.split('-')[0];

          if (!digest.phases[phaseNum]) {
            digest.phases[phaseNum] = {
              name: fm.name || dir.split('-').slice(1).join(' ') || 'Unknown',
              provides: new Set(),
              affects: new Set(),
              patterns: new Set(),
            };
          }

          // Merge provides
          if (fm['dependency-graph'] && fm['dependency-graph'].provides) {
            fm['dependency-graph'].provides.forEach(p => digest.phases[phaseNum].provides.add(p));
          } else if (fm.provides) {
            fm.provides.forEach(p => digest.phases[phaseNum].provides.add(p));
          }

          // Merge affects
          if (fm['dependency-graph'] && fm['dependency-graph'].affects) {
            fm['dependency-graph'].affects.forEach(a => digest.phases[phaseNum].affects.add(a));
          }

          // Merge patterns
          if (fm['patterns-established']) {
            fm['patterns-established'].forEach(p => digest.phases[phaseNum].patterns.add(p));
          }

          // Merge decisions
          if (fm['key-decisions']) {
            fm['key-decisions'].forEach(d => {
              digest.decisions.push({ phase: phaseNum, decision: d });
            });
          }

          // Merge tech stack
          if (fm['tech-stack'] && fm['tech-stack'].added) {
            fm['tech-stack'].added.forEach(t => digest.tech_stack.add(typeof t === 'string' ? t : t.name));
          }
        } catch (e) {
          // Skip malformed summaries
        }
      }
    }

    // Convert Sets to Arrays for JSON output
    Object.keys(digest.phases).forEach(p => {
      digest.phases[p].provides = [...digest.phases[p].provides];
      digest.phases[p].affects = [...digest.phases[p].affects];
      digest.phases[p].patterns = [...digest.phases[p].patterns];
    });
    digest.tech_stack = [...digest.tech_stack];

    output(digest, raw);
  } catch (e) {
    error('Failed to generate history digest: ' + e.message);
  }
}
function cmdResolveModel(cwd, agentType, raw) {
  if (!agentType) {
    error('agent-type required');
  }

  const config = loadConfig(cwd);
  const profile = config.model_profile || 'balanced';
  const model = resolveModelInternal(cwd, agentType);

  const agentModels = MODEL_PROFILES[agentType];
  const result = agentModels
    ? { model, profile }
    : { model, profile, unknown_agent: true };
  output(result, raw, model);
}
function cmdCommit(cwd, message, files, raw, amend) {
  if (!message && !amend) {
    error('commit message required');
  }

  const config = loadConfig(cwd);

  // Check commit_docs config
  if (!config.commit_docs) {
    const result = { committed: false, hash: null, reason: 'skipped_commit_docs_false' };
    output(result, raw, 'skipped');
    return;
  }

  // Check if .planning is gitignored
  if (isGitIgnored(cwd, '.planning')) {
    const result = { committed: false, hash: null, reason: 'skipped_gitignored' };
    output(result, raw, 'skipped');
    return;
  }

  // Stage files
  const filesToStage = files && files.length > 0 ? files : ['.planning/'];
  for (const file of filesToStage) {
    execGit(cwd, ['add', file]);
  }

  // Commit
  const commitArgs = amend ? ['commit', '--amend', '--no-edit'] : ['commit', '-m', message];
  const commitResult = execGit(cwd, commitArgs);
  if (commitResult.exitCode !== 0) {
    if (commitResult.stdout.includes('nothing to commit') || commitResult.stderr.includes('nothing to commit')) {
      const result = { committed: false, hash: null, reason: 'nothing_to_commit' };
      output(result, raw, 'nothing');
      return;
    }
    // Surface genuine failures distinctly from the empty-commit case
    const result = { committed: false, hash: null, reason: 'commit_failed', error: commitResult.stderr };
    output(result, raw, 'failed');
    return;
  }

  // Get short hash
  const hashResult = execGit(cwd, ['rev-parse', '--short', 'HEAD']);
  const hash = hashResult.exitCode === 0 ? hashResult.stdout : null;
  const result = { committed: true, hash, reason: 'committed' };
  output(result, raw, hash || 'committed');
}
function cmdSummaryExtract(cwd, summaryPath, fields, raw) {
  if (!summaryPath) {
    error('summary-path required for summary-extract');
  }

  const fullPath = path.join(cwd, summaryPath);

  if (!fs.existsSync(fullPath)) {
    output({ error: 'File not found', path: summaryPath }, raw);
    return;
  }

  const content = fs.readFileSync(fullPath, 'utf-8');
  const fm = extractFrontmatter(content);

  // Parse key-decisions into structured format
  const parseDecisions = (decisionsList) => {
    if (!decisionsList || !Array.isArray(decisionsList)) return [];
    return decisionsList.map(d => {
      const colonIdx = d.indexOf(':');
      if (colonIdx > 0) {
        return {
          summary: d.substring(0, colonIdx).trim(),
          rationale: d.substring(colonIdx + 1).trim(),
        };
      }
      return { summary: d, rationale: null };
    });
  };

  // Build full result
  const fullResult = {
    path: summaryPath,
    one_liner: fm['one-liner'] || null,
    key_files: fm['key-files'] || [],
    tech_added: (fm['tech-stack'] && fm['tech-stack'].added) || [],
    patterns: fm['patterns-established'] || [],
    decisions: parseDecisions(fm['key-decisions']),
    requirements_completed: fm['requirements-completed'] || [],
  };

  // If fields specified, filter to only those fields
  if (fields && fields.length > 0) {
    const filtered = { path: summaryPath };
    for (const field of fields) {
      if (fullResult[field] !== undefined) {
        filtered[field] = fullResult[field];
      }
    }
    output(filtered, raw);
    return;
  }

  output(fullResult, raw);
}
async function cmdWebsearch(query, options, raw) {
  const apiKey = process.env.BRAVE_API_KEY;

  if (!apiKey) {
    // No key = silent skip, agent falls back to built-in WebSearch
    output({ available: false, reason: 'BRAVE_API_KEY not set' }, raw, '');
    return;
  }

  if (!query) {
    output({ available: false, error: 'Query required' }, raw, '');
    return;
  }

  const params = new URLSearchParams({
    q: query,
    count: String(options.limit || 10),
    country: 'us',
    search_lang: 'en',
    text_decorations: 'false'
  });

  if (options.freshness) {
    params.set('freshness', options.freshness);
  }

  try {
    const response = await fetch(
      `https://api.search.brave.com/res/v1/web/search?${params}`,
      {
        headers: {
          'Accept': 'application/json',
          'X-Subscription-Token': apiKey
        }
      }
    );

    if (!response.ok) {
      output({ available: false, error: `API error: ${response.status}` }, raw, '');
      return;
    }

    const data = await response.json();

    const results = (data.web?.results || []).map(r => ({
      title: r.title,
      url: r.url,
      description: r.description,
      age: r.age || null
    }));

    output({
      available: true,
      query,
      count: results.length,
      results
    }, raw, results.map(r => `${r.title}\n${r.url}\n${r.description}`).join('\n\n'));
  } catch (err) {
    output({ available: false, error: err.message }, raw, '');
  }
}
function cmdProgressRender(cwd, format, raw) {
  const phasesDir = path.join(cwd, '.planning', 'phases');
  const roadmapPath = path.join(cwd, '.planning', 'ROADMAP.md');
  const milestone = getMilestoneInfo(cwd);

  const phases = [];
  let totalPlans = 0;
  let totalSummaries = 0;

  try {
    const entries = fs.readdirSync(phasesDir, { withFileTypes: true });
    const dirs = entries.filter(e => e.isDirectory()).map(e => e.name).sort((a, b) => comparePhaseNum(a, b));

    for (const dir of dirs) {
      const dm = dir.match(/^(\d+(?:\.\d+)*)-?(.*)/);
      const phaseNum = dm ? dm[1] : dir;
      const phaseName = dm && dm[2] ? dm[2].replace(/-/g, ' ') : '';
      const phaseFiles = fs.readdirSync(path.join(phasesDir, dir));
      const plans = phaseFiles.filter(f => f.endsWith('-PLAN.md') || f === 'PLAN.md').length;
      const summaries = phaseFiles.filter(f => f.endsWith('-SUMMARY.md') || f === 'SUMMARY.md').length;

      totalPlans += plans;
      totalSummaries += summaries;

      let status;
      if (plans === 0) status = 'Pending';
      else if (summaries >= plans) status = 'Complete';
      else if (summaries > 0) status = 'In Progress';
      else status = 'Planned';

      phases.push({ number: phaseNum, name: phaseName, plans, summaries, status });
    }
  } catch {}

  const percent = totalPlans > 0 ? Math.min(100, Math.round((totalSummaries / totalPlans) * 100)) : 0;

  if (format === 'table') {
    // Render markdown table
    const barWidth = 10;
    const filled = Math.round((percent / 100) * barWidth);
    const bar = '\u2588'.repeat(filled) + '\u2591'.repeat(barWidth - filled);
    let out = `# ${milestone.version} ${milestone.name}\n\n`;
    out += `**Progress:** [${bar}] ${totalSummaries}/${totalPlans} plans (${percent}%)\n\n`;
    out += `| Phase | Name | Plans | Status |\n`;
    out += `|-------|------|-------|--------|\n`;
    for (const p of phases) {
      out += `| ${p.number} | ${p.name} | ${p.summaries}/${p.plans} | ${p.status} |\n`;
    }
    output({ rendered: out }, raw, out);
  } else if (format === 'bar') {
    const barWidth = 20;
    const filled = Math.round((percent / 100) * barWidth);
    const bar = '\u2588'.repeat(filled) + '\u2591'.repeat(barWidth - filled);
    const text = `[${bar}] ${totalSummaries}/${totalPlans} plans (${percent}%)`;
    output({ bar: text, percent, completed: totalSummaries, total: totalPlans }, raw, text);
  } else {
    // JSON format
    output({
      milestone_version: milestone.version,
      milestone_name: milestone.name,
      phases,
      total_plans: totalPlans,
      total_summaries: totalSummaries,
      percent,
    }, raw);
  }
}
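The bar math in the renderer above can be isolated into a small sketch (hypothetical `renderBar` helper; the real command derives the counts from PLAN/SUMMARY files on disk):

```javascript
// Percent is capped at 100, then scaled to a fixed-width block bar.
function renderBar(completed, total, width = 20) {
  const percent = total > 0 ? Math.min(100, Math.round((completed / total) * 100)) : 0;
  const filled = Math.round((percent / 100) * width);
  return `[${'\u2588'.repeat(filled)}${'\u2591'.repeat(width - filled)}] ${completed}/${total} plans (${percent}%)`;
}
```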
function cmdTodoComplete(cwd, filename, raw) {
  if (!filename) {
    error('filename required for todo complete');
  }

  const pendingDir = path.join(cwd, '.planning', 'todos', 'pending');
  const completedDir = path.join(cwd, '.planning', 'todos', 'completed');
  const sourcePath = path.join(pendingDir, filename);

  if (!fs.existsSync(sourcePath)) {
    error(`Todo not found: ${filename}`);
  }

  // Ensure completed directory exists
  fs.mkdirSync(completedDir, { recursive: true });

  // Read, add completion timestamp, move
  let content = fs.readFileSync(sourcePath, 'utf-8');
  const today = new Date().toISOString().split('T')[0];
  content = `completed: ${today}\n` + content;

  fs.writeFileSync(path.join(completedDir, filename), content, 'utf-8');
  fs.unlinkSync(sourcePath);

  output({ completed: true, file: filename, date: today }, raw, 'completed');
}
function cmdScaffold(cwd, type, options, raw) {
  const { phase, name } = options;
  const padded = phase ? normalizePhaseName(phase) : '00';
  const today = new Date().toISOString().split('T')[0];

  // Find phase directory
  const phaseInfo = phase ? findPhaseInternal(cwd, phase) : null;
  const phaseDir = phaseInfo ? path.join(cwd, phaseInfo.directory) : null;

  if (phase && !phaseDir && type !== 'phase-dir') {
    error(`Phase ${phase} directory not found`);
  }

  let filePath, content;

  switch (type) {
    case 'context': {
      filePath = path.join(phaseDir, `${padded}-CONTEXT.md`);
      content = `---\nphase: "${padded}"\nname: "${name || phaseInfo?.phase_name || 'Unnamed'}"\ncreated: ${today}\n---\n\n# Phase ${phase}: ${name || phaseInfo?.phase_name || 'Unnamed'} — Context\n\n## Decisions\n\n_Decisions will be captured during /gsd:discuss-phase ${phase}_\n\n## Discretion Areas\n\n_Areas where the executor can use judgment_\n\n## Deferred Ideas\n\n_Ideas to consider later_\n`;
      break;
    }
    case 'uat': {
      filePath = path.join(phaseDir, `${padded}-UAT.md`);
      content = `---\nphase: "${padded}"\nname: "${name || phaseInfo?.phase_name || 'Unnamed'}"\ncreated: ${today}\nstatus: pending\n---\n\n# Phase ${phase}: ${name || phaseInfo?.phase_name || 'Unnamed'} — User Acceptance Testing\n\n## Test Results\n\n| # | Test | Status | Notes |\n|---|------|--------|-------|\n\n## Summary\n\n_Pending UAT_\n`;
      break;
    }
    case 'verification': {
      filePath = path.join(phaseDir, `${padded}-VERIFICATION.md`);
      content = `---\nphase: "${padded}"\nname: "${name || phaseInfo?.phase_name || 'Unnamed'}"\ncreated: ${today}\nstatus: pending\n---\n\n# Phase ${phase}: ${name || phaseInfo?.phase_name || 'Unnamed'} — Verification\n\n## Goal-Backward Verification\n\n**Phase Goal:** [From ROADMAP.md]\n\n## Checks\n\n| # | Requirement | Status | Evidence |\n|---|------------|--------|----------|\n\n## Result\n\n_Pending verification_\n`;
      break;
    }
    case 'phase-dir': {
      if (!phase || !name) {
        error('phase and name required for phase-dir scaffold');
      }
      const slug = generateSlugInternal(name);
      const dirName = `${padded}-${slug}`;
      const phasesParent = path.join(cwd, '.planning', 'phases');
      fs.mkdirSync(phasesParent, { recursive: true });
      const dirPath = path.join(phasesParent, dirName);
      fs.mkdirSync(dirPath, { recursive: true });
      output({ created: true, directory: `.planning/phases/${dirName}`, path: dirPath }, raw, dirPath);
      return;
    }
    default:
      error(`Unknown scaffold type: ${type}. Available: context, uat, verification, phase-dir`);
  }

  if (fs.existsSync(filePath)) {
    output({ created: false, reason: 'already_exists', path: filePath }, raw, 'exists');
    return;
  }

  fs.writeFileSync(filePath, content, 'utf-8');
  const relPath = toPosixPath(path.relative(cwd, filePath));
  output({ created: true, path: relPath }, raw, relPath);
}
function cmdStats(cwd, format, raw) {
  const phasesDir = path.join(cwd, '.planning', 'phases');
  const roadmapPath = path.join(cwd, '.planning', 'ROADMAP.md');
  const reqPath = path.join(cwd, '.planning', 'REQUIREMENTS.md');
  const statePath = path.join(cwd, '.planning', 'STATE.md');
  const milestone = getMilestoneInfo(cwd);
  const isDirInMilestone = getMilestonePhaseFilter(cwd);

  // Phase & plan stats (reuse progress pattern)
  const phasesByNumber = new Map();
  let totalPlans = 0;
  let totalSummaries = 0;

  try {
    const roadmapContent = extractCurrentMilestone(fs.readFileSync(roadmapPath, 'utf-8'), cwd);
    const headingPattern = /#{2,4}\s*Phase\s+(\d+[A-Z]?(?:\.\d+)*)\s*:\s*([^\n]+)/gi;
    let match;
    while ((match = headingPattern.exec(roadmapContent)) !== null) {
      phasesByNumber.set(match[1], {
        number: match[1],
        name: match[2].replace(/\(INSERTED\)/i, '').trim(),
        plans: 0,
        summaries: 0,
        status: 'Not Started',
      });
    }
  } catch {}

  try {
    const entries = fs.readdirSync(phasesDir, { withFileTypes: true });
    const dirs = entries
      .filter(e => e.isDirectory())
      .map(e => e.name)
      .filter(isDirInMilestone)
      .sort((a, b) => comparePhaseNum(a, b));

    for (const dir of dirs) {
      const dm = dir.match(/^(\d+[A-Z]?(?:\.\d+)*)-?(.*)/i);
      const phaseNum = dm ? dm[1] : dir;
      const phaseName = dm && dm[2] ? dm[2].replace(/-/g, ' ') : '';
      const phaseFiles = fs.readdirSync(path.join(phasesDir, dir));
      const plans = phaseFiles.filter(f => f.endsWith('-PLAN.md') || f === 'PLAN.md').length;
      const summaries = phaseFiles.filter(f => f.endsWith('-SUMMARY.md') || f === 'SUMMARY.md').length;

      totalPlans += plans;
      totalSummaries += summaries;

      let status;
      if (plans === 0) status = 'Not Started';
      else if (summaries >= plans) status = 'Complete';
      else if (summaries > 0) status = 'In Progress';
      else status = 'Planned';

      const existing = phasesByNumber.get(phaseNum);
      phasesByNumber.set(phaseNum, {
        number: phaseNum,
        name: existing?.name || phaseName,
        plans,
        summaries,
        status,
      });
    }
  } catch {}

  const phases = [...phasesByNumber.values()].sort((a, b) => comparePhaseNum(a.number, b.number));
  const completedPhases = phases.filter(p => p.status === 'Complete').length;
  const planPercent = totalPlans > 0 ? Math.min(100, Math.round((totalSummaries / totalPlans) * 100)) : 0;
  const percent = phases.length > 0 ? Math.min(100, Math.round((completedPhases / phases.length) * 100)) : 0;

  // Requirements stats
  let requirementsTotal = 0;
  let requirementsComplete = 0;
  try {
    if (fs.existsSync(reqPath)) {
      const reqContent = fs.readFileSync(reqPath, 'utf-8');
      const checked = reqContent.match(/^- \[x\] \*\*/gm);
      const unchecked = reqContent.match(/^- \[ \] \*\*/gm);
      requirementsComplete = checked ? checked.length : 0;
      requirementsTotal = requirementsComplete + (unchecked ? unchecked.length : 0);
    }
  } catch {}

  // Last activity from STATE.md
  let lastActivity = null;
  try {
    if (fs.existsSync(statePath)) {
      const stateContent = fs.readFileSync(statePath, 'utf-8');
      const activityMatch = stateContent.match(/^last_activity:\s*(.+)$/im)
        || stateContent.match(/\*\*Last Activity:\*\*\s*(.+)/i)
        || stateContent.match(/^Last Activity:\s*(.+)$/im)
        || stateContent.match(/^Last activity:\s*(.+)$/im);
      if (activityMatch) lastActivity = activityMatch[1].trim();
    }
  } catch {}

  // Git stats
  let gitCommits = 0;
  let gitFirstCommitDate = null;
  const commitCount = execGit(cwd, ['rev-list', '--count', 'HEAD']);
  if (commitCount.exitCode === 0) {
    gitCommits = parseInt(commitCount.stdout, 10) || 0;
  }
  const rootHash = execGit(cwd, ['rev-list', '--max-parents=0', 'HEAD']);
  if (rootHash.exitCode === 0 && rootHash.stdout) {
    const firstCommit = rootHash.stdout.split('\n')[0].trim();
    const firstDate = execGit(cwd, ['show', '-s', '--format=%as', firstCommit]);
    if (firstDate.exitCode === 0) {
      gitFirstCommitDate = firstDate.stdout || null;
    }
  }

  const result = {
    milestone_version: milestone.version,
    milestone_name: milestone.name,
    phases,
    phases_completed: completedPhases,
    phases_total: phases.length,
    total_plans: totalPlans,
    total_summaries: totalSummaries,
    percent,
    plan_percent: planPercent,
    requirements_total: requirementsTotal,
    requirements_complete: requirementsComplete,
    git_commits: gitCommits,
    git_first_commit_date: gitFirstCommitDate,
    last_activity: lastActivity,
  };

  if (format === 'table') {
    const barWidth = 10;
    const filled = Math.round((percent / 100) * barWidth);
    const bar = '\u2588'.repeat(filled) + '\u2591'.repeat(barWidth - filled);
    let out = `# ${milestone.version} ${milestone.name} \u2014 Statistics\n\n`;
    out += `**Progress:** [${bar}] ${completedPhases}/${phases.length} phases (${percent}%)\n`;
    if (totalPlans > 0) {
      out += `**Plans:** ${totalSummaries}/${totalPlans} complete (${planPercent}%)\n`;
    }
    out += `**Phases:** ${completedPhases}/${phases.length} complete\n`;
    if (requirementsTotal > 0) {
      out += `**Requirements:** ${requirementsComplete}/${requirementsTotal} complete\n`;
    }
    out += '\n';
    out += `| Phase | Name | Plans | Completed | Status |\n`;
    out += `|-------|------|-------|-----------|--------|\n`;
    for (const p of phases) {
      out += `| ${p.number} | ${p.name} | ${p.plans} | ${p.summaries} | ${p.status} |\n`;
    }
    if (gitCommits > 0) {
      out += `\n**Git:** ${gitCommits} commits`;
      if (gitFirstCommitDate) out += ` (since ${gitFirstCommitDate})`;
      out += '\n';
    }
    if (lastActivity) out += `**Last activity:** ${lastActivity}\n`;
    output({ rendered: out }, raw, out);
  } else {
    output(result, raw);
  }
}
module.exports = {
  cmdGenerateSlug,
  cmdCurrentTimestamp,
  cmdListTodos,
  cmdVerifyPathExists,
  cmdHistoryDigest,
  cmdResolveModel,
  cmdCommit,
  cmdSummaryExtract,
  cmdWebsearch,
  cmdProgressRender,
  cmdTodoComplete,
  cmdScaffold,
  cmdStats,
};
307
get-shit-done/bin/lib/config.cjs
Normal file
@@ -0,0 +1:307 @@
/**
 * Config — Planning config CRUD operations
 */

const fs = require('fs');
const path = require('path');
const { output, error } = require('./core.cjs');
const {
  VALID_PROFILES,
  getAgentToModelMapForProfile,
  formatAgentToModelMapAsTable,
} = require('./model-profiles.cjs');

const VALID_CONFIG_KEYS = new Set([
  'mode', 'granularity', 'parallelization', 'commit_docs', 'model_profile',
  'search_gitignored', 'brave_search',
  'workflow.research', 'workflow.plan_check', 'workflow.verifier',
  'workflow.nyquist_validation', 'workflow.ui_phase', 'workflow.ui_safety_gate',
  'workflow._auto_chain_active',
  'git.branching_strategy', 'git.phase_branch_template', 'git.milestone_branch_template',
  'planning.commit_docs', 'planning.search_gitignored',
]);
const CONFIG_KEY_SUGGESTIONS = {
  'workflow.nyquist_validation_enabled': 'workflow.nyquist_validation',
  'agents.nyquist_validation_enabled': 'workflow.nyquist_validation',
  'nyquist.validation_enabled': 'workflow.nyquist_validation',
};

function validateKnownConfigKeyPath(keyPath) {
  const suggested = CONFIG_KEY_SUGGESTIONS[keyPath];
  if (suggested) {
    error(`Unknown config key: ${keyPath}. Did you mean ${suggested}?`);
  }
}
/**
 * Ensures the config file exists (creates it if needed).
 *
 * Does not call `output()`, so can be used as one step in a command without triggering `exit(0)` in
 * the happy path. But note that `error()` will still `exit(1)` out of the process.
 */
function ensureConfigFile(cwd) {
  const configPath = path.join(cwd, '.planning', 'config.json');
  const planningDir = path.join(cwd, '.planning');

  // Ensure .planning directory exists
  try {
    if (!fs.existsSync(planningDir)) {
      fs.mkdirSync(planningDir, { recursive: true });
    }
  } catch (err) {
    error('Failed to create .planning directory: ' + err.message);
  }

  // Check if config already exists
  if (fs.existsSync(configPath)) {
    return { created: false, reason: 'already_exists' };
  }

  // Detect Brave Search API key availability
  const homedir = require('os').homedir();
  const braveKeyFile = path.join(homedir, '.gsd', 'brave_api_key');
  const hasBraveSearch = !!(process.env.BRAVE_API_KEY || fs.existsSync(braveKeyFile));

  // Load user-level defaults from ~/.gsd/defaults.json if available
  const globalDefaultsPath = path.join(homedir, '.gsd', 'defaults.json');
  let userDefaults = {};
  try {
    if (fs.existsSync(globalDefaultsPath)) {
      userDefaults = JSON.parse(fs.readFileSync(globalDefaultsPath, 'utf-8'));
      // Migrate deprecated "depth" key to "granularity"
      if ('depth' in userDefaults && !('granularity' in userDefaults)) {
        const depthToGranularity = { quick: 'coarse', standard: 'standard', comprehensive: 'fine' };
        userDefaults.granularity = depthToGranularity[userDefaults.depth] || userDefaults.depth;
        delete userDefaults.depth;
        try {
          fs.writeFileSync(globalDefaultsPath, JSON.stringify(userDefaults, null, 2), 'utf-8');
        } catch {}
      }
    }
  } catch (err) {
    // Ignore malformed global defaults, fall back to hardcoded
  }

  // Create default config (user-level defaults override hardcoded defaults)
  const hardcoded = {
    model_profile: 'balanced',
    commit_docs: true,
    search_gitignored: false,
    branching_strategy: 'none',
    phase_branch_template: 'gsd/phase-{phase}-{slug}',
    milestone_branch_template: 'gsd/{milestone}-{slug}',
    workflow: {
      research: true,
      plan_check: true,
      verifier: true,
      nyquist_validation: true,
    },
    parallelization: true,
    brave_search: hasBraveSearch,
  };
  const defaults = {
    ...hardcoded,
    ...userDefaults,
    workflow: { ...hardcoded.workflow, ...(userDefaults.workflow || {}) },
  };

  try {
    fs.writeFileSync(configPath, JSON.stringify(defaults, null, 2), 'utf-8');
    return { created: true, path: '.planning/config.json' };
  } catch (err) {
    error('Failed to create config.json: ' + err.message);
  }
}
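The defaults precedence above can be sketched with illustrative data (not the real config): user-level values override hardcoded ones, and `workflow` is merged one level deep so a partial user workflow does not wipe the remaining flags.

```javascript
const hardcoded = { model_profile: 'balanced', workflow: { research: true, plan_check: true } };
const userDefaults = { model_profile: 'quality', workflow: { research: false } };

// Shallow spread for top-level keys, then an explicit one-level-deep merge for workflow.
const defaults = {
  ...hardcoded,
  ...userDefaults,
  workflow: { ...hardcoded.workflow, ...(userDefaults.workflow || {}) },
};
// defaults.model_profile is 'quality'; defaults.workflow keeps plan_check: true
```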
/**
 * Command to ensure the config file exists (creates it if needed).
 *
 * Note that this exits the process (via `output()`) even in the happy path; use
 * `ensureConfigFile()` directly if you need to avoid this.
 */
function cmdConfigEnsureSection(cwd, raw) {
  const ensureConfigFileResult = ensureConfigFile(cwd);
  if (ensureConfigFileResult.created) {
    output(ensureConfigFileResult, raw, 'created');
  } else {
    output(ensureConfigFileResult, raw, 'exists');
  }
}
/**
 * Sets a value in the config file, allowing nested values via dot notation (e.g.,
 * "workflow.research").
 *
 * Does not call `output()`, so can be used as one step in a command without triggering `exit(0)` in
 * the happy path. But note that `error()` will still `exit(1)` out of the process.
 */
function setConfigValue(cwd, keyPath, parsedValue) {
  const configPath = path.join(cwd, '.planning', 'config.json');

  // Load existing config or start with empty object
  let config = {};
  try {
    if (fs.existsSync(configPath)) {
      config = JSON.parse(fs.readFileSync(configPath, 'utf-8'));
    }
  } catch (err) {
    error('Failed to read config.json: ' + err.message);
  }

  // Set nested value using dot notation (e.g., "workflow.research")
  const keys = keyPath.split('.');
  let current = config;
  for (let i = 0; i < keys.length - 1; i++) {
    const key = keys[i];
    if (current[key] === undefined || typeof current[key] !== 'object') {
      current[key] = {};
    }
    current = current[key];
  }
  const previousValue = current[keys[keys.length - 1]]; // Capture previous value before overwriting
  current[keys[keys.length - 1]] = parsedValue;

  // Write back
  try {
    fs.writeFileSync(configPath, JSON.stringify(config, null, 2), 'utf-8');
    return { updated: true, key: keyPath, value: parsedValue, previousValue };
  } catch (err) {
    error('Failed to write config.json: ' + err.message);
  }
}
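The dot-notation walk in `setConfigValue` can be shown in isolation (hypothetical `setNested` helper; the real function also persists the result to `.planning/config.json`): intermediate objects are created on demand, then the leaf key is assigned.

```javascript
// Walks "a.b.c" through obj, creating {} at each missing level, then sets the leaf.
function setNested(obj, keyPath, value) {
  const keys = keyPath.split('.');
  let current = obj;
  for (let i = 0; i < keys.length - 1; i++) {
    const key = keys[i];
    if (current[key] === undefined || typeof current[key] !== 'object') {
      current[key] = {};
    }
    current = current[key];
  }
  current[keys[keys.length - 1]] = value;
  return obj;
}

console.log(setNested({}, 'workflow.research', false)); // { workflow: { research: false } }
```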
/**
 * Command to set a value in the config file, allowing nested values via dot notation (e.g.,
 * "workflow.research").
 *
 * Note that this exits the process (via `output()`) even in the happy path; use `setConfigValue()`
 * directly if you need to avoid this.
 */
function cmdConfigSet(cwd, keyPath, value, raw) {
  if (!keyPath) {
    error('Usage: config-set <key.path> <value>');
  }

  validateKnownConfigKeyPath(keyPath);

  if (!VALID_CONFIG_KEYS.has(keyPath)) {
    error(`Unknown config key: "${keyPath}". Valid keys: ${[...VALID_CONFIG_KEYS].sort().join(', ')}`);
  }

  // Parse value (handle booleans and numbers)
  let parsedValue = value;
  if (value === 'true') parsedValue = true;
  else if (value === 'false') parsedValue = false;
  else if (!isNaN(value) && value !== '') parsedValue = Number(value);

  const setConfigValueResult = setConfigValue(cwd, keyPath, parsedValue);
  output(setConfigValueResult, raw, `${keyPath}=${parsedValue}`);
}

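The boolean/number coercion in `cmdConfigSet`, pulled out as a standalone helper (`parseCliValue` is an illustrative name):

```javascript
// Coerce a CLI string argument the way cmdConfigSet does before writing it
// to config.json (illustrative standalone helper).
function parseCliValue(value) {
  if (value === 'true') return true;
  if (value === 'false') return false;
  // isNaN('') is false (empty string coerces to 0), so guard it explicitly
  if (!isNaN(value) && value !== '') return Number(value);
  return value;
}
```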
function cmdConfigGet(cwd, keyPath, raw) {
  const configPath = path.join(cwd, '.planning', 'config.json');

  if (!keyPath) {
    error('Usage: config-get <key.path>');
  }

  let config = {};
  try {
    if (fs.existsSync(configPath)) {
      config = JSON.parse(fs.readFileSync(configPath, 'utf-8'));
    } else {
      error('No config.json found at ' + configPath);
    }
  } catch (err) {
    if (err.message.startsWith('No config.json')) throw err;
    error('Failed to read config.json: ' + err.message);
  }

  // Traverse dot-notation path (e.g., "workflow.auto_advance")
  const keys = keyPath.split('.');
  let current = config;
  for (const key of keys) {
    if (current === undefined || current === null || typeof current !== 'object') {
      error(`Key not found: ${keyPath}`);
    }
    current = current[key];
  }

  if (current === undefined) {
    error(`Key not found: ${keyPath}`);
  }

  output(current, raw, String(current));
}

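The traversal in `cmdConfigGet`, as a standalone lookup that returns `undefined` where the command would call `error()` (`getByPath` is an illustrative name):

```javascript
// Walk a dot-notation path through a parsed config object; a miss at any
// segment yields undefined (the command version exits with an error instead).
function getByPath(obj, keyPath) {
  const keys = keyPath.split('.');
  let current = obj;
  for (const key of keys) {
    if (current === undefined || current === null || typeof current !== 'object') {
      return undefined;
    }
    current = current[key];
  }
  return current;
}
```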
/**
 * Command to set the model profile in the config file.
 *
 * Note that this exits the process (via `output()`) even in the happy path.
 */
function cmdConfigSetModelProfile(cwd, profile, raw) {
  if (!profile) {
    error(`Usage: config-set-model-profile <${VALID_PROFILES.join('|')}>`);
  }

  const normalizedProfile = profile.toLowerCase().trim();
  if (!VALID_PROFILES.includes(normalizedProfile)) {
    error(`Invalid profile '${profile}'. Valid profiles: ${VALID_PROFILES.join(', ')}`);
  }

  // Ensure config exists (create if needed)
  ensureConfigFile(cwd);

  // Set the model profile in the config
  const { previousValue } = setConfigValue(cwd, 'model_profile', normalizedProfile);
  const previousProfile = previousValue || 'balanced';

  // Build result value / message and return
  const agentToModelMap = getAgentToModelMapForProfile(normalizedProfile);
  const result = {
    updated: true,
    profile: normalizedProfile,
    previousProfile,
    agentToModelMap,
  };
  const rawValue = getCmdConfigSetModelProfileResultMessage(
    normalizedProfile,
    previousProfile,
    agentToModelMap
  );
  output(result, raw, rawValue);
}

/**
 * Returns the message to display for the result of the `config-set-model-profile` command when
 * displaying raw output.
 */
function getCmdConfigSetModelProfileResultMessage(
  normalizedProfile,
  previousProfile,
  agentToModelMap
) {
  const agentToModelTable = formatAgentToModelMapAsTable(agentToModelMap);
  const didChange = previousProfile !== normalizedProfile;
  const paragraphs = didChange
    ? [
        `✓ Model profile set to: ${normalizedProfile} (was: ${previousProfile})`,
        'Agents will now use:',
        agentToModelTable,
        'Next spawned agents will use the new profile.',
      ]
    : [
        `✓ Model profile is already set to: ${normalizedProfile}`,
        'Agents are using:',
        agentToModelTable,
      ];
  return paragraphs.join('\n\n');
}

module.exports = {
  cmdConfigEnsureSection,
  cmdConfigSet,
  cmdConfigGet,
  cmdConfigSetModelProfile,
};
712
get-shit-done/bin/lib/core.cjs
Normal file
@@ -0,0 +1,712 @@
/**
 * Core — Shared utilities, constants, and internal helpers
 */

const fs = require('fs');
const path = require('path');
const { execSync, spawnSync } = require('child_process');
const { MODEL_PROFILES } = require('./model-profiles.cjs');

// ─── Path helpers ────────────────────────────────────────────────────────────

/** Normalize a relative path to always use forward slashes (cross-platform). */
function toPosixPath(p) {
  return p.split(path.sep).join('/');
}

// ─── Output helpers ───────────────────────────────────────────────────────────

function output(result, raw, rawValue) {
  if (raw && rawValue !== undefined) {
    process.stdout.write(String(rawValue));
  } else {
    const json = JSON.stringify(result, null, 2);
    // Large payloads exceed Claude Code's Bash tool buffer (~50KB).
    // Write to a tmpfile and output the path prefixed with @file: so callers can detect it.
    if (json.length > 50000) {
      const tmpPath = path.join(require('os').tmpdir(), `gsd-${Date.now()}.json`);
      fs.writeFileSync(tmpPath, json, 'utf-8');
      process.stdout.write('@file:' + tmpPath);
    } else {
      process.stdout.write(json);
    }
  }
  process.exit(0);
}

function error(message) {
  process.stderr.write('Error: ' + message + '\n');
  process.exit(1);
}

// ─── File & Config utilities ──────────────────────────────────────────────────

function safeReadFile(filePath) {
  try {
    return fs.readFileSync(filePath, 'utf-8');
  } catch {
    return null;
  }
}

function loadConfig(cwd) {
  const configPath = path.join(cwd, '.planning', 'config.json');
  const defaults = {
    model_profile: 'balanced',
    commit_docs: true,
    search_gitignored: false,
    branching_strategy: 'none',
    phase_branch_template: 'gsd/phase-{phase}-{slug}',
    milestone_branch_template: 'gsd/{milestone}-{slug}',
    research: true,
    plan_checker: true,
    verifier: true,
    nyquist_validation: true,
    parallelization: true,
    brave_search: false,
    resolve_model_ids: false, // when true, resolve aliases (opus/sonnet/haiku) to full model IDs
  };

  try {
    const raw = fs.readFileSync(configPath, 'utf-8');
    const parsed = JSON.parse(raw);

    // Migrate deprecated "depth" key to "granularity" with value mapping
    if ('depth' in parsed && !('granularity' in parsed)) {
      const depthToGranularity = { quick: 'coarse', standard: 'standard', comprehensive: 'fine' };
      parsed.granularity = depthToGranularity[parsed.depth] || parsed.depth;
      delete parsed.depth;
      try { fs.writeFileSync(configPath, JSON.stringify(parsed, null, 2), 'utf-8'); } catch {}
    }

    const get = (key, nested) => {
      if (parsed[key] !== undefined) return parsed[key];
      if (nested && parsed[nested.section] && parsed[nested.section][nested.field] !== undefined) {
        return parsed[nested.section][nested.field];
      }
      return undefined;
    };

    const parallelization = (() => {
      const val = get('parallelization');
      if (typeof val === 'boolean') return val;
      if (typeof val === 'object' && val !== null && 'enabled' in val) return val.enabled;
      return defaults.parallelization;
    })();

    return {
      model_profile: get('model_profile') ?? defaults.model_profile,
      commit_docs: get('commit_docs', { section: 'planning', field: 'commit_docs' }) ?? defaults.commit_docs,
      search_gitignored: get('search_gitignored', { section: 'planning', field: 'search_gitignored' }) ?? defaults.search_gitignored,
      branching_strategy: get('branching_strategy', { section: 'git', field: 'branching_strategy' }) ?? defaults.branching_strategy,
      phase_branch_template: get('phase_branch_template', { section: 'git', field: 'phase_branch_template' }) ?? defaults.phase_branch_template,
      milestone_branch_template: get('milestone_branch_template', { section: 'git', field: 'milestone_branch_template' }) ?? defaults.milestone_branch_template,
      research: get('research', { section: 'workflow', field: 'research' }) ?? defaults.research,
      plan_checker: get('plan_checker', { section: 'workflow', field: 'plan_check' }) ?? defaults.plan_checker,
      verifier: get('verifier', { section: 'workflow', field: 'verifier' }) ?? defaults.verifier,
      nyquist_validation: get('nyquist_validation', { section: 'workflow', field: 'nyquist_validation' }) ?? defaults.nyquist_validation,
      parallelization,
      brave_search: get('brave_search') ?? defaults.brave_search,
      resolve_model_ids: get('resolve_model_ids') ?? defaults.resolve_model_ids,
      model_overrides: parsed.model_overrides || null,
    };
  } catch {
    return defaults;
  }
}

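The flat-key-with-nested-fallback lookup that `loadConfig` applies to every setting, isolated on a plain parsed object (`lookupConfig` is an illustrative name for its inner `get` helper):

```javascript
// A flat top-level key wins; otherwise fall back to a nested section.field,
// matching the inner get() helper in loadConfig (illustrative standalone copy).
function lookupConfig(parsed, key, nested) {
  if (parsed[key] !== undefined) return parsed[key];
  if (nested && parsed[nested.section] && parsed[nested.section][nested.field] !== undefined) {
    return parsed[nested.section][nested.field];
  }
  return undefined;
}
```

This is why both `{ "research": false }` and `{ "workflow": { "research": false } }` are accepted config shapes.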
// ─── Git utilities ────────────────────────────────────────────────────────────

function isGitIgnored(cwd, targetPath) {
  // --no-index checks .gitignore rules regardless of whether the file is tracked.
  // Without it, git check-ignore returns "not ignored" for tracked files even when
  // .gitignore explicitly lists them — a common source of confusion when .planning/
  // was committed before being added to .gitignore.
  // spawnSync passes targetPath as a single argv entry, so no shell escaping
  // (or lossy character stripping) is needed.
  const result = spawnSync('git', ['check-ignore', '-q', '--no-index', '--', targetPath], {
    cwd,
    stdio: 'pipe',
  });
  return result.status === 0;
}

// ─── Markdown normalization ─────────────────────────────────────────────────

/**
 * Normalize markdown to fix common markdownlint violations.
 * Applied at write points so GSD-generated .planning/ files are IDE-friendly.
 *
 * Rules enforced:
 * MD022 — Blank lines around headings
 * MD031 — Blank lines around fenced code blocks
 * MD032 — Blank lines around lists
 * MD012 — No multiple consecutive blank lines (runs collapsed to a single blank line)
 * MD047 — Files end with a single newline
 */
function normalizeMd(content) {
  if (!content || typeof content !== 'string') return content;

  // Normalize line endings to LF for consistent processing
  let text = content.replace(/\r\n/g, '\n');

  const lines = text.split('\n');
  const result = [];

  for (let i = 0; i < lines.length; i++) {
    const line = lines[i];
    const prev = i > 0 ? lines[i - 1] : '';
    const prevTrimmed = prev.trimEnd();
    const trimmed = line.trimEnd();

    // MD022: Blank line before headings (skip first line and frontmatter delimiters)
    if (/^#{1,6}\s/.test(trimmed) && i > 0 && prevTrimmed !== '' && prevTrimmed !== '---') {
      result.push('');
    }

    // MD031: Blank line before fenced code blocks
    if (/^```/.test(trimmed) && i > 0 && prevTrimmed !== '' && !isInsideFencedBlock(lines, i)) {
      result.push('');
    }

    // MD032: Blank line before lists (- item, * item, N. item, - [ ] item)
    if (/^(\s*[-*+]\s|\s*\d+\.\s)/.test(line) && i > 0 &&
        prevTrimmed !== '' && !/^(\s*[-*+]\s|\s*\d+\.\s)/.test(prev) &&
        prevTrimmed !== '---') {
      result.push('');
    }

    result.push(line);

    // MD022: Blank line after headings
    if (/^#{1,6}\s/.test(trimmed) && i < lines.length - 1) {
      const next = lines[i + 1];
      if (next !== undefined && next.trimEnd() !== '') {
        result.push('');
      }
    }

    // MD031: Blank line after closing fenced code blocks
    if (/^```\s*$/.test(trimmed) && isClosingFence(lines, i) && i < lines.length - 1) {
      const next = lines[i + 1];
      if (next !== undefined && next.trimEnd() !== '') {
        result.push('');
      }
    }

    // MD032: Blank line after last list item in a block
    if (/^(\s*[-*+]\s|\s*\d+\.\s)/.test(line) && i < lines.length - 1) {
      const next = lines[i + 1];
      if (next !== undefined && next.trimEnd() !== '' &&
          !/^(\s*[-*+]\s|\s*\d+\.\s)/.test(next) &&
          !/^\s/.test(next)) {
        // Only add blank line if next line is not a continuation/indented line
        result.push('');
      }
    }
  }

  text = result.join('\n');

  // MD012: Collapse runs of blank lines to a single blank line
  text = text.replace(/\n{3,}/g, '\n\n');

  // MD047: Ensure file ends with exactly one newline
  text = text.replace(/\n*$/, '\n');

  return text;
}

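The final two regex passes of `normalizeMd` (MD012 and MD047) in isolation:

```javascript
// Collapse runs of blank lines to a single blank line, then force exactly one
// trailing newline, as normalizeMd does after its line-by-line pass.
function normalizeTail(text) {
  return text.replace(/\n{3,}/g, '\n\n').replace(/\n*$/, '\n');
}
```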
/** Check if line index i is inside an already-open fenced code block */
function isInsideFencedBlock(lines, i) {
  let fenceCount = 0;
  for (let j = 0; j < i; j++) {
    if (/^```/.test(lines[j].trimEnd())) fenceCount++;
  }
  return fenceCount % 2 === 1;
}

/** Check if a ``` line is a closing fence (even count of fences up to and including this one) */
function isClosingFence(lines, i) {
  let fenceCount = 0;
  for (let j = 0; j <= i; j++) {
    if (/^```/.test(lines[j].trimEnd())) fenceCount++;
  }
  return fenceCount % 2 === 0;
}

function execGit(cwd, args) {
  const result = spawnSync('git', args, {
    cwd,
    stdio: 'pipe',
    encoding: 'utf-8',
  });
  return {
    exitCode: result.status ?? 1,
    stdout: (result.stdout ?? '').toString().trim(),
    stderr: (result.stderr ?? '').toString().trim(),
  };
}

// ─── Phase utilities ──────────────────────────────────────────────────────────

function escapeRegex(value) {
  return String(value).replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}

function normalizePhaseName(phase) {
  const match = String(phase).match(/^(\d+)([A-Z])?((?:\.\d+)*)/i);
  if (!match) return phase;
  const padded = match[1].padStart(2, '0');
  const letter = match[2] ? match[2].toUpperCase() : '';
  const decimal = match[3] || '';
  return padded + letter + decimal;
}

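`normalizePhaseName`'s padding and uppercasing, as a standalone copy:

```javascript
// Pad the integer part to two digits and uppercase an optional letter suffix,
// so e.g. phase "3" matches a directory named "03-setup" (standalone copy).
function normalizePhase(phase) {
  const match = String(phase).match(/^(\d+)([A-Z])?((?:\.\d+)*)/i);
  if (!match) return phase;
  return match[1].padStart(2, '0') + (match[2] ? match[2].toUpperCase() : '') + (match[3] || '');
}
```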
function comparePhaseNum(a, b) {
  const pa = String(a).match(/^(\d+)([A-Z])?((?:\.\d+)*)/i);
  const pb = String(b).match(/^(\d+)([A-Z])?((?:\.\d+)*)/i);
  if (!pa || !pb) return String(a).localeCompare(String(b));
  const intDiff = parseInt(pa[1], 10) - parseInt(pb[1], 10);
  if (intDiff !== 0) return intDiff;
  // No letter sorts before letter: 12 < 12A < 12B
  const la = (pa[2] || '').toUpperCase();
  const lb = (pb[2] || '').toUpperCase();
  if (la !== lb) {
    if (!la) return -1;
    if (!lb) return 1;
    return la < lb ? -1 : 1;
  }
  // Segment-by-segment decimal comparison: 12A < 12A.1 < 12A.1.2 < 12A.2
  const aDecParts = pa[3] ? pa[3].slice(1).split('.').map(p => parseInt(p, 10)) : [];
  const bDecParts = pb[3] ? pb[3].slice(1).split('.').map(p => parseInt(p, 10)) : [];
  const maxLen = Math.max(aDecParts.length, bDecParts.length);
  if (aDecParts.length === 0 && bDecParts.length > 0) return -1;
  if (bDecParts.length === 0 && aDecParts.length > 0) return 1;
  for (let i = 0; i < maxLen; i++) {
    const av = Number.isFinite(aDecParts[i]) ? aDecParts[i] : 0;
    const bv = Number.isFinite(bDecParts[i]) ? bDecParts[i] : 0;
    if (av !== bv) return av - bv;
  }
  return 0;
}

function searchPhaseInDir(baseDir, relBase, normalized) {
  try {
    const entries = fs.readdirSync(baseDir, { withFileTypes: true });
    const dirs = entries.filter(e => e.isDirectory()).map(e => e.name).sort((a, b) => comparePhaseNum(a, b));
    const match = dirs.find(d => d.startsWith(normalized));
    if (!match) return null;

    const dirMatch = match.match(/^(\d+[A-Z]?(?:\.\d+)*)-?(.*)/i);
    const phaseNumber = dirMatch ? dirMatch[1] : normalized;
    const phaseName = dirMatch && dirMatch[2] ? dirMatch[2] : null;
    const phaseDir = path.join(baseDir, match);
    const phaseFiles = fs.readdirSync(phaseDir);

    const plans = phaseFiles.filter(f => f.endsWith('-PLAN.md') || f === 'PLAN.md').sort();
    const summaries = phaseFiles.filter(f => f.endsWith('-SUMMARY.md') || f === 'SUMMARY.md').sort();
    const hasResearch = phaseFiles.some(f => f.endsWith('-RESEARCH.md') || f === 'RESEARCH.md');
    const hasContext = phaseFiles.some(f => f.endsWith('-CONTEXT.md') || f === 'CONTEXT.md');
    const hasVerification = phaseFiles.some(f => f.endsWith('-VERIFICATION.md') || f === 'VERIFICATION.md');

    const completedPlanIds = new Set(
      summaries.map(s => s.replace('-SUMMARY.md', '').replace('SUMMARY.md', ''))
    );
    const incompletePlans = plans.filter(p => {
      const planId = p.replace('-PLAN.md', '').replace('PLAN.md', '');
      return !completedPlanIds.has(planId);
    });

    return {
      found: true,
      directory: toPosixPath(path.join(relBase, match)),
      phase_number: phaseNumber,
      phase_name: phaseName,
      phase_slug: phaseName ? phaseName.toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/^-+|-+$/g, '') : null,
      plans,
      summaries,
      incomplete_plans: incompletePlans,
      has_research: hasResearch,
      has_context: hasContext,
      has_verification: hasVerification,
    };
  } catch {
    return null;
  }
}

function findPhaseInternal(cwd, phase) {
  if (!phase) return null;

  const phasesDir = path.join(cwd, '.planning', 'phases');
  const normalized = normalizePhaseName(phase);

  // Search current phases first
  const current = searchPhaseInDir(phasesDir, '.planning/phases', normalized);
  if (current) return current;

  // Search archived milestone phases (newest first)
  const milestonesDir = path.join(cwd, '.planning', 'milestones');
  if (!fs.existsSync(milestonesDir)) return null;

  try {
    const milestoneEntries = fs.readdirSync(milestonesDir, { withFileTypes: true });
    const archiveDirs = milestoneEntries
      .filter(e => e.isDirectory() && /^v[\d.]+-phases$/.test(e.name))
      .map(e => e.name)
      .sort()
      .reverse();

    for (const archiveName of archiveDirs) {
      const version = archiveName.match(/^(v[\d.]+)-phases$/)[1];
      const archivePath = path.join(milestonesDir, archiveName);
      const relBase = '.planning/milestones/' + archiveName;
      const result = searchPhaseInDir(archivePath, relBase, normalized);
      if (result) {
        result.archived = version;
        return result;
      }
    }
  } catch {}

  return null;
}

function getArchivedPhaseDirs(cwd) {
  const milestonesDir = path.join(cwd, '.planning', 'milestones');
  const results = [];

  if (!fs.existsSync(milestonesDir)) return results;

  try {
    const milestoneEntries = fs.readdirSync(milestonesDir, { withFileTypes: true });
    // Find v*-phases directories, sort newest first
    const phaseDirs = milestoneEntries
      .filter(e => e.isDirectory() && /^v[\d.]+-phases$/.test(e.name))
      .map(e => e.name)
      .sort()
      .reverse();

    for (const archiveName of phaseDirs) {
      const version = archiveName.match(/^(v[\d.]+)-phases$/)[1];
      const archivePath = path.join(milestonesDir, archiveName);
      const entries = fs.readdirSync(archivePath, { withFileTypes: true });
      const dirs = entries.filter(e => e.isDirectory()).map(e => e.name).sort((a, b) => comparePhaseNum(a, b));

      for (const dir of dirs) {
        results.push({
          name: dir,
          milestone: version,
          basePath: path.join('.planning', 'milestones', archiveName),
          fullPath: path.join(archivePath, dir),
        });
      }
    }
  } catch {}

  return results;
}

// ─── Roadmap milestone scoping ───────────────────────────────────────────────

/**
 * Strip shipped milestone content wrapped in <details> blocks.
 * Used to isolate current milestone phases when searching ROADMAP.md
 * for phase headings or checkboxes — prevents matching archived milestone
 * phases that share the same numbers as current milestone phases.
 */
function stripShippedMilestones(content) {
  return content.replace(/<details>[\s\S]*?<\/details>/gi, '');
}

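The `<details>` stripping in action, as a standalone copy of the one-liner:

```javascript
// Drop <details>-wrapped shipped milestones so phase lookups only see the
// current milestone (standalone copy of stripShippedMilestones).
function stripShipped(content) {
  return content.replace(/<details>[\s\S]*?<\/details>/gi, '');
}
```

The non-greedy `*?` matters: with a greedy match, two archived blocks would swallow the current milestone content between them.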
/**
 * Extract the current milestone section from ROADMAP.md by positive lookup.
 *
 * Instead of stripping <details> blocks (a negative heuristic that breaks if
 * agents wrap the current milestone in <details>), this finds the section
 * matching the current milestone version and returns only that content.
 *
 * Falls back to stripShippedMilestones() if:
 * - cwd is not provided
 * - STATE.md doesn't exist or has no milestone field
 * - Version can't be found in ROADMAP.md
 *
 * @param {string} content - Full ROADMAP.md content
 * @param {string} [cwd] - Working directory for reading STATE.md
 * @returns {string} Content scoped to current milestone
 */
function extractCurrentMilestone(content, cwd) {
  if (!cwd) return stripShippedMilestones(content);

  // 1. Get current milestone version from STATE.md frontmatter
  let version = null;
  try {
    const statePath = path.join(cwd, '.planning', 'STATE.md');
    if (fs.existsSync(statePath)) {
      const stateRaw = fs.readFileSync(statePath, 'utf-8');
      const milestoneMatch = stateRaw.match(/^milestone:\s*(.+)/m);
      if (milestoneMatch) {
        version = milestoneMatch[1].trim();
      }
    }
  } catch {}

  // 2. Fallback: derive version from getMilestoneInfo pattern in ROADMAP.md itself
  if (!version) {
    // Check for 🚧 in-progress marker
    const inProgressMatch = content.match(/🚧\s*\*\*v(\d+\.\d+)\s/);
    if (inProgressMatch) {
      version = 'v' + inProgressMatch[1];
    }
  }

  if (!version) return stripShippedMilestones(content);

  // 3. Find the section matching this version
  // Match headings like: ## Roadmap v3.0: Name, ## v3.0 Name, etc.
  const escapedVersion = escapeRegex(version);
  const sectionPattern = new RegExp(
    `(^#{1,3}\\s+.*${escapedVersion}[^\\n]*)`,
    'mi'
  );
  const sectionMatch = content.match(sectionPattern);

  if (!sectionMatch) return stripShippedMilestones(content);

  const sectionStart = sectionMatch.index;

  // Find the end: next milestone heading at same or higher level, or EOF
  // Milestone headings look like: ## v2.0, ## Roadmap v2.0, ## ✅ v1.0, etc.
  const headingLevel = sectionMatch[1].match(/^(#{1,3})\s/)[1].length;
  const restContent = content.slice(sectionStart + sectionMatch[0].length);
  const nextMilestonePattern = new RegExp(
    `^#{1,${headingLevel}}\\s+(?:.*v\\d+\\.\\d+|✅|📋|🚧)`,
    'mi'
  );
  const nextMatch = restContent.match(nextMilestonePattern);

  let sectionEnd;
  if (nextMatch) {
    sectionEnd = sectionStart + sectionMatch[0].length + nextMatch.index;
  } else {
    sectionEnd = content.length;
  }

  // Return everything before the current milestone section (non-milestone content
  // like title, overview) plus the current milestone section
  const beforeMilestones = content.slice(0, sectionStart);
  const currentSection = content.slice(sectionStart, sectionEnd);

  // Also include any content before the first milestone heading (title, overview, etc.)
  // but strip any <details> blocks in it (these are definitely shipped)
  const preamble = beforeMilestones.replace(/<details>[\s\S]*?<\/details>/gi, '');

  return preamble + currentSection;
}

/**
 * Replace a pattern only in the current milestone section of ROADMAP.md
 * (everything after the last </details> close tag). Used for write operations
 * that must not accidentally modify archived milestone checkboxes/tables.
 */
function replaceInCurrentMilestone(content, pattern, replacement) {
  const lastDetailsClose = content.lastIndexOf('</details>');
  if (lastDetailsClose === -1) {
    return content.replace(pattern, replacement);
  }
  const offset = lastDetailsClose + '</details>'.length;
  const before = content.slice(0, offset);
  const after = content.slice(offset);
  return before + after.replace(pattern, replacement);
}

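A standalone copy of `replaceInCurrentMilestone`, showing that a checkbox inside an archived `<details>` block is left untouched:

```javascript
// Only rewrite content after the last </details>, so archived milestone
// checkboxes/tables are never modified (standalone copy).
function replaceAfterDetails(content, pattern, replacement) {
  const lastDetailsClose = content.lastIndexOf('</details>');
  if (lastDetailsClose === -1) return content.replace(pattern, replacement);
  const offset = lastDetailsClose + '</details>'.length;
  return content.slice(0, offset) + content.slice(offset).replace(pattern, replacement);
}
```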
// ─── Roadmap & model utilities ────────────────────────────────────────────────

function getRoadmapPhaseInternal(cwd, phaseNum) {
  if (!phaseNum) return null;
  const roadmapPath = path.join(cwd, '.planning', 'ROADMAP.md');
  if (!fs.existsSync(roadmapPath)) return null;

  try {
    const content = extractCurrentMilestone(fs.readFileSync(roadmapPath, 'utf-8'), cwd);
    const escapedPhase = escapeRegex(phaseNum.toString());
    const phasePattern = new RegExp(`#{2,4}\\s*Phase\\s+${escapedPhase}:\\s*([^\\n]+)`, 'i');
    const headerMatch = content.match(phasePattern);
    if (!headerMatch) return null;

    const phaseName = headerMatch[1].trim();
    const headerIndex = headerMatch.index;
    const restOfContent = content.slice(headerIndex);
    const nextHeaderMatch = restOfContent.match(/\n#{2,4}\s+Phase\s+\d/i);
    const sectionEnd = nextHeaderMatch ? headerIndex + nextHeaderMatch.index : content.length;
    const section = content.slice(headerIndex, sectionEnd).trim();

    const goalMatch = section.match(/\*\*Goal(?:\*\*:|\*?\*?:\*\*)\s*([^\n]+)/i);
    const goal = goalMatch ? goalMatch[1].trim() : null;

    return {
      found: true,
      phase_number: phaseNum.toString(),
      phase_name: phaseName,
      goal,
      section,
    };
  } catch {
    return null;
  }
}

// ─── Model alias resolution ───────────────────────────────────────────────────

/**
 * Map short model aliases to full model IDs.
 * Updated each release to match current model versions.
 * Users can override with model_overrides in config.json for custom/latest models.
 */
const MODEL_ALIAS_MAP = {
  'opus': 'claude-opus-4-0',
  'sonnet': 'claude-sonnet-4-5',
  'haiku': 'claude-haiku-3-5',
};

function resolveModelInternal(cwd, agentType) {
  const config = loadConfig(cwd);

  // Check per-agent override first
  const override = config.model_overrides?.[agentType];
  if (override) {
    return override;
  }

  // Fall back to profile lookup
  const profile = String(config.model_profile || 'balanced').toLowerCase();
  const agentModels = MODEL_PROFILES[agentType];
  if (!agentModels) return 'sonnet';
  if (profile === 'inherit') return 'inherit';
  const alias = agentModels[profile] || agentModels['balanced'] || 'sonnet';

  // If resolve_model_ids is true, map alias to full model ID
  // This prevents 404s when the Task tool passes aliases directly to the API
  if (config.resolve_model_ids) {
    return MODEL_ALIAS_MAP[alias] || alias;
  }

  return alias;
}

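The resolution precedence (per-agent override, then profile table, then the optional alias-to-ID map) can be sketched with an illustrative profile table; `PROFILES`, `ALIASES`, and `resolveModel` below are stand-ins for `MODEL_PROFILES`, `MODEL_ALIAS_MAP`, and `resolveModelInternal`, not the real data:

```javascript
// Illustrative stand-ins; the precedence mirrors resolveModelInternal:
// override -> profile lookup -> optional alias-to-ID mapping.
const PROFILES = { planner: { quality: 'opus', balanced: 'sonnet', budget: 'haiku' } };
const ALIASES = { opus: 'claude-opus-4-0', sonnet: 'claude-sonnet-4-5', haiku: 'claude-haiku-3-5' };

function resolveModel(agentType, config) {
  const override = config.model_overrides?.[agentType];
  if (override) return override;
  const agentModels = PROFILES[agentType];
  if (!agentModels) return 'sonnet';
  const profile = String(config.model_profile || 'balanced').toLowerCase();
  if (profile === 'inherit') return 'inherit';
  const alias = agentModels[profile] || agentModels.balanced || 'sonnet';
  return config.resolve_model_ids ? (ALIASES[alias] || alias) : alias;
}
```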
// ─── Misc utilities ───────────────────────────────────────────────────────────

function pathExistsInternal(cwd, targetPath) {
  const fullPath = path.isAbsolute(targetPath) ? targetPath : path.join(cwd, targetPath);
  try {
    fs.statSync(fullPath);
    return true;
  } catch {
    return false;
  }
}

function generateSlugInternal(text) {
  if (!text) return null;
  return text.toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/^-+|-+$/g, '');
}

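`generateSlugInternal` as a standalone copy:

```javascript
// Lowercase, replace non-alphanumeric runs with single hyphens, and trim
// leading/trailing hyphens (standalone copy of generateSlugInternal).
function generateSlug(text) {
  if (!text) return null;
  return text.toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/^-+|-+$/g, '');
}
```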
function getMilestoneInfo(cwd) {
  try {
    const roadmap = fs.readFileSync(path.join(cwd, '.planning', 'ROADMAP.md'), 'utf-8');

    // First: check for list-format roadmaps using 🚧 (in-progress) marker
    // e.g. "- 🚧 **v2.1 Belgium** — Phases 24-28 (in progress)"
    const inProgressMatch = roadmap.match(/🚧\s*\*\*v(\d+\.\d+)\s+([^*]+)\*\*/);
    if (inProgressMatch) {
      return {
        version: 'v' + inProgressMatch[1],
        name: inProgressMatch[2].trim(),
      };
    }

    // Second: heading-format roadmaps — strip shipped milestones in <details> blocks
    const cleaned = stripShippedMilestones(roadmap);
    // Extract version and name from the same ## heading for consistency
    const headingMatch = cleaned.match(/## .*v(\d+\.\d+)[:\s]+([^\n(]+)/);
    if (headingMatch) {
      return {
        version: 'v' + headingMatch[1],
        name: headingMatch[2].trim(),
      };
    }
    // Fallback: try bare version match
    const versionMatch = cleaned.match(/v(\d+\.\d+)/);
    return {
      version: versionMatch ? versionMatch[0] : 'v1.0',
      name: 'milestone',
    };
  } catch {
    return { version: 'v1.0', name: 'milestone' };
  }
}

/**
 * Returns a filter function that checks whether a phase directory belongs
 * to the current milestone based on ROADMAP.md phase headings.
 * If no ROADMAP exists or no phases are listed, returns a pass-all filter.
 */
function getMilestonePhaseFilter(cwd) {
  const milestonePhaseNums = new Set();
  try {
    const roadmap = extractCurrentMilestone(fs.readFileSync(path.join(cwd, '.planning', 'ROADMAP.md'), 'utf-8'), cwd);
    const phasePattern = /#{2,4}\s*Phase\s+(\d+[A-Z]?(?:\.\d+)*)\s*:/gi;
    let m;
    while ((m = phasePattern.exec(roadmap)) !== null) {
      milestonePhaseNums.add(m[1]);
    }
  } catch {}

  if (milestonePhaseNums.size === 0) {
    const passAll = () => true;
    passAll.phaseCount = 0;
    return passAll;
  }

  const normalized = new Set(
    [...milestonePhaseNums].map(n => (n.replace(/^0+/, '') || '0').toLowerCase())
  );

  function isDirInMilestone(dirName) {
    const m = dirName.match(/^0*(\d+[A-Za-z]?(?:\.\d+)*)/);
    if (!m) return false;
    return normalized.has(m[1].toLowerCase());
  }
  isDirInMilestone.phaseCount = milestonePhaseNums.size;
  return isDirInMilestone;
}

module.exports = {
  output,
  error,
  safeReadFile,
  loadConfig,
  isGitIgnored,
  execGit,
  normalizeMd,
  escapeRegex,
  normalizePhaseName,
  comparePhaseNum,
  searchPhaseInDir,
  findPhaseInternal,
  getArchivedPhaseDirs,
  getRoadmapPhaseInternal,
  resolveModelInternal,
  pathExistsInternal,
  generateSlugInternal,
  getMilestoneInfo,
  getMilestonePhaseFilter,
  stripShippedMilestones,
  extractCurrentMilestone,
  replaceInCurrentMilestone,
  toPosixPath,
  MODEL_ALIAS_MAP,
};
299  get-shit-done/bin/lib/frontmatter.cjs  Normal file
@@ -0,0 +1,299 @@
/**
 * Frontmatter — YAML frontmatter parsing, serialization, and CRUD commands
 */

const fs = require('fs');
const path = require('path');
const { safeReadFile, normalizeMd, output, error } = require('./core.cjs');

// ─── Parsing engine ───────────────────────────────────────────────────────────

function extractFrontmatter(content) {
  const frontmatter = {};
  const match = content.match(/^---\r?\n([\s\S]+?)\r?\n---/);
  if (!match) return frontmatter;

  const yaml = match[1];
  const lines = yaml.split(/\r?\n/);

  // Stack to track nested objects: [{obj, key, indent}]
  // obj = object to write to, key = current key collecting array items, indent = indentation level
  let stack = [{ obj: frontmatter, key: null, indent: -1 }];

  for (const line of lines) {
    // Skip empty lines
    if (line.trim() === '') continue;

    // Calculate indentation (number of leading spaces)
    const indentMatch = line.match(/^(\s*)/);
    const indent = indentMatch ? indentMatch[1].length : 0;

    // Pop stack back to appropriate level
    while (stack.length > 1 && indent <= stack[stack.length - 1].indent) {
      stack.pop();
    }

    const current = stack[stack.length - 1];

    // Check for key: value pattern
    const keyMatch = line.match(/^(\s*)([a-zA-Z0-9_-]+):\s*(.*)/);
    if (keyMatch) {
      const key = keyMatch[2];
      const value = keyMatch[3].trim();

      if (value === '' || value === '[') {
        // Key with no value or opening bracket — could be nested object or array
        // We'll determine based on next lines; for now create a placeholder
        current.obj[key] = value === '[' ? [] : {};
        current.key = null;
        // Push new context for potential nested content
        stack.push({ obj: current.obj[key], key: null, indent });
      } else if (value.startsWith('[') && value.endsWith(']')) {
        // Inline array: key: [a, b, c]
        current.obj[key] = value.slice(1, -1).split(',').map(s => s.trim().replace(/^["']|["']$/g, '')).filter(Boolean);
        current.key = null;
      } else {
        // Simple key: value
        current.obj[key] = value.replace(/^["']|["']$/g, '');
        current.key = null;
      }
    } else if (line.trim().startsWith('- ')) {
      // Array item
      const itemValue = line.trim().slice(2).replace(/^["']|["']$/g, '');

      // If current context is an empty object, convert to array
      if (typeof current.obj === 'object' && !Array.isArray(current.obj) && Object.keys(current.obj).length === 0) {
        // Find the key in parent that points to this object and convert it
        const parent = stack.length > 1 ? stack[stack.length - 2] : null;
        if (parent) {
          for (const k of Object.keys(parent.obj)) {
            if (parent.obj[k] === current.obj) {
              parent.obj[k] = [itemValue];
              current.obj = parent.obj[k];
              break;
            }
          }
        }
      } else if (Array.isArray(current.obj)) {
        current.obj.push(itemValue);
      }
    }
  }

  return frontmatter;
}
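The core of the approach can be illustrated with a deliberately simplified stand-in. This is a hypothetical flat-keys-only reduction, not code from the file: it uses the same `---`-delimited regex, then scans key/value lines, stripping surrounding quotes the same way.

```javascript
// Hypothetical, flat (non-nested) reduction of extractFrontmatter:
// 1. capture the ---...--- block, 2. scan each "key: value" line,
// 3. trim and strip surrounding quotes from the value.
function extractFlatFrontmatter(content) {
  const match = content.match(/^---\r?\n([\s\S]+?)\r?\n---/);
  if (!match) return {};
  const fm = {};
  for (const line of match[1].split(/\r?\n/)) {
    const kv = line.match(/^([a-zA-Z0-9_-]+):\s*(.*)$/);
    if (kv) fm[kv[1]] = kv[2].trim().replace(/^["']|["']$/g, '');
  }
  return fm;
}

const doc = '---\nphase: 3\nstatus: "done"\n---\n# Body';
console.log(extractFlatFrontmatter(doc)); // { phase: '3', status: 'done' }
```

The real parser adds the indentation stack so nested objects and block arrays survive the same scan.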

function reconstructFrontmatter(obj) {
  const lines = [];
  for (const [key, value] of Object.entries(obj)) {
    if (value === null || value === undefined) continue;
    if (Array.isArray(value)) {
      if (value.length === 0) {
        lines.push(`${key}: []`);
      } else if (value.every(v => typeof v === 'string') && value.length <= 3 && value.join(', ').length < 60) {
        lines.push(`${key}: [${value.join(', ')}]`);
      } else {
        lines.push(`${key}:`);
        for (const item of value) {
          lines.push(`  - ${typeof item === 'string' && (item.includes(':') || item.includes('#')) ? `"${item}"` : item}`);
        }
      }
    } else if (typeof value === 'object') {
      lines.push(`${key}:`);
      for (const [subkey, subval] of Object.entries(value)) {
        if (subval === null || subval === undefined) continue;
        if (Array.isArray(subval)) {
          if (subval.length === 0) {
            lines.push(`  ${subkey}: []`);
          } else if (subval.every(v => typeof v === 'string') && subval.length <= 3 && subval.join(', ').length < 60) {
            lines.push(`  ${subkey}: [${subval.join(', ')}]`);
          } else {
            lines.push(`  ${subkey}:`);
            for (const item of subval) {
              lines.push(`    - ${typeof item === 'string' && (item.includes(':') || item.includes('#')) ? `"${item}"` : item}`);
            }
          }
        } else if (typeof subval === 'object') {
          lines.push(`  ${subkey}:`);
          for (const [subsubkey, subsubval] of Object.entries(subval)) {
            if (subsubval === null || subsubval === undefined) continue;
            if (Array.isArray(subsubval)) {
              if (subsubval.length === 0) {
                lines.push(`    ${subsubkey}: []`);
              } else {
                lines.push(`    ${subsubkey}:`);
                for (const item of subsubval) {
                  lines.push(`      - ${item}`);
                }
              }
            } else {
              lines.push(`    ${subsubkey}: ${subsubval}`);
            }
          }
        } else {
          const sv = String(subval);
          lines.push(`  ${subkey}: ${sv.includes(':') || sv.includes('#') ? `"${sv}"` : sv}`);
        }
      }
    } else {
      const sv = String(value);
      if (sv.includes(':') || sv.includes('#') || sv.startsWith('[') || sv.startsWith('{')) {
        lines.push(`${key}: "${sv}"`);
      } else {
        lines.push(`${key}: ${sv}`);
      }
    }
  }
  return lines.join('\n');
}

function spliceFrontmatter(content, newObj) {
  const yamlStr = reconstructFrontmatter(newObj);
  const match = content.match(/^---\r?\n[\s\S]+?\r?\n---/);
  if (match) {
    return `---\n${yamlStr}\n---` + content.slice(match[0].length);
  }
  return `---\n${yamlStr}\n---\n\n` + content;
}
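The serialize-and-splice round trip can be sketched with a simplified stand-in. `splice` below is a hypothetical reduction of the real `spliceFrontmatter` (flat string fields only, no quoting rules), shown just to make the replace-or-prepend behavior concrete.

```javascript
// Hypothetical flat reduction of the splice step: serialize the object to
// YAML lines, then either swap it into the existing ---...--- block or
// prepend a fresh block if the document has none.
function splice(content, obj) {
  const yaml = Object.entries(obj).map(([k, v]) => `${k}: ${v}`).join('\n');
  const match = content.match(/^---\r?\n[\s\S]+?\r?\n---/);
  if (match) return `---\n${yaml}\n---` + content.slice(match[0].length);
  return `---\n${yaml}\n---\n\n` + content;
}

console.log(splice('---\nphase: 1\n---\nBody', { phase: 2, status: 'done' }));
// ---
// phase: 2
// status: done
// ---
// Body
```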

function parseMustHavesBlock(content, blockName) {
  // Extract a specific block from must_haves in raw frontmatter YAML
  // Handles 3-level nesting: must_haves > artifacts/key_links > [{path, provides, ...}]
  const fmMatch = content.match(/^---\r?\n([\s\S]+?)\r?\n---/);
  if (!fmMatch) return [];

  const yaml = fmMatch[1];
  // Find the block (e.g., "truths:", "artifacts:", "key_links:")
  const blockPattern = new RegExp(`^\\s{4}${blockName}:\\s*$`, 'm');
  const blockStart = yaml.search(blockPattern);
  if (blockStart === -1) return [];

  const afterBlock = yaml.slice(blockStart);
  const blockLines = afterBlock.split(/\r?\n/).slice(1); // skip the header line

  const items = [];
  let current = null;

  for (const line of blockLines) {
    // Stop at same or lower indent level (non-continuation)
    if (line.trim() === '') continue;
    const indent = line.match(/^(\s*)/)[1].length;
    if (indent <= 4 && line.trim() !== '') break; // back to must_haves level or higher

    if (line.match(/^\s{6}-\s+/)) {
      // New list item at 6-space indent
      if (current) items.push(current);
      current = {};
      // Check if it's a simple string item
      const simpleMatch = line.match(/^\s{6}-\s+"?([^"]+)"?\s*$/);
      if (simpleMatch && !line.includes(':')) {
        current = simpleMatch[1];
      } else {
        // Key-value on same line as dash: "- path: value"
        const kvMatch = line.match(/^\s{6}-\s+(\w+):\s*"?([^"]*)"?\s*$/);
        if (kvMatch) {
          current = {};
          current[kvMatch[1]] = kvMatch[2];
        }
      }
    } else if (current && typeof current === 'object') {
      // Continuation key-value at 8+ space indent
      const kvMatch = line.match(/^\s{8,}(\w+):\s*"?([^"]*)"?\s*$/);
      if (kvMatch) {
        const val = kvMatch[2];
        // Try to parse as number
        current[kvMatch[1]] = /^\d+$/.test(val) ? parseInt(val, 10) : val;
      }
      // Array items under a key
      const arrMatch = line.match(/^\s{10,}-\s+"?([^"]+)"?\s*$/);
      if (arrMatch) {
        // Find the last key added and convert to array
        const keys = Object.keys(current);
        const lastKey = keys[keys.length - 1];
        if (lastKey && !Array.isArray(current[lastKey])) {
          current[lastKey] = current[lastKey] ? [current[lastKey]] : [];
        }
        if (lastKey) current[lastKey].push(arrMatch[1]);
      }
    }
  }
  if (current) items.push(current);

  return items;
}

// ─── Frontmatter CRUD commands ────────────────────────────────────────────────

const FRONTMATTER_SCHEMAS = {
  plan: { required: ['phase', 'plan', 'type', 'wave', 'depends_on', 'files_modified', 'autonomous', 'must_haves'] },
  summary: { required: ['phase', 'plan', 'subsystem', 'tags', 'duration', 'completed'] },
  verification: { required: ['phase', 'verified', 'status', 'score'] },
};

function cmdFrontmatterGet(cwd, filePath, field, raw) {
  if (!filePath) { error('file path required'); }
  const fullPath = path.isAbsolute(filePath) ? filePath : path.join(cwd, filePath);
  const content = safeReadFile(fullPath);
  if (!content) { output({ error: 'File not found', path: filePath }, raw); return; }
  const fm = extractFrontmatter(content);
  if (field) {
    const value = fm[field];
    if (value === undefined) { output({ error: 'Field not found', field }, raw); return; }
    output({ [field]: value }, raw, JSON.stringify(value));
  } else {
    output(fm, raw);
  }
}

function cmdFrontmatterSet(cwd, filePath, field, value, raw) {
  if (!filePath || !field || value === undefined) { error('file, field, and value required'); }
  const fullPath = path.isAbsolute(filePath) ? filePath : path.join(cwd, filePath);
  if (!fs.existsSync(fullPath)) { output({ error: 'File not found', path: filePath }, raw); return; }
  const content = fs.readFileSync(fullPath, 'utf-8');
  const fm = extractFrontmatter(content);
  let parsedValue;
  try { parsedValue = JSON.parse(value); } catch { parsedValue = value; }
  fm[field] = parsedValue;
  const newContent = spliceFrontmatter(content, fm);
  fs.writeFileSync(fullPath, normalizeMd(newContent), 'utf-8');
  output({ updated: true, field, value: parsedValue }, raw, 'true');
}

function cmdFrontmatterMerge(cwd, filePath, data, raw) {
  if (!filePath || !data) { error('file and data required'); }
  const fullPath = path.isAbsolute(filePath) ? filePath : path.join(cwd, filePath);
  if (!fs.existsSync(fullPath)) { output({ error: 'File not found', path: filePath }, raw); return; }
  const content = fs.readFileSync(fullPath, 'utf-8');
  const fm = extractFrontmatter(content);
  let mergeData;
  try { mergeData = JSON.parse(data); } catch { error('Invalid JSON for --data'); return; }
  Object.assign(fm, mergeData);
  const newContent = spliceFrontmatter(content, fm);
  fs.writeFileSync(fullPath, normalizeMd(newContent), 'utf-8');
  output({ merged: true, fields: Object.keys(mergeData) }, raw, 'true');
}

function cmdFrontmatterValidate(cwd, filePath, schemaName, raw) {
  if (!filePath || !schemaName) { error('file and schema required'); }
  const schema = FRONTMATTER_SCHEMAS[schemaName];
  if (!schema) { error(`Unknown schema: ${schemaName}. Available: ${Object.keys(FRONTMATTER_SCHEMAS).join(', ')}`); }
  const fullPath = path.isAbsolute(filePath) ? filePath : path.join(cwd, filePath);
  const content = safeReadFile(fullPath);
  if (!content) { output({ error: 'File not found', path: filePath }, raw); return; }
  const fm = extractFrontmatter(content);
  const missing = schema.required.filter(f => fm[f] === undefined);
  const present = schema.required.filter(f => fm[f] !== undefined);
  output({ valid: missing.length === 0, missing, present, schema: schemaName }, raw, missing.length === 0 ? 'valid' : 'invalid');
}

module.exports = {
  extractFrontmatter,
  reconstructFrontmatter,
  spliceFrontmatter,
  parseMustHavesBlock,
  FRONTMATTER_SCHEMAS,
  cmdFrontmatterGet,
  cmdFrontmatterSet,
  cmdFrontmatterMerge,
  cmdFrontmatterValidate,
};
782  get-shit-done/bin/lib/init.cjs  Normal file
@@ -0,0 +1,782 @@
/**
 * Init — Compound init commands for workflow bootstrapping
 */

const fs = require('fs');
const path = require('path');
const { execSync } = require('child_process');
const { loadConfig, resolveModelInternal, findPhaseInternal, getRoadmapPhaseInternal, pathExistsInternal, generateSlugInternal, getMilestoneInfo, getMilestonePhaseFilter, stripShippedMilestones, extractCurrentMilestone, normalizePhaseName, toPosixPath, output, error } = require('./core.cjs');

function cmdInitExecutePhase(cwd, phase, raw) {
  if (!phase) {
    error('phase required for init execute-phase');
  }

  const config = loadConfig(cwd);
  const phaseInfo = findPhaseInternal(cwd, phase);
  const milestone = getMilestoneInfo(cwd);

  const roadmapPhase = getRoadmapPhaseInternal(cwd, phase);
  const reqMatch = roadmapPhase?.section?.match(/^\*\*Requirements\*\*:[^\S\n]*([^\n]*)$/m);
  const reqExtracted = reqMatch
    ? reqMatch[1].replace(/[\[\]]/g, '').split(',').map(s => s.trim()).filter(Boolean).join(', ')
    : null;
  const phase_req_ids = (reqExtracted && reqExtracted !== 'TBD') ? reqExtracted : null;

  const result = {
    // Models
    executor_model: resolveModelInternal(cwd, 'gsd-executor'),
    verifier_model: resolveModelInternal(cwd, 'gsd-verifier'),

    // Config flags
    commit_docs: config.commit_docs,
    parallelization: config.parallelization,
    branching_strategy: config.branching_strategy,
    phase_branch_template: config.phase_branch_template,
    milestone_branch_template: config.milestone_branch_template,
    verifier_enabled: config.verifier,

    // Phase info
    phase_found: !!phaseInfo,
    phase_dir: phaseInfo?.directory || null,
    phase_number: phaseInfo?.phase_number || null,
    phase_name: phaseInfo?.phase_name || null,
    phase_slug: phaseInfo?.phase_slug || null,
    phase_req_ids,

    // Plan inventory
    plans: phaseInfo?.plans || [],
    summaries: phaseInfo?.summaries || [],
    incomplete_plans: phaseInfo?.incomplete_plans || [],
    plan_count: phaseInfo?.plans?.length || 0,
    incomplete_count: phaseInfo?.incomplete_plans?.length || 0,

    // Branch name (pre-computed)
    branch_name: config.branching_strategy === 'phase' && phaseInfo
      ? config.phase_branch_template
          .replace('{phase}', phaseInfo.phase_number)
          .replace('{slug}', phaseInfo.phase_slug || 'phase')
      : config.branching_strategy === 'milestone'
        ? config.milestone_branch_template
            .replace('{milestone}', milestone.version)
            .replace('{slug}', generateSlugInternal(milestone.name) || 'milestone')
        : null,

    // Milestone info
    milestone_version: milestone.version,
    milestone_name: milestone.name,
    milestone_slug: generateSlugInternal(milestone.name),

    // File existence
    state_exists: pathExistsInternal(cwd, '.planning/STATE.md'),
    roadmap_exists: pathExistsInternal(cwd, '.planning/ROADMAP.md'),
    config_exists: pathExistsInternal(cwd, '.planning/config.json'),

    // File paths
    state_path: '.planning/STATE.md',
    roadmap_path: '.planning/ROADMAP.md',
    config_path: '.planning/config.json',
  };

  output(result, raw);
}

function cmdInitPlanPhase(cwd, phase, raw) {
  if (!phase) {
    error('phase required for init plan-phase');
  }

  const config = loadConfig(cwd);
  const phaseInfo = findPhaseInternal(cwd, phase);

  const roadmapPhase = getRoadmapPhaseInternal(cwd, phase);
  const reqMatch = roadmapPhase?.section?.match(/^\*\*Requirements\*\*:[^\S\n]*([^\n]*)$/m);
  const reqExtracted = reqMatch
    ? reqMatch[1].replace(/[\[\]]/g, '').split(',').map(s => s.trim()).filter(Boolean).join(', ')
    : null;
  const phase_req_ids = (reqExtracted && reqExtracted !== 'TBD') ? reqExtracted : null;

  const result = {
    // Models
    researcher_model: resolveModelInternal(cwd, 'gsd-phase-researcher'),
    planner_model: resolveModelInternal(cwd, 'gsd-planner'),
    checker_model: resolveModelInternal(cwd, 'gsd-plan-checker'),

    // Workflow flags
    research_enabled: config.research,
    plan_checker_enabled: config.plan_checker,
    nyquist_validation_enabled: config.nyquist_validation,
    commit_docs: config.commit_docs,

    // Phase info
    phase_found: !!phaseInfo,
    phase_dir: phaseInfo?.directory || null,
    phase_number: phaseInfo?.phase_number || null,
    phase_name: phaseInfo?.phase_name || null,
    phase_slug: phaseInfo?.phase_slug || null,
    padded_phase: phaseInfo?.phase_number ? normalizePhaseName(phaseInfo.phase_number) : null,
    phase_req_ids,

    // Existing artifacts
    has_research: phaseInfo?.has_research || false,
    has_context: phaseInfo?.has_context || false,
    has_plans: (phaseInfo?.plans?.length || 0) > 0,
    plan_count: phaseInfo?.plans?.length || 0,

    // Environment
    planning_exists: pathExistsInternal(cwd, '.planning'),
    roadmap_exists: pathExistsInternal(cwd, '.planning/ROADMAP.md'),

    // File paths
    state_path: '.planning/STATE.md',
    roadmap_path: '.planning/ROADMAP.md',
    requirements_path: '.planning/REQUIREMENTS.md',
  };

  if (phaseInfo?.directory) {
    // Find *-CONTEXT.md in phase directory
    const phaseDirFull = path.join(cwd, phaseInfo.directory);
    try {
      const files = fs.readdirSync(phaseDirFull);
      const contextFile = files.find(f => f.endsWith('-CONTEXT.md') || f === 'CONTEXT.md');
      if (contextFile) {
        result.context_path = toPosixPath(path.join(phaseInfo.directory, contextFile));
      }
      const researchFile = files.find(f => f.endsWith('-RESEARCH.md') || f === 'RESEARCH.md');
      if (researchFile) {
        result.research_path = toPosixPath(path.join(phaseInfo.directory, researchFile));
      }
      const verificationFile = files.find(f => f.endsWith('-VERIFICATION.md') || f === 'VERIFICATION.md');
      if (verificationFile) {
        result.verification_path = toPosixPath(path.join(phaseInfo.directory, verificationFile));
      }
      const uatFile = files.find(f => f.endsWith('-UAT.md') || f === 'UAT.md');
      if (uatFile) {
        result.uat_path = toPosixPath(path.join(phaseInfo.directory, uatFile));
      }
    } catch {}
  }

  output(result, raw);
}

function cmdInitNewProject(cwd, raw) {
  const config = loadConfig(cwd);

  // Detect Brave Search API key availability
  const homedir = require('os').homedir();
  const braveKeyFile = path.join(homedir, '.gsd', 'brave_api_key');
  const hasBraveSearch = !!(process.env.BRAVE_API_KEY || fs.existsSync(braveKeyFile));

  // Detect existing code
  let hasCode = false;
  let hasPackageFile = false;
  try {
    const files = execSync('find . -maxdepth 3 \\( -name "*.ts" -o -name "*.js" -o -name "*.py" -o -name "*.go" -o -name "*.rs" -o -name "*.swift" -o -name "*.java" \\) 2>/dev/null | grep -v node_modules | grep -v .git | head -5', {
      cwd,
      encoding: 'utf-8',
      stdio: ['pipe', 'pipe', 'pipe'],
    });
    hasCode = files.trim().length > 0;
  } catch {}

  hasPackageFile = pathExistsInternal(cwd, 'package.json') ||
    pathExistsInternal(cwd, 'requirements.txt') ||
    pathExistsInternal(cwd, 'Cargo.toml') ||
    pathExistsInternal(cwd, 'go.mod') ||
    pathExistsInternal(cwd, 'Package.swift');

  const result = {
    // Models
    researcher_model: resolveModelInternal(cwd, 'gsd-project-researcher'),
    synthesizer_model: resolveModelInternal(cwd, 'gsd-research-synthesizer'),
    roadmapper_model: resolveModelInternal(cwd, 'gsd-roadmapper'),

    // Config
    commit_docs: config.commit_docs,

    // Existing state
    project_exists: pathExistsInternal(cwd, '.planning/PROJECT.md'),
    has_codebase_map: pathExistsInternal(cwd, '.planning/codebase'),
    planning_exists: pathExistsInternal(cwd, '.planning'),

    // Brownfield detection
    has_existing_code: hasCode,
    has_package_file: hasPackageFile,
    is_brownfield: hasCode || hasPackageFile,
    needs_codebase_map: (hasCode || hasPackageFile) && !pathExistsInternal(cwd, '.planning/codebase'),

    // Git state
    has_git: pathExistsInternal(cwd, '.git'),

    // Enhanced search
    brave_search_available: hasBraveSearch,

    // File paths
    project_path: '.planning/PROJECT.md',
  };

  output(result, raw);
}

function cmdInitNewMilestone(cwd, raw) {
  const config = loadConfig(cwd);
  const milestone = getMilestoneInfo(cwd);

  const result = {
    // Models
    researcher_model: resolveModelInternal(cwd, 'gsd-project-researcher'),
    synthesizer_model: resolveModelInternal(cwd, 'gsd-research-synthesizer'),
    roadmapper_model: resolveModelInternal(cwd, 'gsd-roadmapper'),

    // Config
    commit_docs: config.commit_docs,
    research_enabled: config.research,

    // Current milestone
    current_milestone: milestone.version,
    current_milestone_name: milestone.name,

    // File existence
    project_exists: pathExistsInternal(cwd, '.planning/PROJECT.md'),
    roadmap_exists: pathExistsInternal(cwd, '.planning/ROADMAP.md'),
    state_exists: pathExistsInternal(cwd, '.planning/STATE.md'),

    // File paths
    project_path: '.planning/PROJECT.md',
    roadmap_path: '.planning/ROADMAP.md',
    state_path: '.planning/STATE.md',
  };

  output(result, raw);
}

function cmdInitQuick(cwd, description, raw) {
  const config = loadConfig(cwd);
  const now = new Date();
  const slug = description ? generateSlugInternal(description)?.substring(0, 40) : null;

  // Generate collision-resistant quick task ID: YYMMDD-xxx
  // xxx = 2-second precision blocks since midnight, encoded as 3-char Base36 (lowercase)
  // Range: 000 (00:00:00) to xbz (23:59:58), guaranteed 3 chars for any time of day.
  // Provides ~2s uniqueness window per user — practically collision-free across a team.
  const yy = String(now.getFullYear()).slice(-2);
  const mm = String(now.getMonth() + 1).padStart(2, '0');
  const dd = String(now.getDate()).padStart(2, '0');
  const dateStr = yy + mm + dd;
  const secondsSinceMidnight = now.getHours() * 3600 + now.getMinutes() * 60 + now.getSeconds();
  const timeBlocks = Math.floor(secondsSinceMidnight / 2);
  const timeEncoded = timeBlocks.toString(36).padStart(3, '0');
  const quickId = dateStr + '-' + timeEncoded;

  const result = {
    // Models
    planner_model: resolveModelInternal(cwd, 'gsd-planner'),
    executor_model: resolveModelInternal(cwd, 'gsd-executor'),
    checker_model: resolveModelInternal(cwd, 'gsd-plan-checker'),
    verifier_model: resolveModelInternal(cwd, 'gsd-verifier'),

    // Config
    commit_docs: config.commit_docs,

    // Quick task info
    quick_id: quickId,
    slug: slug,
    description: description || null,

    // Timestamps
    date: now.toISOString().split('T')[0],
    timestamp: now.toISOString(),

    // Paths
    quick_dir: '.planning/quick',
    task_dir: slug ? `.planning/quick/${quickId}-${slug}` : null,

    // File existence
    roadmap_exists: pathExistsInternal(cwd, '.planning/ROADMAP.md'),
    planning_exists: pathExistsInternal(cwd, '.planning'),
  };

  output(result, raw);
}
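The ID scheme described in the comments above can be sketched standalone. `quickId` here is a hypothetical extraction of the inline logic (the original computes it inside `cmdInitQuick` against `new Date()`), shown with fixed dates so the encoding is easy to verify.

```javascript
// Hypothetical standalone version of the quick-task ID computation:
// YYMMDD date prefix, then 2-second blocks since midnight encoded as a
// zero-padded 3-character base36 suffix.
function quickId(now) {
  const yy = String(now.getFullYear()).slice(-2);
  const mm = String(now.getMonth() + 1).padStart(2, '0');
  const dd = String(now.getDate()).padStart(2, '0');
  const seconds = now.getHours() * 3600 + now.getMinutes() * 60 + now.getSeconds();
  const blocks = Math.floor(seconds / 2);
  return `${yy}${mm}${dd}-${blocks.toString(36).padStart(3, '0')}`;
}

// 23:59:58 → 86398 s → 43199 blocks → "xbz", the documented maximum.
console.log(quickId(new Date(2025, 0, 2, 23, 59, 58))); // 250102-xbz
```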

function cmdInitResume(cwd, raw) {
  const config = loadConfig(cwd);

  // Check for interrupted agent
  let interruptedAgentId = null;
  try {
    interruptedAgentId = fs.readFileSync(path.join(cwd, '.planning', 'current-agent-id.txt'), 'utf-8').trim();
  } catch {}

  const result = {
    // File existence
    state_exists: pathExistsInternal(cwd, '.planning/STATE.md'),
    roadmap_exists: pathExistsInternal(cwd, '.planning/ROADMAP.md'),
    project_exists: pathExistsInternal(cwd, '.planning/PROJECT.md'),
    planning_exists: pathExistsInternal(cwd, '.planning'),

    // File paths
    state_path: '.planning/STATE.md',
    roadmap_path: '.planning/ROADMAP.md',
    project_path: '.planning/PROJECT.md',

    // Agent state
    has_interrupted_agent: !!interruptedAgentId,
    interrupted_agent_id: interruptedAgentId,

    // Config
    commit_docs: config.commit_docs,
  };

  output(result, raw);
}

function cmdInitVerifyWork(cwd, phase, raw) {
  if (!phase) {
    error('phase required for init verify-work');
  }

  const config = loadConfig(cwd);
  const phaseInfo = findPhaseInternal(cwd, phase);

  const result = {
    // Models
    planner_model: resolveModelInternal(cwd, 'gsd-planner'),
    checker_model: resolveModelInternal(cwd, 'gsd-plan-checker'),

    // Config
    commit_docs: config.commit_docs,

    // Phase info
    phase_found: !!phaseInfo,
    phase_dir: phaseInfo?.directory || null,
    phase_number: phaseInfo?.phase_number || null,
    phase_name: phaseInfo?.phase_name || null,

    // Existing artifacts
    has_verification: phaseInfo?.has_verification || false,
  };

  output(result, raw);
}

function cmdInitPhaseOp(cwd, phase, raw) {
  const config = loadConfig(cwd);
  let phaseInfo = findPhaseInternal(cwd, phase);

  // If the only disk match comes from an archived milestone, prefer the
  // current milestone's ROADMAP entry so discuss-phase and similar flows
  // don't attach to shipped work that reused the same phase number.
  if (phaseInfo?.archived) {
    const roadmapPhase = getRoadmapPhaseInternal(cwd, phase);
    if (roadmapPhase?.found) {
      const phaseName = roadmapPhase.phase_name;
      phaseInfo = {
        found: true,
        directory: null,
        phase_number: roadmapPhase.phase_number,
        phase_name: phaseName,
        phase_slug: phaseName ? phaseName.toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/^-+|-+$/g, '') : null,
        plans: [],
        summaries: [],
        incomplete_plans: [],
        has_research: false,
        has_context: false,
        has_verification: false,
      };
    }
  }

  // Fallback to ROADMAP.md if no directory exists (e.g., Plans: TBD)
  if (!phaseInfo) {
    const roadmapPhase = getRoadmapPhaseInternal(cwd, phase);
    if (roadmapPhase?.found) {
      const phaseName = roadmapPhase.phase_name;
      phaseInfo = {
        found: true,
        directory: null,
        phase_number: roadmapPhase.phase_number,
        phase_name: phaseName,
        phase_slug: phaseName ? phaseName.toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/^-+|-+$/g, '') : null,
        plans: [],
        summaries: [],
        incomplete_plans: [],
        has_research: false,
        has_context: false,
        has_verification: false,
      };
    }
  }

  const result = {
    // Config
    commit_docs: config.commit_docs,
    brave_search: config.brave_search,

    // Phase info
    phase_found: !!phaseInfo,
    phase_dir: phaseInfo?.directory || null,
    phase_number: phaseInfo?.phase_number || null,
    phase_name: phaseInfo?.phase_name || null,
    phase_slug: phaseInfo?.phase_slug || null,
    padded_phase: phaseInfo?.phase_number ? normalizePhaseName(phaseInfo.phase_number) : null,

    // Existing artifacts
    has_research: phaseInfo?.has_research || false,
    has_context: phaseInfo?.has_context || false,
    has_plans: (phaseInfo?.plans?.length || 0) > 0,
    has_verification: phaseInfo?.has_verification || false,
    plan_count: phaseInfo?.plans?.length || 0,

    // File existence
    roadmap_exists: pathExistsInternal(cwd, '.planning/ROADMAP.md'),
    planning_exists: pathExistsInternal(cwd, '.planning'),

    // File paths
    state_path: '.planning/STATE.md',
    roadmap_path: '.planning/ROADMAP.md',
    requirements_path: '.planning/REQUIREMENTS.md',
  };

  if (phaseInfo?.directory) {
    const phaseDirFull = path.join(cwd, phaseInfo.directory);
    try {
      const files = fs.readdirSync(phaseDirFull);
      const contextFile = files.find(f => f.endsWith('-CONTEXT.md') || f === 'CONTEXT.md');
      if (contextFile) {
        result.context_path = toPosixPath(path.join(phaseInfo.directory, contextFile));
      }
      const researchFile = files.find(f => f.endsWith('-RESEARCH.md') || f === 'RESEARCH.md');
      if (researchFile) {
        result.research_path = toPosixPath(path.join(phaseInfo.directory, researchFile));
      }
      const verificationFile = files.find(f => f.endsWith('-VERIFICATION.md') || f === 'VERIFICATION.md');
      if (verificationFile) {
        result.verification_path = toPosixPath(path.join(phaseInfo.directory, verificationFile));
      }
      const uatFile = files.find(f => f.endsWith('-UAT.md') || f === 'UAT.md');
      if (uatFile) {
        result.uat_path = toPosixPath(path.join(phaseInfo.directory, uatFile));
      }
    } catch {}
  }

  output(result, raw);
}

function cmdInitTodos(cwd, area, raw) {
  const config = loadConfig(cwd);
  const now = new Date();

  // List todos (reuse existing logic)
  const pendingDir = path.join(cwd, '.planning', 'todos', 'pending');
  let count = 0;
  const todos = [];

  try {
    const files = fs.readdirSync(pendingDir).filter(f => f.endsWith('.md'));
    for (const file of files) {
      try {
        const content = fs.readFileSync(path.join(pendingDir, file), 'utf-8');
        const createdMatch = content.match(/^created:\s*(.+)$/m);
        const titleMatch = content.match(/^title:\s*(.+)$/m);
        const areaMatch = content.match(/^area:\s*(.+)$/m);
        const todoArea = areaMatch ? areaMatch[1].trim() : 'general';

        if (area && todoArea !== area) continue;

        count++;
        todos.push({
          file,
          created: createdMatch ? createdMatch[1].trim() : 'unknown',
          title: titleMatch ? titleMatch[1].trim() : 'Untitled',
          area: todoArea,
          path: '.planning/todos/pending/' + file,
        });
      } catch {}
    }
  } catch {}

  const result = {
    // Config
    commit_docs: config.commit_docs,

    // Timestamps
    date: now.toISOString().split('T')[0],
    timestamp: now.toISOString(),

    // Todo inventory
    todo_count: count,
    todos,
    area_filter: area || null,

    // Paths
    pending_dir: '.planning/todos/pending',
    completed_dir: '.planning/todos/completed',

    // File existence
    planning_exists: pathExistsInternal(cwd, '.planning'),
    todos_dir_exists: pathExistsInternal(cwd, '.planning/todos'),
    pending_dir_exists: pathExistsInternal(cwd, '.planning/todos/pending'),
  };

  output(result, raw);
}
|
||||
function cmdInitMilestoneOp(cwd, raw) {
|
||||
const config = loadConfig(cwd);
|
||||
const milestone = getMilestoneInfo(cwd);
|
||||
|
||||
// Count phases
|
||||
let phaseCount = 0;
|
||||
let completedPhases = 0;
|
||||
const phasesDir = path.join(cwd, '.planning', 'phases');
|
||||
try {
|
||||
const entries = fs.readdirSync(phasesDir, { withFileTypes: true });
|
||||
const dirs = entries.filter(e => e.isDirectory()).map(e => e.name);
|
||||
phaseCount = dirs.length;
|
||||
|
||||
// Count phases with summaries (completed)
|
||||
for (const dir of dirs) {
|
||||
try {
|
||||
const phaseFiles = fs.readdirSync(path.join(phasesDir, dir));
|
||||
const hasSummary = phaseFiles.some(f => f.endsWith('-SUMMARY.md') || f === 'SUMMARY.md');
|
||||
if (hasSummary) completedPhases++;
|
||||
} catch {}
|
||||
}
|
||||
} catch {}
|
||||
|
||||
// Check archive
|
||||
const archiveDir = path.join(cwd, '.planning', 'archive');
|
||||
let archivedMilestones = [];
|
||||
try {
|
||||
archivedMilestones = fs.readdirSync(archiveDir, { withFileTypes: true })
|
||||
.filter(e => e.isDirectory())
|
||||
.map(e => e.name);
|
||||
} catch {}
|
||||
|
||||
const result = {
|
||||
// Config
|
||||
commit_docs: config.commit_docs,
|
||||
|
||||
// Current milestone
|
||||
milestone_version: milestone.version,
|
||||
milestone_name: milestone.name,
|
||||
milestone_slug: generateSlugInternal(milestone.name),
|
||||
|
||||
// Phase counts
|
||||
phase_count: phaseCount,
|
||||
completed_phases: completedPhases,
|
||||
all_phases_complete: phaseCount > 0 && phaseCount === completedPhases,
|
||||
|
||||
// Archive
|
||||
archived_milestones: archivedMilestones,
|
||||
archive_count: archivedMilestones.length,
|
||||
|
||||
// File existence
|
||||
project_exists: pathExistsInternal(cwd, '.planning/PROJECT.md'),
|
||||
roadmap_exists: pathExistsInternal(cwd, '.planning/ROADMAP.md'),
|
||||
state_exists: pathExistsInternal(cwd, '.planning/STATE.md'),
|
||||
archive_exists: pathExistsInternal(cwd, '.planning/archive'),
|
||||
phases_dir_exists: pathExistsInternal(cwd, '.planning/phases'),
|
||||
};
|
||||
|
||||
output(result, raw);
|
||||
}
|
||||
|
||||
function cmdInitMapCodebase(cwd, raw) {
|
||||
const config = loadConfig(cwd);
|
||||
|
||||
// Check for existing codebase maps
|
||||
const codebaseDir = path.join(cwd, '.planning', 'codebase');
|
||||
let existingMaps = [];
|
||||
try {
|
||||
existingMaps = fs.readdirSync(codebaseDir).filter(f => f.endsWith('.md'));
|
||||
} catch {}
|
||||
|
||||
const result = {
|
||||
// Models
|
||||
mapper_model: resolveModelInternal(cwd, 'gsd-codebase-mapper'),
|
||||
|
||||
// Config
|
||||
commit_docs: config.commit_docs,
|
||||
search_gitignored: config.search_gitignored,
|
||||
parallelization: config.parallelization,
|
||||
|
||||
// Paths
|
||||
codebase_dir: '.planning/codebase',
|
||||
|
||||
// Existing maps
|
||||
existing_maps: existingMaps,
|
||||
has_maps: existingMaps.length > 0,
|
||||
|
||||
// File existence
|
||||
planning_exists: pathExistsInternal(cwd, '.planning'),
|
||||
codebase_dir_exists: pathExistsInternal(cwd, '.planning/codebase'),
|
||||
};
|
||||
|
||||
output(result, raw);
|
||||
}
|
||||
|
||||
function cmdInitProgress(cwd, raw) {
|
||||
const config = loadConfig(cwd);
|
||||
const milestone = getMilestoneInfo(cwd);
|
||||
|
||||
// Analyze phases — filter to current milestone and include ROADMAP-only phases
|
||||
const phasesDir = path.join(cwd, '.planning', 'phases');
|
||||
const phases = [];
|
||||
let currentPhase = null;
|
||||
let nextPhase = null;
|
||||
|
||||
// Build set of phases defined in ROADMAP for the current milestone
|
||||
const roadmapPhaseNums = new Set();
|
||||
const roadmapPhaseNames = new Map();
|
||||
try {
|
||||
const roadmapContent = extractCurrentMilestone(
|
||||
fs.readFileSync(path.join(cwd, '.planning', 'ROADMAP.md'), 'utf-8'), cwd
|
||||
);
|
||||
const headingPattern = /#{2,4}\s*Phase\s+(\d+[A-Z]?(?:\.\d+)*)\s*:\s*([^\n]+)/gi;
|
||||
let hm;
|
||||
while ((hm = headingPattern.exec(roadmapContent)) !== null) {
|
||||
roadmapPhaseNums.add(hm[1]);
|
||||
roadmapPhaseNames.set(hm[1], hm[2].replace(/\(INSERTED\)/i, '').trim());
|
||||
}
|
||||
} catch {}
|
||||
|
||||
const isDirInMilestone = getMilestonePhaseFilter(cwd);
|
||||
const seenPhaseNums = new Set();
|
||||
|
||||
try {
|
||||
const entries = fs.readdirSync(phasesDir, { withFileTypes: true });
|
||||
const dirs = entries.filter(e => e.isDirectory()).map(e => e.name)
|
||||
.filter(isDirInMilestone)
|
||||
.sort((a, b) => {
|
||||
const pa = a.match(/^(\d+[A-Z]?(?:\.\d+)*)/i);
|
||||
const pb = b.match(/^(\d+[A-Z]?(?:\.\d+)*)/i);
|
||||
if (!pa || !pb) return a.localeCompare(b);
|
||||
return parseInt(pa[1], 10) - parseInt(pb[1], 10);
|
||||
});
|
||||
|
||||
for (const dir of dirs) {
|
||||
const match = dir.match(/^(\d+[A-Z]?(?:\.\d+)*)-?(.*)/i);
|
||||
const phaseNumber = match ? match[1] : dir;
|
||||
const phaseName = match && match[2] ? match[2] : null;
|
||||
seenPhaseNums.add(phaseNumber.replace(/^0+/, '') || '0');
|
||||
|
||||
const phasePath = path.join(phasesDir, dir);
|
||||
const phaseFiles = fs.readdirSync(phasePath);
|
||||
|
||||
const plans = phaseFiles.filter(f => f.endsWith('-PLAN.md') || f === 'PLAN.md');
|
||||
const summaries = phaseFiles.filter(f => f.endsWith('-SUMMARY.md') || f === 'SUMMARY.md');
|
||||
const hasResearch = phaseFiles.some(f => f.endsWith('-RESEARCH.md') || f === 'RESEARCH.md');
|
||||
|
||||
const status = summaries.length >= plans.length && plans.length > 0 ? 'complete' :
|
||||
plans.length > 0 ? 'in_progress' :
|
||||
hasResearch ? 'researched' : 'pending';
|
||||
|
||||
const phaseInfo = {
|
||||
number: phaseNumber,
|
||||
name: phaseName,
|
||||
directory: '.planning/phases/' + dir,
|
||||
status,
|
||||
plan_count: plans.length,
|
||||
summary_count: summaries.length,
|
||||
has_research: hasResearch,
|
||||
};
|
||||
|
||||
phases.push(phaseInfo);
|
||||
|
||||
// Find current (first incomplete with plans) and next (first pending)
|
||||
if (!currentPhase && (status === 'in_progress' || status === 'researched')) {
|
||||
currentPhase = phaseInfo;
|
||||
}
|
||||
if (!nextPhase && status === 'pending') {
|
||||
nextPhase = phaseInfo;
|
||||
}
|
||||
}
|
||||
} catch {}
|
||||
|
||||
// Add phases defined in ROADMAP but not yet scaffolded to disk
|
||||
for (const [num, name] of roadmapPhaseNames) {
|
||||
const stripped = num.replace(/^0+/, '') || '0';
|
||||
if (!seenPhaseNums.has(stripped)) {
|
||||
const phaseInfo = {
|
||||
number: num,
|
||||
name: name.toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/^-+|-+$/g, ''),
|
||||
directory: null,
|
||||
status: 'not_started',
|
||||
plan_count: 0,
|
||||
summary_count: 0,
|
||||
has_research: false,
|
||||
};
|
||||
phases.push(phaseInfo);
|
||||
if (!nextPhase && !currentPhase) {
|
||||
nextPhase = phaseInfo;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Re-sort phases by number after adding ROADMAP-only phases
|
||||
phases.sort((a, b) => parseInt(a.number, 10) - parseInt(b.number, 10));
|
||||
|
||||
// Check for paused work
|
||||
let pausedAt = null;
|
||||
try {
|
||||
const state = fs.readFileSync(path.join(cwd, '.planning', 'STATE.md'), 'utf-8');
|
||||
const pauseMatch = state.match(/\*\*Paused At:\*\*\s*(.+)/);
|
||||
if (pauseMatch) pausedAt = pauseMatch[1].trim();
|
||||
} catch {}
|
||||
|
||||
const result = {
|
||||
// Models
|
||||
executor_model: resolveModelInternal(cwd, 'gsd-executor'),
|
||||
planner_model: resolveModelInternal(cwd, 'gsd-planner'),
|
||||
|
||||
// Config
|
||||
commit_docs: config.commit_docs,
|
||||
|
||||
// Milestone
|
||||
milestone_version: milestone.version,
|
||||
milestone_name: milestone.name,
|
||||
|
||||
// Phase overview
|
||||
phases,
|
||||
phase_count: phases.length,
|
||||
completed_count: phases.filter(p => p.status === 'complete').length,
|
||||
in_progress_count: phases.filter(p => p.status === 'in_progress').length,
|
||||
|
||||
// Current state
|
||||
current_phase: currentPhase,
|
||||
next_phase: nextPhase,
|
||||
paused_at: pausedAt,
|
||||
has_work_in_progress: !!currentPhase,
|
||||
|
||||
// File existence
|
||||
project_exists: pathExistsInternal(cwd, '.planning/PROJECT.md'),
|
||||
roadmap_exists: pathExistsInternal(cwd, '.planning/ROADMAP.md'),
|
||||
state_exists: pathExistsInternal(cwd, '.planning/STATE.md'),
|
||||
// File paths
|
||||
state_path: '.planning/STATE.md',
|
||||
roadmap_path: '.planning/ROADMAP.md',
|
||||
project_path: '.planning/PROJECT.md',
|
||||
config_path: '.planning/config.json',
|
||||
};
|
||||
|
||||
output(result, raw);
|
||||
}
|
||||
|
||||
module.exports = {
|
||||
cmdInitExecutePhase,
|
||||
cmdInitPlanPhase,
|
||||
cmdInitNewProject,
|
||||
cmdInitNewMilestone,
|
||||
cmdInitQuick,
|
||||
cmdInitResume,
|
||||
cmdInitVerifyWork,
|
||||
cmdInitPhaseOp,
|
||||
cmdInitTodos,
|
||||
cmdInitMilestoneOp,
|
||||
cmdInitMapCodebase,
|
||||
cmdInitProgress,
|
||||
};
|
||||
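The phase-status rule inside `cmdInitProgress` above (complete / in_progress / researched / pending, derived purely from file names) can be exercised in isolation. A minimal sketch; the standalone `phaseStatus` helper is hypothetical and restates the same ternary, it is not part of gsd-tools:

```javascript
// Hypothetical restatement of the status rule used by cmdInitProgress:
// complete when every plan has a summary, in_progress when plans exist,
// researched when only research exists, pending otherwise.
function phaseStatus(files) {
  const plans = files.filter(f => f.endsWith('-PLAN.md') || f === 'PLAN.md');
  const summaries = files.filter(f => f.endsWith('-SUMMARY.md') || f === 'SUMMARY.md');
  const hasResearch = files.some(f => f.endsWith('-RESEARCH.md') || f === 'RESEARCH.md');
  return summaries.length >= plans.length && plans.length > 0 ? 'complete'
    : plans.length > 0 ? 'in_progress'
    : hasResearch ? 'researched' : 'pending';
}

console.log(phaseStatus(['01-PLAN.md', '01-SUMMARY.md']));               // complete
console.log(phaseStatus(['01-PLAN.md', '02-PLAN.md', '01-SUMMARY.md'])); // in_progress
console.log(phaseStatus(['03-RESEARCH.md']));                            // researched
console.log(phaseStatus([]));                                            // pending
```

Note that a phase with two plans but only one summary is `in_progress`, not `complete`: the rule requires summaries to cover every plan.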
250
get-shit-done/bin/lib/milestone.cjs
Normal file
@@ -0,0 +1,250 @@
/**
 * Milestone — Milestone and requirements lifecycle operations
 */

const fs = require('fs');
const path = require('path');
const { escapeRegex, getMilestonePhaseFilter, normalizeMd, output, error } = require('./core.cjs');
const { extractFrontmatter } = require('./frontmatter.cjs');
const { writeStateMd } = require('./state.cjs');

function cmdRequirementsMarkComplete(cwd, reqIdsRaw, raw) {
  if (!reqIdsRaw || reqIdsRaw.length === 0) {
    error('requirement IDs required. Usage: requirements mark-complete REQ-01,REQ-02 or REQ-01 REQ-02');
  }

  // Accept comma-separated, space-separated, or bracket-wrapped: [REQ-01, REQ-02]
  const reqIds = reqIdsRaw
    .join(' ')
    .replace(/[\[\]]/g, '')
    .split(/[,\s]+/)
    .map(r => r.trim())
    .filter(Boolean);

  if (reqIds.length === 0) {
    error('no valid requirement IDs found');
  }

  const reqPath = path.join(cwd, '.planning', 'REQUIREMENTS.md');
  if (!fs.existsSync(reqPath)) {
    output({ updated: false, reason: 'REQUIREMENTS.md not found', ids: reqIds }, raw, 'no requirements file');
    return;
  }

  let reqContent = fs.readFileSync(reqPath, 'utf-8');
  const updated = [];
  const alreadyComplete = [];
  const notFound = [];

  for (const reqId of reqIds) {
    let found = false;
    const reqEscaped = escapeRegex(reqId);

    // Update checkbox: - [ ] **REQ-ID** → - [x] **REQ-ID**
    const checkboxPattern = new RegExp(`(-\\s*\\[)[ ](\\]\\s*\\*\\*${reqEscaped}\\*\\*)`, 'gi');
    if (checkboxPattern.test(reqContent)) {
      reqContent = reqContent.replace(checkboxPattern, '$1x$2');
      found = true;
    }

    // Update traceability table: | REQ-ID | Phase N | Pending | → | REQ-ID | Phase N | Complete |
    const tablePattern = new RegExp(`(\\|\\s*${reqEscaped}\\s*\\|[^|]+\\|)\\s*Pending\\s*(\\|)`, 'gi');
    if (tablePattern.test(reqContent)) {
      // Re-read since test() advances lastIndex for global regex
      reqContent = reqContent.replace(
        new RegExp(`(\\|\\s*${reqEscaped}\\s*\\|[^|]+\\|)\\s*Pending\\s*(\\|)`, 'gi'),
        '$1 Complete $2'
      );
      found = true;
    }

    if (found) {
      updated.push(reqId);
    } else {
      // Check if already complete before declaring not_found
      const doneCheckbox = new RegExp(`-\\s*\\[x\\]\\s*\\*\\*${reqEscaped}\\*\\*`, 'gi');
      const doneTable = new RegExp(`\\|\\s*${reqEscaped}\\s*\\|[^|]+\\|\\s*Complete\\s*\\|`, 'gi');
      if (doneCheckbox.test(reqContent) || doneTable.test(reqContent)) {
        alreadyComplete.push(reqId);
      } else {
        notFound.push(reqId);
      }
    }
  }

  if (updated.length > 0) {
    fs.writeFileSync(reqPath, reqContent, 'utf-8');
  }

  output({
    updated: updated.length > 0,
    marked_complete: updated,
    already_complete: alreadyComplete,
    not_found: notFound,
    total: reqIds.length,
  }, raw, `${updated.length}/${reqIds.length} requirements marked complete`);
}

function cmdMilestoneComplete(cwd, version, options, raw) {
  if (!version) {
    error('version required for milestone complete (e.g., v1.0)');
  }

  const roadmapPath = path.join(cwd, '.planning', 'ROADMAP.md');
  const reqPath = path.join(cwd, '.planning', 'REQUIREMENTS.md');
  const statePath = path.join(cwd, '.planning', 'STATE.md');
  const milestonesPath = path.join(cwd, '.planning', 'MILESTONES.md');
  const archiveDir = path.join(cwd, '.planning', 'milestones');
  const phasesDir = path.join(cwd, '.planning', 'phases');
  const today = new Date().toISOString().split('T')[0];
  const milestoneName = options.name || version;

  // Ensure archive directory exists
  fs.mkdirSync(archiveDir, { recursive: true });

  // Scope stats and accomplishments to only the phases belonging to the
  // current milestone's ROADMAP. Uses the shared filter from core.cjs
  // (same logic used by cmdPhasesList and other callers).
  const isDirInMilestone = getMilestonePhaseFilter(cwd);

  // Gather stats from phases (scoped to current milestone only)
  let phaseCount = 0;
  let totalPlans = 0;
  let totalTasks = 0;
  const accomplishments = [];

  try {
    const entries = fs.readdirSync(phasesDir, { withFileTypes: true });
    const dirs = entries.filter(e => e.isDirectory()).map(e => e.name).sort();

    for (const dir of dirs) {
      if (!isDirInMilestone(dir)) continue;

      phaseCount++;
      const phaseFiles = fs.readdirSync(path.join(phasesDir, dir));
      const plans = phaseFiles.filter(f => f.endsWith('-PLAN.md') || f === 'PLAN.md');
      const summaries = phaseFiles.filter(f => f.endsWith('-SUMMARY.md') || f === 'SUMMARY.md');
      totalPlans += plans.length;

      // Extract one-liners from summaries
      for (const s of summaries) {
        try {
          const content = fs.readFileSync(path.join(phasesDir, dir, s), 'utf-8');
          const fm = extractFrontmatter(content);
          if (fm['one-liner']) {
            accomplishments.push(fm['one-liner']);
          }
          // Count tasks
          const taskMatches = content.match(/##\s*Task\s*\d+/gi) || [];
          totalTasks += taskMatches.length;
        } catch {}
      }
    }
  } catch {}

  // Archive ROADMAP.md
  if (fs.existsSync(roadmapPath)) {
    const roadmapContent = fs.readFileSync(roadmapPath, 'utf-8');
    fs.writeFileSync(path.join(archiveDir, `${version}-ROADMAP.md`), roadmapContent, 'utf-8');
  }

  // Archive REQUIREMENTS.md
  if (fs.existsSync(reqPath)) {
    const reqContent = fs.readFileSync(reqPath, 'utf-8');
    const archiveHeader = `# Requirements Archive: ${version} ${milestoneName}\n\n**Archived:** ${today}\n**Status:** SHIPPED\n\nFor current requirements, see \`.planning/REQUIREMENTS.md\`.\n\n---\n\n`;
    fs.writeFileSync(path.join(archiveDir, `${version}-REQUIREMENTS.md`), archiveHeader + reqContent, 'utf-8');
  }

  // Archive audit file if exists
  const auditFile = path.join(cwd, '.planning', `${version}-MILESTONE-AUDIT.md`);
  if (fs.existsSync(auditFile)) {
    fs.renameSync(auditFile, path.join(archiveDir, `${version}-MILESTONE-AUDIT.md`));
  }

  // Create/append MILESTONES.md entry
  const accomplishmentsList = accomplishments.map(a => `- ${a}`).join('\n');
  const milestoneEntry = `## ${version} ${milestoneName} (Shipped: ${today})\n\n**Phases completed:** ${phaseCount} phases, ${totalPlans} plans, ${totalTasks} tasks\n\n**Key accomplishments:**\n${accomplishmentsList || '- (none recorded)'}\n\n---\n\n`;

  if (fs.existsSync(milestonesPath)) {
    const existing = fs.readFileSync(milestonesPath, 'utf-8');
    if (!existing.trim()) {
      // Empty file — treat like new
      fs.writeFileSync(milestonesPath, normalizeMd(`# Milestones\n\n${milestoneEntry}`), 'utf-8');
    } else {
      // Insert after the header line(s) for reverse chronological order (newest first)
      const headerMatch = existing.match(/^(#{1,3}\s+[^\n]*\n\n?)/);
      if (headerMatch) {
        const header = headerMatch[1];
        const rest = existing.slice(header.length);
        fs.writeFileSync(milestonesPath, normalizeMd(header + milestoneEntry + rest), 'utf-8');
      } else {
        // No recognizable header — prepend the entry
        fs.writeFileSync(milestonesPath, normalizeMd(milestoneEntry + existing), 'utf-8');
      }
    }
  } else {
    fs.writeFileSync(milestonesPath, normalizeMd(`# Milestones\n\n${milestoneEntry}`), 'utf-8');
  }

  // Update STATE.md
  if (fs.existsSync(statePath)) {
    let stateContent = fs.readFileSync(statePath, 'utf-8');
    stateContent = stateContent.replace(
      /(\*\*Status:\*\*\s*).*/,
      `$1${version} milestone complete`
    );
    stateContent = stateContent.replace(
      /(\*\*Last Activity:\*\*\s*).*/,
      `$1${today}`
    );
    stateContent = stateContent.replace(
      /(\*\*Last Activity Description:\*\*\s*).*/,
      `$1${version} milestone completed and archived`
    );
    writeStateMd(statePath, stateContent, cwd);
  }

  // Archive phase directories if requested
  let phasesArchived = false;
  if (options.archivePhases) {
    try {
      const phaseArchiveDir = path.join(archiveDir, `${version}-phases`);
      fs.mkdirSync(phaseArchiveDir, { recursive: true });

      const phaseEntries = fs.readdirSync(phasesDir, { withFileTypes: true });
      const phaseDirNames = phaseEntries.filter(e => e.isDirectory()).map(e => e.name);
      let archivedCount = 0;
      for (const dir of phaseDirNames) {
        if (!isDirInMilestone(dir)) continue;
        fs.renameSync(path.join(phasesDir, dir), path.join(phaseArchiveDir, dir));
        archivedCount++;
      }
      phasesArchived = archivedCount > 0;
    } catch {}
  }

  const result = {
    version,
    name: milestoneName,
    date: today,
    phases: phaseCount,
    plans: totalPlans,
    tasks: totalTasks,
    accomplishments,
    archived: {
      roadmap: fs.existsSync(path.join(archiveDir, `${version}-ROADMAP.md`)),
      requirements: fs.existsSync(path.join(archiveDir, `${version}-REQUIREMENTS.md`)),
      audit: fs.existsSync(path.join(archiveDir, `${version}-MILESTONE-AUDIT.md`)),
      phases: phasesArchived,
    },
    milestones_updated: true,
    state_updated: fs.existsSync(statePath),
  };

  output(result, raw);
}

module.exports = {
  cmdRequirementsMarkComplete,
  cmdMilestoneComplete,
};
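The checkbox rewrite at the heart of `cmdRequirementsMarkComplete` above can be demonstrated standalone. A minimal sketch assuming the same `- [ ] **REQ-ID**` markdown convention; `markComplete` is a hypothetical helper with `escapeRegex` inlined, not the module's API:

```javascript
// Inlined equivalent of core.cjs escapeRegex: escape regex metacharacters in an ID.
const escapeRegex = (s) => s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');

// Flip "- [ ] **REQ-ID**" to "- [x] **REQ-ID**", leaving other requirements untouched.
function markComplete(md, reqId) {
  const id = escapeRegex(reqId);
  const checkbox = new RegExp(`(-\\s*\\[)[ ](\\]\\s*\\*\\*${id}\\*\\*)`, 'gi');
  return md.replace(checkbox, '$1x$2');
}

const doc = '- [ ] **REQ-01** Login works\n- [ ] **REQ-02** Logout works';
console.log(markComplete(doc, 'REQ-01'));
// - [x] **REQ-01** Login works
// - [ ] **REQ-02** Logout works
```

Capturing the text around the space (groups `$1` and `$2`) lets one `replace` insert the `x` without reassembling the rest of the line.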
68
get-shit-done/bin/lib/model-profiles.cjs
Normal file
@@ -0,0 +1,68 @@
/**
 * Mapping of GSD agent to model for each profile.
 *
 * Should be kept in sync with the profiles table in `get-shit-done/references/model-profiles.md`.
 * It may be worth making this the single source of truth at some point and removing the markdown
 * reference table in favor of programmatically determining the model to use for an agent (which
 * would be faster, use fewer tokens, and be less error-prone).
 */
const MODEL_PROFILES = {
  'gsd-planner': { quality: 'opus', balanced: 'opus', budget: 'sonnet' },
  'gsd-roadmapper': { quality: 'opus', balanced: 'sonnet', budget: 'sonnet' },
  'gsd-executor': { quality: 'opus', balanced: 'sonnet', budget: 'sonnet' },
  'gsd-phase-researcher': { quality: 'opus', balanced: 'sonnet', budget: 'haiku' },
  'gsd-project-researcher': { quality: 'opus', balanced: 'sonnet', budget: 'haiku' },
  'gsd-research-synthesizer': { quality: 'sonnet', balanced: 'sonnet', budget: 'haiku' },
  'gsd-debugger': { quality: 'opus', balanced: 'sonnet', budget: 'sonnet' },
  'gsd-codebase-mapper': { quality: 'sonnet', balanced: 'haiku', budget: 'haiku' },
  'gsd-verifier': { quality: 'sonnet', balanced: 'sonnet', budget: 'haiku' },
  'gsd-plan-checker': { quality: 'sonnet', balanced: 'sonnet', budget: 'haiku' },
  'gsd-integration-checker': { quality: 'sonnet', balanced: 'sonnet', budget: 'haiku' },
  'gsd-nyquist-auditor': { quality: 'sonnet', balanced: 'sonnet', budget: 'haiku' },
  'gsd-ui-researcher': { quality: 'opus', balanced: 'sonnet', budget: 'haiku' },
  'gsd-ui-checker': { quality: 'sonnet', balanced: 'sonnet', budget: 'haiku' },
  'gsd-ui-auditor': { quality: 'sonnet', balanced: 'sonnet', budget: 'haiku' },
};
const VALID_PROFILES = Object.keys(MODEL_PROFILES['gsd-planner']);

/**
 * Formats the agent-to-model mapping as a human-readable table string.
 *
 * @param {Object<string, string>} agentToModelMap - A mapping from agent to model
 * @returns {string} A formatted table string
 */
function formatAgentToModelMapAsTable(agentToModelMap) {
  const agentWidth = Math.max('Agent'.length, ...Object.keys(agentToModelMap).map((a) => a.length));
  const modelWidth = Math.max(
    'Model'.length,
    ...Object.values(agentToModelMap).map((m) => m.length)
  );
  const sep = '─'.repeat(agentWidth + 2) + '┼' + '─'.repeat(modelWidth + 2);
  const header = ' ' + 'Agent'.padEnd(agentWidth) + ' │ ' + 'Model'.padEnd(modelWidth);
  let agentToModelTable = header + '\n' + sep + '\n';
  for (const [agent, model] of Object.entries(agentToModelMap)) {
    agentToModelTable += ' ' + agent.padEnd(agentWidth) + ' │ ' + model.padEnd(modelWidth) + '\n';
  }
  return agentToModelTable;
}

/**
 * Returns a mapping from agent to model for the given model profile.
 *
 * @param {string} normalizedProfile - The normalized (lowercase and trimmed) profile name
 * @returns {Object<string, string>} A mapping from agent to model for the given profile
 */
function getAgentToModelMapForProfile(normalizedProfile) {
  const agentToModelMap = {};
  for (const [agent, profileToModelMap] of Object.entries(MODEL_PROFILES)) {
    agentToModelMap[agent] = profileToModelMap[normalizedProfile];
  }
  return agentToModelMap;
}

module.exports = {
  MODEL_PROFILES,
  VALID_PROFILES,
  formatAgentToModelMapAsTable,
  getAgentToModelMapForProfile,
};
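The profile-to-model lookup in `model-profiles.cjs` above is a plain per-agent dictionary walk. A minimal self-contained sketch using a hypothetical two-agent subset of the table (`PROFILES` and `mapForProfile` are illustrative names, not the module's exports):

```javascript
// Two-agent subset of the MODEL_PROFILES shape: agent -> { profile -> model }.
const PROFILES = {
  'gsd-planner': { quality: 'opus', balanced: 'opus', budget: 'sonnet' },
  'gsd-verifier': { quality: 'sonnet', balanced: 'sonnet', budget: 'haiku' },
};

// Invert one axis: for a chosen profile, return agent -> model,
// mirroring what getAgentToModelMapForProfile does over the full table.
function mapForProfile(profile) {
  const out = {};
  for (const [agent, models] of Object.entries(PROFILES)) {
    out[agent] = models[profile];
  }
  return out;
}

console.log(mapForProfile('budget')); // { 'gsd-planner': 'sonnet', 'gsd-verifier': 'haiku' }
```

The resulting flat map is what `formatAgentToModelMapAsTable` renders; keeping the profile dimension out of that function keeps the formatter a pure string concern.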
939
get-shit-done/bin/lib/phase.cjs
Normal file
@@ -0,0 +1,939 @@
|
||||
/**
|
||||
* Phase — Phase CRUD, query, and lifecycle operations
|
||||
*/
|
||||
|
||||
const fs = require('fs');
|
||||
const path = require('path');
|
||||
const { escapeRegex, normalizePhaseName, comparePhaseNum, findPhaseInternal, getArchivedPhaseDirs, generateSlugInternal, getMilestonePhaseFilter, stripShippedMilestones, extractCurrentMilestone, replaceInCurrentMilestone, toPosixPath, output, error } = require('./core.cjs');
|
||||
const { extractFrontmatter } = require('./frontmatter.cjs');
|
||||
const { writeStateMd } = require('./state.cjs');
|
||||
|
||||
function cmdPhasesList(cwd, options, raw) {
|
||||
const phasesDir = path.join(cwd, '.planning', 'phases');
|
||||
const { type, phase, includeArchived } = options;
|
||||
|
||||
// If no phases directory, return empty
|
||||
if (!fs.existsSync(phasesDir)) {
|
||||
if (type) {
|
||||
output({ files: [], count: 0 }, raw, '');
|
||||
} else {
|
||||
output({ directories: [], count: 0 }, raw, '');
|
||||
}
|
||||
return;
|
||||
}
|
||||
|
||||
try {
|
||||
// Get all phase directories
|
||||
const entries = fs.readdirSync(phasesDir, { withFileTypes: true });
|
||||
let dirs = entries.filter(e => e.isDirectory()).map(e => e.name);
|
||||
|
||||
// Include archived phases if requested
|
||||
if (includeArchived) {
|
||||
const archived = getArchivedPhaseDirs(cwd);
|
||||
for (const a of archived) {
|
||||
dirs.push(`${a.name} [${a.milestone}]`);
|
||||
}
|
||||
}
|
||||
|
||||
// Sort numerically (handles integers, decimals, letter-suffix, hybrids)
|
||||
dirs.sort((a, b) => comparePhaseNum(a, b));
|
||||
|
||||
// If filtering by phase number
|
||||
if (phase) {
|
||||
const normalized = normalizePhaseName(phase);
|
||||
const match = dirs.find(d => d.startsWith(normalized));
|
||||
if (!match) {
|
||||
output({ files: [], count: 0, phase_dir: null, error: 'Phase not found' }, raw, '');
|
||||
return;
|
||||
}
|
||||
dirs = [match];
|
||||
}
|
||||
|
||||
// If listing files of a specific type
|
||||
if (type) {
|
||||
const files = [];
|
||||
for (const dir of dirs) {
|
||||
const dirPath = path.join(phasesDir, dir);
|
||||
const dirFiles = fs.readdirSync(dirPath);
|
||||
|
||||
let filtered;
|
||||
if (type === 'plans') {
|
||||
filtered = dirFiles.filter(f => f.endsWith('-PLAN.md') || f === 'PLAN.md');
|
||||
} else if (type === 'summaries') {
|
||||
filtered = dirFiles.filter(f => f.endsWith('-SUMMARY.md') || f === 'SUMMARY.md');
|
||||
} else {
|
||||
filtered = dirFiles;
|
||||
}
|
||||
|
||||
files.push(...filtered.sort());
|
||||
}
|
||||
|
||||
const result = {
|
||||
files,
|
||||
count: files.length,
|
||||
phase_dir: phase ? dirs[0].replace(/^\d+(?:\.\d+)*-?/, '') : null,
|
||||
};
|
||||
output(result, raw, files.join('\n'));
|
||||
return;
|
||||
}
|
||||
|
||||
// Default: list directories
|
||||
output({ directories: dirs, count: dirs.length }, raw, dirs.join('\n'));
|
||||
} catch (e) {
|
||||
error('Failed to list phases: ' + e.message);
|
||||
}
|
||||
}
|
||||
|
||||
function cmdPhaseNextDecimal(cwd, basePhase, raw) {
|
||||
const phasesDir = path.join(cwd, '.planning', 'phases');
|
||||
const normalized = normalizePhaseName(basePhase);
|
||||
|
||||
// Check if phases directory exists
|
||||
if (!fs.existsSync(phasesDir)) {
|
||||
output(
|
||||
{
|
||||
found: false,
|
||||
base_phase: normalized,
|
||||
next: `${normalized}.1`,
|
||||
existing: [],
|
||||
},
|
||||
raw,
|
||||
`${normalized}.1`
|
||||
);
|
||||
return;
|
||||
}
|
||||
|
||||
try {
|
||||
const entries = fs.readdirSync(phasesDir, { withFileTypes: true });
|
||||
const dirs = entries.filter(e => e.isDirectory()).map(e => e.name);
|
||||
|
||||
// Check if base phase exists
|
||||
const baseExists = dirs.some(d => d.startsWith(normalized + '-') || d === normalized);
|
||||
|
||||
// Find existing decimal phases for this base
|
||||
const decimalPattern = new RegExp(`^${normalized}\\.(\\d+)`);
|
||||
const existingDecimals = [];
|
||||
|
||||
for (const dir of dirs) {
|
||||
const match = dir.match(decimalPattern);
|
||||
if (match) {
|
||||
existingDecimals.push(`${normalized}.${match[1]}`);
|
||||
}
|
||||
}
|
||||
|
||||
// Sort numerically
|
||||
existingDecimals.sort((a, b) => comparePhaseNum(a, b));
|
||||
|
||||
// Calculate next decimal
|
||||
let nextDecimal;
|
||||
if (existingDecimals.length === 0) {
|
||||
nextDecimal = `${normalized}.1`;
|
||||
} else {
|
||||
const lastDecimal = existingDecimals[existingDecimals.length - 1];
|
||||
const lastNum = parseInt(lastDecimal.split('.')[1], 10);
|
||||
nextDecimal = `${normalized}.${lastNum + 1}`;
|
||||
}
|
||||
|
||||
output(
|
||||
{
|
||||
found: baseExists,
|
||||
base_phase: normalized,
|
||||
next: nextDecimal,
|
||||
existing: existingDecimals,
|
||||
},
|
||||
raw,
|
||||
nextDecimal
|
||||
);
|
||||
} catch (e) {
|
||||
error('Failed to calculate next decimal phase: ' + e.message);
|
||||
}
|
||||
}
|
||||
|
||||
function cmdFindPhase(cwd, phase, raw) {
|
||||
if (!phase) {
|
||||
error('phase identifier required');
|
||||
}
|
||||
|
||||
const phasesDir = path.join(cwd, '.planning', 'phases');
|
||||
const normalized = normalizePhaseName(phase);
|
||||
|
||||
const notFound = { found: false, directory: null, phase_number: null, phase_name: null, plans: [], summaries: [] };
|
||||
|
||||
try {
|
||||
const entries = fs.readdirSync(phasesDir, { withFileTypes: true });
|
||||
const dirs = entries.filter(e => e.isDirectory()).map(e => e.name).sort((a, b) => comparePhaseNum(a, b));
|
||||
|
||||
const match = dirs.find(d => d.startsWith(normalized));
|
||||
if (!match) {
|
||||
output(notFound, raw, '');
|
||||
return;
|
||||
}
|
||||
|
||||
const dirMatch = match.match(/^(\d+[A-Z]?(?:\.\d+)*)-?(.*)/i);
|
||||
const phaseNumber = dirMatch ? dirMatch[1] : normalized;
|
||||
const phaseName = dirMatch && dirMatch[2] ? dirMatch[2] : null;
|
||||
|
||||
const phaseDir = path.join(phasesDir, match);
|
||||
const phaseFiles = fs.readdirSync(phaseDir);
|
||||
const plans = phaseFiles.filter(f => f.endsWith('-PLAN.md') || f === 'PLAN.md').sort();
|
||||
const summaries = phaseFiles.filter(f => f.endsWith('-SUMMARY.md') || f === 'SUMMARY.md').sort();
|
||||
|
||||
const result = {
|
||||
found: true,
|
||||
directory: toPosixPath(path.join('.planning', 'phases', match)),
|
||||
phase_number: phaseNumber,
|
||||
phase_name: phaseName,
|
||||
plans,
|
||||
summaries,
|
||||
};
|
||||
|
||||
output(result, raw, result.directory);
|
||||
} catch {
|
||||
output(notFound, raw, '');
|
||||
}
|
||||
}

function extractObjective(content) {
  const m = content.match(/<objective>\s*\n?\s*(.+)/);
  return m ? m[1].trim() : null;
}

function cmdPhasePlanIndex(cwd, phase, raw) {
  if (!phase) {
    error('phase required for phase-plan-index');
  }

  const phasesDir = path.join(cwd, '.planning', 'phases');
  const normalized = normalizePhaseName(phase);

  // Find phase directory
  let phaseDir = null;
  let phaseDirName = null;
  try {
    const entries = fs.readdirSync(phasesDir, { withFileTypes: true });
    const dirs = entries.filter(e => e.isDirectory()).map(e => e.name).sort((a, b) => comparePhaseNum(a, b));
    const match = dirs.find(d => d.startsWith(normalized));
    if (match) {
      phaseDir = path.join(phasesDir, match);
      phaseDirName = match;
    }
  } catch {
    // phases dir doesn't exist
  }

  if (!phaseDir) {
    output({ phase: normalized, error: 'Phase not found', plans: [], waves: {}, incomplete: [], has_checkpoints: false }, raw);
    return;
  }

  // Get all files in phase directory
  const phaseFiles = fs.readdirSync(phaseDir);
  const planFiles = phaseFiles.filter(f => f.endsWith('-PLAN.md') || f === 'PLAN.md').sort();
  const summaryFiles = phaseFiles.filter(f => f.endsWith('-SUMMARY.md') || f === 'SUMMARY.md');

  // Build set of plan IDs with summaries
  const completedPlanIds = new Set(
    summaryFiles.map(s => s.replace('-SUMMARY.md', '').replace('SUMMARY.md', ''))
  );

  const plans = [];
  const waves = {};
  const incomplete = [];
  let hasCheckpoints = false;

  for (const planFile of planFiles) {
    const planId = planFile.replace('-PLAN.md', '').replace('PLAN.md', '');
    const planPath = path.join(phaseDir, planFile);
    const content = fs.readFileSync(planPath, 'utf-8');
    const fm = extractFrontmatter(content);

    // Count tasks: XML <task> tags (canonical) or ## Task N markdown (legacy)
    const xmlTasks = content.match(/<task[\s>]/gi) || [];
    const mdTasks = content.match(/##\s*Task\s*\d+/gi) || [];
    const taskCount = xmlTasks.length || mdTasks.length;

    // Parse wave as integer
    const wave = parseInt(fm.wave, 10) || 1;

    // Parse autonomous (default true if not specified)
    let autonomous = true;
    if (fm.autonomous !== undefined) {
      autonomous = fm.autonomous === 'true' || fm.autonomous === true;
    }

    if (!autonomous) {
      hasCheckpoints = true;
    }

    // Parse files_modified (underscore is canonical; also accept hyphenated for compat)
    let filesModified = [];
    const fmFiles = fm['files_modified'] || fm['files-modified'];
    if (fmFiles) {
      filesModified = Array.isArray(fmFiles) ? fmFiles : [fmFiles];
    }

    const hasSummary = completedPlanIds.has(planId);
    if (!hasSummary) {
      incomplete.push(planId);
    }

    const plan = {
      id: planId,
      wave,
      autonomous,
      objective: extractObjective(content) || fm.objective || null,
      files_modified: filesModified,
      task_count: taskCount,
      has_summary: hasSummary,
    };

    plans.push(plan);

    // Group by wave
    const waveKey = String(wave);
    if (!waves[waveKey]) {
      waves[waveKey] = [];
    }
    waves[waveKey].push(planId);
  }

  const result = {
    phase: normalized,
    plans,
    waves,
    incomplete,
    has_checkpoints: hasCheckpoints,
  };

  output(result, raw);
}

function cmdPhaseAdd(cwd, description, raw) {
  if (!description) {
    error('description required for phase add');
  }

  const roadmapPath = path.join(cwd, '.planning', 'ROADMAP.md');
  if (!fs.existsSync(roadmapPath)) {
    error('ROADMAP.md not found');
  }

  const rawContent = fs.readFileSync(roadmapPath, 'utf-8');
  const content = extractCurrentMilestone(rawContent, cwd);
  const slug = generateSlugInternal(description);

  // Find highest integer phase number (in current milestone only)
  const phasePattern = /#{2,4}\s*Phase\s+(\d+)[A-Z]?(?:\.\d+)*:/gi;
  let maxPhase = 0;
  let m;
  while ((m = phasePattern.exec(content)) !== null) {
    const num = parseInt(m[1], 10);
    if (num > maxPhase) maxPhase = num;
  }

  const newPhaseNum = maxPhase + 1;
  const paddedNum = String(newPhaseNum).padStart(2, '0');
  const dirName = `${paddedNum}-${slug}`;
  const dirPath = path.join(cwd, '.planning', 'phases', dirName);

  // Create directory with .gitkeep so git tracks empty folders
  fs.mkdirSync(dirPath, { recursive: true });
  fs.writeFileSync(path.join(dirPath, '.gitkeep'), '');

  // Build phase entry (use **Requirements:** so phase-complete's traceability regex matches)
  const phaseEntry = `\n### Phase ${newPhaseNum}: ${description}\n\n**Goal:** [To be planned]\n**Requirements:** TBD\n**Depends on:** Phase ${maxPhase}\n**Plans:** 0 plans\n\nPlans:\n- [ ] TBD (run /gsd:plan-phase ${newPhaseNum} to break down)\n`;

  // Find insertion point: before last "---" or at end
  let updatedContent;
  const lastSeparator = rawContent.lastIndexOf('\n---');
  if (lastSeparator > 0) {
    updatedContent = rawContent.slice(0, lastSeparator) + phaseEntry + rawContent.slice(lastSeparator);
  } else {
    updatedContent = rawContent + phaseEntry;
  }

  fs.writeFileSync(roadmapPath, updatedContent, 'utf-8');

  const result = {
    phase_number: newPhaseNum,
    padded: paddedNum,
    name: description,
    slug,
    directory: `.planning/phases/${dirName}`,
  };

  output(result, raw, paddedNum);
}

function cmdPhaseInsert(cwd, afterPhase, description, raw) {
  if (!afterPhase || !description) {
    error('after-phase and description required for phase insert');
  }

  const roadmapPath = path.join(cwd, '.planning', 'ROADMAP.md');
  if (!fs.existsSync(roadmapPath)) {
    error('ROADMAP.md not found');
  }

  const rawContent = fs.readFileSync(roadmapPath, 'utf-8');
  const content = extractCurrentMilestone(rawContent, cwd);
  const slug = generateSlugInternal(description);

  // Normalize input then strip leading zeros for flexible matching
  const normalizedAfter = normalizePhaseName(afterPhase);
  const unpadded = normalizedAfter.replace(/^0+/, '');
  const afterPhaseEscaped = unpadded.replace(/\./g, '\\.');
  const targetPattern = new RegExp(`#{2,4}\\s*Phase\\s+0*${afterPhaseEscaped}:`, 'i');
  if (!targetPattern.test(content)) {
    error(`Phase ${afterPhase} not found in ROADMAP.md`);
  }

  // Calculate next decimal using existing logic
  const phasesDir = path.join(cwd, '.planning', 'phases');
  const normalizedBase = normalizePhaseName(afterPhase);
  let existingDecimals = [];

  try {
    const entries = fs.readdirSync(phasesDir, { withFileTypes: true });
    const dirs = entries.filter(e => e.isDirectory()).map(e => e.name);
    const decimalPattern = new RegExp(`^${normalizedBase}\\.(\\d+)`);
    for (const dir of dirs) {
      const dm = dir.match(decimalPattern);
      if (dm) existingDecimals.push(parseInt(dm[1], 10));
    }
  } catch {}

  const nextDecimal = existingDecimals.length === 0 ? 1 : Math.max(...existingDecimals) + 1;
  const decimalPhase = `${normalizedBase}.${nextDecimal}`;
  const dirName = `${decimalPhase}-${slug}`;
  const dirPath = path.join(cwd, '.planning', 'phases', dirName);

  // Create directory with .gitkeep so git tracks empty folders
  fs.mkdirSync(dirPath, { recursive: true });
  fs.writeFileSync(path.join(dirPath, '.gitkeep'), '');

  // Build phase entry (use **Requirements:** so phase-complete's traceability regex matches)
  const phaseEntry = `\n### Phase ${decimalPhase}: ${description} (INSERTED)\n\n**Goal:** [Urgent work - to be planned]\n**Requirements:** TBD\n**Depends on:** Phase ${afterPhase}\n**Plans:** 0 plans\n\nPlans:\n- [ ] TBD (run /gsd:plan-phase ${decimalPhase} to break down)\n`;

  // Insert after the target phase section
  const headerPattern = new RegExp(`(#{2,4}\\s*Phase\\s+0*${afterPhaseEscaped}:[^\\n]*\\n)`, 'i');
  const headerMatch = rawContent.match(headerPattern);
  if (!headerMatch) {
    error(`Could not find Phase ${afterPhase} header`);
  }

  const headerIdx = rawContent.indexOf(headerMatch[0]);
  const afterHeader = rawContent.slice(headerIdx + headerMatch[0].length);
  const nextPhaseMatch = afterHeader.match(/\n#{2,4}\s+Phase\s+\d/i);

  let insertIdx;
  if (nextPhaseMatch) {
    insertIdx = headerIdx + headerMatch[0].length + nextPhaseMatch.index;
  } else {
    insertIdx = rawContent.length;
  }

  const updatedContent = rawContent.slice(0, insertIdx) + phaseEntry + rawContent.slice(insertIdx);
  fs.writeFileSync(roadmapPath, updatedContent, 'utf-8');

  const result = {
    phase_number: decimalPhase,
    after_phase: afterPhase,
    name: description,
    slug,
    directory: `.planning/phases/${dirName}`,
  };

  output(result, raw, decimalPhase);
}

function cmdPhaseRemove(cwd, targetPhase, options, raw) {
  if (!targetPhase) {
    error('phase number required for phase remove');
  }

  const roadmapPath = path.join(cwd, '.planning', 'ROADMAP.md');
  const phasesDir = path.join(cwd, '.planning', 'phases');
  const force = options.force || false;

  if (!fs.existsSync(roadmapPath)) {
    error('ROADMAP.md not found');
  }

  // Normalize the target
  const normalized = normalizePhaseName(targetPhase);
  const isDecimal = targetPhase.includes('.');

  // Find and validate target directory
  let targetDir = null;
  try {
    const entries = fs.readdirSync(phasesDir, { withFileTypes: true });
    const dirs = entries.filter(e => e.isDirectory()).map(e => e.name).sort((a, b) => comparePhaseNum(a, b));
    targetDir = dirs.find(d => d.startsWith(normalized + '-') || d === normalized);
  } catch {}

  // Check for executed work (SUMMARY.md files)
  if (targetDir && !force) {
    const targetPath = path.join(phasesDir, targetDir);
    const files = fs.readdirSync(targetPath);
    const summaries = files.filter(f => f.endsWith('-SUMMARY.md') || f === 'SUMMARY.md');
    if (summaries.length > 0) {
      error(`Phase ${targetPhase} has ${summaries.length} executed plan(s). Use --force to remove anyway.`);
    }
  }

  // Delete target directory
  if (targetDir) {
    fs.rmSync(path.join(phasesDir, targetDir), { recursive: true, force: true });
  }

  // Renumber subsequent phases
  const renamedDirs = [];
  const renamedFiles = [];

  if (isDecimal) {
    // Decimal removal: renumber sibling decimals (e.g., removing 06.2 → 06.3 becomes 06.2)
    const baseParts = normalized.split('.');
    const baseInt = baseParts[0];
    const removedDecimal = parseInt(baseParts[1], 10);

    try {
      const entries = fs.readdirSync(phasesDir, { withFileTypes: true });
      const dirs = entries.filter(e => e.isDirectory()).map(e => e.name).sort((a, b) => comparePhaseNum(a, b));

      // Find sibling decimals with higher numbers
      const decPattern = new RegExp(`^${baseInt}\\.(\\d+)-(.+)$`);
      const toRename = [];
      for (const dir of dirs) {
        const dm = dir.match(decPattern);
        if (dm && parseInt(dm[1], 10) > removedDecimal) {
          toRename.push({ dir, oldDecimal: parseInt(dm[1], 10), slug: dm[2] });
        }
      }

      // Sort descending to avoid conflicts
      toRename.sort((a, b) => b.oldDecimal - a.oldDecimal);

      for (const item of toRename) {
        const newDecimal = item.oldDecimal - 1;
        const oldPhaseId = `${baseInt}.${item.oldDecimal}`;
        const newPhaseId = `${baseInt}.${newDecimal}`;
        const newDirName = `${baseInt}.${newDecimal}-${item.slug}`;

        // Rename directory
        fs.renameSync(path.join(phasesDir, item.dir), path.join(phasesDir, newDirName));
        renamedDirs.push({ from: item.dir, to: newDirName });

        // Rename files inside
        const dirFiles = fs.readdirSync(path.join(phasesDir, newDirName));
        for (const f of dirFiles) {
          // Files may have phase prefix like "06.2-01-PLAN.md"
          if (f.includes(oldPhaseId)) {
            const newFileName = f.replace(oldPhaseId, newPhaseId);
            fs.renameSync(
              path.join(phasesDir, newDirName, f),
              path.join(phasesDir, newDirName, newFileName)
            );
            renamedFiles.push({ from: f, to: newFileName });
          }
        }
      }
    } catch {}

  } else {
    // Integer removal: renumber all subsequent integer phases
    const removedInt = parseInt(normalized, 10);

    try {
      const entries = fs.readdirSync(phasesDir, { withFileTypes: true });
      const dirs = entries.filter(e => e.isDirectory()).map(e => e.name).sort((a, b) => comparePhaseNum(a, b));

      // Collect directories that need renumbering (integer phases > removed, and their decimals/letters)
      const toRename = [];
      for (const dir of dirs) {
        const dm = dir.match(/^(\d+)([A-Z])?(?:\.(\d+))?-(.+)$/i);
        if (!dm) continue;
        const dirInt = parseInt(dm[1], 10);
        if (dirInt > removedInt) {
          toRename.push({
            dir,
            oldInt: dirInt,
            letter: dm[2] ? dm[2].toUpperCase() : '',
            decimal: dm[3] ? parseInt(dm[3], 10) : null,
            slug: dm[4],
          });
        }
      }

      // Sort descending to avoid conflicts
      toRename.sort((a, b) => {
        if (a.oldInt !== b.oldInt) return b.oldInt - a.oldInt;
        return (b.decimal || 0) - (a.decimal || 0);
      });

      for (const item of toRename) {
        const newInt = item.oldInt - 1;
        const newPadded = String(newInt).padStart(2, '0');
        const oldPadded = String(item.oldInt).padStart(2, '0');
        const letterSuffix = item.letter || '';
        const decimalSuffix = item.decimal !== null ? `.${item.decimal}` : '';
        const oldPrefix = `${oldPadded}${letterSuffix}${decimalSuffix}`;
        const newPrefix = `${newPadded}${letterSuffix}${decimalSuffix}`;
        const newDirName = `${newPrefix}-${item.slug}`;

        // Rename directory
        fs.renameSync(path.join(phasesDir, item.dir), path.join(phasesDir, newDirName));
        renamedDirs.push({ from: item.dir, to: newDirName });

        // Rename files inside
        const dirFiles = fs.readdirSync(path.join(phasesDir, newDirName));
        for (const f of dirFiles) {
          if (f.startsWith(oldPrefix)) {
            const newFileName = newPrefix + f.slice(oldPrefix.length);
            fs.renameSync(
              path.join(phasesDir, newDirName, f),
              path.join(phasesDir, newDirName, newFileName)
            );
            renamedFiles.push({ from: f, to: newFileName });
          }
        }
      }
    } catch {}
  }

  // Update ROADMAP.md
  let roadmapContent = fs.readFileSync(roadmapPath, 'utf-8');

  // Remove the target phase section
  const targetEscaped = escapeRegex(targetPhase);
  const sectionPattern = new RegExp(
    `\\n?#{2,4}\\s*Phase\\s+${targetEscaped}\\s*:[\\s\\S]*?(?=\\n#{2,4}\\s+Phase\\s+\\d|$)`,
    'i'
  );
  roadmapContent = roadmapContent.replace(sectionPattern, '');

  // Remove from phase list (checkbox)
  const checkboxPattern = new RegExp(`\\n?-\\s*\\[[ x]\\]\\s*.*Phase\\s+${targetEscaped}[:\\s][^\\n]*`, 'gi');
  roadmapContent = roadmapContent.replace(checkboxPattern, '');

  // Remove from progress table
  const tableRowPattern = new RegExp(`\\n?\\|\\s*${targetEscaped}\\.?\\s[^|]*\\|[^\\n]*`, 'gi');
  roadmapContent = roadmapContent.replace(tableRowPattern, '');

  // Renumber references in ROADMAP for subsequent phases
  if (!isDecimal) {
    const removedInt = parseInt(normalized, 10);

    // Collect all integer phases > removedInt
    const maxPhase = 99; // reasonable upper bound
    for (let oldNum = maxPhase; oldNum > removedInt; oldNum--) {
      const newNum = oldNum - 1;
      const oldStr = String(oldNum);
      const newStr = String(newNum);
      const oldPad = oldStr.padStart(2, '0');
      const newPad = newStr.padStart(2, '0');

      // Phase headings: ## Phase 18: or ### Phase 18: → ## Phase 17: or ### Phase 17:
      roadmapContent = roadmapContent.replace(
        new RegExp(`(#{2,4}\\s*Phase\\s+)${oldStr}(\\s*:)`, 'gi'),
        `$1${newStr}$2`
      );

      // Checkbox items: - [ ] **Phase 18:** → - [ ] **Phase 17:**
      roadmapContent = roadmapContent.replace(
        new RegExp(`(Phase\\s+)${oldStr}([:\\s])`, 'g'),
        `$1${newStr}$2`
      );

      // Plan references: 18-01 → 17-01
      roadmapContent = roadmapContent.replace(
        new RegExp(`${oldPad}-(\\d{2})`, 'g'),
        `${newPad}-$1`
      );

      // Table rows: | 18. → | 17.
      roadmapContent = roadmapContent.replace(
        new RegExp(`(\\|\\s*)${oldStr}\\.\\s`, 'g'),
        `$1${newStr}. `
      );

      // Depends on references
      roadmapContent = roadmapContent.replace(
        new RegExp(`(Depends on:\\*\\*\\s*Phase\\s+)${oldStr}\\b`, 'gi'),
        `$1${newStr}`
      );
    }
  }

  fs.writeFileSync(roadmapPath, roadmapContent, 'utf-8');

  // Update STATE.md phase count
  const statePath = path.join(cwd, '.planning', 'STATE.md');
  if (fs.existsSync(statePath)) {
    let stateContent = fs.readFileSync(statePath, 'utf-8');
    // Update "Total Phases" field
    const totalPattern = /(\*\*Total Phases:\*\*\s*)(\d+)/;
    const totalMatch = stateContent.match(totalPattern);
    if (totalMatch) {
      const oldTotal = parseInt(totalMatch[2], 10);
      stateContent = stateContent.replace(totalPattern, `$1${oldTotal - 1}`);
    }
    // Update "Phase: X of Y" pattern
    const ofPattern = /(\bof\s+)(\d+)(\s*(?:\(|phases?))/i;
    const ofMatch = stateContent.match(ofPattern);
    if (ofMatch) {
      const oldTotal = parseInt(ofMatch[2], 10);
      stateContent = stateContent.replace(ofPattern, `$1${oldTotal - 1}$3`);
    }
    writeStateMd(statePath, stateContent, cwd);
  }

  const result = {
    removed: targetPhase,
    directory_deleted: targetDir || null,
    renamed_directories: renamedDirs,
    renamed_files: renamedFiles,
    roadmap_updated: true,
    state_updated: fs.existsSync(statePath),
  };

  output(result, raw);
}

function cmdPhaseComplete(cwd, phaseNum, raw) {
  if (!phaseNum) {
    error('phase number required for phase complete');
  }

  const roadmapPath = path.join(cwd, '.planning', 'ROADMAP.md');
  const statePath = path.join(cwd, '.planning', 'STATE.md');
  const phasesDir = path.join(cwd, '.planning', 'phases');
  const normalized = normalizePhaseName(phaseNum);
  const today = new Date().toISOString().split('T')[0];

  // Verify phase info
  const phaseInfo = findPhaseInternal(cwd, phaseNum);
  if (!phaseInfo) {
    error(`Phase ${phaseNum} not found`);
  }

  const planCount = phaseInfo.plans.length;
  const summaryCount = phaseInfo.summaries.length;
  let requirementsUpdated = false;

  // Update ROADMAP.md: mark phase complete
  if (fs.existsSync(roadmapPath)) {
    let roadmapContent = fs.readFileSync(roadmapPath, 'utf-8');

    // Checkbox: - [ ] Phase N: → - [x] Phase N: (...completed DATE)
    const checkboxPattern = new RegExp(
      `(-\\s*\\[)[ ](\\]\\s*.*Phase\\s+${escapeRegex(phaseNum)}[:\\s][^\\n]*)`,
      'i'
    );
    roadmapContent = replaceInCurrentMilestone(roadmapContent, checkboxPattern, `$1x$2 (completed ${today})`);

    // Progress table: update Status to Complete, add date
    const phaseEscaped = escapeRegex(phaseNum);
    const tablePattern = new RegExp(
      `(\\|\\s*${phaseEscaped}\\.?\\s[^|]*\\|[^|]*\\|)\\s*[^|]*(\\|)\\s*[^|]*(\\|)`,
      'i'
    );
    roadmapContent = replaceInCurrentMilestone(
      roadmapContent, tablePattern,
      `$1 Complete $2 ${today} $3`
    );

    // Update plan count in phase section
    const planCountPattern = new RegExp(
      `(#{2,4}\\s*Phase\\s+${phaseEscaped}[\\s\\S]*?\\*\\*Plans:\\*\\*\\s*)[^\\n]+`,
      'i'
    );
    roadmapContent = replaceInCurrentMilestone(
      roadmapContent, planCountPattern,
      `$1${summaryCount}/${planCount} plans complete`
    );

    fs.writeFileSync(roadmapPath, roadmapContent, 'utf-8');

    // Update REQUIREMENTS.md traceability for this phase's requirements
    const reqPath = path.join(cwd, '.planning', 'REQUIREMENTS.md');
    if (fs.existsSync(reqPath)) {
      // Extract the current phase section from roadmap (scoped to avoid cross-phase matching)
      const phaseEsc = escapeRegex(phaseNum);
      const currentMilestoneRoadmap = extractCurrentMilestone(roadmapContent, cwd);
      const phaseSectionMatch = currentMilestoneRoadmap.match(
        new RegExp(`(#{2,4}\\s*Phase\\s+${phaseEsc}[:\\s][\\s\\S]*?)(?=#{2,4}\\s*Phase\\s+|$)`, 'i')
      );

      const sectionText = phaseSectionMatch ? phaseSectionMatch[1] : '';
      const reqMatch = sectionText.match(/\*\*Requirements:\*\*\s*([^\n]+)/i);

      if (reqMatch) {
        const reqIds = reqMatch[1].replace(/[\[\]]/g, '').split(/[,\s]+/).map(r => r.trim()).filter(Boolean);
        let reqContent = fs.readFileSync(reqPath, 'utf-8');

        for (const reqId of reqIds) {
          const reqEscaped = escapeRegex(reqId);
          // Update checkbox: - [ ] **REQ-ID** → - [x] **REQ-ID**
          reqContent = reqContent.replace(
            new RegExp(`(-\\s*\\[)[ ](\\]\\s*\\*\\*${reqEscaped}\\*\\*)`, 'gi'),
            '$1x$2'
          );
          // Update traceability table: | REQ-ID | Phase N | Pending/In Progress | → | REQ-ID | Phase N | Complete |
          reqContent = reqContent.replace(
            new RegExp(`(\\|\\s*${reqEscaped}\\s*\\|[^|]+\\|)\\s*(?:Pending|In Progress)\\s*(\\|)`, 'gi'),
            '$1 Complete $2'
          );
        }

        fs.writeFileSync(reqPath, reqContent, 'utf-8');
        requirementsUpdated = true;
      }
    }
  }

  // Find next phase — check both filesystem AND roadmap
  // Phases may be defined in ROADMAP.md but not yet scaffolded to disk,
  // so a filesystem-only scan would incorrectly report is_last_phase:true
  let nextPhaseNum = null;
  let nextPhaseName = null;
  let isLastPhase = true;

  try {
    const isDirInMilestone = getMilestonePhaseFilter(cwd);
    const entries = fs.readdirSync(phasesDir, { withFileTypes: true });
    const dirs = entries.filter(e => e.isDirectory()).map(e => e.name)
      .filter(isDirInMilestone)
      .sort((a, b) => comparePhaseNum(a, b));

    // Find the next phase directory after current
    for (const dir of dirs) {
      const dm = dir.match(/^(\d+[A-Z]?(?:\.\d+)*)-?(.*)/i);
      if (dm) {
        if (comparePhaseNum(dm[1], phaseNum) > 0) {
          nextPhaseNum = dm[1];
          nextPhaseName = dm[2] || null;
          isLastPhase = false;
          break;
        }
      }
    }
  } catch {}

  // Fallback: if filesystem found no next phase, check ROADMAP.md
  // for phases that are defined but not yet planned (no directory on disk)
  if (isLastPhase && fs.existsSync(roadmapPath)) {
    try {
      const roadmapForPhases = extractCurrentMilestone(fs.readFileSync(roadmapPath, 'utf-8'), cwd);
      const phasePattern = /#{2,4}\s*Phase\s+(\d+[A-Z]?(?:\.\d+)*)\s*:\s*([^\n]+)/gi;
      let pm;
      while ((pm = phasePattern.exec(roadmapForPhases)) !== null) {
        if (comparePhaseNum(pm[1], phaseNum) > 0) {
          nextPhaseNum = pm[1];
          nextPhaseName = pm[2].replace(/\(INSERTED\)/i, '').trim().toLowerCase().replace(/\s+/g, '-');
          isLastPhase = false;
          break;
        }
      }
    } catch {}
  }

  // Update STATE.md
  if (fs.existsSync(statePath)) {
    let stateContent = fs.readFileSync(statePath, 'utf-8');

    // Update Current Phase
    stateContent = stateContent.replace(
      /(\*\*Current Phase:\*\*\s*).*/,
      `$1${nextPhaseNum || phaseNum}`
    );

    // Update Current Phase Name
    if (nextPhaseName) {
      stateContent = stateContent.replace(
        /(\*\*Current Phase Name:\*\*\s*).*/,
        `$1${nextPhaseName.replace(/-/g, ' ')}`
      );
    }

    // Update Status
    stateContent = stateContent.replace(
      /(\*\*Status:\*\*\s*).*/,
      `$1${isLastPhase ? 'Milestone complete' : 'Ready to plan'}`
    );

    // Update Current Plan
    stateContent = stateContent.replace(
      /(\*\*Current Plan:\*\*\s*).*/,
      `$1Not started`
    );

    // Update Last Activity
    stateContent = stateContent.replace(
      /(\*\*Last Activity:\*\*\s*).*/,
      `$1${today}`
    );

    // Update Last Activity Description
    stateContent = stateContent.replace(
      /(\*\*Last Activity Description:\*\*\s*).*/,
      `$1Phase ${phaseNum} complete${nextPhaseNum ? `, transitioned to Phase ${nextPhaseNum}` : ''}`
    );

    // Increment Completed Phases counter (#956)
    const completedMatch = stateContent.match(/\*\*Completed Phases:\*\*\s*(\d+)/);
    if (completedMatch) {
      const newCompleted = parseInt(completedMatch[1], 10) + 1;
      stateContent = stateContent.replace(
        /(\*\*Completed Phases:\*\*\s*)\d+/,
        `$1${newCompleted}`
      );

      // Recalculate percent based on completed / total (#956)
      const totalMatch = stateContent.match(/\*\*Total Phases:\*\*\s*(\d+)/);
      if (totalMatch) {
        const totalPhases = parseInt(totalMatch[1], 10);
        if (totalPhases > 0) {
          const newPercent = Math.round((newCompleted / totalPhases) * 100);
          stateContent = stateContent.replace(
            /(\*\*Progress:\*\*\s*)\d+%/,
            `$1${newPercent}%`
          );
          // Also update percent field if it exists separately
          stateContent = stateContent.replace(
            /(percent:\s*)\d+/,
            `$1${newPercent}`
          );
        }
      }
    }

    writeStateMd(statePath, stateContent, cwd);
  }

  const result = {
    completed_phase: phaseNum,
    phase_name: phaseInfo.phase_name,
    plans_executed: `${summaryCount}/${planCount}`,
    next_phase: nextPhaseNum,
    next_phase_name: nextPhaseName,
    is_last_phase: isLastPhase,
    date: today,
    roadmap_updated: fs.existsSync(roadmapPath),
    state_updated: fs.existsSync(statePath),
    requirements_updated: requirementsUpdated,
  };

  output(result, raw);
}

module.exports = {
  cmdPhasesList,
  cmdPhaseNextDecimal,
  cmdFindPhase,
  cmdPhasePlanIndex,
  cmdPhaseAdd,
  cmdPhaseInsert,
  cmdPhaseRemove,
  cmdPhaseComplete,
};
931
get-shit-done/bin/lib/profile-output.cjs
Normal file
@@ -0,0 +1,931 @@
/**
 * Profile Output — profile rendering, questionnaire, and artifact generation
 *
 * Renders profiling analysis into user-facing artifacts:
 * - write-profile: USER-PROFILE.md from analysis JSON
 * - profile-questionnaire: fallback when no sessions available
 * - generate-dev-preferences: dev-preferences.md command artifact
 * - generate-claude-profile: Developer Profile section in CLAUDE.md
 * - generate-claude-md: full CLAUDE.md with managed sections
 */

const fs = require('fs');
const path = require('path');
const os = require('os');
const { output, error, safeReadFile } = require('./core.cjs');

// ─── Constants ────────────────────────────────────────────────────────────────

const DIMENSION_KEYS = [
  'communication_style', 'decision_speed', 'explanation_depth',
  'debugging_approach', 'ux_philosophy', 'vendor_philosophy',
  'frustration_triggers', 'learning_style'
];

const PROFILING_QUESTIONS = [
  {
    dimension: 'communication_style',
    header: 'Communication Style',
    context: 'Think about the last few times you asked Claude to build or change something. How did you frame the request?',
    question: 'When you ask Claude to build something, how much context do you typically provide?',
    options: [
      { label: 'Minimal -- "fix the bug", "add dark mode", just say what\'s needed', value: 'a', rating: 'terse-direct' },
      { label: 'Some context -- explain what and why in a paragraph or two', value: 'b', rating: 'conversational' },
      { label: 'Detailed specs -- headers, numbered lists, problem analysis, constraints', value: 'c', rating: 'detailed-structured' },
      { label: 'It depends on the task -- simple tasks get short prompts, complex ones get detailed specs', value: 'd', rating: 'mixed' },
    ],
  },
  {
    dimension: 'decision_speed',
    header: 'Decision Making',
    context: 'Think about times when Claude presented you with multiple options -- like choosing a library, picking an architecture, or selecting an approach.',
    question: 'When Claude presents you with options, how do you typically decide?',
    options: [
      { label: 'Pick quickly based on gut feeling or past experience', value: 'a', rating: 'fast-intuitive' },
      { label: 'Ask for a comparison table or pros/cons, then decide', value: 'b', rating: 'deliberate-informed' },
      { label: 'Research independently (read docs, check GitHub stars) before deciding', value: 'c', rating: 'research-first' },
      { label: 'Let Claude recommend -- I generally trust the suggestion', value: 'd', rating: 'delegator' },
    ],
  },
  {
    dimension: 'explanation_depth',
    header: 'Explanation Preferences',
    context: 'Think about when Claude explains code it wrote or an approach it took. How much detail feels right?',
    question: 'When Claude explains something, how much detail do you want?',
    options: [
      { label: 'Just the code -- I\'ll read it and figure it out myself', value: 'a', rating: 'code-only' },
      { label: 'Brief explanation with the code -- a sentence or two about the approach', value: 'b', rating: 'concise' },
      { label: 'Detailed walkthrough -- explain the approach, trade-offs, and code structure', value: 'c', rating: 'detailed' },
      { label: 'Deep dive -- teach me the concepts behind it so I understand the fundamentals', value: 'd', rating: 'educational' },
    ],
  },
  {
    dimension: 'debugging_approach',
    header: 'Debugging Style',
    context: 'Think about the last few times something broke in your code. How did you approach it with Claude?',
    question: 'When something breaks, how do you typically approach debugging with Claude?',
    options: [
      { label: 'Paste the error and say "fix it" -- get it working fast', value: 'a', rating: 'fix-first' },
      { label: 'Share the error plus context, ask Claude to diagnose what went wrong', value: 'b', rating: 'diagnostic' },
      { label: 'Investigate myself first, then ask Claude about my specific theories', value: 'c', rating: 'hypothesis-driven' },
      { label: 'Walk through the code together step by step to understand the issue', value: 'd', rating: 'collaborative' },
    ],
  },
  {
    dimension: 'ux_philosophy',
    header: 'UX Philosophy',
    context: 'Think about user-facing features you have built recently. How did you balance functionality with design?',
    question: 'When building user-facing features, what do you prioritize?',
    options: [
      { label: 'Get it working first, polish the UI later (or never)', value: 'a', rating: 'function-first' },
      { label: 'Basic usability from the start -- nothing ugly, but no pixel-perfection', value: 'b', rating: 'pragmatic' },
      { label: 'Design and UX are as important as functionality -- I care about the experience', value: 'c', rating: 'design-conscious' },
      { label: 'I mostly build backend, CLI, or infrastructure -- UX is minimal', value: 'd', rating: 'backend-focused' },
    ],
  },
  {
    dimension: 'vendor_philosophy',
    header: 'Library & Vendor Choices',
    context: 'Think about the last time you needed a library or service for a project. How did you go about choosing it?',
    question: 'When choosing libraries or services, what is your typical approach?',
    options: [
      { label: 'Use whatever Claude suggests -- speed matters more than the perfect choice', value: 'a', rating: 'pragmatic-fast' },
      { label: 'Prefer well-known, battle-tested options (React, PostgreSQL, Express)', value: 'b', rating: 'conservative' },
      { label: 'Research alternatives, read docs, compare benchmarks before committing', value: 'c', rating: 'thorough-evaluator' },
      { label: 'Strong opinions -- I already know what I like and I stick with it', value: 'd', rating: 'opinionated' },
    ],
  },
  {
    dimension: 'frustration_triggers',
    header: 'Frustration Triggers',
    context: 'Think about moments when working with AI coding assistants that made you frustrated or annoyed.',
    question: 'What frustrates you most when working with AI coding assistants?',
    options: [
      { label: 'Doing things I didn\'t ask for -- adding features, refactoring code, scope creep', value: 'a', rating: 'scope-creep' },
      { label: 'Not following instructions precisely -- ignoring constraints or requirements I stated', value: 'b', rating: 'instruction-adherence' },
      { label: 'Over-explaining or being too verbose -- just give me the code and move on', value: 'c', rating: 'verbosity' },
      { label: 'Breaking working code while fixing something else -- regressions', value: 'd', rating: 'regression' },
    ],
  },
  {
    dimension: 'learning_style',
    header: 'Learning Preferences',
    context: 'Think about encountering something new -- an unfamiliar library, a codebase you inherited, a concept you hadn\'t used before.',
    question: 'When you encounter something new in your codebase, how do you prefer to learn about it?',
    options: [
      { label: 'Read the code directly -- I figure things out by reading and experimenting', value: 'a', rating: 'self-directed' },
{ label: 'Ask Claude to explain the relevant parts to me', value: 'b', rating: 'guided' },
|
||||
{ label: 'Read official docs and tutorials first, then try things', value: 'c', rating: 'documentation-first' },
|
||||
{ label: 'See a working example, then modify it to understand how it works', value: 'd', rating: 'example-driven' },
|
||||
],
|
||||
},
|
||||
];
|
||||
|
||||
const CLAUDE_INSTRUCTIONS = {
  communication_style: {
    'terse-direct': 'Keep responses concise and action-oriented. Skip lengthy preambles. Match this developer\'s direct style.',
    'conversational': 'Use a natural conversational tone. Explain reasoning briefly alongside code. Engage with the developer\'s questions.',
    'detailed-structured': 'Match this developer\'s structured communication: use headers for sections, numbered lists for steps, and acknowledge provided context before responding.',
    'mixed': 'Adapt response detail to match the complexity of each request. Brief for simple tasks, detailed for complex ones.',
  },
  decision_speed: {
    'fast-intuitive': 'Present a single strong recommendation with brief justification. Skip lengthy comparisons unless asked.',
    'deliberate-informed': 'Present options in a structured comparison table with pros/cons. Let the developer make the final call.',
    'research-first': 'Include links to docs, GitHub repos, or benchmarks when recommending tools. Support the developer\'s research process.',
    'delegator': 'Make clear recommendations with confidence. Explain your reasoning briefly, but own the suggestion.',
  },
  explanation_depth: {
    'code-only': 'Prioritize code output. Add comments inline rather than prose explanations. Skip walkthroughs unless asked.',
    'concise': 'Pair code with a brief explanation (1-2 sentences) of the approach. Keep prose minimal.',
    'detailed': 'Explain the approach, key trade-offs, and code structure alongside the implementation. Use headers to organize.',
    'educational': 'Teach the underlying concepts and principles, not just the implementation. Relate new patterns to fundamentals.',
  },
  debugging_approach: {
    'fix-first': 'Prioritize the fix. Show the corrected code first, then optionally explain what was wrong. Minimize diagnostic preamble.',
    'diagnostic': 'Diagnose the root cause before presenting the fix. Explain what went wrong and why the fix addresses it.',
    'hypothesis-driven': 'Engage with the developer\'s theories. Validate or refine their hypotheses before jumping to solutions.',
    'collaborative': 'Walk through the debugging process step by step. Explain the investigation approach, not just the conclusion.',
  },
  ux_philosophy: {
    'function-first': 'Focus on functionality and correctness. Keep UI minimal and functional. Skip design polish unless requested.',
    'pragmatic': 'Build clean, usable interfaces without over-engineering. Apply basic design principles (spacing, alignment, contrast).',
    'design-conscious': 'Invest in UX quality: thoughtful spacing, smooth transitions, responsive layouts. Treat design as a first-class concern.',
    'backend-focused': 'Optimize for developer experience (clear APIs, good error messages, helpful CLI output) over visual design.',
  },
  vendor_philosophy: {
    'pragmatic-fast': 'Suggest libraries quickly based on popularity and reliability. Don\'t over-analyze choices for non-critical dependencies.',
    'conservative': 'Recommend well-established, widely-adopted tools with strong community support. Avoid bleeding-edge options.',
    'thorough-evaluator': 'Compare alternatives with specific metrics (bundle size, GitHub stars, maintenance activity). Support informed decisions.',
    'opinionated': 'Respect the developer\'s existing tool preferences. Ask before suggesting alternatives to their preferred stack.',
  },
  frustration_triggers: {
    'scope-creep': 'Do exactly what is asked -- nothing more. Never add unrequested features, refactoring, or "improvements". Ask before expanding scope.',
    'instruction-adherence': 'Follow instructions precisely. Re-read constraints before responding. If requirements conflict, flag the conflict rather than silently choosing.',
    'verbosity': 'Be concise. Lead with code, follow with brief explanation only if needed. Avoid restating the problem or unnecessary context.',
    'regression': 'Before modifying working code, verify the change is safe. Run existing tests mentally. Flag potential regression risks explicitly.',
  },
  learning_style: {
    'self-directed': 'Point to relevant code sections and let the developer explore. Add signposts (file paths, function names) rather than full explanations.',
    'guided': 'Explain concepts in context of the developer\'s codebase. Use their actual code as examples when teaching.',
    'documentation-first': 'Link to official documentation and relevant sections. Structure explanations like reference material.',
    'example-driven': 'Lead with working code examples. Show a minimal example first, then explain how to extend or modify it.',
  },
};

const CLAUDE_MD_FALLBACKS = {
  project: 'Project not yet initialized. Run /gsd:new-project to set up.',
  stack: 'Technology stack not yet documented. Will populate after codebase mapping or first phase.',
  conventions: 'Conventions not yet established. Will populate as patterns emerge during development.',
  architecture: 'Architecture not yet mapped. Follow existing patterns found in the codebase.',
};

const CLAUDE_MD_PROFILE_PLACEHOLDER = [
  '<!-- GSD:profile-start -->',
  '## Developer Profile',
  '',
  '> Profile not yet configured. Run `/gsd:profile-user` to generate your developer profile.',
  '> This section is managed by `generate-claude-profile` -- do not edit manually.',
  '<!-- GSD:profile-end -->',
].join('\n');

// ─── Helper Functions ─────────────────────────────────────────────────────────

function isAmbiguousAnswer(dimension, value) {
  if (dimension === 'communication_style' && value === 'd') return true;
  const question = PROFILING_QUESTIONS.find(q => q.dimension === dimension);
  if (!question) return false;
  const option = question.options.find(o => o.value === value);
  if (!option) return false;
  return option.rating === 'mixed';
}

function generateClaudeInstruction(dimension, rating) {
  const dimInstructions = CLAUDE_INSTRUCTIONS[dimension];
  if (dimInstructions && dimInstructions[rating]) {
    return dimInstructions[rating];
  }
  return `Adapt to this developer's ${dimension.replace(/_/g, ' ')} preference: ${rating}.`;
}

function extractSectionContent(fileContent, sectionName) {
  const startMarker = `<!-- GSD:${sectionName}-start`;
  const endMarker = `<!-- GSD:${sectionName}-end -->`;
  const startIdx = fileContent.indexOf(startMarker);
  const endIdx = fileContent.indexOf(endMarker);
  if (startIdx === -1 || endIdx === -1) return null;
  const startTagEnd = fileContent.indexOf('-->', startIdx);
  if (startTagEnd === -1) return null;
  return fileContent.substring(startTagEnd + 3, endIdx);
}

function buildSection(sectionName, sourceFile, content) {
  return [
    `<!-- GSD:${sectionName}-start source:${sourceFile} -->`,
    content,
    `<!-- GSD:${sectionName}-end -->`,
  ].join('\n');
}

function updateSection(fileContent, sectionName, newContent) {
  const startMarker = `<!-- GSD:${sectionName}-start`;
  const endMarker = `<!-- GSD:${sectionName}-end -->`;
  const startIdx = fileContent.indexOf(startMarker);
  const endIdx = fileContent.indexOf(endMarker);
  if (startIdx !== -1 && endIdx !== -1) {
    const before = fileContent.substring(0, startIdx);
    const after = fileContent.substring(endIdx + endMarker.length);
    return { content: before + newContent + after, action: 'replaced' };
  }
  return { content: fileContent.trimEnd() + '\n\n' + newContent + '\n', action: 'appended' };
}

function detectManualEdit(fileContent, sectionName, expectedContent) {
  const currentContent = extractSectionContent(fileContent, sectionName);
  if (currentContent === null) return false;
  const normalize = (s) => s.trim().replace(/\n{3,}/g, '\n\n');
  return normalize(currentContent) !== normalize(expectedContent);
}

function extractMarkdownSection(content, sectionName) {
  if (!content) return null;
  const lines = content.split('\n');
  let capturing = false;
  const result = [];
  const headingPattern = new RegExp(`^## ${sectionName}\\s*$`);
  for (const line of lines) {
    if (headingPattern.test(line)) {
      capturing = true;
      result.push(line);
      continue;
    }
    if (capturing && /^## /.test(line)) break;
    if (capturing) result.push(line);
  }
  return result.length > 0 ? result.join('\n').trim() : null;
}

// ─── CLAUDE.md Section Generators ─────────────────────────────────────────────

function generateProjectSection(cwd) {
  const projectPath = path.join(cwd, '.planning', 'PROJECT.md');
  const content = safeReadFile(projectPath);
  if (!content) {
    return { content: CLAUDE_MD_FALLBACKS.project, source: 'PROJECT.md', hasFallback: true };
  }
  const parts = [];
  const h1Match = content.match(/^# (.+)$/m);
  if (h1Match) parts.push(`**${h1Match[1]}**`);
  const whatThisIs = extractMarkdownSection(content, 'What This Is');
  if (whatThisIs) {
    const body = whatThisIs.replace(/^## What This Is\s*/i, '').trim();
    if (body) parts.push(body);
  }
  const coreValue = extractMarkdownSection(content, 'Core Value');
  if (coreValue) {
    const body = coreValue.replace(/^## Core Value\s*/i, '').trim();
    if (body) parts.push(`**Core Value:** ${body}`);
  }
  const constraints = extractMarkdownSection(content, 'Constraints');
  if (constraints) {
    const body = constraints.replace(/^## Constraints\s*/i, '').trim();
    if (body) parts.push(`### Constraints\n\n${body}`);
  }
  if (parts.length === 0) {
    return { content: CLAUDE_MD_FALLBACKS.project, source: 'PROJECT.md', hasFallback: true };
  }
  return { content: parts.join('\n\n'), source: 'PROJECT.md', hasFallback: false };
}

function generateStackSection(cwd) {
  const codebasePath = path.join(cwd, '.planning', 'codebase', 'STACK.md');
  const researchPath = path.join(cwd, '.planning', 'research', 'STACK.md');
  let content = safeReadFile(codebasePath);
  let source = 'codebase/STACK.md';
  if (!content) {
    content = safeReadFile(researchPath);
    source = 'research/STACK.md';
  }
  if (!content) {
    return { content: CLAUDE_MD_FALLBACKS.stack, source: 'STACK.md', hasFallback: true };
  }
  const lines = content.split('\n');
  const summaryLines = [];
  let inTable = false;
  for (const line of lines) {
    if (line.startsWith('#')) {
      if (!line.startsWith('# ') || summaryLines.length > 0) summaryLines.push(line);
      continue;
    }
    if (line.startsWith('|')) { inTable = true; summaryLines.push(line); continue; }
    if (inTable && line.trim() === '') inTable = false;
    if (line.startsWith('- ') || line.startsWith('* ')) summaryLines.push(line);
  }
  const summary = summaryLines.length > 0 ? summaryLines.join('\n') : content.trim();
  return { content: summary, source, hasFallback: false };
}

function generateConventionsSection(cwd) {
  const conventionsPath = path.join(cwd, '.planning', 'codebase', 'CONVENTIONS.md');
  const content = safeReadFile(conventionsPath);
  if (!content) {
    return { content: CLAUDE_MD_FALLBACKS.conventions, source: 'CONVENTIONS.md', hasFallback: true };
  }
  const lines = content.split('\n');
  const summaryLines = [];
  for (const line of lines) {
    if (line.startsWith('#')) { if (!line.startsWith('# ')) summaryLines.push(line); continue; }
    if (line.startsWith('- ') || line.startsWith('* ') || line.startsWith('|')) summaryLines.push(line);
  }
  const summary = summaryLines.length > 0 ? summaryLines.join('\n') : content.trim();
  return { content: summary, source: 'CONVENTIONS.md', hasFallback: false };
}

function generateArchitectureSection(cwd) {
  const architecturePath = path.join(cwd, '.planning', 'codebase', 'ARCHITECTURE.md');
  const content = safeReadFile(architecturePath);
  if (!content) {
    return { content: CLAUDE_MD_FALLBACKS.architecture, source: 'ARCHITECTURE.md', hasFallback: true };
  }
  const lines = content.split('\n');
  const summaryLines = [];
  for (const line of lines) {
    if (line.startsWith('#')) { if (!line.startsWith('# ')) summaryLines.push(line); continue; }
    if (line.startsWith('- ') || line.startsWith('* ') || line.startsWith('|') || line.startsWith('```')) summaryLines.push(line);
  }
  const summary = summaryLines.length > 0 ? summaryLines.join('\n') : content.trim();
  return { content: summary, source: 'ARCHITECTURE.md', hasFallback: false };
}

// ─── Commands ─────────────────────────────────────────────────────────────────

function cmdWriteProfile(cwd, options, raw) {
  if (!options.input) {
    error('--input <analysis-json-path> is required');
  }

  let analysisPath = options.input;
  if (!path.isAbsolute(analysisPath)) analysisPath = path.join(cwd, analysisPath);
  if (!fs.existsSync(analysisPath)) error(`Analysis file not found: ${analysisPath}`);

  let analysis;
  try {
    analysis = JSON.parse(fs.readFileSync(analysisPath, 'utf-8'));
  } catch (err) {
    error(`Failed to parse analysis JSON: ${err.message}`);
  }

  if (!analysis.dimensions || typeof analysis.dimensions !== 'object') {
    error('Analysis JSON must contain a "dimensions" object');
  }
  if (!analysis.profile_version) {
    error('Analysis JSON must contain "profile_version"');
  }

  const SENSITIVE_PATTERNS = [
    /sk-[a-zA-Z0-9]{20,}/g,
    /Bearer\s+[a-zA-Z0-9._-]+/gi,
    /password\s*[:=]\s*\S+/gi,
    /secret\s*[:=]\s*\S+/gi,
    /token\s*[:=]\s*\S+/gi,
    /api[_-]?key\s*[:=]\s*\S+/gi,
    /\/Users\/[a-zA-Z0-9._-]+\//g,
    /\/home\/[a-zA-Z0-9._-]+\//g,
    /ghp_[a-zA-Z0-9]{36}/g,
    /gho_[a-zA-Z0-9]{36}/g,
    /xoxb-[a-zA-Z0-9-]+/g,
  ];

  let redactedCount = 0;

  function redactSensitive(text) {
    if (typeof text !== 'string') return text;
    let result = text;
    for (const pattern of SENSITIVE_PATTERNS) {
      pattern.lastIndex = 0;
      const matches = result.match(pattern);
      if (matches) {
        redactedCount += matches.length;
        result = result.replace(pattern, '[REDACTED]');
      }
    }
    return result;
  }

  for (const dimKey of Object.keys(analysis.dimensions)) {
    const dim = analysis.dimensions[dimKey];
    if (dim.evidence && Array.isArray(dim.evidence)) {
      for (let i = 0; i < dim.evidence.length; i++) {
        const ev = dim.evidence[i];
        if (ev.quote) ev.quote = redactSensitive(ev.quote);
        if (ev.example) ev.example = redactSensitive(ev.example);
        if (ev.signal) ev.signal = redactSensitive(ev.signal);
      }
    }
  }

  if (redactedCount > 0) {
    process.stderr.write(`Sensitive content redacted: ${redactedCount} pattern(s) removed from evidence quotes\n`);
  }

  const templatePath = path.join(__dirname, '..', '..', 'templates', 'user-profile.md');
  if (!fs.existsSync(templatePath)) error(`Template not found: ${templatePath}`);
  let template = fs.readFileSync(templatePath, 'utf-8');

  const dimensionLabels = {
    communication_style: 'Communication',
    decision_speed: 'Decisions',
    explanation_depth: 'Explanations',
    debugging_approach: 'Debugging',
    ux_philosophy: 'UX Philosophy',
    vendor_philosophy: 'Vendor Philosophy',
    frustration_triggers: 'Frustration Triggers',
    learning_style: 'Learning Style',
  };

  const summaryLines = [];
  let highCount = 0, mediumCount = 0, lowCount = 0, dimensionsScored = 0;

  for (const dimKey of DIMENSION_KEYS) {
    const dim = analysis.dimensions[dimKey];
    if (!dim) continue;
    const conf = (dim.confidence || '').toUpperCase();
    if (conf === 'HIGH' || conf === 'MEDIUM' || conf === 'LOW') dimensionsScored++;
    if (conf === 'HIGH') {
      highCount++;
      if (dim.claude_instruction) summaryLines.push(`- **${dimensionLabels[dimKey] || dimKey}:** ${dim.claude_instruction} (HIGH)`);
    } else if (conf === 'MEDIUM') {
      mediumCount++;
      if (dim.claude_instruction) summaryLines.push(`- **${dimensionLabels[dimKey] || dimKey}:** ${dim.claude_instruction} (MEDIUM)`);
    } else if (conf === 'LOW') {
      lowCount++;
    }
  }

  const summaryInstructions = summaryLines.length > 0
    ? summaryLines.join('\n')
    : '- No high or medium confidence dimensions scored yet.';

  template = template.replace(/\{\{generated_at\}\}/g, new Date().toISOString());
  template = template.replace(/\{\{data_source\}\}/g, analysis.data_source || 'session_analysis');
  template = template.replace(/\{\{projects_list\}\}/g, (analysis.projects_list || analysis.projects_analyzed || []).join(', '));
  template = template.replace(/\{\{message_count\}\}/g, String(analysis.message_count || analysis.messages_analyzed || 0));
  template = template.replace(/\{\{summary_instructions\}\}/g, summaryInstructions);
  template = template.replace(/\{\{profile_version\}\}/g, analysis.profile_version);
  template = template.replace(/\{\{projects_count\}\}/g, String((analysis.projects_list || analysis.projects_analyzed || []).length));
  template = template.replace(/\{\{dimensions_scored\}\}/g, String(dimensionsScored));
  template = template.replace(/\{\{high_confidence_count\}\}/g, String(highCount));
  template = template.replace(/\{\{medium_confidence_count\}\}/g, String(mediumCount));
  template = template.replace(/\{\{low_confidence_count\}\}/g, String(lowCount));
  template = template.replace(/\{\{sensitive_excluded_summary\}\}/g,
    redactedCount > 0 ? `${redactedCount} pattern(s) redacted` : 'None detected');

  for (const dimKey of DIMENSION_KEYS) {
    const dim = analysis.dimensions[dimKey] || {};
    const rating = dim.rating || 'UNSCORED';
    const confidence = dim.confidence || 'UNSCORED';
    const instruction = dim.claude_instruction || 'No strong preference detected. Ask the developer when this dimension is relevant.';
    const summary = dim.summary || '';

    let evidenceBlock = '';
    const evidenceArr = dim.evidence_quotes || dim.evidence;
    if (evidenceArr && Array.isArray(evidenceArr) && evidenceArr.length > 0) {
      const evidenceLines = evidenceArr.map(ev => {
        const signal = ev.signal || ev.pattern || '';
        const quote = ev.quote || ev.example || '';
        const project = ev.project || 'unknown';
        return `- **Signal:** ${signal} / **Example:** "${quote}" -- project: ${project}`;
      });
      evidenceBlock = evidenceLines.join('\n');
    } else {
      evidenceBlock = '- No evidence collected for this dimension.';
    }

    template = template.replace(new RegExp(`\\{\\{${dimKey}\\.rating\\}\\}`, 'g'), rating);
    template = template.replace(new RegExp(`\\{\\{${dimKey}\\.confidence\\}\\}`, 'g'), confidence);
    template = template.replace(new RegExp(`\\{\\{${dimKey}\\.claude_instruction\\}\\}`, 'g'), instruction);
    template = template.replace(new RegExp(`\\{\\{${dimKey}\\.summary\\}\\}`, 'g'), summary);
    template = template.replace(new RegExp(`\\{\\{${dimKey}\\.evidence\\}\\}`, 'g'), evidenceBlock);
  }

  let outputPath = options.output;
  if (!outputPath) {
    outputPath = path.join(os.homedir(), '.claude', 'get-shit-done', 'USER-PROFILE.md');
  } else if (!path.isAbsolute(outputPath)) {
    outputPath = path.join(cwd, outputPath);
  }

  fs.mkdirSync(path.dirname(outputPath), { recursive: true });
  fs.writeFileSync(outputPath, template, 'utf-8');

  const result = {
    profile_path: outputPath,
    dimensions_scored: dimensionsScored,
    high_confidence: highCount,
    medium_confidence: mediumCount,
    low_confidence: lowCount,
    sensitive_redacted: redactedCount,
    source: analysis.data_source || 'session_analysis',
  };

  output(result, raw);
}

function cmdProfileQuestionnaire(options, raw) {
  if (!options.answers) {
    const questionsOutput = {
      mode: 'interactive',
      questions: PROFILING_QUESTIONS.map(q => ({
        dimension: q.dimension,
        header: q.header,
        context: q.context,
        question: q.question,
        options: q.options.map(o => ({ label: o.label, value: o.value })),
      })),
    };
    output(questionsOutput, raw);
    return;
  }

  const answerValues = options.answers.split(',').map(a => a.trim());
  if (answerValues.length !== PROFILING_QUESTIONS.length) {
    error(`Expected ${PROFILING_QUESTIONS.length} answers (comma-separated), got ${answerValues.length}`);
  }

  const analysis = {
    profile_version: '1.0',
    analyzed_at: new Date().toISOString(),
    data_source: 'questionnaire',
    projects_analyzed: [],
    messages_analyzed: 0,
    message_threshold: 'questionnaire',
    sensitive_excluded: [],
    dimensions: {},
  };

  for (let i = 0; i < PROFILING_QUESTIONS.length; i++) {
    const question = PROFILING_QUESTIONS[i];
    const answerValue = answerValues[i];
    const selectedOption = question.options.find(o => o.value === answerValue);

    if (!selectedOption) {
      error(`Invalid answer "${answerValue}" for ${question.dimension}. Valid values: ${question.options.map(o => o.value).join(', ')}`);
    }

    const ambiguous = isAmbiguousAnswer(question.dimension, answerValue);

    analysis.dimensions[question.dimension] = {
      rating: selectedOption.rating,
      confidence: ambiguous ? 'LOW' : 'MEDIUM',
      evidence_count: 1,
      cross_project_consistent: null,
      evidence: [{
        signal: 'Self-reported via questionnaire',
        quote: selectedOption.label,
        project: 'N/A (questionnaire)',
      }],
      summary: `Developer self-reported as ${selectedOption.rating} for ${question.header.toLowerCase()}.`,
      claude_instruction: generateClaudeInstruction(question.dimension, selectedOption.rating),
    };
  }

  output(analysis, raw);
}

function cmdGenerateDevPreferences(cwd, options, raw) {
  if (!options.analysis) error('--analysis <path> is required');

  let analysisPath = options.analysis;
  if (!path.isAbsolute(analysisPath)) analysisPath = path.join(cwd, analysisPath);
  if (!fs.existsSync(analysisPath)) error(`Analysis file not found: ${analysisPath}`);

  let analysis;
  try {
    analysis = JSON.parse(fs.readFileSync(analysisPath, 'utf-8'));
  } catch (err) {
    error(`Failed to parse analysis JSON: ${err.message}`);
  }

  if (!analysis.dimensions || typeof analysis.dimensions !== 'object') {
    error('Analysis JSON must contain a "dimensions" object');
  }

  const devPrefLabels = {
    communication_style: 'Communication',
    decision_speed: 'Decision Support',
    explanation_depth: 'Explanations',
    debugging_approach: 'Debugging',
    ux_philosophy: 'UX Approach',
    vendor_philosophy: 'Library & Tool Choices',
    frustration_triggers: 'Boundaries',
    learning_style: 'Learning Support',
  };

  const templatePath = path.join(__dirname, '..', '..', 'templates', 'dev-preferences.md');
  if (!fs.existsSync(templatePath)) error(`Template not found: ${templatePath}`);
  let template = fs.readFileSync(templatePath, 'utf-8');

  const directiveLines = [];
  const dimensionsIncluded = [];

  for (const dimKey of DIMENSION_KEYS) {
    const dim = analysis.dimensions[dimKey];
    if (!dim) continue;
    const label = devPrefLabels[dimKey] || dimKey;
    const confidence = dim.confidence || 'UNSCORED';
    let instruction = dim.claude_instruction;
    if (!instruction) {
      const lookup = CLAUDE_INSTRUCTIONS[dimKey];
      if (lookup && dim.rating && lookup[dim.rating]) {
        instruction = lookup[dim.rating];
      } else {
        instruction = `Adapt to this developer's ${dimKey.replace(/_/g, ' ')} preference.`;
      }
    }
    directiveLines.push(`### ${label}\n${instruction} (${confidence} confidence)\n`);
    dimensionsIncluded.push(dimKey);
  }

  const directivesBlock = directiveLines.join('\n').trim();
  template = template.replace(/\{\{behavioral_directives\}\}/g, directivesBlock);
  template = template.replace(/\{\{generated_at\}\}/g, new Date().toISOString());
  template = template.replace(/\{\{data_source\}\}/g, analysis.data_source || 'session_analysis');

  let stackBlock;
  if (analysis.data_source === 'questionnaire') {
    stackBlock = 'Stack preferences not available (questionnaire-only profile). Run `/gsd:profile-user --refresh` with session data to populate.';
  } else if (options.stack) {
    stackBlock = options.stack;
  } else {
    stackBlock = 'Stack preferences will be populated from session analysis.';
  }
  template = template.replace(/\{\{stack_preferences\}\}/g, stackBlock);

  let outputPath = options.output;
  if (!outputPath) {
    outputPath = path.join(os.homedir(), '.claude', 'commands', 'gsd', 'dev-preferences.md');
  } else if (!path.isAbsolute(outputPath)) {
    outputPath = path.join(cwd, outputPath);
  }

  fs.mkdirSync(path.dirname(outputPath), { recursive: true });
  fs.writeFileSync(outputPath, template, 'utf-8');

  const result = {
    command_path: outputPath,
    command_name: '/gsd:dev-preferences',
    dimensions_included: dimensionsIncluded,
    source: analysis.data_source || 'session_analysis',
  };

  output(result, raw);
}

function cmdGenerateClaudeProfile(cwd, options, raw) {
  if (!options.analysis) error('--analysis <path> is required');

  let analysisPath = options.analysis;
  if (!path.isAbsolute(analysisPath)) analysisPath = path.join(cwd, analysisPath);
  if (!fs.existsSync(analysisPath)) error(`Analysis file not found: ${analysisPath}`);

  let analysis;
  try {
    analysis = JSON.parse(fs.readFileSync(analysisPath, 'utf-8'));
  } catch (err) {
    error(`Failed to parse analysis JSON: ${err.message}`);
  }

  if (!analysis.dimensions || typeof analysis.dimensions !== 'object') {
    error('Analysis JSON must contain a "dimensions" object');
  }

  const profileLabels = {
    communication_style: 'Communication',
    decision_speed: 'Decisions',
    explanation_depth: 'Explanations',
    debugging_approach: 'Debugging',
    ux_philosophy: 'UX Philosophy',
    vendor_philosophy: 'Vendor Choices',
    frustration_triggers: 'Frustrations',
    learning_style: 'Learning',
  };

  const dataSource = analysis.data_source || 'session_analysis';
  const tableRows = [];
  const directiveLines = [];
  const dimensionsIncluded = [];

  for (const dimKey of DIMENSION_KEYS) {
    const dim = analysis.dimensions[dimKey];
    if (!dim) continue;
    const label = profileLabels[dimKey] || dimKey;
    const rating = dim.rating || 'UNSCORED';
    const confidence = dim.confidence || 'UNSCORED';
    tableRows.push(`| ${label} | ${rating} | ${confidence} |`);
    let instruction = dim.claude_instruction;
    if (!instruction) {
      const lookup = CLAUDE_INSTRUCTIONS[dimKey];
      if (lookup && dim.rating && lookup[dim.rating]) {
        instruction = lookup[dim.rating];
      } else {
        instruction = `Adapt to this developer's ${dimKey.replace(/_/g, ' ')} preference.`;
      }
    }
    directiveLines.push(`- **${label}:** ${instruction}`);
    dimensionsIncluded.push(dimKey);
  }

  const sectionLines = [
    '<!-- GSD:profile-start -->',
    '## Developer Profile',
    '',
    `> Generated by GSD from ${dataSource}. Run \`/gsd:profile-user --refresh\` to update.`,
    '',
    '| Dimension | Rating | Confidence |',
    '|-----------|--------|------------|',
    ...tableRows,
    '',
    '**Directives:**',
    ...directiveLines,
    '<!-- GSD:profile-end -->',
  ];

  const sectionContent = sectionLines.join('\n');

  let targetPath;
  if (options.global) {
    targetPath = path.join(os.homedir(), '.claude', 'CLAUDE.md');
  } else if (options.output) {
    targetPath = path.isAbsolute(options.output) ? options.output : path.join(cwd, options.output);
  } else {
    targetPath = path.join(cwd, 'CLAUDE.md');
  }

  let action;

  if (fs.existsSync(targetPath)) {
    let existingContent = fs.readFileSync(targetPath, 'utf-8');
    const startMarker = '<!-- GSD:profile-start -->';
    const endMarker = '<!-- GSD:profile-end -->';
    const startIdx = existingContent.indexOf(startMarker);
    const endIdx = existingContent.indexOf(endMarker);

    if (startIdx !== -1 && endIdx !== -1) {
      const before = existingContent.substring(0, startIdx);
      const after = existingContent.substring(endIdx + endMarker.length);
      existingContent = before + sectionContent + after;
      action = 'updated';
    } else {
      existingContent = existingContent.trimEnd() + '\n\n' + sectionContent + '\n';
      action = 'appended';
    }
    fs.writeFileSync(targetPath, existingContent, 'utf-8');
  } else {
    fs.mkdirSync(path.dirname(targetPath), { recursive: true });
    fs.writeFileSync(targetPath, sectionContent + '\n', 'utf-8');
    action = 'created';
  }

  const result = {
    claude_md_path: targetPath,
    action,
    dimensions_included: dimensionsIncluded,
    is_global: !!options.global,
  };

  output(result, raw);
}

function cmdGenerateClaudeMd(cwd, options, raw) {
  const MANAGED_SECTIONS = ['project', 'stack', 'conventions', 'architecture'];
  const generators = {
    project: generateProjectSection,
    stack: generateStackSection,
    conventions: generateConventionsSection,
    architecture: generateArchitectureSection,
  };
  const sectionHeadings = {
    project: '## Project',
    stack: '## Technology Stack',
    conventions: '## Conventions',
    architecture: '## Architecture',
  };

  const generated = {};
  const sectionsGenerated = [];
  const sectionsFallback = [];
  const sectionsSkipped = [];

  for (const name of MANAGED_SECTIONS) {
    const gen = generators[name](cwd);
    generated[name] = gen;
    if (gen.hasFallback) {
      sectionsFallback.push(name);
    } else {
      sectionsGenerated.push(name);
    }
  }

  let outputPath = options.output;
  if (!outputPath) {
    outputPath = path.join(cwd, 'CLAUDE.md');
  } else if (!path.isAbsolute(outputPath)) {
    outputPath = path.join(cwd, outputPath);
  }

  let existingContent = safeReadFile(outputPath);
  let action;

  if (existingContent === null) {
    const sections = [];
    for (const name of MANAGED_SECTIONS) {
      const gen = generated[name];
      const heading = sectionHeadings[name];
      const body = `${heading}\n\n${gen.content}`;
      sections.push(buildSection(name, gen.source, body));
    }
    sections.push('');
    sections.push(CLAUDE_MD_PROFILE_PLACEHOLDER);
    existingContent = sections.join('\n\n') + '\n';
    action = 'created';
    fs.mkdirSync(path.dirname(outputPath), { recursive: true });
    fs.writeFileSync(outputPath, existingContent, 'utf-8');
  } else {
    action = 'updated';
    let fileContent = existingContent;

    for (const name of MANAGED_SECTIONS) {
      const gen = generated[name];
      const heading = sectionHeadings[name];
      const body = `${heading}\n\n${gen.content}`;
      const fullSection = buildSection(name, gen.source, body);
      const hasMarkers = fileContent.indexOf(`<!-- GSD:${name}-start`) !== -1;

      // In --auto mode, never overwrite a section the user has hand-edited.
      if (hasMarkers && options.auto) {
        const expectedBody = `${heading}\n\n${gen.content}`;
        if (detectManualEdit(fileContent, name, expectedBody)) {
          sectionsSkipped.push(name);
          const genIdx = sectionsGenerated.indexOf(name);
          if (genIdx !== -1) sectionsGenerated.splice(genIdx, 1);
          const fbIdx = sectionsFallback.indexOf(name);
          if (fbIdx !== -1) sectionsFallback.splice(fbIdx, 1);
          continue;
        }
      }
      const result = updateSection(fileContent, name, fullSection);
      fileContent = result.content;
    }

    if (!options.auto && fileContent.indexOf('<!-- GSD:profile-start') === -1) {
      fileContent = fileContent.trimEnd() + '\n\n' + CLAUDE_MD_PROFILE_PLACEHOLDER + '\n';
    }

    fs.writeFileSync(outputPath, fileContent, 'utf-8');
  }

  const finalContent = safeReadFile(outputPath);
  let profileStatus;
  if (finalContent && finalContent.indexOf('<!-- GSD:profile-start') !== -1) {
    if (action === 'created' || existingContent.indexOf('<!-- GSD:profile-start') === -1) {
      profileStatus = 'placeholder_added';
    } else {
      profileStatus = 'exists';
    }
  } else {
    profileStatus = 'already_present';
  }

  const genCount = sectionsGenerated.length;
  const totalManaged = MANAGED_SECTIONS.length;
  let message = `Generated ${genCount}/${totalManaged} sections.`;
  if (sectionsFallback.length > 0) message += ` Fallback: ${sectionsFallback.join(', ')}.`;
  if (sectionsSkipped.length > 0) message += ` Skipped (manually edited): ${sectionsSkipped.join(', ')}.`;
  if (profileStatus === 'placeholder_added') message += ' Run /gsd:profile-user to unlock Developer Profile.';

  const result = {
    claude_md_path: outputPath,
    action,
    sections_generated: sectionsGenerated,
    sections_fallback: sectionsFallback,
    sections_skipped: sectionsSkipped,
    sections_total: totalManaged,
    profile_status: profileStatus,
    message,
  };

  output(result, raw);
}

module.exports = {
  cmdWriteProfile,
  cmdProfileQuestionnaire,
  cmdGenerateDevPreferences,
  cmdGenerateClaudeProfile,
  cmdGenerateClaudeMd,
  PROFILING_QUESTIONS,
  CLAUDE_INSTRUCTIONS,
};
537
get-shit-done/bin/lib/profile-pipeline.cjs
Normal file
@@ -0,0 +1,537 @@
/**
 * Profile Pipeline — session scanning, message extraction, and sampling
 *
 * Reads Claude Code session history (read-only) to extract user messages
 * for behavioral profiling. Three commands:
 * - scan-sessions: list all projects and sessions
 * - extract-messages: extract user messages from a specific project
 * - profile-sample: multi-project sampling with recency weighting
 */

const fs = require('fs');
const path = require('path');
const os = require('os');
const readline = require('readline');
const { output, error, safeReadFile } = require('./core.cjs');

// ─── Session I/O Helpers ──────────────────────────────────────────────────────

function getSessionsDir(overridePath) {
  const dir = overridePath || path.join(os.homedir(), '.claude', 'projects');
  if (!fs.existsSync(dir)) return null;
  return dir;
}

function scanProjectDir(projectDirPath) {
  const entries = fs.readdirSync(projectDirPath);
  const sessions = [];

  for (const entry of entries) {
    if (!entry.endsWith('.jsonl')) continue;
    const sessionId = entry.replace('.jsonl', '');
    const filePath = path.join(projectDirPath, entry);
    const stat = fs.statSync(filePath);

    sessions.push({
      sessionId,
      filePath,
      size: stat.size,
      modified: stat.mtime,
    });
  }

  sessions.sort((a, b) => b.modified - a.modified);
  return sessions;
}

function readSessionIndex(projectDirPath) {
  try {
    const indexPath = path.join(projectDirPath, 'sessions-index.json');
    const raw = fs.readFileSync(indexPath, 'utf-8');
    const parsed = JSON.parse(raw);
    const entries = new Map();
    for (const entry of (parsed.entries || [])) {
      if (entry.sessionId) {
        entries.set(entry.sessionId, entry);
      }
    }
    return { originalPath: parsed.originalPath || null, entries };
  } catch {
    return { originalPath: null, entries: new Map() };
  }
}

function getProjectName(projectDirName, indexData, firstRecordCwd) {
  if (indexData && indexData.originalPath) {
    return path.basename(indexData.originalPath);
  }
  if (firstRecordCwd) {
    return path.basename(firstRecordCwd);
  }
  return projectDirName;
}

function formatBytes(bytes) {
  if (bytes < 1024) return `${bytes} B`;
  if (bytes < 1048576) return `${(bytes / 1024).toFixed(1)} KB`;
  if (bytes < 1073741824) return `${(bytes / 1048576).toFixed(1)} MB`;
  return `${(bytes / 1073741824).toFixed(1)} GB`;
}

function formatProjectTable(projects) {
  let out = '';
  out += 'Project'.padEnd(35) + 'Sessions'.padEnd(10) + 'Size'.padEnd(10) + 'Last Active\n';
  out += '-'.repeat(75) + '\n';
  for (const p of projects) {
    const name = p.name.length > 33 ? p.name.substring(0, 30) + '...' : p.name;
    out += name.padEnd(35) + String(p.sessionCount).padEnd(10) +
      p.totalSizeHuman.padEnd(10) + p.lastActive + '\n';
  }
  return out;
}

function formatSessionTable(sessions) {
  let out = '';
  out += ' Session ID'.padEnd(42) + 'Size'.padEnd(10) + 'Modified\n';
  out += ' ' + '-'.repeat(70) + '\n';
  for (const s of sessions) {
    const id = s.sessionId.length > 38 ? s.sessionId.substring(0, 35) + '...' : s.sessionId;
    out += ' ' + id.padEnd(40) + formatBytes(s.size).padEnd(10) +
      new Date(s.modified).toISOString().replace('T', ' ').substring(0, 19) + '\n';
  }
  return out;
}

// ─── Message Extraction Helpers ───────────────────────────────────────────────

function isGenuineUserMessage(record) {
  if (record.type !== 'user') return false;
  if (record.userType !== 'external') return false;
  if (record.isMeta === true) return false;
  if (record.isSidechain === true) return false;
  const content = record.message?.content;
  if (typeof content !== 'string') return false;
  if (content.length === 0) return false;
  if (content.startsWith('<local-command')) return false;
  if (content.startsWith('<command-')) return false;
  if (content.startsWith('<task-notification')) return false;
  if (content.startsWith('<local-command-stdout')) return false;
  return true;
}

function truncateContent(content, maxLen = 2000) {
  if (content.length <= maxLen) return content;
  return content.substring(0, maxLen) + '... [truncated]';
}

async function streamExtractMessages(filePath, filterFn, maxMessages = 300) {
  const rl = readline.createInterface({
    input: fs.createReadStream(filePath),
    crlfDelay: Infinity,
    terminal: false,
  });

  const messages = [];
  const sessionId = path.basename(filePath, '.jsonl');

  for await (const line of rl) {
    if (messages.length >= maxMessages) break;
    let record;
    try {
      record = JSON.parse(line);
    } catch {
      continue;
    }
    if (!filterFn(record)) continue;
    messages.push({
      sessionId,
      projectPath: record.cwd || null,
      timestamp: record.timestamp || null,
      content: truncateContent(record.message.content),
    });
  }

  return messages;
}

// ─── Commands ─────────────────────────────────────────────────────────────────

async function cmdScanSessions(overridePath, options, raw) {
  const sessionsDir = getSessionsDir(overridePath);
  if (!sessionsDir) {
    const searchedPath = overridePath || '~/.claude/projects';
    error(`No Claude Code sessions found at ${searchedPath}.${overridePath ? '' : ' Is Claude Code installed?'}`);
  }

  process.stderr.write('Reading your session history (read-only, nothing is modified or sent anywhere)...\n');

  let projectDirs;
  try {
    projectDirs = fs.readdirSync(sessionsDir).filter(entry => {
      const fullPath = path.join(sessionsDir, entry);
      try {
        return fs.statSync(fullPath).isDirectory();
      } catch {
        return false;
      }
    });
  } catch (err) {
    error(`Cannot read sessions directory: ${err.message}`);
  }

  const projects = [];

  for (const dirName of projectDirs) {
    const projectPath = path.join(sessionsDir, dirName);
    const sessions = scanProjectDir(projectPath);
    if (sessions.length === 0) continue;

    const indexData = readSessionIndex(projectPath);
    const projectName = getProjectName(dirName, indexData);

    if (indexData.entries.size === 0 && !options.json) {
      process.stderr.write(`Index not found for ${projectName}, scanning directory...\n`);
    }

    const totalSize = sessions.reduce((sum, s) => sum + s.size, 0);
    const lastActive = sessions[0].modified.toISOString();
    const oldest = sessions[sessions.length - 1].modified.toISOString();
    const newest = sessions[0].modified.toISOString();

    const project = {
      name: projectName,
      directory: dirName,
      sessionCount: sessions.length,
      totalSize,
      totalSizeHuman: formatBytes(totalSize),
      lastActive: lastActive.replace('T', ' ').substring(0, 19),
      dateRange: { first: oldest, last: newest },
    };

    if (options.verbose) {
      project.sessions = sessions.map(s => {
        const indexed = indexData.entries.get(s.sessionId);
        const session = {
          sessionId: s.sessionId,
          size: s.size,
          sizeHuman: formatBytes(s.size),
          modified: s.modified.toISOString(),
        };
        if (indexed) {
          if (indexed.summary) session.summary = indexed.summary;
          if (indexed.messageCount !== undefined) session.messageCount = indexed.messageCount;
          if (indexed.created) session.created = indexed.created;
        }
        return session;
      });
    }

    projects.push(project);
  }

  projects.sort((a, b) => b.dateRange.last.localeCompare(a.dateRange.last));

  if (options.json || raw) {
    output(projects, raw);
  } else {
    process.stdout.write('\n' + formatProjectTable(projects));
    if (options.verbose) {
      for (const p of projects) {
        process.stdout.write(`\n ${p.name} (${p.sessionCount} sessions):\n`);
        if (p.sessions) {
          process.stdout.write(formatSessionTable(p.sessions));
        }
      }
    }
    process.stdout.write(`\nTotal: ${projects.length} projects\n`);
    process.exit(0);
  }
}

async function cmdExtractMessages(projectArg, options, raw, overridePath) {
  const sessionsDir = getSessionsDir(overridePath);
  if (!sessionsDir) {
    const searchedPath = overridePath || '~/.claude/projects';
    error(`No Claude Code sessions found at ${searchedPath}.${overridePath ? '' : ' Is Claude Code installed?'}`);
  }

  let projectDirs;
  try {
    projectDirs = fs.readdirSync(sessionsDir).filter(entry => {
      const fullPath = path.join(sessionsDir, entry);
      try {
        return fs.statSync(fullPath).isDirectory();
      } catch {
        return false;
      }
    });
  } catch (err) {
    error(`Cannot read sessions directory: ${err.message}`);
  }

  let matchedDir = null;
  let matchedName = null;

  for (const dirName of projectDirs) {
    if (dirName === projectArg) {
      matchedDir = dirName;
      break;
    }
  }

  if (!matchedDir) {
    const lowerArg = projectArg.toLowerCase();
    const matches = projectDirs.filter(d => d.toLowerCase().includes(lowerArg));
    if (matches.length === 1) {
      matchedDir = matches[0];
    } else if (matches.length > 1) {
      const exactNameMatches = [];
      for (const dirName of matches) {
        const indexData = readSessionIndex(path.join(sessionsDir, dirName));
        const pName = getProjectName(dirName, indexData);
        if (pName.toLowerCase() === lowerArg) {
          exactNameMatches.push({ dirName, name: pName });
        }
      }
      if (exactNameMatches.length === 1) {
        matchedDir = exactNameMatches[0].dirName;
        matchedName = exactNameMatches[0].name;
      } else {
        const names = matches.map(d => {
          const idx = readSessionIndex(path.join(sessionsDir, d));
          return ` - ${getProjectName(d, idx)} (${d})`;
        });
        error(`Multiple projects match "${projectArg}":\n${names.join('\n')}\nBe more specific.`);
      }
    }
  }

  if (!matchedDir) {
    const available = projectDirs.map(d => {
      const idx = readSessionIndex(path.join(sessionsDir, d));
      return ` - ${getProjectName(d, idx)}`;
    });
    error(`No project matching "${projectArg}". Available projects:\n${available.join('\n')}`);
  }

  const projectPath = path.join(sessionsDir, matchedDir);
  const indexData = readSessionIndex(projectPath);
  const projectName = matchedName || getProjectName(matchedDir, indexData);

  process.stderr.write('Reading your session history (read-only, nothing is modified or sent anywhere)...\n');

  let sessions = scanProjectDir(projectPath);

  if (options.sessionId) {
    sessions = sessions.filter(s => s.sessionId === options.sessionId);
    if (sessions.length === 0) {
      error(`Session "${options.sessionId}" not found in project "${projectName}".`);
    }
  }

  if (options.limit && options.limit > 0) {
    sessions = sessions.slice(0, options.limit);
  }

  const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'gsd-pipeline-'));
  const outputPath = path.join(tmpDir, 'extracted-messages.jsonl');

  let sessionsProcessed = 0;
  let sessionsSkipped = 0;
  let messagesExtracted = 0;
  let messagesTruncated = 0;
  const total = sessions.length;
  const batchLimit = 300;

  for (let i = 0; i < sessions.length; i++) {
    if (messagesExtracted >= batchLimit) break;

    const session = sessions[i];
    process.stderr.write(`\rProcessing session ${i + 1}/${total}...`);

    try {
      const remaining = batchLimit - messagesExtracted;
      const msgs = await streamExtractMessages(session.filePath, isGenuineUserMessage, remaining);
      for (const msg of msgs) {
        fs.appendFileSync(outputPath, JSON.stringify(msg) + '\n');
        messagesExtracted++;
        if (msg.content.endsWith('... [truncated]')) {
          messagesTruncated++;
        }
      }
      sessionsProcessed++;
    } catch (err) {
      sessionsSkipped++;
      process.stderr.write(`\nWarning: Skipped session ${session.sessionId}: ${err.message}\n`);
    }
  }

  process.stderr.write('\r' + ' '.repeat(60) + '\r');

  const result = {
    output_file: outputPath,
    project: projectName,
    sessions_processed: sessionsProcessed,
    sessions_skipped: sessionsSkipped,
    messages_extracted: messagesExtracted,
    messages_truncated: messagesTruncated,
  };

  if (sessionsSkipped > 0 && sessionsProcessed > 0) {
    process.stdout.write(JSON.stringify(result, null, 2));
    process.exit(2);
  } else if (sessionsProcessed === 0 && sessionsSkipped > 0) {
    process.stdout.write(JSON.stringify(result, null, 2));
    process.exit(1);
  } else {
    output(result, raw);
  }
}

async function cmdProfileSample(overridePath, options, raw) {
  const sessionsDir = getSessionsDir(overridePath);
  if (!sessionsDir) {
    const searchedPath = overridePath || '~/.claude/projects';
    error(`No Claude Code sessions found at ${searchedPath}.${overridePath ? '' : ' Is Claude Code installed?'}`);
  }

  process.stderr.write('Reading your session history (read-only, nothing is modified or sent anywhere)...\n');

  const limit = options.limit || 150;
  const maxChars = options.maxChars || 500;

  let projectDirs;
  try {
    projectDirs = fs.readdirSync(sessionsDir).filter(entry => {
      const fullPath = path.join(sessionsDir, entry);
      try {
        return fs.statSync(fullPath).isDirectory();
      } catch {
        return false;
      }
    });
  } catch (err) {
    error(`Cannot read sessions directory: ${err.message}`);
  }

  if (projectDirs.length === 0) {
    error('No project directories found in sessions directory.');
  }

  const projectMeta = [];
  for (const dirName of projectDirs) {
    const projectPath = path.join(sessionsDir, dirName);
    const sessions = scanProjectDir(projectPath);
    if (sessions.length === 0) continue;
    const indexData = readSessionIndex(projectPath);
    const projectName = getProjectName(dirName, indexData);
    const lastActive = sessions[0].modified;
    projectMeta.push({ dirName, projectPath, sessions, projectName, lastActive });
  }

  projectMeta.sort((a, b) => b.lastActive - a.lastActive);

  const projectCount = projectMeta.length;
  if (projectCount === 0) {
    error('No projects with sessions found.');
  }

  const perProjectCap = options.maxPerProject || Math.max(5, Math.floor(limit / projectCount));

  const recencyThreshold = Date.now() - 30 * 24 * 60 * 60 * 1000;
  const allMessages = [];
  let skippedContextDumps = 0;
  const projectBreakdown = [];

  for (const proj of projectMeta) {
    if (allMessages.length >= limit) break;

    const cappedSessions = proj.sessions.slice(0, perProjectCap);

    let projectMessages = 0;
    let projectSessionsUsed = 0;

    for (const session of cappedSessions) {
      if (allMessages.length >= limit) break;

      const isRecent = session.modified.getTime() >= recencyThreshold;
      const perSessionMax = isRecent ? 10 : 3;

      const remaining = Math.min(perSessionMax, limit - allMessages.length);

      try {
        const msgs = await streamExtractMessages(session.filePath, isGenuineUserMessage, remaining);
        let sessionUsed = false;

        for (const msg of msgs) {
          if (allMessages.length >= limit) break;

          const content = msg.content || '';
          if (content.startsWith('This session is being continued')) {
            skippedContextDumps++;
            continue;
          }

          const lines = content.split('\n').filter(l => l.trim().length > 0);
          if (lines.length > 3) {
            const logPattern = /^\[?(DEBUG|INFO|WARN|ERROR|LOG)\]?/i;
            const timestampPattern = /^\d{4}-\d{2}-\d{2}/;
            const logLines = lines.filter(l => logPattern.test(l.trim()) || timestampPattern.test(l.trim()));
            if (logLines.length / lines.length > 0.8) {
              skippedContextDumps++;
              continue;
            }
          }

          const truncated = truncateContent(content, maxChars);

          allMessages.push({
            sessionId: msg.sessionId,
            projectName: proj.projectName,
            projectPath: msg.projectPath,
            timestamp: msg.timestamp,
            content: truncated,
          });

          projectMessages++;
          sessionUsed = true;
        }
        if (sessionUsed) projectSessionsUsed++;
      } catch {
        continue;
      }
    }

    if (projectMessages > 0) {
      projectBreakdown.push({
        project: proj.projectName,
        messages: projectMessages,
        sessions: projectSessionsUsed,
      });
    }
  }

  const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'gsd-profile-'));
  const outputPath = path.join(tmpDir, 'profile-sample.jsonl');
  for (const msg of allMessages) {
    fs.appendFileSync(outputPath, JSON.stringify(msg) + '\n');
  }

  const result = {
    output_file: outputPath,
    projects_sampled: projectBreakdown.length,
    messages_sampled: allMessages.length,
    per_project_cap: perProjectCap,
    message_char_limit: maxChars,
    skipped_context_dumps: skippedContextDumps,
    project_breakdown: projectBreakdown,
  };

  output(result, raw);
}

module.exports = {
  cmdScanSessions,
  cmdExtractMessages,
  cmdProfileSample,
};
306
get-shit-done/bin/lib/roadmap.cjs
Normal file
@@ -0,0 +1,306 @@
/**
 * Roadmap — Roadmap parsing and update operations
 */

const fs = require('fs');
const path = require('path');
const { escapeRegex, normalizePhaseName, output, error, findPhaseInternal, stripShippedMilestones, extractCurrentMilestone, replaceInCurrentMilestone } = require('./core.cjs');

function cmdRoadmapGetPhase(cwd, phaseNum, raw) {
  const roadmapPath = path.join(cwd, '.planning', 'ROADMAP.md');

  if (!fs.existsSync(roadmapPath)) {
    output({ found: false, error: 'ROADMAP.md not found' }, raw, '');
    return;
  }

  try {
    const content = extractCurrentMilestone(fs.readFileSync(roadmapPath, 'utf-8'), cwd);

    // Escape special regex chars in phase number, handle decimal
    const escapedPhase = escapeRegex(phaseNum);

    // Match "## Phase X:", "### Phase X:", or "#### Phase X:" with optional name
    const phasePattern = new RegExp(
      `#{2,4}\\s*Phase\\s+${escapedPhase}:\\s*([^\\n]+)`,
      'i'
    );
    const headerMatch = content.match(phasePattern);

    if (!headerMatch) {
      // Fallback: check if phase exists in summary list but missing detail section
      const checklistPattern = new RegExp(
        `-\\s*\\[[ x]\\]\\s*\\*\\*Phase\\s+${escapedPhase}:\\s*([^*]+)\\*\\*`,
        'i'
      );
      const checklistMatch = content.match(checklistPattern);

      if (checklistMatch) {
        // Phase exists in summary but missing detail section - malformed ROADMAP
        output({
          found: false,
          phase_number: phaseNum,
          phase_name: checklistMatch[1].trim(),
          error: 'malformed_roadmap',
          message: `Phase ${phaseNum} exists in summary list but missing "### Phase ${phaseNum}:" detail section. ROADMAP.md needs both formats.`
        }, raw, '');
        return;
      }

      output({ found: false, phase_number: phaseNum }, raw, '');
      return;
    }

    const phaseName = headerMatch[1].trim();
    const headerIndex = headerMatch.index;

    // Find the end of this section (next ## or ### phase header, or end of file)
    const restOfContent = content.slice(headerIndex);
    const nextHeaderMatch = restOfContent.match(/\n#{2,4}\s+Phase\s+\d/i);
    const sectionEnd = nextHeaderMatch
      ? headerIndex + nextHeaderMatch.index
      : content.length;

    const section = content.slice(headerIndex, sectionEnd).trim();

    // Extract goal if present (supports both **Goal:** and **Goal**: formats)
    const goalMatch = section.match(/\*\*Goal(?::\*\*|\*\*:)\s*([^\n]+)/i);
    const goal = goalMatch ? goalMatch[1].trim() : null;

    // Extract success criteria as a structured array
    const criteriaMatch = section.match(/\*\*Success Criteria\*\*[^\n]*:\s*\n((?:\s*\d+\.\s*[^\n]+\n?)+)/i);
    const success_criteria = criteriaMatch
      ? criteriaMatch[1].trim().split('\n').map(line => line.replace(/^\s*\d+\.\s*/, '').trim()).filter(Boolean)
      : [];

    output(
      {
        found: true,
        phase_number: phaseNum,
        phase_name: phaseName,
        goal,
        success_criteria,
        section,
      },
      raw,
      section
    );
  } catch (e) {
    error('Failed to read ROADMAP.md: ' + e.message);
  }
}

function cmdRoadmapAnalyze(cwd, raw) {
  const roadmapPath = path.join(cwd, '.planning', 'ROADMAP.md');

  if (!fs.existsSync(roadmapPath)) {
    output({ error: 'ROADMAP.md not found', milestones: [], phases: [], current_phase: null }, raw);
    return;
  }

  const rawContent = fs.readFileSync(roadmapPath, 'utf-8');
  const content = extractCurrentMilestone(rawContent, cwd);
  const phasesDir = path.join(cwd, '.planning', 'phases');

  // Extract all phase headings: ## Phase N: Name or ### Phase N: Name
  const phasePattern = /#{2,4}\s*Phase\s+(\d+[A-Z]?(?:\.\d+)*)\s*:\s*([^\n]+)/gi;
  const phases = [];
  let match;

  while ((match = phasePattern.exec(content)) !== null) {
    const phaseNum = match[1];
    const phaseName = match[2].replace(/\(INSERTED\)/i, '').trim();

    // Extract goal from the section
    const sectionStart = match.index;
    const restOfContent = content.slice(sectionStart);
    const nextHeader = restOfContent.match(/\n#{2,4}\s+Phase\s+\d/i);
    const sectionEnd = nextHeader ? sectionStart + nextHeader.index : content.length;
    const section = content.slice(sectionStart, sectionEnd);

    const goalMatch = section.match(/\*\*Goal(?::\*\*|\*\*:)\s*([^\n]+)/i);
    const goal = goalMatch ? goalMatch[1].trim() : null;

    const dependsMatch = section.match(/\*\*Depends on(?::\*\*|\*\*:)\s*([^\n]+)/i);
    const depends_on = dependsMatch ? dependsMatch[1].trim() : null;

    // Check completion on disk
    const normalized = normalizePhaseName(phaseNum);
    let diskStatus = 'no_directory';
    let planCount = 0;
    let summaryCount = 0;
    let hasContext = false;
    let hasResearch = false;

    try {
      const entries = fs.readdirSync(phasesDir, { withFileTypes: true });
      const dirs = entries.filter(e => e.isDirectory()).map(e => e.name);
      const dirMatch = dirs.find(d => d.startsWith(normalized + '-') || d === normalized);

      if (dirMatch) {
        const phaseFiles = fs.readdirSync(path.join(phasesDir, dirMatch));
        planCount = phaseFiles.filter(f => f.endsWith('-PLAN.md') || f === 'PLAN.md').length;
        summaryCount = phaseFiles.filter(f => f.endsWith('-SUMMARY.md') || f === 'SUMMARY.md').length;
        hasContext = phaseFiles.some(f => f.endsWith('-CONTEXT.md') || f === 'CONTEXT.md');
        hasResearch = phaseFiles.some(f => f.endsWith('-RESEARCH.md') || f === 'RESEARCH.md');

        if (summaryCount >= planCount && planCount > 0) diskStatus = 'complete';
        else if (summaryCount > 0) diskStatus = 'partial';
        else if (planCount > 0) diskStatus = 'planned';
        else if (hasResearch) diskStatus = 'researched';
        else if (hasContext) diskStatus = 'discussed';
        else diskStatus = 'empty';
      }
    } catch {}

    // Check ROADMAP checkbox status
    const checkboxPattern = new RegExp(`-\\s*\\[(x| )\\]\\s*.*Phase\\s+${escapeRegex(phaseNum)}[:\\s]`, 'i');
    const checkboxMatch = content.match(checkboxPattern);
    const roadmapComplete = checkboxMatch ? checkboxMatch[1] === 'x' : false;

    // If the roadmap marks a phase complete, trust that over the disk file structure.
    // Phases completed before GSD tracking (or via external tools) may lack
    // the standard PLAN/SUMMARY pairs but are still done.
    if (roadmapComplete && diskStatus !== 'complete') {
      diskStatus = 'complete';
    }

    phases.push({
      number: phaseNum,
      name: phaseName,
      goal,
      depends_on,
      plan_count: planCount,
      summary_count: summaryCount,
      has_context: hasContext,
      has_research: hasResearch,
      disk_status: diskStatus,
      roadmap_complete: roadmapComplete,
    });
  }

  // Extract milestone info
  const milestones = [];
  const milestonePattern = /##\s*(.*v(\d+\.\d+)[^(\n]*)/gi;
  let mMatch;
  while ((mMatch = milestonePattern.exec(content)) !== null) {
    milestones.push({
      heading: mMatch[1].trim(),
      version: 'v' + mMatch[2],
    });
  }

  // Find current and next phase
  const currentPhase = phases.find(p => p.disk_status === 'planned' || p.disk_status === 'partial') || null;
  const nextPhase = phases.find(p => p.disk_status === 'empty' || p.disk_status === 'no_directory' || p.disk_status === 'discussed' || p.disk_status === 'researched') || null;

  // Aggregated stats
  const totalPlans = phases.reduce((sum, p) => sum + p.plan_count, 0);
  const totalSummaries = phases.reduce((sum, p) => sum + p.summary_count, 0);
  const completedPhases = phases.filter(p => p.disk_status === 'complete').length;

  // Detect phases in summary list without detail sections (malformed ROADMAP)
|
||||
const checklistPattern = /-\s*\[[ x]\]\s*\*\*Phase\s+(\d+[A-Z]?(?:\.\d+)*)/gi;
|
||||
const checklistPhases = new Set();
|
||||
let checklistMatch;
|
||||
while ((checklistMatch = checklistPattern.exec(content)) !== null) {
|
||||
checklistPhases.add(checklistMatch[1]);
|
||||
}
|
||||
const detailPhases = new Set(phases.map(p => p.number));
|
||||
const missingDetails = [...checklistPhases].filter(p => !detailPhases.has(p));
|
||||
|
||||
const result = {
|
||||
milestones,
|
||||
phases,
|
||||
phase_count: phases.length,
|
||||
completed_phases: completedPhases,
|
||||
total_plans: totalPlans,
|
||||
total_summaries: totalSummaries,
|
||||
progress_percent: totalPlans > 0 ? Math.min(100, Math.round((totalSummaries / totalPlans) * 100)) : 0,
|
||||
current_phase: currentPhase ? currentPhase.number : null,
|
||||
next_phase: nextPhase ? nextPhase.number : null,
|
||||
missing_phase_details: missingDetails.length > 0 ? missingDetails : null,
|
||||
};
|
||||
|
||||
output(result, raw);
|
||||
}
|
||||
|
||||
function cmdRoadmapUpdatePlanProgress(cwd, phaseNum, raw) {
|
||||
if (!phaseNum) {
|
||||
error('phase number required for roadmap update-plan-progress');
|
||||
}
|
||||
|
||||
const roadmapPath = path.join(cwd, '.planning', 'ROADMAP.md');
|
||||
|
||||
const phaseInfo = findPhaseInternal(cwd, phaseNum);
|
||||
if (!phaseInfo) {
|
||||
error(`Phase ${phaseNum} not found`);
|
||||
}
|
||||
|
||||
const planCount = phaseInfo.plans.length;
|
||||
const summaryCount = phaseInfo.summaries.length;
|
||||
|
||||
if (planCount === 0) {
|
||||
output({ updated: false, reason: 'No plans found', plan_count: 0, summary_count: 0 }, raw, 'no plans');
|
||||
return;
|
||||
}
|
||||
|
||||
const isComplete = summaryCount >= planCount;
|
||||
const status = isComplete ? 'Complete' : summaryCount > 0 ? 'In Progress' : 'Planned';
|
||||
const today = new Date().toISOString().split('T')[0];
|
||||
|
||||
if (!fs.existsSync(roadmapPath)) {
|
||||
output({ updated: false, reason: 'ROADMAP.md not found', plan_count: planCount, summary_count: summaryCount }, raw, 'no roadmap');
|
||||
return;
|
||||
}
|
||||
|
||||
let roadmapContent = fs.readFileSync(roadmapPath, 'utf-8');
|
||||
const phaseEscaped = escapeRegex(phaseNum);
|
||||
|
||||
// Progress table row: update Plans column (summaries/plans) and Status column
|
||||
const tablePattern = new RegExp(
|
||||
`(\\|\\s*${phaseEscaped}\\.?\\s[^|]*\\|)[^|]*(\\|)\\s*[^|]*(\\|)\\s*[^|]*(\\|)`,
|
||||
'i'
|
||||
);
|
||||
const dateField = isComplete ? ` ${today} ` : ' ';
|
||||
roadmapContent = replaceInCurrentMilestone(
|
||||
roadmapContent, tablePattern,
|
||||
`$1 ${summaryCount}/${planCount} $2 ${status.padEnd(11)}$3${dateField}$4`
|
||||
);
|
||||
|
||||
// Update plan count in phase detail section
|
||||
const planCountPattern = new RegExp(
|
||||
`(#{2,4}\\s*Phase\\s+${phaseEscaped}[\\s\\S]*?\\*\\*Plans:\\*\\*\\s*)[^\\n]+`,
|
||||
'i'
|
||||
);
|
||||
const planCountText = isComplete
|
||||
? `${summaryCount}/${planCount} plans complete`
|
||||
: `${summaryCount}/${planCount} plans executed`;
|
||||
roadmapContent = replaceInCurrentMilestone(roadmapContent, planCountPattern, `$1${planCountText}`);
|
||||
|
||||
// If complete: check checkbox
|
||||
if (isComplete) {
|
||||
const checkboxPattern = new RegExp(
|
||||
`(-\\s*\\[)[ ](\\]\\s*.*Phase\\s+${phaseEscaped}[:\\s][^\\n]*)`,
|
||||
'i'
|
||||
);
|
||||
roadmapContent = replaceInCurrentMilestone(roadmapContent, checkboxPattern, `$1x$2 (completed ${today})`);
|
||||
}
|
||||
|
||||
fs.writeFileSync(roadmapPath, roadmapContent, 'utf-8');
|
||||
|
||||
output({
|
||||
updated: true,
|
||||
phase: phaseNum,
|
||||
plan_count: planCount,
|
||||
summary_count: summaryCount,
|
||||
status,
|
||||
complete: isComplete,
|
||||
}, raw, `${summaryCount}/${planCount} ${status}`);
|
||||
}
|
||||
|
||||
module.exports = {
|
||||
cmdRoadmapGetPhase,
|
||||
cmdRoadmapAnalyze,
|
||||
cmdRoadmapUpdatePlanProgress,
|
||||
};
|
||||
848
get-shit-done/bin/lib/state.cjs
Normal file
@@ -0,0 +1,848 @@
/**
 * State — STATE.md operations and progression engine
 */

const fs = require('fs');
const path = require('path');
const { escapeRegex, loadConfig, getMilestoneInfo, getMilestonePhaseFilter, normalizeMd, output, error } = require('./core.cjs');
const { extractFrontmatter, reconstructFrontmatter } = require('./frontmatter.cjs');

// Shared helper: extract a field value from STATE.md content.
// Supports both the **Field:** bold format and the plain Field: format.
function stateExtractField(content, fieldName) {
  const escaped = escapeRegex(fieldName);
  const boldPattern = new RegExp(`\\*\\*${escaped}:\\*\\*\\s*(.+)`, 'i');
  const boldMatch = content.match(boldPattern);
  if (boldMatch) return boldMatch[1].trim();
  const plainPattern = new RegExp(`^${escaped}:\\s*(.+)`, 'im');
  const plainMatch = content.match(plainPattern);
  return plainMatch ? plainMatch[1].trim() : null;
}
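// Illustrative usage (not invoked by the CLI directly): both STATE.md field
// styles resolve to the same value, and a missing field yields null.
//
//   stateExtractField('**Status:** Ready to execute', 'Status')  // → 'Ready to execute'
//   stateExtractField('Status: Ready to execute', 'Status')      // → 'Ready to execute'
//   stateExtractField('# STATE', 'Status')                       // → null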
function cmdStateLoad(cwd, raw) {
  const config = loadConfig(cwd);
  const planningDir = path.join(cwd, '.planning');

  let stateRaw = '';
  try {
    stateRaw = fs.readFileSync(path.join(planningDir, 'STATE.md'), 'utf-8');
  } catch {}

  const configExists = fs.existsSync(path.join(planningDir, 'config.json'));
  const roadmapExists = fs.existsSync(path.join(planningDir, 'ROADMAP.md'));
  const stateExists = stateRaw.length > 0;

  const result = {
    config,
    state_raw: stateRaw,
    state_exists: stateExists,
    roadmap_exists: roadmapExists,
    config_exists: configExists,
  };

  // For --raw, output a condensed key=value format
  if (raw) {
    const c = config;
    const lines = [
      `model_profile=${c.model_profile}`,
      `commit_docs=${c.commit_docs}`,
      `branching_strategy=${c.branching_strategy}`,
      `phase_branch_template=${c.phase_branch_template}`,
      `milestone_branch_template=${c.milestone_branch_template}`,
      `parallelization=${c.parallelization}`,
      `research=${c.research}`,
      `plan_checker=${c.plan_checker}`,
      `verifier=${c.verifier}`,
      `config_exists=${configExists}`,
      `roadmap_exists=${roadmapExists}`,
      `state_exists=${stateExists}`,
    ];
    process.stdout.write(lines.join('\n'));
    process.exit(0);
  }

  output(result);
}
function cmdStateGet(cwd, section, raw) {
  const statePath = path.join(cwd, '.planning', 'STATE.md');
  try {
    const content = fs.readFileSync(statePath, 'utf-8');

    if (!section) {
      output({ content }, raw, content);
      return;
    }

    // Try to find a markdown section or field
    const fieldEscaped = escapeRegex(section);

    // Check for **field:** value (bold format)
    const boldPattern = new RegExp(`\\*\\*${fieldEscaped}:\\*\\*\\s*(.*)`, 'i');
    const boldMatch = content.match(boldPattern);
    if (boldMatch) {
      output({ [section]: boldMatch[1].trim() }, raw, boldMatch[1].trim());
      return;
    }

    // Check for field: value (plain format)
    const plainPattern = new RegExp(`^${fieldEscaped}:\\s*(.*)`, 'im');
    const plainMatch = content.match(plainPattern);
    if (plainMatch) {
      output({ [section]: plainMatch[1].trim() }, raw, plainMatch[1].trim());
      return;
    }

    // Check for a ## Section heading
    const sectionPattern = new RegExp(`##\\s*${fieldEscaped}\\s*\n([\\s\\S]*?)(?=\\n##|$)`, 'i');
    const sectionMatch = content.match(sectionPattern);
    if (sectionMatch) {
      output({ [section]: sectionMatch[1].trim() }, raw, sectionMatch[1].trim());
      return;
    }

    output({ error: `Section or field "${section}" not found` }, raw, '');
  } catch {
    error('STATE.md not found');
  }
}

function readTextArgOrFile(cwd, value, filePath, label) {
  if (!filePath) return value;

  const resolvedPath = path.isAbsolute(filePath) ? filePath : path.join(cwd, filePath);
  try {
    return fs.readFileSync(resolvedPath, 'utf-8').trimEnd();
  } catch {
    throw new Error(`${label} file not found: ${filePath}`);
  }
}
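// Illustrative usage of readTextArgOrFile (path is hypothetical): an inline
// value passes through untouched; a file path wins when given.
//
//   readTextArgOrFile(cwd, 'fix auth bug', null, 'summary')      // → 'fix auth bug'
//   readTextArgOrFile(cwd, null, 'notes/summary.md', 'summary')  // → trimmed file contents,
//                                                                //   or throws if the file is missing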
function cmdStatePatch(cwd, patches, raw) {
  const statePath = path.join(cwd, '.planning', 'STATE.md');
  try {
    let content = fs.readFileSync(statePath, 'utf-8');
    const results = { updated: [], failed: [] };

    for (const [field, value] of Object.entries(patches)) {
      const fieldEscaped = escapeRegex(field);
      // Try the **Field:** bold format first, then the plain Field: format
      const boldPattern = new RegExp(`(\\*\\*${fieldEscaped}:\\*\\*\\s*)(.*)`, 'i');
      const plainPattern = new RegExp(`(^${fieldEscaped}:\\s*)(.*)`, 'im');

      if (boldPattern.test(content)) {
        content = content.replace(boldPattern, (_match, prefix) => `${prefix}${value}`);
        results.updated.push(field);
      } else if (plainPattern.test(content)) {
        content = content.replace(plainPattern, (_match, prefix) => `${prefix}${value}`);
        results.updated.push(field);
      } else {
        results.failed.push(field);
      }
    }

    if (results.updated.length > 0) {
      writeStateMd(statePath, content, cwd);
    }

    output(results, raw, results.updated.length > 0 ? 'true' : 'false');
  } catch {
    error('STATE.md not found');
  }
}

function cmdStateUpdate(cwd, field, value) {
  if (!field || value === undefined) {
    error('field and value required for state update');
  }

  const statePath = path.join(cwd, '.planning', 'STATE.md');
  try {
    let content = fs.readFileSync(statePath, 'utf-8');
    const fieldEscaped = escapeRegex(field);
    // Try the **Field:** bold format first, then the plain Field: format
    const boldPattern = new RegExp(`(\\*\\*${fieldEscaped}:\\*\\*\\s*)(.*)`, 'i');
    const plainPattern = new RegExp(`(^${fieldEscaped}:\\s*)(.*)`, 'im');
    if (boldPattern.test(content)) {
      content = content.replace(boldPattern, (_match, prefix) => `${prefix}${value}`);
      writeStateMd(statePath, content, cwd);
      output({ updated: true });
    } else if (plainPattern.test(content)) {
      content = content.replace(plainPattern, (_match, prefix) => `${prefix}${value}`);
      writeStateMd(statePath, content, cwd);
      output({ updated: true });
    } else {
      output({ updated: false, reason: `Field "${field}" not found in STATE.md` });
    }
  } catch {
    output({ updated: false, reason: 'STATE.md not found' });
  }
}

// ─── State Progression Engine ────────────────────────────────────────────────
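// Illustrative CLI invocation of `state patch` (field names are hypothetical —
// the actual fields depend on the STATE.md template in use):
//
//   node gsd-tools.cjs state patch --Status "Ready to execute"
//
// Fields that match are reported in `updated`; unmatched fields land in `failed`.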
function stateReplaceField(content, fieldName, newValue) {
  const escaped = escapeRegex(fieldName);
  // Try the **Field:** bold format first, then the plain Field: format
  const boldPattern = new RegExp(`(\\*\\*${escaped}:\\*\\*\\s*)(.*)`, 'i');
  if (boldPattern.test(content)) {
    return content.replace(boldPattern, (_match, prefix) => `${prefix}${newValue}`);
  }
  const plainPattern = new RegExp(`(^${escaped}:\\s*)(.*)`, 'im');
  if (plainPattern.test(content)) {
    return content.replace(plainPattern, (_match, prefix) => `${prefix}${newValue}`);
  }
  return null;
}
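// Illustrative usage of stateReplaceField: it returns the rewritten content,
// or null when the field is absent, so callers can fall back with `|| content`.
//
//   stateReplaceField('**Status:** Old', 'Status', 'New')  // → '**Status:** New'
//   stateReplaceField('No fields here', 'Status', 'New')   // → null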
function cmdStateAdvancePlan(cwd, raw) {
  const statePath = path.join(cwd, '.planning', 'STATE.md');
  if (!fs.existsSync(statePath)) { output({ error: 'STATE.md not found' }, raw); return; }

  let content = fs.readFileSync(statePath, 'utf-8');
  const currentPlan = parseInt(stateExtractField(content, 'Current Plan'), 10);
  const totalPlans = parseInt(stateExtractField(content, 'Total Plans in Phase'), 10);
  const today = new Date().toISOString().split('T')[0];

  if (isNaN(currentPlan) || isNaN(totalPlans)) {
    output({ error: 'Cannot parse Current Plan or Total Plans in Phase from STATE.md' }, raw);
    return;
  }

  if (currentPlan >= totalPlans) {
    content = stateReplaceField(content, 'Status', 'Phase complete — ready for verification') || content;
    content = stateReplaceField(content, 'Last Activity', today) || content;
    writeStateMd(statePath, content, cwd);
    output({ advanced: false, reason: 'last_plan', current_plan: currentPlan, total_plans: totalPlans, status: 'ready_for_verification' }, raw, 'false');
  } else {
    const newPlan = currentPlan + 1;
    content = stateReplaceField(content, 'Current Plan', String(newPlan)) || content;
    content = stateReplaceField(content, 'Status', 'Ready to execute') || content;
    content = stateReplaceField(content, 'Last Activity', today) || content;
    writeStateMd(statePath, content, cwd);
    output({ advanced: true, previous_plan: currentPlan, current_plan: newPlan, total_plans: totalPlans }, raw, 'true');
  }
}
function cmdStateRecordMetric(cwd, options, raw) {
  const statePath = path.join(cwd, '.planning', 'STATE.md');
  if (!fs.existsSync(statePath)) { output({ error: 'STATE.md not found' }, raw); return; }

  let content = fs.readFileSync(statePath, 'utf-8');
  const { phase, plan, duration, tasks, files } = options;

  if (!phase || !plan || !duration) {
    output({ error: 'phase, plan, and duration required' }, raw);
    return;
  }

  // Find the Performance Metrics section and its table
  const metricsPattern = /(##\s*Performance Metrics[\s\S]*?\n\|[^\n]+\n\|[-|\s]+\n)([\s\S]*?)(?=\n##|\n$|$)/i;
  const metricsMatch = content.match(metricsPattern);

  if (metricsMatch) {
    let tableBody = metricsMatch[2].trimEnd();
    const newRow = `| Phase ${phase} P${plan} | ${duration} | ${tasks || '-'} tasks | ${files || '-'} files |`;

    if (tableBody.trim() === '' || tableBody.includes('None yet')) {
      tableBody = newRow;
    } else {
      tableBody = tableBody + '\n' + newRow;
    }

    content = content.replace(metricsPattern, (_match, header) => `${header}${tableBody}\n`);
    writeStateMd(statePath, content, cwd);
    output({ recorded: true, phase, plan, duration }, raw, 'true');
  } else {
    output({ recorded: false, reason: 'Performance Metrics section not found in STATE.md' }, raw, 'false');
  }
}
function cmdStateUpdateProgress(cwd, raw) {
  const statePath = path.join(cwd, '.planning', 'STATE.md');
  if (!fs.existsSync(statePath)) { output({ error: 'STATE.md not found' }, raw); return; }

  let content = fs.readFileSync(statePath, 'utf-8');

  // Count plans and summaries across current-milestone phases only
  const phasesDir = path.join(cwd, '.planning', 'phases');
  let totalPlans = 0;
  let totalSummaries = 0;

  if (fs.existsSync(phasesDir)) {
    const isDirInMilestone = getMilestonePhaseFilter(cwd);
    const phaseDirs = fs.readdirSync(phasesDir, { withFileTypes: true })
      .filter(e => e.isDirectory()).map(e => e.name)
      .filter(isDirInMilestone);
    for (const dir of phaseDirs) {
      const files = fs.readdirSync(path.join(phasesDir, dir));
      totalPlans += files.filter(f => f.match(/-PLAN\.md$/i)).length;
      totalSummaries += files.filter(f => f.match(/-SUMMARY\.md$/i)).length;
    }
  }

  const percent = totalPlans > 0 ? Math.min(100, Math.round(totalSummaries / totalPlans * 100)) : 0;
  const barWidth = 10;
  const filled = Math.round(percent / 100 * barWidth);
  const bar = '\u2588'.repeat(filled) + '\u2591'.repeat(barWidth - filled);
  const progressStr = `[${bar}] ${percent}%`;

  // Try the **Progress:** bold format first, then the plain Progress: format
  const boldProgressPattern = /(\*\*Progress:\*\*\s*).*/i;
  const plainProgressPattern = /^(Progress:\s*).*/im;
  if (boldProgressPattern.test(content)) {
    content = content.replace(boldProgressPattern, (_match, prefix) => `${prefix}${progressStr}`);
    writeStateMd(statePath, content, cwd);
    output({ updated: true, percent, completed: totalSummaries, total: totalPlans, bar: progressStr }, raw, progressStr);
  } else if (plainProgressPattern.test(content)) {
    content = content.replace(plainProgressPattern, (_match, prefix) => `${prefix}${progressStr}`);
    writeStateMd(statePath, content, cwd);
    output({ updated: true, percent, completed: totalSummaries, total: totalPlans, bar: progressStr }, raw, progressStr);
  } else {
    output({ updated: false, reason: 'Progress field not found in STATE.md' }, raw, 'false');
  }
}
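// Worked example of the progress-bar arithmetic above (illustrative):
// 2 of 5 plans summarized → percent = round(2/5 * 100) = 40,
// filled = round(40/100 * 10) = 4, so the rendered field becomes
// `[████░░░░░░] 40%`.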
function cmdStateAddDecision(cwd, options, raw) {
  const statePath = path.join(cwd, '.planning', 'STATE.md');
  if (!fs.existsSync(statePath)) { output({ error: 'STATE.md not found' }, raw); return; }

  const { phase, summary, summary_file, rationale, rationale_file } = options;
  let summaryText = null;
  let rationaleText = '';

  try {
    summaryText = readTextArgOrFile(cwd, summary, summary_file, 'summary');
    rationaleText = readTextArgOrFile(cwd, rationale || '', rationale_file, 'rationale');
  } catch (err) {
    output({ added: false, reason: err.message }, raw, 'false');
    return;
  }

  if (!summaryText) { output({ error: 'summary required' }, raw); return; }

  let content = fs.readFileSync(statePath, 'utf-8');
  const entry = `- [Phase ${phase || '?'}]: ${summaryText}${rationaleText ? ` — ${rationaleText}` : ''}`;

  // Find the Decisions section (various heading patterns)
  const sectionPattern = /(###?\s*(?:Decisions|Decisions Made|Accumulated.*Decisions)\s*\n)([\s\S]*?)(?=\n###?|\n##[^#]|$)/i;
  const match = content.match(sectionPattern);

  if (match) {
    let sectionBody = match[2];
    // Remove placeholders
    sectionBody = sectionBody.replace(/None yet\.?\s*\n?/gi, '').replace(/No decisions yet\.?\s*\n?/gi, '');
    sectionBody = sectionBody.trimEnd() + '\n' + entry + '\n';
    content = content.replace(sectionPattern, (_match, header) => `${header}${sectionBody}`);
    writeStateMd(statePath, content, cwd);
    output({ added: true, decision: entry }, raw, 'true');
  } else {
    output({ added: false, reason: 'Decisions section not found in STATE.md' }, raw, 'false');
  }
}
function cmdStateAddBlocker(cwd, text, raw) {
  const statePath = path.join(cwd, '.planning', 'STATE.md');
  if (!fs.existsSync(statePath)) { output({ error: 'STATE.md not found' }, raw); return; }
  const blockerOptions = typeof text === 'object' && text !== null ? text : { text };
  let blockerText = null;

  try {
    blockerText = readTextArgOrFile(cwd, blockerOptions.text, blockerOptions.text_file, 'blocker');
  } catch (err) {
    output({ added: false, reason: err.message }, raw, 'false');
    return;
  }

  if (!blockerText) { output({ error: 'text required' }, raw); return; }

  let content = fs.readFileSync(statePath, 'utf-8');
  const entry = `- ${blockerText}`;

  const sectionPattern = /(###?\s*(?:Blockers|Blockers\/Concerns|Concerns)\s*\n)([\s\S]*?)(?=\n###?|\n##[^#]|$)/i;
  const match = content.match(sectionPattern);

  if (match) {
    let sectionBody = match[2];
    sectionBody = sectionBody.replace(/None\.?\s*\n?/gi, '').replace(/None yet\.?\s*\n?/gi, '');
    sectionBody = sectionBody.trimEnd() + '\n' + entry + '\n';
    content = content.replace(sectionPattern, (_match, header) => `${header}${sectionBody}`);
    writeStateMd(statePath, content, cwd);
    output({ added: true, blocker: blockerText }, raw, 'true');
  } else {
    output({ added: false, reason: 'Blockers section not found in STATE.md' }, raw, 'false');
  }
}
function cmdStateResolveBlocker(cwd, text, raw) {
  const statePath = path.join(cwd, '.planning', 'STATE.md');
  if (!fs.existsSync(statePath)) { output({ error: 'STATE.md not found' }, raw); return; }
  if (!text) { output({ error: 'text required' }, raw); return; }

  let content = fs.readFileSync(statePath, 'utf-8');

  const sectionPattern = /(###?\s*(?:Blockers|Blockers\/Concerns|Concerns)\s*\n)([\s\S]*?)(?=\n###?|\n##[^#]|$)/i;
  const match = content.match(sectionPattern);

  if (match) {
    const sectionBody = match[2];
    const lines = sectionBody.split('\n');
    const filtered = lines.filter(line => {
      if (!line.startsWith('- ')) return true;
      return !line.toLowerCase().includes(text.toLowerCase());
    });

    let newBody = filtered.join('\n');
    // If the section is now empty, restore the placeholder
    if (!newBody.trim() || !newBody.includes('- ')) {
      newBody = 'None\n';
    }

    content = content.replace(sectionPattern, (_match, header) => `${header}${newBody}`);
    writeStateMd(statePath, content, cwd);
    output({ resolved: true, blocker: text }, raw, 'true');
  } else {
    output({ resolved: false, reason: 'Blockers section not found in STATE.md' }, raw, 'false');
  }
}
function cmdStateRecordSession(cwd, options, raw) {
  const statePath = path.join(cwd, '.planning', 'STATE.md');
  if (!fs.existsSync(statePath)) { output({ error: 'STATE.md not found' }, raw); return; }

  let content = fs.readFileSync(statePath, 'utf-8');
  const now = new Date().toISOString();
  const updated = [];

  // Update Last session / Last Date
  let result = stateReplaceField(content, 'Last session', now);
  if (result) { content = result; updated.push('Last session'); }
  result = stateReplaceField(content, 'Last Date', now);
  if (result) { content = result; updated.push('Last Date'); }

  // Update Stopped At (either capitalization)
  if (options.stopped_at) {
    result = stateReplaceField(content, 'Stopped At', options.stopped_at);
    if (!result) result = stateReplaceField(content, 'Stopped at', options.stopped_at);
    if (result) { content = result; updated.push('Stopped At'); }
  }

  // Update Resume File (either capitalization)
  const resumeFile = options.resume_file || 'None';
  result = stateReplaceField(content, 'Resume File', resumeFile);
  if (!result) result = stateReplaceField(content, 'Resume file', resumeFile);
  if (result) { content = result; updated.push('Resume File'); }

  if (updated.length > 0) {
    writeStateMd(statePath, content, cwd);
    output({ recorded: true, updated }, raw, 'true');
  } else {
    output({ recorded: false, reason: 'No session fields found in STATE.md' }, raw, 'false');
  }
}
function cmdStateSnapshot(cwd, raw) {
  const statePath = path.join(cwd, '.planning', 'STATE.md');

  if (!fs.existsSync(statePath)) {
    output({ error: 'STATE.md not found' }, raw);
    return;
  }

  const content = fs.readFileSync(statePath, 'utf-8');

  // Extract basic fields
  const currentPhase = stateExtractField(content, 'Current Phase');
  const currentPhaseName = stateExtractField(content, 'Current Phase Name');
  const totalPhasesRaw = stateExtractField(content, 'Total Phases');
  const currentPlan = stateExtractField(content, 'Current Plan');
  const totalPlansRaw = stateExtractField(content, 'Total Plans in Phase');
  const status = stateExtractField(content, 'Status');
  const progressRaw = stateExtractField(content, 'Progress');
  const lastActivity = stateExtractField(content, 'Last Activity');
  const lastActivityDesc = stateExtractField(content, 'Last Activity Description');
  const pausedAt = stateExtractField(content, 'Paused At');

  // Parse numeric fields. Progress may include a bar prefix like
  // "[████░░░░░░] 40%", so extract the percentage with a regex rather than
  // running parseInt over the whole value.
  const totalPhases = totalPhasesRaw ? parseInt(totalPhasesRaw, 10) : null;
  const totalPlansInPhase = totalPlansRaw ? parseInt(totalPlansRaw, 10) : null;
  const progressPctMatch = progressRaw ? progressRaw.match(/(\d+)%/) : null;
  const progressPercent = progressPctMatch ? parseInt(progressPctMatch[1], 10) : null;

  // Extract decisions table
  const decisions = [];
  const decisionsMatch = content.match(/##\s*Decisions Made[\s\S]*?\n\|[^\n]+\n\|[-|\s]+\n([\s\S]*?)(?=\n##|\n$|$)/i);
  if (decisionsMatch) {
    const tableBody = decisionsMatch[1];
    const rows = tableBody.trim().split('\n').filter(r => r.includes('|'));
    for (const row of rows) {
      const cells = row.split('|').map(c => c.trim()).filter(Boolean);
      if (cells.length >= 3) {
        decisions.push({
          phase: cells[0],
          summary: cells[1],
          rationale: cells[2],
        });
      }
    }
  }

  // Extract blockers list
  const blockers = [];
  const blockersMatch = content.match(/##\s*Blockers\s*\n([\s\S]*?)(?=\n##|$)/i);
  if (blockersMatch) {
    const blockersSection = blockersMatch[1];
    const items = blockersSection.match(/^-\s+(.+)$/gm) || [];
    for (const item of items) {
      blockers.push(item.replace(/^-\s+/, '').trim());
    }
  }

  // Extract session info
  const session = {
    last_date: null,
    stopped_at: null,
    resume_file: null,
  };

  const sessionMatch = content.match(/##\s*Session\s*\n([\s\S]*?)(?=\n##|$)/i);
  if (sessionMatch) {
    const sessionSection = sessionMatch[1];
    const lastDateMatch = sessionSection.match(/\*\*Last Date:\*\*\s*(.+)/i)
      || sessionSection.match(/^Last Date:\s*(.+)/im);
    const stoppedAtMatch = sessionSection.match(/\*\*Stopped At:\*\*\s*(.+)/i)
      || sessionSection.match(/^Stopped At:\s*(.+)/im);
    const resumeFileMatch = sessionSection.match(/\*\*Resume File:\*\*\s*(.+)/i)
      || sessionSection.match(/^Resume File:\s*(.+)/im);

    if (lastDateMatch) session.last_date = lastDateMatch[1].trim();
    if (stoppedAtMatch) session.stopped_at = stoppedAtMatch[1].trim();
    if (resumeFileMatch) session.resume_file = resumeFileMatch[1].trim();
  }

  const result = {
    current_phase: currentPhase,
    current_phase_name: currentPhaseName,
    total_phases: totalPhases,
    current_plan: currentPlan,
    total_plans_in_phase: totalPlansInPhase,
    status,
    progress_percent: progressPercent,
    last_activity: lastActivity,
    last_activity_desc: lastActivityDesc,
    decisions,
    blockers,
    paused_at: pausedAt,
    session,
  };

  output(result, raw);
}
// ─── State Frontmatter Sync ──────────────────────────────────────────────────

/**
 * Extract machine-readable fields from the STATE.md markdown body and build
 * a YAML frontmatter object. This allows hooks and scripts to read state
 * reliably via `state json` instead of fragile regex parsing.
 */
function buildStateFrontmatter(bodyContent, cwd) {
  const currentPhase = stateExtractField(bodyContent, 'Current Phase');
  const currentPhaseName = stateExtractField(bodyContent, 'Current Phase Name');
  const currentPlan = stateExtractField(bodyContent, 'Current Plan');
  const totalPhasesRaw = stateExtractField(bodyContent, 'Total Phases');
  const totalPlansRaw = stateExtractField(bodyContent, 'Total Plans in Phase');
  const status = stateExtractField(bodyContent, 'Status');
  const progressRaw = stateExtractField(bodyContent, 'Progress');
  const lastActivity = stateExtractField(bodyContent, 'Last Activity');
  const stoppedAt = stateExtractField(bodyContent, 'Stopped At') || stateExtractField(bodyContent, 'Stopped at');
  const pausedAt = stateExtractField(bodyContent, 'Paused At');

  let milestone = null;
  let milestoneName = null;
  if (cwd) {
    try {
      const info = getMilestoneInfo(cwd);
      milestone = info.version;
      milestoneName = info.name;
    } catch {}
  }

  let totalPhases = totalPhasesRaw ? parseInt(totalPhasesRaw, 10) : null;
  let completedPhases = null;
  let totalPlans = totalPlansRaw ? parseInt(totalPlansRaw, 10) : null;
  let completedPlans = null;

  if (cwd) {
    try {
      const phasesDir = path.join(cwd, '.planning', 'phases');
      if (fs.existsSync(phasesDir)) {
        const isDirInMilestone = getMilestonePhaseFilter(cwd);
        const phaseDirs = fs.readdirSync(phasesDir, { withFileTypes: true })
          .filter(e => e.isDirectory()).map(e => e.name)
          .filter(isDirInMilestone);
        let diskTotalPlans = 0;
        let diskTotalSummaries = 0;
        let diskCompletedPhases = 0;

        for (const dir of phaseDirs) {
          const files = fs.readdirSync(path.join(phasesDir, dir));
          const plans = files.filter(f => f.match(/-PLAN\.md$/i)).length;
          const summaries = files.filter(f => f.match(/-SUMMARY\.md$/i)).length;
          diskTotalPlans += plans;
          diskTotalSummaries += summaries;
          if (plans > 0 && summaries >= plans) diskCompletedPhases++;
        }
        totalPhases = isDirInMilestone.phaseCount > 0
          ? Math.max(phaseDirs.length, isDirInMilestone.phaseCount)
          : phaseDirs.length;
        completedPhases = diskCompletedPhases;
        totalPlans = diskTotalPlans;
        completedPlans = diskTotalSummaries;
      }
    } catch {}
  }

  let progressPercent = null;
  if (progressRaw) {
    const pctMatch = progressRaw.match(/(\d+)%/);
    if (pctMatch) progressPercent = parseInt(pctMatch[1], 10);
  }

  // Normalize status to one of: planning, discussing, executing, verifying, paused, completed, unknown
  let normalizedStatus = status || 'unknown';
  const statusLower = (status || '').toLowerCase();
  if (statusLower.includes('paused') || statusLower.includes('stopped') || pausedAt) {
    normalizedStatus = 'paused';
  } else if (statusLower.includes('executing') || statusLower.includes('in progress')) {
    normalizedStatus = 'executing';
  } else if (statusLower.includes('planning') || statusLower.includes('ready to plan')) {
    normalizedStatus = 'planning';
  } else if (statusLower.includes('discussing')) {
    normalizedStatus = 'discussing';
  } else if (statusLower.includes('verif')) {
    normalizedStatus = 'verifying';
  } else if (statusLower.includes('complete') || statusLower.includes('done')) {
    normalizedStatus = 'completed';
|
||||
} else if (statusLower.includes('ready to execute')) {
|
||||
normalizedStatus = 'executing';
|
||||
}
|
||||
|
||||
const fm = { gsd_state_version: '1.0' };
|
||||
|
||||
if (milestone) fm.milestone = milestone;
|
||||
if (milestoneName) fm.milestone_name = milestoneName;
|
||||
if (currentPhase) fm.current_phase = currentPhase;
|
||||
if (currentPhaseName) fm.current_phase_name = currentPhaseName;
|
||||
if (currentPlan) fm.current_plan = currentPlan;
|
||||
fm.status = normalizedStatus;
|
||||
if (stoppedAt) fm.stopped_at = stoppedAt;
|
||||
if (pausedAt) fm.paused_at = pausedAt;
|
||||
fm.last_updated = new Date().toISOString();
|
||||
if (lastActivity) fm.last_activity = lastActivity;
|
||||
|
||||
const progress = {};
|
||||
if (totalPhases !== null) progress.total_phases = totalPhases;
|
||||
if (completedPhases !== null) progress.completed_phases = completedPhases;
|
||||
if (totalPlans !== null) progress.total_plans = totalPlans;
|
||||
if (completedPlans !== null) progress.completed_plans = completedPlans;
|
||||
if (progressPercent !== null) progress.percent = progressPercent;
|
||||
if (Object.keys(progress).length > 0) fm.progress = progress;
|
||||
|
||||
return fm;
|
||||
}
|
||||
|
||||
function stripFrontmatter(content) {
|
||||
return content.replace(/^---\n[\s\S]*?\n---\n*/, '');
|
||||
}
|
||||
|
||||
function syncStateFrontmatter(content, cwd) {
|
||||
const body = stripFrontmatter(content);
|
||||
const fm = buildStateFrontmatter(body, cwd);
|
||||
const yamlStr = reconstructFrontmatter(fm);
|
||||
return `---\n${yamlStr}\n---\n\n${body}`;
|
||||
}
|
||||
|
||||
/**
|
||||
* Write STATE.md with synchronized YAML frontmatter.
|
||||
* All STATE.md writes should use this instead of raw writeFileSync.
|
||||
*/
|
||||
function writeStateMd(statePath, content, cwd) {
|
||||
const synced = syncStateFrontmatter(content, cwd);
|
||||
fs.writeFileSync(statePath, normalizeMd(synced), 'utf-8');
|
||||
}
|
||||
|
||||
function cmdStateJson(cwd, raw) {
|
||||
const statePath = path.join(cwd, '.planning', 'STATE.md');
|
||||
if (!fs.existsSync(statePath)) {
|
||||
output({ error: 'STATE.md not found' }, raw, 'STATE.md not found');
|
||||
return;
|
||||
}
|
||||
|
||||
const content = fs.readFileSync(statePath, 'utf-8');
|
||||
const fm = extractFrontmatter(content);
|
||||
|
||||
if (!fm || Object.keys(fm).length === 0) {
|
||||
const body = stripFrontmatter(content);
|
||||
const built = buildStateFrontmatter(body, cwd);
|
||||
output(built, raw, JSON.stringify(built, null, 2));
|
||||
return;
|
||||
}
|
||||
|
||||
output(fm, raw, JSON.stringify(fm, null, 2));
|
||||
}
|
||||
|
||||
/**
|
||||
* Update STATE.md when a new phase begins execution.
|
||||
* Updates body text fields (Current focus, Status, Last Activity, Current Position)
|
||||
* and synchronizes frontmatter via writeStateMd.
|
||||
* Fixes: #1102 (plan counts), #1103 (status/last_activity), #1104 (body text).
|
||||
*/
|
||||
function cmdStateBeginPhase(cwd, phaseNumber, phaseName, planCount, raw) {
|
||||
const statePath = path.join(cwd, '.planning', 'STATE.md');
|
||||
if (!fs.existsSync(statePath)) {
|
||||
output({ error: 'STATE.md not found' }, raw);
|
||||
return;
|
||||
}
|
||||
|
||||
let content = fs.readFileSync(statePath, 'utf-8');
|
||||
const today = new Date().toISOString().split('T')[0];
|
||||
const updated = [];
|
||||
|
||||
// Update Status field
|
||||
const statusValue = `Executing Phase ${phaseNumber}`;
|
||||
let result = stateReplaceField(content, 'Status', statusValue);
|
||||
if (result) { content = result; updated.push('Status'); }
|
||||
|
||||
// Update Last Activity
|
||||
result = stateReplaceField(content, 'Last Activity', today);
|
||||
if (result) { content = result; updated.push('Last Activity'); }
|
||||
|
||||
// Update Last Activity Description if it exists
|
||||
const activityDesc = `Phase ${phaseNumber} execution started`;
|
||||
result = stateReplaceField(content, 'Last Activity Description', activityDesc);
|
||||
if (result) { content = result; updated.push('Last Activity Description'); }
|
||||
|
||||
// Update Current Phase
|
||||
result = stateReplaceField(content, 'Current Phase', String(phaseNumber));
|
||||
if (result) { content = result; updated.push('Current Phase'); }
|
||||
|
||||
// Update Current Phase Name
|
||||
if (phaseName) {
|
||||
result = stateReplaceField(content, 'Current Phase Name', phaseName);
|
||||
if (result) { content = result; updated.push('Current Phase Name'); }
|
||||
}
|
||||
|
||||
// Update Current Plan to 1 (starting from the first plan)
|
||||
result = stateReplaceField(content, 'Current Plan', '1');
|
||||
if (result) { content = result; updated.push('Current Plan'); }
|
||||
|
||||
// Update Total Plans in Phase
|
||||
if (planCount) {
|
||||
result = stateReplaceField(content, 'Total Plans in Phase', String(planCount));
|
||||
if (result) { content = result; updated.push('Total Plans in Phase'); }
|
||||
}
|
||||
|
||||
// Update **Current focus:** body text line (#1104)
|
||||
const focusLabel = phaseName ? `Phase ${phaseNumber} — ${phaseName}` : `Phase ${phaseNumber}`;
|
||||
const focusPattern = /(\*\*Current focus:\*\*\s*).*/i;
|
||||
if (focusPattern.test(content)) {
|
||||
content = content.replace(focusPattern, (_match, prefix) => `${prefix}${focusLabel}`);
|
||||
updated.push('Current focus');
|
||||
}
|
||||
|
||||
// Update ## Current Position section (#1104)
|
||||
const positionPattern = /(##\s*Current Position\s*\n)([\s\S]*?)(?=\n##|$)/i;
|
||||
const positionMatch = content.match(positionPattern);
|
||||
if (positionMatch) {
|
||||
const newPosition = `Phase: ${phaseNumber}${phaseName ? ` (${phaseName})` : ''} — EXECUTING\nPlan: 1 of ${planCount || '?'}\n`;
|
||||
content = content.replace(positionPattern, (_match, header) => `${header}${newPosition}`);
|
||||
updated.push('Current Position');
|
||||
}
|
||||
|
||||
if (updated.length > 0) {
|
||||
writeStateMd(statePath, content, cwd);
|
||||
}
|
||||
|
||||
output({ updated, phase: phaseNumber, phase_name: phaseName || null, plan_count: planCount || null }, raw, updated.length > 0 ? 'true' : 'false');
|
||||
}
|
||||
|
||||
/**
|
||||
* Write a WAITING.json signal file when GSD hits a decision point.
|
||||
* External watchers (fswatch, polling, orchestrators) can detect this.
|
||||
* File is written to .planning/WAITING.json (or .gsd/WAITING.json if .gsd exists).
|
||||
* Fixes #1034.
|
||||
*/
|
||||
function cmdSignalWaiting(cwd, type, question, options, phase, raw) {
|
||||
const gsdDir = fs.existsSync(path.join(cwd, '.gsd')) ? path.join(cwd, '.gsd') : path.join(cwd, '.planning');
|
||||
const waitingPath = path.join(gsdDir, 'WAITING.json');
|
||||
|
||||
const signal = {
|
||||
status: 'waiting',
|
||||
type: type || 'decision_point',
|
||||
question: question || null,
|
||||
options: options ? options.split('|').map(o => o.trim()) : [],
|
||||
since: new Date().toISOString(),
|
||||
phase: phase || null,
|
||||
};
|
||||
|
||||
try {
|
||||
fs.mkdirSync(gsdDir, { recursive: true });
|
||||
fs.writeFileSync(waitingPath, JSON.stringify(signal, null, 2), 'utf-8');
|
||||
output({ signaled: true, path: waitingPath }, raw, 'true');
|
||||
} catch (e) {
|
||||
output({ signaled: false, error: e.message }, raw, 'false');
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Remove the WAITING.json signal file when user answers and agent resumes.
|
||||
*/
|
||||
function cmdSignalResume(cwd, raw) {
|
||||
const paths = [
|
||||
path.join(cwd, '.gsd', 'WAITING.json'),
|
||||
path.join(cwd, '.planning', 'WAITING.json'),
|
||||
];
|
||||
|
||||
let removed = false;
|
||||
for (const p of paths) {
|
||||
if (fs.existsSync(p)) {
|
||||
try { fs.unlinkSync(p); removed = true; } catch {}
|
||||
}
|
||||
}
|
||||
|
||||
output({ resumed: true, removed }, raw, removed ? 'true' : 'false');
|
||||
}
|
||||
|
||||
module.exports = {
|
||||
stateExtractField,
|
||||
stateReplaceField,
|
||||
writeStateMd,
|
||||
cmdStateLoad,
|
||||
cmdStateGet,
|
||||
cmdStatePatch,
|
||||
cmdStateUpdate,
|
||||
cmdStateAdvancePlan,
|
||||
cmdStateRecordMetric,
|
||||
cmdStateUpdateProgress,
|
||||
cmdStateAddDecision,
|
||||
cmdStateAddBlocker,
|
||||
cmdStateResolveBlocker,
|
||||
cmdStateRecordSession,
|
||||
cmdStateSnapshot,
|
||||
cmdStateJson,
|
||||
cmdStateBeginPhase,
|
||||
cmdSignalWaiting,
|
||||
cmdSignalResume,
|
||||
};
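As a quick illustration of the signal format, the payload that cmdSignalWaiting writes can be sketched standalone. The builder function below is hypothetical (the module writes the object inline); the field shape mirrors the code above:

```javascript
// Hypothetical standalone sketch of the WAITING.json payload built by cmdSignalWaiting.
// `buildWaitingSignal` itself is illustrative and not exported by the module.
function buildWaitingSignal(type, question, options, phase) {
  return {
    status: 'waiting',
    type: type || 'decision_point',
    question: question || null,
    // "A|B|C" becomes ['A', 'B', 'C'], trimming whitespace around each option
    options: options ? options.split('|').map(o => o.trim()) : [],
    since: new Date().toISOString(),
    phase: phase || null,
  };
}

const signal = buildWaitingSignal('decision_point', 'Which auth provider?', 'oauth | magic-link', '3');
console.log(signal.options); // → [ 'oauth', 'magic-link' ]
```

An external watcher only needs to poll for the file's existence and parse these six fields; deleting the file (signal-resume) is the resume handshake.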
222
get-shit-done/bin/lib/template.cjs
Normal file
@@ -0,0 +1,222 @@
/**
 * Template — Template selection and fill operations
 */

const fs = require('fs');
const path = require('path');
const { normalizePhaseName, findPhaseInternal, generateSlugInternal, normalizeMd, toPosixPath, output, error } = require('./core.cjs');
const { reconstructFrontmatter } = require('./frontmatter.cjs');

function cmdTemplateSelect(cwd, planPath, raw) {
  if (!planPath) {
    error('plan-path required');
  }

  try {
    const fullPath = path.join(cwd, planPath);
    const content = fs.readFileSync(fullPath, 'utf-8');

    // Simple heuristics
    const taskMatch = content.match(/###\s*Task\s*\d+/g) || [];
    const taskCount = taskMatch.length;

    const decisionMatch = content.match(/decision/gi) || [];
    const hasDecisions = decisionMatch.length > 0;

    // Count file mentions
    const fileMentions = new Set();
    const filePattern = /`([^`]+\.[a-zA-Z]+)`/g;
    let m;
    while ((m = filePattern.exec(content)) !== null) {
      if (m[1].includes('/') && !m[1].startsWith('http')) {
        fileMentions.add(m[1]);
      }
    }
    const fileCount = fileMentions.size;

    let template = 'templates/summary-standard.md';
    let type = 'standard';

    if (taskCount <= 2 && fileCount <= 3 && !hasDecisions) {
      template = 'templates/summary-minimal.md';
      type = 'minimal';
    } else if (hasDecisions || fileCount > 6 || taskCount > 5) {
      template = 'templates/summary-complex.md';
      type = 'complex';
    }

    const result = { template, type, taskCount, fileCount, hasDecisions };
    output(result, raw, template);
  } catch (e) {
    // Fallback to standard
    output({ template: 'templates/summary-standard.md', type: 'standard', error: e.message }, raw, 'templates/summary-standard.md');
  }
}

function cmdTemplateFill(cwd, templateType, options, raw) {
  if (!templateType) { error('template type required: summary, plan, or verification'); }
  if (!options.phase) { error('--phase required'); }

  const phaseInfo = findPhaseInternal(cwd, options.phase);
  if (!phaseInfo || !phaseInfo.found) { output({ error: 'Phase not found', phase: options.phase }, raw); return; }

  const padded = normalizePhaseName(options.phase);
  const today = new Date().toISOString().split('T')[0];
  const phaseName = options.name || phaseInfo.phase_name || 'Unnamed';
  const phaseSlug = phaseInfo.phase_slug || generateSlugInternal(phaseName);
  const phaseId = `${padded}-${phaseSlug}`;
  const planNum = (options.plan || '01').padStart(2, '0');
  const fields = options.fields || {};

  let frontmatter, body, fileName;

  switch (templateType) {
    case 'summary': {
      frontmatter = {
        phase: phaseId,
        plan: planNum,
        subsystem: '[primary category]',
        tags: [],
        provides: [],
        affects: [],
        'tech-stack': { added: [], patterns: [] },
        'key-files': { created: [], modified: [] },
        'key-decisions': [],
        'patterns-established': [],
        duration: '[X]min',
        completed: today,
        ...fields,
      };
      body = [
        `# Phase ${options.phase}: ${phaseName} Summary`,
        '',
        '**[Substantive one-liner describing outcome]**',
        '',
        '## Performance',
        '- **Duration:** [time]',
        '- **Tasks:** [count completed]',
        '- **Files modified:** [count]',
        '',
        '## Accomplishments',
        '- [Key outcome 1]',
        '- [Key outcome 2]',
        '',
        '## Task Commits',
        '1. **Task 1: [task name]** - `hash`',
        '',
        '## Files Created/Modified',
        '- `path/to/file.ts` - What it does',
        '',
        '## Decisions & Deviations',
        '[Key decisions or "None - followed plan as specified"]',
        '',
        '## Next Phase Readiness',
        '[What\'s ready for next phase]',
      ].join('\n');
      fileName = `${padded}-${planNum}-SUMMARY.md`;
      break;
    }
    case 'plan': {
      const planType = options.type || 'execute';
      const wave = parseInt(options.wave) || 1;
      frontmatter = {
        phase: phaseId,
        plan: planNum,
        type: planType,
        wave,
        depends_on: [],
        files_modified: [],
        autonomous: true,
        user_setup: [],
        must_haves: { truths: [], artifacts: [], key_links: [] },
        ...fields,
      };
      body = [
        `# Phase ${options.phase} Plan ${planNum}: [Title]`,
        '',
        '## Objective',
        '- **What:** [What this plan builds]',
        '- **Why:** [Why it matters for the phase goal]',
        '- **Output:** [Concrete deliverable]',
        '',
        '## Context',
        '@.planning/PROJECT.md',
        '@.planning/ROADMAP.md',
        '@.planning/STATE.md',
        '',
        '## Tasks',
        '',
        '<task type="code">',
        '  <name>[Task name]</name>',
        '  <files>[file paths]</files>',
        '  <action>[What to do]</action>',
        '  <verify>[How to verify]</verify>',
        '  <done>[Definition of done]</done>',
        '</task>',
        '',
        '## Verification',
        '[How to verify this plan achieved its objective]',
        '',
        '## Success Criteria',
        '- [ ] [Criterion 1]',
        '- [ ] [Criterion 2]',
      ].join('\n');
      fileName = `${padded}-${planNum}-PLAN.md`;
      break;
    }
    case 'verification': {
      frontmatter = {
        phase: phaseId,
        verified: new Date().toISOString(),
        status: 'pending',
        score: '0/0 must-haves verified',
        ...fields,
      };
      body = [
        `# Phase ${options.phase}: ${phaseName} — Verification`,
        '',
        '## Observable Truths',
        '| # | Truth | Status | Evidence |',
        '|---|-------|--------|----------|',
        '| 1 | [Truth] | pending | |',
        '',
        '## Required Artifacts',
        '| Artifact | Expected | Status | Details |',
        '|----------|----------|--------|---------|',
        '| [path] | [what] | pending | |',
        '',
        '## Key Link Verification',
        '| From | To | Via | Status | Details |',
        '|------|----|----|--------|---------|',
        '| [source] | [target] | [connection] | pending | |',
        '',
        '## Requirements Coverage',
        '| Requirement | Status | Blocking Issue |',
        '|-------------|--------|----------------|',
        '| [req] | pending | |',
        '',
        '## Result',
        '[Pending verification]',
      ].join('\n');
      fileName = `${padded}-VERIFICATION.md`;
      break;
    }
    default:
      error(`Unknown template type: ${templateType}. Available: summary, plan, verification`);
      return;
  }

  const fullContent = `---\n${reconstructFrontmatter(frontmatter)}\n---\n\n${body}\n`;
  const outPath = path.join(cwd, phaseInfo.directory, fileName);

  if (fs.existsSync(outPath)) {
    output({ error: 'File already exists', path: toPosixPath(path.relative(cwd, outPath)) }, raw);
    return;
  }

  fs.writeFileSync(outPath, normalizeMd(fullContent), 'utf-8');
  const relPath = toPosixPath(path.relative(cwd, outPath));
  output({ created: true, path: relPath, template: templateType }, raw, relPath);
}

module.exports = { cmdTemplateSelect, cmdTemplateFill };
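The threshold logic in cmdTemplateSelect can be distilled into a standalone sketch (the function name here is illustrative, not part of the module):

```javascript
// Illustrative distillation of cmdTemplateSelect's heuristics: small, decision-free
// plans get the minimal template; decision-heavy or wide-reaching plans get complex.
// Order matters: the minimal check wins before the complex check is considered.
function pickSummaryType(taskCount, fileCount, hasDecisions) {
  if (taskCount <= 2 && fileCount <= 3 && !hasDecisions) return 'minimal';
  if (hasDecisions || fileCount > 6 || taskCount > 5) return 'complex';
  return 'standard';
}

console.log(pickSummaryType(2, 3, false)); // → minimal
console.log(pickSummaryType(4, 5, false)); // → standard
console.log(pickSummaryType(3, 2, true));  // → complex
```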
842
get-shit-done/bin/lib/verify.cjs
Normal file
@@ -0,0 +1,842 @@
/**
 * Verify — Verification suite, consistency, and health validation
 */

const fs = require('fs');
const path = require('path');
const os = require('os');
const { safeReadFile, normalizePhaseName, execGit, findPhaseInternal, getMilestoneInfo, stripShippedMilestones, extractCurrentMilestone, output, error } = require('./core.cjs');
const { extractFrontmatter, parseMustHavesBlock } = require('./frontmatter.cjs');
const { writeStateMd } = require('./state.cjs');

function cmdVerifySummary(cwd, summaryPath, checkFileCount, raw) {
  if (!summaryPath) {
    error('summary-path required');
  }

  const fullPath = path.join(cwd, summaryPath);
  const checkCount = checkFileCount || 2;

  // Check 1: Summary exists
  if (!fs.existsSync(fullPath)) {
    const result = {
      passed: false,
      checks: {
        summary_exists: false,
        files_created: { checked: 0, found: 0, missing: [] },
        commits_exist: false,
        self_check: 'not_found',
      },
      errors: ['SUMMARY.md not found'],
    };
    output(result, raw, 'failed');
    return;
  }

  const content = fs.readFileSync(fullPath, 'utf-8');
  const errors = [];

  // Check 2: Spot-check files mentioned in summary
  const mentionedFiles = new Set();
  const patterns = [
    /`([^`]+\.[a-zA-Z]+)`/g,
    /(?:Created|Modified|Added|Updated|Edited):\s*`?([^\s`]+\.[a-zA-Z]+)`?/gi,
  ];

  for (const pattern of patterns) {
    let m;
    while ((m = pattern.exec(content)) !== null) {
      const filePath = m[1];
      if (filePath && !filePath.startsWith('http') && filePath.includes('/')) {
        mentionedFiles.add(filePath);
      }
    }
  }

  const filesToCheck = Array.from(mentionedFiles).slice(0, checkCount);
  const missing = [];
  for (const file of filesToCheck) {
    if (!fs.existsSync(path.join(cwd, file))) {
      missing.push(file);
    }
  }

  // Check 3: Commits exist (trim stdout to match cmdVerifyCommits)
  const commitHashPattern = /\b[0-9a-f]{7,40}\b/g;
  const hashes = content.match(commitHashPattern) || [];
  let commitsExist = false;
  if (hashes.length > 0) {
    for (const hash of hashes.slice(0, 3)) {
      const result = execGit(cwd, ['cat-file', '-t', hash]);
      if (result.exitCode === 0 && result.stdout.trim() === 'commit') {
        commitsExist = true;
        break;
      }
    }
  }

  // Check 4: Self-check section
  let selfCheck = 'not_found';
  const selfCheckPattern = /##\s*(?:Self[- ]?Check|Verification|Quality Check)/i;
  if (selfCheckPattern.test(content)) {
    const passPattern = /(?:all\s+)?(?:pass|✓|✅|complete|succeeded)/i;
    const failPattern = /(?:fail|✗|❌|incomplete|blocked)/i;
    const checkSection = content.slice(content.search(selfCheckPattern));
    if (failPattern.test(checkSection)) {
      selfCheck = 'failed';
    } else if (passPattern.test(checkSection)) {
      selfCheck = 'passed';
    }
  }

  if (missing.length > 0) errors.push('Missing files: ' + missing.join(', '));
  if (!commitsExist && hashes.length > 0) errors.push('Referenced commit hashes not found in git history');
  if (selfCheck === 'failed') errors.push('Self-check section indicates failure');

  const checks = {
    summary_exists: true,
    files_created: { checked: filesToCheck.length, found: filesToCheck.length - missing.length, missing },
    commits_exist: commitsExist,
    self_check: selfCheck,
  };

  const passed = missing.length === 0 && selfCheck !== 'failed';
  const result = { passed, checks, errors };
  output(result, raw, passed ? 'passed' : 'failed');
}
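The spot-check in Check 2 hinges on the backtick-path regex; a minimal standalone version of that extraction (the helper name is illustrative) behaves like this:

```javascript
// Illustrative standalone version of Check 2's first pattern: collect backtick-quoted
// paths that contain a slash and are not URLs. Relative filenames without a slash
// (e.g. `README.md`) are deliberately skipped, as in cmdVerifySummary.
function extractMentionedFiles(content) {
  const mentioned = new Set();
  const filePattern = /`([^`]+\.[a-zA-Z]+)`/g;
  let m;
  while ((m = filePattern.exec(content)) !== null) {
    if (m[1].includes('/') && !m[1].startsWith('http')) {
      mentioned.add(m[1]);
    }
  }
  return [...mentioned];
}

const summary = 'Created `src/auth/login.ts` and touched `README.md`; see `https://example.com/x.md`.';
console.log(extractMentionedFiles(summary)); // → [ 'src/auth/login.ts' ]
```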

function cmdVerifyPlanStructure(cwd, filePath, raw) {
  if (!filePath) { error('file path required'); }
  const fullPath = path.isAbsolute(filePath) ? filePath : path.join(cwd, filePath);
  const content = safeReadFile(fullPath);
  if (!content) { output({ error: 'File not found', path: filePath }, raw); return; }

  const fm = extractFrontmatter(content);
  const errors = [];
  const warnings = [];

  // Check required frontmatter fields
  const required = ['phase', 'plan', 'type', 'wave', 'depends_on', 'files_modified', 'autonomous', 'must_haves'];
  for (const field of required) {
    if (fm[field] === undefined) errors.push(`Missing required frontmatter field: ${field}`);
  }

  // Parse and check task elements
  const taskPattern = /<task[^>]*>([\s\S]*?)<\/task>/g;
  const tasks = [];
  let taskMatch;
  while ((taskMatch = taskPattern.exec(content)) !== null) {
    const taskContent = taskMatch[1];
    const nameMatch = taskContent.match(/<name>([\s\S]*?)<\/name>/);
    const taskName = nameMatch ? nameMatch[1].trim() : 'unnamed';
    const hasFiles = /<files>/.test(taskContent);
    const hasAction = /<action>/.test(taskContent);
    const hasVerify = /<verify>/.test(taskContent);
    const hasDone = /<done>/.test(taskContent);

    if (!nameMatch) errors.push('Task missing <name> element');
    if (!hasAction) errors.push(`Task '${taskName}' missing <action>`);
    if (!hasVerify) warnings.push(`Task '${taskName}' missing <verify>`);
    if (!hasDone) warnings.push(`Task '${taskName}' missing <done>`);
    if (!hasFiles) warnings.push(`Task '${taskName}' missing <files>`);

    tasks.push({ name: taskName, hasFiles, hasAction, hasVerify, hasDone });
  }

  if (tasks.length === 0) warnings.push('No <task> elements found');

  // Wave/depends_on consistency
  if (fm.wave && parseInt(fm.wave) > 1 && (!fm.depends_on || (Array.isArray(fm.depends_on) && fm.depends_on.length === 0))) {
    warnings.push('Wave > 1 but depends_on is empty');
  }

  // Autonomous/checkpoint consistency
  const hasCheckpoints = /<task\s+type=["']?checkpoint/.test(content);
  if (hasCheckpoints && fm.autonomous !== 'false' && fm.autonomous !== false) {
    errors.push('Has checkpoint tasks but autonomous is not false');
  }

  output({
    valid: errors.length === 0,
    errors,
    warnings,
    task_count: tasks.length,
    tasks,
    frontmatter_fields: Object.keys(fm),
  }, raw, errors.length === 0 ? 'valid' : 'invalid');
}

function cmdVerifyPhaseCompleteness(cwd, phase, raw) {
  if (!phase) { error('phase required'); }
  const phaseInfo = findPhaseInternal(cwd, phase);
  if (!phaseInfo || !phaseInfo.found) {
    output({ error: 'Phase not found', phase }, raw);
    return;
  }

  const errors = [];
  const warnings = [];
  const phaseDir = path.join(cwd, phaseInfo.directory);

  // List plans and summaries
  let files;
  try { files = fs.readdirSync(phaseDir); } catch { output({ error: 'Cannot read phase directory' }, raw); return; }

  const plans = files.filter(f => f.match(/-PLAN\.md$/i));
  const summaries = files.filter(f => f.match(/-SUMMARY\.md$/i));

  // Extract plan IDs (everything before -PLAN.md)
  const planIds = new Set(plans.map(p => p.replace(/-PLAN\.md$/i, '')));
  const summaryIds = new Set(summaries.map(s => s.replace(/-SUMMARY\.md$/i, '')));

  // Plans without summaries
  const incompletePlans = [...planIds].filter(id => !summaryIds.has(id));
  if (incompletePlans.length > 0) {
    errors.push(`Plans without summaries: ${incompletePlans.join(', ')}`);
  }

  // Summaries without plans (orphans)
  const orphanSummaries = [...summaryIds].filter(id => !planIds.has(id));
  if (orphanSummaries.length > 0) {
    warnings.push(`Summaries without plans: ${orphanSummaries.join(', ')}`);
  }

  output({
    complete: errors.length === 0,
    phase: phaseInfo.phase_number,
    plan_count: plans.length,
    summary_count: summaries.length,
    incomplete_plans: incompletePlans,
    orphan_summaries: orphanSummaries,
    errors,
    warnings,
  }, raw, errors.length === 0 ? 'complete' : 'incomplete');
}
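The pairing logic in cmdVerifyPhaseCompleteness reduces to a set difference over plan and summary IDs; a standalone sketch (helper name illustrative, operating on a plain filename list instead of a directory read):

```javascript
// Illustrative reduction of cmdVerifyPhaseCompleteness: a phase is complete when
// every NN-PLAN.md has a matching NN-SUMMARY.md sharing the same ID prefix.
function incompletePlanIds(files) {
  const planIds = files.filter(f => /-PLAN\.md$/i.test(f)).map(f => f.replace(/-PLAN\.md$/i, ''));
  const summaryIds = new Set(
    files.filter(f => /-SUMMARY\.md$/i.test(f)).map(f => f.replace(/-SUMMARY\.md$/i, ''))
  );
  return planIds.filter(id => !summaryIds.has(id));
}

const files = ['03-01-PLAN.md', '03-01-SUMMARY.md', '03-02-PLAN.md', 'notes.md'];
console.log(incompletePlanIds(files)); // → [ '03-02' ]
```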
|
||||
|
||||
function cmdVerifyReferences(cwd, filePath, raw) {
|
||||
if (!filePath) { error('file path required'); }
|
||||
const fullPath = path.isAbsolute(filePath) ? filePath : path.join(cwd, filePath);
|
||||
const content = safeReadFile(fullPath);
|
||||
if (!content) { output({ error: 'File not found', path: filePath }, raw); return; }
|
||||
|
||||
const found = [];
|
||||
const missing = [];
|
||||
|
||||
// Find @-references: @path/to/file (must contain / to be a file path)
|
||||
const atRefs = content.match(/@([^\s\n,)]+\/[^\s\n,)]+)/g) || [];
|
||||
for (const ref of atRefs) {
|
||||
const cleanRef = ref.slice(1); // remove @
|
||||
const resolved = cleanRef.startsWith('~/')
|
||||
? path.join(process.env.HOME || '', cleanRef.slice(2))
|
||||
: path.join(cwd, cleanRef);
|
||||
if (fs.existsSync(resolved)) {
|
||||
found.push(cleanRef);
|
||||
} else {
|
||||
missing.push(cleanRef);
|
||||
}
|
||||
}
|
||||
|
||||
// Find backtick file paths that look like real paths (contain / and have extension)
|
||||
const backtickRefs = content.match(/`([^`]+\/[^`]+\.[a-zA-Z]{1,10})`/g) || [];
|
||||
for (const ref of backtickRefs) {
|
||||
const cleanRef = ref.slice(1, -1); // remove backticks
|
||||
if (cleanRef.startsWith('http') || cleanRef.includes('${') || cleanRef.includes('{{')) continue;
|
||||
if (found.includes(cleanRef) || missing.includes(cleanRef)) continue; // dedup
|
||||
const resolved = path.join(cwd, cleanRef);
|
||||
if (fs.existsSync(resolved)) {
|
||||
found.push(cleanRef);
|
||||
} else {
|
||||
missing.push(cleanRef);
|
||||
}
|
||||
}
|
||||
|
||||
output({
|
||||
valid: missing.length === 0,
|
||||
found: found.length,
|
||||
missing,
|
||||
total: found.length + missing.length,
|
||||
}, raw, missing.length === 0 ? 'valid' : 'invalid');
|
||||
}
|
||||
|
||||
function cmdVerifyCommits(cwd, hashes, raw) {
|
||||
if (!hashes || hashes.length === 0) { error('At least one commit hash required'); }
|
||||
|
||||
const valid = [];
|
||||
const invalid = [];
|
||||
for (const hash of hashes) {
|
||||
const result = execGit(cwd, ['cat-file', '-t', hash]);
|
||||
if (result.exitCode === 0 && result.stdout.trim() === 'commit') {
|
||||
valid.push(hash);
|
||||
} else {
|
||||
invalid.push(hash);
|
||||
}
|
||||
}
|
||||
|
||||
output({
|
||||
all_valid: invalid.length === 0,
|
||||
valid,
|
||||
invalid,
|
||||
total: hashes.length,
|
||||
}, raw, invalid.length === 0 ? 'valid' : 'invalid');
|
||||
}
|
||||
|
||||
function cmdVerifyArtifacts(cwd, planFilePath, raw) {
|
||||
if (!planFilePath) { error('plan file path required'); }
|
||||
const fullPath = path.isAbsolute(planFilePath) ? planFilePath : path.join(cwd, planFilePath);
|
||||
const content = safeReadFile(fullPath);
|
||||
if (!content) { output({ error: 'File not found', path: planFilePath }, raw); return; }
|
||||
|
||||
const artifacts = parseMustHavesBlock(content, 'artifacts');
|
||||
if (artifacts.length === 0) {
|
||||
output({ error: 'No must_haves.artifacts found in frontmatter', path: planFilePath }, raw);
|
||||
return;
|
||||
}
|
||||
|
||||
const results = [];
|
||||
for (const artifact of artifacts) {
|
||||
if (typeof artifact === 'string') continue; // skip simple string items
|
||||
const artPath = artifact.path;
|
||||
if (!artPath) continue;
|
||||
|
||||
const artFullPath = path.join(cwd, artPath);
|
||||
const exists = fs.existsSync(artFullPath);
|
||||
const check = { path: artPath, exists, issues: [], passed: false };
|
||||
|
||||
if (exists) {
|
||||
const fileContent = safeReadFile(artFullPath) || '';
|
||||
const lineCount = fileContent.split('\n').length;
|
||||
|
||||
if (artifact.min_lines && lineCount < artifact.min_lines) {
|
||||
check.issues.push(`Only ${lineCount} lines, need ${artifact.min_lines}`);
|
||||
}
|
||||
if (artifact.contains && !fileContent.includes(artifact.contains)) {
|
||||
check.issues.push(`Missing pattern: ${artifact.contains}`);
|
||||
}
|
||||
if (artifact.exports) {
|
||||
const exports = Array.isArray(artifact.exports) ? artifact.exports : [artifact.exports];
|
||||
for (const exp of exports) {
|
||||
if (!fileContent.includes(exp)) check.issues.push(`Missing export: ${exp}`);
|
||||
}
|
||||
}
|
||||
check.passed = check.issues.length === 0;
|
||||
} else {
|
||||
check.issues.push('File not found');
|
||||
}
|
||||
|
||||
results.push(check);
|
||||
}
|
||||
|
||||
const passed = results.filter(r => r.passed).length;
|
||||
output({
|
||||
all_passed: passed === results.length,
|
||||
passed,
|
||||
total: results.length,
|
||||
artifacts: results,
|
||||
}, raw, passed === results.length ? 'valid' : 'invalid');
|
||||
}
|
||||
|
||||
function cmdVerifyKeyLinks(cwd, planFilePath, raw) {
  if (!planFilePath) { error('plan file path required'); }
  const fullPath = path.isAbsolute(planFilePath) ? planFilePath : path.join(cwd, planFilePath);
  const content = safeReadFile(fullPath);
  if (!content) { output({ error: 'File not found', path: planFilePath }, raw); return; }

  const keyLinks = parseMustHavesBlock(content, 'key_links');
  if (keyLinks.length === 0) {
    output({ error: 'No must_haves.key_links found in frontmatter', path: planFilePath }, raw);
    return;
  }

  const results = [];
  for (const link of keyLinks) {
    if (typeof link === 'string') continue;
    const check = { from: link.from, to: link.to, via: link.via || '', verified: false, detail: '' };

    const sourceContent = safeReadFile(path.join(cwd, link.from || ''));
    if (!sourceContent) {
      check.detail = 'Source file not found';
    } else if (link.pattern) {
      try {
        const regex = new RegExp(link.pattern);
        if (regex.test(sourceContent)) {
          check.verified = true;
          check.detail = 'Pattern found in source';
        } else {
          const targetContent = safeReadFile(path.join(cwd, link.to || ''));
          if (targetContent && regex.test(targetContent)) {
            check.verified = true;
            check.detail = 'Pattern found in target';
          } else {
            check.detail = `Pattern "${link.pattern}" not found in source or target`;
          }
        }
      } catch {
        check.detail = `Invalid regex pattern: ${link.pattern}`;
      }
    } else {
      // No pattern: just check source references target
      if (sourceContent.includes(link.to || '')) {
        check.verified = true;
        check.detail = 'Target referenced in source';
      } else {
        check.detail = 'Target not referenced in source';
      }
    }

    results.push(check);
  }

  const verified = results.filter(r => r.verified).length;
  output({
    all_verified: verified === results.length,
    verified,
    total: results.length,
    links: results,
  }, raw, verified === results.length ? 'valid' : 'invalid');
}

function cmdValidateConsistency(cwd, raw) {
  const roadmapPath = path.join(cwd, '.planning', 'ROADMAP.md');
  const phasesDir = path.join(cwd, '.planning', 'phases');
  const errors = [];
  const warnings = [];

  // Check for ROADMAP
  if (!fs.existsSync(roadmapPath)) {
    errors.push('ROADMAP.md not found');
    output({ passed: false, errors, warnings }, raw, 'failed');
    return;
  }

  const roadmapContentRaw = fs.readFileSync(roadmapPath, 'utf-8');
  const roadmapContent = extractCurrentMilestone(roadmapContentRaw, cwd);

  // Extract phases from ROADMAP (archived milestones already stripped)
  const roadmapPhases = new Set();
  const phasePattern = /#{2,4}\s*Phase\s+(\d+[A-Z]?(?:\.\d+)*)\s*:/gi;
  let m;
  while ((m = phasePattern.exec(roadmapContent)) !== null) {
    roadmapPhases.add(m[1]);
  }

  // Get phases on disk
  const diskPhases = new Set();
  try {
    const entries = fs.readdirSync(phasesDir, { withFileTypes: true });
    const dirs = entries.filter(e => e.isDirectory()).map(e => e.name);
    for (const dir of dirs) {
      const dm = dir.match(/^(\d+[A-Z]?(?:\.\d+)*)/i);
      if (dm) diskPhases.add(dm[1]);
    }
  } catch {}

  // Check: phases in ROADMAP but not on disk
  for (const p of roadmapPhases) {
    if (!diskPhases.has(p) && !diskPhases.has(normalizePhaseName(p))) {
      warnings.push(`Phase ${p} in ROADMAP.md but no directory on disk`);
    }
  }

  // Check: phases on disk but not in ROADMAP
  for (const p of diskPhases) {
    const unpadded = String(parseInt(p, 10));
    if (!roadmapPhases.has(p) && !roadmapPhases.has(unpadded)) {
      warnings.push(`Phase ${p} exists on disk but not in ROADMAP.md`);
    }
  }

  // Check: sequential phase numbers (integers only)
  const integerPhases = [...diskPhases]
    .filter(p => !p.includes('.'))
    .map(p => parseInt(p, 10))
    .sort((a, b) => a - b);

  for (let i = 1; i < integerPhases.length; i++) {
    if (integerPhases[i] !== integerPhases[i - 1] + 1) {
      warnings.push(`Gap in phase numbering: ${integerPhases[i - 1]} → ${integerPhases[i]}`);
    }
  }

  // Check: plan numbering within phases
  try {
    const entries = fs.readdirSync(phasesDir, { withFileTypes: true });
    const dirs = entries.filter(e => e.isDirectory()).map(e => e.name).sort();

    for (const dir of dirs) {
      const phaseFiles = fs.readdirSync(path.join(phasesDir, dir));
      const plans = phaseFiles.filter(f => f.endsWith('-PLAN.md')).sort();

      // Extract plan numbers
      const planNums = plans.map(p => {
        const pm = p.match(/-(\d{2})-PLAN\.md$/);
        return pm ? parseInt(pm[1], 10) : null;
      }).filter(n => n !== null);

      for (let i = 1; i < planNums.length; i++) {
        if (planNums[i] !== planNums[i - 1] + 1) {
          warnings.push(`Gap in plan numbering in ${dir}: plan ${planNums[i - 1]} → ${planNums[i]}`);
        }
      }

      // Check: plans without summaries (completed plans)
      const summaries = phaseFiles.filter(f => f.endsWith('-SUMMARY.md'));
      const planIds = new Set(plans.map(p => p.replace('-PLAN.md', '')));
      const summaryIds = new Set(summaries.map(s => s.replace('-SUMMARY.md', '')));

      // Summary without matching plan is suspicious
      for (const sid of summaryIds) {
        if (!planIds.has(sid)) {
          warnings.push(`Summary ${sid}-SUMMARY.md in ${dir} has no matching PLAN.md`);
        }
      }
    }
  } catch {}

  // Check: frontmatter in plans has required fields
  try {
    const entries = fs.readdirSync(phasesDir, { withFileTypes: true });
    const dirs = entries.filter(e => e.isDirectory()).map(e => e.name);

    for (const dir of dirs) {
      const phaseFiles = fs.readdirSync(path.join(phasesDir, dir));
      const plans = phaseFiles.filter(f => f.endsWith('-PLAN.md'));

      for (const plan of plans) {
        const content = fs.readFileSync(path.join(phasesDir, dir, plan), 'utf-8');
        const fm = extractFrontmatter(content);

        if (!fm.wave) {
          warnings.push(`${dir}/${plan}: missing 'wave' in frontmatter`);
        }
      }
    }
  } catch {}

  const passed = errors.length === 0;
  output({ passed, errors, warnings, warning_count: warnings.length }, raw, passed ? 'passed' : 'failed');
}

function cmdValidateHealth(cwd, options, raw) {
  // Guard: detect if CWD is the home directory (likely accidental)
  const resolved = path.resolve(cwd);
  if (resolved === os.homedir()) {
    output({
      status: 'error',
      errors: [{ code: 'E010', message: `CWD is home directory (${resolved}) — health check would read the wrong .planning/ directory. Run from your project root instead.`, fix: 'cd into your project directory and retry' }],
      warnings: [],
      info: [{ code: 'I010', message: `Resolved CWD: ${resolved}` }],
      repairable_count: 0,
    }, raw);
    return;
  }

  const planningDir = path.join(cwd, '.planning');
  const projectPath = path.join(planningDir, 'PROJECT.md');
  const roadmapPath = path.join(planningDir, 'ROADMAP.md');
  const statePath = path.join(planningDir, 'STATE.md');
  const configPath = path.join(planningDir, 'config.json');
  const phasesDir = path.join(planningDir, 'phases');

  const errors = [];
  const warnings = [];
  const info = [];
  const repairs = [];

  // Helper to add issue
  const addIssue = (severity, code, message, fix, repairable = false) => {
    const issue = { code, message, fix, repairable };
    if (severity === 'error') errors.push(issue);
    else if (severity === 'warning') warnings.push(issue);
    else info.push(issue);
  };

  // ─── Check 1: .planning/ exists ───────────────────────────────────────────
  if (!fs.existsSync(planningDir)) {
    addIssue('error', 'E001', '.planning/ directory not found', 'Run /gsd:new-project to initialize');
    output({
      status: 'broken',
      errors,
      warnings,
      info,
      repairable_count: 0,
    }, raw);
    return;
  }

  // ─── Check 2: PROJECT.md exists and has required sections ─────────────────
  if (!fs.existsSync(projectPath)) {
    addIssue('error', 'E002', 'PROJECT.md not found', 'Run /gsd:new-project to create');
  } else {
    const content = fs.readFileSync(projectPath, 'utf-8');
    const requiredSections = ['## What This Is', '## Core Value', '## Requirements'];
    for (const section of requiredSections) {
      if (!content.includes(section)) {
        addIssue('warning', 'W001', `PROJECT.md missing section: ${section}`, 'Add section manually');
      }
    }
  }

  // ─── Check 3: ROADMAP.md exists ───────────────────────────────────────────
  if (!fs.existsSync(roadmapPath)) {
    addIssue('error', 'E003', 'ROADMAP.md not found', 'Run /gsd:new-milestone to create roadmap');
  }

  // ─── Check 4: STATE.md exists and references valid phases ─────────────────
  if (!fs.existsSync(statePath)) {
    addIssue('error', 'E004', 'STATE.md not found', 'Run /gsd:health --repair to regenerate', true);
    repairs.push('regenerateState');
  } else {
    const stateContent = fs.readFileSync(statePath, 'utf-8');
    // Extract phase references from STATE.md
    const phaseRefs = [...stateContent.matchAll(/[Pp]hase\s+(\d+(?:\.\d+)*)/g)].map(m => m[1]);
    // Get disk phases
    const diskPhases = new Set();
    try {
      const entries = fs.readdirSync(phasesDir, { withFileTypes: true });
      for (const e of entries) {
        if (e.isDirectory()) {
          const m = e.name.match(/^(\d+(?:\.\d+)*)/);
          if (m) diskPhases.add(m[1]);
        }
      }
    } catch {}
    // Check for invalid references
    for (const ref of phaseRefs) {
      const normalizedRef = String(parseInt(ref, 10)).padStart(2, '0');
      if (!diskPhases.has(ref) && !diskPhases.has(normalizedRef) && !diskPhases.has(String(parseInt(ref, 10)))) {
        // Only warn if phases dir has any content (not just an empty project)
        if (diskPhases.size > 0) {
          addIssue('warning', 'W002', `STATE.md references phase ${ref}, but only phases ${[...diskPhases].sort().join(', ')} exist`, 'Run /gsd:health --repair to regenerate STATE.md', true);
          if (!repairs.includes('regenerateState')) repairs.push('regenerateState');
        }
      }
    }
  }

  // ─── Check 5: config.json valid JSON + valid schema ───────────────────────
  if (!fs.existsSync(configPath)) {
    addIssue('warning', 'W003', 'config.json not found', 'Run /gsd:health --repair to create with defaults', true);
    repairs.push('createConfig');
  } else {
    try {
      const raw = fs.readFileSync(configPath, 'utf-8');
      const parsed = JSON.parse(raw);
      // Validate known fields
      const validProfiles = ['quality', 'balanced', 'budget', 'inherit'];
      if (parsed.model_profile && !validProfiles.includes(parsed.model_profile)) {
        addIssue('warning', 'W004', `config.json: invalid model_profile "${parsed.model_profile}"`, `Valid values: ${validProfiles.join(', ')}`);
      }
    } catch (err) {
      addIssue('error', 'E005', `config.json: JSON parse error - ${err.message}`, 'Run /gsd:health --repair to reset to defaults', true);
      repairs.push('resetConfig');
    }
  }

  // ─── Check 5b: Nyquist validation key presence ──────────────────────────
  if (fs.existsSync(configPath)) {
    try {
      const configRaw = fs.readFileSync(configPath, 'utf-8');
      const configParsed = JSON.parse(configRaw);
      if (configParsed.workflow && configParsed.workflow.nyquist_validation === undefined) {
        addIssue('warning', 'W008', 'config.json: workflow.nyquist_validation absent (defaults to enabled but agents may skip)', 'Run /gsd:health --repair to add key', true);
        if (!repairs.includes('addNyquistKey')) repairs.push('addNyquistKey');
      }
    } catch {}
  }

  // ─── Check 6: Phase directory naming (NN-name format) ─────────────────────
  try {
    const entries = fs.readdirSync(phasesDir, { withFileTypes: true });
    for (const e of entries) {
      if (e.isDirectory() && !e.name.match(/^\d{2}(?:\.\d+)*-[\w-]+$/)) {
        addIssue('warning', 'W005', `Phase directory "${e.name}" doesn't follow NN-name format`, 'Rename to match pattern (e.g., 01-setup)');
      }
    }
  } catch {}

  // ─── Check 7: Orphaned plans (PLAN without SUMMARY) ───────────────────────
  try {
    const entries = fs.readdirSync(phasesDir, { withFileTypes: true });
    for (const e of entries) {
      if (!e.isDirectory()) continue;
      const phaseFiles = fs.readdirSync(path.join(phasesDir, e.name));
      const plans = phaseFiles.filter(f => f.endsWith('-PLAN.md') || f === 'PLAN.md');
      const summaries = phaseFiles.filter(f => f.endsWith('-SUMMARY.md') || f === 'SUMMARY.md');
      const summaryBases = new Set(summaries.map(s => s.replace('-SUMMARY.md', '').replace('SUMMARY.md', '')));

      for (const plan of plans) {
        const planBase = plan.replace('-PLAN.md', '').replace('PLAN.md', '');
        if (!summaryBases.has(planBase)) {
          addIssue('info', 'I001', `${e.name}/${plan} has no SUMMARY.md`, 'May be in progress');
        }
      }
    }
  } catch {}

  // ─── Check 7b: Nyquist VALIDATION.md consistency ────────────────────────
  try {
    const phaseEntries = fs.readdirSync(phasesDir, { withFileTypes: true });
    for (const e of phaseEntries) {
      if (!e.isDirectory()) continue;
      const phaseFiles = fs.readdirSync(path.join(phasesDir, e.name));
      const hasResearch = phaseFiles.some(f => f.endsWith('-RESEARCH.md'));
      const hasValidation = phaseFiles.some(f => f.endsWith('-VALIDATION.md'));
      if (hasResearch && !hasValidation) {
        const researchFile = phaseFiles.find(f => f.endsWith('-RESEARCH.md'));
        const researchContent = fs.readFileSync(path.join(phasesDir, e.name, researchFile), 'utf-8');
        if (researchContent.includes('## Validation Architecture')) {
          addIssue('warning', 'W009', `Phase ${e.name}: has Validation Architecture in RESEARCH.md but no VALIDATION.md`, 'Re-run /gsd:plan-phase with --research to regenerate');
        }
      }
    }
  } catch {}

  // ─── Check 8: Run existing consistency checks ─────────────────────────────
  // Inline subset of cmdValidateConsistency
  if (fs.existsSync(roadmapPath)) {
    const roadmapContentRaw = fs.readFileSync(roadmapPath, 'utf-8');
    const roadmapContent = extractCurrentMilestone(roadmapContentRaw, cwd);
    const roadmapPhases = new Set();
    const phasePattern = /#{2,4}\s*Phase\s+(\d+[A-Z]?(?:\.\d+)*)\s*:/gi;
    let m;
    while ((m = phasePattern.exec(roadmapContent)) !== null) {
      roadmapPhases.add(m[1]);
    }

    const diskPhases = new Set();
    try {
      const entries = fs.readdirSync(phasesDir, { withFileTypes: true });
      for (const e of entries) {
        if (e.isDirectory()) {
          const dm = e.name.match(/^(\d+[A-Z]?(?:\.\d+)*)/i);
          if (dm) diskPhases.add(dm[1]);
        }
      }
    } catch {}

    // Phases in ROADMAP but not on disk
    for (const p of roadmapPhases) {
      const padded = String(parseInt(p, 10)).padStart(2, '0');
      if (!diskPhases.has(p) && !diskPhases.has(padded)) {
        addIssue('warning', 'W006', `Phase ${p} in ROADMAP.md but no directory on disk`, 'Create phase directory or remove from roadmap');
      }
    }

    // Phases on disk but not in ROADMAP
    for (const p of diskPhases) {
      const unpadded = String(parseInt(p, 10));
      if (!roadmapPhases.has(p) && !roadmapPhases.has(unpadded)) {
        addIssue('warning', 'W007', `Phase ${p} exists on disk but not in ROADMAP.md`, 'Add to roadmap or remove directory');
      }
    }
  }

  // ─── Perform repairs if requested ─────────────────────────────────────────
  const repairActions = [];
  if (options.repair && repairs.length > 0) {
    for (const repair of repairs) {
      try {
        switch (repair) {
          case 'createConfig':
          case 'resetConfig': {
            const defaults = {
              model_profile: 'balanced',
              commit_docs: true,
              search_gitignored: false,
              branching_strategy: 'none',
              phase_branch_template: 'gsd/phase-{phase}-{slug}',
              milestone_branch_template: 'gsd/{milestone}-{slug}',
              workflow: {
                research: true,
                plan_check: true,
                verifier: true,
                nyquist_validation: true,
              },
              parallelization: true,
              brave_search: false,
            };
            fs.writeFileSync(configPath, JSON.stringify(defaults, null, 2), 'utf-8');
            repairActions.push({ action: repair, success: true, path: 'config.json' });
            break;
          }
          case 'regenerateState': {
            // Create timestamped backup before overwriting
            if (fs.existsSync(statePath)) {
              const timestamp = new Date().toISOString().replace(/[:.]/g, '-').slice(0, 19);
              const backupPath = `${statePath}.bak-${timestamp}`;
              fs.copyFileSync(statePath, backupPath);
              repairActions.push({ action: 'backupState', success: true, path: backupPath });
            }
            // Generate minimal STATE.md from ROADMAP.md structure
            const milestone = getMilestoneInfo(cwd);
            let stateContent = `# Session State\n\n`;
            stateContent += `## Project Reference\n\n`;
            stateContent += `See: .planning/PROJECT.md\n\n`;
            stateContent += `## Position\n\n`;
            stateContent += `**Milestone:** ${milestone.version} ${milestone.name}\n`;
            stateContent += `**Current phase:** (determining...)\n`;
            stateContent += `**Status:** Resuming\n\n`;
            stateContent += `## Session Log\n\n`;
            stateContent += `- ${new Date().toISOString().split('T')[0]}: STATE.md regenerated by /gsd:health --repair\n`;
            writeStateMd(statePath, stateContent, cwd);
            repairActions.push({ action: repair, success: true, path: 'STATE.md' });
            break;
          }
          case 'addNyquistKey': {
            if (fs.existsSync(configPath)) {
              try {
                const configRaw = fs.readFileSync(configPath, 'utf-8');
                const configParsed = JSON.parse(configRaw);
                if (!configParsed.workflow) configParsed.workflow = {};
                if (configParsed.workflow.nyquist_validation === undefined) {
                  configParsed.workflow.nyquist_validation = true;
                  fs.writeFileSync(configPath, JSON.stringify(configParsed, null, 2), 'utf-8');
                }
                repairActions.push({ action: repair, success: true, path: 'config.json' });
              } catch (err) {
                repairActions.push({ action: repair, success: false, error: err.message });
              }
            }
            break;
          }
        }
      } catch (err) {
        repairActions.push({ action: repair, success: false, error: err.message });
      }
    }
  }

  // ─── Determine overall status ─────────────────────────────────────────────
  let status;
  if (errors.length > 0) {
    status = 'broken';
  } else if (warnings.length > 0) {
    status = 'degraded';
  } else {
    status = 'healthy';
  }

  const repairableCount = errors.filter(e => e.repairable).length +
    warnings.filter(w => w.repairable).length;

  output({
    status,
    errors,
    warnings,
    info,
    repairable_count: repairableCount,
    repairs_performed: repairActions.length > 0 ? repairActions : undefined,
  }, raw);
}

module.exports = {
  cmdVerifySummary,
  cmdVerifyPlanStructure,
  cmdVerifyPhaseCompleteness,
  cmdVerifyReferences,
  cmdVerifyCommits,
  cmdVerifyArtifacts,
  cmdVerifyKeyLinks,
  cmdValidateConsistency,
  cmdValidateHealth,
};
778
get-shit-done/references/checkpoints.md
Normal file
@@ -0,0 +1,778 @@
<overview>
Plans execute autonomously. Checkpoints formalize interaction points where human verification or decisions are needed.

**Core principle:** Claude automates everything with CLI/API. Checkpoints are for verification and decisions, not manual work.

**Golden rules:**
1. **If Claude can run it, Claude runs it** - Never ask user to execute CLI commands, start servers, or run builds
2. **Claude sets up the verification environment** - Start dev servers, seed databases, configure env vars
3. **User only does what requires human judgment** - Visual checks, UX evaluation, "does this feel right?"
4. **Secrets come from user, automation comes from Claude** - Ask for API keys, then Claude uses them via CLI
5. **Auto-mode bypasses verification/decision checkpoints** — When `workflow._auto_chain_active` or `workflow.auto_advance` is true in config: human-verify auto-approves, decision auto-selects first option, human-action still stops (auth gates cannot be automated)
</overview>

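Golden rule 5 keys off two config flags. A sketch of where they would live in the project's `config.json` (the two flag names come from the rule above; the surrounding layout is illustrative, not a complete config):

```json
{
  "workflow": {
    "auto_advance": true,
    "_auto_chain_active": false
  }
}
```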
<checkpoint_types>

<type name="human-verify">
## checkpoint:human-verify (Most Common - 90%)

**When:** Claude has completed automated work and a human confirms it works correctly.

**Use for:**
- Visual UI checks (layout, styling, responsiveness)
- Interactive flows (click through wizard, test user flows)
- Functional verification (feature works as expected)
- Audio/video playback quality
- Animation smoothness
- Accessibility testing

**Structure:**
```xml
<task type="checkpoint:human-verify" gate="blocking">
  <what-built>[What Claude automated and deployed/built]</what-built>
  <how-to-verify>
    [Exact steps to test - URLs, commands, expected behavior]
  </how-to-verify>
  <resume-signal>[How to continue - "approved", "yes", or describe issues]</resume-signal>
</task>
```

**Example: UI Component (shows key pattern: Claude starts server BEFORE checkpoint)**
```xml
<task type="auto">
  <name>Build responsive dashboard layout</name>
  <files>src/components/Dashboard.tsx, src/app/dashboard/page.tsx</files>
  <action>Create dashboard with sidebar, header, and content area. Use Tailwind responsive classes for mobile.</action>
  <verify>npm run build succeeds, no TypeScript errors</verify>
  <done>Dashboard component builds without errors</done>
</task>

<task type="auto">
  <name>Start dev server for verification</name>
  <action>Run `npm run dev` in background, wait for "ready" message, capture port</action>
  <verify>fetch http://localhost:3000 returns 200</verify>
  <done>Dev server running at http://localhost:3000</done>
</task>

<task type="checkpoint:human-verify" gate="blocking">
  <what-built>Responsive dashboard layout - dev server running at http://localhost:3000</what-built>
  <how-to-verify>
    Visit http://localhost:3000/dashboard and verify:
    1. Desktop (>1024px): Sidebar left, content right, header top
    2. Tablet (768px): Sidebar collapses to hamburger menu
    3. Mobile (375px): Single column layout, bottom nav appears
    4. No layout shift or horizontal scroll at any size
  </how-to-verify>
  <resume-signal>Type "approved" or describe layout issues</resume-signal>
</task>
```

**Example: Xcode Build**
```xml
<task type="auto">
  <name>Build macOS app with Xcode</name>
  <files>App.xcodeproj, Sources/</files>
  <action>Run `xcodebuild -project App.xcodeproj -scheme App build`. Check for compilation errors in output.</action>
  <verify>Build output contains "BUILD SUCCEEDED", no errors</verify>
  <done>App builds successfully</done>
</task>

<task type="checkpoint:human-verify" gate="blocking">
  <what-built>Built macOS app at DerivedData/Build/Products/Debug/App.app</what-built>
  <how-to-verify>
    Open App.app and test:
    - App launches without crashes
    - Menu bar icon appears
    - Preferences window opens correctly
    - No visual glitches or layout issues
  </how-to-verify>
  <resume-signal>Type "approved" or describe issues</resume-signal>
</task>
```
</type>

<type name="decision">
## checkpoint:decision (9%)

**When:** Human must make a choice that affects implementation direction.

**Use for:**
- Technology selection (which auth provider, which database)
- Architecture decisions (monorepo vs separate repos)
- Design choices (color scheme, layout approach)
- Feature prioritization (which variant to build)
- Data model decisions (schema structure)

**Structure:**
```xml
<task type="checkpoint:decision" gate="blocking">
  <decision>[What's being decided]</decision>
  <context>[Why this decision matters]</context>
  <options>
    <option id="option-a">
      <name>[Option name]</name>
      <pros>[Benefits]</pros>
      <cons>[Tradeoffs]</cons>
    </option>
    <option id="option-b">
      <name>[Option name]</name>
      <pros>[Benefits]</pros>
      <cons>[Tradeoffs]</cons>
    </option>
  </options>
  <resume-signal>[How to indicate choice]</resume-signal>
</task>
```

**Example: Auth Provider Selection**
```xml
<task type="checkpoint:decision" gate="blocking">
  <decision>Select authentication provider</decision>
  <context>
    Need user authentication for the app. Three solid options with different tradeoffs.
  </context>
  <options>
    <option id="supabase">
      <name>Supabase Auth</name>
      <pros>Built-in with Supabase DB we're using, generous free tier, row-level security integration</pros>
      <cons>Less customizable UI, tied to Supabase ecosystem</cons>
    </option>
    <option id="clerk">
      <name>Clerk</name>
      <pros>Beautiful pre-built UI, best developer experience, excellent docs</pros>
      <cons>Paid after 10k MAU, vendor lock-in</cons>
    </option>
    <option id="nextauth">
      <name>NextAuth.js</name>
      <pros>Free, self-hosted, maximum control, widely adopted</pros>
      <cons>More setup work, you manage security updates, UI is DIY</cons>
    </option>
  </options>
  <resume-signal>Select: supabase, clerk, or nextauth</resume-signal>
</task>
```

**Example: Database Selection**
```xml
<task type="checkpoint:decision" gate="blocking">
  <decision>Select database for user data</decision>
  <context>
    App needs persistent storage for users, sessions, and user-generated content.
    Expected scale: 10k users, 1M records first year.
  </context>
  <options>
    <option id="supabase">
      <name>Supabase (Postgres)</name>
      <pros>Full SQL, generous free tier, built-in auth, real-time subscriptions</pros>
      <cons>Vendor lock-in for real-time features, less flexible than raw Postgres</cons>
    </option>
    <option id="planetscale">
      <name>PlanetScale (MySQL)</name>
      <pros>Serverless scaling, branching workflow, excellent DX</pros>
      <cons>MySQL not Postgres, no foreign keys in free tier</cons>
    </option>
    <option id="convex">
      <name>Convex</name>
      <pros>Real-time by default, TypeScript-native, automatic caching</pros>
      <cons>Newer platform, different mental model, less SQL flexibility</cons>
    </option>
  </options>
  <resume-signal>Select: supabase, planetscale, or convex</resume-signal>
</task>
```
</type>

<type name="human-action">
## checkpoint:human-action (1% - Rare)

**When:** Action has NO CLI/API and requires human-only interaction, OR Claude hit an authentication gate during automation.

**Use ONLY for:**
- **Authentication gates** - Claude tried CLI/API but needs credentials (this is NOT a failure)
- Email verification links (clicking email)
- SMS 2FA codes (phone verification)
- Manual account approvals (platform requires human review)
- Credit card 3D Secure flows (web-based payment authorization)
- OAuth app approvals (web-based approval)

**Do NOT use for pre-planned manual work:**
- Deploying (use CLI - auth gate if needed)
- Creating webhooks/databases (use API/CLI - auth gate if needed)
- Running builds/tests (use Bash tool)
- Creating files (use Write tool)

**Structure:**
```xml
<task type="checkpoint:human-action" gate="blocking">
  <action>[What human must do - Claude already did everything automatable]</action>
  <instructions>
    [What Claude already automated]
    [The ONE thing requiring human action]
  </instructions>
  <verification>[What Claude can check afterward]</verification>
  <resume-signal>[How to continue]</resume-signal>
</task>
```

**Example: Email Verification**
```xml
<task type="auto">
  <name>Create SendGrid account via API</name>
  <action>Use SendGrid API to create subuser account with provided email. Request verification email.</action>
  <verify>API returns 201, account created</verify>
  <done>Account created, verification email sent</done>
</task>

<task type="checkpoint:human-action" gate="blocking">
  <action>Complete email verification for SendGrid account</action>
  <instructions>
    I created the account and requested verification email.
    Check your inbox for SendGrid verification link and click it.
  </instructions>
  <verification>SendGrid API key works: curl test succeeds</verification>
  <resume-signal>Type "done" when email verified</resume-signal>
</task>
```

**Example: Authentication Gate (Dynamic Checkpoint)**
```xml
<task type="auto">
  <name>Deploy to Vercel</name>
  <files>.vercel/, vercel.json</files>
  <action>Run `vercel --yes` to deploy</action>
  <verify>vercel ls shows deployment, fetch returns 200</verify>
</task>

<!-- If vercel returns "Error: Not authenticated", Claude creates checkpoint on the fly -->

<task type="checkpoint:human-action" gate="blocking">
  <action>Authenticate Vercel CLI so I can continue deployment</action>
  <instructions>
    I tried to deploy but got authentication error.
    Run: vercel login
    This will open your browser - complete the authentication flow.
  </instructions>
  <verification>vercel whoami returns your account email</verification>
  <resume-signal>Type "done" when authenticated</resume-signal>
</task>

<!-- After authentication, Claude retries the deployment -->

<task type="auto">
  <name>Retry Vercel deployment</name>
  <action>Run `vercel --yes` (now authenticated)</action>
  <verify>vercel ls shows deployment, fetch returns 200</verify>
</task>
```

**Key distinction:** Auth gates are created dynamically when Claude encounters auth errors. NOT pre-planned — Claude automates first, asks for credentials only when blocked.
</type>
</checkpoint_types>

<execution_protocol>

When Claude encounters `type="checkpoint:*"`:

1. **Stop immediately** - do not proceed to next task
2. **Display checkpoint clearly** using the format below
3. **Wait for user response** - do not hallucinate completion
4. **Verify if possible** - check files, run tests, whatever is specified
5. **Resume execution** - continue to next task only after confirmation

**For checkpoint:human-verify:**
```
╔═══════════════════════════════════════════════════════╗
║ CHECKPOINT: Verification Required ║
╚═══════════════════════════════════════════════════════╝

Progress: 5/8 tasks complete
Task: Responsive dashboard layout

Built: Responsive dashboard at /dashboard

How to verify:
1. Visit: http://localhost:3000/dashboard
2. Desktop (>1024px): Sidebar visible, content fills remaining space
3. Tablet (768px): Sidebar collapses to icons
4. Mobile (375px): Sidebar hidden, hamburger menu appears

────────────────────────────────────────────────────────
→ YOUR ACTION: Type "approved" or describe issues
────────────────────────────────────────────────────────
```

**For checkpoint:decision:**
```
╔═══════════════════════════════════════════════════════╗
║ CHECKPOINT: Decision Required ║
╚═══════════════════════════════════════════════════════╝

Progress: 2/6 tasks complete
Task: Select authentication provider

Decision: Which auth provider should we use?

Context: Need user authentication. Three options with different tradeoffs.

Options:
1. supabase - Built-in with our DB, free tier
   Pros: Row-level security integration, generous free tier
   Cons: Less customizable UI, ecosystem lock-in

2. clerk - Best DX, paid after 10k users
   Pros: Beautiful pre-built UI, excellent documentation
|
||||
Cons: Vendor lock-in, pricing at scale
|
||||
|
||||
3. nextauth - Self-hosted, maximum control
|
||||
Pros: Free, no vendor lock-in, widely adopted
|
||||
Cons: More setup work, DIY security updates
|
||||
|
||||
────────────────────────────────────────────────────────
|
||||
→ YOUR ACTION: Select supabase, clerk, or nextauth
|
||||
────────────────────────────────────────────────────────
|
||||
```
|
||||
|
||||
**For checkpoint:human-action:**
|
||||
```
|
||||
╔═══════════════════════════════════════════════════════╗
|
||||
║ CHECKPOINT: Action Required ║
|
||||
╚═══════════════════════════════════════════════════════╝
|
||||
|
||||
Progress: 3/8 tasks complete
|
||||
Task: Deploy to Vercel
|
||||
|
||||
Attempted: vercel --yes
|
||||
Error: Not authenticated. Please run 'vercel login'
|
||||
|
||||
What you need to do:
|
||||
1. Run: vercel login
|
||||
2. Complete browser authentication when it opens
|
||||
3. Return here when done
|
||||
|
||||
I'll verify: vercel whoami returns your account
|
||||
|
||||
────────────────────────────────────────────────────────
|
||||
→ YOUR ACTION: Type "done" when authenticated
|
||||
────────────────────────────────────────────────────────
|
||||
```
|
||||
</execution_protocol>

<authentication_gates>

**Auth gate = Claude tried CLI/API, got auth error.** Not a failure — a gate requiring human input to unblock.

**Pattern:** Claude tries automation → auth error → creates checkpoint:human-action → user authenticates → Claude retries → continues

**Gate protocol:**
1. Recognize it's not a failure - missing auth is expected
2. Stop current task - don't retry repeatedly
3. Create checkpoint:human-action dynamically
4. Provide exact authentication steps
5. Verify authentication works
6. Retry the original task
7. Continue normally

**Key distinction:**
- Pre-planned checkpoint: "I need you to do X" (wrong - Claude should automate)
- Auth gate: "I tried to automate X but need credentials" (correct - unblocks automation)
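
The first step of the gate protocol, telling an auth error apart from a real failure, can be sketched as a small classifier over CLI output. This is illustrative only: the matched strings are assumptions, and each CLI words its errors differently.

```shell
# Sketch only: classify CLI output as an auth gate vs. a real failure.
# The matched strings are illustrative; real CLIs vary.
classify() {
  case "$1" in
    *"Not authenticated"*|*"login required"*|*"auth login"*) echo AUTH_GATE ;;
    *) echo FAILURE ;;
  esac
}
classify "Error: Not authenticated. Please run 'vercel login'"   # AUTH_GATE
classify "Build failed: missing module"                          # FAILURE
```

AUTH_GATE routes to a dynamically created checkpoint:human-action; FAILURE routes to the normal fix-and-retry path.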

</authentication_gates>

<automation_reference>

**The rule:** If it has CLI/API, Claude does it. Never ask human to perform automatable work.

## Service CLI Reference

| Service | CLI/API | Key Commands | Auth Gate |
|---------|---------|--------------|-----------|
| Vercel | `vercel` | `--yes`, `env add`, `--prod`, `ls` | `vercel login` |
| Railway | `railway` | `init`, `up`, `variables set` | `railway login` |
| Fly | `fly` | `launch`, `deploy`, `secrets set` | `fly auth login` |
| Stripe | `stripe` + API | `listen`, `trigger`, API calls | API key in .env |
| Supabase | `supabase` | `init`, `link`, `db push`, `gen types` | `supabase login` |
| Upstash | `upstash` | `redis create`, `redis get` | `upstash auth login` |
| PlanetScale | `pscale` | `database create`, `branch create` | `pscale auth login` |
| GitHub | `gh` | `repo create`, `pr create`, `secret set` | `gh auth login` |
| Node | `npm`/`pnpm` | `install`, `run build`, `test`, `run dev` | N/A |
| Xcode | `xcodebuild` | `-project`, `-scheme`, `build`, `test` | N/A |
| Convex | `npx convex` | `dev`, `deploy`, `env set`, `env get` | `npx convex login` |

## Environment Variable Automation

**Env files:** Use Write/Edit tools. Never ask human to create .env manually.

**Dashboard env vars via CLI:**

| Platform | CLI Command | Example |
|----------|-------------|---------|
| Convex | `npx convex env set` | `npx convex env set OPENAI_API_KEY sk-...` |
| Vercel | `vercel env add` | `vercel env add STRIPE_KEY production` |
| Railway | `railway variables set` | `railway variables set API_KEY=value` |
| Fly | `fly secrets set` | `fly secrets set DATABASE_URL=...` |
| Supabase | `supabase secrets set` | `supabase secrets set MY_SECRET=value` |

**Secret collection pattern:**
```xml
<!-- WRONG: Asking user to add env vars in dashboard -->
<task type="checkpoint:human-action">
<action>Add OPENAI_API_KEY to Convex dashboard</action>
<instructions>Go to dashboard.convex.dev → Settings → Environment Variables → Add</instructions>
</task>

<!-- RIGHT: Claude asks for value, then adds via CLI -->
<task type="checkpoint:human-action">
<action>Provide your OpenAI API key</action>
<instructions>
I need your OpenAI API key for Convex backend.
Get it from: https://platform.openai.com/api-keys
Paste the key (starts with sk-)
</instructions>
<verification>I'll add it via `npx convex env set` and verify</verification>
<resume-signal>Paste your API key</resume-signal>
</task>

<task type="auto">
<name>Configure OpenAI key in Convex</name>
<action>Run `npx convex env set OPENAI_API_KEY {user-provided-key}`</action>
<verify>`npx convex env get OPENAI_API_KEY` returns the key (masked)</verify>
</task>
```

## Dev Server Automation

| Framework | Start Command | Ready Signal | Default URL |
|-----------|---------------|--------------|-------------|
| Next.js | `npm run dev` | "Ready in" or "started server" | http://localhost:3000 |
| Vite | `npm run dev` | "ready in" | http://localhost:5173 |
| Convex | `npx convex dev` | "Convex functions ready" | N/A (backend only) |
| Express | `npm start` | "listening on port" | http://localhost:3000 |
| Django | `python manage.py runserver` | "Starting development server" | http://localhost:8000 |

**Server lifecycle:**
```bash
# Run in background, capture PID
npm run dev &
DEV_SERVER_PID=$!

# Wait for ready (max 30s) — uses fetch() for cross-platform compatibility
timeout 30 bash -c 'until node -e "fetch(\"http://localhost:3000\").then(r=>{process.exit(r.ok?0:1)}).catch(()=>process.exit(1))" 2>/dev/null; do sleep 1; done'
```

**Port conflicts:** Kill stale process (`lsof -ti:3000 | xargs kill`) or use alternate port (`--port 3001`).

**Server stays running** through checkpoints. Only kill when plan complete, switching to production, or port needed for different service.
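
When the plan does complete, shutdown is a minimal sketch built on the PID captured at startup. Here `sleep 30 &` stands in for `npm run dev &` so the snippet is self-contained:

```shell
# Sketch: stop the dev server started earlier. `sleep 30 &` is a stand-in
# for `npm run dev &`; the PID was captured at startup.
sleep 30 &
DEV_SERVER_PID=$!
kill "$DEV_SERVER_PID" 2>/dev/null
wait "$DEV_SERVER_PID" 2>/dev/null
kill -0 "$DEV_SERVER_PID" 2>/dev/null && echo "still running" || echo "stopped"
```

If the PID is lost, the port-conflict command above (`lsof -ti:3000 | xargs kill`) frees the port instead.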

## CLI Installation Handling

| CLI | Auto-install? | Command |
|-----|---------------|---------|
| npm/pnpm/yarn | No - ask user | User chooses package manager |
| vercel | Yes | `npm i -g vercel` |
| gh (GitHub) | Yes | `brew install gh` (macOS) or `apt install gh` (Linux) |
| stripe | Yes | `npm i -g stripe` |
| supabase | Yes | `npm i -g supabase` |
| convex | No - use npx | `npx convex` (no install needed) |
| fly | Yes | `brew install flyctl` or curl installer |
| railway | Yes | `npm i -g @railway/cli` |

**Protocol:** Try command → "command not found" → auto-installable? → yes: install silently, retry → no: checkpoint asking user to install.
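
The protocol above as a sketch. The npm package names for auto-installable CLIs come from the table; `ensure_cli sh ""` at the end is only a safe demo call, since `sh` already exists:

```shell
# Sketch of install-on-demand: return 0 if the CLI is usable, install it
# when an npm package is known, otherwise signal "checkpoint needed" (1).
ensure_cli() {
  cli="$1"; pkg="$2"
  command -v "$cli" >/dev/null 2>&1 && return 0   # already installed
  [ -n "$pkg" ] && npm i -g "$pkg" && return 0    # install silently, caller retries
  return 1                                        # no auto-install -> checkpoint
}
ensure_cli sh ""    # sh exists, so nothing is installed
```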

## Pre-Checkpoint Automation Failures

| Failure | Response |
|---------|----------|
| Server won't start | Check error, fix issue, retry (don't proceed to checkpoint) |
| Port in use | Kill stale process or use alternate port |
| Missing dependency | Run `npm install`, retry |
| Build error | Fix the error first (bug, not checkpoint issue) |
| Auth error | Create auth gate checkpoint |
| Network timeout | Retry with backoff, then checkpoint if persistent |
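
The network-timeout row's retry-with-backoff can be sketched as a tiny wrapper; the attempt count and linear delays here are arbitrary illustrative choices:

```shell
# Sketch: retry a flaky command with backoff before giving up and
# raising a checkpoint. Attempt count and delays are illustrative.
retry() {
  n=0
  until "$@"; do
    n=$((n + 1))
    [ "$n" -ge 3 ] && return 1   # persistent failure -> checkpoint
    sleep "$n"                   # back off: 1s, then 2s
  done
}
retry true && echo "succeeded"   # succeeds on first attempt
```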

**Never present a checkpoint with broken verification environment.** If the local server isn't responding, don't ask user to "visit localhost:3000".

> **Cross-platform note:** Use `node -e "fetch('http://localhost:3000').then(r=>console.log(r.status))"` instead of `curl` for health checks. `curl` is broken on Windows MSYS/Git Bash due to SSL/path mangling issues.

```xml
<!-- WRONG: Checkpoint with broken environment -->
<task type="checkpoint:human-verify">
<what-built>Dashboard (server failed to start)</what-built>
<how-to-verify>Visit http://localhost:3000...</how-to-verify>
</task>

<!-- RIGHT: Fix first, then checkpoint -->
<task type="auto">
<name>Fix server startup issue</name>
<action>Investigate error, fix root cause, restart server</action>
<verify>fetch http://localhost:3000 returns 200</verify>
</task>

<task type="checkpoint:human-verify">
<what-built>Dashboard - server running at http://localhost:3000</what-built>
<how-to-verify>Visit http://localhost:3000/dashboard...</how-to-verify>
</task>
```

## Automatable Quick Reference

| Action | Automatable? | Claude does it? |
|--------|--------------|-----------------|
| Deploy to Vercel | Yes (`vercel`) | YES |
| Create Stripe webhook | Yes (API) | YES |
| Write .env file | Yes (Write tool) | YES |
| Create Upstash DB | Yes (`upstash`) | YES |
| Run tests | Yes (`npm test`) | YES |
| Start dev server | Yes (`npm run dev`) | YES |
| Add env vars to Convex | Yes (`npx convex env set`) | YES |
| Add env vars to Vercel | Yes (`vercel env add`) | YES |
| Seed database | Yes (CLI/API) | YES |
| Click email verification link | No | NO |
| Enter credit card with 3DS | No | NO |
| Complete OAuth in browser | No | NO |
| Visually verify UI looks correct | No | NO |
| Test interactive user flows | No | NO |

</automation_reference>

<writing_guidelines>

**DO:**
- Automate everything with CLI/API before checkpoint
- Be specific: "Visit https://myapp.vercel.app" not "check deployment"
- Number verification steps
- State expected outcomes: "You should see X"
- Provide context: why this checkpoint exists

**DON'T:**
- Ask human to do work Claude can automate ❌
- Assume knowledge: "Configure the usual settings" ❌
- Skip steps: "Set up database" (too vague) ❌
- Mix multiple verifications in one checkpoint ❌

**Placement:**
- **After automation completes** - not before Claude does the work
- **After UI buildout** - before declaring phase complete
- **Before dependent work** - decisions before implementation
- **At integration points** - after configuring external services

**Bad placement:** Before automation ❌ | Too frequent ❌ | Too late (dependent tasks already needed the result) ❌
</writing_guidelines>

<examples>

### Example 1: Database Setup (No Checkpoint Needed)

```xml
<task type="auto">
<name>Create Upstash Redis database</name>
<files>.env</files>
<action>
1. Run `upstash redis create myapp-cache --region us-east-1`
2. Capture connection URL from output
3. Write to .env: UPSTASH_REDIS_URL={url}
4. Verify connection with test command
</action>
<verify>
- upstash redis list shows database
- .env contains UPSTASH_REDIS_URL
- Test connection succeeds
</verify>
<done>Redis database created and configured</done>
</task>

<!-- NO CHECKPOINT NEEDED - Claude automated everything and verified programmatically -->
```

### Example 2: Full Auth Flow (Single checkpoint at end)

```xml
<task type="auto">
<name>Create user schema</name>
<files>src/db/schema.ts</files>
<action>Define User, Session, Account tables with Drizzle ORM</action>
<verify>npm run db:generate succeeds</verify>
</task>

<task type="auto">
<name>Create auth API routes</name>
<files>src/app/api/auth/[...nextauth]/route.ts</files>
<action>Set up NextAuth with GitHub provider, JWT strategy</action>
<verify>TypeScript compiles, no errors</verify>
</task>

<task type="auto">
<name>Create login UI</name>
<files>src/app/login/page.tsx, src/components/LoginButton.tsx</files>
<action>Create login page with GitHub OAuth button</action>
<verify>npm run build succeeds</verify>
</task>

<task type="auto">
<name>Start dev server for auth testing</name>
<action>Run `npm run dev` in background, wait for ready signal</action>
<verify>fetch http://localhost:3000 returns 200</verify>
<done>Dev server running at http://localhost:3000</done>
</task>

<!-- ONE checkpoint at end verifies the complete flow -->
<task type="checkpoint:human-verify" gate="blocking">
<what-built>Complete authentication flow - dev server running at http://localhost:3000</what-built>
<how-to-verify>
1. Visit: http://localhost:3000/login
2. Click "Sign in with GitHub"
3. Complete GitHub OAuth flow
4. Verify: Redirected to /dashboard, user name displayed
5. Refresh page: Session persists
6. Click logout: Session cleared
</how-to-verify>
<resume-signal>Type "approved" or describe issues</resume-signal>
</task>
```
</examples>

<anti_patterns>

### ❌ BAD: Asking user to start dev server

```xml
<task type="checkpoint:human-verify" gate="blocking">
<what-built>Dashboard component</what-built>
<how-to-verify>
1. Run: npm run dev
2. Visit: http://localhost:3000/dashboard
3. Check layout is correct
</how-to-verify>
</task>
```

**Why bad:** Claude can run `npm run dev`. User should only visit URLs, not execute commands.

### ✅ GOOD: Claude starts server, user visits

```xml
<task type="auto">
<name>Start dev server</name>
<action>Run `npm run dev` in background</action>
<verify>fetch http://localhost:3000 returns 200</verify>
</task>

<task type="checkpoint:human-verify" gate="blocking">
<what-built>Dashboard at http://localhost:3000/dashboard (server running)</what-built>
<how-to-verify>
Visit http://localhost:3000/dashboard and verify:
1. Layout matches design
2. No console errors
</how-to-verify>
</task>
```

### ❌ BAD: Asking human to deploy / ✅ GOOD: Claude automates

```xml
<!-- BAD: Asking user to deploy via dashboard -->
<task type="checkpoint:human-action" gate="blocking">
<action>Deploy to Vercel</action>
<instructions>Visit vercel.com/new → Import repo → Click Deploy → Copy URL</instructions>
</task>

<!-- GOOD: Claude deploys, user verifies -->
<task type="auto">
<name>Deploy to Vercel</name>
<action>Run `vercel --yes`. Capture URL.</action>
<verify>vercel ls shows deployment, fetch returns 200</verify>
</task>

<task type="checkpoint:human-verify">
<what-built>Deployed to {url}</what-built>
<how-to-verify>Visit {url}, check homepage loads</how-to-verify>
<resume-signal>Type "approved"</resume-signal>
</task>
```

### ❌ BAD: Too many checkpoints / ✅ GOOD: Single checkpoint

```xml
<!-- BAD: Checkpoint after every task -->
<task type="auto">Create schema</task>
<task type="checkpoint:human-verify">Check schema</task>
<task type="auto">Create API route</task>
<task type="checkpoint:human-verify">Check API</task>
<task type="auto">Create UI form</task>
<task type="checkpoint:human-verify">Check form</task>

<!-- GOOD: One checkpoint at end -->
<task type="auto">Create schema</task>
<task type="auto">Create API route</task>
<task type="auto">Create UI form</task>

<task type="checkpoint:human-verify">
<what-built>Complete auth flow (schema + API + UI)</what-built>
<how-to-verify>Test full flow: register, login, access protected page</how-to-verify>
<resume-signal>Type "approved"</resume-signal>
</task>
```

### ❌ BAD: Vague verification / ✅ GOOD: Specific steps

```xml
<!-- BAD -->
<task type="checkpoint:human-verify">
<what-built>Dashboard</what-built>
<how-to-verify>Check it works</how-to-verify>
</task>

<!-- GOOD -->
<task type="checkpoint:human-verify">
<what-built>Responsive dashboard - server running at http://localhost:3000</what-built>
<how-to-verify>
Visit http://localhost:3000/dashboard and verify:
1. Desktop (>1024px): Sidebar visible, content area fills remaining space
2. Tablet (768px): Sidebar collapses to icons
3. Mobile (375px): Sidebar hidden, hamburger menu in header
4. No horizontal scroll at any size
</how-to-verify>
<resume-signal>Type "approved" or describe layout issues</resume-signal>
</task>
```

### ❌ BAD: Asking user to run CLI commands

```xml
<task type="checkpoint:human-action">
<action>Run database migrations</action>
<instructions>Run: npx prisma migrate deploy && npx prisma db seed</instructions>
</task>
```

**Why bad:** Claude can run these commands. User should never execute CLI commands.

### ❌ BAD: Asking user to copy values between services

```xml
<task type="checkpoint:human-action">
<action>Configure webhook URL in Stripe</action>
<instructions>Copy deployment URL → Stripe Dashboard → Webhooks → Add endpoint → Copy secret → Add to .env</instructions>
</task>
```

**Why bad:** Stripe has an API. Claude should create the webhook via API and write to .env directly.

</anti_patterns>

<summary>

Checkpoints formalize human-in-the-loop points for verification and decisions, not manual work.

**The golden rule:** If Claude CAN automate it, Claude MUST automate it.

**Checkpoint priority:**
1. **checkpoint:human-verify** (90%) - Claude automated everything, human confirms visual/functional correctness
2. **checkpoint:decision** (9%) - Human makes architectural/technology choices
3. **checkpoint:human-action** (1%) - Truly unavoidable manual steps with no API/CLI

**When NOT to use checkpoints:**
- Things Claude can verify programmatically (tests, builds)
- File operations (Claude can read files)
- Code correctness (tests and static analysis)
- Anything automatable via CLI/API
</summary>
249
get-shit-done/references/continuation-format.md
Normal file
@@ -0,0 +1,249 @@
# Continuation Format

Standard format for presenting next steps after completing a command or workflow.

## Core Structure

```
---

## ▶ Next Up

**{identifier}: {name}** — {one-line description}

`{command to copy-paste}`

<sub>`/clear` first → fresh context window</sub>

---

**Also available:**
- `{alternative option 1}` — description
- `{alternative option 2}` — description

---
```

## Format Rules

1. **Always show what it is** — name + description, never just a command path
2. **Pull context from source** — ROADMAP.md for phases, PLAN.md `<objective>` for plans
3. **Command in inline code** — backticks, easy to copy-paste, renders as clickable link
4. **`/clear` explanation** — always include, keeps it concise but explains why
5. **"Also available" not "Other options"** — sounds more app-like
6. **Visual separators** — `---` above and below to make it stand out

## Variants

### Execute Next Plan

```
---

## ▶ Next Up

**02-03: Refresh Token Rotation** — Add /api/auth/refresh with sliding expiry

`/gsd:execute-phase 2`

<sub>`/clear` first → fresh context window</sub>

---

**Also available:**
- Review plan before executing
- `/gsd:list-phase-assumptions 2` — check assumptions

---
```

### Execute Final Plan in Phase

Add a note that this is the last plan and what comes after:

```
---

## ▶ Next Up

**02-03: Refresh Token Rotation** — Add /api/auth/refresh with sliding expiry
<sub>Final plan in Phase 2</sub>

`/gsd:execute-phase 2`

<sub>`/clear` first → fresh context window</sub>

---

**After this completes:**
- Phase 2 → Phase 3 transition
- Next: **Phase 3: Core Features** — User dashboard and settings

---
```

### Plan a Phase

```
---

## ▶ Next Up

**Phase 2: Authentication** — JWT login flow with refresh tokens

`/gsd:plan-phase 2`

<sub>`/clear` first → fresh context window</sub>

---

**Also available:**
- `/gsd:discuss-phase 2` — gather context first
- `/gsd:research-phase 2` — investigate unknowns
- Review roadmap

---
```

### Phase Complete, Ready for Next

Show completion status before the next action:

```
---

## ✓ Phase 2 Complete

3/3 plans executed

## ▶ Next Up

**Phase 3: Core Features** — User dashboard, settings, and data export

`/gsd:plan-phase 3`

<sub>`/clear` first → fresh context window</sub>

---

**Also available:**
- `/gsd:discuss-phase 3` — gather context first
- `/gsd:research-phase 3` — investigate unknowns
- Review what Phase 2 built

---
```

### Multiple Equal Options

When there's no clear primary action:

```
---

## ▶ Next Up

**Phase 3: Core Features** — User dashboard, settings, and data export

**To plan directly:** `/gsd:plan-phase 3`

**To discuss context first:** `/gsd:discuss-phase 3`

**To research unknowns:** `/gsd:research-phase 3`

<sub>`/clear` first → fresh context window</sub>

---
```

### Milestone Complete

```
---

## 🎉 Milestone v1.0 Complete

All 4 phases shipped

## ▶ Next Up

**Start v1.1** — questioning → research → requirements → roadmap

`/gsd:new-milestone`

<sub>`/clear` first → fresh context window</sub>

---
```

## Pulling Context

### For phases (from ROADMAP.md):

```markdown
### Phase 2: Authentication
**Goal**: JWT login flow with refresh tokens
```

Extract: `**Phase 2: Authentication** — JWT login flow with refresh tokens`
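
A sketch of that extraction, assuming the exact ROADMAP.md heading and `**Goal**:` format shown above. `phase_line` is a hypothetical helper for illustration, not part of gsd-tools:

```shell
# Sketch: build the "**Phase N: Name** — Goal" line from ROADMAP.md.
# Assumes the exact heading/**Goal** format shown above.
phase_line() {
  awk -v n="$1" '
    $0 ~ "^### Phase " n ":" {
      name = substr($0, 5)            # strip the "### " prefix
      getline                         # next line holds **Goal**: ...
      sub(/^\*\*Goal\*\*: /, "")
      printf "**%s** — %s\n", name, $0
    }' .planning/ROADMAP.md
}
```

For the ROADMAP.md excerpt above, `phase_line 2` yields the Extract line verbatim.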

### For plans (from ROADMAP.md):

```markdown
Plans:
- [ ] 02-03: Add refresh token rotation
```

Or from PLAN.md `<objective>`:

```xml
<objective>
Add refresh token rotation with sliding expiry window.

Purpose: Extend session lifetime without compromising security.
</objective>
```

Extract: `**02-03: Refresh Token Rotation** — Add /api/auth/refresh with sliding expiry`

## Anti-Patterns

### Don't: Command-only (no context)

```
## To Continue

Run `/clear`, then paste:
/gsd:execute-phase 2
```

User has no idea what 02-03 is about.

### Don't: Missing /clear explanation

```
`/gsd:plan-phase 3`

Run /clear first.
```

Doesn't explain why. User might skip it.

### Don't: "Other options" language

```
Other options:
- Review roadmap
```

Sounds like an afterthought. Use "Also available:" instead.

### Don't: Fenced code blocks for commands

````
```
/gsd:plan-phase 3
```
````

Fenced blocks inside templates create nesting ambiguity. Use inline backticks instead.
65
get-shit-done/references/decimal-phase-calculation.md
Normal file
@@ -0,0 +1,65 @@
# Decimal Phase Calculation

Calculate the next decimal phase number for urgent insertions.

## Using gsd-tools

```bash
# Get next decimal phase after phase 6
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" phase next-decimal 6
```

Output:
```json
{
  "found": true,
  "base_phase": "06",
  "next": "06.1",
  "existing": []
}
```

With existing decimals:
```json
{
  "found": true,
  "base_phase": "06",
  "next": "06.3",
  "existing": ["06.1", "06.2"]
}
```

## Extract Values

```bash
DECIMAL_INFO=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" phase next-decimal "${AFTER_PHASE}")
DECIMAL_PHASE=$(printf '%s\n' "$DECIMAL_INFO" | jq -r '.next')
BASE_PHASE=$(printf '%s\n' "$DECIMAL_INFO" | jq -r '.base_phase')
```

Or with --raw flag:
```bash
DECIMAL_PHASE=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" phase next-decimal "${AFTER_PHASE}" --raw)
# Returns just: 06.1
```

## Examples

| Existing Phases | Next Phase |
|-----------------|------------|
| 06 only | 06.1 |
| 06, 06.1 | 06.2 |
| 06, 06.1, 06.2 | 06.3 |
| 06, 06.1, 06.3 (gap) | 06.4 |
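
The rule the table encodes (highest existing decimal plus one, gaps never reused) can be sketched in shell. This mirrors what `phase next-decimal` computes; it is not the tool itself:

```shell
# Sketch of the next-decimal rule: take the highest existing decimal for
# the base phase and add 1. Gaps are never reused.
next_decimal() {
  base="$1"; shift
  max=0
  for p in "$@"; do
    d="${p#"$base".}"                  # "06.3" -> "3"; plain "06" is unchanged
    [ "$p" != "$d" ] && [ "$d" -gt "$max" ] && max="$d"
  done
  echo "$base.$((max + 1))"
}
next_decimal 06 06 06.1 06.3   # → 06.4 (the 06.2 gap is not reused)
```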

## Directory Naming

Decimal phase directories use the full decimal number:

```bash
SLUG=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" generate-slug "$DESCRIPTION" --raw)
PHASE_DIR=".planning/phases/${DECIMAL_PHASE}-${SLUG}"
mkdir -p "$PHASE_DIR"
```

Example: `.planning/phases/06.1-fix-critical-auth-bug/`
248
get-shit-done/references/git-integration.md
Normal file
@@ -0,0 +1,248 @@
<overview>
Git integration for GSD framework.
</overview>

<core_principle>

**Commit outcomes, not process.**

The git log should read like a changelog of what shipped, not a diary of planning activity.
</core_principle>

<commit_points>

| Event | Commit? | Why |
| ----------------------- | ------- | ------------------------------------------------ |
| BRIEF + ROADMAP created | YES | Project initialization |
| PLAN.md created | NO | Intermediate - commit with plan completion |
| RESEARCH.md created | NO | Intermediate |
| DISCOVERY.md created | NO | Intermediate |
| **Task completed** | YES | Atomic unit of work (1 commit per task) |
| **Plan completed** | YES | Metadata commit (SUMMARY + STATE + ROADMAP) |
| Handoff created | YES | WIP state preserved |

</commit_points>

<git_check>

```bash
[ -d .git ] && echo "GIT_EXISTS" || echo "NO_GIT"
```

If NO_GIT: Run `git init` silently. GSD projects always get their own repo.
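
The check and the silent init combine into one idempotent step; a sketch (`-q` only suppresses init output, and `ensure_repo` is a hypothetical helper name):

```shell
# Sketch: check for a repo and init silently when missing, in one step.
ensure_repo() { [ -d "$1/.git" ] || git -C "$1" init -q; }
```

Running it against a directory that already has `.git` is a no-op, so it is safe at the start of every commit helper.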
|
||||
</git_check>
|
||||
|
||||
<commit_formats>
|
||||
|
||||
<format name="initialization">
|
||||
## Project Initialization (brief + roadmap together)
|
||||
|
||||
```
|
||||
docs: initialize [project-name] ([N] phases)
|
||||
|
||||
[One-liner from PROJECT.md]
|
||||
|
||||
Phases:
|
||||
1. [phase-name]: [goal]
|
||||
2. [phase-name]: [goal]
|
||||
3. [phase-name]: [goal]
|
||||
```
|
||||
|
||||
What to commit:
|
||||
|
||||
```bash
|
||||
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" commit "docs: initialize [project-name] ([N] phases)" --files .planning/
|
||||
```
|
||||
|
||||
</format>
|
||||
|
||||
<format name="task-completion">

## Task Completion (During Plan Execution)

Each task gets its own commit immediately after completion.

```
{type}({phase}-{plan}): {task-name}

- [Key change 1]
- [Key change 2]
- [Key change 3]
```

**Commit types:**
- `feat` - New feature/functionality
- `fix` - Bug fix
- `test` - Test-only (TDD RED phase)
- `refactor` - Code cleanup (TDD REFACTOR phase)
- `perf` - Performance improvement
- `chore` - Dependencies, config, tooling

**Examples:**

```bash
# Standard task
git add src/api/auth.ts src/types/user.ts
git commit -m "feat(08-02): create user registration endpoint

- POST /auth/register validates email and password
- Checks for duplicate users
- Returns JWT token on success
"

# TDD task - RED phase
git add src/__tests__/jwt.test.ts
git commit -m "test(07-02): add failing test for JWT generation

- Tests token contains user ID claim
- Tests token expires in 1 hour
- Tests signature verification
"

# TDD task - GREEN phase
git add src/utils/jwt.ts
git commit -m "feat(07-02): implement JWT generation

- Uses jose library for signing
- Includes user ID and expiry claims
- Signs with HS256 algorithm
"
```

</format>

<format name="plan-completion">

## Plan Completion (After All Tasks Done)

After all tasks are committed, one final metadata commit captures plan completion.

```
docs({phase}-{plan}): complete [plan-name] plan

Tasks completed: [N]/[N]
- [Task 1 name]
- [Task 2 name]
- [Task 3 name]

SUMMARY: .planning/phases/XX-name/{phase}-{plan}-SUMMARY.md
```

What to commit:

```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" commit "docs({phase}-{plan}): complete [plan-name] plan" --files .planning/phases/XX-name/{phase}-{plan}-PLAN.md .planning/phases/XX-name/{phase}-{plan}-SUMMARY.md .planning/STATE.md .planning/ROADMAP.md
```

**Note:** Code files are NOT included - they were already committed per-task.

</format>

<format name="handoff">

## Handoff (WIP)

```
wip: [phase-name] paused at task [X]/[Y]

Current: [task name]
[If blocked:] Blocked: [reason]
```

What to commit:

```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" commit "wip: [phase-name] paused at task [X]/[Y]" --files .planning/
```

</format>

</commit_formats>

<example_log>

**Old approach (per-plan commits):**

```
a7f2d1 feat(checkout): Stripe payments with webhook verification
3e9c4b feat(products): catalog with search, filters, and pagination
8a1b2c feat(auth): JWT with refresh rotation using jose
5c3d7e feat(foundation): Next.js 15 + Prisma + Tailwind scaffold
2f4a8d docs: initialize ecommerce-app (5 phases)
```

**New approach (per-task commits):**

```
# Phase 04 - Checkout
1a2b3c docs(04-01): complete checkout flow plan
4d5e6f feat(04-01): add webhook signature verification
7g8h9i feat(04-01): implement payment session creation
0j1k2l feat(04-01): create checkout page component

# Phase 03 - Products
3m4n5o docs(03-02): complete product listing plan
6p7q8r feat(03-02): add pagination controls
9s0t1u feat(03-02): implement search and filters
2v3w4x feat(03-01): create product catalog schema

# Phase 02 - Auth
5y6z7a docs(02-02): complete token refresh plan
8b9c0d feat(02-02): implement refresh token rotation
1e2f3g test(02-02): add failing test for token refresh
4h5i6j docs(02-01): complete JWT setup plan
7k8l9m feat(02-01): add JWT generation and validation
0n1o2p chore(02-01): install jose library

# Phase 01 - Foundation
3q4r5s docs(01-01): complete scaffold plan
6t7u8v feat(01-01): configure Tailwind and globals
9w0x1y feat(01-01): set up Prisma with database
2z3a4b feat(01-01): create Next.js 15 project

# Initialization
5c6d7e docs: initialize ecommerce-app (5 phases)
```

Each plan produces 2-4 commits (tasks + metadata). Clear, granular, bisectable.

</example_log>

<anti_patterns>

**Still don't commit (intermediate artifacts):**
- PLAN.md creation (commit with plan completion)
- RESEARCH.md (intermediate)
- DISCOVERY.md (intermediate)
- Minor planning tweaks
- "Fixed typo in roadmap"

**Do commit (outcomes):**
- Each task completion (feat/fix/test/refactor)
- Plan completion metadata (docs)
- Project initialization (docs)

**Key principle:** Commit working code and shipped outcomes, not planning process.

</anti_patterns>

<commit_strategy_rationale>

## Why Per-Task Commits?

**Context engineering for AI:**
- Git history becomes the primary context source for future Claude sessions
- `git log --grep="{phase}-{plan}"` shows all work for a plan
- `git diff <hash>^..<hash>` shows the exact changes per task
- Less reliance on parsing SUMMARY.md = more context for actual work

**Failure recovery:**
- Task 1 committed ✅, Task 2 failed ❌
- Claude in the next session sees task 1 complete and can retry task 2
- Can `git reset --hard` to the last successful task

**Debugging:**
- `git bisect` finds the exact failing task, not just the failing plan
- `git blame` traces a line to its specific task context
- Each commit is independently revertible

**Observability:**
- The solo developer + Claude workflow benefits from granular attribution
- Atomic commits are git best practice
- "Commit noise" is irrelevant when the consumer is Claude, not humans

</commit_strategy_rationale>

38
get-shit-done/references/git-planning-commit.md
Normal file
@@ -0,0 +1,38 @@

# Git Planning Commit

Commit planning artifacts using the gsd-tools CLI, which automatically checks the `commit_docs` config and gitignore status.

## Commit via CLI

Always use `gsd-tools.cjs commit` for `.planning/` files — it handles `commit_docs` and gitignore checks automatically:

```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" commit "docs({scope}): {description}" --files .planning/STATE.md .planning/ROADMAP.md
```

The CLI returns `skipped` (with a reason) if `commit_docs` is `false` or `.planning/` is gitignored. No manual conditional checks needed.

## Amend previous commit

To fold `.planning/` file changes into the previous commit:

```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" commit "" --files .planning/codebase/*.md --amend
```

## Commit Message Patterns

| Command | Scope | Example |
|---------|-------|---------|
| plan-phase | phase | `docs(phase-03): create authentication plans` |
| execute-phase | phase | `docs(phase-03): complete authentication phase` |
| new-milestone | milestone | `docs: start milestone v1.1` |
| remove-phase | chore | `chore: remove phase 17 (dashboard)` |
| insert-phase | phase | `docs: insert phase 16.1 (critical fix)` |
| add-phase | phase | `docs: add phase 07 (settings page)` |

## When to Skip

- `commit_docs: false` in config
- `.planning/` is gitignored
- No changes to commit (check with `git status --porcelain .planning/`)

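For illustration, the three skip conditions can be checked by hand with a sketch like this (the function name and messages are hypothetical; the CLI performs equivalent checks internally):

```bash
# Sketch: mirror the CLI's internal skip checks for planning commits.
should_skip_planning_commit() {
  # 1. commit_docs: false in .planning/config.json
  if grep -q '"commit_docs"[[:space:]]*:[[:space:]]*false' .planning/config.json 2>/dev/null; then
    echo "skip: commit_docs is false"; return 0
  fi
  # 2. .planning/ is gitignored
  if git check-ignore -q .planning 2>/dev/null; then
    echo "skip: .planning/ is gitignored"; return 0
  fi
  # 3. nothing changed under .planning/
  if [ -z "$(git status --porcelain .planning/ 2>/dev/null)" ]; then
    echo "skip: no changes"; return 0
  fi
  return 1
}
```

In practice you never need this by hand: `gsd-tools.cjs commit` runs the equivalent checks and reports `skipped` with the reason.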
36
get-shit-done/references/model-profile-resolution.md
Normal file
@@ -0,0 +1,36 @@

# Model Profile Resolution

Resolve the model profile once at the start of orchestration, then use it for all Task spawns.

## Resolution Pattern

```bash
# Extract model_profile from config; fall back to "balanced" if the key or file is missing.
# (The fallback is a separate assignment: in a pipeline, `|| echo` would only fire if the
# final command failed, so an empty grep result would otherwise leave the variable blank.)
MODEL_PROFILE=$(grep -o '"model_profile"[[:space:]]*:[[:space:]]*"[^"]*"' .planning/config.json 2>/dev/null | grep -o '"[^"]*"$' | tr -d '"')
MODEL_PROFILE=${MODEL_PROFILE:-balanced}
```

Default: `balanced` if not set or the config file is missing.

## Lookup Table

@C:/Users/yaoji/.claude/get-shit-done/references/model-profiles.md

Look up the agent in the table for the resolved profile. Pass the model parameter to Task calls:

```
Task(
  prompt="...",
  subagent_type="gsd-planner",
  model="{resolved_model}"  # "inherit", "sonnet", or "haiku"
)
```

**Note:** Opus-tier agents resolve to `"inherit"` (not `"opus"`). This makes the agent use the parent session's model, avoiding conflicts with organization policies that may block specific opus versions.

If `model_profile` is `"inherit"`, all agents resolve to `"inherit"` (useful for OpenCode `/model`).

## Usage

1. Resolve once at orchestration start
2. Store the profile value
3. Look up each agent's model from the table when spawning
4. Pass the model parameter to each Task call (values: `"inherit"`, `"sonnet"`, `"haiku"`)

119
get-shit-done/references/model-profiles.md
Normal file
@@ -0,0 +1,119 @@

# Model Profiles

Model profiles control which Claude model each GSD agent uses. This allows balancing quality against token spend, or inheriting the currently selected session model.

## Profile Definitions

| Agent | `quality` | `balanced` | `budget` | `inherit` |
|-------|-----------|------------|----------|-----------|
| gsd-planner | opus | opus | sonnet | inherit |
| gsd-roadmapper | opus | sonnet | sonnet | inherit |
| gsd-executor | opus | sonnet | sonnet | inherit |
| gsd-phase-researcher | opus | sonnet | haiku | inherit |
| gsd-project-researcher | opus | sonnet | haiku | inherit |
| gsd-research-synthesizer | sonnet | sonnet | haiku | inherit |
| gsd-debugger | opus | sonnet | sonnet | inherit |
| gsd-codebase-mapper | sonnet | haiku | haiku | inherit |
| gsd-verifier | sonnet | sonnet | haiku | inherit |
| gsd-plan-checker | sonnet | sonnet | haiku | inherit |
| gsd-integration-checker | sonnet | sonnet | haiku | inherit |
| gsd-nyquist-auditor | sonnet | sonnet | haiku | inherit |

## Profile Philosophy

**quality** - Maximum reasoning power
- Opus for all decision-making agents
- Sonnet for read-only verification
- Use when: quota available, critical architecture work

**balanced** (default) - Smart allocation
- Opus only for planning (where architecture decisions happen)
- Sonnet for execution and research (follows explicit instructions)
- Sonnet for verification (needs reasoning, not just pattern matching)
- Use when: normal development, good balance of quality and cost

**budget** - Minimal Opus usage
- Sonnet for anything that writes code
- Haiku for research and verification
- Use when: conserving quota, high-volume work, less critical phases

**inherit** - Follow the current session model
- All agents resolve to `inherit`
- Best when you switch models interactively (for example OpenCode `/model`)
- **Required when using non-Anthropic providers** (OpenRouter, local models, etc.) — otherwise GSD may call Anthropic models directly, incurring unexpected costs
- Use when: you want GSD to follow your currently selected runtime model

## Using Non-Anthropic Models (OpenRouter, Local, etc.)

If you're using Claude Code with OpenRouter, a local model, or any non-Anthropic provider, set the `inherit` profile to prevent GSD from calling Anthropic models for subagents:

```bash
# Via settings command
/gsd:settings
# → Select "Inherit" for model profile

# Or manually in .planning/config.json
{
  "model_profile": "inherit"
}
```

Without `inherit`, GSD's default `balanced` profile spawns specific Anthropic models (`opus`, `sonnet`, `haiku`) for each agent type, which can result in additional API costs through your non-Anthropic provider.

## Resolution Logic

Orchestrators resolve the model before spawning:

```
1. Read .planning/config.json
2. Check model_overrides for an agent-specific override
3. If no override, look up the agent in the profile table
4. Pass the model parameter to the Task call
```

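A minimal sketch of that lookup in shell, inlining a few rows of the profile table above (illustrative only: real orchestrators read the full table, and the override step is omitted for brevity):

```bash
# Sketch: profile + agent → model parameter. Opus-tier entries return "inherit".
resolve_model() {
  agent="$1"; profile="${2:-balanced}"
  # The inherit profile short-circuits: every agent follows the session model.
  if [ "$profile" = "inherit" ]; then echo "inherit"; return; fi
  case "$profile/$agent" in
    # gsd-planner row: opus | opus | sonnet
    quality/gsd-planner|balanced/gsd-planner) echo "inherit" ;;
    budget/gsd-planner)                       echo "sonnet" ;;
    # gsd-executor row: opus | sonnet | sonnet
    quality/gsd-executor)                     echo "inherit" ;;
    */gsd-executor)                           echo "sonnet" ;;
    # gsd-codebase-mapper row: sonnet | haiku | haiku
    quality/gsd-codebase-mapper)              echo "sonnet" ;;
    */gsd-codebase-mapper)                    echo "haiku" ;;
  esac
}
```

For example, `resolve_model gsd-executor budget` prints `sonnet`, and any agent under the `inherit` profile prints `inherit`.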

## Per-Agent Overrides

Override specific agents without changing the entire profile:

```json
{
  "model_profile": "balanced",
  "model_overrides": {
    "gsd-executor": "opus",
    "gsd-planner": "haiku"
  }
}
```

Overrides take precedence over the profile. Valid values: `opus`, `sonnet`, `haiku`, `inherit`.

## Switching Profiles

Runtime: `/gsd:set-profile <profile>`

Per-project default: set in `.planning/config.json`:

```json
{
  "model_profile": "balanced"
}
```

## Design Rationale

**Why Opus for gsd-planner?**
Planning involves architecture decisions, goal decomposition, and task design. This is where model quality has the highest impact.

**Why Sonnet for gsd-executor?**
Executors follow explicit PLAN.md instructions. The plan already contains the reasoning; execution is implementation.

**Why Sonnet (not Haiku) for verifiers in balanced?**
Verification requires goal-backward reasoning - checking whether code *delivers* what the phase promised, not just pattern matching. Sonnet handles this well; Haiku may miss subtle gaps.

**Why Haiku for gsd-codebase-mapper?**
Read-only exploration and pattern extraction. No reasoning required, just structured output from file contents.

**Why `inherit` instead of passing `opus` directly?**
Claude Code's `"opus"` alias maps to a specific model version. Organizations may block older opus versions while allowing newer ones. GSD returns `"inherit"` for opus-tier agents, so they use whatever opus version the user has configured in their session. This avoids version conflicts and silent fallbacks to Sonnet.

**Why the `inherit` profile?**
Some runtimes (including OpenCode) let users switch models at runtime (`/model`). The `inherit` profile keeps all GSD subagents aligned with that live selection.

61
get-shit-done/references/phase-argument-parsing.md
Normal file
@@ -0,0 +1,61 @@

# Phase Argument Parsing

Parse and normalize phase arguments for commands that operate on phases.

## Extraction

From `$ARGUMENTS`:
- Extract the phase number (first numeric argument)
- Extract flags (prefixed with `--`)
- Remaining text is the description (for insert/add commands)

## Using gsd-tools

The `find-phase` command handles normalization and validation in one step:

```bash
PHASE_INFO=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" find-phase "${PHASE}")
```

Returns JSON with:
- `found`: true/false
- `directory`: full path to the phase directory
- `phase_number`: normalized number (e.g., "06", "06.1")
- `phase_name`: name portion (e.g., "foundation")
- `plans`: array of PLAN.md files
- `summaries`: array of SUMMARY.md files

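A sketch of consuming that JSON with `jq` (the helper name is hypothetical; the field names come from the list above):

```bash
# Sketch: pull fields out of find-phase JSON with jq.
parse_phase_info() {
  info="$1"
  # Bail out when the phase was not found.
  if [ "$(printf '%s\n' "$info" | jq -r '.found')" != "true" ]; then
    echo "ERROR: phase not found" >&2; return 1
  fi
  dir=$(printf '%s\n' "$info" | jq -r '.directory')
  plan_count=$(printf '%s\n' "$info" | jq -r '.plans | length')
  echo "$dir ($plan_count plans)"
}

# Typical call site:
#   parse_phase_info "$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" find-phase "$PHASE")"
```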
## Manual Normalization (Legacy)

Zero-pad integer phases to 2 digits. Preserve decimal suffixes.

```bash
# Normalize phase number
if [[ "$PHASE" =~ ^[0-9]+$ ]]; then
  # Integer: 8 → 08 (force base 10 so an already-padded "08" isn't read as octal)
  PHASE=$(printf "%02d" "$((10#$PHASE))")
elif [[ "$PHASE" =~ ^([0-9]+)\.([0-9]+)$ ]]; then
  # Decimal: 2.1 → 02.1
  PHASE=$(printf "%02d.%s" "$((10#${BASH_REMATCH[1]}))" "${BASH_REMATCH[2]}")
fi
```

## Validation

Use `roadmap get-phase` to validate that the phase exists:

```bash
PHASE_CHECK=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" roadmap get-phase "${PHASE}")
if [ "$(printf '%s\n' "$PHASE_CHECK" | jq -r '.found')" = "false" ]; then
  echo "ERROR: Phase ${PHASE} not found in roadmap"
  exit 1
fi
```

## Directory Lookup

Use `find-phase` for directory lookup:

```bash
PHASE_DIR=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" find-phase "${PHASE}" --raw)
```

200
get-shit-done/references/planning-config.md
Normal file
@@ -0,0 +1,200 @@

<planning_config>

Configuration options for `.planning/` directory behavior.

<config_schema>

```json
"planning": {
  "commit_docs": true,
  "search_gitignored": false
},
"git": {
  "branching_strategy": "none",
  "phase_branch_template": "gsd/phase-{phase}-{slug}",
  "milestone_branch_template": "gsd/{milestone}-{slug}"
}
```

| Option | Default | Description |
|--------|---------|-------------|
| `commit_docs` | `true` | Whether to commit planning artifacts to git |
| `search_gitignored` | `false` | Add `--no-ignore` to broad rg searches |
| `git.branching_strategy` | `"none"` | Git branching approach: `"none"`, `"phase"`, or `"milestone"` |
| `git.phase_branch_template` | `"gsd/phase-{phase}-{slug}"` | Branch template for the phase strategy |
| `git.milestone_branch_template` | `"gsd/{milestone}-{slug}"` | Branch template for the milestone strategy |

</config_schema>

<commit_docs_behavior>

**When `commit_docs: true` (default):**
- Planning files are committed normally
- SUMMARY.md, STATE.md, ROADMAP.md are tracked in git
- Full history of planning decisions is preserved

**When `commit_docs: false`:**
- Skip all `git add`/`git commit` for `.planning/` files
- User must add `.planning/` to `.gitignore`
- Useful for: OSS contributions, client projects, keeping planning private

**Using gsd-tools.cjs (preferred):**

```bash
# Commit with automatic commit_docs + gitignore checks:
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" commit "docs: update state" --files .planning/STATE.md

# Load config via state load (returns JSON):
INIT=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" state load)
if [[ "$INIT" == @file:* ]]; then INIT=$(cat "${INIT#@file:}"); fi
# commit_docs is available in the JSON output

# Or use init commands, which include commit_docs:
INIT=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" init execute-phase "1")
if [[ "$INIT" == @file:* ]]; then INIT=$(cat "${INIT#@file:}"); fi
# commit_docs is included in all init command outputs
```

**Auto-detection:** If `.planning/` is gitignored, `commit_docs` is automatically `false` regardless of config.json. This prevents git errors when users have `.planning/` in `.gitignore`.

The CLI checks the `commit_docs` config and gitignore status internally — no manual conditionals needed.

</commit_docs_behavior>

<search_behavior>

**When `search_gitignored: false` (default):**
- Standard rg behavior (respects .gitignore)
- Direct path searches work: `rg "pattern" .planning/` finds files
- Broad searches skip gitignored paths: `rg "pattern"` skips `.planning/`

**When `search_gitignored: true`:**
- Add `--no-ignore` to broad rg searches that should include `.planning/`
- Only needed when searching the entire repo and expecting `.planning/` matches

**Note:** Most GSD operations use direct file reads or explicit paths, which work regardless of gitignore status.

</search_behavior>

<setup_uncommitted_mode>

To use uncommitted mode:

1. **Set config:**
   ```json
   "planning": {
     "commit_docs": false,
     "search_gitignored": true
   }
   ```

2. **Add to .gitignore:**
   ```
   .planning/
   ```

3. **Existing tracked files:** If `.planning/` was previously tracked:
   ```bash
   git rm -r --cached .planning/
   git commit -m "chore: stop tracking planning docs"
   ```

4. **Branch merges:** When using `branching_strategy: phase` or `milestone`, the `complete-milestone` workflow automatically strips `.planning/` files from staging before merge commits when `commit_docs: false`.
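That stripping step amounts to unstaging `.planning/` paths before the merge commit. A sketch (the function name is hypothetical; `complete-milestone` does this internally):

```bash
# Sketch: drop any staged .planning/ paths; worktree files are left untouched.
unstage_planning() {
  git reset -q HEAD -- .planning/
}
```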

</setup_uncommitted_mode>

<branching_strategy_behavior>

**Branching Strategies:**

| Strategy | When branch created | Branch scope | Merge point |
|----------|---------------------|--------------|-------------|
| `none` | Never | N/A | N/A |
| `phase` | At `execute-phase` start | Single phase | User merges after phase |
| `milestone` | At first `execute-phase` of milestone | Entire milestone | At `complete-milestone` |

**When `git.branching_strategy: "none"` (default):**
- All work commits to the current branch
- Standard GSD behavior

**When `git.branching_strategy: "phase"`:**
- `execute-phase` creates/switches to a branch before execution
- Branch name comes from `phase_branch_template` (e.g., `gsd/phase-03-authentication`)
- All plan commits go to that branch
- User merges branches manually after phase completion
- `complete-milestone` offers to merge all phase branches

**When `git.branching_strategy: "milestone"`:**
- The first `execute-phase` of the milestone creates the milestone branch
- Branch name comes from `milestone_branch_template` (e.g., `gsd/v1.0-mvp`)
- All phases in the milestone commit to the same branch
- `complete-milestone` offers to merge the milestone branch to main

**Template variables:**

| Variable | Available in | Description |
|----------|--------------|-------------|
| `{phase}` | phase_branch_template | Zero-padded phase number (e.g., "03") |
| `{slug}` | Both | Lowercase, hyphenated name |
| `{milestone}` | milestone_branch_template | Milestone version (e.g., "v1.0") |

**Checking the config:**

Use `init execute-phase`, which returns all config as JSON:

```bash
INIT=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" init execute-phase "1")
if [[ "$INIT" == @file:* ]]; then INIT=$(cat "${INIT#@file:}"); fi
# JSON output includes: branching_strategy, phase_branch_template, milestone_branch_template
```

Or use `state load` for the config values:

```bash
INIT=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" state load)
if [[ "$INIT" == @file:* ]]; then INIT=$(cat "${INIT#@file:}"); fi
# Parse branching_strategy, phase_branch_template, milestone_branch_template from the JSON
```

**Branch creation:**

```bash
# For the phase strategy
if [ "$BRANCHING_STRATEGY" = "phase" ]; then
  PHASE_SLUG=$(echo "$PHASE_NAME" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | sed 's/--*/-/g' | sed 's/^-//;s/-$//')
  BRANCH_NAME=$(echo "$PHASE_BRANCH_TEMPLATE" | sed "s/{phase}/$PADDED_PHASE/g" | sed "s/{slug}/$PHASE_SLUG/g")
  git checkout -b "$BRANCH_NAME" 2>/dev/null || git checkout "$BRANCH_NAME"
fi

# For the milestone strategy
if [ "$BRANCHING_STRATEGY" = "milestone" ]; then
  MILESTONE_SLUG=$(echo "$MILESTONE_NAME" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | sed 's/--*/-/g' | sed 's/^-//;s/-$//')
  BRANCH_NAME=$(echo "$MILESTONE_BRANCH_TEMPLATE" | sed "s/{milestone}/$MILESTONE_VERSION/g" | sed "s/{slug}/$MILESTONE_SLUG/g")
  git checkout -b "$BRANCH_NAME" 2>/dev/null || git checkout "$BRANCH_NAME"
fi
```

**Merge options at complete-milestone:**

| Option | Git command | Result |
|--------|-------------|--------|
| Squash merge (recommended) | `git merge --squash` | Single clean commit per branch |
| Merge with history | `git merge --no-ff` | Preserves all individual commits |
| Delete without merging | `git branch -D` | Discard branch work |
| Keep branches | (none) | Manual handling later |

Squash merge is recommended: it keeps the main branch history clean while preserving the full development history on the branch (until the branch is deleted).
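For example, a squash merge of a finished phase branch might be sketched as follows (the helper name, branch name, and commit message are illustrative):

```bash
# Sketch: squash all branch commits into one commit on main, then drop the branch.
squash_merge_branch() {
  branch="$1"; msg="$2"
  git checkout main &&
  git merge --squash "$branch" &&   # stages the combined diff without committing
  git commit -m "$msg" &&
  git branch -D "$branch"           # optional: delete the merged branch
}

# squash_merge_branch gsd/phase-03-authentication "feat(phase-03): authentication (squashed)"
```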

**Use cases:**

| Strategy | Best for |
|----------|----------|
| `none` | Solo development, simple projects |
| `phase` | Code review per phase, granular rollback, team collaboration |
| `milestone` | Release branches, staging environments, PR per version |

</branching_strategy_behavior>

</planning_config>

162
get-shit-done/references/questioning.md
Normal file
@@ -0,0 +1,162 @@

<questioning_guide>

Project initialization is dream extraction, not requirements gathering. You're helping the user discover and articulate what they want to build. This isn't a contract negotiation — it's collaborative thinking.

<philosophy>

**You are a thinking partner, not an interviewer.**

The user often has a fuzzy idea. Your job is to help them sharpen it. Ask questions that make them think "oh, I hadn't considered that" or "yes, that's exactly what I mean."

Don't interrogate. Collaborate. Don't follow a script. Follow the thread.

</philosophy>

<the_goal>

By the end of questioning, you need enough clarity to write a PROJECT.md that downstream phases can act on:

- **Research** needs: what domain to research, what the user already knows, what unknowns exist
- **Requirements** needs: a vision clear enough to scope v1 features
- **Roadmap** needs: a vision clear enough to decompose into phases, and what "done" looks like
- **plan-phase** needs: specific requirements to break into tasks, context for implementation choices
- **execute-phase** needs: success criteria to verify against, the "why" behind requirements

A vague PROJECT.md forces every downstream phase to guess. The cost compounds.

</the_goal>

<how_to_question>

**Start open.** Let them dump their mental model. Don't interrupt with structure.

**Follow energy.** Whatever they emphasized, dig into that. What excited them? What problem sparked this?

**Challenge vagueness.** Never accept fuzzy answers. "Good" means what? "Users" means who? "Simple" means how?

**Make the abstract concrete.** "Walk me through using this." "What does that actually look like?"

**Clarify ambiguity.** "When you say Z, do you mean A or B?" "You mentioned X — tell me more."

**Know when to stop.** When you understand what they want, why they want it, who it's for, and what done looks like — offer to proceed.

</how_to_question>

<question_types>

Use these as inspiration, not a checklist. Pick what's relevant to the thread.

**Motivation — why this exists:**
- "What prompted this?"
- "What are you doing today that this replaces?"
- "What would you do if this existed?"

**Concreteness — what it actually is:**
- "Walk me through using this"
- "You said X — what does that actually look like?"
- "Give me an example"

**Clarification — what they mean:**
- "When you say Z, do you mean A or B?"
- "You mentioned X — tell me more about that"

**Success — how you'll know it's working:**
- "How will you know this is working?"
- "What does done look like?"

</question_types>

<using_askuserquestion>

Use AskUserQuestion to help users think by presenting concrete options to react to.

**Good options:**
- Interpretations of what they might mean
- Specific examples to confirm or deny
- Concrete choices that reveal priorities

**Bad options:**
- Generic categories ("Technical", "Business", "Other")
- Leading options that presume an answer
- Too many options (2-4 is ideal)
- Headers longer than 12 characters (hard limit — validation will reject them)

**Example — vague answer:**
User says "it should be fast"

- header: "Fast"
- question: "Fast how?"
- options: ["Sub-second response", "Handles large datasets", "Quick to build", "Let me explain"]

**Example — following a thread:**
User mentions "frustrated with current tools"

- header: "Frustration"
- question: "What specifically frustrates you?"
- options: ["Too many clicks", "Missing features", "Unreliable", "Let me explain"]

**Tip for users — modifying an option:**
Users who want a slightly modified version of an option can select "Other" and reference the option by number: `#1 but for finger joints only` or `#2 with pagination disabled`. This avoids retyping the full option text.

</using_askuserquestion>

<freeform_rule>

**When the user wants to explain freely, STOP using AskUserQuestion.**

If a user selects "Other" and their response signals they want to describe something in their own words (e.g., "let me describe it", "I'll explain", "something else", or any open-ended reply that isn't choosing/modifying an existing option), you MUST:

1. **Ask your follow-up as plain text** — NOT via AskUserQuestion
2. **Wait for them to type at the normal prompt**
3. **Resume AskUserQuestion** only after processing their freeform response

The same applies if YOU include a freeform-indicating option (like "Let me explain" or "Describe in detail") and the user selects it.

**Wrong:** User says "let me describe it" → AskUserQuestion("What feature?", ["Feature A", "Feature B", "Describe in detail"])
**Right:** User says "let me describe it" → "Go ahead — what are you thinking?"

</freeform_rule>

<context_checklist>

Use this as a **background checklist**, not a conversation structure. Check these mentally as you go. If gaps remain, weave questions in naturally.

- [ ] What they're building (concrete enough to explain to a stranger)
- [ ] Why it needs to exist (the problem or desire driving it)
- [ ] Who it's for (even if just themselves)
- [ ] What "done" looks like (observable outcomes)

Four things. If they volunteer more, capture it.

</context_checklist>

<decision_gate>
|
||||
|
||||
When you could write a clear PROJECT.md, offer to proceed:
|
||||
|
||||
- header: "Ready?"
|
||||
- question: "I think I understand what you're after. Ready to create PROJECT.md?"
|
||||
- options:
|
||||
- "Create PROJECT.md" — Let's move forward
|
||||
- "Keep exploring" — I want to share more / ask me more
|
||||
|
||||
If "Keep exploring" — ask what they want to add or identify gaps and probe naturally.
|
||||
|
||||
Loop until "Create PROJECT.md" selected.
|
||||
|
||||
</decision_gate>
|
||||
|
||||
<anti_patterns>
|
||||
|
||||
- **Checklist walking** — Going through domains regardless of what they said
|
||||
- **Canned questions** — "What's your core value?" "What's out of scope?" regardless of context
|
||||
- **Corporate speak** — "What are your success criteria?" "Who are your stakeholders?"
|
||||
- **Interrogation** — Firing questions without building on answers
|
||||
- **Rushing** — Minimizing questions to get to "the work"
|
||||
- **Shallow acceptance** — Taking vague answers without probing
|
||||
- **Premature constraints** — Asking about tech stack before understanding the idea
|
||||
- **User skills** — NEVER ask about user's technical experience. Claude builds.
|
||||
|
||||
</anti_patterns>
|
||||
|
||||
</questioning_guide>
263
get-shit-done/references/tdd.md
Normal file
<overview>
TDD is about design quality, not coverage metrics. The red-green-refactor cycle forces you to think about behavior before implementation, producing cleaner interfaces and more testable code.

**Principle:** If you can describe the behavior as `expect(fn(input)).toBe(output)` before writing `fn`, TDD improves the result.

**Key insight:** TDD work is fundamentally heavier than standard tasks—it requires 2-3 execution cycles (RED → GREEN → REFACTOR), each with file reads, test runs, and potential debugging. TDD features get dedicated plans to ensure full context is available throughout the cycle.
</overview>

<when_to_use_tdd>
## When TDD Improves Quality

**TDD candidates (create a TDD plan):**
- Business logic with defined inputs/outputs
- API endpoints with request/response contracts
- Data transformations, parsing, formatting
- Validation rules and constraints
- Algorithms with testable behavior
- State machines and workflows
- Utility functions with clear specifications

**Skip TDD (use standard plan with `type="auto"` tasks):**
- UI layout, styling, visual components
- Configuration changes
- Glue code connecting existing components
- One-off scripts and migrations
- Simple CRUD with no business logic
- Exploratory prototyping

**Heuristic:** Can you write `expect(fn(input)).toBe(output)` before writing `fn`?
→ Yes: Create a TDD plan
→ No: Use standard plan, add tests after if needed
</when_to_use_tdd>
<tdd_plan_structure>
## TDD Plan Structure

Each TDD plan implements **one feature** through the full RED-GREEN-REFACTOR cycle.

```markdown
---
phase: XX-name
plan: NN
type: tdd
---

<objective>
[What feature and why]
Purpose: [Design benefit of TDD for this feature]
Output: [Working, tested feature]
</objective>

<context>
@.planning/PROJECT.md
@.planning/ROADMAP.md
@relevant/source/files.ts
</context>

<feature>
<name>[Feature name]</name>
<files>[source file, test file]</files>
<behavior>
[Expected behavior in testable terms]
Cases: input → expected output
</behavior>
<implementation>[How to implement once tests pass]</implementation>
</feature>

<verification>
[Test command that proves feature works]
</verification>

<success_criteria>
- Failing test written and committed
- Implementation passes test
- Refactor complete (if needed)
- All 2-3 commits present
</success_criteria>

<output>
After completion, create SUMMARY.md with:
- RED: What test was written, why it failed
- GREEN: What implementation made it pass
- REFACTOR: What cleanup was done (if any)
- Commits: List of commits produced
</output>
```

**One feature per TDD plan.** If features are trivial enough to batch, they're trivial enough to skip TDD—use a standard plan and add tests after.
</tdd_plan_structure>

<execution_flow>
## Red-Green-Refactor Cycle

**RED - Write failing test:**
1. Create test file following project conventions
2. Write test describing expected behavior (from `<behavior>` element)
3. Run test - it MUST fail
4. If test passes: feature exists or test is wrong. Investigate.
5. Commit: `test({phase}-{plan}): add failing test for [feature]`

**GREEN - Implement to pass:**
1. Write minimal code to make test pass
2. No cleverness, no optimization - just make it work
3. Run test - it MUST pass
4. Commit: `feat({phase}-{plan}): implement [feature]`

**REFACTOR (if needed):**
1. Clean up implementation if obvious improvements exist
2. Run tests - MUST still pass
3. Only commit if changes made: `refactor({phase}-{plan}): clean up [feature]`

**Result:** Each TDD plan produces 2-3 atomic commits.
</execution_flow>
<test_quality>
## Good Tests vs Bad Tests

**Test behavior, not implementation:**
- Good: "returns formatted date string"
- Bad: "calls formatDate helper with correct params"
- Tests should survive refactors

**One concept per test:**
- Good: Separate tests for valid input, empty input, malformed input
- Bad: Single test checking all edge cases with multiple assertions

**Descriptive names:**
- Good: "should reject empty email", "returns null for invalid ID"
- Bad: "test1", "handles error", "works correctly"

**No implementation details:**
- Good: Test public API, observable behavior
- Bad: Mock internals, test private methods, assert on internal state
</test_quality>
<framework_setup>
## Test Framework Setup (If None Exists)

When executing a TDD plan but no test framework is configured, set it up as part of the RED phase:

**1. Detect project type:**
```bash
# JavaScript/TypeScript
if [ -f package.json ]; then echo "node"; fi

# Python
if [ -f requirements.txt ] || [ -f pyproject.toml ]; then echo "python"; fi

# Go
if [ -f go.mod ]; then echo "go"; fi

# Rust
if [ -f Cargo.toml ]; then echo "rust"; fi
```

**2. Install minimal framework:**

| Project | Framework | Install |
|---------|-----------|---------|
| Node.js | Jest | `npm install -D jest @types/jest ts-jest` |
| Node.js (Vite) | Vitest | `npm install -D vitest` |
| Python | pytest | `pip install pytest` |
| Go | testing | Built-in |
| Rust | cargo test | Built-in |

**3. Create config if needed:**
- Jest: `jest.config.js` with ts-jest preset
- Vitest: `vitest.config.ts` with test globals
- pytest: `pytest.ini` or `pyproject.toml` section
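For the Jest case, a minimal `jest.config.js` with the ts-jest preset looks like this. The `testMatch` pattern is one reasonable choice, not a requirement; Jest's defaults also pick up `*.test.ts` files:

```javascript
// jest.config.js — minimal TypeScript setup via ts-jest
module.exports = {
  preset: 'ts-jest',        // compile .ts test files on the fly
  testEnvironment: 'node',  // no DOM needed for backend/logic tests
  testMatch: ['**/*.test.ts'],
};
```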
**4. Verify setup:**
```bash
# Run empty test suite - should pass with 0 tests
npm test        # Node
pytest          # Python
go test ./...   # Go
cargo test      # Rust
```

**5. Create first test file:**
Follow project conventions for test location:
- `*.test.ts` / `*.spec.ts` next to source
- `__tests__/` directory
- `tests/` directory at root

Framework setup is a one-time cost included in the first TDD plan's RED phase.
</framework_setup>
<error_handling>
## Error Handling

**Test doesn't fail in RED phase:**
- Feature may already exist - investigate
- Test may be wrong (not testing what you think)
- Fix before proceeding

**Test doesn't pass in GREEN phase:**
- Debug implementation
- Don't skip to refactor
- Keep iterating until green

**Tests fail in REFACTOR phase:**
- Undo refactor
- Commit was premature
- Refactor in smaller steps

**Unrelated tests break:**
- Stop and investigate
- May indicate coupling issue
- Fix before proceeding
</error_handling>

<commit_pattern>
## Commit Pattern for TDD Plans

TDD plans produce 2-3 atomic commits (one per phase):

```
test(08-02): add failing test for email validation

- Tests valid email formats accepted
- Tests invalid formats rejected
- Tests empty input handling

feat(08-02): implement email validation

- Regex pattern matches RFC 5322
- Returns boolean for validity
- Handles edge cases (empty, null)

refactor(08-02): extract regex to constant (optional)

- Moved pattern to EMAIL_REGEX constant
- No behavior changes
- Tests still pass
```

**Comparison with standard plans:**
- Standard plans: 1 commit per task, 2-4 commits per plan
- TDD plans: 2-3 commits for a single feature

Both follow the same format: `{type}({phase}-{plan}): {description}`

**Benefits:**
- Each commit independently revertible
- Git bisect works at commit level
- Clear history showing TDD discipline
- Consistent with overall commit strategy
</commit_pattern>

<context_budget>
## Context Budget

TDD plans target **~40% context usage** (lower than standard plans' ~50%).

Why lower:
- RED phase: write test, run test, potentially debug why it didn't fail
- GREEN phase: implement, run test, potentially iterate on failures
- REFACTOR phase: modify code, run tests, verify no regressions

Each phase involves reading files, running commands, and analyzing output. The back-and-forth is inherently heavier than linear task execution.

Single-feature focus ensures full quality throughout the cycle.
</context_budget>
160
get-shit-done/references/ui-brand.md
Normal file
<ui_patterns>

Visual patterns for user-facing GSD output. Orchestrators @-reference this file.

## Stage Banners

Use for major workflow transitions.

```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 GSD ► {STAGE NAME}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```

**Stage names (uppercase):**
- `QUESTIONING`
- `RESEARCHING`
- `DEFINING REQUIREMENTS`
- `CREATING ROADMAP`
- `PLANNING PHASE {N}`
- `EXECUTING WAVE {N}`
- `VERIFYING`
- `PHASE {N} COMPLETE ✓`
- `MILESTONE COMPLETE 🎉`
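A banner like the one above can be produced by a small helper. This is a sketch, not part of GSD itself; the rule width used here is inferred from the template and is an assumption:

```javascript
// Render a stage banner; width is an assumed constant matching the template.
const RULE = '━'.repeat(53);

function stageBanner(stage) {
  return `${RULE}\n GSD ► ${stage.toUpperCase()}\n${RULE}`;
}

console.log(stageBanner('executing wave 2'));
```

Uppercasing in the helper enforces the "stage names (uppercase)" convention even when callers pass lowercase input.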
---

## Checkpoint Boxes

User action required. 62-character width.

```
╔══════════════════════════════════════════════════════════════╗
║ CHECKPOINT: {Type}                                           ║
╚══════════════════════════════════════════════════════════════╝

{Content}

──────────────────────────────────────────────────────────────
→ {ACTION PROMPT}
──────────────────────────────────────────────────────────────
```

**Types:**
- `CHECKPOINT: Verification Required` → `→ Type "approved" or describe issues`
- `CHECKPOINT: Decision Required` → `→ Select: option-a / option-b`
- `CHECKPOINT: Action Required` → `→ Type "done" when complete`

---

## Status Symbols

```
✓ Complete / Passed / Verified
✗ Failed / Missing / Blocked
◆ In Progress
○ Pending
⚡ Auto-approved
⚠ Warning
🎉 Milestone complete (only in banner)
```

---

## Progress Display

**Phase/milestone level:**
```
Progress: ████████░░ 80%
```

**Task level:**
```
Tasks: 2/4 complete
```

**Plan level:**
```
Plans: 3/5 complete
```
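The ten-segment progress bar shown above maps a percentage to filled and empty cells. A minimal sketch (the function name is illustrative, not a GSD API):

```javascript
// Map percent (0-100) onto a 10-cell bar of '█' (filled) and '░' (empty).
function progressBar(percent) {
  const filled = Math.round(percent / 10);
  return `Progress: ${'█'.repeat(filled)}${'░'.repeat(10 - filled)} ${percent}%`;
}

console.log(progressBar(80)); // → Progress: ████████░░ 80%
```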
---

## Spawning Indicators

```
◆ Spawning researcher...

◆ Spawning 4 researchers in parallel...
  → Stack research
  → Features research
  → Architecture research
  → Pitfalls research

✓ Researcher complete: STACK.md written
```

---

## Next Up Block

Always at end of major completions.

```
───────────────────────────────────────────────────────────────

## ▶ Next Up

**{Identifier}: {Name}** — {one-line description}

`{copy-paste command}`

<sub>`/clear` first → fresh context window</sub>

───────────────────────────────────────────────────────────────

**Also available:**
- `/gsd:alternative-1` — description
- `/gsd:alternative-2` — description

───────────────────────────────────────────────────────────────
```

---

## Error Box

```
╔══════════════════════════════════════════════════════════════╗
║ ERROR                                                        ║
╚══════════════════════════════════════════════════════════════╝

{Error description}

**To fix:** {Resolution steps}
```

---

## Tables

```
| Phase | Status | Plans | Progress |
|-------|--------|-------|----------|
| 1     | ✓      | 3/3   | 100%     |
| 2     | ◆      | 1/4   | 25%      |
| 3     | ○      | 0/2   | 0%       |
```

---

## Anti-Patterns

- Varying box/banner widths
- Mixing banner styles (`===`, `---`, `***`)
- Skipping `GSD ►` prefix in banners
- Random emoji (`🚀`, `✨`, `💫`)
- Missing Next Up block after completions

</ui_patterns>
681
get-shit-done/references/user-profiling.md
Normal file
# User Profiling: Detection Heuristics Reference

This reference document defines detection heuristics for behavioral profiling across 8 dimensions. The gsd-user-profiler agent applies these rules when analyzing extracted session messages. Do not invent dimensions or scoring rules beyond what is defined here.

## How to Use This Document

1. The gsd-user-profiler agent reads this document before analyzing any messages
2. For each dimension, the agent scans messages for the signal patterns defined below
3. The agent applies the detection heuristics to classify the developer's pattern
4. Confidence is scored using the thresholds defined per dimension
5. Evidence quotes are curated using the rules in the Evidence Curation section
6. Output must conform to the JSON schema in the Output Schema section

---

## Dimensions

### 1. Communication Style

`dimension_id: communication_style`

**What we're measuring:** How the developer phrases requests, instructions, and feedback -- the structural pattern of their messages to Claude.

**Rating spectrum:**

| Rating | Description |
|--------|-------------|
| `terse-direct` | Short, imperative messages with minimal context. Gets to the point immediately. |
| `conversational` | Medium-length messages mixing instructions with questions and thinking-aloud. Natural, informal tone. |
| `detailed-structured` | Long messages with explicit structure -- headers, numbered lists, problem statements, pre-analysis. |
| `mixed` | No dominant pattern; style shifts based on task type or project context. |

**Signal patterns:**

1. **Message length distribution** -- Average word count across messages. Terse < 50 words, conversational 50-200 words, detailed > 200 words.
2. **Imperative-to-interrogative ratio** -- Ratio of commands ("fix this", "add X") to questions ("what do you think?", "should we?"). A high imperative ratio suggests terse-direct.
3. **Structural formatting** -- Presence of markdown headers, numbered lists, code blocks, or bullet points within messages. Frequent formatting suggests detailed-structured.
4. **Context preambles** -- Whether the developer provides background/context before making a request. Preambles suggest conversational or detailed-structured.
5. **Sentence completeness** -- Whether messages use full sentences or fragments/shorthand. Fragments suggest terse-direct.
6. **Follow-up pattern** -- Whether the developer provides additional context in subsequent messages (multi-message requests suggest conversational).

**Detection heuristics:**

1. If average message length < 50 words AND predominantly imperative mood AND minimal formatting --> `terse-direct`
2. If average message length 50-200 words AND mix of imperative and interrogative AND occasional formatting --> `conversational`
3. If average message length > 200 words AND frequent structural formatting AND context preambles present --> `detailed-structured`
4. If message length variance is high (std dev > 60% of mean) AND no single pattern dominates (< 60% of messages match one style) --> `mixed`
5. If pattern varies systematically by project type (e.g., terse in CLI projects, detailed in frontend) --> `mixed` with context-dependent note

**Confidence scoring:**

- **HIGH:** 10+ messages showing consistent pattern (> 70% match), same pattern observed across 2+ projects
- **MEDIUM:** 5-9 messages showing pattern, OR pattern consistent within 1 project only
- **LOW:** < 5 messages with relevant signals, OR mixed signals (contradictory patterns observed in similar contexts)
- **UNSCORED:** 0 messages with relevant signals for this dimension

**Example quotes:**

- **terse-direct:** "fix the auth bug" / "add pagination to the list endpoint" / "this test is failing, make it pass"
- **conversational:** "I'm thinking we should probably handle the error case here. What do you think about returning a 422 instead of a 500? The client needs to know it was a validation issue."
- **detailed-structured:** "## Context\nThe auth flow currently uses session cookies but we need to migrate to JWT.\n\n## Requirements\n1. Access tokens (15min expiry)\n2. Refresh tokens (7-day)\n3. httpOnly cookies\n\n## What I've tried\nI looked at jose and jsonwebtoken..."

**Context-dependent patterns:**

When communication style varies systematically by project or task type, report the split rather than forcing a single rating. Example: "context-dependent: terse-direct for bug fixes and CLI tooling, detailed-structured for architecture and frontend work." Phase 3 orchestration resolves context-dependent splits by presenting the split to the user.
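The length and formatting heuristics above can be sketched as a small classifier. Thresholds (50/200 words, formatting presence) come from the signal patterns; everything else, including the function name and the simplified formatting regex, is an illustrative reduction, not the profiler's actual code:

```javascript
// Classify communication style from message length and formatting signals.
function classifyCommunicationStyle(messages) {
  const wordCounts = messages.map((m) => m.trim().split(/\s+/).length);
  const avg = wordCounts.reduce((a, b) => a + b, 0) / wordCounts.length;
  // Crude formatting check: headers, numbered lists, or bullets at line start.
  const formatted = messages.filter((m) => /^(#|\d+\.|[-*] )/m.test(m)).length;

  if (avg < 50 && formatted === 0) return 'terse-direct';
  if (avg > 200 && formatted / messages.length > 0.5) return 'detailed-structured';
  if (avg >= 50 && avg <= 200) return 'conversational';
  return 'mixed';
}

console.log(classifyCommunicationStyle(['fix the auth bug', 'add pagination']));
// → terse-direct
```

The real heuristics also weigh imperative mood, preambles, and variance; a production version would combine those signals rather than rely on length alone.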
---

### 2. Decision Speed

`dimension_id: decision_speed`

**What we're measuring:** How quickly the developer makes choices when Claude presents options, alternatives, or trade-offs.

**Rating spectrum:**

| Rating | Description |
|--------|-------------|
| `fast-intuitive` | Decides immediately based on experience or gut feeling. Minimal deliberation. |
| `deliberate-informed` | Requests comparison or summary before deciding. Wants to understand trade-offs. |
| `research-first` | Delays decision to research independently. May leave and return with findings. |
| `delegator` | Defers to Claude's recommendation. Trusts the suggestion. |

**Signal patterns:**

1. **Response latency to options** -- How many messages between Claude presenting options and the developer choosing. Immediate (same message or next) suggests fast-intuitive.
2. **Comparison requests** -- Presence of "compare these", "what are the trade-offs?", "pros and cons?" suggests deliberate-informed.
3. **External research indicators** -- Messages like "I looked into X and...", "according to the docs...", "I read that..." suggest research-first.
4. **Delegation language** -- "just pick one", "whatever you recommend", "your call", "go with the best option" suggests delegator.
5. **Decision reversal frequency** -- How often the developer changes a decision after making it. Frequent reversals may indicate fast-intuitive with low confidence.

**Detection heuristics:**

1. If developer selects options within 1-2 messages of presentation AND uses decisive language ("use X", "go with A") AND rarely asks for comparisons --> `fast-intuitive`
2. If developer requests trade-off analysis or comparison tables AND decides after receiving comparison AND asks clarifying questions --> `deliberate-informed`
3. If developer defers decisions with "let me look into this" AND returns with external information AND cites documentation or articles --> `research-first`
4. If developer uses delegation language (> 3 instances) AND rarely overrides Claude's choices AND says "sounds good" or "your call" --> `delegator`
5. If no clear pattern OR evidence is split across multiple styles --> classify as the dominant style with a context-dependent note

**Confidence scoring:**

- **HIGH:** 10+ decision points observed showing consistent pattern, same pattern across 2+ projects
- **MEDIUM:** 5-9 decision points, OR consistent within 1 project only
- **LOW:** < 5 decision points observed, OR mixed decision-making styles
- **UNSCORED:** 0 messages containing decision-relevant signals

**Example quotes:**

- **fast-intuitive:** "Use Tailwind. Next question." / "Option B, let's move on"
- **deliberate-informed:** "Can you compare Prisma vs Drizzle for this use case? I want to understand the migration story and type safety differences before I pick."
- **research-first:** "Hold off on the DB choice -- I want to read the Drizzle docs and check their GitHub issues first. I'll come back with a decision."
- **delegator:** "You know more about this than me. Whatever you recommend, go with it."

**Context-dependent patterns:**

Decision speed often varies by stakes. A developer may be fast-intuitive for styling choices but research-first for database or auth decisions. When this pattern is clear, report the split: "context-dependent: fast-intuitive for low-stakes (styling, naming), deliberate-informed for high-stakes (architecture, security)."
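Signal pattern 4 (delegation language) is the most mechanical of these signals and can be sketched as a phrase counter. The phrase list mirrors the examples above and is deliberately not exhaustive; the `> 3 instances` threshold is the one named in heuristic 4:

```javascript
// Count messages containing delegation phrases; heuristic 4 treats
// more than 3 such instances as one signal toward `delegator`.
const DELEGATION_PHRASES = [
  /just pick one/i,
  /whatever you recommend/i,
  /your call/i,
  /go with the best option/i,
];

function delegationSignals(messages) {
  return messages.filter((m) =>
    DELEGATION_PHRASES.some((re) => re.test(m))
  ).length;
}

const msgs = ['Your call on the ORM.', 'Whatever you recommend, go with it.'];
console.log(delegationSignals(msgs)); // → 2
```

Phrase counting alone is not a classification; the heuristic also requires that the developer rarely overrides Claude's choices.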
|
||||
|
||||
---
|
||||
|
||||
### 3. Explanation Depth
|
||||
|
||||
`dimension_id: explanation_depth`
|
||||
|
||||
**What we're measuring:** How much explanation the developer wants alongside code -- their preference for understanding vs. speed.
|
||||
|
||||
**Rating spectrum:**
|
||||
|
||||
| Rating | Description |
|
||||
|--------|-------------|
|
||||
| `code-only` | Wants working code with minimal or no explanation. Reads and understands code directly. |
|
||||
| `concise` | Wants brief explanation of approach with code. Key decisions noted, not exhaustive. |
|
||||
| `detailed` | Wants thorough walkthrough of the approach, reasoning, and code. Appreciates structure. |
|
||||
| `educational` | Wants deep conceptual explanation. Treats interactions as learning opportunities. |
|
||||
|
||||
**Signal patterns:**
|
||||
|
||||
1. **Explicit depth requests** -- "just show me the code", "explain why", "teach me about X", "skip the explanation"
|
||||
2. **Reaction to explanations** -- Does the developer skip past explanations? Ask for more detail? Say "too much"?
|
||||
3. **Follow-up question depth** -- Surface-level follow-ups ("does it work?") vs. conceptual ("why this pattern over X?")
|
||||
4. **Code comprehension signals** -- Does the developer reference implementation details in their messages? This suggests they read and understand code directly.
|
||||
5. **"I know this" signals** -- Messages like "I'm familiar with X", "skip the basics", "I know how hooks work" indicate lower explanation preference.
|
||||
|
||||
**Detection heuristics:**
|
||||
|
||||
1. If developer says "just the code" or "skip the explanation" AND rarely asks follow-up conceptual questions AND references code details directly --> `code-only`
|
||||
2. If developer accepts brief explanations without asking for more AND asks focused follow-ups about specific decisions --> `concise`
|
||||
3. If developer asks "why" questions AND requests walkthroughs AND appreciates structured explanations --> `detailed`
|
||||
4. If developer asks conceptual questions beyond the immediate task AND uses learning language ("I want to understand", "teach me") --> `educational`
|
||||
|
||||
**Confidence scoring:**
|
||||
|
||||
- **HIGH:** 10+ messages showing consistent preference, same preference across 2+ projects
|
||||
- **MEDIUM:** 5-9 messages, OR consistent within 1 project only
|
||||
- **LOW:** < 5 relevant messages, OR preferences shift between interactions
|
||||
- **UNSCORED:** 0 messages with relevant signals
|
||||
|
||||
**Example quotes:**
|
||||
|
||||
- **code-only:** "Just give me the implementation. I'll read through it." / "Skip the explanation, show the code."
|
||||
- **concise:** "Quick summary of the approach, then the code please." / "Why did you use a Map here instead of an object?"
|
||||
- **detailed:** "Walk me through this step by step. I want to understand the auth flow before we implement it."
|
||||
- **educational:** "Can you explain how JWT refresh token rotation works conceptually? I want to understand the security model, not just implement it."
|
||||
|
||||
**Context-dependent patterns:**
|
||||
|
||||
Explanation depth often correlates with domain familiarity. A developer may want code-only for well-known tech but educational for new domains. Report splits when observed: "context-dependent: code-only for React/TypeScript, detailed for database optimization."
|
||||
|
||||
---
|
||||
|
||||
### 4. Debugging Approach
|
||||
|
||||
`dimension_id: debugging_approach`
|
||||
|
||||
**What we're measuring:** How the developer approaches problems, errors, and unexpected behavior when working with Claude.
|
||||
|
||||
**Rating spectrum:**
|
||||
|
||||
| Rating | Description |
|
||||
|--------|-------------|
|
||||
| `fix-first` | Pastes error, wants it fixed. Minimal diagnosis interest. Results-oriented. |
|
||||
| `diagnostic` | Shares error with context, wants to understand the cause before fixing. |
|
||||
| `hypothesis-driven` | Investigates independently first, brings specific theories to Claude for validation. |
|
||||
| `collaborative` | Wants to work through the problem step-by-step with Claude as a partner. |
|
||||
|
||||
**Signal patterns:**
|
||||
|
||||
1. **Error presentation style** -- Raw error paste only (fix-first) vs. error + "I think it might be..." (hypothesis-driven) vs. "Can you help me understand why..." (diagnostic)
|
||||
2. **Pre-investigation indicators** -- Does the developer share what they already tried? Do they mention reading logs, checking state, or isolating the issue?
|
||||
3. **Root cause interest** -- After a fix, does the developer ask "why did that happen?" or just move on?
|
||||
4. **Step-by-step language** -- "Let's check X first", "what should we look at next?", "walk me through the debugging"
|
||||
5. **Fix acceptance pattern** -- Does the developer immediately apply fixes or question them first?
|
||||
|
||||
**Detection heuristics:**
|
||||
|
||||
1. If developer pastes errors without context AND accepts fixes without root cause questions AND moves on immediately --> `fix-first`
|
||||
2. If developer provides error context AND asks "why is this happening?" AND wants explanation with the fix --> `diagnostic`
|
||||
3. If developer shares their own analysis AND proposes theories ("I think the issue is X because...") AND asks Claude to confirm or refute --> `hypothesis-driven`
|
||||
4. If developer uses collaborative language ("let's", "what should we check?") AND prefers incremental diagnosis AND walks through problems together --> `collaborative`
|
||||
|
||||
**Confidence scoring:**
|
||||
|
||||
- **HIGH:** 10+ debugging interactions showing consistent approach, same approach across 2+ projects
|
||||
- **MEDIUM:** 5-9 debugging interactions, OR consistent within 1 project only
|
||||
- **LOW:** < 5 debugging interactions, OR approach varies significantly
|
||||
- **UNSCORED:** 0 messages with debugging-relevant signals
|
||||
|
||||
**Example quotes:**
|
||||
|
||||
- **fix-first:** "Getting this error: TypeError: Cannot read properties of undefined. Fix it."
|
||||
- **diagnostic:** "The API returns 500 when I send a POST to /users. Here's the request body and the server log. What's causing this?"
|
||||
- **hypothesis-driven:** "I think the race condition is in the useEffect cleanup. I checked and the subscription isn't being cancelled on unmount. Can you confirm?"
|
||||
- **collaborative:** "Let's debug this together. The test passes locally but fails in CI. What should we check first?"
|
||||
|
||||
**Context-dependent patterns:**
|
||||
|
||||
Debugging approach may vary by urgency. A developer might be fix-first under deadline pressure but hypothesis-driven during regular development. Note temporal patterns if detected.
|
||||
|
||||
---
|
||||
|
||||
### 5. UX Philosophy

`dimension_id: ux_philosophy`

**What we're measuring:** How the developer prioritizes user experience, design, and visual quality relative to functionality.

**Rating spectrum:**

| Rating | Description |
|--------|-------------|
| `function-first` | Get it working, polish later. Minimal UX concern during implementation. |
| `pragmatic` | Basic usability from the start. Nothing ugly or broken, but no design obsession. |
| `design-conscious` | Design and UX are treated as important as functionality. Attention to visual detail. |
| `backend-focused` | Primarily builds backend/CLI. Minimal frontend exposure or interest. |

**Signal patterns:**

1. **Design-related requests** -- Mentions of styling, layout, responsiveness, animations, color schemes, spacing
2. **Polish timing** -- Does the developer ask for visual polish during implementation or defer it?
3. **UI feedback specificity** -- Vague ("make it look better") vs. specific ("increase the padding to 16px, change the font weight to 600")
4. **Frontend vs. backend distribution** -- Ratio of frontend-focused requests to backend-focused requests
5. **Accessibility mentions** -- References to a11y, screen readers, keyboard navigation, ARIA labels

**Detection heuristics:**

1. If developer rarely mentions UI/UX AND focuses on logic, APIs, data AND defers styling ("we'll make it pretty later") --> `function-first`
2. If developer includes basic UX requirements AND mentions usability but not pixel-perfection AND balances form with function --> `pragmatic`
3. If developer provides specific design requirements AND mentions polish, animations, spacing AND treats UI bugs as seriously as logic bugs --> `design-conscious`
4. If developer works primarily on CLI tools, APIs, or backend systems AND rarely or never works on frontend AND messages focus on data, performance, infrastructure --> `backend-focused`

**Confidence scoring:**

- **HIGH:** 10+ messages with UX-relevant signals, same pattern across 2+ projects
- **MEDIUM:** 5-9 messages, OR consistent within 1 project only
- **LOW:** < 5 relevant messages, OR philosophy varies by project type
- **UNSCORED:** 0 messages with UX-relevant signals

**Example quotes:**

- **function-first:** "Just get the form working. We'll style it later." / "I don't care how it looks, I need the data flowing."
- **pragmatic:** "Make sure the loading state is visible and the error messages are clear. Standard styling is fine."
- **design-conscious:** "The button needs more breathing room -- add 12px vertical padding and make the hover state transition 200ms. Also check the contrast ratio."
- **backend-focused:** "I'm building a CLI tool. No UI needed." / "Add the REST endpoint, I'll handle the frontend separately."

**Context-dependent patterns:**

UX philosophy is inherently project-dependent. A developer building a CLI tool is necessarily backend-focused for that project. When possible, distinguish between project-driven and preference-driven patterns. If the developer only has backend projects, note that the rating reflects available data: "backend-focused (note: all analyzed projects are backend/CLI -- may not reflect frontend preferences)."
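The confidence ladder above repeats with the same thresholds in every dimension. A minimal sketch of the shared rule (the function name is illustrative, not part of the GSD toolchain, and the "philosophy varies by project type" LOW clause is intentionally left out for brevity):

```javascript
// Shared confidence rule: signal count plus cross-project spread map onto
// HIGH / MEDIUM / LOW / UNSCORED. Note that 10+ signals seen in only one
// project still caps at MEDIUM ("consistent within 1 project only").
function scoreConfidence(signalCount, projectsWithPattern) {
  if (signalCount === 0) return 'UNSCORED';
  if (signalCount >= 10 && projectsWithPattern >= 2) return 'HIGH';
  if (signalCount >= 5) return 'MEDIUM';
  return 'LOW';
}
```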
---
### 6. Vendor Philosophy

`dimension_id: vendor_philosophy`

**What we're measuring:** How the developer approaches choosing and evaluating libraries, frameworks, and external services.

**Rating spectrum:**

| Rating | Description |
|--------|-------------|
| `pragmatic-fast` | Uses what works, what Claude suggests, or what's fastest. Minimal evaluation. |
| `conservative` | Prefers well-known, battle-tested, widely-adopted options. Risk-averse. |
| `thorough-evaluator` | Researches alternatives, reads docs, compares features and trade-offs before committing. |
| `opinionated` | Has strong, pre-existing preferences for specific tools. Knows what they like. |

**Signal patterns:**

1. **Library selection language** -- "just use whatever", "is X the standard?", "I want to compare A vs B", "we're using X, period"
2. **Evaluation depth** -- Does the developer accept the first suggestion or ask for alternatives?
3. **Stated preferences** -- Explicit mentions of preferred tools, past experience, or tool philosophy
4. **Rejection patterns** -- Does the developer reject Claude's suggestions? On what basis (popularity, personal experience, docs quality)?
5. **Dependency attitude** -- "minimize dependencies", "no external deps", "add whatever we need" -- reveals philosophy about external code

**Detection heuristics:**

1. If developer accepts library suggestions without pushback AND uses phrases like "sounds good" or "go with that" AND rarely asks about alternatives --> `pragmatic-fast`
2. If developer asks about popularity, maintenance, community AND prefers "industry standard" or "battle-tested" AND avoids new/experimental --> `conservative`
3. If developer requests comparisons AND reads docs before deciding AND asks about edge cases, license, bundle size --> `thorough-evaluator`
4. If developer names specific libraries unprompted AND overrides Claude's suggestions AND expresses strong preferences --> `opinionated`

**Confidence scoring:**

- **HIGH:** 10+ vendor/library decisions observed, same pattern across 2+ projects
- **MEDIUM:** 5-9 decisions, OR consistent within 1 project only
- **LOW:** < 5 vendor decisions observed, OR pattern varies
- **UNSCORED:** 0 messages with vendor-selection signals

**Example quotes:**

- **pragmatic-fast:** "Use whatever ORM you recommend. I just need it working." / "Sure, Tailwind is fine."
- **conservative:** "Is Prisma the most widely used ORM for this? I want something with a large community." / "Let's stick with what most teams use."
- **thorough-evaluator:** "Before we pick a state management library, can you compare Zustand vs Jotai vs Redux Toolkit? I want to understand bundle size, API surface, and TypeScript support."
- **opinionated:** "We're using Drizzle, not Prisma. I've used both and Drizzle's SQL-like API is better for complex queries."

**Context-dependent patterns:**

Vendor philosophy may shift based on project importance or domain. Personal projects may use pragmatic-fast while professional projects use thorough-evaluator. Report the split if detected.

---
### 7. Frustration Triggers

`dimension_id: frustration_triggers`

**What we're measuring:** What causes visible frustration, correction, or negative emotional signals in the developer's messages to Claude.

**Rating spectrum:**

| Rating | Description |
|--------|-------------|
| `scope-creep` | Frustrated when Claude does things that were not asked for. Wants bounded execution. |
| `instruction-adherence` | Frustrated when Claude doesn't follow instructions precisely. Values exactness. |
| `verbosity` | Frustrated when Claude over-explains or is too wordy. Wants conciseness. |
| `regression` | Frustrated when Claude breaks working code while fixing something else. Values stability. |

**Signal patterns:**

1. **Correction language** -- "I didn't ask for that", "don't do X", "I said Y not Z", "why did you change this?"
2. **Repetition patterns** -- Repeating the same instruction with emphasis suggests instruction-adherence frustration
3. **Emotional tone shifts** -- Shift from neutral to terse, use of capitals, exclamation marks, explicit frustration words
4. **"Don't" statements** -- "don't add extra features", "don't explain so much", "don't touch that file" -- what they prohibit reveals what frustrates them
5. **Frustration recovery** -- How quickly the developer returns to neutral tone after a frustration event

**Detection heuristics:**

1. If developer corrects Claude for doing unrequested work AND uses language like "I only asked for X", "stop adding things", "stick to what I asked" --> `scope-creep`
2. If developer repeats instructions AND corrects specific deviations from stated requirements AND emphasizes precision ("I specifically said...") --> `instruction-adherence`
3. If developer asks Claude to be shorter AND skips explanations AND expresses annoyance at length ("too much", "just the answer") --> `verbosity`
4. If developer expresses frustration at broken functionality AND checks for regressions AND says "you broke X while fixing Y" --> `regression`

**Confidence scoring:**

- **HIGH:** 10+ frustration events showing consistent trigger pattern, same trigger across 2+ projects
- **MEDIUM:** 5-9 frustration events, OR consistent within 1 project only
- **LOW:** < 5 frustration events observed (note: low frustration count is POSITIVE -- it means the developer is generally satisfied, not that data is insufficient)
- **UNSCORED:** 0 messages with frustration signals (note: "no frustration detected" is a valid finding)

**Example quotes:**

- **scope-creep:** "I asked you to fix the login bug, not refactor the entire auth module. Revert everything except the bug fix."
- **instruction-adherence:** "I said to use a Map, not an object. I was specific about this. Please redo it with a Map."
- **verbosity:** "Way too much explanation. Just show me the code change, nothing else."
- **regression:** "The search was working fine before. Now after your 'fix' to the filter, search results are empty. Don't touch things I didn't ask you to change."

**Context-dependent patterns:**

Frustration triggers tend to be consistent across projects (personality-driven, not project-driven). However, their intensity may vary with project stakes. If multiple frustration triggers are observed, report the primary (most frequent) and note secondaries.

---
### 8. Learning Style

`dimension_id: learning_style`

**What we're measuring:** How the developer prefers to understand new concepts, tools, or patterns they encounter.

**Rating spectrum:**

| Rating | Description |
|--------|-------------|
| `self-directed` | Reads code directly, figures things out independently. Asks Claude specific questions. |
| `guided` | Asks Claude to explain relevant parts. Prefers guided understanding. |
| `documentation-first` | Reads official docs and tutorials before diving in. References documentation. |
| `example-driven` | Wants working examples to modify and learn from. Pattern-matching learner. |

**Signal patterns:**

1. **Learning initiation** -- Does the developer start by reading code, asking for explanation, requesting docs, or asking for examples?
2. **Reference to external sources** -- Mentions of documentation, tutorials, Stack Overflow, blog posts suggest documentation-first
3. **Example requests** -- "show me an example", "can you give me a sample?", "let me see how this looks in practice"
4. **Code-reading indicators** -- "I looked at the implementation", "I see that X calls Y", "from reading the code..."
5. **Explanation requests vs. code requests** -- Ratio of "explain X" to "show me X" messages

**Detection heuristics:**

1. If developer references reading code directly AND asks specific targeted questions AND demonstrates independent investigation --> `self-directed`
2. If developer asks Claude to explain concepts AND requests walkthroughs AND prefers Claude-mediated understanding --> `guided`
3. If developer cites documentation AND asks for doc links AND mentions reading tutorials or official guides --> `documentation-first`
4. If developer requests examples AND modifies provided examples AND learns by pattern matching --> `example-driven`

**Confidence scoring:**

- **HIGH:** 10+ learning interactions showing consistent preference, same preference across 2+ projects
- **MEDIUM:** 5-9 learning interactions, OR consistent within 1 project only
- **LOW:** < 5 learning interactions, OR preference varies by topic familiarity
- **UNSCORED:** 0 messages with learning-relevant signals

**Example quotes:**

- **self-directed:** "I read through the middleware code. The issue is that the token check happens after the rate limiter. Should those be swapped?"
- **guided:** "Can you walk me through how the auth flow works in this codebase? Start from the login request."
- **documentation-first:** "I read the Prisma docs on relations. Can you help me apply the many-to-many pattern from their guide to our schema?"
- **example-driven:** "Show me a working example of a protected API route with JWT validation. I'll adapt it for our endpoints."

**Context-dependent patterns:**

Learning style often varies with domain expertise. A developer may be self-directed in familiar domains but guided or example-driven in new ones. Report the split if detected: "context-dependent: self-directed for TypeScript/Node, example-driven for Rust/systems programming."

---
## Evidence Curation

### Evidence Format

Use the combined format for each evidence entry:

**Signal:** [pattern interpretation -- what the quote demonstrates] / **Example:** "[trimmed quote, ~100 characters]" -- project: [project name]

### Evidence Targets

- **3 evidence quotes per dimension** (24 total across all 8 dimensions)
- Select quotes that best illustrate the rated pattern
- Prefer quotes from different projects to demonstrate cross-project consistency
- When fewer than 3 relevant quotes exist, include what is available and note the evidence count

### Quote Truncation

- Trim quotes to the behavioral signal -- the part that demonstrates the pattern
- Target approximately 100 characters per quote
- Preserve the meaningful fragment, not the full message
- If the signal is in the middle of a long message, use "..." to indicate trimming
- Never include the full 500-character message when 50 characters capture the signal
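The truncation rules above can be sketched as a small helper. This is illustrative only (the profiler applies these rules in prose, not via code) and assumes the caller has already located the signal fragment within the message:

```javascript
// Keep a window centered on the signal, cap it near maxLen characters,
// and mark any trimmed edge with "...".
function trimQuote(message, signalStart, signalLength, maxLen = 100) {
  const pad = Math.max(0, Math.floor((maxLen - signalLength) / 2));
  const start = Math.max(0, signalStart - pad);
  const end = Math.min(message.length, signalStart + signalLength + pad);
  let quote = message.slice(start, end);
  if (start > 0) quote = '...' + quote;
  if (end < message.length) quote = quote + '...';
  return quote;
}
```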
### Project Attribution

- Every evidence quote must include the project name
- Project attribution enables verification and shows cross-project patterns
- Format: `-- project: [name]`

### Sensitive Content Exclusion (Layer 1)

The profiler agent must never select quotes containing any of the following patterns:

- `sk-` (API key prefixes)
- `Bearer ` (auth tokens)
- `password` (credentials)
- `secret` (secrets)
- `token` (when used as a credential value, not a concept discussion)
- `api_key` or `API_KEY` (API key references)
- Full absolute file paths containing usernames (e.g., `/Users/john/...`, `/home/john/...`)

**When sensitive content is found and excluded**, report as metadata in the analysis output:

```json
{
  "sensitive_excluded": [
    { "type": "api_key_pattern", "count": 2 },
    { "type": "file_path_with_username", "count": 1 }
  ]
}
```

This metadata enables defense-in-depth auditing. Layer 2 (regex filter in the write-profile step) provides a second pass, but the profiler should still avoid selecting sensitive quotes.
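A minimal sketch of the Layer 1 screen, assuming the pattern list above. The `password`/`secret`/`api_key` checks here are plain substring matches and will over-match concept discussions; a real filter would need the contextual judgment the bullets describe:

```javascript
// Illustrative pattern table mirroring the exclusion list above.
const SENSITIVE_PATTERNS = [
  { type: 'api_key_pattern', re: /\bsk-[A-Za-z0-9]/ },
  { type: 'auth_token', re: /Bearer\s+\S+/ },
  { type: 'credential_word', re: /password|secret|api_key/i },
  { type: 'file_path_with_username', re: /\/(Users|home)\/[^\/\s]+\// },
];

// Returns whether a quote may be selected, plus the types that fired
// (the types feed the sensitive_excluded metadata).
function screenQuote(quote) {
  const hits = SENSITIVE_PATTERNS.filter((p) => p.re.test(quote));
  return { allowed: hits.length === 0, excluded: hits.map((p) => p.type) };
}
```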
### Natural Language Priority

Weight natural language messages higher than:

- Pasted log output (detected by timestamps, repeated format strings, `[DEBUG]`, `[INFO]`, `[ERROR]`)
- Session context dumps (messages starting with "This session is being continued from a previous conversation")
- Large code pastes (messages where > 80% of content is inside code fences)

These message types are genuine but carry less behavioral signal. Deprioritize them when selecting evidence quotes.
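A hedged sketch of these deprioritization heuristics. The function name and the exact detection rules (a single log-level tag, the 80% fence ratio) are simplifications of the prose above, not part of the GSD toolchain:

```javascript
// True when a message is genuine but low-signal and should be deprioritized.
function isLowSignal(message) {
  // Session context dump.
  if (message.startsWith('This session is being continued from a previous conversation')) {
    return true;
  }
  // Pasted log output (log-level tags; real detection would also look at
  // timestamps and repeated format strings).
  if (/\[(DEBUG|INFO|ERROR)\]/.test(message)) {
    return true;
  }
  // Large code paste: > 80% of characters inside fenced blocks.
  // \u0060 is a backtick, spelled out to keep this sample fence-safe.
  const fenceRe = new RegExp('\\u0060{3}[\\s\\S]*?\\u0060{3}', 'g');
  const fenced = [...message.matchAll(fenceRe)].reduce((n, m) => n + m[0].length, 0);
  return fenced / Math.max(1, message.length) > 0.8;
}
```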
---

## Recency Weighting

### Guideline

Recent sessions (last 30 days) should be weighted approximately 3x compared to older sessions when analyzing patterns.

### Rationale

Developer styles evolve. A developer who was terse six months ago may now provide detailed structured context. Recent behavior is a more accurate reflection of current working style.

### Application

1. When counting signals for confidence scoring, recent signals count 3x (e.g., 4 recent signals = 12 weighted signals)
2. When selecting evidence quotes, prefer recent quotes over older ones when both demonstrate the same pattern
3. When patterns conflict between recent and older sessions, the recent pattern takes precedence for the rating, but note the evolution: "recently shifted from terse-direct to conversational"
4. The 30-day window is relative to the analysis date, not a fixed date

### Edge Cases

- If ALL sessions are older than 30 days, apply no weighting (all sessions are equally stale)
- If ALL sessions are within the last 30 days, apply no weighting (all sessions are equally recent)
- The 3x weight is a guideline, not a hard multiplier -- use judgment when the weighted count changes a confidence threshold
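The weighting rule and both edge cases can be sketched as follows (illustrative helper, not part of the GSD toolchain; it treats 3x as a hard multiplier, whereas the guideline above says to use judgment near thresholds):

```javascript
// Weighted signal count: 3x for signals within the 30-day window relative
// to the analysis date; no weighting when all signals are uniformly recent
// or uniformly stale.
function weightedSignalCount(signalDates, analysisDate = new Date()) {
  const cutoff = analysisDate.getTime() - 30 * 24 * 60 * 60 * 1000;
  const recent = signalDates.filter((d) => d.getTime() >= cutoff).length;
  const older = signalDates.length - recent;
  if (recent === 0 || older === 0) return signalDates.length; // edge cases
  return recent * 3 + older;
}
```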
---
## Thin Data Handling

### Message Thresholds

| Total Genuine Messages | Mode | Behavior |
|------------------------|------|----------|
| > 50 | `full` | Full analysis across all 8 dimensions. Questionnaire optional (user can choose to supplement). |
| 20-50 | `hybrid` | Analyze available messages. Score each dimension with confidence. Supplement with questionnaire for LOW/UNSCORED dimensions. |
| < 20 | `insufficient` | All dimensions scored LOW or UNSCORED. Recommend questionnaire fallback as primary profile source. Note: "insufficient session data for behavioral analysis." |

### Handling Insufficient Dimensions

When a specific dimension has insufficient data (even if total messages exceed thresholds):

- Set confidence to `UNSCORED`
- Set summary to: "Insufficient data -- no clear signals detected for this dimension."
- Set claude_instruction to a neutral fallback: "No strong preference detected. Ask the developer when this dimension is relevant."
- Set evidence_quotes to empty array `[]`
- Set evidence_count to `0`

### Questionnaire Supplement

When operating in `hybrid` mode, the questionnaire fills gaps for dimensions where session analysis produced LOW or UNSCORED confidence. The questionnaire-derived ratings use:

- **MEDIUM** confidence for strong, definitive picks
- **LOW** confidence for "it varies" or ambiguous selections

If session analysis and questionnaire agree on a dimension, confidence can be elevated (e.g., session LOW + questionnaire MEDIUM agreement = MEDIUM).
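The threshold table is a direct mapping from message count to mode. A sketch, with the 20-50 band inclusive at both ends as the table states:

```javascript
// Map the count of genuine user messages onto the analysis mode.
function messageThresholdMode(genuineMessageCount) {
  if (genuineMessageCount > 50) return 'full';
  if (genuineMessageCount >= 20) return 'hybrid';
  return 'insufficient';
}
```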
---
## Output Schema

The profiler agent must return JSON matching this exact schema, wrapped in `<analysis>` tags.

```json
{
  "profile_version": "1.0",
  "analyzed_at": "ISO-8601 timestamp",
  "data_source": "session_analysis",
  "projects_analyzed": ["project-name-1", "project-name-2"],
  "messages_analyzed": 0,
  "message_threshold": "full|hybrid|insufficient",
  "sensitive_excluded": [
    { "type": "string", "count": 0 }
  ],
  "dimensions": {
    "communication_style": {
      "rating": "terse-direct|conversational|detailed-structured|mixed",
      "confidence": "HIGH|MEDIUM|LOW|UNSCORED",
      "evidence_count": 0,
      "cross_project_consistent": true,
      "evidence_quotes": [
        {
          "signal": "Pattern interpretation describing what the quote demonstrates",
          "quote": "Trimmed quote, approximately 100 characters",
          "project": "project-name"
        }
      ],
      "summary": "One to two sentence description of the observed pattern",
      "claude_instruction": "Imperative directive for Claude: 'Match structured communication style' not 'You tend to provide structured context'"
    },
    "decision_speed": {
      "rating": "fast-intuitive|deliberate-informed|research-first|delegator",
      "confidence": "HIGH|MEDIUM|LOW|UNSCORED",
      "evidence_count": 0,
      "cross_project_consistent": true,
      "evidence_quotes": [],
      "summary": "string",
      "claude_instruction": "string"
    },
    "explanation_depth": {
      "rating": "code-only|concise|detailed|educational",
      "confidence": "HIGH|MEDIUM|LOW|UNSCORED",
      "evidence_count": 0,
      "cross_project_consistent": true,
      "evidence_quotes": [],
      "summary": "string",
      "claude_instruction": "string"
    },
    "debugging_approach": {
      "rating": "fix-first|diagnostic|hypothesis-driven|collaborative",
      "confidence": "HIGH|MEDIUM|LOW|UNSCORED",
      "evidence_count": 0,
      "cross_project_consistent": true,
      "evidence_quotes": [],
      "summary": "string",
      "claude_instruction": "string"
    },
    "ux_philosophy": {
      "rating": "function-first|pragmatic|design-conscious|backend-focused",
      "confidence": "HIGH|MEDIUM|LOW|UNSCORED",
      "evidence_count": 0,
      "cross_project_consistent": true,
      "evidence_quotes": [],
      "summary": "string",
      "claude_instruction": "string"
    },
    "vendor_philosophy": {
      "rating": "pragmatic-fast|conservative|thorough-evaluator|opinionated",
      "confidence": "HIGH|MEDIUM|LOW|UNSCORED",
      "evidence_count": 0,
      "cross_project_consistent": true,
      "evidence_quotes": [],
      "summary": "string",
      "claude_instruction": "string"
    },
    "frustration_triggers": {
      "rating": "scope-creep|instruction-adherence|verbosity|regression",
      "confidence": "HIGH|MEDIUM|LOW|UNSCORED",
      "evidence_count": 0,
      "cross_project_consistent": true,
      "evidence_quotes": [],
      "summary": "string",
      "claude_instruction": "string"
    },
    "learning_style": {
      "rating": "self-directed|guided|documentation-first|example-driven",
      "confidence": "HIGH|MEDIUM|LOW|UNSCORED",
      "evidence_count": 0,
      "cross_project_consistent": true,
      "evidence_quotes": [],
      "summary": "string",
      "claude_instruction": "string"
    }
  }
}
```

### Schema Notes

- **`profile_version`**: Always `"1.0"` for this schema version
- **`analyzed_at`**: ISO-8601 timestamp of when the analysis was performed
- **`data_source`**: `"session_analysis"` for session-based profiling, `"questionnaire"` for questionnaire-only, `"hybrid"` for combined
- **`projects_analyzed`**: List of project names that contributed messages
- **`messages_analyzed`**: Total number of genuine user messages processed
- **`message_threshold`**: Which threshold mode was triggered (`full`, `hybrid`, `insufficient`)
- **`sensitive_excluded`**: Array of excluded sensitive content types with counts (empty array if none found)
- **`claude_instruction`**: Must be written in imperative form directed at Claude. This field is how the profile becomes actionable.
  - Good: "Provide structured responses with headers and numbered lists to match this developer's communication style."
  - Bad: "You tend to like structured responses."
  - Good: "Ask before making changes beyond the stated request -- this developer values bounded execution."
  - Bad: "The developer gets frustrated when you do extra work."
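A lightweight structural check over profiler output can catch missing dimensions or fields before the profile is written. This is a sanity sketch mirroring the schema above, not a full JSON Schema validator, and it is not part of the GSD toolchain:

```javascript
// Dimension ids and per-dimension keys, transcribed from the schema.
const DIMENSIONS = [
  'communication_style', 'decision_speed', 'explanation_depth',
  'debugging_approach', 'ux_philosophy', 'vendor_philosophy',
  'frustration_triggers', 'learning_style',
];
const DIM_KEYS = [
  'rating', 'confidence', 'evidence_count', 'cross_project_consistent',
  'evidence_quotes', 'summary', 'claude_instruction',
];

// Returns a (possibly empty) list of structural problems.
function validateProfile(profile) {
  const errors = [];
  if (profile.profile_version !== '1.0') errors.push('profile_version must be "1.0"');
  for (const dim of DIMENSIONS) {
    const d = (profile.dimensions || {})[dim];
    if (!d) { errors.push(`missing dimension: ${dim}`); continue; }
    for (const key of DIM_KEYS) {
      if (!(key in d)) errors.push(`${dim}: missing ${key}`);
    }
  }
  return errors;
}
```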
---

## Cross-Project Consistency

### Assessment

For each dimension, assess whether the observed pattern is consistent across the projects analyzed:

- **`cross_project_consistent: true`** -- Same rating would apply regardless of which project is analyzed. Evidence from 2+ projects shows the same pattern.
- **`cross_project_consistent: false`** -- Pattern varies by project. Include a context-dependent note in the summary.

### Reporting Splits

When `cross_project_consistent` is false, the summary must describe the split:

- "Context-dependent: terse-direct for CLI/backend projects (gsd-tools, api-server), detailed-structured for frontend projects (dashboard, landing-page)."
- "Context-dependent: fast-intuitive for familiar tech (React, Node), research-first for new domains (Rust, ML)."

The rating field should reflect the **dominant** pattern (most evidence). The summary describes the nuance.
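The "dominant pattern" rule reduces to a tally. A sketch under the simplifying assumption that any second rating with evidence marks the dimension as a split (the real assessment also weighs which projects the evidence came from):

```javascript
// Pick the rating with the most supporting evidence; flag a split when
// more than one candidate rating has any support at all.
function resolveDominant(evidenceByRating) {
  const entries = Object.entries(evidenceByRating).filter(([, n]) => n > 0);
  entries.sort((a, b) => b[1] - a[1]);
  return {
    rating: entries.length ? entries[0][0] : null,
    consistent: entries.length <= 1,
  };
}
```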
### Phase 3 Resolution

Context-dependent splits are resolved during Phase 3 orchestration. The orchestrator presents the split to the developer and asks which pattern represents their general preference. Until resolved, Claude uses the dominant pattern with awareness of the context-dependent variation.

---
*Reference document version: 1.0*
*Dimensions: 8*
*Schema: profile_version 1.0*
612
get-shit-done/references/verification-patterns.md
Normal file
@@ -0,0 +1,612 @@
# Verification Patterns

How to verify different types of artifacts are real implementations, not stubs or placeholders.

<core_principle>
**Existence ≠ Implementation**

A file existing does not mean the feature works. Verification must check:
1. **Exists** - File is present at expected path
2. **Substantive** - Content is real implementation, not placeholder
3. **Wired** - Connected to the rest of the system
4. **Functional** - Actually works when invoked

Levels 1-3 can be checked programmatically. Level 4 often requires human verification.
</core_principle>
<stub_detection>

## Universal Stub Patterns

These patterns indicate placeholder code regardless of file type:

**Comment-based stubs:**
```bash
# Grep patterns for stub comments
grep -E "(TODO|FIXME|XXX|HACK|PLACEHOLDER)" "$file"
grep -E "implement|add later|coming soon|will be" "$file" -i
grep -E "// \.\.\.|/\* \.\.\. \*/|# \.\.\." "$file"
```

**Placeholder text in output:**
```bash
# UI placeholder patterns
grep -E "placeholder|lorem ipsum|coming soon|under construction" "$file" -i
grep -E "sample|example|test data|dummy" "$file" -i
grep -E "\[.*\]|<.*>|\{.*\}" "$file" # Template brackets left in
```

**Empty or trivial implementations:**
```bash
# Functions that do nothing
grep -E "return null|return undefined|return \{\}|return \[\]" "$file"
grep -E "pass$|\.\.\.|\bnothing\b" "$file"
grep -E "console\.(log|warn|error).*only" "$file" # Log-only functions
```

**Hardcoded values where dynamic expected:**
```bash
# Hardcoded IDs, counts, or content
grep -E "id.*=.*['\"].*['\"]" "$file" # Hardcoded string IDs
grep -E "count.*=.*\d+|length.*=.*\d+" "$file" # Hardcoded counts
grep -E "\\\$\d+\.\d{2}|\d+ items" "$file" # Hardcoded display values
```

</stub_detection>
<react_components>

## React/Next.js Components

**Existence check:**
```bash
# File exists and exports component
[ -f "$component_path" ] && grep -E "export (default |)function|export const.*=.*\(" "$component_path"
```

**Substantive check:**
```bash
# Returns actual JSX, not placeholder
grep -E "return.*<" "$component_path" | grep -v "return.*null" | grep -v "placeholder" -i

# Has meaningful content (not just wrapper div)
grep -E "<[A-Z][a-zA-Z]+|className=|onClick=|onChange=" "$component_path"

# Uses props or state (not static)
grep -E "props\.|useState|useEffect|useContext|\{.*\}" "$component_path"
```

**Stub patterns specific to React:**
```javascript
// RED FLAGS - These are stubs:
return <div>Component</div>
return <div>Placeholder</div>
return <div>{/* TODO */}</div>
return <p>Coming soon</p>
return null
return <></>

// Also stubs - empty handlers:
onClick={() => {}}
onChange={() => console.log('clicked')}
onSubmit={(e) => e.preventDefault()} // Only prevents default, does nothing
```

**Wiring check:**
```bash
# Component imports what it needs
grep -E "^import.*from" "$component_path"

# Props are actually used (not just received)
# Look for destructuring or props.X usage
grep -E "\{ .* \}.*props|\bprops\.[a-zA-Z]+" "$component_path"

# API calls exist (for data-fetching components)
grep -E "fetch\(|axios\.|useSWR|useQuery|getServerSideProps|getStaticProps" "$component_path"
```

**Functional verification (human required):**
- Does the component render visible content?
- Do interactive elements respond to clicks?
- Does data load and display?
- Do error states show appropriately?

</react_components>
<api_routes>

## API Routes (Next.js App Router / Express / etc.)

**Existence check:**
```bash
# Route file exists
[ -f "$route_path" ]

# Exports HTTP method handlers (Next.js App Router)
grep -E "export (async )?(function|const) (GET|POST|PUT|PATCH|DELETE)" "$route_path"

# Or Express-style handlers
grep -E "\.(get|post|put|patch|delete)\(" "$route_path"
```

**Substantive check:**
```bash
# Has actual logic, not just return statement
wc -l "$route_path" # More than 10-15 lines suggests real implementation

# Interacts with data source
grep -E "prisma\.|db\.|mongoose\.|sql|query|find|create|update|delete" "$route_path" -i

# Has error handling
grep -E "try|catch|throw|error|Error" "$route_path"

# Returns meaningful response
grep -E "Response\.json|res\.json|res\.send|return.*\{" "$route_path" | grep -v "message.*not implemented" -i
```

**Stub patterns specific to API routes:**
```typescript
// RED FLAGS - These are stubs:
export async function POST() {
  return Response.json({ message: "Not implemented" })
}

export async function GET() {
  return Response.json([]) // Empty array with no DB query
}

export async function PUT() {
  return new Response() // Empty response
}

// Console log only:
export async function POST(req) {
  console.log(await req.json())
  return Response.json({ ok: true })
}
```

**Wiring check:**
```bash
# Imports database/service clients
grep -E "^import.*prisma|^import.*db|^import.*client" "$route_path"

# Actually uses request body (for POST/PUT)
grep -E "req\.json\(\)|req\.body|request\.json\(\)" "$route_path"

# Validates input (not just trusting request)
grep -E "schema\.parse|validate|zod|yup|joi" "$route_path"
```

**Functional verification (human or automated):**
- Does GET return real data from database?
- Does POST actually create a record?
- Does error response have correct status code?
- Are auth checks actually enforced?

</api_routes>
<database_schema>
|
||||
|
||||
## Database Schema (Prisma / Drizzle / SQL)
|
||||
|
||||
**Existence check:**
|
||||
```bash
|
||||
# Schema file exists
|
||||
[ -f "prisma/schema.prisma" ] || [ -f "drizzle/schema.ts" ] || [ -f "src/db/schema.sql" ]
|
||||
|
||||
# Model/table is defined
|
||||
grep -E "^model $model_name|CREATE TABLE $table_name|export const $table_name" "$schema_path"
|
||||
```
|
||||
|
||||
**Substantive check:**
|
||||
```bash
|
||||
# Has expected fields (not just id)
|
||||
grep -A 20 "model $model_name" "$schema_path" | grep -E "^\s+\w+\s+\w+"
|
||||
|
||||
# Has relationships if expected
|
||||
grep -E "@relation|REFERENCES|FOREIGN KEY" "$schema_path"
|
||||
|
||||
# Has appropriate field types (not all String)
|
||||
grep -A 20 "model $model_name" "$schema_path" | grep -E "Int|DateTime|Boolean|Float|Decimal|Json"
|
||||
```
|
||||
|
||||
**Stub patterns specific to schemas:**
|
||||
```prisma
|
||||
// RED FLAGS - These are stubs:
|
||||
model User {
|
||||
id String @id
|
||||
// TODO: add fields
|
||||
}
|
||||
|
||||
model Message {
|
||||
id String @id
|
||||
content String // Only one real field
|
||||
}
|
||||
|
||||
// Missing critical fields:
|
||||
model Order {
|
||||
id String @id
|
||||
// No: userId, items, total, status, createdAt
|
||||
}
|
||||
```
|
||||
|
||||
**Wiring check:**
|
||||
```bash
|
||||
# Migrations exist and are applied
|
||||
ls prisma/migrations/ 2>/dev/null | wc -l # Should be > 0
|
||||
npx prisma migrate status 2>/dev/null | grep -v "pending"
|
||||
|
||||
# Client is generated
|
||||
[ -d "node_modules/.prisma/client" ]
|
||||
```
|
||||
|
||||
**Functional verification:**
|
||||
```bash
|
||||
# Can query the table (automated)
|
||||
npx prisma db execute --stdin <<< "SELECT COUNT(*) FROM $table_name"
|
||||
```
|
||||
|
||||
</database_schema>

<hooks_utilities>

## Custom Hooks and Utilities

**Existence check:**
```bash
# File exists and exports function
[ -f "$hook_path" ] && grep -E "export (default )?(function|const)" "$hook_path"
```

**Substantive check:**
```bash
# Hook uses React hooks (for custom hooks)
grep -E "useState|useEffect|useCallback|useMemo|useRef|useContext" "$hook_path"

# Has meaningful return value
grep -E "return \{|return \[" "$hook_path"

# More than trivial length
[ $(wc -l < "$hook_path") -gt 10 ]
```

**Stub patterns specific to hooks:**
```typescript
// RED FLAGS - These are stubs:
export function useAuth() {
  return { user: null, login: () => {}, logout: () => {} }
}

export function useCart() {
  const [items, setItems] = useState([])
  return { items, addItem: () => console.log('add'), removeItem: () => {} }
}

// Hardcoded return:
export function useUser() {
  return { name: "Test User", email: "test@example.com" }
}
```

**Wiring check:**
```bash
# Hook is actually imported somewhere
grep -r "import.*$hook_name" src/ --include="*.tsx" --include="*.ts" | grep -v "$hook_path"

# Hook is actually called
grep -r "$hook_name()" src/ --include="*.tsx" --include="*.ts" | grep -v "$hook_path"
```

</hooks_utilities>

<environment_config>

## Environment Variables and Configuration

**Existence check:**
```bash
# .env file exists
[ -f ".env" ] || [ -f ".env.local" ]

# Required variable is defined
grep -E "^$VAR_NAME=" .env .env.local 2>/dev/null
```

**Substantive check:**
```bash
# Variable has actual value (not placeholder)
grep -E "^$VAR_NAME=.+" .env .env.local 2>/dev/null | grep -viE "your-.*-here|xxx|placeholder|TODO"

# Value looks valid for type:
# - URLs should start with http
# - Keys should be long enough
# - Booleans should be true/false
```
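
The type heuristics above can be sketched as a small helper. The name-suffix conventions and the 20-character key threshold are illustrative assumptions, not a standard:

```shell
# Rough per-type validity check for an env var, keyed off common
# name suffixes.
check_env_value() {
  local name="$1" value="$2"
  case "$name" in
    *_URL)
      case "$value" in
        http*) echo "OK: $name" ;;
        *) echo "BAD: $name should start with http" ;;
      esac ;;
    *_KEY|*_SECRET*)
      if [ "${#value}" -ge 20 ]; then echo "OK: $name"; else echo "BAD: $name looks too short"; fi ;;
    *_ENABLED|*_DEBUG)
      case "$value" in
        true|false) echo "OK: $name" ;;
        *) echo "BAD: $name should be true/false" ;;
      esac ;;
    *)
      if [ -n "$value" ]; then echo "OK: $name"; else echo "BAD: $name is empty"; fi ;;
  esac
}
```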

**Stub patterns specific to env:**
```bash
# RED FLAGS - These are stubs:
DATABASE_URL=your-database-url-here
STRIPE_SECRET_KEY=sk_test_xxx
API_KEY=placeholder
NEXT_PUBLIC_API_URL=http://localhost:3000 # Still pointing to localhost in prod
```

**Wiring check:**
```bash
# Variable is actually used in code
grep -rE "process\.env\.$VAR_NAME|env\.$VAR_NAME" src/ --include="*.ts" --include="*.tsx"

# Variable is in validation schema (if using zod/etc for env)
grep -E "$VAR_NAME" src/env.ts src/env.mjs 2>/dev/null
```

</environment_config>

<wiring_verification>

## Wiring Verification Patterns

Wiring verification checks that components actually communicate. This is where most stubs hide.

### Pattern: Component → API

**Check:** Does the component actually call the API?

```bash
# Find the fetch/axios call
grep -E "fetch\(['\"].*$api_path|axios\.(get|post).*$api_path" "$component_path"

# Verify it's not commented out
grep -E "fetch\(|axios\." "$component_path" | grep -v "^.*//.*fetch"

# Check the response is used
grep -E "await.*fetch|\.then\(|setData|setState" "$component_path"
```

**Red flags:**
```typescript
// Fetch exists but response ignored:
fetch('/api/messages') // No await, no .then, no assignment

// Fetch in comment:
// fetch('/api/messages').then(r => r.json()).then(setMessages)

// Fetch to wrong endpoint:
fetch('/api/message') // Typo - should be /api/messages
```

### Pattern: API → Database

**Check:** Does the API route actually query the database?

```bash
# Find the database call
grep -E "prisma\.$model|db\.query|Model\.find" "$route_path"

# Verify it's awaited
grep -E "await.*prisma|await.*db\." "$route_path"

# Check result is returned
grep -E "return.*json.*data|res\.json.*result" "$route_path"
```

**Red flags:**
```typescript
// Query exists but result not returned:
await prisma.message.findMany()
return Response.json({ ok: true }) // Returns static, not query result

// Query not awaited:
const messages = prisma.message.findMany() // Missing await
return Response.json(messages) // Returns Promise, not data
```

### Pattern: Form → Handler

**Check:** Does the form submission actually do something?

```bash
# Find onSubmit handler
grep -E "onSubmit=\{|handleSubmit" "$component_path"

# Check handler has content
grep -A 10 "onSubmit.*=" "$component_path" | grep -E "fetch|axios|mutate|dispatch"

# Verify not just preventDefault
grep -A 5 "onSubmit" "$component_path" | grep -iv "only.*preventDefault"
```

**Red flags:**
```typescript
// Handler only prevents default:
onSubmit={(e) => e.preventDefault()}

// Handler only logs:
const handleSubmit = (data) => {
  console.log(data)
}

// Handler is empty:
onSubmit={() => {}}
```

### Pattern: State → Render

**Check:** Does the component render state, not hardcoded content?

```bash
# Find state usage in JSX
grep -E "\{.*messages.*\}|\{.*data.*\}|\{.*items.*\}" "$component_path"

# Check map/render of state
grep -E "\.map\(|\.filter\(|\.reduce\(" "$component_path"

# Verify dynamic content
grep -E "\{[a-zA-Z_]+\." "$component_path" # Variable interpolation
```

**Red flags:**
```tsx
// Hardcoded instead of state:
return <div>
  <p>Message 1</p>
  <p>Message 2</p>
</div>

// State exists but not rendered:
const [messages, setMessages] = useState([])
return <div>No messages</div> // Always shows "no messages"

// Wrong state rendered:
const [messages, setMessages] = useState([])
return <div>{otherData.map(...)}</div> // Uses different data
```

</wiring_verification>

<verification_checklist>

## Quick Verification Checklist

For each artifact type, run through this checklist:

### Component Checklist
- [ ] File exists at expected path
- [ ] Exports a function/const component
- [ ] Returns JSX (not null/empty)
- [ ] No placeholder text in render
- [ ] Uses props or state (not static)
- [ ] Event handlers have real implementations
- [ ] Imports resolve correctly
- [ ] Used somewhere in the app

### API Route Checklist
- [ ] File exists at expected path
- [ ] Exports HTTP method handlers
- [ ] Handlers have more than 5 lines
- [ ] Queries database or service
- [ ] Returns meaningful response (not empty/placeholder)
- [ ] Has error handling
- [ ] Validates input
- [ ] Called from frontend

### Schema Checklist
- [ ] Model/table defined
- [ ] Has all expected fields
- [ ] Fields have appropriate types
- [ ] Relationships defined if needed
- [ ] Migrations exist and applied
- [ ] Client generated

### Hook/Utility Checklist
- [ ] File exists at expected path
- [ ] Exports function
- [ ] Has meaningful implementation (not empty returns)
- [ ] Used somewhere in the app
- [ ] Return values consumed

### Wiring Checklist
- [ ] Component → API: fetch/axios call exists and uses response
- [ ] API → Database: query exists and result returned
- [ ] Form → Handler: onSubmit calls API/mutation
- [ ] State → Render: state variables appear in JSX

</verification_checklist>

<automated_verification_script>

## Automated Verification Approach

For the verification subagent, use this pattern:

```bash
# 1. Check existence
check_exists() {
  [ -f "$1" ] && echo "EXISTS: $1" || echo "MISSING: $1"
}

# 2. Check for stub patterns
check_stubs() {
  local file="$1"
  local stubs
  stubs=$(grep -c -E "TODO|FIXME|placeholder|not implemented" "$file" 2>/dev/null)
  [ "${stubs:-0}" -gt 0 ] && echo "STUB_PATTERNS: $stubs in $file"
}

# 3. Check wiring (component calls API)
check_wiring() {
  local component="$1"
  local api_path="$2"
  grep -q "$api_path" "$component" && echo "WIRED: $component → $api_path" || echo "NOT_WIRED: $component → $api_path"
}

# 4. Check substantive (more than N lines, has expected patterns)
check_substantive() {
  local file="$1"
  local min_lines="$2"
  local pattern="$3"
  local lines has_pattern
  lines=$(wc -l < "$file" 2>/dev/null || echo 0)
  has_pattern=$(grep -c -E "$pattern" "$file" 2>/dev/null)
  has_pattern="${has_pattern:-0}"
  [ "$lines" -ge "$min_lines" ] && [ "$has_pattern" -gt 0 ] && echo "SUBSTANTIVE: $file" || echo "THIN: $file ($lines lines, $has_pattern matches)"
}
```

Run these checks against each must-have artifact. Aggregate results into VERIFICATION.md.
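
A minimal sketch of that aggregation step, assuming a manifest file with one artifact path per line; the manifest format and report layout are illustrative assumptions:

```shell
# Run existence + stub checks over every artifact in a manifest and
# write one line per artifact into the report. Exits non-zero if
# anything failed, so callers can gate on it.
verify_artifacts() {
  local manifest="$1" report="$2"
  : > "$report"
  while IFS= read -r file; do
    [ -z "$file" ] && continue
    if [ ! -f "$file" ]; then
      echo "MISSING: $file" >> "$report"
    elif grep -qiE "TODO|FIXME|placeholder|not implemented" "$file"; then
      echo "STUB_PATTERNS: $file" >> "$report"
    else
      echo "OK: $file" >> "$report"
    fi
  done < "$manifest"
  ! grep -qE "^(MISSING|STUB_PATTERNS):" "$report"
}
```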
</automated_verification_script>

<human_verification_triggers>

## When to Require Human Verification

Some things can't be verified programmatically. Flag these for human testing:

**Always human:**
- Visual appearance (does it look right?)
- User flow completion (can you actually do the thing?)
- Real-time behavior (WebSocket, SSE)
- External service integration (Stripe, email sending)
- Error message clarity (is the message helpful?)
- Performance feel (does it feel fast?)

**Human if uncertain:**
- Complex wiring that grep can't trace
- Dynamic behavior depending on state
- Edge cases and error states
- Mobile responsiveness
- Accessibility

**Format for human verification request:**
```markdown
## Human Verification Required

### 1. Chat message sending
**Test:** Type a message and click Send
**Expected:** Message appears in list, input clears
**Check:** Does message persist after refresh?

### 2. Error handling
**Test:** Disconnect network, try to send
**Expected:** Error message appears, message not lost
**Check:** Can retry after reconnect?
```

</human_verification_triggers>

<checkpoint_automation_reference>

## Pre-Checkpoint Automation

For automation-first checkpoint patterns, server lifecycle management, CLI installation handling, and error recovery protocols, see:

**@C:/Users/yaoji/.claude/get-shit-done/references/checkpoints.md** → `<automation_reference>` section

Key principles:
- Claude sets up verification environment BEFORE presenting checkpoints
- Users never run CLI commands (visit URLs only)
- Server lifecycle: start before checkpoint, handle port conflicts, keep running for duration
- CLI installation: auto-install where safe, checkpoint for user choice otherwise
- Error handling: fix broken environment before checkpoint, never present checkpoint with failed setup

</checkpoint_automation_reference>
164
get-shit-done/templates/DEBUG.md
Normal file
@@ -0,0 +1,164 @@
# Debug Template

Template for `.planning/debug/[slug].md` — active debug session tracking.

---

## File Template

```markdown
---
status: gathering | investigating | fixing | verifying | awaiting_human_verify | resolved
trigger: "[verbatim user input]"
created: [ISO timestamp]
updated: [ISO timestamp]
---

## Current Focus
<!-- OVERWRITE on each update - always reflects NOW -->

hypothesis: [current theory being tested]
test: [how testing it]
expecting: [what result means if true/false]
next_action: [immediate next step]

## Symptoms
<!-- Written during gathering, then immutable -->

expected: [what should happen]
actual: [what actually happens]
errors: [error messages if any]
reproduction: [how to trigger]
started: [when it broke / always broken]

## Eliminated
<!-- APPEND only - prevents re-investigating after /clear -->

- hypothesis: [theory that was wrong]
  evidence: [what disproved it]
  timestamp: [when eliminated]

## Evidence
<!-- APPEND only - facts discovered during investigation -->

- timestamp: [when found]
  checked: [what was examined]
  found: [what was observed]
  implication: [what this means]

## Resolution
<!-- OVERWRITE as understanding evolves -->

root_cause: [empty until found]
fix: [empty until applied]
verification: [empty until verified]
files_changed: []
```

---

<section_rules>

**Frontmatter (status, trigger, timestamps):**
- `status`: OVERWRITE - reflects current phase
- `trigger`: IMMUTABLE - verbatim user input, never changes
- `created`: IMMUTABLE - set once
- `updated`: OVERWRITE - update on every change

**Current Focus:**
- OVERWRITE entirely on each update
- Always reflects what Claude is doing RIGHT NOW
- If Claude reads this after /clear, it knows exactly where to resume
- Fields: hypothesis, test, expecting, next_action

**Symptoms:**
- Written during initial gathering phase
- IMMUTABLE after gathering complete
- Reference point for what we're trying to fix
- Fields: expected, actual, errors, reproduction, started

**Eliminated:**
- APPEND only - never remove entries
- Prevents re-investigating dead ends after context reset
- Each entry: hypothesis, evidence that disproved it, timestamp
- Critical for efficiency across /clear boundaries

**Evidence:**
- APPEND only - never remove entries
- Facts discovered during investigation
- Each entry: timestamp, what checked, what found, implication
- Builds the case for root cause

**Resolution:**
- OVERWRITE as understanding evolves
- May update multiple times as fixes are tried
- Final state shows confirmed root cause and verified fix
- Fields: root_cause, fix, verification, files_changed

</section_rules>

<lifecycle>

**Creation:** Immediately when /gsd:debug is called
- Create file with trigger from user input
- Set status to "gathering"
- Current Focus: next_action = "gather symptoms"
- Symptoms: empty, to be filled

**During symptom gathering:**
- Update Symptoms section as user answers questions
- Update Current Focus with each question
- When complete: status → "investigating"

**During investigation:**
- OVERWRITE Current Focus with each hypothesis
- APPEND to Evidence with each finding
- APPEND to Eliminated when hypothesis disproved
- Update timestamp in frontmatter

**During fixing:**
- status → "fixing"
- Update Resolution.root_cause when confirmed
- Update Resolution.fix when applied
- Update Resolution.files_changed

**During verification:**
- status → "verifying"
- Update Resolution.verification with results
- If verification fails: status → "investigating", try again

**After self-verification passes:**
- status → "awaiting_human_verify"
- Request explicit user confirmation in a checkpoint
- Do NOT move file to resolved yet

**On resolution:**
- status → "resolved"
- Move file to .planning/debug/resolved/ (only after user confirms fix)

</lifecycle>

<resume_behavior>

When Claude reads this file after /clear:

1. Parse frontmatter → know status
2. Read Current Focus → know exactly what was happening
3. Read Eliminated → know what NOT to retry
4. Read Evidence → know what's been learned
5. Continue from next_action

The file IS the debugging brain. Claude should be able to resume perfectly from any interruption point.
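
Step 1 can be as simple as a small frontmatter reader, sketched here under the assumption that the `---` markers sit exactly at the top of the file:

```shell
# Read one scalar field out of the leading YAML frontmatter block.
fm_get() {
  local file="$1" field="$2"
  awk -v f="$field" '
    NR == 1 && $0 == "---" { in_fm = 1; next }
    in_fm && $0 == "---"   { exit }
    in_fm && index($0, f ":") == 1 { sub(/^[^:]*: */, ""); print; exit }
  ' "$file"
}
```

For example, `fm_get .planning/debug/login-broken.md status` would print the current status without loading the rest of the file.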
</resume_behavior>

<size_constraint>

Keep debug files focused:
- Evidence entries: 1-2 lines each, just the facts
- Eliminated: brief - hypothesis + why it failed
- No narrative prose - structured data only

If evidence grows very large (10+ entries), consider whether you're going in circles. Check Eliminated to ensure you're not re-treading.

247
get-shit-done/templates/UAT.md
Normal file
@@ -0,0 +1,247 @@
# UAT Template

Template for `.planning/phases/XX-name/{phase_num}-UAT.md` — persistent UAT session tracking.

---

## File Template

```markdown
---
status: testing | complete | diagnosed
phase: XX-name
source: [list of SUMMARY.md files tested]
started: [ISO timestamp]
updated: [ISO timestamp]
---

## Current Test
<!-- OVERWRITE each test - shows where we are -->

number: [N]
name: [test name]
expected: |
  [what user should observe]
awaiting: user response

## Tests

### 1. [Test Name]
expected: [observable behavior - what user should see]
result: [pending]

### 2. [Test Name]
expected: [observable behavior]
result: pass

### 3. [Test Name]
expected: [observable behavior]
result: issue
reported: "[verbatim user response]"
severity: major

### 4. [Test Name]
expected: [observable behavior]
result: skipped
reason: [why skipped]

...

## Summary

total: [N]
passed: [N]
issues: [N]
pending: [N]
skipped: [N]

## Gaps

<!-- YAML format for plan-phase --gaps consumption -->
- truth: "[expected behavior from test]"
  status: failed
  reason: "User reported: [verbatim response]"
  severity: blocker | major | minor | cosmetic
  test: [N]
  root_cause: "" # Filled by diagnosis
  artifacts: [] # Filled by diagnosis
  missing: [] # Filled by diagnosis
  debug_session: "" # Filled by diagnosis
```

---

<section_rules>

**Frontmatter:**
- `status`: OVERWRITE - "testing", "complete", or "diagnosed"
- `phase`: IMMUTABLE - set on creation
- `source`: IMMUTABLE - SUMMARY files being tested
- `started`: IMMUTABLE - set on creation
- `updated`: OVERWRITE - update on every change

**Current Test:**
- OVERWRITE entirely on each test transition
- Shows which test is active and what's awaited
- On completion: "[testing complete]"

**Tests:**
- Each test: OVERWRITE result field when user responds
- `result` values: [pending], pass, issue, skipped
- If issue: add `reported` (verbatim) and `severity` (inferred)
- If skipped: add `reason` if provided

**Summary:**
- OVERWRITE counts after each response
- Tracks: total, passed, issues, pending, skipped

**Gaps:**
- APPEND only when issue found (YAML format)
- After diagnosis: fill `root_cause`, `artifacts`, `missing`, `debug_session`
- This section feeds directly into /gsd:plan-phase --gaps

</section_rules>

<diagnosis_lifecycle>

**After testing complete (status: complete), if gaps exist:**

1. User runs diagnosis (from verify-work offer or manually)
2. diagnose-issues workflow spawns parallel debug agents
3. Each agent investigates one gap, returns root cause
4. UAT.md Gaps section updated with diagnosis:
   - Each gap gets `root_cause`, `artifacts`, `missing`, `debug_session` filled
5. status → "diagnosed"
6. Ready for /gsd:plan-phase --gaps with root causes

**After diagnosis:**
```yaml
## Gaps

- truth: "Comment appears immediately after submission"
  status: failed
  reason: "User reported: works but doesn't show until I refresh the page"
  severity: major
  test: 2
  root_cause: "useEffect in CommentList.tsx missing commentCount dependency"
  artifacts:
    - path: "src/components/CommentList.tsx"
      issue: "useEffect missing dependency"
  missing:
    - "Add commentCount to useEffect dependency array"
  debug_session: ".planning/debug/comment-not-refreshing.md"
```

</diagnosis_lifecycle>

<lifecycle>

**Creation:** When /gsd:verify-work starts a new session
- Extract tests from SUMMARY.md files
- Set status to "testing"
- Current Test points to test 1
- All tests have result: [pending]

**During testing:**
- Present test from Current Test section
- User responds with pass confirmation or issue description
- Update test result (pass/issue/skipped)
- Update Summary counts
- If issue: append to Gaps section (YAML format), infer severity
- Move Current Test to next pending test

**On completion:**
- status → "complete"
- Current Test → "[testing complete]"
- Commit file
- Present summary with next steps

**Resume after /clear:**
1. Read frontmatter → know phase and status
2. Read Current Test → know where we are
3. Find first [pending] result → continue from there
4. Summary shows progress so far

</lifecycle>

<severity_guide>

Severity is INFERRED from the user's natural language, never asked.

| User describes | Infer |
|----------------|-------|
| Crash, error, exception, fails completely, unusable | blocker |
| Doesn't work, nothing happens, wrong behavior, missing | major |
| Works but..., slow, weird, minor, small issue | minor |
| Color, font, spacing, alignment, visual, looks off | cosmetic |

Default: **major** (safe default, user can clarify if wrong)
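
A crude first pass at that inference can be sketched as follows. The keyword lists are illustrative assumptions; real inference should weigh the whole sentence, not just keywords:

```shell
# Map a verbatim user report to a severity bucket. Unmatched
# reports fall through to the safe default, major.
infer_severity() {
  local report
  report=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  case "$report" in
    *crash*|*exception*|*unusable*|*"fails completely"*) echo "blocker" ;;
    *color*|*font*|*spacing*|*alignment*|*"looks off"*) echo "cosmetic" ;;
    *"works but"*|*slow*|*"small issue"*) echo "minor" ;;
    *) echo "major" ;;
  esac
}
```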
</severity_guide>

<good_example>
```markdown
---
status: diagnosed
phase: 04-comments
source: 04-01-SUMMARY.md, 04-02-SUMMARY.md
started: 2025-01-15T10:30:00Z
updated: 2025-01-15T10:45:00Z
---

## Current Test

[testing complete]

## Tests

### 1. View Comments on Post
expected: Comments section expands, shows count and comment list
result: pass

### 2. Create Top-Level Comment
expected: Submit comment via rich text editor, appears in list with author info
result: issue
reported: "works but doesn't show until I refresh the page"
severity: major

### 3. Reply to a Comment
expected: Click Reply, inline composer appears, submit shows nested reply
result: pass

### 4. Visual Nesting
expected: 3+ level thread shows indentation, left borders, caps at reasonable depth
result: pass

### 5. Delete Own Comment
expected: Click delete on own comment, removed or shows [deleted] if has replies
result: pass

### 6. Comment Count
expected: Post shows accurate count, increments when adding comment
result: pass

## Summary

total: 6
passed: 5
issues: 1
pending: 0
skipped: 0

## Gaps

- truth: "Comment appears immediately after submission in list"
  status: failed
  reason: "User reported: works but doesn't show until I refresh the page"
  severity: major
  test: 2
  root_cause: "useEffect in CommentList.tsx missing commentCount dependency"
  artifacts:
    - path: "src/components/CommentList.tsx"
      issue: "useEffect missing dependency"
  missing:
    - "Add commentCount to useEffect dependency array"
  debug_session: ".planning/debug/comment-not-refreshing.md"
```
</good_example>
100
get-shit-done/templates/UI-SPEC.md
Normal file
@@ -0,0 +1,100 @@
---
phase: {N}
slug: {phase-slug}
status: draft
shadcn_initialized: false
preset: none
created: {date}
---

# Phase {N} — UI Design Contract

> Visual and interaction contract for frontend phases. Generated by gsd-ui-researcher, verified by gsd-ui-checker.

---

## Design System

| Property | Value |
|----------|-------|
| Tool | {shadcn / none} |
| Preset | {preset string or "not applicable"} |
| Component library | {radix / base-ui / none} |
| Icon library | {library} |
| Font | {font} |

---

## Spacing Scale

Declared values (must be multiples of 4):

| Token | Value | Usage |
|-------|-------|-------|
| xs | 4px | Icon gaps, inline padding |
| sm | 8px | Compact element spacing |
| md | 16px | Default element spacing |
| lg | 24px | Section padding |
| xl | 32px | Layout gaps |
| 2xl | 48px | Major section breaks |
| 3xl | 64px | Page-level spacing |

Exceptions: {list any, or "none"}

---

## Typography

| Role | Size | Weight | Line Height |
|------|------|--------|-------------|
| Body | {px} | {weight} | {ratio} |
| Label | {px} | {weight} | {ratio} |
| Heading | {px} | {weight} | {ratio} |
| Display | {px} | {weight} | {ratio} |

---

## Color

| Role | Value | Usage |
|------|-------|-------|
| Dominant (60%) | {hex} | Background, surfaces |
| Secondary (30%) | {hex} | Cards, sidebar, nav |
| Accent (10%) | {hex} | {list specific elements only} |
| Destructive | {hex} | Destructive actions only |

Accent reserved for: {explicit list — never "all interactive elements"}

---

## Copywriting Contract

| Element | Copy |
|---------|------|
| Primary CTA | {specific verb + noun} |
| Empty state heading | {copy} |
| Empty state body | {copy + next step} |
| Error state | {problem + solution path} |
| Destructive confirmation | {action name}: {confirmation copy} |

---

## Registry Safety

| Registry | Blocks Used | Safety Gate |
|----------|-------------|-------------|
| shadcn official | {list} | not required |
| {third-party name} | {list} | shadcn view + diff required |

---

## Checker Sign-Off

- [ ] Dimension 1 Copywriting: PASS
- [ ] Dimension 2 Visuals: PASS
- [ ] Dimension 3 Color: PASS
- [ ] Dimension 4 Typography: PASS
- [ ] Dimension 5 Spacing: PASS
- [ ] Dimension 6 Registry Safety: PASS

**Approval:** {pending / approved YYYY-MM-DD}
76
get-shit-done/templates/VALIDATION.md
Normal file
@@ -0,0 +1,76 @@
---
phase: {N}
slug: {phase-slug}
status: draft
nyquist_compliant: false
wave_0_complete: false
created: {date}
---

# Phase {N} — Validation Strategy

> Per-phase validation contract for feedback sampling during execution.

---

## Test Infrastructure

| Property | Value |
|----------|-------|
| **Framework** | {pytest 7.x / jest 29.x / vitest / go test / other} |
| **Config file** | {path or "none — Wave 0 installs"} |
| **Quick run command** | `{quick command}` |
| **Full suite command** | `{full command}` |
| **Estimated runtime** | ~{N} seconds |

---

## Sampling Rate

- **After every task commit:** Run `{quick run command}`
- **After every plan wave:** Run `{full suite command}`
- **Before `/gsd:verify-work`:** Full suite must be green
- **Max feedback latency:** {N} seconds

---

## Per-Task Verification Map

| Task ID | Plan | Wave | Requirement | Test Type | Automated Command | File Exists | Status |
|---------|------|------|-------------|-----------|-------------------|-------------|--------|
| {N}-01-01 | 01 | 1 | REQ-{XX} | unit | `{command}` | ✅ / ❌ W0 | ⬜ pending |

*Status: ⬜ pending · ✅ green · ❌ red · ⚠️ flaky*

---

## Wave 0 Requirements

- [ ] `{tests/test_file.py}` — stubs for REQ-{XX}
- [ ] `{tests/conftest.py}` — shared fixtures
- [ ] `{framework install}` — if no framework detected

*If none: "Existing infrastructure covers all phase requirements."*

---

## Manual-Only Verifications

| Behavior | Requirement | Why Manual | Test Instructions |
|----------|-------------|------------|-------------------|
| {behavior} | REQ-{XX} | {reason} | {steps} |

*If none: "All phase behaviors have automated verification."*

---

## Validation Sign-Off

- [ ] All tasks have `<automated>` verify or Wave 0 dependencies
- [ ] Sampling continuity: no 3 consecutive tasks without automated verify
- [ ] Wave 0 covers all MISSING references
- [ ] No watch-mode flags
- [ ] Feedback latency < {N}s
- [ ] `nyquist_compliant: true` set in frontmatter

**Approval:** {pending / approved YYYY-MM-DD}
|
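The sampling-continuity item in the sign-off is machine-checkable: walk the ordered task list and track the run of tasks lacking automated verification. A hypothetical sketch (not the actual gsd-tools check):

```javascript
// Hypothetical sketch of the "no 3 consecutive tasks without automated
// verify" sign-off rule. `tasks` is an ordered list of { id, automated }.
function checkSamplingContinuity(tasks, maxGap = 2) {
  let gap = 0;
  for (const task of tasks) {
    gap = task.automated ? 0 : gap + 1;     // reset on any automated verify
    if (gap > maxGap) {
      return { ok: false, at: task.id };    // third unverified task in a row
    }
  }
  return { ok: true };
}
```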
105
get-shit-done/templates/claude-md.md
Normal file
@@ -0,0 +1,105 @@
# CLAUDE.md Template

Template for project-root `CLAUDE.md` — auto-generated by `gsd-tools generate-claude-md`.

Contains 5 marker-bounded sections. Each section is independently updatable.
The `generate-claude-md` subcommand manages 4 sections (project, stack, conventions, architecture).
The profile section is managed exclusively by `generate-claude-profile`.

---

## Section Templates

### Project Section
```
<!-- GSD:project-start source:PROJECT.md -->
## Project

{{project_content}}
<!-- GSD:project-end -->
```

**Fallback text:**
```
Project not yet initialized. Run /gsd:new-project to set up.
```

### Stack Section
```
<!-- GSD:stack-start source:STACK.md -->
## Technology Stack

{{stack_content}}
<!-- GSD:stack-end -->
```

**Fallback text:**
```
Technology stack not yet documented. Will populate after codebase mapping or first phase.
```

### Conventions Section
```
<!-- GSD:conventions-start source:CONVENTIONS.md -->
## Conventions

{{conventions_content}}
<!-- GSD:conventions-end -->
```

**Fallback text:**
```
Conventions not yet established. Will populate as patterns emerge during development.
```

### Architecture Section
```
<!-- GSD:architecture-start source:ARCHITECTURE.md -->
## Architecture

{{architecture_content}}
<!-- GSD:architecture-end -->
```

**Fallback text:**
```
Architecture not yet mapped. Follow existing patterns found in the codebase.
```

### Profile Section (Placeholder Only)
```
<!-- GSD:profile-start -->
## Developer Profile

> Profile not yet configured. Run `/gsd:profile-user` to generate your developer profile.
> This section is managed by `generate-claude-profile` — do not edit manually.
<!-- GSD:profile-end -->
```

**Note:** This section is NOT managed by `generate-claude-md`. It is managed exclusively
by `generate-claude-profile`. The placeholder above is only used when creating a new
CLAUDE.md file and no profile section exists yet.

---

## Section Ordering

1. **Project** — Identity and purpose (what this project is)
2. **Stack** — Technology choices (what tools are used)
3. **Conventions** — Code patterns and rules (how code is written)
4. **Architecture** — System structure (how components fit together)
5. **Profile** — Developer behavioral preferences (how to interact)

## Marker Format

- Start: `<!-- GSD:{name}-start source:{file} -->`
- End: `<!-- GSD:{name}-end -->`
- Source attribute enables targeted updates when source files change
- Partial match on start marker (without closing `-->`) for detection
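Given these markers, a section update is a bounded string splice: find the start marker by partial match, find the end marker exactly, and replace everything between. A minimal sketch (the function name and the append-on-missing fallback are assumptions, not the actual gsd-tools code):

```javascript
// Sketch of a marker-bounded section update. Illustrative only; the real
// generate-claude-md logic may differ.
function updateSection(doc, name, sourceFile, body) {
  const startPartial = `<!-- GSD:${name}-start`;     // partial match per spec
  const endMarker = `<!-- GSD:${name}-end -->`;
  const block = `<!-- GSD:${name}-start source:${sourceFile} -->\n${body}\n${endMarker}`;
  const startIdx = doc.indexOf(startPartial);
  const endIdx = doc.indexOf(endMarker);
  if (startIdx === -1 || endIdx === -1) {
    // Section missing: append it to the document (assumed behavior).
    return doc.replace(/\n?$/, '\n') + block + '\n';
  }
  return doc.slice(0, startIdx) + block + doc.slice(endIdx + endMarker.length);
}
```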

## Fallback Behavior

When a source file is missing, fallback text provides Claude-actionable guidance:
- Guides Claude's behavior in the absence of data
- Not placeholder filler or "missing" notices
- Each fallback tells Claude what to do, not just what's absent
255
get-shit-done/templates/codebase/architecture.md
Normal file
@@ -0,0 +1,255 @@
# Architecture Template

Template for `.planning/codebase/ARCHITECTURE.md` - captures conceptual code organization.

**Purpose:** Document how the code is organized at a conceptual level. Complements STRUCTURE.md (which shows physical file locations).

---

## File Template

```markdown
# Architecture

**Analysis Date:** [YYYY-MM-DD]

## Pattern Overview

**Overall:** [Pattern name: e.g., "Monolithic CLI", "Serverless API", "Full-stack MVC"]

**Key Characteristics:**
- [Characteristic 1: e.g., "Single executable"]
- [Characteristic 2: e.g., "Stateless request handling"]
- [Characteristic 3: e.g., "Event-driven"]

## Layers

[Describe the conceptual layers and their responsibilities]

**[Layer Name]:**
- Purpose: [What this layer does]
- Contains: [Types of code: e.g., "route handlers", "business logic"]
- Depends on: [What it uses: e.g., "data layer only"]
- Used by: [What uses it: e.g., "API routes"]

**[Layer Name]:**
- Purpose: [What this layer does]
- Contains: [Types of code]
- Depends on: [What it uses]
- Used by: [What uses it]

## Data Flow

[Describe the typical request/execution lifecycle]

**[Flow Name] (e.g., "HTTP Request", "CLI Command", "Event Processing"):**

1. [Entry point: e.g., "User runs command"]
2. [Processing step: e.g., "Router matches path"]
3. [Processing step: e.g., "Controller validates input"]
4. [Processing step: e.g., "Service executes logic"]
5. [Output: e.g., "Response returned"]

**State Management:**
- [How state is handled: e.g., "Stateless - no persistent state", "Database per request", "In-memory cache"]

## Key Abstractions

[Core concepts/patterns used throughout the codebase]

**[Abstraction Name]:**
- Purpose: [What it represents]
- Examples: [e.g., "UserService, ProjectService"]
- Pattern: [e.g., "Singleton", "Factory", "Repository"]

**[Abstraction Name]:**
- Purpose: [What it represents]
- Examples: [Concrete examples]
- Pattern: [Pattern used]

## Entry Points

[Where execution begins]

**[Entry Point]:**
- Location: [Brief: e.g., "src/index.ts", "API Gateway triggers"]
- Triggers: [What invokes it: e.g., "CLI invocation", "HTTP request"]
- Responsibilities: [What it does: e.g., "Parse args, route to command"]

## Error Handling

**Strategy:** [How errors are handled: e.g., "Exception bubbling to top-level handler", "Per-route error middleware"]

**Patterns:**
- [Pattern: e.g., "try/catch at controller level"]
- [Pattern: e.g., "Error codes returned to user"]

## Cross-Cutting Concerns

[Aspects that affect multiple layers]

**Logging:**
- [Approach: e.g., "Winston logger, injected per-request"]

**Validation:**
- [Approach: e.g., "Zod schemas at API boundary"]

**Authentication:**
- [Approach: e.g., "JWT middleware on protected routes"]

---

*Architecture analysis: [date]*
*Update when major patterns change*
```

<good_examples>
```markdown
# Architecture

**Analysis Date:** 2025-01-20

## Pattern Overview

**Overall:** CLI Application with Plugin System

**Key Characteristics:**
- Single executable with subcommands
- Plugin-based extensibility
- File-based state (no database)
- Synchronous execution model

## Layers

**Command Layer:**
- Purpose: Parse user input and route to appropriate handler
- Contains: Command definitions, argument parsing, help text
- Location: `src/commands/*.ts`
- Depends on: Service layer for business logic
- Used by: CLI entry point (`src/index.ts`)

**Service Layer:**
- Purpose: Core business logic
- Contains: FileService, TemplateService, InstallService
- Location: `src/services/*.ts`
- Depends on: File system utilities, external tools
- Used by: Command handlers

**Utility Layer:**
- Purpose: Shared helpers and abstractions
- Contains: File I/O wrappers, path resolution, string formatting
- Location: `src/utils/*.ts`
- Depends on: Node.js built-ins only
- Used by: Service layer

## Data Flow

**CLI Command Execution:**

1. User runs: `gsd new-project`
2. Commander parses args and flags
3. Command handler invoked (`src/commands/new-project.ts`)
4. Handler calls service methods (`src/services/project.ts` → `create()`)
5. Service reads templates, processes files, writes output
6. Results logged to console
7. Process exits with status code

**State Management:**
- File-based: All state lives in `.planning/` directory
- No persistent in-memory state
- Each command execution is independent

## Key Abstractions

**Service:**
- Purpose: Encapsulate business logic for a domain
- Examples: `src/services/file.ts`, `src/services/template.ts`, `src/services/project.ts`
- Pattern: Singleton-like (imported as modules, not instantiated)

**Command:**
- Purpose: CLI command definition
- Examples: `src/commands/new-project.ts`, `src/commands/plan-phase.ts`
- Pattern: Commander.js command registration

**Template:**
- Purpose: Reusable document structures
- Examples: PROJECT.md, PLAN.md templates
- Pattern: Markdown files with substitution variables

## Entry Points

**CLI Entry:**
- Location: `src/index.ts`
- Triggers: User runs `gsd <command>`
- Responsibilities: Register commands, parse args, display help

**Commands:**
- Location: `src/commands/*.ts`
- Triggers: Matched command from CLI
- Responsibilities: Validate input, call services, format output

## Error Handling

**Strategy:** Throw exceptions, catch at command level, log and exit

**Patterns:**
- Services throw Error with descriptive messages
- Command handlers catch, log error to stderr, exit(1)
- Validation errors shown before execution (fail fast)

## Cross-Cutting Concerns

**Logging:**
- Console.log for normal output
- Console.error for errors
- Chalk for colored output

**Validation:**
- Zod schemas for config file parsing
- Manual validation in command handlers
- Fail fast on invalid input

**File Operations:**
- FileService abstraction over fs-extra
- All paths validated before operations
- Atomic writes (temp file + rename)

---

*Architecture analysis: 2025-01-20*
*Update when major patterns change*
```
</good_examples>

<guidelines>
**What belongs in ARCHITECTURE.md:**
- Overall architectural pattern (monolith, microservices, layered, etc.)
- Conceptual layers and their relationships
- Data flow / request lifecycle
- Key abstractions and patterns
- Entry points
- Error handling strategy
- Cross-cutting concerns (logging, auth, validation)

**What does NOT belong here:**
- Exhaustive file listings (that's STRUCTURE.md)
- Technology choices (that's STACK.md)
- Line-by-line code walkthrough (defer to code reading)
- Implementation details of specific features

**File paths ARE welcome:**
Include file paths as concrete examples of abstractions. Use backtick formatting: `src/services/user.ts`. This makes the architecture document actionable for Claude when planning.

**When filling this template:**
- Read main entry points (index, server, main)
- Identify layers by reading imports/dependencies
- Trace a typical request/command execution
- Note recurring patterns (services, controllers, repositories)
- Keep descriptions conceptual, not mechanical

**Useful for phase planning when:**
- Adding new features (where does it fit in the layers?)
- Refactoring (understanding current patterns)
- Identifying where to add code (which layer handles X?)
- Understanding dependencies between components
</guidelines>
310
get-shit-done/templates/codebase/concerns.md
Normal file
@@ -0,0 +1,310 @@
# Codebase Concerns Template

Template for `.planning/codebase/CONCERNS.md` - captures known issues and areas requiring care.

**Purpose:** Surface actionable warnings about the codebase. Focused on "what to watch out for when making changes."

---

## File Template

```markdown
# Codebase Concerns

**Analysis Date:** [YYYY-MM-DD]

## Tech Debt

**[Area/Component]:**
- Issue: [What's the shortcut/workaround]
- Why: [Why it was done this way]
- Impact: [What breaks or degrades because of it]
- Fix approach: [How to properly address it]

**[Area/Component]:**
- Issue: [What's the shortcut/workaround]
- Why: [Why it was done this way]
- Impact: [What breaks or degrades because of it]
- Fix approach: [How to properly address it]

## Known Bugs

**[Bug description]:**
- Symptoms: [What happens]
- Trigger: [How to reproduce]
- Workaround: [Temporary mitigation if any]
- Root cause: [If known]
- Blocked by: [If waiting on something]

**[Bug description]:**
- Symptoms: [What happens]
- Trigger: [How to reproduce]
- Workaround: [Temporary mitigation if any]
- Root cause: [If known]

## Security Considerations

**[Area requiring security care]:**
- Risk: [What could go wrong]
- Current mitigation: [What's in place now]
- Recommendations: [What should be added]

**[Area requiring security care]:**
- Risk: [What could go wrong]
- Current mitigation: [What's in place now]
- Recommendations: [What should be added]

## Performance Bottlenecks

**[Slow operation/endpoint]:**
- Problem: [What's slow]
- Measurement: [Actual numbers: "500ms p95", "2s load time"]
- Cause: [Why it's slow]
- Improvement path: [How to speed it up]

**[Slow operation/endpoint]:**
- Problem: [What's slow]
- Measurement: [Actual numbers]
- Cause: [Why it's slow]
- Improvement path: [How to speed it up]

## Fragile Areas

**[Component/Module]:**
- Why fragile: [What makes it break easily]
- Common failures: [What typically goes wrong]
- Safe modification: [How to change it without breaking]
- Test coverage: [Is it tested? Gaps?]

**[Component/Module]:**
- Why fragile: [What makes it break easily]
- Common failures: [What typically goes wrong]
- Safe modification: [How to change it without breaking]
- Test coverage: [Is it tested? Gaps?]

## Scaling Limits

**[Resource/System]:**
- Current capacity: [Numbers: "100 req/sec", "10k users"]
- Limit: [Where it breaks]
- Symptoms at limit: [What happens]
- Scaling path: [How to increase capacity]

## Dependencies at Risk

**[Package/Service]:**
- Risk: [e.g., "deprecated", "unmaintained", "breaking changes coming"]
- Impact: [What breaks if it fails]
- Migration plan: [Alternative or upgrade path]

## Missing Critical Features

**[Feature gap]:**
- Problem: [What's missing]
- Current workaround: [How users cope]
- Blocks: [What can't be done without it]
- Implementation complexity: [Rough effort estimate]

## Test Coverage Gaps

**[Untested area]:**
- What's not tested: [Specific functionality]
- Risk: [What could break unnoticed]
- Priority: [High/Medium/Low]
- Difficulty to test: [Why it's not tested yet]

---

*Concerns audit: [date]*
*Update as issues are fixed or new ones discovered*
```

<good_examples>
```markdown
# Codebase Concerns

**Analysis Date:** 2025-01-20

## Tech Debt

**Database queries in React components:**
- Issue: Direct Supabase queries in 15+ page components instead of server actions
- Files: `app/dashboard/page.tsx`, `app/profile/page.tsx`, `app/courses/[id]/page.tsx`, `app/settings/page.tsx` (and 11 more in `app/`)
- Why: Rapid prototyping during MVP phase
- Impact: Can't implement RLS properly, exposes DB structure to client
- Fix approach: Move all queries to server actions in `app/actions/`, add proper RLS policies

**Manual webhook signature validation:**
- Issue: Copy-pasted Stripe webhook verification code in 3 different endpoints
- Files: `app/api/webhooks/stripe/route.ts`, `app/api/webhooks/checkout/route.ts`, `app/api/webhooks/subscription/route.ts`
- Why: Each webhook added ad-hoc without abstraction
- Impact: Easy to miss verification in new webhooks (security risk)
- Fix approach: Create shared `lib/stripe/validate-webhook.ts` middleware

## Known Bugs

**Race condition in subscription updates:**
- Symptoms: User shows as "free" tier for 5-10 seconds after successful payment
- Trigger: Fast navigation after Stripe checkout redirect, before webhook processes
- Files: `app/checkout/success/page.tsx` (redirect handler), `app/api/webhooks/stripe/route.ts` (webhook)
- Workaround: Stripe webhook eventually updates status (self-heals)
- Root cause: Webhook processing slower than user navigation, no optimistic UI update
- Fix: Add polling in `app/checkout/success/page.tsx` after redirect

**Inconsistent session state after logout:**
- Symptoms: User redirected to /dashboard after logout instead of /login
- Trigger: Logout via button in mobile nav (desktop works fine)
- File: `components/MobileNav.tsx` (line ~45, logout handler)
- Workaround: Manual URL navigation to /login works
- Root cause: Mobile nav component not awaiting supabase.auth.signOut()
- Fix: Add await to logout handler in `components/MobileNav.tsx`

## Security Considerations

**Admin role check client-side only:**
- Risk: Admin dashboard pages check isAdmin from Supabase client, no server verification
- Files: `app/admin/page.tsx`, `app/admin/users/page.tsx`, `components/AdminGuard.tsx`
- Current mitigation: None (relying on UI hiding)
- Recommendations: Add middleware to admin routes in `middleware.ts`, verify role server-side

**Unvalidated file uploads:**
- Risk: Users can upload any file type to avatar bucket (no size/type validation)
- File: `components/AvatarUpload.tsx` (upload handler)
- Current mitigation: Supabase bucket limits to 2MB (configured in dashboard)
- Recommendations: Add file type validation (image/* only) in `lib/storage/validate.ts`

## Performance Bottlenecks

**/api/courses endpoint:**
- Problem: Fetching all courses with nested lessons and authors
- File: `app/api/courses/route.ts`
- Measurement: 1.2s p95 response time with 50+ courses
- Cause: N+1 query pattern (separate query per course for lessons)
- Improvement path: Use Prisma include to eager-load lessons in `lib/db/courses.ts`, add Redis caching

**Dashboard initial load:**
- Problem: Waterfall of 5 serial API calls on mount
- File: `app/dashboard/page.tsx`
- Measurement: 3.5s until interactive on slow 3G
- Cause: Each component fetches own data independently
- Improvement path: Convert to Server Component with single parallel fetch

## Fragile Areas

**Authentication middleware chain:**
- File: `middleware.ts`
- Why fragile: 4 different middleware functions run in specific order (auth -> role -> subscription -> logging)
- Common failures: Middleware order change breaks everything, hard to debug
- Safe modification: Add tests before changing order, document dependencies in comments
- Test coverage: No integration tests for middleware chain (only unit tests)

**Stripe webhook event handling:**
- File: `app/api/webhooks/stripe/route.ts`
- Why fragile: Giant switch statement with 12 event types, shared transaction logic
- Common failures: New event type added without handling, partial DB updates on error
- Safe modification: Extract each event handler to `lib/stripe/handlers/*.ts`
- Test coverage: Only 3 of 12 event types have tests

## Scaling Limits

**Supabase Free Tier:**
- Current capacity: 500MB database, 1GB file storage, 2GB bandwidth/month
- Limit: ~5000 users estimated before hitting limits
- Symptoms at limit: 429 rate limit errors, DB writes fail
- Scaling path: Upgrade to Pro ($25/mo) extends to 8GB DB, 100GB storage

**Server-side render blocking:**
- Current capacity: ~50 concurrent users before slowdown
- Limit: Vercel Hobby plan (10s function timeout, 100GB-hrs/mo)
- Symptoms at limit: 504 gateway timeouts on course pages
- Scaling path: Upgrade to Vercel Pro ($20/mo), add edge caching

## Dependencies at Risk

**react-hot-toast:**
- Risk: Unmaintained (last update 18 months ago), React 19 compatibility unknown
- Impact: Toast notifications break, no graceful degradation
- Migration plan: Switch to sonner (actively maintained, similar API)

## Missing Critical Features

**Payment failure handling:**
- Problem: No retry mechanism or user notification when subscription payment fails
- Current workaround: Users manually re-enter payment info (if they notice)
- Blocks: Can't retain users with expired cards, no dunning process
- Implementation complexity: Medium (Stripe webhooks + email flow + UI)

**Course progress tracking:**
- Problem: No persistent state for which lessons completed
- Current workaround: Users manually track progress
- Blocks: Can't show completion percentage, can't recommend next lesson
- Implementation complexity: Low (add completed_lessons junction table)

## Test Coverage Gaps

**Payment flow end-to-end:**
- What's not tested: Full Stripe checkout -> webhook -> subscription activation flow
- Risk: Payment processing could break silently (has happened twice)
- Priority: High
- Difficulty to test: Need Stripe test fixtures and webhook simulation setup

**Error boundary behavior:**
- What's not tested: How app behaves when components throw errors
- Risk: White screen of death for users, no error reporting
- Priority: Medium
- Difficulty to test: Need to intentionally trigger errors in test environment

---

*Concerns audit: 2025-01-20*
*Update as issues are fixed or new ones discovered*
```
</good_examples>
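For ordering-fragile chains like the middleware example above, one cheap safeguard is to make the order an explicit data structure and assert on it. A hypothetical sketch (names taken from the example, not real code):

```javascript
// Hypothetical sketch: declare middleware order as data so a reordering is a
// visible diff and a trivially testable property.
const MIDDLEWARE_ORDER = ['auth', 'role', 'subscription', 'logging'];

function runChain(order, handlers, ctx) {
  for (const name of order) {
    handlers[name](ctx);   // each middleware mutates the shared context
  }
  return ctx;
}
```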

<guidelines>
**What belongs in CONCERNS.md:**
- Tech debt with clear impact and fix approach
- Known bugs with reproduction steps
- Security gaps and mitigation recommendations
- Performance bottlenecks with measurements
- Fragile code that breaks easily
- Scaling limits with numbers
- Dependencies that need attention
- Missing features that block workflows
- Test coverage gaps

**What does NOT belong here:**
- Opinions without evidence ("code is messy")
- Complaints without solutions ("auth sucks")
- Future feature ideas (that's for product planning)
- Normal TODOs (those live in code comments)
- Architectural decisions that are working fine
- Minor code style issues

**When filling this template:**
- **Always include file paths** - Concerns without locations are not actionable. Use backticks: `src/file.ts`
- Be specific with measurements ("500ms p95" not "slow")
- Include reproduction steps for bugs
- Suggest fix approaches, not just problems
- Focus on actionable items
- Prioritize by risk/impact
- Update as issues get resolved
- Add new concerns as discovered

**Tone guidelines:**
- Professional, not emotional ("N+1 query pattern" not "terrible queries")
- Solution-oriented ("Fix: add index" not "needs fixing")
- Risk-focused ("Could expose user data" not "security is bad")
- Factual ("3.5s load time" not "really slow")

**Useful for phase planning when:**
- Deciding what to work on next
- Estimating risk of changes
- Understanding where to be careful
- Prioritizing improvements
- Onboarding new Claude contexts
- Planning refactoring work

**How this gets populated:**
Explore agents detect these during codebase mapping. Manual additions welcome for human-discovered issues. This is living documentation, not a complaint list.
</guidelines>
307
get-shit-done/templates/codebase/conventions.md
Normal file
@@ -0,0 +1,307 @@
# Coding Conventions Template

Template for `.planning/codebase/CONVENTIONS.md` - captures coding style and patterns.

**Purpose:** Document how code is written in this codebase. Prescriptive guide for Claude to match existing style.

---

## File Template

```markdown
# Coding Conventions

**Analysis Date:** [YYYY-MM-DD]

## Naming Patterns

**Files:**
- [Pattern: e.g., "kebab-case for all files"]
- [Test files: e.g., "*.test.ts alongside source"]
- [Components: e.g., "PascalCase.tsx for React components"]

**Functions:**
- [Pattern: e.g., "camelCase for all functions"]
- [Async: e.g., "no special prefix for async functions"]
- [Handlers: e.g., "handleEventName for event handlers"]

**Variables:**
- [Pattern: e.g., "camelCase for variables"]
- [Constants: e.g., "UPPER_SNAKE_CASE for constants"]
- [Private: e.g., "_prefix for private members" or "no prefix"]

**Types:**
- [Interfaces: e.g., "PascalCase, no I prefix"]
- [Types: e.g., "PascalCase for type aliases"]
- [Enums: e.g., "PascalCase for enum name, UPPER_CASE for values"]

## Code Style

**Formatting:**
- [Tool: e.g., "Prettier with config in .prettierrc"]
- [Line length: e.g., "100 characters max"]
- [Quotes: e.g., "single quotes for strings"]
- [Semicolons: e.g., "required" or "omitted"]

**Linting:**
- [Tool: e.g., "ESLint with eslint.config.js"]
- [Rules: e.g., "extends airbnb-base, no console in production"]
- [Run: e.g., "npm run lint"]

## Import Organization

**Order:**
1. [e.g., "External packages (react, express, etc.)"]
2. [e.g., "Internal modules (@/lib, @/components)"]
3. [e.g., "Relative imports (., ..)"]
4. [e.g., "Type imports (import type {})"]

**Grouping:**
- [Blank lines: e.g., "blank line between groups"]
- [Sorting: e.g., "alphabetical within each group"]

**Path Aliases:**
- [Aliases used: e.g., "@/ for src/, @components/ for src/components/"]

## Error Handling

**Patterns:**
- [Strategy: e.g., "throw errors, catch at boundaries"]
- [Custom errors: e.g., "extend Error class, named *Error"]
- [Async: e.g., "use try/catch, no .catch() chains"]

**Error Types:**
- [When to throw: e.g., "invalid input, missing dependencies"]
- [When to return: e.g., "expected failures return Result<T, E>"]
- [Logging: e.g., "log error with context before throwing"]

## Logging

**Framework:**
- [Tool: e.g., "console.log, pino, winston"]
- [Levels: e.g., "debug, info, warn, error"]

**Patterns:**
- [Format: e.g., "structured logging with context object"]
- [When: e.g., "log state transitions, external calls"]
- [Where: e.g., "log at service boundaries, not in utils"]

## Comments

**When to Comment:**
- [e.g., "explain why, not what"]
- [e.g., "document business logic, algorithms, edge cases"]
- [e.g., "avoid obvious comments like // increment counter"]

**JSDoc/TSDoc:**
- [Usage: e.g., "required for public APIs, optional for internal"]
- [Format: e.g., "use @param, @returns, @throws tags"]

**TODO Comments:**
- [Pattern: e.g., "// TODO(username): description"]
- [Tracking: e.g., "link to issue number if available"]

## Function Design

**Size:**
- [e.g., "keep under 50 lines, extract helpers"]

**Parameters:**
- [e.g., "max 3 parameters, use object for more"]
- [e.g., "destructure objects in parameter list"]

**Return Values:**
- [e.g., "explicit returns, no implicit undefined"]
- [e.g., "return early for guard clauses"]

## Module Design

**Exports:**
- [e.g., "named exports preferred, default exports for React components"]
- [e.g., "export from index.ts for public API"]

**Barrel Files:**
- [e.g., "use index.ts to re-export public API"]
- [e.g., "avoid circular dependencies"]

---

*Convention analysis: [date]*
*Update when patterns change*
```
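The error-handling rows in the template above ("extend Error class, named *Error"; throw on invalid input) look like this in practice. A minimal sketch with hypothetical names:

```javascript
// Illustrative custom error following the "*Error" naming convention.
// Hypothetical names; not from a real codebase.
class ValidationError extends Error {
  constructor(message, field) {
    super(message);
    this.name = 'ValidationError';
    this.field = field;            // extra context for the boundary handler
  }
}

// Throw on invalid input; the caller catches at the boundary.
function parseAge(input) {
  const age = Number(input);
  if (!Number.isInteger(age) || age < 0) {
    throw new ValidationError(`invalid age: ${input}`, 'age');
  }
  return age;
}
```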

<good_examples>
```markdown
# Coding Conventions

**Analysis Date:** 2025-01-20

## Naming Patterns

**Files:**
- kebab-case for all files (command-handler.ts, user-service.ts)
- *.test.ts alongside source files
- index.ts for barrel exports

**Functions:**
- camelCase for all functions
- No special prefix for async functions
- handleEventName for event handlers (handleClick, handleSubmit)

**Variables:**
- camelCase for variables
- UPPER_SNAKE_CASE for constants (MAX_RETRIES, API_BASE_URL)
- No underscore prefix (no private marker in TS)

**Types:**
- PascalCase for interfaces, no I prefix (User, not IUser)
- PascalCase for type aliases (UserConfig, ResponseData)
- PascalCase for enum names, UPPER_CASE for values (Status.PENDING)

## Code Style

**Formatting:**
- Prettier with .prettierrc
- 100 character line length
- Single quotes for strings
- Semicolons required
- 2 space indentation

**Linting:**
- ESLint with eslint.config.js
- Extends @typescript-eslint/recommended
- No console.log in production code (use logger)
- Run: npm run lint

## Import Organization

**Order:**
1. External packages (react, express, commander)
2. Internal modules (@/lib, @/services)
3. Relative imports (./utils, ../types)
4. Type imports (import type { User })

**Grouping:**
- Blank line between groups
- Alphabetical within each group
- Type imports last within each group

**Path Aliases:**
- @/ maps to src/
- No other aliases defined

## Error Handling

**Patterns:**
- Throw errors, catch at boundaries (route handlers, main functions)
- Extend Error class for custom errors (ValidationError, NotFoundError)
- Async functions use try/catch, no .catch() chains
|
||||
|
||||
**Error Types:**
|
||||
- Throw on invalid input, missing dependencies, invariant violations
|
||||
- Log error with context before throwing: logger.error({ err, userId }, 'Failed to process')
|
||||
- Include cause in error message: new Error('Failed to X', { cause: originalError })
|
||||
|
||||
## Logging
|
||||
|
||||
**Framework:**
|
||||
- pino logger instance exported from lib/logger.ts
|
||||
- Levels: debug, info, warn, error (no trace)
|
||||
|
||||
**Patterns:**
|
||||
- Structured logging with context: logger.info({ userId, action }, 'User action')
|
||||
- Log at service boundaries, not in utility functions
|
||||
- Log state transitions, external API calls, errors
|
||||
- No console.log in committed code
|
||||
|
||||
## Comments
|
||||
|
||||
**When to Comment:**
|
||||
- Explain why, not what: // Retry 3 times because API has transient failures
|
||||
- Document business rules: // Users must verify email within 24 hours
|
||||
- Explain non-obvious algorithms or workarounds
|
||||
- Avoid obvious comments: // set count to 0
|
||||
|
||||
**JSDoc/TSDoc:**
|
||||
- Required for public API functions
|
||||
- Optional for internal functions if signature is self-explanatory
|
||||
- Use @param, @returns, @throws tags
|
||||
|
||||
**TODO Comments:**
|
||||
- Format: // TODO: description (no username, using git blame)
|
||||
- Link to issue if exists: // TODO: Fix race condition (issue #123)
|
||||
|
||||
## Function Design
|
||||
|
||||
**Size:**
|
||||
- Keep under 50 lines
|
||||
- Extract helpers for complex logic
|
||||
- One level of abstraction per function
|
||||
|
||||
**Parameters:**
|
||||
- Max 3 parameters
|
||||
- Use options object for 4+ parameters: function create(options: CreateOptions)
|
||||
- Destructure in parameter list: function process({ id, name }: ProcessParams)
|
||||
|
||||
**Return Values:**
|
||||
- Explicit return statements
|
||||
- Return early for guard clauses
|
||||
- Use Result<T, E> type for expected failures
|
||||
|
||||
## Module Design
|
||||
|
||||
**Exports:**
|
||||
- Named exports preferred
|
||||
- Default exports only for React components
|
||||
- Export public API from index.ts barrel files
|
||||
|
||||
**Barrel Files:**
|
||||
- index.ts re-exports public API
|
||||
- Keep internal helpers private (don't export from index)
|
||||
- Avoid circular dependencies (import from specific files if needed)
|
||||
|
||||
---
|
||||
|
||||
*Convention analysis: 2025-01-20*
|
||||
*Update when patterns change*
|
||||
```
|
||||
</good_examples>
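
Several of the conventions in the example above (custom error classes, Result<T, E>, guard clauses with early returns) compose naturally. A minimal sketch — the names `ValidationError` and `parseAge` are hypothetical, not part of the template:

```typescript
// Hypothetical illustration of the conventions above — not project code.

// Custom error extending Error (Error Handling section).
class ValidationError extends Error {
  constructor(message: string) {
    super(message);
    this.name = "ValidationError";
  }
}

// Result<T, E> for expected failures (Return Values section).
type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

// Guard clause with early return; explicit returns; well under 50 lines.
function parseAge(input: string): Result<number, ValidationError> {
  const age = Number(input);
  if (!Number.isInteger(age) || age < 0) {
    return { ok: false, error: new ValidationError(`Invalid age: ${input}`) };
  }
  return { ok: true, value: age };
}
```

A canonical snippet like this can sit alongside the bullet lists when prose alone leaves the convention ambiguous.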

<guidelines>
**What belongs in CONVENTIONS.md:**
- Naming patterns observed in the codebase
- Formatting rules (Prettier config, linting rules)
- Import organization patterns
- Error handling strategy
- Logging approach
- Comment conventions
- Function and module design patterns

**What does NOT belong here:**
- Architecture decisions (that's ARCHITECTURE.md)
- Technology choices (that's STACK.md)
- Test patterns (that's TESTING.md)
- File organization (that's STRUCTURE.md)

**When filling this template:**
- Check .prettierrc, .eslintrc, or similar config files
- Examine 5-10 representative source files for patterns
- Look for consistency: if 80%+ follows a pattern, document it
- Be prescriptive: "Use X" not "Sometimes Y is used"
- Note deviations: "Legacy code uses Y, new code should use X"
- Keep under ~150 lines total

**Useful for phase planning when:**
- Writing new code (match existing style)
- Adding features (follow naming patterns)
- Refactoring (apply consistent conventions)
- Code review (check against documented patterns)
- Onboarding (understand style expectations)

**Analysis approach:**
- Scan src/ directory for file naming patterns
- Check package.json scripts for lint/format commands
- Read 5-10 files to identify function naming, error handling
- Look for config files (.prettierrc, eslint.config.js)
- Note patterns in imports, comments, function signatures
</guidelines>
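
The analysis approach above can start from a small scripted scan. A sketch using Node's fs module — the throwaway tree and file names are fabricated for illustration; point `src` at a real project's src/ directory instead:

```typescript
import { mkdtempSync, mkdirSync, writeFileSync, readdirSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Build a fabricated sample tree so the sketch is self-contained.
const root = mkdtempSync(join(tmpdir(), "scan-"));
const src = join(root, "src");
mkdirSync(src);
for (const f of ["user-service.ts", "command-handler.ts", "user-service.test.ts"]) {
  writeFileSync(join(src, f), "");
}

// Naming: how many TypeScript files use kebab-case? A rough signal
// for the Naming Patterns section.
const files = readdirSync(src);
const kebab = files.filter((f) => f.endsWith(".ts") && f.includes("-"));
console.log(`kebab-case .ts files: ${kebab.length} of ${files.length}`);
```

The same loop extends to other signals (test-file suffixes, index.ts barrels) before reading individual files for error handling and comment style.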

280
get-shit-done/templates/codebase/integrations.md
Normal file
@@ -0,0 +1,280 @@

# External Integrations Template

Template for `.planning/codebase/INTEGRATIONS.md` - captures external service dependencies.

**Purpose:** Document what external systems this codebase communicates with. Focused on "what lives outside our code that we depend on."

---

## File Template

```markdown
# External Integrations

**Analysis Date:** [YYYY-MM-DD]

## APIs & External Services

**Payment Processing:**
- [Service] - [What it's used for: e.g., "subscription billing, one-time payments"]
- SDK/Client: [e.g., "stripe npm package v14.x"]
- Auth: [e.g., "API key in STRIPE_SECRET_KEY env var"]
- Endpoints used: [e.g., "checkout sessions, webhooks"]

**Email/SMS:**
- [Service] - [What it's used for: e.g., "transactional emails"]
- SDK/Client: [e.g., "@sendgrid/mail v8.x"]
- Auth: [e.g., "API key in SENDGRID_API_KEY env var"]
- Templates: [e.g., "managed in SendGrid dashboard"]

**External APIs:**
- [Service] - [What it's used for]
- Integration method: [e.g., "REST API via fetch", "GraphQL client"]
- Auth: [e.g., "OAuth2 token in AUTH_TOKEN env var"]
- Rate limits: [if applicable]

## Data Storage

**Databases:**
- [Type/Provider] - [e.g., "PostgreSQL on Supabase"]
- Connection: [e.g., "via DATABASE_URL env var"]
- Client: [e.g., "Prisma ORM v5.x"]
- Migrations: [e.g., "prisma migrate in migrations/"]

**File Storage:**
- [Service] - [e.g., "AWS S3 for user uploads"]
- SDK/Client: [e.g., "@aws-sdk/client-s3"]
- Auth: [e.g., "IAM credentials in AWS_* env vars"]
- Buckets: [e.g., "prod-uploads, dev-uploads"]

**Caching:**
- [Service] - [e.g., "Redis for session storage"]
- Connection: [e.g., "REDIS_URL env var"]
- Client: [e.g., "ioredis v5.x"]

## Authentication & Identity

**Auth Provider:**
- [Service] - [e.g., "Supabase Auth", "Auth0", "custom JWT"]
- Implementation: [e.g., "Supabase client SDK"]
- Token storage: [e.g., "httpOnly cookies", "localStorage"]
- Session management: [e.g., "JWT refresh tokens"]

**OAuth Integrations:**
- [Provider] - [e.g., "Google OAuth for sign-in"]
- Credentials: [e.g., "GOOGLE_CLIENT_ID, GOOGLE_CLIENT_SECRET"]
- Scopes: [e.g., "email, profile"]

## Monitoring & Observability

**Error Tracking:**
- [Service] - [e.g., "Sentry"]
- DSN: [e.g., "SENTRY_DSN env var"]
- Release tracking: [e.g., "via SENTRY_RELEASE"]

**Analytics:**
- [Service] - [e.g., "Mixpanel for product analytics"]
- Token: [e.g., "MIXPANEL_TOKEN env var"]
- Events tracked: [e.g., "user actions, page views"]

**Logs:**
- [Service] - [e.g., "CloudWatch", "Datadog", "none (stdout only)"]
- Integration: [e.g., "AWS Lambda built-in"]

## CI/CD & Deployment

**Hosting:**
- [Platform] - [e.g., "Vercel", "AWS Lambda", "Docker on ECS"]
- Deployment: [e.g., "automatic on main branch push"]
- Environment vars: [e.g., "configured in Vercel dashboard"]

**CI Pipeline:**
- [Service] - [e.g., "GitHub Actions"]
- Workflows: [e.g., "test.yml, deploy.yml"]
- Secrets: [e.g., "stored in GitHub repo secrets"]

## Environment Configuration

**Development:**
- Required env vars: [List critical vars]
- Secrets location: [e.g., ".env.local (gitignored)", "1Password vault"]
- Mock/stub services: [e.g., "Stripe test mode", "local PostgreSQL"]

**Staging:**
- Environment-specific differences: [e.g., "uses staging Stripe account"]
- Data: [e.g., "separate staging database"]

**Production:**
- Secrets management: [e.g., "Vercel environment variables"]
- Failover/redundancy: [e.g., "multi-region DB replication"]

## Webhooks & Callbacks

**Incoming:**
- [Service] - [Endpoint: e.g., "/api/webhooks/stripe"]
- Verification: [e.g., "signature validation via stripe.webhooks.constructEvent"]
- Events: [e.g., "payment_intent.succeeded, customer.subscription.updated"]

**Outgoing:**
- [Service] - [What triggers it]
- Endpoint: [e.g., "external CRM webhook on user signup"]
- Retry logic: [if applicable]

---

*Integration audit: [date]*
*Update when adding/removing external services*
```

<good_examples>
```markdown
# External Integrations

**Analysis Date:** 2025-01-20

## APIs & External Services

**Payment Processing:**
- Stripe - Subscription billing and one-time course payments
- SDK/Client: stripe npm package v14.8
- Auth: API key in STRIPE_SECRET_KEY env var
- Endpoints used: checkout sessions, customer portal, webhooks

**Email/SMS:**
- SendGrid - Transactional emails (receipts, password resets)
- SDK/Client: @sendgrid/mail v8.1
- Auth: API key in SENDGRID_API_KEY env var
- Templates: Managed in SendGrid dashboard (template IDs in code)

**External APIs:**
- OpenAI API - Course content generation
- Integration method: REST API via openai npm package v4.x
- Auth: Bearer token in OPENAI_API_KEY env var
- Rate limits: 3500 requests/min (tier 3)

## Data Storage

**Databases:**
- PostgreSQL on Supabase - Primary data store
- Connection: via DATABASE_URL env var
- Client: Prisma ORM v5.8
- Migrations: prisma migrate in prisma/migrations/

**File Storage:**
- Supabase Storage - User uploads (profile images, course materials)
- SDK/Client: @supabase/supabase-js v2.x
- Auth: Service role key in SUPABASE_SERVICE_ROLE_KEY
- Buckets: avatars (public), course-materials (private)

**Caching:**
- None currently (all database queries, no Redis)

## Authentication & Identity

**Auth Provider:**
- Supabase Auth - Email/password + OAuth
- Implementation: Supabase client SDK with server-side session management
- Token storage: httpOnly cookies via @supabase/ssr
- Session management: JWT refresh tokens handled by Supabase

**OAuth Integrations:**
- Google OAuth - Social sign-in
- Credentials: GOOGLE_CLIENT_ID, GOOGLE_CLIENT_SECRET (Supabase dashboard)
- Scopes: email, profile

## Monitoring & Observability

**Error Tracking:**
- Sentry - Server and client errors
- DSN: SENTRY_DSN env var
- Release tracking: Git commit SHA via SENTRY_RELEASE

**Analytics:**
- None (planned: Mixpanel)

**Logs:**
- Vercel logs - stdout/stderr only
- Retention: 7 days on Pro plan

## CI/CD & Deployment

**Hosting:**
- Vercel - Next.js app hosting
- Deployment: Automatic on main branch push
- Environment vars: Configured in Vercel dashboard (synced to .env.example)

**CI Pipeline:**
- GitHub Actions - Tests and type checking
- Workflows: .github/workflows/ci.yml
- Secrets: None needed (public repo tests only)

## Environment Configuration

**Development:**
- Required env vars: DATABASE_URL, NEXT_PUBLIC_SUPABASE_URL, NEXT_PUBLIC_SUPABASE_ANON_KEY
- Secrets location: .env.local (gitignored), team shared via 1Password vault
- Mock/stub services: Stripe test mode, Supabase local dev project

**Staging:**
- Uses separate Supabase staging project
- Stripe test mode
- Same Vercel account, different environment

**Production:**
- Secrets management: Vercel environment variables
- Database: Supabase production project with daily backups

## Webhooks & Callbacks

**Incoming:**
- Stripe - /api/webhooks/stripe
- Verification: Signature validation via stripe.webhooks.constructEvent
- Events: payment_intent.succeeded, customer.subscription.updated, customer.subscription.deleted

**Outgoing:**
- None

---

*Integration audit: 2025-01-20*
*Update when adding/removing external services*
```
</good_examples>
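
The incoming-webhook entry above leans on Stripe's stripe.webhooks.constructEvent helper. For services without an SDK helper, the underlying check is usually an HMAC comparison over the raw body. A generic sketch using Node's crypto module — the function name and secret value are hypothetical, and real providers (Stripe included) sign timestamped payloads, so prefer the vendor SDK when one exists:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify a raw webhook body against a hex-encoded HMAC-SHA256 signature.
function verifyWebhookSignature(body: string, signatureHex: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(body).digest();
  const received = Buffer.from(signatureHex, "hex");
  // timingSafeEqual throws on length mismatch, so guard first.
  return received.length === expected.length && timingSafeEqual(received, expected);
}
```

Verification must run against the raw request body, before any JSON parsing, since re-serialized JSON rarely matches byte-for-byte.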

<guidelines>
**What belongs in INTEGRATIONS.md:**
- External services the code communicates with
- Authentication patterns (where secrets live, not the secrets themselves)
- SDKs and client libraries used
- Environment variable names (not values)
- Webhook endpoints and verification methods
- Database connection patterns
- File storage locations
- Monitoring and logging services

**What does NOT belong here:**
- Actual API keys or secrets (NEVER write these)
- Internal architecture (that's ARCHITECTURE.md)
- Code patterns (that's PATTERNS.md)
- Technology choices (that's STACK.md)
- Performance issues (that's CONCERNS.md)

**When filling this template:**
- Check .env.example or .env.template for required env vars
- Look for SDK imports (stripe, @sendgrid/mail, etc.)
- Check for webhook handlers in routes/endpoints
- Note where secrets are managed (not the secrets)
- Document environment-specific differences (dev/staging/prod)
- Include auth patterns for each service

**Useful for phase planning when:**
- Adding new external service integrations
- Debugging authentication issues
- Understanding data flow outside the application
- Setting up new environments
- Auditing third-party dependencies
- Planning for service outages or migrations

**Security note:**
Document WHERE secrets live (env vars, Vercel dashboard, 1Password), never WHAT the secrets are.
</guidelines>

186
get-shit-done/templates/codebase/stack.md
Normal file
@@ -0,0 +1,186 @@

# Technology Stack Template

Template for `.planning/codebase/STACK.md` - captures the technology foundation.

**Purpose:** Document what technologies run this codebase. Focused on "what executes when you run the code."

---

## File Template

```markdown
# Technology Stack

**Analysis Date:** [YYYY-MM-DD]

## Languages

**Primary:**
- [Language] [Version] - [Where used: e.g., "all application code"]

**Secondary:**
- [Language] [Version] - [Where used: e.g., "build scripts, tooling"]

## Runtime

**Environment:**
- [Runtime] [Version] - [e.g., "Node.js 20.x"]
- [Additional requirements if any]

**Package Manager:**
- [Manager] [Version] - [e.g., "npm 10.x"]
- Lockfile: [e.g., "package-lock.json present"]

## Frameworks

**Core:**
- [Framework] [Version] - [Purpose: e.g., "web server", "UI framework"]

**Testing:**
- [Framework] [Version] - [e.g., "Jest for unit tests"]
- [Framework] [Version] - [e.g., "Playwright for E2E"]

**Build/Dev:**
- [Tool] [Version] - [e.g., "Vite for bundling"]
- [Tool] [Version] - [e.g., "TypeScript compiler"]

## Key Dependencies

[Only include dependencies critical to understanding the stack - limit to 5-10 most important]

**Critical:**
- [Package] [Version] - [Why it matters: e.g., "authentication", "database access"]
- [Package] [Version] - [Why it matters]

**Infrastructure:**
- [Package] [Version] - [e.g., "Express for HTTP routing"]
- [Package] [Version] - [e.g., "PostgreSQL client"]

## Configuration

**Environment:**
- [How configured: e.g., ".env files", "environment variables"]
- [Key configs: e.g., "DATABASE_URL, API_KEY required"]

**Build:**
- [Build config files: e.g., "vite.config.ts, tsconfig.json"]

## Platform Requirements

**Development:**
- [OS requirements or "any platform"]
- [Additional tooling: e.g., "Docker for local DB"]

**Production:**
- [Deployment target: e.g., "Vercel", "AWS Lambda", "Docker container"]
- [Version requirements]

---

*Stack analysis: [date]*
*Update after major dependency changes*
```

<good_examples>
```markdown
# Technology Stack

**Analysis Date:** 2025-01-20

## Languages

**Primary:**
- TypeScript 5.3 - All application code

**Secondary:**
- JavaScript - Build scripts, config files

## Runtime

**Environment:**
- Node.js 20.x (LTS)
- No browser runtime (CLI tool only)

**Package Manager:**
- npm 10.x
- Lockfile: `package-lock.json` present

## Frameworks

**Core:**
- None (vanilla Node.js CLI)

**Testing:**
- Vitest 1.0 - Unit tests
- tsx - TypeScript execution without build step

**Build/Dev:**
- TypeScript 5.3 - Compilation to JavaScript
- esbuild - Used by Vitest for fast transforms

## Key Dependencies

**Critical:**
- commander 11.x - CLI argument parsing and command structure
- chalk 5.x - Terminal output styling
- fs-extra 11.x - Extended file system operations

**Infrastructure:**
- Node.js built-ins - fs, path, child_process for file operations

## Configuration

**Environment:**
- No environment variables required
- Configuration via CLI flags only

**Build:**
- `tsconfig.json` - TypeScript compiler options
- `vitest.config.ts` - Test runner configuration

## Platform Requirements

**Development:**
- macOS/Linux/Windows (any platform with Node.js)
- No external dependencies

**Production:**
- Distributed as npm package
- Installed globally via npm install -g
- Runs on user's Node.js installation

---

*Stack analysis: 2025-01-20*
*Update after major dependency changes*
```
</good_examples>

<guidelines>
**What belongs in STACK.md:**
- Languages and versions
- Runtime requirements (Node, Bun, Deno, browser)
- Package manager and lockfile
- Framework choices
- Critical dependencies (limit to 5-10 most important)
- Build tooling
- Platform/deployment requirements

**What does NOT belong here:**
- File structure (that's STRUCTURE.md)
- Architectural patterns (that's ARCHITECTURE.md)
- Every dependency in package.json (only critical ones)
- Implementation details (defer to code)

**When filling this template:**
- Check package.json for dependencies
- Note runtime version from .nvmrc or package.json engines
- Include only dependencies that affect understanding (not every utility)
- Specify versions only when version matters (breaking changes, compatibility)

**Useful for phase planning when:**
- Adding new dependencies (check compatibility)
- Upgrading frameworks (know what's in use)
- Choosing implementation approach (must work with existing stack)
- Understanding build requirements
</guidelines>
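
Several template fields can be read straight off package.json. A sketch of that mapping — the `Manifest` shape is a simplification and the sample values are illustrative, not a real project's manifest:

```typescript
// Map the parts of package.json the STACK.md template asks about
// into summary lines. Simplified: a real manifest has more fields.
interface Manifest {
  engines?: { node?: string };
  packageManager?: string;
  dependencies?: Record<string, string>;
  devDependencies?: Record<string, string>;
}

function summarizeStack(pkg: Manifest): string[] {
  const lines: string[] = [];
  if (pkg.engines?.node) lines.push(`Runtime: Node.js ${pkg.engines.node}`);
  if (pkg.packageManager) lines.push(`Package manager: ${pkg.packageManager}`);
  if (pkg.devDependencies?.typescript) {
    lines.push(`Language: TypeScript ${pkg.devDependencies.typescript}`);
  }
  for (const [name, version] of Object.entries(pkg.dependencies ?? {})) {
    lines.push(`Dependency: ${name} ${version}`);
  }
  return lines;
}

// Illustrative sample manifest.
const sample: Manifest = {
  engines: { node: ">=20" },
  packageManager: "npm@10.2.0",
  dependencies: { commander: "^11.0.0" },
  devDependencies: { typescript: "^5.3.0" },
};
console.log(summarizeStack(sample).join("\n"));
```

The judgment call the template asks for — which dependencies are "critical" — still has to come from reading the code, not the manifest.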

285
get-shit-done/templates/codebase/structure.md
Normal file
@@ -0,0 +1,285 @@

# Structure Template

Template for `.planning/codebase/STRUCTURE.md` - captures physical file organization.

**Purpose:** Document where things physically live in the codebase. Answers "where do I put X?"

---

## File Template

```markdown
# Codebase Structure

**Analysis Date:** [YYYY-MM-DD]

## Directory Layout

[ASCII box-drawing tree of top-level directories with purpose - use ├── └── │ characters for tree structure only]

```
[project-root]/
├── [dir]/     # [Purpose]
├── [dir]/     # [Purpose]
├── [dir]/     # [Purpose]
└── [file]     # [Purpose]
```

## Directory Purposes

**[Directory Name]:**
- Purpose: [What lives here]
- Contains: [Types of files: e.g., "*.ts source files", "component directories"]
- Key files: [Important files in this directory]
- Subdirectories: [If nested, describe structure]

**[Directory Name]:**
- Purpose: [What lives here]
- Contains: [Types of files]
- Key files: [Important files]
- Subdirectories: [Structure]

## Key File Locations

**Entry Points:**
- [Path]: [Purpose: e.g., "CLI entry point"]
- [Path]: [Purpose: e.g., "Server startup"]

**Configuration:**
- [Path]: [Purpose: e.g., "TypeScript config"]
- [Path]: [Purpose: e.g., "Build configuration"]
- [Path]: [Purpose: e.g., "Environment variables"]

**Core Logic:**
- [Path]: [Purpose: e.g., "Business services"]
- [Path]: [Purpose: e.g., "Database models"]
- [Path]: [Purpose: e.g., "API routes"]

**Testing:**
- [Path]: [Purpose: e.g., "Unit tests"]
- [Path]: [Purpose: e.g., "Test fixtures"]

**Documentation:**
- [Path]: [Purpose: e.g., "User-facing docs"]
- [Path]: [Purpose: e.g., "Developer guide"]

## Naming Conventions

**Files:**
- [Pattern]: [Example: e.g., "kebab-case.ts for modules"]
- [Pattern]: [Example: e.g., "PascalCase.tsx for React components"]
- [Pattern]: [Example: e.g., "*.test.ts for test files"]

**Directories:**
- [Pattern]: [Example: e.g., "kebab-case for feature directories"]
- [Pattern]: [Example: e.g., "plural names for collections"]

**Special Patterns:**
- [Pattern]: [Example: e.g., "index.ts for directory exports"]
- [Pattern]: [Example: e.g., "__tests__ for test directories"]

## Where to Add New Code

**New Feature:**
- Primary code: [Directory path]
- Tests: [Directory path]
- Config if needed: [Directory path]

**New Component/Module:**
- Implementation: [Directory path]
- Types: [Directory path]
- Tests: [Directory path]

**New Route/Command:**
- Definition: [Directory path]
- Handler: [Directory path]
- Tests: [Directory path]

**Utilities:**
- Shared helpers: [Directory path]
- Type definitions: [Directory path]

## Special Directories

[Any directories with special meaning or generation]

**[Directory]:**
- Purpose: [e.g., "Generated code", "Build output"]
- Source: [e.g., "Auto-generated by X", "Build artifacts"]
- Committed: [Yes/No - in .gitignore?]

---

*Structure analysis: [date]*
*Update when directory structure changes*
```

<good_examples>
```markdown
# Codebase Structure

**Analysis Date:** 2025-01-20

## Directory Layout

```
get-shit-done/
├── bin/                # Executable entry points
├── commands/           # Slash command definitions
│   └── gsd/            # GSD-specific commands
├── get-shit-done/      # Skill resources
│   ├── references/     # Principle documents
│   ├── templates/      # File templates
│   └── workflows/      # Multi-step procedures
├── src/                # Source code (if applicable)
├── tests/              # Test files
├── package.json        # Project manifest
└── README.md           # User documentation
```

## Directory Purposes

**bin/**
- Purpose: CLI entry points
- Contains: install.js (installer script)
- Key files: install.js - handles npx installation
- Subdirectories: None

**commands/gsd/**
- Purpose: Slash command definitions for Claude Code
- Contains: *.md files (one per command)
- Key files: new-project.md, plan-phase.md, execute-plan.md
- Subdirectories: None (flat structure)

**get-shit-done/references/**
- Purpose: Core philosophy and guidance documents
- Contains: principles.md, questioning.md, plan-format.md
- Key files: principles.md - system philosophy
- Subdirectories: None

**get-shit-done/templates/**
- Purpose: Document templates for .planning/ files
- Contains: Template definitions with frontmatter
- Key files: project.md, roadmap.md, plan.md, summary.md
- Subdirectories: codebase/ (new - for stack/architecture/structure templates)

**get-shit-done/workflows/**
- Purpose: Reusable multi-step procedures
- Contains: Workflow definitions called by commands
- Key files: execute-plan.md, research-phase.md
- Subdirectories: None

## Key File Locations

**Entry Points:**
- `bin/install.js` - Installation script (npx entry)

**Configuration:**
- `package.json` - Project metadata, dependencies, bin entry
- `.gitignore` - Excluded files

**Core Logic:**
- `bin/install.js` - All installation logic (file copying, path replacement)

**Testing:**
- `tests/` - Test files (if present)

**Documentation:**
- `README.md` - User-facing installation and usage guide
- `CLAUDE.md` - Instructions for Claude Code when working in this repo

## Naming Conventions

**Files:**
- kebab-case.md: Markdown documents
- kebab-case.js: JavaScript source files
- UPPERCASE.md: Important project files (README, CLAUDE, CHANGELOG)

**Directories:**
- kebab-case: All directories
- Plural for collections: templates/, commands/, workflows/

**Special Patterns:**
- {command-name}.md: Slash command definition
- *-template.md: Could be used but templates/ directory preferred

## Where to Add New Code

**New Slash Command:**
- Primary code: `commands/gsd/{command-name}.md`
- Tests: `tests/commands/{command-name}.test.js` (if testing implemented)
- Documentation: Update `README.md` with new command

**New Template:**
- Implementation: `get-shit-done/templates/{name}.md`
- Documentation: Template is self-documenting (includes guidelines)

**New Workflow:**
- Implementation: `get-shit-done/workflows/{name}.md`
- Usage: Reference from command with `@C:/Users/yaoji/.claude/get-shit-done/workflows/{name}.md`

**New Reference Document:**
- Implementation: `get-shit-done/references/{name}.md`
- Usage: Reference from commands/workflows as needed

**Utilities:**
- No utilities yet (`install.js` is monolithic)
- If extracted: `src/utils/`

## Special Directories

**get-shit-done/**
- Purpose: Resources installed to C:/Users/yaoji/.claude/
- Source: Copied by bin/install.js during installation
- Committed: Yes (source of truth)

**commands/**
- Purpose: Slash commands installed to C:/Users/yaoji/.claude/commands/
- Source: Copied by bin/install.js during installation
- Committed: Yes (source of truth)

---

*Structure analysis: 2025-01-20*
*Update when directory structure changes*
```
</good_examples>
|
||||
|
||||
<guidelines>
|
||||
**What belongs in STRUCTURE.md:**
|
||||
- Directory layout (ASCII box-drawing tree for structure visualization)
|
||||
- Purpose of each directory
|
||||
- Key file locations (entry points, configs, core logic)
|
||||
- Naming conventions
|
||||
- Where to add new code (by type)
|
||||
- Special/generated directories
|
||||
|
||||
**What does NOT belong here:**
|
||||
- Conceptual architecture (that's ARCHITECTURE.md)
|
||||
- Technology stack (that's STACK.md)
|
||||
- Code implementation details (defer to code reading)
|
||||
- Every single file (focus on directories and key files)
|
||||
|
||||
**When filling this template:**
|
||||
- Use `tree -L 2` or similar to visualize structure
|
||||
- Identify top-level directories and their purposes
|
||||
- Note naming patterns by observing existing files
|
||||
- Locate entry points, configs, and main logic areas
|
||||
- Keep directory tree concise (max 2-3 levels)

**Tree format (ASCII box-drawing characters for structure only):**
```
root/
├── dir1/          # Purpose
│   ├── subdir/    # Purpose
│   └── file.ts    # Purpose
├── dir2/          # Purpose
└── file.ts        # Purpose
```

**Useful for phase planning when:**
- Adding new features (where should files go?)
- Understanding project organization
- Finding where specific logic lives
- Following existing conventions
</guidelines>
480
get-shit-done/templates/codebase/testing.md
Normal file
@@ -0,0 +1,480 @@
# Testing Patterns Template

Template for `.planning/codebase/TESTING.md` - captures test framework and patterns.

**Purpose:** Document how tests are written and run. Guide for adding tests that match existing patterns.

---

## File Template

```markdown
# Testing Patterns

**Analysis Date:** [YYYY-MM-DD]

## Test Framework

**Runner:**
- [Framework: e.g., "Jest 29.x", "Vitest 1.x"]
- [Config: e.g., "jest.config.js in project root"]

**Assertion Library:**
- [Library: e.g., "built-in expect", "chai"]
- [Matchers: e.g., "toBe, toEqual, toThrow"]

**Run Commands:**
```bash
[e.g., "npm test" or "npm run test"]        # Run all tests
[e.g., "npm test -- --watch"]               # Watch mode
[e.g., "npm test -- path/to/file.test.ts"]  # Single file
[e.g., "npm run test:coverage"]             # Coverage report
```

## Test File Organization

**Location:**
- [Pattern: e.g., "*.test.ts alongside source files"]
- [Alternative: e.g., "__tests__/ directory" or "separate tests/ tree"]

**Naming:**
- [Unit tests: e.g., "module-name.test.ts"]
- [Integration: e.g., "feature-name.integration.test.ts"]
- [E2E: e.g., "user-flow.e2e.test.ts"]

**Structure:**
```
[Show actual directory pattern, e.g.:
src/
  lib/
    utils.ts
    utils.test.ts
  services/
    user-service.ts
    user-service.test.ts
]
```

## Test Structure

**Suite Organization:**
```typescript
[Show actual pattern used, e.g.:

describe('ModuleName', () => {
  describe('functionName', () => {
    it('should handle success case', () => {
      // arrange
      // act
      // assert
    });

    it('should handle error case', () => {
      // test code
    });
  });
});
]
```

**Patterns:**
- [Setup: e.g., "beforeEach for shared setup, avoid beforeAll"]
- [Teardown: e.g., "afterEach to clean up, restore mocks"]
- [Structure: e.g., "arrange/act/assert pattern required"]

## Mocking

**Framework:**
- [Tool: e.g., "Jest built-in mocking", "Vitest vi", "Sinon"]
- [Import mocking: e.g., "vi.mock() at top of file"]

**Patterns:**
```typescript
[Show actual mocking pattern, e.g.:

// Mock external dependency
vi.mock('./external-service', () => ({
  fetchData: vi.fn()
}));

// Mock in test
const mockFetch = vi.mocked(fetchData);
mockFetch.mockResolvedValue({ data: 'test' });
]
```

**What to Mock:**
- [e.g., "External APIs, file system, database"]
- [e.g., "Time/dates (use vi.useFakeTimers)"]
- [e.g., "Network calls (use mock fetch)"]

**What NOT to Mock:**
- [e.g., "Pure functions, utilities"]
- [e.g., "Internal business logic"]

## Fixtures and Factories

**Test Data:**
```typescript
[Show pattern for creating test data, e.g.:

// Factory pattern
function createTestUser(overrides?: Partial<User>): User {
  return {
    id: 'test-id',
    name: 'Test User',
    email: 'test@example.com',
    ...overrides
  };
}

// Fixture file
// tests/fixtures/users.ts
export const mockUsers = [/* ... */];
]
```

**Location:**
- [e.g., "tests/fixtures/ for shared fixtures"]
- [e.g., "factory functions in test file or tests/factories/"]

## Coverage

**Requirements:**
- [Target: e.g., "80% line coverage", "no specific target"]
- [Enforcement: e.g., "CI blocks <80%", "coverage for awareness only"]

**Configuration:**
- [Tool: e.g., "built-in coverage via --coverage flag"]
- [Exclusions: e.g., "exclude *.test.ts, config files"]

**View Coverage:**
```bash
[e.g., "npm run test:coverage"]
[e.g., "open coverage/index.html"]
```

## Test Types

**Unit Tests:**
- [Scope: e.g., "test single function/class in isolation"]
- [Mocking: e.g., "mock all external dependencies"]
- [Speed: e.g., "must run in <1s per test"]

**Integration Tests:**
- [Scope: e.g., "test multiple modules together"]
- [Mocking: e.g., "mock external services, use real internal modules"]
- [Setup: e.g., "use test database, seed data"]

**E2E Tests:**
- [Framework: e.g., "Playwright for E2E"]
- [Scope: e.g., "test full user flows"]
- [Location: e.g., "e2e/ directory separate from unit tests"]

## Common Patterns

**Async Testing:**
```typescript
[Show pattern, e.g.:

it('should handle async operation', async () => {
  const result = await asyncFunction();
  expect(result).toBe('expected');
});
]
```

**Error Testing:**
```typescript
[Show pattern, e.g.:

it('should throw on invalid input', () => {
  expect(() => functionCall()).toThrow('error message');
});

// Async error
it('should reject on failure', async () => {
  await expect(asyncCall()).rejects.toThrow('error message');
});
]
```

**Snapshot Testing:**
- [Usage: e.g., "for React components only" or "not used"]
- [Location: e.g., "__snapshots__/ directory"]

---

*Testing analysis: [date]*
*Update when test patterns change*
```

<good_examples>
```markdown
# Testing Patterns

**Analysis Date:** 2025-01-20

## Test Framework

**Runner:**
- Vitest 1.0.4
- Config: vitest.config.ts in project root

**Assertion Library:**
- Vitest built-in expect
- Matchers: toBe, toEqual, toThrow, toMatchObject

**Run Commands:**
```bash
npm test                          # Run all tests
npm test -- --watch               # Watch mode
npm test -- path/to/file.test.ts  # Single file
npm run test:coverage             # Coverage report
```

## Test File Organization

**Location:**
- *.test.ts alongside source files
- No separate tests/ directory

**Naming:**
- unit-name.test.ts for all tests
- No distinction between unit/integration in filename

**Structure:**
```
src/
  lib/
    parser.ts
    parser.test.ts
  services/
    install-service.ts
    install-service.test.ts
  bin/
    install.ts
    (no test - integration tested via CLI)
```

## Test Structure

**Suite Organization:**
```typescript
import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';

describe('ModuleName', () => {
  describe('functionName', () => {
    beforeEach(() => {
      // reset state
    });

    it('should handle valid input', () => {
      // arrange
      const input = createTestInput();

      // act
      const result = functionName(input);

      // assert
      expect(result).toEqual(expectedOutput);
    });

    it('should throw on invalid input', () => {
      expect(() => functionName(null)).toThrow('Invalid input');
    });
  });
});
```

**Patterns:**
- Use beforeEach for per-test setup, avoid beforeAll
- Use afterEach to restore mocks: vi.restoreAllMocks()
- Explicit arrange/act/assert comments in complex tests
- One assertion focus per test (but multiple expects OK)

## Mocking

**Framework:**
- Vitest built-in mocking (vi)
- Module mocking via vi.mock() at top of test file

**Patterns:**
```typescript
import { vi } from 'vitest';
import { externalFunction } from './external';

// Mock module
vi.mock('./external', () => ({
  externalFunction: vi.fn()
}));

describe('test suite', () => {
  it('mocks function', () => {
    const mockFn = vi.mocked(externalFunction);
    mockFn.mockReturnValue('mocked result');

    // test code using mocked function

    expect(mockFn).toHaveBeenCalledWith('expected arg');
  });
});
```

**What to Mock:**
- File system operations (fs-extra)
- Child process execution (child_process.exec)
- External API calls
- Environment variables (process.env)

**What NOT to Mock:**
- Internal pure functions
- Simple utilities (string manipulation, array helpers)
- TypeScript types

## Fixtures and Factories

**Test Data:**
```typescript
// Factory functions in test file
function createTestConfig(overrides?: Partial<Config>): Config {
  return {
    targetDir: '/tmp/test',
    global: false,
    ...overrides
  };
}

// Shared fixtures in tests/fixtures/
// tests/fixtures/sample-command.md
export const sampleCommand = `---
description: Test command
---
Content here`;
```

**Location:**
- Factory functions: define in test file near usage
- Shared fixtures: tests/fixtures/ (for multi-file test data)
- Mock data: inline in test when simple, factory when complex

## Coverage

**Requirements:**
- No enforced coverage target
- Coverage tracked for awareness
- Focus on critical paths (parsers, service logic)

**Configuration:**
- Vitest coverage via c8 (built-in)
- Excludes: *.test.ts, bin/install.ts, config files

**View Coverage:**
```bash
npm run test:coverage
open coverage/index.html
```

## Test Types

**Unit Tests:**
- Test single function in isolation
- Mock all external dependencies (fs, child_process)
- Fast: each test <100ms
- Examples: parser.test.ts, validator.test.ts

**Integration Tests:**
- Test multiple modules together
- Mock only external boundaries (file system, process)
- Examples: install-service.test.ts (tests service + parser)

**E2E Tests:**
- Not currently used
- CLI integration tested manually

## Common Patterns

**Async Testing:**
```typescript
it('should handle async operation', async () => {
  const result = await asyncFunction();
  expect(result).toBe('expected');
});
```

**Error Testing:**
```typescript
it('should throw on invalid input', () => {
  expect(() => parse(null)).toThrow('Cannot parse null');
});

// Async error
it('should reject on file not found', async () => {
  await expect(readConfig('invalid.txt')).rejects.toThrow('ENOENT');
});
```

**File System Mocking:**
```typescript
import { vi } from 'vitest';
import * as fs from 'fs-extra';

vi.mock('fs-extra');

it('mocks file system', () => {
  vi.mocked(fs.readFile).mockResolvedValue('file content');
  // test code
});
```

**Snapshot Testing:**
- Not used in this codebase
- Prefer explicit assertions for clarity

---

*Testing analysis: 2025-01-20*
*Update when test patterns change*
```
</good_examples>

<guidelines>
**What belongs in TESTING.md:**
- Test framework and runner configuration
- Test file location and naming patterns
- Test structure (describe/it, beforeEach patterns)
- Mocking approach and examples
- Fixture/factory patterns
- Coverage requirements
- How to run tests (commands)
- Common testing patterns in actual code

**What does NOT belong here:**
- Specific test cases (defer to actual test files)
- Technology choices (that's STACK.md)
- CI/CD setup (that's deployment docs)

**When filling this template:**
- Check package.json scripts for test commands
- Find test config file (jest.config.js, vitest.config.ts)
- Read 3-5 existing test files to identify patterns
- Look for test utilities in tests/ or test-utils/
- Check for coverage configuration
- Document actual patterns used, not ideal patterns

**Useful for phase planning when:**
- Adding new features (write matching tests)
- Refactoring (maintain test patterns)
- Fixing bugs (add regression tests)
- Understanding verification approach
- Setting up test infrastructure

**Analysis approach:**
- Check package.json for test framework and scripts
- Read test config file for coverage, setup
- Examine test file organization (collocated vs separate)
- Review 5 test files for patterns (mocking, structure, assertions)
- Look for test utilities, fixtures, factories
- Note any test types (unit, integration, e2e)
- Document commands for running tests
</guidelines>
40
get-shit-done/templates/config.json
Normal file
@@ -0,0 +1,40 @@
{
  "mode": "interactive",
  "granularity": "standard",
  "workflow": {
    "research": true,
    "plan_check": true,
    "verifier": true,
    "auto_advance": false,
    "nyquist_validation": true
  },
  "planning": {
    "commit_docs": true,
    "search_gitignored": false
  },
  "parallelization": {
    "enabled": true,
    "plan_level": true,
    "task_level": false,
    "skip_checkpoints": true,
    "max_concurrent_agents": 3,
    "min_plans_for_parallel": 2
  },
  "gates": {
    "confirm_project": true,
    "confirm_phases": true,
    "confirm_roadmap": true,
    "confirm_breakdown": true,
    "confirm_plan": true,
    "execute_next_plan": true,
    "issues_review": true,
    "confirm_transition": true
  },
  "safety": {
    "always_confirm_destructive": true,
    "always_confirm_external_services": true
  },
  "hooks": {
    "context_warnings": true
  }
}
352
get-shit-done/templates/context.md
Normal file
@@ -0,0 +1,352 @@
# Phase Context Template

Template for `.planning/phases/XX-name/{phase_num}-CONTEXT.md` - captures implementation decisions for a phase.

**Purpose:** Document decisions that downstream agents need. Researcher uses this to know WHAT to investigate. Planner uses this to know WHAT choices are locked vs flexible.

**Key principle:** Categories are NOT predefined. They emerge from what was actually discussed for THIS phase. A CLI phase has CLI-relevant sections, a UI phase has UI-relevant sections.

**Downstream consumers:**
- `gsd-phase-researcher` — Reads decisions to focus research (e.g., "card layout" → research card component patterns)
- `gsd-planner` — Reads decisions to create specific tasks (e.g., "infinite scroll" → task includes virtualization)

---

## File Template

```markdown
# Phase [X]: [Name] - Context

**Gathered:** [date]
**Status:** Ready for planning

<domain>
## Phase Boundary

[Clear statement of what this phase delivers — the scope anchor. This comes from ROADMAP.md and is fixed. Discussion clarifies implementation within this boundary.]

</domain>

<decisions>
## Implementation Decisions

### [Area 1 that was discussed]
- [Specific decision made]
- [Another decision if applicable]

### [Area 2 that was discussed]
- [Specific decision made]

### [Area 3 that was discussed]
- [Specific decision made]

### Claude's Discretion
[Areas where user explicitly said "you decide" — Claude has flexibility here during planning/implementation]

</decisions>

<specifics>
## Specific Ideas

[Any particular references, examples, or "I want it like X" moments from discussion. Product references, specific behaviors, interaction patterns.]

[If none: "No specific requirements — open to standard approaches"]

</specifics>

<canonical_refs>
## Canonical References

**Downstream agents MUST read these before planning or implementing.**

[List every spec, ADR, feature doc, or design doc that defines requirements or constraints for this phase. Use full relative paths so agents can read them directly. Group by topic area when the phase has multiple concerns.]

### [Topic area 1]
- `path/to/spec-or-adr.md` — [What this doc decides/defines that's relevant]
- `path/to/doc.md` §N — [Specific section and what it covers]

### [Topic area 2]
- `path/to/feature-doc.md` — [What capability this defines]

[If the project has no external specs: "No external specs — requirements are fully captured in decisions above"]

</canonical_refs>

<code_context>
## Existing Code Insights

### Reusable Assets
- [Component/hook/utility]: [How it could be used in this phase]

### Established Patterns
- [Pattern]: [How it constrains/enables this phase]

### Integration Points
- [Where new code connects to existing system]

</code_context>

<deferred>
## Deferred Ideas

[Ideas that came up during discussion but belong in other phases. Captured here so they're not lost, but explicitly out of scope for this phase.]

[If none: "None — discussion stayed within phase scope"]

</deferred>

---

*Phase: XX-name*
*Context gathered: [date]*
```

<good_examples>

**Example 1: Visual feature (Post Feed)**

```markdown
# Phase 3: Post Feed - Context

**Gathered:** 2025-01-20
**Status:** Ready for planning

<domain>
## Phase Boundary

Display posts from followed users in a scrollable feed. Users can view posts and see engagement counts. Creating posts and interactions are separate phases.

</domain>

<decisions>
## Implementation Decisions

### Layout style
- Card-based layout, not timeline or list
- Each card shows: author avatar, name, timestamp, full post content, reaction counts
- Cards have subtle shadows, rounded corners — modern feel

### Loading behavior
- Infinite scroll, not pagination
- Pull-to-refresh on mobile
- New posts indicator at top ("3 new posts") rather than auto-inserting

### Empty state
- Friendly illustration + "Follow people to see posts here"
- Suggest 3-5 accounts to follow based on interests

### Claude's Discretion
- Loading skeleton design
- Exact spacing and typography
- Error state handling

</decisions>

<canonical_refs>
## Canonical References

### Feed display
- `docs/features/social-feed.md` — Feed requirements, post card fields, engagement display rules
- `docs/decisions/adr-012-infinite-scroll.md` — Scroll strategy decision, virtualization requirements

### Empty states
- `docs/design/empty-states.md` — Empty state patterns, illustration guidelines

</canonical_refs>

<specifics>
## Specific Ideas

- "I like how Twitter shows the new posts indicator without disrupting your scroll position"
- Cards should feel like Linear's issue cards — clean, not cluttered

</specifics>

<deferred>
## Deferred Ideas

- Commenting on posts — Phase 5
- Bookmarking posts — add to backlog

</deferred>

---

*Phase: 03-post-feed*
*Context gathered: 2025-01-20*
```

**Example 2: CLI tool (Database backup)**

```markdown
# Phase 2: Backup Command - Context

**Gathered:** 2025-01-20
**Status:** Ready for planning

<domain>
## Phase Boundary

CLI command to backup database to local file or S3. Supports full and incremental backups. Restore command is a separate phase.

</domain>

<decisions>
## Implementation Decisions

### Output format
- JSON for programmatic use, table format for humans
- Default to table, --json flag for JSON
- Verbose mode (-v) shows progress, silent by default

### Flag design
- Short flags for common options: -o (output), -v (verbose), -f (force)
- Long flags for clarity: --incremental, --compress, --encrypt
- Required: database connection string (positional or --db)

### Error recovery
- Retry 3 times on network failure, then fail with clear message
- --no-retry flag to fail fast
- Partial backups are deleted on failure (no corrupt files)

### Claude's Discretion
- Exact progress bar implementation
- Compression algorithm choice
- Temp file handling

</decisions>

<canonical_refs>
## Canonical References

### Backup CLI
- `docs/features/backup-restore.md` — Backup requirements, supported backends, encryption spec
- `docs/decisions/adr-007-cli-conventions.md` — Flag naming, exit codes, output format standards

</canonical_refs>

<specifics>
## Specific Ideas

- "I want it to feel like pg_dump — familiar to database people"
- Should work in CI pipelines (exit codes, no interactive prompts)

</specifics>

<deferred>
## Deferred Ideas

- Scheduled backups — separate phase
- Backup rotation/retention — add to backlog

</deferred>

---

*Phase: 02-backup-command*
*Context gathered: 2025-01-20*
```

**Example 3: Organization task (Photo library)**

```markdown
# Phase 1: Photo Organization - Context

**Gathered:** 2025-01-20
**Status:** Ready for planning

<domain>
## Phase Boundary

Organize existing photo library into structured folders. Handle duplicates and apply consistent naming. Tagging and search are separate phases.

</domain>

<decisions>
## Implementation Decisions

### Grouping criteria
- Primary grouping by year, then by month
- Events detected by time clustering (photos within 2 hours = same event)
- Event folders named by date + location if available

### Duplicate handling
- Keep highest resolution version
- Move duplicates to _duplicates folder (don't delete)
- Log all duplicate decisions for review

### Naming convention
- Format: YYYY-MM-DD_HH-MM-SS_originalname.ext
- Preserve original filename as suffix for searchability
- Handle name collisions with incrementing suffix

### Claude's Discretion
- Exact clustering algorithm
- How to handle photos with no EXIF data
- Folder emoji usage

</decisions>

<canonical_refs>
## Canonical References

### Organization rules
- `docs/features/photo-organization.md` — Grouping rules, duplicate policy, naming spec
- `docs/decisions/adr-003-exif-handling.md` — EXIF extraction strategy, fallback for missing metadata

</canonical_refs>

<specifics>
## Specific Ideas

- "I want to be able to find photos by roughly when they were taken"
- Don't delete anything — worst case, move to a review folder

</specifics>

<deferred>
## Deferred Ideas

- Face detection grouping — future phase
- Cloud sync — out of scope for now

</deferred>

---

*Phase: 01-photo-organization*
*Context gathered: 2025-01-20*
```

</good_examples>

<guidelines>
**This template captures DECISIONS for downstream agents.**

The output should answer: "What does the researcher need to investigate? What choices are locked for the planner?"

**Good content (concrete decisions):**
- "Card-based layout, not timeline"
- "Retry 3 times on network failure, then fail"
- "Group by year, then by month"
- "JSON for programmatic use, table for humans"

**Bad content (too vague):**
- "Should feel modern and clean"
- "Good user experience"
- "Fast and responsive"
- "Easy to use"

**After creation:**
- File lives in phase directory: `.planning/phases/XX-name/{phase_num}-CONTEXT.md`
- `gsd-phase-researcher` uses decisions to focus investigation AND reads canonical_refs to know WHAT docs to study
- `gsd-planner` uses decisions + research to create executable tasks AND reads canonical_refs to verify alignment
- Downstream agents should NOT need to ask the user again about captured decisions

**CRITICAL — Canonical references:**
- The `<canonical_refs>` section is MANDATORY. Every CONTEXT.md must have one.
- If your project has external specs, ADRs, or design docs, list them with full relative paths grouped by topic
- If ROADMAP.md lists `Canonical refs:` per phase, extract and expand those
- Inline mentions like "see ADR-019" scattered in decisions are useless to downstream agents — they need full paths and section references in a dedicated section they can find
- If no external specs exist, say so explicitly — don't silently omit the section
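Since the section is mandatory, a reviewer or hook could sketch the check like this. This is a hypothetical helper for illustration, not part of GSD's shipped tooling:

```javascript
// Verify a CONTEXT.md contains a non-empty <canonical_refs> section.
function hasCanonicalRefs(contextMd) {
  const match = contextMd.match(/<canonical_refs>([\s\S]*?)<\/canonical_refs>/);
  return Boolean(match && match[1].trim().length > 0);
}
```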
</guidelines>
78
get-shit-done/templates/continue-here.md
Normal file
@@ -0,0 +1,78 @@
# Continue-Here Template

Copy and fill this structure for `.planning/phases/XX-name/.continue-here.md`:

```yaml
---
phase: XX-name
task: 3
total_tasks: 7
status: in_progress
last_updated: 2025-01-15T14:30:00Z
---
```

```markdown
<current_state>
[Where exactly are we? What's the immediate context?]
</current_state>

<completed_work>
[What got done this session - be specific]

- Task 1: [name] - Done
- Task 2: [name] - Done
- Task 3: [name] - In progress, [what's done on it]
</completed_work>

<remaining_work>
[What's left in this phase]

- Task 3: [name] - [what's left to do]
- Task 4: [name] - Not started
- Task 5: [name] - Not started
</remaining_work>

<decisions_made>
[Key decisions and why - so next session doesn't re-debate]

- Decided to use [X] because [reason]
- Chose [approach] over [alternative] because [reason]
</decisions_made>

<blockers>
[Anything stuck or waiting on external factors]

- [Blocker 1]: [status/workaround]
</blockers>

<context>
[Mental state, "vibe", anything that helps resume smoothly]

[What were you thinking about? What was the plan?
This is the "pick up exactly where you left off" context.]
</context>

<next_action>
[The very first thing to do when resuming]

Start with: [specific action]
</next_action>
```

<yaml_fields>
Required YAML frontmatter:

- `phase`: Directory name (e.g., `02-authentication`)
- `task`: Current task number
- `total_tasks`: How many tasks in phase
- `status`: `in_progress`, `blocked`, `almost_done`
- `last_updated`: ISO timestamp
</yaml_fields>
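Reading these fields back can be sketched with a minimal parser. Assumption: GSD's own tooling may parse this differently; the `parseFrontmatter` helper here handles only flat `key: value` pairs, not full YAML:

```javascript
// Minimal frontmatter reader for .continue-here.md.
// Handles only flat `key: value` pairs, not full YAML.
function parseFrontmatter(text) {
  const match = text.match(/^---\n([\s\S]*?)\n---/);
  if (!match) return null;
  const fields = {};
  for (const line of match[1].split('\n')) {
    const idx = line.indexOf(':');
    if (idx === -1) continue;
    fields[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  return fields;
}
```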

<guidelines>
- Be specific enough that a fresh Claude instance understands immediately
- Include WHY decisions were made, not just what
- The `<next_action>` should be actionable without reading anything else
- This file gets DELETED after resume - it's not permanent storage
</guidelines>
7
get-shit-done/templates/copilot-instructions.md
Normal file
@@ -0,0 +1,7 @@
# Instructions for GSD

- Use the get-shit-done skill when the user asks for GSD or uses a `gsd-*` command.
- Treat `/gsd-...` or `gsd-...` as command invocations and load the matching file from `.github/skills/gsd-*`.
- When a command says to spawn a subagent, prefer a matching custom agent from `.github/agents`.
- Do not apply GSD workflows unless the user explicitly asks for them.
- After completing any `gsd-*` command (or any deliverable it triggers: feature, bug fix, tests, docs, etc.), ALWAYS offer the user the next step by prompting via `ask_user`, and repeat this feedback loop until the user explicitly indicates they are done.
91
get-shit-done/templates/debug-subagent-prompt.md
Normal file
@@ -0,0 +1,91 @@
# Debug Subagent Prompt Template

Template for spawning the gsd-debugger agent. The agent contains all debugging expertise - this template provides problem context only.

---

## Template

```markdown
<objective>
Investigate issue: {issue_id}

**Summary:** {issue_summary}
</objective>

<symptoms>
expected: {expected}
actual: {actual}
errors: {errors}
reproduction: {reproduction}
timeline: {timeline}
</symptoms>

<mode>
symptoms_prefilled: {true_or_false}
goal: {find_root_cause_only | find_and_fix}
</mode>

<debug_file>
Create: .planning/debug/{slug}.md
</debug_file>
```

---

## Placeholders

| Placeholder | Source | Example |
|-------------|--------|---------|
| `{issue_id}` | Orchestrator-assigned | `auth-screen-dark` |
| `{issue_summary}` | User description | `Auth screen is too dark` |
| `{expected}` | From symptoms | `See logo clearly` |
| `{actual}` | From symptoms | `Screen is dark` |
| `{errors}` | From symptoms | `None in console` |
| `{reproduction}` | From symptoms | `Open /auth page` |
| `{timeline}` | From symptoms | `After recent deploy` |
| `{goal}` | Orchestrator sets | `find_and_fix` |
| `{slug}` | Generated | `auth-screen-dark` |
|
||||
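
The `{slug}` placeholder comes from the `generate-slug` helper in `gsd-tools.cjs`. A minimal sketch of that normalization (the actual implementation may differ):

```javascript
// Convert free text ("Auth screen is too dark!") into a URL-safe slug.
function generateSlug(text) {
  return text
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-') // collapse runs of non-alphanumerics to one dash
    .replace(/^-+|-+$/g, '');    // trim leading/trailing dashes
}
```
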

---

## Usage

**From /gsd:debug:**
```python
Task(
    prompt=filled_template,
    subagent_type="gsd-debugger",
    description="Debug {slug}"
)
```

**From diagnose-issues (UAT):**
```python
Task(prompt=template, subagent_type="gsd-debugger", description="Debug UAT-001")
```

---

## Continuation

For checkpoints, spawn a fresh agent with:

```markdown
<objective>
Continue debugging {slug}. Evidence is in the debug file.
</objective>

<prior_state>
Debug file: @.planning/debug/{slug}.md
</prior_state>

<checkpoint_response>
**Type:** {checkpoint_type}
**Response:** {user_response}
</checkpoint_response>

<mode>
goal: {goal}
</mode>
```

21	get-shit-done/templates/dev-preferences.md	Normal file
@@ -0,0 +1,21 @@

---
description: Load developer preferences into this session
---

# Developer Preferences

> Generated by GSD on {{generated_at}} from {{data_source}}.
> Run `/gsd:profile-user --refresh` to regenerate.

## Behavioral Directives

Follow these directives when working with this developer. Higher confidence
directives should be applied directly. Lower confidence directives should be
tried with hedging ("Based on your profile, I'll try X -- let me know if
that's off").

{{behavioral_directives}}

## Stack Preferences

{{stack_preferences}}

146	get-shit-done/templates/discovery.md	Normal file
@@ -0,0 +1,146 @@

# Discovery Template

Template for `.planning/phases/XX-name/DISCOVERY.md` - shallow research for library/option decisions.

**Purpose:** Answer "which library/option should we use" questions during mandatory discovery in plan-phase.

For deep ecosystem research ("how do experts build this"), use `/gsd:research-phase`, which produces RESEARCH.md.

---

## File Template

```markdown
---
phase: XX-name
type: discovery
topic: [discovery-topic]
---

<session_initialization>
Before beginning discovery, verify today's date:
!`date +%Y-%m-%d`

Use this date when searching for "current" or "latest" information.
Example: If today is 2025-11-22, search for "2025" not "2024".
</session_initialization>

<discovery_objective>
Discover [topic] to inform [phase name] implementation.

Purpose: [What decision/implementation this enables]
Scope: [Boundaries]
Output: DISCOVERY.md with recommendation
</discovery_objective>

<discovery_scope>
<include>
- [Question to answer]
- [Area to investigate]
- [Specific comparison if needed]
</include>

<exclude>
- [Out of scope for this discovery]
- [Defer to implementation phase]
</exclude>
</discovery_scope>

<discovery_protocol>

**Source Priority:**
1. **Context7 MCP** - For library/framework documentation (current, authoritative)
2. **Official Docs** - For platform-specific or non-indexed libraries
3. **WebSearch** - For comparisons, trends, community patterns (verify all findings)

**Quality Checklist:**
Before completing discovery, verify:
- [ ] All claims have authoritative sources (Context7 or official docs)
- [ ] Negative claims ("X is not possible") verified with official documentation
- [ ] API syntax/configuration from Context7 or official docs (never WebSearch alone)
- [ ] WebSearch findings cross-checked with authoritative sources
- [ ] Recent updates/changelogs checked for breaking changes
- [ ] Alternative approaches considered (not just first solution found)

**Confidence Levels:**
- HIGH: Context7 or official docs confirm
- MEDIUM: WebSearch + Context7/official docs confirm
- LOW: WebSearch only or training knowledge only (mark for validation)

</discovery_protocol>

<output_structure>
Create `.planning/phases/XX-name/DISCOVERY.md`:

```markdown
# [Topic] Discovery

## Summary
[2-3 paragraph executive summary - what was researched, what was found, what's recommended]

## Primary Recommendation
[What to do and why - be specific and actionable]

## Alternatives Considered
[What else was evaluated and why not chosen]

## Key Findings

### [Category 1]
- [Finding with source URL and relevance to our case]

### [Category 2]
- [Finding with source URL and relevance]

## Code Examples
[Relevant implementation patterns, if applicable]

## Metadata

<metadata>
<confidence level="high|medium|low">
[Why this confidence level - based on source quality and verification]
</confidence>

<sources>
- [Primary authoritative sources used]
</sources>

<open_questions>
[What couldn't be determined or needs validation during implementation]
</open_questions>

<validation_checkpoints>
[If confidence is LOW or MEDIUM, list specific things to verify during implementation]
</validation_checkpoints>
</metadata>
```
</output_structure>

<success_criteria>
- All scope questions answered with authoritative sources
- Quality checklist items completed
- Clear primary recommendation
- Low-confidence findings marked with validation checkpoints
- Ready to inform PLAN.md creation
</success_criteria>

<guidelines>
**When to use discovery:**
- Technology choice unclear (library A vs B)
- Best practices needed for unfamiliar integration
- API/library investigation required
- Single decision pending

**When NOT to use:**
- Established patterns (CRUD, auth with known library)
- Implementation details (defer to execution)
- Questions answerable from existing project context

**When to use RESEARCH.md instead:**
- Niche/complex domains (3D, games, audio, shaders)
- Need ecosystem knowledge, not just library choice
- "How do experts build this" questions
- Use `/gsd:research-phase` for these
</guidelines>

123	get-shit-done/templates/milestone-archive.md	Normal file
@@ -0,0 +1,123 @@

# Milestone Archive Template

This template is used by the complete-milestone workflow to create archive files in `.planning/milestones/`.

---

## File Template

# Milestone v{{VERSION}}: {{MILESTONE_NAME}}

**Status:** ✅ SHIPPED {{DATE}}
**Phases:** {{PHASE_START}}-{{PHASE_END}}
**Total Plans:** {{TOTAL_PLANS}}

## Overview

{{MILESTONE_DESCRIPTION}}

## Phases

{{PHASES_SECTION}}

[For each phase in this milestone, include:]

### Phase {{PHASE_NUM}}: {{PHASE_NAME}}

**Goal**: {{PHASE_GOAL}}
**Depends on**: {{DEPENDS_ON}}
**Plans**: {{PLAN_COUNT}} plans

Plans:

- [x] {{PHASE}}-01: {{PLAN_DESCRIPTION}}
- [x] {{PHASE}}-02: {{PLAN_DESCRIPTION}}
[... all plans ...]

**Details:**
{{PHASE_DETAILS_FROM_ROADMAP}}

**For decimal phases, include the (INSERTED) marker:**

### Phase 2.1: Critical Security Patch (INSERTED)

**Goal**: Fix authentication bypass vulnerability
**Depends on**: Phase 2
**Plans**: 1 plan

Plans:

- [x] 02.1-01: Patch auth vulnerability

**Details:**
{{PHASE_DETAILS_FROM_ROADMAP}}

---

## Milestone Summary

**Decimal Phases:**

- Phase 2.1: Critical Security Patch (inserted after Phase 2 for urgent fix)
- Phase 5.1: Performance Hotfix (inserted after Phase 5 for production issue)

**Key Decisions:**
{{DECISIONS_FROM_PROJECT_STATE}}
[Example:]

- Decision: Use ROADMAP.md split (Rationale: Constant context cost)
- Decision: Decimal phase numbering (Rationale: Clear insertion semantics)

**Issues Resolved:**
{{ISSUES_RESOLVED_DURING_MILESTONE}}
[Example:]

- Fixed context overflow at 100+ phases
- Resolved phase insertion confusion

**Issues Deferred:**
{{ISSUES_DEFERRED_TO_LATER}}
[Example:]

- PROJECT-STATE.md tiering (deferred until decisions > 300)

**Technical Debt Incurred:**
{{SHORTCUTS_NEEDING_FUTURE_WORK}}
[Example:]

- Some workflows still have hardcoded paths (fix in Phase 5)

---

_For current project status, see .planning/ROADMAP.md_

---

## Usage Guidelines

<guidelines>
**When to create milestone archives:**
- After completing all phases in a milestone (v1.0, v1.1, v2.0, etc.)
- Triggered by the complete-milestone workflow
- Before planning next milestone work

**How to fill the template:**

- Replace {{PLACEHOLDERS}} with actual values
- Extract phase details from ROADMAP.md
- Document decimal phases with the (INSERTED) marker
- Include key decisions from PROJECT-STATE.md or SUMMARY files
- List issues resolved vs deferred
- Capture technical debt for future reference

**Archive location:**

- Save to `.planning/milestones/v{VERSION}-{NAME}.md`
- Example: `.planning/milestones/v1.0-mvp.md`

**After archiving:**

- Update ROADMAP.md to collapse the completed milestone in a `<details>` tag
- Update PROJECT.md to brownfield format with a Current State section
- Continue phase numbering in the next milestone (never restart at 01)
</guidelines>

115	get-shit-done/templates/milestone.md	Normal file
@@ -0,0 +1,115 @@

# Milestone Entry Template

Add this entry to `.planning/MILESTONES.md` when completing a milestone:

```markdown
## v[X.Y] [Name] (Shipped: YYYY-MM-DD)

**Delivered:** [One sentence describing what shipped]

**Phases completed:** [X-Y] ([Z] plans total)

**Key accomplishments:**
- [Major achievement 1]
- [Major achievement 2]
- [Major achievement 3]
- [Major achievement 4]

**Stats:**
- [X] files created/modified
- [Y] lines of code (primary language)
- [Z] phases, [N] plans, [M] tasks
- [D] days from start to ship (or milestone to milestone)

**Git range:** `feat(XX-XX)` → `feat(YY-YY)`

**What's next:** [Brief description of next milestone goals, or "Project complete"]

---
```

<structure>
If MILESTONES.md doesn't exist, create it with this header:

```markdown
# Project Milestones: [Project Name]

[Entries in reverse chronological order - newest first]
```
</structure>

<guidelines>
**When to create milestones:**
- Initial v1.0 MVP shipped
- Major version releases (v2.0, v3.0)
- Significant feature milestones (v1.1, v1.2)
- Before archiving planning (capture what was shipped)

**Don't create milestones for:**
- Individual phase completions (normal workflow)
- Work in progress (wait until shipped)
- Minor bug fixes that don't constitute a release

**Stats to include:**
- Count modified files: `git diff --stat "feat(XX-XX)".."feat(YY-YY)" | tail -1` (quote the refs - unquoted parentheses are a shell syntax error)
- Count LOC: `find . -name "*.swift" -o -name "*.ts" | xargs wc -l` (or the relevant extension)
- Phase/plan/task counts from ROADMAP
- Timeline from first phase commit to last phase commit

**Git range format:**
- First commit of milestone → last commit of milestone
- Example: `feat(01-01)` → `feat(04-01)` for phases 1-4
</guidelines>

<example>
```markdown
# Project Milestones: WeatherBar

## v1.1 Security & Polish (Shipped: 2025-12-10)

**Delivered:** Security hardening with Keychain integration and comprehensive error handling

**Phases completed:** 5-6 (3 plans total)

**Key accomplishments:**
- Migrated API key storage from plaintext to macOS Keychain
- Implemented comprehensive error handling for network failures
- Added Sentry crash reporting integration
- Fixed memory leak in auto-refresh timer

**Stats:**
- 23 files modified
- 650 lines of Swift added
- 2 phases, 3 plans, 12 tasks
- 8 days from v1.0 to v1.1

**Git range:** `feat(05-01)` → `feat(06-02)`

**What's next:** v2.0 SwiftUI redesign with widget support

---

## v1.0 MVP (Shipped: 2025-11-25)

**Delivered:** Menu bar weather app with current conditions and 3-day forecast

**Phases completed:** 1-4 (7 plans total)

**Key accomplishments:**
- Menu bar app with popover UI (AppKit)
- OpenWeather API integration with auto-refresh
- Current weather display with conditions icon
- 3-day forecast list with high/low temperatures
- Code signed and notarized for distribution

**Stats:**
- 47 files created
- 2,450 lines of Swift
- 4 phases, 7 plans, 28 tasks
- 12 days from start to ship

**Git range:** `feat(01-01)` → `feat(04-01)`

**What's next:** Security audit and hardening for v1.1
```
</example>

610	get-shit-done/templates/phase-prompt.md	Normal file
@@ -0,0 +1,610 @@

# Phase Prompt Template

> **Note:** Planning methodology is in `agents/gsd-planner.md`.
> This template defines the PLAN.md output format that the agent produces.

Template for `.planning/phases/XX-name/{phase}-{plan}-PLAN.md` - executable phase plans optimized for parallel execution.

**Naming:** Use `{phase}-{plan}-PLAN.md` format (e.g., `01-02-PLAN.md` for Phase 1, Plan 2)

---

## File Template

```markdown
---
phase: XX-name
plan: NN
type: execute
wave: N # Execution wave (1, 2, 3...). Pre-computed at plan time.
depends_on: [] # Plan IDs this plan requires (e.g., ["01-01"]).
files_modified: [] # Files this plan modifies.
autonomous: true # false if plan has checkpoints requiring user interaction
requirements: [] # REQUIRED — Requirement IDs from ROADMAP this plan addresses. MUST NOT be empty.
user_setup: [] # Human-required setup Claude cannot automate (see below)

# Goal-backward verification (derived during planning, verified after execution)
must_haves:
  truths: [] # Observable behaviors that must be true for goal achievement
  artifacts: [] # Files that must exist with real implementation
  key_links: [] # Critical connections between artifacts
---

<objective>
[What this plan accomplishes]

Purpose: [Why this matters for the project]
Output: [What artifacts will be created]
</objective>

<execution_context>
@C:/Users/yaoji/.claude/get-shit-done/workflows/execute-plan.md
@C:/Users/yaoji/.claude/get-shit-done/templates/summary.md
[If plan contains checkpoint tasks (type="checkpoint:*"), add:]
@C:/Users/yaoji/.claude/get-shit-done/references/checkpoints.md
</execution_context>

<context>
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/STATE.md

# Only reference prior plan SUMMARYs if genuinely needed:
# - This plan uses types/exports from prior plan
# - Prior plan made decision that affects this plan
# Do NOT reflexively chain: Plan 02 refs 01, Plan 03 refs 02...

[Relevant source files:]
@src/path/to/relevant.ts
</context>

<tasks>

<task type="auto">
<name>Task 1: [Action-oriented name]</name>
<files>path/to/file.ext, another/file.ext</files>
<read_first>path/to/reference.ext, path/to/source-of-truth.ext</read_first>
<action>[Specific implementation - what to do, how to do it, what to avoid and WHY. Include CONCRETE values: exact identifiers, parameters, expected outputs, file paths, command arguments. Never say "align X with Y" without specifying the exact target state.]</action>
<verify>[Command or check to prove it worked]</verify>
<acceptance_criteria>
- [Grep-verifiable condition: "file.ext contains 'exact string'"]
- [Measurable condition: "output.ext uses 'expected-value', NOT 'wrong-value'"]
</acceptance_criteria>
<done>[Measurable acceptance criteria]</done>
</task>

<task type="auto">
<name>Task 2: [Action-oriented name]</name>
<files>path/to/file.ext</files>
<read_first>path/to/reference.ext</read_first>
<action>[Specific implementation with concrete values]</action>
<verify>[Command or check]</verify>
<acceptance_criteria>
- [Grep-verifiable condition]
</acceptance_criteria>
<done>[Acceptance criteria]</done>
</task>

<!-- For checkpoint task examples and patterns, see @C:/Users/yaoji/.claude/get-shit-done/references/checkpoints.md -->

<task type="checkpoint:decision" gate="blocking">
<decision>[What needs deciding]</decision>
<context>[Why this decision matters]</context>
<options>
<option id="option-a"><name>[Name]</name><pros>[Benefits]</pros><cons>[Tradeoffs]</cons></option>
<option id="option-b"><name>[Name]</name><pros>[Benefits]</pros><cons>[Tradeoffs]</cons></option>
</options>
<resume-signal>Select: option-a or option-b</resume-signal>
</task>

<task type="checkpoint:human-verify" gate="blocking">
<what-built>[What Claude built] - server running at [URL]</what-built>
<how-to-verify>Visit [URL] and verify: [visual checks only, NO CLI commands]</how-to-verify>
<resume-signal>Type "approved" or describe issues</resume-signal>
</task>

</tasks>

<verification>
Before declaring plan complete:
- [ ] [Specific test command]
- [ ] [Build/type check passes]
- [ ] [Behavior verification]
</verification>

<success_criteria>
- All tasks completed
- All verification checks pass
- No errors or warnings introduced
- [Plan-specific criteria]
</success_criteria>

<output>
After completion, create `.planning/phases/XX-name/{phase}-{plan}-SUMMARY.md`
</output>
```

---

## Frontmatter Fields

| Field | Required | Purpose |
|-------|----------|---------|
| `phase` | Yes | Phase identifier (e.g., `01-foundation`) |
| `plan` | Yes | Plan number within phase (e.g., `01`, `02`) |
| `type` | Yes | Always `execute` for standard plans, `tdd` for TDD plans |
| `wave` | Yes | Execution wave number (1, 2, 3...). Pre-computed at plan time. |
| `depends_on` | Yes | Array of plan IDs this plan requires. |
| `files_modified` | Yes | Files this plan touches. |
| `autonomous` | Yes | `true` if no checkpoints, `false` if has checkpoints |
| `requirements` | Yes | **MUST** list requirement IDs from ROADMAP. Every roadmap requirement MUST appear in at least one plan. |
| `user_setup` | No | Array of human-required setup items (external services) |
| `must_haves` | Yes | Goal-backward verification criteria (see below) |

**Wave is pre-computed:** Wave numbers are assigned during `/gsd:plan-phase`. Execute-phase reads `wave` directly from frontmatter and groups plans by wave number. No runtime dependency analysis needed.

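The grouping step above can be sketched in Node (the language of `gsd-tools.cjs`). This is an illustrative sketch, not the actual gsd-tools code; `plans` is assumed to be an array of parsed frontmatter objects:

```javascript
// Group parsed plan frontmatter by pre-computed wave number.
// Each plan object is assumed to carry the `wave` field from its PLAN.md.
function groupByWave(plans) {
  const waves = new Map();
  for (const plan of plans) {
    if (!waves.has(plan.wave)) waves.set(plan.wave, []);
    waves.get(plan.wave).push(plan);
  }
  // Return waves in ascending order: [[1, [...]], [2, [...]], ...]
  return [...waves.entries()].sort(([a], [b]) => a - b);
}
```

Because waves are fixed at plan time, this is a pure lookup: no topological sort happens at execution.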

**Must-haves enable verification:** The `must_haves` field carries goal-backward requirements from planning to execution. After all plans complete, execute-phase spawns a verification subagent that checks these criteria against the actual codebase.

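The rule that every roadmap requirement must appear in at least one plan's `requirements` array reduces to a set difference. A minimal sketch under assumed shapes (hypothetical helper, not the real gsd-tools implementation):

```javascript
// Return roadmap requirement IDs not covered by any plan's `requirements`.
function uncoveredRequirements(roadmapIds, plans) {
  const covered = new Set(plans.flatMap((p) => p.requirements || []));
  return roadmapIds.filter((id) => !covered.has(id));
}
```

An empty result means the plans fully cover the roadmap; anything else should fail planning.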
---

## Parallel vs Sequential

<parallel_examples>

**Wave 1 candidates (parallel):**

```yaml
# Plan 01 - User feature
wave: 1
depends_on: []
files_modified: [src/models/user.ts, src/api/users.ts]
autonomous: true

# Plan 02 - Product feature (no overlap with Plan 01)
wave: 1
depends_on: []
files_modified: [src/models/product.ts, src/api/products.ts]
autonomous: true

# Plan 03 - Order feature (no overlap)
wave: 1
depends_on: []
files_modified: [src/models/order.ts, src/api/orders.ts]
autonomous: true
```

All three run in parallel (Wave 1) - no dependencies, no file conflicts.

**Sequential (genuine dependency):**

```yaml
# Plan 01 - Auth foundation
wave: 1
depends_on: []
files_modified: [src/lib/auth.ts, src/middleware/auth.ts]
autonomous: true

# Plan 02 - Protected features (needs auth)
wave: 2
depends_on: ["01"]
files_modified: [src/features/dashboard.ts]
autonomous: true
```

Plan 02 in Wave 2 waits for Plan 01 in Wave 1 - genuine dependency on auth types/middleware.

**Checkpoint plan:**

```yaml
# Plan 03 - UI with verification
wave: 3
depends_on: ["01", "02"]
files_modified: [src/components/Dashboard.tsx]
autonomous: false # Has checkpoint:human-verify
```

Wave 3 runs after Waves 1 and 2. Pauses at checkpoint, orchestrator presents to user, resumes on approval.

</parallel_examples>

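"No file conflicts" within a wave is mechanically checkable from the frontmatter alone. A hedged sketch of that check (assumed plan shape, not the real scheduler):

```javascript
// Detect file conflicts among plans scheduled in the same wave.
// Returns a [file, planA, planB] tuple for each collision.
function waveFileConflicts(plans) {
  const conflicts = [];
  const seen = new Map(); // "wave:file" -> plan id that claimed it first
  for (const plan of plans) {
    for (const file of plan.files_modified || []) {
      const key = `${plan.wave}:${file}`;
      if (seen.has(key)) conflicts.push([file, seen.get(key), plan.plan]);
      else seen.set(key, plan.plan);
    }
  }
  return conflicts;
}
```

Plans in different waves may touch the same file; only same-wave overlap is a scheduling error.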

---

## Context Section

**Parallel-aware context:**

```markdown
<context>
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/STATE.md

# Only include SUMMARY refs if genuinely needed:
# - This plan imports types from prior plan
# - Prior plan made decision affecting this plan
# - Prior plan's output is input to this plan
#
# Independent plans need NO prior SUMMARY references.
# Do NOT reflexively chain: 02 refs 01, 03 refs 02...

@src/relevant/source.ts
</context>
```

**Bad pattern (creates false dependencies):**
```markdown
<context>
@.planning/phases/03-features/03-01-SUMMARY.md # Just because it's earlier
@.planning/phases/03-features/03-02-SUMMARY.md # Reflexive chaining
</context>
```

---

## Scope Guidance

**Plan sizing:**

- 2-3 tasks per plan
- ~50% context usage maximum
- Complex phases: multiple focused plans, not one large plan

**When to split:**

- Different subsystems (auth vs API vs UI)
- >3 tasks
- Risk of context overflow
- TDD candidates - separate plans

**Vertical slices preferred:**

```
PREFER: Plan 01 = User (model + API + UI)
        Plan 02 = Product (model + API + UI)

AVOID:  Plan 01 = All models
        Plan 02 = All APIs
        Plan 03 = All UIs
```

---

## TDD Plans

TDD features get dedicated plans with `type: tdd`.

**Heuristic:** Can you write `expect(fn(input)).toBe(output)` before writing `fn`?
→ Yes: Create a TDD plan
→ No: Standard task in standard plan

See `C:/Users/yaoji/.claude/get-shit-done/references/tdd.md` for TDD plan structure.

---

## Task Types

| Type | Use For | Autonomy |
|------|---------|----------|
| `auto` | Everything Claude can do independently | Fully autonomous |
| `checkpoint:human-verify` | Visual/functional verification | Pauses, returns to orchestrator |
| `checkpoint:decision` | Implementation choices | Pauses, returns to orchestrator |
| `checkpoint:human-action` | Truly unavoidable manual steps (rare) | Pauses, returns to orchestrator |

**Checkpoint behavior in parallel execution:**
- Plan runs until checkpoint
- Agent returns with checkpoint details + agent_id
- Orchestrator presents to user
- User responds
- Orchestrator resumes agent with `resume: agent_id`

---

## Examples

**Autonomous parallel plan:**

```markdown
---
phase: 03-features
plan: 01
type: execute
wave: 1
depends_on: []
files_modified: [src/features/user/model.ts, src/features/user/api.ts, src/features/user/UserList.tsx]
autonomous: true
---

<objective>
Implement complete User feature as vertical slice.

Purpose: Self-contained user management that can run parallel to other features.
Output: User model, API endpoints, and UI components.
</objective>

<context>
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/STATE.md
</context>

<tasks>
<task type="auto">
<name>Task 1: Create User model</name>
<files>src/features/user/model.ts</files>
<action>Define User type with id, email, name, createdAt. Export TypeScript interface.</action>
<verify>tsc --noEmit passes</verify>
<done>User type exported and usable</done>
</task>

<task type="auto">
<name>Task 2: Create User API endpoints</name>
<files>src/features/user/api.ts</files>
<action>GET /users (list), GET /users/:id (single), POST /users (create). Use User type from model.</action>
<verify>fetch tests pass for all endpoints</verify>
<done>All CRUD operations work</done>
</task>
</tasks>

<verification>
- [ ] npm run build succeeds
- [ ] API endpoints respond correctly
</verification>

<success_criteria>
- All tasks completed
- User feature works end-to-end
</success_criteria>

<output>
After completion, create `.planning/phases/03-features/03-01-SUMMARY.md`
</output>
```

**Plan with checkpoint (non-autonomous):**

```markdown
---
phase: 03-features
plan: 03
type: execute
wave: 2
depends_on: ["03-01", "03-02"]
files_modified: [src/components/Dashboard.tsx]
autonomous: false
---

<objective>
Build dashboard with visual verification.

Purpose: Integrate user and product features into unified view.
Output: Working dashboard component.
</objective>

<execution_context>
@C:/Users/yaoji/.claude/get-shit-done/workflows/execute-plan.md
@C:/Users/yaoji/.claude/get-shit-done/templates/summary.md
@C:/Users/yaoji/.claude/get-shit-done/references/checkpoints.md
</execution_context>

<context>
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/phases/03-features/03-01-SUMMARY.md
@.planning/phases/03-features/03-02-SUMMARY.md
</context>

<tasks>
<task type="auto">
<name>Task 1: Build Dashboard layout</name>
<files>src/components/Dashboard.tsx</files>
<action>Create responsive grid with UserList and ProductList components. Use Tailwind for styling.</action>
<verify>npm run build succeeds</verify>
<done>Dashboard renders without errors</done>
</task>

<!-- Checkpoint pattern: Claude starts server, user visits URL. See checkpoints.md for full patterns. -->
<task type="auto">
<name>Start dev server</name>
<action>Run `npm run dev` in background, wait for ready</action>
<verify>fetch http://localhost:3000 returns 200</verify>
</task>

<task type="checkpoint:human-verify" gate="blocking">
<what-built>Dashboard - server at http://localhost:3000</what-built>
<how-to-verify>Visit localhost:3000/dashboard. Check: desktop grid, mobile stack, no scroll issues.</how-to-verify>
<resume-signal>Type "approved" or describe issues</resume-signal>
</task>
</tasks>

<verification>
- [ ] npm run build succeeds
- [ ] Visual verification passed
</verification>

<success_criteria>
- All tasks completed
- User approved visual layout
</success_criteria>

<output>
After completion, create `.planning/phases/03-features/03-03-SUMMARY.md`
</output>
```

---

## Anti-Patterns

**Bad: Reflexive dependency chaining**
```yaml
depends_on: ["03-01"] # Just because 01 comes before 02
```

**Bad: Horizontal layer grouping**
```
Plan 01: All models
Plan 02: All APIs (depends on 01)
Plan 03: All UIs (depends on 02)
```

**Bad: Missing autonomy flag**
```yaml
# Has checkpoint but no autonomous: false
depends_on: []
files_modified: [...]
# autonomous: ??? <- Missing!
```

**Bad: Vague tasks**
```xml
<task type="auto">
<name>Set up authentication</name>
<action>Add auth to the app</action>
</task>
```

**Bad: Missing read_first (executor modifies files it hasn't read)**
```xml
<task type="auto">
<name>Update database config</name>
<files>src/config/database.ts</files>
<!-- No read_first! Executor doesn't know current state or conventions -->
<action>Update the database config to match production settings</action>
</task>
```

**Bad: Vague acceptance criteria (not verifiable)**
```xml
<acceptance_criteria>
- Config is properly set up
- Database connection works correctly
</acceptance_criteria>
```

**Good: Concrete with read_first + verifiable criteria**
```xml
<task type="auto">
<name>Update database config for connection pooling</name>
<files>src/config/database.ts</files>
<read_first>src/config/database.ts, .env.example, docker-compose.yml</read_first>
<action>Add pool configuration: min=2, max=20, idleTimeoutMs=30000. Add SSL config: rejectUnauthorized=true when NODE_ENV=production. Add .env.example entry: DATABASE_POOL_MAX=20.</action>
<acceptance_criteria>
- database.ts contains "max: 20" and "idleTimeoutMillis: 30000"
- database.ts contains SSL conditional on NODE_ENV
- .env.example contains DATABASE_POOL_MAX
</acceptance_criteria>
</task>
```

---

## Guidelines

- Always use XML structure for Claude parsing
- Include `wave`, `depends_on`, `files_modified`, `autonomous` in every plan
- Prefer vertical slices over horizontal layers
- Only reference prior SUMMARYs when genuinely needed
- Group checkpoints with related auto tasks in same plan
- 2-3 tasks per plan, ~50% context max

---

## User Setup (External Services)

When a plan introduces external services requiring human configuration, declare them in frontmatter:

```yaml
user_setup:
  - service: stripe
    why: "Payment processing requires API keys"
|
||||
env_vars:
|
||||
- name: STRIPE_SECRET_KEY
|
||||
source: "Stripe Dashboard → Developers → API keys → Secret key"
|
||||
- name: STRIPE_WEBHOOK_SECRET
|
||||
source: "Stripe Dashboard → Developers → Webhooks → Signing secret"
|
||||
dashboard_config:
|
||||
- task: "Create webhook endpoint"
|
||||
location: "Stripe Dashboard → Developers → Webhooks → Add endpoint"
|
||||
details: "URL: https://[your-domain]/api/webhooks/stripe"
|
||||
local_dev:
|
||||
- "stripe listen --forward-to localhost:3000/api/webhooks/stripe"
|
||||
```
|
||||
|
||||
**The automation-first rule:** `user_setup` contains ONLY what Claude literally cannot do:
|
||||
- Account creation (requires human signup)
|
||||
- Secret retrieval (requires dashboard access)
|
||||
- Dashboard configuration (requires human in browser)
|
||||
|
||||
**NOT included:** Package installs, code changes, file creation, CLI commands Claude can run.
|
||||
|
||||
**Result:** Execute-plan generates `{phase}-USER-SETUP.md` with checklist for the user.
|
||||
|
||||
See `C:/Users/yaoji/.claude/get-shit-done/templates/user-setup.md` for full schema and examples
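Generating the `{phase}-USER-SETUP.md` checklist from the frontmatter above is mechanical once the YAML is parsed. A minimal sketch (the function name and data shapes are illustrative, not the actual execute-plan implementation):

```python
def render_user_setup(phase: str, services: list[dict]) -> str:
    """Render a user-facing setup checklist from parsed user_setup frontmatter.

    `services` is the list under the `user_setup:` key, already parsed from YAML.
    """
    lines = [f"# {phase}-USER-SETUP", ""]
    for svc in services:
        lines.append(f"## {svc['service']}: {svc['why']}")
        # Secrets the human must retrieve from a dashboard
        for var in svc.get("env_vars", []):
            lines.append(f"- [ ] Set `{var['name']}` ({var['source']})")
        # Configuration the human must click through in a browser
        for cfg in svc.get("dashboard_config", []):
            lines.append(f"- [ ] {cfg['task']} ({cfg['location']})")
        # Commands to run during local development
        for cmd in svc.get("local_dev", []):
            lines.append(f"- [ ] Local dev: `{cmd}`")
        lines.append("")
    return "\n".join(lines)
```

Anything Claude can run itself (installs, code changes) never reaches this checklist, per the automation-first rule.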
---

## Must-Haves (Goal-Backward Verification)

The `must_haves` field defines what must be TRUE for the phase goal to be achieved. It is derived during planning and verified after execution.

**Structure:**

```yaml
must_haves:
  truths:
    - "User can see existing messages"
    - "User can send a message"
    - "Messages persist across refresh"
  artifacts:
    - path: "src/components/Chat.tsx"
      provides: "Message list rendering"
      min_lines: 30
    - path: "src/app/api/chat/route.ts"
      provides: "Message CRUD operations"
      exports: ["GET", "POST"]
    - path: "prisma/schema.prisma"
      provides: "Message model"
      contains: "model Message"
  key_links:
    - from: "src/components/Chat.tsx"
      to: "/api/chat"
      via: "fetch in useEffect"
      pattern: "fetch.*api/chat"
    - from: "src/app/api/chat/route.ts"
      to: "prisma.message"
      via: "database query"
      pattern: "prisma\\.message\\.(find|create)"
```

**Field descriptions:**

| Field | Purpose |
|-------|---------|
| `truths` | Observable behaviors from the user's perspective. Each must be testable. |
| `artifacts` | Files that must exist with a real implementation. |
| `artifacts[].path` | File path relative to the project root. |
| `artifacts[].provides` | What this artifact delivers. |
| `artifacts[].min_lines` | Optional. Minimum line count to be considered substantive. |
| `artifacts[].exports` | Optional. Expected exports to verify. |
| `artifacts[].contains` | Optional. Pattern that must exist in the file. |
| `key_links` | Critical connections between artifacts. |
| `key_links[].from` | Source artifact. |
| `key_links[].to` | Target artifact or endpoint. |
| `key_links[].via` | How they connect (description). |
| `key_links[].pattern` | Optional. Regex to verify the connection exists. |

**Why this matters:**

Task completion ≠ goal achievement. A task like "create chat component" can complete by creating a placeholder. The `must_haves` field captures what must actually work, enabling verification to catch gaps before they compound.

**Verification flow:**

1. Plan-phase derives must_haves from the phase goal (goal-backward)
2. must_haves are written to PLAN.md frontmatter
3. Execute-phase runs all plans
4. Verification subagent checks must_haves against the codebase
5. Gaps found → fix plans created → execute → re-verify
6. All must_haves pass → phase complete

See `C:/Users/yaoji/.claude/get-shit-done/workflows/verify-phase.md` for the verification logic.
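The artifact and key-link checks in the flow above are mechanical; only `truths` need human or UAT judgment. A minimal sketch of that mechanical pass (a hypothetical helper, assuming the `must_haves` block has already been parsed from PLAN.md frontmatter into a dict; not the actual verify-phase implementation):

```python
import re
from pathlib import Path

def verify_must_haves(must_haves: dict, root: Path) -> list[str]:
    """Check artifacts and key_links against the codebase.

    Returns human-readable gap descriptions (empty list = pass).
    """
    gaps = []
    for art in must_haves.get("artifacts", []):
        path = root / art["path"]
        if not path.is_file():
            gaps.append(f"missing artifact: {art['path']}")
            continue
        text = path.read_text()
        # min_lines guards against placeholder files
        if art.get("min_lines") and len(text.splitlines()) < art["min_lines"]:
            gaps.append(f"too thin (< {art['min_lines']} lines): {art['path']}")
        if art.get("contains") and art["contains"] not in text:
            gaps.append(f"missing '{art['contains']}' in {art['path']}")
        for export in art.get("exports", []):
            # Loose heuristic for TS/JS export forms; illustrative only
            if not re.search(rf"export\s+(async\s+)?(function\s+|const\s+)?{export}\b", text):
                gaps.append(f"missing export {export} in {art['path']}")
    for link in must_haves.get("key_links", []):
        src = root / link["from"]
        if link.get("pattern") and src.is_file():
            if not re.search(link["pattern"], src.read_text()):
                gaps.append(f"no link {link['from']} -> {link['to']} ({link['via']})")
    return gaps
```

Each gap found this way becomes input to a fix plan, then the phase is re-verified.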
117
get-shit-done/templates/planner-subagent-prompt.md
Normal file
@@ -0,0 +1,117 @@
# Planner Subagent Prompt Template

Template for spawning the gsd-planner agent. The agent contains all planning expertise; this template provides only the planning context.

---

## Template

```markdown
<planning_context>

**Phase:** {phase_number}
**Mode:** {standard | gap_closure}

**Project State:**
@.planning/STATE.md

**Roadmap:**
@.planning/ROADMAP.md

**Requirements (if exists):**
@.planning/REQUIREMENTS.md

**Phase Context (if exists):**
@.planning/phases/{phase_dir}/{phase_num}-CONTEXT.md

**Research (if exists):**
@.planning/phases/{phase_dir}/{phase_num}-RESEARCH.md

**Gap Closure (if --gaps mode):**
@.planning/phases/{phase_dir}/{phase_num}-VERIFICATION.md
@.planning/phases/{phase_dir}/{phase_num}-UAT.md

</planning_context>

<downstream_consumer>
Output is consumed by /gsd:execute-phase.
Plans must be executable prompts with:
- Frontmatter (wave, depends_on, files_modified, autonomous)
- Tasks in XML format
- Verification criteria
- must_haves for goal-backward verification
</downstream_consumer>

<quality_gate>
Before returning PLANNING COMPLETE:
- [ ] PLAN.md files created in phase directory
- [ ] Each plan has valid frontmatter
- [ ] Tasks are specific and actionable
- [ ] Dependencies correctly identified
- [ ] Waves assigned for parallel execution
- [ ] must_haves derived from phase goal
</quality_gate>
```
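The frontmatter item in the quality gate lends itself to a mechanical pre-flight check before the planner returns PLANNING COMPLETE. A minimal sketch (hypothetical helper names; the naive frontmatter parsing here is for illustration, a real check would use a YAML parser):

```python
import re

# Fields every plan's frontmatter must declare, per the quality gate
REQUIRED = ("wave", "depends_on", "files_modified", "autonomous")

def frontmatter_fields(plan_text: str) -> set[str]:
    """Extract top-level field names from a PLAN.md frontmatter block."""
    m = re.match(r"---\n(.*?)\n---", plan_text, re.S)
    if not m:
        return set()
    return {line.split(":")[0].strip()
            for line in m.group(1).splitlines() if ":" in line}

def gate_gaps(plan_text: str) -> list[str]:
    """Return required frontmatter fields missing from this plan."""
    fields = frontmatter_fields(plan_text)
    return [k for k in REQUIRED if k not in fields]
```

An empty result from `gate_gaps` satisfies only the frontmatter checkbox; the remaining gate items (task specificity, wave assignment) still need the planner's judgment.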
---

## Placeholders

| Placeholder | Source | Example |
|-------------|--------|---------|
| `{phase_number}` | From roadmap/arguments | `5` or `2.1` |
| `{phase_dir}` | Phase directory name | `05-user-profiles` |
| `{phase}` | Phase prefix | `05` |
| `{standard \| gap_closure}` | Mode flag | `standard` |

---

## Usage

**From /gsd:plan-phase (standard mode):**

```python
Task(
    prompt=filled_template,
    subagent_type="gsd-planner",
    description="Plan Phase {phase}"
)
```

**From /gsd:plan-phase --gaps (gap closure mode):**

```python
Task(
    prompt=filled_template,  # with mode: gap_closure
    subagent_type="gsd-planner",
    description="Plan gaps for Phase {phase}"
)
```

---

## Continuation

For checkpoints, spawn a fresh agent with:

```markdown
<objective>
Continue planning for Phase {phase_number}: {phase_name}
</objective>

<prior_state>
Phase directory: @.planning/phases/{phase_dir}/
Existing plans: @.planning/phases/{phase_dir}/*-PLAN.md
</prior_state>

<checkpoint_response>
**Type:** {checkpoint_type}
**Response:** {user_response}
</checkpoint_response>

<mode>
Continue: {standard | gap_closure}
</mode>
```

---

**Note:** Planning methodology, task breakdown, dependency analysis, wave assignment, TDD detection, and goal-backward derivation are baked into the gsd-planner agent. This template only passes context.
184
get-shit-done/templates/project.md
Normal file
@@ -0,0 +1,184 @@
# PROJECT.md Template

Template for `.planning/PROJECT.md` — the living project context document.

<template>

```markdown
# [Project Name]

## What This Is

[Current accurate description — 2-3 sentences. What does this product do and who is it for?
Use the user's language and framing. Update whenever reality drifts from this description.]

## Core Value

[The ONE thing that matters most. If everything else fails, this must work.
One sentence that drives prioritization when tradeoffs arise.]

## Requirements

### Validated

<!-- Shipped and confirmed valuable. -->

(None yet — ship to validate)

### Active

<!-- Current scope. Building toward these. -->

- [ ] [Requirement 1]
- [ ] [Requirement 2]
- [ ] [Requirement 3]

### Out of Scope

<!-- Explicit boundaries. Includes reasoning to prevent re-adding. -->

- [Exclusion 1] — [why]
- [Exclusion 2] — [why]

## Context

[Background information that informs implementation:
- Technical environment or ecosystem
- Relevant prior work or experience
- User research or feedback themes
- Known issues to address]

## Constraints

- **[Type]**: [What] — [Why]
- **[Type]**: [What] — [Why]

Common types: Tech stack, Timeline, Budget, Dependencies, Compatibility, Performance, Security

## Key Decisions

<!-- Decisions that constrain future work. Add throughout project lifecycle. -->

| Decision | Rationale | Outcome |
|----------|-----------|---------|
| [Choice] | [Why] | [✓ Good / ⚠️ Revisit / — Pending] |

---
*Last updated: [date] after [trigger]*
```

</template>

<guidelines>

**What This Is:**
- Current accurate description of the product
- 2-3 sentences capturing what it does and who it's for
- Use the user's words and framing
- Update when the product evolves beyond this description

**Core Value:**
- The single most important thing
- Everything else can fail; this cannot
- Drives prioritization when tradeoffs arise
- Rarely changes; if it does, it's a significant pivot

**Requirements — Validated:**
- Requirements that shipped and proved valuable
- Format: `- ✓ [Requirement] — [version/phase]`
- These are locked — changing them requires explicit discussion

**Requirements — Active:**
- Current scope being built toward
- These are hypotheses until shipped and validated
- Move to Validated when shipped, Out of Scope if invalidated

**Requirements — Out of Scope:**
- Explicit boundaries on what we're not building
- Always include reasoning (prevents re-adding later)
- Includes: considered and rejected, deferred to future, explicitly excluded

**Context:**
- Background that informs implementation decisions
- Technical environment, prior work, user feedback
- Known issues or technical debt to address
- Update as new context emerges

**Constraints:**
- Hard limits on implementation choices
- Tech stack, timeline, budget, compatibility, dependencies
- Include the "why" — constraints without rationale get questioned

**Key Decisions:**
- Significant choices that affect future work
- Add decisions as they're made throughout the project
- Track outcome when known:
  - ✓ Good — decision proved correct
  - ⚠️ Revisit — decision may need reconsideration
  - — Pending — too early to evaluate

**Last Updated:**
- Always note when and why the document was updated
- Format: `after Phase 2` or `after v1.0 milestone`
- Triggers review of whether content is still accurate

</guidelines>

<evolution>

PROJECT.md evolves throughout the project lifecycle.

**After each phase transition:**
1. Requirements invalidated? → Move to Out of Scope with reason
2. Requirements validated? → Move to Validated with phase reference
3. New requirements emerged? → Add to Active
4. Decisions to log? → Add to Key Decisions
5. "What This Is" still accurate? → Update if drifted

**After each milestone:**
1. Full review of all sections
2. Core Value check — still the right priority?
3. Audit Out of Scope — reasons still valid?
4. Update Context with current state (users, feedback, metrics)

</evolution>

<brownfield>

For existing codebases:

1. **Map codebase first** via `/gsd:map-codebase`

2. **Infer Validated requirements** from existing code:
   - What does the codebase actually do?
   - What patterns are established?
   - What's clearly working and relied upon?

3. **Gather Active requirements** from user:
   - Present inferred current state
   - Ask what they want to build next

4. **Initialize:**
   - Validated = inferred from existing code
   - Active = user's goals for this work
   - Out of Scope = boundaries user specifies
   - Context = includes current codebase state

</brownfield>

<state_reference>

STATE.md references PROJECT.md:

```markdown
## Project Reference

See: .planning/PROJECT.md (updated [date])

**Core value:** [One-liner from Core Value section]
**Current focus:** [Current phase name]
```

This ensures Claude reads current PROJECT.md context.

</state_reference>
231
get-shit-done/templates/requirements.md
Normal file
@@ -0,0 +1,231 @@
# Requirements Template

Template for `.planning/REQUIREMENTS.md` — checkable requirements that define "done."

<template>

```markdown
# Requirements: [Project Name]

**Defined:** [date]
**Core Value:** [from PROJECT.md]

## v1 Requirements

Requirements for the initial release. Each maps to roadmap phases.

### Authentication

- [ ] **AUTH-01**: User can sign up with email and password
- [ ] **AUTH-02**: User receives email verification after signup
- [ ] **AUTH-03**: User can reset password via email link
- [ ] **AUTH-04**: User session persists across browser refresh

### [Category 2]

- [ ] **[CAT]-01**: [Requirement description]
- [ ] **[CAT]-02**: [Requirement description]
- [ ] **[CAT]-03**: [Requirement description]

### [Category 3]

- [ ] **[CAT]-01**: [Requirement description]
- [ ] **[CAT]-02**: [Requirement description]

## v2 Requirements

Deferred to a future release. Tracked but not in the current roadmap.

### [Category]

- **[CAT]-01**: [Requirement description]
- **[CAT]-02**: [Requirement description]

## Out of Scope

Explicitly excluded. Documented to prevent scope creep.

| Feature | Reason |
|---------|--------|
| [Feature] | [Why excluded] |
| [Feature] | [Why excluded] |

## Traceability

Which phases cover which requirements. Updated during roadmap creation.

| Requirement | Phase | Status |
|-------------|-------|--------|
| AUTH-01 | Phase 1 | Pending |
| AUTH-02 | Phase 1 | Pending |
| AUTH-03 | Phase 1 | Pending |
| AUTH-04 | Phase 1 | Pending |
| [REQ-ID] | Phase [N] | Pending |

**Coverage:**
- v1 requirements: [X] total
- Mapped to phases: [Y]
- Unmapped: [Z] ⚠️

---
*Requirements defined: [date]*
*Last updated: [date] after [trigger]*
```

</template>

<guidelines>

**Requirement Format:**
- ID: `[CATEGORY]-[NUMBER]` (AUTH-01, CONTENT-02, SOCIAL-03)
- Description: User-centric, testable, atomic
- Checkbox: Only for v1 requirements (v2 are not yet actionable)

**Categories:**
- Derive from research FEATURES.md categories
- Keep consistent with domain conventions
- Typical: Authentication, Content, Social, Notifications, Moderation, Payments, Admin

**v1 vs v2:**
- v1: Committed scope, will be in roadmap phases
- v2: Acknowledged but deferred, not in the current roadmap
- Moving v2 → v1 requires a roadmap update

**Out of Scope:**
- Explicit exclusions with reasoning
- Prevents "why didn't you include X?" later
- Anti-features from research belong here with warnings

**Traceability:**
- Empty initially, populated during roadmap creation
- Each requirement maps to exactly one phase
- Unmapped requirements = roadmap gap

**Status Values:**
- Pending: Not started
- In Progress: Phase is active
- Complete: Requirement verified
- Blocked: Waiting on external factor

</guidelines>
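The Coverage block in the Traceability section can be computed rather than maintained by hand. A minimal sketch (a hypothetical helper, assuming the v1 requirement IDs and the requirement-to-phase mapping have already been parsed from the tables):

```python
def coverage_summary(requirement_ids: list[str], phase_map: dict[str, str]) -> str:
    """Fill in the Coverage block: totals, mapped count, and unmapped count.

    `phase_map` maps a requirement ID (e.g. "AUTH-01") to its phase.
    Any v1 requirement absent from the map is a roadmap gap.
    """
    unmapped = [r for r in requirement_ids if r not in phase_map]
    mark = "0 ✓" if not unmapped else f"{len(unmapped)} ⚠️"
    return (f"- v1 requirements: {len(requirement_ids)} total\n"
            f"- Mapped to phases: {len(requirement_ids) - len(unmapped)}\n"
            f"- Unmapped: {mark}")
```

Running this after every roadmap update keeps the "Unmapped requirements = roadmap gap" rule honest.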
<evolution>

**After each phase completes:**
1. Mark covered requirements as Complete
2. Update traceability status
3. Note any requirements that changed scope

**After roadmap updates:**
1. Verify all v1 requirements are still mapped
2. Add new requirements if scope expanded
3. Move requirements to v2/out of scope if descoped

**Requirement completion criteria:**
- A requirement is "Complete" when:
  - Feature is implemented
  - Feature is verified (tests pass, manual check done)
  - Feature is committed

</evolution>

<example>

```markdown
# Requirements: CommunityApp

**Defined:** 2025-01-14
**Core Value:** Users can share and discuss content with people who share their interests

## v1 Requirements

### Authentication

- [ ] **AUTH-01**: User can sign up with email and password
- [ ] **AUTH-02**: User receives email verification after signup
- [ ] **AUTH-03**: User can reset password via email link
- [ ] **AUTH-04**: User session persists across browser refresh

### Profiles

- [ ] **PROF-01**: User can create profile with display name
- [ ] **PROF-02**: User can upload avatar image
- [ ] **PROF-03**: User can write bio (max 500 chars)
- [ ] **PROF-04**: User can view other users' profiles

### Content

- [ ] **CONT-01**: User can create text post
- [ ] **CONT-02**: User can upload image with post
- [ ] **CONT-03**: User can edit own posts
- [ ] **CONT-04**: User can delete own posts
- [ ] **CONT-05**: User can view feed of posts

### Social

- [ ] **SOCL-01**: User can follow other users
- [ ] **SOCL-02**: User can unfollow users
- [ ] **SOCL-03**: User can like posts
- [ ] **SOCL-04**: User can comment on posts
- [ ] **SOCL-05**: User can view activity feed (followed users' posts)

## v2 Requirements

### Notifications

- **NOTF-01**: User receives in-app notifications
- **NOTF-02**: User receives email for new followers
- **NOTF-03**: User receives email for comments on own posts
- **NOTF-04**: User can configure notification preferences

### Moderation

- **MODR-01**: User can report content
- **MODR-02**: User can block other users
- **MODR-03**: Admin can view reported content
- **MODR-04**: Admin can remove content
- **MODR-05**: Admin can ban users

## Out of Scope

| Feature | Reason |
|---------|--------|
| Real-time chat | High complexity, not core to community value |
| Video posts | Storage/bandwidth costs, defer to v2+ |
| OAuth login | Email/password sufficient for v1 |
| Mobile app | Web-first, mobile later |

## Traceability

| Requirement | Phase | Status |
|-------------|-------|--------|
| AUTH-01 | Phase 1 | Pending |
| AUTH-02 | Phase 1 | Pending |
| AUTH-03 | Phase 1 | Pending |
| AUTH-04 | Phase 1 | Pending |
| PROF-01 | Phase 2 | Pending |
| PROF-02 | Phase 2 | Pending |
| PROF-03 | Phase 2 | Pending |
| PROF-04 | Phase 2 | Pending |
| CONT-01 | Phase 3 | Pending |
| CONT-02 | Phase 3 | Pending |
| CONT-03 | Phase 3 | Pending |
| CONT-04 | Phase 3 | Pending |
| CONT-05 | Phase 3 | Pending |
| SOCL-01 | Phase 4 | Pending |
| SOCL-02 | Phase 4 | Pending |
| SOCL-03 | Phase 4 | Pending |
| SOCL-04 | Phase 4 | Pending |
| SOCL-05 | Phase 4 | Pending |

**Coverage:**
- v1 requirements: 18 total
- Mapped to phases: 18
- Unmapped: 0 ✓

---
*Requirements defined: 2025-01-14*
*Last updated: 2025-01-14 after initial definition*
```

</example>
204
get-shit-done/templates/research-project/ARCHITECTURE.md
Normal file
@@ -0,0 +1,204 @@
# Architecture Research Template

Template for `.planning/research/ARCHITECTURE.md` — system structure patterns for the project domain.

<template>

```markdown
# Architecture Research

**Domain:** [domain type]
**Researched:** [date]
**Confidence:** [HIGH/MEDIUM/LOW]

## Standard Architecture

### System Overview

```
┌─────────────────────────────────────────────────────────────┐
│                        [Layer Name]                         │
├─────────────────────────────────────────────────────────────┤
│  ┌─────────┐  ┌─────────┐  ┌─────────┐  ┌─────────┐         │
│  │ [Comp]  │  │ [Comp]  │  │ [Comp]  │  │ [Comp]  │         │
│  └────┬────┘  └────┬────┘  └────┬────┘  └────┬────┘         │
│       │            │            │            │              │
├───────┴────────────┴────────────┴────────────┴──────────────┤
│                        [Layer Name]                         │
├─────────────────────────────────────────────────────────────┤
│  ┌─────────────────────────────────────────────────────┐    │
│  │                     [Component]                     │    │
│  └─────────────────────────────────────────────────────┘    │
├─────────────────────────────────────────────────────────────┤
│                        [Layer Name]                         │
│  ┌──────────┐  ┌──────────┐  ┌──────────┐                   │
│  │ [Store]  │  │ [Store]  │  │ [Store]  │                   │
│  └──────────┘  └──────────┘  └──────────┘                   │
└─────────────────────────────────────────────────────────────┘
```

### Component Responsibilities

| Component | Responsibility | Typical Implementation |
|-----------|----------------|------------------------|
| [name] | [what it owns] | [how it's usually built] |
| [name] | [what it owns] | [how it's usually built] |
| [name] | [what it owns] | [how it's usually built] |

## Recommended Project Structure

```
src/
├── [folder]/          # [purpose]
│   ├── [subfolder]/   # [purpose]
│   └── [file].ts      # [purpose]
├── [folder]/          # [purpose]
│   ├── [subfolder]/   # [purpose]
│   └── [file].ts      # [purpose]
├── [folder]/          # [purpose]
└── [folder]/          # [purpose]
```

### Structure Rationale

- **[folder]/:** [why organized this way]
- **[folder]/:** [why organized this way]

## Architectural Patterns

### Pattern 1: [Pattern Name]

**What:** [description]
**When to use:** [conditions]
**Trade-offs:** [pros and cons]

**Example:**
```typescript
// [Brief code example showing the pattern]
```

### Pattern 2: [Pattern Name]

**What:** [description]
**When to use:** [conditions]
**Trade-offs:** [pros and cons]

**Example:**
```typescript
// [Brief code example showing the pattern]
```

### Pattern 3: [Pattern Name]

**What:** [description]
**When to use:** [conditions]
**Trade-offs:** [pros and cons]

## Data Flow

### Request Flow

```
[User Action]
     ↓
[Component] → [Handler] → [Service] → [Data Store]
     ↓            ↓           ↓            ↓
[Response] ← [Transform] ← [Query] ← [Database]
```

### State Management

```
[State Store]
     ↓ (subscribe)
[Components] ←→ [Actions] → [Reducers/Mutations] → [State Store]
```

### Key Data Flows

1. **[Flow name]:** [description of how data moves]
2. **[Flow name]:** [description of how data moves]

## Scaling Considerations

| Scale | Architecture Adjustments |
|-------|--------------------------|
| 0-1k users | [approach — usually monolith is fine] |
| 1k-100k users | [approach — what to optimize first] |
| 100k+ users | [approach — when to consider splitting] |

### Scaling Priorities

1. **First bottleneck:** [what breaks first, how to fix]
2. **Second bottleneck:** [what breaks next, how to fix]

## Anti-Patterns

### Anti-Pattern 1: [Name]

**What people do:** [the mistake]
**Why it's wrong:** [the problem it causes]
**Do this instead:** [the correct approach]

### Anti-Pattern 2: [Name]

**What people do:** [the mistake]
**Why it's wrong:** [the problem it causes]
**Do this instead:** [the correct approach]

## Integration Points

### External Services

| Service | Integration Pattern | Notes |
|---------|---------------------|-------|
| [service] | [how to connect] | [gotchas] |
| [service] | [how to connect] | [gotchas] |

### Internal Boundaries

| Boundary | Communication | Notes |
|----------|---------------|-------|
| [module A ↔ module B] | [API/events/direct] | [considerations] |

## Sources

- [Architecture references]
- [Official documentation]
- [Case studies]

---
*Architecture research for: [domain]*
*Researched: [date]*
```

</template>

<guidelines>

**System Overview:**
- Use ASCII box-drawing diagrams for clarity (├── └── │ ─ for structure visualization only)
- Show major components and their relationships
- Don't over-detail — this is conceptual, not implementation

**Project Structure:**
- Be specific about folder organization
- Explain the rationale for grouping
- Match conventions of the chosen stack

**Patterns:**
- Include code examples where helpful
- Explain trade-offs honestly
- Note when patterns are overkill for small projects

**Scaling Considerations:**
- Be realistic — most projects don't need to scale to millions
- Focus on "what breaks first" not theoretical limits
- Avoid premature optimization recommendations

**Anti-Patterns:**
- Specific to this domain
- Include what to do instead
- Helps prevent common mistakes during implementation

</guidelines>
147
get-shit-done/templates/research-project/FEATURES.md
Normal file
@@ -0,0 +1,147 @@
# Features Research Template

Template for `.planning/research/FEATURES.md` — feature landscape for the project domain.

<template>

```markdown
# Feature Research

**Domain:** [domain type]
**Researched:** [date]
**Confidence:** [HIGH/MEDIUM/LOW]

## Feature Landscape

### Table Stakes (Users Expect These)

Features users assume exist. Missing these = product feels incomplete.

| Feature | Why Expected | Complexity | Notes |
|---------|--------------|------------|-------|
| [feature] | [user expectation] | LOW/MEDIUM/HIGH | [implementation notes] |
| [feature] | [user expectation] | LOW/MEDIUM/HIGH | [implementation notes] |
| [feature] | [user expectation] | LOW/MEDIUM/HIGH | [implementation notes] |

### Differentiators (Competitive Advantage)

Features that set the product apart. Not required, but valuable.

| Feature | Value Proposition | Complexity | Notes |
|---------|-------------------|------------|-------|
| [feature] | [why it matters] | LOW/MEDIUM/HIGH | [implementation notes] |
| [feature] | [why it matters] | LOW/MEDIUM/HIGH | [implementation notes] |
| [feature] | [why it matters] | LOW/MEDIUM/HIGH | [implementation notes] |

### Anti-Features (Commonly Requested, Often Problematic)

Features that seem good but create problems.

| Feature | Why Requested | Why Problematic | Alternative |
|---------|---------------|-----------------|-------------|
| [feature] | [surface appeal] | [actual problems] | [better approach] |
| [feature] | [surface appeal] | [actual problems] | [better approach] |

## Feature Dependencies

```
[Feature A]
└──requires──> [Feature B]
               └──requires──> [Feature C]

[Feature D] ──enhances──> [Feature A]

[Feature E] ──conflicts──> [Feature F]
```

### Dependency Notes

- **[Feature A] requires [Feature B]:** [why the dependency exists]
- **[Feature D] enhances [Feature A]:** [how they work together]
- **[Feature E] conflicts with [Feature F]:** [why they're incompatible]

## MVP Definition

### Launch With (v1)

Minimum viable product — what's needed to validate the concept.

- [ ] [Feature] — [why essential]
- [ ] [Feature] — [why essential]
- [ ] [Feature] — [why essential]

### Add After Validation (v1.x)

Features to add once core is working.

- [ ] [Feature] — [trigger for adding]
- [ ] [Feature] — [trigger for adding]

### Future Consideration (v2+)

Features to defer until product-market fit is established.

- [ ] [Feature] — [why defer]
- [ ] [Feature] — [why defer]

## Feature Prioritization Matrix

| Feature | User Value | Implementation Cost | Priority |
|---------|------------|---------------------|----------|
| [feature] | HIGH/MEDIUM/LOW | HIGH/MEDIUM/LOW | P1/P2/P3 |
| [feature] | HIGH/MEDIUM/LOW | HIGH/MEDIUM/LOW | P1/P2/P3 |
| [feature] | HIGH/MEDIUM/LOW | HIGH/MEDIUM/LOW | P1/P2/P3 |

**Priority key:**
- P1: Must have for launch
- P2: Should have, add when possible
- P3: Nice to have, future consideration

## Competitor Feature Analysis

| Feature | Competitor A | Competitor B | Our Approach |
|---------|--------------|--------------|--------------|
| [feature] | [how they do it] | [how they do it] | [our plan] |
| [feature] | [how they do it] | [how they do it] | [our plan] |

## Sources

- [Competitor products analyzed]
- [User research or feedback sources]
- [Industry standards referenced]

---
*Feature research for: [domain]*
*Researched: [date]*
```

</template>

<guidelines>

**Table Stakes:**
- These are non-negotiable for launch
- Users don't give credit for having them, but penalize for missing them
- Example: A community platform without user profiles is broken

**Differentiators:**
- These are where you compete
- Should align with the Core Value from PROJECT.md
- Don't try to differentiate on everything

**Anti-Features:**
- Prevent scope creep by documenting what seems good but isn't
- Include the alternative approach
- Example: "Real-time everything" often creates complexity without value

**Feature Dependencies:**
- Critical for roadmap phase ordering
- If A requires B, B must be in an earlier phase
- Conflicts inform what NOT to combine in same phase

**MVP Definition:**
- Be ruthless about what's truly minimum
- "Nice to have" is not MVP
- Launch with less, validate, then expand

</guidelines>
200
get-shit-done/templates/research-project/PITFALLS.md
Normal file
@@ -0,0 +1,200 @@
# Pitfalls Research Template

Template for `.planning/research/PITFALLS.md` — common mistakes to avoid in the project domain.

<template>

```markdown
# Pitfalls Research

**Domain:** [domain type]
**Researched:** [date]
**Confidence:** [HIGH/MEDIUM/LOW]

## Critical Pitfalls

### Pitfall 1: [Name]

**What goes wrong:**
[Description of the failure mode]

**Why it happens:**
[Root cause — why developers make this mistake]

**How to avoid:**
[Specific prevention strategy]

**Warning signs:**
[How to detect this early before it becomes a problem]

**Phase to address:**
[Which roadmap phase should prevent this]

---

### Pitfall 2: [Name]

**What goes wrong:**
[Description of the failure mode]

**Why it happens:**
[Root cause — why developers make this mistake]

**How to avoid:**
[Specific prevention strategy]

**Warning signs:**
[How to detect this early before it becomes a problem]

**Phase to address:**
[Which roadmap phase should prevent this]

---

### Pitfall 3: [Name]

**What goes wrong:**
[Description of the failure mode]

**Why it happens:**
[Root cause — why developers make this mistake]

**How to avoid:**
[Specific prevention strategy]

**Warning signs:**
[How to detect this early before it becomes a problem]

**Phase to address:**
[Which roadmap phase should prevent this]

---

[Continue for all critical pitfalls...]

## Technical Debt Patterns

Shortcuts that seem reasonable but create long-term problems.

| Shortcut | Immediate Benefit | Long-term Cost | When Acceptable |
|----------|-------------------|----------------|-----------------|
| [shortcut] | [benefit] | [cost] | [conditions, or "never"] |
| [shortcut] | [benefit] | [cost] | [conditions, or "never"] |
| [shortcut] | [benefit] | [cost] | [conditions, or "never"] |

## Integration Gotchas

Common mistakes when connecting to external services.

| Integration | Common Mistake | Correct Approach |
|-------------|----------------|------------------|
| [service] | [what people do wrong] | [what to do instead] |
| [service] | [what people do wrong] | [what to do instead] |
| [service] | [what people do wrong] | [what to do instead] |

## Performance Traps

Patterns that work at small scale but fail as usage grows.

| Trap | Symptoms | Prevention | When It Breaks |
|------|----------|------------|----------------|
| [trap] | [how you notice] | [how to avoid] | [scale threshold] |
| [trap] | [how you notice] | [how to avoid] | [scale threshold] |
| [trap] | [how you notice] | [how to avoid] | [scale threshold] |

## Security Mistakes

Domain-specific security issues beyond general web security.

| Mistake | Risk | Prevention |
|---------|------|------------|
| [mistake] | [what could happen] | [how to avoid] |
| [mistake] | [what could happen] | [how to avoid] |
| [mistake] | [what could happen] | [how to avoid] |

## UX Pitfalls

Common user experience mistakes in this domain.

| Pitfall | User Impact | Better Approach |
|---------|-------------|-----------------|
| [pitfall] | [how users suffer] | [what to do instead] |
| [pitfall] | [how users suffer] | [what to do instead] |
| [pitfall] | [how users suffer] | [what to do instead] |

## "Looks Done But Isn't" Checklist

Things that appear complete but are missing critical pieces.

- [ ] **[Feature]:** Often missing [thing] — verify [check]
- [ ] **[Feature]:** Often missing [thing] — verify [check]
- [ ] **[Feature]:** Often missing [thing] — verify [check]
- [ ] **[Feature]:** Often missing [thing] — verify [check]

## Recovery Strategies

When pitfalls occur despite prevention, how to recover.

| Pitfall | Recovery Cost | Recovery Steps |
|---------|---------------|----------------|
| [pitfall] | LOW/MEDIUM/HIGH | [what to do] |
| [pitfall] | LOW/MEDIUM/HIGH | [what to do] |
| [pitfall] | LOW/MEDIUM/HIGH | [what to do] |

## Pitfall-to-Phase Mapping

How roadmap phases should address these pitfalls.

| Pitfall | Prevention Phase | Verification |
|---------|------------------|--------------|
| [pitfall] | Phase [X] | [how to verify prevention worked] |
| [pitfall] | Phase [X] | [how to verify prevention worked] |
| [pitfall] | Phase [X] | [how to verify prevention worked] |

## Sources

- [Post-mortems referenced]
- [Community discussions]
- [Official "gotchas" documentation]
- [Personal experience / known issues]

---
*Pitfalls research for: [domain]*
*Researched: [date]*
```

</template>

<guidelines>

**Critical Pitfalls:**
- Focus on domain-specific issues, not generic mistakes
- Include warning signs — early detection prevents disasters
- Link to specific phases — makes pitfalls actionable

**Technical Debt:**
- Be realistic — some shortcuts are acceptable
- Note when shortcuts are "never acceptable" vs. "only in MVP"
- Include the long-term cost to inform tradeoff decisions

**Performance Traps:**
- Include scale thresholds ("breaks at 10k users")
- Focus on what's relevant for this project's expected scale
- Don't over-engineer for hypothetical scale

**Security Mistakes:**
- Beyond OWASP basics — domain-specific issues
- Example: Community platforms have different security concerns than e-commerce
- Include risk level to prioritize

**"Looks Done But Isn't":**
- Checklist format for verification during execution
- Common in demos vs. production
- Prevents "it works on my machine" issues

**Pitfall-to-Phase Mapping:**
- Critical for roadmap creation
- Each pitfall should map to a phase that prevents it
- Informs phase ordering and success criteria

</guidelines>
120
get-shit-done/templates/research-project/STACK.md
Normal file
@@ -0,0 +1,120 @@
# Stack Research Template

Template for `.planning/research/STACK.md` — recommended technologies for the project domain.

<template>

```markdown
# Stack Research

**Domain:** [domain type]
**Researched:** [date]
**Confidence:** [HIGH/MEDIUM/LOW]

## Recommended Stack

### Core Technologies

| Technology | Version | Purpose | Why Recommended |
|------------|---------|---------|-----------------|
| [name] | [version] | [what it does] | [why experts use it for this domain] |
| [name] | [version] | [what it does] | [why experts use it for this domain] |
| [name] | [version] | [what it does] | [why experts use it for this domain] |

### Supporting Libraries

| Library | Version | Purpose | When to Use |
|---------|---------|---------|-------------|
| [name] | [version] | [what it does] | [specific use case] |
| [name] | [version] | [what it does] | [specific use case] |
| [name] | [version] | [what it does] | [specific use case] |

### Development Tools

| Tool | Purpose | Notes |
|------|---------|-------|
| [name] | [what it does] | [configuration tips] |
| [name] | [what it does] | [configuration tips] |

## Installation

```bash
# Core
npm install [packages]

# Supporting
npm install [packages]

# Dev dependencies
npm install -D [packages]
```

## Alternatives Considered

| Recommended | Alternative | When to Use Alternative |
|-------------|-------------|-------------------------|
| [our choice] | [other option] | [conditions where alternative is better] |
| [our choice] | [other option] | [conditions where alternative is better] |

## What NOT to Use

| Avoid | Why | Use Instead |
|-------|-----|-------------|
| [technology] | [specific problem] | [recommended alternative] |
| [technology] | [specific problem] | [recommended alternative] |

## Stack Patterns by Variant

**If [condition]:**
- Use [variation]
- Because [reason]

**If [condition]:**
- Use [variation]
- Because [reason]

## Version Compatibility

| Package A | Compatible With | Notes |
|-----------|-----------------|-------|
| [package@version] | [package@version] | [compatibility notes] |

## Sources

- [Context7 library ID] — [topics fetched]
- [Official docs URL] — [what was verified]
- [Other source] — [confidence level]

---
*Stack research for: [domain]*
*Researched: [date]*
```

</template>

<guidelines>

**Core Technologies:**
- Include specific version numbers
- Explain why this is the standard choice, not just what it does
- Focus on technologies that affect architecture decisions

**Supporting Libraries:**
- Include libraries commonly needed for this domain
- Note when each is needed (not all projects need all libraries)

**Alternatives:**
- Don't just dismiss alternatives
- Explain when alternatives make sense
- Helps user make informed decisions if they disagree

**What NOT to Use:**
- Actively warn against outdated or problematic choices
- Explain the specific problem, not just "it's old"
- Provide the recommended alternative

**Version Compatibility:**
- Note any known compatibility issues
- Critical for avoiding debugging time later

</guidelines>
170
get-shit-done/templates/research-project/SUMMARY.md
Normal file
@@ -0,0 +1,170 @@
# Research Summary Template

Template for `.planning/research/SUMMARY.md` — executive summary of project research with roadmap implications.

<template>

```markdown
# Project Research Summary

**Project:** [name from PROJECT.md]
**Domain:** [inferred domain type]
**Researched:** [date]
**Confidence:** [HIGH/MEDIUM/LOW]

## Executive Summary

[2-3 paragraph overview of research findings]

- What type of product this is and how experts build it
- The recommended approach based on research
- Key risks and how to mitigate them

## Key Findings

### Recommended Stack

[Summary from STACK.md — 1-2 paragraphs]

**Core technologies:**
- [Technology]: [purpose] — [why recommended]
- [Technology]: [purpose] — [why recommended]
- [Technology]: [purpose] — [why recommended]

### Expected Features

[Summary from FEATURES.md]

**Must have (table stakes):**
- [Feature] — users expect this
- [Feature] — users expect this

**Should have (competitive):**
- [Feature] — differentiator
- [Feature] — differentiator

**Defer (v2+):**
- [Feature] — not essential for launch

### Architecture Approach

[Summary from ARCHITECTURE.md — 1 paragraph]

**Major components:**
1. [Component] — [responsibility]
2. [Component] — [responsibility]
3. [Component] — [responsibility]

### Critical Pitfalls

[Top 3-5 from PITFALLS.md]

1. **[Pitfall]** — [how to avoid]
2. **[Pitfall]** — [how to avoid]
3. **[Pitfall]** — [how to avoid]

## Implications for Roadmap

Based on research, suggested phase structure:

### Phase 1: [Name]
**Rationale:** [why this comes first based on research]
**Delivers:** [what this phase produces]
**Addresses:** [features from FEATURES.md]
**Avoids:** [pitfall from PITFALLS.md]

### Phase 2: [Name]
**Rationale:** [why this order]
**Delivers:** [what this phase produces]
**Uses:** [stack elements from STACK.md]
**Implements:** [architecture component]

### Phase 3: [Name]
**Rationale:** [why this order]
**Delivers:** [what this phase produces]

[Continue for suggested phases...]

### Phase Ordering Rationale

- [Why this order based on dependencies discovered]
- [Why this grouping based on architecture patterns]
- [How this avoids pitfalls from research]

### Research Flags

Phases likely needing deeper research during planning:
- **Phase [X]:** [reason — e.g., "complex integration, needs API research"]
- **Phase [Y]:** [reason — e.g., "niche domain, sparse documentation"]

Phases with standard patterns (skip research-phase):
- **Phase [X]:** [reason — e.g., "well-documented, established patterns"]

## Confidence Assessment

| Area | Confidence | Notes |
|------|------------|-------|
| Stack | [HIGH/MEDIUM/LOW] | [reason] |
| Features | [HIGH/MEDIUM/LOW] | [reason] |
| Architecture | [HIGH/MEDIUM/LOW] | [reason] |
| Pitfalls | [HIGH/MEDIUM/LOW] | [reason] |

**Overall confidence:** [HIGH/MEDIUM/LOW]

### Gaps to Address

[Any areas where research was inconclusive or needs validation during implementation]

- [Gap]: [how to handle during planning/execution]
- [Gap]: [how to handle during planning/execution]

## Sources

### Primary (HIGH confidence)
- [Context7 library ID] — [topics]
- [Official docs URL] — [what was checked]

### Secondary (MEDIUM confidence)
- [Source] — [finding]

### Tertiary (LOW confidence)
- [Source] — [finding, needs validation]

---
*Research completed: [date]*
*Ready for roadmap: yes*
```

</template>

<guidelines>

**Executive Summary:**
- Write for someone who will only read this section
- Include the key recommendation and main risk
- 2-3 paragraphs maximum

**Key Findings:**
- Summarize, don't duplicate full documents
- Link to detailed docs (STACK.md, FEATURES.md, etc.)
- Focus on what matters for roadmap decisions

**Implications for Roadmap:**
- This is the most important section
- Directly informs roadmap creation
- Be explicit about phase suggestions and rationale
- Include research flags for each suggested phase

**Confidence Assessment:**
- Be honest about uncertainty
- Note gaps that need resolution during planning
- HIGH = verified with official sources
- MEDIUM = community consensus, multiple sources agree
- LOW = single source or inference

**Integration with roadmap creation:**
- This file is loaded as context during roadmap creation
- Phase suggestions here become starting point for roadmap
- Research flags inform phase planning

</guidelines>
552
get-shit-done/templates/research.md
Normal file
@@ -0,0 +1,552 @@
|
||||
# Research Template
|
||||
|
||||
Template for `.planning/phases/XX-name/{phase_num}-RESEARCH.md` - comprehensive ecosystem research before planning.
|
||||
|
||||
**Purpose:** Document what Claude needs to know to implement a phase well - not just "which library" but "how do experts build this."
|
||||
|
||||
---
|
||||
|
||||
## File Template
|
||||
|
||||
```markdown
|
||||
# Phase [X]: [Name] - Research
|
||||
|
||||
**Researched:** [date]
|
||||
**Domain:** [primary technology/problem domain]
|
||||
**Confidence:** [HIGH/MEDIUM/LOW]
|
||||
|
||||
<user_constraints>
|
||||
## User Constraints (from CONTEXT.md)
|
||||
|
||||
**CRITICAL:** If CONTEXT.md exists from /gsd:discuss-phase, copy locked decisions here verbatim. These MUST be honored by the planner.
|
||||
|
||||
### Locked Decisions
|
||||
[Copy from CONTEXT.md `## Decisions` section - these are NON-NEGOTIABLE]
|
||||
- [Decision 1]
|
||||
- [Decision 2]
|
||||
|
||||
### Claude's Discretion
|
||||
[Copy from CONTEXT.md - areas where researcher/planner can choose]
|
||||
- [Area 1]
|
||||
- [Area 2]
|
||||
|
||||
### Deferred Ideas (OUT OF SCOPE)
|
||||
[Copy from CONTEXT.md - do NOT research or plan these]
|
||||
- [Deferred 1]
|
||||
- [Deferred 2]
|
||||
|
||||
**If no CONTEXT.md exists:** Write "No user constraints - all decisions at Claude's discretion"
|
||||
</user_constraints>
|
||||
|
||||
<research_summary>
|
||||
## Summary
|
||||
|
||||
[2-3 paragraph executive summary]
|
||||
- What was researched
|
||||
- What the standard approach is
|
||||
- Key recommendations
|
||||
|
||||
**Primary recommendation:** [one-liner actionable guidance]
|
||||
</research_summary>
|
||||
|
||||
<standard_stack>
|
||||
## Standard Stack
|
||||
|
||||
The established libraries/tools for this domain:
|
||||
|
||||
### Core
|
||||
| Library | Version | Purpose | Why Standard |
|
||||
|---------|---------|---------|--------------|
|
||||
| [name] | [ver] | [what it does] | [why experts use it] |
|
||||
| [name] | [ver] | [what it does] | [why experts use it] |
|
||||
|
||||
### Supporting
|
||||
| Library | Version | Purpose | When to Use |
|
||||
|---------|---------|---------|-------------|
|
||||
| [name] | [ver] | [what it does] | [use case] |
|
||||
| [name] | [ver] | [what it does] | [use case] |
|
||||
|
||||
### Alternatives Considered
|
||||
| Instead of | Could Use | Tradeoff |
|
||||
|------------|-----------|----------|
|
||||
| [standard] | [alternative] | [when alternative makes sense] |
|
||||
|
||||
**Installation:**
|
||||
```bash
|
||||
npm install [packages]
|
||||
# or
|
||||
yarn add [packages]
|
||||
```
|
||||
</standard_stack>
|
||||
|
||||
<architecture_patterns>
|
||||
## Architecture Patterns
|
||||
|
||||
### Recommended Project Structure
|
||||
```
|
||||
src/
|
||||
├── [folder]/ # [purpose]
|
||||
├── [folder]/ # [purpose]
|
||||
└── [folder]/ # [purpose]
|
||||
```
|
||||
|
||||
### Pattern 1: [Pattern Name]
|
||||
**What:** [description]
|
||||
**When to use:** [conditions]
|
||||
**Example:**
|
||||
```typescript
|
||||
// [code example from Context7/official docs]
|
||||
```
|
||||
|
||||
### Pattern 2: [Pattern Name]
|
||||
**What:** [description]
|
||||
**When to use:** [conditions]
|
||||
**Example:**
|
||||
```typescript
|
||||
// [code example]
|
||||
```
|
||||
|
||||
### Anti-Patterns to Avoid
|
||||
- **[Anti-pattern]:** [why it's bad, what to do instead]
|
||||
- **[Anti-pattern]:** [why it's bad, what to do instead]
|
||||
</architecture_patterns>
|
||||
|
||||
<dont_hand_roll>
|
||||
## Don't Hand-Roll
|
||||
|
||||
Problems that look simple but have existing solutions:
|
||||
|
||||
| Problem | Don't Build | Use Instead | Why |
|
||||
|---------|-------------|-------------|-----|
|
||||
| [problem] | [what you'd build] | [library] | [edge cases, complexity] |
|
||||
| [problem] | [what you'd build] | [library] | [edge cases, complexity] |
|
||||
| [problem] | [what you'd build] | [library] | [edge cases, complexity] |
|
||||
|
||||
**Key insight:** [why custom solutions are worse in this domain]
|
||||
</dont_hand_roll>
|
||||
|
||||
<common_pitfalls>
|
||||
## Common Pitfalls
|
||||
|
||||
### Pitfall 1: [Name]
|
||||
**What goes wrong:** [description]
|
||||
**Why it happens:** [root cause]
|
||||
**How to avoid:** [prevention strategy]
|
||||
**Warning signs:** [how to detect early]
|
||||
|
||||
### Pitfall 2: [Name]
|
||||
**What goes wrong:** [description]
|
||||
**Why it happens:** [root cause]
|
||||
**How to avoid:** [prevention strategy]
|
||||
**Warning signs:** [how to detect early]
|
||||
|
||||
### Pitfall 3: [Name]
|
||||
**What goes wrong:** [description]
|
||||
**Why it happens:** [root cause]
|
||||
**How to avoid:** [prevention strategy]
|
||||
**Warning signs:** [how to detect early]
|
||||
</common_pitfalls>
|
||||
|
||||
<code_examples>
|
||||
## Code Examples
|
||||
|
||||
Verified patterns from official sources:
|
||||
|
||||
### [Common Operation 1]
|
||||
```typescript
|
||||
// Source: [Context7/official docs URL]
|
||||
[code]
|
||||
```
|
||||
|
||||
### [Common Operation 2]
|
||||
```typescript
|
||||
// Source: [Context7/official docs URL]
|
||||
[code]
|
||||
```
|
||||
|
||||
### [Common Operation 3]
|
||||
```typescript
|
||||
// Source: [Context7/official docs URL]
|
||||
[code]
|
||||
```
|
||||
</code_examples>
|
||||
|
||||
<sota_updates>
|
||||
## State of the Art (2024-2025)
|
||||
|
||||
What's changed recently:
|
||||
|
||||
| Old Approach | Current Approach | When Changed | Impact |
|
||||
|--------------|------------------|--------------|--------|
|
||||
| [old] | [new] | [date/version] | [what it means for implementation] |
|
||||
|
||||
**New tools/patterns to consider:**
|
||||
- [Tool/Pattern]: [what it enables, when to use]
|
||||
- [Tool/Pattern]: [what it enables, when to use]
|
||||
|
||||
**Deprecated/outdated:**
|
||||
- [Thing]: [why it's outdated, what replaced it]
|
||||
</sota_updates>
|
||||
|
||||
<open_questions>
|
||||
## Open Questions
|
||||
|
||||
Things that couldn't be fully resolved:
|
||||
|
||||
1. **[Question]**
|
||||
- What we know: [partial info]
|
||||
- What's unclear: [the gap]
|
||||
- Recommendation: [how to handle during planning/execution]
|
||||
|
||||
2. **[Question]**
|
||||
- What we know: [partial info]
|
||||
- What's unclear: [the gap]
|
||||
- Recommendation: [how to handle]
|
||||
</open_questions>
|
||||
|
||||
<sources>
|
||||
## Sources
|
||||
|
||||
### Primary (HIGH confidence)
|
||||
- [Context7 library ID] - [topics fetched]
|
||||
- [Official docs URL] - [what was checked]
|
||||
|
||||
### Secondary (MEDIUM confidence)
|
||||
- [WebSearch verified with official source] - [finding + verification]
|
||||
|
||||
### Tertiary (LOW confidence - needs validation)
|
||||
- [WebSearch only] - [finding, marked for validation during implementation]
|
||||
</sources>
|
||||
|
||||
<metadata>
|
||||
## Metadata
|
||||
|
||||
**Research scope:**
|
||||
- Core technology: [what]
|
||||
- Ecosystem: [libraries explored]
|
||||
- Patterns: [patterns researched]
|
||||
- Pitfalls: [areas checked]
|
||||
|
||||
**Confidence breakdown:**
|
||||
- Standard stack: [HIGH/MEDIUM/LOW] - [reason]
|
||||
- Architecture: [HIGH/MEDIUM/LOW] - [reason]
|
||||
- Pitfalls: [HIGH/MEDIUM/LOW] - [reason]
|
||||
- Code examples: [HIGH/MEDIUM/LOW] - [reason]
|
||||
|
||||
**Research date:** [date]
|
||||
**Valid until:** [estimate - 30 days for stable tech, 7 days for fast-moving]
|
||||
</metadata>
|
||||
|
||||
---
|
||||
|
||||
*Phase: XX-name*
|
||||
*Research completed: [date]*
|
||||
*Ready for planning: [yes/no]*
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Good Example
|
||||
|
||||
```markdown
|
||||
# Phase 3: 3D City Driving - Research
|
||||
|
||||
**Researched:** 2025-01-20
|
||||
**Domain:** Three.js 3D web game with driving mechanics
|
||||
**Confidence:** HIGH
|
||||
|
||||
<research_summary>
|
||||
## Summary
|
||||
|
||||
Researched the Three.js ecosystem for building a 3D city driving game. The standard approach uses Three.js with React Three Fiber for component architecture, Rapier for physics, and drei for common helpers.
|
||||
|
||||
Key finding: Don't hand-roll physics or collision detection. Rapier (via @react-three/rapier) handles vehicle physics, terrain collision, and city object interactions efficiently. Custom physics code leads to bugs and performance issues.
|
||||
|
||||
**Primary recommendation:** Use R3F + Rapier + drei stack. Start with vehicle controller from drei, add Rapier vehicle physics, build city with instanced meshes for performance.
|
||||
</research_summary>
|
||||
|
||||
<standard_stack>
|
||||
## Standard Stack
|
||||
|
||||
### Core
|
||||
| Library | Version | Purpose | Why Standard |
|
||||
|---------|---------|---------|--------------|
|
||||
| three | 0.160.0 | 3D rendering | The standard for web 3D |
|
||||
| @react-three/fiber | 8.15.0 | React renderer for Three.js | Declarative 3D, better DX |
|
||||
| @react-three/drei | 9.92.0 | Helpers and abstractions | Solves common problems |
|
||||
| @react-three/rapier | 1.2.1 | Physics engine bindings | Best physics for R3F |
|
||||
|
||||
### Supporting
|
||||
| Library | Version | Purpose | When to Use |
|
||||
|---------|---------|---------|-------------|
|
||||
| @react-three/postprocessing | 2.16.0 | Visual effects | Bloom, DOF, motion blur |
|
||||
| leva | 0.9.35 | Debug UI | Tweaking parameters |
|
||||
| zustand | 4.4.7 | State management | Game state, UI state |
|
||||
| use-sound | 4.0.1 | Audio | Engine sounds, ambient |
|
||||
|
||||
### Alternatives Considered
|
||||
| Instead of | Could Use | Tradeoff |
|
||||
|------------|-----------|----------|
|
||||
| Rapier | Cannon.js | Cannon simpler but less performant for vehicles |
|
||||
| R3F | Vanilla Three | Vanilla if no React, but R3F DX is much better |
|
||||
| drei | Custom helpers | drei is battle-tested, don't reinvent |
|
||||
|
||||
**Installation:**
|
||||
```bash
|
||||
npm install three @react-three/fiber @react-three/drei @react-three/rapier zustand
|
||||
```
|
||||
</standard_stack>

<architecture_patterns>
## Architecture Patterns

### Recommended Project Structure
```
src/
├── components/
│   ├── Vehicle/       # Player car with physics
│   ├── City/          # City generation and buildings
│   ├── Road/          # Road network
│   └── Environment/   # Sky, lighting, fog
├── hooks/
│   ├── useVehicleControls.ts
│   └── useGameState.ts
├── stores/
│   └── gameStore.ts   # Zustand state
└── utils/
    └── cityGenerator.ts  # Procedural generation helpers
```

### Pattern 1: Vehicle with Rapier Physics
**What:** Use RigidBody with vehicle-specific settings, not custom physics
**When to use:** Any ground vehicle
**Example:**
```typescript
// Source: @react-three/rapier docs
import { useRef } from 'react'
import { RigidBody, RapierRigidBody } from '@react-three/rapier'

function Vehicle() {
  const rigidBody = useRef<RapierRigidBody>(null)

  return (
    <RigidBody
      ref={rigidBody}
      type="dynamic"
      colliders="hull"
      mass={1500}
      linearDamping={0.5}
      angularDamping={0.5}
    >
      <mesh>
        <boxGeometry args={[2, 1, 4]} />
        <meshStandardMaterial />
      </mesh>
    </RigidBody>
  )
}
```

### Pattern 2: Instanced Meshes for City
**What:** Use InstancedMesh for repeated objects (buildings, trees, props)
**When to use:** >100 similar objects
**Example:**
```typescript
// Source: drei docs
import { Instances, Instance } from '@react-three/drei'

function Buildings({ positions }: { positions: [number, number, number][] }) {
  return (
    <Instances limit={1000}>
      <boxGeometry />
      <meshStandardMaterial />
      {positions.map((pos, i) => (
        <Instance key={i} position={pos} scale={[1, Math.random() * 5 + 1, 1]} />
      ))}
    </Instances>
  )
}
```

### Anti-Patterns to Avoid
- **Creating meshes in render loop:** Create once, update transforms only
- **Not using InstancedMesh:** Individual meshes for buildings kills performance
- **Custom physics math:** Rapier handles it better, every time
</architecture_patterns>

<dont_hand_roll>
## Don't Hand-Roll

| Problem | Don't Build | Use Instead | Why |
|---------|-------------|-------------|-----|
| Vehicle physics | Custom velocity/acceleration | Rapier RigidBody | Wheel friction, suspension, collisions are complex |
| Collision detection | Raycasting everything | Rapier colliders | Performance, edge cases, tunneling |
| Camera follow | Manual lerp | drei CameraControls or custom with useFrame | Smooth interpolation, bounds |
| City generation | Pure random placement | Grid-based with noise for variation | Random looks wrong, grid is predictable |
| LOD | Manual distance checks | drei `<Detailed>` | Handles transitions, hysteresis |

**Key insight:** 3D game development has 40+ years of solved problems. Rapier implements proper physics simulation. drei implements proper 3D helpers. Fighting these leads to bugs that look like "game feel" issues but are actually physics edge cases.
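The grid-with-noise approach from the table can be sketched as a pure helper (a minimal sketch; the names and parameters are illustrative, not from any library):

```typescript
// Hypothetical helper: grid-based building placement with per-cell jitter
// and varied heights, so the layout reads as organic rather than mechanical.
type BuildingSpec = { x: number; z: number; height: number }

function generateCityGrid(
  rows: number,
  cols: number,
  spacing: number,
  rand: () => number = Math.random,
): BuildingSpec[] {
  const buildings: BuildingSpec[] = []
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      // Jitter within a fraction of the cell so streets stay navigable
      const jitterX = (rand() - 0.5) * spacing * 0.3
      const jitterZ = (rand() - 0.5) * spacing * 0.3
      buildings.push({
        x: c * spacing + jitterX,
        z: r * spacing + jitterZ,
        height: 1 + rand() * 5, // same 1-6 range as the Instances example
      })
    }
  }
  return buildings
}
```

The output feeds directly into the `<Instance position={...}>` pattern shown earlier.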
</dont_hand_roll>

<common_pitfalls>
## Common Pitfalls

### Pitfall 1: Physics Tunneling
**What goes wrong:** Fast objects pass through walls
**Why it happens:** Default physics step too large for velocity
**How to avoid:** Use CCD (Continuous Collision Detection) in Rapier
**Warning signs:** Objects randomly appearing outside buildings
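A hedged sketch of enabling CCD, assuming the `ccd` prop on `@react-three/rapier`'s `RigidBody` (verify against the version you have installed):

```typescript
// Assumption: `ccd` enables continuous collision detection for this body.
<RigidBody type="dynamic" colliders="hull" ccd>
  <mesh>
    <boxGeometry args={[2, 1, 4]} />
    <meshStandardMaterial />
  </mesh>
</RigidBody>
```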

### Pitfall 2: Performance Death by Draw Calls
**What goes wrong:** Game stutters with many buildings
**Why it happens:** Each mesh = 1 draw call, hundreds of buildings = hundreds of calls
**How to avoid:** InstancedMesh for similar objects, merge static geometry
**Warning signs:** Low FPS despite a simple scene; profiler shows draw-call submission dominating (often CPU-bound)

### Pitfall 3: Vehicle "Floaty" Feel
**What goes wrong:** Car doesn't feel grounded
**Why it happens:** Missing proper wheel/suspension simulation
**How to avoid:** Use Rapier vehicle controller or tune mass/damping carefully
**Warning signs:** Car bounces oddly, doesn't grip corners
</common_pitfalls>

<code_examples>
## Code Examples

### Basic R3F + Rapier Setup
```typescript
// Source: @react-three/rapier getting started
import { Canvas } from '@react-three/fiber'
import { Physics } from '@react-three/rapier'

function Game() {
  return (
    <Canvas>
      <Physics gravity={[0, -9.81, 0]}>
        <Vehicle />
        <City />
        <Ground />
      </Physics>
    </Canvas>
  )
}
```

### Vehicle Controls Hook
```typescript
// Source: Community pattern, verified with drei docs
import { useFrame } from '@react-three/fiber'
import { useKeyboardControls } from '@react-three/drei'

function useVehicleControls(rigidBodyRef) {
  const [, getKeys] = useKeyboardControls()

  useFrame(() => {
    const { forward, back, left, right } = getKeys()
    const body = rigidBodyRef.current
    if (!body) return

    const impulse = { x: 0, y: 0, z: 0 }
    if (forward) impulse.z -= 10
    if (back) impulse.z += 5

    body.applyImpulse(impulse, true)

    if (left) body.applyTorqueImpulse({ x: 0, y: 2, z: 0 }, true)
    if (right) body.applyTorqueImpulse({ x: 0, y: -2, z: 0 }, true)
  })
}
```
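One refinement worth noting (our own assumption, not from the sources above): the fixed per-frame impulse in the hook makes acceleration depend on frame rate. Scaling by `useFrame`'s `delta` keeps input force consistent; the linear part can be isolated as a pure helper:

```typescript
// Illustrative helper (hypothetical name/parameters): compute a
// frame-rate-independent linear impulse from key state and frame delta.
type Keys = { forward: boolean; back: boolean }

function computeImpulse(keys: Keys, delta: number, accel = 600) {
  const impulse = { x: 0, y: 0, z: 0 }
  if (keys.forward) impulse.z -= accel * delta       // -10 per frame at 60fps
  if (keys.back) impulse.z += accel * 0.5 * delta    // +5 per frame at 60fps
  return impulse
}
```

Inside the hook this would be called as `body.applyImpulse(computeImpulse(getKeys(), delta), true)` from `useFrame((state, delta) => ...)`.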
</code_examples>

<sota_updates>
## State of the Art (2024-2025)

| Old Approach | Current Approach | When Changed | Impact |
|--------------|------------------|--------------|--------|
| cannon-es | Rapier | 2023 | Rapier is faster, better maintained |
| vanilla Three.js | React Three Fiber | 2020+ | R3F is now standard for React apps |
| Manual InstancedMesh | drei `<Instances>` | 2022 | Simpler API, handles updates |

**New tools/patterns to consider:**
- **WebGPU:** Coming but not production-ready for games yet (2025)
- **drei GLTF helpers:** `useGLTF.preload()` for loading screens

**Deprecated/outdated:**
- **cannon.js (original):** Use the cannon-es fork or, better, Rapier
- **Manual raycasting for physics:** Just use Rapier colliders
</sota_updates>

<sources>
## Sources

### Primary (HIGH confidence)
- /pmndrs/react-three-fiber - getting started, hooks, performance
- /pmndrs/drei - instances, controls, helpers
- /dimforge/rapier-js - physics setup, vehicle physics

### Secondary (MEDIUM confidence)
- Three.js discourse "city driving game" threads - verified patterns against docs
- R3F examples repository - verified code works

### Tertiary (LOW confidence - needs validation)
- None - all findings verified
</sources>

<metadata>
## Metadata

**Research scope:**
- Core technology: Three.js + React Three Fiber
- Ecosystem: Rapier, drei, zustand
- Patterns: Vehicle physics, instancing, city generation
- Pitfalls: Performance, physics, feel

**Confidence breakdown:**
- Standard stack: HIGH - verified with Context7, widely used
- Architecture: HIGH - from official examples
- Pitfalls: HIGH - documented in discourse, verified in docs
- Code examples: HIGH - from Context7/official sources

**Research date:** 2025-01-20
**Valid until:** 2025-02-20 (30 days - R3F ecosystem stable)
</metadata>

---

*Phase: 03-city-driving*
*Research completed: 2025-01-20*
*Ready for planning: yes*
```

---

## Guidelines

**When to create:**
- Before planning phases in niche/complex domains
- When Claude's training data is likely stale or sparse
- When "how do experts do this" matters more than "which library"

**Structure:**
- Use XML tags for section markers (matches GSD templates)
- Seven core sections: summary, standard_stack, architecture_patterns, dont_hand_roll, common_pitfalls, code_examples, sources
- All sections required (drives comprehensive research)

**Content quality:**
- Standard stack: Specific versions, not just names
- Architecture: Include actual code examples from authoritative sources
- Don't hand-roll: Be explicit about what problems to NOT solve yourself
- Pitfalls: Include warning signs, not just "don't do this"
- Sources: Mark confidence levels honestly

**Integration with planning:**
- RESEARCH.md loaded as @context reference in PLAN.md
- Standard stack informs library choices
- Don't hand-roll prevents custom solutions
- Pitfalls inform verification criteria
- Code examples can be referenced in task actions

**After creation:**
- File lives in phase directory: `.planning/phases/XX-name/{phase_num}-RESEARCH.md`
- Referenced during planning workflow
- plan-phase loads it automatically when present
54	get-shit-done/templates/retrospective.md	Normal file
@@ -0,0 +1,54 @@
# Project Retrospective

*A living document updated after each milestone. Lessons feed forward into future planning.*

## Milestone: v{version} — {name}

**Shipped:** {date}
**Phases:** {count} | **Plans:** {count} | **Sessions:** {count}

### What Was Built
- {Key deliverable 1}
- {Key deliverable 2}
- {Key deliverable 3}

### What Worked
- {Efficiency win or successful pattern}
- {What went smoothly}

### What Was Inefficient
- {Missed opportunity}
- {What took longer than expected}

### Patterns Established
- {New pattern or convention that should persist}

### Key Lessons
1. {Specific, actionable lesson}
2. {Another lesson}

### Cost Observations
- Model mix: {X}% opus, {Y}% sonnet, {Z}% haiku
- Sessions: {count}
- Notable: {efficiency observation}

---

## Cross-Milestone Trends

### Process Evolution

| Milestone | Sessions | Phases | Key Change |
|-----------|----------|--------|------------|
| v{X} | {N} | {M} | {What changed in process} |

### Cumulative Quality

| Milestone | Tests | Coverage | Zero-Dep Additions |
|-----------|-------|----------|-------------------|
| v{X} | {N} | {Y}% | {count} |

### Top Lessons (Verified Across Milestones)

1. {Lesson verified by multiple milestones}
2. {Another cross-validated lesson}
202	get-shit-done/templates/roadmap.md	Normal file
@@ -0,0 +1,202 @@
# Roadmap Template

Template for `.planning/ROADMAP.md`.

## Initial Roadmap (v1.0 Greenfield)

```markdown
# Roadmap: [Project Name]

## Overview

[One paragraph describing the journey from start to finish]

## Phases

**Phase Numbering:**
- Integer phases (1, 2, 3): Planned milestone work
- Decimal phases (2.1, 2.2): Urgent insertions (marked with INSERTED)

Decimal phases appear between their surrounding integers in numeric order.

- [ ] **Phase 1: [Name]** - [One-line description]
- [ ] **Phase 2: [Name]** - [One-line description]
- [ ] **Phase 3: [Name]** - [One-line description]
- [ ] **Phase 4: [Name]** - [One-line description]

## Phase Details

### Phase 1: [Name]
**Goal**: [What this phase delivers]
**Depends on**: Nothing (first phase)
**Requirements**: [REQ-01, REQ-02, REQ-03] <!-- brackets optional, parser handles both formats -->
**Success Criteria** (what must be TRUE):
1. [Observable behavior from user perspective]
2. [Observable behavior from user perspective]
3. [Observable behavior from user perspective]
**Plans**: [Number of plans, e.g., "3 plans" or "TBD"]

Plans:
- [ ] 01-01: [Brief description of first plan]
- [ ] 01-02: [Brief description of second plan]
- [ ] 01-03: [Brief description of third plan]

### Phase 2: [Name]
**Goal**: [What this phase delivers]
**Depends on**: Phase 1
**Requirements**: [REQ-04, REQ-05]
**Success Criteria** (what must be TRUE):
1. [Observable behavior from user perspective]
2. [Observable behavior from user perspective]
**Plans**: [Number of plans]

Plans:
- [ ] 02-01: [Brief description]
- [ ] 02-02: [Brief description]

### Phase 2.1: Critical Fix (INSERTED)
**Goal**: [Urgent work inserted between phases]
**Depends on**: Phase 2
**Success Criteria** (what must be TRUE):
1. [What the fix achieves]
**Plans**: 1 plan

Plans:
- [ ] 02.1-01: [Description]

### Phase 3: [Name]
**Goal**: [What this phase delivers]
**Depends on**: Phase 2
**Requirements**: [REQ-06, REQ-07, REQ-08]
**Success Criteria** (what must be TRUE):
1. [Observable behavior from user perspective]
2. [Observable behavior from user perspective]
3. [Observable behavior from user perspective]
**Plans**: [Number of plans]

Plans:
- [ ] 03-01: [Brief description]
- [ ] 03-02: [Brief description]

### Phase 4: [Name]
**Goal**: [What this phase delivers]
**Depends on**: Phase 3
**Requirements**: [REQ-09, REQ-10]
**Success Criteria** (what must be TRUE):
1. [Observable behavior from user perspective]
2. [Observable behavior from user perspective]
**Plans**: [Number of plans]

Plans:
- [ ] 04-01: [Brief description]

## Progress

**Execution Order:**
Phases execute in numeric order: 2 → 2.1 → 2.2 → 3 → 3.1 → 4

| Phase | Plans Complete | Status | Completed |
|-------|----------------|--------|-----------|
| 1. [Name] | 0/3 | Not started | - |
| 2. [Name] | 0/2 | Not started | - |
| 3. [Name] | 0/2 | Not started | - |
| 4. [Name] | 0/1 | Not started | - |
```
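The numeric ordering of decimal insertion phases in the template above can be sketched as a plain numeric sort (an illustrative helper, not part of gsd-tools):

```typescript
// Hypothetical helper: decimal insertion phases (2.1, 2.2) sort between
// their surrounding integer phases with a plain numeric comparison.
function sortPhases(phases: string[]): string[] {
  return [...phases].sort((a, b) => parseFloat(a) - parseFloat(b))
}
```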

<guidelines>
**Initial planning (v1.0):**
- Phase count depends on granularity setting (coarse: 3-5, standard: 5-8, fine: 8-12)
- Each phase delivers something coherent
- Phases can have 1+ plans (split if >3 tasks or multiple subsystems)
- Plans use naming: {phase}-{plan}-PLAN.md (e.g., 01-02-PLAN.md)
- No time estimates (this isn't enterprise PM)
- Progress table updated by execute workflow
- Plan count can be "TBD" initially, refined during planning

**Success criteria:**
- 2-5 observable behaviors per phase (from user's perspective)
- Cross-checked against requirements during roadmap creation
- Flow downstream to `must_haves` in plan-phase
- Verified by verify-phase after execution
- Format: "User can [action]" or "[Thing] works/exists"

**After milestones ship:**
- Collapse completed milestones in `<details>` tags
- Add new milestone sections for upcoming work
- Keep continuous phase numbering (never restart at 01)
</guidelines>

<status_values>
- `Not started` - Haven't begun
- `In progress` - Currently working
- `Complete` - Done (add completion date)
- `Deferred` - Pushed to later (with reason)
</status_values>

## Milestone-Grouped Roadmap (After v1.0 Ships)

After completing the first milestone, reorganize with milestone groupings:

```markdown
# Roadmap: [Project Name]

## Milestones

- ✅ **v1.0 MVP** - Phases 1-4 (shipped YYYY-MM-DD)
- 🚧 **v1.1 [Name]** - Phases 5-6 (in progress)
- 📋 **v2.0 [Name]** - Phases 7-10 (planned)

## Phases

<details>
<summary>✅ v1.0 MVP (Phases 1-4) - SHIPPED YYYY-MM-DD</summary>

### Phase 1: [Name]
**Goal**: [What this phase delivers]
**Plans**: 3 plans

Plans:
- [x] 01-01: [Brief description]
- [x] 01-02: [Brief description]
- [x] 01-03: [Brief description]

[... remaining v1.0 phases ...]

</details>

### 🚧 v1.1 [Name] (In Progress)

**Milestone Goal:** [What v1.1 delivers]

#### Phase 5: [Name]
**Goal**: [What this phase delivers]
**Depends on**: Phase 4
**Plans**: 2 plans

Plans:
- [ ] 05-01: [Brief description]
- [ ] 05-02: [Brief description]

[... remaining v1.1 phases ...]

### 📋 v2.0 [Name] (Planned)

**Milestone Goal:** [What v2.0 delivers]

[... v2.0 phases ...]

## Progress

| Phase | Milestone | Plans Complete | Status | Completed |
|-------|-----------|----------------|--------|-----------|
| 1. Foundation | v1.0 | 3/3 | Complete | YYYY-MM-DD |
| 2. Features | v1.0 | 2/2 | Complete | YYYY-MM-DD |
| 5. Security | v1.1 | 0/2 | Not started | - |
```

**Notes:**
- Milestone emoji: ✅ shipped, 🚧 in progress, 📋 planned
- Completed milestones collapsed in `<details>` for readability
- Current/future milestones expanded
- Continuous phase numbering (01-99)
- Progress table includes milestone column
176	get-shit-done/templates/state.md	Normal file
@@ -0,0 +1,176 @@
# State Template

Template for `.planning/STATE.md` — the project's living memory.

---

## File Template

```markdown
# Project State

## Project Reference

See: .planning/PROJECT.md (updated [date])

**Core value:** [One-liner from PROJECT.md Core Value section]
**Current focus:** [Current phase name]

## Current Position

Phase: [X] of [Y] ([Phase name])
Plan: [A] of [B] in current phase
Status: [Ready to plan / Planning / Ready to execute / In progress / Phase complete]
Last activity: [YYYY-MM-DD] — [What happened]

Progress: [░░░░░░░░░░] 0%

## Performance Metrics

**Velocity:**
- Total plans completed: [N]
- Average duration: [X] min
- Total execution time: [X.X] hours

**By Phase:**

| Phase | Plans | Total | Avg/Plan |
|-------|-------|-------|----------|
| - | - | - | - |

**Recent Trend:**
- Last 5 plans: [durations]
- Trend: [Improving / Stable / Degrading]

*Updated after each plan completion*

## Accumulated Context

### Decisions

Decisions are logged in PROJECT.md Key Decisions table.
Recent decisions affecting current work:

- [Phase X]: [Decision summary]
- [Phase Y]: [Decision summary]

### Pending Todos

[From .planning/todos/pending/ — ideas captured during sessions]

None yet.

### Blockers/Concerns

[Issues that affect future work]

None yet.

## Session Continuity

Last session: [YYYY-MM-DD HH:MM]
Stopped at: [Description of last completed action]
Resume file: [Path to .continue-here*.md if exists, otherwise "None"]
```

<purpose>

STATE.md is the project's short-term memory spanning all phases and sessions.

**Problem it solves:** Information is captured in summaries, issues, and decisions but not systematically consumed. Sessions start without context.

**Solution:** A single, small file that's:
- Read first in every workflow
- Updated after every significant action
- Contains a digest of accumulated context
- Enables instant session restoration

</purpose>

<lifecycle>

**Creation:** After ROADMAP.md is created (during init)
- Reference PROJECT.md (read it for current context)
- Initialize empty accumulated context sections
- Set position to "Phase 1 ready to plan"

**Reading:** First step of every workflow
- progress: Present status to user
- plan: Inform planning decisions
- execute: Know current position
- transition: Know what's complete

**Writing:** After every significant action
- execute: After SUMMARY.md created
  - Update position (phase, plan, status)
  - Note new decisions (detail in PROJECT.md)
  - Add blockers/concerns
- transition: After phase marked complete
  - Update progress bar
  - Clear resolved blockers
  - Refresh Project Reference date

</lifecycle>

<sections>

### Project Reference
Points to PROJECT.md for full context. Includes:
- Core value (the ONE thing that matters)
- Current focus (which phase)
- Last update date (triggers re-read if stale)

Claude reads PROJECT.md directly for requirements, constraints, and decisions.

### Current Position
Where we are right now:
- Phase X of Y — which phase
- Plan A of B — which plan within phase
- Status — current state
- Last activity — what happened most recently
- Progress bar — visual indicator of overall completion

Progress calculation: (completed plans) / (total plans across all phases) × 100%
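That calculation and the ten-segment bar might look like this (an illustrative sketch, not part of gsd-tools):

```typescript
// Hypothetical helper: render the STATE.md progress line from
// completed/total plan counts, using the ten-segment bar format above.
function renderProgress(completed: number, total: number): string {
  const pct = total > 0 ? Math.round((completed / total) * 100) : 0
  const filled = Math.round(pct / 10)
  const bar = "█".repeat(filled) + "░".repeat(10 - filled)
  return `Progress: [${bar}] ${pct}%`
}
```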

### Performance Metrics
Track velocity to understand execution patterns:
- Total plans completed
- Average duration per plan
- Per-phase breakdown
- Recent trend (improving/stable/degrading)

Updated after each plan completion.

### Accumulated Context

**Decisions:** Reference to PROJECT.md Key Decisions table, plus recent decisions summary for quick access. Full decision log lives in PROJECT.md.

**Pending Todos:** Ideas captured via /gsd:add-todo
- Count of pending todos
- Reference to .planning/todos/pending/
- Brief list if few, count if many (e.g., "5 pending todos — see /gsd:check-todos")

**Blockers/Concerns:** From "Next Phase Readiness" sections
- Issues that affect future work
- Prefix with originating phase
- Cleared when addressed

### Session Continuity
Enables instant resumption:
- When was last session
- What was last completed
- Is there a .continue-here file to resume from

</sections>

<size_constraint>

Keep STATE.md under 100 lines.

It's a DIGEST, not an archive. If accumulated context grows too large:
- Keep only 3-5 recent decisions in summary (full log in PROJECT.md)
- Keep only active blockers, remove resolved ones

The goal is "read once, know where we are" — if it's too long, that fails.

</size_constraint>
59	get-shit-done/templates/summary-complex.md	Normal file
@@ -0,0 +1,59 @@
---
phase: XX-name
plan: YY
subsystem: [primary category]
tags: [searchable tech]
requires:
  - phase: [prior phase]
    provides: [what that phase built]
provides:
  - [bullet list of what was built/delivered]
affects: [list of phase names or keywords]
tech-stack:
  added: [libraries/tools]
  patterns: [architectural/code patterns]
key-files:
  created: [important files created]
  modified: [important files modified]
key-decisions:
  - "Decision 1"
patterns-established:
  - "Pattern 1: description"
duration: Xmin
completed: YYYY-MM-DD
---

# Phase [X]: [Name] Summary (Complex)

**[Substantive one-liner describing outcome]**

## Performance
- **Duration:** [time]
- **Tasks:** [count completed]
- **Files modified:** [count]

## Accomplishments
- [Key outcome 1]
- [Key outcome 2]

## Task Commits
1. **Task 1: [task name]** - `hash`
2. **Task 2: [task name]** - `hash`
3. **Task 3: [task name]** - `hash`

## Files Created/Modified
- `path/to/file.ts` - What it does
- `path/to/another.ts` - What it does

## Decisions Made
[Key decisions with brief rationale]

## Deviations from Plan (Auto-fixed)
[Detailed auto-fix records per GSD deviation rules]

## Issues Encountered
[Problems during planned work and resolutions]

## Next Phase Readiness
[What's ready for next phase]
[Blockers or concerns]
41	get-shit-done/templates/summary-minimal.md	Normal file
@@ -0,0 +1,41 @@
---
phase: XX-name
plan: YY
subsystem: [primary category]
tags: [searchable tech]
provides:
  - [bullet list of what was built/delivered]
affects: [list of phase names or keywords]
tech-stack:
  added: [libraries/tools]
  patterns: [architectural/code patterns]
key-files:
  created: [important files created]
  modified: [important files modified]
key-decisions: []
duration: Xmin
completed: YYYY-MM-DD
---

# Phase [X]: [Name] Summary (Minimal)

**[Substantive one-liner describing outcome]**

## Performance
- **Duration:** [time]
- **Tasks:** [count]
- **Files modified:** [count]

## Accomplishments
- [Most important outcome]
- [Second key accomplishment]

## Task Commits
1. **Task 1: [task name]** - `hash`
2. **Task 2: [task name]** - `hash`

## Files Created/Modified
- `path/to/file.ts` - What it does

## Next Phase Readiness
[Ready for next phase]
48	get-shit-done/templates/summary-standard.md	Normal file
@@ -0,0 +1,48 @@
---
phase: XX-name
plan: YY
subsystem: [primary category]
tags: [searchable tech]
provides:
  - [bullet list of what was built/delivered]
affects: [list of phase names or keywords]
tech-stack:
  added: [libraries/tools]
  patterns: [architectural/code patterns]
key-files:
  created: [important files created]
  modified: [important files modified]
key-decisions:
  - "Decision 1"
duration: Xmin
completed: YYYY-MM-DD
---

# Phase [X]: [Name] Summary

**[Substantive one-liner describing outcome]**

## Performance
- **Duration:** [time]
- **Tasks:** [count completed]
- **Files modified:** [count]

## Accomplishments
- [Key outcome 1]
- [Key outcome 2]

## Task Commits
1. **Task 1: [task name]** - `hash`
2. **Task 2: [task name]** - `hash`
3. **Task 3: [task name]** - `hash`

## Files Created/Modified
- `path/to/file.ts` - What it does
- `path/to/another.ts` - What it does

## Decisions & Deviations
[Key decisions or "None - followed plan as specified"]
[Minor deviations if any, or "None"]

## Next Phase Readiness
[What's ready for next phase]
248	get-shit-done/templates/summary.md	Normal file
@@ -0,0 +1,248 @@
# Summary Template

Template for `.planning/phases/XX-name/{phase}-{plan}-SUMMARY.md` - phase completion documentation.

---

## File Template

```markdown
---
phase: XX-name
plan: YY
subsystem: [primary category: auth, payments, ui, api, database, infra, testing, etc.]
tags: [searchable tech: jwt, stripe, react, postgres, prisma]

# Dependency graph
requires:
  - phase: [prior phase this depends on]
    provides: [what that phase built that this uses]
provides:
  - [bullet list of what this phase built/delivered]
affects: [list of phase names or keywords that will need this context]

# Tech tracking
tech-stack:
  added: [libraries/tools added in this phase]
  patterns: [architectural/code patterns established]

key-files:
  created: [important files created]
  modified: [important files modified]

key-decisions:
  - "Decision 1"
  - "Decision 2"

patterns-established:
  - "Pattern 1: description"
  - "Pattern 2: description"

requirements-completed: [] # REQUIRED — Copy ALL requirement IDs from this plan's `requirements` frontmatter field.

# Metrics
duration: Xmin
completed: YYYY-MM-DD
---

# Phase [X]: [Name] Summary

**[Substantive one-liner describing outcome - NOT "phase complete" or "implementation finished"]**

## Performance

- **Duration:** [time] (e.g., 23 min, 1h 15m)
- **Started:** [ISO timestamp]
- **Completed:** [ISO timestamp]
- **Tasks:** [count completed]
- **Files modified:** [count]

## Accomplishments
- [Most important outcome]
- [Second key accomplishment]
- [Third if applicable]

## Task Commits

Each task was committed atomically:

1. **Task 1: [task name]** - `abc123f` (feat/fix/test/refactor)
2. **Task 2: [task name]** - `def456g` (feat/fix/test/refactor)
3. **Task 3: [task name]** - `hij789k` (feat/fix/test/refactor)

**Plan metadata:** `lmn012o` (docs: complete plan)

_Note: TDD tasks may have multiple commits (test → feat → refactor)_

## Files Created/Modified
- `path/to/file.ts` - What it does
- `path/to/another.ts` - What it does

## Decisions Made
[Key decisions with brief rationale, or "None - followed plan as specified"]

## Deviations from Plan

[If no deviations: "None - plan executed exactly as written"]

[If deviations occurred:]

### Auto-fixed Issues

**1. [Rule X - Category] Brief description**
- **Found during:** Task [N] ([task name])
- **Issue:** [What was wrong]
- **Fix:** [What was done]
- **Files modified:** [file paths]
- **Verification:** [How it was verified]
- **Committed in:** [hash] (part of task commit)

[... repeat for each auto-fix ...]

---

**Total deviations:** [N] auto-fixed ([breakdown by rule])
**Impact on plan:** [Brief assessment - e.g., "All auto-fixes necessary for correctness/security. No scope creep."]

## Issues Encountered
[Problems and how they were resolved, or "None"]

[Note: "Deviations from Plan" documents unplanned work that was handled automatically via deviation rules. "Issues Encountered" documents problems during planned work that required problem-solving.]

## User Setup Required

[If USER-SETUP.md was generated:]
**External services require manual configuration.** See [{phase}-USER-SETUP.md](./{phase}-USER-SETUP.md) for:
- Environment variables to add
- Dashboard configuration steps
- Verification commands

[If no USER-SETUP.md:]
None - no external service configuration required.

## Next Phase Readiness
[What's ready for next phase]
[Any blockers or concerns]

---
*Phase: XX-name*
*Completed: [date]*
```

<frontmatter_guidance>
**Purpose:** Enable automatic context assembly via dependency graph. Frontmatter makes summary metadata machine-readable so plan-phase can scan all summaries quickly and select relevant ones based on dependencies.

**Fast scanning:** Frontmatter is the first ~25 lines, cheap to scan across all summaries without reading full content.
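The fast-scan idea can be sketched as a small helper that pulls only the frontmatter block without reading the body (a hypothetical illustration, not part of gsd-tools):

```typescript
// Illustrative: extract the YAML frontmatter between the leading `---`
// fences; returns null when no frontmatter is present.
function extractFrontmatter(markdown: string): string | null {
  const lines = markdown.split("\n")
  if (lines[0] !== "---") return null
  const end = lines.indexOf("---", 1)
  if (end === -1) return null
  return lines.slice(1, end).join("\n")
}
```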

**Dependency graph:** `requires`/`provides`/`affects` create explicit links between phases, enabling transitive closure for context selection.
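The transitive-closure selection can be sketched as a small graph walk (illustrative only; the field names match the frontmatter above, but the function itself is hypothetical):

```typescript
// Hypothetical sketch: starting from the phases a plan requires, walk
// `requires` edges to collect every upstream summary for context.
type SummaryMeta = { phase: string; requires: string[] }

function selectContext(start: string[], all: SummaryMeta[]): Set<string> {
  const byPhase = new Map(all.map((s) => [s.phase, s]))
  const selected = new Set<string>()
  const queue = [...start]
  while (queue.length > 0) {
    const phase = queue.pop()!
    if (selected.has(phase)) continue
    selected.add(phase)
    for (const dep of byPhase.get(phase)?.requires ?? []) queue.push(dep)
  }
  return selected
}
```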
|
||||
|
||||
**Subsystem:** Primary categorization (auth, payments, ui, api, database, infra, testing) for detecting related phases.
|
||||
|
||||
**Tags:** Searchable technical keywords (libraries, frameworks, tools) for tech stack awareness.
|
||||
|
||||
**Key-files:** Important files for @context references in PLAN.md.
|
||||
|
||||
**Patterns:** Established conventions future phases should maintain.
|
||||
|
||||
**Population:** Frontmatter is populated during summary creation in execute-plan.md. See `<step name="create_summary">` for field-by-field guidance.
|
||||
</frontmatter_guidance>

<one_liner_rules>
The one-liner MUST be substantive:

**Good:**
- "JWT auth with refresh rotation using jose library"
- "Prisma schema with User, Session, and Product models"
- "Dashboard with real-time metrics via Server-Sent Events"

**Bad:**
- "Phase complete"
- "Authentication implemented"
- "Foundation finished"
- "All tasks done"

The one-liner should tell someone what actually shipped.
</one_liner_rules>

<example>
```markdown
# Phase 1: Foundation Summary

**JWT auth with refresh rotation using jose library, Prisma User model, and protected API middleware**

## Performance

- **Duration:** 28 min
- **Started:** 2025-01-15T14:22:10Z
- **Completed:** 2025-01-15T14:50:33Z
- **Tasks:** 5
- **Files modified:** 8

## Accomplishments
- User model with email/password auth
- Login/logout endpoints with httpOnly JWT cookies
- Protected route middleware checking token validity
- Refresh token rotation on each request

## Files Created/Modified
- `prisma/schema.prisma` - User and Session models
- `src/app/api/auth/login/route.ts` - Login endpoint
- `src/app/api/auth/logout/route.ts` - Logout endpoint
- `src/middleware.ts` - Protected route checks
- `src/lib/auth.ts` - JWT helpers using jose

## Decisions Made
- Used jose instead of jsonwebtoken (ESM-native, Edge-compatible)
- 15-min access tokens with 7-day refresh tokens
- Storing refresh tokens in database for revocation capability

## Deviations from Plan

### Auto-fixed Issues

**1. [Rule 2 - Missing Critical] Added password hashing with bcrypt**
- **Found during:** Task 2 (Login endpoint implementation)
- **Issue:** Plan didn't specify password hashing - storing plaintext would be critical security flaw
- **Fix:** Added bcrypt hashing on registration, comparison on login with salt rounds 10
- **Files modified:** src/app/api/auth/login/route.ts, src/lib/auth.ts
- **Verification:** Password hash test passes, plaintext never stored
- **Committed in:** abc123f (Task 2 commit)

**2. [Rule 3 - Blocking] Installed missing jose dependency**
- **Found during:** Task 4 (JWT token generation)
- **Issue:** jose package not in package.json, import failing
- **Fix:** Ran `npm install jose`
- **Files modified:** package.json, package-lock.json
- **Verification:** Import succeeds, build passes
- **Committed in:** def456g (Task 4 commit)

---

**Total deviations:** 2 auto-fixed (1 missing critical, 1 blocking)
**Impact on plan:** Both auto-fixes essential for security and functionality. No scope creep.

## Issues Encountered
- jsonwebtoken CommonJS import failed in Edge runtime - switched to jose (planned library change, worked as expected)

## Next Phase Readiness
- Auth foundation complete, ready for feature development
- User registration endpoint needed before public launch

---
*Phase: 01-foundation*
*Completed: 2025-01-15*
```
</example>

<guidelines>
**Frontmatter:** MANDATORY - complete all fields. Enables automatic context assembly for future planning.

**One-liner:** Must be substantive. "JWT auth with refresh rotation using jose library" not "Authentication implemented".

**Decisions section:**
- Key decisions made during execution with rationale
- Extracted to STATE.md accumulated context
- Use "None - followed plan as specified" if no deviations

**After creation:** STATE.md updated with position, decisions, issues.
</guidelines>
146
get-shit-done/templates/user-profile.md
Normal file
@@ -0,0 +1,146 @@
# Developer Profile

> This profile was generated from session analysis. It contains behavioral directives
> for Claude to follow when working with this developer. HIGH confidence dimensions
> should be acted on directly. LOW confidence dimensions should be approached with
> hedging ("Based on your profile, I'll try X -- let me know if that's off").

**Generated:** {{generated_at}}
**Source:** {{data_source}}
**Projects Analyzed:** {{projects_list}}
**Messages Analyzed:** {{message_count}}

---

## Quick Reference

{{summary_instructions}}

---

## Communication Style

**Rating:** {{communication_style.rating}} | **Confidence:** {{communication_style.confidence}}

**Directive:** {{communication_style.claude_instruction}}

{{communication_style.summary}}

**Evidence:**

{{communication_style.evidence}}

---

## Decision Speed

**Rating:** {{decision_speed.rating}} | **Confidence:** {{decision_speed.confidence}}

**Directive:** {{decision_speed.claude_instruction}}

{{decision_speed.summary}}

**Evidence:**

{{decision_speed.evidence}}

---

## Explanation Depth

**Rating:** {{explanation_depth.rating}} | **Confidence:** {{explanation_depth.confidence}}

**Directive:** {{explanation_depth.claude_instruction}}

{{explanation_depth.summary}}

**Evidence:**

{{explanation_depth.evidence}}

---

## Debugging Approach

**Rating:** {{debugging_approach.rating}} | **Confidence:** {{debugging_approach.confidence}}

**Directive:** {{debugging_approach.claude_instruction}}

{{debugging_approach.summary}}

**Evidence:**

{{debugging_approach.evidence}}

---

## UX Philosophy

**Rating:** {{ux_philosophy.rating}} | **Confidence:** {{ux_philosophy.confidence}}

**Directive:** {{ux_philosophy.claude_instruction}}

{{ux_philosophy.summary}}

**Evidence:**

{{ux_philosophy.evidence}}

---

## Vendor Philosophy

**Rating:** {{vendor_philosophy.rating}} | **Confidence:** {{vendor_philosophy.confidence}}

**Directive:** {{vendor_philosophy.claude_instruction}}

{{vendor_philosophy.summary}}

**Evidence:**

{{vendor_philosophy.evidence}}

---

## Frustration Triggers

**Rating:** {{frustration_triggers.rating}} | **Confidence:** {{frustration_triggers.confidence}}

**Directive:** {{frustration_triggers.claude_instruction}}

{{frustration_triggers.summary}}

**Evidence:**

{{frustration_triggers.evidence}}

---

## Learning Style

**Rating:** {{learning_style.rating}} | **Confidence:** {{learning_style.confidence}}

**Directive:** {{learning_style.claude_instruction}}

{{learning_style.summary}}

**Evidence:**

{{learning_style.evidence}}

---

## Profile Metadata

| Field | Value |
|-------|-------|
| Profile Version | {{profile_version}} |
| Generated | {{generated_at}} |
| Source | {{data_source}} |
| Projects | {{projects_count}} |
| Messages | {{message_count}} |
| Dimensions Scored | {{dimensions_scored}}/8 |
| High Confidence | {{high_confidence_count}} |
| Medium Confidence | {{medium_confidence_count}} |
| Low Confidence | {{low_confidence_count}} |
| Sensitive Content Excluded | {{sensitive_excluded_summary}} |
311
get-shit-done/templates/user-setup.md
Normal file
@@ -0,0 +1,311 @@
# User Setup Template

Template for `.planning/phases/XX-name/{phase}-USER-SETUP.md` - human-required configuration that Claude cannot automate.

**Purpose:** Document setup tasks that literally require human action - account creation, dashboard configuration, secret retrieval. Claude automates everything possible; this file captures only what remains.

---

## File Template

```markdown
# Phase {X}: User Setup Required

**Generated:** [YYYY-MM-DD]
**Phase:** {phase-name}
**Status:** Incomplete

Complete these items for the integration to function. Claude automated everything possible; these items require human access to external dashboards/accounts.

## Environment Variables

| Status | Variable | Source | Add to |
|--------|----------|--------|--------|
| [ ] | `ENV_VAR_NAME` | [Service Dashboard → Path → To → Value] | `.env.local` |
| [ ] | `ANOTHER_VAR` | [Service Dashboard → Path → To → Value] | `.env.local` |

## Account Setup

[Only if new account creation is required]

- [ ] **Create [Service] account**
  - URL: [signup URL]
  - Skip if: Already have account

## Dashboard Configuration

[Only if dashboard configuration is required]

- [ ] **[Configuration task]**
  - Location: [Service Dashboard → Path → To → Setting]
  - Set to: [Required value or configuration]
  - Notes: [Any important details]

## Verification

After completing setup, verify with:

```bash
# [Verification commands]
```

Expected results:
- [What success looks like]

---

**Once all items complete:** Mark status as "Complete" at top of file.
```

---

## When to Generate

Generate `{phase}-USER-SETUP.md` when plan frontmatter contains `user_setup` field.

**Trigger:** `user_setup` exists in PLAN.md frontmatter and has items.

**Location:** Same directory as PLAN.md and SUMMARY.md.

**Timing:** Generated during execute-plan.md after tasks complete, before SUMMARY.md creation.

---

## Frontmatter Schema

In PLAN.md, `user_setup` declares human-required configuration:

```yaml
user_setup:
  - service: stripe
    why: "Payment processing requires API keys"
    env_vars:
      - name: STRIPE_SECRET_KEY
        source: "Stripe Dashboard → Developers → API keys → Secret key"
      - name: STRIPE_WEBHOOK_SECRET
        source: "Stripe Dashboard → Developers → Webhooks → Signing secret"
    dashboard_config:
      - task: "Create webhook endpoint"
        location: "Stripe Dashboard → Developers → Webhooks → Add endpoint"
        details: "URL: https://[your-domain]/api/webhooks/stripe, Events: checkout.session.completed, customer.subscription.*"
    local_dev:
      - "Run: stripe listen --forward-to localhost:3000/api/webhooks/stripe"
      - "Use the webhook secret from CLI output for local testing"
```

---

## The Automation-First Rule

**USER-SETUP.md contains ONLY what Claude literally cannot do.**

| Claude CAN Do (not in USER-SETUP) | Claude CANNOT Do (→ USER-SETUP) |
|-----------------------------------|--------------------------------|
| `npm install stripe` | Create Stripe account |
| Write webhook handler code | Get API keys from dashboard |
| Create `.env.local` file structure | Copy actual secret values |
| Run `stripe listen` | Authenticate Stripe CLI (browser OAuth) |
| Configure package.json | Access external service dashboards |
| Write any code | Retrieve secrets from third-party systems |

**The test:** "Does this require a human in a browser, accessing an account Claude doesn't have credentials for?"
- Yes → USER-SETUP.md
- No → Claude does it automatically

---

## Service-Specific Examples

<stripe_example>
```markdown
# Phase 10: User Setup Required

**Generated:** 2025-01-14
**Phase:** 10-monetization
**Status:** Incomplete

Complete these items for Stripe integration to function.

## Environment Variables

| Status | Variable | Source | Add to |
|--------|----------|--------|--------|
| [ ] | `STRIPE_SECRET_KEY` | Stripe Dashboard → Developers → API keys → Secret key | `.env.local` |
| [ ] | `NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY` | Stripe Dashboard → Developers → API keys → Publishable key | `.env.local` |
| [ ] | `STRIPE_WEBHOOK_SECRET` | Stripe Dashboard → Developers → Webhooks → [endpoint] → Signing secret | `.env.local` |

## Account Setup

- [ ] **Create Stripe account** (if needed)
  - URL: https://dashboard.stripe.com/register
  - Skip if: Already have Stripe account

## Dashboard Configuration

- [ ] **Create webhook endpoint**
  - Location: Stripe Dashboard → Developers → Webhooks → Add endpoint
  - Endpoint URL: `https://[your-domain]/api/webhooks/stripe`
  - Events to send:
    - `checkout.session.completed`
    - `customer.subscription.created`
    - `customer.subscription.updated`
    - `customer.subscription.deleted`

- [ ] **Create products and prices** (if using subscription tiers)
  - Location: Stripe Dashboard → Products → Add product
  - Create each subscription tier
  - Copy Price IDs to:
    - `STRIPE_STARTER_PRICE_ID`
    - `STRIPE_PRO_PRICE_ID`

## Local Development

For local webhook testing:

```bash
stripe listen --forward-to localhost:3000/api/webhooks/stripe
```

Use the webhook signing secret from CLI output (starts with `whsec_`).

## Verification

After completing setup:

```bash
# Check env vars are set
grep STRIPE .env.local

# Verify build passes
npm run build

# Test webhook endpoint (should return 400 bad signature, not 500 crash)
curl -X POST http://localhost:3000/api/webhooks/stripe \
  -H "Content-Type: application/json" \
  -d '{}'
```

Expected: Build passes, webhook returns 400 (signature validation working).

---

**Once all items complete:** Mark status as "Complete" at top of file.
```
</stripe_example>

<supabase_example>
```markdown
# Phase 2: User Setup Required

**Generated:** 2025-01-14
**Phase:** 02-authentication
**Status:** Incomplete

Complete these items for Supabase Auth to function.

## Environment Variables

| Status | Variable | Source | Add to |
|--------|----------|--------|--------|
| [ ] | `NEXT_PUBLIC_SUPABASE_URL` | Supabase Dashboard → Settings → API → Project URL | `.env.local` |
| [ ] | `NEXT_PUBLIC_SUPABASE_ANON_KEY` | Supabase Dashboard → Settings → API → anon public | `.env.local` |
| [ ] | `SUPABASE_SERVICE_ROLE_KEY` | Supabase Dashboard → Settings → API → service_role | `.env.local` |

## Account Setup

- [ ] **Create Supabase project**
  - URL: https://supabase.com/dashboard/new
  - Skip if: Already have project for this app

## Dashboard Configuration

- [ ] **Enable Email Auth**
  - Location: Supabase Dashboard → Authentication → Providers
  - Enable: Email provider
  - Configure: Confirm email (on/off based on preference)

- [ ] **Configure OAuth providers** (if using social login)
  - Location: Supabase Dashboard → Authentication → Providers
  - For Google: Add Client ID and Secret from Google Cloud Console
  - For GitHub: Add Client ID and Secret from GitHub OAuth Apps

## Verification

After completing setup:

```bash
# Check env vars
grep SUPABASE .env.local

# Verify connection (run in project directory)
npx supabase status
```

---

**Once all items complete:** Mark status as "Complete" at top of file.
```
</supabase_example>

<sendgrid_example>
```markdown
# Phase 5: User Setup Required

**Generated:** 2025-01-14
**Phase:** 05-notifications
**Status:** Incomplete

Complete these items for SendGrid email to function.

## Environment Variables

| Status | Variable | Source | Add to |
|--------|----------|--------|--------|
| [ ] | `SENDGRID_API_KEY` | SendGrid Dashboard → Settings → API Keys → Create API Key | `.env.local` |
| [ ] | `SENDGRID_FROM_EMAIL` | Your verified sender email address | `.env.local` |

## Account Setup

- [ ] **Create SendGrid account**
  - URL: https://signup.sendgrid.com/
  - Skip if: Already have account

## Dashboard Configuration

- [ ] **Verify sender identity**
  - Location: SendGrid Dashboard → Settings → Sender Authentication
  - Option 1: Single Sender Verification (quick, for dev)
  - Option 2: Domain Authentication (production)

- [ ] **Create API Key**
  - Location: SendGrid Dashboard → Settings → API Keys → Create API Key
  - Permission: Restricted Access → Mail Send (Full Access)
  - Copy key immediately (shown only once)

## Verification

After completing setup:

```bash
# Check env var
grep SENDGRID .env.local

# Test email sending (replace with your test email)
curl -X POST http://localhost:3000/api/test-email \
  -H "Content-Type: application/json" \
  -d '{"to": "your@email.com"}'
```

---

**Once all items complete:** Mark status as "Complete" at top of file.
```
</sendgrid_example>

---

## Guidelines

**Never include:** Actual secret values. Steps Claude can automate (package installs, code changes).

**Naming:** `{phase}-USER-SETUP.md` matches the phase number pattern.
**Status tracking:** User marks checkboxes and updates status line when complete.
**Searchability:** `grep -r "USER-SETUP" .planning/` finds all phases with user requirements.
322
get-shit-done/templates/verification-report.md
Normal file
@@ -0,0 +1,322 @@
# Verification Report Template

Template for `.planning/phases/XX-name/{phase_num}-VERIFICATION.md` — phase goal verification results.

---

## File Template

```markdown
---
phase: XX-name
verified: YYYY-MM-DDTHH:MM:SSZ
status: passed | gaps_found | human_needed
score: N/M must-haves verified
---

# Phase {X}: {Name} Verification Report

**Phase Goal:** {goal from ROADMAP.md}
**Verified:** {timestamp}
**Status:** {passed | gaps_found | human_needed}

## Goal Achievement

### Observable Truths

| # | Truth | Status | Evidence |
|---|-------|--------|----------|
| 1 | {truth from must_haves} | ✓ VERIFIED | {what confirmed it} |
| 2 | {truth from must_haves} | ✗ FAILED | {what's wrong} |
| 3 | {truth from must_haves} | ? UNCERTAIN | {why can't verify} |

**Score:** {N}/{M} truths verified

### Required Artifacts

| Artifact | Expected | Status | Details |
|----------|----------|--------|---------|
| `src/components/Chat.tsx` | Message list component | ✓ EXISTS + SUBSTANTIVE | Exports ChatList, renders Message[], no stubs |
| `src/app/api/chat/route.ts` | Message CRUD | ✗ STUB | File exists but POST returns placeholder |
| `prisma/schema.prisma` | Message model | ✓ EXISTS + SUBSTANTIVE | Model defined with all fields |

**Artifacts:** {N}/{M} verified

### Key Link Verification

| From | To | Via | Status | Details |
|------|----|----|--------|---------|
| Chat.tsx | /api/chat | fetch in useEffect | ✓ WIRED | Line 23: `fetch('/api/chat')` with response handling |
| ChatInput | /api/chat POST | onSubmit handler | ✗ NOT WIRED | onSubmit only calls console.log |
| /api/chat POST | database | prisma.message.create | ✗ NOT WIRED | Returns hardcoded response, no DB call |

**Wiring:** {N}/{M} connections verified

## Requirements Coverage

| Requirement | Status | Blocking Issue |
|-------------|--------|----------------|
| {REQ-01}: {description} | ✓ SATISFIED | - |
| {REQ-02}: {description} | ✗ BLOCKED | API route is stub |
| {REQ-03}: {description} | ? NEEDS HUMAN | Can't verify WebSocket programmatically |

**Coverage:** {N}/{M} requirements satisfied

## Anti-Patterns Found

| File | Line | Pattern | Severity | Impact |
|------|------|---------|----------|--------|
| src/app/api/chat/route.ts | 12 | `// TODO: implement` | ⚠️ Warning | Indicates incomplete |
| src/components/Chat.tsx | 45 | `return <div>Placeholder</div>` | 🛑 Blocker | Renders no content |
| src/hooks/useChat.ts | - | File missing | 🛑 Blocker | Expected hook doesn't exist |

**Anti-patterns:** {N} found ({blockers} blockers, {warnings} warnings)

## Human Verification Required

{If no human verification needed:}
None — all verifiable items checked programmatically.

{If human verification needed:}

### 1. {Test Name}
**Test:** {What to do}
**Expected:** {What should happen}
**Why human:** {Why can't verify programmatically}

### 2. {Test Name}
**Test:** {What to do}
**Expected:** {What should happen}
**Why human:** {Why can't verify programmatically}

## Gaps Summary

{If no gaps:}
**No gaps found.** Phase goal achieved. Ready to proceed.

{If gaps found:}

### Critical Gaps (Block Progress)

1. **{Gap name}**
   - Missing: {what's missing}
   - Impact: {why this blocks the goal}
   - Fix: {what needs to happen}

2. **{Gap name}**
   - Missing: {what's missing}
   - Impact: {why this blocks the goal}
   - Fix: {what needs to happen}

### Non-Critical Gaps (Can Defer)

1. **{Gap name}**
   - Issue: {what's wrong}
   - Impact: {limited impact because...}
   - Recommendation: {fix now or defer}

## Recommended Fix Plans

{If gaps found, generate fix plan recommendations:}

### {phase}-{next}-PLAN.md: {Fix Name}

**Objective:** {What this fixes}

**Tasks:**
1. {Task to fix gap 1}
2. {Task to fix gap 2}
3. {Verification task}

**Estimated scope:** {Small / Medium}

---

### {phase}-{next+1}-PLAN.md: {Fix Name}

**Objective:** {What this fixes}

**Tasks:**
1. {Task}
2. {Task}

**Estimated scope:** {Small / Medium}

---

## Verification Metadata

**Verification approach:** Goal-backward (derived from phase goal)
**Must-haves source:** {PLAN.md frontmatter | derived from ROADMAP.md goal}
**Automated checks:** {N} passed, {M} failed
**Human checks required:** {N}
**Total verification time:** {duration}

---
*Verified: {timestamp}*
*Verifier: Claude (subagent)*
```

---

## Guidelines

**Status values:**
- `passed` — All must-haves verified, no blockers
- `gaps_found` — One or more critical gaps found
- `human_needed` — Automated checks pass but human verification required

**Evidence types:**
- For EXISTS: "File at path, exports X"
- For SUBSTANTIVE: "N lines, has patterns X, Y, Z"
- For WIRED: "Line N: code that connects A to B"
- For FAILED: "Missing because X" or "Stub because Y"

**Severity levels:**
- 🛑 Blocker: Prevents goal achievement, must fix
- ⚠️ Warning: Indicates incomplete but doesn't block
- ℹ️ Info: Notable but not problematic

**Fix plan generation:**
- Only generate if gaps_found
- Group related fixes into single plans
- Keep to 2-3 tasks per plan
- Include verification task in each plan

---

## Example

```markdown
---
phase: 03-chat
verified: 2025-01-15T14:30:00Z
status: gaps_found
score: 1/5 must-haves verified
---

# Phase 3: Chat Interface Verification Report

**Phase Goal:** Working chat interface where users can send and receive messages
**Verified:** 2025-01-15T14:30:00Z
**Status:** gaps_found

## Goal Achievement

### Observable Truths

| # | Truth | Status | Evidence |
|---|-------|--------|----------|
| 1 | User can see existing messages | ✗ FAILED | Component renders placeholder, not message data |
| 2 | User can type a message | ✓ VERIFIED | Input field exists with onChange handler |
| 3 | User can send a message | ✗ FAILED | onSubmit handler is console.log only |
| 4 | Sent message appears in list | ✗ FAILED | No state update after send |
| 5 | Messages persist across refresh | ? UNCERTAIN | Can't verify - send doesn't work |

**Score:** 1/5 truths verified

### Required Artifacts

| Artifact | Expected | Status | Details |
|----------|----------|--------|---------|
| `src/components/Chat.tsx` | Message list component | ✗ STUB | Returns `<div>Chat will be here</div>` |
| `src/components/ChatInput.tsx` | Message input | ✓ EXISTS + SUBSTANTIVE | Form with input, submit button, handlers |
| `src/app/api/chat/route.ts` | Message CRUD | ✗ STUB | GET returns [], POST returns { ok: true } |
| `prisma/schema.prisma` | Message model | ✓ EXISTS + SUBSTANTIVE | Message model with id, content, userId, createdAt |

**Artifacts:** 2/4 verified

### Key Link Verification

| From | To | Via | Status | Details |
|------|----|----|--------|---------|
| Chat.tsx | /api/chat GET | fetch | ✗ NOT WIRED | No fetch call in component |
| ChatInput | /api/chat POST | onSubmit | ✗ NOT WIRED | Handler only logs, doesn't fetch |
| /api/chat GET | database | prisma.message.findMany | ✗ NOT WIRED | Returns hardcoded [] |
| /api/chat POST | database | prisma.message.create | ✗ NOT WIRED | Returns { ok: true }, no DB call |

**Wiring:** 0/4 connections verified

## Requirements Coverage

| Requirement | Status | Blocking Issue |
|-------------|--------|----------------|
| CHAT-01: User can send message | ✗ BLOCKED | API POST is stub |
| CHAT-02: User can view messages | ✗ BLOCKED | Component is placeholder |
| CHAT-03: Messages persist | ✗ BLOCKED | No database integration |

**Coverage:** 0/3 requirements satisfied

## Anti-Patterns Found

| File | Line | Pattern | Severity | Impact |
|------|------|---------|----------|--------|
| src/components/Chat.tsx | 8 | `<div>Chat will be here</div>` | 🛑 Blocker | No actual content |
| src/app/api/chat/route.ts | 5 | `return Response.json([])` | 🛑 Blocker | Hardcoded empty |
| src/app/api/chat/route.ts | 12 | `// TODO: save to database` | ⚠️ Warning | Incomplete |

**Anti-patterns:** 3 found (2 blockers, 1 warning)

## Human Verification Required

None needed until automated gaps are fixed.

## Gaps Summary

### Critical Gaps (Block Progress)

1. **Chat component is placeholder**
   - Missing: Actual message list rendering
   - Impact: Users see "Chat will be here" instead of messages
   - Fix: Implement Chat.tsx to fetch and render messages

2. **API routes are stubs**
   - Missing: Database integration in GET and POST
   - Impact: No data persistence, no real functionality
   - Fix: Wire prisma calls in route handlers

3. **No wiring between frontend and backend**
   - Missing: fetch calls in components
   - Impact: Even if API worked, UI wouldn't call it
   - Fix: Add useEffect fetch in Chat, onSubmit fetch in ChatInput

## Recommended Fix Plans

### 03-04-PLAN.md: Implement Chat API

**Objective:** Wire API routes to database

**Tasks:**
1. Implement GET /api/chat with prisma.message.findMany
2. Implement POST /api/chat with prisma.message.create
3. Verify: API returns real data, POST creates records

**Estimated scope:** Small

---

### 03-05-PLAN.md: Implement Chat UI

**Objective:** Wire Chat component to API

**Tasks:**
1. Implement Chat.tsx with useEffect fetch and message rendering
2. Wire ChatInput onSubmit to POST /api/chat
3. Verify: Messages display, new messages appear after send

**Estimated scope:** Small

---

## Verification Metadata

**Verification approach:** Goal-backward (derived from phase goal)
**Must-haves source:** 03-01-PLAN.md frontmatter
**Automated checks:** 2 passed, 8 failed
**Human checks required:** 0 (blocked by automated failures)
**Total verification time:** 2 min

---
*Verified: 2025-01-15T14:30:00Z*
*Verifier: Claude (subagent)*
```
112
get-shit-done/workflows/add-phase.md
Normal file
@@ -0,0 +1,112 @@
<purpose>
Add a new integer phase to the end of the current milestone in the roadmap. Automatically calculates next phase number, creates phase directory, and updates roadmap structure.
</purpose>

<required_reading>
Read all files referenced by the invoking prompt's execution_context before starting.
</required_reading>

<process>

<step name="parse_arguments">
Parse the command arguments:
- All arguments become the phase description
- Example: `/gsd:add-phase Add authentication` → description = "Add authentication"
- Example: `/gsd:add-phase Fix critical performance issues` → description = "Fix critical performance issues"

If no arguments provided:

```
ERROR: Phase description required
Usage: /gsd:add-phase <description>
Example: /gsd:add-phase Add authentication system
```

Exit.
</step>

<step name="init_context">
Load phase operation context:

```bash
INIT=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" init phase-op "0")
if [[ "$INIT" == @file:* ]]; then INIT=$(cat "${INIT#@file:}"); fi
```

Check `roadmap_exists` from init JSON. If false:
```
ERROR: No roadmap found (.planning/ROADMAP.md)
Run /gsd:new-project to initialize.
```
Exit.
</step>

<step name="add_phase">
**Delegate the phase addition to gsd-tools:**

```bash
RESULT=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" phase add "${description}")
```

The CLI handles:
- Finding the highest existing integer phase number
- Calculating next phase number (max + 1)
- Generating slug from description
- Creating the phase directory (`.planning/phases/{NN}-{slug}/`)
- Inserting the phase entry into ROADMAP.md with Goal, Depends on, and Plans sections

Extract from result: `phase_number`, `padded`, `name`, `slug`, `directory`.
</step>

<step name="update_project_state">
Update STATE.md to reflect the new phase:

1. Read `.planning/STATE.md`
2. Under "## Accumulated Context" → "### Roadmap Evolution" add entry:
```
- Phase {N} added: {description}
```

If "Roadmap Evolution" section doesn't exist, create it.
</step>

<step name="completion">
Present completion summary:

```
Phase {N} added to current milestone:
- Description: {description}
- Directory: .planning/phases/{phase-num}-{slug}/
- Status: Not planned yet

Roadmap updated: .planning/ROADMAP.md

---

## ▶ Next Up

**Phase {N}: {description}**

`/gsd:plan-phase {N}`

<sub>`/clear` first → fresh context window</sub>

---

**Also available:**
- `/gsd:add-phase <description>` — add another phase
- Review roadmap

---
```
</step>

</process>

<success_criteria>
- [ ] `gsd-tools phase add` executed successfully
- [ ] Phase directory created
- [ ] Roadmap updated with new phase entry
- [ ] STATE.md updated with roadmap evolution note
- [ ] User informed of next steps
</success_criteria>

351
get-shit-done/workflows/add-tests.md
Normal file
@@ -0,0 +1,351 @@
<purpose>
Generate unit and E2E tests for a completed phase based on its SUMMARY.md, CONTEXT.md, and implementation. Classifies each changed file into TDD (unit), E2E (browser), or Skip categories, presents a test plan for user approval, then generates tests following RED-GREEN conventions.

Users currently hand-craft `/gsd:quick` prompts for test generation after each phase. This workflow standardizes the process with proper classification, quality gates, and gap reporting.
</purpose>

<required_reading>
Read all files referenced by the invoking prompt's execution_context before starting.
</required_reading>

<process>

<step name="parse_arguments">
Parse `$ARGUMENTS` for:
- Phase number (integer, decimal, or letter-suffix) → store as `$PHASE_ARG`
- Remaining text after phase number → store as `$EXTRA_INSTRUCTIONS` (optional)

Example: `/gsd:add-tests 12 focus on edge cases` → `$PHASE_ARG=12`, `$EXTRA_INSTRUCTIONS="focus on edge cases"`

If no phase argument provided:

```
ERROR: Phase number required
Usage: /gsd:add-tests <phase> [additional instructions]
Example: /gsd:add-tests 12
Example: /gsd:add-tests 12 focus on edge cases in the pricing module
```

Exit.
</step>

<step name="init_context">
Load phase operation context:

```bash
INIT=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" init phase-op "${PHASE_ARG}")
if [[ "$INIT" == @file:* ]]; then INIT=$(cat "${INIT#@file:}"); fi
```

Extract from init JSON: `phase_dir`, `phase_number`, `phase_name`.

Verify the phase directory exists. If not:
```
ERROR: Phase directory not found for phase ${PHASE_ARG}
Ensure the phase exists in .planning/phases/
```
Exit.

Read the phase artifacts (in order of priority):
1. `${phase_dir}/*-SUMMARY.md` — what was implemented, files changed
2. `${phase_dir}/CONTEXT.md` — acceptance criteria, decisions
3. `${phase_dir}/*-VERIFICATION.md` — user-verified scenarios (if UAT was done)

If no SUMMARY.md exists:
```
ERROR: No SUMMARY.md found for phase ${PHASE_ARG}
This command works on completed phases. Run /gsd:execute-phase first.
```
Exit.

Present banner:
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
GSD ► ADD TESTS — Phase ${phase_number}: ${phase_name}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</step>

<step name="analyze_implementation">
Extract the list of files modified by the phase from SUMMARY.md ("Files Changed" or equivalent section).

For each file, classify into one of three categories:

| Category | Criteria | Test Type |
|----------|----------|-----------|
| **TDD** | Pure functions where `expect(fn(input)).toBe(output)` is writable | Unit tests |
| **E2E** | UI behavior verifiable by browser automation | Playwright/E2E tests |
| **Skip** | Not meaningfully testable or already covered | None |

**TDD classification — apply when:**
- Business logic: calculations, pricing, tax rules, validation
- Data transformations: mapping, filtering, aggregation, formatting
- Parsers: CSV, JSON, XML, custom format parsing
- Validators: input validation, schema validation, business rules
- State machines: status transitions, workflow steps
- Utilities: string manipulation, date handling, number formatting

**E2E classification — apply when:**
- Keyboard shortcuts: key bindings, modifier keys, chord sequences
- Navigation: page transitions, routing, breadcrumbs, back/forward
- Form interactions: submit, validation errors, field focus, autocomplete
- Selection: row selection, multi-select, shift-click ranges
- Drag and drop: reordering, moving between containers
- Modal dialogs: open, close, confirm, cancel
- Data grids: sorting, filtering, inline editing, column resize

**Skip classification — apply when:**
- UI layout/styling: CSS classes, visual appearance, responsive breakpoints
- Configuration: config files, environment variables, feature flags
- Glue code: dependency injection setup, middleware registration, routing tables
- Migrations: database migrations, schema changes
- Simple CRUD: basic create/read/update/delete with no business logic
- Type definitions: records, DTOs, interfaces with no logic

Read each file to verify classification. Don't classify based on filename alone.
</step>

<step name="present_classification">
Present the classification to the user for confirmation before proceeding:

```
AskUserQuestion(
header: "Test Classification",
question: |
## Files classified for testing

### TDD (Unit Tests) — {N} files
{list of files with brief reason}

### E2E (Browser Tests) — {M} files
{list of files with brief reason}

### Skip — {K} files
{list of files with brief reason}

{if $EXTRA_INSTRUCTIONS: "Additional instructions: ${EXTRA_INSTRUCTIONS}"}

How would you like to proceed?
options:
- "Approve and generate test plan"
- "Adjust classification (I'll specify changes)"
- "Cancel"
)
```

If user selects "Adjust classification": apply their changes and re-present.
If user selects "Cancel": exit gracefully.
</step>

<step name="discover_test_structure">
Before generating the test plan, discover the project's existing test structure:

```bash
# Find existing test directories
# (parentheses keep -type d applying to every -name alternative)
find . -type d \( -name "*test*" -o -name "*spec*" -o -name "*__tests__*" \) 2>/dev/null | head -20

# Find existing test files for convention matching
find . -type f \( -name "*.test.*" -o -name "*.spec.*" -o -name "*Tests.fs" -o -name "*Test.fs" \) 2>/dev/null | head -20

# Check for test runners
ls package.json *.sln 2>/dev/null
```

Identify:
- Test directory structure (where unit tests live, where E2E tests live)
- Naming conventions (`.test.ts`, `.spec.ts`, `*Tests.fs`, etc.)
- Test runner commands (how to execute unit tests, how to execute E2E tests)
- Test framework (xUnit, NUnit, Jest, Playwright, etc.)

If test structure is ambiguous, ask the user:
```
AskUserQuestion(
header: "Test Structure",
question: "I found multiple test locations. Where should I create tests?",
options: [list discovered locations]
)
```
</step>

<step name="generate_test_plan">
For each approved file, create a detailed test plan.

**For TDD files**, plan tests following RED-GREEN-REFACTOR:
1. Identify testable functions/methods in the file
2. For each function: list input scenarios, expected outputs, edge cases
3. Note: since code already exists, tests may pass immediately — that's OK, but verify they test the RIGHT behavior

**For E2E files**, plan tests following RED-GREEN gates:
1. Identify user scenarios from CONTEXT.md/VERIFICATION.md
2. For each scenario: describe the user action, expected outcome, assertions
3. Note: RED gate means confirming the test would fail if the feature were broken

Present the complete test plan:

```
AskUserQuestion(
header: "Test Plan",
question: |
## Test Generation Plan

### Unit Tests ({N} tests across {M} files)
{for each file: test file path, list of test cases}

### E2E Tests ({P} tests across {Q} files)
{for each file: test file path, list of test scenarios}

### Test Commands
- Unit: {discovered test command}
- E2E: {discovered e2e command}

Ready to generate?
options:
- "Generate all"
- "Cherry-pick (I'll specify which)"
- "Adjust plan"
)
```

If "Cherry-pick": ask user which tests to include.
If "Adjust plan": apply changes and re-present.
</step>

<step name="execute_tdd_generation">
For each approved TDD test:

1. **Create test file** following discovered project conventions (directory, naming, imports)

2. **Write test** with clear arrange/act/assert structure:
```
// Arrange — set up inputs and expected outputs
// Act — call the function under test
// Assert — verify the output matches expectations
```

3. **Run the test**:
```bash
{discovered test command}
```

4. **Evaluate result:**
- **Test passes**: Good — the implementation satisfies the test. Verify the test checks meaningful behavior (not just that it compiles).
- **Test fails with assertion error**: This may be a genuine bug discovered by the test. Flag it:
```
⚠️ Potential bug found: {test name}
Expected: {expected}
Actual: {actual}
File: {implementation file}
```
Do NOT fix the implementation — this is a test-generation command, not a fix command. Record the finding.
- **Test fails with error (import, syntax, etc.)**: This is a test error. Fix the test and re-run.
</step>

<step name="execute_e2e_generation">
For each approved E2E test:

1. **Check for existing tests** covering the same scenario:
```bash
grep -r "{scenario keyword}" {e2e test directory} 2>/dev/null
```
If found, extend rather than duplicate.

2. **Create test file** targeting the user scenario from CONTEXT.md/VERIFICATION.md

3. **Run the E2E test**:
```bash
{discovered e2e command}
```

4. **Evaluate result:**
- **GREEN (passes)**: Record success
- **RED (fails)**: Determine if it's a test issue or a genuine application bug. Flag bugs:
```
⚠️ E2E failure: {test name}
Scenario: {description}
Error: {error message}
```
- **Cannot run**: Report blocker. Do NOT mark as complete.
```
🛑 E2E blocker: {reason tests cannot run}
```

**No-skip rule:** If E2E tests cannot execute (missing dependencies, environment issues), report the blocker and mark the test as incomplete. Never mark success without actually running the test.
</step>

<step name="summary_and_commit">
Create a test coverage report and present to user:

```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
GSD ► TEST GENERATION COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

## Results

| Category | Generated | Passing | Failing | Blocked |
|----------|-----------|---------|---------|---------|
| Unit | {N} | {n1} | {n2} | {n3} |
| E2E | {M} | {m1} | {m2} | {m3} |

## Files Created/Modified
{list of test files with paths}

## Coverage Gaps
{areas that couldn't be tested and why}

## Bugs Discovered
{any assertion failures that indicate implementation bugs}
```

Record test generation in project state:
```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" state-snapshot
```

If there are passing tests to commit:

```bash
git add {test files}
git commit -m "test(phase-${phase_number}): add unit and E2E tests from add-tests command"
```

Present next steps:

```
---

## ▶ Next Up

{if bugs discovered:}
**Fix discovered bugs:** `/gsd:quick fix the {N} test failures discovered in phase ${phase_number}`

{if blocked tests:}
**Resolve test blockers:** {description of what's needed}

{otherwise:}
**All tests passing!** Phase ${phase_number} is fully tested.

---

**Also available:**
- `/gsd:add-tests {next_phase}` — test another phase
- `/gsd:verify-work {phase_number}` — run UAT verification

---
```
</step>

</process>

<success_criteria>
- [ ] Phase artifacts loaded (SUMMARY.md, CONTEXT.md, optionally VERIFICATION.md)
- [ ] All changed files classified into TDD/E2E/Skip categories
- [ ] Classification presented to user and approved
- [ ] Project test structure discovered (directories, conventions, runners)
- [ ] Test plan presented to user and approved
- [ ] TDD tests generated with arrange/act/assert structure
- [ ] E2E tests generated targeting user scenarios
- [ ] All tests executed — no untested tests marked as passing
- [ ] Bugs discovered by tests flagged (not fixed)
- [ ] Test files committed with proper message
- [ ] Coverage gaps documented
- [ ] Next steps presented to user
</success_criteria>

158
get-shit-done/workflows/add-todo.md
Normal file
@@ -0,0 +1,158 @@
<purpose>
Capture an idea, task, or issue that surfaces during a GSD session as a structured todo for later work. Enables "thought → capture → continue" flow without losing context.
</purpose>

<required_reading>
Read all files referenced by the invoking prompt's execution_context before starting.
</required_reading>

<process>

<step name="init_context">
Load todo context:

```bash
INIT=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" init todos)
if [[ "$INIT" == @file:* ]]; then INIT=$(cat "${INIT#@file:}"); fi
```

Extract from init JSON: `commit_docs`, `date`, `timestamp`, `todo_count`, `todos`, `pending_dir`, `todos_dir_exists`.

Ensure directories exist:
```bash
mkdir -p .planning/todos/pending .planning/todos/done
```

Note existing areas from the todos array for consistency in the infer_area step.
</step>

<step name="extract_content">
**With arguments:** Use as the title/focus.
- `/gsd:add-todo Add auth token refresh` → title = "Add auth token refresh"

**Without arguments:** Analyze recent conversation to extract:
- The specific problem, idea, or task discussed
- Relevant file paths mentioned
- Technical details (error messages, line numbers, constraints)

Formulate:
- `title`: 3-10 word descriptive title (action verb preferred)
- `problem`: What's wrong or why this is needed
- `solution`: Approach hints or "TBD" if just an idea
- `files`: Relevant paths with line numbers from conversation
</step>

<step name="infer_area">
Infer area from file paths:

| Path pattern | Area |
|--------------|------|
| `src/api/*`, `api/*` | `api` |
| `src/components/*`, `src/ui/*` | `ui` |
| `src/auth/*`, `auth/*` | `auth` |
| `src/db/*`, `database/*` | `database` |
| `tests/*`, `__tests__/*` | `testing` |
| `docs/*` | `docs` |
| `.planning/*` | `planning` |
| `scripts/*`, `bin/*` | `tooling` |
| No files or unclear | `general` |

Prefer an existing area (noted during init_context) if a similar match exists.
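The table above amounts to a first-match lookup. A rough sketch (a hypothetical helper, not the actual gsd-tools logic):

```javascript
// First-match path-to-area rules, mirroring the table above.
const AREA_RULES = [
  [/^(src\/)?api\//, 'api'],
  [/^src\/(components|ui)\//, 'ui'],
  [/^(src\/)?auth\//, 'auth'],
  [/^(src\/db|database)\//, 'database'],
  [/^(tests|__tests__)\//, 'testing'],
  [/^docs\//, 'docs'],
  [/^\.planning\//, 'planning'],
  [/^(scripts|bin)\//, 'tooling'],
];

function inferArea(files) {
  for (const file of files || []) {
    const rule = AREA_RULES.find(([pattern]) => pattern.test(file));
    if (rule) return rule[1];
  }
  return 'general'; // no files, or no pattern matched
}
```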
</step>

<step name="check_duplicates">
```bash
# Search for key words from title in existing todos
grep -l -i "[key words from title]" .planning/todos/pending/*.md 2>/dev/null
```

If potential duplicate found:
1. Read the existing todo
2. Compare scope

If overlapping, use AskUserQuestion:
- header: "Duplicate?"
- question: "Similar todo exists: [title]. What would you like to do?"
- options:
  - "Skip" — keep existing todo
  - "Replace" — update existing with new context
  - "Add anyway" — create as separate todo
</step>

<step name="create_file">
Use values from init context: `timestamp` and `date` are already available.

Generate slug for the title:
```bash
slug=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" generate-slug "$title" --raw)
```
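A minimal sketch of what such a slug transform does (the real `generate-slug` command may normalize differently, e.g. around unicode):

```javascript
// Lowercase, collapse runs of non-alphanumerics to single hyphens,
// and trim hyphens from both ends.
function generateSlug(text) {
  return text
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-+|-+$/g, '');
}
```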

Write to `.planning/todos/pending/${date}-${slug}.md`:

```markdown
---
created: [timestamp]
title: [title]
area: [area]
files:
- [file:lines]
---

## Problem

[problem description - enough context for future Claude to understand weeks later]

## Solution

[approach hints or "TBD"]
```
</step>

<step name="update_state">
If `.planning/STATE.md` exists:

1. Use `todo_count` from init context (or re-run `init todos` if count changed)
2. Update "### Pending Todos" under "## Accumulated Context"
</step>

<step name="git_commit">
Commit the todo and any updated state:

```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" commit "docs: capture todo - [title]" --files .planning/todos/pending/[filename] .planning/STATE.md
```

Tool respects `commit_docs` config and gitignore automatically.

Confirm: "Committed: docs: capture todo - [title]"
</step>

<step name="confirm">
```
Todo saved: .planning/todos/pending/[filename]

[title]
Area: [area]
Files: [count] referenced

---

Would you like to:

1. Continue with current work
2. Add another todo
3. View all todos (/gsd:check-todos)
```
</step>

</process>

<success_criteria>
- [ ] Directory structure exists
- [ ] Todo file created with valid frontmatter
- [ ] Problem section has enough context for future Claude
- [ ] No duplicates (checked and resolved)
- [ ] Area consistent with existing todos
- [ ] STATE.md updated if exists
- [ ] Todo and state committed to git
</success_criteria>

332
get-shit-done/workflows/audit-milestone.md
Normal file
@@ -0,0 +1,332 @@
<purpose>
Verify milestone achieved its definition of done by aggregating phase verifications, checking cross-phase integration, and assessing requirements coverage. Reads existing VERIFICATION.md files (phases already verified during execute-phase), aggregates tech debt and deferred gaps, then spawns integration checker for cross-phase wiring.
</purpose>

<required_reading>
Read all files referenced by the invoking prompt's execution_context before starting.
</required_reading>

<process>

## 0. Initialize Milestone Context

```bash
INIT=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" init milestone-op)
if [[ "$INIT" == @file:* ]]; then INIT=$(cat "${INIT#@file:}"); fi
```

Extract from init JSON: `milestone_version`, `milestone_name`, `phase_count`, `completed_phases`, `commit_docs`.

Resolve integration checker model:
```bash
integration_checker_model=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" resolve-model gsd-integration-checker --raw)
```

## 1. Determine Milestone Scope

```bash
# Get phases in milestone (sorted numerically, handles decimals)
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" phases list
```

- Parse version from arguments or detect current from ROADMAP.md
- Identify all phase directories in scope
- Extract milestone definition of done from ROADMAP.md
- Extract requirements mapped to this milestone from REQUIREMENTS.md

## 2. Read All Phase Verifications

For each phase directory, read the VERIFICATION.md:

```bash
# For each phase, use find-phase to resolve the directory (handles archived phases)
PHASE_INFO=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" find-phase 01 --raw)
# Extract directory from JSON, then read VERIFICATION.md from that directory
# Repeat for each phase number from ROADMAP.md
```

From each VERIFICATION.md, extract:
- **Status:** passed | gaps_found
- **Critical gaps:** (if any — these are blockers)
- **Non-critical gaps:** tech debt, deferred items, warnings
- **Anti-patterns found:** TODOs, stubs, placeholders
- **Requirements coverage:** which requirements satisfied/blocked

If a phase is missing VERIFICATION.md, flag it as "unverified phase" — this is a blocker.

## 3. Spawn Integration Checker

With phase context collected:

Extract `MILESTONE_REQ_IDS` from REQUIREMENTS.md traceability table — all REQ-IDs assigned to phases in this milestone.

```
Task(
prompt="Check cross-phase integration and E2E flows.

Phases: {phase_dirs}
Phase exports: {from SUMMARYs}
API routes: {routes created}

Milestone Requirements:
{MILESTONE_REQ_IDS — list each REQ-ID with description and assigned phase}

MUST map each integration finding to affected requirement IDs where applicable.

Verify cross-phase wiring and E2E user flows.",
subagent_type="gsd-integration-checker",
model="{integration_checker_model}"
)
```

## 4. Collect Results

Combine:
- Phase-level gaps and tech debt (from step 2)
- Integration checker's report (wiring gaps, broken flows)

## 5. Check Requirements Coverage (3-Source Cross-Reference)

MUST cross-reference three independent sources for each requirement:

### 5a. Parse REQUIREMENTS.md Traceability Table

Extract all REQ-IDs mapped to milestone phases from the traceability table:
- Requirement ID, description, assigned phase, current status, checked-off state (`[x]` vs `[ ]`)

### 5b. Parse Phase VERIFICATION.md Requirements Tables

For each phase's VERIFICATION.md, extract the expanded requirements table:
- Requirement | Source Plan | Description | Status | Evidence
- Map each entry back to its REQ-ID

### 5c. Extract SUMMARY.md Frontmatter Cross-Check

For each phase's SUMMARY.md, extract `requirements-completed` from YAML frontmatter:
```bash
for summary in .planning/phases/*-*/*-SUMMARY.md; do
  node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" summary-extract "$summary" --fields requirements_completed | jq -r '.requirements_completed'
done
```

### 5d. Status Determination Matrix

For each REQ-ID, determine status using all three sources:

| VERIFICATION.md Status | SUMMARY Frontmatter | REQUIREMENTS.md | → Final Status |
|------------------------|---------------------|-----------------|----------------|
| passed | listed | `[x]` | **satisfied** |
| passed | listed | `[ ]` | **satisfied** (update checkbox) |
| passed | missing | any | **partial** (verify manually) |
| gaps_found | any | any | **unsatisfied** |
| missing | listed | any | **partial** (verification gap) |
| missing | missing | any | **unsatisfied** |
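The matrix above reduces to a small decision function. A sketch (field names here are assumptions, not the actual gsd-tools schema):

```javascript
// verification: 'passed' | 'gaps_found' | 'missing'
// inSummary: is the REQ-ID listed in SUMMARY.md frontmatter?
// Note: the REQUIREMENTS.md checkbox only decides whether the checkbox
// needs updating; per the matrix it never changes the final status.
function finalStatus({ verification, inSummary }) {
  if (verification === 'gaps_found') return 'unsatisfied';
  if (verification === 'passed') return inSummary ? 'satisfied' : 'partial';
  return inSummary ? 'partial' : 'unsatisfied'; // verification missing
}
```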

### 5e. FAIL Gate and Orphan Detection

**REQUIRED:** Any `unsatisfied` requirement MUST force `gaps_found` status on the milestone audit.

**Orphan detection:** Requirements present in REQUIREMENTS.md traceability table but absent from ALL phase VERIFICATION.md files MUST be flagged as orphaned. Orphaned requirements are treated as `unsatisfied` — they were assigned but never verified by any phase.

## 5.5. Nyquist Compliance Discovery

Skip if `workflow.nyquist_validation` is explicitly `false` (absent = enabled).

```bash
NYQUIST_CONFIG=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" config get workflow.nyquist_validation --raw 2>/dev/null)
```

If `false`: skip entirely.

For each phase directory, check `*-VALIDATION.md`. If exists, parse frontmatter (`nyquist_compliant`, `wave_0_complete`).

Classify per phase:

| Status | Condition |
|--------|-----------|
| COMPLIANT | `nyquist_compliant: true` and all tasks green |
| PARTIAL | VALIDATION.md exists, `nyquist_compliant: false` or red/pending |
| MISSING | No VALIDATION.md |

Add to audit YAML: `nyquist: { compliant_phases, partial_phases, missing_phases, overall }`

Discovery only — never auto-calls `/gsd:validate-phase`.
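As a sketch, the per-phase classification reduces to the following (the `allTasksGreen` input is an assumption derived from the task list, not a confirmed frontmatter field):

```javascript
// validation: parsed VALIDATION.md frontmatter, or null if the file is absent.
function classifyNyquist(validation, allTasksGreen) {
  if (!validation) return 'MISSING';
  if (validation.nyquist_compliant === true && allTasksGreen) return 'COMPLIANT';
  return 'PARTIAL'; // file exists, but not compliant or tasks red/pending
}
```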

## 6. Aggregate into v{version}-MILESTONE-AUDIT.md

Create `.planning/v{version}-MILESTONE-AUDIT.md` with:

```yaml
|
||||
---
|
||||
milestone: {version}
|
||||
audited: {timestamp}
|
||||
status: passed | gaps_found | tech_debt
|
||||
scores:
|
||||
requirements: N/M
|
||||
phases: N/M
|
||||
integration: N/M
|
||||
flows: N/M
|
||||
gaps: # Critical blockers
|
||||
requirements:
|
||||
- id: "{REQ-ID}"
|
||||
status: "unsatisfied | partial | orphaned"
|
||||
phase: "{assigned phase}"
|
||||
claimed_by_plans: ["{plan files that reference this requirement}"]
|
||||
completed_by_plans: ["{plan files whose SUMMARY marks it complete}"]
|
||||
verification_status: "passed | gaps_found | missing | orphaned"
|
||||
evidence: "{specific evidence or lack thereof}"
|
||||
integration: [...]
|
||||
flows: [...]
|
||||
tech_debt: # Non-critical, deferred
|
||||
- phase: 01-auth
|
||||
items:
|
||||
- "TODO: add rate limiting"
|
||||
- "Warning: no password strength validation"
|
||||
- phase: 03-dashboard
|
||||
items:
|
||||
- "Deferred: mobile responsive layout"
|
||||
---
|
||||
```
|
||||
|
||||
Plus full markdown report with tables for requirements, phases, integration, tech debt.
|
||||
|
||||
**Status values:**
|
||||
- `passed` — all requirements met, no critical gaps, minimal tech debt
|
||||
- `gaps_found` — critical blockers exist
|
||||
- `tech_debt` — no blockers but accumulated deferred items need review
|
||||
|
||||
## 7. Present Results
|
||||
|
||||
Route by status (see `<offer_next>`).
|
||||
|
||||
</process>
|
||||
|
||||
<offer_next>
|
||||
Output this markdown directly (not as a code block). Route based on status:
|
||||
|
||||
---
|
||||
|
||||
**If passed:**
|
||||
|
||||
## ✓ Milestone {version} — Audit Passed
|
||||
|
||||
**Score:** {N}/{M} requirements satisfied
|
||||
**Report:** .planning/v{version}-MILESTONE-AUDIT.md
|
||||
|
||||
All requirements covered. Cross-phase integration verified. E2E flows complete.
|
||||
|
||||
───────────────────────────────────────────────────────────────
|
||||
|
||||
## ▶ Next Up
|
||||
|
||||
**Complete milestone** — archive and tag
|
||||
|
||||
/gsd:complete-milestone {version}
|
||||
|
||||
<sub>/clear first → fresh context window</sub>
|
||||
|
||||
───────────────────────────────────────────────────────────────
|
||||
|
||||
---
|
||||
|
||||
**If gaps_found:**

## ⚠ Milestone {version} — Gaps Found

**Score:** {N}/{M} requirements satisfied
**Report:** .planning/v{version}-MILESTONE-AUDIT.md

### Unsatisfied Requirements

{For each unsatisfied requirement:}
- **{REQ-ID}: {description}** (Phase {X})
  - {reason}

### Cross-Phase Issues

{For each integration gap:}
- **{from} → {to}:** {issue}

### Broken Flows

{For each flow gap:}
- **{flow name}:** breaks at {step}

### Nyquist Coverage

| Phase | VALIDATION.md | Compliant | Action |
|-------|---------------|-----------|--------|
| {phase} | exists/missing | true/false/partial | `/gsd:validate-phase {N}` |

Phases needing validation: run `/gsd:validate-phase {N}` for each flagged phase.

───────────────────────────────────────────────────────────────

## ▶ Next Up

**Plan gap closure** — create phases to complete milestone

/gsd:plan-milestone-gaps

<sub>/clear first → fresh context window</sub>

───────────────────────────────────────────────────────────────

**Also available:**
- cat .planning/v{version}-MILESTONE-AUDIT.md — see full report
- /gsd:complete-milestone {version} — proceed anyway (accept tech debt)

───────────────────────────────────────────────────────────────

---
**If tech_debt (no blockers but accumulated debt):**

## ⚡ Milestone {version} — Tech Debt Review

**Score:** {N}/{M} requirements satisfied
**Report:** .planning/v{version}-MILESTONE-AUDIT.md

All requirements met. No critical blockers. Accumulated tech debt needs review.

### Tech Debt by Phase

{For each phase with debt:}
**Phase {X}: {name}**
- {item 1}
- {item 2}

### Total: {N} items across {M} phases

───────────────────────────────────────────────────────────────

## ▶ Options

**A. Complete milestone** — accept debt, track in backlog

/gsd:complete-milestone {version}

**B. Plan cleanup phase** — address debt before completing

/gsd:plan-milestone-gaps

<sub>/clear first → fresh context window</sub>

───────────────────────────────────────────────────────────────
</offer_next>
<success_criteria>
- [ ] Milestone scope identified
- [ ] All phase VERIFICATION.md files read
- [ ] SUMMARY.md `requirements-completed` frontmatter extracted for each phase
- [ ] REQUIREMENTS.md traceability table parsed for all milestone REQ-IDs
- [ ] 3-source cross-reference completed (VERIFICATION + SUMMARY + traceability)
- [ ] Orphaned requirements detected (in traceability but absent from all VERIFICATIONs)
- [ ] Tech debt and deferred gaps aggregated
- [ ] Integration checker spawned with milestone requirement IDs
- [ ] v{version}-MILESTONE-AUDIT.md created with structured requirement gap objects
- [ ] FAIL gate enforced — any unsatisfied requirement forces gaps_found status
- [ ] Nyquist compliance scanned for all milestone phases (if enabled)
- [ ] Missing VALIDATION.md phases flagged with validate-phase suggestion
- [ ] Results presented with actionable next steps
</success_criteria>
743
get-shit-done/workflows/autonomous.md
Normal file
@@ -0,0 +1,743 @@
<purpose>
Drive all remaining milestone phases autonomously. For each incomplete phase: discuss → plan → execute using Skill() flat invocations. Pause only for explicit user decisions (grey area acceptance, blockers, validation requests). Re-read ROADMAP.md after each phase to catch dynamically inserted phases.
</purpose>

<required_reading>
Read all files referenced by the invoking prompt's execution_context before starting.
</required_reading>

<process>

<step name="initialize" priority="first">

## 1. Initialize

Parse `$ARGUMENTS` for the `--from N` flag:

```bash
FROM_PHASE=""
if echo "$ARGUMENTS" | grep -qE '\-\-from\s+[0-9]'; then
  FROM_PHASE=$(echo "$ARGUMENTS" | grep -oE '\-\-from\s+[0-9]+\.?[0-9]*' | awk '{print $2}')
fi
```

Bootstrap via milestone-level init:

```bash
INIT=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" init milestone-op)
```

Parse the JSON for: `milestone_version`, `milestone_name`, `phase_count`, `completed_phases`, `roadmap_exists`, `state_exists`, `commit_docs`.

**If `roadmap_exists` is false:** Error — "No ROADMAP.md found. Run `/gsd:new-milestone` first."
**If `state_exists` is false:** Error — "No STATE.md found. Run `/gsd:new-milestone` first."

Display the startup banner:

```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
GSD ► AUTONOMOUS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Milestone: {milestone_version} — {milestone_name}
Phases: {phase_count} total, {completed_phases} complete
```

If `FROM_PHASE` is set, display: `Starting from phase ${FROM_PHASE}`

</step>
<step name="discover_phases">

## 2. Discover Phases

Run phase discovery:

```bash
ROADMAP=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" roadmap analyze)
```

Parse the JSON `phases` array.

**Filter to incomplete phases:** Keep only phases where `disk_status !== "complete"` OR `roadmap_complete === false`.

**Apply the `--from N` filter:** If `FROM_PHASE` was provided, additionally filter out phases where `number < FROM_PHASE` (use numeric comparison — it handles decimal phases like "5.1").

**Sort by `number`** in ascending numeric order.
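The filter-and-sort step can be sketched in shell. Decimal phase numbers (e.g., 5.1) break lexicographic and integer sorting, so this sketch uses `sort -g` (general numeric) and awk's numeric coercion; the `PHASES` and `FROM_PHASE` values are illustrative — real values come from the `roadmap analyze` JSON.

```shell
# Keep phases >= FROM_PHASE, then sort numerically (handles decimals like 5.1).
PHASES="6 5.1 8 5 7"   # illustrative; real list comes from roadmap analyze
FROM_PHASE="5.1"
KEPT=$(printf '%s\n' $PHASES | awk -v from="$FROM_PHASE" '$1 + 0 >= from + 0' | sort -g)
echo "$KEPT"
```

`sort -g` rather than `sort -n` is the safer choice here because it compares general floating-point values, so "5.1" orders between "5" and "6" as intended.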
**If no incomplete phases remain:**

```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
GSD ► AUTONOMOUS ▸ COMPLETE 🎉
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

All phases complete! Nothing left to do.
```

Exit cleanly.

**Display the phase plan:**

```
## Phase Plan

| # | Phase | Status |
|---|-------|--------|
| 5 | Skill Scaffolding & Phase Discovery | In Progress |
| 6 | Smart Discuss | Not Started |
| 7 | Auto-Chain Refinements | Not Started |
| 8 | Lifecycle Orchestration | Not Started |
```

**Fetch details for each phase:**

```bash
DETAIL=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" roadmap get-phase ${PHASE_NUM})
```

Extract `phase_name`, `goal`, and `success_criteria` from each. Store them for use in execute_phase and transition messages.

</step>
<step name="execute_phase">

## 3. Execute Phase

For the current phase, display the progress banner:

```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
GSD ► AUTONOMOUS ▸ Phase {N}/{T}: {Name} [████░░░░] {P}%
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```

Where N = the current phase number (from the ROADMAP, e.g., 6), T = total milestone phases (from `phase_count` parsed in the initialize step, e.g., 8), and P = the percentage of all milestone phases completed so far: P = (number of phases with `disk_status` "complete" in the latest `roadmap analyze`) / T × 100. Use █ for filled and ░ for empty segments in the progress bar (8 characters wide).
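The bar arithmetic above can be sketched in shell; the `COMPLETED` and `TOTAL` values are illustrative stand-ins for the counts parsed from `roadmap analyze` and `init milestone-op`.

```shell
# Render the 8-segment progress bar from completed/total phase counts.
COMPLETED=3   # illustrative: phases with disk_status "complete"
TOTAL=8       # illustrative: phase_count from init
P=$(( COMPLETED * 100 / TOTAL ))     # integer percentage
FILLED=$(( COMPLETED * 8 / TOTAL ))  # filled segments out of 8
BAR=""
for i in 1 2 3 4 5 6 7 8; do
  if [ "$i" -le "$FILLED" ]; then BAR="${BAR}█"; else BAR="${BAR}░"; fi
done
echo "[${BAR}] ${P}%"                # e.g. [███░░░░░] 37%
```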
**3a. Smart Discuss**

Check whether CONTEXT.md already exists for this phase:

```bash
PHASE_STATE=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" init phase-op ${PHASE_NUM})
```

Parse `has_context` from the JSON.

**If has_context is true:** Skip discuss — context is already gathered. Display:

```
Phase ${PHASE_NUM}: Context exists — skipping discuss.
```

Proceed to 3b.

**If has_context is false:** Execute the smart_discuss step for this phase.

After smart_discuss completes, verify context was written:

```bash
PHASE_STATE=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" init phase-op ${PHASE_NUM})
```

Check `has_context`. If false → go to handle_blocker: "Smart discuss for phase ${PHASE_NUM} did not produce CONTEXT.md."

**3b. Plan**

```
Skill(skill="gsd:plan-phase", args="${PHASE_NUM}")
```

Verify the plan produced output — re-run `init phase-op` and check `has_plans`. If false → go to handle_blocker: "Plan phase ${PHASE_NUM} did not produce any plans."

**3c. Execute**

```
Skill(skill="gsd:execute-phase", args="${PHASE_NUM} --no-transition")
```

**3d. Post-Execution Routing**

After execute-phase returns, read the verification result:

```bash
VERIFY_STATUS=$(grep "^status:" "${PHASE_DIR}"/*-VERIFICATION.md 2>/dev/null | head -1 | cut -d: -f2 | tr -d ' ')
```

Where `PHASE_DIR` comes from the `init phase-op` call already made in step 3a. If the variable is not in scope, re-fetch:

```bash
PHASE_STATE=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" init phase-op ${PHASE_NUM})
```

Parse `phase_dir` from the JSON.

**If VERIFY_STATUS is empty** (no VERIFICATION.md or no status field):

Go to handle_blocker: "Execute phase ${PHASE_NUM} did not produce verification results."

**If `passed`:**

Display:
```
Phase ${PHASE_NUM} ✅ ${PHASE_NAME} — Verification passed
```

Proceed to the iterate step.

**If `human_needed`:**

Read the human_verification section from VERIFICATION.md to get the count and the items requiring manual testing.

Display the items, then ask the user via AskUserQuestion:
- **question:** "Phase ${PHASE_NUM} has items needing manual verification. Validate now or continue to next phase?"
- **options:** "Validate now" / "Continue without validation"

On **"Validate now"**: Present the specific items from VERIFICATION.md's human_verification section. After the user reviews, ask:
- **question:** "Validation result?"
- **options:** "All good — continue" / "Found issues"

On "All good — continue": Display `Phase ${PHASE_NUM} ✅ Human validation passed` and proceed to the iterate step.

On "Found issues": Go to handle_blocker with the user's reported issues as the description.

On **"Continue without validation"**: Display `Phase ${PHASE_NUM} ⏭ Human validation deferred` and proceed to the iterate step.

**If `gaps_found`:**

Read the gap summary from VERIFICATION.md (score and missing items). Display:
```
⚠ Phase ${PHASE_NUM}: ${PHASE_NAME} — Gaps Found
Score: {N}/{M} must-haves verified
```

Ask the user via AskUserQuestion:
- **question:** "Gaps found in phase ${PHASE_NUM}. How to proceed?"
- **options:** "Run gap closure" / "Continue without fixing" / "Stop autonomous mode"

On **"Run gap closure"**: Execute the gap closure cycle (limit: 1 attempt):

```
Skill(skill="gsd:plan-phase", args="${PHASE_NUM} --gaps")
```

Verify gap plans were created — re-run `init phase-op ${PHASE_NUM}` and check `has_plans`. If no new gap plans → go to handle_blocker: "Gap closure planning for phase ${PHASE_NUM} did not produce plans."

Re-execute:
```
Skill(skill="gsd:execute-phase", args="${PHASE_NUM} --no-transition")
```

Re-read the verification status:
```bash
VERIFY_STATUS=$(grep "^status:" "${PHASE_DIR}"/*-VERIFICATION.md 2>/dev/null | head -1 | cut -d: -f2 | tr -d ' ')
```

If `passed` or `human_needed`: Route normally (continue or ask the user as above).

If still `gaps_found` after this retry: Display "Gaps persist after closure attempt." and ask via AskUserQuestion:
- **question:** "Gap closure did not fully resolve issues. How to proceed?"
- **options:** "Continue anyway" / "Stop autonomous mode"

On "Continue anyway": Proceed to the iterate step.
On "Stop autonomous mode": Go to handle_blocker.

This limits gap closure to 1 automatic retry to prevent infinite loops.

On **"Continue without fixing"**: Display `Phase ${PHASE_NUM} ⏭ Gaps deferred` and proceed to the iterate step.

On **"Stop autonomous mode"**: Go to handle_blocker with "User stopped — gaps remain in phase ${PHASE_NUM}".

</step>
<step name="smart_discuss">

## Smart Discuss

Run smart discuss for the current phase. It proposes grey area answers in batch tables — the user accepts or overrides per area. Produces identical CONTEXT.md output to regular discuss-phase.

> **Note:** Smart discuss is an autonomous-optimized variant of the `gsd:discuss-phase` skill. It produces identical CONTEXT.md output but uses batch table proposals instead of sequential questioning. The original `discuss-phase` skill remains unchanged (per CTRL-03). Future milestones may extract this to a separate skill file.

**Inputs:** `PHASE_NUM` from execute_phase. Run init to get phase paths:

```bash
PHASE_STATE=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" init phase-op ${PHASE_NUM})
```

Parse from the JSON: `phase_dir`, `phase_slug`, `padded_phase`, `phase_name`.

---

### Sub-step 1: Load prior context

Read project-level and prior-phase context to avoid re-asking decided questions.

**Read project files:**

```bash
cat .planning/PROJECT.md 2>/dev/null
cat .planning/REQUIREMENTS.md 2>/dev/null
cat .planning/STATE.md 2>/dev/null
```

Extract from these:
- **PROJECT.md** — Vision, principles, non-negotiables, user preferences
- **REQUIREMENTS.md** — Acceptance criteria, constraints, must-haves vs nice-to-haves
- **STATE.md** — Current progress, decisions logged so far

**Read all prior CONTEXT.md files:**

```bash
find .planning/phases -name "*-CONTEXT.md" 2>/dev/null | sort
```

For each CONTEXT.md where the phase number < current phase:
- Read the `<decisions>` section — these are locked preferences
- Read `<specifics>` — particular references or "I want it like X" moments
- Note patterns (e.g., "user consistently prefers minimal UI", "user rejected verbose output")
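The prior-phase loop above can be sketched in shell. This sketch assumes file names start with a zero-padded integer phase number (e.g., `05-CONTEXT.md`); decimal phases would need a numeric-aware comparison instead of `-lt`, and `CURRENT_PHASE` here is an illustrative value.

```shell
# Print <decisions> blocks from CONTEXT.md files of phases earlier than the current one.
CURRENT_PHASE=6   # illustrative
for ctx in $(find .planning/phases -name "*-CONTEXT.md" 2>/dev/null | sort); do
  num=$(basename "$ctx" | grep -oE '^[0-9]+' | sed 's/^0*//')   # "05" -> "5"
  if [ -n "$num" ] && [ "$num" -lt "$CURRENT_PHASE" ]; then
    sed -n '/<decisions>/,/<\/decisions>/p' "$ctx"
  fi
done
```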
**Build internal prior_decisions context** (do not write to file):

```
<prior_decisions>
## Project-Level
- [Key principle or constraint from PROJECT.md]
- [Requirement affecting this phase from REQUIREMENTS.md]

## From Prior Phases
### Phase N: [Name]
- [Decision relevant to current phase]
- [Preference that establishes a pattern]
</prior_decisions>
```

If no prior context exists, continue without it — expected for early phases.

---

### Sub-step 2: Scout Codebase

Lightweight codebase scan to inform grey area identification and proposals. Keep it under ~5% of context.

**Check for existing codebase maps:**

```bash
ls .planning/codebase/*.md 2>/dev/null
```

**If codebase maps exist:** Read the most relevant ones (CONVENTIONS.md, STRUCTURE.md, STACK.md, based on phase type). Extract reusable components, established patterns, integration points. Skip to building context below.

**If no codebase maps, do a targeted grep:**

Extract key terms from the phase goal. Search for related files:

```bash
grep -rl "{term1}\|{term2}" src/ app/ --include="*.ts" --include="*.tsx" --include="*.js" --include="*.jsx" 2>/dev/null | head -10
ls src/components/ src/hooks/ src/lib/ src/utils/ 2>/dev/null
```

Read the 3-5 most relevant files to understand existing patterns.

**Build internal codebase_context** (do not write to file):
- **Reusable assets** — existing components, hooks, utilities usable in this phase
- **Established patterns** — how the codebase does state management, styling, data fetching
- **Integration points** — where new code connects (routes, nav, providers)

---

### Sub-step 3: Analyze Phase and Generate Proposals

**Get phase details:**

```bash
DETAIL=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" roadmap get-phase ${PHASE_NUM})
```

Extract `goal`, `requirements`, `success_criteria` from the JSON response.

**Infrastructure detection — check FIRST, before generating grey areas:**

A phase is pure infrastructure when ALL of these are true:
1. Goal keywords match: "scaffolding", "plumbing", "setup", "configuration", "migration", "refactor", "rename", "restructure", "upgrade", "infrastructure"
2. AND success criteria are all technical: "file exists", "test passes", "config valid", "command runs"
3. AND no user-facing behavior is described (no "users can", "displays", "shows", "presents")
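The keyword halves of that check (conditions 1 and 3) can be sketched as two greps; the success-criteria check is analogous. The `GOAL` string here is illustrative — the real value comes from `roadmap get-phase`.

```shell
# Heuristic: infra keyword present AND no user-facing phrasing in the goal.
GOAL="Set up skill scaffolding and phase discovery plumbing"   # illustrative
INFRA_RE='scaffolding|plumbing|setup|configuration|migration|refactor|rename|restructure|upgrade|infrastructure'
USER_RE='users can|displays|shows|presents'
if echo "$GOAL" | grep -qiE "$INFRA_RE" && ! echo "$GOAL" | grep -qiE "$USER_RE"; then
  echo "infrastructure"
else
  echo "user-facing"
fi
```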
**If infrastructure-only:** Skip Sub-step 4. Jump directly to Sub-step 5 with a minimal CONTEXT.md. Display:

```
Phase ${PHASE_NUM}: Infrastructure phase — skipping discuss, writing minimal context.
```

Use these defaults for the CONTEXT.md:
- `<domain>`: Phase boundary from the ROADMAP goal
- `<decisions>`: Single "### Claude's Discretion" subsection — "All implementation choices are at Claude's discretion — pure infrastructure phase"
- `<code_context>`: Whatever the codebase scout found
- `<specifics>`: "No specific requirements — infrastructure phase"
- `<deferred>`: "None"

**If NOT infrastructure — generate grey area proposals:**

Determine the domain type from the phase goal:
- Something users **SEE** → visual: layout, interactions, states, density
- Something users **CALL** → interface: contracts, responses, errors, auth
- Something users **RUN** → execution: invocation, output, behavior modes, flags
- Something users **READ** → content: structure, tone, depth, flow
- Something being **ORGANIZED** → organization: criteria, grouping, exceptions, naming

Check prior_decisions — skip grey areas already decided in prior phases.

Generate **3-4 grey areas** with **~4 questions each**. For each question:
- **Pre-select a recommended answer** based on: prior decisions (consistency), codebase patterns (reuse), domain conventions (standard approaches), ROADMAP success criteria
- Generate **1-2 alternatives** per question
- **Annotate** with prior decision context ("You decided X in Phase N") and code context ("Component Y exists with Z variants") where relevant

---

### Sub-step 4: Present Proposals Per Area

Present grey areas **one at a time**. For each area (M of N):

Display a table:

```
### Grey Area {M}/{N}: {Area Name}

| # | Question | ✅ Recommended | Alternative(s) |
|---|----------|---------------|-----------------|
| 1 | {question} | {answer} — {rationale} | {alt1}; {alt2} |
| 2 | {question} | {answer} — {rationale} | {alt1} |
| 3 | {question} | {answer} — {rationale} | {alt1}; {alt2} |
| 4 | {question} | {answer} — {rationale} | {alt1} |
```

Then prompt the user via **AskUserQuestion**:
- **header:** "Area {M}/{N}"
- **question:** "Accept these answers for {Area Name}?"
- **options:** Build dynamically — always "Accept all" first, then "Change Q1" through "Change QN" for each question (up to 4), then "Discuss deeper" last. Cap at 6 explicit options max (AskUserQuestion adds "Other" automatically).

**On "Accept all":** Record all recommended answers for this area. Move to the next area.

**On "Change QN":** Use AskUserQuestion with the alternatives for that specific question:
- **header:** "{Area Name}"
- **question:** "Q{N}: {question text}"
- **options:** List the 1-2 alternatives plus "You decide" (maps to Claude's Discretion)

Record the user's choice. Re-display the updated table with the change reflected. Re-present the full acceptance prompt so the user can make additional changes or accept.

**On "Discuss deeper":** Switch to interactive mode for this area only — ask questions one at a time using AskUserQuestion with 2-3 concrete options per question plus "You decide". After 4 questions, prompt:
- **header:** "{Area Name}"
- **question:** "More questions about {area name}, or move to next?"
- **options:** "More questions" / "Next area"

If "More questions", ask 4 more. If "Next area", display a final summary table of the captured answers for this area and move on.

**On "Other" (free text):** Interpret it as either a specific change request or general feedback. Incorporate it into the area's decisions, re-display the updated table, re-present the acceptance prompt.

**Scope creep handling:** If the user mentions something outside the phase domain:

```
"{Feature} sounds like a new capability — that belongs in its own phase.
I'll note it as a deferred idea.

Back to {current area}: {return to current question}"
```

Track deferred ideas internally for inclusion in CONTEXT.md.

---

### Sub-step 5: Write CONTEXT.md

After all areas are resolved (or the infrastructure skip), write the CONTEXT.md file.

**File path:** `${phase_dir}/${padded_phase}-CONTEXT.md`

Use **exactly** this structure (identical to discuss-phase output):

```markdown
# Phase {PHASE_NUM}: {Phase Name} - Context

**Gathered:** {date}
**Status:** Ready for planning

<domain>
## Phase Boundary

{Domain boundary statement from analysis — what this phase delivers}

</domain>

<decisions>
## Implementation Decisions

### {Area 1 Name}
- {Accepted/chosen answer for Q1}
- {Accepted/chosen answer for Q2}
- {Accepted/chosen answer for Q3}
- {Accepted/chosen answer for Q4}

### {Area 2 Name}
- {Accepted/chosen answer for Q1}
- {Accepted/chosen answer for Q2}
...

### Claude's Discretion
{Any "You decide" answers collected — note Claude has flexibility here}

</decisions>

<code_context>
## Existing Code Insights

### Reusable Assets
- {From codebase scout — components, hooks, utilities}

### Established Patterns
- {From codebase scout — state management, styling, data fetching}

### Integration Points
- {From codebase scout — where new code connects}

</code_context>

<specifics>
## Specific Ideas

{Any specific references or "I want it like X" from discussion}
{If none: "No specific requirements — open to standard approaches"}

</specifics>

<deferred>
## Deferred Ideas

{Ideas captured but out of scope for this phase}
{If none: "None — discussion stayed within phase scope"}

</deferred>
```

Write the file.

**Commit:**

```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" commit "docs(${padded_phase}): smart discuss context" --files "${phase_dir}/${padded_phase}-CONTEXT.md"
```

Display confirmation:

```
Created: {path}
Decisions captured: {count} across {area_count} areas
```

</step>
<step name="iterate">

## 4. Iterate

After each phase completes, re-read ROADMAP.md to catch phases inserted mid-execution (decimal phases like 5.1):

```bash
ROADMAP=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" roadmap analyze)
```

Re-filter incomplete phases using the same logic as discover_phases:
- Keep phases where `disk_status !== "complete"` OR `roadmap_complete === false`
- Apply the `--from N` filter if originally provided
- Sort by number ascending

Read STATE.md fresh:

```bash
cat .planning/STATE.md
```

Check for blockers in the Blockers/Concerns section. If blockers are found, go to handle_blocker with the blocker description.

If incomplete phases remain: proceed to the next phase, looping back to execute_phase.

If all phases are complete, proceed to the lifecycle step.

</step>
<step name="lifecycle">

## 5. Lifecycle

After all phases complete, run the milestone lifecycle sequence: audit → complete → cleanup.

Display the lifecycle transition banner:

```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
GSD ► AUTONOMOUS ▸ LIFECYCLE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

All phases complete → Starting lifecycle: audit → complete → cleanup
Milestone: {milestone_version} — {milestone_name}
```

**5a. Audit**

```
Skill(skill="gsd:audit-milestone")
```

After the audit completes, detect the result:

```bash
AUDIT_FILE=".planning/v${milestone_version}-MILESTONE-AUDIT.md"
AUDIT_STATUS=$(grep "^status:" "${AUDIT_FILE}" 2>/dev/null | head -1 | cut -d: -f2 | tr -d ' ')
```

**If AUDIT_STATUS is empty** (no audit file or no status field):

Go to handle_blocker: "Audit did not produce results — audit file missing or malformed."

**If `passed`:**

Display:
```
Audit ✅ passed — proceeding to complete milestone
```

Proceed to 5b (no user pause — per CTRL-01).

**If `gaps_found`:**

Read the gaps summary from the audit file. Display:
```
⚠ Audit: Gaps Found
```

Ask the user via AskUserQuestion:
- **question:** "Milestone audit found gaps. How to proceed?"
- **options:** "Continue anyway — accept gaps" / "Stop — fix gaps manually"

On **"Continue anyway"**: Display `Audit ⏭ Gaps accepted — proceeding to complete milestone` and proceed to 5b.

On **"Stop"**: Go to handle_blocker with "User stopped — audit gaps remain. Run /gsd:audit-milestone to review, then /gsd:complete-milestone when ready."

**If `tech_debt`:**

Read the tech debt summary from the audit file. Display:
```
⚠ Audit: Tech Debt Identified
```

Show the summary, then ask the user via AskUserQuestion:
- **question:** "Milestone audit found tech debt. How to proceed?"
- **options:** "Continue with tech debt" / "Stop — address debt first"

On **"Continue with tech debt"**: Display `Audit ⏭ Tech debt acknowledged — proceeding to complete milestone` and proceed to 5b.

On **"Stop"**: Go to handle_blocker with "User stopped — tech debt to address. Run /gsd:audit-milestone to review details."

**5b. Complete Milestone**

```
Skill(skill="gsd:complete-milestone", args="${milestone_version}")
```

After complete-milestone returns, verify it produced output:

```bash
ls .planning/milestones/v${milestone_version}-ROADMAP.md 2>/dev/null
```

If the archive file does not exist, go to handle_blocker: "Complete milestone did not produce expected archive files."

**5c. Cleanup**

```
Skill(skill="gsd:cleanup")
```

Cleanup shows its own dry-run and asks the user for approval internally — this is an acceptable pause per CTRL-01, since it is an explicit decision about file deletion.

**5d. Final Completion**

Display the final completion banner:

```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
GSD ► AUTONOMOUS ▸ COMPLETE 🎉
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Milestone: {milestone_version} — {milestone_name}
Status: Complete ✅
Lifecycle: audit ✅ → complete ✅ → cleanup ✅

Ship it! 🚀
```

</step>
<step name="handle_blocker">

## 6. Handle Blocker

When any phase operation fails or a blocker is detected, present 3 options via AskUserQuestion:

**Prompt:** "Phase {N} ({Name}) encountered an issue: {description}"

**Options:**
1. **"Fix and retry"** — Re-run the failed step (discuss, plan, or execute) for this phase
2. **"Skip this phase"** — Mark the phase as skipped, continue to the next incomplete phase
3. **"Stop autonomous mode"** — Display a summary of progress so far and exit cleanly

**On "Fix and retry":** Loop back to the failed step within execute_phase. If the same step fails again after the retry, re-present these options.

**On "Skip this phase":** Log `Phase {N} ⏭ {Name} — Skipped by user` and proceed to iterate.

**On "Stop autonomous mode":** Display the progress summary:

```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
GSD ► AUTONOMOUS ▸ STOPPED
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Completed: {list of completed phases}
Skipped: {list of skipped phases}
Remaining: {list of remaining phases}

Resume with: /gsd:autonomous --from {next_phase}
```

</step>

</process>
<success_criteria>
- [ ] All incomplete phases executed in order (smart discuss → plan → execute each)
- [ ] Smart discuss proposes grey area answers in tables; user accepts or overrides per area
- [ ] Progress banners displayed between phases
- [ ] Execute-phase invoked with --no-transition (autonomous manages transitions)
- [ ] Post-execution verification reads VERIFICATION.md and routes on status
- [ ] Passed verification → automatic continue to next phase
- [ ] Human-needed verification → user prompted to validate or skip
- [ ] Gaps-found → user offered gap closure, continue, or stop
- [ ] Gap closure limited to 1 retry (prevents infinite loops)
- [ ] Plan-phase and execute-phase failures route to handle_blocker
- [ ] ROADMAP.md re-read after each phase (catches inserted phases)
- [ ] STATE.md checked for blockers before each phase
- [ ] Blockers handled via user choice (retry / skip / stop)
- [ ] Final completion or stop summary displayed
- [ ] After all phases complete, the lifecycle step is invoked (not a manual suggestion)
- [ ] Lifecycle transition banner displayed before audit
- [ ] Audit invoked via Skill(skill="gsd:audit-milestone")
- [ ] Audit result routing: passed → auto-continue, gaps_found → user decides, tech_debt → user decides
- [ ] Audit technical failure (no file/no status) routes to handle_blocker
- [ ] Complete-milestone invoked via Skill() with ${milestone_version} arg
- [ ] Cleanup invoked via Skill() — internal confirmation is acceptable (CTRL-01)
- [ ] Final completion banner displayed after lifecycle
- [ ] Progress bar uses phase number / total milestone phases (not position among incomplete)
- [ ] Smart discuss documents its relationship to discuss-phase with a CTRL-03 note
</success_criteria>
177
get-shit-done/workflows/check-todos.md
Normal file
@@ -0,0 +1,177 @@
<purpose>
List all pending todos, allow selection, load full context for the selected todo, and route to appropriate action.
</purpose>

<required_reading>
Read all files referenced by the invoking prompt's execution_context before starting.
</required_reading>

<process>

<step name="init_context">
Load todo context:

```bash
INIT=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" init todos)
if [[ "$INIT" == @file:* ]]; then INIT=$(cat "${INIT#@file:}"); fi
```

Extract from init JSON: `todo_count`, `todos`, `pending_dir`.
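The field extraction can be sketched with `jq` (already used elsewhere in these workflows); the field names follow the init JSON described above, and `$INIT` is the JSON string loaded in this step:

```bash
# Sketch: pull a named field out of the init JSON (field names assumed from the text above).
get_init_field() {
  echo "$INIT" | jq -r --arg f "$1" '.[$f]'
}
```

For example, `TODO_COUNT=$(get_init_field todo_count)` before branching on a zero count.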

If `todo_count` is 0:
```
No pending todos.

Todos are captured during work sessions with /gsd:add-todo.

---

Would you like to:

1. Continue with current phase (/gsd:progress)
2. Add a todo now (/gsd:add-todo)
```

Exit.
</step>

<step name="parse_filter">
Check for area filter in arguments:
- `/gsd:check-todos` → show all
- `/gsd:check-todos api` → filter to area:api only
</step>
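The area filter above can be sketched with `jq` (used elsewhere in these workflows); the `todos`/`area` field names are assumptions from the surrounding text:

```bash
# Sketch: filter the init `todos` array by area when an argument was given.
# $INIT holds the init JSON loaded earlier; field names are assumed.
filter_todos_by_area() {
  local area=$1
  if [ -n "$area" ]; then
    echo "$INIT" | jq --arg a "$area" '[.todos[] | select(.area == $a)]'
  else
    echo "$INIT" | jq '.todos'
  fi
}
```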

<step name="list_todos">
Use the `todos` array from init context (already filtered by area if specified).

Parse and display as numbered list:

```
Pending Todos:

1. Add auth token refresh (api, 2d ago)
2. Fix modal z-index issue (ui, 1d ago)
3. Refactor database connection pool (database, 5h ago)

---

Reply with a number to view details, or:
- `/gsd:check-todos [area]` to filter by area
- `q` to exit
```

Format age as relative time from created timestamp.
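The relative-age formatting can be sketched as follows (a minimal sketch assuming GNU `date` and an ISO-style created timestamp; the exact timestamp format in todo files may differ):

```bash
# Sketch: render a created timestamp as a relative age like "5h ago" or "2d ago".
# Assumes GNU date (`date -d`); coarse buckets only.
relative_age() {
  local created_epoch now diff
  created_epoch=$(date -d "$1" +%s)
  now=$(date +%s)
  diff=$(( now - created_epoch ))
  if   (( diff < 3600 ));  then echo "$(( diff / 60 ))m ago"
  elif (( diff < 86400 )); then echo "$(( diff / 3600 ))h ago"
  else                          echo "$(( diff / 86400 ))d ago"
  fi
}
```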
</step>

<step name="handle_selection">
Wait for user to reply with a number.

If valid: load selected todo, proceed.
If invalid: "Invalid selection. Reply with a number (1-[N]) or `q` to exit."
</step>

<step name="load_context">
Read the todo file completely. Display:

```
## [title]

**Area:** [area]
**Created:** [date] ([relative time] ago)
**Files:** [list or "None"]

### Problem
[problem section content]

### Solution
[solution section content]
```

If `files` field has entries, read and briefly summarize each.
</step>

<step name="check_roadmap">
Check for roadmap (use init progress or check file existence directly):

If `.planning/ROADMAP.md` exists:
1. Check if todo's area matches an upcoming phase
2. Check if todo's files overlap with a phase's scope
3. Note any match for action options
</step>

<step name="offer_actions">
**If todo maps to a roadmap phase:**

Use AskUserQuestion:
- header: "Action"
- question: "This todo relates to Phase [N]: [name]. What would you like to do?"
- options:
  - "Work on it now" — move to done, start working
  - "Add to phase plan" — include when planning Phase [N]
  - "Brainstorm approach" — think through before deciding
  - "Put it back" — return to list

**If no roadmap match:**

Use AskUserQuestion:
- header: "Action"
- question: "What would you like to do with this todo?"
- options:
  - "Work on it now" — move to done, start working
  - "Create a phase" — /gsd:add-phase with this scope
  - "Brainstorm approach" — think through before deciding
  - "Put it back" — return to list
</step>

<step name="execute_action">
**Work on it now:**
```bash
mv ".planning/todos/pending/[filename]" ".planning/todos/done/"
```
Update STATE.md todo count. Present problem/solution context. Begin work or ask how to proceed.

**Add to phase plan:**
Note todo reference in phase planning notes. Keep in pending. Return to list or exit.

**Create a phase:**
Display: `/gsd:add-phase [description from todo]`
Keep in pending. User runs command in fresh context.

**Brainstorm approach:**
Keep in pending. Start discussion about problem and approaches.

**Put it back:**
Return to list_todos step.
</step>

<step name="update_state">
After any action that changes todo count:

Re-run `init todos` to get the updated count, then update the STATE.md "### Pending Todos" section if it exists.
</step>

<step name="git_commit">
If todo was moved to done/, commit the change:

```bash
git rm --cached .planning/todos/pending/[filename] 2>/dev/null || true
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" commit "docs: start work on todo - [title]" --files .planning/todos/done/[filename] .planning/STATE.md
```

Tool respects `commit_docs` config and gitignore automatically.

Confirm: "Committed: docs: start work on todo - [title]"
</step>

</process>

<success_criteria>
- [ ] All pending todos listed with title, area, age
- [ ] Area filter applied if specified
- [ ] Selected todo's full context loaded
- [ ] Roadmap context checked for phase match
- [ ] Appropriate actions offered
- [ ] Selected action executed
- [ ] STATE.md updated if todo count changed
- [ ] Changes committed to git (if todo moved to done/)
</success_criteria>
152
get-shit-done/workflows/cleanup.md
Normal file
@@ -0,0 +1,152 @@
<purpose>

Archive accumulated phase directories from completed milestones into `.planning/milestones/v{X.Y}-phases/`. Identifies which phases belong to each completed milestone, shows a dry-run summary, and moves directories on confirmation.

</purpose>

<required_reading>

1. `.planning/MILESTONES.md`
2. `.planning/milestones/` directory listing
3. `.planning/phases/` directory listing

</required_reading>

<process>

<step name="identify_completed_milestones">

Read `.planning/MILESTONES.md` to identify completed milestones and their versions.

```bash
cat .planning/MILESTONES.md
```

Extract each milestone version (e.g., v1.0, v1.1, v2.0).

Check which milestone archive dirs already exist:

```bash
ls -d .planning/milestones/v*-phases 2>/dev/null
```

Filter to milestones that do NOT already have a `-phases` archive directory.

If all milestones already have phase archives:

```
All completed milestones already have phase directories archived. Nothing to clean up.
```

Stop here.

</step>

<step name="determine_phase_membership">

For each completed milestone without a `-phases` archive, read the archived ROADMAP snapshot to determine which phases belong to it:

```bash
cat .planning/milestones/v{X.Y}-ROADMAP.md
```

Extract phase numbers and names from the archived roadmap (e.g., Phase 1: Foundation, Phase 2: Auth).

Check which of those phase directories still exist in `.planning/phases/`:

```bash
ls -d .planning/phases/*/ 2>/dev/null
```

Match phase directories to milestone membership. Only include directories that still exist in `.planning/phases/`.
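The membership match can be sketched as follows (a minimal sketch assuming `Phase N:` headings in the archived roadmap and the `NN-slug/` directory convention shown in the dry-run example):

```bash
# Sketch: list phase directories named in an archived roadmap that still
# exist under .planning/phases/. "Phase N" headings and NN-slug dirs assumed.
list_milestone_phase_dirs() {
  local roadmap=$1
  grep -oE 'Phase [0-9]+' "$roadmap" | sort -u |
  while read -r _ num; do
    for dir in .planning/phases/$(printf '%02d' "$num")-*/; do
      [ -d "$dir" ] && echo "$dir"
    done
  done
}
```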

</step>

<step name="show_dry_run">

Present a dry-run summary for each milestone:

```
## Cleanup Summary

### v{X.Y} — {Milestone Name}
These phase directories will be archived:
- 01-foundation/
- 02-auth/
- 03-core-features/

Destination: .planning/milestones/v{X.Y}-phases/

### v{X.Z} — {Milestone Name}
These phase directories will be archived:
- 04-security/
- 05-hardening/

Destination: .planning/milestones/v{X.Z}-phases/
```

If no phase directories remain to archive (all already moved or deleted):

```
No phase directories found to archive. Phases may have been removed or archived previously.
```

Stop here.

Otherwise, AskUserQuestion: "Proceed with archiving?" with options: "Yes — archive listed phases" | "Cancel"

If "Cancel": Stop.

</step>

<step name="archive_phases">

For each milestone, move phase directories:

```bash
mkdir -p .planning/milestones/v{X.Y}-phases
```

For each phase directory belonging to this milestone:

```bash
mv .planning/phases/{dir} .planning/milestones/v{X.Y}-phases/
```

Repeat for all milestones in the cleanup set.

</step>

<step name="commit">

Commit the changes:

```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" commit "chore: archive phase directories from completed milestones" --files .planning/milestones/ .planning/phases/
```

</step>

<step name="report">

```
Archived:
{For each milestone}
- v{X.Y}: {N} phase directories → .planning/milestones/v{X.Y}-phases/

.planning/phases/ cleaned up.
```

</step>

</process>

<success_criteria>

- [ ] All completed milestones without existing phase archives identified
- [ ] Phase membership determined from archived ROADMAP snapshots
- [ ] Dry-run summary shown and user confirmed
- [ ] Phase directories moved to `.planning/milestones/v{X.Y}-phases/`
- [ ] Changes committed

</success_criteria>
766
get-shit-done/workflows/complete-milestone.md
Normal file
@@ -0,0 +1,766 @@
<purpose>

Mark a shipped version (v1.0, v1.1, v2.0) as complete. Creates historical record in MILESTONES.md, performs full PROJECT.md evolution review, reorganizes ROADMAP.md with milestone groupings, and tags the release in git.

</purpose>

<required_reading>

1. templates/milestone.md
2. templates/milestone-archive.md
3. `.planning/ROADMAP.md`
4. `.planning/REQUIREMENTS.md`
5. `.planning/PROJECT.md`

</required_reading>

<archival_behavior>

When a milestone completes:

1. Extract full milestone details to `.planning/milestones/v[X.Y]-ROADMAP.md`
2. Archive requirements to `.planning/milestones/v[X.Y]-REQUIREMENTS.md`
3. Update ROADMAP.md — replace milestone details with one-line summary
4. Delete REQUIREMENTS.md (fresh one for next milestone)
5. Perform full PROJECT.md evolution review
6. Offer to create next milestone inline
7. Archive UI artifacts (`*-UI-SPEC.md`, `*-UI-REVIEW.md`) alongside other phase documents
8. Clean up `.planning/ui-reviews/` screenshot files (binary assets, never archived)

**Context Efficiency:** Archives keep ROADMAP.md constant-size and REQUIREMENTS.md milestone-scoped.

**ROADMAP archive** uses `templates/milestone-archive.md` — includes milestone header (status, phases, date), full phase details, milestone summary (decisions, issues, tech debt).

**REQUIREMENTS archive** contains all requirements marked complete with outcomes, traceability table with final status, notes on changed requirements.

</archival_behavior>

<process>

<step name="verify_readiness">

**Use `roadmap analyze` for comprehensive readiness check:**

```bash
ROADMAP=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" roadmap analyze)
```

This returns all phases with plan/summary counts and disk status. Use this to verify:
- Which phases belong to this milestone?
- All phases complete (all plans have summaries)? Check `disk_status === 'complete'` for each.
- `progress_percent` should be 100%.
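The completeness check can be sketched with `jq` (used elsewhere in these workflows); the `phases`/`disk_status` field names follow the description above and are assumptions about the analyze output:

```bash
# Sketch: list phases whose disk_status is not "complete".
# $ROADMAP holds the `roadmap analyze` JSON loaded above; field names assumed.
incomplete_phases() {
  echo "$ROADMAP" | jq -r '.phases[] | select(.disk_status != "complete") | .name'
}
```

An empty result means every phase is complete on disk.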

**Requirements completion check (REQUIRED before presenting):**

Parse REQUIREMENTS.md traceability table:
- Count total v1 requirements vs checked-off (`[x]`) requirements
- Identify any non-Complete rows in the traceability table
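The count can be sketched as follows (a minimal sketch assuming the standard `- [x]` / `- [ ]` markdown checkbox syntax used in these planning docs):

```bash
# Sketch: count checked vs total requirement checkboxes in a file.
count_requirements() {
  local file=$1
  local total checked
  total=$(grep -cE '^- \[[ x]\]' "$file")
  checked=$(grep -cE '^- \[x\]' "$file")
  echo "$checked/$total"
}
```

This yields the `{N}/{M}` figure shown in the readiness summary.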

Present:

```
Milestone: [Name, e.g., "v1.0 MVP"]

Includes:
- Phase 1: Foundation (2/2 plans complete)
- Phase 2: Authentication (2/2 plans complete)
- Phase 3: Core Features (3/3 plans complete)
- Phase 4: Polish (1/1 plan complete)

Total: {phase_count} phases, {total_plans} plans, all complete
Requirements: {N}/{M} v1 requirements checked off
```

**If requirements incomplete** (N < M):

```
⚠ Unchecked Requirements:

- [ ] {REQ-ID}: {description} (Phase {X})
- [ ] {REQ-ID}: {description} (Phase {Y})
```

MUST present 3 options:
1. **Proceed anyway** — mark milestone complete with known gaps
2. **Run audit first** — `/gsd:audit-milestone` to assess gap severity
3. **Abort** — return to development

If user selects "Proceed anyway": note incomplete requirements in MILESTONES.md under `### Known Gaps` with REQ-IDs and descriptions.

<config-check>

```bash
cat .planning/config.json 2>/dev/null
```

</config-check>

<if mode="yolo">

```
⚡ Auto-approved: Milestone scope verification
[Show breakdown summary without prompting]
Proceeding to stats gathering...
```

Proceed to gather_stats.

</if>

<if mode="interactive" OR="custom with gates.confirm_milestone_scope true">

```
Ready to mark this milestone as shipped?
(yes / wait / adjust scope)
```

Wait for confirmation.
- "adjust scope": Ask which phases to include.
- "wait": Stop, user returns when ready.

</if>

</step>

<step name="gather_stats">

Calculate milestone statistics:

```bash
git log --oneline --grep="feat(" | head -20
git diff --stat FIRST_COMMIT..LAST_COMMIT | tail -1
find . -name "*.swift" -o -name "*.ts" -o -name "*.py" | xargs wc -l 2>/dev/null
git log --format="%ai" FIRST_COMMIT | tail -1
git log --format="%ai" LAST_COMMIT | head -1
```

Present:

```
Milestone Stats:
- Phases: [X-Y]
- Plans: [Z] total
- Tasks: [N] total (from phase summaries)
- Files modified: [M]
- Lines of code: [LOC] [language]
- Timeline: [Days] days ([Start] → [End])
- Git range: feat(XX-XX) → feat(YY-YY)
```

</step>

<step name="extract_accomplishments">

Extract one-liners from SUMMARY.md files using summary-extract:

```bash
# For each phase in milestone, extract one-liner
for summary in .planning/phases/*-*/*-SUMMARY.md; do
  node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" summary-extract "$summary" --fields one_liner | jq -r '.one_liner'
done
```

Extract 4-6 key accomplishments. Present:

```
Key accomplishments for this milestone:
1. [Achievement from phase 1]
2. [Achievement from phase 2]
3. [Achievement from phase 3]
4. [Achievement from phase 4]
5. [Achievement from phase 5]
```

</step>

<step name="create_milestone_entry">

**Note:** MILESTONES.md entry is now created automatically by `gsd-tools milestone complete` in the archive_milestone step. The entry includes version, date, phase/plan/task counts, and accomplishments extracted from SUMMARY.md files.

If additional details are needed (e.g., user-provided "Delivered" summary, git range, LOC stats), add them manually after the CLI creates the base entry.

</step>

<step name="evolve_project_full_review">

Full PROJECT.md evolution review at milestone completion.

Read all phase summaries:

```bash
cat .planning/phases/*-*/*-SUMMARY.md
```

**Full review checklist:**

1. **"What This Is" accuracy:**
   - Compare current description to what was built
   - Update if product has meaningfully changed

2. **Core Value check:**
   - Still the right priority? Did shipping reveal a different core value?
   - Update if the ONE thing has shifted

3. **Requirements audit:**

   **Validated section:**
   - All Active requirements shipped this milestone → Move to Validated
   - Format: `- ✓ [Requirement] — v[X.Y]`

   **Active section:**
   - Remove requirements moved to Validated
   - Add new requirements for next milestone
   - Keep unaddressed requirements

   **Out of Scope audit:**
   - Review each item — reasoning still valid?
   - Remove irrelevant items
   - Add requirements invalidated during milestone

4. **Context update:**
   - Current codebase state (LOC, tech stack)
   - User feedback themes (if any)
   - Known issues or technical debt

5. **Key Decisions audit:**
   - Extract all decisions from milestone phase summaries
   - Add to Key Decisions table with outcomes
   - Mark ✓ Good, ⚠️ Revisit, or — Pending

6. **Constraints check:**
   - Any constraints changed during development? Update as needed

Update PROJECT.md inline. Update "Last updated" footer:

```markdown
---
*Last updated: [date] after v[X.Y] milestone*
```

**Example full evolution (v1.0 → v1.1 prep):**

Before:

```markdown
## What This Is

A real-time collaborative whiteboard for remote teams.

## Core Value

Real-time sync that feels instant.

## Requirements

### Validated

(None yet — ship to validate)

### Active

- [ ] Canvas drawing tools
- [ ] Real-time sync < 500ms
- [ ] User authentication
- [ ] Export to PNG

### Out of Scope

- Mobile app — web-first approach
- Video chat — use external tools
```

After v1.0:

```markdown
## What This Is

A real-time collaborative whiteboard for remote teams with instant sync and drawing tools.

## Core Value

Real-time sync that feels instant.

## Requirements

### Validated

- ✓ Canvas drawing tools — v1.0
- ✓ Real-time sync < 500ms — v1.0 (achieved 200ms avg)
- ✓ User authentication — v1.0

### Active

- [ ] Export to PNG
- [ ] Undo/redo history
- [ ] Shape tools (rectangles, circles)

### Out of Scope

- Mobile app — web-first approach, PWA works well
- Video chat — use external tools
- Offline mode — real-time is core value

## Context

Shipped v1.0 with 2,400 LOC TypeScript.
Tech stack: Next.js, Supabase, Canvas API.
Initial user testing showed demand for shape tools.
```

**Step complete when:**

- [ ] "What This Is" reviewed and updated if needed
- [ ] Core Value verified as still correct
- [ ] All shipped requirements moved to Validated
- [ ] New requirements added to Active for next milestone
- [ ] Out of Scope reasoning audited
- [ ] Context updated with current state
- [ ] All milestone decisions added to Key Decisions
- [ ] "Last updated" footer reflects milestone completion

</step>

<step name="reorganize_roadmap">

Update `.planning/ROADMAP.md` — group completed milestone phases:

```markdown
# Roadmap: [Project Name]

## Milestones

- ✅ **v1.0 MVP** — Phases 1-4 (shipped YYYY-MM-DD)
- 🚧 **v1.1 Security** — Phases 5-6 (in progress)
- 📋 **v2.0 Redesign** — Phases 7-10 (planned)

## Phases

<details>
<summary>✅ v1.0 MVP (Phases 1-4) — SHIPPED YYYY-MM-DD</summary>

- [x] Phase 1: Foundation (2/2 plans) — completed YYYY-MM-DD
- [x] Phase 2: Authentication (2/2 plans) — completed YYYY-MM-DD
- [x] Phase 3: Core Features (3/3 plans) — completed YYYY-MM-DD
- [x] Phase 4: Polish (1/1 plan) — completed YYYY-MM-DD

</details>

### 🚧 v[Next] [Name] (In Progress / Planned)

- [ ] Phase 5: [Name] ([N] plans)
- [ ] Phase 6: [Name] ([N] plans)

## Progress

| Phase             | Milestone | Plans Complete | Status      | Completed  |
| ----------------- | --------- | -------------- | ----------- | ---------- |
| 1. Foundation     | v1.0      | 2/2            | Complete    | YYYY-MM-DD |
| 2. Authentication | v1.0      | 2/2            | Complete    | YYYY-MM-DD |
| 3. Core Features  | v1.0      | 3/3            | Complete    | YYYY-MM-DD |
| 4. Polish         | v1.0      | 1/1            | Complete    | YYYY-MM-DD |
| 5. Security Audit | v1.1      | 0/1            | Not started | -          |
| 6. Hardening      | v1.1      | 0/2            | Not started | -          |
```

</step>

<step name="archive_milestone">

**Delegate archival to gsd-tools:**

```bash
ARCHIVE=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" milestone complete "v[X.Y]" --name "[Milestone Name]")
```

The CLI handles:
- Creating `.planning/milestones/` directory
- Archiving ROADMAP.md to `milestones/v[X.Y]-ROADMAP.md`
- Archiving REQUIREMENTS.md to `milestones/v[X.Y]-REQUIREMENTS.md` with archive header
- Moving audit file to milestones if it exists
- Creating/appending MILESTONES.md entry with accomplishments from SUMMARY.md files
- Updating STATE.md (status, last activity)

Extract from result: `version`, `date`, `phases`, `plans`, `tasks`, `accomplishments`, `archived`.

Verify: `✅ Milestone archived to .planning/milestones/`

**Phase archival (optional):** After archival completes, ask the user:

AskUserQuestion(header="Archive Phases", question="Archive phase directories to milestones/?", options: "Yes — move to milestones/v[X.Y]-phases/" | "Skip — keep phases in place")

If "Yes": move phase directories to the milestone archive:
```bash
mkdir -p .planning/milestones/v[X.Y]-phases
# For each phase directory in .planning/phases/:
mv .planning/phases/{phase-dir} .planning/milestones/v[X.Y]-phases/
```
Verify: `✅ Phase directories archived to .planning/milestones/v[X.Y]-phases/`

If "Skip": Phase directories remain in `.planning/phases/` as raw execution history. Use `/gsd:cleanup` later to archive retroactively.

After archival, the AI still handles:
- Reorganizing ROADMAP.md with milestone grouping (requires judgment)
- Full PROJECT.md evolution review (requires understanding)
- Deleting original ROADMAP.md and REQUIREMENTS.md
- These are NOT fully delegated because they require AI interpretation of content

</step>

<step name="reorganize_roadmap_and_delete_originals">

After `milestone complete` has archived, reorganize ROADMAP.md with milestone groupings, then delete originals:

**Reorganize ROADMAP.md** — group completed milestone phases:

```markdown
# Roadmap: [Project Name]

## Milestones

- ✅ **v1.0 MVP** — Phases 1-4 (shipped YYYY-MM-DD)
- 🚧 **v1.1 Security** — Phases 5-6 (in progress)

## Phases

<details>
<summary>✅ v1.0 MVP (Phases 1-4) — SHIPPED YYYY-MM-DD</summary>

- [x] Phase 1: Foundation (2/2 plans) — completed YYYY-MM-DD
- [x] Phase 2: Authentication (2/2 plans) — completed YYYY-MM-DD

</details>
```

**Then delete originals:**

```bash
rm .planning/ROADMAP.md
rm .planning/REQUIREMENTS.md
```

</step>

<step name="write_retrospective">

**Append to living retrospective:**

Check for existing retrospective:
```bash
ls .planning/RETROSPECTIVE.md 2>/dev/null
```

**If exists:** Read the file, append new milestone section before the "## Cross-Milestone Trends" section.

**If doesn't exist:** Create from template at `C:/Users/yaoji/.claude/get-shit-done/templates/retrospective.md`.

**Gather retrospective data:**

1. From SUMMARY.md files: Extract key deliverables, one-liners, tech decisions
2. From VERIFICATION.md files: Extract verification scores, gaps found
3. From UAT.md files: Extract test results, issues found
4. From git log: Count commits, calculate timeline
5. From the milestone work: Reflect on what worked and what didn't

**Write the milestone section:**

```markdown
## Milestone: v{version} — {name}

**Shipped:** {date}
**Phases:** {phase_count} | **Plans:** {plan_count}

### What Was Built
{Extract from SUMMARY.md one-liners}

### What Worked
{Patterns that led to smooth execution}

### What Was Inefficient
{Missed opportunities, rework, bottlenecks}

### Patterns Established
{New conventions discovered during this milestone}

### Key Lessons
{Specific, actionable takeaways}

### Cost Observations
- Model mix: {X}% opus, {Y}% sonnet, {Z}% haiku
- Sessions: {count}
- Notable: {efficiency observation}
```

**Update cross-milestone trends:**

If the "## Cross-Milestone Trends" section exists, update the tables with new data from this milestone.

**Commit:**
```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" commit "docs: update retrospective for v${VERSION}" --files .planning/RETROSPECTIVE.md
```

</step>

<step name="update_state">

Most STATE.md updates were handled by `milestone complete`, but verify and update remaining fields:

**Project Reference:**

```markdown
## Project Reference

See: .planning/PROJECT.md (updated [today])

**Core value:** [Current core value from PROJECT.md]
**Current focus:** [Next milestone or "Planning next milestone"]
```

**Accumulated Context:**
- Clear decisions summary (full log in PROJECT.md)
- Clear resolved blockers
- Keep open blockers for next milestone

</step>

<step name="handle_branches">

Check branching strategy and offer merge options.

Use `init milestone-op` for context, or load config directly:

```bash
INIT=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" init execute-phase "1")
if [[ "$INIT" == @file:* ]]; then INIT=$(cat "${INIT#@file:}"); fi
```

Extract `branching_strategy`, `phase_branch_template`, `milestone_branch_template`, and `commit_docs` from init JSON.
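The extraction can be sketched with `jq` (used elsewhere in these workflows); the field names follow the init JSON described above and populate the shell variables the merge snippets rely on:

```bash
# Sketch: pull branching config out of the init JSON into shell variables.
# $INIT is the JSON loaded above; field names are assumed from the text.
extract_branch_config() {
  BRANCHING_STRATEGY=$(echo "$INIT" | jq -r '.branching_strategy')
  PHASE_BRANCH_TEMPLATE=$(echo "$INIT" | jq -r '.phase_branch_template')
  MILESTONE_BRANCH_TEMPLATE=$(echo "$INIT" | jq -r '.milestone_branch_template')
  COMMIT_DOCS=$(echo "$INIT" | jq -r '.commit_docs')
}
```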
|
||||
|
||||
**If "none":** Skip to git_tag.
|
||||
|
||||
**For "phase" strategy:**
|
||||
|
||||
```bash
|
||||
BRANCH_PREFIX=$(echo "$PHASE_BRANCH_TEMPLATE" | sed 's/{.*//')
|
||||
PHASE_BRANCHES=$(git branch --list "${BRANCH_PREFIX}*" 2>/dev/null | sed 's/^\*//' | tr -d ' ')
|
||||
```
|
||||
|
||||
**For "milestone" strategy:**
|
||||
|
||||
```bash
|
||||
BRANCH_PREFIX=$(echo "$MILESTONE_BRANCH_TEMPLATE" | sed 's/{.*//')
|
||||
MILESTONE_BRANCH=$(git branch --list "${BRANCH_PREFIX}*" 2>/dev/null | sed 's/^\*//' | tr -d ' ' | head -1)
|
||||
```
|
||||
|
||||
**If no branches found:** Skip to git_tag.
|
||||
|
||||
**If branches exist:**
|
||||
|
||||
```
|
||||
## Git Branches Detected
|
||||
|
||||
Branching strategy: {phase/milestone}
|
||||
Branches: {list}
|
||||
|
||||
Options:
|
||||
1. **Merge to main** — Merge branch(es) to main
|
||||
2. **Delete without merging** — Already merged or not needed
|
||||
3. **Keep branches** — Leave for manual handling
|
||||
```
|
||||
|
||||
AskUserQuestion with options: Squash merge (Recommended), Merge with history, Delete without merging, Keep branches.
|
||||
|
||||
**Squash merge:**
|
||||
|
||||
```bash
|
||||
CURRENT_BRANCH=$(git branch --show-current)
|
||||
git checkout main
|
||||
|
||||
if [ "$BRANCHING_STRATEGY" = "phase" ]; then
|
||||
for branch in $PHASE_BRANCHES; do
|
||||
git merge --squash "$branch"
|
||||
# Strip .planning/ from staging if commit_docs is false
|
||||
if [ "$COMMIT_DOCS" = "false" ]; then
|
||||
git reset HEAD .planning/ 2>/dev/null || true
|
||||
fi
|
||||
git commit -m "feat: $branch for v[X.Y]"
|
||||
done
|
||||
fi
|
||||
|
||||
if [ "$BRANCHING_STRATEGY" = "milestone" ]; then
|
||||
git merge --squash "$MILESTONE_BRANCH"
|
||||
# Strip .planning/ from staging if commit_docs is false
|
||||
if [ "$COMMIT_DOCS" = "false" ]; then
|
||||
git reset HEAD .planning/ 2>/dev/null || true
|
||||
fi
|
||||
git commit -m "feat: $MILESTONE_BRANCH for v[X.Y]"
|
||||
fi
|
||||
|
||||
git checkout "$CURRENT_BRANCH"
|
||||
```

**Merge with history:**

```bash
CURRENT_BRANCH=$(git branch --show-current)
git checkout main

if [ "$BRANCHING_STRATEGY" = "phase" ]; then
  for branch in $PHASE_BRANCHES; do
    git merge --no-ff --no-commit "$branch"
    # Strip .planning/ from staging if commit_docs is false
    if [ "$COMMIT_DOCS" = "false" ]; then
      git reset HEAD .planning/ 2>/dev/null || true
    fi
    git commit -m "Merge branch '$branch' for v[X.Y]"
  done
fi

if [ "$BRANCHING_STRATEGY" = "milestone" ]; then
  git merge --no-ff --no-commit "$MILESTONE_BRANCH"
  # Strip .planning/ from staging if commit_docs is false
  if [ "$COMMIT_DOCS" = "false" ]; then
    git reset HEAD .planning/ 2>/dev/null || true
  fi
  git commit -m "Merge branch '$MILESTONE_BRANCH' for v[X.Y]"
fi

git checkout "$CURRENT_BRANCH"
```

**Delete without merging:**

```bash
if [ "$BRANCHING_STRATEGY" = "phase" ]; then
  for branch in $PHASE_BRANCHES; do
    git branch -d "$branch" 2>/dev/null || git branch -D "$branch"
  done
fi

if [ "$BRANCHING_STRATEGY" = "milestone" ]; then
  git branch -d "$MILESTONE_BRANCH" 2>/dev/null || git branch -D "$MILESTONE_BRANCH"
fi
```

**Keep branches:** Report "Branches preserved for manual handling"

</step>

<step name="git_tag">

Create git tag:

```bash
git tag -a v[X.Y] -m "v[X.Y] [Name]

Delivered: [One sentence]

Key accomplishments:
- [Item 1]
- [Item 2]
- [Item 3]

See .planning/MILESTONES.md for full details."
```

Confirm: "Tagged: v[X.Y]"

Ask: "Push tag to remote? (y/n)"

If yes:
```bash
git push origin v[X.Y]
```

</step>

<step name="git_commit_milestone">

Commit milestone completion.

```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" commit "chore: complete v[X.Y] milestone" --files .planning/milestones/v[X.Y]-ROADMAP.md .planning/milestones/v[X.Y]-REQUIREMENTS.md .planning/milestones/v[X.Y]-MILESTONE-AUDIT.md .planning/MILESTONES.md .planning/PROJECT.md .planning/STATE.md
```

Confirm: "Committed: chore: complete v[X.Y] milestone"

</step>

<step name="offer_next">

```
✅ Milestone v[X.Y] [Name] complete

Shipped:
- [N] phases ([M] plans, [P] tasks)
- [One sentence of what shipped]

Archived:
- milestones/v[X.Y]-ROADMAP.md
- milestones/v[X.Y]-REQUIREMENTS.md

Summary: .planning/MILESTONES.md
Tag: v[X.Y]

---

## ▶ Next Up

**Start Next Milestone** — questioning → research → requirements → roadmap

`/gsd:new-milestone`

<sub>`/clear` first → fresh context window</sub>

---
```

</step>

</process>

<milestone_naming>

**Version conventions:**
- **v1.0** — Initial MVP
- **v1.1, v1.2** — Minor updates, new features, fixes
- **v2.0, v3.0** — Major rewrites, breaking changes, new direction

**Names:** Short 1-2 words (v1.0 MVP, v1.1 Security, v1.2 Performance, v2.0 Redesign).

</milestone_naming>

<what_qualifies>

**Create milestones for:** Initial release, public releases, major feature sets shipped, before archiving planning.

**Don't create milestones for:** Every phase completion (too granular), work in progress, internal dev iterations (unless truly shipped).

Heuristic: "Is this deployed/usable/shipped?" If yes → milestone. If no → keep working.

</what_qualifies>

<success_criteria>

Milestone completion is successful when:

- [ ] MILESTONES.md entry created with stats and accomplishments
- [ ] PROJECT.md full evolution review completed
- [ ] All shipped requirements moved to Validated in PROJECT.md
- [ ] Key Decisions updated with outcomes
- [ ] ROADMAP.md reorganized with milestone grouping
- [ ] Roadmap archive created (milestones/v[X.Y]-ROADMAP.md)
- [ ] Requirements archive created (milestones/v[X.Y]-REQUIREMENTS.md)
- [ ] REQUIREMENTS.md deleted (fresh for next milestone)
- [ ] STATE.md updated with fresh project reference
- [ ] Git tag created (v[X.Y])
- [ ] Milestone commit made (includes archive files and deletion)
- [ ] Requirements completion checked against REQUIREMENTS.md traceability table
- [ ] Incomplete requirements surfaced with proceed/audit/abort options
- [ ] Known gaps recorded in MILESTONES.md if user proceeded with incomplete requirements
- [ ] RETROSPECTIVE.md updated with milestone section
- [ ] Cross-milestone trends updated
- [ ] User knows next step (/gsd:new-milestone)

</success_criteria>
219
get-shit-done/workflows/diagnose-issues.md
Normal file
@@ -0,0 +1,219 @@
<purpose>
Orchestrate parallel debug agents to investigate UAT gaps and find root causes.

After UAT finds gaps, spawn one debug agent per gap. Each agent investigates autonomously with symptoms pre-filled from UAT. Collect root causes, update UAT.md gaps with diagnosis, then hand off to plan-phase --gaps with actual diagnoses.

Orchestrator stays lean: parse gaps, spawn agents, collect results, update UAT.
</purpose>

<paths>
DEBUG_DIR=.planning/debug

Debug files use the `.planning/debug/` path (hidden directory with leading dot).
</paths>

<core_principle>
**Diagnose before planning fixes.**

UAT tells us WHAT is broken (symptoms). Debug agents find WHY (root cause). plan-phase --gaps then creates targeted fixes based on actual causes, not guesses.

Without diagnosis: "Comment doesn't refresh" → guess at fix → maybe wrong
With diagnosis: "Comment doesn't refresh" → "useEffect missing dependency" → precise fix
</core_principle>

<process>

<step name="parse_gaps">
**Extract gaps from UAT.md:**

Read the "Gaps" section (YAML format):
```yaml
- truth: "Comment appears immediately after submission"
  status: failed
  reason: "User reported: works but doesn't show until I refresh the page"
  severity: major
  test: 2
  artifacts: []
  missing: []
```

For each gap, also read the corresponding test from the "Tests" section to get full context.

Build gap list:
```
gaps = [
  {truth: "Comment appears immediately...", severity: "major", test_num: 2, reason: "..."},
  {truth: "Reply button positioned correctly...", severity: "minor", test_num: 5, reason: "..."},
  ...
]
```
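A minimal sketch of the extraction (the helper name is illustrative, not part of the workflow; it assumes top-level `- truth: "..."` entries as shown above):

```bash
# Sketch: pull the quoted truth strings out of a UAT.md-style gaps block
list_gap_truths() {
  sed -n 's/^- truth: *"\(.*\)"$/\1/p' "$1"
}
```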
</step>

<step name="report_plan">
**Report diagnosis plan to user:**

```
## Diagnosing {N} Gaps

Spawning parallel debug agents to investigate root causes:

| Gap (Truth) | Severity |
|-------------|----------|
| Comment appears immediately after submission | major |
| Reply button positioned correctly | minor |
| Delete removes comment | blocker |

Each agent will:
1. Create DEBUG-{slug}.md with symptoms pre-filled
2. Investigate autonomously (read code, form hypotheses, test)
3. Return root cause

This runs in parallel - all gaps investigated simultaneously.
```
</step>

<step name="spawn_agents">
**Spawn debug agents in parallel:**

For each gap, fill the debug-subagent-prompt template and spawn:

```
Task(
  prompt=filled_debug_subagent_prompt + "\n\n<files_to_read>\n- {phase_dir}/{phase_num}-UAT.md\n- .planning/STATE.md\n</files_to_read>",
  subagent_type="gsd-debugger",
  description="Debug: {truth_short}"
)
```

**All agents spawn in single message** (parallel execution).

Template placeholders:
- `{truth}`: The expected behavior that failed
- `{expected}`: From UAT test
- `{actual}`: Verbatim user description from reason field
- `{errors}`: Any error messages from UAT (or "None reported")
- `{reproduction}`: "Test {test_num} in UAT"
- `{timeline}`: "Discovered during UAT"
- `{goal}`: `find_root_cause_only` (UAT flow - plan-phase --gaps handles fixes)
- `{slug}`: Generated from truth
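The `{slug}` value can come from the `generate-slug` command that gsd-tools exposes; an equivalent standalone shell sketch (illustrative, with an assumed helper name) looks like:

```bash
# Sketch: URL-safe slug from a truth string (gsd-tools `generate-slug` does this for real)
slugify() {
  printf '%s\n' "$1" | tr '[:upper:]' '[:lower:]' | sed -e 's/[^a-z0-9]\{1,\}/-/g' -e 's/^-//' -e 's/-$//'
}
slugify "Comment appears immediately after submission"
# → comment-appears-immediately-after-submission
```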
</step>

<step name="collect_results">
**Collect root causes from agents:**

Each agent returns with:
```
## ROOT CAUSE FOUND

**Debug Session:** ${DEBUG_DIR}/{slug}.md

**Root Cause:** {specific cause with evidence}

**Evidence Summary:**
- {key finding 1}
- {key finding 2}
- {key finding 3}

**Files Involved:**
- {file1}: {what's wrong}
- {file2}: {related issue}

**Suggested Fix Direction:** {brief hint for plan-phase --gaps}
```

Parse each return to extract:
- root_cause: The diagnosed cause
- files: Files involved
- debug_path: Path to debug session file
- suggested_fix: Hint for gap closure plan

If agent returns `## INVESTIGATION INCONCLUSIVE`:
- root_cause: "Investigation inconclusive - manual review needed"
- Note which issue needs manual attention
- Include remaining possibilities from agent return
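Parsing the return can be as simple as matching the bolded field (a sketch; the field name matches the return format above, the helper name is illustrative):

```bash
# Sketch: extract the "**Root Cause:**" field from an agent's return text
extract_root_cause() {
  printf '%s\n' "$1" | sed -n 's/^\*\*Root Cause:\*\* //p'
}
extract_root_cause '## ROOT CAUSE FOUND

**Root Cause:** useEffect missing commentCount dependency'
# → useEffect missing commentCount dependency
```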
</step>

<step name="update_uat">
**Update UAT.md gaps with diagnosis:**

For each gap in the Gaps section, add artifacts and missing fields:

```yaml
- truth: "Comment appears immediately after submission"
  status: failed
  reason: "User reported: works but doesn't show until I refresh the page"
  severity: major
  test: 2
  root_cause: "useEffect in CommentList.tsx missing commentCount dependency"
  artifacts:
    - path: "src/components/CommentList.tsx"
      issue: "useEffect missing dependency"
  missing:
    - "Add commentCount to useEffect dependency array"
    - "Trigger re-render when new comment added"
  debug_session: .planning/debug/comment-not-refreshing.md
```

Update status in frontmatter to "diagnosed".

Commit the updated UAT.md:
```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" commit "docs({phase_num}): add root causes from diagnosis" --files ".planning/phases/XX-name/{phase_num}-UAT.md"
```
</step>

<step name="report_results">
**Report diagnosis results and hand off:**

Display:
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 GSD ► DIAGNOSIS COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

| Gap (Truth) | Root Cause | Files |
|-------------|------------|-------|
| Comment appears immediately | useEffect missing dependency | CommentList.tsx |
| Reply button positioned correctly | CSS flex order incorrect | ReplyButton.tsx |
| Delete removes comment | API missing auth header | api/comments.ts |

Debug sessions: ${DEBUG_DIR}/

Proceeding to plan fixes...
```

Return to verify-work orchestrator for automatic planning.
Do NOT offer manual next steps - verify-work handles the rest.
</step>

</process>

<context_efficiency>
Agents start with symptoms pre-filled from UAT (no symptom gathering).
Agents only diagnose—plan-phase --gaps handles fixes (no fix application).
</context_efficiency>

<failure_handling>
**Agent fails to find root cause:**
- Mark gap as "needs manual review"
- Continue with other gaps
- Report incomplete diagnosis

**Agent times out:**
- Check DEBUG-{slug}.md for partial progress
- Can resume with /gsd:debug

**All agents fail:**
- Something systemic (permissions, git, etc.)
- Report for manual investigation
- Fall back to plan-phase --gaps without root causes (less precise)
</failure_handling>

<success_criteria>
- [ ] Gaps parsed from UAT.md
- [ ] Debug agents spawned in parallel
- [ ] Root causes collected from all agents
- [ ] UAT.md gaps updated with artifacts and missing
- [ ] Debug sessions saved to ${DEBUG_DIR}/
- [ ] Hand off to verify-work for automatic planning
</success_criteria>
289
get-shit-done/workflows/discovery-phase.md
Normal file
@@ -0,0 +1,289 @@
<purpose>
Execute discovery at the appropriate depth level.
Produces DISCOVERY.md (for Level 2-3) that informs PLAN.md creation.

Called from plan-phase.md's mandatory_discovery step with a depth parameter.

NOTE: For comprehensive ecosystem research ("how do experts build this"), use /gsd:research-phase instead, which produces RESEARCH.md.
</purpose>

<depth_levels>
**This workflow supports three depth levels:**

| Level | Name | Time | Output | When |
| ----- | ------------ | --------- | -------------------------------------------- | ----------------------------------------- |
| 1 | Quick Verify | 2-5 min | No file, proceed with verified knowledge | Single library, confirming current syntax |
| 2 | Standard | 15-30 min | DISCOVERY.md | Choosing between options, new integration |
| 3 | Deep Dive | 1+ hour | Detailed DISCOVERY.md with validation gates | Architectural decisions, novel problems |

**Depth is determined by plan-phase.md before routing here.**
</depth_levels>

<source_hierarchy>
**MANDATORY: Context7 BEFORE WebSearch**

Claude's training data is 6-18 months stale. Always verify.

1. **Context7 MCP FIRST** - Current docs, no hallucination
2. **Official docs** - When Context7 lacks coverage
3. **WebSearch LAST** - For comparisons and trends only

See C:/Users/yaoji/.claude/get-shit-done/templates/discovery.md `<discovery_protocol>` for full protocol.
</source_hierarchy>

<process>

<step name="determine_depth">
Check the depth parameter passed from plan-phase.md:
- `depth=verify` → Level 1 (Quick Verification)
- `depth=standard` → Level 2 (Standard Discovery)
- `depth=deep` → Level 3 (Deep Dive)

Route to appropriate level workflow below.
</step>

<step name="level_1_quick_verify">
**Level 1: Quick Verification (2-5 minutes)**

For: Single known library, confirming syntax/version still correct.

**Process:**

1. Resolve library in Context7:

   ```
   mcp__context7__resolve-library-id with libraryName: "[library]"
   ```

2. Fetch relevant docs:

   ```
   mcp__context7__get-library-docs with:
   - context7CompatibleLibraryID: [from step 1]
   - topic: [specific concern]
   ```

3. Verify:

   - Current version matches expectations
   - API syntax unchanged
   - No breaking changes in recent versions

4. **If verified:** Return to plan-phase.md with confirmation. No DISCOVERY.md needed.

5. **If concerns found:** Escalate to Level 2.

**Output:** Verbal confirmation to proceed, or escalation to Level 2.
</step>

<step name="level_2_standard">
**Level 2: Standard Discovery (15-30 minutes)**

For: Choosing between options, new external integration.

**Process:**

1. **Identify what to discover:**

   - What options exist?
   - What are the key comparison criteria?
   - What's our specific use case?

2. **Context7 for each option:**

   ```
   For each library/framework:
   - mcp__context7__resolve-library-id
   - mcp__context7__get-library-docs (mode: "code" for API, "info" for concepts)
   ```

3. **Official docs** for anything Context7 lacks.

4. **WebSearch** for comparisons:

   - "[option A] vs [option B] {current_year}"
   - "[option] known issues"
   - "[option] with [our stack]"

5. **Cross-verify:** Any WebSearch finding → confirm with Context7/official docs.

6. **Create DISCOVERY.md** using C:/Users/yaoji/.claude/get-shit-done/templates/discovery.md structure:

   - Summary with recommendation
   - Key findings per option
   - Code examples from Context7
   - Confidence level (should be MEDIUM-HIGH for Level 2)

7. Return to plan-phase.md.

**Output:** `.planning/phases/XX-name/DISCOVERY.md`
</step>

<step name="level_3_deep_dive">
**Level 3: Deep Dive (1+ hour)**

For: Architectural decisions, novel problems, high-risk choices.

**Process:**

1. **Scope the discovery** using C:/Users/yaoji/.claude/get-shit-done/templates/discovery.md:

   - Define clear scope
   - Define include/exclude boundaries
   - List specific questions to answer

2. **Exhaustive Context7 research:**

   - All relevant libraries
   - Related patterns and concepts
   - Multiple topics per library if needed

3. **Official documentation deep read:**

   - Architecture guides
   - Best practices sections
   - Migration/upgrade guides
   - Known limitations

4. **WebSearch for ecosystem context:**

   - How others solved similar problems
   - Production experiences
   - Gotchas and anti-patterns
   - Recent changes/announcements

5. **Cross-verify ALL findings:**

   - Every WebSearch claim → verify with authoritative source
   - Mark what's verified vs assumed
   - Flag contradictions

6. **Create comprehensive DISCOVERY.md:**

   - Full structure from C:/Users/yaoji/.claude/get-shit-done/templates/discovery.md
   - Quality report with source attribution
   - Confidence by finding
   - If LOW confidence on any critical finding → add validation checkpoints

7. **Confidence gate:** If overall confidence is LOW, present options before proceeding.

8. Return to plan-phase.md.

**Output:** `.planning/phases/XX-name/DISCOVERY.md` (comprehensive)
</step>

<step name="identify_unknowns">
**For Level 2-3:** Define what we need to learn.

Ask: What do we need to learn before we can plan this phase?

- Technology choices?
- Best practices?
- API patterns?
- Architecture approach?
</step>

<step name="create_discovery_scope">
Use C:/Users/yaoji/.claude/get-shit-done/templates/discovery.md.

Include:

- Clear discovery objective
- Scoped include/exclude lists
- Source preferences (official docs, Context7, current year)
- Output structure for DISCOVERY.md
</step>

<step name="execute_discovery">
Run the discovery:
- Use web search for current info
- Use Context7 MCP for library docs
- Prefer current year sources
- Structure findings per template
</step>

<step name="create_discovery_output">
Write `.planning/phases/XX-name/DISCOVERY.md`:
- Summary with recommendation
- Key findings with sources
- Code examples if applicable
- Metadata (confidence, dependencies, open questions, assumptions)
</step>

<step name="confidence_gate">
After creating DISCOVERY.md, check confidence level.

If confidence is LOW:
Use AskUserQuestion:

- header: "Low Conf."
- question: "Discovery confidence is LOW: [reason]. How would you like to proceed?"
- options:
  - "Dig deeper" - Do more research before planning
  - "Proceed anyway" - Accept uncertainty, plan with caveats
  - "Pause" - I need to think about this

If confidence is MEDIUM:
Inline: "Discovery complete (medium confidence). [brief reason]. Proceed to planning?"

If confidence is HIGH:
Proceed directly, just note: "Discovery complete (high confidence)."
</step>

<step name="open_questions_gate">
If DISCOVERY.md has open_questions:

Present them inline:
"Open questions from discovery:

- [Question 1]
- [Question 2]

These may affect implementation. Acknowledge and proceed? (yes / address first)"

If "address first": Gather user input on questions, update discovery.
</step>

<step name="offer_next">
```
Discovery complete: .planning/phases/XX-name/DISCOVERY.md
Recommendation: [one-liner]
Confidence: [level]

What's next?

1. Discuss phase context (/gsd:discuss-phase [current-phase])
2. Create phase plan (/gsd:plan-phase [current-phase])
3. Refine discovery (dig deeper)
4. Review discovery

```

NOTE: DISCOVERY.md is NOT committed separately. It will be committed with phase completion.
</step>

</process>

<success_criteria>
**Level 1 (Quick Verify):**
- Context7 consulted for library/topic
- Current state verified or concerns escalated
- Verbal confirmation to proceed (no files)

**Level 2 (Standard):**
- Context7 consulted for all options
- WebSearch findings cross-verified
- DISCOVERY.md created with recommendation
- Confidence level MEDIUM or higher
- Ready to inform PLAN.md creation

**Level 3 (Deep Dive):**
- Discovery scope defined
- Context7 exhaustively consulted
- All WebSearch findings verified against authoritative sources
- DISCOVERY.md created with comprehensive analysis
- Quality report with source attribution
- If LOW confidence findings → validation checkpoints defined
- Confidence gate passed
- Ready to inform PLAN.md creation
</success_criteria>
764
get-shit-done/workflows/discuss-phase.md
Normal file
@@ -0,0 +1,764 @@
<purpose>
Extract implementation decisions that downstream agents need. Analyze the phase to identify gray areas, let the user choose what to discuss, then deep-dive each selected area until satisfied.

You are a thinking partner, not an interviewer. The user is the visionary — you are the builder. Your job is to capture decisions that will guide research and planning, not to figure out implementation yourself.
</purpose>

<downstream_awareness>
**CONTEXT.md feeds into:**

1. **gsd-phase-researcher** — Reads CONTEXT.md to know WHAT to research
   - "User wants card-based layout" → researcher investigates card component patterns
   - "Infinite scroll decided" → researcher looks into virtualization libraries

2. **gsd-planner** — Reads CONTEXT.md to know WHAT decisions are locked
   - "Pull-to-refresh on mobile" → planner includes that in task specs
   - "Claude's Discretion: loading skeleton" → planner can decide approach

**Your job:** Capture decisions clearly enough that downstream agents can act on them without asking the user again.

**Not your job:** Figure out HOW to implement. That's what research and planning do with the decisions you capture.
</downstream_awareness>

<philosophy>
**User = founder/visionary. Claude = builder.**

The user knows:
- How they imagine it working
- What it should look/feel like
- What's essential vs nice-to-have
- Specific behaviors or references they have in mind

The user doesn't know (and shouldn't be asked):
- Codebase patterns (researcher reads the code)
- Technical risks (researcher identifies these)
- Implementation approach (planner figures this out)
- Success metrics (inferred from the work)

Ask about vision and implementation choices. Capture decisions for downstream agents.
</philosophy>

<scope_guardrail>
**CRITICAL: No scope creep.**

The phase boundary comes from ROADMAP.md and is FIXED. Discussion clarifies HOW to implement what's scoped, never WHETHER to add new capabilities.

**Allowed (clarifying ambiguity):**
- "How should posts be displayed?" (layout, density, info shown)
- "What happens on empty state?" (within the feature)
- "Pull to refresh or manual?" (behavior choice)

**Not allowed (scope creep):**
- "Should we also add comments?" (new capability)
- "What about search/filtering?" (new capability)
- "Maybe include bookmarking?" (new capability)

**The heuristic:** Does this clarify how we implement what's already in the phase, or does it add a new capability that could be its own phase?

**When user suggests scope creep:**
```
"[Feature X] would be a new capability — that's its own phase.
Want me to note it for the roadmap backlog?

For now, let's focus on [phase domain]."
```

Capture the idea in a "Deferred Ideas" section. Don't lose it, don't act on it.
</scope_guardrail>

<gray_area_identification>
Gray areas are **implementation decisions the user cares about** — things that could go multiple ways and would change the result.

**How to identify gray areas:**

1. **Read the phase goal** from ROADMAP.md
2. **Understand the domain** — What kind of thing is being built?
   - Something users SEE → visual presentation, interactions, states matter
   - Something users CALL → interface contracts, responses, errors matter
   - Something users RUN → invocation, output, behavior modes matter
   - Something users READ → structure, tone, depth, flow matter
   - Something being ORGANIZED → criteria, grouping, handling exceptions matter
3. **Generate phase-specific gray areas** — Not generic categories, but concrete decisions for THIS phase

**Don't use generic category labels** (UI, UX, Behavior). Generate specific gray areas:

```
Phase: "User authentication"
→ Session handling, Error responses, Multi-device policy, Recovery flow

Phase: "Organize photo library"
→ Grouping criteria, Duplicate handling, Naming convention, Folder structure

Phase: "CLI for database backups"
→ Output format, Flag design, Progress reporting, Error recovery

Phase: "API documentation"
→ Structure/navigation, Code examples depth, Versioning approach, Interactive elements
```

**The key question:** What decisions would change the outcome that the user should weigh in on?

**Claude handles these (don't ask):**
- Technical implementation details
- Architecture patterns
- Performance optimization
- Scope (roadmap defines this)
</gray_area_identification>

<answer_validation>
**IMPORTANT: Answer validation** — After every AskUserQuestion call, check if the response is empty or whitespace-only. If so:
1. Retry the question once with the same parameters
2. If still empty, present the options as a plain-text numbered list and ask the user to type their choice number
Never proceed with an empty answer.
</answer_validation>

<process>

**Express path available:** If you already have a PRD or acceptance criteria document, use `/gsd:plan-phase {phase} --prd path/to/prd.md` to skip this discussion and go straight to planning.

<step name="initialize" priority="first">
Phase number from argument (required).

```bash
INIT=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" init phase-op "${PHASE}")
if [[ "$INIT" == @file:* ]]; then INIT=$(cat "${INIT#@file:}"); fi
```

Parse JSON for: `commit_docs`, `phase_found`, `phase_dir`, `phase_number`, `phase_name`, `phase_slug`, `padded_phase`, `has_research`, `has_context`, `has_plans`, `has_verification`, `plan_count`, `roadmap_exists`, `planning_exists`.

**If `phase_found` is false:**
```
Phase [X] not found in roadmap.

Use /gsd:progress to see available phases.
```
Exit workflow.

**If `phase_found` is true:** Continue to check_existing.

**Auto mode** — If `--auto` is present in ARGUMENTS:
- In `check_existing`: auto-select "Skip" (if context exists) or continue without prompting (if no context/plans)
- In `present_gray_areas`: auto-select ALL gray areas without asking the user
- In `discuss_areas`: for each discussion question, choose the recommended option (first option, or the one marked "recommended") without using AskUserQuestion
- Log each auto-selected choice inline so the user can review decisions in the context file
- After discussion completes, auto-advance to plan-phase (existing behavior)
</step>

<step name="check_existing">
Check if CONTEXT.md already exists using `has_context` from init.

```bash
ls ${phase_dir}/*-CONTEXT.md 2>/dev/null
```

**If exists:**

**If `--auto`:** Auto-select "Update it" — load existing context and continue to analyze_phase. Log: `[auto] Context exists — updating with auto-selected decisions.`

**Otherwise:** Use AskUserQuestion:
- header: "Context"
- question: "Phase [X] already has context. What do you want to do?"
- options:
  - "Update it" — Review and revise existing context
  - "View it" — Show me what's there
  - "Skip" — Use existing context as-is

If "Update": Load existing, continue to analyze_phase
If "View": Display CONTEXT.md, then offer update/skip
If "Skip": Exit workflow

**If doesn't exist:**

Check `has_plans` and `plan_count` from init. **If `has_plans` is true:**

**If `--auto`:** Auto-select "Continue and replan after". Log: `[auto] Plans exist — continuing with context capture, will replan after.`

**Otherwise:** Use AskUserQuestion:
- header: "Plans exist"
- question: "Phase [X] already has {plan_count} plan(s) created without user context. Your decisions here won't affect existing plans unless you replan."
- options:
  - "Continue and replan after" — Capture context, then run /gsd:plan-phase {X} to replan
  - "View existing plans" — Show plans before deciding
  - "Cancel" — Skip discuss-phase

If "Continue and replan after": Continue to analyze_phase.
If "View existing plans": Display plan files, then offer "Continue" / "Cancel".
If "Cancel": Exit workflow.

**If `has_plans` is false:** Continue to load_prior_context.
</step>
|
||||
|
||||
<step name="load_prior_context">
Read project-level and prior phase context to avoid re-asking decided questions and maintain consistency.

**Step 1: Read project-level files**
```bash
# Core project files
cat .planning/PROJECT.md 2>/dev/null
cat .planning/REQUIREMENTS.md 2>/dev/null
cat .planning/STATE.md 2>/dev/null
```

Extract from these:
- **PROJECT.md** — Vision, principles, non-negotiables, user preferences
- **REQUIREMENTS.md** — Acceptance criteria, constraints, must-haves vs nice-to-haves
- **STATE.md** — Current progress, any flags or session notes

**Step 2: Read all prior CONTEXT.md files**
```bash
# Find all CONTEXT.md files from phases before current
find .planning/phases -name "*-CONTEXT.md" 2>/dev/null | sort
```

For each CONTEXT.md where phase number < current phase:
- Read the `<decisions>` section — these are locked preferences
- Read `<specifics>` — particular references or "I want it like X" moments
- Note any patterns (e.g., "user consistently prefers minimal UI", "user rejected single-key shortcuts")
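
The "phases before current" filter can be sketched in shell. `phase_of` is a hypothetical helper, not a gsd-tools command — it assumes the `NN-CONTEXT.md` naming convention shown above:

```bash
# Hypothetical helper: pull the numeric phase out of a CONTEXT.md path,
# e.g. .planning/phases/03-auth/03-CONTEXT.md -> 3
phase_of() { basename "$1" | sed 's/^0*\([0-9][0-9]*\)-.*/\1/'; }

CURRENT_PHASE=5
find .planning/phases -name "*-CONTEXT.md" 2>/dev/null | sort | while read -r f; do
  # Only read context files from phases strictly before the current one
  [ "$(phase_of "$f")" -lt "$CURRENT_PHASE" ] && cat "$f"
done
```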

**Step 3: Build internal `<prior_decisions>` context**

Structure the extracted information:
```
<prior_decisions>
## Project-Level
- [Key principle or constraint from PROJECT.md]
- [Requirement that affects this phase from REQUIREMENTS.md]

## From Prior Phases
### Phase N: [Name]
- [Decision that may be relevant to current phase]
- [Preference that establishes a pattern]

### Phase M: [Name]
- [Another relevant decision]
</prior_decisions>
```

**Usage in subsequent steps:**
- `analyze_phase`: Skip gray areas already decided in prior phases
- `present_gray_areas`: Annotate options with prior decisions ("You chose X in Phase 5")
- `discuss_areas`: Pre-fill answers or flag conflicts ("This contradicts Phase 3 — same here or different?")

**If no prior context exists:** Continue without it — this is expected for early phases.
</step>

<step name="scout_codebase">
Lightweight scan of existing code to inform gray area identification and discussion. Uses roughly 10% of the context window — acceptable for an interactive session.

**Step 1: Check for existing codebase maps**
```bash
ls .planning/codebase/*.md 2>/dev/null
```

**If codebase maps exist:** Read the most relevant ones (CONVENTIONS.md, STRUCTURE.md, STACK.md based on phase type). Extract:
- Reusable components/hooks/utilities
- Established patterns (state management, styling, data fetching)
- Integration points (where new code would connect)

Skip to Step 3 below.

**Step 2: If no codebase maps, do a targeted grep**

Extract key terms from the phase goal (e.g., "feed" → "post", "card", "list"; "auth" → "login", "session", "token").

```bash
# Find files related to phase goal terms
grep -rl "{term1}\|{term2}" src/ app/ --include="*.ts" --include="*.tsx" --include="*.js" --include="*.jsx" 2>/dev/null | head -10

# Find existing components/hooks
ls src/components/ 2>/dev/null
ls src/hooks/ 2>/dev/null
ls src/lib/ src/utils/ 2>/dev/null
```

Read the 3-5 most relevant files to understand existing patterns.

**Step 3: Build internal codebase_context**

From the scan, identify:
- **Reusable assets** — existing components, hooks, utilities that could be used in this phase
- **Established patterns** — how the codebase does state management, styling, data fetching
- **Integration points** — where new code would connect (routes, nav, providers)
- **Creative options** — approaches the existing architecture enables or constrains

Store as internal `<codebase_context>` for use in analyze_phase and present_gray_areas. This is NOT written to a file — it's used within this session only.
</step>

<step name="analyze_phase">
Analyze the phase to identify gray areas worth discussing. **Use both `prior_decisions` and `codebase_context` to ground the analysis.**

**Read the phase description from ROADMAP.md and determine:**

1. **Domain boundary** — What capability is this phase delivering? State it clearly.

1b. **Initialize canonical refs accumulator** — Start building the `<canonical_refs>` list for CONTEXT.md. This accumulates throughout the entire discussion, not just this step.

**Source 1 (now):** Copy `Canonical refs:` from ROADMAP.md for this phase. Expand each to a full relative path.
**Source 2 (now):** Check REQUIREMENTS.md and PROJECT.md for any specs/ADRs referenced for this phase.
**Source 3 (scout_codebase):** If existing code references docs (e.g., comments citing ADRs), add those.
**Source 4 (discuss_areas):** When the user says "read X", "check Y", or references any doc/spec/ADR during discussion — add it immediately. These are often the MOST important refs because they represent docs the user specifically wants followed.

This list is MANDATORY in CONTEXT.md. Every ref must have a full relative path so downstream agents can read it directly. If no external docs exist, note that explicitly.

2. **Check prior decisions** — Before generating gray areas, check if any were already decided:
- Scan `<prior_decisions>` for relevant choices (e.g., "Ctrl+C only, no single-key shortcuts")
- These are **pre-answered** — don't re-ask unless this phase has conflicting needs
- Note applicable prior decisions for use in presentation

3. **Gray areas by category** — For each relevant category (UI, UX, Behavior, Empty States, Content), identify 1-2 specific ambiguities that would change implementation. **Annotate with code context where relevant** (e.g., "You already have a Card component" or "No existing pattern for this").

4. **Skip assessment** — If no meaningful gray areas exist (pure infrastructure, clear-cut implementation, or all already decided in prior phases), the phase may not need discussion.

**Output your analysis internally, then present to user.**

Example analysis for "Post Feed" phase (with code and prior context):
```
Domain: Displaying posts from followed users
Existing: Card component (src/components/ui/Card.tsx), useInfiniteQuery hook, Tailwind CSS
Prior decisions: "Minimal UI preferred" (Phase 2), "No pagination — always infinite scroll" (Phase 4)
Gray areas:
- UI: Layout style (cards vs timeline vs grid) — Card component exists with shadow/rounded variants
- UI: Information density (full posts vs previews) — no existing density patterns
- Behavior: Loading pattern — ALREADY DECIDED: infinite scroll (Phase 4)
- Empty State: What shows when no posts exist — EmptyState component exists in ui/
- Content: What metadata displays (time, author, reactions count)
```
</step>

<step name="present_gray_areas">
Present the domain boundary, prior decisions, and gray areas to the user.

**First, state the boundary and any prior decisions that apply:**
```
Phase [X]: [Name]
Domain: [What this phase delivers — from your analysis]

We'll clarify HOW to implement this.
(New capabilities belong in other phases.)

[If prior decisions apply:]
**Carrying forward from earlier phases:**
- [Decision from Phase N that applies here]
- [Decision from Phase M that applies here]
```

**If `--auto`:** Auto-select ALL gray areas. Log: `[auto] Selected all gray areas: [list area names].` Skip the AskUserQuestion below and continue directly to discuss_areas with all areas selected.

**Otherwise, use AskUserQuestion (multiSelect: true):**
- header: "Discuss"
- question: "Which areas do you want to discuss for [phase name]?"
- options: Generate 3-4 phase-specific gray areas, each with:
  - "[Specific area]" (label) — concrete, not generic
  - [1-2 questions this covers + code context annotation] (description)
  - **Highlight the recommended choice with a brief explanation why**

**Prior decision annotations:** When a gray area was already decided in a prior phase, annotate it:
```
☐ Exit shortcuts — How should users quit?
  (You decided "Ctrl+C only, no single-key shortcuts" in Phase 5 — revisit or keep?)
```

**Code context annotations:** When the scout found relevant existing code, annotate the gray area description:
```
☐ Layout style — Cards vs list vs timeline?
  (You already have a Card component with shadow/rounded variants. Reusing it keeps the app consistent.)
```

**Combining both:** When both prior decisions and code context apply:
```
☐ Loading behavior — Infinite scroll or pagination?
  (You chose infinite scroll in Phase 4. useInfiniteQuery hook already set up.)
```

**Do NOT include a "skip" or "you decide" option.** The user ran this command to discuss — give them real choices.

**Examples by domain (with code context):**

For "Post Feed" (visual feature):
```
☐ Layout style — Cards vs list vs timeline? (Card component exists with variants)
☐ Loading behavior — Infinite scroll or pagination? (useInfiniteQuery hook available)
☐ Content ordering — Chronological, algorithmic, or user choice?
☐ Post metadata — What info per post? Timestamps, reactions, author?
```

For "Database backup CLI" (command-line tool):
```
☐ Output format — JSON, table, or plain text? Verbosity levels?
☐ Flag design — Short flags, long flags, or both? Required vs optional?
☐ Progress reporting — Silent, progress bar, or verbose logging?
☐ Error recovery — Fail fast, retry, or prompt for action?
```

For "Organize photo library" (organization task):
```
☐ Grouping criteria — By date, location, faces, or events?
☐ Duplicate handling — Keep best, keep all, or prompt each time?
☐ Naming convention — Original names, dates, or descriptive?
☐ Folder structure — Flat, nested by year, or by category?
```

Continue to discuss_areas with the selected areas.
</step>

<step name="discuss_areas">
For each selected area, conduct a focused discussion loop.

**Batch mode support:** Parse the optional `--batch` flag from `$ARGUMENTS`.
- Accept `--batch`, `--batch=N`, or `--batch N`
- Default to 4 questions per batch when no number is provided
- Clamp explicit sizes to 2-5 so a batch stays answerable
- If `--batch` is absent, keep the existing one-question-at-a-time flow
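
The parsing rules above can be sketched as a shell helper. `resolve_batch_size` is illustrative and not part of gsd-tools; it assumes the raw invocation arguments are passed in as one string:

```bash
# Illustrative only: resolve the batch size from the raw arguments string.
resolve_batch_size() {
  local args="$1" size="" re='--batch[= ]([0-9]+)'
  if [[ "$args" =~ $re ]]; then
    size="${BASH_REMATCH[1]}"            # --batch=N or --batch N
  elif [[ "$args" =~ --batch ]]; then
    size=4                               # bare --batch: default to 4
  fi
  if [[ -n "$size" ]]; then
    (( size < 2 )) && size=2             # clamp so a batch stays answerable
    (( size > 5 )) && size=5
  fi
  echo "$size"                           # empty = one-question-at-a-time flow
}
```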

**Philosophy:** stay adaptive, but let the user choose the pacing.
- Default mode: 4 single-question turns, then check whether to continue
- `--batch` mode: 1 grouped turn with 2-5 numbered questions, then check whether to continue

Each answer (or answer set, in batch mode) should reveal the next question or next batch.

**Auto mode (`--auto`):** For each area, Claude selects the recommended option (first option, or the one explicitly marked "recommended") for every question without using AskUserQuestion. Log each auto-selected choice:
```
[auto] [Area] — Q: "[question text]" → Selected: "[chosen option]" (recommended default)
```
After all areas are auto-resolved, skip the "Explore more gray areas" prompt and proceed directly to write_context.

**Interactive mode (no `--auto`):**

**For each area:**

1. **Announce the area:**
   ```
   Let's talk about [Area].
   ```

2. **Ask questions using the selected pacing:**

   **Default (no `--batch`): Ask 4 questions using AskUserQuestion**
   - header: "[Area]" (max 12 chars — abbreviate if needed)
   - question: Specific decision for this area
   - options: 2-3 concrete choices (AskUserQuestion adds "Other" automatically), with the recommended choice highlighted and a brief explanation why
   - **Annotate options with code context** when relevant:
     ```
     "How should posts be displayed?"
     - Cards (reuses existing Card component — consistent with Messages)
     - List (simpler, would be a new pattern)
     - Timeline (needs new Timeline component — none exists yet)
     ```
   - Include "You decide" as an option when reasonable — it captures Claude discretion
   - **Context7 for library choices:** When a gray area involves library selection (e.g., "magic links" → query next-auth docs) or API approach decisions, use `mcp__context7__*` tools to fetch current documentation and inform the options. Don't use Context7 for every question — only when library-specific knowledge improves the options.

   **Batch mode (`--batch`): Ask 2-5 numbered questions in one plain-text turn**
   - Group closely related questions for the current area into a single message
   - Keep each question concrete and answerable in one reply
   - When options are helpful, include short inline choices per question rather than a separate AskUserQuestion for every item
   - After the user replies, reflect back the captured decisions, note any unanswered items, and ask only the minimum follow-up needed before moving on
   - Preserve adaptiveness between batches: use the full set of answers to decide the next batch or whether the area is sufficiently clear

3. **After the current set of questions, check:**
   - header: "[Area]" (max 12 chars)
   - question: "More questions about [area], or move to next? (Remaining: [list other unvisited areas])"
   - options: "More questions" / "Next area"

When building the question text, list the remaining unvisited areas so the user knows what's ahead. For example: "More questions about Layout, or move to next? (Remaining: Loading behavior, Content ordering)"

If "More questions" → ask another 4 single questions, or another 2-5 question batch when `--batch` is active, then check again
If "Next area" → proceed to the next selected area
If "Other" (free text) → interpret intent: continuation phrases ("chat more", "keep going", "yes", "more") map to "More questions"; advancement phrases ("done", "move on", "next", "skip") map to "Next area". If ambiguous, ask: "Continue with more questions about [area], or move to the next area?"
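
The phrase mapping above can be sketched as a tiny classifier. This is illustrative only — in practice the agent interprets intent; the script just mirrors the phrase lists:

```bash
# Illustrative only: map a free-text "Other" reply onto the two outcomes.
classify_reply() {
  local reply
  reply=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')  # case-insensitive match
  case "$reply" in
    *"chat more"*|*"keep going"*|*yes*|*more*) echo "more questions" ;;
    *done*|*"move on"*|*next*|*skip*)          echo "next area" ;;
    *)                                         echo "ambiguous" ;;
  esac
}
```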

4. **After all initially-selected areas complete:**
   - Summarize what was captured from the discussion so far
   - AskUserQuestion:
     - header: "Done"
     - question: "We've discussed [list areas]. Which gray areas remain unclear?"
     - options: "Explore more gray areas" / "I'm ready for context"
   - If "Explore more gray areas":
     - Identify 2-4 additional gray areas based on what was learned
     - Return to present_gray_areas logic with these new areas
     - Loop: discuss new areas, then prompt again
   - If "I'm ready for context": Proceed to write_context

**Canonical ref accumulation during discussion:**
When the user references a doc, spec, or ADR during any answer — e.g., "read adr-014", "check the MCP spec", "per browse-spec.md" — immediately:
1. Read the referenced doc (or confirm it exists)
2. Add it to the canonical refs accumulator with full relative path
3. Use what you learned from the doc to inform subsequent questions

These user-referenced docs are often MORE important than ROADMAP.md refs because they represent docs the user specifically wants downstream agents to follow. Never drop them.
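
A minimal accumulator sketch (hypothetical shell — the real accumulator lives in the agent's working context, not a script):

```bash
# Hypothetical: record a referenced doc once, warning when the path doesn't resolve.
CANONICAL_REFS=""
add_ref() {
  local path="$1"
  [ -f "$path" ] || { echo "warn: referenced doc not found: $path" >&2; return 1; }
  case " $CANONICAL_REFS " in
    *" $path "*) ;;                               # already recorded — keep once
    *) CANONICAL_REFS="$CANONICAL_REFS $path" ;;  # accumulate
  esac
}
```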

**Question design:**
- Options should be concrete, not abstract ("Cards" not "Option A")
- Each answer should inform the next question or next batch
- If the user picks "Other" to provide freeform input (e.g., "let me describe it", "something else", or an open-ended reply), ask your follow-up as plain text — NOT another AskUserQuestion. Wait for them to type at the normal prompt, then reflect their input back and confirm before resuming AskUserQuestion or the next numbered batch.

**Scope creep handling:**
If the user mentions something outside the phase domain:
```
"[Feature] sounds like a new capability — that belongs in its own phase.
I'll note it as a deferred idea.

Back to [current area]: [return to current question]"
```

Track deferred ideas internally.
</step>

<step name="write_context">
Create CONTEXT.md capturing the decisions made.

**Find or create the phase directory:**

Use values from init: `phase_dir`, `phase_slug`, `padded_phase`.

If `phase_dir` is null (phase exists in roadmap but no directory):
```bash
mkdir -p ".planning/phases/${padded_phase}-${phase_slug}"
```
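
If those init values are ever missing, they can be derived along these lines. This is a sketch with made-up example values; gsd-tools also ships a `generate-slug` command for the slug part:

```bash
# Sketch: derive padded_phase and phase_slug from a phase number and name.
PHASE=7               # example value
PHASE_NAME="Post Feed"
padded_phase=$(printf '%02d' "$PHASE")   # 7 -> 07
# lowercase, squeeze non-alphanumerics to dashes, drop a trailing dash
phase_slug=$(printf '%s' "$PHASE_NAME" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-' | sed 's/-$//')
echo ".planning/phases/${padded_phase}-${phase_slug}"
```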

**File location:** `${phase_dir}/${padded_phase}-CONTEXT.md`

**Structure the content by what was discussed:**

```markdown
# Phase [X]: [Name] - Context

**Gathered:** [date]
**Status:** Ready for planning

<domain>
## Phase Boundary

[Clear statement of what this phase delivers — the scope anchor]

</domain>

<decisions>
## Implementation Decisions

### [Category 1 that was discussed]
- [Decision or preference captured]
- [Another decision if applicable]

### [Category 2 that was discussed]
- [Decision or preference captured]

### Claude's Discretion
[Areas where user said "you decide" — note that Claude has flexibility here]

</decisions>

<canonical_refs>
## Canonical References

**Downstream agents MUST read these before planning or implementing.**

[MANDATORY section. Write the FULL accumulated canonical refs list here.
Sources: ROADMAP.md refs + REQUIREMENTS.md refs + user-referenced docs during
discussion + any docs discovered during codebase scout. Group by topic area.
Every entry needs a full relative path — not just a name.]

### [Topic area 1]
- `path/to/adr-or-spec.md` — [What it decides/defines that's relevant]
- `path/to/doc.md` §N — [Specific section reference]

### [Topic area 2]
- `path/to/feature-doc.md` — [What this doc defines]

[If no external specs: "No external specs — requirements fully captured in decisions above"]

</canonical_refs>

<code_context>
## Existing Code Insights

### Reusable Assets
- [Component/hook/utility]: [How it could be used in this phase]

### Established Patterns
- [Pattern]: [How it constrains/enables this phase]

### Integration Points
- [Where new code connects to existing system]

</code_context>

<specifics>
## Specific Ideas

[Any particular references, examples, or "I want it like X" moments from discussion]

[If none: "No specific requirements — open to standard approaches"]

</specifics>

<deferred>
## Deferred Ideas

[Ideas that came up but belong in other phases. Don't lose them.]

[If none: "None — discussion stayed within phase scope"]

</deferred>

---

*Phase: XX-name*
*Context gathered: [date]*
```

Write the file.
</step>

<step name="confirm_creation">
Present a summary and next steps:

```
Created: .planning/phases/${PADDED_PHASE}-${SLUG}/${PADDED_PHASE}-CONTEXT.md

## Decisions Captured

### [Category]
- [Key decision]

### [Category]
- [Key decision]

[If deferred ideas exist:]
## Noted for Later
- [Deferred idea] — future phase

---

## ▶ Next Up

**Phase ${PHASE}: [Name]** — [Goal from ROADMAP.md]

`/gsd:plan-phase ${PHASE}`

<sub>`/clear` first → fresh context window</sub>

---

**Also available:**
- `/gsd:plan-phase ${PHASE} --skip-research` — plan without research
- `/gsd:ui-phase ${PHASE}` — generate UI design contract before planning (if phase has frontend work)
- Review/edit CONTEXT.md before continuing

---
```
</step>

<step name="git_commit">
Commit phase context (uses `commit_docs` from init internally):

```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" commit "docs(${padded_phase}): capture phase context" --files "${phase_dir}/${padded_phase}-CONTEXT.md"
```

Confirm: "Committed: docs(${padded_phase}): capture phase context"
</step>

<step name="update_state">
Update STATE.md with session info:

```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" state record-session \
  --stopped-at "Phase ${PHASE} context gathered" \
  --resume-file "${phase_dir}/${padded_phase}-CONTEXT.md"
```

Commit STATE.md:

```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" commit "docs(state): record phase ${PHASE} context session" --files .planning/STATE.md
```
</step>

<step name="auto_advance">
Check for the auto-advance trigger:

1. Parse the `--auto` flag from $ARGUMENTS
2. **Sync chain flag with intent** — if the user invoked manually (no `--auto`), clear the ephemeral chain flag from any previous interrupted `--auto` chain. This does NOT touch `workflow.auto_advance` (the user's persistent settings preference):
   ```bash
   if [[ ! "$ARGUMENTS" =~ --auto ]]; then
     node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" config-set workflow._auto_chain_active false 2>/dev/null
   fi
   ```
3. Read both the chain flag and the user preference:
   ```bash
   AUTO_CHAIN=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" config-get workflow._auto_chain_active 2>/dev/null || echo "false")
   AUTO_CFG=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" config-get workflow.auto_advance 2>/dev/null || echo "false")
   ```
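
The decision these flags feed can be sketched as a single predicate. `should_advance` is illustrative, not part of gsd-tools:

```bash
# Illustrative: advance when --auto was passed, a chain is active, or the
# user's persistent auto_advance preference is enabled.
should_advance() {
  local args="$1" chain="$2" cfg="$3"
  [[ "$args" =~ --auto ]] || [ "$chain" = "true" ] || [ "$cfg" = "true" ]
}
```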

**If the `--auto` flag is present AND `AUTO_CHAIN` is not true:** Persist the chain flag to config (handles direct `--auto` usage without new-project):
```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" config-set workflow._auto_chain_active true
```

**If the `--auto` flag is present OR `AUTO_CHAIN` is true OR `AUTO_CFG` is true:**

Display banner:
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
GSD ► AUTO-ADVANCING TO PLAN
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Context captured. Launching plan-phase...
```

Launch plan-phase using the Skill tool to avoid nested Task sessions (which cause runtime freezes due to deep agent nesting — see #686):
```
Skill(skill="gsd:plan-phase", args="${PHASE} --auto")
```

This keeps the auto-advance chain flat — discuss, plan, and execute all run at the same nesting level rather than spawning increasingly deep Task agents.

**Handle the plan-phase return:**
- **PHASE COMPLETE** → Full chain succeeded. Display:
  ```
  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  GSD ► PHASE ${PHASE} COMPLETE
  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

  Auto-advance pipeline finished: discuss → plan → execute

  Next: /gsd:discuss-phase ${NEXT_PHASE} --auto
  <sub>/clear first → fresh context window</sub>
  ```
- **PLANNING COMPLETE** → Planning done, execution didn't complete:
  ```
  Auto-advance partial: Planning complete, execution did not finish.
  Continue: /gsd:execute-phase ${PHASE}
  ```
- **PLANNING INCONCLUSIVE / CHECKPOINT** → Stop the chain:
  ```
  Auto-advance stopped: Planning needs input.
  Continue: /gsd:plan-phase ${PHASE}
  ```
- **GAPS FOUND** → Stop the chain:
  ```
  Auto-advance stopped: Gaps found during execution.
  Continue: /gsd:plan-phase ${PHASE} --gaps
  ```

**If neither `--auto` nor the config is enabled:**
Route to the `confirm_creation` step (existing behavior — show manual next steps).
</step>

</process>

<success_criteria>
- Phase validated against roadmap
- Prior context loaded (PROJECT.md, REQUIREMENTS.md, STATE.md, prior CONTEXT.md files)
- Already-decided questions not re-asked (carried forward from prior phases)
- Codebase scouted for reusable assets, patterns, and integration points
- Gray areas identified through intelligent analysis with code and prior decision annotations
- User selected which areas to discuss
- Each selected area explored until the user is satisfied (with code-informed and prior-decision-informed options)
- Scope creep redirected to deferred ideas
- CONTEXT.md captures actual decisions, not vague vision
- CONTEXT.md includes a canonical_refs section with full file paths to every spec/ADR/doc downstream agents need (MANDATORY — never omit)
- CONTEXT.md includes a code_context section with reusable assets and patterns
- Deferred ideas preserved for future phases
- STATE.md updated with session info
- User knows next steps
</success_criteria>
104
get-shit-done/workflows/do.md
Normal file
@@ -0,0 +1,104 @@
<purpose>
Analyze freeform text from the user and route to the most appropriate GSD command. This is a dispatcher — it never does the work itself. Match user intent to the best command, confirm the routing, and hand off.
</purpose>

<required_reading>
Read all files referenced by the invoking prompt's execution_context before starting.
</required_reading>

<process>

<step name="validate">
**Check for input.**

If `$ARGUMENTS` is empty, ask via AskUserQuestion:

```
What would you like to do? Describe the task, bug, or idea and I'll route it to the right GSD command.
```

Wait for the response before continuing.
</step>

<step name="check_project">
**Check if a project exists.**

```bash
INIT=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" state load 2>/dev/null)
```

Track whether `.planning/` exists — some routes require it, others don't.
</step>

<step name="route">
**Match intent to a command.**

Evaluate `$ARGUMENTS` against these routing rules. Apply the **first matching** rule:

| If the text describes... | Route to | Why |
|--------------------------|----------|-----|
| Starting a new project, "set up", "initialize" | `/gsd:new-project` | Needs full project initialization |
| Mapping or analyzing an existing codebase | `/gsd:map-codebase` | Codebase discovery |
| A bug, error, crash, failure, or something broken | `/gsd:debug` | Needs systematic investigation |
| Exploring, researching, comparing, or "how does X work" | `/gsd:research-phase` | Domain research before planning |
| Discussing vision, "how should X look", brainstorming | `/gsd:discuss-phase` | Needs context gathering |
| A complex task: refactoring, migration, multi-file architecture, system redesign | `/gsd:add-phase` | Needs a full phase with plan/build cycle |
| Planning a specific phase or "plan phase N" | `/gsd:plan-phase` | Direct planning request |
| Executing a phase or "build phase N", "run phase N" | `/gsd:execute-phase` | Direct execution request |
| Running all remaining phases automatically | `/gsd:autonomous` | Full autonomous execution |
| A review or quality concern about existing work | `/gsd:verify-work` | Needs verification |
| Checking progress, status, "where am I" | `/gsd:progress` | Status check |
| Resuming work, "pick up where I left off" | `/gsd:resume-work` | Session restoration |
| A note, idea, or "remember to..." | `/gsd:add-todo` | Capture for later |
| Adding tests, "write tests", "test coverage" | `/gsd:add-tests` | Test generation |
| Completing a milestone, shipping, releasing | `/gsd:complete-milestone` | Milestone lifecycle |
| A specific, actionable, small task (add feature, fix typo, update config) | `/gsd:quick` | Self-contained, single executor |

**Requires `.planning/` directory:** All routes except `/gsd:new-project`, `/gsd:map-codebase`, `/gsd:help`, and `/gsd:join-discord`. If the project doesn't exist and the route requires it, suggest `/gsd:new-project` first.
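
The project-existence gate reduces to a small allowlist check. A sketch (the function name is illustrative):

```bash
# Sketch: only a few routes may run without a .planning/ directory.
needs_planning_dir() {
  case "$1" in
    /gsd:new-project|/gsd:map-codebase|/gsd:help|/gsd:join-discord) return 1 ;;
    *) return 0 ;;
  esac
}
```

Callers would pair this with a directory test, e.g. suggest `/gsd:new-project` when `needs_planning_dir "$ROUTE"` succeeds but `.planning/` is absent.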

**Ambiguity handling:** If the text could reasonably match multiple routes, ask the user via AskUserQuestion with the top 2-3 options. For example:

```
"Refactor the authentication system" could be:
1. /gsd:add-phase — Full planning cycle (recommended for multi-file refactors)
2. /gsd:quick — Quick execution (if scope is small and clear)

Which approach fits better?
```
</step>

<step name="display">
**Show the routing decision.**

```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
GSD ► ROUTING
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

**Input:** {first 80 chars of $ARGUMENTS}
**Routing to:** {chosen command}
**Reason:** {one-line explanation}
```
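
The truncated input line maps to plain bash parameter expansion (the `ARGUMENTS` value here is made up for illustration):

```bash
ARGUMENTS="Refactor the authentication system to use short-lived tokens and rotate refresh keys on login"
input="${ARGUMENTS:0:80}"   # first 80 chars; shorter strings pass through unchanged
echo "**Input:** $input"
```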
</step>

<step name="dispatch">
**Invoke the chosen command.**

Run the selected `/gsd:*` command, passing `$ARGUMENTS` as args.

If the chosen command expects a phase number and one wasn't provided in the text, extract it from context or ask via AskUserQuestion.

After invoking the command, stop. The dispatched command handles everything from here.
</step>

</process>

<success_criteria>
- [ ] Input validated (not empty)
- [ ] Intent matched to exactly one GSD command
- [ ] Ambiguity resolved via user question (if needed)
- [ ] Project existence checked for routes that require it
- [ ] Routing decision displayed before dispatch
- [ ] Command invoked with appropriate arguments
- [ ] No work done directly — dispatcher only
</success_criteria>

670 get-shit-done/workflows/execute-phase.md Normal file
@@ -0,0 +1,670 @@
<purpose>
Execute all plans in a phase using wave-based parallel execution. Orchestrator stays lean — delegates plan execution to subagents.
</purpose>

<core_principle>
Orchestrator coordinates, not executes. Each subagent loads the full execute-plan context. Orchestrator: discover plans → analyze deps → group waves → spawn agents → handle checkpoints → collect results.
</core_principle>

<runtime_compatibility>
**Subagent spawning is runtime-specific:**
- **Claude Code:** Uses `Task(subagent_type="gsd-executor", ...)` — blocks until complete, returns result
- **Copilot:** Uses `@gsd-executor` agent reference — if subagent spawning hangs or fails to return, fall back to **sequential inline execution**: read and follow execute-plan.md directly for each plan instead of spawning parallel agents. This is slower but reliable.
- **Other runtimes (Gemini, Codex, OpenCode):** If Task/subagent API is unavailable, use sequential inline execution as the fallback.

**Fallback rule:** If a spawned agent completes its work (commits visible, SUMMARY.md exists) but the orchestrator never receives the completion signal, treat it as successful based on spot-checks and continue to the next wave/plan.
</runtime_compatibility>

<required_reading>
Read STATE.md before any operation to load project context.
</required_reading>

<available_agent_types>
These are the valid GSD subagent types registered in .claude/agents/ (or equivalent for your runtime). Always use the exact name from this list — do not fall back to 'general-purpose' or other built-in types:

- gsd-executor — Executes plan tasks, commits, creates SUMMARY.md
- gsd-verifier — Verifies phase completion, checks quality gates
- gsd-planner — Creates detailed plans from phase scope
- gsd-phase-researcher — Researches technical approaches for a phase
- gsd-plan-checker — Reviews plan quality before execution
- gsd-debugger — Diagnoses and fixes issues
- gsd-codebase-mapper — Maps project structure and dependencies
- gsd-integration-checker — Checks cross-phase integration
- gsd-nyquist-auditor — Validates verification coverage
- gsd-ui-researcher — Researches UI/UX approaches
- gsd-ui-checker — Reviews UI implementation quality
- gsd-ui-auditor — Audits UI against design requirements
</available_agent_types>

<process>

<step name="initialize" priority="first">
Load all context in one call:

```bash
INIT=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" init execute-phase "${PHASE_ARG}")
if [[ "$INIT" == @file:* ]]; then INIT=$(cat "${INIT#@file:}"); fi
```

Parse JSON for: `executor_model`, `verifier_model`, `commit_docs`, `parallelization`, `branching_strategy`, `branch_name`, `phase_found`, `phase_dir`, `phase_number`, `phase_name`, `phase_slug`, `plans`, `incomplete_plans`, `plan_count`, `incomplete_count`, `state_exists`, `roadmap_exists`, `phase_req_ids`.

**If `phase_found` is false:** Error — phase directory not found.
**If `plan_count` is 0:** Error — no plans found in phase.
**If `state_exists` is false but `.planning/` exists:** Offer reconstruct or continue.
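
The three checks above can be sketched as a small guard. This is a hypothetical helper, not part of gsd-tools; field names are taken from the init payload list above:

```javascript
// Hypothetical guard over the parsed init payload (field names from the list above).
function validateInit(init) {
  if (!init.phase_found) {
    throw new Error('Phase directory not found');
  }
  if (init.plan_count === 0) {
    throw new Error(`No plans found in ${init.phase_dir}`);
  }
  if (!init.state_exists) {
    // Not fatal: offer to reconstruct STATE.md or continue without it.
    console.warn('STATE.md missing: offer reconstruct or continue');
  }
  return init;
}
```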

When `parallelization` is false, plans within a wave execute sequentially.

**REQUIRED — Sync chain flag with intent.** If user invoked manually (no `--auto`), clear the ephemeral chain flag from any previous interrupted `--auto` chain. This prevents stale `_auto_chain_active: true` from causing unwanted auto-advance. This does NOT touch `workflow.auto_advance` (the user's persistent settings preference). You MUST execute this bash block before any config reads:
```bash
# REQUIRED: prevents stale auto-chain from previous --auto runs
if [[ ! "$ARGUMENTS" =~ --auto ]]; then
  node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" config-set workflow._auto_chain_active false 2>/dev/null
fi
```
</step>

<step name="check_interactive_mode">
**Parse `--interactive` flag from $ARGUMENTS.**

**If `--interactive` flag present:** Switch to interactive execution mode.

Interactive mode executes plans sequentially **inline** (no subagent spawning) with user checkpoints between tasks. The user can review, modify, or redirect work at any point.

**Interactive execution flow:**

1. Load plan inventory as normal (discover_and_group_plans)
2. For each plan (sequentially, ignoring wave grouping):

   a. **Present the plan to the user:**
   ```
   ## Plan {plan_id}: {plan_name}

   Objective: {from plan file}
   Tasks: {task_count}

   Options:
   - Execute (proceed with all tasks)
   - Review first (show task breakdown before starting)
   - Skip (move to next plan)
   - Stop (end execution, save progress)
   ```

   b. **If "Review first":** Read and display the full plan file. Ask again: Execute, Modify, Skip.

   c. **If "Execute":** Read and follow `C:/Users/yaoji/.claude/get-shit-done/workflows/execute-plan.md` **inline** (do NOT spawn a subagent). Execute tasks one at a time.

   d. **After each task:** Pause briefly. If the user intervenes (types anything), stop and address their feedback before continuing. Otherwise proceed to next task.

   e. **After plan complete:** Show results, commit, create SUMMARY.md, then present next plan.

3. After all plans: proceed to verification (same as normal mode).

**Benefits of interactive mode:**
- No subagent overhead — dramatically lower token usage
- User catches mistakes early — saves costly verification cycles
- Maintains GSD's planning/tracking structure
- Best for: small phases, bug fixes, verification gaps, learning GSD

**Skip to handle_branching step** (interactive plans execute inline after grouping).
</step>

<step name="handle_branching">
Check `branching_strategy` from init:

**"none":** Skip, continue on current branch.

**"phase" or "milestone":** Use pre-computed `branch_name` from init:
```bash
git checkout -b "$BRANCH_NAME" 2>/dev/null || git checkout "$BRANCH_NAME"
```

All subsequent commits go to this branch. User handles merging.
</step>

<step name="validate_phase">
From init JSON: `phase_dir`, `plan_count`, `incomplete_count`.

Report: "Found {plan_count} plans in {phase_dir} ({incomplete_count} incomplete)"

**Update STATE.md for phase start:**
```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" state begin-phase --phase "${PHASE_NUMBER}" --name "${PHASE_NAME}" --plans "${PLAN_COUNT}"
```
This updates Status, Last Activity, Current focus, Current Position, and plan counts in STATE.md so frontmatter and body text reflect the active phase immediately.
</step>

<step name="discover_and_group_plans">
Load plan inventory with wave grouping in one call:

```bash
PLAN_INDEX=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" phase-plan-index "${PHASE_NUMBER}")
```

Parse JSON for: `phase`, `plans[]` (each with `id`, `wave`, `autonomous`, `objective`, `files_modified`, `task_count`, `has_summary`), `waves` (map of wave number → plan IDs), `incomplete`, `has_checkpoints`.

**Filtering:** Skip plans where `has_summary: true`. If `--gaps-only`: also skip non-gap_closure plans. If all filtered: "No matching incomplete plans" → exit.
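
The filtering and wave bucketing can be sketched as follows. This assumes the plan objects carry the fields listed above (`id`, `wave`, `has_summary`) plus a `gap_closure` flag from plan frontmatter; it is an illustration, not the gsd-tools implementation:

```javascript
// Sketch: drop completed plans, then bucket the rest by wave number.
function groupWaves(plans, gapsOnly = false) {
  const pending = plans.filter(p =>
    !p.has_summary && (!gapsOnly || p.gap_closure));
  const waves = new Map();
  for (const p of pending) {
    if (!waves.has(p.wave)) waves.set(p.wave, []);
    waves.get(p.wave).push(p.id);
  }
  // Waves execute in ascending order.
  return [...waves.entries()].sort(([a], [b]) => a - b);
}
```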

Report:
```
## Execution Plan

**Phase {X}: {Name}** — {total_plans} plans across {wave_count} waves

| Wave | Plans | What it builds |
|------|-------|----------------|
| 1 | 01-01, 01-02 | {from plan objectives, 3-8 words} |
| 2 | 01-03 | ... |
```
</step>

<step name="execute_waves">
Execute each wave in sequence. Within a wave: parallel if `PARALLELIZATION=true`, sequential if `false`.

**For each wave:**

1. **Describe what's being built (BEFORE spawning):**

   Read each plan's `<objective>`. Extract what's being built and why.

   ```
   ---
   ## Wave {N}

   **{Plan ID}: {Plan Name}**
   {2-3 sentences: what this builds, technical approach, why it matters}

   Spawning {count} agent(s)...
   ---
   ```

   - Bad: "Executing terrain generation plan"
   - Good: "Procedural terrain generator using Perlin noise — creates height maps, biome zones, and collision meshes. Required before vehicle physics can interact with ground."

2. **Spawn executor agents:**

   Pass paths only — executors read files themselves with their fresh 200k context. This keeps orchestrator context lean (~10-15%).

   ```
   Task(
     subagent_type="gsd-executor",
     model="{executor_model}",
     prompt="
       <objective>
       Execute plan {plan_number} of phase {phase_number}-{phase_name}.
       Commit each task atomically. Create SUMMARY.md. Update STATE.md and ROADMAP.md.
       </objective>

       <execution_context>
       @C:/Users/yaoji/.claude/get-shit-done/workflows/execute-plan.md
       @C:/Users/yaoji/.claude/get-shit-done/templates/summary.md
       @C:/Users/yaoji/.claude/get-shit-done/references/checkpoints.md
       @C:/Users/yaoji/.claude/get-shit-done/references/tdd.md
       </execution_context>

       <files_to_read>
       Read these files at execution start using the Read tool:
       - {phase_dir}/{plan_file} (Plan)
       - .planning/STATE.md (State)
       - .planning/config.json (Config, if exists)
       - ./CLAUDE.md (Project instructions, if exists — follow project-specific guidelines and coding conventions)
       - .claude/skills/ or .agents/skills/ (Project skills, if either exists — list skills, read SKILL.md for each, follow relevant rules during implementation)
       </files_to_read>

       <mcp_tools>
       If CLAUDE.md or project instructions reference MCP tools (e.g. jCodeMunch, context7, or other MCP servers), prefer those tools over Grep/Glob for code navigation when available. MCP tools often save significant tokens by providing structured code indexes. Check tool availability first — if MCP tools are not accessible, fall back to Grep/Glob.
       </mcp_tools>

       <success_criteria>
       - [ ] All tasks executed
       - [ ] Each task committed individually
       - [ ] SUMMARY.md created in plan directory
       - [ ] STATE.md updated with position and decisions
       - [ ] ROADMAP.md updated with plan progress (via `roadmap update-plan-progress`)
       </success_criteria>
     "
   )
   ```

3. **Wait for all agents in wave to complete.**

4. **Report completion — spot-check claims first:**

   For each SUMMARY.md:
   - Verify first 2 files from `key-files.created` exist on disk
   - Check `git log --oneline --all --grep="{phase}-{plan}"` returns ≥1 commit
   - Check for `## Self-Check: FAILED` marker

   If ANY spot-check fails: report which plan failed, route to failure handler — ask "Retry plan?" or "Continue with remaining waves?"
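
The three spot-checks above can be expressed as a pure function. Here `fileExists` and `gitGrep` are injected stand-ins for `fs.existsSync` and a `git log --grep` commit count; both helpers are assumptions for illustration, not gsd-tools APIs:

```javascript
// Sketch: spot-check an executor's completion claims before trusting them.
// Dependencies are injected so the check stays testable.
function spotCheck(summary, { fileExists, gitGrep }) {
  // Verify the first two claimed files exist on disk.
  const firstTwo = (summary.created || []).slice(0, 2);
  const missing = firstTwo.filter(f => !fileExists(f));
  // At least one commit tagged with the plan identifier.
  const hasCommit = gitGrep(`${summary.phase}-${summary.plan}`) >= 1;
  // The SUMMARY must not carry a failed self-check marker.
  const selfCheckOk = !summary.text.includes('## Self-Check: FAILED');
  return { pass: missing.length === 0 && hasCommit && selfCheckOk, missing };
}
```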

   If pass:
   ```
   ---
   ## Wave {N} Complete

   **{Plan ID}: {Plan Name}**
   {What was built — from SUMMARY.md}
   {Notable deviations, if any}

   {If more waves: what this enables for next wave}
   ---
   ```

   - Bad: "Wave 2 complete. Proceeding to Wave 3."
   - Good: "Terrain system complete — 3 biome types, height-based texturing, physics collision meshes. Vehicle physics (Wave 3) can now reference ground surfaces."

5. **Handle failures:**

   **Known Claude Code bug (classifyHandoffIfNeeded):** If an agent reports "failed" with error containing `classifyHandoffIfNeeded is not defined`, this is a Claude Code runtime bug — not a GSD or agent issue. The error fires in the completion handler AFTER all tool calls finish. In this case: run the same spot-checks as step 4 (SUMMARY.md exists, git commits present, no Self-Check: FAILED). If spot-checks PASS → treat as **successful**. If spot-checks FAIL → treat as real failure below.

   For real failures: report which plan failed → ask "Continue?" or "Stop?" → if continue, dependent plans may also fail. If stop, partial completion report.

5b. **Pre-wave dependency check (waves 2+ only):**

   Before spawning wave N+1, for each plan in the upcoming wave:
   ```bash
   node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" verify key-links {phase_dir}/{plan}-PLAN.md
   ```

   If any key-link from a PRIOR wave's artifact fails verification:

   ## Cross-Plan Wiring Gap

   | Plan | Link | From | Expected Pattern | Status |
   |------|------|------|-----------------|--------|
   | {plan} | {via} | {from} | {pattern} | NOT FOUND |

   Wave {N} artifacts may not be properly wired. Options:
   1. Investigate and fix before continuing
   2. Continue (may cause cascading failures in wave {N+1})

   Key-links referencing files in the CURRENT (upcoming) wave are skipped.

6. **Execute checkpoint plans between waves** — see `<checkpoint_handling>`.

7. **Proceed to next wave.**
</step>

<step name="checkpoint_handling">
Plans with `autonomous: false` require user interaction.

**Auto-mode checkpoint handling:**

Read auto-advance config (chain flag + user preference):
```bash
AUTO_CHAIN=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" config-get workflow._auto_chain_active 2>/dev/null || echo "false")
AUTO_CFG=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" config-get workflow.auto_advance 2>/dev/null || echo "false")
```

When executor returns a checkpoint AND (`AUTO_CHAIN` is `"true"` OR `AUTO_CFG` is `"true"`):
- **human-verify** → Auto-spawn continuation agent with `{user_response}` = `"approved"`. Log `⚡ Auto-approved checkpoint`.
- **decision** → Auto-spawn continuation agent with `{user_response}` = first option from checkpoint details. Log `⚡ Auto-selected: [option]`.
- **human-action** → Present to user (existing behavior below). Auth gates cannot be automated.
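
The auto-mode decision above reduces to a small lookup. A minimal sketch (function and parameter names are illustrative only):

```javascript
// Sketch: pick the automated response for a checkpoint, or null to
// fall through to the standard interactive flow.
function autoResponse(checkpointType, options, autoActive) {
  if (!autoActive) return null;                        // standard flow
  if (checkpointType === 'human-verify') return 'approved';
  if (checkpointType === 'decision') return options[0]; // first option wins
  return null;                                          // human-action: never automated
}
```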

**Standard flow (not auto-mode, or human-action type):**

1. Spawn agent for checkpoint plan
2. Agent runs until checkpoint task or auth gate → returns structured state
3. Agent return includes: completed tasks table, current task + blocker, checkpoint type/details, what's awaited
4. **Present to user:**
   ```
   ## Checkpoint: [Type]

   **Plan:** 03-03 Dashboard Layout
   **Progress:** 2/3 tasks complete

   [Checkpoint Details from agent return]
   [Awaiting section from agent return]
   ```
5. User responds: "approved"/"done" | issue description | decision selection
6. **Spawn continuation agent (NOT resume)** using continuation-prompt.md template:
   - `{completed_tasks_table}`: From checkpoint return
   - `{resume_task_number}` + `{resume_task_name}`: Current task
   - `{user_response}`: What user provided
   - `{resume_instructions}`: Based on checkpoint type
7. Continuation agent verifies previous commits, continues from resume point
8. Repeat until plan completes or user stops

**Why fresh agent, not resume:** Resume relies on internal serialization that breaks with parallel tool calls. Fresh agents with explicit state are more reliable.

**Checkpoints in parallel waves:** Agent pauses and returns while other parallel agents may complete. Present checkpoint, spawn continuation, wait for all before next wave.
</step>

<step name="aggregate_results">
After all waves:

```markdown
## Phase {X}: {Name} Execution Complete

**Waves:** {N} | **Plans:** {M}/{total} complete

| Wave | Plans | Status |
|------|-------|--------|
| 1 | plan-01, plan-02 | ✓ Complete |
| CP | plan-03 | ✓ Verified |
| 2 | plan-04 | ✓ Complete |

### Plan Details
1. **03-01**: [one-liner from SUMMARY.md]
2. **03-02**: [one-liner from SUMMARY.md]

### Issues Encountered
[Aggregate from SUMMARYs, or "None"]
```
</step>

<step name="close_parent_artifacts">
**For decimal/polish phases only (X.Y pattern):** Close the feedback loop by resolving parent UAT and debug artifacts.

**Skip if** phase number has no decimal (e.g., `3`, `04`) — only applies to gap-closure phases like `4.1`, `03.1`.

**1. Detect decimal phase and derive parent:**
```bash
# Check if phase_number contains a decimal
if [[ "$PHASE_NUMBER" == *.* ]]; then
  PARENT_PHASE="${PHASE_NUMBER%%.*}"
fi
```

**2. Find parent UAT file:**
```bash
PARENT_INFO=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" find-phase "${PARENT_PHASE}" --raw)
# Extract directory from PARENT_INFO JSON, then find UAT file in that directory
```

**If no parent UAT found:** Skip this step (gap-closure may have been triggered by VERIFICATION.md instead).

**3. Update UAT gap statuses:**

Read the parent UAT file's `## Gaps` section. For each gap entry with `status: failed`:
- Update to `status: resolved`

**4. Update UAT frontmatter:**

If all gaps now have `status: resolved`:
- Update frontmatter `status: diagnosed` → `status: resolved`
- Update frontmatter `updated:` timestamp

**5. Resolve referenced debug sessions:**

For each gap that has a `debug_session:` field:
- Read the debug session file
- Update frontmatter `status:` → `resolved`
- Update frontmatter `updated:` timestamp
- Move to resolved directory:
  ```bash
  mkdir -p .planning/debug/resolved
  mv .planning/debug/{slug}.md .planning/debug/resolved/
  ```

**6. Commit updated artifacts:**
```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" commit "docs(phase-${PARENT_PHASE}): resolve UAT gaps and debug sessions after ${PHASE_NUMBER} gap closure" --files .planning/phases/*${PARENT_PHASE}*/*-UAT.md .planning/debug/resolved/*.md
```
</step>

<step name="regression_gate">
Run prior phases' test suites to catch cross-phase regressions BEFORE verification.

**Skip if:** This is the first phase (no prior phases), or no prior VERIFICATION.md files exist.

**Step 1: Discover prior phases' test files**
```bash
# Find all VERIFICATION.md files from prior phases in current milestone
PRIOR_VERIFICATIONS=$(find .planning/phases/ -name "*-VERIFICATION.md" ! -path "*${PHASE_NUMBER}*" 2>/dev/null)
```

**Step 2: Extract test file lists from prior verifications**

For each VERIFICATION.md found, look for test file references:
- Lines containing `test`, `spec`, or `__tests__` paths
- The "Test Suite" or "Automated Checks" section
- File patterns from `key-files.created` in corresponding SUMMARY.md files that match `*.test.*` or `*.spec.*`

Collect all unique test file paths into `REGRESSION_FILES`.
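
The key-files portion of that collection can be sketched as below. The filename patterns and the `created` field shape are assumptions drawn from the patterns named above, not a fixed SUMMARY schema:

```javascript
// Sketch: gather unique prior-phase test files from SUMMARY key-files.
function collectRegressionFiles(summaryFiles) {
  // Matches *.test.* / *.spec.* (js/ts variants) and __tests__ paths.
  const isTest = f => /\.(test|spec)\.[jt]sx?$/.test(f) || f.includes('__tests__');
  const all = summaryFiles.flatMap(s => s.created || []).filter(isTest);
  return [...new Set(all)]; // dedupe, preserving first-seen order
}
```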

**Step 3: Run regression tests (if any found)**

```bash
# Detect test runner and run prior phase tests
if [ -f "package.json" ]; then
  # Node.js — use project's test runner
  npx jest ${REGRESSION_FILES} --passWithNoTests --no-coverage -q 2>&1 || npx vitest run ${REGRESSION_FILES} 2>&1
elif [ -f "Cargo.toml" ]; then
  cargo test 2>&1
elif [ -f "requirements.txt" ] || [ -f "pyproject.toml" ]; then
  python -m pytest ${REGRESSION_FILES} -q --tb=short 2>&1
fi
```

**Step 4: Report results**

If all tests pass:
```
✓ Regression gate: {N} prior-phase test files passed — no regressions detected
```
→ Proceed to verify_phase_goal

If any tests fail:
```
## ⚠ Cross-Phase Regression Detected

Phase {X} execution may have broken functionality from prior phases.

| Test File | Phase | Status | Detail |
|-----------|-------|--------|--------|
| {file} | {origin_phase} | FAILED | {first_failure_line} |

Options:
1. Fix regressions before verification (recommended)
2. Continue to verification anyway (regressions will compound)
3. Abort phase — roll back and re-plan
```

Use AskUserQuestion to present the options.
</step>

<step name="verify_phase_goal">
Verify phase achieved its GOAL, not just completed tasks.

```
Task(
  prompt="Verify phase {phase_number} goal achievement.
    Phase directory: {phase_dir}
    Phase goal: {goal from ROADMAP.md}
    Phase requirement IDs: {phase_req_ids}
    Check must_haves against actual codebase.
    Cross-reference requirement IDs from PLAN frontmatter against REQUIREMENTS.md — every ID MUST be accounted for.
    Create VERIFICATION.md.",
  subagent_type="gsd-verifier",
  model="{verifier_model}"
)
```

Read status:
```bash
grep "^status:" "$PHASE_DIR"/*-VERIFICATION.md | cut -d: -f2 | tr -d ' '
```

| Status | Action |
|--------|--------|
| `passed` | → update_roadmap |
| `human_needed` | Present items for human testing, get approval or feedback |
| `gaps_found` | Present gap summary, offer `/gsd:plan-phase {phase} --gaps` |
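
The status routing in the table above can be sketched as a single switch; the branch names are illustrative labels for the next steps, not gsd-tools identifiers:

```javascript
// Sketch: route on the status line extracted from VERIFICATION.md.
function routeVerification(status) {
  switch (status.trim()) {
    case 'passed':       return 'update_roadmap';
    case 'human_needed': return 'present_human_items';
    case 'gaps_found':   return 'offer_gap_planning';
    default:             return 'error_unknown_status';
  }
}
```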

**If human_needed:**
```
## ✓ Phase {X}: {Name} — Human Verification Required

All automated checks passed. {N} items need human testing:

{From VERIFICATION.md human_verification section}

"approved" → continue | Report issues → gap closure
```

**If gaps_found:**
```
## ⚠ Phase {X}: {Name} — Gaps Found

**Score:** {N}/{M} must-haves verified
**Report:** {phase_dir}/{phase_num}-VERIFICATION.md

### What's Missing
{Gap summaries from VERIFICATION.md}

---
## ▶ Next Up

`/gsd:plan-phase {X} --gaps`

<sub>`/clear` first → fresh context window</sub>

Also: `cat {phase_dir}/{phase_num}-VERIFICATION.md` — full report
Also: `/gsd:verify-work {X}` — manual testing first
```

Gap closure cycle: `/gsd:plan-phase {X} --gaps` reads VERIFICATION.md → creates gap plans with `gap_closure: true` → user runs `/gsd:execute-phase {X} --gaps-only` → verifier re-runs.
</step>

<step name="update_roadmap">
**Mark phase complete and update all tracking files:**

```bash
COMPLETION=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" phase complete "${PHASE_NUMBER}")
```

The CLI handles:
- Marking phase checkbox `[x]` with completion date
- Updating Progress table (Status → Complete, date)
- Updating plan count to final
- Advancing STATE.md to next phase
- Updating REQUIREMENTS.md traceability

Extract from result: `next_phase`, `next_phase_name`, `is_last_phase`.

```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" commit "docs(phase-{X}): complete phase execution" --files .planning/ROADMAP.md .planning/STATE.md .planning/REQUIREMENTS.md {phase_dir}/*-VERIFICATION.md
```
</step>

<step name="update_project_md">
**Evolve PROJECT.md to reflect phase completion (prevents planning document drift — #956):**

PROJECT.md tracks validated requirements, decisions, and current state. Without this step, PROJECT.md falls behind silently over multiple phases.

1. Read `.planning/PROJECT.md`
2. If the file exists and has a `## Validated Requirements` or `## Requirements` section:
   - Move any requirements validated by this phase from Active → Validated
   - Add a brief note: `Validated in Phase {X}: {Name}`
3. If the file has a `## Current State` or similar section:
   - Update it to reflect this phase's completion (e.g., "Phase {X} complete — {one-liner}")
4. Update the `Last updated:` footer to today's date
5. Commit the change:

```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" commit "docs(phase-{X}): evolve PROJECT.md after phase completion" --files .planning/PROJECT.md
```

**Skip this step if** `.planning/PROJECT.md` does not exist.
</step>

<step name="offer_next">

**Exception:** If `gaps_found`, the `verify_phase_goal` step already presents the gap-closure path (`/gsd:plan-phase {X} --gaps`). No additional routing needed — skip auto-advance.

**No-transition check (spawned by auto-advance chain):**

Parse `--no-transition` flag from $ARGUMENTS.

**If `--no-transition` flag present:**

Execute-phase was spawned by plan-phase's auto-advance. Do NOT run transition.md. After verification passes and roadmap is updated, return completion status to parent:

```
## PHASE COMPLETE

Phase: ${PHASE_NUMBER} - ${PHASE_NAME}
Plans: ${completed_count}/${total_count}
Verification: {Passed | Gaps Found}

[Include aggregate_results output]
```

STOP. Do not proceed to auto-advance or transition.

**If `--no-transition` flag is NOT present:**

**Auto-advance detection:**

1. Parse `--auto` flag from $ARGUMENTS
2. Read both the chain flag and user preference (chain flag already synced in init step):
   ```bash
   AUTO_CHAIN=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" config-get workflow._auto_chain_active 2>/dev/null || echo "false")
   AUTO_CFG=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" config-get workflow.auto_advance 2>/dev/null || echo "false")
   ```

**If `--auto` flag present OR `AUTO_CHAIN` is true OR `AUTO_CFG` is true (AND verification passed with no gaps):**

```
╔══════════════════════════════════════════╗
║  AUTO-ADVANCING → TRANSITION             ║
║  Phase {X} verified, continuing chain    ║
╚══════════════════════════════════════════╝
```

Execute the transition workflow inline (do NOT use Task — orchestrator context is ~10-15%, transition needs phase completion data already in context):

Read and follow `C:/Users/yaoji/.claude/get-shit-done/workflows/transition.md`, passing through the `--auto` flag so it propagates to the next phase invocation.

**If none of `--auto`, `AUTO_CHAIN`, or `AUTO_CFG` is true:**

**STOP. Do not auto-advance. Do not execute transition. Do not plan next phase. Present options to the user and wait.**

**IMPORTANT: There is NO `/gsd:transition` command. Never suggest it. The transition workflow is internal only.**

```
## ✓ Phase {X}: {Name} Complete

/gsd:progress — see updated roadmap
/gsd:discuss-phase {next} — discuss next phase before planning
/gsd:plan-phase {next} — plan next phase
/gsd:execute-phase {next} — execute next phase
```

Only suggest the commands listed above. Do not invent or hallucinate command names.
</step>

</process>

<context_efficiency>
Orchestrator: ~10-15% context. Subagents: fresh 200k each. No polling (Task blocks). No context bleed.
</context_efficiency>

<failure_handling>
- **classifyHandoffIfNeeded false failure:** Agent reports "failed" but error is `classifyHandoffIfNeeded is not defined` → Claude Code bug, not GSD. Spot-check (SUMMARY exists, commits present) → if pass, treat as success
- **Agent fails mid-plan:** Missing SUMMARY.md → report, ask user how to proceed
- **Dependency chain breaks:** Wave 1 fails → Wave 2 dependents likely fail → user chooses attempt or skip
- **All agents in wave fail:** Systemic issue → stop, report for investigation
- **Checkpoint unresolvable:** "Skip this plan?" or "Abort phase execution?" → record partial progress in STATE.md
</failure_handling>

<resumption>
Re-run `/gsd:execute-phase {phase}` → discover_plans finds completed SUMMARYs → skips them → resumes from first incomplete plan → continues wave execution.

STATE.md tracks: last completed plan, current wave, pending checkpoints.
</resumption>
493 get-shit-done/workflows/execute-plan.md Normal file
@@ -0,0 +1,493 @@
<purpose>
Execute a phase prompt (PLAN.md) and create the outcome summary (SUMMARY.md).
</purpose>

<required_reading>
Read STATE.md before any operation to load project context.
Read config.json for planning behavior settings.

@C:/Users/yaoji/.claude/get-shit-done/references/git-integration.md
</required_reading>

<process>

<step name="init_context" priority="first">
Load execution context (paths only to minimize orchestrator context):

```bash
INIT=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" init execute-phase "${PHASE}")
if [[ "$INIT" == @file:* ]]; then INIT=$(cat "${INIT#@file:}"); fi
```

Extract from init JSON: `executor_model`, `commit_docs`, `phase_dir`, `phase_number`, `plans`, `summaries`, `incomplete_plans`, `state_path`, `config_path`.

If `.planning/` missing: error.
</step>

<step name="identify_plan">
```bash
# Use plans/summaries from INIT JSON, or list files
ls .planning/phases/XX-name/*-PLAN.md 2>/dev/null | sort
ls .planning/phases/XX-name/*-SUMMARY.md 2>/dev/null | sort
```

Find first PLAN without matching SUMMARY. Decimal phases supported (`01.1-hotfix/`):

```bash
PHASE=$(echo "$PLAN_PATH" | grep -oE '[0-9]+(\.[0-9]+)?-[0-9]+')
# config settings can be fetched via gsd-tools config-get if needed
```
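
The "first PLAN without matching SUMMARY" rule can be sketched as below, assuming the `*-PLAN.md` / `*-SUMMARY.md` naming convention shown above:

```javascript
// Sketch: first PLAN file lacking a SUMMARY with the same prefix.
function nextIncompletePlan(planPaths, summaryPaths) {
  const done = new Set(summaryPaths.map(p => p.replace(/-SUMMARY\.md$/, '')));
  return planPaths.find(p => !done.has(p.replace(/-PLAN\.md$/, ''))) ?? null;
}
```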

<if mode="yolo">
Auto-approve: `⚡ Execute {phase}-{plan}-PLAN.md [Plan X of Y for Phase Z]` → parse_segments.
</if>

<if mode="interactive" OR="custom with gates.execute_next_plan true">
Present plan identification, wait for confirmation.
</if>
</step>

<step name="record_start_time">
```bash
PLAN_START_TIME=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
PLAN_START_EPOCH=$(date +%s)
```
</step>

<step name="parse_segments">
```bash
grep -n "type=\"checkpoint" .planning/phases/XX-name/{phase}-{plan}-PLAN.md
```

**Routing by checkpoint type:**

| Checkpoints | Pattern | Execution |
|-------------|---------|-----------|
| None | A (autonomous) | Single subagent: full plan + SUMMARY + commit |
| Verify-only | B (segmented) | Segments between checkpoints. After none/human-verify → SUBAGENT. After decision/human-action → MAIN |
| Decision | C (main) | Execute entirely in main context |

**Pattern A:** init_agent_tracking → spawn Task(subagent_type="gsd-executor", model=executor_model) with prompt: execute plan at [path], autonomous, all tasks + SUMMARY + commit, follow deviation/auth rules, report: plan name, tasks, SUMMARY path, commit hash → track agent_id → wait → update tracking → report.

**Pattern B:** Execute segment-by-segment. Autonomous segments: spawn subagent for assigned tasks only (no SUMMARY/commit). Checkpoints: main context. After all segments: aggregate, create SUMMARY, commit. See segment_execution.

**Pattern C:** Execute in main using standard flow (step name="execute").

Fresh context per subagent preserves peak quality. Main context stays lean.
</step>

<step name="init_agent_tracking">
```bash
if [ ! -f .planning/agent-history.json ]; then
  echo '{"version":"1.0","max_entries":50,"entries":[]}' > .planning/agent-history.json
fi
# Check for an interrupted agent BEFORE removing the marker file
if [ -f .planning/current-agent-id.txt ]; then
  INTERRUPTED_ID=$(cat .planning/current-agent-id.txt)
  echo "Found interrupted agent: $INTERRUPTED_ID"
fi
```

If interrupted: ask user to resume (Task `resume` parameter) or start fresh. Starting fresh clears the marker with `rm -f .planning/current-agent-id.txt`.
**Tracking protocol:** On spawn: write agent_id to `current-agent-id.txt`, append to agent-history.json: `{"agent_id":"[id]","task_description":"[desc]","phase":"[phase]","plan":"[plan]","segment":[num|null],"timestamp":"[ISO]","status":"spawned","completion_timestamp":null}`. On completion: status → "completed", set completion_timestamp, delete current-agent-id.txt. Prune: if entries > max_entries, remove oldest "completed" (never "spawned").

Run for Pattern A/B before spawning. Pattern C: skip.
</step>

<step name="segment_execution">
Pattern B only (verify-only checkpoints). Skip for A/C.

1. Parse segment map: checkpoint locations and types
2. Per segment:
   - Subagent route: spawn gsd-executor for assigned tasks only. Prompt: task range, plan path, read full plan for context, execute assigned tasks, track deviations, NO SUMMARY/commit. Track via agent protocol.
   - Main route: execute tasks using standard flow (step name="execute")
3. After ALL segments: aggregate files/deviations/decisions → create SUMMARY.md → commit → self-check:
   - Verify key-files.created exist on disk with `[ -f ]`
   - Check `git log --oneline --all --grep="{phase}-{plan}"` returns ≥1 commit
- Append `## Self-Check: PASSED` or `## Self-Check: FAILED` to SUMMARY
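
A minimal sketch of that self-check, exercised against a throwaway git repo (the file name, commit message, and grep pattern are invented):

```bash
# Throwaway repo standing in for the project.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "exec@example.com"
git config user.name "gsd-executor"
echo 'export {}' > auth.ts
git add auth.ts
git commit -qm "feat(08-02): add auth module"
STATUS=PASSED
# key-files.created existence check
[ -f auth.ts ] || STATUS=FAILED
# at least one commit tagged with the phase-plan id
[ "$(git log --oneline --all --grep='08-02' | wc -l)" -ge 1 ] || STATUS=FAILED
echo "## Self-Check: $STATUS"
```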

**Known Claude Code bug (classifyHandoffIfNeeded):** If any segment agent reports "failed" with `classifyHandoffIfNeeded is not defined`, this is a Claude Code runtime bug — not a real failure. Run spot-checks; if they pass, treat as successful.
</step>

<step name="load_prompt">
```bash
cat .planning/phases/XX-name/{phase}-{plan}-PLAN.md
```
This IS the execution instructions. Follow exactly. If plan references CONTEXT.md: honor user's vision throughout.

**If plan contains `<interfaces>` block:** These are pre-extracted type definitions and contracts. Use them directly — do NOT re-read the source files to discover types. The planner already extracted what you need.
</step>

<step name="previous_phase_check">
```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" phases list --type summaries --raw
# Extract the second-to-last summary from the JSON result
```
If previous SUMMARY has unresolved "Issues Encountered" or "Next Phase Readiness" blockers: AskUserQuestion(header="Previous Issues", options: "Proceed anyway" | "Address first" | "Review previous").
</step>

<step name="execute">
Deviations are normal — handle via rules below.

1. Read @context files from prompt
2. **MCP tools:** If CLAUDE.md or project instructions reference MCP tools (e.g. jCodeMunch for code navigation), prefer them over Grep/Glob when available. Fall back to Grep/Glob if MCP tools are not accessible.
3. Per task:
   - **MANDATORY read_first gate:** If the task has a `<read_first>` field, you MUST read every listed file BEFORE making any edits. This is not optional. Do not skip files because you "already know" what's in them — read them. The read_first files establish ground truth for the task.
   - `type="auto"`: if `tdd="true"` → TDD execution. Implement with deviation rules + auth gates. Verify done criteria. Commit (see task_commit). Track hash for Summary.
   - `type="checkpoint:*"`: STOP → checkpoint_protocol → wait for user → continue only after confirmation.
   - **MANDATORY acceptance_criteria check:** After completing each task, if it has `<acceptance_criteria>`, verify EVERY criterion before moving to the next task. Use grep, file reads, or CLI commands to confirm each criterion. If any criterion fails, fix the implementation before proceeding. Do not skip criteria or mark them as "will verify later".
4. Run `<verification>` checks
5. Confirm `<success_criteria>` met
6. Document deviations in Summary
</step>

<authentication_gates>

## Authentication Gates

Auth errors during execution are NOT failures — they're expected interaction points.

**Indicators:** "Not authenticated", "Unauthorized", 401/403, "Please run {tool} login", "Set {ENV_VAR}"

**Protocol:**
1. Recognize auth gate (not a bug)
2. STOP task execution
3. Create dynamic checkpoint:human-action with exact auth steps
4. Wait for user to authenticate
5. Verify credentials work
6. Retry original task
7. Continue normally

**Example:** `vercel --yes` → "Not authenticated" → checkpoint asking user to `vercel login` → verify with `vercel whoami` → retry deploy → continue
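
Indicator matching can be sketched as a simple `case` pattern; the sample tool output here is invented:

```bash
# Made-up tool output; real output comes from the failing command.
OUTPUT='Error: Not authenticated. Please run `vercel login`'
AUTH_GATE=false
case "$OUTPUT" in
  *"Not authenticated"*|*"Unauthorized"*|*" 401"*|*" 403"*)
    AUTH_GATE=true ;;   # treat as a checkpoint, not a failure
esac
echo "auth gate: $AUTH_GATE"
```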

**In Summary:** Document as normal flow under "## Authentication Gates", not as deviations.

</authentication_gates>

<deviation_rules>

## Deviation Rules

You WILL discover unplanned work. Apply automatically, track all for Summary.

| Rule | Trigger | Action | Permission |
|------|---------|--------|------------|
| **1: Bug** | Broken behavior, errors, wrong queries, type errors, security vulns, race conditions, leaks | Fix → test → verify → track `[Rule 1 - Bug]` | Auto |
| **2: Missing Critical** | Missing essentials: error handling, validation, auth, CSRF/CORS, rate limiting, indexes, logging | Add → test → verify → track `[Rule 2 - Missing Critical]` | Auto |
| **3: Blocking** | Prevents completion: missing deps, wrong types, broken imports, missing env/config/files, circular deps | Fix blocker → verify proceeds → track `[Rule 3 - Blocking]` | Auto |
| **4: Architectural** | Structural change: new DB table, schema change, new service, switching libs, breaking API, new infra | STOP → present decision (below) → track `[Rule 4 - Architectural]` | Ask user |

**Rule 4 format:**
```
⚠️ Architectural Decision Needed

Current task: [task name]
Discovery: [what prompted this]
Proposed change: [modification]
Why needed: [rationale]
Impact: [what this affects]
Alternatives: [other approaches]

Proceed with proposed change? (yes / different approach / defer)
```

**Priority:** Rule 4 (STOP) > Rules 1-3 (auto) > unsure → Rule 4
**Edge cases:** missing validation → R2 | null crash → R1 | new table → R4 | new column → R1/2
**Heuristic:** Affects correctness/security/completion? → R1-3. Maybe? → R4.

</deviation_rules>

<deviation_documentation>

## Documenting Deviations

Summary MUST include a deviations section. None? → `## Deviations from Plan\n\nNone - plan executed exactly as written.`

Per deviation: **[Rule N - Category] Title** — Found during: Task X | Issue | Fix | Files modified | Verification | Commit hash

End with: **Total deviations:** N auto-fixed (breakdown). **Impact:** assessment.

</deviation_documentation>

<tdd_plan_execution>
## TDD Execution

For `type: tdd` plans — RED-GREEN-REFACTOR:

1. **Infrastructure** (first TDD plan only): detect project, install framework, config, verify empty suite
2. **RED:** Read `<behavior>` → failing test(s) → run (MUST fail) → commit: `test({phase}-{plan}): add failing test for [feature]`
3. **GREEN:** Read `<implementation>` → minimal code → run (MUST pass) → commit: `feat({phase}-{plan}): implement [feature]`
4. **REFACTOR:** Clean up → tests MUST pass → commit: `refactor({phase}-{plan}): clean up [feature]`

Errors: RED doesn't fail → investigate test/existing feature. GREEN doesn't pass → debug, iterate. REFACTOR breaks → undo.

See `C:/Users/yaoji/.claude/get-shit-done/references/tdd.md` for structure.
</tdd_plan_execution>

<precommit_failure_handling>
## Pre-commit Hook Failure Handling

Your commits may trigger pre-commit hooks. Auto-fix hooks handle themselves transparently — files get fixed and re-staged automatically.

If a commit is BLOCKED by a hook:

1. The `git commit` command fails with hook error output
2. Read the error — it tells you exactly which hook and what failed
3. Fix the issue (type error, lint violation, secret leak, etc.)
4. `git add` the fixed files
5. Retry the commit
6. Do NOT use `--no-verify`

This is normal and expected. Budget 1-2 retry cycles per commit.
</precommit_failure_handling>

<task_commit>
## Task Commit Protocol

After each task (verification passed, done criteria met), commit immediately.

**1. Check:** `git status --short`

**2. Stage individually** (NEVER `git add .` or `git add -A`):
```bash
git add src/api/auth.ts
git add src/types/user.ts
```

**3. Commit type:**

| Type | When | Example |
|------|------|---------|
| `feat` | New functionality | feat(08-02): create user registration endpoint |
| `fix` | Bug fix | fix(08-02): correct email validation regex |
| `test` | Test-only (TDD RED) | test(08-02): add failing test for password hashing |
| `refactor` | No behavior change (TDD REFACTOR) | refactor(08-02): extract validation to helper |
| `perf` | Performance | perf(08-02): add database index |
| `docs` | Documentation | docs(08-02): add API docs |
| `style` | Formatting | style(08-02): format auth module |
| `chore` | Config/deps | chore(08-02): add bcrypt dependency |

**4. Format:** `{type}({phase}-{plan}): {description}` with bullet points for key changes.
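
A worked example of that format, against a throwaway repo (the file name and message body are invented):

```bash
# Throwaway repo; real commits run in the project repo.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "exec@example.com"
git config user.name "gsd-executor"
echo 'export {}' > auth.ts
git add auth.ts   # staged individually, never `git add .`
git commit -qm "feat(08-02): create user registration endpoint" \
  -m "- add POST /users route
- hash passwords before storing"
SUBJECT=$(git log -1 --pretty=%s)
echo "$SUBJECT"
```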

**5. Record hash:**
```bash
TASK_COMMIT=$(git rev-parse --short HEAD)
TASK_COMMITS+=("Task ${TASK_NUM}: ${TASK_COMMIT}")
```

**6. Check for untracked generated files:**
```bash
git status --short | grep '^??'
```
If new untracked files appeared after running scripts or tools, decide for each:
- **Commit it** — if it's a source file, config, or intentional artifact
- **Add to .gitignore** — if it's a generated/runtime output (build artifacts, `.env` files, cache files, compiled output)
- Do NOT leave generated files untracked

</task_commit>

<step name="checkpoint_protocol">
On `type="checkpoint:*"`: automate everything possible first. Checkpoints are for verification/decisions only.

Display: `CHECKPOINT: [Type]` box → Progress {X}/{Y} → Task name → type-specific content → `YOUR ACTION: [signal]`

| Type | Content | Resume signal |
|------|---------|---------------|
| human-verify (90%) | What was built + verification steps (commands/URLs) | "approved" or describe issues |
| decision (9%) | Decision needed + context + options with pros/cons | "Select: option-id" |
| human-action (1%) | What was automated + ONE manual step + verification plan | "done" |

After response: verify if specified. Pass → continue. Fail → inform, wait. WAIT for user — do NOT hallucinate completion.

See C:/Users/yaoji/.claude/get-shit-done/references/checkpoints.md for details.
</step>

<step name="checkpoint_return_for_orchestrator">
When spawned via Task and hitting a checkpoint: return structured state (you cannot interact with the user directly).

**Required return:** 1) Completed Tasks table (hashes + files) 2) Current Task (what's blocking) 3) Checkpoint Details (user-facing content) 4) Awaiting (what's needed from user)

Orchestrator parses → presents to user → spawns fresh continuation with your completed tasks state. You will NOT be resumed. In main context: use checkpoint_protocol above.
</step>

<step name="verification_failure_gate">
If verification fails:

**Check if node repair is enabled** (default: on):
```bash
NODE_REPAIR=$(node "./.claude/get-shit-done/bin/gsd-tools.cjs" config-get workflow.node_repair 2>/dev/null || echo "true")
```

If `NODE_REPAIR` is `true`: invoke `@./.claude/get-shit-done/workflows/node-repair.md` with:
- FAILED_TASK: task number, name, done-criteria
- ERROR: expected vs actual result
- PLAN_CONTEXT: adjacent task names + phase goal
- REPAIR_BUDGET: `workflow.node_repair_budget` from config (default: 2)

Node repair will attempt RETRY, DECOMPOSE, or PRUNE autonomously. Only reaches this gate again if repair budget is exhausted (ESCALATE).

If `NODE_REPAIR` is `false` OR repair returns ESCALATE: STOP. Present: "Verification failed for Task [X]: [name]. Expected: [criteria]. Actual: [result]. Repair attempted: [summary of what was tried]." Options: Retry | Skip (mark incomplete) | Stop (investigate). If skipped → SUMMARY "Issues Encountered".
</step>

<step name="record_completion_time">
```bash
PLAN_END_TIME=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
PLAN_END_EPOCH=$(date +%s)

DURATION_SEC=$(( PLAN_END_EPOCH - PLAN_START_EPOCH ))
DURATION_MIN=$(( DURATION_SEC / 60 ))

if [[ $DURATION_MIN -ge 60 ]]; then
  HRS=$(( DURATION_MIN / 60 ))
  MIN=$(( DURATION_MIN % 60 ))
  DURATION="${HRS}h ${MIN}m"
else
  DURATION="${DURATION_MIN} min"
fi
```
</step>

<step name="generate_user_setup">
```bash
grep -A 50 "^user_setup:" .planning/phases/XX-name/{phase}-{plan}-PLAN.md | head -50
```

If user_setup exists: create `{phase}-USER-SETUP.md` using template `C:/Users/yaoji/.claude/get-shit-done/templates/user-setup.md`. Per service: env vars table, account setup checklist, dashboard config, local dev notes, verification commands. Status "Incomplete". Set `USER_SETUP_CREATED=true`. If empty/missing: skip.
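
The detection can be sketched with a fixture plan file (the frontmatter content is invented):

```bash
# Fixture plan standing in for {phase}-{plan}-PLAN.md.
plan=$(mktemp)
printf 'user_setup:\n  vercel:\n    env: [VERCEL_TOKEN]\n' > "$plan"
if grep -q '^user_setup:' "$plan"; then
  USER_SETup_CREATED=true   # placeholder; see note below
  USER_SETUP_CREATED=true
else
  USER_SETUP_CREATED=false
fi
echo "USER_SETUP_CREATED=$USER_SETUP_CREATED"
```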
</step>

<step name="create_summary">
Create `{phase}-{plan}-SUMMARY.md` at `.planning/phases/XX-name/`. Use `C:/Users/yaoji/.claude/get-shit-done/templates/summary.md`.

**Frontmatter:** phase, plan, subsystem, tags | requires/provides/affects | tech-stack.added/patterns | key-files.created/modified | key-decisions | requirements-completed (**MUST** copy `requirements` array from PLAN.md frontmatter verbatim) | duration ($DURATION), completed ($PLAN_END_TIME date).

Title: `# Phase [X] Plan [Y]: [Name] Summary`

One-liner must be SUBSTANTIVE: "JWT auth with refresh rotation using jose library", not "Authentication implemented".

Include: duration, start/end times, task count, file count.

Next: more plans → "Ready for {next-plan}" | last → "Phase complete, ready for next step".
</step>

<step name="update_current_position">
Update STATE.md using gsd-tools:

```bash
# Advance plan counter (handles last-plan edge case)
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" state advance-plan

# Recalculate progress bar from disk state
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" state update-progress

# Record execution metrics
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" state record-metric \
  --phase "${PHASE}" --plan "${PLAN}" --duration "${DURATION}" \
  --tasks "${TASK_COUNT}" --files "${FILE_COUNT}"
```
</step>

<step name="extract_decisions_and_issues">
From SUMMARY: extract decisions and add to STATE.md:

```bash
# Add each decision from SUMMARY key-decisions
# Prefer file inputs for shell-safe text (preserves `$`, `*`, etc. exactly)
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" state add-decision \
  --phase "${PHASE}" --summary-file "${DECISION_TEXT_FILE}" --rationale-file "${RATIONALE_FILE}"

# Add blockers if any found
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" state add-blocker --text-file "${BLOCKER_TEXT_FILE}"
```
</step>

<step name="update_session_continuity">
Update session info using gsd-tools:

```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" state record-session \
  --stopped-at "Completed ${PHASE}-${PLAN}-PLAN.md" \
  --resume-file "None"
```

Keep STATE.md under 150 lines.
</step>

<step name="issues_review_gate">
If SUMMARY "Issues Encountered" ≠ "None": yolo → log and continue. Interactive → present issues, wait for acknowledgment.
</step>

<step name="update_roadmap">
```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" roadmap update-plan-progress "${PHASE}"
```
Counts PLAN vs SUMMARY files on disk. Updates the progress table row with the correct count and status (`In Progress` or `Complete` with date).
</step>

<step name="update_requirements">
Mark completed requirements from the PLAN.md frontmatter `requirements:` field:

```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" requirements mark-complete ${REQ_IDS}
```

Extract requirement IDs from the plan's frontmatter (e.g., `requirements: [AUTH-01, AUTH-02]`). If no requirements field, skip.
</step>

<step name="git_commit_metadata">
Task code is already committed per-task. Commit the plan metadata:

```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" commit "docs({phase}-{plan}): complete [plan-name] plan" --files .planning/phases/XX-name/{phase}-{plan}-SUMMARY.md .planning/STATE.md .planning/ROADMAP.md .planning/REQUIREMENTS.md
```
</step>

<step name="update_codebase_map">
If `.planning/codebase/` doesn't exist: skip.

```bash
FIRST_TASK=$(git log --oneline --grep="feat({phase}-{plan}):" --grep="fix({phase}-{plan}):" --grep="test({phase}-{plan}):" --reverse | head -1 | cut -d' ' -f1)
git diff --name-only ${FIRST_TASK}^..HEAD 2>/dev/null
```

Update only structural changes: new src/ dir → STRUCTURE.md | deps → STACK.md | file pattern → CONVENTIONS.md | API client → INTEGRATIONS.md | config → STACK.md | renamed → update paths. Skip code-only/bugfix/content changes.

```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" commit "" --files .planning/codebase/*.md --amend
```
</step>

<step name="offer_next">
If `USER_SETUP_CREATED=true`: display `⚠️ USER SETUP REQUIRED` with path + env/config tasks at TOP.

```bash
ls -1 .planning/phases/[current-phase-dir]/*-PLAN.md 2>/dev/null | wc -l
ls -1 .planning/phases/[current-phase-dir]/*-SUMMARY.md 2>/dev/null | wc -l
```

| Condition | Route | Action |
|-----------|-------|--------|
| summaries < plans | **A: More plans** | Find next PLAN without SUMMARY. Yolo: auto-continue. Interactive: show next plan, suggest `/gsd:execute-phase {phase}` + `/gsd:verify-work`. STOP here. |
| summaries = plans, current < highest phase | **B: Phase done** | Show completion, suggest `/gsd:plan-phase {Z+1}` + `/gsd:verify-work {Z}` + `/gsd:discuss-phase {Z+1}` |
| summaries = plans, current = highest phase | **C: Milestone done** | Show banner, suggest `/gsd:complete-milestone` + `/gsd:verify-work` + `/gsd:add-phase` |
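
The route comparison can be sketched with a fixture phase directory (file names are invented; route A fires because one plan lacks a SUMMARY):

```bash
# Fixture standing in for .planning/phases/[current-phase-dir]/.
dir=$(mktemp -d)
touch "$dir/05-01-PLAN.md" "$dir/05-01-SUMMARY.md" "$dir/05-02-PLAN.md"
PLANS=$(ls -1 "$dir"/*-PLAN.md 2>/dev/null | wc -l)
SUMMARIES=$(ls -1 "$dir"/*-SUMMARY.md 2>/dev/null | wc -l)
if [ "$SUMMARIES" -lt "$PLANS" ]; then
  ROUTE="A"        # more plans to execute
else
  ROUTE="B-or-C"   # phase or milestone complete
fi
echo "route: $ROUTE"
```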

All routes: `/clear` first for fresh context.
</step>

</process>

<success_criteria>

- All tasks from PLAN.md completed
- All verifications pass
- USER-SETUP.md generated if user_setup in frontmatter
- SUMMARY.md created with substantive content
- STATE.md updated (position, decisions, issues, session)
- ROADMAP.md updated
- If codebase map exists: map updated with execution changes (or skipped if no significant changes)
- If USER-SETUP.md created: prominently surfaced in completion output
</success_criteria>

159
get-shit-done/workflows/health.md
Normal file

<purpose>
Validate `.planning/` directory integrity and report actionable issues. Checks for missing files, invalid configurations, inconsistent state, and orphaned plans. Optionally repairs auto-fixable issues.
</purpose>

<required_reading>
Read all files referenced by the invoking prompt's execution_context before starting.
</required_reading>

<process>

<step name="parse_args">
**Parse arguments:**

Check if `--repair` is present in the command arguments.

```bash
REPAIR_FLAG=""
case " $* " in
  *" --repair "*) REPAIR_FLAG="--repair" ;;
esac
```
</step>

<step name="run_health_check">
**Run health validation:**

```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" validate health $REPAIR_FLAG
```

Parse JSON output:
- `status`: "healthy" | "degraded" | "broken"
- `errors[]`: Critical issues (code, message, fix, repairable)
- `warnings[]`: Non-critical issues
- `info[]`: Informational notes
- `repairable_count`: Number of auto-fixable issues
- `repairs_performed[]`: Actions taken if --repair was used
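
Pulling the scalar fields out of that JSON can be sketched in plain shell; the sample payload is invented (real output comes from `validate health`):

```bash
# Sample health payload for illustration only.
HEALTH='{"status":"degraded","errors":[],"warnings":[],"repairable_count":2}'
STATUS=$(printf '%s' "$HEALTH" | sed -n 's/.*"status":"\([^"]*\)".*/\1/p')
REPAIRABLE=$(printf '%s' "$HEALTH" | sed -n 's/.*"repairable_count":\([0-9]*\).*/\1/p')
echo "status=$STATUS repairable=$REPAIRABLE"
```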

</step>

<step name="format_output">
**Format and display results:**

```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
GSD Health Check
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Status: HEALTHY | DEGRADED | BROKEN
Errors: N | Warnings: N | Info: N
```

**If repairs were performed:**
```
## Repairs Performed

- ✓ config.json: Created with defaults
- ✓ STATE.md: Regenerated from roadmap
```

**If errors exist:**
```
## Errors

- [E001] config.json: JSON parse error at line 5
  Fix: Run /gsd:health --repair to reset to defaults

- [E002] PROJECT.md not found
  Fix: Run /gsd:new-project to create
```

**If warnings exist:**
```
## Warnings

- [W001] STATE.md references phase 5, but only phases 1-3 exist
  Fix: Run /gsd:health --repair to regenerate

- [W005] Phase directory "1-setup" doesn't follow NN-name format
  Fix: Rename to match pattern (e.g., 01-setup)
```

**If info exists:**
```
## Info

- [I001] 02-implementation/02-01-PLAN.md has no SUMMARY.md
  Note: May be in progress
```

**Footer (if repairable issues exist and --repair was NOT used):**
```
---
N issues can be auto-repaired. Run: /gsd:health --repair
```
</step>

<step name="offer_repair">
**If repairable issues exist and --repair was NOT used:**

Ask the user if they want to run repairs:

```
Would you like to run /gsd:health --repair to fix N issues automatically?
```

If yes, re-run with the --repair flag and display results.
</step>

<step name="verify_repairs">
**If repairs were performed:**

Re-run the health check without --repair to confirm issues are resolved:

```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" validate health
```

Report final status.
</step>

</process>

<error_codes>

| Code | Severity | Description | Repairable |
|------|----------|-------------|------------|
| E001 | error | .planning/ directory not found | No |
| E002 | error | PROJECT.md not found | No |
| E003 | error | ROADMAP.md not found | No |
| E004 | error | STATE.md not found | Yes |
| E005 | error | config.json parse error | Yes |
| W001 | warning | PROJECT.md missing required section | No |
| W002 | warning | STATE.md references invalid phase | Yes |
| W003 | warning | config.json not found | Yes |
| W004 | warning | config.json invalid field value | No |
| W005 | warning | Phase directory naming mismatch | No |
| W006 | warning | Phase in ROADMAP but no directory | No |
| W007 | warning | Phase on disk but not in ROADMAP | No |
| W008 | warning | config.json: workflow.nyquist_validation absent (defaults to enabled but agents may skip) | Yes |
| W009 | warning | Phase has Validation Architecture in RESEARCH.md but no VALIDATION.md | No |
| I001 | info | Plan without SUMMARY (may be in progress) | No |

</error_codes>

<repair_actions>

| Action | Effect | Risk |
|--------|--------|------|
| createConfig | Create config.json with defaults | None |
| resetConfig | Delete + recreate config.json | Loses custom settings |
| regenerateState | Create STATE.md from ROADMAP structure | Loses session history |
| addNyquistKey | Add workflow.nyquist_validation: true to config.json | None — matches existing default |

**Not repairable (too risky):**
- PROJECT.md, ROADMAP.md content
- Phase directory renaming
- Orphaned plan cleanup

</repair_actions>

542
get-shit-done/workflows/help.md
Normal file

<purpose>
|
||||
Display the complete GSD command reference. Output ONLY the reference content. Do NOT add project-specific analysis, git status, next-step suggestions, or any commentary beyond the reference.
|
||||
</purpose>
|
||||
|
||||
<reference>
|
||||
# GSD Command Reference
|
||||
|
||||
**GSD** (Get Shit Done) creates hierarchical project plans optimized for solo agentic development with Claude Code.
|
||||
|
||||
## Quick Start
|
||||
|
||||
1. `/gsd:new-project` - Initialize project (includes research, requirements, roadmap)
|
||||
2. `/gsd:plan-phase 1` - Create detailed plan for first phase
|
||||
3. `/gsd:execute-phase 1` - Execute the phase
|
||||
|
||||
## Staying Updated
|
||||
|
||||
GSD evolves fast. Update periodically:
|
||||
|
||||
```bash
|
||||
npx get-shit-done-cc@latest
|
||||
```
|
||||
|
||||
## Core Workflow
|
||||
|
||||
```
|
||||
/gsd:new-project → /gsd:plan-phase → /gsd:execute-phase → repeat
|
||||
```
|
||||
|
||||
### Project Initialization
|
||||
|
||||
**`/gsd:new-project`**
|
||||
Initialize new project through unified flow.
|
||||
|
||||
One command takes you from idea to ready-for-planning:
|
||||
- Deep questioning to understand what you're building
|
||||
- Optional domain research (spawns 4 parallel researcher agents)
|
||||
- Requirements definition with v1/v2/out-of-scope scoping
|
||||
- Roadmap creation with phase breakdown and success criteria
|
||||
|
||||
Creates all `.planning/` artifacts:
|
||||
- `PROJECT.md` — vision and requirements
|
||||
- `config.json` — workflow mode (interactive/yolo)
|
||||
- `research/` — domain research (if selected)
|
||||
- `REQUIREMENTS.md` — scoped requirements with REQ-IDs
|
||||
- `ROADMAP.md` — phases mapped to requirements
|
||||
- `STATE.md` — project memory
|
||||
|
||||
Usage: `/gsd:new-project`
|
||||
|
||||
**`/gsd:map-codebase`**
|
||||
Map an existing codebase for brownfield projects.
|
||||
|
||||
- Analyzes codebase with parallel Explore agents
|
||||
- Creates `.planning/codebase/` with 7 focused documents
|
||||
- Covers stack, architecture, structure, conventions, testing, integrations, concerns
|
||||
- Use before `/gsd:new-project` on existing codebases
|
||||
|
||||
Usage: `/gsd:map-codebase`
|
||||
|
||||

### Phase Planning

**`/gsd:discuss-phase <number>`**

Help articulate your vision for a phase before planning.

- Captures how you imagine this phase working
- Creates CONTEXT.md with your vision, essentials, and boundaries
- Use when you have ideas about how something should look/feel
- Optional `--batch` asks 2-5 related questions at a time instead of one-by-one

Usage: `/gsd:discuss-phase 2`
Usage: `/gsd:discuss-phase 2 --batch`
Usage: `/gsd:discuss-phase 2 --batch=3`

**`/gsd:research-phase <number>`**

Comprehensive ecosystem research for niche/complex domains.

- Discovers standard stack, architecture patterns, pitfalls
- Creates RESEARCH.md with "how experts build this" knowledge
- Use for 3D, games, audio, shaders, ML, and other specialized domains
- Goes beyond "which library" to ecosystem knowledge

Usage: `/gsd:research-phase 3`

**`/gsd:list-phase-assumptions <number>`**

See what Claude is planning to do before it starts.

- Shows Claude's intended approach for a phase
- Lets you course-correct if Claude misunderstood your vision
- No files created — conversational output only

Usage: `/gsd:list-phase-assumptions 3`

**`/gsd:plan-phase <number>`**

Create detailed execution plan for a specific phase.

- Generates `.planning/phases/XX-phase-name/XX-YY-PLAN.md`
- Breaks phase into concrete, actionable tasks
- Includes verification criteria and success measures
- Multiple plans per phase supported (XX-01, XX-02, etc.)

Usage: `/gsd:plan-phase 1`
Result: Creates `.planning/phases/01-foundation/01-01-PLAN.md`

**PRD Express Path:** Pass `--prd path/to/requirements.md` to skip discuss-phase entirely. Your PRD becomes locked decisions in CONTEXT.md. Useful when you already have clear acceptance criteria.

### Execution

**`/gsd:execute-phase <phase-number>`**

Execute all plans in a phase.

- Groups plans by wave (from frontmatter), executes waves sequentially
- Plans within each wave run in parallel via Task tool
- Verifies phase goal after all plans complete
- Updates REQUIREMENTS.md, ROADMAP.md, STATE.md

Usage: `/gsd:execute-phase 5`

### Smart Router

**`/gsd:do <description>`**

Route freeform text to the right GSD command automatically.

- Analyzes natural language input to find the best matching GSD command
- Acts as a dispatcher — never does the work itself
- Resolves ambiguity by asking you to pick between top matches
- Use when you know what you want but don't know which `/gsd:*` command to run

Usage: `/gsd:do fix the login button`
Usage: `/gsd:do refactor the auth system`
Usage: `/gsd:do I want to start a new milestone`

### Quick Mode

**`/gsd:quick [--full] [--discuss] [--research]`**

Execute small, ad-hoc tasks with GSD guarantees but skip optional agents.

Quick mode uses the same system with a shorter path:

- Spawns planner + executor (skips researcher, checker, verifier by default)
- Quick tasks live in `.planning/quick/`, separate from planned phases
- Updates STATE.md tracking (not ROADMAP.md)

Flags enable additional quality steps:

- `--discuss` — Lightweight discussion to surface gray areas before planning
- `--research` — Focused research agent investigates approaches before planning
- `--full` — Adds plan-checking (max 2 iterations) and post-execution verification

Flags are composable: `--discuss --research --full` gives the complete quality pipeline for a single task.

Usage: `/gsd:quick`
Usage: `/gsd:quick --research --full`
Result: Creates `.planning/quick/NNN-slug/PLAN.md`, `.planning/quick/NNN-slug/SUMMARY.md`

### Roadmap Management

**`/gsd:add-phase <description>`**

Add new phase to end of current milestone.

- Appends to ROADMAP.md
- Uses next sequential number
- Updates phase directory structure

Usage: `/gsd:add-phase "Add admin dashboard"`

**`/gsd:insert-phase <after> <description>`**

Insert urgent work as decimal phase between existing phases.

- Creates intermediate phase (e.g., 7.1 between 7 and 8)
- Useful for discovered work that must happen mid-milestone
- Maintains phase ordering

Usage: `/gsd:insert-phase 7 "Fix critical auth bug"`
Result: Creates Phase 7.1

**`/gsd:remove-phase <number>`**

Remove a future phase and renumber subsequent phases.

- Deletes phase directory and all references
- Renumbers all subsequent phases to close the gap
- Only works on future (unstarted) phases
- Git commit preserves historical record

Usage: `/gsd:remove-phase 17`
Result: Phase 17 deleted, phases 18-20 become 17-19
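
The renumbering step can be pictured with a small shell sketch (directory names assumed to follow the `NN-slug` pattern; the real logic lives in gsd-tools, so this is illustrative only):

```shell
# Hypothetical sketch: shift phase directories down by one after a removal.
# renumber_down 18 19 moves 18-* -> 17-* and 19-* -> 18-*.
renumber_down() {
  for n in $(seq "$1" "$2"); do
    local new=$((n - 1))
    for dir in .planning/phases/"$n"-*; do
      [ -d "$dir" ] || continue                  # glob may not match anything
      local slug=${dir#.planning/phases/${n}-}   # e.g. "admin-dashboard"
      mv "$dir" ".planning/phases/${new}-${slug}"
    done
  done
}
```

Walking the range in ascending order matters: 18 must vacate its slot before 19 moves into it.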

### Milestone Management

**`/gsd:new-milestone <name>`**

Start a new milestone through the unified flow.

- Deep questioning to understand what you're building next
- Optional domain research (spawns 4 parallel researcher agents)
- Requirements definition with scoping
- Roadmap creation with phase breakdown

Mirrors the `/gsd:new-project` flow for brownfield projects (existing PROJECT.md).

Usage: `/gsd:new-milestone "v2.0 Features"`

**`/gsd:complete-milestone <version>`**

Archive completed milestone and prepare for next version.

- Creates MILESTONES.md entry with stats
- Archives full details to the `milestones/` directory
- Creates git tag for the release
- Prepares workspace for next version

Usage: `/gsd:complete-milestone 1.0.0`

### Progress Tracking

**`/gsd:progress`**

Check project status and intelligently route to next action.

- Shows visual progress bar and completion percentage
- Summarizes recent work from SUMMARY files
- Displays current position and what's next
- Lists key decisions and open issues
- Offers to execute next plan or create it if missing
- Detects 100% milestone completion

Usage: `/gsd:progress`

### Session Management

**`/gsd:resume-work`**

Resume work from previous session with full context restoration.

- Reads STATE.md for project context
- Shows current position and recent progress
- Offers next actions based on project state

Usage: `/gsd:resume-work`

**`/gsd:pause-work`**

Create context handoff when pausing work mid-phase.

- Creates `.continue-here` file with current state
- Updates STATE.md session continuity section
- Captures in-progress work context

Usage: `/gsd:pause-work`

### Debugging

**`/gsd:debug [issue description]`**

Systematic debugging with persistent state across context resets.

- Gathers symptoms through adaptive questioning
- Creates `.planning/debug/[slug].md` to track investigation
- Investigates using scientific method (evidence → hypothesis → test)
- Survives `/clear` — run `/gsd:debug` with no args to resume
- Archives resolved issues to `.planning/debug/resolved/`

Usage: `/gsd:debug "login button doesn't work"`
Usage: `/gsd:debug` (resume active session)

### Quick Notes

**`/gsd:note <text>`**

Zero-friction idea capture — one command, instant save, no questions.

- Saves timestamped note to `.planning/notes/` (or `C:/Users/yaoji/.claude/notes/` globally)
- Three subcommands: append (default), list, promote
- Promote converts a note into a structured todo
- Works without a project (falls back to global scope)

Usage: `/gsd:note refactor the hook system`
Usage: `/gsd:note list`
Usage: `/gsd:note promote 3`
Usage: `/gsd:note --global cross-project idea`

### Todo Management

**`/gsd:add-todo [description]`**

Capture idea or task as todo from current conversation.

- Extracts context from conversation (or uses provided description)
- Creates structured todo file in `.planning/todos/pending/`
- Infers area from file paths for grouping
- Checks for duplicates before creating
- Updates STATE.md todo count

Usage: `/gsd:add-todo` (infers from conversation)
Usage: `/gsd:add-todo Add auth token refresh`

**`/gsd:check-todos [area]`**

List pending todos and select one to work on.

- Lists all pending todos with title, area, age
- Optional area filter (e.g., `/gsd:check-todos api`)
- Loads full context for selected todo
- Routes to appropriate action (work now, add to phase, brainstorm)
- Moves todo to done/ when work begins

Usage: `/gsd:check-todos`
Usage: `/gsd:check-todos api`

### User Acceptance Testing

**`/gsd:verify-work [phase]`**

Validate built features through conversational UAT.

- Extracts testable deliverables from SUMMARY.md files
- Presents tests one at a time (yes/no responses)
- Automatically diagnoses failures and creates fix plans
- Ready for re-execution if issues found

Usage: `/gsd:verify-work 3`

### Ship Work

**`/gsd:ship [phase]`**

Create a PR from completed phase work with an auto-generated body.

- Pushes branch to remote
- Creates PR with summary from SUMMARY.md, VERIFICATION.md, REQUIREMENTS.md
- Optionally requests code review
- Updates STATE.md with shipping status

Prerequisites: Phase verified, `gh` CLI installed and authenticated.

Usage: `/gsd:ship 4` or `/gsd:ship 4 --draft`

### Milestone Auditing

**`/gsd:audit-milestone [version]`**

Audit milestone completion against original intent.

- Reads all phase VERIFICATION.md files
- Checks requirements coverage
- Spawns integration checker for cross-phase wiring
- Creates MILESTONE-AUDIT.md with gaps and tech debt

Usage: `/gsd:audit-milestone`

**`/gsd:plan-milestone-gaps`**

Create phases to close gaps identified by audit.

- Reads MILESTONE-AUDIT.md and groups gaps into phases
- Prioritizes by requirement priority (must/should/nice)
- Adds gap closure phases to ROADMAP.md
- Ready for `/gsd:plan-phase` on new phases

Usage: `/gsd:plan-milestone-gaps`

### Configuration

**`/gsd:settings`**

Configure workflow toggles and model profile interactively.

- Toggle researcher, plan checker, verifier agents
- Select model profile (quality/balanced/budget/inherit)
- Updates `.planning/config.json`

Usage: `/gsd:settings`

**`/gsd:set-profile <profile>`**

Quick switch model profile for GSD agents.

- `quality` — Opus everywhere except verification
- `balanced` — Opus for planning, Sonnet for execution (default)
- `budget` — Sonnet for writing, Haiku for research/verification
- `inherit` — Use current session model for all agents (OpenCode `/model`)

Usage: `/gsd:set-profile budget`

### Utility Commands

**`/gsd:cleanup`**

Archive accumulated phase directories from completed milestones.

- Identifies phases from completed milestones still in `.planning/phases/`
- Shows dry-run summary before moving anything
- Moves phase dirs to `.planning/milestones/v{X.Y}-phases/`
- Use after multiple milestones to reduce `.planning/phases/` clutter

Usage: `/gsd:cleanup`
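
In effect, the move looks like this minimal sketch (assuming a completed v1.0 milestone; the real command also does milestone detection and shows a dry-run first):

```shell
# demo setup: pretend phases 01 and 02 belong to the completed v1.0 milestone
mkdir -p .planning/phases/01-foundation .planning/phases/02-core-features

# what /gsd:cleanup effectively does for v1.0: archive them out of phases/
mkdir -p .planning/milestones/v1.0-phases
mv .planning/phases/01-foundation .planning/phases/02-core-features \
   .planning/milestones/v1.0-phases/
```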

**`/gsd:help`**

Show this command reference.

**`/gsd:update`**

Update GSD to latest version with changelog preview.

- Shows installed vs latest version comparison
- Displays changelog entries for versions you've missed
- Highlights breaking changes
- Confirms before running install
- Better than raw `npx get-shit-done-cc`

Usage: `/gsd:update`

**`/gsd:join-discord`**

Join the GSD Discord community.

- Get help, share what you're building, stay updated
- Connect with other GSD users

Usage: `/gsd:join-discord`

## Files & Structure

```
.planning/
├── PROJECT.md                 # Project vision
├── ROADMAP.md                 # Current phase breakdown
├── STATE.md                   # Project memory & context
├── RETROSPECTIVE.md           # Living retrospective (updated per milestone)
├── config.json                # Workflow mode & gates
├── todos/                     # Captured ideas and tasks
│   ├── pending/               # Todos waiting to be worked on
│   └── done/                  # Completed todos
├── debug/                     # Active debug sessions
│   └── resolved/              # Archived resolved issues
├── milestones/
│   ├── v1.0-ROADMAP.md        # Archived roadmap snapshot
│   ├── v1.0-REQUIREMENTS.md   # Archived requirements
│   └── v1.0-phases/           # Archived phase dirs (via /gsd:cleanup or --archive-phases)
│       ├── 01-foundation/
│       └── 02-core-features/
├── codebase/                  # Codebase map (brownfield projects)
│   ├── STACK.md               # Languages, frameworks, dependencies
│   ├── ARCHITECTURE.md        # Patterns, layers, data flow
│   ├── STRUCTURE.md           # Directory layout, key files
│   ├── CONVENTIONS.md         # Coding standards, naming
│   ├── TESTING.md             # Test setup, patterns
│   ├── INTEGRATIONS.md        # External services, APIs
│   └── CONCERNS.md            # Tech debt, known issues
└── phases/
    ├── 01-foundation/
    │   ├── 01-01-PLAN.md
    │   └── 01-01-SUMMARY.md
    └── 02-core-features/
        ├── 02-01-PLAN.md
        └── 02-01-SUMMARY.md
```
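
Because the naming is regular, per-phase artifacts can be globbed directly; a sketch using paths assumed from the tree above (not a gsd-tools command):

```shell
# demo setup mirroring the tree above
mkdir -p .planning/phases/01-foundation
touch .planning/phases/01-foundation/01-01-PLAN.md \
      .planning/phases/01-foundation/01-01-SUMMARY.md

# list every plan file for phase 01
ls .planning/phases/01-*/[0-9][0-9]-[0-9][0-9]-PLAN.md
```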

## Workflow Modes

Set during `/gsd:new-project`:

**Interactive Mode**

- Confirms each major decision
- Pauses at checkpoints for approval
- More guidance throughout

**YOLO Mode**

- Auto-approves most decisions
- Executes plans without confirmation
- Only stops for critical checkpoints

Change anytime by editing `.planning/config.json`.

## Planning Configuration

Configure how planning artifacts are managed in `.planning/config.json`:

**`planning.commit_docs`** (default: `true`)

- `true`: Planning artifacts committed to git (standard workflow)
- `false`: Planning artifacts kept local-only, not committed

When `commit_docs: false`:

- Add `.planning/` to your `.gitignore`
- Useful for OSS contributions, client projects, or keeping planning private
- All planning files still work normally, just not tracked in git

**`planning.search_gitignored`** (default: `false`)

- `true`: Add `--no-ignore` to broad ripgrep searches
- Only needed when `.planning/` is gitignored and you want project-wide searches to include it
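
For example, the flag's effect can be demonstrated with an `.ignore` file standing in for a gitignore rule (the demo uses a visible `planning/` directory to isolate the ignore behavior; a dotted directory like `.planning/` is additionally skipped as hidden by ripgrep, so real searches may also need `--hidden`):

```shell
mkdir -p proj/planning && cd proj
printf 'REQ-001: password auth\n' > planning/REQUIREMENTS.md
printf 'planning/\n' > .ignore   # stand-in for a .gitignore rule

rg "REQ-" . || true              # no hits: planning/ is ignored
rg --no-ignore "REQ-" .          # hit: ignore rules bypassed
```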

Example config:

```json
{
  "planning": {
    "commit_docs": false,
    "search_gitignored": true
  }
}
```
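
Scripts and hooks can read these settings with `jq` (a sketch, assuming `jq` is available; defaults mirrored from above):

```shell
# demo setup: the example config from above
mkdir -p .planning
printf '{ "planning": { "commit_docs": false, "search_gitignored": true } }\n' \
  > .planning/config.json

# read commit_docs with its documented default of true; note that jq's `//`
# operator would coerce a stored `false` back to the default, so test key
# presence instead
jq -r '.planning | if has("commit_docs") then .commit_docs else true end' \
  .planning/config.json
```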

## Common Workflows

**Starting a new project:**

```
/gsd:new-project        # Unified flow: questioning → research → requirements → roadmap
/clear
/gsd:plan-phase 1       # Create plans for first phase
/clear
/gsd:execute-phase 1    # Execute all plans in phase
```

**Resuming work after a break:**

```
/gsd:progress           # See where you left off and continue
```

**Adding urgent mid-milestone work:**

```
/gsd:insert-phase 5 "Critical security fix"
/gsd:plan-phase 5.1
/gsd:execute-phase 5.1
```

**Completing a milestone:**

```
/gsd:complete-milestone 1.0.0
/clear
/gsd:new-milestone      # Start next milestone (questioning → research → requirements → roadmap)
```

**Capturing ideas during work:**

```
/gsd:add-todo                     # Capture from conversation context
/gsd:add-todo Fix modal z-index   # Capture with explicit description
/gsd:check-todos                  # Review and work on todos
/gsd:check-todos api              # Filter by area
```

**Debugging an issue:**

```
/gsd:debug "form submission fails silently"   # Start debug session
# ... investigation happens, context fills up ...
/clear
/gsd:debug                                    # Resume from where you left off
```

## Getting Help

- Read `.planning/PROJECT.md` for project vision
- Read `.planning/STATE.md` for current context
- Check `.planning/ROADMAP.md` for phase status
- Run `/gsd:progress` to check where you're up to

</reference>

---

get-shit-done/workflows/insert-phase.md (new file, 130 lines)

<purpose>
Insert a decimal phase between existing integer phases for urgent work discovered mid-milestone. Decimal numbering (72.1, 72.2, etc.) preserves the logical sequence of planned phases while accommodating urgent insertions without renumbering the entire roadmap.
</purpose>

<required_reading>
Read all files referenced by the invoking prompt's execution_context before starting.
</required_reading>

<process>

<step name="parse_arguments">
Parse the command arguments:
- First argument: integer phase number to insert after
- Remaining arguments: phase description

Validate that the first argument is an integer.

Example: `/gsd:insert-phase 72 Fix critical auth bug`
-> after = 72
-> description = "Fix critical auth bug"

If arguments are missing:

```
ERROR: Both phase number and description required
Usage: /gsd:insert-phase <after> <description>
Example: /gsd:insert-phase 72 Fix critical auth bug
```

Exit.
</step>

<step name="init_context">
Load phase operation context:

```bash
INIT=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" init phase-op "${after_phase}")
if [[ "$INIT" == @file:* ]]; then INIT=$(cat "${INIT#@file:}"); fi
```
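
The `@file:` guard handles the case where gsd-tools spills a large payload to a temp file instead of printing it inline. The mechanics can be exercised with a stand-in payload, no gsd-tools required:

```shell
# simulate an @file: response and inline it, exactly as the guard does
tmp=$(mktemp)
printf '{"roadmap_exists":true}' > "$tmp"
INIT="@file:$tmp"
if [[ "$INIT" == @file:* ]]; then INIT=$(cat "${INIT#@file:}"); fi
echo "$INIT"   # the JSON payload, now inlined
```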

Check `roadmap_exists` from init JSON. If false:

```
ERROR: No roadmap found (.planning/ROADMAP.md)
```

Exit.
</step>

<step name="insert_phase">
**Delegate the phase insertion to gsd-tools:**

```bash
RESULT=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" phase insert "${after_phase}" "${description}")
```

The CLI handles:
- Verifying the target phase exists in ROADMAP.md
- Calculating the next decimal phase number (checking existing decimals on disk)
- Generating a slug from the description
- Creating the phase directory (`.planning/phases/{N.M}-{slug}/`)
- Inserting the phase entry into ROADMAP.md after the target phase with an (INSERTED) marker

Extract from the result: `phase_number`, `after_phase`, `name`, `slug`, `directory`.
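
The slug and decimal-number calculations can be sketched as follows (helper names are assumed for illustration; the actual logic lives inside `gsd-tools.cjs`):

```javascript
// Slugify a description: "Fix critical auth bug" -> "fix-critical-auth-bug"
function generateSlug(text) {
  return text
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')  // collapse runs of non-alphanumerics to one hyphen
    .replace(/^-+|-+$/g, '');     // trim leading/trailing hyphens
}

// Next free decimal under `after`, given decimals already on disk
function nextDecimalPhase(after, existing) {
  const minors = existing
    .map(String)
    .filter(d => d.startsWith(`${after}.`))
    .map(d => parseInt(d.split('.')[1], 10));
  const max = minors.length ? Math.max(...minors) : 0;
  return `${after}.${max + 1}`;
}

console.log(generateSlug('Fix critical auth bug'));  // fix-critical-auth-bug
console.log(nextDecimalPhase(7, ['7.1', '7.2']));    // 7.3
console.log(nextDecimalPhase(72, []));               // 72.1
```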
</step>

<step name="update_project_state">
Update STATE.md to reflect the inserted phase:

1. Read `.planning/STATE.md`
2. Under "## Accumulated Context" → "### Roadmap Evolution", add the entry:

```
- Phase {decimal_phase} inserted after Phase {after_phase}: {description} (URGENT)
```

If the "Roadmap Evolution" section doesn't exist, create it.
</step>

<step name="completion">
Present completion summary:

```
Phase {decimal_phase} inserted after Phase {after_phase}:
- Description: {description}
- Directory: .planning/phases/{decimal_phase}-{slug}/
- Status: Not planned yet
- Marker: (INSERTED) - indicates urgent work

Roadmap updated: .planning/ROADMAP.md
Project state updated: .planning/STATE.md

---

## Next Up

**Phase {decimal_phase}: {description}** -- urgent insertion

`/gsd:plan-phase {decimal_phase}`

<sub>`/clear` first -> fresh context window</sub>

---

**Also available:**
- Review insertion impact: Check if Phase {next_integer} dependencies still make sense
- Review roadmap

---
```
</step>

</process>

<anti_patterns>
- Don't use this for planned work at end of milestone (use /gsd:add-phase)
- Don't insert before Phase 1 (decimal 0.1 makes no sense)
- Don't renumber existing phases
- Don't modify the target phase content
- Don't create plans yet (that's /gsd:plan-phase)
- Don't commit changes (user decides when to commit)
</anti_patterns>

<success_criteria>
Phase insertion is complete when:

- [ ] `gsd-tools phase insert` executed successfully
- [ ] Phase directory created
- [ ] Roadmap updated with new phase entry (includes "(INSERTED)" marker)
- [ ] STATE.md updated with roadmap evolution note
- [ ] User informed of next steps and dependency implications
</success_criteria>

---

get-shit-done/workflows/list-phase-assumptions.md (new file, 178 lines)

<purpose>
Surface Claude's assumptions about a phase before planning, enabling users to correct misconceptions early.

Key difference from discuss-phase: this is ANALYSIS of what Claude thinks, not INTAKE of what the user knows. No file output — purely conversational, to prompt discussion.
</purpose>

<process>

<step name="validate_phase" priority="first">
Phase number: $ARGUMENTS (required)

**If argument missing:**

```
Error: Phase number required.

Usage: /gsd:list-phase-assumptions [phase-number]
Example: /gsd:list-phase-assumptions 3
```

Exit workflow.

**If argument provided:**
Validate the phase exists in the roadmap:

```bash
grep -i "Phase ${PHASE}" .planning/ROADMAP.md
```

**If phase not found:**

```
Error: Phase ${PHASE} not found in roadmap.

Available phases:
[list phases from roadmap]
```

Exit workflow.

**If phase found:**
Parse phase details from the roadmap:

- Phase number
- Phase name
- Phase description/goal
- Any scope details mentioned

Continue to analyze_phase.
</step>

<step name="analyze_phase">
Based on the roadmap description and project context, identify assumptions across five areas:

**1. Technical Approach:**
What libraries, frameworks, patterns, or tools would Claude use?
- "I'd use X library because..."
- "I'd follow Y pattern because..."
- "I'd structure this as Z because..."

**2. Implementation Order:**
What would Claude build first, second, third?
- "I'd start with X because it's foundational"
- "Then Y because it depends on X"
- "Finally Z because..."

**3. Scope Boundaries:**
What's included vs excluded in Claude's interpretation?
- "This phase includes: A, B, C"
- "This phase does NOT include: D, E, F"
- "Boundary ambiguities: G could go either way"

**4. Risk Areas:**
Where does Claude expect complexity or challenges?
- "The tricky part is X because..."
- "Potential issues: Y, Z"
- "I'd watch out for..."

**5. Dependencies:**
What does Claude assume exists or needs to be in place?
- "This assumes X from previous phases"
- "External dependencies: Y, Z"
- "This will be consumed by..."

Be honest about uncertainty. Mark assumptions with confidence levels:
- "Fairly confident: ..." (clear from roadmap)
- "Assuming: ..." (reasonable inference)
- "Unclear: ..." (could go multiple ways)
</step>

<step name="present_assumptions">
Present assumptions in a clear, scannable format:

```
## My Assumptions for Phase ${PHASE}: ${PHASE_NAME}

### Technical Approach
[List assumptions about how to implement]

### Implementation Order
[List assumptions about sequencing]

### Scope Boundaries
**In scope:** [what's included]
**Out of scope:** [what's excluded]
**Ambiguous:** [what could go either way]

### Risk Areas
[List anticipated challenges]

### Dependencies
**From prior phases:** [what's needed]
**External:** [third-party needs]
**Feeds into:** [what future phases need from this]

---

**What do you think?**

Are these assumptions accurate? Let me know:
- What I got right
- What I got wrong
- What I'm missing
```

Wait for user response.
</step>

<step name="gather_feedback">
**If user provides corrections:**

Acknowledge the corrections:

```
Key corrections:
- [correction 1]
- [correction 2]

This changes my understanding significantly. [Summarize new understanding]
```

**If user confirms assumptions:**

```
Assumptions validated.
```

Continue to offer_next.
</step>

<step name="offer_next">
Present next steps:

```
What's next?
1. Discuss context (/gsd:discuss-phase ${PHASE}) - Let me ask you questions to build comprehensive context
2. Plan this phase (/gsd:plan-phase ${PHASE}) - Create detailed execution plans
3. Re-examine assumptions - I'll analyze again with your corrections
4. Done for now
```

Wait for user selection.

If "Discuss context": Note that CONTEXT.md will incorporate any corrections discussed here
If "Plan this phase": Proceed knowing assumptions are understood
If "Re-examine": Return to analyze_phase with updated understanding
</step>

</process>

<success_criteria>
- Phase number validated against roadmap
- Assumptions surfaced across five areas: technical approach, implementation order, scope, risks, dependencies
- Confidence levels marked where appropriate
- "What do you think?" prompt presented
- User feedback acknowledged
- Clear next steps offered
</success_criteria>

---

get-shit-done/workflows/map-codebase.md (new file, 360 lines)

<purpose>
Orchestrate parallel codebase mapper agents to analyze the codebase and produce structured documents in .planning/codebase/

Each agent has fresh context, explores a specific focus area, and **writes documents directly**. The orchestrator only receives confirmation + line counts, then writes a summary.

Output: .planning/codebase/ folder with 7 structured documents about the codebase state.
</purpose>

<philosophy>
**Why dedicated mapper agents:**
- Fresh context per domain (no token contamination)
- Agents write documents directly (no context transfer back to orchestrator)
- Orchestrator only summarizes what was created (minimal context usage)
- Faster execution (agents run simultaneously)

**Document quality over length:**
Include enough detail to be useful as reference. Prioritize practical examples (especially code patterns) over arbitrary brevity.

**Always include file paths:**
Documents are reference material for Claude when planning/executing. Always include actual file paths formatted with backticks: `src/services/user.ts`.
</philosophy>

<process>

<step name="init_context" priority="first">
Load codebase mapping context:

```bash
INIT=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" init map-codebase)
if [[ "$INIT" == @file:* ]]; then INIT=$(cat "${INIT#@file:}"); fi
```

Extract from init JSON: `mapper_model`, `commit_docs`, `codebase_dir`, `existing_maps`, `has_maps`, `codebase_dir_exists`.
</step>

<step name="check_existing">
Check whether .planning/codebase/ already exists, using `codebase_dir_exists` from the init context.

If `codebase_dir_exists` is true:

```bash
ls -la .planning/codebase/
```

**If it exists:**

```
.planning/codebase/ already exists with these documents:
[List files found]

What's next?
1. Refresh - Delete existing and remap codebase
2. Update - Keep existing, only update specific documents
3. Skip - Use existing codebase map as-is
```

Wait for user response.

If "Refresh": Delete .planning/codebase/, continue to create_structure
If "Update": Ask which documents to update, continue to spawn_agents (filtered)
If "Skip": Exit workflow

**If it doesn't exist:**
Continue to create_structure.
</step>

<step name="create_structure">
Create the .planning/codebase/ directory:

```bash
mkdir -p .planning/codebase
```

**Expected output files:**
- STACK.md (from tech mapper)
- INTEGRATIONS.md (from tech mapper)
- ARCHITECTURE.md (from arch mapper)
- STRUCTURE.md (from arch mapper)
- CONVENTIONS.md (from quality mapper)
- TESTING.md (from quality mapper)
- CONCERNS.md (from concerns mapper)

Continue to spawn_agents.
</step>

<step name="detect_runtime_capabilities">
Before spawning agents, detect whether the current runtime supports the `Task` tool for subagent delegation.

**Runtimes with Task tool:** Claude Code, Cursor (native subagent support)
**Runtimes WITHOUT Task tool:** Antigravity, Gemini CLI, OpenCode, Codex, and others

**How to detect:** Check if you have access to a `Task` tool. If you do NOT have a `Task` tool (or only have tools like `browser_subagent`, which is for web browsing, NOT code analysis):

→ **Skip `spawn_agents` and `collect_confirmations`** — go directly to `sequential_mapping` instead.

**CRITICAL:** Never use `browser_subagent` or `Explore` as a substitute for `Task`. The `browser_subagent` tool is exclusively for web page interaction and will fail for codebase analysis. If `Task` is unavailable, perform the mapping sequentially in-context.
</step>

<step name="spawn_agents" condition="Task tool is available">
Spawn 4 parallel gsd-codebase-mapper agents.

Use Task tool with `subagent_type="gsd-codebase-mapper"`, `model="{mapper_model}"`, and `run_in_background=true` for parallel execution.

**CRITICAL:** Use the dedicated `gsd-codebase-mapper` agent, NOT `Explore` or `browser_subagent`. The mapper agent writes documents directly.

**Agent 1: Tech Focus**

```
Task(
subagent_type="gsd-codebase-mapper",
model="{mapper_model}",
run_in_background=true,
description="Map codebase tech stack",
prompt="Focus: tech

Analyze this codebase for technology stack and external integrations.

Write these documents to .planning/codebase/:
- STACK.md - Languages, runtime, frameworks, dependencies, configuration
- INTEGRATIONS.md - External APIs, databases, auth providers, webhooks

Explore thoroughly. Write documents directly using templates. Return confirmation only."
)
```

**Agent 2: Architecture Focus**

```
Task(
subagent_type="gsd-codebase-mapper",
model="{mapper_model}",
run_in_background=true,
description="Map codebase architecture",
prompt="Focus: arch

Analyze this codebase architecture and directory structure.

Write these documents to .planning/codebase/:
- ARCHITECTURE.md - Pattern, layers, data flow, abstractions, entry points
- STRUCTURE.md - Directory layout, key locations, naming conventions

Explore thoroughly. Write documents directly using templates. Return confirmation only."
)
```

**Agent 3: Quality Focus**

```
Task(
subagent_type="gsd-codebase-mapper",
model="{mapper_model}",
run_in_background=true,
description="Map codebase conventions",
prompt="Focus: quality

Analyze this codebase for coding conventions and testing patterns.

Write these documents to .planning/codebase/:
- CONVENTIONS.md - Code style, naming, patterns, error handling
- TESTING.md - Framework, structure, mocking, coverage

Explore thoroughly. Write documents directly using templates. Return confirmation only."
)
```

**Agent 4: Concerns Focus**

```
Task(
subagent_type="gsd-codebase-mapper",
model="{mapper_model}",
run_in_background=true,
description="Map codebase concerns",
prompt="Focus: concerns

Analyze this codebase for technical debt, known issues, and areas of concern.

Write this document to .planning/codebase/:
- CONCERNS.md - Tech debt, bugs, security, performance, fragile areas

Explore thoroughly. Write document directly using template. Return confirmation only."
)
```

Continue to collect_confirmations.
</step>

<step name="collect_confirmations">
Wait for all 4 agents to complete.

Read each agent's output file to collect confirmations.

**Expected confirmation format from each agent:**
```
## Mapping Complete

**Focus:** {focus}
**Documents written:**
- `.planning/codebase/{DOC1}.md` ({N} lines)
- `.planning/codebase/{DOC2}.md` ({N} lines)

Ready for orchestrator summary.
```

**What you receive:** Just file paths and line counts. NOT document contents.

If any agent failed, note the failure and continue with the successful documents.

Continue to verify_output.
</step>

<step name="sequential_mapping" condition="Task tool is NOT available (e.g. Antigravity, Gemini CLI, Codex)">
When the `Task` tool is unavailable, perform codebase mapping sequentially in the current context. This replaces `spawn_agents` and `collect_confirmations`.

**IMPORTANT:** Do NOT use `browser_subagent`, `Explore`, or any browser-based tool. Use only file system tools (Read, Bash, Write, Grep, Glob, list_dir, view_file, grep_search, or equivalent tools available in your runtime).

Perform all 4 mapping passes sequentially:

**Pass 1: Tech Focus**
- Explore package.json/Cargo.toml/go.mod/requirements.txt, config files, dependency trees
- Write `.planning/codebase/STACK.md` — Languages, runtime, frameworks, dependencies, configuration
- Write `.planning/codebase/INTEGRATIONS.md` — External APIs, databases, auth providers, webhooks

**Pass 2: Architecture Focus**
- Explore directory structure, entry points, module boundaries, data flow
- Write `.planning/codebase/ARCHITECTURE.md` — Pattern, layers, data flow, abstractions, entry points
- Write `.planning/codebase/STRUCTURE.md` — Directory layout, key locations, naming conventions

**Pass 3: Quality Focus**
- Explore code style, error handling patterns, test files, CI config
- Write `.planning/codebase/CONVENTIONS.md` — Code style, naming, patterns, error handling
- Write `.planning/codebase/TESTING.md` — Framework, structure, mocking, coverage

**Pass 4: Concerns Focus**
- Explore TODOs, known issues, fragile areas, security patterns
- Write `.planning/codebase/CONCERNS.md` — Tech debt, bugs, security, performance, fragile areas

Use the same document templates as the `gsd-codebase-mapper` agent. Include actual file paths formatted with backticks.

Continue to verify_output.
</step>

<step name="verify_output">
Verify all documents created successfully:

```bash
ls -la .planning/codebase/
wc -l .planning/codebase/*.md
```

**Verification checklist:**
- All 7 documents exist
- No empty documents (each should have >20 lines)

If any documents are missing or empty, note which agents may have failed.
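
The checklist above can be sketched as a small shell check — a hedged illustration, assuming the seven document names listed in this workflow:

```bash
# Minimal sketch of the verification checklist (document names assumed from this workflow).
DOCS="STACK.md INTEGRATIONS.md ARCHITECTURE.md STRUCTURE.md CONVENTIONS.md TESTING.md CONCERNS.md"
MISSING=""
for doc in $DOCS; do
  path=".planning/codebase/$doc"
  # Flag documents that are absent or suspiciously short (<= 20 lines)
  if [ ! -f "$path" ] || [ "$(wc -l < "$path")" -le 20 ]; then
    MISSING="$MISSING $doc"
  fi
done
[ -z "$MISSING" ] && echo "All 7 documents verified" || echo "Incomplete:$MISSING"
```

A failed check tells you which focus agent (or sequential pass) to re-run rather than redoing the whole mapping.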

Continue to scan_for_secrets.
</step>

<step name="scan_for_secrets">
**CRITICAL SECURITY CHECK:** Scan output files for accidentally leaked secrets before committing.

Run secret pattern detection:

```bash
# Check for common API key patterns in generated docs
grep -E '(sk-[a-zA-Z0-9]{20,}|sk_live_[a-zA-Z0-9]+|sk_test_[a-zA-Z0-9]+|ghp_[a-zA-Z0-9]{36}|gho_[a-zA-Z0-9]{36}|glpat-[a-zA-Z0-9_-]+|AKIA[A-Z0-9]{16}|xox[baprs]-[a-zA-Z0-9-]+|-----BEGIN.*PRIVATE KEY|eyJ[a-zA-Z0-9_-]+\.eyJ[a-zA-Z0-9_-]+\.)' .planning/codebase/*.md 2>/dev/null && SECRETS_FOUND=true || SECRETS_FOUND=false
```

**If SECRETS_FOUND=true:**

```
⚠️ SECURITY ALERT: Potential secrets detected in codebase documents!

Found patterns that look like API keys or tokens in:
[show grep output]

This would expose credentials if committed.

**Action required:**
1. Review the flagged content above
2. If these are real secrets, they must be removed before committing
3. Consider adding sensitive files to Claude Code "Deny" permissions

Pausing before commit. Reply "safe to proceed" if the flagged content is not actually sensitive, or edit the files first.
```

Wait for user confirmation before continuing to commit_codebase_map.

**If SECRETS_FOUND=false:**

Continue to commit_codebase_map.
</step>

<step name="commit_codebase_map">
Commit the codebase map:

```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" commit "docs: map existing codebase" --files .planning/codebase/*.md
```

Continue to offer_next.
</step>

<step name="offer_next">
Present completion summary and next steps.

**Get line counts:**
```bash
wc -l .planning/codebase/*.md
```

**Output format:**

```
Codebase mapping complete.

Created .planning/codebase/:
- STACK.md ([N] lines) - Technologies and dependencies
- ARCHITECTURE.md ([N] lines) - System design and patterns
- STRUCTURE.md ([N] lines) - Directory layout and organization
- CONVENTIONS.md ([N] lines) - Code style and patterns
- TESTING.md ([N] lines) - Test structure and practices
- INTEGRATIONS.md ([N] lines) - External services and APIs
- CONCERNS.md ([N] lines) - Technical debt and issues


---

## ▶ Next Up

**Initialize project** — use codebase context for planning

`/gsd:new-project`

<sub>`/clear` first → fresh context window</sub>

---

**Also available:**
- Re-run mapping: `/gsd:map-codebase`
- Review specific file: `cat .planning/codebase/STACK.md`
- Edit any document before proceeding

---
```

End workflow.
</step>

</process>

<success_criteria>
- .planning/codebase/ directory created
- If Task tool available: 4 parallel gsd-codebase-mapper agents spawned with run_in_background=true
- If Task tool NOT available: 4 sequential mapping passes performed inline (never using browser_subagent)
- All 7 codebase documents exist
- No empty documents (each should have >20 lines)
- Clear completion summary with line counts
- User offered clear next steps in GSD style
</success_criteria>
386
get-shit-done/workflows/new-milestone.md
Normal file
<purpose>

Start a new milestone cycle for an existing project. Loads project context, gathers milestone goals (from MILESTONE-CONTEXT.md or conversation), updates PROJECT.md and STATE.md, optionally runs parallel research, defines scoped requirements with REQ-IDs, spawns the roadmapper to create a phased execution plan, and commits all artifacts. Brownfield equivalent of new-project.

</purpose>

<required_reading>

Read all files referenced by the invoking prompt's execution_context before starting.

</required_reading>

<process>

## 1. Load Context

- Read PROJECT.md (existing project, validated requirements, decisions)
- Read MILESTONES.md (what shipped previously)
- Read STATE.md (pending todos, blockers)
- Check for MILESTONE-CONTEXT.md (from /gsd:discuss-milestone)

## 2. Gather Milestone Goals

**If MILESTONE-CONTEXT.md exists:**
- Use features and scope from discuss-milestone
- Present summary for confirmation

**If no context file:**
- Present what shipped in last milestone
- Ask inline (freeform, NOT AskUserQuestion): "What do you want to build next?"
- Wait for their response, then use AskUserQuestion to probe specifics
- If user selects "Other" at any point to provide freeform input, ask follow-up as plain text — not another AskUserQuestion

## 3. Determine Milestone Version

- Parse last version from MILESTONES.md
- Suggest next version (v1.0 → v1.1, or v2.0 for major)
- Confirm with user
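
The version suggestion above can be sketched as a small helper — a hedged illustration; the `vX.Y` text appearing in MILESTONES.md headings is an assumption:

```bash
# Hedged sketch: suggest the next minor version from the last version
# mentioned in MILESTONES.md. Falls back to v0.1 when none is found.
next_version() {
  file="${1:-.planning/MILESTONES.md}"
  last=$(grep -oE 'v[0-9]+\.[0-9]+' "$file" 2>/dev/null | tail -1)
  last=${last:-v0.0}
  major=${last#v}; major=${major%%.*}
  minor=${last##*.}
  echo "v${major}.$((minor + 1))"
}
```

After a `## v1.1` entry this suggests `v1.2`; bumping to `v2.0` for a major milestone remains a user decision at the confirm step.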

## 4. Update PROJECT.md

Add/update:

```markdown
## Current Milestone: v[X.Y] [Name]

**Goal:** [One sentence describing milestone focus]

**Target features:**
- [Feature 1]
- [Feature 2]
- [Feature 3]
```

Update Active requirements section and "Last updated" footer.

## 5. Update STATE.md

```markdown
## Current Position

Phase: Not started (defining requirements)
Plan: —
Status: Defining requirements
Last activity: [today] — Milestone v[X.Y] started
```

Keep Accumulated Context section from previous milestone.

## 6. Cleanup and Commit

Delete MILESTONE-CONTEXT.md if it exists (consumed).

```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" commit "docs: start milestone v[X.Y] [Name]" --files .planning/PROJECT.md .planning/STATE.md
```

## 7. Load Context and Resolve Models

```bash
INIT=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" init new-milestone)
if [[ "$INIT" == @file:* ]]; then INIT=$(cat "${INIT#@file:}"); fi
```

Extract from init JSON: `researcher_model`, `synthesizer_model`, `roadmapper_model`, `commit_docs`, `research_enabled`, `current_milestone`, `project_exists`, `roadmap_exists`.

## 8. Research Decision

Check `research_enabled` from init JSON (loaded from config).

**If `research_enabled` is `true`:**

AskUserQuestion: "Research the domain ecosystem for new features before defining requirements?"
- "Research first (Recommended)" — Discover patterns, features, architecture for NEW capabilities
- "Skip research for this milestone" — Go straight to requirements (does not change your default)

**If `research_enabled` is `false`:**

AskUserQuestion: "Research the domain ecosystem for new features before defining requirements?"
- "Skip research (current default)" — Go straight to requirements
- "Research first" — Discover patterns, features, architecture for NEW capabilities

**IMPORTANT:** Do NOT persist this choice to config.json. The `workflow.research` setting is a persistent user preference that controls plan-phase behavior across the project. Changing it here would silently alter future `/gsd:plan-phase` behavior. To change the default, use `/gsd:settings`.

**If user chose "Research first":**

```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
GSD ► RESEARCHING
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

◆ Spawning 4 researchers in parallel...
→ Stack, Features, Architecture, Pitfalls
```

```bash
mkdir -p .planning/research
```

Spawn 4 parallel gsd-project-researcher agents. Each uses this template with dimension-specific fields:

**Common structure for all 4 researchers:**
```
Task(prompt="
<research_type>Project Research — {DIMENSION} for [new features].</research_type>

<milestone_context>
SUBSEQUENT MILESTONE — Adding [target features] to existing app.
{EXISTING_CONTEXT}
Focus ONLY on what's needed for the NEW features.
</milestone_context>

<question>{QUESTION}</question>

<files_to_read>
- .planning/PROJECT.md (Project context)
</files_to_read>

<downstream_consumer>{CONSUMER}</downstream_consumer>

<quality_gate>{GATES}</quality_gate>

<output>
Write to: .planning/research/{FILE}
Use template: C:/Users/yaoji/.claude/get-shit-done/templates/research-project/{FILE}
</output>
", subagent_type="gsd-project-researcher", model="{researcher_model}", description="{DIMENSION} research")
```

**Dimension-specific fields:**

| Field | Stack | Features | Architecture | Pitfalls |
|-------|-------|----------|--------------|----------|
| EXISTING_CONTEXT | Existing validated capabilities (DO NOT re-research): [from PROJECT.md] | Existing features (already built): [from PROJECT.md] | Existing architecture: [from PROJECT.md or codebase map] | Focus on common mistakes when ADDING these features to existing system |
| QUESTION | What stack additions/changes are needed for [new features]? | How do [target features] typically work? Expected behavior? | How do [target features] integrate with existing architecture? | Common mistakes when adding [target features] to [domain]? |
| CONSUMER | Specific libraries with versions for NEW capabilities, integration points, what NOT to add | Table stakes vs differentiators vs anti-features, complexity noted, dependencies on existing | Integration points, new components, data flow changes, suggested build order | Warning signs, prevention strategy, which phase should address it |
| GATES | Versions current (verify with Context7), rationale explains WHY, integration considered | Categories clear, complexity noted, dependencies identified | Integration points identified, new vs modified explicit, build order considers deps | Pitfalls specific to adding these features, integration pitfalls covered, prevention actionable |
| FILE | STACK.md | FEATURES.md | ARCHITECTURE.md | PITFALLS.md |

After all 4 complete, spawn synthesizer:

```
Task(prompt="
Synthesize research outputs into SUMMARY.md.

<files_to_read>
- .planning/research/STACK.md
- .planning/research/FEATURES.md
- .planning/research/ARCHITECTURE.md
- .planning/research/PITFALLS.md
</files_to_read>

Write to: .planning/research/SUMMARY.md
Use template: C:/Users/yaoji/.claude/get-shit-done/templates/research-project/SUMMARY.md
Commit after writing.
", subagent_type="gsd-research-synthesizer", model="{synthesizer_model}", description="Synthesize research")
```

Display key findings from SUMMARY.md:
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
GSD ► RESEARCH COMPLETE ✓
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

**Stack additions:** [from SUMMARY.md]
**Feature table stakes:** [from SUMMARY.md]
**Watch Out For:** [from SUMMARY.md]
```

**If "Skip research":** Continue to Step 9.

## 9. Define Requirements

```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
GSD ► DEFINING REQUIREMENTS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```

Read PROJECT.md: core value, current milestone goals, validated requirements (what exists).

**If research exists:** Read FEATURES.md, extract feature categories.

Present features by category:
```
## [Category 1]
**Table stakes:** Feature A, Feature B
**Differentiators:** Feature C, Feature D
**Research notes:** [any relevant notes]
```

**If no research:** Gather requirements through conversation. Ask: "What are the main things users need to do with [new features]?" Clarify, probe for related capabilities, group into categories.

**Scope each category** via AskUserQuestion (multiSelect: true, header max 12 chars):
- "[Feature 1]" — [brief description]
- "[Feature 2]" — [brief description]
- "None for this milestone" — Defer entire category

Track: Selected → this milestone. Unselected table stakes → future. Unselected differentiators → out of scope.

**Identify gaps** via AskUserQuestion:
- "No, research covered it" — Proceed
- "Yes, let me add some" — Capture additions

**Generate REQUIREMENTS.md:**
- v1 Requirements grouped by category (checkboxes, REQ-IDs)
- Future Requirements (deferred)
- Out of Scope (explicit exclusions with reasoning)
- Traceability section (empty, filled by roadmap)

**REQ-ID format:** `[CATEGORY]-[NUMBER]` (AUTH-01, NOTIF-02). Continue numbering from existing.
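
"Continue numbering from existing" can be made mechanical with a small helper — a hedged sketch; the file path and the `CAT-NN` ID style in REQUIREMENTS.md are assumptions:

```bash
# Hedged sketch: compute the next REQ-ID for a category by scanning an
# existing requirements file for IDs like AUTH-01, AUTH-03, ...
next_req_id() {
  cat="$1"; file="${2:-.planning/REQUIREMENTS.md}"
  last=$(grep -oE "${cat}-[0-9]+" "$file" 2>/dev/null \
    | sed "s/${cat}-//" | sort -n | tail -1 | sed 's/^0*//')
  printf '%s-%02d\n' "$cat" "$(( ${last:-0} + 1 ))"
}
```

Given a file containing `AUTH-01` and `AUTH-03`, this would suggest `AUTH-04`; a fresh category starts at `CAT-01`.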

**Requirement quality criteria:**

Good requirements are:
- **Specific and testable:** "User can reset password via email link" (not "Handle password reset")
- **User-centric:** "User can X" (not "System does Y")
- **Atomic:** One capability per requirement (not "User can login and manage profile")
- **Independent:** Minimal dependencies on other requirements

Present FULL requirements list for confirmation:

```
## Milestone v[X.Y] Requirements

### [Category 1]
- [ ] **CAT1-01**: User can do X
- [ ] **CAT1-02**: User can do Y

### [Category 2]
- [ ] **CAT2-01**: User can do Z

Does this capture what you're building? (yes / adjust)
```

If "adjust": Return to scoping.

**Commit requirements:**
```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" commit "docs: define milestone v[X.Y] requirements" --files .planning/REQUIREMENTS.md
```

## 10. Create Roadmap

```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
GSD ► CREATING ROADMAP
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

◆ Spawning roadmapper...
```

**Starting phase number:** Read MILESTONES.md for the last phase number. Continue from there (v1.0 ended at phase 5 → v1.1 starts at phase 6).
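
As an illustration, the starting number could be derived mechanically — a hedged sketch; the `Phase N` text form in MILESTONES.md is an assumption:

```bash
# Hedged sketch: continue phase numbering from the highest "Phase N"
# mentioned in MILESTONES.md (starts at 1 when none is found).
next_phase() {
  last=$(grep -oE 'Phase [0-9]+' "${1:-.planning/MILESTONES.md}" 2>/dev/null \
    | grep -oE '[0-9]+' | sort -n | tail -1)
  echo $(( ${last:-0} + 1 ))
}
```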

```
Task(prompt="
<planning_context>
<files_to_read>
- .planning/PROJECT.md
- .planning/REQUIREMENTS.md
- .planning/research/SUMMARY.md (if exists)
- .planning/config.json
- .planning/MILESTONES.md
</files_to_read>
</planning_context>

<instructions>
Create roadmap for milestone v[X.Y]:
1. Start phase numbering from [N]
2. Derive phases from THIS MILESTONE's requirements only
3. Map every requirement to exactly one phase
4. Derive 2-5 success criteria per phase (observable user behaviors)
5. Validate 100% coverage
6. Write files immediately (ROADMAP.md, STATE.md, update REQUIREMENTS.md traceability)
7. Return ROADMAP CREATED with summary

Write files first, then return.
</instructions>
", subagent_type="gsd-roadmapper", model="{roadmapper_model}", description="Create roadmap")
```

**Handle return:**

**If `## ROADMAP BLOCKED`:** Present blocker, work with user, re-spawn.

**If `## ROADMAP CREATED`:** Read ROADMAP.md, present inline:

```
## Proposed Roadmap

**[N] phases** | **[X] requirements mapped** | All covered ✓

| # | Phase | Goal | Requirements | Success Criteria |
|---|-------|------|--------------|------------------|
| [N] | [Name] | [Goal] | [REQ-IDs] | [count] |

### Phase Details

**Phase [N]: [Name]**
Goal: [goal]
Requirements: [REQ-IDs]
Success criteria:
1. [criterion]
2. [criterion]
```

**Ask for approval** via AskUserQuestion:
- "Approve" — Commit and continue
- "Adjust phases" — Tell me what to change
- "Review full file" — Show raw ROADMAP.md

**If "Adjust":** Get notes, re-spawn roadmapper with revision context, loop until approved.
**If "Review":** Display raw ROADMAP.md, re-ask.

**Commit roadmap** (after approval):
```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" commit "docs: create milestone v[X.Y] roadmap ([N] phases)" --files .planning/ROADMAP.md .planning/STATE.md .planning/REQUIREMENTS.md
```

## 11. Done

```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
GSD ► MILESTONE INITIALIZED ✓
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

**Milestone v[X.Y]: [Name]**

| Artifact       | Location                    |
|----------------|-----------------------------|
| Project        | `.planning/PROJECT.md`      |
| Research       | `.planning/research/`       |
| Requirements   | `.planning/REQUIREMENTS.md` |
| Roadmap        | `.planning/ROADMAP.md`      |

**[N] phases** | **[X] requirements** | Ready to build ✓

## ▶ Next Up

**Phase [N]: [Phase Name]** — [Goal]

`/gsd:discuss-phase [N]` — gather context and clarify approach

<sub>`/clear` first → fresh context window</sub>

Also: `/gsd:plan-phase [N]` — skip discussion, plan directly
```

</process>

<success_criteria>
- [ ] PROJECT.md updated with Current Milestone section
- [ ] STATE.md reset for new milestone
- [ ] MILESTONE-CONTEXT.md consumed and deleted (if existed)
- [ ] Research completed (if selected) — 4 parallel agents, milestone-aware
- [ ] Requirements gathered and scoped per category
- [ ] REQUIREMENTS.md created with REQ-IDs
- [ ] gsd-roadmapper spawned with phase numbering context
- [ ] Roadmap files written immediately (not draft)
- [ ] User feedback incorporated (if any)
- [ ] ROADMAP.md phases continue from previous milestone
- [ ] All commits made (if planning docs committed)
- [ ] User knows next step: `/gsd:discuss-phase [N]`

**Atomic commits:** Each phase commits its artifacts immediately.
</success_criteria>
1113
get-shit-done/workflows/new-project.md
Normal file
File diff suppressed because it is too large
97
get-shit-done/workflows/next.md
Normal file
<purpose>
Detect current project state and automatically advance to the next logical GSD workflow step.
Reads project state to determine position in the discuss → plan → execute → verify → complete progression.
</purpose>

<required_reading>
Read all files referenced by the invoking prompt's execution_context before starting.
</required_reading>

<process>

<step name="detect_state">
Read project state to determine current position:

```bash
# Get state snapshot
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" state json 2>/dev/null || echo "{}"
```

Also read:
- `.planning/STATE.md` — current phase, progress, plan counts
- `.planning/ROADMAP.md` — milestone structure and phase list

Extract:
- `current_phase` — which phase is active
- `plan_of` / `plans_total` — plan execution progress
- `progress` — overall percentage
- `status` — active, paused, etc.

If no `.planning/` directory exists:
```
No GSD project detected. Run `/gsd:new-project` to get started.
```
Exit.
</step>

<step name="determine_next_action">
Apply routing rules based on state:

**Route 1: No phases exist yet → discuss**
If ROADMAP has phases but no phase directories exist on disk:
→ Next action: `/gsd:discuss-phase <first-phase>`

**Route 2: Phase exists but has no CONTEXT.md or RESEARCH.md → discuss**
If the current phase directory exists but has neither CONTEXT.md nor RESEARCH.md:
→ Next action: `/gsd:discuss-phase <current-phase>`

**Route 3: Phase has context but no plans → plan**
If the current phase has CONTEXT.md (or RESEARCH.md) but no PLAN.md files:
→ Next action: `/gsd:plan-phase <current-phase>`

**Route 4: Phase has plans but incomplete summaries → execute**
If plans exist but not all have matching summaries:
→ Next action: `/gsd:execute-phase <current-phase>`

**Route 5: All plans have summaries → verify and complete**
If all plans in the current phase have summaries:
→ Next action: `/gsd:verify-work` then `/gsd:complete-phase`

**Route 6: Phase complete, next phase exists → advance**
If the current phase is complete and the next phase exists in ROADMAP:
→ Next action: `/gsd:discuss-phase <next-phase>`

**Route 7: All phases complete → complete milestone**
If all phases are complete:
→ Next action: `/gsd:complete-milestone`

**Route 8: Paused → resume**
If STATE.md shows paused_at:
→ Next action: `/gsd:resume-work`
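
A minimal sketch of how Routes 1–5 could be expressed in shell, assuming phase artifacts live under a per-phase directory and follow the `PLAN.md`/`SUMMARY.md` naming used here (the helper name and directory layout are illustrative assumptions):

```bash
# Hedged sketch of the routing rules above (Routes 1-5 only).
next_action() {
  phase_dir="$1"   # e.g. .planning/phases/06-auth (layout assumed)
  if [ ! -d .planning ]; then echo "/gsd:new-project"; return; fi
  if [ ! -d "$phase_dir" ]; then echo "/gsd:discuss-phase"; return; fi
  if [ ! -f "$phase_dir/CONTEXT.md" ] && [ ! -f "$phase_dir/RESEARCH.md" ]; then
    echo "/gsd:discuss-phase"; return
  fi
  plans=$(find "$phase_dir" -name '*PLAN.md' | wc -l)
  summaries=$(find "$phase_dir" -name '*SUMMARY.md' | wc -l)
  if [ "$plans" -eq 0 ]; then echo "/gsd:plan-phase"; return; fi
  if [ "$summaries" -lt "$plans" ]; then echo "/gsd:execute-phase"; return; fi
  echo "/gsd:verify-work"
}
```

Routes 6–8 depend on ROADMAP.md and STATE.md contents rather than file presence, so they stay judgment calls for the agent.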
</step>

<step name="show_and_execute">
Display the determination:

```
## GSD Next

**Current:** Phase [N] — [name] | [progress]%
**Status:** [status description]

▶ **Next step:** `/gsd:[command] [args]`
[One-line explanation of why this is the next step]
```

Then immediately invoke the determined command via SlashCommand.
Do not ask for confirmation — the whole point of `/gsd:next` is zero-friction advancement.
</step>

</process>

<success_criteria>
- [ ] Project state correctly detected
- [ ] Next action correctly determined from routing rules
- [ ] Command invoked immediately without user confirmation
- [ ] Clear status shown before invoking
</success_criteria>
92
get-shit-done/workflows/node-repair.md
Normal file
<purpose>
Autonomous repair operator for failed task verification. Invoked by execute-plan when a task fails its done-criteria. Proposes and attempts structured fixes before escalating to the user.
</purpose>

<inputs>
- FAILED_TASK: Task number, name, and done-criteria from the plan
- ERROR: What verification produced — actual result vs expected
- PLAN_CONTEXT: Adjacent tasks and phase goal (for constraint awareness)
- REPAIR_BUDGET: Max repair attempts remaining (default: 2)
</inputs>

<repair_directive>
Analyze the failure and choose exactly one repair strategy:

**RETRY** — The approach was right but execution failed. Try again with a concrete adjustment.
- Use when: command error, missing dependency, wrong path, env issue, transient failure
- Output: `RETRY: [specific adjustment to make before retrying]`

**DECOMPOSE** — The task is too coarse. Break it into smaller verifiable sub-steps.
- Use when: done-criteria covers multiple concerns, implementation gaps are structural
- Output: `DECOMPOSE: [sub-task 1] | [sub-task 2] | ...` (max 3 sub-tasks)
- Sub-tasks must each have a single verifiable outcome

**PRUNE** — The task is infeasible given current constraints. Skip with justification.
- Use when: prerequisite missing and not fixable here, out of scope, contradicts an earlier decision
- Output: `PRUNE: [one-sentence justification]`

**ESCALATE** — Repair budget exhausted, or this is an architectural decision (Rule 4).
- Use when: RETRY failed more than once with different approaches, or fix requires structural change
- Output: `ESCALATE: [what was tried] | [what decision is needed]`
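
The four-way choice above can be sketched as a tiny dispatcher — a hedged illustration; the yes/no diagnosis flags are hypothetical inputs, not part of this workflow's actual interface:

```bash
# Hedged sketch: map failure signals to one repair strategy.
# Arguments: remaining budget, then yes/no flags for each diagnosis.
choose_strategy() {
  budget="$1"; transient="$2"; too_broad="$3"; missing_prereq="$4"
  if [ "$budget" -le 0 ]; then echo "ESCALATE"; return; fi
  if [ "$transient" = "yes" ]; then echo "RETRY"; return; fi
  if [ "$too_broad" = "yes" ]; then echo "DECOMPOSE"; return; fi
  if [ "$missing_prereq" = "yes" ]; then echo "PRUNE"; return; fi
  echo "ESCALATE"   # nothing matched: surface to the user
}
```

Note the budget check comes first: once REPAIR_BUDGET hits 0, every diagnosis collapses to ESCALATE.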
</repair_directive>

<process>

<step name="diagnose">
Read the error and done-criteria carefully. Ask:
1. Is this a transient/environmental issue? → RETRY
2. Is the task verifiably too broad? → DECOMPOSE
3. Is a prerequisite genuinely missing and unfixable in scope? → PRUNE
4. Has RETRY already been attempted with this task? Check REPAIR_BUDGET. If 0 → ESCALATE
</step>
|
||||
|
||||
<step name="execute_retry">
|
||||
If RETRY:
|
||||
1. Apply the specific adjustment stated in the directive
|
||||
2. Re-run the task implementation
|
||||
3. Re-run verification
|
||||
4. If passes → continue normally, log `[Node Repair - RETRY] Task [X]: [adjustment made]`
|
||||
5. If fails again → decrement REPAIR_BUDGET, re-invoke node-repair with updated context
|
||||
</step>
|
||||
|
||||
<step name="execute_decompose">
|
||||
If DECOMPOSE:
|
||||
1. Replace the failed task inline with the sub-tasks (do not modify PLAN.md on disk)
|
||||
2. Execute sub-tasks sequentially, each with its own verification
|
||||
3. If all sub-tasks pass → treat original task as succeeded, log `[Node Repair - DECOMPOSE] Task [X] → [N] sub-tasks`
|
||||
4. If a sub-task fails → re-invoke node-repair for that sub-task (REPAIR_BUDGET applies per sub-task)
|
||||
</step>
|
||||
|
||||
<step name="execute_prune">
|
||||
If PRUNE:
|
||||
1. Mark task as skipped with justification
|
||||
2. Log to SUMMARY "Issues Encountered": `[Node Repair - PRUNE] Task [X]: [justification]`
|
||||
3. Continue to next task
|
||||
</step>
|
||||
|
||||
<step name="execute_escalate">
|
||||
If ESCALATE:
|
||||
1. Surface to user via verification_failure_gate with full repair history
|
||||
2. Present: what was tried (each RETRY/DECOMPOSE attempt), what the blocker is, options available
|
||||
3. Wait for user direction before continuing
|
||||
</step>
|
||||
|
||||
</process>
|
||||
|
||||
<logging>
|
||||
All repair actions must appear in SUMMARY.md under "## Deviations from Plan":
|
||||
|
||||
| Type | Format |
|
||||
|------|--------|
|
||||
| RETRY success | `[Node Repair - RETRY] Task X: [adjustment] — resolved` |
|
||||
| RETRY fail → ESCALATE | `[Node Repair - RETRY] Task X: [N] attempts exhausted — escalated to user` |
|
||||
| DECOMPOSE | `[Node Repair - DECOMPOSE] Task X split into [N] sub-tasks — all passed` |
|
||||
| PRUNE | `[Node Repair - PRUNE] Task X skipped: [justification]` |
|
||||
</logging>
|
||||
|
||||
<constraints>
|
||||
- REPAIR_BUDGET defaults to 2 per task. Configurable via config.json `workflow.node_repair_budget`.
|
||||
- Never modify PLAN.md on disk — decomposed sub-tasks are in-memory only.
|
||||
- DECOMPOSE sub-tasks must be more specific than the original, not synonymous rewrites.
|
||||
- If config.json `workflow.node_repair` is `false`, skip directly to verification_failure_gate (user retains original behavior).
|
||||
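A sketch of reading these settings, assuming jq is available and config.json lives under `.planning/` (the path is an assumption — adjust to your layout). The fallbacks mirror the documented defaults:

```shell
# Read node-repair settings with jq; fall back to the documented defaults
# (node_repair=true, budget=2) when the file or jq is unavailable.
cd "$(mktemp -d)"   # demo in a scratch dir with no config.json
CONFIG=.planning/config.json
REPAIR_ENABLED=$(jq -r '.workflow.node_repair // true' "$CONFIG" 2>/dev/null || echo true)
REPAIR_BUDGET=$(jq -r '.workflow.node_repair_budget // 2' "$CONFIG" 2>/dev/null || echo 2)
echo "node_repair=$REPAIR_ENABLED budget=$REPAIR_BUDGET"
```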
</constraints>

156  get-shit-done/workflows/note.md  Normal file
@@ -0,0 +1,156 @@
<purpose>
Zero-friction idea capture. One Write call, one confirmation line. No questions, no prompts.
Runs inline — no Task, no AskUserQuestion, no Bash.
</purpose>

<required_reading>
Read all files referenced by the invoking prompt's execution_context before starting.
</required_reading>

<process>

<step name="storage_format">
**Note storage format.**

Notes are stored as individual markdown files:

- **Project scope**: `.planning/notes/{YYYY-MM-DD}-{slug}.md` — used when `.planning/` exists in cwd
- **Global scope**: `C:/Users/yaoji/.claude/notes/{YYYY-MM-DD}-{slug}.md` — fallback when no `.planning/` exists, or when the `--global` flag is present

Each note file:

```markdown
---
date: "YYYY-MM-DD HH:mm"
promoted: false
---

{note text verbatim}
```

**`--global` flag**: Strip `--global` from anywhere in `$ARGUMENTS` before parsing. When present, force global scope regardless of whether `.planning/` exists.

**Important**: Do NOT create `.planning/` if it doesn't exist. Fall back to global scope silently.
</step>

<step name="parse_subcommand">
**Parse subcommand from $ARGUMENTS (after stripping --global).**

| Condition | Subcommand |
|-----------|------------|
| Arguments are exactly `list` (case-insensitive) | **list** |
| Arguments are exactly `promote <N>` where N is a number | **promote** |
| Arguments are empty (no text at all) | **list** |
| Anything else | **append** (the text IS the note) |

**Critical**: `list` is only a subcommand when it's the ENTIRE argument. `/gsd:note list of groceries` saves a note with the text "list of groceries". Same for `promote` — it is only a subcommand when followed by exactly one number.
</step>

<step name="append">
**Subcommand: append — create a timestamped note file.**

1. Determine scope (project or global) per the storage format above
2. Ensure the notes directory exists (`.planning/notes/` or `C:/Users/yaoji/.claude/notes/`)
3. Generate slug: first ~4 meaningful words of the note text, lowercase, hyphen-separated (strip articles/prepositions from the start)
4. Generate filename: `{YYYY-MM-DD}-{slug}.md`
   - If a file with that name already exists, append `-2`, `-3`, etc.
5. Write the file with frontmatter and note text (see storage format)
6. Confirm with exactly one line: `Noted ({scope}): {note text}`
   - Where `{scope}` is "project" or "global"
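The slug and filename rules in steps 3-4 can be sketched as below. The stop-word list is illustrative, and this sketch strips stop-words anywhere rather than only at the start; the real workflow can lean on `gsd-tools.cjs generate-slug` instead:

```shell
# Sketch: first ~4 meaningful words → lowercase hyphenated slug → filename.
note="Refactor the hook system to support async validators"
slug=$(printf '%s' "$note" \
  | tr '[:upper:]' '[:lower:]' \
  | tr -cs 'a-z0-9' ' ' \
  | awk '{ n = 0; out = ""
           for (i = 1; i <= NF && n < 4; i++) {
             if ($i == "a" || $i == "an" || $i == "the" || $i == "of" || $i == "to") continue
             out = out (n ? "-" : "") $i; n++
           }
           print out }')
date_part=2026-02-08   # stand-in for the current local date
file="${date_part}-${slug}.md"
echo "$file"   # 2026-02-08-refactor-hook-system-support.md
```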

**Constraints:**
- **Never modify the note text** — capture verbatim, including typos
- **Never ask questions** — just write and confirm
- **Timestamp format**: Use local time, `YYYY-MM-DD HH:mm` (24-hour, no seconds)
</step>

<step name="list">
**Subcommand: list — show notes from both scopes.**

1. Glob `.planning/notes/*.md` (if directory exists) — project notes
2. Glob `C:/Users/yaoji/.claude/notes/*.md` (if directory exists) — global notes
3. For each file, read frontmatter to get `date` and `promoted` status
4. Exclude files where `promoted: true` from active counts (but still show them, dimmed)
5. Sort by date, number all active entries sequentially starting at 1
6. If total active entries > 20, show only the last 10 with a note about how many were omitted

**Display format:**

```
Notes:

Project (.planning/notes/):
1. [2026-02-08 14:32] refactor the hook system to support async validators
2. [promoted] [2026-02-08 14:40] add rate limiting to the API endpoints
3. [2026-02-08 15:10] consider adding a --dry-run flag to build

Global (C:/Users/yaoji/.claude/notes/):
4. [2026-02-08 10:00] cross-project idea about shared config

{count} active note(s). Use `/gsd:note promote <N>` to convert to a todo.
```

If a scope has no directory or no entries, show: `(no notes)`
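Step 3's frontmatter read can be sketched as follows, assuming the exact frontmatter shape shown in storage_format (a real YAML parser would be more robust):

```shell
# Sketch: pull `date` and `promoted` out of a note's frontmatter with awk.
note_file=$(mktemp)
cat > "$note_file" <<'EOF'
---
date: "2026-02-08 14:32"
promoted: false
---

refactor the hook system
EOF
date_field=$(awk -F': ' '/^date:/ { gsub(/"/, "", $2); print $2; exit }' "$note_file")
promoted=$(awk -F': ' '/^promoted:/ { print $2; exit }' "$note_file")
echo "[$date_field] promoted=$promoted"   # [2026-02-08 14:32] promoted=false
```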
</step>

<step name="promote">
**Subcommand: promote — convert a note into a todo.**

1. Run the **list** logic to build the numbered index (both scopes)
2. Find entry N in the numbered list
3. If N is invalid or refers to an already-promoted note, tell the user and stop
4. **Requires the `.planning/` directory** — if it doesn't exist, warn: "Todos require a GSD project. Run `/gsd:new-project` to initialize one."
5. Ensure the `.planning/todos/pending/` directory exists
6. Generate todo ID: `{NNN}-{slug}`, where NNN is the next sequential number (scan both `.planning/todos/pending/` and `.planning/todos/done/` for the highest existing number, increment by 1, zero-pad to 3 digits) and slug is the first ~4 meaningful words of the note text
7. Extract the note text from the source file (body after frontmatter)
8. Create `.planning/todos/pending/{id}.md`:

```yaml
---
title: "{note text}"
status: pending
priority: P2
source: "promoted from /gsd:note"
created: {YYYY-MM-DD}
theme: general
---

## Goal

{note text}

## Context

Promoted from quick note captured on {original date}.

## Acceptance Criteria

- [ ] {primary criterion derived from note text}
```

9. Mark the source note file as promoted: update its frontmatter to `promoted: true`
10. Confirm: `Promoted note {N} to todo {id}: {note text}`
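Step 6's next-ID scan can be sketched as below (directory layout as described above; the `10#` prefix forces base-10 so IDs like `008` don't parse as invalid octal):

```shell
# Sketch: find highest existing todo number across pending/ and done/,
# increment, zero-pad to 3 digits.
cd "$(mktemp -d)"   # demo in a scratch project
mkdir -p .planning/todos/pending .planning/todos/done
touch .planning/todos/pending/002-fix-hooks.md .planning/todos/done/005-old-task.md
next_todo_id() {
  last=$(ls .planning/todos/pending .planning/todos/done 2>/dev/null \
    | grep -oE '^[0-9]{3}' | sort -n | tail -1)
  printf '%03d' "$(( 10#${last:-0} + 1 ))"
}
next_todo_id   # prints 006 — highest existing ID is 005
```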
</step>

</process>

<edge_cases>
1. **"list" as note text**: `/gsd:note list of things` saves note "list of things" (subcommand only when `list` is the entire arg)
2. **No `.planning/`**: Falls back to global `C:/Users/yaoji/.claude/notes/` — works in any directory
3. **Promote without project**: Warns that todos require `.planning/`, suggests `/gsd:new-project`
4. **Large files**: `list` shows last 10 when >20 active entries
5. **Duplicate slugs**: Append `-2`, `-3`, etc. to the filename if the slug is already used on the same date
6. **`--global` position**: Stripped from anywhere — `--global my idea` and `my idea --global` both save "my idea" globally
7. **Promote already-promoted**: Tell the user "Note {N} is already promoted" and stop
8. **Empty note text after stripping flags**: Treat as the `list` subcommand
</edge_cases>

<success_criteria>
- [ ] Append: Note file written with correct frontmatter and verbatim text
- [ ] Append: No questions asked — instant capture
- [ ] List: Both scopes shown with sequential numbering
- [ ] List: Promoted notes shown but dimmed
- [ ] Promote: Todo created with correct format
- [ ] Promote: Source note marked as promoted
- [ ] Global fallback: Works when no `.planning/` exists
</success_criteria>

176  get-shit-done/workflows/pause-work.md  Normal file
@@ -0,0 +1,176 @@
<purpose>
Create structured `.planning/HANDOFF.json` and `.continue-here.md` handoff files to preserve complete work state across sessions. The JSON provides machine-readable state for `/gsd:resume-work`; the markdown provides human-readable context.
</purpose>

<required_reading>
Read all files referenced by the invoking prompt's execution_context before starting.
</required_reading>

<process>

<step name="detect">
Find the current phase directory from the most recently modified files:

```bash
# Find most recent phase directory with work
ls -lt .planning/phases/*/PLAN.md 2>/dev/null | head -1 | grep -oP 'phases/\K[^/]+'
```

If no active phase is detected, ask the user which phase they're pausing work on.
</step>

<step name="gather">
**Collect complete state for handoff:**

1. **Current position**: Which phase, which plan, which task
2. **Work completed**: What got done this session
3. **Work remaining**: What's left in the current plan/phase
4. **Decisions made**: Key decisions and rationale
5. **Blockers/issues**: Anything stuck
6. **Human actions pending**: Things that need manual intervention (MCP setup, API keys, approvals, manual testing)
7. **Background processes**: Any running servers/watchers that were part of the workflow
8. **Files modified**: What's changed but not committed

Ask the user for clarifications if needed via conversational questions.

**Also inspect SUMMARY.md files for false completions:**
```bash
# Check for placeholder content in existing summaries
grep -l "To be filled\|placeholder\|TBD" .planning/phases/*/*.md 2>/dev/null
```
Report any summaries with placeholder content as incomplete items.
</step>

<step name="write_structured">
**Write structured handoff to `.planning/HANDOFF.json`:**

```bash
timestamp=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" current-timestamp full --raw)
```

```json
{
  "version": "1.0",
  "timestamp": "{timestamp}",
  "phase": "{phase_number}",
  "phase_name": "{phase_name}",
  "phase_dir": "{phase_dir}",
  "plan": {current_plan_number},
  "task": {current_task_number},
  "total_tasks": {total_task_count},
  "status": "paused",
  "completed_tasks": [
    {"id": 1, "name": "{task_name}", "status": "done", "commit": "{short_hash}"},
    {"id": 2, "name": "{task_name}", "status": "done", "commit": "{short_hash}"},
    {"id": 3, "name": "{task_name}", "status": "in_progress", "progress": "{what_done}"}
  ],
  "remaining_tasks": [
    {"id": 4, "name": "{task_name}", "status": "not_started"},
    {"id": 5, "name": "{task_name}", "status": "not_started"}
  ],
  "blockers": [
    {"description": "{blocker}", "type": "technical|human_action|external", "workaround": "{if any}"}
  ],
  "human_actions_pending": [
    {"action": "{what needs to be done}", "context": "{why}", "blocking": true}
  ],
  "decisions": [
    {"decision": "{what}", "rationale": "{why}", "phase": "{phase_number}"}
  ],
  "uncommitted_files": [],
  "next_action": "{specific first action when resuming}",
  "context_notes": "{mental state, approach, what you were thinking}"
}
```
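An optional sanity check before committing the handoff — fail fast on malformed JSON. This sketch uses node, which gsd-tools already requires, so there is no new dependency:

```shell
# Validate .planning/HANDOFF.json by parsing it; warn on failure.
node -e "JSON.parse(require('fs').readFileSync('.planning/HANDOFF.json', 'utf8'))" \
  && echo "HANDOFF.json valid" \
  || echo "HANDOFF.json is not valid JSON — fix before committing" >&2
```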
</step>

<step name="write">
**Write handoff to `.planning/phases/XX-name/.continue-here.md`:**

```markdown
---
phase: XX-name
task: 3
total_tasks: 7
status: in_progress
last_updated: [timestamp from current-timestamp]
---

<current_state>
[Where exactly are we? Immediate context]
</current_state>

<completed_work>
- Task 1: [name] - Done
- Task 2: [name] - Done
- Task 3: [name] - In progress, [what's done]
</completed_work>

<remaining_work>
- Task 3: [what's left]
- Task 4: Not started
- Task 5: Not started
</remaining_work>

<decisions_made>
- Decided to use [X] because [reason]
- Chose [approach] over [alternative] because [reason]
</decisions_made>

<blockers>
- [Blocker 1]: [status/workaround]
</blockers>

<context>
[Mental state, what you were thinking, the plan]
</context>

<next_action>
Start with: [specific first action when resuming]
</next_action>
```

Be specific enough for a fresh Claude to understand immediately.

Use `current-timestamp` for the last_updated field. You can use init todos (which provides timestamps) or call directly:
```bash
timestamp=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" current-timestamp full --raw)
```
</step>

<step name="commit">
```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" commit "wip: [phase-name] paused at task [X]/[Y]" --files .planning/phases/*/.continue-here.md .planning/HANDOFF.json
```
</step>

<step name="confirm">
```
✓ Handoff created:
- .planning/HANDOFF.json (structured, machine-readable)
- .planning/phases/[XX-name]/.continue-here.md (human-readable)

Current state:

- Phase: [XX-name]
- Task: [X] of [Y]
- Status: [in_progress/blocked]
- Blockers: [count] ({human_actions_pending count} need human action)
- Committed as WIP

To resume: /gsd:resume-work
```
</step>

</process>

<success_criteria>
- [ ] .continue-here.md created in the correct phase directory
- [ ] All sections filled with specific content
- [ ] Committed as WIP
- [ ] User knows the location and how to resume
</success_criteria>

274  get-shit-done/workflows/plan-milestone-gaps.md  Normal file
@@ -0,0 +1,274 @@
<purpose>
Create all phases necessary to close gaps identified by `/gsd:audit-milestone`. Reads MILESTONE-AUDIT.md, groups gaps into logical phases, creates phase entries in ROADMAP.md, and offers to plan each phase. One command creates all fix phases — no manual `/gsd:add-phase` per gap.
</purpose>

<required_reading>
Read all files referenced by the invoking prompt's execution_context before starting.
</required_reading>

<process>

## 1. Load Audit Results

```bash
# Find the most recent audit file
ls -t .planning/v*-MILESTONE-AUDIT.md 2>/dev/null | head -1
```

Parse the YAML frontmatter to extract structured gaps:
- `gaps.requirements` — unsatisfied requirements
- `gaps.integration` — missing cross-phase connections
- `gaps.flows` — broken E2E flows

If no audit file exists or it has no gaps, error:
```
No audit gaps found. Run `/gsd:audit-milestone` first.
```
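Isolating the frontmatter before handing it to a YAML parser can be sketched as below (the sample audit file is a minimal stand-in; it assumes the standard `---` fence pair):

```shell
# Sketch: extract everything between the first and second `---` lines.
cd "$(mktemp -d)"   # demo with a minimal audit file
cat > v1-MILESTONE-AUDIT.md <<'EOF'
---
gaps:
  requirements: [DASH-01]
---
# Audit body
EOF
frontmatter=$(awk '/^---$/ { n++; next } n == 1 { print } n >= 2 { exit }' v1-MILESTONE-AUDIT.md)
printf '%s\n' "$frontmatter"
```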

## 2. Prioritize Gaps

Group gaps by priority from REQUIREMENTS.md:

| Priority | Action |
|----------|--------|
| `must` | Create phase, blocks milestone |
| `should` | Create phase, recommended |
| `nice` | Ask user: include or defer? |

For integration/flow gaps, infer priority from the affected requirements.

## 3. Group Gaps into Phases

Cluster related gaps into logical phases:

**Grouping rules:**
- Same affected phase → combine into one fix phase
- Same subsystem (auth, API, UI) → combine
- Dependency order (fix stubs before wiring)
- Keep phases focused: 2-4 tasks each

**Example grouping:**
```
Gap: DASH-01 unsatisfied (Dashboard doesn't fetch)
Gap: Integration Phase 1→3 (Auth not passed to API calls)
Gap: Flow "View dashboard" broken at data fetch

→ Phase 6: "Wire Dashboard to API"
  - Add fetch to Dashboard.tsx
  - Include auth header in fetch
  - Handle response, update state
  - Render user data
```

## 4. Determine Phase Numbers

Find the highest existing phase:
```bash
# Get sorted phase list, extract the last one
PHASES=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" phases list)
HIGHEST=$(printf '%s\n' "$PHASES" | jq -r '.directories[-1]')
```

New phases continue from there:
- If Phase 5 is highest, gaps become Phases 6, 7, 8...

## 5. Present Gap Closure Plan

```markdown
## Gap Closure Plan

**Milestone:** {version}
**Gaps to close:** {N} requirements, {M} integration, {K} flows

### Proposed Phases

**Phase {N}: {Name}**
Closes:
- {REQ-ID}: {description}
- Integration: {from} → {to}
Tasks: {count}

**Phase {N+1}: {Name}**
Closes:
- {REQ-ID}: {description}
- Flow: {flow name}
Tasks: {count}

{If nice-to-have gaps exist:}

### Deferred (nice-to-have)

These gaps are optional. Include them?
- {gap description}
- {gap description}

---

Create these {X} phases? (yes / adjust / defer all optional)
```

Wait for user confirmation.

## 6. Update ROADMAP.md

Add the new phases to the current milestone:

```markdown
### Phase {N}: {Name}
**Goal:** {derived from gaps being closed}
**Requirements:** {REQ-IDs being satisfied}
**Gap Closure:** Closes gaps from audit

### Phase {N+1}: {Name}
...
```

## 7. Update REQUIREMENTS.md Traceability Table (REQUIRED)

For each REQ-ID assigned to a gap closure phase:
- Update the Phase column to reflect the new gap closure phase
- Reset Status to `Pending`

Reset checked-off requirements the audit found unsatisfied:
- Change `[x]` → `[ ]` for any requirement marked unsatisfied in the audit
- Update the coverage count at the top of REQUIREMENTS.md

```bash
# Verify traceability table reflects gap closure assignments
grep -c "Pending" .planning/REQUIREMENTS.md
```

## 8. Create Phase Directories

```bash
mkdir -p ".planning/phases/{NN}-{name}"
```

## 9. Commit Roadmap and Requirements Update

```bash
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" commit "docs(roadmap): add gap closure phases {N}-{M}" --files .planning/ROADMAP.md .planning/REQUIREMENTS.md
```

## 10. Offer Next Steps

```markdown
## ✓ Gap Closure Phases Created

**Phases added:** {N} - {M}
**Gaps addressed:** {count} requirements, {count} integration, {count} flows

---

## ▶ Next Up

**Plan first gap closure phase**

`/gsd:plan-phase {N}`

<sub>`/clear` first → fresh context window</sub>

---

**Also available:**
- `/gsd:execute-phase {N}` — if plans already exist
- `cat .planning/ROADMAP.md` — see updated roadmap

---

**After all gap phases complete:**

`/gsd:audit-milestone` — re-audit to verify gaps closed
`/gsd:complete-milestone {version}` — archive when the audit passes
```

</process>

<gap_to_phase_mapping>

## How Gaps Become Tasks

**Requirement gap → Tasks:**
```yaml
gap:
  id: DASH-01
  description: "User sees their data"
  reason: "Dashboard exists but doesn't fetch from API"
  missing:
    - "useEffect with fetch to /api/user/data"
    - "State for user data"
    - "Render user data in JSX"

becomes:

phase: "Wire Dashboard Data"
tasks:
  - name: "Add data fetching"
    files: [src/components/Dashboard.tsx]
    action: "Add useEffect that fetches /api/user/data on mount"

  - name: "Add state management"
    files: [src/components/Dashboard.tsx]
    action: "Add useState for userData, loading, error states"

  - name: "Render user data"
    files: [src/components/Dashboard.tsx]
    action: "Replace placeholder with userData.map rendering"
```

**Integration gap → Tasks:**
```yaml
gap:
  from_phase: 1
  to_phase: 3
  connection: "Auth token → API calls"
  reason: "Dashboard API calls don't include auth header"
  missing:
    - "Auth header in fetch calls"
    - "Token refresh on 401"

becomes:

phase: "Add Auth to Dashboard API Calls"
tasks:
  - name: "Add auth header to fetches"
    files: [src/components/Dashboard.tsx, src/lib/api.ts]
    action: "Include Authorization header with token in all API calls"

  - name: "Handle 401 responses"
    files: [src/lib/api.ts]
    action: "Add interceptor to refresh token or redirect to login on 401"
```

**Flow gap → Tasks:**
```yaml
gap:
  name: "User views dashboard after login"
  broken_at: "Dashboard data load"
  reason: "No fetch call"
  missing:
    - "Fetch user data on mount"
    - "Display loading state"
    - "Render user data"

becomes:

# Usually the same phase as a requirement/integration gap
# Flow gaps often overlap with other gap types
```

</gap_to_phase_mapping>

<success_criteria>
- [ ] MILESTONE-AUDIT.md loaded and gaps parsed
- [ ] Gaps prioritized (must/should/nice)
- [ ] Gaps grouped into logical phases
- [ ] User confirmed phase plan
- [ ] ROADMAP.md updated with new phases
- [ ] REQUIREMENTS.md traceability table updated with gap closure phase assignments
- [ ] Unsatisfied requirement checkboxes reset (`[x]` → `[ ]`)
- [ ] Coverage count updated in REQUIREMENTS.md
- [ ] Phase directories created
- [ ] Changes committed (includes REQUIREMENTS.md)
- [ ] User knows to run `/gsd:plan-phase` next
</success_criteria>

754  get-shit-done/workflows/plan-phase.md  Normal file
@@ -0,0 +1,754 @@
<purpose>
Create executable phase prompts (PLAN.md files) for a roadmap phase with integrated research and verification. Default flow: Research (if needed) -> Plan -> Verify -> Done. Orchestrates gsd-phase-researcher, gsd-planner, and gsd-plan-checker agents with a revision loop (max 3 iterations).
</purpose>

<required_reading>
Read all files referenced by the invoking prompt's execution_context before starting.

@C:/Users/yaoji/.claude/get-shit-done/references/ui-brand.md
</required_reading>

<available_agent_types>
Valid GSD subagent types (use exact names — do not fall back to 'general-purpose'):
- gsd-phase-researcher — Researches technical approaches for a phase
- gsd-planner — Creates detailed plans from phase scope
- gsd-plan-checker — Reviews plan quality before execution
</available_agent_types>

<process>

## 1. Initialize

Load all context in one call (paths only to minimize orchestrator context):

```bash
INIT=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" init plan-phase "$PHASE")
if [[ "$INIT" == @file:* ]]; then INIT=$(cat "${INIT#@file:}"); fi
```

Parse JSON for: `researcher_model`, `planner_model`, `checker_model`, `research_enabled`, `plan_checker_enabled`, `nyquist_validation_enabled`, `commit_docs`, `phase_found`, `phase_dir`, `phase_number`, `phase_name`, `phase_slug`, `padded_phase`, `has_research`, `has_context`, `has_plans`, `plan_count`, `planning_exists`, `roadmap_exists`, `phase_req_ids`.
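Pulling individual fields out of `$INIT` can be sketched with jq (jq is already used elsewhere in this workflow; the literal JSON below is a stand-in for real init output):

```shell
# Sketch: extract a couple of init fields from the JSON payload.
INIT='{"planner_model":"sonnet","research_enabled":true,"phase_found":false}'
PLANNER_MODEL=$(printf '%s' "$INIT" | jq -r '.planner_model')
RESEARCH_ENABLED=$(printf '%s' "$INIT" | jq -r '.research_enabled')
echo "planner=$PLANNER_MODEL research=$RESEARCH_ENABLED"   # planner=sonnet research=true
```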
|
||||
**File paths (for <files_to_read> blocks):** `state_path`, `roadmap_path`, `requirements_path`, `context_path`, `research_path`, `verification_path`, `uat_path`. These are null if files don't exist.
|
||||
|
||||
**If `planning_exists` is false:** Error — run `/gsd:new-project` first.
|
||||
|
||||
## 2. Parse and Normalize Arguments
|
||||
|
||||
Extract from $ARGUMENTS: phase number (integer or decimal like `2.1`), flags (`--research`, `--skip-research`, `--gaps`, `--skip-verify`, `--prd <filepath>`).
|
||||
|
||||
Extract `--prd <filepath>` from $ARGUMENTS. If present, set PRD_FILE to the filepath.
|
||||
|
||||
**If no phase number:** Detect next unplanned phase from roadmap.
|
||||
|
||||
**If `phase_found` is false:** Validate phase exists in ROADMAP.md. If valid, create the directory using `phase_slug` and `padded_phase` from init:
|
||||
```bash
|
||||
mkdir -p ".planning/phases/${padded_phase}-${phase_slug}"
|
||||
```
|
||||
|
||||
**Existing artifacts from init:** `has_research`, `has_plans`, `plan_count`.
|
||||
|
||||
## 3. Validate Phase
|
||||
|
||||
```bash
|
||||
PHASE_INFO=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" roadmap get-phase "${PHASE}")
|
||||
```
|
||||
|
||||
**If `found` is false:** Error with available phases. **If `found` is true:** Extract `phase_number`, `phase_name`, `goal` from JSON.
|
||||
|
||||
## 3.5. Handle PRD Express Path
|
||||
|
||||
**Skip if:** No `--prd` flag in arguments.
|
||||
|
||||
**If `--prd <filepath>` provided:**
|
||||
|
||||
1. Read the PRD file:
|
||||
```bash
|
||||
PRD_CONTENT=$(cat "$PRD_FILE" 2>/dev/null)
|
||||
if [ -z "$PRD_CONTENT" ]; then
|
||||
echo "Error: PRD file not found: $PRD_FILE"
|
||||
exit 1
|
||||
fi
|
||||
```
|
||||
|
||||
2. Display banner:
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
GSD ► PRD EXPRESS PATH
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
|
||||
Using PRD: {PRD_FILE}
|
||||
Generating CONTEXT.md from requirements...
|
||||
```
|
||||
|
||||
3. Parse the PRD content and generate CONTEXT.md. The orchestrator should:
|
||||
- Extract all requirements, user stories, acceptance criteria, and constraints from the PRD
|
||||
- Map each to a locked decision (everything in the PRD is treated as a locked decision)
|
||||
- Identify any areas the PRD doesn't cover and mark as "Claude's Discretion"
|
||||
- **Extract canonical refs** from ROADMAP.md for this phase, plus any specs/ADRs referenced in the PRD — expand to full file paths (MANDATORY)
|
||||
- Create CONTEXT.md in the phase directory
|
||||
|
||||
4. Write CONTEXT.md:
|
||||
```markdown
|
||||
# Phase [X]: [Name] - Context
|
||||
|
||||
**Gathered:** [date]
|
||||
**Status:** Ready for planning
|
||||
**Source:** PRD Express Path ({PRD_FILE})
|
||||
|
||||
<domain>
|
||||
## Phase Boundary
|
||||
|
||||
[Extracted from PRD — what this phase delivers]
|
||||
|
||||
</domain>
|
||||
|
||||
<decisions>
|
||||
## Implementation Decisions
|
||||
|
||||
{For each requirement/story/criterion in the PRD:}
|
||||
### [Category derived from content]
|
||||
- [Requirement as locked decision]
|
||||
|
||||
### Claude's Discretion
|
||||
[Areas not covered by PRD — implementation details, technical choices]
|
||||
|
||||
</decisions>
|
||||
|
||||
<canonical_refs>
|
||||
## Canonical References
|
||||
|
||||
**Downstream agents MUST read these before planning or implementing.**
|
||||
|
||||
[MANDATORY. Extract from ROADMAP.md and any docs referenced in the PRD.
|
||||
Use full relative paths. Group by topic area.]
|
||||
|
||||
### [Topic area]
|
||||
- `path/to/spec-or-adr.md` — [What it decides/defines]
|
||||
|
||||
[If no external specs: "No external specs — requirements fully captured in decisions above"]
|
||||
|
||||
</canonical_refs>
|
||||
|
||||
<specifics>
|
||||
## Specific Ideas
|
||||
|
||||
[Any specific references, examples, or concrete requirements from PRD]
|
||||
|
||||
</specifics>
|
||||
|
||||
<deferred>
|
||||
## Deferred Ideas
|
||||
|
||||
[Items in PRD explicitly marked as future/v2/out-of-scope]
|
||||
[If none: "None — PRD covers phase scope"]
|
||||
|
||||
</deferred>
|
||||
|
||||
---
|
||||
|
||||
*Phase: XX-name*
|
||||
*Context gathered: [date] via PRD Express Path*
|
||||
```
|
||||
|
||||
5. Commit:
|
||||
```bash
|
||||
node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" commit "docs(${padded_phase}): generate context from PRD" --files "${phase_dir}/${padded_phase}-CONTEXT.md"
|
||||
```
|
||||
|
||||
6. Set `context_content` to the generated CONTEXT.md content and continue to step 5 (Handle Research).
|
||||
|
||||
**Effect:** This completely bypasses step 4 (Load CONTEXT.md) since we just created it. The rest of the workflow (research, planning, verification) proceeds normally with the PRD-derived context.
|
||||
|
||||
## 4. Load CONTEXT.md
|
||||
|
||||
**Skip if:** PRD express path was used (CONTEXT.md already created in step 3.5).
|
||||
|
||||
Check `context_path` from init JSON.
|
||||
|
||||
If `context_path` is not null, display: `Using phase context from: ${context_path}`
|
||||
|
||||
**If `context_path` is null (no CONTEXT.md exists):**
|
||||
|
||||
Use AskUserQuestion:
|
||||
- header: "No context"
|
||||
- question: "No CONTEXT.md found for Phase {X}. Plans will use research and requirements only — your design preferences won't be included. Continue or capture context first?"
|
||||
- options:
|
||||
- "Continue without context" — Plan using research + requirements only
|
||||
- "Run discuss-phase first" — Capture design decisions before planning
|
||||
|
||||
If "Continue without context": Proceed to step 5.
|
||||
If "Run discuss-phase first":
|
||||
**IMPORTANT:** Do NOT invoke discuss-phase as a nested Skill/Task call — AskUserQuestion
|
||||
does not work correctly in nested subcontexts (#1009). Instead, display the command
|
||||
and exit so the user runs it as a top-level command:
|
||||
```
|
||||
Run this command first, then re-run /gsd:plan-phase {X}:
|
||||
|
||||
/gsd:discuss-phase {X}
|
||||
```
|
||||
**Exit the plan-phase workflow. Do not continue.**
|
||||
|
||||
## 5. Handle Research

**Skip if:** `--gaps` flag or `--skip-research` flag.

**If `has_research` is true (from init) AND no `--research` flag:** Use existing, skip to step 6.

**If RESEARCH.md missing OR `--research` flag:**

**If no explicit flag (`--research` or `--skip-research`) and not `--auto`:**
Ask the user whether to research, with a contextual recommendation based on the phase:

```
AskUserQuestion([
  {
    question: "Research before planning Phase {X}: {phase_name}?",
    header: "Research",
    multiSelect: false,
    options: [
      { label: "Research first (Recommended)", description: "Investigate domain, patterns, and dependencies before planning. Best for new features, unfamiliar integrations, or architectural changes." },
      { label: "Skip research", description: "Plan directly from context and requirements. Best for bug fixes, simple refactors, or well-understood tasks." }
    ]
  }
])
```

If user selects "Skip research": skip to step 6.

**If `--auto` and `research_enabled` is false:** Skip research silently (preserves automated behavior).

Display banner:
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 GSD ► RESEARCHING PHASE {X}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

◆ Spawning researcher...
```

### Spawn gsd-phase-researcher

```bash
PHASE_DESC=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" roadmap get-phase "${PHASE}" | jq -r '.section')
```

Research prompt:

```markdown
<objective>
Research how to implement Phase {phase_number}: {phase_name}
Answer: "What do I need to know to PLAN this phase well?"
</objective>

<files_to_read>
- {context_path} (USER DECISIONS from /gsd:discuss-phase)
- {requirements_path} (Project requirements)
- {state_path} (Project decisions and history)
</files_to_read>

<additional_context>
**Phase description:** {phase_description}
**Phase requirement IDs (MUST address):** {phase_req_ids}

**Project instructions:** Read ./CLAUDE.md if exists — follow project-specific guidelines
**Project skills:** Check .claude/skills/ or .agents/skills/ directory (if either exists) — read SKILL.md files, research should account for project skill patterns
</additional_context>

<output>
Write to: {phase_dir}/{phase_num}-RESEARCH.md
</output>
```

```
Task(
  prompt=research_prompt,
  subagent_type="gsd-phase-researcher",
  model="{researcher_model}",
  description="Research Phase {phase}"
)
```

### Handle Researcher Return

- **`## RESEARCH COMPLETE`:** Display confirmation, continue to step 6
- **`## RESEARCH BLOCKED`:** Display blocker, offer: 1) Provide context, 2) Skip research, 3) Abort

## 5.5. Create Validation Strategy

Skip if `nyquist_validation_enabled` is false OR `research_enabled` is false.

If `research_enabled` is false and `nyquist_validation_enabled` is true: warn "Nyquist validation enabled but research disabled — VALIDATION.md cannot be created without RESEARCH.md. Plans will lack validation requirements (Dimension 8)." Continue to step 6.

**But Nyquist is not applicable for this run** when all of the following are true:
- `research_enabled` is false
- `has_research` is false
- no `--research` flag was provided

In that case: **skip validation-strategy creation entirely**. Do **not** expect `RESEARCH.md` or `VALIDATION.md` for this run, and continue to Step 6.

```bash
grep -l "## Validation Architecture" "${PHASE_DIR}"/*-RESEARCH.md 2>/dev/null
```

**If found:**
1. Read template: `C:/Users/yaoji/.claude/get-shit-done/templates/VALIDATION.md`
2. Write to `${PHASE_DIR}/${PADDED_PHASE}-VALIDATION.md` (use Write tool)
3. Fill frontmatter: `{N}` → phase number, `{phase-slug}` → slug, `{date}` → current date
4. Verify:
```bash
test -f "${PHASE_DIR}/${PADDED_PHASE}-VALIDATION.md" && echo "VALIDATION_CREATED=true" || echo "VALIDATION_CREATED=false"
```
5. If `VALIDATION_CREATED=false`: STOP — do not proceed to Step 6
6. If `commit_docs`: `commit "docs(phase-${PHASE}): add validation strategy"`
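
Step 3's placeholder fill can be sketched with `sed`. The template fragment and values below are illustrative only — the real template lives at the path in step 1:

```shell
# Hypothetical template fragment; {N}, {phase-slug}, and {date} are the
# placeholder tokens named in step 3 above.
printf 'phase: {N}\nslug: {phase-slug}\ndate: {date}\n' > /tmp/validation-template.md

sed -e 's/{N}/3/' \
    -e 's/{phase-slug}/auth-flow/' \
    -e 's/{date}/2025-06-01/' /tmp/validation-template.md
```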

**If not found:** Warn and continue — plans may fail Dimension 8.

## 5.6. UI Design Contract Gate

> Skip if `workflow.ui_phase` is explicitly `false` AND `workflow.ui_safety_gate` is explicitly `false` in `.planning/config.json`. If keys are absent, treat as enabled.

```bash
UI_PHASE_CFG=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" config-get workflow.ui_phase 2>/dev/null || echo "true")
UI_GATE_CFG=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" config-get workflow.ui_safety_gate 2>/dev/null || echo "true")
```

**If both are `false`:** Skip to step 6.

Check if phase has frontend indicators:

```bash
PHASE_SECTION=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" roadmap get-phase "${PHASE}" 2>/dev/null)
echo "$PHASE_SECTION" | grep -iE "UI|interface|frontend|component|layout|page|screen|view|form|dashboard|widget" > /dev/null 2>&1
HAS_UI=$?
```

**If `HAS_UI` is 0 (frontend indicators found):**

Check for existing UI-SPEC:
```bash
UI_SPEC_FILE=$(ls "${PHASE_DIR}"/*-UI-SPEC.md 2>/dev/null | head -1)
```

**If UI-SPEC.md found:** Set `UI_SPEC_PATH=$UI_SPEC_FILE`. Display: `Using UI design contract: ${UI_SPEC_PATH}`

**If UI-SPEC.md missing AND `UI_GATE_CFG` is `true`:**

Use AskUserQuestion:
- header: "UI Design Contract"
- question: "Phase {N} has frontend indicators but no UI-SPEC.md. Generate a design contract before planning?"
- options:
  - "Generate UI-SPEC first" → Display: "Run `/gsd:ui-phase {N}` then re-run `/gsd:plan-phase {N}`". Exit workflow.
  - "Continue without UI-SPEC" → Continue to step 6.
  - "Not a frontend phase" → Continue to step 6.

**If `HAS_UI` is 1 (no frontend indicators):** Skip silently to step 6.

## 6. Check Existing Plans

```bash
ls "${PHASE_DIR}"/*-PLAN.md 2>/dev/null
```

**If exists:** Offer: 1) Add more plans, 2) View existing, 3) Replan from scratch.

## 7. Use Context Paths from INIT

Extract from INIT JSON:

```bash
STATE_PATH=$(printf '%s\n' "$INIT" | jq -r '.state_path // empty')
ROADMAP_PATH=$(printf '%s\n' "$INIT" | jq -r '.roadmap_path // empty')
REQUIREMENTS_PATH=$(printf '%s\n' "$INIT" | jq -r '.requirements_path // empty')
RESEARCH_PATH=$(printf '%s\n' "$INIT" | jq -r '.research_path // empty')
VERIFICATION_PATH=$(printf '%s\n' "$INIT" | jq -r '.verification_path // empty')
UAT_PATH=$(printf '%s\n' "$INIT" | jq -r '.uat_path // empty')
CONTEXT_PATH=$(printf '%s\n' "$INIT" | jq -r '.context_path // empty')
```

## 7.5. Verify Nyquist Artifacts

Skip if `nyquist_validation_enabled` is false OR `research_enabled` is false.

Also skip if all of the following are true:
- `research_enabled` is false
- `has_research` is false
- no `--research` flag was provided

In that no-research path, Nyquist artifacts are **not required** for this run.

```bash
VALIDATION_EXISTS=$(ls "${PHASE_DIR}"/*-VALIDATION.md 2>/dev/null | head -1)
```

If missing and Nyquist is still enabled/applicable — ask user:
1. Re-run: `/gsd:plan-phase {PHASE} --research`
2. Disable Nyquist with the exact command:
   `node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" config-set workflow.nyquist_validation false`
3. Continue anyway (plans fail Dimension 8)

Proceed to Step 8 only if user selects 2 or 3.

## 8. Spawn gsd-planner Agent

Display banner:
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 GSD ► PLANNING PHASE {X}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

◆ Spawning planner...
```

Planner prompt:

```markdown
<planning_context>
**Phase:** {phase_number}
**Mode:** {standard | gap_closure}

<files_to_read>
- {state_path} (Project State)
- {roadmap_path} (Roadmap)
- {requirements_path} (Requirements)
- {context_path} (USER DECISIONS from /gsd:discuss-phase)
- {research_path} (Technical Research)
- {verification_path} (Verification Gaps - if --gaps)
- {uat_path} (UAT Gaps - if --gaps)
- {UI_SPEC_PATH} (UI Design Contract — visual/interaction specs, if exists)
</files_to_read>

**Phase requirement IDs (every ID MUST appear in a plan's `requirements` field):** {phase_req_ids}

**Project instructions:** Read ./CLAUDE.md if exists — follow project-specific guidelines
**Project skills:** Check .claude/skills/ or .agents/skills/ directory (if either exists) — read SKILL.md files, plans should account for project skill rules
</planning_context>

<downstream_consumer>
Output consumed by /gsd:execute-phase. Plans need:
- Frontmatter (wave, depends_on, files_modified, autonomous)
- Tasks in XML format with read_first and acceptance_criteria fields (MANDATORY on every task)
- Verification criteria
- must_haves for goal-backward verification
</downstream_consumer>

<deep_work_rules>
## Anti-Shallow Execution Rules (MANDATORY)

Every task MUST include these fields — they are NOT optional:

1. **`<read_first>`** — Files the executor MUST read before touching anything. Always include:
   - The file being modified (so executor sees current state, not assumptions)
   - Any "source of truth" file referenced in CONTEXT.md (reference implementations, existing patterns, config files, schemas)
   - Any file whose patterns, signatures, types, or conventions must be replicated or respected

2. **`<acceptance_criteria>`** — Verifiable conditions that prove the task was done correctly. Rules:
   - Every criterion must be checkable with grep, file read, test command, or CLI output
   - NEVER use subjective language ("looks correct", "properly configured", "consistent with")
   - ALWAYS include exact strings, patterns, values, or command outputs that must be present
   - Examples:
     - Code: `auth.py contains def verify_token(` / `test_auth.py exits 0`
     - Config: `.env.example contains DATABASE_URL=` / `Dockerfile contains HEALTHCHECK`
     - Docs: `README.md contains '## Installation'` / `API.md lists all endpoints`
     - Infra: `deploy.yml has rollback step` / `docker-compose.yml has healthcheck for db`

3. **`<action>`** — Must include CONCRETE values, not references. Rules:
   - NEVER say "align X with Y", "match X to Y", "update to be consistent" without specifying the exact target state
   - ALWAYS include the actual values: config keys, function signatures, SQL statements, class names, import paths, env vars, etc.
   - If CONTEXT.md has a comparison table or expected values, copy them into the action verbatim
   - The executor should be able to complete the task from the action text alone, without needing to read CONTEXT.md or reference files (read_first is for verification, not discovery)

**Why this matters:** Executor agents work from the plan text. Vague instructions like "update the config to match production" produce shallow one-line changes. Concrete instructions like "add DATABASE_URL=postgresql://... , set POOL_SIZE=20, add REDIS_URL=redis://..." produce complete work. The cost of verbose plans is far less than the cost of re-doing shallow execution.
</deep_work_rules>

<quality_gate>
- [ ] PLAN.md files created in phase directory
- [ ] Each plan has valid frontmatter
- [ ] Tasks are specific and actionable
- [ ] Every task has `<read_first>` with at least the file being modified
- [ ] Every task has `<acceptance_criteria>` with grep-verifiable conditions
- [ ] Every `<action>` contains concrete values (no "align X with Y" without specifying what)
- [ ] Dependencies correctly identified
- [ ] Waves assigned for parallel execution
- [ ] must_haves derived from phase goal
</quality_gate>
```

```
Task(
  prompt=filled_prompt,
  subagent_type="gsd-planner",
  model="{planner_model}",
  description="Plan Phase {phase}"
)
```

## 9. Handle Planner Return

- **`## PLANNING COMPLETE`:** Display plan count. If `--skip-verify` or `plan_checker_enabled` is false (from init): skip to step 13. Otherwise: step 10.
- **`## CHECKPOINT REACHED`:** Present to user, get response, spawn continuation (step 12)
- **`## PLANNING INCONCLUSIVE`:** Show attempts, offer: Add context / Retry / Manual

## 10. Spawn gsd-plan-checker Agent

Display banner:
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 GSD ► VERIFYING PLANS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

◆ Spawning plan checker...
```

Checker prompt:

```markdown
<verification_context>
**Phase:** {phase_number}
**Phase Goal:** {goal from ROADMAP}

<files_to_read>
- {PHASE_DIR}/*-PLAN.md (Plans to verify)
- {roadmap_path} (Roadmap)
- {requirements_path} (Requirements)
- {context_path} (USER DECISIONS from /gsd:discuss-phase)
- {research_path} (Technical Research — includes Validation Architecture)
</files_to_read>

**Phase requirement IDs (MUST ALL be covered):** {phase_req_ids}

**Project instructions:** Read ./CLAUDE.md if exists — verify plans honor project guidelines
**Project skills:** Check .claude/skills/ or .agents/skills/ directory (if either exists) — verify plans account for project skill rules
</verification_context>

<expected_output>
- ## VERIFICATION PASSED — all checks pass
- ## ISSUES FOUND — structured issue list
</expected_output>
```

```
Task(
  prompt=checker_prompt,
  subagent_type="gsd-plan-checker",
  model="{checker_model}",
  description="Verify Phase {phase} plans"
)
```

## 11. Handle Checker Return

- **`## VERIFICATION PASSED`:** Display confirmation, proceed to step 13.
- **`## ISSUES FOUND`:** Display issues, check iteration count, proceed to step 12.

## 12. Revision Loop (Max 3 Iterations)

Track `iteration_count` (starts at 1 after initial plan + check).

**If iteration_count < 3:**

Display: `Sending back to planner for revision... (iteration {N}/3)`

Revision prompt:

```markdown
<revision_context>
**Phase:** {phase_number}
**Mode:** revision

<files_to_read>
- {PHASE_DIR}/*-PLAN.md (Existing plans)
- {context_path} (USER DECISIONS from /gsd:discuss-phase)
</files_to_read>

**Checker issues:** {structured_issues_from_checker}
</revision_context>

<instructions>
Make targeted updates to address checker issues.
Do NOT replan from scratch unless issues are fundamental.
Return what changed.
</instructions>
```

```
Task(
  prompt=revision_prompt,
  subagent_type="gsd-planner",
  model="{planner_model}",
  description="Revise Phase {phase} plans"
)
```

After planner returns → spawn checker again (step 10), increment iteration_count.

**If iteration_count >= 3:**

Display: `Max iterations reached. {N} issues remain:` + issue list

Offer: 1) Force proceed, 2) Provide guidance and retry, 3) Abandon
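
The loop's control flow can be sketched in plain shell. The issue count here is a stand-in for the checker's structured output, and "each cycle resolves one issue" is an illustrative assumption:

```shell
# Sketch of the bounded revise-and-recheck loop; caps at 3 iterations.
iteration_count=1
issues_remaining=2   # stand-in: pretend the first check found 2 issues

while [ "$issues_remaining" -gt 0 ] && [ "$iteration_count" -lt 3 ]; do
  iteration_count=$((iteration_count + 1))   # one revision + re-check cycle
  issues_remaining=$((issues_remaining - 1)) # stand-in: each cycle resolves one issue
done

if [ "$issues_remaining" -gt 0 ]; then
  echo "Max iterations reached. $issues_remaining issues remain."
else
  echo "Resolved after $iteration_count iterations."
fi
```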

## 13. Requirements Coverage Gate

After plans pass the checker (or checker is skipped), verify that all phase requirements are covered by at least one plan.

**Skip if:** `phase_req_ids` is null or TBD (no requirements mapped to this phase).

**Step 1: Extract requirement IDs claimed by plans**
```bash
# Collect all requirement IDs from plan frontmatter
PLAN_REQS=$(grep -h "requirements_addressed\|requirements:" ${PHASE_DIR}/*-PLAN.md 2>/dev/null | tr -d '[]' | tr ',' '\n' | sed 's/^[[:space:]]*//' | sort -u)
```

**Step 2: Compare against phase requirements from ROADMAP**

For each REQ-ID in `phase_req_ids`:
- If REQ-ID appears in `PLAN_REQS` → covered ✓
- If REQ-ID does NOT appear in any plan → uncovered ✗
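
A minimal sketch of that comparison (the ID values are illustrative; `PLAN_REQS` is newline-separated, as Step 1 produces):

```shell
# Hypothetical phase/plan requirement lists for illustration.
PHASE_REQ_IDS="REQ-01 REQ-02 REQ-03"
PLAN_REQS="REQ-01
REQ-03"

UNCOVERED=""
for req in $PHASE_REQ_IDS; do
  # -x forces a whole-line match, so REQ-1 never matches REQ-10
  printf '%s\n' "$PLAN_REQS" | grep -qx "$req" || UNCOVERED="$UNCOVERED $req"
done
echo "Uncovered:$UNCOVERED"   # → Uncovered: REQ-02
```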

**Step 3: Check CONTEXT.md features against plan objectives**

Read CONTEXT.md `<decisions>` section. Extract feature/capability names. Check each against plan `<objective>` blocks. Features not mentioned in any plan objective → potentially dropped.

**Step 4: Report**

If all requirements covered and no dropped features:
```
✓ Requirements coverage: {N}/{N} REQ-IDs covered by plans
```
→ Proceed to step 14.

If gaps found:
```
## ⚠ Requirements Coverage Gap

{M} of {N} phase requirements are not assigned to any plan:

| REQ-ID | Description | Plans |
|--------|-------------|-------|
| {id} | {from REQUIREMENTS.md} | None |

{K} CONTEXT.md features not found in plan objectives:
- {feature_name} — described in CONTEXT.md but no plan covers it

Options:
1. Re-plan to include missing requirements (recommended)
2. Move uncovered requirements to next phase
3. Proceed anyway — accept coverage gaps
```

Use AskUserQuestion to present the options.

## 14. Present Final Status

Route to `<offer_next>` OR `auto_advance` depending on flags/config.

## 15. Auto-Advance Check

Check for auto-advance trigger:

1. Parse `--auto` flag from $ARGUMENTS
2. **Sync chain flag with intent** — if user invoked manually (no `--auto`), clear the ephemeral chain flag from any previous interrupted `--auto` chain. This does NOT touch `workflow.auto_advance` (the user's persistent settings preference):
```bash
if [[ ! "$ARGUMENTS" =~ --auto ]]; then
  node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" config-set workflow._auto_chain_active false 2>/dev/null
fi
```
3. Read both the chain flag and user preference:
```bash
AUTO_CHAIN=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" config-get workflow._auto_chain_active 2>/dev/null || echo "false")
AUTO_CFG=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" config-get workflow.auto_advance 2>/dev/null || echo "false")
```

**If `--auto` flag present OR `AUTO_CHAIN` is true OR `AUTO_CFG` is true:**
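
The trigger condition above, sketched as a plain shell check (values hard-coded for illustration):

```shell
# Hypothetical inputs: flag present, chain flag and persistent preference off.
ARGUMENTS="--auto"
AUTO_CHAIN="false"
AUTO_CFG="false"

case "$ARGUMENTS" in
  *--auto*) FLAG_PRESENT=true ;;
  *)        FLAG_PRESENT=false ;;
esac

# Advance when any of the three signals is set.
if [ "$FLAG_PRESENT" = "true" ] || [ "$AUTO_CHAIN" = "true" ] || [ "$AUTO_CFG" = "true" ]; then
  echo "AUTO_ADVANCE"   # → AUTO_ADVANCE
else
  echo "OFFER_NEXT"
fi
```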

Display banner:
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 GSD ► AUTO-ADVANCING TO EXECUTE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Plans ready. Launching execute-phase...
```

Launch execute-phase using the Skill tool to avoid nested Task sessions (which cause runtime freezes due to deep agent nesting):
```
Skill(skill="gsd:execute-phase", args="${PHASE} --auto --no-transition")
```

The `--no-transition` flag tells execute-phase to return status after verification instead of chaining further. This keeps the auto-advance chain flat — each phase runs at the same nesting level rather than spawning deeper Task agents.

**Handle execute-phase return:**
- **PHASE COMPLETE** → Display final summary:
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 GSD ► PHASE ${PHASE} COMPLETE ✓
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Auto-advance pipeline finished.

Next: /gsd:discuss-phase ${NEXT_PHASE} --auto
```
- **GAPS FOUND / VERIFICATION FAILED** → Display result, stop chain:
```
Auto-advance stopped: Execution needs review.

Review the output above and continue manually:
/gsd:execute-phase ${PHASE}
```

**If neither `--auto` nor config enabled:**
Route to `<offer_next>` (existing behavior).

</process>

<offer_next>
Output this markdown directly (not as a code block):

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 GSD ► PHASE {X} PLANNED ✓
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

**Phase {X}: {Name}** — {N} plan(s) in {M} wave(s)

| Wave | Plans | What it builds |
|------|-------|----------------|
| 1 | 01, 02 | [objectives] |
| 2 | 03 | [objective] |

Research: {Completed | Used existing | Skipped}
Verification: {Passed | Passed with override | Skipped}

───────────────────────────────────────────────────────────────

## ▶ Next Up

**Execute Phase {X}** — run all {N} plans

/gsd:execute-phase {X}

<sub>/clear first → fresh context window</sub>

───────────────────────────────────────────────────────────────

**Also available:**
- cat .planning/phases/{phase-dir}/*-PLAN.md — review plans
- /gsd:plan-phase {X} --research — re-research first

───────────────────────────────────────────────────────────────
</offer_next>

<success_criteria>
- [ ] .planning/ directory validated
- [ ] Phase validated against roadmap
- [ ] Phase directory created if needed
- [ ] CONTEXT.md loaded early (step 4) and passed to ALL agents
- [ ] Research completed (unless --skip-research or --gaps or exists)
- [ ] gsd-phase-researcher spawned with CONTEXT.md
- [ ] Existing plans checked
- [ ] gsd-planner spawned with CONTEXT.md + RESEARCH.md
- [ ] Plans created (PLANNING COMPLETE or CHECKPOINT handled)
- [ ] gsd-plan-checker spawned with CONTEXT.md
- [ ] Verification passed OR user override OR max iterations with user decision
- [ ] User sees status between agent spawns
- [ ] User knows next steps
</success_criteria>

450
get-shit-done/workflows/profile-user.md
Normal file
@@ -0,0 +1,450 @@

<purpose>
Orchestrate the full developer profiling flow: consent, session analysis (or questionnaire fallback), profile generation, result display, and artifact creation.

This workflow wires Phase 1 (session pipeline) and Phase 2 (profiling engine) into a cohesive user-facing experience. All heavy lifting is done by existing gsd-tools.cjs subcommands and the gsd-user-profiler agent -- this workflow orchestrates the sequence, handles branching, and provides the UX.
</purpose>

<required_reading>
Read all files referenced by the invoking prompt's execution_context before starting.

Key references:
- @C:/Users/yaoji/.claude/get-shit-done/references/ui-brand.md (display patterns)
- @C:/Users/yaoji/.claude/get-shit-done/agents/gsd-user-profiler.md (profiler agent definition)
- @C:/Users/yaoji/.claude/get-shit-done/references/user-profiling.md (profiling reference doc)
</required_reading>

<process>

## 1. Initialize

Parse flags from $ARGUMENTS:
- Detect `--questionnaire` flag (skip session analysis, questionnaire-only)
- Detect `--refresh` flag (rebuild profile even when one exists)

Check for existing profile:

```bash
PROFILE_PATH="C:/Users/yaoji/.claude/get-shit-done/USER-PROFILE.md"
[ -f "$PROFILE_PATH" ] && echo "EXISTS" || echo "NOT_FOUND"
```

**If profile exists AND --refresh NOT set AND --questionnaire NOT set:**

Use AskUserQuestion:
- header: "Existing Profile"
- question: "You already have a profile. What would you like to do?"
- options:
  - "View it" -- Display summary card from existing profile data, then exit
  - "Refresh it" -- Continue with --refresh behavior
  - "Cancel" -- Exit workflow

If "View it": Read USER-PROFILE.md, display its content formatted as a summary card, then exit.
If "Refresh it": Set --refresh behavior and continue.
If "Cancel": Display "No changes made." and exit.

**If profile exists AND --refresh IS set:**

Backup existing profile:
```bash
cp "C:/Users/yaoji/.claude/get-shit-done/USER-PROFILE.md" "C:/Users/yaoji/.claude/get-shit-done/USER-PROFILE.backup.md"
```

Display: "Re-analyzing your sessions to update your profile."
Continue to step 2.

**If no profile exists:** Continue to step 2.

---


## 2. Consent Gate (ACTV-06)

**Skip if** `--questionnaire` flag is set (no JSONL reading occurs -- jump directly to step 4b).

Display consent screen:

```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 GSD > PROFILE YOUR CODING STYLE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Claude starts every conversation generic. A profile teaches Claude
how YOU actually work -- not how you think you work.

## What We'll Analyze

Your recent Claude Code sessions, looking for patterns in these
8 behavioral dimensions:

| Dimension            | What It Measures                             |
|----------------------|----------------------------------------------|
| Communication Style  | How you phrase requests (terse vs. detailed) |
| Decision Speed       | How you choose between options               |
| Explanation Depth    | How much explanation you want with code      |
| Debugging Approach   | How you tackle errors and bugs               |
| UX Philosophy        | How much you care about design vs. function  |
| Vendor Philosophy    | How you evaluate libraries and tools         |
| Frustration Triggers | What makes you correct Claude                |
| Learning Style       | How you prefer to learn new things           |

## Data Handling

✓ Reads session files locally (read-only, nothing modified)
✓ Analyzes message patterns (not content meaning)
✓ Stores profile at C:/Users/yaoji/.claude/get-shit-done/USER-PROFILE.md
✗ Nothing is sent to external services
✗ Sensitive content (API keys, passwords) is automatically excluded
```

**If --refresh path:**
Show abbreviated consent instead:

```
Re-analyzing your sessions to update your profile.
Your existing profile has been backed up to USER-PROFILE.backup.md.
```

Use AskUserQuestion:
- header: "Refresh"
- question: "Continue with profile refresh?"
- options:
  - "Continue" -- Proceed to step 3
  - "Cancel" -- Exit workflow

**If default (no --refresh) path:**

Use AskUserQuestion:
- header: "Ready?"
- question: "Ready to analyze your sessions?"
- options:
  - "Let's go" -- Proceed to step 3 (session analysis)
  - "Use questionnaire instead" -- Jump to step 4b (questionnaire path)
  - "Not now" -- Display "No worries. Run /gsd:profile-user when ready." and exit

---
|
||||
|
||||
## 3. Session Scan
|
||||
|
||||
Display: "◆ Scanning sessions..."
|
||||
|
||||
Run session scan:
|
||||
```bash
|
||||
SCAN_RESULT=$(node C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs scan-sessions --json 2>/dev/null)
|
||||
```
|
||||
|
||||
Parse the JSON output to get session count and project count.
|
||||
|
||||
Display: "✓ Found N sessions across M projects"
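
Assuming `scan-sessions --json` emits fields like `sessions` and `projects` (the field names here are an assumption, not confirmed by this document), the counts can be pulled without extra dependencies:

```shell
# Hypothetical scan output; the real value comes from gsd-tools.cjs scan-sessions --json.
SCAN_RESULT='{"sessions": 12, "projects": 3}'

SESSIONS=$(printf '%s' "$SCAN_RESULT" | sed -n 's/.*"sessions":[[:space:]]*\([0-9]*\).*/\1/p')
PROJECTS=$(printf '%s' "$SCAN_RESULT" | sed -n 's/.*"projects":[[:space:]]*\([0-9]*\).*/\1/p')
echo "✓ Found $SESSIONS sessions across $PROJECTS projects"   # → ✓ Found 12 sessions across 3 projects
```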

**Determine data sufficiency:**
- Count total messages available from the scan result (sum sessions across projects)
- If 0 sessions found: Display "No sessions found. Switching to questionnaire." and jump to step 4b
- If sessions found: Continue to step 4a

---

## 4a. Session Analysis Path

Display: "◆ Sampling messages..."

Run profile sampling:
```bash
SAMPLE_RESULT=$(node C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs profile-sample --json 2>/dev/null)
```

Parse the JSON output to get the temp directory path and message count.

Display: "✓ Sampled N messages from M projects"

Display: "◆ Analyzing patterns..."

**Spawn gsd-user-profiler agent using Task tool:**

Use the Task tool to spawn the `gsd-user-profiler` agent. Provide it with:
- The sampled JSONL file path from profile-sample output
- The user-profiling reference doc at `C:/Users/yaoji/.claude/get-shit-done/references/user-profiling.md`

The agent prompt should follow this structure:
```
Read the profiling reference document and the sampled session messages, then analyze the developer's behavioral patterns across all 8 dimensions.

Reference: @C:/Users/yaoji/.claude/get-shit-done/references/user-profiling.md
Session data: @{temp_dir}/profile-sample.jsonl

Analyze these messages and return your analysis in the <analysis> JSON format specified in the reference document.
```

**Parse the agent's output:**
- Extract the `<analysis>` JSON block from the agent's response
- Save analysis JSON to a temp file (in the same temp directory created by profile-sample)
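
Extracting the block can be sketched with `sed` when the JSON sits on a single line between the tags (a simplifying assumption; multi-line JSON would need a different approach):

```shell
# Hypothetical agent response; the real one comes from the gsd-user-profiler Task.
RESPONSE='analysis follows: <analysis>{"communication_style":{"rating":"terse-direct"}}</analysis> done'

ANALYSIS_JSON=$(printf '%s' "$RESPONSE" | sed -n 's/.*<analysis>\(.*\)<\/analysis>.*/\1/p')
echo "$ANALYSIS_JSON"   # → {"communication_style":{"rating":"terse-direct"}}
```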

```bash
ANALYSIS_PATH="{temp_dir}/analysis.json"
```

Write the analysis JSON to `$ANALYSIS_PATH`.

Display: "✓ Analysis complete (N dimensions scored)"

**Check for thin data:**
- Read the analysis JSON and check the total message count
- If < 50 messages were analyzed: Note that a questionnaire supplement could improve accuracy. Display: "Note: Limited session data (N messages). Results may have lower confidence."

Continue to step 5.

---
|
||||
|
||||
## 4b. Questionnaire Path
|
||||
|
||||
Display: "Using questionnaire to build your profile."
|
||||
|
||||
**Get questions:**
|
||||
```bash
|
||||
QUESTIONS=$(node C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs profile-questionnaire --json 2>/dev/null)
|
||||
```
|
||||
|
||||
Parse the questions JSON. It contains 8 questions, one per dimension.
|
||||
|
||||
**Present each question to the user via AskUserQuestion:**
|
||||
|
||||
For each question in the questions array:
|
||||
- header: The dimension name (e.g., "Communication Style")
|
||||
- question: The question text
|
||||
- options: The answer options from the question definition
|
||||
|
||||
Collect all answers into an answers JSON object mapping dimension keys to selected answer values.
|
||||
|
||||
**Save answers to temp file:**
|
||||
```bash
|
||||
ANSWERS_PATH=$(mktemp /tmp/gsd-profile-answers-XXXXXX.json)
|
||||
```
|
||||
|
||||
Write the answers JSON to `$ANSWERS_PATH`.
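
The shape of the answers file might look like this. The dimension keys and answer values below are assumptions for illustration, not the real gsd-tools schema:

```shell
# Illustrative only -- keys and values are hypothetical, not the real schema.
ANSWERS_PATH=$(mktemp /tmp/gsd-profile-answers-XXXXXX.json)
cat > "$ANSWERS_PATH" <<'EOF'
{
  "communication_style": "detailed-structured",
  "decision_speed": "deliberate-informed",
  "debugging_approach": "hypothesis-driven"
}
EOF
cat "$ANSWERS_PATH"
```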

**Convert answers to analysis:**
```bash
ANALYSIS_RESULT=$(node C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs profile-questionnaire --answers "$ANSWERS_PATH" --json 2>/dev/null)
```

Parse the analysis JSON from the result.

Save analysis JSON to a temp file:
```bash
ANALYSIS_PATH=$(mktemp /tmp/gsd-profile-analysis-XXXXXX.json)
```

Write the analysis JSON to `$ANALYSIS_PATH`.

Continue to step 5 (skip split resolution since questionnaire handles ambiguity internally).

---

## 5. Split Resolution

**Skip if** questionnaire-only path (splits already handled internally).

Read the analysis JSON from `$ANALYSIS_PATH`.

Check each dimension for `cross_project_consistent: false`.

**For each split detected:**

Use AskUserQuestion:
- header: The dimension name (e.g., "Communication Style")
- question: "Your sessions show different patterns:" followed by the split context (e.g., "CLI/backend projects -> terse-direct, Frontend/UI projects -> detailed-structured")
- options:
- Rating option A (e.g., "terse-direct")
- Rating option B (e.g., "detailed-structured")
- "Context-dependent (keep both)"

**If user picks a specific rating:** Update the dimension's `rating` field in the analysis JSON to the selected value.

**If user picks "Context-dependent":** Keep the dominant rating in the `rating` field. Add a `context_note` to the dimension's summary describing the split (e.g., "Context-dependent: terse in CLI projects, detailed in frontend projects").

Write updated analysis JSON back to `$ANALYSIS_PATH`.
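
A toy sketch of the rating update. Real code should use a JSON-aware tool such as jq; plain sed is shown only to keep the example dependency-free, and the field layout is an assumption:

```shell
# Toy sketch: overwrite a dimension's rating after the user resolves a split.
# Prefer jq in real code; sed-on-JSON is fragile and shown for illustration only.
ANALYSIS_PATH=$(mktemp /tmp/gsd-split-demo-XXXXXX.json)
echo '{"communication_style":{"rating":"detailed-structured","cross_project_consistent":false}}' > "$ANALYSIS_PATH"
# User picked "terse-direct" for this dimension:
sed 's/"rating":"detailed-structured"/"rating":"terse-direct"/' "$ANALYSIS_PATH" > "$ANALYSIS_PATH.tmp"
mv "$ANALYSIS_PATH.tmp" "$ANALYSIS_PATH"
cat "$ANALYSIS_PATH"
```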

---

## 6. Profile Write

Display: "◆ Writing profile..."

```bash
node C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs write-profile --input "$ANALYSIS_PATH" --json 2>/dev/null
```

Display: "✓ Profile written to C:/Users/yaoji/.claude/get-shit-done/USER-PROFILE.md"

---

## 7. Result Display

Read the analysis JSON from `$ANALYSIS_PATH` to build the display.

**Show report card table:**

```
## Your Profile

| Dimension            | Rating               | Confidence |
|----------------------|----------------------|------------|
| Communication Style  | detailed-structured  | HIGH       |
| Decision Speed       | deliberate-informed  | MEDIUM     |
| Explanation Depth    | concise              | HIGH       |
| Debugging Approach   | hypothesis-driven    | MEDIUM     |
| UX Philosophy        | pragmatic            | LOW        |
| Vendor Philosophy    | thorough-evaluator   | HIGH       |
| Frustration Triggers | scope-creep          | MEDIUM     |
| Learning Style       | self-directed        | HIGH       |
```

(Populate with actual values from the analysis JSON.)

**Show highlight reel:**

Pick 3-4 dimensions with the highest confidence and most evidence signals. Format as:

```
## Highlights

- **Communication (HIGH):** You consistently provide structured context with
  headers and problem statements before making requests
- **Vendor Choices (HIGH):** You research alternatives thoroughly -- comparing
  docs, GitHub activity, and bundle sizes before committing
- **Frustrations (MEDIUM):** You correct Claude most often for doing things
  you didn't ask for -- scope creep is your primary trigger
```

Build highlights from the `evidence` array and `summary` fields in the analysis JSON. Use the most compelling evidence quotes. Format each as "You tend to..." or "You consistently..." with evidence attribution.

**Offer full profile view:**

Use AskUserQuestion:
- header: "Profile"
- question: "Want to see the full profile?"
- options:
- "Yes" -- Read and display the full USER-PROFILE.md content, then continue to step 8
- "Continue to artifacts" -- Proceed directly to step 8

---

## 8. Artifact Selection (ACTV-05)

Use AskUserQuestion with multiSelect:
- header: "Artifacts"
- question: "Which artifacts should I generate?"
- options (ALL pre-selected by default):
- "/gsd:dev-preferences command file" -- "Load your preferences in any session"
- "CLAUDE.md profile section" -- "Add profile to this project's CLAUDE.md"
- "Global CLAUDE.md" -- "Add profile to C:/Users/yaoji/.claude/CLAUDE.md for all projects"

**If no artifacts selected:** Display "No artifacts generated. Your profile is saved at C:/Users/yaoji/.claude/get-shit-done/USER-PROFILE.md" and jump to step 10.

---

## 9. Artifact Generation

Generate selected artifacts sequentially (file I/O is fast, so there is no benefit from parallel agents):

**For /gsd:dev-preferences (if selected):**

```bash
node C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs generate-dev-preferences --analysis "$ANALYSIS_PATH" --json 2>/dev/null
```

Display: "✓ Generated /gsd:dev-preferences at C:/Users/yaoji/.claude/commands/gsd/dev-preferences.md"

**For CLAUDE.md profile section (if selected):**

```bash
node C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs generate-claude-profile --analysis "$ANALYSIS_PATH" --json 2>/dev/null
```

Display: "✓ Added profile section to CLAUDE.md"

**For Global CLAUDE.md (if selected):**

```bash
node C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs generate-claude-profile --analysis "$ANALYSIS_PATH" --global --json 2>/dev/null
```

Display: "✓ Added profile section to C:/Users/yaoji/.claude/CLAUDE.md"

**Error handling:** If any gsd-tools.cjs call fails, display the error message and use AskUserQuestion to offer "Retry" or "Skip this artifact". On retry, re-run the command. On skip, continue to the next artifact.

---

## 10. Summary & Refresh Diff

**If --refresh path:**

Read both the old backup and the new analysis to compare dimension ratings and confidence.

Read the backed-up profile:
```bash
BACKUP_PATH="C:/Users/yaoji/.claude/get-shit-done/USER-PROFILE.backup.md"
```

Compare each dimension's rating and confidence between old and new. Display a diff table showing only the changed dimensions:

```
## Changes

| Dimension       | Before                      | After                       |
|-----------------|-----------------------------|-----------------------------|
| Communication   | terse-direct (LOW)          | detailed-structured (HIGH)  |
| Debugging       | fix-first (MEDIUM)          | hypothesis-driven (MEDIUM)  |
```

If nothing changed: Display "No changes detected -- your profile is already up to date."

**Display final summary:**

```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 GSD > PROFILE COMPLETE ✓
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Your profile: C:/Users/yaoji/.claude/get-shit-done/USER-PROFILE.md
```

Then list paths for each generated artifact:
```
Artifacts:
✓ /gsd:dev-preferences   C:/Users/yaoji/.claude/commands/gsd/dev-preferences.md
✓ CLAUDE.md section      ./CLAUDE.md
✓ Global CLAUDE.md       C:/Users/yaoji/.claude/CLAUDE.md
```

(Only show artifacts that were actually generated.)

**Clean up temp files:**

Remove the temp directory created by profile-sample (contains the sample JSONL and analysis JSON):
```bash
rm -rf "$TEMP_DIR"
```

Also remove any standalone temp files created for questionnaire answers:
```bash
rm -f "$ANSWERS_PATH" 2>/dev/null
rm -f "$ANALYSIS_PATH" 2>/dev/null
```

(Only clean up temp paths that were actually created during this workflow run.)

</process>

<success_criteria>
- [ ] Initialization detects existing profile and handles all three responses (view/refresh/cancel)
- [ ] Consent gate shown for session analysis path, skipped for questionnaire path
- [ ] Session scan discovers sessions and reports statistics
- [ ] Session analysis path: samples messages, spawns profiler agent, extracts analysis JSON
- [ ] Questionnaire path: presents 8 questions, collects answers, converts to analysis JSON
- [ ] Split resolution presents context-dependent splits with user resolution options
- [ ] Profile written to USER-PROFILE.md via write-profile subcommand
- [ ] Result display shows report card table and highlight reel with evidence
- [ ] Artifact selection uses multiSelect with all options pre-selected
- [ ] Artifacts generated sequentially via gsd-tools.cjs subcommands
- [ ] Refresh diff shows changed dimensions when --refresh was used
- [ ] Temp files cleaned up on completion
</success_criteria>

382
get-shit-done/workflows/progress.md
Normal file
@@ -0,0 +1,382 @@
<purpose>
Check project progress, summarize recent work and what's ahead, then intelligently route to the next action — either executing an existing plan or creating the next one. Provides situational awareness before continuing work.
</purpose>

<required_reading>
Read all files referenced by the invoking prompt's execution_context before starting.
</required_reading>

<process>

<step name="init_context">
**Load progress context (paths only):**

```bash
INIT=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" init progress)
if [[ "$INIT" == @file:* ]]; then INIT=$(cat "${INIT#@file:}"); fi
```

Extract from init JSON: `project_exists`, `roadmap_exists`, `state_exists`, `phases`, `current_phase`, `next_phase`, `milestone_version`, `completed_count`, `phase_count`, `paused_at`, `state_path`, `roadmap_path`, `project_path`, `config_path`.
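
Extracting those scalar fields can be sketched without jq (prefer jq when available). The `$INIT` value below is fabricated sample data, not real `init progress` output:

```shell
# Hedged sketch: pull scalar fields out of the init JSON without jq.
# $INIT below is sample data; the helper handles only flat, unnested values.
INIT='{"project_exists":true,"current_phase":3,"phase_count":5}'
init_field() {
  printf '%s' "$INIT" | sed -n "s/.*\"$1\":\([^,}]*\).*/\1/p"
}
PROJECT_EXISTS=$(init_field project_exists)
CURRENT_PHASE=$(init_field current_phase)
echo "exists=$PROJECT_EXISTS phase=$CURRENT_PHASE"
```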

If `project_exists` is false (no `.planning/` directory):

```
No planning structure found.

Run /gsd:new-project to start a new project.
```

Exit.

If STATE.md is missing: suggest `/gsd:new-project`.

**If ROADMAP.md missing but PROJECT.md exists:**

This means a milestone was completed and archived. Go to **Route F** (between milestones).

If both ROADMAP.md and PROJECT.md are missing: suggest `/gsd:new-project`.
</step>

<step name="load">
**Use structured extraction from gsd-tools:**

Instead of reading full files, use targeted tools to get only the data needed for the report:
- `ROADMAP=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" roadmap analyze)`
- `STATE=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" state-snapshot)`

This minimizes orchestrator context usage.
</step>

<step name="analyze_roadmap">
**Get comprehensive roadmap analysis (replaces manual parsing):**

```bash
ROADMAP=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" roadmap analyze)
```

This returns structured JSON with:
- All phases with disk status (complete/partial/planned/empty/no_directory)
- Goal and dependencies per phase
- Plan and summary counts per phase
- Aggregated stats: total plans, summaries, progress percent
- Current and next phase identification

Use this instead of manually reading/parsing ROADMAP.md.
</step>

<step name="recent">
**Gather recent work context:**

- Find the 2-3 most recent SUMMARY.md files
- Use `summary-extract` for efficient parsing:
  ```bash
  node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" summary-extract <path> --fields one_liner
  ```
- This shows "what we've been working on"
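
Finding the most recent SUMMARY.md files by modification time can be sketched as below. The phase directories are fabricated demo data, and `ls -t` mtime sorting is an assumption about how "most recent" is defined:

```shell
# Sketch: list the 2-3 most recently modified SUMMARY.md files.
# BASE and its contents are fabricated demo data.
BASE=$(mktemp -d)
mkdir -p "$BASE/01-setup" "$BASE/02-core"
touch -t 202401010000 "$BASE/01-setup/01-SUMMARY.md"
touch -t 202402010000 "$BASE/02-core/01-SUMMARY.md"
# -t sorts newest first; head keeps the top 3
RECENT=$(ls -1t "$BASE"/*/*-SUMMARY.md 2>/dev/null | head -3)
echo "$RECENT"
```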
</step>

<step name="position">
**Parse current position from init context and roadmap analysis:**

- Use `current_phase` and `next_phase` from `$ROADMAP`
- Note `paused_at` if work was paused (from `$STATE`)
- Count pending todos: use `init todos` or `list-todos`
- Check for active debug sessions: `ls .planning/debug/*.md 2>/dev/null | grep -v resolved | wc -l`
</step>

<step name="report">
**Generate progress bar from gsd-tools, then present rich status report:**

```bash
# Get formatted progress bar
PROGRESS_BAR=$(node "C:/Users/yaoji/.claude/get-shit-done/bin/gsd-tools.cjs" progress bar --raw)
```

Present:

```
# [Project Name]

**Progress:** {PROGRESS_BAR}
**Profile:** [quality/balanced/budget/inherit]

## Recent Work
- [Phase X, Plan Y]: [what was accomplished - 1 line from summary-extract]
- [Phase X, Plan Z]: [what was accomplished - 1 line from summary-extract]

## Current Position
Phase [N] of [total]: [phase-name]
Plan [M] of [phase-total]: [status]
CONTEXT: [✓ if has_context | - if not]

## Key Decisions Made
- [extract from $STATE.decisions[]]
- [e.g. jq -r '.decisions[].decision' from state-snapshot]

## Blockers/Concerns
- [extract from $STATE.blockers[]]
- [e.g. jq -r '.blockers[].text' from state-snapshot]

## Pending Todos
- [count] pending — /gsd:check-todos to review

## Active Debug Sessions
- [count] active — /gsd:debug to continue
(Only show this section if count > 0)

## What's Next
[Next phase/plan objective from roadmap analyze]
```

</step>

<step name="route">
**Determine next action based on verified counts.**

**Step 1: Count plans, summaries, and issues in current phase**

List files in the current phase directory:

```bash
ls -1 .planning/phases/[current-phase-dir]/*-PLAN.md 2>/dev/null | wc -l
ls -1 .planning/phases/[current-phase-dir]/*-SUMMARY.md 2>/dev/null | wc -l
ls -1 .planning/phases/[current-phase-dir]/*-UAT.md 2>/dev/null | wc -l
```

State: "This phase has {X} plans, {Y} summaries."

**Step 1.5: Check for unaddressed UAT gaps**

Check for UAT.md files with status "diagnosed" (these have gaps needing fixes).

```bash
# Check for diagnosed UAT with gaps
grep -l "status: diagnosed" .planning/phases/[current-phase-dir]/*-UAT.md 2>/dev/null
```

Track:
- `uat_with_gaps`: UAT.md files with status "diagnosed" (gaps need fixing)

**Step 2: Route based on counts**

| Condition | Meaning | Action |
|-----------|---------|--------|
| uat_with_gaps > 0 | UAT gaps need fix plans | Go to **Route E** |
| summaries < plans | Unexecuted plans exist | Go to **Route A** |
| summaries = plans AND plans > 0 | Phase complete | Go to Step 3 |
| plans = 0 | Phase not yet planned | Go to **Route B** |

---

**Route A: Unexecuted plan exists**

Find the first PLAN.md without a matching SUMMARY.md.
Read its `<objective>` section.
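
The lookup can be sketched as a shell loop, assuming the `NN-PLAN.md` / `NN-SUMMARY.md` naming convention used in the counting step above. The demo phase directory is fabricated:

```shell
# Sketch of the Route A lookup; the demo directory and files are fabricated.
PHASE_DIR=$(mktemp -d)
touch "$PHASE_DIR/01-PLAN.md" "$PHASE_DIR/01-SUMMARY.md" "$PHASE_DIR/02-PLAN.md"
NEXT_PLAN=""
for plan in "$PHASE_DIR"/*-PLAN.md; do
  # Derive the sibling summary name from the plan name
  summary="${plan%-PLAN.md}-SUMMARY.md"
  if [ ! -f "$summary" ]; then
    NEXT_PLAN="$plan"
    break
  fi
done
echo "Next unexecuted plan: ${NEXT_PLAN:-none}"
```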

```
---

## ▶ Next Up

**{phase}-{plan}: [Plan Name]** — [objective summary from PLAN.md]

`/gsd:execute-phase {phase}`

<sub>`/clear` first → fresh context window</sub>

---
```

---

**Route B: Phase needs planning**

Check if `{phase_num}-CONTEXT.md` exists in the phase directory.

**If CONTEXT.md exists:**

```
---

## ▶ Next Up

**Phase {N}: {Name}** — {Goal from ROADMAP.md}
<sub>✓ Context gathered, ready to plan</sub>

`/gsd:plan-phase {phase-number}`

<sub>`/clear` first → fresh context window</sub>

---
```

**If CONTEXT.md does NOT exist:**

```
---

## ▶ Next Up

**Phase {N}: {Name}** — {Goal from ROADMAP.md}

`/gsd:discuss-phase {phase}` — gather context and clarify approach

<sub>`/clear` first → fresh context window</sub>

---

**Also available:**
- `/gsd:plan-phase {phase}` — skip discussion, plan directly
- `/gsd:list-phase-assumptions {phase}` — see Claude's assumptions

---
```

---

**Route E: UAT gaps need fix plans**

UAT.md exists with gaps (diagnosed issues). The user needs to plan fixes.

```
---

## ⚠ UAT Gaps Found

**{phase_num}-UAT.md** has {N} gaps requiring fixes.

`/gsd:plan-phase {phase} --gaps`

<sub>`/clear` first → fresh context window</sub>

---

**Also available:**
- `/gsd:execute-phase {phase}` — execute phase plans
- `/gsd:verify-work {phase}` — run more UAT testing

---
```

---

**Step 3: Check milestone status (only when phase complete)**

Read ROADMAP.md and identify:
1. Current phase number
2. All phase numbers in the current milestone section

Count total phases and identify the highest phase number.

State: "Current phase is {X}. Milestone has {N} phases (highest: {Y})."
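
Deriving the highest phase number can be sketched as below. The `### Phase N:` heading format is an assumption about ROADMAP.md's layout, and the sample file is fabricated:

```shell
# Hedged sketch: extract phase numbers from assumed "### Phase N:" headings.
# The roadmap file below is fabricated sample data.
ROADMAP_FILE=$(mktemp)
printf '%s\n' '### Phase 1: Setup' '### Phase 2: Core' '### Phase 3: Polish' > "$ROADMAP_FILE"
HIGHEST=$(sed -n 's/^### Phase \([0-9][0-9]*\):.*/\1/p' "$ROADMAP_FILE" | sort -n | tail -1)
CURRENT=2
if [ "$CURRENT" -lt "$HIGHEST" ]; then
  echo "Route C: more phases remain"
else
  echo "Route D: milestone complete"
fi
```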

**Route based on milestone status:**

| Condition | Meaning | Action |
|-----------|---------|--------|
| current phase < highest phase | More phases remain | Go to **Route C** |
| current phase = highest phase | Milestone complete | Go to **Route D** |

---

**Route C: Phase complete, more phases remain**

Read ROADMAP.md to get the next phase's name and goal.

```
---

## ✓ Phase {Z} Complete

## ▶ Next Up

**Phase {Z+1}: {Name}** — {Goal from ROADMAP.md}

`/gsd:discuss-phase {Z+1}` — gather context and clarify approach

<sub>`/clear` first → fresh context window</sub>

---

**Also available:**
- `/gsd:plan-phase {Z+1}` — skip discussion, plan directly
- `/gsd:verify-work {Z}` — user acceptance test before continuing

---
```

---

**Route D: Milestone complete**

```
---

## 🎉 Milestone Complete

All {N} phases finished!

## ▶ Next Up

**Complete Milestone** — archive and prepare for next

`/gsd:complete-milestone`

<sub>`/clear` first → fresh context window</sub>

---

**Also available:**
- `/gsd:verify-work` — user acceptance test before completing milestone

---
```

---

**Route F: Between milestones (ROADMAP.md missing, PROJECT.md exists)**

A milestone was completed and archived. Ready to start the next milestone cycle.

Read MILESTONES.md to find the last completed milestone version.

```
---

## ✓ Milestone v{X.Y} Complete

Ready to plan the next milestone.

## ▶ Next Up

**Start Next Milestone** — questioning → research → requirements → roadmap

`/gsd:new-milestone`

<sub>`/clear` first → fresh context window</sub>

---
```

</step>

<step name="edge_cases">
**Handle edge cases:**

- Phase complete but next phase not planned → offer `/gsd:plan-phase [next]`
- All work complete → offer milestone completion
- Blockers present → highlight before offering to continue
- Handoff file exists → mention it, offer `/gsd:resume-work`
</step>

</process>

<success_criteria>

- [ ] Rich context provided (recent work, decisions, issues)
- [ ] Current position clear with visual progress
- [ ] What's next clearly explained
- [ ] Smart routing: /gsd:execute-phase if plans exist, /gsd:plan-phase if not
- [ ] User confirms before any action
- [ ] Seamless handoff to appropriate gsd command
</success_criteria>