chore: initial backup of Claude Code configuration
Includes: CLAUDE.md, settings.json, agents, commands, rules, skills, hooks, contexts, evals, get-shit-done, plugin configs (installed list and marketplace sources). Excludes credentials, runtime caches, telemetry, session data, and plugin binary cache.
skills/autoresearch/SKILL.md (new file, 331 lines)
---
name: autoresearch
description: "Autonomously optimize any Claude Code skill by running it repeatedly, scoring outputs against binary evals, mutating the prompt, and keeping improvements. Based on Karpathy's autoresearch methodology. Use when: optimize this skill, improve this skill, run autoresearch on, make this skill better, self-improve skill, benchmark skill, eval my skill, run evals on. Outputs: an improved SKILL.md, a results log, and a changelog of every mutation tried."
---

# Autoresearch for Skills

Most skills work about 70% of the time. The other 30% you get garbage. The fix isn't to rewrite the skill from scratch. It's to let an agent run it dozens of times, score every output, and tighten the prompt until that 30% disappears.

This skill adapts Andrej Karpathy's autoresearch methodology (autonomous experimentation loops) to Claude Code skills. Instead of optimizing ML training code, we optimize skill prompts.

---

## the core job

Take any existing skill, define what "good output" looks like as binary yes/no checks, then run an autonomous loop that:

1. Generates outputs from the skill using test inputs
2. Scores every output against the eval criteria
3. Mutates the skill prompt to fix failures
4. Keeps mutations that improve the score, discards the rest
5. Repeats until the score ceiling is hit or the user stops it

**Output:** An improved SKILL.md + `results.tsv` log + `changelog.md` of every mutation attempted + a live HTML dashboard you can watch in your browser.

---

## before starting: gather context

**STOP. Do not run any experiments until all fields below are confirmed with the user. Ask for any missing fields before proceeding.**

1. **Target skill** — Which skill do you want to optimize? (need the exact path to SKILL.md)
2. **Test inputs** — What 3-5 different prompts/scenarios should we test the skill with? (variety matters — pick inputs that cover different use cases so we don't overfit to one scenario)
3. **Eval criteria** — What 3-6 binary yes/no checks define a good output? (these are your "test questions" — see [references/eval-guide.md](references/eval-guide.md) for how to write good evals)
4. **Runs per experiment** — How many times should we run the skill per mutation? Default: 5. (more runs = more reliable scores, but slower and more expensive. 5 is the sweet spot for most skills.)
5. **Run interval** — How often should experiments cycle? Default: every 2 minutes. (shorter = faster iteration, but costs more)
6. **Budget cap** — Optional. Max number of experiment cycles before stopping. Default: no cap (runs until you stop it).

---

## step 1: read the skill

Before changing anything, read and understand the target skill completely.

1. Read the full SKILL.md file
2. Read any files in `references/` that the skill links to
3. Identify the skill's core job, process steps, and output format
4. Note any existing quality checks or anti-patterns already in the skill

Do NOT skip this. You need to understand what the skill does before you can improve it.

---

## step 2: build the eval suite

Convert the user's eval criteria into a structured test. Every check must be binary — pass or fail, no scales.

**Format each eval as:**

```
EVAL [number]: [Short name]
Question: [Yes/no question about the output]
Pass condition: [What "yes" looks like — be specific]
Fail condition: [What triggers a "no"]
```

**Rules for good evals:**
- Binary only. Yes or no. No "rate 1-7" scales. Scales compound variability and give unreliable results.
- Specific enough to be consistent. "Is the text readable?" is too vague. "Are all words spelled correctly with no truncated sentences?" is testable.
- Not so narrow that the skill games the eval. "Contains fewer than 200 words" will make the skill optimize for brevity at the expense of everything else.
- 3-6 evals is the sweet spot. More than that and the skill starts parroting eval criteria back instead of actually improving.

See [references/eval-guide.md](references/eval-guide.md) for detailed examples of good vs bad evals.

**Max score calculation:**
```
max_score = [number of evals] × [runs per experiment]
```

Example: 4 evals × 5 runs = max score of 20.
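
The scoring arithmetic can be sketched in a few lines. This is an illustrative sketch only: the eval names and check functions below are made up, and in a real run the agent judges each output rather than calling string checks.

```python
# Hypothetical binary eval suite: each check maps an output string to pass/fail.
def make_evals():
    return {
        "No truncated sentences": lambda out: not out.rstrip().endswith("..."),
        "Under the length cap": lambda out: len(out.split()) < 500,
    }

def score_outputs(outputs, evals):
    # Every output is judged by every eval; each pass is worth 1 point.
    score = sum(bool(check(out)) for out in outputs for check in evals.values())
    max_score = len(evals) * len(outputs)  # max_score = evals × runs
    return score, max_score, 100.0 * score / max_score

score, max_score, pass_rate = score_outputs(
    ["A complete sentence.", "An output that trails off..."], make_evals()
)
# 2 evals × 2 runs → max score of 4; one truncation failure → 3/4 (75.0%)
```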

---

## step 3: generate the live dashboard

Before running any experiments, create a live HTML dashboard at `autoresearch-[skill-name]/dashboard.html` and open it in the browser.

The dashboard must:
- Auto-refresh every 10 seconds (re-reads results.json)
- Show a score progression line chart (experiment number on X axis, pass rate % on Y axis)
- Show a colored bar for each experiment: green = keep, red = discard, blue = baseline
- Show a table of all experiments with: experiment #, score, pass rate, status, description
- Show per-eval breakdown: which evals pass most/least across all runs
- Show current status: "Running experiment [N]..." or "Idle"
- Use clean styling with soft colors (white background, pastel accents, clean sans-serif font)

Generate the dashboard as a single self-contained HTML file with inline CSS and JavaScript. Use Chart.js loaded from CDN for the line chart. The JS should fetch `results.json` (which you update after each experiment alongside results.tsv) and re-render.

**Open it immediately** after creating it: `open dashboard.html` (macOS) so the user can see it in their browser.

**Update `results.json`** after every experiment so the dashboard stays current. The JSON format:

```json
{
  "skill_name": "[name]",
  "status": "running",
  "current_experiment": 3,
  "baseline_score": 70.0,
  "best_score": 90.0,
  "experiments": [
    {
      "id": 0,
      "score": 14,
      "max_score": 20,
      "pass_rate": 70.0,
      "status": "baseline",
      "description": "original skill — no changes"
    }
  ],
  "eval_breakdown": [
    {"name": "Text legibility", "pass_count": 8, "total": 10},
    {"name": "Pastel colors", "pass_count": 9, "total": 10}
  ]
}
```

When the run finishes (user stops it or ceiling hit), update `status` to `"complete"` so the dashboard shows a "Done" state with final summary.
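
A minimal sketch of the per-experiment update, assuming the JSON format shown above. `record_experiment` is an illustrative helper name, not part of the skill, and a real loop would also refresh `eval_breakdown`:

```python
import json
import tempfile
from pathlib import Path

def record_experiment(path, experiment):
    # Read-modify-write results.json; the dashboard re-reads it on its next refresh.
    data = json.loads(path.read_text())
    data["experiments"].append(experiment)
    data["current_experiment"] = experiment["id"]
    if experiment["status"] in ("baseline", "keep"):
        data["best_score"] = max(data.get("best_score", 0.0), experiment["pass_rate"])
    path.write_text(json.dumps(data, indent=2))

path = Path(tempfile.mkdtemp()) / "results.json"  # working directory in practice
path.write_text(json.dumps({"skill_name": "demo", "status": "running",
                            "current_experiment": 0, "baseline_score": 70.0,
                            "best_score": 70.0, "experiments": [],
                            "eval_breakdown": []}))
record_experiment(path, {"id": 1, "score": 16, "max_score": 20, "pass_rate": 80.0,
                         "status": "keep",
                         "description": "reworded ambiguous instruction"})
```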

---

## step 4: establish baseline

Run the skill AS-IS before changing anything. This is experiment #0.

1. **Ask the user what to name the new version.** Example: "What should I call the optimized version? (e.g., anti-slop-v2, anti-slop-optimized)" The user picks the name.
2. Create a working directory: `autoresearch-[skill-name]/` inside the skill's folder
3. **Copy the original SKILL.md into the working directory as `[user-chosen-name].md`** — this is the copy you will mutate. NEVER edit the original SKILL.md. All mutations happen on this copy only.
4. Also save `SKILL.md.baseline` in the working directory (identical to the original — this is your revert target)
5. Create `results.tsv` with the header row
6. Create `results.json` and `dashboard.html`, then open the dashboard in the browser
7. Run the skill [N] times using the test inputs (use [user-chosen-name].md for all runs)
8. Score every output against every eval
9. Record the baseline score and update both results.tsv and results.json

**results.tsv format (tab-separated):**

```
experiment	score	max_score	pass_rate	status	description
0	14	20	70.0%	baseline	original skill — no changes
```

**IMPORTANT:** After establishing baseline, confirm the score with the user before proceeding. If baseline is already 90%+, the skill may not need optimization — ask the user if they want to continue.
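
The results.tsv logging can be sketched like this. `log_result` is a hypothetical helper; the column order follows the header above, and pass_rate is derived from score and max_score rather than stored separately:

```python
import tempfile
from pathlib import Path

HEADER = "experiment\tscore\tmax_score\tpass_rate\tstatus\tdescription"

def log_result(path, exp_id, score, max_score, status, description):
    # Append one tab-separated row, creating the file with its header on first use.
    row = (f"{exp_id}\t{score}\t{max_score}\t"
           f"{100.0 * score / max_score:.1f}%\t{status}\t{description}")
    lines = path.read_text().splitlines() if path.exists() else [HEADER]
    lines.append(row)
    path.write_text("\n".join(lines) + "\n")

tsv = Path(tempfile.mkdtemp()) / "results.tsv"  # working directory in practice
log_result(tsv, 0, 14, 20, "baseline", "original skill — no changes")
```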

---

## step 5: run the experiment loop

This is the core autoresearch loop. Once started, run autonomously until stopped.

**LOOP:**

1. **Analyze failures.** Look at which evals are failing most. Read the actual outputs that failed. Identify the pattern — is it a formatting issue? A missing instruction? An ambiguous directive?

2. **Form a hypothesis.** Pick ONE thing to change. Don't change 5 things at once — you won't know what helped.

   Good mutations:
   - Add a specific instruction that addresses the most common failure
   - Reword an ambiguous instruction to be more explicit
   - Add an anti-pattern ("Do NOT do X") for a recurring mistake
   - Move a buried instruction higher in the skill (priority = position)
   - Add or improve an example that shows the correct behavior
   - Remove an instruction that's causing the skill to over-optimize for one thing at the expense of others

   Bad mutations:
   - Rewriting the entire skill from scratch
   - Adding 10 new rules at once
   - Making the skill longer without a specific reason
   - Adding vague instructions like "make it better" or "be more creative"

3. **Make the change.** Edit `[user-chosen-name].md` (in the working directory) with ONE targeted mutation. NEVER touch the original SKILL.md.

4. **Run the experiment.** Execute the skill [N] times with the same test inputs.

5. **Score it.** Run every output through every eval. Calculate the total score.

6. **Decide: keep or discard.**
   - Score improved → **KEEP.** Log it. This is the new baseline for `[user-chosen-name].md`.
   - Score stayed the same → **DISCARD.** Revert `[user-chosen-name].md` to the previous version. The change added complexity without improvement.
   - Score got worse → **DISCARD.** Revert `[user-chosen-name].md` to the previous version.

7. **Log the result** in results.tsv.

8. **Repeat.** Go back to step 1 of the loop.

**NEVER STOP.** Once the loop starts, do not pause to ask the user if you should continue. They may be away from the computer. Run autonomously until:
- The user manually stops you
- You hit the budget cap (if one was set)
- You hit a 95%+ pass rate for 3 consecutive experiments (diminishing returns)

**If you run out of ideas:** Re-read the failing outputs. Try combining two previous near-miss mutations. Try a completely different approach to the same problem. Try removing things instead of adding them. Simplification that maintains the score is a win.
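
The keep/discard logic and the stop rule can be sketched as a skeleton, with the actual skill runs abstracted behind placeholder callables: `run_experiment` stands in for "mutate, run N times, score" and returns a pass rate, and `revert` restores the previous working copy. A sketch under those assumptions, not the skill itself:

```python
def autoresearch_loop(run_experiment, revert, budget_cap=None):
    # Experiment 0 is the baseline: measure before mutating anything.
    best = run_experiment(0)
    history = [("baseline", best)]
    streak, exp_id = 0, 1
    while budget_cap is None or exp_id <= budget_cap:
        pass_rate = run_experiment(exp_id)   # one mutation, N runs, scored
        if pass_rate > best:                 # strictly better → keep
            best, status = pass_rate, "keep"
        else:                                # same or worse → revert the mutation
            revert()
            status = "discard"
        history.append((status, pass_rate))
        streak = streak + 1 if best >= 95.0 else 0
        if streak >= 3:                      # diminishing returns: stop
            break
        exp_id += 1
    return best, history
```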

---

## step 6: write the changelog

After each experiment (whether kept or discarded), append to `changelog.md`:

```markdown
## Experiment [N] — [keep/discard]

**Score:** [X]/[max] ([percent]%)
**Change:** [One sentence describing what was changed]
**Reasoning:** [Why this change was expected to help]
**Result:** [What actually happened — which evals improved/declined]
**Failing outputs:** [Brief description of what still fails, if anything]
```

This changelog is the most valuable artifact. It's a research log that any future agent (or smarter future model) can pick up and continue from.

---

## step 7: deliver results

When the user returns or the loop stops, present:

1. **Score summary:** Baseline score → Final score (percent improvement)
2. **Total experiments run:** How many mutations were tried
3. **Keep rate:** How many mutations were kept vs discarded
4. **Top 3 changes that helped most** (from the changelog)
5. **Remaining failure patterns** (what the skill still gets wrong, if anything)
6. **The improved [user-chosen-name].md** (in the working directory — the original SKILL.md is untouched)
7. **Location of results.tsv and changelog.md** for reference

---

## output format

The skill produces these files in `autoresearch-[skill-name]/`:

```
autoresearch-[skill-name]/
├── dashboard.html          # live browser dashboard (auto-refreshes)
├── results.json            # data file powering the dashboard
├── results.tsv             # score log for every experiment
├── changelog.md            # detailed mutation log
├── SKILL.md.baseline       # original skill before optimization
└── [user-chosen-name].md   # the improved version being mutated
```

**The original SKILL.md is NEVER modified.** The improved version lives in `[user-chosen-name].md`. The user can review, diff, and manually apply changes if they choose. Do NOT offer to overwrite the original. Do NOT copy the working file over the original. The whole point is that the original stays safe.

**results.tsv example:**

```
experiment	score	max_score	pass_rate	status	description
0	14	20	70.0%	baseline	original skill — no changes
1	16	20	80.0%	keep	added explicit instruction to avoid numbering in diagrams
2	16	20	80.0%	discard	tried enforcing left-to-right layout — no improvement
3	18	20	90.0%	keep	added color palette hex codes instead of vague "pastel" description
4	18	20	90.0%	discard	added anti-pattern for neon colors — no improvement
5	19	20	95.0%	keep	added worked example showing correct label formatting
```

---

## example: optimizing a diagram-generator skill

**Context gathered:**
- Target skill: `~/.claude/skills/diagram-generator/SKILL.md`
- Test inputs: "OAuth flow diagram", "CI/CD pipeline", "microservices architecture", "user onboarding funnel", "database schema relationships"
- Evals: (1) All text legible and spelled correctly? (2) Uses only pastel/soft colors? (3) Linear layout — left-to-right or top-to-bottom? (4) Free of numbers, ordinals, and ordering?
- Runs per experiment: 10
- Max score: 40

**Baseline run (experiment 0):**
Generated 10 diagrams. Scored each against 4 evals. Result: 32/40 (80%).
Common failures: 3 diagrams had numbered steps, 2 had bright red elements, 3 had illegible small text.

**Experiment 1 — KEEP (35/40, 87.5%):**
Change: Added "NEVER include step numbers, ordinal numbers (1st, 2nd), or any numerical ordering in diagrams" to the anti-patterns section.
Result: Numbering failures dropped from 3 to 1. Other evals held steady.

**Experiment 2 — DISCARD (34/40, 85%):**
Change: Added "All text must be minimum 14px font size."
Result: Legibility improved by 1, but color compliance dropped by 2. Reverted.

**Experiment 3 — KEEP (37/40, 92.5%):**
Change: Replaced vague "pastel colors" instruction with specific hex codes: `#A8D8EA, #AA96DA, #FCBAD3, #FFFFD2, #B5EAD7`.
Result: Color eval went from 8/10 to 10/10. Other evals held.

**Experiment 4 — DISCARD (37/40, 92.5%):**
Change: Added anti-pattern "Do NOT use red (#FF0000), orange (#FF8C00), or neon green (#39FF14)."
Result: No change. The hex codes from experiment 3 already solved the color problem. Reverted to keep the skill simpler.

**Experiment 5 — KEEP (39/40, 97.5%):**
Change: Added a worked example showing a correct diagram with properly formatted labels (no numbers, pastel fills, left-to-right flow, legible text).
Result: Hit 39/40. One remaining failure: a complex diagram with overlapping labels. Diminishing returns — stopped.

**Final delivery:**
- Baseline: 32/40 (80%) → Final: 39/40 (97.5%)
- 5 experiments, 3 kept, 2 discarded
- Top changes: specific hex codes for colors, explicit anti-numbering rule, worked example
- Remaining issue: very complex diagrams occasionally get overlapping labels (1/40 failure rate)

---

## how this connects to other skills

**What feeds into autoresearch:**
- Any existing skill that needs optimization
- User-defined eval criteria (or help them define evals using the eval guide)

**What autoresearch feeds into:**
- The improved version, which the user can review and merge into the original skill if they choose
- The changelog can be passed to future models for continued optimization
- The eval suite can be reused whenever the skill is updated

---

## the test

A good autoresearch run:

1. **Started with a baseline** — never changed anything before measuring the starting point
2. **Used binary evals only** — no scales, no vibes, no "rate this 1-10"
3. **Changed one thing at a time** — so you know exactly what helped
4. **Kept a complete log** — every experiment recorded, kept or discarded
5. **Improved the score** — measurable improvement from baseline to final
6. **Didn't overfit** — the skill got better at the actual job, not just at passing the specific test inputs
7. **Ran autonomously** — didn't stop to ask permission between experiments

If the skill "passes" all evals but the actual output quality hasn't improved — the evals are bad, not the skill. Go back to step 2 and write better evals.

skills/azure-devops-pipelines/SKILL.md (new file, 807 lines)

---
name: azure-devops-pipelines
description: Manage Azure DevOps build pipelines, release pipelines, pipeline runs, deployments, and pull requests via Azure CLI. Trigger builds, create releases, approve/vote on PRs, merge PRs, manage reviewers, monitor status, and manage artifacts.
---

# Azure DevOps Pipelines

## Prerequisites

```bash
# Install Azure CLI extension
az extension add --name azure-devops
az extension update --name azure-devops

# Auth: interactive login
az login

# Auth: PAT-based login
az devops login --organization https://dev.azure.com/ORG
# Then paste your PAT when prompted

# Set defaults (avoids passing --org and --project every time)
az devops configure --defaults organization=https://dev.azure.com/ORG project=PROJECT
```

## Common Flags

Every command below supports these global flags. They are omitted from the examples for brevity:

| Flag | Purpose |
|------|---------|
| `--org URL` | Override default organization |
| `--project NAME` | Override default project |
| `--detect false` | Disable auto-detection from git remote (recommended in scripts) |
| `--output table` | Human-readable output |
| `--output json` | Machine-readable output |
| `--query JMESPATH` | Filter JSON output |

---

## 1. YAML Pipelines (Modern)

### Run a pipeline

```bash
# By name
az pipelines run --name "MyPipeline"

# By ID
az pipelines run --id 42

# With branch
az pipelines run --name "MyPipeline" --branch refs/heads/feature/xyz

# With variables
az pipelines run --name "MyPipeline" --variables "env=staging" "debug=true"

# With parameters (template parameters)
az pipelines run --name "MyPipeline" --parameters "image=ubuntu-latest" "pool=MyPool"

# With commit
az pipelines run --name "MyPipeline" --commit-id abc123def

# Open in browser after queuing
az pipelines run --name "MyPipeline" --open
```

### List pipelines

```bash
# All pipelines
az pipelines list --output table

# Filter by name (supports wildcards)
az pipelines list --name "Deploy*" --output table

# Filter by folder
az pipelines list --folder-path "production" --top 20

# Get pipeline IDs only
az pipelines list --query "[].{ID:id, Name:name}" --output table
```

### Show pipeline details

```bash
az pipelines show --id 42
az pipelines show --name "MyPipeline" --open
```

### Create / Update / Delete pipeline

```bash
# Create from YAML in Azure Repos
az pipelines create --name "NewPipeline" \
  --repository RepoName --branch master \
  --repository-type tfsgit \
  --yaml-path azure-pipelines.yml

# Create from GitHub
az pipelines create --name "NewPipeline" \
  --repository Owner/Repo --branch main \
  --repository-type github \
  --service-connection SERVICE_CONN_ID \
  --yaml-path .azure/pipelines.yml

# Skip first run on create
az pipelines create --name "NewPipeline" \
  --repository RepoName --branch main \
  --repository-type tfsgit \
  --yaml-path azure-pipelines.yml \
  --skip-first-run true

# Update pipeline
az pipelines update --id 42 --new-name "RenamedPipeline"
az pipelines update --id 42 --yaml-path new-pipeline.yml --branch main
az pipelines update --id 42 --new-folder-path "team/production"

# Delete pipeline
az pipelines delete --id 42 --yes
```

---

## 2. Classic Builds

### Queue a build

```bash
# By definition ID
az pipelines build queue --definition-id 10

# By definition name
az pipelines build queue --definition-name "LegacyBuild"

# With branch and variables
az pipelines build queue --definition-id 10 \
  --branch refs/heads/release/1.0 \
  --variables "config=Release" "deploy=true"

# Open results in browser
az pipelines build queue --definition-id 10 --open
```

### List builds

```bash
# Recent builds
az pipelines build list --top 10 --output table

# Filter by definition
az pipelines build list --definition-ids 10 20 --top 5

# Filter by status
az pipelines build list --status inProgress --output table
az pipelines build list --status completed --result failed --top 10

# Filter by branch
az pipelines build list --branch refs/heads/main --top 5

# Filter by reason
az pipelines build list --reason pullRequest --top 10

# Useful JMESPath queries
az pipelines build list --top 5 \
  --query "[].{ID:id, Status:status, Result:result, Pipeline:definition.name}" \
  --output table
```

### Show build details

```bash
az pipelines build show --id 1234
az pipelines build show --id 1234 --open
```

### Cancel a build

```bash
az pipelines build cancel --build-id 1234
```

### Build definitions

```bash
# List all build definitions
az pipelines build definition list --output table
az pipelines build definition list --query "[].{ID:id, Name:name, Path:path}" --output table

# Show definition details
az pipelines build definition show --id 10
az pipelines build definition show --name "MyBuildDef" --open
```

### Build tags

```bash
# Add tags
az pipelines build tag add --build-id 1234 --tags "production" "v1.0"

# List tags
az pipelines build tag list --build-id 1234

# Delete a tag
az pipelines build tag delete --build-id 1234 --tag "v1.0"
```

---

## 3. Release Pipelines

### Create a release

```bash
# By definition ID
az pipelines release create --definition-id 5

# By definition name
az pipelines release create --definition-name "MyRelease"

# With description
az pipelines release create --definition-id 5 \
  --description "Release v1.2.3 - hotfix for login bug"

# With artifact version
az pipelines release create --definition-id 5 \
  --artifact-metadata-list "alias1=version_id1" "alias2=version_id2"

# Open in browser
az pipelines release create --definition-id 5 --open
```

### List releases

```bash
# Recent releases
az pipelines release list --top 10 --output table

# Filter by definition
az pipelines release list --definition-id 5 --top 10

# Filter by status
az pipelines release list --status active --top 10

# Filter by branch
az pipelines release list --source-branch refs/heads/main

# Filter by time range
az pipelines release list --min-created-time "2025-01-01" --max-created-time "2025-02-01"

# Useful JMESPath queries
az pipelines release list --top 5 \
  --query "[].{ID:id, Name:name, Status:status, CreatedOn:createdOn}" \
  --output table
```

### Show release details

```bash
az pipelines release show --id 100
az pipelines release show --id 100 --open
```

### Release definitions

```bash
# List all release definitions
az pipelines release definition list --output table
az pipelines release definition list --name "Deploy*" --top 10

# Filter by artifact type
az pipelines release definition list --artifact-type build

# Show definition details
az pipelines release definition show --id 5
az pipelines release definition show --name "MyRelease" --open

# Get environments from a release definition
az pipelines release definition show --id 5 \
  --query "environments[].{ID:id, Name:name, Rank:rank}"
```

---

## 4. Pipeline Runs

### List runs

```bash
# Recent runs
az pipelines runs list --top 10 --output table

# Filter by pipeline
az pipelines runs list --pipeline-ids 42 --top 5

# Multiple pipelines
az pipelines runs list --pipeline-ids 42 43 44 --top 10

# Filter by status and result
az pipelines runs list --status completed --result succeeded --top 10
az pipelines runs list --status inProgress --output table

# Order by time
az pipelines runs list --query-order FinishTimeDesc --top 10

# Useful JMESPath
az pipelines runs list --top 5 \
  --query "[].{ID:id, Pipeline:pipeline.name, Status:status, Result:result}" \
  --output table
```

### Show run details

```bash
az pipelines runs show --id 5678
az pipelines runs show --id 5678 --open
```

### Run artifacts

```bash
# List artifacts for a run
az pipelines runs artifact list --run-id 5678

# Download an artifact
az pipelines runs artifact download --run-id 5678 \
  --artifact-name "drop" --path ./artifacts

# Upload an artifact
az pipelines runs artifact upload --run-id 5678 \
  --artifact-name "test-results" --path ./test-output
```

### Run tags

```bash
az pipelines runs tag add --run-id 5678 --tags "deployed" "v2.0"
az pipelines runs tag list --run-id 5678
az pipelines runs tag delete --run-id 5678 --tag "deployed"
```

---

## 5. Variables & Variable Groups

### Pipeline variables

```bash
# List variables for a pipeline
az pipelines variable list --pipeline-id 42 --output table

# Create a variable
az pipelines variable create --pipeline-id 42 \
  --name "MY_VAR" --value "my_value"

# Create a secret variable
az pipelines variable create --pipeline-id 42 \
  --name "MY_SECRET" --value "s3cret" --secret true

# Update a variable
az pipelines variable update --pipeline-id 42 \
  --name "MY_VAR" --new-value "updated_value"

# Delete a variable
az pipelines variable delete --pipeline-id 42 --name "MY_VAR" --yes
```

### Variable groups

```bash
# List variable groups
az pipelines variable-group list --output table

# Create a variable group
az pipelines variable-group create --name "MyVarGroup" \
  --variables "key1=val1" "key2=val2"

# Show variable group
az pipelines variable-group show --id 1
az pipelines variable-group show --group-name "MyVarGroup"

# Update variable group
az pipelines variable-group update --id 1 --name "RenamedGroup"

# Delete variable group
az pipelines variable-group delete --id 1 --yes

# Manage variables within a group
az pipelines variable-group variable list --group-id 1 --output table
az pipelines variable-group variable create --group-id 1 --name "NEW_VAR" --value "val"
az pipelines variable-group variable update --group-id 1 --name "NEW_VAR" --new-value "val2"
az pipelines variable-group variable delete --group-id 1 --name "NEW_VAR" --yes
```

---

## 6. Agent Pools & Queues

```bash
# List agent pools
az pipelines pool list --output table
az pipelines pool show --id 1

# List agents in a pool
az pipelines agent list --pool-id 1 --output table
az pipelines agent show --pool-id 1 --agent-id 5

# List agent queues
az pipelines queue list --output table
az pipelines queue show --id 1
```

---

## 7. Pipeline Folders

```bash
# List folders
az pipelines folder list --output table

# Create a folder
az pipelines folder create --path "team/production"

# Update folder
az pipelines folder update --path "team/production" --new-path "team/prod"

# Delete folder
az pipelines folder delete --path "team/prod" --yes
```

---
|
||||
|
||||
## 8. Advanced: REST API via az devops invoke
|
||||
|
||||
For operations not directly supported by the CLI, use `az devops invoke`.
|
||||
|
||||
### Approve a release deployment
|
||||
|
||||
```bash
|
||||
# Step 1: Get pending approvals
|
||||
az devops invoke --area release --resource approvals \
|
||||
--route-parameters project=PROJECT \
|
||||
--query-parameters releaseId=RELEASE_ID \
|
||||
--api-version 7.1 --http-method GET
|
||||
|
||||
# Step 2: Approve (PATCH)
|
||||
az devops invoke --area release --resource approvals \
|
||||
--route-parameters project=PROJECT approvalId=APPROVAL_ID \
|
||||
--api-version 7.1 --http-method PATCH \
|
||||
--in-file approval-body.json
|
||||
```
|
||||
|
||||
Where `approval-body.json`:
|
||||
```json
|
||||
{
|
||||
"status": "approved",
|
||||
"comments": "Approved via CLI"
|
||||
}
|
||||
```
|
||||
|
||||
### Deploy to a specific environment
|
||||
|
||||
```bash
|
||||
# Trigger deployment of a release to an environment
|
||||
az devops invoke --area release --resource releases \
|
||||
--route-parameters project=PROJECT releaseId=RELEASE_ID \
|
||||
--resource-sub-type environments --resource-id ENV_ID \
|
||||
--api-version 7.1 --http-method PATCH \
|
||||
--in-file deploy-body.json
|
||||
```
|
||||
|
||||
Where `deploy-body.json`:
|
||||
```json
|
||||
{
|
||||
"status": "inProgress",
|
||||
"comment": "Deploying via CLI"
|
||||
}
|
||||
```
|
||||
|
||||
### Get build timeline / logs
|
||||
|
||||
```bash
|
||||
# Get build timeline (stages, jobs, tasks)
|
||||
az devops invoke --area build --resource timeline \
|
||||
--route-parameters project=PROJECT buildId=BUILD_ID \
|
||||
--api-version 7.1 --http-method GET
|
||||
|
||||
# Get build logs
|
||||
az devops invoke --area build --resource logs \
|
||||
--route-parameters project=PROJECT buildId=BUILD_ID \
|
||||
--api-version 7.1 --http-method GET
|
||||
```
|
||||
|
||||
### Check run approvals (YAML pipeline environments)
|
||||
|
||||
```bash
|
||||
# List checks for a pipeline run
|
||||
az devops invoke --area pipelines --resource runs \
|
||||
--route-parameters project=PROJECT pipelineId=PIPELINE_ID runId=RUN_ID \
|
||||
--api-version 7.1 --http-method GET
|
||||
```

---

## 9. Composite Workflows

### Build then release

```bash
# 1. Queue a build and capture the build ID
BUILD_JSON=$(az pipelines build queue --definition-id 10 --output json)
BUILD_ID=$(echo "$BUILD_JSON" | jq -r '.id')
echo "Build queued: $BUILD_ID"

# 2. Poll until complete
while true; do
  STATUS=$(az pipelines build show --id "$BUILD_ID" \
    --query "status" --output tsv)
  RESULT=$(az pipelines build show --id "$BUILD_ID" \
    --query "result" --output tsv)
  echo "Build $BUILD_ID: status=$STATUS result=$RESULT"
  if [ "$STATUS" = "completed" ]; then break; fi
  sleep 30
done

# 3. Check result
if [ "$RESULT" != "succeeded" ]; then
  echo "Build failed with result: $RESULT"
  exit 1
fi

# 4. Create release
az pipelines release create --definition-id 5 \
  --description "Auto-release from build $BUILD_ID"
```

### Run YAML pipeline and wait

```bash
# 1. Run pipeline and capture run ID
RUN_JSON=$(az pipelines run --id 42 --branch refs/heads/main --output json)
RUN_ID=$(echo "$RUN_JSON" | jq -r '.id')
echo "Pipeline run queued: $RUN_ID"

# 2. Poll until complete
while true; do
  STATE=$(az pipelines runs show --id "$RUN_ID" \
    --query "state" --output tsv)
  RESULT=$(az pipelines runs show --id "$RUN_ID" \
    --query "result" --output tsv)
  echo "Run $RUN_ID: state=$STATE result=$RESULT"
  if [ "$STATE" = "completed" ]; then break; fi
  sleep 30
done

# 3. Download artifacts on success
if [ "$RESULT" = "succeeded" ]; then
  az pipelines runs artifact download --run-id "$RUN_ID" \
    --artifact-name "drop" --path ./artifacts
fi
```
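The two polling loops above share the same shape, so they can be factored into a small helper. This is a sketch of our own; `wait_for`, `POLL_INTERVAL`, and `MAX_TRIES` are names we made up, not part of the az CLI.

```shell
# Generic poll helper: re-run a command until its stdout equals the
# expected value. Returns 0 on match, 1 after MAX_TRIES attempts.
wait_for() {
  local expected="$1"; shift
  local tries=0
  while true; do
    local out
    out="$("$@")"
    [ "$out" = "$expected" ] && return 0
    tries=$((tries + 1))
    [ "$tries" -ge "${MAX_TRIES:-60}" ] && return 1
    sleep "${POLL_INTERVAL:-30}"
  done
}

# Usage sketch against the loop above (assumes RUN_ID is already set):
# wait_for completed az pipelines runs show --id "$RUN_ID" --query state --output tsv
```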

### List failed builds across multiple pipelines

```bash
az pipelines build list \
  --definition-ids 10 20 30 \
  --result failed \
  --top 20 \
  --query "[].{ID:id, Pipeline:definition.name, Branch:sourceBranch, Time:finishTime}" \
  --output table
```

### Find latest successful build for a branch

```bash
az pipelines build list \
  --definition-ids 10 \
  --branch refs/heads/main \
  --result succeeded \
  --top 1 \
  --query "[0].{ID:id, BuildNumber:buildNumber, Time:finishTime}" \
  --output table
```

### Release status dashboard

```bash
# Show recent releases with environment statuses
az pipelines release list --definition-id 5 --top 5 \
  --query "[].{ID:id, Name:name, Status:status, Envs:environments[].{Name:name,Status:status}}" \
  --output json | jq '.'
```

---

## 10. Pull Request Management

PR operations use the `az repos pr` commands, which are fully supported natively; no REST API is needed.

### Approve / Vote on a PR

```bash
# Approve
az repos pr set-vote --id 123 --vote approve

# Approve with suggestions
az repos pr set-vote --id 123 --vote approve-with-suggestions

# Reject
az repos pr set-vote --id 123 --vote reject

# Wait for author
az repos pr set-vote --id 123 --vote wait-for-author

# Reset vote
az repos pr set-vote --id 123 --vote reset
```

### Create a PR

```bash
# Basic
az repos pr create --title "feat: add login page" \
  --source-branch feature/login --target-branch main

# With description and reviewers
az repos pr create \
  --title "feat: add login page" \
  --description "Implements OAuth2 login flow" "See JIRA-123" \
  --source-branch feature/login \
  --target-branch main \
  --required-reviewers "user@company.com" "team@company.com" \
  --reviewers "optional@company.com"

# With auto-complete and squash merge
az repos pr create \
  --title "feat: add login page" \
  --source-branch feature/login \
  --target-branch main \
  --auto-complete true \
  --squash true \
  --delete-source-branch true

# Create as draft
az repos pr create --title "WIP: new feature" \
  --source-branch feature/xyz --draft true

# Open in browser after creation
az repos pr create --title "My PR" \
  --source-branch feature/xyz --open
```

### List PRs

```bash
# All active PRs
az repos pr list --status active --output table

# PRs targeting main
az repos pr list --target-branch main --output table

# PRs created by me
az repos pr list --creator "me@company.com" --output table

# PRs where I'm a reviewer
az repos pr list --reviewer "me@company.com" --output table

# From a specific branch
az repos pr list --source-branch feature/login

# Completed/merged PRs
az repos pr list --status completed --top 10 --output table

# All PRs (active + completed + abandoned)
az repos pr list --status all --top 20 \
  --query "[].{ID:pullRequestId, Title:title, Status:status, Author:createdBy.displayName}" \
  --output table
```

### Show PR details

```bash
az repos pr show --id 123
az repos pr show --id 123 --open
```

### Update a PR

```bash
# Merge/complete a PR
az repos pr update --id 123 --status completed

# Abandon a PR
az repos pr update --id 123 --status abandoned

# Reactivate an abandoned PR
az repos pr update --id 123 --status active

# Rename title
az repos pr update --id 123 --title "fix: corrected login flow"

# Update description
az repos pr update --id 123 --description "Updated description" "Second line"

# Enable auto-complete
az repos pr update --id 123 --auto-complete true

# Publish draft PR
az repos pr update --id 123 --draft false

# Convert to draft
az repos pr update --id 123 --draft true

# Bypass policies and force complete (use with caution)
az repos pr update --id 123 --status completed \
  --bypass-policy true --bypass-policy-reason "Emergency hotfix"
```

### Manage reviewers

```bash
# Add reviewers
az repos pr reviewer add --id 123 --reviewers "user1@company.com" "user2@company.com"

# List reviewers
az repos pr reviewer list --id 123 --output table

# Remove a reviewer
az repos pr reviewer remove --id 123 --reviewers "user1@company.com"
```

### Manage policies

```bash
# List policies on a PR
az repos pr policy list --id 123 --output table

# Re-queue a failed policy check
az repos pr policy queue --id 123 --evaluation-id EVAL_ID
```

### Work items

```bash
# Link work items to a PR
az repos pr work-item add --id 123 --work-items 456 789

# List linked work items
az repos pr work-item list --id 123 --output table

# Unlink work items
az repos pr work-item remove --id 123 --work-items 456
```

### Checkout PR branch locally

```bash
az repos pr checkout --id 123
```

### Composite: Create PR and wait for approval

```bash
# 1. Create PR
PR_JSON=$(az repos pr create \
  --title "feat: new feature" \
  --source-branch feature/xyz \
  --target-branch main \
  --output json)
PR_ID=$(echo "$PR_JSON" | jq -r '.pullRequestId')
echo "PR created: $PR_ID"

# 2. Poll until completed or abandoned
while true; do
  STATUS=$(az repos pr show --id "$PR_ID" --query "status" --output tsv)
  echo "PR $PR_ID status: $STATUS"
  if [ "$STATUS" = "completed" ] || [ "$STATUS" = "abandoned" ]; then
    break
  fi
  sleep 60
done
```

---

## 11. Useful JMESPath Patterns

```bash
# Get only IDs
--query "[].id" --output tsv

# ID + Name table
--query "[].{ID:id, Name:name}" --output table

# Filter in query (e.g., only inProgress)
--query "[?status=='inProgress'].{ID:id, Name:name}" --output table

# Nested: release environments
--query "environments[].{Env:name, Status:status}" --output table

# First item only
--query "[0].id" --output tsv

# Sort by field
--query "sort_by([],&finishTime)" --output json
```
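When the JSON has already been saved, the same selections can be done client-side with `jq` (used elsewhere in this doc). The sample data below is made up for illustration:

```shell
# Hypothetical build-list JSON, as az pipelines build list would emit.
BUILDS='[{"id":1,"status":"completed","result":"failed"},
         {"id":2,"status":"inProgress","result":null}]'

# JMESPath "[].id"  ->  jq '.[].id'
echo "$BUILDS" | jq -r '.[].id'

# JMESPath "[?status=='\''inProgress'\''].id"  ->  jq select()
echo "$BUILDS" | jq -r '.[] | select(.status == "inProgress") | .id'
```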
266
skills/knowledge-vault/SKILL.md
Normal file
@@ -0,0 +1,266 @@
---
name: knowledge-vault
description: Manage the Obsidian Knowledge vault — create notes, organize content, search, and maintain the PARA + Zettelkasten system
---

# Knowledge Vault Management

## Vault Location

`/Users/yiukai/Documents/git/knowledge-base`

## Folder Structure

```
0 - Daily Notes/      # Daily notes
1 - Inbox/            # Quick capture
2 - Projects/         # Goal-driven projects
3 - Areas/            # Long-term areas of focus
4 - Resources/        # Topic reference material
5 - Archive/          # Completed or paused
6 - Zettelkasten/     # Atomic permanent notes
System/
  Attachments/        # Attachments
  Templates/          # Templates
```

## Creating Notes

### Inbox Note

Quick capture, no formatting pressure. Place in `1 - Inbox/`.

```markdown
---
created: "YYYY-MM-DD HH:mm"
type: inbox
---

# Title

Content here
```

### Daily Note

Place in `0 - Daily Notes/` with filename `YYYY-MM-DD.md`.

```markdown
---
created: "YYYY-MM-DD"
type: daily
---

# YYYY-MM-DD dddd

## 今日捕捉

-

## 待办

- [ ]

## 回顾

```

### Zettelkasten Note

Atomic permanent note. One idea per note. Place in `6 - Zettelkasten/`.
Filename format: `YYYYMMDDHHMMSS Title.md`

```markdown
---
created: "YYYY-MM-DD HH:mm"
type: zettel
tags: []
source: ""
---

# Title

Write the idea in your own words.

---

## Related

- [[link to related notes]]

## Source

- Source reference
```

### Project Note

Place in `2 - Projects/`.

```markdown
---
created: "YYYY-MM-DD"
type: project
status: active
deadline: ""
---

# Title

## Goal

## Tasks

- [ ]

## Notes

-

## Related

-
```

### Resource Note

Place in `4 - Resources/`.

```markdown
---
created: "YYYY-MM-DD"
type: resource
tags: []
source: ""
---

# Title

## Summary

## Key Points

-

## Related

-
```

### MOC (Map of Content)

Index note for a topic. Place in `3 - Areas/` or `4 - Resources/`.

```markdown
---
created: "YYYY-MM-DD"
type: moc
---

# Title

## Overview

-

## Key Notes

- [[links]]

## Related MOCs

-
```

## Operations

### Create a note

1. Determine note type based on content
2. Use the correct template from above
3. Fill `created` with current date/time
4. For Zettelkasten: generate timestamp filename `YYYYMMDDHHMMSS Title.md`
5. Add `[[links]]` to related existing notes when possible
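Step 4 can be sketched in shell; the title below is a hypothetical example:

```shell
# Build the Zettelkasten filename: 14-digit timestamp, a space, the title.
TITLE="Distributed consensus"    # hypothetical note title
STAMP=$(date +%Y%m%d%H%M%S)      # e.g. 20260315142530
FILENAME="$STAMP $TITLE.md"
echo "$FILENAME"
```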

### Organize Inbox

1. Read all files in `1 - Inbox/` (exclude README.md)
2. For each note, suggest destination:
   - Actionable task → `2 - Projects/`
   - Ongoing responsibility → `3 - Areas/`
   - Reference material → `4 - Resources/`
   - Original insight → rewrite as Zettel in `6 - Zettelkasten/`
   - Outdated → suggest deletion
3. Update frontmatter `type` field to match destination (e.g., `project`, `resource`, `zettel`)
4. Add `[[wikilinks]]` to related existing notes in the destination folder
5. Move files after user confirmation

### Search notes

Use Grep to search note content across the vault:
```
Grep pattern="search term" path="/Users/yiukai/Documents/git/knowledge-base" glob="*.md"
```

Exclude system files:
```
Grep pattern="search term" path="/Users/yiukai/Documents/git/knowledge-base" glob="[0-6]*/**/*.md"
```

Present search results as a list of `[[wikilinks]]` with brief Chinese summaries of each match.

### Find related notes

1. Extract key concepts/tags from the current note
2. Search for those terms across `6 - Zettelkasten/` and other folders
3. Present results as `[[wikilinks]]` with Chinese explanation of each note's relevance

### Review Zettelkasten health

1. Find orphan notes (no incoming or outgoing links)
2. Find notes missing tags or sources
3. Suggest connections between notes on related topics

## Git Sync

MANDATORY for every vault operation that modifies files:

### Before modifying files

```bash
cd /Users/yiukai/Documents/git/knowledge-base && git pull --rebase origin main
```

### After modifying files

```bash
cd /Users/yiukai/Documents/git/knowledge-base && git add -A && git commit -m "vault: <brief description of changes>" && git push origin main
```

Commit message examples:
- `vault: add daily note 2026-03-15`
- `vault: create zettel on distributed consensus`
- `vault: organize inbox notes to projects and resources`
- `vault: update MOC for claude code`

If the pull has conflicts, stop and ask the user before proceeding.

## Dependencies

This skill defines **what** to create and **where** to put it. For **how** to write Obsidian content, delegate to the `obsidian-skills` plugin:

- **Writing Markdown** → use `obsidian:obsidian-markdown` for wikilinks, embeds, callouts, properties syntax
- **Creating .canvas files** → use `obsidian:json-canvas`
- **Creating .base files** → use `obsidian:obsidian-bases`
- **Interacting with running Obsidian** → use `obsidian:obsidian-cli` (read/create/search via CLI)
- **Fetching web content into notes** → use `obsidian:defuddle` to extract clean markdown from URLs

## Rules

- ALWAYS use Obsidian `[[wikilink]]` syntax for internal links (see `obsidian:obsidian-markdown`)
- ALWAYS pull before and push after modifying vault files (see Git Sync above)
- NEVER modify notes without user confirmation (except when creating new ones)
- Zettelkasten notes must be atomic — one idea per note
- Use simplified Chinese for all note content
- Frontmatter `created` field uses ISO format: `YYYY-MM-DD` or `YYYY-MM-DD HH:mm`
- Tags in frontmatter use lowercase English: `[concept, insight, question]`
234
skills/openclaw-create-agent/SKILL.md
Normal file
@@ -0,0 +1,234 @@
---
name: openclaw-create-agent
description: Create a new OpenClaw agent with Discord integration -- directory setup, bootstrap files, config update, and verification on the homelab server.
---

# Create OpenClaw Agent

## When to Use

- User asks to create/add a new OpenClaw agent
- User wants to connect a new AI bot to Discord via OpenClaw
- User says "create agent", "add agent", "new bot" in the context of OpenClaw

Do NOT use for modifying existing agents -- use the `openclaw` skill instead.

---

## Environment

- Server: `yiukai@192.168.68.108`
- Config: `/home/yiukai/.openclaw/openclaw.json` (JSON, hot-reloads on save)
- Home: `/home/yiukai/.openclaw/`
- Owner Discord ID: `964122056163721286`
- Default model: `kimi-coding/k2p5`
- All commands run via `ssh yiukai@192.168.68.108 '<command>'`

---

## Required Input

Gather ALL before starting. Ask the user for any missing items:

| Item | Required | Default | Example |
|------|----------|---------|---------|
| Agent ID | Yes | -- | `xhs-creator` (lowercase, hyphenated) |
| Display Name | Yes | -- | `小红薯` |
| Discord Bot Token | Yes | -- | `MTQ4NTMw...` |
| Guild ID | Yes | -- | `1485305839379021871` |
| Channel ID | Yes | -- | `1485305839828074620` |
| Purpose | Yes | -- | `小红书内容创作、话题分析` |
| Require Mention | No | `false` | `true` for shared channels |
| Model | No | `kimi-coding/k2p5` | `google-antigravity/claude-opus-4-6-thinking` |

Remind user of Discord bot prerequisites if they don't have a token yet:
1. https://discord.com/developers/applications > New Application
2. Bot page > Reset Token > copy
3. Enable **Message Content Intent** (Privileged Gateway Intents)
4. OAuth2 > URL Generator > scope `bot` > permissions: Send Messages, Read Message History, Add Reactions
5. Invite bot to the target server using the generated URL

---

## Procedure

### Step 1: Pre-flight Checks

Decode the bot user ID from the token's first segment (before first `.`):

```bash
echo "FIRST_SEGMENT" | base64 -d
```
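A worked round trip using the owner ID from the Environment section shows the idea; the segment here is simulated by encoding a known ID, not taken from a real token:

```shell
# Simulate a token's first segment by base64-encoding a known user ID,
# then decode it back. Real token segments may lack base64 padding; if
# base64 -d complains, append "=" until the length is a multiple of 4.
SEGMENT=$(printf '%s' "964122056163721286" | base64)
BOT_USER_ID=$(printf '%s' "$SEGMENT" | base64 -d)
echo "$BOT_USER_ID"
```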

Then verify the agent doesn't already exist:

```bash
ssh yiukai@192.168.68.108 'node -e "
const cfg = JSON.parse(require(\"fs\").readFileSync(\"/home/yiukai/.openclaw/openclaw.json\", \"utf8\"));
const exists = cfg.agents.list.some(a => a.id === \"AGENT_ID\");
console.log(exists ? \"CONFLICT: agent already exists\" : \"OK: agent ID available\");
"'
```

If CONFLICT, stop and ask user to choose a different ID or confirm they want to update the existing agent.

### Step 2: Create Directories

```bash
ssh yiukai@192.168.68.108 "mkdir -p ~/.openclaw/workspace-AGENT_ID ~/.openclaw/agents/AGENT_ID/agent"
```

### Step 3: Write AGENTS.md

Write to `/home/yiukai/.openclaw/workspace-AGENT_ID/AGENTS.md` via SSH heredoc.
IMPORTANT: All .md bootstrap files go in the WORKSPACE directory, NOT agentDir. agentDir only stores JSON config files (auto-managed by OpenClaw).

Tailor content to the agent's purpose. Every AGENTS.md must include:

1. **Identity statement** -- one sentence: who the agent is and what it specializes in
2. **Core capabilities** -- 3-5 numbered sections with concrete descriptions
3. **Workflow or output templates** -- structured format the agent should follow when producing output
4. **Constraints** -- what the agent must NOT do

Use the user's stated purpose to generate domain-specific content. Do NOT use generic placeholder text.

### Step 4: Write SOUL.md

Write to `/home/yiukai/.openclaw/workspace-AGENT_ID/SOUL.md` via SSH heredoc.

Keep it short (20-30 lines). Must include:

1. **Identity** -- one-line character description
2. **Tone** -- 3-4 bullet points on communication style
3. **Language** -- specify primary language (usually Chinese)
4. **Boundaries** -- 3-4 things the agent refuses to do

### Step 5: Update Config

Use a single Node.js script via SSH to atomically update all three config sections. The script must:

1. Read the current config
2. Add agent entry to `agents.list`
3. Add Discord account to `channels.discord.accounts`
4. Add binding to `bindings`
5. Write back

```bash
ssh yiukai@192.168.68.108 'node -e "
const fs = require(\"fs\");
const path = \"/home/yiukai/.openclaw/openclaw.json\";
const cfg = JSON.parse(fs.readFileSync(path, \"utf8\"));

// --- Agent ---
cfg.agents.list.push({
  id: \"AGENT_ID\",
  name: \"AGENT_ID\",
  workspace: \"/home/yiukai/.openclaw/workspace-AGENT_ID\",
  agentDir: \"/home/yiukai/.openclaw/agents/AGENT_ID/agent\",
  model: \"MODEL\",
  identity: { name: \"DISPLAY_NAME\" },
  groupChat: {
    mentionPatterns: [
      \"<@!?BOT_USER_ID>\",
      \"DISPLAY_NAME\",
      \"SHORT_ALIAS\",
      \"BOT_USER_ID\"
    ]
  }
});

// --- Discord Account ---
cfg.channels.discord.accounts[\"AGENT_ID\"] = {
  name: \"DISPLAY_NAME\",
  enabled: true,
  token: \"FULL_BOT_TOKEN\",
  groupPolicy: \"open\",
  streaming: \"off\",
  guilds: {
    \"GUILD_ID\": {
      requireMention: REQUIRE_MENTION_BOOL,
      users: [\"964122056163721286\", \"BOT_USER_ID\"],
      channels: { \"CHANNEL_ID\": { allow: true } }
    }
  }
};

// --- Binding ---
cfg.bindings.push({
  agentId: \"AGENT_ID\",
  match: { channel: \"discord\", accountId: \"AGENT_ID\" }
});

fs.writeFileSync(path, JSON.stringify(cfg, null, 2));
console.log(\"OK: config updated\");
"'
```

Substitute ALL placeholders before executing. Never leave template variables in the actual command.

### Step 6: Verify

Wait 5 seconds for hot-reload, then check logs:

```bash
ssh yiukai@192.168.68.108 'journalctl --user -u openclaw-gateway --since "30 sec ago" --no-pager 2>&1 | grep -iE "AGENT_ID|error|reload"'
```

**Success indicators** (all three must appear):
- `[reload] config change detected` -- hot-reload triggered
- `[discord] [AGENT_ID] starting provider` -- bot connected to Discord
- `channels resolved: GUILD_ID/CHANNEL_ID` -- channel mapped successfully

**If hot-reload fails**, restart manually:

```bash
ssh yiukai@192.168.68.108 'systemctl --user restart openclaw-gateway'
```

Then recheck logs.

**If bot fails to connect**, common causes:
- Bot not invited to server -- remind user to use OAuth2 invite link
- Message Content Intent not enabled -- user must enable in Developer Portal
- Invalid token -- ask user to regenerate

### Step 7: Report to User

Summarize:
- Agent ID and display name
- Discord server and channel (by name if visible in logs)
- Mention requirement
- Model
- Next step: send a test message in the Discord channel

---

## Optional: Add Cron Job

If the user wants scheduled tasks:

```bash
ssh yiukai@192.168.68.108 'node -e "
const fs = require(\"fs\");
const path = \"/home/yiukai/.openclaw/openclaw.json\";
const cfg = JSON.parse(fs.readFileSync(path, \"utf8\"));
if (!cfg.cron) cfg.cron = { enabled: true, entries: [] };
if (!cfg.cron.entries) cfg.cron.entries = [];
cfg.cron.entries.push({
  name: \"JOB_NAME\",
  schedule: \"CRON_EXPRESSION\",
  timezone: \"Europe/Stockholm\",
  agentId: \"AGENT_ID\",
  message: \"TASK_PROMPT\",
  deliver: { channel: \"discord\", target: \"channel:CHANNEL_ID\" }
});
fs.writeFileSync(path, JSON.stringify(cfg, null, 2));
console.log(\"OK: cron job added\");
"'
```

## Optional: Enable Agent-to-Agent Communication

1. Add agent ID to `tools.agentToAgent.allow` array
2. Set `subagents.allowAgents` on the calling agent
3. Set `requireMention: true` on all collaborating agents in the shared guild
696
skills/openclaw/SKILL.md
Normal file
@@ -0,0 +1,696 @@
---
name: openclaw
description: Operate OpenClaw - self-hosted AI gateway connecting chat apps to AI agents. Manage gateway, agents, channels, skills, plugins, sessions, hooks, webhooks, cron jobs, and configuration.
---

# OpenClaw Operations

## Overview

OpenClaw is a self-hosted gateway connecting messaging platforms (WhatsApp, Telegram, Discord, Slack, Signal, iMessage, MS Teams, etc.) to AI coding agents. It runs on Node.js 22+ and uses a WebSocket-based control plane.

- Config: `~/.openclaw/openclaw.json` (JSON5 format, hot-reloads)
- Default port: `18789`
- Local repo: `C:\Users\yaoji\git\OpenSource\openclaw`
- Docs: https://docs.openclaw.ai/

## Gateway Management

### Start / Stop / Status

```bash
# Run gateway foreground
openclaw gateway
openclaw gateway run

# With options
openclaw gateway --port 18789 --bind loopback --verbose
openclaw gateway --dev                  # dev mode, creates config if missing
openclaw gateway --allow-unconfigured   # skip gateway.mode check
openclaw gateway --force                # kill existing listener on port

# Service lifecycle
openclaw gateway install [--port 18789] [--token TOKEN] [--force]
openclaw gateway start
openclaw gateway stop
openclaw gateway restart
openclaw gateway uninstall

# Status and diagnostics
openclaw gateway status [--json] [--deep] [--no-probe]
openclaw gateway health --url ws://127.0.0.1:18789
openclaw gateway probe [--json]

# Discovery (Bonjour/mDNS)
openclaw gateway discover [--timeout 4000] [--json]

# Low-level RPC
openclaw gateway call <method> [--params '{"key":"value"}']
openclaw gateway call status
openclaw gateway call logs.tail --params '{"sinceMs": 60000}'
```

### Gateway Config Keys

```bash
openclaw config set gateway.port 19001 --strict-json
openclaw config set gateway.bind "loopback"
openclaw config set gateway.auth.mode "token"
openclaw config set gateway.auth.token "my-secret"
openclaw config set gateway.http.endpoints.chatCompletions.enabled true --strict-json
```

## Configuration

Config file: `~/.openclaw/openclaw.json`

```bash
# Print config file path
openclaw config file

# Read values (dot/bracket notation)
openclaw config get agents.defaults.workspace
openclaw config get agents.list[0].id
openclaw config get channels.whatsapp.enabled

# Write values (JSON5 auto-parsed, use --strict-json for explicit)
openclaw config set agents.defaults.workspace "/path/to/workspace"
openclaw config set agents.defaults.heartbeat.every "2h"
openclaw config set channels.whatsapp.groups '["*"]' --strict-json
openclaw config set agents.list[0].tools.exec.node "node-id"

# Remove values
openclaw config unset tools.web.search.apiKey

# Validate
openclaw config validate [--json]

# Interactive wizard
openclaw configure
```
|
||||
|
||||
### Config Structure Reference

```json5
{
  // Identity
  identity: { name: "Pi", theme: "space lobster", emoji: "" },

  // Agents
  agents: {
    defaults: {
      workspace: "~/.openclaw/workspace",
      model: { primary: "anthropic/claude-sonnet-4-20250514", fallbacks: [] },
      skills: [], // skill allowlist
      sandbox: { mode: "off" }, // "off" | "non-main" | "all"
      heartbeat: { every: "1h" },
    },
    list: [
      {
        id: "main",
        default: true,
        workspace: "~/.openclaw/workspace",
        model: { primary: "anthropic/claude-sonnet-4-20250514" },
        skills: [],
        identity: { name: "Pi", emoji: "" },
        runtime: { type: "embedded" }, // "embedded" | "acp"
        subagents: { allowAgents: [], model: "..." },
      }
    ]
  },

  // Channels
  channels: {
    whatsapp: { enabled: true, dmPolicy: "pairing", allowFrom: ["+15555550123"], groups: { "*": { requireMention: true } } },
    telegram: { enabled: true, botToken: "...", dmPolicy: "open" },
    discord: { enabled: true, token: "..." },
    slack: { enabled: true, botToken: "...", signingSecret: "..." },
    signal: { enabled: true, phoneNumber: "+1..." },
    // imessage, googlechat, msteams, matrix, irc, line, feishu, mattermost, etc.
  },

  // Session
  session: {
    scope: "per-peer", // "main" | "per-peer" | "per-channel-peer" | "per-account-channel-peer"
    reset: { mode: "idle", idleMinutes: 120 },
    maintenance: { mode: "warn", pruneAfter: "30d", maxEntries: 500 },
  },

  // Skills
  skills: {
    allowBundled: [],
    load: { extraDirs: [], watch: true },
    install: { preferBrew: true, nodeManager: "npm" },
    entries: {
      "web-search": { enabled: true },
      "image-gen": { enabled: true, apiKey: "..." },
    }
  },

  // Plugins
  plugins: {
    enabled: true,
    allow: [],
    deny: [],
    load: { paths: [] },
    entries: {
      "my-plugin": { enabled: true, config: {} }
    }
  },

  // Tools
  tools: {
    web: { search: { enabled: true }, fetch: { enabled: true } },
    browser: { enabled: true },
    canvas: { enabled: true },
    media: { audio: { enabled: true } },
  },

  // Gateway
  gateway: {
    mode: "local",
    bind: "loopback", // "loopback" | "lan" | "tailnet" | "auto"
    port: 18789,
    auth: { mode: "token", token: "..." },
    controlUi: { enabled: true },
    http: {
      endpoints: {
        chatCompletions: { enabled: false },
        responses: { enabled: false },
      }
    },
  },

  // Hooks (webhooks)
  hooks: {
    enabled: true,
    token: "${OPENCLAW_HOOKS_TOKEN}",
    path: "/hooks",
    defaultSessionKey: "hook:ingress",
    allowRequestSessionKey: false,
    allowedAgentIds: ["main"],
    // Internal hooks (event-driven)
    internal: {
      enabled: true,
      entries: {
        "session-memory": { enabled: true },
        "command-logger": { enabled: false },
      }
    },
    // Webhook mappings
    mappings: [
      { match: { path: "gmail" }, action: "agent", agentId: "main", deliver: true }
    ],
  },

  // Cron
  cron: { enabled: true, maxConcurrentRuns: 2 },

  // ACP (Agent Control Protocol)
  acp: { enabled: false, backend: "acpx", maxConcurrentSessions: 5 },

  // Logging
  logging: { level: "info", redactSensitive: "tools" },

  // Environment
  env: { vars: { MY_KEY: "value" } },
}
```

## Agent Management

```bash
# List agents
openclaw agents list

# Add agent with workspace
openclaw agents add work --workspace ~/.openclaw/workspace-work

# Delete agent
openclaw agents delete work

# Routing bindings
openclaw agents bindings [--agent work] [--json]
openclaw agents bind --agent work --bind telegram:ops --bind discord:guild-a
openclaw agents unbind --agent work --bind telegram:ops
openclaw agents unbind --agent work --all

# Set agent identity
openclaw agents set-identity --agent main --name "Pi" --emoji ""
openclaw agents set-identity --workspace ~/.openclaw/workspace --from-identity
openclaw agents set-identity --agent main --avatar avatars/openclaw.png
```

### Agent Config Example

```json5
{
  agents: {
    list: [
      {
        id: "main",
        default: true,
        workspace: "~/.openclaw/workspace",
        identity: { name: "Pi", theme: "space lobster", emoji: "" },
      },
      {
        id: "work",
        workspace: "~/.openclaw/workspace-work",
        model: { primary: "openai/gpt-5" },
        skills: ["web-search", "code-runner"],
      }
    ]
  }
}
```

### Bootstrap Files

Place in agent workspace root:
- `AGENTS.md` - Operating instructions + memory
- `SOUL.md` - Persona, boundaries, tone
- `TOOLS.md` - User tool notes
- `BOOTSTRAP.md` - One-time ritual (deleted after first run)
- `IDENTITY.md` - Agent name/vibe
- `USER.md` - User profile

## Channel Management

```bash
# List and status
openclaw channels list
openclaw channels status [--probe]
openclaw channels capabilities [--channel discord --target channel:123]

# Add / remove accounts
openclaw channels add --channel telegram --token <bot-token>
openclaw channels remove --channel telegram [--delete]

# Login/logout (interactive)
openclaw channels login --channel whatsapp
openclaw channels logout --channel whatsapp

# Resolve names to IDs
openclaw channels resolve --channel slack "#general" "@jane"
openclaw channels resolve --channel discord "My Server/#support"

# Tail channel logs
openclaw channels logs --channel all
```

### DM Policies

- `pairing` (default) - Require pairing code approval
- `allowlist` - Only allow specific senders
- `open` - Allow all DMs
- `disabled` - Block all DMs

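Policies are set per channel via `channels.<name>.dmPolicy`. A minimal sketch, assuming `allowFrom` (shown in the config reference) is what backs the `allowlist` policy:

```json5
{
  channels: {
    telegram: { enabled: true, dmPolicy: "open" }, // anyone may DM
    whatsapp: {
      enabled: true,
      dmPolicy: "allowlist",
      allowFrom: ["+15555550123"], // only this sender may DM
    },
  },
}
```
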
## Sending Messages

```bash
# Send text
openclaw message send --channel telegram --target @mychat --message "Hello"

# Send with media
openclaw message send --channel discord --target channel:123 --message "See this" --media ./image.png

# Reply to message
openclaw message send --channel discord --target channel:123 --message "Reply" --reply-to 456

# Create poll
openclaw message poll --channel discord --target channel:123 \
  --poll-question "Lunch?" --poll-option Pizza --poll-option Sushi --poll-multi

# React
openclaw message react --channel slack --target C123 --message-id 456 --emoji "check"

# Read messages
openclaw message read --channel discord --target channel:123 --limit 20

# Edit / delete
openclaw message edit --channel discord --target channel:123 --message-id 789 --message "Updated"
openclaw message delete --channel discord --target channel:123 --message-id 789

# Broadcast to multiple targets
openclaw message broadcast --channel all --targets user1 --targets user2 --message "Announcement"

# Thread operations (Discord)
openclaw message thread create --channel discord --target channel:123 --thread-name "Discussion"
openclaw message thread list --channel discord --guild-id 456
openclaw message thread reply --channel discord --target thread:789 --message "Reply"
```

### Target Formats

| Channel | Format |
|---------|--------|
| WhatsApp | E.164 (`+15551234567`) or group JID |
| Telegram | chat id or `@username` |
| Discord | `channel:<id>` or `user:<id>` |
| Slack | `channel:<id>` or `user:<id>` |
| Signal | `+E.164`, `group:<id>`, `username:<name>` |
| iMessage | handle, `chat_id:<id>`, `chat_guid:<guid>` |
| MS Teams | `conversation:<id>` or `user:<aad-object-id>` |

## Session Management

```bash
# List sessions
openclaw sessions [--agent work] [--all-agents] [--json]
openclaw sessions --active 120  # active in last 120 minutes

# Cleanup
openclaw sessions cleanup --dry-run [--agent work] [--all-agents]
openclaw sessions cleanup --enforce
```

Session storage: `~/.openclaw/agents/<agentId>/sessions/`

### Session Scopes

- `main` - Single session per agent
- `per-peer` - One session per sender
- `per-channel-peer` - One session per sender per channel
- `per-account-channel-peer` - Full isolation

## Skills Management

Skills extend agent capabilities. Three sources (precedence: workspace > managed > bundled):
- **Bundled**: shipped with OpenClaw (web-search, browser, canvas, cron, etc.)
- **Managed**: `~/.openclaw/skills/`
- **Workspace**: `<workspace>/skills/`

```bash
# List skills
openclaw skills list [--eligible] [--verbose] [--json]

# Info and check
openclaw skills info <name> [--json]
openclaw skills check
```

### SKILL.md Format

```markdown
---
name: my-skill
description: What this skill does
requires:
  bins: [node, git]
  env: [MY_API_KEY]
  config: [tools.web.search.apiKey]
install:
  - kind: node
    package: my-skill-package
always: false
skillKey: MY_SKILL
emoji: ""
homepage: https://example.com
---

# My Skill

Instructions and tool definitions for the LLM agent...
```

### Skills Config

```json5
{
  skills: {
    entries: {
      "web-search": { enabled: true },
      "browser": { enabled: true },
      "image-gen": { enabled: true, apiKey: "sk-..." },
    }
  }
}
```

## Plugins Management

Plugins are in-process gateway extensions with full API access.

```bash
# List, info
openclaw plugins list
openclaw plugins info <id>

# Install
openclaw plugins install <path-or-npm-spec> [--pin] [--link]
openclaw plugins install -l ./my-plugin  # link local plugin

# Enable / disable
openclaw plugins enable <id>
openclaw plugins disable <id>

# Update
openclaw plugins update <id>
openclaw plugins update --all [--dry-run]

# Uninstall
openclaw plugins uninstall <id> [--keep-files] [--dry-run]

# Diagnostics
openclaw plugins doctor
```

### Plugin Manifest

Every plugin needs `openclaw.plugin.json` with:
- Plugin metadata
- `configSchema` (JSON Schema, even if empty)

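A hypothetical minimal manifest might look like this. Only `configSchema` is confirmed above; the metadata field names (`id`, `name`, `version`) are assumptions for illustration:

```json
{
  "id": "my-plugin",
  "name": "My Plugin",
  "version": "0.1.0",
  "configSchema": {}
}
```
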
## Webhooks (External Triggers)

Enable in config:

```json5
{
  hooks: {
    enabled: true,
    token: "shared-secret",
    path: "/hooks",
    defaultSessionKey: "hook:ingress",
  }
}
```

### Endpoints

**POST /hooks/wake** - Enqueue system event:
```bash
curl -X POST http://127.0.0.1:18789/hooks/wake \
  -H 'Authorization: Bearer SECRET' \
  -H 'Content-Type: application/json' \
  -d '{"text":"New email received","mode":"now"}'
```

**POST /hooks/agent** - Run isolated agent turn:
```bash
curl -X POST http://127.0.0.1:18789/hooks/agent \
  -H 'Authorization: Bearer SECRET' \
  -H 'Content-Type: application/json' \
  -d '{
    "message": "Summarize inbox",
    "name": "Email",
    "agentId": "main",
    "deliver": true,
    "channel": "telegram",
    "to": "123456789",
    "model": "anthropic/claude-sonnet-4-20250514",
    "timeoutSeconds": 120
  }'
```

**POST /hooks/\<name\>** - Custom mapped hooks (via `hooks.mappings`).

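As a sketch, the `mappings` entry from the config reference above exposes `POST /hooks/gmail` and routes it to the `main` agent:

```json5
{
  hooks: {
    mappings: [
      // POST /hooks/gmail -> isolated turn on agent "main", reply delivered
      { match: { path: "gmail" }, action: "agent", agentId: "main", deliver: true }
    ],
  },
}
```
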
### Auth

- Header: `Authorization: Bearer <token>` (recommended)
- Header: `x-openclaw-token: <token>`
- Query string tokens are rejected

## Internal Hooks (Event-Driven)

Hooks run inside the gateway on agent events.

```bash
# List / info / check
openclaw hooks list [--eligible] [--verbose] [--json]
openclaw hooks info <name> [--json]
openclaw hooks check [--json]

# Enable / disable
openclaw hooks enable <name>
openclaw hooks disable <name>

# Install hook packs
openclaw hooks install <path-or-npm-spec>
```

### Bundled Hooks

| Hook | Event | Purpose |
|------|-------|---------|
| session-memory | command:new | Save session context to memory files |
| bootstrap-extra-files | agent:bootstrap | Inject extra workspace bootstrap files |
| command-logger | command | Audit log all commands to JSONL |
| boot-md | gateway:startup | Run BOOT.md on gateway start |

### Creating Custom Hooks

1. Create directory: `~/.openclaw/hooks/my-hook/`
2. Create `HOOK.md`:

```markdown
---
name: my-hook
description: "Does something useful"
metadata: { "openclaw": { "emoji": "", "events": ["command:new"] } }
---

# My Hook
Description here.
```

3. Create `handler.ts`:

```typescript
const handler = async (event) => {
  if (event.type !== "command" || event.action !== "new") return;
  console.log("[my-hook] Triggered!");
  event.messages.push("Hook executed!");
};

export default handler;
```

4. Enable: `openclaw hooks enable my-hook`

### Event Types

- `command:new`, `command:reset`, `command:stop`
- `session:compact:before`, `session:compact:after`
- `agent:bootstrap`
- `gateway:startup`
- `message:received`, `message:sent`, `message:transcribed`, `message:preprocessed`

## Cron Jobs

```bash
# Add recurring job
openclaw cron add \
  --name "Morning brief" \
  --cron "0 7 * * *" \
  --session isolated \
  --message "Summarize overnight updates." \
  --announce --channel telegram --to "123456789"

# Add one-shot job
openclaw cron add --name "Reminder" --at "2026-03-15T10:00:00" --message "Check report"

# Edit job
openclaw cron edit <job-id> --announce --channel slack --to "channel:C1234567890"
openclaw cron edit <job-id> --no-deliver
openclaw cron edit <job-id> --light-context

# Full help
openclaw cron --help
```

## Onboarding & Setup

```bash
# Interactive onboarding wizard
openclaw onboard [--install-daemon]

# Setup workspace
openclaw setup

# System diagnostics
openclaw doctor [--fix]
openclaw status [--deep]
openclaw health

# Logs
openclaw logs [--follow]

# Update OpenClaw
openclaw update
```

## OpenAI-Compatible API

When enabled, the gateway exposes:
- `POST /v1/chat/completions` - OpenAI Chat Completions format
- `POST /v1/responses` - OpenAI Responses format

Enable:
```bash
openclaw config set gateway.http.endpoints.chatCompletions.enabled true --strict-json
openclaw config set gateway.http.endpoints.responses.enabled true --strict-json
```

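Once enabled, a request body in the standard OpenAI Chat Completions shape should work against the gateway. The model name below is a placeholder taken from the config reference:

```json
{
  "model": "anthropic/claude-sonnet-4-20250514",
  "messages": [
    { "role": "user", "content": "Hello" }
  ]
}
```
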
## Model Providers

25+ supported providers including:
- Anthropic (Claude)
- OpenAI (GPT)
- Ollama (local)
- OpenRouter
- AWS Bedrock
- Mistral, Qwen, vLLM, Deepgram, etc.

```bash
# Discover models
openclaw models [list]
```

## Common Workflows

### Initial Setup
```bash
npm install -g openclaw@latest
openclaw onboard --install-daemon
openclaw channels login
openclaw gateway
```

### Add New Channel
```bash
openclaw channels add --channel telegram --token BOT_TOKEN
openclaw agents bind --agent main --bind telegram
openclaw gateway restart
```

### Multi-Agent Setup
```bash
openclaw agents add work --workspace ~/.openclaw/workspace-work
openclaw agents bind --agent work --bind telegram:ops
openclaw agents bind --agent main --bind whatsapp
```

### Trigger Agent via API
```bash
curl -X POST http://127.0.0.1:18789/hooks/agent \
  -H 'Authorization: Bearer TOKEN' \
  -H 'Content-Type: application/json' \
  -d '{"message":"Analyze this data","deliver":false}'
```

### Enable Skill
```bash
openclaw config set skills.entries.web-search.enabled true --strict-json
openclaw gateway restart
```

## Troubleshooting

```bash
openclaw doctor [--fix]           # Guided diagnostics and repairs
openclaw status --deep            # Full system status audit
openclaw channels status --probe  # Channel connectivity check
openclaw config validate          # Config schema validation
openclaw gateway probe            # Debug gateway connectivity
openclaw logs --follow            # Tail gateway logs
```
174
skills/prod-error-triage/SKILL.md
Normal file
@@ -0,0 +1,174 @@
---
name: prod-error-triage
description: End-to-end production error triage workflow - search logs, diagnose root cause, fix code, create Jira ticket, create branch, commit, and create PR. Use when investigating production errors, log messages, or exceptions.
---

# Production Error Triage

End-to-end workflow for investigating production errors and shipping fixes.

## When to Use

Trigger when the user:
- Pastes a log message or error and asks to investigate
- Asks "why is X failing in prod"
- Wants to trace a production exception

## Defaults

- **Jira project_key**: `ALLPOST`
- **Jira component**: `BE`
- **Azure DevOps org**: `https://dev.azure.com/billodev`
- **Azure DevOps project**: `Billo App Platform`

## Workflow

Execute these phases in order. Report findings to the user after each phase before proceeding.

### Phase 1: Log Search & Context Gathering

1. **Search for the error** using `mcp__billo-es-logs__search_logs` with the error message or keywords
2. **Expand the time window** if no results (start with `now-1h`, widen to `now-24h`, `now-7d`)
3. **Get surrounding logs** by searching with the same `Correlation-ID` and a narrow time window around the error
4. **Quantify impact** using `count_only: true` to understand if this is isolated or widespread
5. **Check for patterns** - compare error logs with success logs using `sample: true` to find what differs

Key questions to answer:
- How many errors in the last 24h?
- Is it intermittent or constant?
- Which application/service is affected?
- Is there a Correlation-ID to trace the full request?

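The widening-window search in steps 1-2 can be sketched as a loop. Here `search` stands in for the MCP `search_logs` call; its signature is a hypothetical simplification for illustration:

```python
def widening_search(search, query, windows=("now-1h", "now-24h", "now-7d")):
    """Try progressively wider time windows until the query returns hits.

    `search` is any callable taking (query, time_range) and returning a
    list of log hits; returns the first window with results, or (None, []).
    """
    for window in windows:
        hits = search(query=query, time_range=window)
        if hits:
            return window, hits
    return None, []
```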
### Phase 2: Root Cause Analysis

1. **Read the stack trace** - identify the exact file and line number
2. **Read the source code** at the error location using the file path from the stack trace
3. **Trace upstream** - read the calling code to understand the full flow
4. **Identify the real error** - the logged exception may wrap the actual cause. Look for inner exceptions and upstream error logs with the same Correlation-ID
5. **Compare success vs failure** - if intermittent, determine what condition causes the divergence

Present findings to the user:
- Error chain (what calls what)
- Root cause (the actual bug, not the symptom)
- Why it is intermittent (if applicable)
- Impact scope

### Phase 3: Code Fix

1. **Implement the minimal fix** addressing the root cause
2. **Consider idempotency** - if the error is caused by retries, add guards to make the operation safe to retry
3. **Consider edge cases** - identify scenarios the fix might not cover (e.g. partial completion) and flag them to the user
4. **Show the diff** to the user and get confirmation before proceeding

#### Multi-Repo Changes

If the fix spans multiple repos (e.g. Infrastructure + Payment):
1. Fix the upstream repo first (e.g. shared library)
2. Merge and publish a new NuGet package version
3. Update the downstream repo to reference the new version
4. **Check dependency compatibility before updating**:
   - `Microsoft.Extensions.*` major version must match the downstream project's TFM (net9.0 = 9.x)
   - `AWSSDK.*` major version must not conflict with other transitive dependencies (e.g. MongoDB.Driver requires AWSSDK.Core < 4.0)
   - Run `dotnet restore` to verify before committing

### Phase 4: Jira Ticket

Create a ticket using `mcp__billo-es-logs__create_bug_ticket` with:

- **project_key**: `ALLPOST` (default, ask user if different)
- **component**: `BE`
- **priority**: Based on impact (2300+ errors/day = `Highest`)
- **summary**: Short, searchable - include error type and affected component
- **description**: Uses lightweight formatting that converts to Jira ADF:
  - Lines ending with `:` become **h3 headings** (e.g. `Problem:`)
  - Lines starting with `- ` become **bullet lists**
  - Text wrapped in `**` becomes **bold**
  - Everything else is a paragraph

```
Problem:
DownloadAndSendInvoiceCommandHandler fails with 409 BlobAlreadyExists

Impact:
- 2300+ errors in the last 24 hours
- Affects both regular and **reminder** invoices

Root Cause:
- AzureStorage.StoreFileAsync calls blobClient.UploadAsync() without overwrite flag
- No idempotency check in the handler

Fix:
Add idempotency guard to check **InvoiceTransaction** status before uploading

Files:
- Billo.Platform.Payment.Business/Commands/Handlers/DownloadAndSendInvoiceCommandHandler.cs
```

If the API returns 400, likely causes:
- Missing required field (e.g. `component`)
- Invalid `priority` value
- Wrong `project_key`

Use `mcp__billo-es-logs__search_tickets` with an existing ticket key to discover required fields.

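The description formatting rules can be sketched as a line classifier. This is an illustration of the conventions, not the actual converter used by the MCP tool:

```python
def classify_line(line: str) -> str:
    """Classify a description line per the lightweight Jira-ADF rules."""
    stripped = line.strip()
    if not stripped:
        return "blank"
    if stripped.startswith("- "):
        return "bullet"       # rendered as a bullet list item
    if stripped.endswith(":"):
        return "heading"      # rendered as h3, e.g. "Problem:"
    return "paragraph"        # bold handled inline via ** markers
```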
### Phase 5: Branch & Commit

1. **Create branch** using the naming convention `{prefix}/{TICKET_ID}_{description}`:
   ```
   bug/ALLPOST-4228_fix-invoice-upload-blob-already-exists
   fix/ALLPOST-4230_crash
   feature/ALLPOST-4028_login-page
   feat/ALLPOST-4028_login-page
   chore/ALLPOST-4031_cleanup
   ```
   Choose the prefix that best matches the work type. Any prefix is valid.
2. **Stage only the changed files** - never `git add .`
3. **Commit** with conventional commit format:
   ```
   fix: {description} ({TICKET_KEY})

   {Brief explanation of what and why}
   ```
4. **Ask before pushing** - do not push without user confirmation

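The branch convention is mechanical enough to sketch as a helper. The slug rules (lowercase, hyphens for non-alphanumerics) are an assumption inferred from the examples; the convention itself only fixes `{prefix}/{TICKET_ID}_{description}`:

```python
import re

def branch_name(prefix: str, ticket: str, description: str) -> str:
    """Build a branch name like bug/ALLPOST-4228_fix-invoice-upload."""
    # Lowercase and collapse anything non-alphanumeric into single hyphens.
    slug = re.sub(r"[^a-z0-9]+", "-", description.lower()).strip("-")
    return f"{prefix}/{ticket}_{slug}"
```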
### Phase 6: Create PR

Create the PR using the Azure DevOps CLI:

```bash
az repos pr create \
  --org "https://dev.azure.com/billodev" \
  --project "Billo App Platform" \
  --detect false \
  --repository "{REPO_NAME}" \
  --source-branch "{BRANCH}" \
  --target-branch "develop" \
  --title "{type}: {description} ({TICKET_KEY})" \
  --description "{summary of changes}"
```

Notes:
- `--project` is required; the command errors without it
- `--detect false` avoids auto-detection issues
- Return the PR URL to the user when done

## Tools Reference

| Phase | Tool | Purpose |
|-------|------|---------|
| Log search | `mcp__billo-es-logs__search_logs` | Search with query, time range, level, application |
| Impact | `mcp__billo-es-logs__search_logs` with `count_only: true` | Count matching errors |
| Patterns | `mcp__billo-es-logs__search_logs` with `sample: true` | Random sample from large result sets |
| Source code | `Read`, `Glob`, `Grep` | Find and read source files |
| Ticket lookup | `mcp__billo-es-logs__search_tickets` | Find existing tickets or discover field requirements |
| Ticket create | `mcp__billo-es-logs__create_bug_ticket` | Create Jira bug ticket |
| Git | `Bash` | Branch, commit, push |
| PR | `az repos pr create` | Create Azure DevOps pull request |

## Tips

- Always search logs before reading code - the logs tell you where to look
- Use `Correlation-ID` to trace a single request across services
- When errors are intermittent, the root cause is often in retry/concurrency behavior, not in the happy path
- When updating shared NuGet packages, always verify transitive dependency compatibility with downstream projects before publishing
- Flag edge cases to the user rather than silently ignoring them
50
skills/wsl-python/SKILL.md
Normal file
@@ -0,0 +1,50 @@
---
name: wsl-python
description: WSL + Conda Python workflow patterns for Invoice Master projects
---

# WSL Python Workflow

## Command Prefix (REQUIRED)

All Python commands MUST use this prefix:

```bash
wsl bash -c "source ~/miniconda3/etc/profile.d/conda.sh && conda activate invoice-sm120 && <command>"
```

NEVER run Python commands directly in Windows PowerShell/CMD.

## Common Commands

### Run Tests
```bash
wsl bash -c "source ~/miniconda3/etc/profile.d/conda.sh && conda activate invoice-sm120 && cd /mnt/c/Users/yaoji/git/ColaCoder/invoice-master-poc-v2 && pytest tests/ -v"
```

### Run Specific Test File
```bash
wsl bash -c "source ~/miniconda3/etc/profile.d/conda.sh && conda activate invoice-sm120 && cd /mnt/c/Users/yaoji/git/ColaCoder/invoice-master-poc-v2 && pytest tests/<path> -v -s"
```

### Run Tests with Coverage
```bash
wsl bash -c "source ~/miniconda3/etc/profile.d/conda.sh && conda activate invoice-sm120 && cd /mnt/c/Users/yaoji/git/ColaCoder/invoice-master-poc-v2 && pytest --cov=packages --cov-report=term-missing tests/"
```

### Format Code
```bash
wsl bash -c "source ~/miniconda3/etc/profile.d/conda.sh && conda activate invoice-sm120 && cd /mnt/c/Users/yaoji/git/ColaCoder/invoice-master-poc-v2 && black packages/ && ruff check --fix packages/"
```

## Environment Details

- Python: 3.10.19
- Conda env: `invoice-sm120`
- PDF DPI: 150 (not 300)
- Pre-existing test failures: `tests/shared/storage/test_s3.py`, `test_azure.py` (missing boto3/azure modules - safe to ignore)

## Path Mapping

- Windows: `c:\Users\yaoji\git\ColaCoder\invoice-master-poc-v2\`
- WSL: `/mnt/c/Users/yaoji/git/ColaCoder/invoice-master-poc-v2/`
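The mapping above is mechanical; a small helper sketch for converting paths (the drive-letter rule is the general WSL `/mnt/<drive>` convention, not specific to this repo):

```python
import re

def win_to_wsl(path: str) -> str:
    """Convert a Windows path like C:\\Users\\x to /mnt/c/Users/x."""
    m = re.match(r"^([A-Za-z]):[\\/](.*)$", path)
    if not m:
        return path  # already a POSIX/WSL path
    drive, rest = m.groups()
    return f"/mnt/{drive.lower()}/" + rest.replace("\\", "/")
```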