# Smart Support -- Demo Script

## Overview

This script walks through a live demonstration of Smart Support, showcasing multi-agent routing, human-in-the-loop interrupts, conversation replay, and the analytics dashboard.

## Prerequisites

- Docker and Docker Compose installed
- An API key for one of: Anthropic, OpenAI, or Google
## Setup (5 minutes)

### 1. Start the stack

```bash
cp .env.example .env
# Edit .env and add your ANTHROPIC_API_KEY (or other provider key)
docker compose up -d
```

Wait for all services to be healthy:

```bash
docker compose ps
# All services should show "healthy" or "running"
```

### 2. Seed demo data (optional)

```bash
docker compose exec backend python fixtures/demo_data.py
```

### 3. Open the app

Navigate to http://localhost in your browser.
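If you would rather script the health wait than eyeball `docker compose ps`, here is a minimal polling sketch. It assumes the `docker compose ps --format json` output format (one JSON object per line with `State` and `Health` fields); it is not part of the project itself.

```python
import json
import subprocess
import time


def services_healthy(ps_output: str) -> bool:
    """True if every service in `docker compose ps --format json` output
    (one JSON object per line) reports healthy or running."""
    ok_states = {"healthy", "running"}
    services = [json.loads(line) for line in ps_output.splitlines() if line.strip()]
    if not services:
        return False
    return all(
        (svc.get("Health") or svc.get("State") or "").lower() in ok_states
        for svc in services
    )


def wait_for_stack(timeout: float = 120.0, interval: float = 5.0) -> bool:
    """Poll `docker compose ps` until all services are up or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        out = subprocess.run(
            ["docker", "compose", "ps", "--format", "json"],
            capture_output=True, text=True,
        ).stdout
        if services_healthy(out):
            return True
        time.sleep(interval)
    return False
```

Useful in CI or demo-prep scripts where you want setup to fail fast instead of proceeding against a half-started stack.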
## Demo Flow

### Scene 1: Basic Chat (2 minutes)

- Open the **Chat** tab (default).
- Send: "What is the status of order 12345?"
- Observe the `tool_call` indicator appear in the sidebar (`order_lookup` calling `get_order_status`).
- The agent responds with the order status.
- Send: "Can you cancel that order?"
- The system detects a write operation and shows an **Interrupt Prompt**.
- Click **Approve** to confirm the cancellation.
- The agent confirms the cancellation.
Key points to highlight:
- Real-time token streaming (words appear as they are generated)
- Tool call visibility (transparency into what the agent is doing)
- Human-in-the-loop confirmation for write operations
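The write-operation gate above can be sketched as a small policy check. This is a toy illustration, not the project's actual code: the `WRITE_TOOLS` set and function names are made up (the real tool list lives in `agents.yaml`).

```python
from dataclasses import dataclass

# Illustrative tool names only; the real project configures these in agents.yaml.
WRITE_TOOLS = {"cancel_order", "update_order"}


@dataclass
class ToolCall:
    tool: str
    args: dict


def needs_approval(call: ToolCall) -> bool:
    """Write operations are gated behind a human approval interrupt."""
    return call.tool in WRITE_TOOLS


def execute(call: ToolCall, approved: bool = False) -> str:
    """Run the tool, or pause with an interrupt until a human approves."""
    if needs_approval(call) and not approved:
        return "interrupt: awaiting human approval"
    return f"executed {call.tool}"
```

The key design point is that the gate sits in front of execution, so no write can slip through while approval is pending.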
### Scene 2: Multi-Agent Routing (2 minutes)

- Start a new browser tab (new session) or clear session storage.
- Send: "I need to check order 12345 AND cancel order 67890"
- The supervisor detects two intents: `order_lookup` (read) and `order_actions` (write).
- Both agents run in sequence.
- The cancellation triggers an interrupt prompt for human approval.
Key points to highlight:
- Intent classification detecting multiple actions
- Automatic routing to appropriate specialist agents
- Sequential execution with confirmation gates for write operations
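To make the routing concrete, here is a toy supervisor that splits a message into intents and dispatches agents in order. The keyword matching is a stand-in for the real LLM-based intent classifier, and the agent names mirror the ones used in this demo.

```python
def classify_intents(message: str) -> list[str]:
    """Toy stand-in for LLM intent classification: map phrases to agents."""
    text = message.lower()
    intents = []
    if "check" in text or "status" in text:
        intents.append("order_lookup")   # read-only agent
    if "cancel" in text:
        intents.append("order_actions")  # write agent, approval-gated
    return intents


def route(message: str) -> list[str]:
    """Run matched agents sequentially, pausing on write operations."""
    log = []
    for agent in classify_intents(message):
        if agent == "order_actions":
            log.append(f"{agent}: interrupt, awaiting approval")
        else:
            log.append(f"{agent}: done")
    return log
```

Sequential execution keeps the confirmation gate simple: the run pauses at the write step rather than racing it against the read.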
### Scene 3: Conversation Replay (2 minutes)

- Click the **Replay** tab.
- The conversation list shows all sessions, including the ones just conducted.
- Click any thread to see the detailed step-by-step replay.
- Expand a `tool_call` step to see the parameters and result.
Key points to highlight:
- Full audit trail of every agent action
- Expandable params/result for debugging
- Pagination for long conversations
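A minimal sketch of the replay data shape: each step stores enough to reconstruct the thread, and long conversations are served a page at a time. Field names here are illustrative, not the project's actual schema.

```python
from dataclasses import dataclass, field


@dataclass
class Step:
    kind: str                  # e.g. "message" or "tool_call"
    content: str
    params: dict = field(default_factory=dict)  # tool-call parameters
    result: str = ""                            # tool-call result


def paginate(steps: list[Step], page: int, per_page: int = 20) -> list[Step]:
    """Return one page of a thread's steps (pages are 1-indexed)."""
    start = (page - 1) * per_page
    return steps[start:start + per_page]
```

Storing params and result on every tool-call step is what makes the expandable debugging view (and the audit trail) possible.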
### Scene 4: Analytics Dashboard (2 minutes)

- Click the **Dashboard** tab.
- Select the **7d** range.
- Point out:
- Total conversations and resolution rate
- Agent usage breakdown (which agents handled how many messages)
- Interrupt stats (approved vs. rejected vs. expired)
- Cost and token usage
Key points to highlight:
- Operational visibility into agent performance
- Cost tracking per conversation/agent
- Resolution and escalation rates
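The dashboard numbers above boil down to simple aggregates over conversation records. A sketch with made-up field names (the actual storage schema may differ):

```python
def summarize(conversations: list[dict]) -> dict:
    """Aggregate dashboard metrics from raw conversation records."""
    total = len(conversations)
    resolved = sum(1 for c in conversations if c["resolved"])
    interrupts = {"approved": 0, "rejected": 0, "expired": 0}
    tokens = 0
    for c in conversations:
        tokens += c.get("tokens", 0)
        for outcome in c.get("interrupts", []):
            interrupts[outcome] += 1
    return {
        "total": total,
        "resolution_rate": resolved / total if total else 0.0,
        "interrupts": interrupts,
        "tokens": tokens,
    }
```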
### Scene 5: OpenAPI Import (2 minutes)

- Click the **API Review** tab.
- Paste the URL: `http://localhost:8000/openapi.json` (or the sample API URL).
- Click **Import**.
- Watch the job status update from `pending` to `processing` to `done`.
- Review the classified endpoints table.
- Edit the `access_type` for a sensitive endpoint (e.g., change `read` to `write`).
- Click **Approve & Save**.
Key points to highlight:
- Zero-configuration discovery: paste a URL, get an agent
- AI-powered classification of endpoint sensitivity
- Human review gate before any endpoints go live
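For intuition, a first-pass `access_type` can come straight from the spec's HTTP methods, before any AI classification or human review. This is a sketch over a plain OpenAPI dict, not the project's actual classifier:

```python
def classify_endpoints(spec: dict) -> list[dict]:
    """Derive a default access_type per endpoint from its HTTP method:
    GET/HEAD default to read, everything else to write, pending review."""
    read_methods = {"get", "head"}
    rows = []
    for path, ops in spec.get("paths", {}).items():
        for method in ops:
            rows.append({
                "path": path,
                "method": method.upper(),
                "access_type": "read" if method.lower() in read_methods else "write",
                "status": "pending_review",  # nothing goes live without approval
            })
    return rows
```

Defaulting every endpoint to `pending_review` is what enforces the human gate: approval flips status, never import alone.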
## Troubleshooting

**WebSocket shows "disconnected":**

- Check that the backend container is running: `docker compose logs backend`
- Verify port 8000 is not blocked

**No LLM responses:**

- Confirm your API key is set in `.env`
- Check backend logs: `docker compose logs backend`

**Database errors:**

- Run: `docker compose restart backend`
- If tables are missing: `docker compose exec backend python -c "import asyncio; from app.db import *; ..."`