
Smart Support -- Demo Script

Overview

This script walks through a live demonstration of Smart Support, showcasing multi-agent routing, human-in-the-loop interrupts, conversation replay, and the analytics dashboard.

Prerequisites

  • Docker and Docker Compose installed
  • API key for one of: Anthropic, OpenAI, or Google

Setup (5 minutes)

1. Start the stack

cp .env.example .env
# Edit .env and add your ANTHROPIC_API_KEY (or other provider key)
docker compose up -d

Wait for all services to be healthy:

docker compose ps
# All services should show "healthy" or "running"

2. Seed demo data (optional)

docker compose exec backend python fixtures/demo_data.py

3. Open the app

Navigate to http://localhost in your browser.


Demo Flow

Scene 1: Basic Chat (2 minutes)

  1. Open the Chat tab (default).
  2. Send: "What is the status of order 12345?"
    • Observe the tool_call indicator appear in the sidebar (order_lookup calling get_order_status).
    • The agent responds with order status.
  3. Send: "Can you cancel that order?"
    • The system detects a write operation and shows an Interrupt Prompt.
    • Click Approve to confirm the cancellation.
    • The agent confirms cancellation.

Key points to highlight:

  • Real-time token streaming (words appear as they are generated)
  • Tool call visibility (transparency into what the agent is doing)
  • Human-in-the-loop confirmation for write operations
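The gate behind that Approve click can be sketched as follows. This is an illustrative sketch, not the actual Smart Support code: the tool names mirror the demo (get_order_status, cancel_order), but the function and its signature are assumptions.

```python
# Illustrative human-in-the-loop gate: read tools run directly,
# write tools pause for a human decision before executing.
READ_TOOLS = {"get_order_status"}   # assumed read-only tool
WRITE_TOOLS = {"cancel_order"}      # assumed write tool

def run_tool(name: str, args: dict, ask_approval) -> str:
    if name in WRITE_TOOLS:
        # Interrupt: surface the pending call and wait for approval.
        if not ask_approval(name, args):
            return f"{name} rejected by reviewer"
    return f"{name} executed with {args}"

# Auto-approve here purely for demonstration.
result = run_tool("cancel_order", {"order_id": "12345"}, lambda n, a: True)
```

In the real app the approval callback is the Interrupt Prompt shown in the UI; the key property is that the write never executes until a human answers.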

Scene 2: Multi-Agent Routing (2 minutes)

  1. Open a new browser tab (for a fresh session) or clear session storage.
  2. Send: "I need to check order 12345 AND cancel order 67890"
    • The supervisor detects two intents: order_lookup (read) and order_actions (write).
    • Both agents run in sequence.
    • The cancellation triggers an interrupt prompt for human approval.

Key points to highlight:

  • Intent classification detecting multiple actions
  • Automatic routing to appropriate specialist agents
  • Sequential execution with confirmation gates for write operations
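Conceptually, the supervisor's job in this scene looks like the sketch below. The keyword matching and route table are hypothetical stand-ins for the LLM-based intent classifier; only the agent names (order_lookup, order_actions) come from the demo.

```python
# Illustrative multi-intent routing: map each detected intent to a
# specialist agent and flag write intents for an approval gate.
ROUTES = {"lookup": "order_lookup", "cancel": "order_actions"}  # assumed mapping

def route(message: str) -> list[tuple[str, bool]]:
    """Return (agent, needs_approval) pairs, in execution order."""
    plan = []
    lowered = message.lower()
    if "check" in lowered or "status" in lowered:
        plan.append((ROUTES["lookup"], False))   # read: no gate
    if "cancel" in lowered:
        plan.append((ROUTES["cancel"], True))    # write: approval gate
    return plan

plan = route("I need to check order 12345 AND cancel order 67890")
# Two agents run in sequence; the second entry triggers an interrupt.
```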

Scene 3: Conversation Replay (2 minutes)

  1. Click the Replay tab.
  2. The conversation list shows all sessions, including the ones just conducted.
  3. Click any thread to see the detailed step-by-step replay.
  4. Expand a tool_call step to see the parameters and result.

Key points to highlight:

  • Full audit trail of every agent action
  • Expandable params/result for debugging
  • Pagination for long conversations
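The shape of a replay step can be pictured like this; the field names are assumptions for illustration, not the actual API schema.

```python
# Illustrative replay step record: each step carries its type plus
# expandable params/result for tool calls (the audit trail).
from dataclasses import dataclass, field

@dataclass
class ReplayStep:
    kind: str                       # e.g. "message" or "tool_call"
    agent: str
    params: dict = field(default_factory=dict)
    result: str = ""

steps = [
    ReplayStep("message", "order_lookup"),
    ReplayStep("tool_call", "order_lookup",
               params={"order_id": "12345"}, result="status: shipped"),
]
# "Expanding" a tool_call step means surfacing its params and result.
expanded = [(s.params, s.result) for s in steps if s.kind == "tool_call"]
```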

Scene 4: Analytics Dashboard (2 minutes)

  1. Click the Dashboard tab.
  2. Select the 7d range.
  3. Point out:
    • Total conversations and resolution rate
    • Agent usage breakdown (which agents handled how many messages)
    • Interrupt stats (approved vs. rejected vs. expired)
    • Cost and token usage

Key points to highlight:

  • Operational visibility into agent performance
  • Cost tracking per conversation/agent
  • Resolution and escalation rates
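As a back-of-the-envelope illustration of the metrics above (hypothetical records, not the real analytics queries), the resolution rate and interrupt breakdown reduce to simple aggregations over conversation data:

```python
# Illustrative dashboard aggregation over hypothetical conversation records.
from collections import Counter

conversations = [
    {"resolved": True,  "interrupt": "approved"},
    {"resolved": True,  "interrupt": None},
    {"resolved": False, "interrupt": "rejected"},
    {"resolved": False, "interrupt": "expired"},
]

# Fraction of conversations marked resolved.
resolution_rate = sum(c["resolved"] for c in conversations) / len(conversations)

# Approved vs. rejected vs. expired interrupts.
interrupts = Counter(c["interrupt"] for c in conversations if c["interrupt"])
```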

Scene 5: OpenAPI Import (2 minutes)

  1. Click the API Review tab.
  2. Paste the URL: http://localhost:8000/openapi.json (or the sample API URL)
  3. Click Import.
  4. Watch the job status update from pending to processing to done.
  5. Review the classified endpoints table.
  6. Edit the access_type for a sensitive endpoint (e.g., change read to write).
  7. Click Approve & Save.

Key points to highlight:

  • Zero-configuration discovery: paste a URL, get an agent
  • AI-powered classification of endpoint sensitivity
  • Human review gate before any endpoints go live
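The pending → processing → done progression in step 4 is a classic polling loop; the sketch below simulates it in Python. The state names match the demo, but the polling function and backend are stand-ins, not the documented API.

```python
# Illustrative job-status polling: poll until the import job leaves
# the pending/processing states, with a cap on attempts.
def poll_job(fetch_status, max_polls: int = 10) -> str:
    """fetch_status() returns the current job state as a string."""
    for _ in range(max_polls):
        state = fetch_status()
        if state not in {"pending", "processing"}:
            return state
    return "timeout"

# Simulated backend: pending -> processing -> done.
states = iter(["pending", "processing", "done"])
final = poll_job(lambda: next(states))
```

In the UI this polling happens behind the scenes; the table of classified endpoints renders once the job reaches done.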

Troubleshooting

WebSocket shows "disconnected":

  • Check that the backend container is running and inspect its logs: docker compose ps backend and docker compose logs backend
  • Verify port 8000 is not blocked

No LLM responses:

  • Confirm your API key is set in .env
  • Check backend logs: docker compose logs backend

Database errors:

  • Run: docker compose restart backend
  • If tables are missing: docker compose exec backend python -c "import asyncio; from app.db import *; ..."