Backend:
- ConversationTracker: Protocol + PostgresConversationTracker for lifecycle tracking
- Error handler: ErrorCategory enum, classify_error(), with_retry() exponential backoff
- Wire PostgresAnalyticsRecorder + ConversationTracker into ws_handler
- Rate limiting (10 msg/10s per thread), edge case hardening
- Health endpoint GET /api/health, version 0.5.0
- Demo seed data script + sample OpenAPI spec

Frontend (all new):
- React Router with NavBar (Chat / Replay / Dashboard / Review)
- ReplayListPage + ReplayPage with ReplayTimeline component
- DashboardPage with MetricCard, range selector, zero-state
- ReviewPage for OpenAPI classification review
- ErrorBanner for WebSocket disconnect handling
- API client (api.ts) with typed fetch wrappers

Infrastructure:
- Frontend Dockerfile (multi-stage node -> nginx)
- nginx.conf with SPA routing + API/WS proxy
- docker-compose.yml with frontend service + healthchecks
- .env.example files (root + backend)

Documentation:
- README.md with quick start and architecture
- Agent configuration guide
- OpenAPI import guide
- Deployment guide
- Demo script

48 new tests, 449 total passing, 92.87% coverage
# Agent Configuration Guide

## Overview
Smart Support agents are defined in `backend/agents.yaml`. Each agent is a specialist with a specific role, permission level, and set of tools it can call.
## `agents.yaml` Structure
```yaml
agents:
  - name: order_agent
    description: "Handles order status, tracking, and cancellations."
    permission: write
    tools:
      - get_order_status
      - cancel_order
    personality:
      tone: friendly
      greeting: "I can help with your order. What is your order number?"
      escalation_message: "I'm escalating this to a human agent now."

  - name: refund_agent
    description: "Processes refund requests."
    permission: write
    tools:
      - process_refund
      - check_refund_eligibility
    personality:
      tone: empathetic
      greeting: "I'm the refund specialist. How can I help?"
      escalation_message: "I need to escalate this refund request."

  - name: general_agent
    description: "Answers general questions and FAQs."
    permission: read
    tools:
      - search_faq
      - fallback_respond
```
## Fields

### `name` (required)

Unique identifier used for routing. Must be alphanumeric with underscores.

### `description` (required)

Plain-text description of what this agent handles. Used by the supervisor to route user messages to the right agent. Be specific.

### `permission` (required)

Controls the interrupt threshold:

- `read` -- no interrupt required; the agent can act immediately.
- `write` -- requires human approval via interrupt before executing tools.
- `admin` -- requires human approval and is logged for audit.

### `tools` (required)

List of tool names this agent can use. Tools are registered in the agent factory, and each tool name must match a registered LangChain tool.

### `personality` (optional)

Customizes agent behavior:

- `tone` -- `friendly`, `formal`, `empathetic`, or `technical`.
- `greeting` -- Opening message injected at session start.
- `escalation_message` -- Message sent when the agent escalates.
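The field rules above can be captured in a small validation helper. The following is an illustrative sketch, not the project's actual loader: the function name is hypothetical, and the allowed value sets are taken directly from the descriptions above.

```python
import re

# Allowed values per the field descriptions above.
VALID_PERMISSIONS = {"read", "write", "admin"}
VALID_TONES = {"friendly", "formal", "empathetic", "technical"}

def validate_agent(entry: dict) -> list:
    """Return a list of human-readable problems (empty list = valid)."""
    errors = []
    name = entry.get("name", "")
    if not re.fullmatch(r"[A-Za-z0-9_]+", name):
        errors.append(f"name {name!r} must be alphanumeric with underscores")
    if not entry.get("description"):
        errors.append("description is required")
    if entry.get("permission") not in VALID_PERMISSIONS:
        errors.append(f"permission must be one of {sorted(VALID_PERMISSIONS)}")
    if not entry.get("tools"):
        errors.append("tools must list at least one tool name")
    tone = entry.get("personality", {}).get("tone")
    if tone is not None and tone not in VALID_TONES:
        errors.append(f"unknown tone {tone!r}")
    return errors

# A well-formed entry (matching the YAML example above) produces no errors:
ok = validate_agent({
    "name": "order_agent",
    "description": "Handles order status, tracking, and cancellations.",
    "permission": "write",
    "tools": ["get_order_status", "cancel_order"],
    "personality": {"tone": "friendly"},
})
```

Running a check like this at startup turns a typo in `agents.yaml` into an immediate, readable error instead of a mysterious routing failure later.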
## Built-in Templates

Set the `TEMPLATE_NAME` environment variable to load a pre-built agent configuration:

| Template | Description |
|---|---|
| `ecommerce` | Orders, refunds, shipping, product questions |
| `saas` | Account management, billing, technical support |
| `generic` | General-purpose FAQ and escalation |
Example:

```shell
TEMPLATE_NAME=ecommerce uvicorn app.main:app
```
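One plausible way the backend could resolve `TEMPLATE_NAME` to a config file is sketched below. The template directory, fallback path, and function name are assumptions for illustration, not the project's actual code:

```python
import os
from pathlib import Path

TEMPLATES_DIR = Path("backend/templates")  # assumed location of bundled templates
KNOWN_TEMPLATES = {"ecommerce", "saas", "generic"}

def resolve_config_path() -> Path:
    """Pick the agent config: a bundled template if TEMPLATE_NAME is set,
    otherwise the user's own agents.yaml."""
    template = os.environ.get("TEMPLATE_NAME")
    if template is None:
        return Path("backend/agents.yaml")
    if template not in KNOWN_TEMPLATES:
        raise ValueError(
            f"unknown template {template!r}; choose from {sorted(KNOWN_TEMPLATES)}"
        )
    return TEMPLATES_DIR / f"{template}.yaml"
```

Failing fast on an unknown template name keeps a misspelled `TEMPLATE_NAME` from silently falling back to the wrong agent set.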
## Adding New Agents

1. Add the agent definition to `agents.yaml`.
2. Register any new tools in `backend/app/agents/`.
3. Restart the backend.
The supervisor will automatically route to the new agent when the user's intent matches the agent's description.
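The registration step can be pictured as a name-to-callable registry. This is a plain-Python illustration of the pattern only, not the actual LangChain tool API or the project's agent factory; the names here are hypothetical:

```python
from typing import Callable

TOOL_REGISTRY: dict = {}

def register_tool(fn: Callable) -> Callable:
    """Decorator: make a function available to agents under its name."""
    TOOL_REGISTRY[fn.__name__] = fn
    return fn

@register_tool
def get_order_status(order_id: str) -> str:
    # Stub body: a real tool would query the order service.
    return f"Order {order_id}: shipped"

def missing_tools(tool_names: list) -> list:
    """At load time, every name in an agent's `tools` list must resolve."""
    return [t for t in tool_names if t not in TOOL_REGISTRY]
```

With this shape, an agent listing `cancel_order` before that tool is registered is caught at startup rather than at call time.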
## Agent Routing Logic

1. The user sends a message.
2. The LLM supervisor classifies the intent against all agent descriptions.
3. If the intent is unambiguous, the matching agent is invoked directly.
4. If it is ambiguous (multiple plausible agents), the system asks a clarification question.
5. If the message contains multiple intents, agents are invoked sequentially.
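The steps above can be sketched as a toy router. The real supervisor asks the LLM to score intent against each description; here a naive keyword overlap stands in for that call, and all names are illustrative:

```python
def route(message: str, agents: dict):
    """Return the matching agent name(s), or "clarify" when nothing matches.

    `agents` maps agent name -> description. Keyword overlap is a stand-in
    for the LLM intent classification described above.
    """
    words = set(message.lower().split())
    matches = [name for name, desc in agents.items()
               if words & set(desc.lower().split())]
    if not matches:
        return "clarify"   # no plausible agent: ask the user a question
    return matches         # one match -> invoke directly; several -> sequential

AGENTS = {
    "order_agent": "order status tracking cancellations",
    "refund_agent": "refund requests",
}
```

For example, "where is my order" routes straight to `order_agent`, while a greeting with no overlapping intent triggers a clarification question.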
## Escalation

Any agent can trigger escalation by calling the `escalate` tool. This:

1. Sends a webhook notification (if `WEBHOOK_URL` is configured).
2. Marks the conversation with `resolution_type = escalated`.
3. Sends the agent's `escalation_message` to the user.
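The three effects above can be sketched as one handler. This is illustrative only: the webhook is shown with stdlib `urllib` and fired only when `WEBHOOK_URL` is set, and the conversation/payload shapes are hypothetical:

```python
import json
import os
import urllib.request

def escalate(conversation: dict, escalation_message: str) -> dict:
    """Apply the three escalation effects described above (sketch)."""
    # 1. Webhook notification, only if configured.
    webhook_url = os.environ.get("WEBHOOK_URL")
    if webhook_url:
        payload = json.dumps({
            "event": "escalated",
            "conversation_id": conversation["id"],
        }).encode()
        req = urllib.request.Request(
            webhook_url, data=payload,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)
    # 2. Mark the conversation for analytics.
    conversation["resolution_type"] = "escalated"
    # 3. Queue the agent's escalation_message for delivery to the user.
    conversation.setdefault("outgoing", []).append(escalation_message)
    return conversation
```

Guarding the webhook on `WEBHOOK_URL` means escalation still works in local setups where no notification endpoint is configured.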