# Smart Support
AI customer support action layer. Paste your API spec, get an AI agent that executes real actions.
## The Problem
Existing support tools (Zendesk, Intercom, Ada) answer FAQs well, but automation rates stall at 20-30%. The remaining 70% of tickets still require human agents to log into internal systems manually to look up orders, cancel them, or issue coupons.
Smart Support fills that gap as the "action layer": it does not replace your existing support platform; it enables AI to call your internal systems directly.
## How It Works
```
User message -> Chat UI -> FastAPI WebSocket -> LangGraph Supervisor -> Specialist Agent -> MCP Tools -> Your systems
                                                          |                     |
                                                   Agent Registry          interrupt()
                                                   (YAML config)        (human approval)
                                                          |
                                                    PostgresSaver
                                              (session persistence)
```
- User sends a message in the chat UI.
- LangGraph Supervisor classifies intent and routes to the right agent.
- Agent calls your internal systems via MCP tools.
- Write operations trigger a human-in-the-loop approval gate.
- All operations are logged with full replay and analytics.
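The routing step above can be sketched in plain Python. This is a simplified, hypothetical illustration of what the supervisor decides (the real system uses an LLM-backed LangGraph supervisor, and the agent names here are illustrative, not the ones in `agents.yaml`):

```python
# Hypothetical sketch of the supervisor's routing decision: match a user
# message against per-agent keywords and dispatch to a specialist agent.
# Agent names and keyword lists are illustrative only.
AGENT_KEYWORDS = {
    "orders_agent": ("order", "cancel", "shipment", "tracking"),
    "billing_agent": ("refund", "invoice", "coupon", "charge"),
}

def route(message: str, default: str = "general_agent") -> str:
    """Return the name of the specialist agent for a message."""
    lowered = message.lower()
    for agent, keywords in AGENT_KEYWORDS.items():
        if any(word in lowered for word in keywords):
            return agent
    return default
```

In the real system the classifier is an LLM call, but the contract is the same: one message in, one agent name out.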
## Key Features
- Multi-agent routing -- each operation goes to a specialist agent with its own tools and permissions
- Zero-config import -- paste an OpenAPI 3.0 URL, agents are generated automatically
- Human-in-the-loop -- all write operations (cancel, refund, modify) require approval; reads execute immediately
- Session context -- multi-turn conversation with persistent state across reconnects
- Real-time streaming -- WebSocket token streaming with live tool call visibility
- Conversation replay -- step-by-step audit trail of every agent decision
- Analytics dashboard -- resolution rate, agent usage, escalation rate, cost per conversation
- YAML-driven config -- agents, personas, and vertical templates in a single file
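The human-in-the-loop gate boils down to a read/write decision per tool call. A minimal sketch, assuming write tools follow a naming convention (the prefixes and tool names below are illustrative, not the project's actual rules from `safety.py`):

```python
# Hypothetical approval gate: read-only tool calls run immediately,
# write operations pause for human approval. The prefix list is an
# assumption for illustration; the real rules live in safety.py.
WRITE_PREFIXES = ("cancel_", "refund_", "modify_", "create_", "delete_")

def requires_approval(tool_name: str) -> bool:
    """True if a tool call must be approved by a human before running."""
    return tool_name.startswith(WRITE_PREFIXES)
```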
## Tech Stack
| Component | Technology |
|---|---|
| Backend | Python 3.11+, FastAPI |
| Agent orchestration | LangGraph 1.x, langgraph-supervisor |
| Session state | PostgreSQL 16 + langgraph-checkpoint-postgres |
| LLM | Claude Sonnet 4.6 (configurable: OpenAI, Azure OpenAI, Google) |
| Frontend | React 19, TypeScript, Vite |
| Testing | pytest (backend), vitest + happy-dom (frontend) |
| Deployment | Docker Compose |
## Quick Start
```bash
git clone <repo-url>
cd smart-support

# Configure your LLM API key
cp .env.example .env
# Edit .env: set LLM_PROVIDER and the corresponding API key
#   anthropic    -> ANTHROPIC_API_KEY
#   openai       -> OPENAI_API_KEY
#   azure_openai -> AZURE_OPENAI_API_KEY + AZURE_OPENAI_ENDPOINT + AZURE_OPENAI_DEPLOYMENT
#   google       -> GOOGLE_API_KEY

# Start all services
docker compose up -d

# Open the app
open http://localhost
```
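The provider-to-variable mapping above can be validated programmatically. A small sketch (the variable names follow `.env.example`; the helper itself is illustrative, not project code):

```python
# Illustrative startup check: each LLM_PROVIDER value requires specific
# environment variables. Variable names mirror .env.example above.
import os

REQUIRED_VARS = {
    "anthropic": ["ANTHROPIC_API_KEY"],
    "openai": ["OPENAI_API_KEY"],
    "azure_openai": ["AZURE_OPENAI_API_KEY", "AZURE_OPENAI_ENDPOINT",
                     "AZURE_OPENAI_DEPLOYMENT"],
    "google": ["GOOGLE_API_KEY"],
}

def missing_vars(provider: str, env=os.environ) -> list[str]:
    """Return the required variables for `provider` that are unset."""
    return [v for v in REQUIRED_VARS.get(provider, []) if not env.get(v)]
```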
## Local Development
```bash
# Start only PostgreSQL via Docker (exposed on port 5433)
docker compose up postgres -d

# Backend (in one terminal)
cd backend
pip install -e ".[dev]"
uvicorn app.main:app --host 0.0.0.0 --port 8001 --reload

# Frontend (in another terminal)
cd frontend
npm install
npm run dev   # http://localhost:5173 (proxies /api and /ws to :8001)
```
See Deployment Guide for production setup, HTTPS, and scaling.
## Project Structure
```
smart-support/
├── backend/
│   ├── app/
│   │   ├── main.py          # FastAPI + WebSocket entry point
│   │   ├── graph.py         # LangGraph Supervisor
│   │   ├── ws_handler.py    # WebSocket message dispatch + rate limiting
│   │   ├── safety.py        # Confirmation rules + MCP error taxonomy
│   │   ├── agents/          # Agent definitions and tools
│   │   ├── registry.py      # YAML agent registry loader
│   │   ├── openapi/         # OpenAPI parser, classifier, and review API
│   │   ├── replay/          # Conversation replay API
│   │   └── analytics/       # Analytics queries and API
│   ├── agents.yaml          # Agent registry configuration
│   ├── templates/           # Vertical industry templates
│   └── tests/               # Unit, integration, and E2E tests
├── frontend/
│   ├── src/
│   │   ├── pages/           # Chat, Replay, Dashboard, Review pages
│   │   ├── components/      # NavBar, Layout, MetricCard, ReplayTimeline
│   │   ├── hooks/           # useWebSocket with reconnect support
│   │   └── api.ts           # Typed API client
│   └── Dockerfile           # Multi-stage nginx build
├── docs/                    # Architecture, deployment, guides
├── docker-compose.yml       # Full-stack compose
└── .env.example             # Environment variable template
```
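The registry loader (`registry.py`) essentially turns `agents.yaml` into an index of agent definitions. A minimal sketch under assumed schema fields (`name`, `system_prompt`, `tools`); see the Agent Config Guide for the real format:

```python
# Illustrative registry loader: validate each agent entry from a parsed
# agents.yaml dict and index agents by name. The schema here is an
# assumption for the sketch, not the project's actual format.
from dataclasses import dataclass

@dataclass
class AgentSpec:
    name: str
    system_prompt: str
    tools: list[str]

def load_registry(config: dict) -> dict[str, AgentSpec]:
    """Build a name -> AgentSpec index from a parsed agents.yaml dict."""
    registry: dict[str, AgentSpec] = {}
    for entry in config.get("agents", []):
        spec = AgentSpec(entry["name"], entry["system_prompt"],
                         list(entry.get("tools", [])))
        if spec.name in registry:
            raise ValueError(f"duplicate agent name: {spec.name}")
        registry[spec.name] = spec
    return registry
```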
## API Endpoints
| Method | Path | Description |
|---|---|---|
| WS | /ws | Main WebSocket chat endpoint |
| GET | /api/health | Health check |
| GET | /api/conversations | List conversations (paginated) |
| GET | /api/replay/{thread_id} | Replay conversation steps (paginated) |
| GET | /api/analytics | Analytics summary (?range=7d) |
| POST | /api/openapi/import | Start OpenAPI import job |
| GET | /api/openapi/jobs/{id} | Check import job status |
| GET | /api/openapi/jobs/{id}/classifications | Get endpoint classifications |
| PUT | /api/openapi/jobs/{id}/classifications/{idx} | Update a classification |
| POST | /api/openapi/jobs/{id}/approve | Approve and generate tools |
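The import pipeline's first pass over a spec can be sketched as a walk over the OpenAPI 3.0 `paths` object, marking each operation read or write (the write ones later require human approval). A simplified illustration, assuming the path items contain only HTTP-method keys; the real classifier lives under `backend/app/openapi/`:

```python
# Illustrative endpoint classifier for an OpenAPI 3.0 spec: GET-like
# methods are "read", everything else is "write". Simplified sketch;
# real path items may also carry non-method keys like "parameters".
READ_METHODS = {"get", "head", "options"}

def classify_spec(spec: dict) -> list[dict]:
    """Return [{'method', 'path', 'kind'}] for every operation in a spec."""
    rows = []
    for path, ops in spec.get("paths", {}).items():
        for method in ops:
            kind = "read" if method.lower() in READ_METHODS else "write"
            rows.append({"method": method.upper(), "path": path, "kind": kind})
    return rows
```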
## Running Tests
```bash
# Backend (516 tests, 94% coverage)
cd backend
pytest --cov=app --cov-report=term-missing

# Frontend (23 tests, vitest + happy-dom)
cd frontend
npm test
```
Backend coverage is enforced at 80%+.
## Documentation
| Document | Description |
|---|---|
| Architecture | System design, component diagram, data flow, ADRs |
| Development Plan | Phase breakdown, task checklists, and status |
| Agent Config Guide | agents.yaml format, fields, templates, routing logic |
| OpenAPI Import Guide | Auto-discovery workflow, REST API, SSRF protection |
| Deployment Guide | Docker, local dev, production, HTTPS, backups, scaling |
| Demo Script | Step-by-step live demo walkthrough (5 scenes) |
| UX Design System | Color palette, typography, component patterns, CSS tokens |
## License
MIT