# Smart Support

AI customer support action layer. Paste your API spec, get an AI agent that executes real actions.

## The Problem

Existing support tools (Zendesk, Intercom, Ada) answer FAQs well, but automation rates stall at 20-30%. The remaining 70% of tickets require agents to manually log into internal systems to look up orders, cancel orders, or issue coupons.

Smart Support fills that gap as the "action layer" -- it does not replace your existing support platform; it enables AI to directly call your internal systems.

## How It Works

```
User message -> Chat UI -> FastAPI WebSocket -> LangGraph Supervisor -> Specialist Agent -> MCP Tools -> Your systems
                                                         |                      |
                                                  Agent Registry           interrupt()
                                                  (YAML config)         (human approval)
                                                         |
                                                   PostgresSaver
                                              (session persistence)
```

1. User sends a message in the chat UI.
2. The LangGraph Supervisor classifies intent and routes to the right agent.
3. The agent calls your internal systems via MCP tools.
4. Write operations trigger a human-in-the-loop approval gate.
5. All operations are logged with full replay and analytics.
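The routing step (2) can be sketched in a few lines. This is an illustrative stand-in, not the project's actual `graph.py`: the real supervisor classifies intent with an LLM, while this stub uses keyword rules, and the agent names below are hypothetical examples rather than the shipped registry.

```python
# Minimal sketch of supervisor-style intent routing (illustrative only).
# The real supervisor uses an LLM classifier; this stub uses keyword rules.
# Agent names and keywords below are hypothetical.

ROUTES = {
    "order_agent": ("order", "shipping", "tracking"),
    "refund_agent": ("refund", "cancel", "chargeback"),
    "coupon_agent": ("coupon", "discount", "promo"),
}

def route(message: str) -> str:
    """Return the name of the specialist agent for a user message."""
    text = message.lower()
    for agent, keywords in ROUTES.items():
        if any(k in text for k in keywords):
            return agent
    return "general_agent"  # fallback when no specialist matches
```

The real system makes the same decision per message, but the classification is model-driven and the agent set comes from `agents.yaml`.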
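The approval gate in step (4) can be illustrated with a similarly small sketch. This is not the project's `safety.py`; the HTTP-method heuristic here (reads pass through, mutating methods pause for approval) is an assumption based on the behavior described in this README, and the `always_gate` parameter is a hypothetical extension.

```python
# Illustrative sketch of a write-gating rule (not the actual safety.py).
# Assumption: read-only HTTP methods execute immediately, while mutating
# methods (cancel, refund, modify, ...) must pause for human approval.

READ_METHODS = frozenset({"GET", "HEAD", "OPTIONS"})

def requires_approval(method: str, path: str,
                      always_gate: frozenset[str] = frozenset()) -> bool:
    """Return True if the operation must pause for human approval.

    always_gate (hypothetical) lets specific paths be gated even for
    reads, e.g. endpoints that expose sensitive customer data.
    """
    if path in always_gate:
        return True
    return method.upper() not in READ_METHODS
```

In LangGraph terms, a gated operation is where the graph would call `interrupt()` and wait for the human decision before resuming.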
## Key Features

- **Multi-agent routing** -- each operation goes to a specialist agent with its own tools and permissions
- **Zero-config import** -- paste an OpenAPI 3.0 URL, agents are generated automatically
- **Human-in-the-loop** -- all write operations (cancel, refund, modify) require approval; reads execute immediately
- **Session context** -- multi-turn conversation with persistent state across reconnects
- **Real-time streaming** -- WebSocket token streaming with live tool call visibility
- **Conversation replay** -- step-by-step audit trail of every agent decision
- **Analytics dashboard** -- resolution rate, agent usage, escalation rate, cost per conversation
- **YAML-driven config** -- agents, personas, and vertical templates in a single file

## Tech Stack

| Component | Technology |
|-----------|-----------|
| Backend | Python 3.11+, FastAPI |
| Agent orchestration | LangGraph 1.x, langgraph-supervisor |
| Session state | PostgreSQL 16 + langgraph-checkpoint-postgres |
| LLM | Claude Sonnet 4.6 (configurable: OpenAI, Azure OpenAI, Google) |
| Frontend | React 19, TypeScript, Vite |
| Testing | pytest (backend), vitest + happy-dom (frontend) |
| Deployment | Docker Compose |

## Quick Start

```bash
git clone
cd smart-support

# Configure your LLM API key
cp .env.example .env
# Edit .env: set LLM_PROVIDER and the corresponding API key
#   anthropic    -> ANTHROPIC_API_KEY
#   openai       -> OPENAI_API_KEY
#   azure_openai -> AZURE_OPENAI_API_KEY + AZURE_OPENAI_ENDPOINT + AZURE_OPENAI_DEPLOYMENT
#   google       -> GOOGLE_API_KEY

# Start all services
docker compose up -d

# Open the app
open http://localhost
```

### Local Development

```bash
# Start only PostgreSQL via Docker (exposed on port 5433)
docker compose up postgres -d

# Backend (in one terminal)
cd backend
pip install -e ".[dev]"
uvicorn app.main:app --host 0.0.0.0 --port 8001 --reload

# Frontend (in another terminal)
cd frontend
npm install
npm run dev   # http://localhost:5173 (proxies /api and /ws to :8001)
```

See
[Deployment Guide](docs/deployment.md) for production setup, HTTPS, and scaling.

## Project Structure

```
smart-support/
├── backend/
│   ├── app/
│   │   ├── main.py          # FastAPI + WebSocket entry point
│   │   ├── graph.py         # LangGraph Supervisor
│   │   ├── ws_handler.py    # WebSocket message dispatch + rate limiting
│   │   ├── safety.py        # Confirmation rules + MCP error taxonomy
│   │   ├── agents/          # Agent definitions and tools
│   │   ├── registry.py      # YAML agent registry loader
│   │   ├── openapi/         # OpenAPI parser, classifier, and review API
│   │   ├── replay/          # Conversation replay API
│   │   └── analytics/       # Analytics queries and API
│   ├── agents.yaml          # Agent registry configuration
│   ├── templates/           # Vertical industry templates
│   └── tests/               # Unit, integration, and E2E tests
├── frontend/
│   ├── src/
│   │   ├── pages/           # Chat, Replay, Dashboard, Review pages
│   │   ├── components/      # NavBar, Layout, MetricCard, ReplayTimeline
│   │   ├── hooks/           # useWebSocket with reconnect support
│   │   └── api.ts           # Typed API client
│   └── Dockerfile           # Multi-stage nginx build
├── docs/                    # Architecture, deployment, guides
├── docker-compose.yml       # Full-stack compose
└── .env.example             # Environment variable template
```

## API Endpoints

| Method | Path | Description |
|--------|------|-------------|
| WS | `/ws` | Main WebSocket chat endpoint |
| GET | `/api/health` | Health check |
| GET | `/api/conversations` | List conversations (paginated) |
| GET | `/api/replay/{thread_id}` | Replay conversation steps (paginated) |
| GET | `/api/analytics` | Analytics summary (`?range=7d`) |
| POST | `/api/openapi/import` | Start OpenAPI import job |
| GET | `/api/openapi/jobs/{id}` | Check import job status |
| GET | `/api/openapi/jobs/{id}/classifications` | Get endpoint classifications |
| PUT | `/api/openapi/jobs/{id}/classifications/{idx}` | Update a classification |
| POST | `/api/openapi/jobs/{id}/approve` | Approve and generate tools |

## Running Tests

```bash
# Backend (516 tests, 94% coverage)
cd backend
pytest --cov=app \
  --cov-report=term-missing

# Frontend (23 tests, vitest + happy-dom)
cd frontend
npm test
```

Backend coverage is enforced at 80%+.

## Documentation

| Document | Description |
|----------|-------------|
| [Architecture](docs/ARCHITECTURE.md) | System design, component diagram, data flow, ADRs |
| [Development Plan](docs/DEVELOPMENT-PLAN.md) | Phase breakdown, task checklists, and status |
| [Agent Config Guide](docs/agent-config-guide.md) | agents.yaml format, fields, templates, routing logic |
| [OpenAPI Import Guide](docs/openapi-import-guide.md) | Auto-discovery workflow, REST API, SSRF protection |
| [Deployment Guide](docs/deployment.md) | Docker, local dev, production, HTTPS, backups, scaling |
| [Demo Script](docs/demo-script.md) | Step-by-step live demo walkthrough (5 scenes) |
| [UX Design System](docs/ux_design_system.md) | Color palette, typography, component patterns, CSS tokens |

## License

MIT