Backend:
- FastAPI WebSocket /ws endpoint with streaming via LangGraph astream
- LangGraph Supervisor connecting 3 mock agents (order_lookup, order_actions, fallback)
- YAML Agent Registry with Pydantic validation and immutable configs
- PostgresSaver checkpoint persistence via langgraph-checkpoint-postgres
- Session TTL with 30-min sliding window and interrupt extension
- LLM provider abstraction (Anthropic/OpenAI/Google)
- Token usage + cost tracking callback handler
- Input validation: message size cap, thread_id format, content length
- Security: no hardcoded defaults, startup API key validation, no input reflection

Frontend:
- React 19 + TypeScript + Vite chat UI
- WebSocket hook with reconnect + exponential backoff
- Streaming token display with agent attribution
- Interrupt approval/reject UI for write operations
- Collapsible tool call viewer

Testing:
- 87 unit tests, 87% coverage (exceeds 80% requirement)
- Ruff lint + format clean

Infrastructure:
- Docker Compose (PostgreSQL 16 + backend)
- pyproject.toml with full dependency management
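The "immutable configs" point above can be sketched in a few lines. This is a hypothetical illustration, not the project's actual code: the project uses Pydantic models for validation, but frozen dataclasses stand in here so the example needs no third-party dependencies, and the field names (`description`, `tools`) are assumptions about what a registry entry might hold.

```python
from dataclasses import dataclass, FrozenInstanceError


@dataclass(frozen=True)
class AgentConfig:
    """One entry of the agent registry; frozen, so configs cannot mutate at runtime."""
    name: str
    description: str
    tools: tuple[str, ...]  # tuple rather than list, so the config is deeply immutable

    def __post_init__(self) -> None:
        # Reject names that could not serve as routing identifiers.
        if not self.name.isidentifier():
            raise ValueError(f"invalid agent name: {self.name!r}")


def load_registry(raw: dict) -> dict[str, AgentConfig]:
    """Build an immutable registry from parsed YAML (already loaded into a dict)."""
    return {
        name: AgentConfig(
            name=name,
            description=spec.get("description", ""),
            tools=tuple(spec.get("tools", ())),
        )
        for name, spec in raw.items()
    }


registry = load_registry({
    "order_lookup": {"description": "Read-only order queries", "tools": ["get_order"]},
    "order_actions": {"description": "Write operations", "tools": ["cancel_order"]},
    "fallback": {"description": "Catch-all handler", "tools": []},
})
```

Any attempt to reassign a field after loading (e.g. `registry["fallback"].tools = ...`) raises `FrozenInstanceError`, which is the property the registry relies on.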
Example `.env` configuration:
# Database
DATABASE_URL=postgresql://smart_support:dev_password@localhost:5432/smart_support

# LLM Provider: anthropic | openai | google
LLM_PROVIDER=anthropic
LLM_MODEL=claude-sonnet-4-6

# API Keys (set the one matching your LLM_PROVIDER)
ANTHROPIC_API_KEY=
OPENAI_API_KEY=
GOOGLE_API_KEY=

# Session
SESSION_TTL_MINUTES=30
INTERRUPT_TTL_MINUTES=30

# Server
WS_HOST=0.0.0.0
WS_PORT=8000
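The "startup API key validation, no hardcoded defaults" behavior against these variables could look like the following. This is a minimal sketch, not the project's actual code: the function name `validate_startup` is hypothetical, and it takes the environment as a plain dict so it stays testable (a real caller would pass `os.environ`).

```python
# Maps each supported LLM_PROVIDER value to the API key variable it requires,
# mirroring the .env file above.
PROVIDER_KEYS = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "google": "GOOGLE_API_KEY",
}


def validate_startup(env: dict) -> str:
    """Return the active API key, or raise at startup if configuration is incomplete.

    Failing fast here is what "no hardcoded defaults" means in practice:
    a missing key aborts boot instead of silently falling back.
    """
    provider = env.get("LLM_PROVIDER", "")
    if provider not in PROVIDER_KEYS:
        raise RuntimeError(f"LLM_PROVIDER must be one of {sorted(PROVIDER_KEYS)}")
    key_name = PROVIDER_KEYS[provider]
    key = env.get(key_name, "").strip()
    if not key:
        raise RuntimeError(f"{key_name} is required when LLM_PROVIDER={provider}")
    return key
```

Run once at application startup, before opening the WebSocket listener, so misconfiguration surfaces immediately rather than on the first chat request.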