# Deployment Guide

## Docker Compose (Recommended)

### Prerequisites

- Docker Engine 24+
- Docker Compose v2

### Quick Start

```bash
git clone <repository-url>
cd smart-support

# Configure environment
cp .env.example .env
# Edit .env: set ANTHROPIC_API_KEY (or OPENAI_API_KEY / GOOGLE_API_KEY)

# Start all services
docker compose up -d

# Verify health
docker compose ps
curl http://localhost/api/health
```

The app is available at http://localhost (frontend) and http://localhost:8000 (backend API).

### Services

| Service | Port | Description |
|----------|------|-------------|
| postgres | 5432 | PostgreSQL 16 database |
| backend | 8000 | FastAPI + LangGraph backend |
| frontend | 80 | React SPA served by nginx |

### Stopping

```bash
docker compose down      # Stop services, keep data
docker compose down -v   # Stop services and delete database volume
```

## Production Considerations

### Environment Variables

Set these in production (never commit secrets):

| Variable | Required | Description |
|----------|----------|-------------|
| `POSTGRES_PASSWORD` | Yes | Strong random password |
| `ANTHROPIC_API_KEY` | Yes* | LLM provider API key |
| `LLM_PROVIDER` | Yes | `anthropic`, `openai`, or `google` |
| `LLM_MODEL` | Yes | Model name for your provider |
| `WEBHOOK_URL` | No | Escalation notification endpoint |
| `SESSION_TTL_MINUTES` | No | Session timeout in minutes (default: 30) |

\*Or `OPENAI_API_KEY` / `GOOGLE_API_KEY`, depending on `LLM_PROVIDER`.

### HTTPS

For production, place a reverse proxy (nginx, Caddy, or a load balancer) in front of the frontend container and terminate TLS there. The WebSocket endpoint at `/ws` must be proxied with `Upgrade: websocket` headers; the frontend nginx.conf already handles this internally for the backend connection.
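If you terminate TLS with nginx, a minimal sketch might look like the following. The server name, certificate paths, and upstream port are assumptions (the upstream matches the frontend container's port 80 from the services table above); adjust them for your environment:

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/certs/example.com.pem;
    ssl_certificate_key /etc/nginx/certs/example.com.key;

    # Regular HTTP traffic to the frontend container
    location / {
        proxy_pass http://localhost:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # WebSocket traffic needs the Upgrade handshake forwarded explicitly
    location /ws {
        proxy_pass http://localhost:80;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```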
Example Caddy configuration:

```
example.com {
    reverse_proxy localhost:80
}
```

### Database Backups

```bash
# Backup
docker compose exec postgres pg_dump -U smart_support smart_support > backup.sql

# Restore
cat backup.sql | docker compose exec -T postgres psql -U smart_support smart_support
```

### Scaling

The backend is stateless: session state lives in PostgreSQL via LangGraph's PostgresSaver, so you can run multiple backend replicas behind a load balancer. WebSocket connections are session-specific; use sticky sessions or a shared session backend when load balancing WebSockets across multiple instances.

## Manual / Development Setup

### Backend

```bash
cd backend
python -m venv .venv
source .venv/bin/activate  # Windows: .venv\Scripts\activate
pip install -e ".[dev]"

# Set environment variables
cp .env.example .env
# Edit .env

# Start database
docker compose up postgres -d

# Run backend
uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload
```

### Frontend

```bash
cd frontend
npm install
npm run dev  # Dev server on http://localhost:5173
```

### Running Tests

```bash
cd backend
pytest --cov=app --cov-report=term-missing
```

## Health Checks

### Backend health

```http
GET /api/health
```

Response:

```json
{"status": "ok", "version": "0.5.0"}
```

### WebSocket health

Connect to `ws://localhost:8000/ws` and send:

```json
{"type": "message", "thread_id": "health-check", "content": "ping"}
```

A `message_complete` or `error` response confirms the WebSocket is alive.
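The WebSocket health check above can be scripted. This is a minimal sketch: the frame fields (`type`, `thread_id`, `content`) and the `message_complete`/`error` response types come from this guide, but the helper names are hypothetical, and `probe` assumes the third-party `websockets` package is installed:

```python
import json


def build_health_ping(thread_id: str = "health-check") -> str:
    """Build the JSON frame the backend expects on /ws."""
    return json.dumps(
        {"type": "message", "thread_id": thread_id, "content": "ping"}
    )


def is_alive(raw_frame: str) -> bool:
    """Either a message_complete or an error frame proves the socket round-trips."""
    try:
        frame = json.loads(raw_frame)
    except json.JSONDecodeError:
        return False
    return frame.get("type") in ("message_complete", "error")


async def probe(url: str = "ws://localhost:8000/ws") -> bool:
    """Send one ping and report whether a recognizable reply came back."""
    # Third-party dependency, imported lazily: pip install websockets
    import websockets

    async with websockets.connect(url) as ws:
        await ws.send(build_health_ping())
        return is_alive(await ws.recv())
```

Run it with `asyncio.run(probe())` against a live stack; a `True` result means the WebSocket path end-to-end (proxy, backend, and framing) is working.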