Smart Support - AI customer service action layer framework. Includes design doc, CEO plan, eng review, test plan, and README.
TODOS
Before Phase 3
- Tool interface decision: The tool layer should support multiple backends, not just MCP:
  - MCP tools: complex, stateful interactions via the MCP protocol (stdio/SSE)
  - CLI tools: wrap existing CLIs (Shopify CLI, AWS CLI, Stripe CLI, etc.) and parse stdout/stderr
  - Direct API tools: simple REST/GraphQL HTTP calls, no MCP overhead
  LangChain tools are just Python functions with descriptions, so the backend is an implementation detail. Research the MCP Python SDK (mcp on PyPI) for the MCP path, and design the tool base class to abstract over all three backends. ~2-3 hours of research. Flagged by the eng review (outside voice) and by user feedback.
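One possible shape for that base class, as a minimal stdlib-only sketch. The class names (BaseTool, CLITool, APITool, MCPTool) are illustrative, not the real LangChain or MCP SDK interfaces; the API and MCP backends are left as stubs to be wired up per deployment:

```python
import shlex
import subprocess
from abc import ABC, abstractmethod


class BaseTool(ABC):
    """Backend-agnostic tool: a name, a description, and invoke().

    LangChain ultimately only needs a callable with a description,
    so each backend stays an implementation detail behind this class.
    """

    def __init__(self, name: str, description: str) -> None:
        self.name = name
        self.description = description

    @abstractmethod
    def invoke(self, **kwargs) -> str: ...


class CLITool(BaseTool):
    """Wraps an existing CLI; parses stdout/stderr."""

    def __init__(self, name: str, description: str, command: str) -> None:
        super().__init__(name, description)
        self.command = command

    def invoke(self, **kwargs) -> str:
        args = shlex.split(self.command) + [str(v) for v in kwargs.values()]
        result = subprocess.run(args, capture_output=True, text=True, timeout=30)
        if result.returncode != 0:
            raise RuntimeError(f"{self.name} failed: {result.stderr.strip()}")
        return result.stdout.strip()


class APITool(BaseTool):
    """Direct REST/GraphQL HTTP call, no MCP overhead (client omitted here)."""

    def __init__(self, name: str, description: str, url: str) -> None:
        super().__init__(name, description)
        self.url = url

    def invoke(self, **kwargs) -> str:
        raise NotImplementedError("wire up httpx/requests per deployment")


class MCPTool(BaseTool):
    """Stateful call to an MCP server via the mcp SDK (stdio/SSE)."""

    def invoke(self, **kwargs) -> str:
        raise NotImplementedError("wire up an MCP client session here")
```

Usage for the CLI path: `CLITool("echo", "Echo arguments back", "echo").invoke(msg="hello")` returns "hello".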
Before Production Deployment (P1)
- Auth system: API key auth for chat WebSocket, session-based auth for dashboard/replay/import. Rate limiting on all endpoints. Blocks any real client deployment. Effort: M (CC: ~2 days). Depends on: Phase 4 completion.
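A minimal sketch of the API-key check for the chat WebSocket, assuming a hypothetical x-api-key header and SMART_SUPPORT_API_KEY env var (both names are placeholders, not decided yet). Constant-time comparison avoids leaking key prefixes via timing:

```python
import hmac
import os


def check_api_key(provided: str, expected: str) -> bool:
    """Constant-time API-key comparison (avoids timing side channels)."""
    return hmac.compare_digest(provided.encode(), expected.encode())


def authenticate_websocket(headers: dict) -> bool:
    """Reject the chat WebSocket upgrade unless a valid key is presented.

    Assumes the key arrives in an x-api-key header and the expected
    value lives in SMART_SUPPORT_API_KEY -- both are placeholder names.
    """
    expected = os.environ.get("SMART_SUPPORT_API_KEY", "")
    provided = headers.get("x-api-key", "")
    return bool(expected) and check_api_key(provided, expected)
```

Session auth for the dashboard and rate limiting would sit alongside this in middleware; they are separate pieces of the same P1 item.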
Before Phase 4 (Client Engagement)
- Checkpointer migration plan: InMemorySaver → PostgresSaver (or SQLiteSaver as intermediate). InMemorySaver loses all state on restart/crash. PostgresSaver requires schema, connection pooling, serialization compatibility. Not a simple config swap. Plan the migration before any real client deployment.
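The core risk can be shown with a stdlib analogue of the two savers (not the real LangGraph API): an in-memory dict loses every checkpoint when the process dies, while a SQLite/Postgres table needs a schema and JSON-serializable state up front, which is exactly why this is not a config swap:

```python
import json
import sqlite3


class InMemoryCheckpoints:
    """Analogue of InMemorySaver: state is gone on restart/crash."""

    def __init__(self) -> None:
        self._store = {}

    def put(self, thread_id: str, state: dict) -> None:
        self._store[thread_id] = json.dumps(state)

    def get(self, thread_id: str):
        raw = self._store.get(thread_id)
        return json.loads(raw) if raw else None


class SQLiteCheckpoints:
    """Analogue of SQLiteSaver/PostgresSaver: survives restarts, but
    requires a schema and serialization-compatible state."""

    def __init__(self, path: str) -> None:
        self._conn = sqlite3.connect(path)
        self._conn.execute(
            "CREATE TABLE IF NOT EXISTS checkpoints "
            "(thread_id TEXT PRIMARY KEY, state TEXT)"
        )

    def put(self, thread_id: str, state: dict) -> None:
        self._conn.execute(
            "INSERT OR REPLACE INTO checkpoints VALUES (?, ?)",
            (thread_id, json.dumps(state)),
        )
        self._conn.commit()

    def get(self, thread_id: str):
        row = self._conn.execute(
            "SELECT state FROM checkpoints WHERE thread_id = ?", (thread_id,)
        ).fetchone()
        return json.loads(row[0]) if row else None
```

The real migration additionally needs connection pooling and a check that every value in graph state serializes cleanly, per the note above.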
Design Changes from Eng Review
- Use LangGraph built-ins: Checkpointers for session state, interrupt() for human-in-the-loop, supervisor pattern for multi-agent routing. Don't rebuild what LangGraph provides.
- WebSocket for streaming: Bidirectional connection for streaming tokens + interrupt flow.
- Supervisor pattern: Despite the latency concern (8-15s per response), the founder chose a multi-agent supervisor over a single agent. Stream all tokens to mitigate the perceived wait.
- YAML agent registry: Declarative agent definitions for client configurability.
- Prompt caching: Enabled from day one to reduce LLM costs.
- Multi-LLM provider support: Use LangChain's provider abstractions (ChatAnthropic, ChatOpenAI, ChatGoogleGenerativeAI). Provider configurable per deployment.
- Multi-backend tool support: Tool layer supports MCP servers, CLI wrappers, and direct API calls. LangChain tools abstract over all three backends.
- Interrupt resume flow: Design WebSocket reconnection + re-send interrupt prompt on reconnect.
- Tests per phase: 28 unit/integration + 4 E2E, written alongside each phase.
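To tie the registry, provider, and tool decisions together, a declarative agent entry might look like the following. Every field name here is an illustrative sketch, not a finalized schema, and the model name is a placeholder:

```yaml
# agents.yaml -- illustrative schema, not finalized
supervisor:
  pattern: supervisor        # routes between the agents below
  streaming: websocket       # bidirectional: tokens out, interrupts in
agents:
  - name: order_status
    description: Look up and summarize order status
    model:
      provider: anthropic    # anthropic | openai | google, per deployment
      name: some-model-id    # placeholder
      prompt_caching: true   # enabled from day one
    tools:
      - type: mcp            # mcp | cli | api, matching the three backends
        server: shopify-mcp
      - type: cli
        command: "stripe charges list"
```

Keeping provider, caching, and tool backend as plain fields is what makes the deployment configurable per client without code changes.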