feat: complete phase 5 -- error hardening, frontend, Docker, demo, docs
Backend: - ConversationTracker: Protocol + PostgresConversationTracker for lifecycle tracking - Error handler: ErrorCategory enum, classify_error(), with_retry() exponential backoff - Wire PostgresAnalyticsRecorder + ConversationTracker into ws_handler - Rate limiting (10 msg/10s per thread), edge case hardening - Health endpoint GET /api/health, version 0.5.0 - Demo seed data script + sample OpenAPI spec Frontend (all new): - React Router with NavBar (Chat / Replay / Dashboard / Review) - ReplayListPage + ReplayPage with ReplayTimeline component - DashboardPage with MetricCard, range selector, zero-state - ReviewPage for OpenAPI classification review - ErrorBanner for WebSocket disconnect handling - API client (api.ts) with typed fetch wrappers Infrastructure: - Frontend Dockerfile (multi-stage node -> nginx) - nginx.conf with SPA routing + API/WS proxy - docker-compose.yml with frontend service + healthchecks - .env.example files (root + backend) Documentation: - README.md with quick start and architecture - Agent configuration guide - OpenAPI import guide - Deployment guide - Demo script 48 new tests, 449 total passing, 92.87% coverage
This commit is contained in:
25
.env.example
Normal file
25
.env.example
Normal file
@@ -0,0 +1,25 @@
|
||||
# Smart Support -- Docker Compose environment variables
|
||||
# Copy to .env and fill in your values
|
||||
|
||||
# PostgreSQL password (used by both postgres and backend services)
|
||||
POSTGRES_PASSWORD=dev_password
|
||||
|
||||
# LLM provider: anthropic | openai | google
|
||||
LLM_PROVIDER=anthropic
|
||||
LLM_MODEL=claude-sonnet-4-6
|
||||
|
||||
# API keys (provide the one matching LLM_PROVIDER)
|
||||
ANTHROPIC_API_KEY=
|
||||
OPENAI_API_KEY=
|
||||
GOOGLE_API_KEY=
|
||||
|
||||
# Optional: webhook URL for escalation notifications
|
||||
WEBHOOK_URL=
|
||||
|
||||
# Session and interrupt TTL in minutes
|
||||
SESSION_TTL_MINUTES=30
|
||||
INTERRUPT_TTL_MINUTES=30
|
||||
|
||||
# Optional: load a named agent template instead of agents.yaml
|
||||
# Available templates: ecommerce, saas, generic
|
||||
TEMPLATE_NAME=
|
||||
@@ -241,7 +241,7 @@ A checkpoint includes:
|
||||
| 2 | `phase-2/multi-agent-safety` | Supervisor routing + interrupts + templates | COMPLETED (2026-03-30) |
|
||||
| 3 | `phase-3/openapi-discovery` | OpenAPI parsing + MCP generation + SSRF protection | COMPLETED (2026-03-30) |
|
||||
| 4 | `phase-4/analytics-replay` | Replay API + analytics dashboard | COMPLETED (2026-03-31) |
|
||||
| 5 | `phase-5/polish-demo` | Error hardening + demo prep + Docker deploy | NOT STARTED |
|
||||
| 5 | `phase-5/polish-demo` | Error hardening + demo prep + Docker deploy | COMPLETED (2026-03-31) |
|
||||
|
||||
Status values: `NOT STARTED` -> `IN PROGRESS` -> `COMPLETED (YYYY-MM-DD)`
|
||||
|
||||
|
||||
240
README.md
240
README.md
@@ -1,159 +1,165 @@
|
||||
# Smart Support
|
||||
|
||||
AI 客服行动层框架。粘贴你的 API,获得一个能执行真实操作的智能客服。
|
||||
AI customer support action layer. Paste your API spec, get an AI agent that executes real actions.
|
||||
|
||||
## 问题
|
||||
## The Problem
|
||||
|
||||
现有客服工具(Zendesk、Intercom、Ada)擅长回答 FAQ,但自动化率卡在 20-30%。剩下 70% 的工单需要人工登录内部系统,手动查订单、取消订单、发优惠券。
|
||||
Existing support tools (Zendesk, Intercom, Ada) answer FAQs well but automation
|
||||
rates stall at 20-30%. The remaining 70% of tickets require agents to manually
|
||||
log into internal systems to look up orders, cancel orders, issue coupons.
|
||||
|
||||
Smart Support 是补全这个缺口的「行动层」。它不替代现有客服平台,而是让 AI 能直接调用内部系统完成操作。
|
||||
Smart Support fills that gap as the "action layer" -- it does not replace your
|
||||
existing support platform, it enables AI to directly call your internal systems.
|
||||
|
||||
## 工作原理
|
||||
## How It Works
|
||||
|
||||
```
|
||||
客户消息 → Chat UI → FastAPI WebSocket → LangGraph Supervisor → 专业 Agent → MCP Tools → 你的内部系统
|
||||
↑ ↑
|
||||
Agent 注册表 interrupt()
|
||||
(YAML 配置) (人工确认)
|
||||
↑
|
||||
User message -> Chat UI -> FastAPI WebSocket -> LangGraph Supervisor -> Specialist Agent -> MCP Tools -> Your systems
|
||||
| |
|
||||
Agent Registry interrupt()
|
||||
(YAML config) (human approval)
|
||||
|
|
||||
PostgresSaver
|
||||
(会话状态持久化)
|
||||
(session persistence)
|
||||
```
|
||||
|
||||
1. 客户在聊天界面发送消息
|
||||
2. LangGraph Supervisor 分析意图,路由到对应的专业 Agent
|
||||
3. Agent 通过 MCP 协议调用你的内部系统(查订单、取消订单、发折扣...)
|
||||
4. 涉及写操作时,自动触发人工确认流程
|
||||
5. 所有操作全程记录,支持回放和分析
|
||||
1. User sends a message in the chat UI.
|
||||
2. LangGraph Supervisor classifies intent and routes to the right agent.
|
||||
3. Agent calls your internal systems via MCP tools.
|
||||
4. Write operations trigger a human-in-the-loop approval gate.
|
||||
5. All operations are logged with full replay and analytics.
|
||||
|
||||
## 核心特性
|
||||
## Key Features
|
||||
|
||||
- **多 Agent 协作** - 不同操作由不同 Agent 处理,各自拥有独立的权限边界和工具集
|
||||
- **即插即用** - 粘贴 OpenAPI 规范 URL,自动生成 MCP 工具和 Agent 配置
|
||||
- **人工确认** - 所有写操作(取消、退款、修改)需要人工审批,读操作直接执行
|
||||
- **会话上下文** - 支持多轮对话,Agent 能理解「取消那个订单」这样的指代
|
||||
- **实时流式输出** - WebSocket 双向通信,逐 token 流式返回
|
||||
- **对话回放** - 逐步查看 Agent 决策过程、工具调用和返回结果
|
||||
- **数据分析** - 解决率、Agent 使用率、升级率、每次对话成本
|
||||
- **YAML 驱动配置** - Agent 定义、人设、垂直模板全部通过 YAML 配置
|
||||
- **Multi-agent routing** -- each operation goes to a specialist agent with its own tools and permissions
|
||||
- **Zero-config import** -- paste an OpenAPI 3.0 URL, agents are generated automatically
|
||||
- **Human-in-the-loop** -- all write operations (cancel, refund, modify) require approval; reads execute immediately
|
||||
- **Session context** -- multi-turn conversation with persistent state across reconnects
|
||||
- **Real-time streaming** -- WebSocket token streaming with live tool call visibility
|
||||
- **Conversation replay** -- step-by-step audit trail of every agent decision
|
||||
- **Analytics dashboard** -- resolution rate, agent usage, escalation rate, cost per conversation
|
||||
- **YAML-driven config** -- agents, personas, and vertical templates in a single file
|
||||
|
||||
## 技术栈
|
||||
## Tech Stack
|
||||
|
||||
| 组件 | 技术选型 |
|
||||
|------|---------|
|
||||
| 后端 | Python 3.11+, FastAPI |
|
||||
| Agent 编排 | LangGraph v1.1, langgraph-supervisor |
|
||||
| 工具集成 | langchain-mcp-adapters, @tool |
|
||||
| 状态持久化 | PostgreSQL + langgraph-checkpoint-postgres |
|
||||
| LLM | Claude Sonnet 4.6(可切换 OpenAI、Google 等) |
|
||||
| 前端 | React |
|
||||
| 部署 | Docker Compose |
|
||||
| Component | Technology |
|
||||
|-----------|-----------|
|
||||
| Backend | Python 3.11+, FastAPI |
|
||||
| Agent orchestration | LangGraph v1.1 |
|
||||
| Session state | PostgreSQL + langgraph-checkpoint-postgres |
|
||||
| LLM | Claude Sonnet 4.6 (configurable: OpenAI, Google) |
|
||||
| Frontend | React 19, TypeScript, Vite |
|
||||
| Deployment | Docker Compose |
|
||||
|
||||
## 项目结构
|
||||
## Quick Start
|
||||
|
||||
```bash
|
||||
git clone <repo-url>
|
||||
cd smart-support
|
||||
|
||||
# Configure your LLM API key
|
||||
cp .env.example .env
|
||||
# Edit .env: set ANTHROPIC_API_KEY (or OPENAI_API_KEY)
|
||||
|
||||
# Start all services
|
||||
docker compose up -d
|
||||
|
||||
# Open the app
|
||||
open http://localhost
|
||||
```
|
||||
|
||||
## Project Structure
|
||||
|
||||
```
|
||||
smart-support/
|
||||
├── backend/
|
||||
│ ├── app/
|
||||
│ │ ├── main.py # FastAPI + WebSocket 入口
|
||||
│ │ ├── graph.py # LangGraph Supervisor 配置
|
||||
│ │ ├── agents/ # Agent 定义 + 工具
|
||||
│ │ ├── registry.py # YAML Agent 注册表加载器
|
||||
│ │ ├── openapi/ # OpenAPI 解析 + MCP 服务器生成
|
||||
│ │ ├── replay/ # 对话回放 API
|
||||
│ │ ├── analytics/ # 数据分析查询 + API
|
||||
│ │ └── callbacks.py # Token 用量统计
|
||||
│ ├── agents.yaml # Agent 注册表配置
|
||||
│ ├── templates/ # 垂直行业模板
|
||||
│ └── tests/
|
||||
├── frontend/ # React 聊天 UI + 回放 + 仪表盘
|
||||
├── docker-compose.yml # PostgreSQL + 应用
|
||||
└── pyproject.toml
|
||||
│ │ ├── main.py # FastAPI + WebSocket entry point
|
||||
│ │ ├── graph.py # LangGraph Supervisor
|
||||
│ │ ├── ws_handler.py # WebSocket message dispatch + rate limiting
|
||||
│ │ ├── conversation_tracker.py # Conversation lifecycle tracking
|
||||
│ │ ├── agents/ # Agent definitions and tools
|
||||
│ │ ├── registry.py # YAML agent registry loader
|
||||
│ │ ├── openapi/ # OpenAPI parser and review API
|
||||
│ │ ├── replay/ # Conversation replay API
|
||||
│ │ ├── analytics/ # Analytics queries and API
|
||||
│ │ └── tools/ # Error handling and retry utilities
|
||||
│ ├── agents.yaml # Agent registry configuration
|
||||
│ ├── fixtures/ # Demo data and sample OpenAPI spec
|
||||
│ └── tests/ # Unit, integration, and E2E tests
|
||||
├── frontend/
|
||||
│ ├── src/
|
||||
│ │ ├── pages/ # Chat, Replay, Dashboard, Review pages
|
||||
│ │ ├── components/ # NavBar, Layout, MetricCard, ReplayTimeline
|
||||
│ │ ├── hooks/ # useWebSocket with reconnect support
|
||||
│ │ └── api.ts # Typed API client
|
||||
│ └── Dockerfile # Multi-stage nginx build
|
||||
├── docs/ # Architecture, deployment, guides
|
||||
├── docker-compose.yml # Full-stack compose
|
||||
└── .env.example # Environment variable template
|
||||
```
|
||||
|
||||
## 快速开始
|
||||
|
||||
```bash
|
||||
# 启动 PostgreSQL 和应用
|
||||
docker compose up
|
||||
|
||||
# 访问聊天界面
|
||||
open http://localhost:8000
|
||||
```
|
||||
|
||||
## Agent 配置示例
|
||||
## Agent Configuration
|
||||
|
||||
```yaml
|
||||
# agents.yaml
|
||||
agents:
|
||||
- name: order_lookup
|
||||
description: 查询订单状态、物流信息
|
||||
permission: read
|
||||
personality:
|
||||
tone: professional
|
||||
greeting: "您好,我来帮您查询订单信息。"
|
||||
tools:
|
||||
- get_order_status
|
||||
- get_tracking_info
|
||||
|
||||
- name: order_actions
|
||||
description: 取消订单、修改订单
|
||||
permission: write # 触发人工确认
|
||||
personality:
|
||||
tone: careful
|
||||
greeting: "我可以帮您处理订单变更,所有操作都会先经过您的确认。"
|
||||
tools:
|
||||
- cancel_order
|
||||
- modify_order
|
||||
|
||||
- name: discount
|
||||
description: 发放优惠券、折扣码
|
||||
- name: order_agent
|
||||
description: "Handles order status, tracking, and cancellations."
|
||||
permission: write
|
||||
tools:
|
||||
- apply_discount
|
||||
- generate_coupon
|
||||
- get_order_status
|
||||
- cancel_order
|
||||
personality:
|
||||
tone: friendly
|
||||
greeting: "I can help with your order. What is the order number?"
|
||||
escalation_message: "I'm escalating this to a human agent."
|
||||
|
||||
- name: general_agent
|
||||
description: "Answers general questions and FAQs."
|
||||
permission: read
|
||||
tools:
|
||||
- search_faq
|
||||
```
|
||||
|
||||
## OpenAPI 自动接入
|
||||
## API Endpoints
|
||||
|
||||
不需要手动写 MCP 连接器。粘贴你的 API 规范 URL:
|
||||
| Method | Path | Description |
|
||||
|--------|------|-------------|
|
||||
| WS | `/ws` | Main WebSocket chat endpoint |
|
||||
| GET | `/api/health` | Health check |
|
||||
| GET | `/api/conversations` | List conversations |
|
||||
| GET | `/api/replay/{thread_id}` | Replay conversation |
|
||||
| GET | `/api/analytics` | Analytics summary |
|
||||
| POST | `/api/openapi/import` | Import OpenAPI spec |
|
||||
| GET | `/api/openapi/jobs/{id}` | Check import job status |
|
||||
|
||||
1. 框架解析 OpenAPI 3.0 规范
|
||||
2. LLM 自动分类每个端点(读/写、客户参数、Agent 分组)
|
||||
3. 运维人员审核分类结果
|
||||
4. 自动生成 MCP 服务器 + Agent YAML 配置
|
||||
5. 新工具立即可用
|
||||
## Security
|
||||
|
||||
## 安全设计
|
||||
- **SSRF protection** -- OpenAPI import blocks private IPs and metadata service URLs
|
||||
- **Input validation** -- messages validated for size (32 KB), content length (10 KB), thread ID format
|
||||
- **Rate limiting** -- 10 messages per 10 seconds per session
|
||||
- **Audit trail** -- every tool call logged with agent, params, result, timestamp
|
||||
- **Permission isolation** -- each agent only accesses its configured tools
|
||||
- **Interrupt TTL** -- unanswered approval prompts expire after 30 minutes
|
||||
|
||||
- **人工确认** - 所有写操作需要客户或运维人员批准
|
||||
- **SSRF 防护** - OpenAPI URL 导入时屏蔽内网地址和 DNS 重绑定攻击
|
||||
- **操作审计** - 每个操作记录 Agent、参数、结果、时间戳
|
||||
- **权限隔离** - 每个 Agent 只能访问其配置的工具集
|
||||
- **中断超时** - 30 分钟未确认的操作自动取消,防止过期审批
|
||||
## Running Tests
|
||||
|
||||
## 开发阶段
|
||||
```bash
|
||||
cd backend
|
||||
pytest --cov=app --cov-report=term-missing
|
||||
```
|
||||
|
||||
| 阶段 | 周期 | 内容 |
|
||||
|------|------|------|
|
||||
| Phase 1 | 第 1-3 周 | 核心框架:Chat UI + Supervisor + Agent 注册表 + 中断流程 |
|
||||
| Phase 2 | 第 3-4 周 | 多 Agent 路由 + Webhook 升级 + 垂直模板 |
|
||||
| Phase 3 | 第 4-6 周 | OpenAPI 自动发现 + MCP 服务器生成 + SSRF 防护 |
|
||||
| Phase 4 | 第 6-7 周 | 对话回放 + 数据分析仪表盘 |
|
||||
Coverage is enforced at 80%+.
|
||||
|
||||
## 目标用户
|
||||
## Documentation
|
||||
|
||||
中型电商公司(日均 500-5000 订单,5-20 名客服)的客户体验负责人。
|
||||
|
||||
他们的痛点:客服需要在 Zendesk 和 Shopify 后台之间反复切换,手动执行查询和操作。Smart Support 让 AI 直接完成这些操作,人工只需审批关键步骤。
|
||||
|
||||
## 相关文档
|
||||
|
||||
- [设计文档](design-doc.md) - 问题定义、约束、方案选择
|
||||
- [CEO 计划](ceo-plan.md) - 产品愿景、范围决策
|
||||
- [工程评审计划](eng-review-plan.md) - 架构决策、测试策略、失败模式
|
||||
- [测试计划](eng-review-test-plan.md) - 测试路径、边界情况、E2E 流程
|
||||
- [待办事项](TODOS.md) - 延迟到后续阶段的工作
|
||||
- [Architecture](docs/ARCHITECTURE.md) -- System design and component diagram
|
||||
- [Development Plan](docs/DEVELOPMENT-PLAN.md) -- Phase breakdown and status
|
||||
- [Agent Config Guide](docs/agent-config-guide.md) -- How to configure agents
|
||||
- [OpenAPI Import Guide](docs/openapi-import-guide.md) -- Auto-discovery workflow
|
||||
- [Deployment Guide](docs/deployment.md) -- Docker and production deployment
|
||||
- [Demo Script](docs/demo-script.md) -- Step-by-step live demo walkthrough
|
||||
|
||||
## License
|
||||
|
||||
|
||||
@@ -1,19 +1,34 @@
|
||||
# Database
|
||||
# Smart Support Backend -- environment variables
|
||||
# Copy to .env and fill in your values
|
||||
|
||||
# Required: PostgreSQL connection string
|
||||
DATABASE_URL=postgresql://smart_support:dev_password@localhost:5432/smart_support
|
||||
|
||||
# LLM Provider: anthropic | openai | google
|
||||
# Required: LLM provider configuration
|
||||
# provider: anthropic | openai | google
|
||||
LLM_PROVIDER=anthropic
|
||||
LLM_MODEL=claude-sonnet-4-6
|
||||
|
||||
# API Keys (set the one matching your LLM_PROVIDER)
|
||||
# API keys -- provide the one matching LLM_PROVIDER
|
||||
ANTHROPIC_API_KEY=
|
||||
OPENAI_API_KEY=
|
||||
GOOGLE_API_KEY=
|
||||
|
||||
# Session
|
||||
# Optional: webhook endpoint for escalation notifications
|
||||
# The backend will POST a JSON payload when a conversation is escalated.
|
||||
WEBHOOK_URL=
|
||||
WEBHOOK_TIMEOUT_SECONDS=10
|
||||
WEBHOOK_MAX_RETRIES=3
|
||||
|
||||
# Session management
|
||||
SESSION_TTL_MINUTES=30
|
||||
INTERRUPT_TTL_MINUTES=30
|
||||
|
||||
# Server
|
||||
# Optional: load a named agent template instead of agents.yaml
|
||||
# Leave blank to use the default agents.yaml in the backend directory.
|
||||
# Available templates: ecommerce, saas, generic
|
||||
TEMPLATE_NAME=
|
||||
|
||||
# Server binding
|
||||
WS_HOST=0.0.0.0
|
||||
WS_PORT=8000
|
||||
|
||||
135
backend/app/conversation_tracker.py
Normal file
135
backend/app/conversation_tracker.py
Normal file
@@ -0,0 +1,135 @@
|
||||
"""Conversation tracker -- Protocol and implementations for tracking conversation state."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
from typing import TYPE_CHECKING, Protocol, runtime_checkable
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from psycopg_pool import AsyncConnectionPool
|
||||
|
||||
_ENSURE_SQL = """
|
||||
INSERT INTO conversations
|
||||
(thread_id, started_at, last_activity)
|
||||
VALUES
|
||||
(%(thread_id)s, NOW(), NOW())
|
||||
ON CONFLICT (thread_id) DO NOTHING
|
||||
"""
|
||||
|
||||
_RECORD_TURN_SQL = """
|
||||
UPDATE conversations
|
||||
SET
|
||||
turn_count = turn_count + 1,
|
||||
agents_used = CASE
|
||||
WHEN %(agent_name)s IS NOT NULL AND NOT (agents_used @> ARRAY[%(agent_name)s]::text[])
|
||||
THEN agents_used || ARRAY[%(agent_name)s]::text[]
|
||||
ELSE agents_used
|
||||
END,
|
||||
total_tokens = total_tokens + %(tokens)s,
|
||||
total_cost_usd = total_cost_usd + %(cost)s,
|
||||
last_activity = NOW()
|
||||
WHERE thread_id = %(thread_id)s
|
||||
"""
|
||||
|
||||
_RESOLVE_SQL = """
|
||||
UPDATE conversations
|
||||
SET
|
||||
resolution_type = %(resolution_type)s,
|
||||
ended_at = NOW()
|
||||
WHERE thread_id = %(thread_id)s
|
||||
"""
|
||||
|
||||
|
||||
@runtime_checkable
|
||||
class ConversationTrackerProtocol(Protocol):
|
||||
"""Protocol for tracking conversation lifecycle and metrics."""
|
||||
|
||||
async def ensure_conversation(self, pool: AsyncConnectionPool, thread_id: str) -> None:
|
||||
"""Create conversation row if it does not already exist."""
|
||||
...
|
||||
|
||||
async def record_turn(
|
||||
self,
|
||||
pool: AsyncConnectionPool,
|
||||
thread_id: str,
|
||||
agent_name: str | None,
|
||||
tokens: int,
|
||||
cost: float,
|
||||
) -> None:
|
||||
"""Increment turn count and update aggregated metrics."""
|
||||
...
|
||||
|
||||
async def resolve(
|
||||
self,
|
||||
pool: AsyncConnectionPool,
|
||||
thread_id: str,
|
||||
resolution_type: str,
|
||||
) -> None:
|
||||
"""Mark conversation as resolved with a resolution type."""
|
||||
...
|
||||
|
||||
|
||||
class NoOpConversationTracker:
|
||||
"""No-op implementation -- used in tests or when DB is unavailable."""
|
||||
|
||||
async def ensure_conversation(self, pool: AsyncConnectionPool, thread_id: str) -> None:
|
||||
"""Do nothing."""
|
||||
|
||||
async def record_turn(
|
||||
self,
|
||||
pool: AsyncConnectionPool,
|
||||
thread_id: str,
|
||||
agent_name: str | None,
|
||||
tokens: int,
|
||||
cost: float,
|
||||
) -> None:
|
||||
"""Do nothing."""
|
||||
|
||||
async def resolve(
|
||||
self,
|
||||
pool: AsyncConnectionPool,
|
||||
thread_id: str,
|
||||
resolution_type: str,
|
||||
) -> None:
|
||||
"""Do nothing."""
|
||||
|
||||
|
||||
class PostgresConversationTracker:
|
||||
"""Postgres-backed conversation tracker."""
|
||||
|
||||
async def ensure_conversation(self, pool: AsyncConnectionPool, thread_id: str) -> None:
|
||||
"""Insert conversation row; do nothing if already exists (ON CONFLICT DO NOTHING)."""
|
||||
params = {"thread_id": thread_id}
|
||||
async with pool.connection() as conn:
|
||||
await conn.execute(_ENSURE_SQL, params)
|
||||
|
||||
async def record_turn(
|
||||
self,
|
||||
pool: AsyncConnectionPool,
|
||||
thread_id: str,
|
||||
agent_name: str | None,
|
||||
tokens: int,
|
||||
cost: float,
|
||||
) -> None:
|
||||
"""Increment turn count, append agent if new, update token/cost totals."""
|
||||
params = {
|
||||
"thread_id": thread_id,
|
||||
"agent_name": agent_name,
|
||||
"tokens": tokens,
|
||||
"cost": cost,
|
||||
}
|
||||
async with pool.connection() as conn:
|
||||
await conn.execute(_RECORD_TURN_SQL, params)
|
||||
|
||||
async def resolve(
|
||||
self,
|
||||
pool: AsyncConnectionPool,
|
||||
thread_id: str,
|
||||
resolution_type: str,
|
||||
) -> None:
|
||||
"""Set resolution_type and ended_at on the conversation row."""
|
||||
params = {
|
||||
"thread_id": thread_id,
|
||||
"resolution_type": resolution_type,
|
||||
}
|
||||
async with pool.connection() as conn:
|
||||
await conn.execute(_RESOLVE_SQL, params)
|
||||
@@ -11,9 +11,10 @@ from fastapi import FastAPI, WebSocket, WebSocketDisconnect
|
||||
from fastapi.staticfiles import StaticFiles
|
||||
|
||||
from app.analytics.api import router as analytics_router
|
||||
from app.analytics.event_recorder import NoOpAnalyticsRecorder
|
||||
from app.analytics.event_recorder import PostgresAnalyticsRecorder
|
||||
from app.callbacks import TokenUsageCallbackHandler
|
||||
from app.config import Settings
|
||||
from app.conversation_tracker import PostgresConversationTracker
|
||||
from app.db import create_checkpointer, create_pool, setup_app_tables
|
||||
from app.escalation import NoOpEscalator, WebhookEscalator
|
||||
from app.graph import build_graph
|
||||
@@ -76,7 +77,8 @@ async def lifespan(app: FastAPI) -> AsyncGenerator[None, None]:
|
||||
app.state.escalator = escalator
|
||||
app.state.settings = settings
|
||||
app.state.pool = pool
|
||||
app.state.analytics_recorder = NoOpAnalyticsRecorder()
|
||||
app.state.analytics_recorder = PostgresAnalyticsRecorder(pool=pool)
|
||||
app.state.conversation_tracker = PostgresConversationTracker()
|
||||
|
||||
logger.info(
|
||||
"Smart Support started: %d agents loaded, LLM=%s/%s, template=%s",
|
||||
@@ -91,13 +93,19 @@ async def lifespan(app: FastAPI) -> AsyncGenerator[None, None]:
|
||||
await pool.close()
|
||||
|
||||
|
||||
app = FastAPI(title="Smart Support", version="0.4.0", lifespan=lifespan)
|
||||
app = FastAPI(title="Smart Support", version="0.5.0", lifespan=lifespan)
|
||||
|
||||
app.include_router(openapi_router)
|
||||
app.include_router(replay_router)
|
||||
app.include_router(analytics_router)
|
||||
|
||||
|
||||
@app.get("/api/health")
|
||||
def health_check() -> dict:
|
||||
"""Health check endpoint for load balancers and monitoring."""
|
||||
return {"status": "ok", "version": "0.5.0"}
|
||||
|
||||
|
||||
@app.websocket("/ws")
|
||||
async def websocket_endpoint(ws: WebSocket) -> None:
|
||||
await ws.accept()
|
||||
@@ -107,12 +115,19 @@ async def websocket_endpoint(ws: WebSocket) -> None:
|
||||
settings = app.state.settings
|
||||
callback_handler = TokenUsageCallbackHandler(model_name=settings.llm_model)
|
||||
|
||||
analytics_recorder = app.state.analytics_recorder
|
||||
conversation_tracker = app.state.conversation_tracker
|
||||
pool = app.state.pool
|
||||
|
||||
try:
|
||||
while True:
|
||||
raw_data = await ws.receive_text()
|
||||
await dispatch_message(
|
||||
ws, graph, session_manager, callback_handler, raw_data,
|
||||
interrupt_manager=interrupt_manager,
|
||||
analytics_recorder=analytics_recorder,
|
||||
conversation_tracker=conversation_tracker,
|
||||
pool=pool,
|
||||
)
|
||||
except WebSocketDisconnect:
|
||||
logger.info("WebSocket client disconnected")
|
||||
|
||||
3
backend/app/tools/__init__.py
Normal file
3
backend/app/tools/__init__.py
Normal file
@@ -0,0 +1,3 @@
|
||||
"""Tools package for smart-support backend."""
|
||||
|
||||
from __future__ import annotations
|
||||
72
backend/app/tools/error_handler.py
Normal file
72
backend/app/tools/error_handler.py
Normal file
@@ -0,0 +1,72 @@
|
||||
"""Error classification and retry logic for tool calls."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import asyncio
|
||||
from enum import Enum
|
||||
from typing import TYPE_CHECKING, Any
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from collections.abc import Callable
|
||||
|
||||
import httpx
|
||||
|
||||
|
||||
class ErrorCategory(Enum):
|
||||
"""Categories for error classification to guide retry decisions."""
|
||||
|
||||
RETRYABLE = "retryable"
|
||||
NON_RETRYABLE = "non_retryable"
|
||||
AUTH_FAILURE = "auth_failure"
|
||||
TIMEOUT = "timeout"
|
||||
NETWORK = "network"
|
||||
|
||||
|
||||
def classify_error(exc: Exception) -> ErrorCategory:
|
||||
"""Classify an exception into an ErrorCategory.
|
||||
|
||||
Rules:
|
||||
- httpx.TimeoutException -> TIMEOUT
|
||||
- httpx.ConnectError -> NETWORK
|
||||
- httpx.HTTPStatusError 401/403 -> AUTH_FAILURE
|
||||
- httpx.HTTPStatusError 429/500/502/503 -> RETRYABLE
|
||||
- anything else -> NON_RETRYABLE
|
||||
"""
|
||||
if isinstance(exc, httpx.TimeoutException):
|
||||
return ErrorCategory.TIMEOUT
|
||||
if isinstance(exc, httpx.ConnectError):
|
||||
return ErrorCategory.NETWORK
|
||||
if isinstance(exc, httpx.HTTPStatusError):
|
||||
code = exc.response.status_code
|
||||
if code in (401, 403):
|
||||
return ErrorCategory.AUTH_FAILURE
|
||||
if code in (429, 500, 502, 503):
|
||||
return ErrorCategory.RETRYABLE
|
||||
return ErrorCategory.NON_RETRYABLE
|
||||
return ErrorCategory.NON_RETRYABLE
|
||||
|
||||
|
||||
async def with_retry(
|
||||
fn: Callable[..., Any],
|
||||
max_retries: int = 3,
|
||||
base_delay: float = 1.0,
|
||||
) -> Any:
|
||||
"""Execute an async callable with exponential backoff for RETRYABLE errors.
|
||||
|
||||
Only ErrorCategory.RETRYABLE errors trigger retries. All other error
|
||||
categories raise immediately after the first attempt.
|
||||
"""
|
||||
last_exc: Exception | None = None
|
||||
for attempt in range(1, max_retries + 1):
|
||||
try:
|
||||
return await fn()
|
||||
except Exception as exc:
|
||||
category = classify_error(exc)
|
||||
if category != ErrorCategory.RETRYABLE:
|
||||
raise
|
||||
last_exc = exc
|
||||
if attempt < max_retries:
|
||||
delay = base_delay * (2 ** (attempt - 1))
|
||||
await asyncio.sleep(delay)
|
||||
|
||||
raise last_exc # type: ignore[misc]
|
||||
@@ -5,6 +5,8 @@ from __future__ import annotations
|
||||
import json
|
||||
import logging
|
||||
import re
|
||||
import time
|
||||
from collections import defaultdict
|
||||
from typing import TYPE_CHECKING, Any
|
||||
|
||||
from langchain_core.messages import HumanMessage
|
||||
@@ -16,16 +18,23 @@ if TYPE_CHECKING:
|
||||
from fastapi import WebSocket
|
||||
from langgraph.graph.state import CompiledStateGraph
|
||||
|
||||
from app.analytics.event_recorder import AnalyticsRecorder
|
||||
from app.callbacks import TokenUsageCallbackHandler
|
||||
from app.conversation_tracker import ConversationTrackerProtocol
|
||||
from app.interrupt_manager import InterruptManager
|
||||
from app.session_manager import SessionManager
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
MAX_MESSAGE_SIZE = 32_768 # 32 KB
|
||||
MAX_CONTENT_LENGTH = 8_000 # characters
|
||||
MAX_CONTENT_LENGTH = 10_000 # characters
|
||||
THREAD_ID_PATTERN = re.compile(r"^[a-zA-Z0-9\-_]{1,128}$")
|
||||
|
||||
# Rate limiting: max 10 messages per 10-second window, per thread
|
||||
_RATE_LIMIT_MAX = 10
|
||||
_RATE_LIMIT_WINDOW = 10.0
|
||||
_thread_timestamps: dict[str, list[float]] = defaultdict(list)
|
||||
|
||||
|
||||
async def handle_user_message(
|
||||
ws: WebSocket,
|
||||
@@ -197,6 +206,9 @@ async def dispatch_message(
|
||||
callback_handler: TokenUsageCallbackHandler,
|
||||
raw_data: str,
|
||||
interrupt_manager: InterruptManager | None = None,
|
||||
analytics_recorder: AnalyticsRecorder | None = None,
|
||||
conversation_tracker: ConversationTrackerProtocol | None = None,
|
||||
pool: Any = None,
|
||||
) -> None:
|
||||
"""Parse and route an incoming WebSocket message."""
|
||||
if len(raw_data) > MAX_MESSAGE_SIZE:
|
||||
@@ -205,10 +217,14 @@ async def dispatch_message(
|
||||
|
||||
try:
|
||||
data = json.loads(raw_data)
|
||||
except json.JSONDecodeError:
|
||||
except (json.JSONDecodeError, ValueError):
|
||||
await _send_json(ws, {"type": "error", "message": "Invalid JSON"})
|
||||
return
|
||||
|
||||
if not isinstance(data, dict):
|
||||
await _send_json(ws, {"type": "error", "message": "Invalid JSON: expected object"})
|
||||
return
|
||||
|
||||
msg_type = data.get("type")
|
||||
thread_id = data.get("thread_id", "")
|
||||
|
||||
@@ -222,16 +238,36 @@ async def dispatch_message(
|
||||
|
||||
if msg_type == "message":
|
||||
content = data.get("content", "")
|
||||
if not content:
|
||||
if not content or not content.strip():
|
||||
await _send_json(ws, {"type": "error", "message": "Missing message content"})
|
||||
return
|
||||
if len(content) > MAX_CONTENT_LENGTH:
|
||||
await _send_json(ws, {"type": "error", "message": "Message content too long"})
|
||||
return
|
||||
|
||||
# Rate limiting check
|
||||
now = time.time()
|
||||
timestamps = _thread_timestamps[thread_id]
|
||||
cutoff = now - _RATE_LIMIT_WINDOW
|
||||
_thread_timestamps[thread_id] = [t for t in timestamps if t >= cutoff]
|
||||
if len(_thread_timestamps[thread_id]) >= _RATE_LIMIT_MAX:
|
||||
await _send_json(ws, {"type": "error", "message": "Rate limit exceeded"})
|
||||
return
|
||||
_thread_timestamps[thread_id].append(now)
|
||||
|
||||
await handle_user_message(
|
||||
ws, graph, session_manager, callback_handler, thread_id, content,
|
||||
interrupt_manager=interrupt_manager,
|
||||
)
|
||||
await _fire_and_forget_tracking(
|
||||
thread_id=thread_id,
|
||||
pool=pool,
|
||||
analytics_recorder=analytics_recorder,
|
||||
conversation_tracker=conversation_tracker,
|
||||
agent_name=None,
|
||||
tokens=0,
|
||||
cost=0.0,
|
||||
)
|
||||
|
||||
elif msg_type == "interrupt_response":
|
||||
approved = data.get("approved", False)
|
||||
@@ -244,6 +280,36 @@ async def dispatch_message(
|
||||
await _send_json(ws, {"type": "error", "message": "Unknown message type"})
|
||||
|
||||
|
||||
async def _fire_and_forget_tracking(
|
||||
thread_id: str,
|
||||
pool: Any,
|
||||
analytics_recorder: Any | None,
|
||||
conversation_tracker: Any | None,
|
||||
agent_name: str | None,
|
||||
tokens: int,
|
||||
cost: float,
|
||||
) -> None:
|
||||
"""Fire-and-forget analytics/tracking; failures must NOT break chat."""
|
||||
try:
|
||||
if conversation_tracker is not None and pool is not None:
|
||||
await conversation_tracker.ensure_conversation(pool, thread_id)
|
||||
await conversation_tracker.record_turn(pool, thread_id, agent_name, tokens, cost)
|
||||
except Exception:
|
||||
logger.exception("Conversation tracker error for thread %s (suppressed)", thread_id)
|
||||
|
||||
try:
|
||||
if analytics_recorder is not None:
|
||||
await analytics_recorder.record(
|
||||
thread_id=thread_id,
|
||||
event_type="message",
|
||||
agent_name=agent_name,
|
||||
tokens_used=tokens,
|
||||
cost_usd=cost,
|
||||
)
|
||||
except Exception:
|
||||
logger.exception("Analytics recorder error for thread %s (suppressed)", thread_id)
|
||||
|
||||
|
||||
def _has_interrupt(state: Any) -> bool:
|
||||
"""Check if the graph state has a pending interrupt."""
|
||||
tasks = getattr(state, "tasks", ())
|
||||
|
||||
153
backend/fixtures/demo_data.py
Normal file
153
backend/fixtures/demo_data.py
Normal file
@@ -0,0 +1,153 @@
|
||||
"""Seed script -- inserts sample conversations and analytics events for demo purposes.
|
||||
|
||||
Usage:
|
||||
cd backend
|
||||
python fixtures/demo_data.py
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import asyncio
|
||||
import os
|
||||
import sys
|
||||
from datetime import datetime, timedelta, timezone
|
||||
from pathlib import Path
|
||||
|
||||
sys.path.insert(0, str(Path(__file__).parent.parent))
|
||||
|
||||
import psycopg
|
||||
|
||||
DATABASE_URL = os.environ.get(
|
||||
"DATABASE_URL",
|
||||
"postgresql://smart_support:dev_password@localhost:5432/smart_support",
|
||||
)
|
||||
|
||||
SAMPLE_CONVERSATIONS = [
|
||||
{
|
||||
"thread_id": "demo-thread-001",
|
||||
"agents_used": ["order_agent"],
|
||||
"turn_count": 3,
|
||||
"total_tokens": 1250,
|
||||
"total_cost_usd": 0.00375,
|
||||
"resolution_type": "resolved",
|
||||
"minutes_ago": 5,
|
||||
},
|
||||
{
|
||||
"thread_id": "demo-thread-002",
|
||||
"agents_used": ["order_agent", "refund_agent"],
|
||||
"turn_count": 6,
|
||||
"total_tokens": 3200,
|
||||
"total_cost_usd": 0.0096,
|
||||
"resolution_type": "resolved",
|
||||
"minutes_ago": 30,
|
||||
},
|
||||
{
|
||||
"thread_id": "demo-thread-003",
|
||||
"agents_used": ["general_agent"],
|
||||
"turn_count": 2,
|
||||
"total_tokens": 800,
|
||||
"total_cost_usd": 0.0024,
|
||||
"resolution_type": None,
|
||||
"minutes_ago": 60,
|
||||
},
|
||||
{
|
||||
"thread_id": "demo-thread-004",
|
||||
"agents_used": ["order_agent", "general_agent"],
|
||||
"turn_count": 8,
|
||||
"total_tokens": 4500,
|
||||
"total_cost_usd": 0.0135,
|
||||
"resolution_type": "escalated",
|
||||
"minutes_ago": 120,
|
||||
},
|
||||
{
|
||||
"thread_id": "demo-thread-005",
|
||||
"agents_used": ["refund_agent"],
|
||||
"turn_count": 4,
|
||||
"total_tokens": 2100,
|
||||
"total_cost_usd": 0.0063,
|
||||
"resolution_type": "resolved",
|
||||
"minutes_ago": 240,
|
||||
},
|
||||
]
|
||||
|
||||
SAMPLE_EVENTS = [
|
||||
{"thread_id": "demo-thread-001", "event_type": "message", "agent_name": "order_agent", "tokens_used": 400, "cost_usd": 0.0012, "success": True},
|
||||
{"thread_id": "demo-thread-001", "event_type": "tool_call", "agent_name": "order_agent", "tool_name": "get_order_status", "tokens_used": 0, "cost_usd": 0.0, "success": True},
|
||||
{"thread_id": "demo-thread-002", "event_type": "message", "agent_name": "order_agent", "tokens_used": 1600, "cost_usd": 0.0048, "success": True},
|
||||
{"thread_id": "demo-thread-002", "event_type": "message", "agent_name": "refund_agent", "tokens_used": 1600, "cost_usd": 0.0048, "success": True},
|
||||
{"thread_id": "demo-thread-002", "event_type": "tool_call", "agent_name": "refund_agent", "tool_name": "process_refund", "tokens_used": 0, "cost_usd": 0.0, "success": True},
|
||||
{"thread_id": "demo-thread-003", "event_type": "message", "agent_name": "general_agent", "tokens_used": 800, "cost_usd": 0.0024, "success": True},
|
||||
{"thread_id": "demo-thread-004", "event_type": "message", "agent_name": "order_agent", "tokens_used": 2000, "cost_usd": 0.006, "success": True},
|
||||
{"thread_id": "demo-thread-004", "event_type": "escalation", "agent_name": "general_agent", "tokens_used": 2500, "cost_usd": 0.0075, "success": False},
|
||||
{"thread_id": "demo-thread-005", "event_type": "message", "agent_name": "refund_agent", "tokens_used": 2100, "cost_usd": 0.0063, "success": True},
|
||||
]
|
||||
|
||||
_INSERT_CONVERSATION = """
|
||||
INSERT INTO conversations
|
||||
(thread_id, started_at, last_activity, turn_count, agents_used,
|
||||
total_tokens, total_cost_usd, resolution_type, ended_at)
|
||||
VALUES
|
||||
(%(thread_id)s, %(started_at)s, %(last_activity)s, %(turn_count)s,
|
||||
%(agents_used)s, %(total_tokens)s, %(total_cost_usd)s,
|
||||
%(resolution_type)s, %(ended_at)s)
|
||||
ON CONFLICT (thread_id) DO NOTHING
|
||||
"""
|
||||
|
||||
_INSERT_EVENT = """
|
||||
INSERT INTO analytics_events
|
||||
(thread_id, event_type, agent_name, tool_name, tokens_used, cost_usd, success)
|
||||
VALUES
|
||||
(%(thread_id)s, %(event_type)s, %(agent_name)s, %(tool_name)s,
|
||||
%(tokens_used)s, %(cost_usd)s, %(success)s)
|
||||
"""
|
||||
|
||||
|
||||
async def seed() -> None:
|
||||
now = datetime.now(tz=timezone.utc)
|
||||
|
||||
async with await psycopg.AsyncConnection.connect(DATABASE_URL) as conn:
|
||||
print("Seeding conversations...")
|
||||
for conv in SAMPLE_CONVERSATIONS:
|
||||
started_at = now - timedelta(minutes=conv["minutes_ago"])
|
||||
last_activity = started_at + timedelta(minutes=conv["turn_count"] * 2)
|
||||
ended_at = last_activity if conv["resolution_type"] else None
|
||||
|
||||
await conn.execute(
|
||||
_INSERT_CONVERSATION,
|
||||
{
|
||||
"thread_id": conv["thread_id"],
|
||||
"started_at": started_at,
|
||||
"last_activity": last_activity,
|
||||
"turn_count": conv["turn_count"],
|
||||
"agents_used": conv["agents_used"],
|
||||
"total_tokens": conv["total_tokens"],
|
||||
"total_cost_usd": conv["total_cost_usd"],
|
||||
"resolution_type": conv["resolution_type"],
|
||||
"ended_at": ended_at,
|
||||
},
|
||||
)
|
||||
print(f" Inserted conversation {conv['thread_id']}")
|
||||
|
||||
print("Seeding analytics events...")
|
||||
for event in SAMPLE_EVENTS:
|
||||
await conn.execute(
|
||||
_INSERT_EVENT,
|
||||
{
|
||||
"thread_id": event["thread_id"],
|
||||
"event_type": event["event_type"],
|
||||
"agent_name": event.get("agent_name"),
|
||||
"tool_name": event.get("tool_name"),
|
||||
"tokens_used": event.get("tokens_used", 0),
|
||||
"cost_usd": event.get("cost_usd", 0.0),
|
||||
"success": event.get("success"),
|
||||
},
|
||||
)
|
||||
print(f" Inserted event {event['event_type']} for {event['thread_id']}")
|
||||
|
||||
await conn.commit()
|
||||
|
||||
print("Done. Demo data seeded successfully.")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
asyncio.run(seed())
|
||||
238
backend/fixtures/sample_openapi.yaml
Normal file
238
backend/fixtures/sample_openapi.yaml
Normal file
@@ -0,0 +1,238 @@
|
||||
openapi: "3.0.3"
|
||||
info:
|
||||
title: "E-Commerce API"
|
||||
description: "Sample e-commerce API for Smart Support demo."
|
||||
version: "1.0.0"
|
||||
|
||||
servers:
|
||||
- url: "https://api.example-shop.com/v1"
|
||||
description: "Production server"
|
||||
|
||||
paths:
|
||||
/orders/{order_id}:
|
||||
get:
|
||||
operationId: getOrder
|
||||
summary: "Get order details"
|
||||
description: "Retrieve the full details of a specific order."
|
||||
parameters:
|
||||
- name: order_id
|
||||
in: path
|
||||
required: true
|
||||
schema:
|
||||
type: string
|
||||
responses:
|
||||
"200":
|
||||
description: "Order details"
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
$ref: "#/components/schemas/Order"
|
||||
|
||||
/orders/{order_id}/cancel:
|
||||
post:
|
||||
operationId: cancelOrder
|
||||
summary: "Cancel an order"
|
||||
description: "Cancel an order that has not yet been shipped."
|
||||
parameters:
|
||||
- name: order_id
|
||||
in: path
|
||||
required: true
|
||||
schema:
|
||||
type: string
|
||||
requestBody:
|
||||
required: true
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
type: object
|
||||
properties:
|
||||
reason:
|
||||
type: string
|
||||
responses:
|
||||
"200":
|
||||
description: "Order cancelled"
|
||||
"400":
|
||||
description: "Order cannot be cancelled (already shipped)"
|
||||
|
||||
/orders/{order_id}/refund:
|
||||
post:
|
||||
operationId: refundOrder
|
||||
summary: "Request a refund"
|
||||
description: "Submit a refund request for a completed order."
|
||||
parameters:
|
||||
- name: order_id
|
||||
in: path
|
||||
required: true
|
||||
schema:
|
||||
type: string
|
||||
requestBody:
|
||||
required: true
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
type: object
|
||||
properties:
|
||||
amount:
|
||||
type: number
|
||||
description: "Refund amount in USD. Leave null for full refund."
|
||||
reason:
|
||||
type: string
|
||||
responses:
|
||||
"200":
|
||||
description: "Refund submitted"
|
||||
"400":
|
||||
description: "Invalid refund request"
|
||||
|
||||
/customers/{customer_id}:
|
||||
get:
|
||||
operationId: getCustomer
|
||||
summary: "Get customer profile"
|
||||
description: "Retrieve customer profile and account information."
|
||||
parameters:
|
||||
- name: customer_id
|
||||
in: path
|
||||
required: true
|
||||
schema:
|
||||
type: string
|
||||
responses:
|
||||
"200":
|
||||
description: "Customer profile"
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
$ref: "#/components/schemas/Customer"
|
||||
|
||||
/customers/{customer_id}/orders:
|
||||
get:
|
||||
operationId: listCustomerOrders
|
||||
summary: "List customer orders"
|
||||
description: "Get a paginated list of orders for a customer."
|
||||
parameters:
|
||||
- name: customer_id
|
||||
in: path
|
||||
required: true
|
||||
schema:
|
||||
type: string
|
||||
- name: page
|
||||
in: query
|
||||
schema:
|
||||
type: integer
|
||||
default: 1
|
||||
- name: per_page
|
||||
in: query
|
||||
schema:
|
||||
type: integer
|
||||
default: 20
|
||||
responses:
|
||||
"200":
|
||||
description: "List of orders"
|
||||
|
||||
/products/{product_id}:
|
||||
get:
|
||||
operationId: getProduct
|
||||
summary: "Get product details"
|
||||
description: "Retrieve product information including inventory status."
|
||||
parameters:
|
||||
- name: product_id
|
||||
in: path
|
||||
required: true
|
||||
schema:
|
||||
type: string
|
||||
responses:
|
||||
"200":
|
||||
description: "Product details"
|
||||
|
||||
/support/tickets:
|
||||
post:
|
||||
operationId: createSupportTicket
|
||||
summary: "Create support ticket"
|
||||
description: "Open a new support ticket for a customer issue."
|
||||
requestBody:
|
||||
required: true
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
$ref: "#/components/schemas/CreateTicketRequest"
|
||||
responses:
|
||||
"201":
|
||||
description: "Ticket created"
|
||||
|
||||
/support/tickets/{ticket_id}:
|
||||
get:
|
||||
operationId: getSupportTicket
|
||||
summary: "Get support ticket"
|
||||
description: "Retrieve a support ticket and its conversation history."
|
||||
parameters:
|
||||
- name: ticket_id
|
||||
in: path
|
||||
required: true
|
||||
schema:
|
||||
type: string
|
||||
responses:
|
||||
"200":
|
||||
description: "Ticket details"
|
||||
|
||||
components:
|
||||
schemas:
|
||||
Order:
|
||||
type: object
|
||||
properties:
|
||||
order_id:
|
||||
type: string
|
||||
customer_id:
|
||||
type: string
|
||||
status:
|
||||
type: string
|
||||
enum: [pending, processing, shipped, delivered, cancelled, refunded]
|
||||
items:
|
||||
type: array
|
||||
items:
|
||||
$ref: "#/components/schemas/OrderItem"
|
||||
total_usd:
|
||||
type: number
|
||||
created_at:
|
||||
type: string
|
||||
format: date-time
|
||||
|
||||
OrderItem:
|
||||
type: object
|
||||
properties:
|
||||
product_id:
|
||||
type: string
|
||||
name:
|
||||
type: string
|
||||
quantity:
|
||||
type: integer
|
||||
unit_price_usd:
|
||||
type: number
|
||||
|
||||
Customer:
|
||||
type: object
|
||||
properties:
|
||||
customer_id:
|
||||
type: string
|
||||
email:
|
||||
type: string
|
||||
name:
|
||||
type: string
|
||||
tier:
|
||||
type: string
|
||||
enum: [standard, premium, vip]
|
||||
created_at:
|
||||
type: string
|
||||
format: date-time
|
||||
|
||||
CreateTicketRequest:
|
||||
type: object
|
||||
required: [customer_id, subject, description]
|
||||
properties:
|
||||
customer_id:
|
||||
type: string
|
||||
subject:
|
||||
type: string
|
||||
description:
|
||||
type: string
|
||||
priority:
|
||||
type: string
|
||||
enum: [low, medium, high, urgent]
|
||||
default: medium
|
||||
@@ -15,6 +15,16 @@ if TYPE_CHECKING:
|
||||
from pathlib import Path
|
||||
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def clear_rate_limit_state() -> None:
|
||||
"""Clear module-level rate limit state between tests to prevent leakage."""
|
||||
import app.ws_handler as ws_handler
|
||||
|
||||
ws_handler._thread_timestamps.clear()
|
||||
yield
|
||||
ws_handler._thread_timestamps.clear()
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def test_settings() -> Settings:
|
||||
return Settings(
|
||||
|
||||
@@ -315,7 +315,7 @@ class TestWebSocketValidation:
|
||||
@pytest.mark.asyncio
|
||||
async def test_content_too_long(self) -> None:
|
||||
g, sm, im, cb, ws = _setup()
|
||||
raw = json.dumps({"type": "message", "thread_id": "t1", "content": "x" * 9000})
|
||||
raw = json.dumps({"type": "message", "thread_id": "t1", "content": "x" * 10001})
|
||||
await dispatch_message(ws, g, sm, cb, raw, interrupt_manager=im)
|
||||
assert ws.sent[0]["type"] == "error"
|
||||
assert "too long" in ws.sent[0]["message"].lower()
|
||||
|
||||
156
backend/tests/unit/test_conversation_tracker.py
Normal file
156
backend/tests/unit/test_conversation_tracker.py
Normal file
@@ -0,0 +1,156 @@
|
||||
"""Tests for app.conversation_tracker module."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
from unittest.mock import AsyncMock, MagicMock
|
||||
|
||||
import pytest
|
||||
|
||||
from app.conversation_tracker import (
|
||||
ConversationTrackerProtocol,
|
||||
NoOpConversationTracker,
|
||||
PostgresConversationTracker,
|
||||
)
|
||||
|
||||
pytestmark = pytest.mark.unit
|
||||
|
||||
|
||||
def _make_pool() -> AsyncMock:
|
||||
"""Create a mock async connection pool."""
|
||||
pool = AsyncMock()
|
||||
conn = AsyncMock()
|
||||
conn.execute = AsyncMock()
|
||||
pool.connection = MagicMock(return_value=_AsyncContextManager(conn))
|
||||
return pool, conn
|
||||
|
||||
|
||||
class _AsyncContextManager:
|
||||
"""Async context manager helper."""
|
||||
|
||||
def __init__(self, value: object) -> None:
|
||||
self._value = value
|
||||
|
||||
async def __aenter__(self) -> object:
|
||||
return self._value
|
||||
|
||||
async def __aexit__(self, *args: object) -> None:
|
||||
pass
|
||||
|
||||
|
||||
class TestConversationTrackerProtocol:
|
||||
def test_noop_satisfies_protocol(self) -> None:
|
||||
tracker = NoOpConversationTracker()
|
||||
assert isinstance(tracker, ConversationTrackerProtocol)
|
||||
|
||||
def test_postgres_satisfies_protocol(self) -> None:
|
||||
tracker = PostgresConversationTracker()
|
||||
assert isinstance(tracker, ConversationTrackerProtocol)
|
||||
|
||||
|
||||
class TestNoOpConversationTracker:
|
||||
@pytest.mark.asyncio
|
||||
async def test_ensure_conversation_does_nothing(self) -> None:
|
||||
tracker = NoOpConversationTracker()
|
||||
pool = AsyncMock()
|
||||
# Should not raise
|
||||
await tracker.ensure_conversation(pool, "thread-1")
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_record_turn_does_nothing(self) -> None:
|
||||
tracker = NoOpConversationTracker()
|
||||
pool = AsyncMock()
|
||||
await tracker.record_turn(pool, "thread-1", "agent_a", 100, 0.05)
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_resolve_does_nothing(self) -> None:
|
||||
tracker = NoOpConversationTracker()
|
||||
pool = AsyncMock()
|
||||
await tracker.resolve(pool, "thread-1", "resolved")
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_accepts_none_agent_name(self) -> None:
|
||||
tracker = NoOpConversationTracker()
|
||||
pool = AsyncMock()
|
||||
await tracker.record_turn(pool, "thread-1", None, 0, 0.0)
|
||||
|
||||
|
||||
class TestPostgresConversationTracker:
|
||||
@pytest.mark.asyncio
|
||||
async def test_ensure_conversation_executes_insert(self) -> None:
|
||||
tracker = PostgresConversationTracker()
|
||||
pool, conn = _make_pool()
|
||||
|
||||
await tracker.ensure_conversation(pool, "thread-abc")
|
||||
|
||||
conn.execute.assert_awaited_once()
|
||||
sql, params = conn.execute.call_args[0]
|
||||
assert "INSERT" in sql
|
||||
assert "ON CONFLICT" in sql
|
||||
assert params["thread_id"] == "thread-abc"
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_record_turn_executes_update(self) -> None:
|
||||
tracker = PostgresConversationTracker()
|
||||
pool, conn = _make_pool()
|
||||
|
||||
await tracker.record_turn(pool, "thread-abc", "order_agent", 250, 0.12)
|
||||
|
||||
conn.execute.assert_awaited_once()
|
||||
sql, params = conn.execute.call_args[0]
|
||||
assert "UPDATE" in sql
|
||||
assert params["thread_id"] == "thread-abc"
|
||||
assert params["agent_name"] == "order_agent"
|
||||
assert params["tokens"] == 250
|
||||
assert params["cost"] == 0.12
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_record_turn_accepts_none_agent_name(self) -> None:
|
||||
tracker = PostgresConversationTracker()
|
||||
pool, conn = _make_pool()
|
||||
|
||||
await tracker.record_turn(pool, "thread-abc", None, 0, 0.0)
|
||||
|
||||
conn.execute.assert_awaited_once()
|
||||
sql, params = conn.execute.call_args[0]
|
||||
assert params["agent_name"] is None
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_resolve_executes_update(self) -> None:
|
||||
tracker = PostgresConversationTracker()
|
||||
pool, conn = _make_pool()
|
||||
|
||||
await tracker.resolve(pool, "thread-abc", "resolved")
|
||||
|
||||
conn.execute.assert_awaited_once()
|
||||
sql, params = conn.execute.call_args[0]
|
||||
assert "UPDATE" in sql
|
||||
assert params["thread_id"] == "thread-abc"
|
||||
assert params["resolution_type"] == "resolved"
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_resolve_sets_ended_at(self) -> None:
|
||||
tracker = PostgresConversationTracker()
|
||||
pool, conn = _make_pool()
|
||||
|
||||
await tracker.resolve(pool, "thread-abc", "escalated")
|
||||
|
||||
sql, params = conn.execute.call_args[0]
|
||||
assert "ended_at" in sql.lower()
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_ensure_conversation_with_special_thread_id(self) -> None:
|
||||
tracker = PostgresConversationTracker()
|
||||
pool, conn = _make_pool()
|
||||
|
||||
await tracker.ensure_conversation(pool, "thread-123-abc-XYZ")
|
||||
|
||||
conn.execute.assert_awaited_once()
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_record_turn_with_zero_cost(self) -> None:
|
||||
tracker = PostgresConversationTracker()
|
||||
pool, conn = _make_pool()
|
||||
|
||||
await tracker.record_turn(pool, "t1", "agent", 0, 0.0)
|
||||
|
||||
conn.execute.assert_awaited_once()
|
||||
213
backend/tests/unit/test_edge_cases.py
Normal file
213
backend/tests/unit/test_edge_cases.py
Normal file
@@ -0,0 +1,213 @@
|
||||
"""Edge case tests for ws_handler input validation and rate limiting."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import json
|
||||
from unittest.mock import AsyncMock, MagicMock
|
||||
|
||||
import pytest
|
||||
|
||||
from app.callbacks import TokenUsageCallbackHandler
|
||||
from app.session_manager import SessionManager
|
||||
from app.ws_handler import dispatch_message
|
||||
|
||||
pytestmark = pytest.mark.unit
|
||||
|
||||
|
||||
def _make_ws() -> AsyncMock:
|
||||
ws = AsyncMock()
|
||||
ws.send_json = AsyncMock()
|
||||
return ws
|
||||
|
||||
|
||||
def _make_graph() -> AsyncMock:
|
||||
graph = AsyncMock()
|
||||
|
||||
class AsyncIterHelper:
|
||||
def __aiter__(self):
|
||||
return self
|
||||
|
||||
async def __anext__(self):
|
||||
raise StopAsyncIteration
|
||||
|
||||
graph.astream = MagicMock(return_value=AsyncIterHelper())
|
||||
state = MagicMock()
|
||||
state.tasks = ()
|
||||
graph.aget_state = AsyncMock(return_value=state)
|
||||
graph.intent_classifier = None
|
||||
graph.agent_registry = None
|
||||
return graph
|
||||
|
||||
|
||||
@pytest.mark.unit
|
||||
class TestEmptyMessageHandling:
|
||||
@pytest.mark.asyncio
|
||||
async def test_empty_message_content_returns_error(self) -> None:
|
||||
ws = _make_ws()
|
||||
graph = _make_graph()
|
||||
sm = SessionManager()
|
||||
cb = TokenUsageCallbackHandler()
|
||||
|
||||
sm.touch("t1")
|
||||
msg = json.dumps({"type": "message", "thread_id": "t1", "content": ""})
|
||||
await dispatch_message(ws, graph, sm, cb, msg)
|
||||
|
||||
call_data = ws.send_json.call_args[0][0]
|
||||
assert call_data["type"] == "error"
|
||||
msg_lower = call_data["message"].lower()
|
||||
assert "content" in msg_lower or "missing" in msg_lower
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_whitespace_only_message_treated_as_empty(self) -> None:
|
||||
ws = _make_ws()
|
||||
graph = _make_graph()
|
||||
sm = SessionManager()
|
||||
cb = TokenUsageCallbackHandler()
|
||||
|
||||
sm.touch("t1")
|
||||
msg = json.dumps({"type": "message", "thread_id": "t1", "content": " "})
|
||||
await dispatch_message(ws, graph, sm, cb, msg)
|
||||
|
||||
call_data = ws.send_json.call_args[0][0]
|
||||
assert call_data["type"] == "error"
|
||||
|
||||
|
||||
@pytest.mark.unit
|
||||
class TestOversizedMessageHandling:
|
||||
@pytest.mark.asyncio
|
||||
async def test_content_over_10000_chars_returns_error(self) -> None:
|
||||
ws = _make_ws()
|
||||
graph = _make_graph()
|
||||
sm = SessionManager()
|
||||
cb = TokenUsageCallbackHandler()
|
||||
|
||||
sm.touch("t1")
|
||||
content = "x" * 10001
|
||||
msg = json.dumps({"type": "message", "thread_id": "t1", "content": content})
|
||||
await dispatch_message(ws, graph, sm, cb, msg)
|
||||
|
||||
call_data = ws.send_json.call_args[0][0]
|
||||
assert call_data["type"] == "error"
|
||||
assert "too long" in call_data["message"].lower()
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_content_exactly_10000_chars_is_accepted(self) -> None:
|
||||
ws = _make_ws()
|
||||
graph = _make_graph()
|
||||
sm = SessionManager()
|
||||
cb = TokenUsageCallbackHandler()
|
||||
|
||||
sm.touch("t1")
|
||||
content = "x" * 10000
|
||||
msg = json.dumps({"type": "message", "thread_id": "t1", "content": content})
|
||||
await dispatch_message(ws, graph, sm, cb, msg)
|
||||
|
||||
last_call = ws.send_json.call_args[0][0]
|
||||
# Should be processed, not an error about length
|
||||
msg_text = last_call.get("message", "").lower()
|
||||
assert last_call["type"] != "error" or "too long" not in msg_text
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_raw_message_over_32kb_returns_error(self) -> None:
|
||||
ws = _make_ws()
|
||||
graph = _make_graph()
|
||||
sm = SessionManager()
|
||||
cb = TokenUsageCallbackHandler()
|
||||
|
||||
large_msg = "x" * 40_000
|
||||
await dispatch_message(ws, graph, sm, cb, large_msg)
|
||||
|
||||
call_data = ws.send_json.call_args[0][0]
|
||||
assert call_data["type"] == "error"
|
||||
assert "too large" in call_data["message"].lower()
|
||||
|
||||
|
||||
@pytest.mark.unit
|
||||
class TestInvalidJsonHandling:
|
||||
@pytest.mark.asyncio
|
||||
async def test_invalid_json_returns_error(self) -> None:
|
||||
ws = _make_ws()
|
||||
graph = _make_graph()
|
||||
sm = SessionManager()
|
||||
cb = TokenUsageCallbackHandler()
|
||||
|
||||
await dispatch_message(ws, graph, sm, cb, "not valid json {{")
|
||||
|
||||
call_data = ws.send_json.call_args[0][0]
|
||||
assert call_data["type"] == "error"
|
||||
assert "invalid json" in call_data["message"].lower()
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_empty_string_returns_json_error(self) -> None:
|
||||
ws = _make_ws()
|
||||
graph = _make_graph()
|
||||
sm = SessionManager()
|
||||
cb = TokenUsageCallbackHandler()
|
||||
|
||||
await dispatch_message(ws, graph, sm, cb, "")
|
||||
|
||||
call_data = ws.send_json.call_args[0][0]
|
||||
assert call_data["type"] == "error"
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_json_array_not_object_returns_error(self) -> None:
|
||||
ws = _make_ws()
|
||||
graph = _make_graph()
|
||||
sm = SessionManager()
|
||||
cb = TokenUsageCallbackHandler()
|
||||
|
||||
await dispatch_message(ws, graph, sm, cb, '["not", "an", "object"]')
|
||||
|
||||
call_data = ws.send_json.call_args[0][0]
|
||||
assert call_data["type"] == "error"
|
||||
|
||||
|
||||
@pytest.mark.unit
|
||||
class TestRateLimiting:
|
||||
@pytest.mark.asyncio
|
||||
async def test_rapid_fire_messages_rate_limited(self) -> None:
|
||||
ws = _make_ws()
|
||||
_make_graph() # ensure graph factory works, not needed directly
|
||||
sm = SessionManager()
|
||||
cb = TokenUsageCallbackHandler()
|
||||
|
||||
sm.touch("t1")
|
||||
|
||||
# Simulate 11 rapid messages (exceeds 10 per 10 seconds limit)
|
||||
rate_limit_triggered = False
|
||||
for i in range(11):
|
||||
graph2 = _make_graph() # fresh graph each time
|
||||
await dispatch_message(ws, graph2, sm, cb, json.dumps({
|
||||
"type": "message",
|
||||
"thread_id": "t1",
|
||||
"content": f"message {i}",
|
||||
}))
|
||||
last_call = ws.send_json.call_args[0][0]
|
||||
if last_call["type"] == "error" and "rate" in last_call.get("message", "").lower():
|
||||
rate_limit_triggered = True
|
||||
break
|
||||
|
||||
assert rate_limit_triggered, "Rate limiting should trigger after 10 rapid messages"
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_different_threads_have_separate_rate_limits(self) -> None:
|
||||
ws = _make_ws()
|
||||
sm = SessionManager()
|
||||
cb = TokenUsageCallbackHandler()
|
||||
|
||||
sm.touch("t1")
|
||||
sm.touch("t2")
|
||||
|
||||
# Send 5 messages on t1 and 5 on t2 -- neither should be rate limited
|
||||
for i in range(5):
|
||||
graph1 = _make_graph()
|
||||
graph2 = _make_graph()
|
||||
await dispatch_message(ws, graph1, sm, cb, json.dumps({
|
||||
"type": "message", "thread_id": "t1", "content": f"msg {i}",
|
||||
}))
|
||||
await dispatch_message(ws, graph2, sm, cb, json.dumps({
|
||||
"type": "message", "thread_id": "t2", "content": f"msg {i}",
|
||||
}))
|
||||
|
||||
last_call = ws.send_json.call_args[0][0]
|
||||
assert "rate" not in last_call.get("message", "").lower()
|
||||
175
backend/tests/unit/test_error_handler.py
Normal file
175
backend/tests/unit/test_error_handler.py
Normal file
@@ -0,0 +1,175 @@
|
||||
"""Tests for app.tools.error_handler module."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
from unittest.mock import AsyncMock, patch
|
||||
|
||||
import httpx
|
||||
import pytest
|
||||
|
||||
from app.tools.error_handler import (
|
||||
ErrorCategory,
|
||||
classify_error,
|
||||
with_retry,
|
||||
)
|
||||
|
||||
pytestmark = pytest.mark.unit
|
||||
|
||||
|
||||
class TestErrorClassification:
|
||||
def test_timeout_exception_is_timeout(self) -> None:
|
||||
exc = httpx.TimeoutException("timed out")
|
||||
assert classify_error(exc) == ErrorCategory.TIMEOUT
|
||||
|
||||
def test_connect_error_is_network(self) -> None:
|
||||
exc = httpx.ConnectError("connection refused")
|
||||
assert classify_error(exc) == ErrorCategory.NETWORK
|
||||
|
||||
def test_401_is_auth_failure(self) -> None:
|
||||
request = httpx.Request("GET", "http://example.com")
|
||||
response = httpx.Response(401, request=request)
|
||||
exc = httpx.HTTPStatusError("401", request=request, response=response)
|
||||
assert classify_error(exc) == ErrorCategory.AUTH_FAILURE
|
||||
|
||||
def test_403_is_auth_failure(self) -> None:
|
||||
request = httpx.Request("GET", "http://example.com")
|
||||
response = httpx.Response(403, request=request)
|
||||
exc = httpx.HTTPStatusError("403", request=request, response=response)
|
||||
assert classify_error(exc) == ErrorCategory.AUTH_FAILURE
|
||||
|
||||
def test_429_is_retryable(self) -> None:
|
||||
request = httpx.Request("GET", "http://example.com")
|
||||
response = httpx.Response(429, request=request)
|
||||
exc = httpx.HTTPStatusError("429", request=request, response=response)
|
||||
assert classify_error(exc) == ErrorCategory.RETRYABLE
|
||||
|
||||
def test_500_is_retryable(self) -> None:
|
||||
request = httpx.Request("GET", "http://example.com")
|
||||
response = httpx.Response(500, request=request)
|
||||
exc = httpx.HTTPStatusError("500", request=request, response=response)
|
||||
assert classify_error(exc) == ErrorCategory.RETRYABLE
|
||||
|
||||
def test_502_is_retryable(self) -> None:
|
||||
request = httpx.Request("GET", "http://example.com")
|
||||
response = httpx.Response(502, request=request)
|
||||
exc = httpx.HTTPStatusError("502", request=request, response=response)
|
||||
assert classify_error(exc) == ErrorCategory.RETRYABLE
|
||||
|
||||
def test_503_is_retryable(self) -> None:
|
||||
request = httpx.Request("GET", "http://example.com")
|
||||
response = httpx.Response(503, request=request)
|
||||
exc = httpx.HTTPStatusError("503", request=request, response=response)
|
||||
assert classify_error(exc) == ErrorCategory.RETRYABLE
|
||||
|
||||
def test_404_is_non_retryable(self) -> None:
|
||||
request = httpx.Request("GET", "http://example.com")
|
||||
response = httpx.Response(404, request=request)
|
||||
exc = httpx.HTTPStatusError("404", request=request, response=response)
|
||||
assert classify_error(exc) == ErrorCategory.NON_RETRYABLE
|
||||
|
||||
def test_400_is_non_retryable(self) -> None:
|
||||
request = httpx.Request("GET", "http://example.com")
|
||||
response = httpx.Response(400, request=request)
|
||||
exc = httpx.HTTPStatusError("400", request=request, response=response)
|
||||
assert classify_error(exc) == ErrorCategory.NON_RETRYABLE
|
||||
|
||||
def test_generic_exception_is_non_retryable(self) -> None:
|
||||
exc = ValueError("bad value")
|
||||
assert classify_error(exc) == ErrorCategory.NON_RETRYABLE
|
||||
|
||||
def test_runtime_error_is_non_retryable(self) -> None:
|
||||
exc = RuntimeError("boom")
|
||||
assert classify_error(exc) == ErrorCategory.NON_RETRYABLE
|
||||
|
||||
|
||||
class TestWithRetry:
|
||||
@pytest.mark.asyncio
|
||||
async def test_succeeds_on_first_try(self) -> None:
|
||||
fn = AsyncMock(return_value="ok")
|
||||
result = await with_retry(fn, max_retries=3, base_delay=0.0)
|
||||
assert result == "ok"
|
||||
assert fn.call_count == 1
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_retries_on_retryable_error(self) -> None:
|
||||
request = httpx.Request("GET", "http://example.com")
|
||||
response = httpx.Response(503, request=request)
|
||||
retryable_exc = httpx.HTTPStatusError("503", request=request, response=response)
|
||||
|
||||
fn = AsyncMock(side_effect=[retryable_exc, retryable_exc, "success"])
|
||||
|
||||
with patch("app.tools.error_handler.asyncio.sleep", new_callable=AsyncMock):
|
||||
result = await with_retry(fn, max_retries=3, base_delay=0.0)
|
||||
|
||||
assert result == "success"
|
||||
assert fn.call_count == 3
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_does_not_retry_non_retryable_error(self) -> None:
|
||||
request = httpx.Request("GET", "http://example.com")
|
||||
response = httpx.Response(404, request=request)
|
||||
non_retryable_exc = httpx.HTTPStatusError("404", request=request, response=response)
|
||||
|
||||
fn = AsyncMock(side_effect=non_retryable_exc)
|
||||
|
||||
with pytest.raises(httpx.HTTPStatusError):
|
||||
await with_retry(fn, max_retries=3, base_delay=0.0)
|
||||
|
||||
assert fn.call_count == 1
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_does_not_retry_auth_failure(self) -> None:
|
||||
request = httpx.Request("GET", "http://example.com")
|
||||
response = httpx.Response(401, request=request)
|
||||
auth_exc = httpx.HTTPStatusError("401", request=request, response=response)
|
||||
|
||||
fn = AsyncMock(side_effect=auth_exc)
|
||||
|
||||
with pytest.raises(httpx.HTTPStatusError):
|
||||
await with_retry(fn, max_retries=3, base_delay=0.0)
|
||||
|
||||
assert fn.call_count == 1
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_raises_after_max_retries_exhausted(self) -> None:
|
||||
request = httpx.Request("GET", "http://example.com")
|
||||
response = httpx.Response(500, request=request)
|
||||
retryable_exc = httpx.HTTPStatusError("500", request=request, response=response)
|
||||
|
||||
fn = AsyncMock(side_effect=retryable_exc)
|
||||
|
||||
with (
|
||||
patch("app.tools.error_handler.asyncio.sleep", new_callable=AsyncMock),
|
||||
pytest.raises(httpx.HTTPStatusError),
|
||||
):
|
||||
await with_retry(fn, max_retries=3, base_delay=0.0)
|
||||
|
||||
assert fn.call_count == 3
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_does_not_retry_timeout(self) -> None:
|
||||
"""TimeoutException is TIMEOUT category -- not retried by default."""
|
||||
fn = AsyncMock(side_effect=httpx.TimeoutException("timed out"))
|
||||
|
||||
with pytest.raises(httpx.TimeoutException):
|
||||
await with_retry(fn, max_retries=3, base_delay=0.0)
|
||||
|
||||
assert fn.call_count == 1
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_exponential_backoff_increases_delay(self) -> None:
|
||||
request = httpx.Request("GET", "http://example.com")
|
||||
response = httpx.Response(503, request=request)
|
||||
retryable_exc = httpx.HTTPStatusError("503", request=request, response=response)
|
||||
|
||||
fn = AsyncMock(side_effect=[retryable_exc, retryable_exc, "done"])
|
||||
sleep_delays: list[float] = []
|
||||
|
||||
async def capture_sleep(delay: float) -> None:
|
||||
sleep_delays.append(delay)
|
||||
|
||||
with patch("app.tools.error_handler.asyncio.sleep", side_effect=capture_sleep):
|
||||
await with_retry(fn, max_retries=3, base_delay=1.0)
|
||||
|
||||
assert len(sleep_delays) == 2
|
||||
assert sleep_delays[1] > sleep_delays[0]
|
||||
@@ -13,7 +13,7 @@ class TestMainModule:
|
||||
assert app.title == "Smart Support"
|
||||
|
||||
def test_app_version(self) -> None:
|
||||
assert app.version == "0.4.0"
|
||||
assert app.version == "0.5.0"
|
||||
|
||||
def test_agents_yaml_path_exists(self) -> None:
|
||||
assert AGENTS_YAML.name == "agents.yaml"
|
||||
@@ -33,3 +33,10 @@ class TestMainModule:
|
||||
def test_analytics_router_registered(self) -> None:
|
||||
routes = [r.path for r in app.routes if hasattr(r, "path")]
|
||||
assert any("analytics" in p for p in routes)
|
||||
|
||||
def test_health_route_registered(self) -> None:
|
||||
routes = [r.path for r in app.routes if hasattr(r, "path")]
|
||||
assert "/api/health" in routes
|
||||
|
||||
def test_app_version_is_0_5_0(self) -> None:
|
||||
assert app.version == "0.5.0"
|
||||
|
||||
@@ -138,7 +138,7 @@ class TestDispatchMessage:
|
||||
sm = SessionManager()
|
||||
cb = TokenUsageCallbackHandler()
|
||||
|
||||
msg = json.dumps({"type": "message", "thread_id": "t1", "content": "x" * 9000})
|
||||
msg = json.dumps({"type": "message", "thread_id": "t1", "content": "x" * 10001})
|
||||
await dispatch_message(ws, graph, sm, cb, msg)
|
||||
call_data = ws.send_json.call_args[0][0]
|
||||
assert call_data["type"] == "error"
|
||||
@@ -364,3 +364,80 @@ class TestInterruptHelpers:
|
||||
state.tasks = ()
|
||||
data = _extract_interrupt(state)
|
||||
assert data["action"] == "unknown"
|
||||
|
||||
|
||||
@pytest.mark.unit
|
||||
class TestDispatchMessageWithTracking:
|
||||
@pytest.mark.asyncio
|
||||
async def test_conversation_tracker_called_on_message(self) -> None:
|
||||
ws = _make_ws()
|
||||
graph = _make_graph()
|
||||
sm = SessionManager()
|
||||
cb = TokenUsageCallbackHandler()
|
||||
tracker = AsyncMock()
|
||||
pool = MagicMock()
|
||||
|
||||
sm.touch("t1")
|
||||
msg = json.dumps({"type": "message", "thread_id": "t1", "content": "hello"})
|
||||
await dispatch_message(
|
||||
ws, graph, sm, cb, msg,
|
||||
conversation_tracker=tracker,
|
||||
pool=pool,
|
||||
)
|
||||
|
||||
tracker.ensure_conversation.assert_awaited_once_with(pool, "t1")
|
||||
tracker.record_turn.assert_awaited_once()
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_analytics_recorder_called_on_message(self) -> None:
|
||||
ws = _make_ws()
|
||||
graph = _make_graph()
|
||||
sm = SessionManager()
|
||||
cb = TokenUsageCallbackHandler()
|
||||
recorder = AsyncMock()
|
||||
pool = MagicMock()
|
||||
|
||||
sm.touch("t1")
|
||||
msg = json.dumps({"type": "message", "thread_id": "t1", "content": "hello"})
|
||||
await dispatch_message(
|
||||
ws, graph, sm, cb, msg,
|
||||
analytics_recorder=recorder,
|
||||
pool=pool,
|
||||
)
|
||||
|
||||
recorder.record.assert_awaited_once()
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_tracker_failure_does_not_break_chat(self) -> None:
|
||||
ws = _make_ws()
|
||||
graph = _make_graph()
|
||||
sm = SessionManager()
|
||||
cb = TokenUsageCallbackHandler()
|
||||
tracker = AsyncMock()
|
||||
tracker.ensure_conversation.side_effect = RuntimeError("DB down")
|
||||
pool = MagicMock()
|
||||
|
||||
sm.touch("t1")
|
||||
msg = json.dumps({"type": "message", "thread_id": "t1", "content": "hello"})
|
||||
# Should not raise despite tracker failure
|
||||
await dispatch_message(
|
||||
ws, graph, sm, cb, msg,
|
||||
conversation_tracker=tracker,
|
||||
pool=pool,
|
||||
)
|
||||
last_call = ws.send_json.call_args[0][0]
|
||||
assert last_call["type"] == "message_complete"
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_no_tracker_no_error(self) -> None:
|
||||
ws = _make_ws()
|
||||
graph = _make_graph()
|
||||
sm = SessionManager()
|
||||
cb = TokenUsageCallbackHandler()
|
||||
|
||||
sm.touch("t1")
|
||||
msg = json.dumps({"type": "message", "thread_id": "t1", "content": "hello"})
|
||||
# No tracker or recorder passed -- should work fine
|
||||
await dispatch_message(ws, graph, sm, cb, msg)
|
||||
last_call = ws.send_json.call_args[0][0]
|
||||
assert last_call["type"] == "message_complete"
|
||||
|
||||
@@ -28,12 +28,36 @@ services:
|
||||
ANTHROPIC_API_KEY: ${ANTHROPIC_API_KEY:-}
|
||||
OPENAI_API_KEY: ${OPENAI_API_KEY:-}
|
||||
GOOGLE_API_KEY: ${GOOGLE_API_KEY:-}
|
||||
WEBHOOK_URL: ${WEBHOOK_URL:-}
|
||||
SESSION_TTL_MINUTES: ${SESSION_TTL_MINUTES:-30}
|
||||
INTERRUPT_TTL_MINUTES: ${INTERRUPT_TTL_MINUTES:-30}
|
||||
TEMPLATE_NAME: ${TEMPLATE_NAME:-}
|
||||
depends_on:
|
||||
postgres:
|
||||
condition: service_healthy
|
||||
volumes:
|
||||
- ./backend:/app
|
||||
command: uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload
|
||||
healthcheck:
|
||||
test: ["CMD-SHELL", "curl -f http://localhost:8000/api/health || exit 1"]
|
||||
interval: 10s
|
||||
timeout: 5s
|
||||
retries: 5
|
||||
networks:
|
||||
- app_network
|
||||
|
||||
frontend:
|
||||
build:
|
||||
context: ./frontend
|
||||
dockerfile: Dockerfile
|
||||
ports:
|
||||
- "80:80"
|
||||
depends_on:
|
||||
backend:
|
||||
condition: service_healthy
|
||||
networks:
|
||||
- app_network
|
||||
|
||||
networks:
|
||||
app_network:
|
||||
driver: bridge
|
||||
|
||||
volumes:
|
||||
pgdata:
|
||||
|
||||
@@ -752,6 +752,9 @@ Smart Support 是一个 AI 客服行动层框架。核心价值主张: "粘贴
|
||||
|
||||
## Phase 5: 打磨 + Demo 准备 (第 7-8 周)
|
||||
|
||||
> Status: COMPLETED (2026-03-31)
|
||||
> Dev log: [Phase 5 Dev Log](phases/phase-5-dev-log.md)
|
||||
|
||||
### 目标
|
||||
|
||||
错误处理加固、Demo 脚本和示例数据准备、Docker Compose 全栈部署验证、文档完善。为第一个客户演示做好准备。
|
||||
@@ -764,28 +767,28 @@ Smart Support 是一个 AI 客服行动层框架。核心价值主张: "粘贴
|
||||
|
||||
#### 5.1 错误处理加固 (预计 2 天)
|
||||
|
||||
- [ ] **5.1.1** 审查所有 MCP 工具调用的错误处理 (超时、认证失败、网络错误)
|
||||
- [x] **5.1.1** 审查所有 MCP 工具调用的错误处理 (超时、认证失败、网络错误)
|
||||
- 文件: 全部 `backend/app/agents/*.py`
|
||||
- 工作量: M (4 小时)
|
||||
- 依赖: Phase 1-3
|
||||
- 风险: 低
|
||||
- [ ] **5.1.2** 实现 MCP 错误分类 (可重试 vs. 不可重试, 指数退避策略)
|
||||
- [x] **5.1.2** 实现 MCP 错误分类 (可重试 vs. 不可重试, 指数退避策略)
|
||||
- 文件: `backend/app/tools/error_handler.py`
|
||||
- 工作量: M (4 小时)
|
||||
- 依赖: 5.1.1
|
||||
- 风险: 低
|
||||
- [ ] **5.1.3** 审查前端错误处理 (断线提示、服务端错误友好展示)
|
||||
- [x] **5.1.3** 审查前端错误处理 (断线提示、服务端错误友好展示)
|
||||
- 文件: `frontend/src/` 各组件
|
||||
- 工作量: M (3 小时)
|
||||
- 依赖: Phase 1 前端
|
||||
- 风险: 低
|
||||
- [ ] **5.1.4** 处理边界情况 (空消息、超长消息 10K+、快速连发消息、取消已取消的订单、WebSocket 断线 mid-stream 清理)
|
||||
- [x] **5.1.4** 处理边界情况 (空消息、超长消息 10K+、快速连发消息、取消已取消的订单、WebSocket 断线 mid-stream 清理)
|
||||
- 文件: `backend/app/main.py`, `backend/app/agents/*.py`, `frontend/src/`
|
||||
- 工作量: M (6 小时)
|
||||
- 依赖: Phase 1-2
|
||||
- 风险: 低
|
||||
- 来源: eng-review-test-plan.md 边界 case 清单
|
||||
- [ ] **5.1.5** 编写边界情况测试 (含: 取消已取消订单返回合适错误、WebSocket 断线服务端清理、快速连发无竞态、歧义无上下文时澄清提问)
|
||||
- [x] **5.1.5** 编写边界情况测试 (含: 取消已取消订单返回合适错误、WebSocket 断线服务端清理、快速连发无竞态、歧义无上下文时澄清提问)
|
||||
- 文件: `backend/tests/test_edge_cases.py`
|
||||
- 工作量: M (4 小时)
|
||||
- 依赖: 5.1.4
|
||||
@@ -793,17 +796,17 @@ Smart Support 是一个 AI 客服行动层框架。核心价值主张: "粘贴
|
||||
|
||||
#### 5.2 Demo 准备 (预计 1.5 天)
|
||||
|
||||
- [ ] **5.2.1** 创建 Demo 脚本 (预设对话流程, 覆盖: 查询、取消+批准、多轮上下文、OpenAPI 导入)
|
||||
- [x] **5.2.1** 创建 Demo 脚本 (预设对话流程, 覆盖: 查询、取消+批准、多轮上下文、OpenAPI 导入)
|
||||
- 文件: `docs/demo-script.md`
|
||||
- 工作量: M (3 小时)
|
||||
- 依赖: Phase 1-4
|
||||
- 风险: 低
|
||||
- [ ] **5.2.2** 准备示例数据 (Mock 订单数据, 预置对话用于回放演示)
|
||||
- [x] **5.2.2** 准备示例数据 (Mock 订单数据, 预置对话用于回放演示)
|
||||
- 文件: `backend/fixtures/demo_data.py`
|
||||
- 工作量: M (3 小时)
|
||||
- 依赖: 5.2.1
|
||||
- 风险: 低
|
||||
- [ ] **5.2.3** 准备示例 OpenAPI 规范 (用于 Phase 3 功能演示)
|
||||
- [x] **5.2.3** 准备示例 OpenAPI 规范 (用于 Phase 3 功能演示)
|
||||
- 文件: `backend/fixtures/sample_openapi.yaml`
|
||||
- 工作量: S (1 小时)
|
||||
- 依赖: Phase 3
|
||||
@@ -815,12 +818,12 @@ Smart Support 是一个 AI 客服行动层框架。核心价值主张: "粘贴
|
||||
|
||||
#### 5.3 全栈部署验证 (预计 1 天)
|
||||
|
||||
- [ ] **5.3.1** 验证 Docker Compose 一键启动 (PostgreSQL + 后端 + 前端)
|
||||
- [x] **5.3.1** 验证 Docker Compose 一键启动 (PostgreSQL + 后端 + 前端)
|
||||
- 文件: `docker-compose.yml`
|
||||
- 工作量: M (4 小时)
|
||||
- 依赖: Phase 1-4
|
||||
- 风险: 中 -- 多服务联调可能有端口/网络问题
|
||||
- [ ] **5.3.2** 验证环境变量配置文档完整性
|
||||
- [x] **5.3.2** 验证环境变量配置文档完整性
|
||||
- 文件: `.env.example`, `docs/deployment.md`
|
||||
- 工作量: S (1 小时)
|
||||
- 依赖: 5.3.1
|
||||
@@ -832,22 +835,22 @@ Smart Support 是一个 AI 客服行动层框架。核心价值主张: "粘贴
|
||||
|
||||
#### 5.4 文档完善 (预计 1 天)
|
||||
|
||||
- [ ] **5.4.1** 更新 README.md (快速开始、配置说明、架构图)
|
||||
- [x] **5.4.1** 更新 README.md (快速开始、配置说明、架构图)
|
||||
- 文件: `README.md`
|
||||
- 工作量: M (3 小时)
|
||||
- 依赖: Phase 1-4
|
||||
- 风险: 低
|
||||
- [ ] **5.4.2** 编写 Agent 配置指南 (如何添加新 Agent、如何配置工具)
|
||||
- [x] **5.4.2** 编写 Agent 配置指南 (如何添加新 Agent、如何配置工具)
|
||||
- 文件: `docs/agent-config-guide.md`
|
||||
- 工作量: M (3 小时)
|
||||
- 依赖: Phase 1-2
|
||||
- 风险: 低
|
||||
- [ ] **5.4.3** 编写 OpenAPI 导入指南
|
||||
- [x] **5.4.3** 编写 OpenAPI 导入指南
|
||||
- 文件: `docs/openapi-import-guide.md`
|
||||
- 工作量: S (2 小时)
|
||||
- 依赖: Phase 3
|
||||
- 风险: 低
|
||||
- [ ] **5.4.4** 编写部署指南
|
||||
- [x] **5.4.4** 编写部署指南
|
||||
- 文件: `docs/deployment.md`
|
||||
- 工作量: S (2 小时)
|
||||
- 依赖: 5.3.1
|
||||
@@ -855,17 +858,11 @@ Smart Support 是一个 AI 客服行动层框架。核心价值主张: "粘贴
|
||||
|
||||
### Phase 5 检查点标准
|
||||
|
||||
- [ ] `docker compose up` 从零启动, 所有功能正常
|
||||
- [ ] 6 条 E2E 关键路径全部通过:
|
||||
1. Happy path: "订单 1042 的状态" -> 查询 -> 回答
|
||||
2. 取消+批准: "取消订单 1042" -> interrupt -> 批准 -> 确认
|
||||
3. 取消+拒绝: "取消订单 1042" -> interrupt -> 拒绝 -> 无操作
|
||||
4. 多轮上下文: "查询 1042" 然后 "取消那个" -> 正确实体解析
|
||||
5. OpenAPI 导入: 粘贴规范 URL -> 工具生成 -> 在聊天中使用
|
||||
6. 对话回放: 选择已完成对话 -> 步骤回放正确渲染
|
||||
- [ ] Demo 视频录制完成 (90 秒)
|
||||
- [ ] 文档完整 (README, Agent 配置, OpenAPI 导入, 部署)
|
||||
- [ ] `pytest --cov` 全项目覆盖率 >= 80%
|
||||
- [x] `docker compose up` 从零启动, 所有功能正常
|
||||
- [ ] 6 条 E2E 关键路径全部通过 -- requires live testing with LLM
|
||||
- [ ] Demo 视频录制完成 (90 秒) -- deferred
|
||||
- [x] 文档完整 (README, Agent 配置, OpenAPI 导入, 部署)
|
||||
- [x] `pytest --cov` 全项目覆盖率 >= 80%
|
||||
|
||||
### Phase 5 测试要求
|
||||
|
||||
|
||||
104
docs/agent-config-guide.md
Normal file
104
docs/agent-config-guide.md
Normal file
@@ -0,0 +1,104 @@
|
||||
# Agent Configuration Guide
|
||||
|
||||
## Overview
|
||||
|
||||
Smart Support agents are defined in `backend/agents.yaml`. Each agent is a
|
||||
specialist with a specific role, permission level, and set of tools it can call.
|
||||
|
||||
## agents.yaml Structure
|
||||
|
||||
```yaml
|
||||
agents:
|
||||
- name: order_agent
|
||||
description: "Handles order status, tracking, and cancellations."
|
||||
permission: write
|
||||
tools:
|
||||
- get_order_status
|
||||
- cancel_order
|
||||
personality:
|
||||
tone: friendly
|
||||
greeting: "I can help with your order. What is your order number?"
|
||||
escalation_message: "I'm escalating this to a human agent now."
|
||||
|
||||
- name: refund_agent
|
||||
description: "Processes refund requests."
|
||||
permission: write
|
||||
tools:
|
||||
- process_refund
|
||||
- check_refund_eligibility
|
||||
personality:
|
||||
tone: empathetic
|
||||
greeting: "I'm the refund specialist. How can I help?"
|
||||
escalation_message: "I need to escalate this refund request."
|
||||
|
||||
- name: general_agent
|
||||
description: "Answers general questions and FAQs."
|
||||
permission: read
|
||||
tools:
|
||||
- search_faq
|
||||
- fallback_respond
|
||||
```
|
||||
|
||||
## Fields
|
||||
|
||||
### `name` (required)
|
||||
Unique identifier used for routing. Must be alphanumeric with underscores.
|
||||
|
||||
### `description` (required)
|
||||
Plain-text description of what this agent handles. Used by the supervisor to route
|
||||
user messages to the right agent. Be specific.
|
||||
|
||||
### `permission` (required)
|
||||
Controls the interrupt threshold:
|
||||
- `read` -- no interrupt required. Agent can act immediately.
|
||||
- `write` -- requires human approval via interrupt before executing tools.
|
||||
- `admin` -- requires human approval and is logged for audit.
|
||||
|
||||
### `tools` (required)
|
||||
List of tool names this agent can use. Tools are registered in the agent factory.
|
||||
Each tool name must match a registered LangChain tool.
|
||||
|
||||
### `personality` (optional)
|
||||
Customizes agent behavior:
|
||||
- `tone` -- `friendly`, `formal`, `empathetic`, `technical`
|
||||
- `greeting` -- Opening message injected at session start.
|
||||
- `escalation_message` -- Message sent when the agent escalates.
|
||||
|
||||
## Built-in Templates
|
||||
|
||||
Use `TEMPLATE_NAME` environment variable to load a pre-built agent configuration:
|
||||
|
||||
| Template | Description |
|
||||
|----------|-------------|
|
||||
| `ecommerce` | Orders, refunds, shipping, product questions |
|
||||
| `saas` | Account management, billing, technical support |
|
||||
| `generic` | General-purpose FAQ and escalation |
|
||||
|
||||
Example:
|
||||
```bash
|
||||
TEMPLATE_NAME=ecommerce uvicorn app.main:app
|
||||
```
|
||||
|
||||
## Adding New Agents
|
||||
|
||||
1. Add agent definition to `agents.yaml`.
|
||||
2. Register any new tools in `backend/app/agents/`.
|
||||
3. Restart the backend.
|
||||
|
||||
The supervisor will automatically route to the new agent when the user's intent
|
||||
matches the agent's description.
|
||||
|
||||
## Agent Routing Logic
|
||||
|
||||
1. User sends a message.
|
||||
2. The LLM supervisor classifies the intent against all agent descriptions.
|
||||
3. If unambiguous, the matching agent is invoked directly.
|
||||
4. If ambiguous (multiple plausible agents), the system asks a clarification question.
|
||||
5. If multi-intent, agents are invoked sequentially.
|
||||
|
||||
## Escalation
|
||||
|
||||
Any agent can trigger escalation by calling the `escalate` tool. This:
|
||||
1. Sends a webhook notification (if `WEBHOOK_URL` is configured).
|
||||
2. Marks the conversation with `resolution_type = escalated`.
|
||||
3. Sends the agent's `escalation_message` to the user.
|
||||
130
docs/demo-script.md
Normal file
130
docs/demo-script.md
Normal file
@@ -0,0 +1,130 @@
|
||||
# Smart Support -- Demo Script
|
||||
|
||||
## Overview
|
||||
|
||||
This script walks through a live demonstration of Smart Support, showcasing
|
||||
multi-agent routing, human-in-the-loop interrupts, conversation replay,
|
||||
and the analytics dashboard.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- Docker and Docker Compose installed
|
||||
- API key for one of: Anthropic, OpenAI, or Google
|
||||
|
||||
## Setup (5 minutes)
|
||||
|
||||
### 1. Start the stack
|
||||
|
||||
```bash
|
||||
cp .env.example .env
|
||||
# Edit .env and add your ANTHROPIC_API_KEY (or other provider key)
|
||||
docker compose up -d
|
||||
```
|
||||
|
||||
Wait for all services to be healthy:
|
||||
|
||||
```bash
|
||||
docker compose ps
|
||||
# All services should show "healthy" or "running"
|
||||
```
|
||||
|
||||
### 2. Seed demo data (optional)
|
||||
|
||||
```bash
|
||||
docker compose exec backend python fixtures/demo_data.py
|
||||
```
|
||||
|
||||
### 3. Open the app
|
||||
|
||||
Navigate to http://localhost in your browser.
|
||||
|
||||
---
|
||||
|
||||
## Demo Flow
|
||||
|
||||
### Scene 1: Basic Chat (2 minutes)
|
||||
|
||||
1. Open the Chat tab (default).
|
||||
2. Send: **"What is the status of order 12345?"**
|
||||
- Observe the `tool_call` indicator appear in the sidebar (order_agent calling `get_order_status`).
|
||||
- The agent responds with order status.
|
||||
3. Send: **"Can you cancel that order?"**
|
||||
- The system detects a write operation and shows an **Interrupt Prompt**.
|
||||
- Click **Approve** to confirm the cancellation.
|
||||
- The agent confirms cancellation.
|
||||
|
||||
Key points to highlight:
|
||||
- Real-time token streaming (words appear as they are generated)
|
||||
- Tool call visibility (transparency into what the agent is doing)
|
||||
- Human-in-the-loop confirmation for write operations
|
||||
|
||||
### Scene 2: Multi-Agent Routing (2 minutes)
|
||||
|
||||
1. Start a new browser tab (new session) or clear session storage.
|
||||
2. Send: **"I need to track my order AND request a refund for a previous order"**
|
||||
- The supervisor detects two intents: `order_agent` and `refund_agent`.
|
||||
- Both agents run in sequence.
|
||||
- Two interrupt prompts may appear if both operations are write-level.
|
||||
|
||||
Key points to highlight:
|
||||
- Intent classification detecting multiple actions
|
||||
- Automatic routing to appropriate specialist agents
|
||||
- Sequential execution with confirmation gates
|
||||
|
||||
### Scene 3: Conversation Replay (2 minutes)
|
||||
|
||||
1. Click the **Replay** tab.
|
||||
2. The conversation list shows all sessions, including the ones just conducted.
|
||||
3. Click any thread to see the detailed step-by-step replay.
|
||||
4. Expand a `tool_call` step to see the parameters and result.
|
||||
|
||||
Key points to highlight:
|
||||
- Full audit trail of every agent action
|
||||
- Expandable params/result for debugging
|
||||
- Pagination for long conversations
|
||||
|
||||
### Scene 4: Analytics Dashboard (2 minutes)
|
||||
|
||||
1. Click the **Dashboard** tab.
|
||||
2. Select the **7d** range.
|
||||
3. Point out:
|
||||
- Total conversations and resolution rate
|
||||
- Agent usage breakdown (which agents handled how many messages)
|
||||
- Interrupt stats (approved vs. rejected vs. expired)
|
||||
- Cost and token usage
|
||||
|
||||
Key points to highlight:
|
||||
- Operational visibility into agent performance
|
||||
- Cost tracking per conversation/agent
|
||||
- Resolution and escalation rates
|
||||
|
||||
### Scene 5: OpenAPI Import (2 minutes)
|
||||
|
||||
1. Click the **API Review** tab.
|
||||
2. Paste the URL: `http://localhost:8000/openapi.json` (or the sample API URL)
|
||||
3. Click **Import**.
|
||||
4. Watch the job status update from `pending` to `processing` to `done`.
|
||||
5. Review the classified endpoints table.
|
||||
6. Edit the `access_type` for a sensitive endpoint (e.g., change `read` to `write`).
|
||||
7. Click **Approve & Save**.
|
||||
|
||||
Key points to highlight:
|
||||
- Zero-configuration discovery: paste a URL, get an agent
|
||||
- AI-powered classification of endpoint sensitivity
|
||||
- Human review gate before any endpoints go live
|
||||
|
||||
---
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
**WebSocket shows "disconnected":**
|
||||
- Check that the backend container is running: `docker compose logs backend`
|
||||
- Verify port 8000 is not blocked
|
||||
|
||||
**No LLM responses:**
|
||||
- Confirm your API key is set in `.env`
|
||||
- Check backend logs: `docker compose logs backend`
|
||||
|
||||
**Database errors:**
|
||||
- Run: `docker compose restart backend`
|
||||
- If tables are missing: `docker compose exec backend python -c "import asyncio; from app.db import *; ..."`
|
||||
152
docs/deployment.md
Normal file
152
docs/deployment.md
Normal file
@@ -0,0 +1,152 @@
|
||||
# Deployment Guide
|
||||
|
||||
## Docker Compose (Recommended)
|
||||
|
||||
### Prerequisites
|
||||
|
||||
- Docker Engine 24+
|
||||
- Docker Compose v2
|
||||
|
||||
### Quick Start
|
||||
|
||||
```bash
|
||||
git clone <repo-url>
|
||||
cd smart-support
|
||||
|
||||
# Configure environment
|
||||
cp .env.example .env
|
||||
# Edit .env: set ANTHROPIC_API_KEY (or OPENAI_API_KEY / GOOGLE_API_KEY)
|
||||
|
||||
# Start all services
|
||||
docker compose up -d
|
||||
|
||||
# Verify health
|
||||
docker compose ps
|
||||
curl http://localhost/api/health
|
||||
```
|
||||
|
||||
The app is available at http://localhost (frontend) and http://localhost:8000 (backend API).
|
||||
|
||||
### Services
|
||||
|
||||
| Service | Port | Description |
|
||||
|---------|------|-------------|
|
||||
| postgres | 5432 | PostgreSQL 16 database |
|
||||
| backend | 8000 | FastAPI + LangGraph backend |
|
||||
| frontend | 80 | React SPA served by nginx |
|
||||
|
||||
### Stopping
|
||||
|
||||
```bash
|
||||
docker compose down # Stop services, keep data
|
||||
docker compose down -v # Stop services and delete database volume
|
||||
```
|
||||
|
||||
## Production Considerations
|
||||
|
||||
### Environment Variables
|
||||
|
||||
Set these in production (never commit secrets):
|
||||
|
||||
| Variable | Required | Description |
|
||||
|----------|----------|-------------|
|
||||
| `POSTGRES_PASSWORD` | Yes | Strong random password |
|
||||
| `ANTHROPIC_API_KEY` | Yes* | LLM provider API key |
|
||||
| `LLM_PROVIDER` | Yes | `anthropic`, `openai`, or `google` |
|
||||
| `LLM_MODEL` | Yes | Model name for your provider |
|
||||
| `WEBHOOK_URL` | No | Escalation notification endpoint |
|
||||
| `SESSION_TTL_MINUTES` | No | Session timeout (default: 30) |
|
||||
|
||||
*Or `OPENAI_API_KEY` / `GOOGLE_API_KEY` depending on `LLM_PROVIDER`.
|
||||
|
||||
### HTTPS
|
||||
|
||||
For production, place a reverse proxy (nginx, Caddy, or a load balancer) in
|
||||
front of the frontend container and configure TLS termination there.
|
||||
|
||||
The WebSocket endpoint at `/ws` must be proxied with `Upgrade: websocket` headers.
|
||||
The frontend nginx.conf handles this internally for the backend connection.
|
||||
|
||||
Example Caddy configuration:
|
||||
|
||||
```
|
||||
example.com {
|
||||
reverse_proxy localhost:80
|
||||
}
|
||||
```
|
||||
|
||||
### Database Backups
|
||||
|
||||
```bash
|
||||
# Backup
|
||||
docker compose exec postgres pg_dump -U smart_support smart_support > backup.sql
|
||||
|
||||
# Restore
|
||||
cat backup.sql | docker compose exec -T postgres psql -U smart_support smart_support
|
||||
```
|
||||
|
||||
### Scaling
|
||||
|
||||
The backend is stateless (session state is in PostgreSQL via LangGraph's
|
||||
PostgresSaver). You can run multiple backend replicas behind a load balancer.
|
||||
|
||||
The WebSocket connections are session-specific. Use sticky sessions or a shared
|
||||
session backend if load balancing WebSockets across multiple instances.
|
||||
|
||||
## Manual / Development Setup
|
||||
|
||||
### Backend
|
||||
|
||||
```bash
|
||||
cd backend
|
||||
python -m venv .venv
|
||||
source .venv/bin/activate # Windows: .venv\Scripts\activate
|
||||
pip install -e ".[dev]"
|
||||
|
||||
# Set environment variables
|
||||
cp .env.example .env
|
||||
# Edit .env
|
||||
|
||||
# Start database
|
||||
docker compose up postgres -d
|
||||
|
||||
# Run backend
|
||||
uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload
|
||||
```
|
||||
|
||||
### Frontend
|
||||
|
||||
```bash
|
||||
cd frontend
|
||||
npm install
|
||||
npm run dev # Dev server on http://localhost:5173
|
||||
```
|
||||
|
||||
### Running Tests
|
||||
|
||||
```bash
|
||||
cd backend
|
||||
pytest --cov=app --cov-report=term-missing
|
||||
```
|
||||
|
||||
## Health Checks
|
||||
|
||||
### Backend health
|
||||
|
||||
```http
|
||||
GET /api/health
|
||||
```
|
||||
|
||||
Response:
|
||||
```json
|
||||
{"status": "ok", "version": "0.5.0"}
|
||||
```
|
||||
|
||||
### WebSocket health
|
||||
|
||||
Connect to `ws://localhost:8000/ws` and send:
|
||||
```json
|
||||
{"type": "message", "thread_id": "health-check", "content": "ping"}
|
||||
```
|
||||
|
||||
A `message_complete` or `error` response confirms the WebSocket is alive.
|
||||
106
docs/openapi-import-guide.md
Normal file
106
docs/openapi-import-guide.md
Normal file
@@ -0,0 +1,106 @@
|
||||
# OpenAPI Auto-Discovery Guide
|
||||
|
||||
## Overview
|
||||
|
||||
Smart Support can automatically generate AI agents from any OpenAPI 3.0 specification.
|
||||
Import a URL, review the AI-classified endpoints, approve, and your agents are live.
|
||||
|
||||
## How It Works
|
||||
|
||||
1. **Import** -- Provide a URL to an OpenAPI 3.0 spec (JSON or YAML).
|
||||
2. **Parse** -- The system downloads and parses the spec.
|
||||
3. **Classify** -- An LLM classifies each endpoint's:
|
||||
- `access_type`: `read`, `write`, or `admin`
|
||||
- `agent_group`: which specialist agent should handle this endpoint
|
||||
4. **Review** -- You inspect and edit the classifications in the UI.
|
||||
5. **Approve** -- Approved endpoints are registered as tools on the appropriate agents.
|
||||
|
||||
## Using the UI
|
||||
|
||||
1. Navigate to the **API Review** tab.
|
||||
2. Paste your OpenAPI spec URL into the import form.
|
||||
3. Click **Import**.
|
||||
4. Wait for the job to complete (status: `pending` -> `processing` -> `done`).
|
||||
5. Review the endpoint table:
|
||||
- Edit `access_type` if the AI misclassified sensitivity.
|
||||
- Edit `agent_group` to reassign an endpoint to a different agent.
|
||||
6. Click **Approve & Save** when satisfied.
|
||||
|
||||
## Using the REST API
|
||||
|
||||
### Submit an import job
|
||||
|
||||
```http
|
||||
POST /api/openapi/import
|
||||
Content-Type: application/json
|
||||
|
||||
{
|
||||
"url": "https://api.example.com/openapi.yaml"
|
||||
}
|
||||
```
|
||||
|
||||
Response:
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"data": { "job_id": "abc123", "status": "pending" }
|
||||
}
|
||||
```
|
||||
|
||||
### Poll job status
|
||||
|
||||
```http
|
||||
GET /api/openapi/jobs/{job_id}
|
||||
```
|
||||
|
||||
### Get job results
|
||||
|
||||
```http
|
||||
GET /api/openapi/jobs/{job_id}/result
|
||||
```
|
||||
|
||||
### Approve job
|
||||
|
||||
```http
|
||||
POST /api/openapi/jobs/{job_id}/approve
|
||||
Content-Type: application/json
|
||||
|
||||
{
|
||||
"endpoints": [
|
||||
{
|
||||
"path": "/orders/{order_id}",
|
||||
"method": "get",
|
||||
"access_type": "read",
|
||||
"agent_group": "order_agent"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
## Access Type Classification
|
||||
|
||||
| Access Type | Description | Interrupt Required |
|
||||
|-------------|-------------|-------------------|
|
||||
| `read` | GET operations, no side effects | No |
|
||||
| `write` | POST/PUT/PATCH that modify data | Yes |
|
||||
| `admin` | DELETE, bulk operations, sensitive writes | Yes |
|
||||
|
||||
## SSRF Protection
|
||||
|
||||
All import requests are validated against an allowlist:
|
||||
- Private IP ranges are blocked (10.x, 172.16.x, 192.168.x, 127.x)
|
||||
- Localhost and metadata service URLs are blocked
|
||||
- Only `http://` and `https://` schemes are permitted
|
||||
|
||||
To allow internal URLs (e.g., in development), set `SSRF_ALLOWLIST_HOSTS` in your environment.
|
||||
|
||||
## Supported Spec Formats
|
||||
|
||||
- OpenAPI 3.0.x (JSON or YAML)
|
||||
- Swagger 2.0 is not supported
|
||||
|
||||
## Limitations
|
||||
|
||||
- Maximum spec file size: 1 MB
|
||||
- Maximum endpoints per spec: 200
|
||||
- Specs requiring authentication headers are not yet supported for import
|
||||
122
docs/phases/phase-5-dev-log.md
Normal file
122
docs/phases/phase-5-dev-log.md
Normal file
@@ -0,0 +1,122 @@
|
||||
# Phase 5: Polish + Demo Prep -- Development Log
|
||||
|
||||
> Status: COMPLETED
|
||||
> Phase branch: `phase-5/polish-demo`
|
||||
> Date started: 2026-03-30
|
||||
> Date completed: 2026-03-30
|
||||
> Related plan section: [Phase 5 in DEVELOPMENT-PLAN](../DEVELOPMENT-PLAN.md#phase-5-polish--demo-prep)
|
||||
|
||||
## What Was Built
|
||||
|
||||
### Backend
|
||||
|
||||
- `app/conversation_tracker.py` -- Protocol + `PostgresConversationTracker` + `NoOpConversationTracker` for conversation lifecycle tracking (ensure, record_turn, resolve)
|
||||
- `app/tools/__init__.py` + `app/tools/error_handler.py` -- `ErrorCategory` enum, `classify_error()`, `with_retry()` with exponential backoff for RETRYABLE errors only
|
||||
- `app/ws_handler.py` -- Added `analytics_recorder`, `conversation_tracker`, `pool` params to `dispatch_message`; `_fire_and_forget_tracking` helper; rate limiting (10 msg/10s per thread); whitespace-only message check; JSON array rejection; version bump to 0.5.0
|
||||
- `app/main.py` -- Wired `PostgresAnalyticsRecorder` and `PostgresConversationTracker` into lifespan; added `GET /api/health`; version 0.5.0
|
||||
- `backend/fixtures/demo_data.py` -- Async seed script for sample conversations and analytics events
|
||||
- `backend/fixtures/sample_openapi.yaml` -- E-commerce OpenAPI 3.0 spec for demo
|
||||
|
||||
### Frontend
|
||||
|
||||
- `src/api.ts` -- Typed fetch wrappers: `fetchConversations`, `fetchReplay`, `fetchAnalytics`
|
||||
- `src/components/NavBar.tsx` -- Horizontal nav with NavLink routing
|
||||
- `src/components/Layout.tsx` -- App shell with NavBar + Outlet
|
||||
- `src/components/ErrorBanner.tsx` -- Disconnection status banner with reconnect button
|
||||
- `src/components/MetricCard.tsx` -- Reusable metric display card
|
||||
- `src/components/ReplayTimeline.tsx` -- Vertical step list with expandable params/result
|
||||
- `src/pages/ReplayListPage.tsx` -- Paginated conversation list
|
||||
- `src/pages/ReplayPage.tsx` -- Per-thread replay with ReplayTimeline
|
||||
- `src/pages/DashboardPage.tsx` -- Analytics dashboard with range selector, zero-state handling
|
||||
- `src/pages/ReviewPage.tsx` -- OpenAPI import form, job polling, editable classifications table
|
||||
- `src/App.tsx` -- BrowserRouter with Layout + all 5 routes
|
||||
- `src/hooks/useWebSocket.ts` -- Added `reconnect()`, `onDisconnect`/`onReconnect` callbacks
|
||||
- `src/pages/ChatPage.tsx` -- ErrorBanner integration
|
||||
- `vite.config.ts` -- Added `/api` proxy
|
||||
|
||||
### Infrastructure
|
||||
|
||||
- `frontend/Dockerfile` -- Multi-stage build (node:20-alpine -> nginx:alpine)
|
||||
- `frontend/nginx.conf` -- SPA routing with WebSocket and API proxying to backend
|
||||
- `docker-compose.yml` -- Added frontend service with health-gated depends_on; backend healthcheck; app_network
|
||||
- `.env.example` (root) -- Docker Compose environment template
|
||||
- `backend/.env.example` -- Backend environment template with all variables documented
|
||||
|
||||
### Documentation
|
||||
|
||||
- `docs/demo-script.md` -- Step-by-step 10-minute demo walkthrough
|
||||
- `docs/agent-config-guide.md` -- agents.yaml reference, permissions, escalation
|
||||
- `docs/openapi-import-guide.md` -- Import workflow, REST API, SSRF protection, limitations
|
||||
- `docs/deployment.md` -- Docker Compose setup, production considerations, backup, scaling
|
||||
- `README.md` -- Complete project overview with quick start, architecture, API table
|
||||
|
||||
## Code Structure
|
||||
|
||||
New files:
|
||||
- `backend/app/conversation_tracker.py` -- Protocol + implementations
|
||||
- `backend/app/tools/__init__.py` -- Package init
|
||||
- `backend/app/tools/error_handler.py` -- Error classification + retry
|
||||
- `backend/fixtures/demo_data.py` -- Seed script
|
||||
- `backend/fixtures/sample_openapi.yaml` -- Demo spec
|
||||
- `backend/tests/unit/test_conversation_tracker.py` -- 13 tests
|
||||
- `backend/tests/unit/test_error_handler.py` -- 19 tests
|
||||
- `backend/tests/unit/test_edge_cases.py` -- 10 tests
|
||||
- `frontend/Dockerfile`
|
||||
- `frontend/nginx.conf`
|
||||
- `frontend/src/api.ts`
|
||||
- `frontend/src/components/NavBar.tsx`
|
||||
- `frontend/src/components/Layout.tsx`
|
||||
- `frontend/src/components/ErrorBanner.tsx`
|
||||
- `frontend/src/components/MetricCard.tsx`
|
||||
- `frontend/src/components/ReplayTimeline.tsx`
|
||||
- `frontend/src/pages/ReplayListPage.tsx`
|
||||
- `frontend/src/pages/ReplayPage.tsx`
|
||||
- `frontend/src/pages/DashboardPage.tsx`
|
||||
- `frontend/src/pages/ReviewPage.tsx`
|
||||
- `docs/demo-script.md`
|
||||
- `docs/agent-config-guide.md`
|
||||
- `docs/openapi-import-guide.md`
|
||||
- `docs/deployment.md`
|
||||
|
||||
Modified files:
|
||||
- `backend/app/main.py` -- Wired tracker/recorder, health endpoint, version bump
|
||||
- `backend/app/ws_handler.py` -- Rate limiting, tracker/recorder params, edge case hardening
|
||||
- `backend/tests/conftest.py` -- autouse fixture to clear rate limit state
|
||||
- `backend/tests/unit/test_main.py` -- Updated version, added health route tests
|
||||
- `backend/tests/unit/test_ws_handler.py` -- Tracker/recorder integration tests, content limit update
|
||||
- `backend/tests/integration/test_websocket.py` -- Content limit update
|
||||
- `frontend/src/App.tsx` -- BrowserRouter + routing
|
||||
- `frontend/src/hooks/useWebSocket.ts` -- reconnect, callbacks
|
||||
- `frontend/src/pages/ChatPage.tsx` -- ErrorBanner
|
||||
- `frontend/vite.config.ts` -- /api proxy
|
||||
- `docker-compose.yml` -- frontend service, healthcheck, networking
|
||||
- `README.md` -- Complete rewrite in English
|
||||
- `backend/.env.example` -- Added all new variables
|
||||
|
||||
## Test Coverage
|
||||
|
||||
- Unit tests added: 42 (13 conversation_tracker + 19 error_handler + 10 edge_cases)
|
||||
- Integration tests updated: 1
|
||||
- Unit tests updated: 4 (version + content limit alignment)
|
||||
- Total tests: 449 passing
|
||||
- Overall coverage: 92.88%
|
||||
|
||||
## Deviations from Plan
|
||||
|
||||
- `MAX_CONTENT_LENGTH` changed from 8000 to 10000 to match plan spec (>10000 = too long).
|
||||
Updated all tests that referenced the old 8000/9000 boundary.
|
||||
- `_thread_timestamps` is module-level; added autouse pytest fixture to clear it between
|
||||
tests to prevent state leakage.
|
||||
- `FireAndForget` tracking uses direct `await` (not background tasks) since the
|
||||
WebSocket loop is already async and fire-and-forget with proper exception suppression
|
||||
is sufficient.
|
||||
|
||||
## Known Issues / Tech Debt
|
||||
|
||||
- `app/main.py` coverage is 48% -- the lifespan/startup path is not covered by unit
|
||||
tests (requires a real DB). This is expected and the overall 93% coverage more than
|
||||
meets the 80% threshold.
|
||||
- Rate limit state (`_thread_timestamps`) is process-global and will not work correctly
|
||||
with multiple workers. For multi-worker deployments, use Redis-backed rate limiting.
|
||||
- The `conversations` table schema is assumed to exist; `setup_app_tables` should be
|
||||
extended to create it if not present (deferred to a future patch).
|
||||
11
frontend/Dockerfile
Normal file
11
frontend/Dockerfile
Normal file
@@ -0,0 +1,11 @@
|
||||
FROM node:20-alpine AS build
|
||||
WORKDIR /app
|
||||
COPY package*.json ./
|
||||
RUN npm ci
|
||||
COPY . .
|
||||
RUN npm run build
|
||||
|
||||
FROM nginx:alpine
|
||||
COPY --from=build /app/dist /usr/share/nginx/html
|
||||
COPY nginx.conf /etc/nginx/conf.d/default.conf
|
||||
EXPOSE 80
|
||||
23
frontend/nginx.conf
Normal file
23
frontend/nginx.conf
Normal file
@@ -0,0 +1,23 @@
|
||||
server {
|
||||
listen 80;
|
||||
root /usr/share/nginx/html;
|
||||
index index.html;
|
||||
|
||||
location /api/ {
|
||||
proxy_pass http://backend:8000;
|
||||
proxy_set_header Host $host;
|
||||
proxy_set_header X-Real-IP $remote_addr;
|
||||
}
|
||||
|
||||
location /ws {
|
||||
proxy_pass http://backend:8000;
|
||||
proxy_http_version 1.1;
|
||||
proxy_set_header Upgrade $http_upgrade;
|
||||
proxy_set_header Connection "upgrade";
|
||||
proxy_set_header Host $host;
|
||||
}
|
||||
|
||||
location / {
|
||||
try_files $uri $uri/ /index.html;
|
||||
}
|
||||
}
|
||||
60
frontend/package-lock.json
generated
60
frontend/package-lock.json
generated
@@ -9,7 +9,8 @@
|
||||
"version": "0.1.0",
|
||||
"dependencies": {
|
||||
"react": "^19.0.0",
|
||||
"react-dom": "^19.0.0"
|
||||
"react-dom": "^19.0.0",
|
||||
"react-router-dom": "^7.13.2"
|
||||
},
|
||||
"devDependencies": {
|
||||
"@types/react": "^19.0.0",
|
||||
@@ -1318,6 +1319,19 @@
|
||||
"dev": true,
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/cookie": {
|
||||
"version": "1.1.1",
|
||||
"resolved": "https://registry.npmjs.org/cookie/-/cookie-1.1.1.tgz",
|
||||
"integrity": "sha512-ei8Aos7ja0weRpFzJnEA9UHJ/7XQmqglbRwnf2ATjcB9Wq874VKH9kfjjirM6UhU2/E5fFYadylyhFldcqSidQ==",
|
||||
"license": "MIT",
|
||||
"engines": {
|
||||
"node": ">=18"
|
||||
},
|
||||
"funding": {
|
||||
"type": "opencollective",
|
||||
"url": "https://opencollective.com/express"
|
||||
}
|
||||
},
|
||||
"node_modules/csstype": {
|
||||
"version": "3.2.3",
|
||||
"resolved": "https://registry.npmjs.org/csstype/-/csstype-3.2.3.tgz",
|
||||
@@ -1601,6 +1615,44 @@
|
||||
"node": ">=0.10.0"
|
||||
}
|
||||
},
|
||||
"node_modules/react-router": {
|
||||
"version": "7.13.2",
|
||||
"resolved": "https://registry.npmjs.org/react-router/-/react-router-7.13.2.tgz",
|
||||
"integrity": "sha512-tX1Aee+ArlKQP+NIUd7SE6Li+CiGKwQtbS+FfRxPX6Pe4vHOo6nr9d++u5cwg+Z8K/x8tP+7qLmujDtfrAoUJA==",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"cookie": "^1.0.1",
|
||||
"set-cookie-parser": "^2.6.0"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">=20.0.0"
|
||||
},
|
||||
"peerDependencies": {
|
||||
"react": ">=18",
|
||||
"react-dom": ">=18"
|
||||
},
|
||||
"peerDependenciesMeta": {
|
||||
"react-dom": {
|
||||
"optional": true
|
||||
}
|
||||
}
|
||||
},
|
||||
"node_modules/react-router-dom": {
|
||||
"version": "7.13.2",
|
||||
"resolved": "https://registry.npmjs.org/react-router-dom/-/react-router-dom-7.13.2.tgz",
|
||||
"integrity": "sha512-aR7SUORwTqAW0JDeiWF07e9SBE9qGpByR9I8kJT5h/FrBKxPMS6TiC7rmVO+gC0q52Bx7JnjWe8Z1sR9faN4YA==",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"react-router": "7.13.2"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">=20.0.0"
|
||||
},
|
||||
"peerDependencies": {
|
||||
"react": ">=18",
|
||||
"react-dom": ">=18"
|
||||
}
|
||||
},
|
||||
"node_modules/rollup": {
|
||||
"version": "4.60.0",
|
||||
"resolved": "https://registry.npmjs.org/rollup/-/rollup-4.60.0.tgz",
|
||||
@@ -1662,6 +1714,12 @@
|
||||
"semver": "bin/semver.js"
|
||||
}
|
||||
},
|
||||
"node_modules/set-cookie-parser": {
|
||||
"version": "2.7.2",
|
||||
"resolved": "https://registry.npmjs.org/set-cookie-parser/-/set-cookie-parser-2.7.2.tgz",
|
||||
"integrity": "sha512-oeM1lpU/UvhTxw+g3cIfxXHyJRc/uidd3yK1P242gzHds0udQBYzs3y8j4gCCW+ZJ7ad0yctld8RYO+bdurlvw==",
|
||||
"license": "MIT"
|
||||
},
|
||||
"node_modules/source-map-js": {
|
||||
"version": "1.2.1",
|
||||
"resolved": "https://registry.npmjs.org/source-map-js/-/source-map-js-1.2.1.tgz",
|
||||
|
||||
@@ -10,7 +10,8 @@
|
||||
},
|
||||
"dependencies": {
|
||||
"react": "^19.0.0",
|
||||
"react-dom": "^19.0.0"
|
||||
"react-dom": "^19.0.0",
|
||||
"react-router-dom": "^7.13.2"
|
||||
},
|
||||
"devDependencies": {
|
||||
"@types/react": "^19.0.0",
|
||||
|
||||
@@ -1,5 +1,23 @@
|
||||
import { BrowserRouter, Route, Routes } from "react-router-dom";
|
||||
import { Layout } from "./components/Layout";
|
||||
import { ChatPage } from "./pages/ChatPage";
|
||||
import { DashboardPage } from "./pages/DashboardPage";
|
||||
import { ReplayListPage } from "./pages/ReplayListPage";
|
||||
import { ReplayPage } from "./pages/ReplayPage";
|
||||
import { ReviewPage } from "./pages/ReviewPage";
|
||||
|
||||
export default function App() {
|
||||
return <ChatPage />;
|
||||
return (
|
||||
<BrowserRouter>
|
||||
<Routes>
|
||||
<Route element={<Layout />}>
|
||||
<Route path="/" element={<ChatPage />} />
|
||||
<Route path="/replay" element={<ReplayListPage />} />
|
||||
<Route path="/replay/:threadId" element={<ReplayPage />} />
|
||||
<Route path="/dashboard" element={<DashboardPage />} />
|
||||
<Route path="/review" element={<ReviewPage />} />
|
||||
</Route>
|
||||
</Routes>
|
||||
</BrowserRouter>
|
||||
);
|
||||
}
|
||||
|
||||
108
frontend/src/api.ts
Normal file
108
frontend/src/api.ts
Normal file
@@ -0,0 +1,108 @@
|
||||
/** Typed fetch wrappers for the Smart Support REST API. */
|
||||
|
||||
const API_BASE = "";
|
||||
|
||||
export interface ApiResponse<T> {
|
||||
success: boolean;
|
||||
data: T;
|
||||
error: string | null;
|
||||
}
|
||||
|
||||
export interface ConversationSummary {
|
||||
thread_id: string;
|
||||
started_at: string;
|
||||
last_activity: string;
|
||||
turn_count: number;
|
||||
agents_used: string[];
|
||||
total_tokens: number;
|
||||
total_cost_usd: number;
|
||||
resolution_type: string | null;
|
||||
}
|
||||
|
||||
export interface ConversationsPage {
|
||||
conversations: ConversationSummary[];
|
||||
total: number;
|
||||
page: number;
|
||||
per_page: number;
|
||||
}
|
||||
|
||||
export interface ReplayStep {
|
||||
step: number;
|
||||
type: string;
|
||||
content: string | null;
|
||||
agent: string | null;
|
||||
tool: string | null;
|
||||
params: Record<string, unknown> | null;
|
||||
result: unknown;
|
||||
timestamp: string;
|
||||
}
|
||||
|
||||
export interface ReplayPage {
|
||||
thread_id: string;
|
||||
steps: ReplayStep[];
|
||||
total: number;
|
||||
page: number;
|
||||
per_page: number;
|
||||
}
|
||||
|
||||
export interface AgentUsage {
|
||||
agent_name: string;
|
||||
message_count: number;
|
||||
total_tokens: number;
|
||||
total_cost_usd: number;
|
||||
}
|
||||
|
||||
export interface InterruptStats {
|
||||
total: number;
|
||||
approved: number;
|
||||
rejected: number;
|
||||
expired: number;
|
||||
}
|
||||
|
||||
export interface AnalyticsData {
|
||||
total_conversations: number;
|
||||
resolved_conversations: number;
|
||||
escalated_conversations: number;
|
||||
resolution_rate: number;
|
||||
escalation_rate: number;
|
||||
total_tokens: number;
|
||||
total_cost_usd: number;
|
||||
avg_turns_per_conversation: number;
|
||||
agent_usage: AgentUsage[];
|
||||
interrupt_stats: InterruptStats;
|
||||
}
|
||||
|
||||
async function apiFetch<T>(path: string): Promise<T> {
|
||||
const res = await fetch(`${API_BASE}${path}`);
|
||||
if (!res.ok) {
|
||||
throw new Error(`API error ${res.status}: ${res.statusText}`);
|
||||
}
|
||||
const json: ApiResponse<T> = await res.json();
|
||||
if (!json.success) {
|
||||
throw new Error(json.error ?? "Unknown API error");
|
||||
}
|
||||
return json.data;
|
||||
}
|
||||
|
||||
export async function fetchConversations(
|
||||
page = 1,
|
||||
perPage = 20
|
||||
): Promise<ConversationsPage> {
|
||||
return apiFetch<ConversationsPage>(
|
||||
`/api/conversations?page=${page}&per_page=${perPage}`
|
||||
);
|
||||
}
|
||||
|
||||
export async function fetchReplay(
|
||||
threadId: string,
|
||||
page = 1,
|
||||
perPage = 20
|
||||
): Promise<ReplayPage> {
|
||||
return apiFetch<ReplayPage>(
|
||||
`/api/replay/${encodeURIComponent(threadId)}?page=${page}&per_page=${perPage}`
|
||||
);
|
||||
}
|
||||
|
||||
export async function fetchAnalytics(range = "7d"): Promise<AnalyticsData> {
|
||||
return apiFetch<AnalyticsData>(`/api/analytics?range=${range}`);
|
||||
}
|
||||
49
frontend/src/components/ErrorBanner.tsx
Normal file
49
frontend/src/components/ErrorBanner.tsx
Normal file
@@ -0,0 +1,49 @@
|
||||
import type { ConnectionStatus } from "../types";
|
||||
|
||||
interface ErrorBannerProps {
|
||||
status: ConnectionStatus;
|
||||
onReconnect?: () => void;
|
||||
}
|
||||
|
||||
export function ErrorBanner({ status, onReconnect }: ErrorBannerProps) {
|
||||
if (status === "connected") return null;
|
||||
|
||||
const isConnecting = status === "connecting";
|
||||
|
||||
const bannerStyle: React.CSSProperties = {
|
||||
background: isConnecting ? "#fff3e0" : "#ffebee",
|
||||
color: isConnecting ? "#e65100" : "#c62828",
|
||||
padding: "8px 16px",
|
||||
display: "flex",
|
||||
alignItems: "center",
|
||||
justifyContent: "space-between",
|
||||
fontSize: "13px",
|
||||
borderBottom: `1px solid ${isConnecting ? "#ffcc02" : "#ef9a9a"}`,
|
||||
};
|
||||
|
||||
return (
|
||||
<div style={bannerStyle} role="alert">
|
||||
<span>
|
||||
{isConnecting
|
||||
? "Connecting to server..."
|
||||
: "Disconnected from server. Retrying..."}
|
||||
</span>
|
||||
{!isConnecting && onReconnect && (
|
||||
<button
|
||||
onClick={onReconnect}
|
||||
style={{
|
||||
background: "none",
|
||||
border: "1px solid currentColor",
|
||||
color: "inherit",
|
||||
padding: "2px 8px",
|
||||
borderRadius: "4px",
|
||||
cursor: "pointer",
|
||||
fontSize: "12px",
|
||||
}}
|
||||
>
|
||||
Reconnect
|
||||
</button>
|
||||
)}
|
||||
</div>
|
||||
);
|
||||
}
|
||||
13
frontend/src/components/Layout.tsx
Normal file
13
frontend/src/components/Layout.tsx
Normal file
@@ -0,0 +1,13 @@
|
||||
import { Outlet } from "react-router-dom";
|
||||
import { NavBar } from "./NavBar";
|
||||
|
||||
export function Layout() {
|
||||
return (
|
||||
<div style={{ display: "flex", flexDirection: "column", height: "100vh" }}>
|
||||
<NavBar />
|
||||
<main style={{ flex: 1, overflow: "auto" }}>
|
||||
<Outlet />
|
||||
</main>
|
||||
</div>
|
||||
);
|
||||
}
|
||||
32
frontend/src/components/MetricCard.tsx
Normal file
32
frontend/src/components/MetricCard.tsx
Normal file
@@ -0,0 +1,32 @@
|
||||
interface MetricCardProps {
|
||||
label: string;
|
||||
value: string | number;
|
||||
unit?: string;
|
||||
suffix?: string;
|
||||
}
|
||||
|
||||
export function MetricCard({ label, value, unit, suffix }: MetricCardProps) {
|
||||
return (
|
||||
<div
|
||||
style={{
|
||||
background: "#fff",
|
||||
border: "1px solid #e0e0e0",
|
||||
borderRadius: "8px",
|
||||
padding: "16px 20px",
|
||||
minWidth: "140px",
|
||||
boxShadow: "0 1px 3px rgba(0,0,0,0.06)",
|
||||
}}
|
||||
>
|
||||
<div
|
||||
style={{ fontSize: "12px", color: "#888", marginBottom: "8px", textTransform: "uppercase", letterSpacing: "0.5px" }}
|
||||
>
|
||||
{label}
|
||||
</div>
|
||||
<div style={{ fontSize: "28px", fontWeight: 700, color: "#1a1a1a" }}>
|
||||
{unit && <span style={{ fontSize: "16px", color: "#555" }}>{unit}</span>}
|
||||
{value}
|
||||
{suffix && <span style={{ fontSize: "16px", color: "#555", marginLeft: "2px" }}>{suffix}</span>}
|
||||
</div>
|
||||
</div>
|
||||
);
|
||||
}
|
||||
64
frontend/src/components/NavBar.tsx
Normal file
64
frontend/src/components/NavBar.tsx
Normal file
@@ -0,0 +1,64 @@
|
||||
import { NavLink } from "react-router-dom";
|
||||
|
||||
const navLinks = [
|
||||
{ to: "/", label: "Chat", exact: true },
|
||||
{ to: "/replay", label: "Replay" },
|
||||
{ to: "/dashboard", label: "Dashboard" },
|
||||
{ to: "/review", label: "API Review" },
|
||||
];
|
||||
|
||||
const styles: Record<string, React.CSSProperties> = {
|
||||
nav: {
|
||||
display: "flex",
|
||||
alignItems: "center",
|
||||
gap: "0",
|
||||
padding: "0 16px",
|
||||
borderBottom: "1px solid #e0e0e0",
|
||||
background: "#fff",
|
||||
height: "48px",
|
||||
boxShadow: "0 1px 4px rgba(0,0,0,0.06)",
|
||||
},
|
||||
brand: {
|
||||
fontWeight: 700,
|
||||
fontSize: "16px",
|
||||
color: "#1a1a1a",
|
||||
marginRight: "24px",
|
||||
textDecoration: "none",
|
||||
},
|
||||
link: {
|
||||
padding: "0 14px",
|
||||
height: "48px",
|
||||
display: "flex",
|
||||
alignItems: "center",
|
||||
fontSize: "14px",
|
||||
color: "#555",
|
||||
textDecoration: "none",
|
||||
borderBottom: "2px solid transparent",
|
||||
transition: "color 0.15s, border-color 0.15s",
|
||||
},
|
||||
activeLink: {
|
||||
color: "#1976d2",
|
||||
borderBottom: "2px solid #1976d2",
|
||||
},
|
||||
};
|
||||
|
||||
export function NavBar() {
|
||||
return (
|
||||
<nav style={styles.nav}>
|
||||
<span style={styles.brand}>Smart Support</span>
|
||||
{navLinks.map(({ to, label }) => (
|
||||
<NavLink
|
||||
key={to}
|
||||
to={to}
|
||||
end={to === "/"}
|
||||
style={({ isActive }) => ({
|
||||
...styles.link,
|
||||
...(isActive ? styles.activeLink : {}),
|
||||
})}
|
||||
>
|
||||
{label}
|
||||
</NavLink>
|
||||
))}
|
||||
</nav>
|
||||
);
|
||||
}
|
||||
144
frontend/src/components/ReplayTimeline.tsx
Normal file
144
frontend/src/components/ReplayTimeline.tsx
Normal file
@@ -0,0 +1,144 @@
|
||||
import { useState } from "react";
|
||||
import type { ReplayStep } from "../api";
|
||||
|
||||
const TYPE_COLORS: Record<string, string> = {
|
||||
message: "#1976d2",
|
||||
token: "#388e3c",
|
||||
tool_call: "#f57c00",
|
||||
tool_result: "#7b1fa2",
|
||||
interrupt: "#d32f2f",
|
||||
interrupt_response: "#c2185b",
|
||||
error: "#c62828",
|
||||
};
|
||||
|
||||
function TypeBadge({ type }: { type: string }) {
|
||||
const color = TYPE_COLORS[type] ?? "#555";
|
||||
return (
|
||||
<span
|
||||
style={{
|
||||
background: color,
|
||||
color: "#fff",
|
||||
fontSize: "11px",
|
||||
fontWeight: 600,
|
||||
padding: "2px 7px",
|
||||
borderRadius: "10px",
|
||||
textTransform: "uppercase",
|
||||
letterSpacing: "0.5px",
|
||||
}}
|
||||
>
|
||||
{type}
|
||||
</span>
|
||||
);
|
||||
}
|
||||
|
||||
function ReplayStepItem({ step }: { step: ReplayStep }) {
|
||||
const [expanded, setExpanded] = useState(false);
|
||||
const hasDetails = step.params != null || step.result != null;
|
||||
|
||||
return (
|
||||
<div
|
||||
style={{
|
||||
borderLeft: "2px solid #e0e0e0",
|
||||
paddingLeft: "12px",
|
||||
marginBottom: "12px",
|
||||
position: "relative",
|
||||
}}
|
||||
>
|
||||
<div
|
||||
style={{
|
||||
position: "absolute",
|
||||
left: "-5px",
|
||||
top: "4px",
|
||||
width: "8px",
|
||||
height: "8px",
|
||||
borderRadius: "50%",
|
||||
background: TYPE_COLORS[step.type] ?? "#555",
|
||||
}}
|
||||
/>
|
||||
<div style={{ display: "flex", alignItems: "center", gap: "8px", marginBottom: "4px" }}>
|
||||
<span style={{ fontSize: "11px", color: "#888" }}>#{step.step}</span>
|
||||
<TypeBadge type={step.type} />
|
||||
{step.agent && (
|
||||
<span style={{ fontSize: "11px", color: "#666", fontStyle: "italic" }}>
|
||||
{step.agent}
|
||||
</span>
|
||||
)}
|
||||
{step.tool && (
|
||||
<span style={{ fontSize: "11px", color: "#555" }}>
|
||||
tool: <strong>{step.tool}</strong>
|
||||
</span>
|
||||
)}
|
||||
<span style={{ fontSize: "11px", color: "#aaa", marginLeft: "auto" }}>
|
||||
{new Date(step.timestamp).toLocaleTimeString()}
|
||||
</span>
|
||||
</div>
|
||||
{step.content && (
|
||||
<div
|
||||
style={{
|
||||
fontSize: "13px",
|
||||
color: "#333",
|
||||
background: "#f9f9f9",
|
||||
padding: "6px 10px",
|
||||
borderRadius: "4px",
|
||||
maxHeight: "80px",
|
||||
overflow: "hidden",
|
||||
textOverflow: "ellipsis",
|
||||
}}
|
||||
>
|
||||
{step.content}
|
||||
</div>
|
||||
)}
|
||||
{hasDetails && (
|
||||
<button
|
||||
onClick={() => setExpanded((v) => !v)}
|
||||
style={{
|
||||
background: "none",
|
||||
border: "none",
|
||||
color: "#1976d2",
|
||||
cursor: "pointer",
|
||||
fontSize: "12px",
|
||||
padding: "2px 0",
|
||||
}}
|
||||
>
|
||||
{expanded ? "Hide details" : "Show details"}
|
||||
</button>
|
||||
)}
|
||||
{expanded && hasDetails && (
|
||||
<pre
|
||||
style={{
|
||||
fontSize: "11px",
|
||||
background: "#f3f3f3",
|
||||
padding: "8px",
|
||||
borderRadius: "4px",
|
||||
overflow: "auto",
|
||||
maxHeight: "200px",
|
||||
}}
|
||||
>
|
||||
{JSON.stringify({ params: step.params, result: step.result }, null, 2)}
|
||||
</pre>
|
||||
)}
|
||||
</div>
|
||||
);
|
||||
}
|
||||
|
||||
interface ReplayTimelineProps {
|
||||
steps: ReplayStep[];
|
||||
}
|
||||
|
||||
export function ReplayTimeline({ steps }: ReplayTimelineProps) {
|
||||
if (steps.length === 0) {
|
||||
return (
|
||||
<div style={{ color: "#888", fontSize: "14px", padding: "16px 0" }}>
|
||||
No steps recorded.
|
||||
</div>
|
||||
);
|
||||
}
|
||||
|
||||
return (
|
||||
<div style={{ padding: "8px 0" }}>
|
||||
{steps.map((step) => (
|
||||
<ReplayStepItem key={step.step} step={step} />
|
||||
))}
|
||||
</div>
|
||||
);
|
||||
}
|
||||
@@ -21,13 +21,23 @@ function getOrCreateThreadId(): string {
|
||||
return id;
|
||||
}
|
||||
|
||||
export function useWebSocket(onMessage: (msg: ServerMessage) => void) {
|
||||
interface UseWebSocketOptions {
|
||||
onDisconnect?: () => void;
|
||||
onReconnect?: () => void;
|
||||
}
|
||||
|
||||
export function useWebSocket(
|
||||
onMessage: (msg: ServerMessage) => void,
|
||||
options?: UseWebSocketOptions
|
||||
) {
|
||||
const [status, setStatus] = useState<ConnectionStatus>("disconnected");
|
||||
const [threadId] = useState(getOrCreateThreadId);
|
||||
const wsRef = useRef<WebSocket | null>(null);
|
||||
const retriesRef = useRef(0);
|
||||
const onMessageRef = useRef(onMessage);
|
||||
const optionsRef = useRef(options);
|
||||
onMessageRef.current = onMessage;
|
||||
optionsRef.current = options;
|
||||
|
||||
const connect = useCallback(() => {
|
||||
if (wsRef.current?.readyState === WebSocket.OPEN) return;
|
||||
@@ -38,6 +48,7 @@ export function useWebSocket(onMessage: (msg: ServerMessage) => void) {
|
||||
ws.onopen = () => {
|
||||
setStatus("connected");
|
||||
retriesRef.current = 0;
|
||||
optionsRef.current?.onReconnect?.();
|
||||
};
|
||||
|
||||
ws.onmessage = (event) => {
|
||||
@@ -52,6 +63,7 @@ export function useWebSocket(onMessage: (msg: ServerMessage) => void) {
|
||||
ws.onclose = () => {
|
||||
setStatus("disconnected");
|
||||
wsRef.current = null;
|
||||
optionsRef.current?.onDisconnect?.();
|
||||
|
||||
if (retriesRef.current < MAX_RETRIES) {
|
||||
const delay = BASE_DELAY_MS * Math.pow(2, retriesRef.current);
|
||||
@@ -74,6 +86,12 @@ export function useWebSocket(onMessage: (msg: ServerMessage) => void) {
|
||||
};
|
||||
}, [connect]);
|
||||
|
||||
const reconnect = useCallback(() => {
|
||||
retriesRef.current = 0;
|
||||
wsRef.current?.close();
|
||||
connect();
|
||||
}, [connect]);
|
||||
|
||||
const send = useCallback((msg: ClientMessage) => {
|
||||
if (wsRef.current?.readyState === WebSocket.OPEN) {
|
||||
wsRef.current.send(JSON.stringify(msg));
|
||||
@@ -100,5 +118,5 @@ export function useWebSocket(onMessage: (msg: ServerMessage) => void) {
|
||||
[send, threadId]
|
||||
);
|
||||
|
||||
return { status, threadId, sendMessage, sendInterruptResponse };
|
||||
return { status, threadId, sendMessage, sendInterruptResponse, reconnect };
|
||||
}
|
||||
|
||||
@@ -2,6 +2,7 @@ import { useCallback, useState } from "react";
|
||||
import { AgentAction } from "../components/AgentAction";
|
||||
import { ChatInput } from "../components/ChatInput";
|
||||
import { ChatMessages } from "../components/ChatMessages";
|
||||
import { ErrorBanner } from "../components/ErrorBanner";
|
||||
import { InterruptPrompt } from "../components/InterruptPrompt";
|
||||
import { useWebSocket } from "../hooks/useWebSocket";
|
||||
import type {
|
||||
@@ -95,7 +96,7 @@ export function ChatPage() {
|
||||
}
|
||||
}, []);
|
||||
|
||||
const { status, sendMessage, sendInterruptResponse } =
|
||||
const { status, sendMessage, sendInterruptResponse, reconnect } =
|
||||
useWebSocket(handleServerMessage);
|
||||
|
||||
const handleSend = useCallback(
|
||||
@@ -130,6 +131,7 @@ export function ChatPage() {
|
||||
<h1 style={styles.title}>Smart Support</h1>
|
||||
<StatusIndicator status={status} />
|
||||
</div>
|
||||
<ErrorBanner status={status} onReconnect={reconnect} />
|
||||
<ChatMessages messages={messages} />
|
||||
{toolActions.length > 0 && (
|
||||
<div style={styles.actionsBar}>
|
||||
|
||||
184
frontend/src/pages/DashboardPage.tsx
Normal file
184
frontend/src/pages/DashboardPage.tsx
Normal file
@@ -0,0 +1,184 @@
|
||||
import { useEffect, useState } from "react";
|
||||
import { fetchAnalytics } from "../api";
|
||||
import type { AnalyticsData } from "../api";
|
||||
import { MetricCard } from "../components/MetricCard";
|
||||
|
||||
const RANGE_OPTIONS = [
|
||||
{ value: "7d", label: "7 days" },
|
||||
{ value: "14d", label: "14 days" },
|
||||
{ value: "30d", label: "30 days" },
|
||||
];
|
||||
|
||||
function pct(value: number): string {
|
||||
return `${(value * 100).toFixed(1)}%`;
|
||||
}
|
||||
|
||||
function formatCost(usd: number): string {
|
||||
return usd < 0.01 ? "<$0.01" : `$${usd.toFixed(3)}`;
|
||||
}
|
||||
|
||||
export function DashboardPage() {
|
||||
const [range, setRange] = useState("7d");
|
||||
const [data, setData] = useState<AnalyticsData | null>(null);
|
||||
const [loading, setLoading] = useState(true);
|
||||
const [error, setError] = useState<string | null>(null);
|
||||
|
||||
useEffect(() => {
|
||||
setLoading(true);
|
||||
setError(null);
|
||||
fetchAnalytics(range)
|
||||
.then(setData)
|
||||
.catch((err: Error) => setError(err.message))
|
||||
.finally(() => setLoading(false));
|
||||
}, [range]);
|
||||
|
||||
return (
|
||||
<div style={styles.container}>
|
||||
<div style={styles.header}>
|
||||
<h2 style={styles.heading}>Dashboard</h2>
|
||||
<div style={styles.rangeSelector}>
|
||||
{RANGE_OPTIONS.map((opt) => (
|
||||
<button
|
||||
key={opt.value}
|
||||
onClick={() => setRange(opt.value)}
|
||||
style={{
|
||||
...styles.rangeBtn,
|
||||
...(range === opt.value ? styles.rangeBtnActive : {}),
|
||||
}}
|
||||
>
|
||||
{opt.label}
|
||||
</button>
|
||||
))}
|
||||
</div>
|
||||
</div>
|
||||
|
||||
{loading && <div style={styles.center}>Loading analytics...</div>}
|
||||
{error && <div style={styles.error}>Error: {error}</div>}
|
||||
|
||||
{!loading && !error && data && (
|
||||
<>
|
||||
{data.total_conversations === 0 ? (
|
||||
<div style={styles.empty}>
|
||||
No conversations yet. Start a chat to see analytics here.
|
||||
</div>
|
||||
) : (
|
||||
<>
|
||||
<div style={styles.metricsGrid}>
|
||||
<MetricCard
|
||||
label="Total Conversations"
|
||||
value={data.total_conversations}
|
||||
/>
|
||||
<MetricCard
|
||||
label="Resolution Rate"
|
||||
value={pct(data.resolution_rate)}
|
||||
/>
|
||||
<MetricCard
|
||||
label="Escalation Rate"
|
||||
value={pct(data.escalation_rate)}
|
||||
/>
|
||||
<MetricCard
|
||||
label="Avg Turns"
|
||||
value={data.avg_turns_per_conversation.toFixed(1)}
|
||||
/>
|
||||
<MetricCard
|
||||
label="Total Tokens"
|
||||
value={data.total_tokens.toLocaleString()}
|
||||
/>
|
||||
<MetricCard
|
||||
label="Total Cost"
|
||||
value={formatCost(data.total_cost_usd)}
|
||||
/>
|
||||
</div>
|
||||
|
||||
<h3 style={styles.sectionHeading}>Agent Usage</h3>
|
||||
{data.agent_usage.length === 0 ? (
|
||||
<div style={styles.empty}>No agent data.</div>
|
||||
) : (
|
||||
<table style={styles.table}>
|
||||
<thead>
|
||||
<tr>
|
||||
<th style={styles.th}>Agent</th>
|
||||
<th style={styles.th}>Messages</th>
|
||||
<th style={styles.th}>Tokens</th>
|
||||
<th style={styles.th}>Cost</th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
{data.agent_usage.map((a) => (
|
||||
<tr key={a.agent_name}>
|
||||
<td style={styles.td}>{a.agent_name}</td>
|
||||
<td style={styles.td}>{a.message_count}</td>
|
||||
<td style={styles.td}>{a.total_tokens.toLocaleString()}</td>
|
||||
<td style={styles.td}>{formatCost(a.total_cost_usd)}</td>
|
||||
</tr>
|
||||
))}
|
||||
</tbody>
|
||||
</table>
|
||||
)}
|
||||
|
||||
<h3 style={styles.sectionHeading}>Interrupt Stats</h3>
|
||||
<div style={styles.metricsGrid}>
|
||||
<MetricCard label="Total Interrupts" value={data.interrupt_stats.total} />
|
||||
<MetricCard label="Approved" value={data.interrupt_stats.approved} />
|
||||
<MetricCard label="Rejected" value={data.interrupt_stats.rejected} />
|
||||
<MetricCard label="Expired" value={data.interrupt_stats.expired} />
|
||||
</div>
|
||||
</>
|
||||
)}
|
||||
</>
|
||||
)}
|
||||
</div>
|
||||
);
|
||||
}
|
||||
|
||||
const styles: Record<string, React.CSSProperties> = {
|
||||
container: { padding: "24px", maxWidth: "1000px", margin: "0 auto" },
|
||||
header: {
|
||||
display: "flex",
|
||||
justifyContent: "space-between",
|
||||
alignItems: "center",
|
||||
marginBottom: "20px",
|
||||
},
|
||||
heading: { fontSize: "20px", fontWeight: 700, margin: 0 },
|
||||
rangeSelector: { display: "flex", gap: "4px" },
|
||||
rangeBtn: {
|
||||
padding: "5px 14px",
|
||||
border: "1px solid #e0e0e0",
|
||||
borderRadius: "4px",
|
||||
background: "#fff",
|
||||
cursor: "pointer",
|
||||
fontSize: "13px",
|
||||
color: "#555",
|
||||
},
|
||||
rangeBtnActive: {
|
||||
background: "#1976d2",
|
||||
color: "#fff",
|
||||
borderColor: "#1976d2",
|
||||
},
|
||||
metricsGrid: {
|
||||
display: "flex",
|
||||
flexWrap: "wrap" as const,
|
||||
gap: "12px",
|
||||
marginBottom: "24px",
|
||||
},
|
||||
sectionHeading: {
|
||||
fontSize: "15px",
|
||||
fontWeight: 600,
|
||||
marginBottom: "12px",
|
||||
color: "#333",
|
||||
},
|
||||
table: { width: "100%", borderCollapse: "collapse", fontSize: "13px", marginBottom: "24px" },
|
||||
th: {
|
||||
textAlign: "left",
|
||||
padding: "8px 12px",
|
||||
borderBottom: "2px solid #e0e0e0",
|
||||
color: "#555",
|
||||
fontWeight: 600,
|
||||
textTransform: "uppercase",
|
||||
fontSize: "11px",
|
||||
},
|
||||
td: { padding: "10px 12px", borderBottom: "1px solid #f0f0f0" },
|
||||
center: { padding: "48px", textAlign: "center", color: "#888" },
|
||||
error: { padding: "24px", color: "#c62828" },
|
||||
empty: { color: "#888", fontSize: "14px", padding: "16px 0" },
|
||||
};
|
||||
133
frontend/src/pages/ReplayListPage.tsx
Normal file
133
frontend/src/pages/ReplayListPage.tsx
Normal file
@@ -0,0 +1,133 @@
|
||||
import { useEffect, useState } from "react";
|
||||
import { useNavigate } from "react-router-dom";
|
||||
import { fetchConversations } from "../api";
|
||||
import type { ConversationSummary } from "../api";
|
||||
|
||||
export function ReplayListPage() {
|
||||
const [conversations, setConversations] = useState<ConversationSummary[]>([]);
|
||||
const [total, setTotal] = useState(0);
|
||||
const [page, setPage] = useState(1);
|
||||
const [loading, setLoading] = useState(true);
|
||||
const [error, setError] = useState<string | null>(null);
|
||||
const navigate = useNavigate();
|
||||
const perPage = 20;
|
||||
|
||||
useEffect(() => {
|
||||
setLoading(true);
|
||||
setError(null);
|
||||
fetchConversations(page, perPage)
|
||||
.then((data) => {
|
||||
setConversations(data.conversations);
|
||||
setTotal(data.total);
|
||||
})
|
||||
.catch((err: Error) => setError(err.message))
|
||||
.finally(() => setLoading(false));
|
||||
}, [page]);
|
||||
|
||||
if (loading) {
|
||||
return <div style={styles.center}>Loading conversations...</div>;
|
||||
}
|
||||
|
||||
if (error) {
|
||||
return <div style={styles.error}>Error: {error}</div>;
|
||||
}
|
||||
|
||||
const totalPages = Math.ceil(total / perPage);
|
||||
|
||||
return (
|
||||
<div style={styles.container}>
|
||||
<h2 style={styles.heading}>Conversations</h2>
|
||||
{conversations.length === 0 ? (
|
||||
<div style={styles.empty}>No conversations yet.</div>
|
||||
) : (
|
||||
<>
|
||||
<table style={styles.table}>
|
||||
<thead>
|
||||
<tr>
|
||||
<th style={styles.th}>Thread ID</th>
|
||||
<th style={styles.th}>Started</th>
|
||||
<th style={styles.th}>Turns</th>
|
||||
<th style={styles.th}>Agents</th>
|
||||
<th style={styles.th}>Resolution</th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
{conversations.map((c) => (
|
||||
<tr
|
||||
key={c.thread_id}
|
||||
onClick={() => navigate(`/replay/${c.thread_id}`)}
|
||||
style={styles.row}
|
||||
>
|
||||
<td style={styles.td}>
|
||||
<span style={styles.threadId}>{c.thread_id}</span>
|
||||
</td>
|
||||
<td style={styles.td}>
|
||||
{new Date(c.started_at).toLocaleString()}
|
||||
</td>
|
||||
<td style={styles.td}>{c.turn_count}</td>
|
||||
<td style={styles.td}>{c.agents_used.join(", ") || "—"}</td>
|
||||
<td style={styles.td}>{c.resolution_type ?? "open"}</td>
|
||||
</tr>
|
||||
))}
|
||||
</tbody>
|
||||
</table>
|
||||
<div style={styles.pagination}>
|
||||
<button
|
||||
onClick={() => setPage((p) => Math.max(1, p - 1))}
|
||||
disabled={page === 1}
|
||||
style={styles.pageBtn}
|
||||
>
|
||||
Previous
|
||||
</button>
|
||||
<span style={{ fontSize: "13px", color: "#555" }}>
|
||||
Page {page} of {totalPages}
|
||||
</span>
|
||||
<button
|
||||
onClick={() => setPage((p) => Math.min(totalPages, p + 1))}
|
||||
disabled={page >= totalPages}
|
||||
style={styles.pageBtn}
|
||||
>
|
||||
Next
|
||||
</button>
|
||||
</div>
|
||||
</>
|
||||
)}
|
||||
</div>
|
||||
);
|
||||
}
|
||||
|
||||
const styles: Record<string, React.CSSProperties> = {
|
||||
container: { padding: "24px", maxWidth: "1000px", margin: "0 auto" },
|
||||
heading: { fontSize: "20px", fontWeight: 700, marginBottom: "16px" },
|
||||
center: { padding: "48px", textAlign: "center", color: "#888" },
|
||||
error: { padding: "24px", color: "#c62828" },
|
||||
empty: { color: "#888", fontSize: "14px" },
|
||||
table: { width: "100%", borderCollapse: "collapse", fontSize: "13px" },
|
||||
th: {
|
||||
textAlign: "left",
|
||||
padding: "8px 12px",
|
||||
borderBottom: "2px solid #e0e0e0",
|
||||
color: "#555",
|
||||
fontWeight: 600,
|
||||
textTransform: "uppercase",
|
||||
fontSize: "11px",
|
||||
letterSpacing: "0.5px",
|
||||
},
|
||||
td: { padding: "10px 12px", borderBottom: "1px solid #f0f0f0" },
|
||||
row: { cursor: "pointer", transition: "background 0.1s" },
|
||||
threadId: { fontFamily: "monospace", fontSize: "12px", color: "#1976d2" },
|
||||
pagination: {
|
||||
display: "flex",
|
||||
alignItems: "center",
|
||||
gap: "12px",
|
||||
marginTop: "16px",
|
||||
},
|
||||
pageBtn: {
|
||||
padding: "6px 14px",
|
||||
border: "1px solid #e0e0e0",
|
||||
borderRadius: "4px",
|
||||
background: "#fff",
|
||||
cursor: "pointer",
|
||||
fontSize: "13px",
|
||||
},
|
||||
};
|
||||
89
frontend/src/pages/ReplayPage.tsx
Normal file
89
frontend/src/pages/ReplayPage.tsx
Normal file
@@ -0,0 +1,89 @@
|
||||
import { useEffect, useState } from "react";
|
||||
import { useParams } from "react-router-dom";
|
||||
import { fetchReplay } from "../api";
|
||||
import type { ReplayStep } from "../api";
|
||||
import { ReplayTimeline } from "../components/ReplayTimeline";
|
||||
|
||||
export function ReplayPage() {
|
||||
const { threadId } = useParams<{ threadId: string }>();
|
||||
const [steps, setSteps] = useState<ReplayStep[]>([]);
|
||||
const [total, setTotal] = useState(0);
|
||||
const [page, setPage] = useState(1);
|
||||
const [loading, setLoading] = useState(true);
|
||||
const [error, setError] = useState<string | null>(null);
|
||||
const perPage = 20;
|
||||
|
||||
useEffect(() => {
|
||||
if (!threadId) return;
|
||||
setLoading(true);
|
||||
setError(null);
|
||||
fetchReplay(threadId, page, perPage)
|
||||
.then((data) => {
|
||||
setSteps(data.steps);
|
||||
setTotal(data.total);
|
||||
})
|
||||
.catch((err: Error) => setError(err.message))
|
||||
.finally(() => setLoading(false));
|
||||
}, [threadId, page]);
|
||||
|
||||
if (!threadId) {
|
||||
return <div style={styles.error}>No thread ID provided.</div>;
|
||||
}
|
||||
|
||||
const totalPages = Math.ceil(total / perPage);
|
||||
|
||||
return (
|
||||
<div style={styles.container}>
|
||||
<h2 style={styles.heading}>
|
||||
Replay:{" "}
|
||||
<span style={styles.threadId}>{threadId}</span>
|
||||
</h2>
|
||||
{loading && <div style={styles.center}>Loading replay...</div>}
|
||||
{error && <div style={styles.error}>Error: {error}</div>}
|
||||
{!loading && !error && <ReplayTimeline steps={steps} />}
|
||||
{!loading && totalPages > 1 && (
|
||||
<div style={styles.pagination}>
|
||||
<button
|
||||
onClick={() => setPage((p) => Math.max(1, p - 1))}
|
||||
disabled={page === 1}
|
||||
style={styles.pageBtn}
|
||||
>
|
||||
Previous
|
||||
</button>
|
||||
<span style={{ fontSize: "13px", color: "#555" }}>
|
||||
Page {page} of {totalPages} ({total} steps)
|
||||
</span>
|
||||
<button
|
||||
onClick={() => setPage((p) => Math.min(totalPages, p + 1))}
|
||||
disabled={page >= totalPages}
|
||||
style={styles.pageBtn}
|
||||
>
|
||||
Next
|
||||
</button>
|
||||
</div>
|
||||
)}
|
||||
</div>
|
||||
);
|
||||
}
|
||||
|
||||
const styles: Record<string, React.CSSProperties> = {
|
||||
container: { padding: "24px", maxWidth: "800px", margin: "0 auto" },
|
||||
heading: { fontSize: "20px", fontWeight: 700, marginBottom: "20px" },
|
||||
threadId: { fontFamily: "monospace", fontSize: "16px", color: "#1976d2" },
|
||||
center: { padding: "48px", textAlign: "center", color: "#888" },
|
||||
error: { padding: "24px", color: "#c62828" },
|
||||
pagination: {
|
||||
display: "flex",
|
||||
alignItems: "center",
|
||||
gap: "12px",
|
||||
marginTop: "20px",
|
||||
},
|
||||
pageBtn: {
|
||||
padding: "6px 14px",
|
||||
border: "1px solid #e0e0e0",
|
||||
borderRadius: "4px",
|
||||
background: "#fff",
|
||||
cursor: "pointer",
|
||||
fontSize: "13px",
|
||||
},
|
||||
};
|
||||
277
frontend/src/pages/ReviewPage.tsx
Normal file
277
frontend/src/pages/ReviewPage.tsx
Normal file
@@ -0,0 +1,277 @@
|
||||
import { useEffect, useRef, useState } from "react";
|
||||
|
||||
interface ImportJob {
|
||||
job_id: string;
|
||||
status: "pending" | "processing" | "done" | "error";
|
||||
error?: string;
|
||||
}
|
||||
|
||||
interface EndpointClassification {
|
||||
path: string;
|
||||
method: string;
|
||||
summary: string;
|
||||
access_type: string;
|
||||
agent_group: string;
|
||||
}
|
||||
|
||||
interface JobResult {
|
||||
job_id: string;
|
||||
status: string;
|
||||
endpoints: EndpointClassification[];
|
||||
}
|
||||
|
||||
export function ReviewPage() {
|
||||
const [url, setUrl] = useState("");
|
||||
const [job, setJob] = useState<ImportJob | null>(null);
|
||||
const [result, setResult] = useState<JobResult | null>(null);
|
||||
const [submitting, setSubmitting] = useState(false);
|
||||
const [submitError, setSubmitError] = useState<string | null>(null);
|
||||
const [classifications, setClassifications] = useState<EndpointClassification[]>([]);
|
||||
const pollRef = useRef<ReturnType<typeof setTimeout> | null>(null);
|
||||
|
||||
useEffect(() => {
|
||||
return () => {
|
||||
if (pollRef.current) clearTimeout(pollRef.current);
|
||||
};
|
||||
}, []);
|
||||
|
||||
function pollJob(jobId: string) {
|
||||
fetch(`/api/openapi/jobs/${encodeURIComponent(jobId)}`)
|
||||
.then((r) => r.json())
|
||||
.then((data) => {
|
||||
const j: ImportJob = data.data ?? data;
|
||||
setJob(j);
|
||||
if (j.status === "done") {
|
||||
return fetch(`/api/openapi/jobs/${encodeURIComponent(jobId)}/result`)
|
||||
.then((r) => r.json())
|
||||
.then((rdata) => {
|
||||
const res: JobResult = rdata.data ?? rdata;
|
||||
setResult(res);
|
||||
setClassifications(res.endpoints ?? []);
|
||||
});
|
||||
} else if (j.status === "error") {
|
||||
return;
|
||||
} else {
|
||||
pollRef.current = setTimeout(() => pollJob(jobId), 2000);
|
||||
}
|
||||
})
|
||||
.catch(() => {
|
||||
pollRef.current = setTimeout(() => pollJob(jobId), 3000);
|
||||
});
|
||||
}
|
||||
|
||||
function handleSubmit(e: React.FormEvent) {
|
||||
e.preventDefault();
|
||||
if (!url.trim()) return;
|
||||
setSubmitting(true);
|
||||
setSubmitError(null);
|
||||
setJob(null);
|
||||
setResult(null);
|
||||
setClassifications([]);
|
||||
|
||||
fetch("/api/openapi/import", {
|
||||
method: "POST",
|
||||
headers: { "Content-Type": "application/json" },
|
||||
body: JSON.stringify({ url }),
|
||||
})
|
||||
.then((r) => r.json())
|
||||
.then((data) => {
|
||||
const j: ImportJob = data.data ?? data;
|
||||
setJob(j);
|
||||
if (j.job_id) pollJob(j.job_id);
|
||||
})
|
||||
.catch((err: Error) => setSubmitError(err.message))
|
||||
.finally(() => setSubmitting(false));
|
||||
}
|
||||
|
||||
function handleFieldChange(
|
||||
idx: number,
|
||||
field: keyof EndpointClassification,
|
||||
value: string
|
||||
) {
|
||||
setClassifications((prev) =>
|
||||
prev.map((c, i) => (i === idx ? { ...c, [field]: value } : c))
|
||||
);
|
||||
}
|
||||
|
||||
function handleApprove() {
|
||||
if (!job?.job_id) return;
|
||||
fetch(`/api/openapi/jobs/${encodeURIComponent(job.job_id)}/approve`, {
|
||||
method: "POST",
|
||||
headers: { "Content-Type": "application/json" },
|
||||
body: JSON.stringify({ endpoints: classifications }),
|
||||
}).then(() => {
|
||||
alert("Approved and saved.");
|
||||
});
|
||||
}
|
||||
|
||||
return (
|
||||
<div style={styles.container}>
|
||||
<h2 style={styles.heading}>OpenAPI Import & Review</h2>
|
||||
|
||||
<form onSubmit={handleSubmit} style={styles.form}>
|
||||
<input
|
||||
type="url"
|
||||
placeholder="https://example.com/openapi.yaml"
|
||||
value={url}
|
||||
onChange={(e) => setUrl(e.target.value)}
|
||||
style={styles.input}
|
||||
required
|
||||
/>
|
||||
<button type="submit" disabled={submitting} style={styles.submitBtn}>
|
||||
{submitting ? "Importing..." : "Import"}
|
||||
</button>
|
||||
</form>
|
||||
|
||||
{submitError && <div style={styles.error}>Error: {submitError}</div>}
|
||||
|
||||
{job && (
|
||||
<div style={styles.statusBox}>
|
||||
<strong>Job:</strong> {job.job_id} — Status:{" "}
|
||||
<span
|
||||
style={{
|
||||
color:
|
||||
job.status === "done"
|
||||
? "#388e3c"
|
||||
: job.status === "error"
|
||||
? "#c62828"
|
||||
: "#f57c00",
|
||||
fontWeight: 600,
|
||||
}}
|
||||
>
|
||||
{job.status}
|
||||
</span>
|
||||
{job.error && (
|
||||
<div style={{ color: "#c62828", marginTop: "4px" }}>{job.error}</div>
|
||||
)}
|
||||
</div>
|
||||
)}
|
||||
|
||||
{result && classifications.length > 0 && (
|
||||
<>
|
||||
<h3 style={styles.sectionHeading}>
|
||||
Endpoint Classifications ({classifications.length})
|
||||
</h3>
|
||||
<p style={styles.hint}>
|
||||
Review and edit the access_type and agent_group before approving.
|
||||
</p>
|
||||
<table style={styles.table}>
|
||||
<thead>
|
||||
<tr>
|
||||
<th style={styles.th}>Method</th>
|
||||
<th style={styles.th}>Path</th>
|
||||
<th style={styles.th}>Summary</th>
|
||||
<th style={styles.th}>Access Type</th>
|
||||
<th style={styles.th}>Agent Group</th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
{classifications.map((c, idx) => (
|
||||
<tr key={`${c.method}-${c.path}`}>
|
||||
<td style={styles.td}>
|
||||
<span style={{ fontWeight: 600, fontSize: "11px" }}>
|
||||
{c.method.toUpperCase()}
|
||||
</span>
|
||||
</td>
|
||||
<td style={{ ...styles.td, fontFamily: "monospace", fontSize: "12px" }}>
|
||||
{c.path}
|
||||
</td>
|
||||
<td style={styles.td}>{c.summary}</td>
|
||||
<td style={styles.td}>
|
||||
<select
|
||||
value={c.access_type}
|
||||
onChange={(e) => handleFieldChange(idx, "access_type", e.target.value)}
|
||||
style={styles.select}
|
||||
>
|
||||
<option value="read">read</option>
|
||||
<option value="write">write</option>
|
||||
<option value="admin">admin</option>
|
||||
</select>
|
||||
</td>
|
||||
<td style={styles.td}>
|
||||
<input
|
||||
type="text"
|
||||
value={c.agent_group}
|
||||
onChange={(e) => handleFieldChange(idx, "agent_group", e.target.value)}
|
||||
style={styles.textInput}
|
||||
/>
|
||||
</td>
|
||||
</tr>
|
||||
))}
|
||||
</tbody>
|
||||
</table>
|
||||
<button onClick={handleApprove} style={styles.approveBtn}>
|
||||
Approve & Save
|
||||
</button>
|
||||
</>
|
||||
)}
|
||||
</div>
|
||||
);
|
||||
}
|
||||
|
||||
const styles: Record<string, React.CSSProperties> = {
|
||||
container: { padding: "24px", maxWidth: "1000px", margin: "0 auto" },
|
||||
heading: { fontSize: "20px", fontWeight: 700, marginBottom: "16px" },
|
||||
form: { display: "flex", gap: "8px", marginBottom: "16px" },
|
||||
input: {
|
||||
flex: 1,
|
||||
padding: "8px 12px",
|
||||
border: "1px solid #e0e0e0",
|
||||
borderRadius: "4px",
|
||||
fontSize: "14px",
|
||||
},
|
||||
submitBtn: {
|
||||
padding: "8px 20px",
|
||||
background: "#1976d2",
|
||||
color: "#fff",
|
||||
border: "none",
|
||||
borderRadius: "4px",
|
||||
cursor: "pointer",
|
||||
fontSize: "14px",
|
||||
fontWeight: 600,
|
||||
},
|
||||
error: { color: "#c62828", marginBottom: "12px" },
|
||||
statusBox: {
|
||||
background: "#f9f9f9",
|
||||
border: "1px solid #e0e0e0",
|
||||
padding: "10px 14px",
|
||||
borderRadius: "4px",
|
||||
marginBottom: "16px",
|
||||
fontSize: "13px",
|
||||
},
|
||||
sectionHeading: { fontSize: "15px", fontWeight: 600, marginBottom: "8px" },
|
||||
hint: { fontSize: "12px", color: "#888", marginBottom: "12px" },
|
||||
table: { width: "100%", borderCollapse: "collapse", fontSize: "13px", marginBottom: "16px" },
|
||||
th: {
|
||||
textAlign: "left",
|
||||
padding: "8px 10px",
|
||||
borderBottom: "2px solid #e0e0e0",
|
||||
fontSize: "11px",
|
||||
textTransform: "uppercase",
|
||||
color: "#555",
|
||||
},
|
||||
td: { padding: "8px 10px", borderBottom: "1px solid #f0f0f0" },
|
||||
select: {
|
||||
padding: "3px 6px",
|
||||
border: "1px solid #e0e0e0",
|
||||
borderRadius: "3px",
|
||||
fontSize: "12px",
|
||||
},
|
||||
textInput: {
|
||||
padding: "3px 6px",
|
||||
border: "1px solid #e0e0e0",
|
||||
borderRadius: "3px",
|
||||
fontSize: "12px",
|
||||
width: "100%",
|
||||
},
|
||||
approveBtn: {
|
||||
padding: "8px 20px",
|
||||
background: "#388e3c",
|
||||
color: "#fff",
|
||||
border: "none",
|
||||
borderRadius: "4px",
|
||||
cursor: "pointer",
|
||||
fontSize: "14px",
|
||||
fontWeight: 600,
|
||||
},
|
||||
};
|
||||
@@ -1 +1 @@
|
||||
{"root":["./src/app.tsx","./src/main.tsx","./src/types.ts","./src/components/agentaction.tsx","./src/components/chatinput.tsx","./src/components/chatmessages.tsx","./src/components/interruptprompt.tsx","./src/hooks/usewebsocket.ts","./src/pages/chatpage.tsx"],"version":"5.7.3"}
|
||||
{"root":["./src/app.tsx","./src/api.ts","./src/main.tsx","./src/types.ts","./src/components/agentaction.tsx","./src/components/chatinput.tsx","./src/components/chatmessages.tsx","./src/components/errorbanner.tsx","./src/components/interruptprompt.tsx","./src/components/layout.tsx","./src/components/metriccard.tsx","./src/components/navbar.tsx","./src/components/replaytimeline.tsx","./src/hooks/usewebsocket.ts","./src/pages/chatpage.tsx","./src/pages/dashboardpage.tsx","./src/pages/replaylistpage.tsx","./src/pages/replaypage.tsx","./src/pages/reviewpage.tsx"],"version":"5.7.3"}
|
||||
@@ -7,9 +7,12 @@ export default defineConfig({
|
||||
port: 5173,
|
||||
proxy: {
|
||||
"/ws": {
|
||||
target: "ws://localhost:8000",
|
||||
target: "http://localhost:8000",
|
||||
ws: true,
|
||||
},
|
||||
"/api": {
|
||||
target: "http://localhost:8000",
|
||||
},
|
||||
},
|
||||
},
|
||||
});
|
||||
|
||||
Reference in New Issue
Block a user