smart-support/backend/app/llm.py
Yaojia Wang 33488fd634 feat: complete phase 1 -- core framework with chat loop, agents, and React UI
Backend:
- FastAPI WebSocket /ws endpoint with streaming via LangGraph astream
- LangGraph Supervisor connecting 3 mock agents (order_lookup, order_actions, fallback)
- YAML Agent Registry with Pydantic validation and immutable configs
- PostgresSaver checkpoint persistence via langgraph-checkpoint-postgres
- Session TTL with 30-min sliding window and interrupt extension
- LLM provider abstraction (Anthropic/OpenAI/Google)
- Token usage + cost tracking callback handler
- Input validation: message size cap, thread_id format, content length
- Security: no hardcoded defaults, startup API key validation, no input reflection

Frontend:
- React 19 + TypeScript + Vite chat UI
- WebSocket hook with reconnect + exponential backoff
- Streaming token display with agent attribution
- Interrupt approval/reject UI for write operations
- Collapsible tool call viewer
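The reconnect hook's exponential backoff typically follows a capped doubling schedule. A minimal sketch of that schedule in Python for illustration (the actual hook is TypeScript; the base delay and cap here are assumed values, not the project's):

```python
# Capped exponential backoff: delay doubles per failed attempt up to a cap.
# base=1.0s and cap=30.0s are illustrative assumptions.
def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Seconds to wait before reconnect attempt `attempt` (0-based)."""
    return min(cap, base * (2 ** attempt))
```

Production implementations usually also add random jitter to the computed delay so that many clients disconnected at once do not reconnect in lockstep.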

Testing:
- 87 unit tests, 87% coverage (exceeds 80% requirement)
- Ruff lint + format clean

Infrastructure:
- Docker Compose (PostgreSQL 16 + backend)
- pyproject.toml with full dependency management
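A Docker Compose file matching this description might look roughly like the following. Service names, ports, and credentials are placeholders for illustration, not the actual file:

```yaml
# Hypothetical docker-compose.yml sketch; values are placeholders.
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
    ports:
      - "5432:5432"
  backend:
    build: ./backend
    depends_on:
      - postgres
    ports:
      - "8000:8000"
```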
2026-03-30 00:54:21 +02:00


"""LLM provider factory with prompt caching support."""

from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from langchain_core.language_models import BaseChatModel

    from app.config import Settings


def create_llm(settings: Settings) -> BaseChatModel:
    """Create an LLM instance based on the configured provider."""
    provider = settings.llm_provider
    model = settings.llm_model

    if provider == "anthropic":
        from langchain_anthropic import ChatAnthropic

        return ChatAnthropic(
            model=model,
            api_key=settings.anthropic_api_key,
        )
    if provider == "openai":
        from langchain_openai import ChatOpenAI

        return ChatOpenAI(
            model=model,
            api_key=settings.openai_api_key,
        )
    if provider == "google":
        from langchain_google_genai import ChatGoogleGenerativeAI

        return ChatGoogleGenerativeAI(
            model=model,
            google_api_key=settings.google_api_key,
        )
    raise ValueError(
        f"Unknown LLM provider: '{provider}'. Use 'anthropic', 'openai', or 'google'."
    )
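A caller would drive this factory through environment-backed settings. A hypothetical `.env` fragment for illustration (variable names are inferred from the `Settings` fields referenced in `create_llm` and may not match the real configuration; values are placeholders):

```shell
# Hypothetical .env sketch; names and values are assumptions.
LLM_PROVIDER=anthropic
LLM_MODEL=<model-name>
ANTHROPIC_API_KEY=<your-key>
```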