This guide walks you through building a FastAPI backend that uses LangChain and OpenAI to deliver a streaming chat interface.
### tool_calls_run(): Streaming Logic
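At its core, the streaming logic is a generator that asks the model for a response, executes any tool calls it requests, and then streams the final answer token by token. A minimal sketch, assuming a LangChain chat model with tools already bound via `bind_tools`; the argument names (`llm_with_tools`, `tools_by_name`) are illustrative rather than the guide's exact signature:

```python
from langchain_core.messages import HumanMessage, ToolMessage

async def tool_calls_run(llm_with_tools, tools_by_name, user_input: str):
    """Run one chat turn: execute requested tools, then stream the answer."""
    messages = [HumanMessage(content=user_input)]

    # First pass: let the model decide whether it needs a tool.
    ai_msg = await llm_with_tools.ainvoke(messages)
    messages.append(ai_msg)

    # Execute each requested tool and feed its result back to the model.
    for call in ai_msg.tool_calls:
        result = tools_by_name[call["name"]].invoke(call["args"])
        messages.append(ToolMessage(content=str(result), tool_call_id=call["id"]))

    # Second pass: stream the final answer token by token.
    async for chunk in llm_with_tools.astream(messages):
        if chunk.content:
            yield chunk.content
```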
Each piece of the backend lives in its own file:

File | Responsibility |
---|---|
tool_chat.py | FastAPI endpoint (/stream-tool-chat-with-openai); sketched after this table |
ToolController.py | Orchestrates flow based on model/tool |
OpenAIToolService.py | LangChain LLM setup and execution |
simple_tools.py | Custom tool functions (e.g., search) |
prompt_template.py | Optional prompt customization |
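To make the layout above concrete, here is a hedged sketch of how simple_tools.py and tool_chat.py could wire together, reusing the `tool_calls_run` generator from earlier; the `search` body and the model name are placeholders, not the guide's actual code:

```python
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def search(query: str) -> str:
    """Look up information for a query (placeholder implementation)."""
    return f"Results for: {query}"

app = FastAPI()
# Model name is an assumption; bind the custom tool so the LLM can call it.
llm_with_tools = ChatOpenAI(model="gpt-4o-mini", streaming=True).bind_tools([search])

@app.get("/stream-tool-chat-with-openai")
async def stream_tool_chat(query: str):
    # Return the token generator as a server-sent event stream.
    generator = tool_calls_run(llm_with_tools, {"search": search}, query)
    return StreamingResponse(generator, media_type="text/event-stream")
```

Starting the app with `uvicorn tool_chat:app --reload` and requesting /stream-tool-chat-with-openai?query=... should show tokens arriving incrementally rather than as one response.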
The key concepts the guide builds on:

Concept | Purpose |
---|---|
ChatOpenAI | LangChain wrapper for OpenAI chat models |
Tools | Extend LLM behavior with custom logic |
Memory | Maintain conversation history and context |
Prompt Template | Structure system/user messages dynamically |
Streaming | Token-by-token delivery via FastAPI's StreamingResponse |
ToolController | Manages initialization and execution flow |
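To show how these concepts fit together, here is a small sketch combining a prompt template, conversation memory, and streaming in plain LangChain; `InMemoryChatMessageHistory` stands in for whatever memory store the guide uses, and all names are illustrative:

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.chat_history import InMemoryChatMessageHistory

# Prompt template: system message, injected history, then the new user turn.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder("history"),
    ("human", "{input}"),
])

history = InMemoryChatMessageHistory()  # stand-in for the guide's memory store
llm = ChatOpenAI(model="gpt-4o-mini", streaming=True)  # model name is an assumption
chain = prompt | llm

async def chat_turn(user_input: str):
    """Stream one reply while keeping the conversation in memory."""
    reply = ""
    async for chunk in chain.astream({"history": history.messages, "input": user_input}):
        if chunk.content:
            reply += chunk.content
            yield chunk.content
    # Persist the turn so the next call sees the full conversation.
    history.add_user_message(user_input)
    history.add_ai_message(reply)
```

Because `history.messages` is passed on every call, the model sees the whole conversation each turn; swapping in a windowed or summarizing memory only changes what that list contains.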