System Architecture
- LLM (ChatOpenAI)
- Tools
- Prompt Template
- Memory
FastAPI Endpoint
Controller Flow
- initialization_service_code(code): Sets the LLM provider (e.g., OPEN_AI)
- _select_manager(chat_input): Selects the appropriate tool service
- selected_tool.initialize_llm(): Initializes the LangChain LLM
- selected_tool.initialize_repository(): Loads the session and chat history
- selected_tool.prompt_attach(): Attaches the custom prompt template
- selected_tool.create_conversation(): Logs the query into memory
- selected_tool.tool_calls_run(): Executes the pipeline and streams results
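The flow above can be sketched in plain Python with no LangChain dependency. The class and method names mirror the flow; the bodies are illustrative stubs, and the run() driver is a hypothetical helper added here only to tie the steps together in order.

```python
# Minimal sketch of the ToolController flow. Method names mirror the flow
# above; bodies are stand-ins, not the real implementation.

class OpenAIToolService:
    """Stand-in for OpenAIToolService.py: holds LLM, memory, prompt state."""

    def __init__(self):
        self.llm = None
        self.history = []
        self.prompt = None

    def initialize_llm(self):
        # Real code would construct a ChatOpenAI instance here.
        self.llm = "ChatOpenAI(...)"

    def initialize_repository(self):
        # Real code would load the session and prior messages from storage.
        self.history = []

    def prompt_attach(self):
        # Real code would attach the custom prompt template.
        self.prompt = "You are a helpful assistant."

    def create_conversation(self, query):
        # Log the incoming query into memory.
        self.history.append(query)

    def tool_calls_run(self):
        # Real code would run the LangChain pipeline and stream tokens.
        yield from ["streamed ", "response"]


class ToolController:
    def initialization_service_code(self, code):
        # Record the LLM provider (e.g., OPEN_AI) from the request.
        self.provider = code

    def _select_manager(self, chat_input):
        # Pick the service matching the provider; only one stub here.
        return OpenAIToolService()

    def run(self, code, chat_input):
        # Hypothetical driver: calls each step in the documented order.
        self.initialization_service_code(code)
        service = self._select_manager(chat_input)
        service.initialize_llm()
        service.initialize_repository()
        service.prompt_attach()
        service.create_conversation(chat_input)
        return "".join(service.tool_calls_run())


print(ToolController().run("OPEN_AI", "hello"))  # prints "streamed response"
```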
LangChain Components
- LLM Initialization
- Memory Configuration
- Tool Registration
- Prompt Template
- Streaming Implementation
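The streaming component is typically a generator that the FastAPI endpoint wraps in a StreamingResponse. A dependency-free sketch, with a fake token source standing in for LangChain's streaming LLM output:

```python
# Sketch of token streaming: the LLM feeds tokens into a generator, and the
# endpoint wraps that generator in a StreamingResponse. fake_llm_stream is a
# stand-in for a streaming ChatOpenAI call.

from typing import Iterator

def fake_llm_stream(prompt: str) -> Iterator[str]:
    # Stand-in for ChatOpenAI(streaming=True); yields tokens one at a time.
    for token in ["Hello", ", ", "world", "!"]:
        yield token

def stream_chat(prompt: str) -> Iterator[str]:
    # In the real endpoint this generator would be passed to
    # fastapi.responses.StreamingResponse(stream_chat(prompt),
    #                                     media_type="text/plain")
    for token in fake_llm_stream(prompt):
        yield token

print("".join(stream_chat("hi")))  # prints "Hello, world!"
```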
File Structure
| File | Responsibility |
|---|---|
| tool_chat.py | FastAPI endpoint implementation |
| ToolController.py | Flow orchestration based on model/tool |
| OpenAIToolService.py | LangChain LLM setup and execution |
| simple_tools.py | Custom tool functions |
| prompt_template.py | Prompt customization |
LangChain Concepts
| Component | Purpose |
|---|---|
| ChatOpenAI | LangChain wrapper for OpenAI chat models |
| Tools | Extend LLM behavior with custom logic |
| Memory | Maintain conversation history and context |
| Prompt Template | Structure system/user messages dynamically |
| Streaming | Real-time response delivery |
| ToolController | Manage initialization and execution flow |
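The prompt template and memory concepts above can be shown with plain string formatting; LangChain's ChatPromptTemplate does this for real, and the template text below is illustrative, not the project's actual prompt:

```python
# Sketch of dynamic system/user message structuring plus memory. The
# SYSTEM_TEMPLATE text and build_messages helper are hypothetical; LangChain's
# ChatPromptTemplate plays this role in the real service.

SYSTEM_TEMPLATE = "You are a tool-using assistant. Available tools: {tools}."

def build_messages(tools, user_input, history):
    # System message is rendered dynamically from the registered tools.
    messages = [{"role": "system",
                 "content": SYSTEM_TEMPLATE.format(tools=", ".join(tools))}]
    messages.extend(history)  # memory: prior turns keep conversational context
    messages.append({"role": "user", "content": user_input})
    return messages

msgs = build_messages(["search", "calculator"], "What is 2+2?", [])
print(msgs[0]["content"])
```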