This document gives a step-by-step breakdown of how prompts flow through the system:
  • Traces the path from the frontend trigger to the backend Python handlers
  • Covers hooks, handlers, payload formats, and utility functions
  • Intended for developers working on prompt submission, socket handling, and agent-based AI routing

Core Functions

handleAIApiType(fileMetadata)

Determines which backend API handles the request based on file metadata. Returns: a string key (doc, custom_gpt, or canvas) used for backend routing
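
For orientation, a minimal sketch of what this routing might look like (the metadata field names checked below are assumptions for illustration; only the returned keys come from this document):

// Hypothetical sketch: maps file metadata to a backend routing key.
// The metadata field names (isCustomGpt, isCanvas) are assumptions.
type ApiType = 'doc' | 'custom_gpt' | 'canvas';

function handleAIApiType(fileMetadata: { isCustomGpt?: boolean; isCanvas?: boolean }): ApiType {
  if (fileMetadata.isCustomGpt) return 'custom_gpt';
  if (fileMetadata.isCanvas) return 'canvas';
  return 'doc';                              // default: document-based routing
}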

handleSubmitPrompt(refine = false)

Triggers the complete prompt submission lifecycle. Behavior:
  • First message: creates a new chat and adds the user as a member
  • Always disables the chat input until a response is received
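
A simplified outline of that lifecycle, assuming hypothetical helpers (setChatInputDisabled, createChat, addChatMember) that this document does not name:

// Hypothetical outline; every helper below is an assumption for illustration.
declare function setChatInputDisabled(disabled: boolean): void;
declare function createChat(): Promise<{ id: string }>;
declare function addChatMember(chatId: string, userId: string): Promise<void>;
declare function enterNewPrompt(): Promise<void>;

async function handleSubmitPrompt(refine = false, isFirstMessage = false, userId = '') {
  setChatInputDisabled(true);              // always disable input until a response arrives
  if (isFirstMessage) {
    const chat = await createChat();       // first message: create the chat...
    await addChatMember(chat.id, userId);  // ...and add the user as a member
  }
  await enterNewPrompt();                  // persist the message before backend submission
}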

enterNewPrompt()

Creates the message entry in MongoDB before backend submission. Usage: must be called before the payload is sent to the backend
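
A minimal sketch of this step, assuming a hypothetical /api/messages endpoint (the route and body fields are not specified in this document):

// Hypothetical sketch; the endpoint and body fields are assumptions.
async function enterNewPrompt(chatId: string, query: string): Promise<string> {
  const res = await fetch('/api/messages', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ chatId, query }),  // persisted to MongoDB by the backend
  });
  const data = await res.json();
  return data.messageId;                      // later included in the LLM payload
}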

Payload Structure

Standard payload format across all LLM APIs:
{
  query: string,
  messageId: string,
  modelId: string,
  chatId: string,
  model_name: string,
  msgCredit: number
}
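
The same structure expressed as a TypeScript interface (field names mirror the block above; the per-field notes are inferred and should be checked against the actual code):

// Standard LLM payload; comments are inferred, not confirmed by this document.
interface LLMPayload {
  query: string;        // the user's prompt text
  messageId: string;    // likely the id created by enterNewPrompt()
  modelId: string;      // selected model record id
  chatId: string;       // current chat id
  model_name: string;   // provider-facing model name
  msgCredit: number;    // credits charged for this message
}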

Backend API Functions

General Chat Functions

  • getPerplexityResponse(payload) - General LLM processing
  • getAINormatChatResponse(payload) - Image uploads or no prompt selected
  • getAIDocResponse(payload) - Document-based chats
  • setChatTitleByAI(params) - Auto-generate chat titles

Agent Functions

  • getAICustomGPTResponse(payload) - Custom GPT agents (uses agent.model_name; see the sketch after this list)
  • getAIProAgentChatResponse(payload) - QA and Web Proposal agents
  • getSalesCallResponse(payload) - Sales call analysis
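
As an illustration, the custom GPT path might build its payload like this (only the use of agent.model_name is confirmed by this document; the other agent fields are assumptions):

// Hypothetical sketch; only agent.model_name is confirmed by this document.
declare function getAICustomGPTResponse(payload: Record<string, unknown>): Promise<void>;

async function submitToCustomGpt(
  agent: { _id: string; model_name: string },
  base: { query: string; messageId: string; chatId: string }
) {
  await getAICustomGPTResponse({
    ...base,
    custom_gpt_id: agent._id,       // agent identifier (see Common Parameters)
    model_name: agent.model_name,   // custom GPT agents use the agent's own model_name
  });
}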

Specialized Functions

  • chatCanvasAiResponse(payload) - Selection-based canvas responses
  • getSeoKeyWords(payload) - SEO keyword generation
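
Tying the pieces together, a sketch of how the key returned by handleAIApiType() might dispatch to the backend functions above (the mapping is inferred from the descriptions, not confirmed by this document):

// Hypothetical dispatch; the mapping is inferred, not confirmed.
declare function getAIDocResponse(payload: unknown): Promise<void>;
declare function getAICustomGPTResponse(payload: unknown): Promise<void>;
declare function chatCanvasAiResponse(payload: unknown): Promise<void>;
declare function getAINormatChatResponse(payload: unknown): Promise<void>;

async function dispatchPrompt(apiType: string, payload: unknown) {
  if (apiType === 'doc') return getAIDocResponse(payload);
  if (apiType === 'custom_gpt') return getAICustomGPTResponse(payload);
  if (apiType === 'canvas') return chatCanvasAiResponse(payload);
  return getAINormatChatResponse(payload);   // fallback: normal chat response
}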

React Hooks

Core Hooks

  • useConversation() - Central conversation management, LLM streaming, prompt submission (usage sketch after this list)
  • useThunderBoltPopup() - File uploads, agent selection, prompt selection
  • useMediaUpload() - Media uploads for canvas responses
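
A hypothetical usage sketch for the central hook (the destructured names below are assumptions; the hook's real return shape is not documented here):

// Hypothetical usage; the destructured names are assumptions for illustration.
import React from 'react';

declare function useConversation(): {
  isStreaming: boolean;
  handleSubmitPrompt: (refine?: boolean) => Promise<void>;
};

function SendButton() {
  const { isStreaming, handleSubmitPrompt } = useConversation();
  return (
    <button disabled={isStreaming} onClick={() => handleSubmitPrompt()}>
      Send
    </button>
  );
}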

Data Hooks

  • useCustomGpt() - Fetch available custom GPT agents
  • usePrompt() - Retrieve saved prompts
  • useBrainDocs() - Load uploaded documents

Utility Hooks

  • useIntersectionObserver() - Infinite scroll for lists
  • useDebounce() - Debounce user input (example after this list)
  • useServerAction() - Server-side actions in Next.js
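
For example, useDebounce() might be used to delay lookups while the user types (the (value, delayMs) signature is an assumption, following a common convention):

// Hypothetical usage; the (value, delayMs) signature is assumed.
declare function useDebounce<T>(value: T, delayMs: number): T;

function usePromptSearch(query: string) {
  const debouncedQuery = useDebounce(query, 300);  // update 300 ms after typing stops
  // debouncedQuery can then drive usePrompt() / useBrainDocs() lookups
  return debouncedQuery;
}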

Utility Functions

State Management

  • handleModelSelectionUrl() - Sync model state with URL
  • handleProAgentUrlState() - Sync Pro Agent state with URL
  • handleNewChatClick() - Start new chat session

UI Control

  • blockProAgentAction() - Disable input during Pro Agent response

API Helpers

  • commonApi() - Generic client-side API calls (example after this list)
  • serverApi() - Server-specific API wrapper
  • ValidationError - Display Yup validation errors
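
A hypothetical call shape for the generic client-side helper (the option names below are assumptions; check the real signature in the codebase):

// Hypothetical call; the option names (action, data) are assumptions.
declare function commonApi<T>(options: { action: string; data?: unknown }): Promise<T>;

async function generateChatTitle(chatId: string) {
  return commonApi<{ title: string }>({
    action: 'setChatTitleByAI',   // corresponds to the backend title-generation function
    data: { chatId },
  });
}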

UI Components

Lists and Modals

  • CommonList - Combined private/shared document list
  • RenderModalList - Model selection dropdown

Common Parameters

Standard Parameters

  • custom_gpt_id - Agent identifier
  • companyId - Company identifier
  • provider - LLM provider
  • code - Model code

Canvas-specific Parameters

  • currentMessageId - Message identifier
  • startIndex - Selection start
  • endIndex - Selection end
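
A minimal sketch of a canvas payload adding the selection parameters above to the core fields (all values are placeholders; other standard fields omitted for brevity):

// Hypothetical canvas payload; all values are placeholders.
const canvasPayload = {
  query: 'Rewrite the selected sentence in a formal tone',
  messageId: 'msg_new',
  chatId: 'chat_123',
  currentMessageId: 'msg_prev',   // message whose content is being edited
  startIndex: 10,                 // selection start offset
  endIndex: 84,                   // selection end offset
};
// chatCanvasAiResponse(canvasPayload) would then produce the selection-based response.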

Agent-specific Parameters

  • agent_extra_info - Additional agent data

Troubleshooting

Prompt Not Reaching Backend

  1. Verify that handleSubmitPrompt() is triggered
  2. Ensure enterNewPrompt() runs before submission
  3. Check that handleAIApiType() returns the correct type
  4. Confirm the socket connection is active

Custom GPT Issues

  • Verify getAICustomGPTResponse() uses agent.model_name
  • Check that the payload structure matches the schema in Payload Structure above

Pro Agent Input Issues

  • Ensure blockProAgentAction() is called before getAIProAgentChatResponse()
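
A sketch of the required ordering (only the call order comes from this document; the wrapper is hypothetical):

// Hypothetical wrapper; only the call order is documented.
declare function blockProAgentAction(): void;
declare function getAIProAgentChatResponse(payload: Record<string, unknown>): Promise<void>;

async function submitProAgentPrompt(payload: Record<string, unknown>) {
  blockProAgentAction();                     // disable chat input first...
  await getAIProAgentChatResponse(payload);  // ...then request the Pro Agent response
}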