| Method | Purpose |
|---|---|
| `add_prompt_tokens(count: int)` | Tracks input tokens from user prompts |
| `add_completion_tokens(count: int)` | Tracks output tokens from AI responses |
| `calculate_total_cost(model_name: str)` | Calculates total credits using model rates |
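A minimal sketch of the `CostCalculator` interface described above. The method names match the table; the internal structure and the per-1K-token credit rates are illustrative assumptions, not the project's actual values.

```python
# Hypothetical sketch of CostCalculator; rates below are ASSUMED
# illustrative values, not the project's real credit rates.
class CostCalculator:
    def __init__(self) -> None:
        self.prompt_tokens = 0
        self.completion_tokens = 0

    def add_prompt_tokens(self, count: int) -> None:
        # Accumulate input tokens from user prompts
        self.prompt_tokens += count

    def add_completion_tokens(self, count: int) -> None:
        # Accumulate output tokens from AI responses
        self.completion_tokens += count

    def calculate_total_cost(self, model_name: str) -> float:
        # Look up per-1K-token credit rates for the model (illustrative)
        rates = {"gpt-4o": {"prompt": 0.005, "completion": 0.015}}
        rate = rates.get(model_name, {"prompt": 0.0, "completion": 0.0})
        return (
            (self.prompt_tokens / 1000) * rate["prompt"]
            + (self.completion_tokens / 1000) * rate["completion"]
        )
```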
The `on_llm_end` callback fires after each model response: it calls `add_prompt_tokens()` and `add_completion_tokens()` to record usage, then `calculate_total_cost(model_name)` (i.e. `CostCalculator.calculate_total_cost()`) computes the credit usage. `ThreadRepository`, set up via `thread_repo.initialization()`, handles database storage of the result.

| File/Location | Purpose |
|---|---|
| `cost_calc_handler.py` | Core token counting and cost calculation logic |
| `thread_repository.py` | Database persistence for usage tracking |
| `callbacks/openai/cost_calc_handler.py` | OpenAI-specific cost calculations |
| `callbacks/gemini/cost_calc_handler.py` | Google Gemini cost calculations |
| `callbacks/anthropic/cost_calc_handler.py` | Anthropic Claude cost calculations |
| `callbacks/huggingface/cost_calc_handler.py` | Hugging Face model cost calculations |
| `model_cost_mapping.py` | Credit rates per model and provider |
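One plausible shape for `model_cost_mapping.py`, keyed by provider and model as the table suggests. The model names, rate values, and the `get_rates` helper are all hypothetical illustrations.

```python
# Hypothetical shape of model_cost_mapping.py; providers mirror the
# callbacks/ directories above, but names and rates are ASSUMED.
MODEL_COST_MAPPING = {
    "openai": {
        "gpt-4o": {"prompt": 0.005, "completion": 0.015},
    },
    "anthropic": {
        "claude-3-5-sonnet": {"prompt": 0.003, "completion": 0.015},
    },
    "gemini": {
        "gemini-1.5-pro": {"prompt": 0.00125, "completion": 0.005},
    },
}

def get_rates(provider: str, model_name: str) -> dict:
    # Fall back to zero-cost rates for unknown providers/models
    return MODEL_COST_MAPPING.get(provider, {}).get(
        model_name, {"prompt": 0.0, "completion": 0.0}
    )
```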
Token usage is tracked automatically via the `on_llm_end` callback, with `CostCalculator` computing credit costs and `ThreadRepository` persisting them.