[PR #52] [MERGED] Add Pydantic validation for LiteLLM configs #56

Closed
opened 2026-03-02 04:08:04 +03:00 by kerem · 0 comments

📋 Pull Request Information

Original PR: https://github.com/gadievron/raptor/pull/52
Author: @gadievron
Created: 12/24/2025
Status: Merged
Merged: 12/25/2025
Merged by: @gadievron

Base: main ← Head: pydantic-litellm-validation


📝 Commits (10+)

  • c3c3320 Add auto thinking model selection and update to latest versions
  • d8c57eb Fix Cursor bot issues on PR #50
  • 2b34b28 Fix inconsistent model name format in fallback
  • 8951ffc Fix API key extraction and explicit null handling
  • 097326f Fix non-list model_list value crashes iteration loop
  • 925805f Fix null api_key handling and cost calculation
  • 26a91c7 Fix invalid provider when matching by model name alias
  • f1a2e57 Fix null model_name and underlying_model causing TypeError
  • bbf4cc6 Fix empty YAML config causing AttributeError
  • 5ceb9c6 Add Pydantic validation for LiteLLM configs

📊 Changes

2 files changed (+456 additions, -18 deletions)


📝 packages/llm_analysis/llm/config.py (+291 -18)
packages/llm_analysis/llm/yaml_schema.py (+165 -0)

📄 Description

Add Pydantic Validation for LiteLLM Configs

What

Validates LiteLLM YAML configuration files at load time using Pydantic schemas.

Why

Provides immediate, clear error messages for invalid configs instead of runtime failures.

Changes

  • yaml_schema.py - Pydantic validation schemas (169 lines)
  • config.py - Validation integration
  • litellm-model-configuration-guide.md - Documentation (331 lines)

Validation Rules

  • Model format: provider/name (e.g., openai/gpt-4o)
  • Temperature: 0.0-2.0
  • Max tokens: positive integers
  • No duplicate model names
  • Required fields enforced
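The rules above can be sketched with Pydantic v2 schemas. This is an illustrative sketch only: class names like `ModelEntry` and `LiteLLMConfig`, and the exact regex, are assumptions, not the actual contents of `yaml_schema.py`.

```python
import re
from pydantic import BaseModel, Field, field_validator, model_validator

class LiteLLMParams(BaseModel):
    model: str  # required: "provider/name", e.g. "openai/gpt-4o"
    temperature: float = Field(default=1.0, ge=0.0, le=2.0)  # 0.0-2.0
    max_tokens: int = Field(default=4096, gt=0)  # positive integers only

    @field_validator("model")
    @classmethod
    def check_model_format(cls, v: str) -> str:
        # enforce the provider/name convention
        if not re.fullmatch(r"[\w.-]+/[\w.:-]+", v):
            raise ValueError(f"model must be 'provider/name', got {v!r}")
        return v

class ModelEntry(BaseModel):
    model_name: str  # required field enforced by Pydantic
    litellm_params: LiteLLMParams

class LiteLLMConfig(BaseModel):
    model_list: list[ModelEntry]

    @model_validator(mode="after")
    def check_no_duplicates(self):
        names = [m.model_name for m in self.model_list]
        dupes = {n for n in names if names.count(n) > 1}
        if dupes:
            raise ValueError(f"duplicate model names: {sorted(dupes)}")
        return self
```

A config that violates any rule fails at load time with a `ValidationError` naming the offending field, which is the "immediate, clear error messages" behavior described above.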

Backward Compatible

No breaking changes. Invalid configs return an empty list and fall back to environment variables.
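The fail-soft behavior might look roughly like the loader below. The function name `load_litellm_models` and the `_Config` schema are hypothetical stand-ins for the real logic in `config.py` and `yaml_schema.py`.

```python
import yaml
from pydantic import BaseModel, ValidationError

class _Config(BaseModel):  # stand-in for the real schema in yaml_schema.py
    model_list: list[dict] = []

def load_litellm_models(path: str) -> list:
    """Return validated model entries, or [] so callers fall back to env vars."""
    try:
        with open(path) as f:
            raw = yaml.safe_load(f) or {}  # empty YAML yields None; treat as {}
        config = _Config.model_validate(raw)
        return config.model_list
    except (OSError, yaml.YAMLError, ValidationError) as exc:
        # any failure (missing file, bad YAML, schema violation) degrades
        # gracefully instead of crashing at runtime
        print(f"Invalid LiteLLM config ({exc.__class__.__name__}); "
              "falling back to environment variables")
        return []
```

Catching `OSError`, `yaml.YAMLError`, and `ValidationError` in one place is what keeps the change backward compatible: existing callers see the same empty-list signal they saw before.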


Note

Introduces schema-validated LiteLLM config loading and smarter model selection with updated defaults.

  • New yaml_schema.py (Pydantic) validates model_list, litellm_params, and model_info (format/ranges/duplicates) and is used in config.py when reading YAML from standard paths or LITELLM_CONFIG_PATH
  • Adds _get_litellm_models and _get_best_thinking_model to discover and score models (reasoning-capable prioritized), including env-var API key resolution, and cost tiering (COST_OPUS_PER_1K, COST_DEFAULT_PER_1K)
  • Updates default primary model selection: try auto-selected thinking model first; else fall back to API keys (Anthropic/OpenAI/Gemini) using latest aliases and larger max_tokens; else Ollama; final Claude fallback
  • Expands fallback model list: includes Opus/Sonnet, GPT-5.2 (+ thinking), Gemini 3 (pro + deep-think), and first 3 local Ollama models; uses LiteLLM aliases and tier-aware behavior

Written by Cursor Bugbot for commit 5ceb9c6a33. This will update automatically on new commits.


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.
