[PR #21] [MERGED] Feat: Add external Ollama server configuration with intelligent retry delays #32

Closed
opened 2026-03-02 04:07:57 +03:00 by kerem · 0 comments

📋 Pull Request Information

Original PR: https://github.com/gadievron/raptor/pull/21
Author: @sapran
Created: 12/5/2025
Status: Merged
Merged: 12/7/2025
Merged by: @danielcuthbert

Base: main ← Head: feature/external-ollama-server


📝 Commits (1)

  • d50e70d Add external Ollama server configuration with intelligent retry delays

📊 Changes

8 files changed (+126 additions, -17 deletions)


📝 .gitignore (+3 -0)
📝 DEPENDENCIES.md (+9 -0)
📝 README.md (+28 -0)
📝 core/config.py (+3 -0)
📝 packages/llm_analysis/llm/client.py (+6 -3)
📝 packages/llm_analysis/llm/config.py (+50 -6)
📝 packages/llm_analysis/llm/providers.py (+22 -5)
📝 raptor_agentic.py (+5 -3)

📄 Description

This commit implements two major features for Ollama LLM integration:

  1. External Ollama Server Support (OLLAMA_HOST)

    • Add OLLAMA_HOST environment variable for configurable server URL
    • Replace all hardcoded localhost:11434 references throughout architecture
    • Implement URL validation requiring http:// or https:// protocol
    • Enhance error messages to differentiate local vs remote server issues
    • Add comprehensive logging for connection attempts and failures
    • Default: http://localhost:11434 (backwards compatible)
  2. Intelligent Retry Delays

    • Implement server location detection (local vs remote)
    • Remote servers: 5s base delay (vs 2s for local)
    • Apply exponential backoff on retry delays
    • Reduce JSON parsing errors from remote Ollama servers
    • Add get_retry_delay() method to LLMConfig for dynamic delay selection
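The retry-delay logic above can be sketched as follows. A minimal sketch: `get_retry_delay()` is named in the PR, but the helper `is_local_server()`, the hostname check, and the exact backoff formula are assumptions based on the description (2s local base, 5s remote base, exponential backoff per attempt).

```python
from urllib.parse import urlparse

# Hostnames treated as a "local" Ollama server (assumed set).
LOCAL_HOSTS = {"localhost", "127.0.0.1", "::1"}


def is_local_server(base_url: str) -> bool:
    # Server location detection: loopback hostnames are local,
    # anything else counts as remote.
    return urlparse(base_url).hostname in LOCAL_HOSTS


def get_retry_delay(base_url: str, attempt: int,
                    local_base: float = 2.0,
                    remote_base: float = 5.0) -> float:
    # Pick the base delay by server location (2s local, 5s remote per
    # the PR description), then apply exponential backoff per attempt.
    base = local_base if is_local_server(base_url) else remote_base
    return base * (2 ** attempt)
```

With these assumptions, the first retry against a remote server waits 5s, the second 10s, and so on, while a local server starts at 2s.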

Modified files:

  • core/config.py: Add OLLAMA_HOST configuration constant
  • packages/llm_analysis/llm/config.py: Add URL validation, retry_delay_remote field, get_retry_delay() method
  • packages/llm_analysis/llm/providers.py: Enhanced error handling and logging for local/remote servers
  • packages/llm_analysis/llm/client.py: Dynamic retry delays with exponential backoff
  • raptor_agentic.py: Use OLLAMA_HOST for Ollama availability checks
  • README.md: Document OLLAMA_HOST environment variable and performance tuning
  • DEPENDENCIES.md: Add Ollama documentation with remote server configuration
  • .gitignore: Add codeql_dbs/ to ignore list
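The `OLLAMA_HOST` handling described above (env var lookup, protocol validation, backwards-compatible default) might look roughly like this. The function name `resolve_ollama_host` and the error wording are hypothetical; only the env var name, the `http://`/`https://` requirement, and the default come from the PR description.

```python
import os


def resolve_ollama_host(default: str = "http://localhost:11434") -> str:
    # Read OLLAMA_HOST, falling back to the backwards-compatible
    # default of http://localhost:11434.
    url = os.environ.get("OLLAMA_HOST", default).rstrip("/")
    # URL validation: require an explicit http:// or https:// protocol.
    if not url.startswith(("http://", "https://")):
        raise ValueError(
            f"OLLAMA_HOST must start with http:// or https://, got: {url!r}"
        )
    return url
```

Usage is just `OLLAMA_HOST=https://gpu.example.com:11434 python raptor_agentic.py`; unset, it behaves exactly as before.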

All changes maintain backwards compatibility with existing configurations. Tested with both local and remote Ollama servers successfully.

🤖 Generated with Claude Code


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.
