mirror of
https://github.com/gadievron/raptor.git
synced 2026-04-25 05:56:00 +03:00
[PR #21] [MERGED] Feat: Add external Ollama server configuration with intelligent retry delays #32
📋 Pull Request Information
Original PR: https://github.com/gadievron/raptor/pull/21
Author: @sapran
Created: 12/5/2025
Status: ✅ Merged
Merged: 12/7/2025
Merged by: @danielcuthbert
Base: main ← Head: feature/external-ollama-server

📝 Commits (1)
d50e70d Add external Ollama server configuration with intelligent retry delays

📊 Changes
8 files changed (+126 additions, -17 deletions)

.gitignore (+3 -0)
DEPENDENCIES.md (+9 -0)
README.md (+28 -0)
core/config.py (+3 -0)
packages/llm_analysis/llm/client.py (+6 -3)
packages/llm_analysis/llm/config.py (+50 -6)
packages/llm_analysis/llm/providers.py (+22 -5)
raptor_agentic.py (+5 -3)

📄 Description
This commit implements two major features for Ollama LLM integration:
External Ollama Server Support (OLLAMA_HOST)
Intelligent Retry Delays
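The external-server feature above can be sketched as follows. This is a minimal illustration assuming the common Ollama convention of an OLLAMA_HOST environment variable overriding a localhost default on port 11434; the function name `resolve_ollama_host` and the normalization details are hypothetical, not taken from the PR's `packages/llm_analysis/llm/config.py`.

```python
import os

# Ollama's standard local endpoint (default port 11434).
DEFAULT_OLLAMA_HOST = "http://localhost:11434"

def resolve_ollama_host() -> str:
    """Return the Ollama base URL, preferring the OLLAMA_HOST env var.

    Hypothetical helper: illustrates the override pattern, not the
    repo's actual implementation.
    """
    host = os.environ.get("OLLAMA_HOST", "").strip()
    if not host:
        return DEFAULT_OLLAMA_HOST
    # Accept bare host:port values by prepending a scheme.
    if not host.startswith(("http://", "https://")):
        host = f"http://{host}"
    # Drop a trailing slash so URL joining stays predictable.
    return host.rstrip("/")
```

With this pattern, `OLLAMA_HOST=gpu-box:11434` points the client at a remote server, while an unset variable keeps the existing local behavior, which matches the PR's backwards-compatibility claim.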
Modified files: see the changed-files list above.
All changes maintain backwards compatibility with existing configurations. Tested with both local and remote Ollama servers successfully.
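"Intelligent retry delays" are commonly implemented as exponential backoff with jitter; the sketch below illustrates that pattern under the assumption that the PR does something similar. All names (`backoff_delays`, `call_with_retry`) and parameters are illustrative, not taken from the repository.

```python
import random
import time

def backoff_delays(retries: int = 5, base: float = 1.0, cap: float = 30.0):
    """Yield one delay (in seconds) per retry attempt.

    Exponential backoff capped at `cap`, with full jitter so that
    concurrent clients don't retry in lockstep.
    """
    for attempt in range(retries):
        delay = min(cap, base * (2 ** attempt))
        yield random.uniform(0, delay)

def call_with_retry(fn, retries: int = 5, base: float = 1.0, cap: float = 30.0):
    """Call fn(), sleeping a backoff delay after each failed attempt."""
    last_exc = None
    for delay in backoff_delays(retries, base, cap):
        try:
            return fn()
        except ConnectionError as exc:  # retry only transient failures
            last_exc = exc
            time.sleep(delay)
    raise last_exc
```

A transient outage of a remote Ollama server then costs a few short sleeps rather than an immediate failure, while persistent errors still surface after the retry budget is exhausted.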
🤖 Generated with Claude Code
🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.