mirror of
https://github.com/AJaySi/ALwrity.git
synced 2026-04-25 08:55:58 +03:00
[PR #405] [MERGED] Enforce fail-fast SIF behavior and low-cost remote fallback #708
📋 Pull Request Information
Original PR: https://github.com/AJaySi/ALwrity/pull/405
Author: @AJaySi
Created: 3/9/2026
Status: ✅ Merged
Merged: 3/11/2026
Merged by: @AJaySi
Base: main ← Head: codex/fix-oserror-when-loading-models

📝 Commits (1)
- 8b0547c Make SIF fail fast and add low-cost remote LLM fallback

📊 Changes
7 files changed (+219 additions, -67 deletions)
📝 backend/services/intelligence/agents/agent_orchestrator.py (+1 -1)
📝 backend/services/intelligence/agents/core_agent_framework.py (+51 -17)
📝 backend/services/intelligence/agents/specialized/base.py (+1 -1)
📝 backend/services/intelligence/sif_agents.py (+55 -15)
📝 backend/services/intelligence/txtai_service.py (+11 -2)
📝 backend/services/llm_providers/huggingface_provider.py (+81 -22)
📝 backend/services/llm_providers/main_text_generation.py (+19 -9)

📄 Description
Summary
This follow-up addresses review feedback around silent failures, model fallback behavior, and cost control for agent-heavy SIF flows.
What changed
Fail fast when local agent/runtime is unavailable
- `BaseALwrityAgent._generate_llm_response` now raises when no local LLM is present (instead of returning `[LLM Unavailable]`).
- `run()` now raises if `txtai_agent` is missing (instead of returning `"Agent not initialized"`).
- `_execute_fallback` no longer returns simulated/mock success strings; it now raises a hard error with explicit context.

Remote fallback path now exists and is explicit
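The fail-fast behavior described above can be sketched as follows. The class and method names come from the PR description; the internals and the `LLMUnavailableError` name are assumptions for illustration, not the actual implementation:

```python
class LLMUnavailableError(RuntimeError):
    """Raised when no local LLM runtime is available (hypothetical name)."""


class BaseALwrityAgent:
    def __init__(self, local_llm=None, txtai_agent=None):
        self.local_llm = local_llm      # local model handle, None if not loaded
        self.txtai_agent = txtai_agent  # txtai agent runtime, None if missing

    def _generate_llm_response(self, prompt):
        # Before: returned the sentinel string "[LLM Unavailable]".
        # After: raise so callers cannot mistake a failure for a response.
        if self.local_llm is None:
            raise LLMUnavailableError(
                "No local LLM runtime available; refusing to return a mock response."
            )
        return self.local_llm.generate(prompt)

    def run(self, task):
        # Before: returned "Agent not initialized"; now a hard error.
        if self.txtai_agent is None:
            raise RuntimeError("txtai_agent is not initialized; cannot run task.")
        return self._generate_llm_response(task)
```

The point of the pattern is that a missing runtime surfaces as an exception at the call site rather than a plausible-looking string flowing downstream into agent output.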
- `_generate_llm_response` now attempts remote fallback via `llm_text_gen`.

Cost-aware remote fallback model selection
- Added `preferred_hf_models` support to `llm_text_gen`, with low-cost candidates:
  - `Qwen/Qwen2.5-1.5B-Instruct`
  - `Qwen/Qwen2.5-0.5B-Instruct`
  - `TinyLlama/TinyLlama-1.1B-Chat-v1.0`
- `llm_text_gen` provider selection now accepts the provider-qualified form (`:groq`) where applicable.

Fail-fast indexing/search behavior in txtai service
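A minimal sketch of the cost-aware selection under stated assumptions: the candidate model names and the `:groq` suffix form come from the PR, while `pick_fallback_model` and `parse_provider_qualified` are hypothetical helper names, not ALwrity's actual API:

```python
# Ordered by preference; all are small, low-cost instruct models (from the PR).
PREFERRED_HF_MODELS = [
    "Qwen/Qwen2.5-1.5B-Instruct",
    "Qwen/Qwen2.5-0.5B-Instruct",
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
]


def pick_fallback_model(available, preferred=PREFERRED_HF_MODELS):
    """Return the first preferred model the remote provider actually serves."""
    for model in preferred:
        if model in available:
            return model
    # Fail fast rather than silently falling back to an expensive default.
    raise RuntimeError(f"No preferred fallback model available; tried {preferred}")


def parse_provider_qualified(name, default_provider="huggingface"):
    """Split a provider-qualified name like 'model-id:groq' into (model, provider)."""
    if ":" in name:
        model, provider = name.rsplit(":", 1)
        return model, provider
    return name, default_provider
```

For example, `parse_provider_qualified("some-model:groq")` yields `("some-model", "groq")`, letting the caller route the fallback request to a cheaper hosted provider.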
- `TxtaiIntelligenceService` now supports `SIF_FAIL_FAST` (defaults to `true`).
- `index_content` and `search` raise `RuntimeError` (instead of silently returning).

Why
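The `SIF_FAIL_FAST` toggle on the txtai service can be sketched like this; the flag name and its `true` default come from the PR, while the service internals are assumptions for illustration:

```python
import os


def _fail_fast_enabled():
    # SIF_FAIL_FAST defaults to true; "0"/"false"/"no" disable it (assumed parsing).
    return os.getenv("SIF_FAIL_FAST", "true").strip().lower() not in ("0", "false", "no")


class TxtaiIntelligenceService:
    """Sketch only: the real service wraps a txtai embeddings index."""

    def __init__(self, embeddings=None):
        self.embeddings = embeddings  # None when the index failed to load

    def index_content(self, docs):
        if self.embeddings is None:
            if _fail_fast_enabled():
                raise RuntimeError("txtai index unavailable; refusing to drop documents silently.")
            return  # legacy behavior: silently skip indexing
        self.embeddings.index(docs)

    def search(self, query, limit=5):
        if self.embeddings is None:
            if _fail_fast_enabled():
                raise RuntimeError("txtai index unavailable; search aborted.")
            return []  # legacy behavior: silently return nothing
        return self.embeddings.search(query, limit)
```

With the flag disabled the old silent behavior is preserved, which is what makes the toggle useful for local troubleshooting.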
Validation
python -m py_compile backend/services/intelligence/agents/core_agent_framework.py backend/services/intelligence/txtai_service.py backend/services/llm_providers/main_text_generation.py

Notes
- Toggle `SIF_FAIL_FAST` if needed for local troubleshooting.

Codex Task
🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.