mirror of
https://github.com/KeygraphHQ/shannon.git
synced 2026-04-25 09:35:55 +03:00
[GH-ISSUE #143] [FEATURE]: Any way to use a local LLM instead of Anthropic or OpenAI? #48
Originally created by @eitansha on GitHub (Feb 17, 2026).
Original GitHub issue: https://github.com/KeygraphHQ/shannon/issues/143
Is your feature request related to a problem? Please describe.
I’m running a local LLM on solid hardware.
Is there any way to make Shannon work with it instead of using an OpenAI or Anthropic API key?
Describe the solution you'd like
Connect Shannon to a locally hosted LLM.
Describe alternatives you've considered
No response
Additional context
No response
@FlashLim commented on GitHub (Feb 23, 2026):
I’m using LM Studio to host the model locally. I configured it to use the OpenAI API key format and updated the router-config to point at my local LLM endpoint.
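For context, LM Studio exposes an OpenAI-compatible endpoint (by default at `http://localhost:1234/v1`). A minimal sketch of the request shape such an endpoint accepts — the port, model name, and placeholder API key below are assumptions about the commenter's setup, not Shannon's actual configuration:

```python
import json
from urllib.request import Request

# LM Studio's default local endpoint (an assumption -- adjust to your setup).
LOCAL_ENDPOINT = "http://localhost:1234/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "local-model") -> Request:
    """Build an OpenAI-style chat-completions request for a local server."""
    payload = {
        "model": model,  # LM Studio routes this to whichever model is loaded
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    return Request(
        LOCAL_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # LM Studio ignores the key, but clients often require a non-empty one:
            "Authorization": "Bearer lm-studio",
        },
        method="POST",
    )

req = build_chat_request("Summarize the scan results.")
```

Sending `req` with `urllib.request.urlopen` (or pointing any OpenAI SDK's `base_url` at the same endpoint) is what makes the "OpenAI API key format" trick work without a real key.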
I was able to connect everything successfully, and the prompts are being sent to my server. However, under the PreReconAgent task, the same prompt keeps repeating indefinitely.
I haven’t investigated the issue at the code level yet, but I suspect Anthropic may be required: the app seems tightly coupled to its specific response format and structure. Unless you can replicate Anthropic’s response schema exactly and reroute the network requests to your local LLM accordingly, using a local model may not be worthwhile.
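To illustrate the coupling suspected above: the public Anthropic Messages API and the OpenAI chat-completions API return differently shaped JSON, so a parser written against one shape fails on the other. Whether Shannon parses exactly these fields is an assumption; a normalizing shim like the sketch below is one way local-model support could be added:

```python
# The same assistant reply, as shaped by the two public API formats.
# (Illustrative minimal subsets; whether Shannon reads exactly these
# fields is an assumption.)
anthropic_response = {
    "type": "message",
    "role": "assistant",
    "content": [{"type": "text", "text": "Recon complete."}],
    "stop_reason": "end_turn",
}

openai_response = {
    "object": "chat.completion",
    "choices": [
        {
            "message": {"role": "assistant", "content": "Recon complete."},
            "finish_reason": "stop",
        }
    ],
}

def extract_text(response: dict) -> str:
    """Normalize either response shape to plain text."""
    if "content" in response:  # Anthropic Messages shape: list of content blocks
        return "".join(
            block["text"] for block in response["content"] if block["type"] == "text"
        )
    # OpenAI chat-completions shape: choices[0].message.content
    return response["choices"][0]["message"]["content"]

assert extract_text(anthropic_response) == extract_text(openai_response)
```

A shim like this (or an Anthropic-format proxy in front of the local server) would avoid having to replicate the Anthropic schema inside LM Studio itself.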