[GH-ISSUE #143] [FEATURE]: Any way to use a local LLM instead of Anthropic or OpenAI? #48

Closed
opened 2026-02-27 07:20:09 +03:00 by kerem · 1 comment
Owner

Originally created by @eitansha on GitHub (Feb 17, 2026).
Original GitHub issue: https://github.com/KeygraphHQ/shannon/issues/143

I’m running a local LLM on solid hardware.
Is there any way to make Shannon work with it instead of using an OpenAI or Anthropic API key?

Describe the solution you'd like

Connect Shannon to a locally hosted LLM

Describe alternatives you've considered

No response

Additional context

No response

kerem closed this issue 2026-02-27 07:20:09 +03:00
Author
Owner

@FlashLim commented on GitHub (Feb 23, 2026):

I’m using LM Studio to host the AI model locally. I configured it with the OpenAI API key format and updated the router-config to point to my local LLM endpoint.
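For anyone trying the same setup, here is a minimal sketch of what that request shape looks like. LM Studio exposes an OpenAI-compatible API (by default at `http://localhost:1234/v1`); the model name and base URL below are assumptions for your local setup, not anything Shannon-specific.

```python
# Hypothetical sketch: building an OpenAI-format chat request aimed at a local
# LM Studio server. The base_url and model name are placeholders -- substitute
# whatever host/port and model you actually have loaded.

def build_chat_request(prompt: str,
                       base_url: str = "http://localhost:1234/v1",
                       model: str = "local-model") -> tuple[str, dict]:
    """Return the URL and JSON body for an OpenAI-style chat completion call."""
    url = f"{base_url}/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    return url, payload

# To actually send it (requires a running LM Studio server):
#   import requests
#   url, payload = build_chat_request("Hello")
#   print(requests.post(url, json=payload, timeout=60).json())
```

The point of keeping the payload in OpenAI's chat-completions shape is that LM Studio accepts it directly; the API key you supply can be any dummy value, since the local server does not validate it.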

I was able to connect everything successfully, and the prompts are being sent to my server. However, under the PreReconAgent task, the same prompt keeps repeating indefinitely.

I haven’t investigated the issue at the code level yet, but I suspect Anthropic may be required because the app seems tightly coupled to its specific response format and structure. Unless you can replicate Anthropic’s response schema exactly and reroute the network requests to your local LLM accordingly, attempting to use a local model may not be worthwhile.
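If the coupling really is to Anthropic's response schema, one workaround worth exploring is a small translation shim in front of the local model. The sketch below reshapes an OpenAI chat-completion response into Anthropic's Messages format; the field names follow the two public API schemas, but this is an illustration of the idea, not Shannon's actual internals, and it ignores streaming and tool-use content blocks.

```python
# Hypothetical shim: convert an OpenAI /chat/completions response dict into an
# Anthropic Messages-style response dict. Covers plain text responses only.

# OpenAI finish_reason -> Anthropic stop_reason
STOP_REASON_MAP = {
    "stop": "end_turn",
    "length": "max_tokens",
    "tool_calls": "tool_use",
}

def openai_to_anthropic(resp: dict) -> dict:
    """Reshape an OpenAI chat-completion response into Anthropic's schema."""
    choice = resp["choices"][0]
    return {
        "id": resp.get("id", "msg_local"),
        "type": "message",
        "role": "assistant",
        "model": resp.get("model", "unknown"),
        # Anthropic returns content as a list of typed blocks, not a string.
        "content": [{"type": "text", "text": choice["message"]["content"]}],
        "stop_reason": STOP_REASON_MAP.get(choice.get("finish_reason"), "end_turn"),
        "usage": {
            "input_tokens": resp.get("usage", {}).get("prompt_tokens", 0),
            "output_tokens": resp.get("usage", {}).get("completion_tokens", 0),
        },
    }
```

A proxy applying this mapping to every response might be enough to stop the repeated-prompt loop, assuming the loop is caused by Shannon failing to parse the local model's replies.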
