[GH-ISSUE #2] GUI-Add Local AI Provider problem #3

Closed
opened 2026-02-27 20:20:05 +03:00 by kerem · 3 comments

Originally created by @leventorcan on GitHub (Feb 19, 2026).
Original GitHub issue: https://github.com/ownpilot/OwnPilot/issues/2

Issue 1:
When I tried to add my local Ollama server, I got the error message below.

"API Error Validation failed: name: Invalid input: expected string, received undefined; providerType: Invalid option: expected one of "lmstudio"|"ollama"|"localai"|"vllm"|"custom"; baseUrl: Invalid input: expected string, received undefined"

I tried the curl command below in the terminal, using "local-providers" instead of "providers", and the provider was added. I can even see the model list in the GUI afterwards.


```sh
curl -X POST http://localhost:8080/api/v1/local-providers \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Ollama",
    "providerType": "ollama",
    "baseUrl": "http://127.0.0.1:11434"
  }'
```

Issue 2:
When I try to send a test message in the chat window, I get the error message below, even after I have set my local model.

"Demo Mode Response

I received your message: "test"

I'm currently running in demo mode with a98be91d-fa5d-4425-8977-5f2ee7412f74 (qwen2.5-coder:7b). To get real AI responses, please configure your API key in the Settings page.


This is a simulated response. Configure your API key for actual AI capabilities."

Great work, it has turned out really nice by the way :)
Levent

kerem closed this issue 2026-02-27 20:20:05 +03:00

@Cayrop commented on GitHub (Feb 19, 2026):

I ran into the same issue and found the root cause in the source code.

The problem comes from the demo mode check:

```ts
export async function isDemoMode(): Promise<boolean> {
  const configured = await getConfiguredProviderIds();
  const providers = [
    'openai', 'anthropic', 'zhipu', 'deepseek', 'groq',
    'google', 'xai', 'mistral', 'together', 'fireworks', 'perplexity'
  ];
  return !providers.some(p => configured.has(p));
}
```

Demo mode is only disabled if one of the providers in this whitelist is configured.
Since LM Studio is not included, the app stays in demo mode even when LM Studio is properly configured and working.

### Workarounds

**1. Override the OpenAI provider (recommended)**

LM Studio exposes an OpenAI-compatible API, so you can configure it as an OpenAI provider:

* Provider: `OpenAI`
* Base URL: `http://127.0.0.1:1234/v1`
* API key: any value (if authentication is not enabled in LM Studio, it does not validate the key)

This makes openai appear as configured and disables demo mode.
This approach is cleaner because it uses the intended provider flow.

**2. Set a dummy OpenAI API key**

Another way to bypass demo mode is simply configuring the OpenAI provider with an invalid API key.
Since the demo check only verifies that a supported provider is configured (not that it actually works), this also disables demo mode. However, this is more of a workaround than a proper solution.


If local providers like LM Studio are meant to be supported, it may be better for the demo mode logic to check whether any provider is configured rather than relying on a fixed whitelist.
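The suggested change could be sketched as follows. This is a minimal sketch, not the actual fix: it assumes the set of configured provider ids is available as in the snippet above, and it is written as a pure function over that set for clarity (the real `isDemoMode` is async and fetches the set itself):

```typescript
// Sketch of the suggested logic: the app is in demo mode only when
// *no* provider at all is configured, instead of checking a
// hard-coded whitelist of cloud providers.
function isDemoMode(configured: Set<string>): boolean {
  return configured.size === 0;
}

// With any provider configured, including a local one, demo mode is off.
```

This way a configured local provider such as `lmstudio` or `ollama` disables demo mode without having to be added to a whitelist.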


@ersinkoc commented on GitHub (Feb 19, 2026):

![Screenshot 1](https://github.com/user-attachments/assets/7445bfcb-ff83-454d-973d-53196d4137ae)
![Screenshot 2](https://github.com/user-attachments/assets/94529cb5-3f2a-4c5e-a8b3-dd62644d3abd)

I hope these screenshots will help you better understand.


@leventorcan commented on GitHub (Feb 19, 2026):

I am running Ollama (no API key), not LM Studio :)

@Cayrop's workaround 2 (set a dummy OpenAI API key) worked.

Issue 1 still needs to be fixed though...
