[GH-ISSUE #1047] AI tagging not working, but summary works well #687

Closed
opened 2026-03-02 11:51:54 +03:00 by kerem · 10 comments
Owner

Originally created by @hz-xiaxz on GitHub (Feb 18, 2025).
Original GitHub issue: https://github.com/karakeep-app/karakeep/issues/1047

Describe the Bug

I tried the sample document on my laptop and automatic tagging does not seem to work. The summary, however, works well. If I have misunderstood the functionality, please let me know.

Steps to Reproduce

Configure Hoarder with Gemini using the settings from this comment (in the `.env` file):
https://github.com/hoarder-app/hoarder/issues/462#issuecomment-2664521800
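For reference, pointing Hoarder at an OpenAI-compatible endpoint comes down to a few `.env` keys. The sketch below is illustrative only: the variable names come from Hoarder's configuration docs, while the base URL and model name are placeholders, not the exact values from the linked comment.

```env
# API key for the OpenAI-compatible endpoint
OPENAI_API_KEY=<your-api-key>
# Point Hoarder's inference client at Gemini's OpenAI-compatible endpoint
OPENAI_BASE_URL=https://generativelanguage.googleapis.com/v1beta/openai/
# Model used by the inference workers for tagging and summarization
INFERENCE_TEXT_MODEL=gemini-2.0-flash
```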

Expected Behaviour

Automatic AI tagging enabled

Screenshots or Additional Context

![Image](https://github.com/user-attachments/assets/41a88215-5d88-4c99-aec3-af724e654c84)

![Image](https://github.com/user-attachments/assets/d27a6b6c-672e-4fd5-bdf7-82565333078d)

Device Details

Chrome on Win11

Exact Hoarder Version

v0.22.0

Have you checked the troubleshooting guide?

  • I have checked the troubleshooting guide and I haven't found a solution to my problem
kerem 2026-03-02 11:51:54 +03:00

@kamtschatka commented on GitHub (Feb 19, 2025):

Please add some logs from Hoarder, so we can see what happens during tagging.


@MohamedBassem commented on GitHub (Feb 23, 2025):

This is usually a problem with models that don't support output schemas. But we can confirm once you share the output from the container.


@hz-xiaxz commented on GitHub (Feb 25, 2025):

> please add some logs from hoarder, so we can see what happens during the tagging

Sorry for the late reply. After quickly skimming the docs, I couldn't figure out how to see the logs. Should I run Hoarder in development mode? I'd be grateful for any hints!
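For reference, the Docker images write the worker logs to the containers' stdout, so standard Docker commands are enough to read them. A sketch only; the service name depends on your `docker-compose.yml`:

```shell
# Run from the directory containing docker-compose.yml
docker compose logs -f        # follow logs from all services
docker compose logs -f web    # only the container running the workers
                              # (service name may differ in your setup)
```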


@JC1738 commented on GitHub (Mar 2, 2025):

I have the same problem with LM Studio: summary works, tagging doesn't:

Hoarder Logs
2025-03-02T01:45:09.580Z error: [inference][70] inference job failed: Error: 400 "'response_format.type' must be 'json_schema'"
Error: 400 "'response_format.type' must be 'json_schema'"
    at APIError.generate (/app/apps/workers/node_modules/.pnpm/openai@4.67.1_zod@3.22.4/node_modules/openai/error.js:45:20)
    at OpenAI.makeStatusError (/app/apps/workers/node_modules/.pnpm/openai@4.67.1_zod@3.22.4/node_modules/openai/core.js:291:33)
    at OpenAI.makeRequest (/app/apps/workers/node_modules/.pnpm/openai@4.67.1_zod@3.22.4/node_modules/openai/core.js:335:30)
    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
    at async OpenAIInferenceClient.inferFromText (/app/apps/workers/node_modules/.pnpm/@hoarder+shared@file+packages+shared_better-sqlite3@11.3.0/node_modules/@hoarder/shared/inference.ts:2:2002)
    at async inferTagsFromText (/app/apps/workers/openaiWorker.ts:6:3097)
    at async inferTags (/app/apps/workers/openaiWorker.ts:6:3356)
    at async Object.runOpenAI [as run] (/app/apps/workers/openaiWorker.ts:6:6814)
    at async Runner.runOnce (/app/apps/workers/node_modules/.pnpm/liteque@0.3.2_better-sqlite3@11.3.0/node_modules/liteque/dist/runner.js:2:2656)
2025-03-02T01:45:09.613Z info: [inference][70] Starting an inference job for bookmark with id "izehx9mvbn7dl41so5dx9maj"
2025-03-02T01:45:09.623Z error: [inference][70] inference job failed: Error: 400 "'response_format.type' must be 'json_schema'"
Error: 400 "'response_format.type' must be 'json_schema'"
    at APIError.generate (/app/apps/workers/node_modules/.pnpm/openai@4.67.1_zod@3.22.4/node_modules/openai/error.js:45:20)
    at OpenAI.makeStatusError (/app/apps/workers/node_modules/.pnpm/openai@4.67.1_zod@3.22.4/node_modules/openai/core.js:291:33)
    at OpenAI.makeRequest (/app/apps/workers/node_modules/.pnpm/openai@4.67.1_zod@3.22.4/node_modules/openai/core.js:335:30)
    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
    at async OpenAIInferenceClient.inferFromText (/app/apps/workers/node_modules/.pnpm/@hoarder+shared@file+packages+shared_better-sqlite3@11.3.0/node_modules/@hoarder/shared/inference.ts:2:2002)
    at async inferTagsFromText (/app/apps/workers/openaiWorker.ts:6:3097)
    at async inferTags (/app/apps/workers/openaiWorker.ts:6:3356)
    at async Object.runOpenAI [as run] (/app/apps/workers/openaiWorker.ts:6:6814)
    at async Runner.runOnce (/app/apps/workers/node_modules/.pnpm/liteque@0.3.2_better-sqlite3@11.3.0/node_modules/liteque/dist/runner.js:2:2656)
2025-03-02T01:45:09.653Z info: [inference][70] Starting an inference job for bookmark with id "izehx9mvbn7dl41so5dx9maj"
2025-03-02T01:45:09.660Z error: [inference][70] inference job failed: Error: 400 "'response_format.type' must be 'json_schema'"
Error: 400 "'response_format.type' must be 'json_schema'"
    at APIError.generate (/app/apps/workers/node_modules/.pnpm/openai@4.67.1_zod@3.22.4/node_modules/openai/error.js:45:20)
    at OpenAI.makeStatusError (/app/apps/workers/node_modules/.pnpm/openai@4.67.1_zod@3.22.4/node_modules/openai/core.js:291:33)
    at OpenAI.makeRequest (/app/apps/workers/node_modules/.pnpm/openai@4.67.1_zod@3.22.4/node_modules/openai/core.js:335:30)
    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
    at async OpenAIInferenceClient.inferFromText (/app/apps/workers/node_modules/.pnpm/@hoarder+shared@file+packages+shared_better-sqlite3@11.3.0/node_modules/@hoarder/shared/inference.ts:2:2002)
    at async inferTagsFromText (/app/apps/workers/openaiWorker.ts:6:3097)
    at async inferTags (/app/apps/workers/openaiWorker.ts:6:3356)
    at async Object.runOpenAI [as run] (/app/apps/workers/openaiWorker.ts:6:6814)
    at async Runner.runOnce (/app/apps/workers/node_modules/.pnpm/liteque@0.3.2_better-sqlite3@11.3.0/node_modules/liteque/dist/runner.js:2:2656)
2025-03-02T01:45:09.689Z info: [inference][70] Starting an inference job for bookmark with id "izehx9mvbn7dl41so5dx9maj"
2025-03-02T01:45:09.698Z error: [inference][70] inference job failed: Error: 400 "'response_format.type' must be 'json_schema'"
Error: 400 "'response_format.type' must be 'json_schema'"
    at APIError.generate (/app/apps/workers/node_modules/.pnpm/openai@4.67.1_zod@3.22.4/node_modules/openai/error.js:45:20)
    at OpenAI.makeStatusError (/app/apps/workers/node_modules/.pnpm/openai@4.67.1_zod@3.22.4/node_modules/openai/core.js:291:33)
    at OpenAI.makeRequest (/app/apps/workers/node_modules/.pnpm/openai@4.67.1_zod@3.22.4/node_modules/openai/core.js:335:30)
    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
    at async OpenAIInferenceClient.inferFromText (/app/apps/workers/node_modules/.pnpm/@hoarder+shared@file+packages+shared_better-sqlite3@11.3.0/node_modules/@hoarder/shared/inference.ts:2:2002)
    at async inferTagsFromText (/app/apps/workers/openaiWorker.ts:6:3097)
    at async inferTags (/app/apps/workers/openaiWorker.ts:6:3356)
    at async Object.runOpenAI [as run] (/app/apps/workers/openaiWorker.ts:6:6814)
    at async Runner.runOnce (/app/apps/workers/node_modules/.pnpm/liteque@0.3.2_better-sqlite3@11.3.0/node_modules/liteque/dist/runner.js:2:2656)

LLM Studio
2025-03-01 17:51:51 [DEBUG] Received request: POST to /v1/chat/completions with body {
  "messages": [
    {
      "role": "user",
      "content": "\nYou are a bot in a read-it-later app and your res... <Truncated in logs> ...y \"tags\" and the value is an array of string tags."
    }
  ],
  "model": "qwen2.5-14b-instruct",
  "response_format": { "type": "json_object" }
}
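The two logs line up: Hoarder sends `"response_format": { "type": "json_object" }`, while this server only accepts the `json_schema` variant of OpenAI's structured-output format. Below is a minimal sketch of the two request shapes (plain objects, no API call); the schema itself is an illustrative guess at a `{"tags": [...]}` payload, not Hoarder's actual code:

```typescript
// What Hoarder v0.22.0 sends: schema-less JSON mode.
// LM Studio rejects this with: 400 "'response_format.type' must be 'json_schema'"
const jsonObjectRequest = {
  model: "qwen2.5-14b-instruct",
  messages: [{ role: "user", content: "Suggest tags for this bookmark ..." }],
  response_format: { type: "json_object" },
};

// What the server expects: an explicit JSON schema, in OpenAI's
// structured-output style. The schema is a hypothetical shape for
// a response like {"tags": ["a", "b"]}.
const jsonSchemaRequest = {
  ...jsonObjectRequest,
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "tags",
      strict: true,
      schema: {
        type: "object",
        properties: {
          tags: { type: "array", items: { type: "string" } },
        },
        required: ["tags"],
        additionalProperties: false,
      },
    },
  },
};
```

Presumably the fix is for Hoarder to emit the second form (or fall back to it) when the backend rejects `json_object`.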


@MohamedBassem commented on GitHub (Mar 2, 2025):

The nightly build (which will be ready in about 15 minutes) should fix this problem.


@LaelLuo commented on GitHub (Mar 21, 2025):

Docker version 0.22.0 is still not working; I use LM Studio.
After updating to the latest version, it works.

2025-03-21T03:59:33.150Z info: [search][36] Attempting to index bookmark with id eoj9m0qac06qliwjeuy7welr ...
2025-03-21T03:59:33.849Z info: [search][36] Completed successfully
2025-03-21T03:59:34.860Z info: [search][38] Attempting to index bookmark with id a3snr0zayx32hz2wvbamvl8v ...
2025-03-21T03:59:34.926Z info: [search][38] Completed successfully
2025-03-21T03:59:35.139Z info: [Crawler][37] Will crawl "https://operating-system-in-1000-lines.vercel.app/zh/01-setting-up-development-environment" for link with id "a3snr0zayx32hz2wvbamvl8v"
2025-03-21T03:59:35.139Z info: [Crawler][37] Attempting to determine the content-type for the url https://operating-system-in-1000-lines.vercel.app/zh/01-setting-up-development-environment
2025-03-21T03:59:35.156Z info: [webhook][39] Starting a webhook job for bookmark with id "a3snr0zayx32hz2wvbamvl8v"
2025-03-21T03:59:35.156Z info: [webhook][39] Completed successfully
2025-03-21T03:59:35.384Z info: [Crawler][37] Content-type for the url https://operating-system-in-1000-lines.vercel.app/zh/01-setting-up-development-environment is "text/html; charset=utf-8"
2025-03-21T03:59:36.139Z info: [Crawler][37] Successfully navigated to "https://operating-system-in-1000-lines.vercel.app/zh/01-setting-up-development-environment". Waiting for the page to load ...
2025-03-21T03:59:37.391Z info: [Crawler][37] Finished waiting for the page to load.
2025-03-21T03:59:37.395Z info: [Crawler][37] Successfully fetched the page content.
2025-03-21T03:59:37.481Z info: [Crawler][37] Finished capturing page content and a screenshot. FullPageScreenshot: false
2025-03-21T03:59:37.484Z info: [Crawler][37] Will attempt to extract metadata from page ...
2025-03-21T03:59:37.837Z info: [Crawler][37] Will attempt to extract readable content ...
2025-03-21T03:59:38.103Z info: [Crawler][37] Done extracting readable content.
2025-03-21T03:59:38.161Z info: [Crawler][37] Stored the screenshot as assetId: 3ffff7d5-3f51-4840-8978-c707d216e013
2025-03-21T03:59:39.028Z info: [Crawler][37] Done extracting metadata from the page.
2025-03-21T03:59:39.050Z info: [Crawler][37] Completed successfully
2025-03-21T03:59:39.163Z info: [search][41] Attempting to index bookmark with id a3snr0zayx32hz2wvbamvl8v ...
2025-03-21T03:59:39.170Z info: [inference][40] Starting an inference job for bookmark with id "a3snr0zayx32hz2wvbamvl8v"
2025-03-21T03:59:39.183Z info: [VideoCrawler][42] Skipping video download from "https://operating-system-in-1000-lines.vercel.app/zh/01-setting-up-development-environment", because it is disabled in the config.
2025-03-21T03:59:39.184Z info: [VideoCrawler][42] Video Download Completed successfully
2025-03-21T03:59:39.195Z info: [webhook][43] Starting a webhook job for bookmark with id "a3snr0zayx32hz2wvbamvl8v"
2025-03-21T03:59:39.195Z info: [webhook][43] Completed successfully
2025-03-21T03:59:39.200Z error: [inference][40] inference job failed: Error: 400 "'response_format.type' must be 'json_schema'"
Error: 400 "'response_format.type' must be 'json_schema'"
    at APIError.generate (/app/apps/workers/node_modules/.pnpm/openai@4.67.1_zod@3.22.4/node_modules/openai/error.js:45:20)
    at OpenAI.makeStatusError (/app/apps/workers/node_modules/.pnpm/openai@4.67.1_zod@3.22.4/node_modules/openai/core.js:291:33)
    at OpenAI.makeRequest (/app/apps/workers/node_modules/.pnpm/openai@4.67.1_zod@3.22.4/node_modules/openai/core.js:335:30)
    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
    at async OpenAIInferenceClient.inferFromText (/app/apps/workers/node_modules/.pnpm/@hoarder+shared@file+packages+shared_better-sqlite3@11.3.0/node_modules/@hoarder/shared/inference.ts:2:2002)
    at async inferTagsFromText (/app/apps/workers/openaiWorker.ts:6:3097)
    at async inferTags (/app/apps/workers/openaiWorker.ts:6:3356)
    at async Object.runOpenAI [as run] (/app/apps/workers/openaiWorker.ts:6:6814)
    at async Runner.runOnce (/app/apps/workers/node_modules/.pnpm/liteque@0.3.2_better-sqlite3@11.3.0/node_modules/liteque/dist/runner.js:2:2656)
2025-03-21T03:59:39.207Z info: [inference][40] Starting an inference job for bookmark with id "a3snr0zayx32hz2wvbamvl8v"
2025-03-21T03:59:39.213Z error: [inference][40] inference job failed: Error: 400 "'response_format.type' must be 'json_schema'"
Error: 400 "'response_format.type' must be 'json_schema'"
    at APIError.generate (/app/apps/workers/node_modules/.pnpm/openai@4.67.1_zod@3.22.4/node_modules/openai/error.js:45:20)
    at OpenAI.makeStatusError (/app/apps/workers/node_modules/.pnpm/openai@4.67.1_zod@3.22.4/node_modules/openai/core.js:291:33)
    at OpenAI.makeRequest (/app/apps/workers/node_modules/.pnpm/openai@4.67.1_zod@3.22.4/node_modules/openai/core.js:335:30)
    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
    at async OpenAIInferenceClient.inferFromText (/app/apps/workers/node_modules/.pnpm/@hoarder+shared@file+packages+shared_better-sqlite3@11.3.0/node_modules/@hoarder/shared/inference.ts:2:2002)
    at async inferTagsFromText (/app/apps/workers/openaiWorker.ts:6:3097)
    at async inferTags (/app/apps/workers/openaiWorker.ts:6:3356)
    at async Object.runOpenAI [as run] (/app/apps/workers/openaiWorker.ts:6:6814)
    at async Runner.runOnce (/app/apps/workers/node_modules/.pnpm/liteque@0.3.2_better-sqlite3@11.3.0/node_modules/liteque/dist/runner.js:2:2656)
2025-03-21T03:59:39.224Z info: [inference][40] Starting an inference job for bookmark with id "a3snr0zayx32hz2wvbamvl8v"
2025-03-21T03:59:39.228Z error: [inference][40] inference job failed: Error: 400 "'response_format.type' must be 'json_schema'"
Error: 400 "'response_format.type' must be 'json_schema'"
    at APIError.generate (/app/apps/workers/node_modules/.pnpm/openai@4.67.1_zod@3.22.4/node_modules/openai/error.js:45:20)
    at OpenAI.makeStatusError (/app/apps/workers/node_modules/.pnpm/openai@4.67.1_zod@3.22.4/node_modules/openai/core.js:291:33)
    at OpenAI.makeRequest (/app/apps/workers/node_modules/.pnpm/openai@4.67.1_zod@3.22.4/node_modules/openai/core.js:335:30)
    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
    at async OpenAIInferenceClient.inferFromText (/app/apps/workers/node_modules/.pnpm/@hoarder+shared@file+packages+shared_better-sqlite3@11.3.0/node_modules/@hoarder/shared/inference.ts:2:2002)
    at async inferTagsFromText (/app/apps/workers/openaiWorker.ts:6:3097)
    at async inferTags (/app/apps/workers/openaiWorker.ts:6:3356)
    at async Object.runOpenAI [as run] (/app/apps/workers/openaiWorker.ts:6:6814)
    at async Runner.runOnce (/app/apps/workers/node_modules/.pnpm/liteque@0.3.2_better-sqlite3@11.3.0/node_modules/liteque/dist/runner.js:2:2656)
2025-03-21T03:59:39.238Z info: [inference][40] Starting an inference job for bookmark with id "a3snr0zayx32hz2wvbamvl8v"
2025-03-21T03:59:39.242Z error: [inference][40] inference job failed: Error: 400 "'response_format.type' must be 'json_schema'"
Error: 400 "'response_format.type' must be 'json_schema'"
    at APIError.generate (/app/apps/workers/node_modules/.pnpm/openai@4.67.1_zod@3.22.4/node_modules/openai/error.js:45:20)
    at OpenAI.makeStatusError (/app/apps/workers/node_modules/.pnpm/openai@4.67.1_zod@3.22.4/node_modules/openai/core.js:291:33)
    at OpenAI.makeRequest (/app/apps/workers/node_modules/.pnpm/openai@4.67.1_zod@3.22.4/node_modules/openai/core.js:335:30)
    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
    at async OpenAIInferenceClient.inferFromText (/app/apps/workers/node_modules/.pnpm/@hoarder+shared@file+packages+shared_better-sqlite3@11.3.0/node_modules/@hoarder/shared/inference.ts:2:2002)
    at async inferTagsFromText (/app/apps/workers/openaiWorker.ts:6:3097)
    at async inferTags (/app/apps/workers/openaiWorker.ts:6:3356)
    at async Object.runOpenAI [as run] (/app/apps/workers/openaiWorker.ts:6:6814)
    at async Runner.runOnce (/app/apps/workers/node_modules/.pnpm/liteque@0.3.2_better-sqlite3@11.3.0/node_modules/liteque/dist/runner.js:2:2656)

@RobertusIT commented on GitHub (Aug 13, 2025):

I have the same problem with LM Studio: summary works, tagging doesn't:

Hoarder Logs

```
2025-03-02T01:45:09.580Z error: [inference][70] inference job failed: Error: 400 "'response_format.type' must be 'json_schema'"
Error: 400 "'response_format.type' must be 'json_schema'"
    at APIError.generate (/app/apps/workers/node_modules/.pnpm/openai@4.67.1_zod@3.22.4/node_modules/openai/error.js:45:20)
    at OpenAI.makeStatusError (/app/apps/workers/node_modules/.pnpm/openai@4.67.1_zod@3.22.4/node_modules/openai/core.js:291:33)
    at OpenAI.makeRequest (/app/apps/workers/node_modules/.pnpm/openai@4.67.1_zod@3.22.4/node_modules/openai/core.js:335:30)
    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
    at async OpenAIInferenceClient.inferFromText (/app/apps/workers/node_modules/.pnpm/@hoarder+shared@file+packages+shared_better-sqlite3@11.3.0/node_modules/@hoarder/shared/inference.ts:2:2002)
    at async inferTagsFromText (/app/apps/workers/openaiWorker.ts:6:3097)
    at async inferTags (/app/apps/workers/openaiWorker.ts:6:3356)
    at async Object.runOpenAI [as run] (/app/apps/workers/openaiWorker.ts:6:6814)
    at async Runner.runOnce (/app/apps/workers/node_modules/.pnpm/liteque@0.3.2_better-sqlite3@11.3.0/node_modules/liteque/dist/runner.js:2:2656)
2025-03-02T01:45:09.613Z info: [inference][70] Starting an inference job for bookmark with id "izehx9mvbn7dl41so5dx9maj"
2025-03-02T01:45:09.623Z error: [inference][70] inference job failed: Error: 400 "'response_format.type' must be 'json_schema'" (stack trace identical to the above)
2025-03-02T01:45:09.653Z info: [inference][70] Starting an inference job for bookmark with id "izehx9mvbn7dl41so5dx9maj"
2025-03-02T01:45:09.660Z error: [inference][70] inference job failed: Error: 400 "'response_format.type' must be 'json_schema'" (stack trace identical to the above)
2025-03-02T01:45:09.689Z info: [inference][70] Starting an inference job for bookmark with id "izehx9mvbn7dl41so5dx9maj"
2025-03-02T01:45:09.698Z error: [inference][70] inference job failed: Error: 400 "'response_format.type' must be 'json_schema'" (stack trace identical to the above)
```

LM Studio

```
2025-03-01 17:51:51 [DEBUG] Received request: POST to /v1/chat/completions with body {
  "messages": [
    {
      "role": "user",
      "content": "\nYou are a bot in a read-it-later app and your res... <Truncated in logs> ...y \"tags\" and the value is an array of string tags."
    }
  ],
  "model": "qwen2.5-14b-instruct",
  "response_format": { "type": "json_object" }
}
```
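Read together, the two logs pinpoint the mismatch: Karakeep sends `response_format: {"type": "json_object"}`, while this LM Studio build only accepts `"json_schema"`. A minimal sketch of the difference in the request payloads (the schema below is an assumption modeled on the prompt's description of the expected output, a `"tags"` key holding an array of strings; it is not Karakeep's actual schema):

```python
# Sketch of the two response_format payloads, under the assumptions above.

def json_object_format() -> dict:
    # What the Hoarder/Karakeep worker sends (rejected with HTTP 400 here).
    return {"type": "json_object"}

def json_schema_format() -> dict:
    # What this LM Studio version insists on: an explicit JSON schema.
    return {
        "type": "json_schema",
        "json_schema": {
            "name": "tags",
            "schema": {
                "type": "object",
                "properties": {
                    "tags": {"type": "array", "items": {"type": "string"}}
                },
                "required": ["tags"],
            },
        },
    }

def chat_request(response_format: dict) -> dict:
    # Minimal /v1/chat/completions body, mirroring the LM Studio debug log.
    return {
        "model": "qwen2.5-14b-instruct",
        "messages": [{"role": "user", "content": "Suggest tags as JSON."}],
        "response_format": response_format,
    }
```

Which of the two formats a given LM Studio release accepts varies by version, so it is worth checking the release notes of both LM Studio and Karakeep before patching anything by hand.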

Could you please share your `.env` setup for LM Studio? I can't figure it out; Karakeep won't connect to my LM Studio instance.

And which models are you using for text and images? I have 16 GB of VRAM.

Thanks

<!-- gh-comment-id:3184782703 --> @RobertusIT commented on GitHub (Aug 13, 2025).

@LaelLuo commented on GitHub (Aug 13, 2025):

I haven’t used Karakeep in a while, but my LM Studio instance is still running. I’m not sure which .env file you’re referring to—could you tell me the exact path or paste the section you need? Once I know, I’ll be happy to share it.

As for the models, I was using:

  • Text embeddings: mxbai-embed-large-v1 (the f16 GGUF)
  • Vision + text: MiniCPM-o-2_6-GGUF/Q4_K_M.gguf

Both ran fine on my 12 GB VRAM RTX 3060.

<!-- gh-comment-id:3184837993 -->

@RobertusIT commented on GitHub (Aug 13, 2025):

> I haven’t used Karakeep in a while, but my LM Studio instance is still running. I’m not sure which .env file you’re referring to—could you tell me the exact path or paste the section you need? Once I know, I’ll be happy to share it.
>
> As for the models, I was using:
>
>   • Text embeddings: mxbai-embed-large-v1 (the f16 GGUF)
>   • Vision + text: MiniCPM-o-2_6-GGUF/Q4_K_M.gguf
>
> Both ran fine on my 12 GB VRAM RTX 3060.

In my `.env` file I have:

OPENAI_BASE_URL=http://192.168.178.47:1234/v1
OPENAI_API_KEY=A43545ty543465 # fake key; Gemini told me to set one anyway
# Example models:
INFERENCE_TEXT_MODEL=google/gemma-3-27b
INFERENCE_IMAGE_MODEL=google/gemma-3-27b

I also tried the OpenRouter API and Gemini, but after a few requests I hit the rate limit and have to wait, so the free tiers of Gemini and OpenRouter don't make sense here.
On OpenRouter I stuck to free models like Mistral, but even then you have to pay after about 50 requests.
With Gemini's Gemma it doesn't work at all, and I don't understand why, since the limits should be higher. Maybe it just isn't a good fit for Karakeep. You are the expert.
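For the connection problem, it may help to first confirm that the LM Studio server is reachable from the machine running Karakeep, independently of any `.env` settings. A small sketch using only the standard library; the base URL is taken from the `.env` above, and `list_models` is a hypothetical helper name, not part of Karakeep:

```python
import json
import urllib.request


def models_url(base_url: str) -> str:
    # OpenAI-compatible servers, LM Studio included, expose GET /v1/models.
    return base_url.rstrip("/") + "/models"


def list_models(base_url: str) -> list[str]:
    # Returns the model ids the server reports; raises URLError if unreachable.
    with urllib.request.urlopen(models_url(base_url), timeout=5) as resp:
        payload = json.load(resp)
    return [m["id"] for m in payload.get("data", [])]


if __name__ == "__main__":
    # Base URL from the .env above; adjust to your own host and port.
    print(list_models("http://192.168.178.47:1234/v1"))
```

Note that if Karakeep runs in Docker, `localhost` inside the container will not reach LM Studio on the host, so a LAN IP like the one above is the right idea; LM Studio's server must also be configured to listen on the network rather than only on localhost.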

<!-- gh-comment-id:3184880891 -->

@LaelLuo commented on GitHub (Aug 13, 2025):

@RobertusIT Here's my LM Studio .env configuration that worked with Hoarder/Karakeep:

OPENAI_BASE_URL=http://192.168.2.218:1234/v1
OPENAI_API_KEY=lm_studio
INFERENCE_IMAGE_MODEL=minicpm-o-2_6
EMBEDDING_TEXT_MODEL=text-embedding-nomic-embed-text-v1.5
INFERENCE_LANG=chinese
DISABLE_SIGNUPS=true

This configuration worked well for me. For your 16GB VRAM setup, the MiniCPM-o-2_6 model should run smoothly, and you could potentially handle even larger models.

However, I'm not sure if this configuration is still valid with the current version of Karakeep, as this was from when the project was still called Hoarder. You might need to check the latest documentation to see if any environment variables have changed since the rename.

<!-- gh-comment-id:3185127428 -->