Mirror of https://github.com/karakeep-app/karakeep.git (synced 2026-04-25 07:56:05 +03:00)
Closed · opened 2026-03-02 11:48:33 +03:00 by kerem · 30 comments
Originally created by @kamtschatka on GitHub (Oct 3, 2024).
Original GitHub issue: https://github.com/karakeep-app/karakeep/issues/462
Different people have already asked for "their" AI provider to be supported.
It is unlikely that we will add support for all of them, but we could switch to a library that lets us connect to at least the better-known ones.
In the discussion https://github.com/hoarder-app/hoarder/discussions/453, there was a list of possible solutions already:
@kamtschatka commented on GitHub (Oct 3, 2024):
Requested providers so far:
@jkaberg commented on GitHub (Oct 4, 2024):
I didn't see https://claude.ai mentioned anywhere yet, so here's my +1 - documentation here
@bhupesh-sf commented on GitHub (Nov 7, 2024):
What about local LLMs using Ollama? All we need is an option to configure the OpenAI base URL; there are many providers
that offer an OpenAI-compatible endpoint for use with local LLMs.
@MohamedBassem commented on GitHub (Nov 7, 2024):
@bhupesh-sf Hoarder already supports local LLMs natively. Check the inference section in the configuration documentation.
@bhupesh-sf commented on GitHub (Nov 7, 2024):
Oh, thanks. In my excitement with the app I missed it in the documentation. Sorry for my ignorance.
@MohamedBassem commented on GitHub (Nov 9, 2024):
Gemini now has an OpenAI compatible API as well: https://developers.googleblog.com/en/gemini-is-now-accessible-from-the-openai-library/
@bebound commented on GitHub (Nov 12, 2024):
I've used https://github.com/stulzq/azure-openai-proxy to simulate Azure OpenAI as OpenAI. It works well for chenzhaoyu94/chatgpt-web.
When using the same config in hoarder, it shows "something went wrong", and I can't find any useful message in the logs.
@dinnouti commented on GitHub (Nov 27, 2024):
+1 for Amazon Bedrock LLMs like Claude, Meta, Cohere, so-on
@jbohnslav commented on GitHub (Dec 1, 2024):
If you use a LiteLLM proxy, you can already connect to all of these LLMs via the OpenAI api.
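For readers trying this route, a hedged sketch of what the hoarder side might look like when pointed at a LiteLLM proxy. The host, port, key, and model name below are all placeholders, not values from this thread - check your proxy's own configuration:

```shell
# Hypothetical example: a LiteLLM proxy exposing an OpenAI-compatible API
# at http://litellm.internal:4000. All values below are placeholders.
OPENAI_BASE_URL=http://litellm.internal:4000/v1
OPENAI_API_KEY=sk-litellm-virtual-key   # a virtual key issued by the proxy
INFERENCE_TEXT_MODEL=claude-3-5-sonnet  # must match a model name configured in the proxy
```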
@xiaoduo commented on GitHub (Dec 3, 2024):
Supporting an OpenAI-compatible API would be enough.
@dinnouti commented on GitHub (Dec 4, 2024):
Just to close the loop on Bedrock: AWS has sample code providing OpenAI-compatible RESTful APIs for Amazon Bedrock.
https://github.com/aws-samples/bedrock-access-gateway
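Assuming such a gateway is deployed, hoarder's existing OpenAI settings could presumably point at it. The host and model ID below are placeholders (the gateway's README documents the actual base path and supported model IDs):

```shell
# Hypothetical example: bedrock-access-gateway deployed at bedrock-gw.internal.
# Host, key, and model ID are placeholders - verify against the gateway's docs.
OPENAI_BASE_URL=http://bedrock-gw.internal/api/v1
OPENAI_API_KEY=your-gateway-api-key
INFERENCE_TEXT_MODEL=anthropic.claude-3-haiku-20240307-v1:0  # a Bedrock model ID
```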
@bradhawkins85 commented on GitHub (Jan 16, 2025):
For those wanting to use Gemini, here is the section from my Docker .env that works perfectly:
OPENAI_BASE_URL=https://generativelanguage.googleapis.com/v1beta/
OPENAI_API_KEY=Your API Key From Google AI Studio
INFERENCE_TEXT_MODEL=gemini-1.5-flash
@stancubed commented on GitHub (Jan 25, 2025):
Fantastic! Would love to see this as an example in the docs, if appropriate!
@yeathn commented on GitHub (Feb 2, 2025):
Here is what got mine working for Perplexity.
OPENAI_BASE_URL: https://api.perplexity.ai
OPENAI_API_KEY: Your Perplexity API Key
INFERENCE_TEXT_MODEL: sonar-pro
@sparkyfen commented on GitHub (Feb 3, 2025):
@yeathn what version of hoarder are you using? My docker container still complains with:
@yeathn commented on GitHub (Feb 3, 2025):
@sparkyfen Just checked the logs; mine does too. It apparently only worked for the AI summary feature, not for tagging.
@Corb3t commented on GitHub (Feb 5, 2025):
Is it possible to add a new setting in the web UI to choose the AI provider & API key from User Settings > AI Settings, instead of having to adjust the Docker env?
@hz-xiaxz commented on GitHub (Feb 18, 2025):
I tried this and it works well for AI summary, but automatic tagging is not working. Does anyone have any idea about that? Thanks!
@bradhawkins85 commented on GitHub (Feb 18, 2025):
try:
OPENAI_BASE_URL: https://generativelanguage.googleapis.com/v1beta/
OPENAI_API_KEY: Your API Key From Google AI Studio
INFERENCE_TEXT_MODEL: gemini-1.5-flash
INFERENCE_IMAGE_MODEL: gemini-1.5-flash
EMBEDDING_TEXT_MODEL: text-embedding-004
INFERENCE_JOB_TIMEOUT_SEC: 3600
I don't know if it will make any difference, but I rebuilt my hoarder server recently and added the extra lines; it worked fine previously, but that's my most up-to-date version.
Here is a link to my complete Docker Compose file in case that helps; API and secret keys have been removed.
https://pastebin.com/QAHrgFFc
@hz-xiaxz commented on GitHub (Feb 18, 2025):
Thanks for your fast and kind reply! I still find that tagging is not working, though; maybe I should open another issue. Your config has ruled out a problem with the AI settings. Thank you again!
@JC1738 commented on GitHub (Mar 2, 2025):
I am using LLM Studio for a local LLM. The summary works, but for tagging I get the following error:
Hoarder Logs
2025-03-02T01:45:09.613Z info: [inference][70] Starting an inference job for bookmark with id "izehx9mvbn7dl41so5dx9maj"
2025-03-02T01:45:09.623Z error: [inference][70] inference job failed: Error: 400 "'response_format.type' must be 'json_schema'"
Error: 400 "'response_format.type' must be 'json_schema'"
    at APIError.generate (/app/apps/workers/node_modules/.pnpm/openai@4.67.1_zod@3.22.4/node_modules/openai/error.js:45:20)
    at OpenAI.makeStatusError (/app/apps/workers/node_modules/.pnpm/openai@4.67.1_zod@3.22.4/node_modules/openai/core.js:291:33)
    at OpenAI.makeRequest (/app/apps/workers/node_modules/.pnpm/openai@4.67.1_zod@3.22.4/node_modules/openai/core.js:335:30)
    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
    at async OpenAIInferenceClient.inferFromText (/app/apps/workers/node_modules/.pnpm/@hoarder+shared@file+packages+shared_better-sqlite3@11.3.0/node_modules/@hoarder/shared/inference.ts:2:2002)
    at async inferTagsFromText (/app/apps/workers/openaiWorker.ts:6:3097)
    at async inferTags (/app/apps/workers/openaiWorker.ts:6:3356)
    at async Object.runOpenAI [as run] (/app/apps/workers/openaiWorker.ts:6:6814)
    at async Runner.runOnce (/app/apps/workers/node_modules/.pnpm/liteque@0.3.2_better-sqlite3@11.3.0/node_modules/liteque/dist/runner.js:2:2656)
LLM Studio Logs
2025-03-01 17:51:51 [DEBUG] Received request: POST to /v1/chat/completions with body { "messages": [ { "role": "user", "content": "\nYou are a bot in a read-it-later app and your res... <Truncated in logs> ...y \"tags\" and the value is an array of string tags." } ], "model": "qwen2.5-14b-instruct", "response_format": { "type": "json_object" } }
@MohamedBassem commented on GitHub (Mar 2, 2025):
Hey folks, I found the problem with this response format thing and merged a fix in 69d81aa. The nightly build will be ready in 15 mins and will have a fix for this issue. I tried it with Gemini and it works well (both for tagging and summaries). And as an escape hatch, if the provider you're using doesn't support structured outputs, you will be able to set INFERENCE_SUPPORTS_STRUCTURED_OUTPUT=false and hope that the model will be able to respond in the correct format.
@JC1738 commented on GitHub (Mar 2, 2025):
Great. Ideally we could have different endpoints and different models for each use. My local LLMs don't do well on vision but are perfectly fine for summaries, and hopefully work for tagging; it would be nice to specify the URL and model separately for text embedding, summary, tagging, and images.
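For reference, the escape hatch Mohamed describes above is a single env line, shown here alongside a generic OpenAI-compatible setup. The base URL and model below are placeholders for a local server, not values from this thread:

```shell
# Placeholder OpenAI-compatible local endpoint and model name - adapt to your setup.
OPENAI_BASE_URL=http://my-llm-server:1234/v1
OPENAI_API_KEY=not-needed-for-local
INFERENCE_TEXT_MODEL=qwen2.5-14b-instruct
# Escape hatch if the provider rejects response_format "json_schema":
INFERENCE_SUPPORTS_STRUCTURED_OUTPUT=false
```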
@Rising-Galaxy commented on GitHub (Mar 21, 2025):
deepseek please.
@BenGeba commented on GitHub (Apr 10, 2025):
Has anyone found a solution for Azure? I always get a “404 - Resource not found”.
I have used the following ways of writing the URL:
OPENAI_BASE_URL: https://my-resource.openai.azure.com/openai/deployments/gpt-4o-mini
OPENAI_BASE_URL: https://my-resource.openai.azure.com/openai/deployments/gpt-4o-mini/chat/completions?api-version=2025-01-01-preview
OPENAI_BASE_URL: https://my-resource.openai.azure.com
I get the same response for all of them.
@nooz commented on GitHub (Apr 18, 2025):
+1 for OpenRouter support
@MohamedBassem commented on GitHub (Apr 18, 2025):
Folks, at this point there are no plans to support any more providers besides OpenAI-compatible ones (which most providers are) and Ollama. The industry is converging on OpenAI-compatible APIs anyway. I've added a guide (link) about how to configure some of the most popular providers (e.g. Gemini, OpenRouter and Perplexity). If you try other popular providers and they work, please send a PR to add them to this guide.
@snotrauk commented on GitHub (Jul 21, 2025):
Has anyone got a workaround for Azure AI?
@snotrauk commented on GitHub (Jul 21, 2025):
Managed to get it working with https://github.com/stulzq/azure-openai-proxy.
@cloudchristoph commented on GitHub (Oct 26, 2025):
Yes @BenGeba. For gpt-4.1-mini you should use the following parameters. (There is no need for a proxy @snotrauk - at least not for karakeep.)
You'll find all the info in the AI Foundry portal (My assets -> Models + endpoints -> your model), but ignore the "Target URI".
Switch to the "Open AI SDK" tab in the example section on the right.
Your base URL could be ".openai.azure.com" or ".cognitiveservices.azure.com" - pay close attention: Microsoft is slowly migrating to a new endpoint.
GPT-5-mini will work the same way, but we have to wait for the release of the "max_tokens" to "max_completion_tokens" fix (#1969), because Azure only accepts the new parameter for GPT-5 etc.
Hope this helps.
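Putting the Azure advice above together, one plausible shape for the env section follows. The resource name, path, and model are assumptions, not confirmed values - copy the real base URL from your portal's SDK example rather than trusting this sketch:

```shell
# Hypothetical Azure example; "my-resource" and the path are assumptions.
# Note the endpoint may be .openai.azure.com or .cognitiveservices.azure.com,
# depending on how far Microsoft's endpoint migration has progressed for you.
OPENAI_BASE_URL=https://my-resource.openai.azure.com/openai/v1/
OPENAI_API_KEY=your-azure-api-key
INFERENCE_TEXT_MODEL=gpt-4.1-mini
```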