Mirror of https://github.com/karakeep-app/karakeep.git (synced 2026-04-25 07:56:05 +03:00)

[GH-ISSUE #2344] [BUG] Memory leak in 0.30 (with Pangolin) - RAM usage continuously growing over time #1422

Open · opened 2026-03-02 11:57:11 +03:00 by kerem · 52 comments
Originally created by @nkkfs on GitHub (Jan 4, 2026).
Original GitHub issue: https://github.com/karakeep-app/karakeep/issues/2344
Describe the Bug
I've been experiencing a memory leak with the Karakeep container for the past few weeks. The RAM usage continuously grows over time until it becomes problematic.
My Environment
Current Behavior
bookmarks.searchBookmarks endpoint
Logs:
Recent logs show repeated 500 errors when searching bookmarks:
Health checks return 200 OK, but bookmark search operations consistently fail with a 500 error.
Steps to Reproduce
Expected Behaviour
Memory usage should remain stable over time without requiring periodic restarts.
Screenshots or Additional Context
Device Details
No response
Exact Karakeep Version
v0.30.0
Have you checked the troubleshooting guide?
@nkkfs commented on GitHub (Jan 4, 2026):
Probably there was a problem with the Meilisearch API key; I don't know why it didn't show up earlier.
@nkkfs commented on GitHub (Jan 5, 2026):
However, that wasn't the case. The RAM filled up more slowly, or not at all, but overnight it increased significantly again.
Logs are completely clean for now, only requests with status code 200.
@kennyHH commented on GitHub (Jan 6, 2026):
Same problem here. It keeps growing until the machine crashes due to low RAM.
Nothing shows up in logs and I can't see any suspicious processes running.
@Infamousbugg commented on GitHub (Jan 6, 2026):
Same issue here, on Unraid. Also running it through Pangolin and have OpenID configured.
@MohamedBassem commented on GitHub (Jan 6, 2026):
Ok, folks, I'll mark this as approved, but I need help debugging this further. Both the server powering try.karakeep.app and my own personal instance don't seem to be experiencing this. Can all of you share top from inside your containers to see which process is using that memory? Can you also share which server version you're running? Also, are any of you running a publicly accessible instance (try to ensure that it's not related to #2346)?

@nkkfs commented on GitHub (Jan 6, 2026):
For me, top from the karakeep container looks like this:
My Karakeep instance is only accessible within my Tailscale network via a reverse proxy (Pangolin), so that's why I'm dismissing the possibility that I could have been hacked or something.
@kennyHH commented on GitHub (Jan 6, 2026):
This is the output of the 'top' command from my container.
I've put a memory restriction (1024 MB) on the karakeep-web container, but it's not really helping.
@0rokska8 commented on GitHub (Jan 6, 2026):
I made a completely new 0.30.0 installation (Docker) and have the same issue. After I saw that 2x the machine's entire RAM was in use, I put a restriction of 4 GB on it. The instance is behind a reverse proxy (Pangolin) with authentication.
@Nanianmichaels commented on GitHub (Jan 8, 2026):
Count me in as one of the people affected; I noticed today after getting a low-memory warning from the machine.
As others have pointed out, karakeep-web's top results show that the next-server and node processes are the ones responsible for hogging the memory. It doesn't seem like it's a #2346 instance: I have no weird processes running, and CPU usage is normal.
Not sure if relevant, but I'm running Karakeep on a RPi 5, so I can confirm the issue also appears on ARM64 builds.
Let me know if there's anything I can do to test a solution, or if you need more info from my setup.
@MohamedBassem commented on GitHub (Jan 11, 2026):
Ok, I think we can rule out that this is related to #2346.
Unfortunately, the top command doesn't show the RSS.
Can you instead run inside the container:
And share the output? Sorry for the hassle.
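Judging from the column headings quoted later in the thread (ps -eo pid,comm,rss,vsz), the requested command was most likely along these lines; the container name here is an assumption based on the compose setups mentioned above:

    # Hedged reconstruction: per-process RSS and VSZ from inside the web container
    docker exec -it karakeep-web ps -eo pid,comm,rss,vsz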
@kennyHH commented on GitHub (Jan 11, 2026):
This is the output from mine.
@MohamedBassem commented on GitHub (Jan 11, 2026):
Ok, that's helpful. It means that the leak is coming from the nextjs server and not the background workers. Interesting...
@0rokska8 commented on GitHub (Jan 11, 2026):
My output, if still needed. If you need more, just let me know.
@Nanianmichaels commented on GitHub (Jan 11, 2026):
Can confirm over here: next-server is requesting an ungodly amount of RAM.
That was actually what made me start digging into this issue. My host was showing "next-server (v" as taking up a lot of RAM, so I dug into which process was launching it; after three or four levels it turned out Docker was involved, so I did the "scream test" (i.e., turned each service off one by one) until that process went away.
What I find weird is that as soon as you launch the service, it will automatically request something like 20 GB+ of virtual RAM, even if the actual RAM usage is low.
Hopefully you'll be able to get to the root of this one. I had another OoM instance today, even though I had limited the service to 1 GB of RAM usage...
@MohamedBassem commented on GitHub (Jan 11, 2026):
Thanks for the input. Please keep it coming, and it would be great if everyone could share the environment they're running on (Docker, Unraid, Proxmox, bare Linux, etc.). I have yet to figure out why you're hitting this while my instance, and even the cloud deployment, aren't.
@Infamousbugg commented on GitHub (Jan 11, 2026):
Mine looks identical to this. Running on Unraid.
@MohamedBassem commented on GitHub (Jan 11, 2026):
Anyone willing to hop on the nightly release and confirm whether the problem is still there?
@Nanianmichaels commented on GitHub (Jan 11, 2026):
My first thought was that the worker had somehow become borked because I had moved my Ollama instance (or rather, it became unavailable because I reinstalled the OS on the machine it was served from, and it took me a while to get it working again) and I hadn't updated the compose file to either remove the Ollama reference or update it. My reasoning was that Karakeep was trying to ping an LLM that wasn't available and was getting stuck with open LLM requests.
I currently have two failed inference jobs, and Karakeep tells me inference jobs are "active"; not sure if that can be a trigger.
In addition, I have 11 failed crawler jobs, and for at least one of them (as seen in the logs) Karakeep repeatedly tries and fails to get data (the site is live, but it's basically a big table of data). Again, not sure if that can be a trigger either.
I can try the nightly, will report ASAP.
@MohamedBassem commented on GitHub (Jan 11, 2026):
@Nanianmichaels that would have been a plausible theory if the worker process were the one using the memory. But from the screenshots so far, it's nextjs that's leaking. We don't talk to Ollama from nextjs (except for the manual "Summarize with AI" button).

@Nanianmichaels commented on GitHub (Jan 11, 2026):
Shame it's not something as easy to diagnose as that, then... :|
Just updated to nightly/latest.
Mainthread and next-server are still requesting 16 GB of virtual memory each. Too soon to tell if there's still a leak; I'll only be able to tell in a few hours.
Weirdly enough, Docker does NOT count memory usage from those two processes towards the container memory usage, which means setting a container memory limit is not effective as a way to prevent OoM events.
@0rokska8 commented on GitHub (Jan 11, 2026):
Also just updated to latest/nightly.
At this point I’ll need to let it run for a few hours again to see whether the memory usage keeps growing or stabilizes.
top:
ps -eo pid,comm,rss,vsz:
@micahlt commented on GitHub (Jan 11, 2026):
I'm experiencing this same issue - Docker on Ubuntu Server 24.04. Just restarted my container on the nightly version, waiting to see if the leak is still there.
@MohamedBassem commented on GitHub (Jan 11, 2026):
@Nanianmichaels docker should be counting the RSS, not the virtual memory, so setting a container limit should work.
Thanks for the help folks, waiting for the updates about the nightly version.
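For anyone wanting to double-check what Docker is actually accounting for, a quick sketch (the container name is an assumption; adjust to your compose service):

    # Memory usage as Docker accounts for it (what limits are enforced against)
    docker stats --no-stream karakeep-web
    # Tighten a memory limit on a running container without recreating it
    docker update --memory 1g --memory-swap 1g karakeep-web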
@Infamousbugg commented on GitHub (Jan 11, 2026):
I've had the latest build running for about an hour. Next-server started out at 190 MB, but it has slowly inched upwards to 676 MB.
@MohamedBassem commented on GitHub (Jan 11, 2026):
This sounds relevant: https://github.com/vercel/next.js/issues/85914. But I still don't know why it's only reproducing for you and not for me, and I'll need a repro to be able to test the fix. Anyone comfortable enough with the Node ecosystem to try to get us a heap profile while memory is inflated (optimally with RSS > 2 GB)?
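One possible way to capture such a profile, sketched under assumptions (container name, that NODE_OPTIONS reaches the next-server process, and that the snapshot signal is honoured there):

    # 1) Add to the web container's environment so Node dumps a heap snapshot on SIGUSR2:
    #      NODE_OPTIONS=--heapsnapshot-signal=SIGUSR2
    # 2) Once RSS is inflated, find the next-server PID and signal it:
    docker exec karakeep-web ps -eo pid,comm
    docker exec karakeep-web kill -USR2 <PID>
    # 3) A Heap.*.heapsnapshot file is written to the process's working directory;
    #    copy it out with docker cp and open it in Chrome DevTools' Memory tab.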
@Nanianmichaels commented on GitHub (Jan 11, 2026):
It seems that the leak is still there on nightly.
Mainthread is steady at ~210 MB despite the large virtual memory request; next-server has grown to 1 GB of actually used memory, with zero activity on the web UI.
Sadly, I can't help with heap profiling.
Also, for some reason docker on this machine apparently reports 0 as the RAM usage for ALL containers, which is... not great for triggering memory limits... O_o
@MohamedBassem commented on GitHub (Jan 11, 2026):
@Nanianmichaels if there's no activity on the web container, can you share the logs?
@Nanianmichaels commented on GitHub (Jan 11, 2026):
There's a bit of background activity, but nothing major or sensitive (I think), so here it goes:
@0rokska8 commented on GitHub (Jan 12, 2026):
After almost 24 hours, I have to confirm that the behaviour on the nightly is the same as before (I've now changed the limit from 4 GB to 1 GB).
I would like to help; unfortunately I have no understanding of the Node ecosystem.
@MohamedBassem commented on GitHub (Jan 13, 2026):
712 MB is within what I'd call fine (there are other ways to mitigate that one, mostly tuning GC). I'm mostly interested in debugging situations where memory grows beyond 2 GB, because that's what is not normal. Anyone who has seen usage of 2 GB+, can you share a redacted version of your env file? I tried a fresh install on Proxmox and, regardless of how hard I pushed it (with the default env), I couldn't get it to grow past 1.5 GB; I left it overnight and it stabilized at 1 GB.
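On the GC-tuning mitigation mentioned above, the usual knob is capping V8's old-space heap so it collects more aggressively; a sketch only, assuming the variable reaches the next-server process via the web container's environment:

    # Bounds heap growth (and delays OOM) but does not fix an underlying leak
    NODE_OPTIONS="--max-old-space-size=512"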
@Nanianmichaels commented on GitHub (Jan 13, 2026):
I do not run with an env file; I pass the environment variables directly in the Compose file.
So here's a copy of the Compose file, with redactions where relevant:
Hope this helps with debugging!
@jesusfer commented on GitHub (Jan 13, 2026):
I can also reproduce the issue with 0.30.0, though I have not tested the nightly.
These are my env vars (secret stuff removed):
These are my stats right now.
@0rokska8 commented on GitHub (Jan 13, 2026):
I think there is a misunderstanding. As I mentioned, I limited the RAM to 1 GB, but as you can see in the screenshot, it always fills up until the 1 GB is reached and then starts again at around 200 MB, climbing back up to the limit. It grows by about 500 MB per hour. I set the RAM limit to 8 GB at midday and now we are at 3.3 GB and continuously growing.
And the content of my .env file:
@MohamedBassem commented on GitHub (Jan 14, 2026):
Wow, those numbers are crazy. Ok, I'll try to repro one more time, with a fresh installation using Docker this time (instead of Proxmox). If it doesn't repro, I can start providing some scripts for you folks to help get some more debugging info.
@MohamedBassem commented on GitHub (Jan 14, 2026):
Folks, I've created a new discord channel here to debug this further with faster comms. Please join if you're impacted and willing to help: https://discord.com/channels/1223681308962721802/1460952880516432008
Also, I've heard multiple mentions of Pangolin here. Is anyone impacted (3 GB+) who is NOT using Pangolin as a reverse proxy in front of Karakeep?
@Nanianmichaels commented on GitHub (Jan 14, 2026):
For some reason, that channel doesn't open for me on Discord. Do I need to be in the server beforehand? If so, I'd appreciate a link.
Also, I am indeed running Karakeep behind Pangolin, too.
@MohamedBassem commented on GitHub (Jan 14, 2026):
This link should work: https://discord.gg/KqvDactC
@MohamedBassem commented on GitHub (Jan 19, 2026):
With the help of a few folks on Discord, we've confirmed that Pangolin is the culprit and that disabling it stabilizes the memory. So this issue is impacting only Pangolin users.
Now that we've narrowed it down, I'll try to repro and root-cause it so I can release a fix.
Huge thanks to @Nanianmichaels and @kennyHH for the help in debugging this and reporting the results back.
@msylw commented on GitHub (Jan 21, 2026):
Just found this bug after noticing it happening to me. Running in Docker on Ubuntu 25.10.
I do NOT use Pangolin.
But I DO use Traefik.
@ne-bknn commented on GitHub (Jan 23, 2026):
Hi! I'm on Docker running on Arch Linux, updated to 0.30.0 a few days ago, and have the same problem.
I do not use Pangolin. In my lab I use Authelia. Karakeep is behind Caddy, but there's no OIDC/forward auth; Authelia is not yet configured for it, just Karakeep's local auth.
The thing is, I updated on 01/16, but memory started to leak on 01/17. On 01/17 I enabled a lab-wide blackbox exporter probe that constantly GETs /. I've disabled it for Karakeep and am waiting for the results.

@MohamedBassem commented on GitHub (Jan 23, 2026):
@ne-bknn yes, our current understanding is that it's those heartbeats from pangolin or other services (such as blackbox exporter) that trigger the leak. We believe we have a good understanding of the problem and are working on a fix.
@nerdware-timmmi commented on GitHub (Feb 9, 2026):
I would also like to present my setup and Grafana metrics here again, and I am looking forward to the fix.
@Nanianmichaels commented on GitHub (Feb 9, 2026):
You might want to visit the Discord server's discussion thread to see if the devs need any extra information from you, since it appears to be the same presentation; it might be another way the bug is triggered.
@nerdware-timmmi commented on GitHub (Feb 10, 2026):
But the discord link isn't working anymore :-/
@Nanianmichaels commented on GitHub (Feb 10, 2026):
I got you: https://discord.gg/mJWfNyqW
@Kinson261 commented on GitHub (Feb 10, 2026):
I encountered this issue as well, having Karakeep behind Pangolin. Up to 7 GB used by Karakeep alone.
From the Discord thread, a temporary fix is to disable the health check in Pangolin.
@momonator25 commented on GitHub (Feb 10, 2026):
I guess this is not related to Pangolin itself, as I had the same behavior with my own Docker healthcheck for the karakeep container:
After removing it yesterday evening (09.02.26), the memory usage seems to be stable at around 500 MB (which is still a fair bit for an idling app doing only some background tasks).
I also still had Uptime Kuma HTTP healthchecks at 30 s intervals, which I've disabled now; I will watch the memory usage and report back here.
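For anyone checking whether their own setup still has a probe hammering the web container, a quick sketch (container name assumed):

    # Show the healthcheck configured on the container, if any
    docker inspect --format '{{json .Config.Healthcheck}}' karakeep-web
    # With plain docker run, healthchecks can be switched off via --no-healthcheck;
    # in a compose file the equivalent is "healthcheck: disable: true" on the service.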
@Nanianmichaels commented on GitHub (Feb 10, 2026):
Apparently the memory leak can be caused by any healthcheck, not just Pangolin's.
At least that's what I get from this reply earlier in the thread:
https://github.com/karakeep-app/karakeep/issues/2344#issuecomment-3789635985
@momonator25 commented on GitHub (Feb 10, 2026):
Oh yes, sorry, I missed that comment.
@macros111 commented on GitHub (Feb 18, 2026):
Same for me. I am on Unraid with karakeep, meilisearch, and browserless-v2.
I disabled the Uptime Kuma and Pangolin health checks. I have a fresh instance with 3 bookmarks right now, and memory usage was at 4 GB and rising.
Now, after a restart, the memory usage of the karakeep container is down to 400 MB.
@micahlt commented on GitHub (Feb 18, 2026):
I think this still needs some attention. It makes no sense that I can't monitor Karakeep in any way without its memory ballooning.
@Nanianmichaels commented on GitHub (Feb 18, 2026):
As per the dev's post earlier (https://github.com/karakeep-app/karakeep/issues/2344#issuecomment-3789635985), "We believe we have a good understanding of the problem and are working on a fix."
So this issue is being worked on, and hopefully a new version that addresses it will come out soon.
Meanwhile, disabling monitoring is a workaround until said new version is released. When will it be released? When it's ready.