[GH-ISSUE #966] Horizontal scaling in k8s #639

Closed
opened 2026-03-02 11:51:32 +03:00 by kerem · 3 comments

Originally created by @swills on GitHub (Feb 2, 2025).
Original GitHub issue: https://github.com/karakeep-app/karakeep/issues/966

Is it safe to increase the number of replicas for the web pod in k8s?

I've customized the number of replicas on the chrome pod and that seems fine, but of course that has no local data.

It looks to be unsafe to increase replicas on the meilisearch pod based on the reading I've done about that, since it uses LMDB.

But I don't know enough about the stuff running in the web pod to know if it's safe to increase the replicas. (This would of course mean using a storage class for the PVC which supports ReadWriteMany access.)
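
In manifest terms, the setup being asked about looks roughly like the sketch below. This is a minimal illustration only, assuming a plain Deployment plus PVC; the resource names, storage class, image path, and mount path are placeholders rather than the chart's actual values.

```yaml
# Minimal sketch (placeholder names, not the chart's real identifiers):
# a ReadWriteMany PVC so every web replica can mount the same data dir.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: karakeep-data            # hypothetical name
spec:
  accessModes:
    - ReadWriteMany              # requires an RWX-capable storage class (e.g. NFS)
  storageClassName: nfs-rwx      # placeholder; use your cluster's RWX class
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: karakeep-web             # hypothetical name
spec:
  replicas: 3                    # the scaling in question
  selector:
    matchLabels:
      app: karakeep-web
  template:
    metadata:
      labels:
        app: karakeep-web
    spec:
      containers:
        - name: web
          image: ghcr.io/karakeep-app/karakeep:release  # image path assumed
          volumeMounts:
            - name: data
              mountPath: /data   # assumed data directory
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: karakeep-data
```

The two load-bearing details are `replicas` above 1 and `accessModes: [ReadWriteMany]`, which is what forces the RWX-capable storage class mentioned above.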

kerem closed this issue and added the question label (2026-03-02 11:51:32 +03:00)

@MohamedBassem commented on GitHub (Feb 2, 2025):

Honestly, I've not given it a try before but theoretically speaking I see nothing that would prevent it from working.

The only caveat I can think of is that you'll end up with a multiple of the configured number of background job workers (because they are baked into the web pod). For example, if you have `CRAWLER_NUM_WORKERS=1` and X replicas, each replica will run 1 worker, so you end up with X workers instead of 1. The work should get correctly distributed across them though, so it shouldn't be that big of a deal.

I'd recommend taking backups before going with that route. Let me know how it goes, I'm curious :)
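
To make the worker arithmetic concrete: total crawler workers = replicas × `CRAWLER_NUM_WORKERS`, since each replica starts its own baked-in workers. A minimal sketch, assuming the setting is passed as an ordinary container environment variable (names and image path are placeholders, as before):

```yaml
# Hedged sketch: each web replica runs its own baked-in background
# workers, so total workers = replicas * CRAWLER_NUM_WORKERS.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: karakeep-web             # placeholder name
spec:
  replicas: 3                    # 3 web pods
  selector:
    matchLabels:
      app: karakeep-web
  template:
    metadata:
      labels:
        app: karakeep-web
    spec:
      containers:
        - name: web
          image: ghcr.io/karakeep-app/karakeep:release  # image path assumed
          env:
            - name: CRAWLER_NUM_WORKERS
              value: "1"         # 1 worker per pod -> 3 workers total
```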


@swills commented on GitHub (Feb 2, 2025):

Got it, thanks! I'll definitely back up (and restore), as I have to do that anyway in order to migrate from RBD block (RWO) to NFS (RWX) storage. I'll let you know how it goes.


@swills commented on GitHub (Feb 3, 2025):

Well, it seems to be mostly OK, at least in my limited single-user testing. I do occasionally get "Database is locked" errors when adding things.
