Mirror of https://github.com/dani-garcia/vaultwarden.git (synced 2026-04-26 01:35:54 +03:00)
[GH-ISSUE #300] initial icon caching spawns *lots* of processes #161
Originally created by @tycho on GitHub (Dec 17, 2018).
Original GitHub issue: https://github.com/dani-garcia/vaultwarden/issues/300
I noticed that if my `ICON_CACHE_FOLDER` is empty and I visit my vault, the main bitwarden_rs process seems to spawn many processes (or threads, I'm not sure which) to cache all the icons at once, or at least without a reasonable upper bound. This should probably be done with only a few workers at a time (4 or 8, say), both to stay within process limits and to be friendly to icons.bitwarden.com.

The reason I noticed this is that I had `LimitNPROC=64` in my `bitwarden_rs.service` file, which caused bitwarden_rs to fail with a "resource temporarily unavailable" error when trying to load the vault page. I bumped the limit much higher and was then able to load my vault page, but it would be nice not to need that.

@mprasil commented on GitHub (Dec 17, 2018):
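The bounded-concurrency idea suggested above (a handful of workers instead of one task per icon) can be sketched with a counting semaphore. This is not vaultwarden's actual code; it is a minimal illustration using only the standard library (std has no built-in semaphore, so one is built from `Mutex` + `Condvar`), and `fetch_and_cache_icon` stands in for whatever download logic the server would run:

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

// Minimal counting semaphore built from Mutex + Condvar.
struct Semaphore {
    count: Mutex<usize>,
    cv: Condvar,
}

impl Semaphore {
    fn new(permits: usize) -> Self {
        Semaphore { count: Mutex::new(permits), cv: Condvar::new() }
    }
    // Block until a permit is available, then take it.
    fn acquire(&self) {
        let mut c = self.count.lock().unwrap();
        while *c == 0 {
            c = self.cv.wait(c).unwrap();
        }
        *c -= 1;
    }
    // Return a permit and wake one waiter.
    fn release(&self) {
        *self.count.lock().unwrap() += 1;
        self.cv.notify_one();
    }
}

fn main() {
    // At most 4 icon fetches run concurrently, however many are queued.
    let sem = Arc::new(Semaphore::new(4));
    let domains: Vec<String> = (0..20).map(|i| format!("example{}.com", i)).collect();

    let handles: Vec<_> = domains
        .into_iter()
        .map(|domain| {
            let sem = Arc::clone(&sem);
            thread::spawn(move || {
                sem.acquire();
                // fetch_and_cache_icon(&domain) would go here (hypothetical helper).
                println!("caching icon for {}", domain);
                sem.release();
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
}
```

Each spawned thread still exists, but only four at a time do network work; a real fix would more likely bound the thread count itself with a fixed worker pool.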
I'm not sure how to approach this, to be honest. The icons API is not authenticated, so we don't necessarily know whether it's one client spamming the API with tons of requests or many clients each requesting a few icons. We could look at the client IP, but that's often not correct or helpful (all clients behind NAT, etc.), so I don't see any reliable way to do per-client throttling on the server side.
There were some issues raised for the client-side upstream code, but I think that went nowhere.
You can always attempt to limit the API calls on the proxy side, if you don't connect directly to the server, but as I said, in many cases you can't reliably limit the calls without risking a very bad experience for some users.
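As a concrete illustration of the proxy-side limiting mentioned above, here is a sketch of an nginx per-IP rate limit on the icons endpoint. The `/icons/` path, hostname, and upstream address are assumptions about a typical reverse-proxy setup, and the per-IP approach still suffers from the NAT caveat described in the comment:

```nginx
# Allow ~10 icon requests per second per client IP, with a burst of 20.
limit_req_zone $binary_remote_addr zone=icons:10m rate=10r/s;

server {
    listen 443 ssl;
    server_name vault.example.com;        # assumed hostname

    location /icons/ {
        limit_req zone=icons burst=20 nodelay;
        proxy_pass http://127.0.0.1:80;   # assumed bitwarden_rs address
    }
}
```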
@dani-garcia commented on GitHub (Dec 17, 2018):
We don't manage threads ourselves; it's Rocket doing it, and it always launches the same number, so I don't know why the problem only appears in this case.
You can manage the number of threads with the `ROCKET_WORKERS` env variable (by default it's the number of cores * 2). Other than that, or using a proxy as @mprasil said, there is not much we can do.
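For example, since the original report used a systemd unit, the worker count could be capped there with an `Environment=` line. The value 4 is just an illustration; per the comment above, the default is the number of cores * 2:

```ini
# Drop-in for bitwarden_rs.service: cap Rocket's worker threads
[Service]
Environment=ROCKET_WORKERS=4
```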
@dani-garcia commented on GitHub (Dec 18, 2018):
Seeing as we can't do a lot ourselves, I'll close this.
Note that an alternative to avoid hammering the official server would be to use `LOCAL_ICON_EXTRACTOR=true`, which downloads the favicons directly from each site, but that has its own downsides.