mirror of
https://github.com/dani-garcia/vaultwarden.git
synced 2026-04-26 09:46:00 +03:00
[GH-ISSUE #822] It gets unresponsive sometime after starting the container #575
Originally created by @gerroon on GitHub (Jan 19, 2020).
Original GitHub issue: https://github.com/dani-garcia/vaultwarden/issues/822
Subject of the issue
Hi
I started getting a strange issue. The BW_RS container becomes unresponsive sometime after starting. When I say unresponsive, I mean unresponsive to browsing via clients like Firefox: I can neither access the local IP nor the reverse-proxied URL. The Docker logs show nothing; it looks as if it is running normally.
Here are the last couple of lines from:
docker logs -f bitwarden

Your environment
Debian Testing, Docker
docker run -d --name bitwarden -v /media/bitwarden:/data/ -p 4080:80 -p 3012:3012 --restart=always bitwardenrs/server:latest
Bitwarden_rs version: 1.13.1
Install method: Docker
Clients used: Firefox, Android
Reverse proxy and version: Apache
Other relevant information:
Steps to reproduce
Start the Docker container and wait about half an hour; the web page becomes inaccessible.
Expected behaviour
I should be able to access the site no matter how long the container has been up and running.
Actual behaviour
The site is not reachable. It is initially reachable for a while, but after some time it cannot be reached via any of the server's IPs or via the reverse proxy (Apache2).
Relevant logs
curl -v http://localhost:80/alive
root@9af1c2f0025e:/# bash healthcheck.sh
No results shown
@gerroon commented on GitHub (Jan 19, 2020):
One thing I also see:
docker exec -ti bitwarden curl -v http://localhost:80/alive
However, this line below does not produce those errors; not sure if IPv6 is causing any issues, and if so, why it would.
docker exec -ti bitwarden curl -v http://127.0.0.1:80/alive
I feel like this must have been introduced in the last 2 releases, because I never had this issue before. I can restart the container and it is back for a while; the check command runs successfully and I can log in to the site.
Here I restarted the unresponsive bitwarden container.
@gerroon commented on GitHub (Feb 4, 2020):
I can't figure this out :( I am really starting to think that either the Firefox addon or the Android app is crashing this, because the crashes/unresponsiveness seem almost random. Sometimes it takes hours, sometimes 10 minutes after restarting the container.
No one else has this issue?
Here the app is totally unresponsive (I can't log in to the site, or sometimes the page can't even be opened), the web page can't be opened, and the addons can't sync. And I had to restart the container just 10 minutes ago.
Also, here is the top view of the processes running inside the container: https://paste.debian.net/hidden/bb8cd5f5/
Here is a straight curl to the site
Docker stats
@mprasil commented on GitHub (Feb 6, 2020):
This is strange. Just looking at the stats, it seems to consume more memory than I'm used to seeing (might not be relevant). Do you have many users?
It seems that the service just runs out of workers for whatever reason. Can you share some of your Apache configuration? That might be relevant. Perhaps the websocket configuration isn't correct and it ends up using all the workers with persistent connections? (Try turning off websockets, maybe?)
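For reference, a minimal sketch of the kind of Apache split mprasil is describing, with websocket traffic sent to the container's notifications port and everything else to the HTTP port. The ports (4080 and 3012) are taken from the docker run command earlier in this issue, and the `/notifications/hub` path follows the bitwarden_rs wiki's reverse-proxy examples; this is an assumption about the poster's setup, not their actual config. It requires mod_proxy, mod_proxy_http, and mod_proxy_wstunnel to be enabled.

```apache
<VirtualHost *:443>
    ProxyPreserveHost On

    # WebSocket notifications go to the port published with -p 3012:3012
    ProxyPass /notifications/hub ws://127.0.0.1:3012/notifications/hub

    # Everything else (web vault, API) goes to the port published with -p 4080:80
    ProxyPass / http://127.0.0.1:4080/
    ProxyPassReverse / http://127.0.0.1:4080/
</VirtualHost>
```

If the websocket location is missing or mis-routed, clients fall back to long-lived HTTP connections against the main port, which is one way the small worker pool could get exhausted.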
Also, just a total stab in the dark, but you can increase the number of worker threads (the default is 10):
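The snippet for that comment is not preserved in this mirror. A sketch of one way to raise the worker count, assuming the `ROCKET_WORKERS` environment variable that bitwarden_rs (via Rocket) reads, applied to the run command quoted earlier in the issue; the value 20 is an arbitrary example, not a recommendation from the original comment.

```shell
# Recreate the container with a larger Rocket worker pool.
# ROCKET_WORKERS is the env var Rocket uses for its worker thread count
# (default 10, per the comment above); 20 here is just an example value.
docker rm -f bitwarden
docker run -d --name bitwarden \
  -e ROCKET_WORKERS=20 \
  -v /media/bitwarden:/data/ \
  -p 4080:80 -p 3012:3012 \
  --restart=always \
  bitwardenrs/server:latest
```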
@gerroon commented on GitHub (Feb 6, 2020):
Hi
I have total 3 users.
Here is the Apache conf. The rest is just a couple of extra lines about the site and SSL.
I will try increasing the workers to see if that remedies it. Thanks for the pointers.
@jjlin commented on GitHub (Feb 6, 2020):
You have a lot of hung calls to the healthcheck script for some reason. What OS version and Docker version are you running? Also, maybe try disabling the healthcheck (docker run --no-healthcheck ...) and see if that helps.

@gerroon commented on GitHub (Feb 6, 2020):
Hi
I have no idea why the health check runs like that, since I do not interact with the container myself.
I am on Debian Testing, Docker version 19.03.5
I will also try your recommendation
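jjlin's suggested workaround can be sketched as follows, again reusing the run command from the top of the issue. `--no-healthcheck` is a standard docker run flag that disables the image's built-in HEALTHCHECK, which rules out the hung healthcheck.sh invocations seen in the process list.

```shell
# Recreate the container with the image's healthcheck disabled,
# to test whether hung healthcheck.sh calls are what exhausts the service.
docker rm -f bitwarden
docker run -d --name bitwarden --no-healthcheck \
  -v /media/bitwarden:/data/ \
  -p 4080:80 -p 3012:3012 \
  --restart=always \
  bitwardenrs/server:latest
```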
@cmroanirgo commented on GitHub (Feb 17, 2020):
Just browsing this fault, and I noticed either a) the issue or (more likely) b) a typo in this issue.
You state you start with:
docker run ... -p 4080:80(note port 4080)Your apache conf is incorrect in using:
ProxyPass / http://127.0.0.1:4030/(note port 4030)Perhaps my comment is a red herring, or this is not the actual problem...? You do state that it works for a while...
@gerroon commented on GitHub (Feb 17, 2020):
Hi
I tried to anonymize some info and might have mistyped. That is not the real issue, since I can always access it when it works.
Increasing the number of workers seemed to help. I will test a bit more.
@jjlin commented on GitHub (May 13, 2020):
Potentially related: #950
@BlackDex commented on GitHub (Nov 18, 2020):
Closing this ticket because of inactivity.
Feel free to re-open if the issue isn't resolved using the testing/master version and updated Docker software.