Mirror of https://github.com/dani-garcia/vaultwarden.git (synced 2026-04-26 01:35:54 +03:00)
[GH-ISSUE #2622] Docker container v1.25.1 persistent check with volumes fails to start #1329
Originally created by @aclare92 on GitHub (Jul 17, 2022).
Original GitHub issue: https://github.com/dani-garcia/vaultwarden/issues/2622
Subject of the issue
On starting the latest (1.25.1) version, it errors stating there is no .env file and no persistent storage, although both were configured and working in 1.25.0.
Deployment environment
Install method: Docker
Clients used: the application doesn't start at all, so presumably all clients
Reverse proxy and version: N/A
MySQL/MariaDB or PostgreSQL version: N/A
Other relevant details:
Steps to reproduce
Run the docker compose below (I have every variable populated, but removed them for this issue report)
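The compose file itself was not captured in this mirror. As a hedged sketch only, a setup matching the details given elsewhere in the thread (a named data volume and the 50003:80 port remap) might look like this; the service name and image tag are assumptions:

```yaml
version: "3"

services:
  vaultwarden:
    image: vaultwarden/server:1.25.1
    restart: unless-stopped
    ports:
      # Host port 50003 remapped to the container's port 80
      - "50003:80"
    volumes:
      # Named Docker volume (the storage type at issue in this report)
      - vaultwarden_data:/data

volumes:
  vaultwarden_data:
```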
Expected behaviour
To launch the application and be accessible via its port, or the remapped port if provided (i.e. 50003:80 in this case)
Actual behaviour
Application does not start; reviewing the logs, the below is returned:
Troubleshooting data
@shadoskill commented on GitHub (Jul 17, 2022):
Yeah I am experiencing the same issue.
@BlackDex commented on GitHub (Jul 17, 2022):
Which version of docker/docker-compose are you using?
And is the volume a docker volume, or a directory on the host?
@shadoskill commented on GitHub (Jul 17, 2022):
Docker: 20.10.17
Unlike the OP I did not use compose, I just ran commands manually.
docker volume create vaultwarden_data
docker run -d --name vaultwarden -v vaultwarden_data:/data/ -p 20000:80 vaultwarden/server:latest
docker volume inspect vaultwarden_data
[
    {
        "CreatedAt": "2022-07-17T07:44:17-05:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/vaultwarden_data/_data",
        "Name": "vaultwarden_data",
        "Options": {},
        "Scope": "local"
    }
]
And then I see the same error in my logs.
@BlackDex commented on GitHub (Jul 17, 2022):
Looks like this happens when using a volume and not a bind mount from the host. For now I suggest adding I_REALLY_WANT_VOLATILE_STORAGE=true to the environment config. That should at least make the latest version work without issues.
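Applied to a compose file, the suggested workaround is a single environment entry; a hedged sketch only, with the service name and image tag assumed:

```yaml
services:
  vaultwarden:
    image: vaultwarden/server:1.25.1
    environment:
      # Workaround from this thread: skip the persistent-storage check
      - I_REALLY_WANT_VOLATILE_STORAGE=true
```

The equivalent for a plain `docker run` invocation would be adding `-e I_REALLY_WANT_VOLATILE_STORAGE=true`.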
@BlackDex commented on GitHub (Jul 17, 2022):
I'm looking into a way to still detect this, but differently.
I have to do some checking but maybe there is an option.
@teward commented on GitHub (Jul 18, 2022):
NOTE: This is a quick workaround to solve this, and it'll be recognized as persistent storage by the container. It maps 'persistent data' onto your disk, which is, in fact, the same thing that the main Bitwarden docker instances want you to do for the same reasons.
Create a local directory for your vaultwarden data in the same directory as your docker-compose file. Then map it in volumes with ./data:/data or similar. This is a workaround that will do a bind mount. The problem with non-bind-mount storage is that Docker Compose will delete it when told to in a proper docker compose down call that includes destroying the volumes. Hence it's not 'persistent' without a bind mount or an external storage mechanism mounted in. What you've done is create an ephemeral volume in Docker's storage that can be deleted by Docker with the right deletion arguments to docker compose down. Using a local directory as a bind mount inside the Docker container means that Docker can delete the volume, but not the underlying data. So your volumes should look like this under the container: … where vaultwarden_data is a directory in your local dir where you have your docker compose file. (This plagued me on a few containers before, so I've learned how Docker talks and bind-mounts to fix this issue.)
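The volumes snippet the comment refers to was lost in this mirror. As a hedged sketch of the bind mount it describes (the host directory name is taken from the comment, the service name is an assumption):

```yaml
services:
  vaultwarden:
    volumes:
      # Bind mount: data lives in ./vaultwarden_data on the host,
      # so it survives even `docker compose down -v`
      - ./vaultwarden_data:/data
```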
@BlackDex commented on GitHub (Jul 18, 2022):
While that would indeed solve it, it would cause their current data to be lost/stuck in the volume rather than on the bind-mounted directory.
I'm in the process of detecting un-named volumes, to allow docker/podman volumes while still catching the case where someone forgot to bind-mount or create a volume.
@BlackDex commented on GitHub (Jul 22, 2022):
If you can test the current
testingrelease that would be great. That should have fixed the issue.@oscar-b commented on GitHub (Jul 24, 2022):
@BlackDex Tested with Version 1.25.1-331f6c08 and all seems fine now (without I_REALLY_WANT_VOLATILE_STORAGE)! 👍🏻
@abcde57219 commented on GitHub (Aug 13, 2022):
I am still having this same issue using the testing container. The issue started for me after an update in mid to late July. Any other thoughts on things I should try to fix?
@BlackDex commented on GitHub (Aug 13, 2022):
Maybe the same as this https://github.com/dani-garcia/vaultwarden/discussions/2656
@abcde57219 commented on GitHub (Aug 13, 2022):
I was able to get it working, but I think I had to do more than #2656. I had to do three things:
I am not sure if numbers 1 and 3 were in fact addressing issue #2656.
@BlackDex commented on GitHub (Aug 13, 2022):
There are no checks done on the naming; you could even have named it 1password and it wouldn't have mattered. It really only checks whether the mounted directory's name is a string of exactly 64 alphanumeric characters.
Neither bitwarden nor vaultwarden matches that criterion at all.
It is strange, but good that you were able to solve it.
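The heuristic described above (anonymous Docker volumes get mountpoints named with a 64-character hash, while named volumes and bind mounts do not) can be sketched in shell. This is an illustrative sketch only; the function name is an assumption, not vaultwarden's actual code:

```shell
# Hedged sketch: a mount directory whose name is exactly 64 alphanumeric
# characters looks like an anonymous (non-persistent) Docker volume.
looks_like_anonymous_volume() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z0-9]{64}$'
}

# A named volume or bind-mount directory does not match the pattern:
looks_like_anonymous_volume "vaultwarden_data" || echo "named volume: ok"

# A 64-character hash (as Docker assigns to anonymous volumes) does:
looks_like_anonymous_volume \
  "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08" \
  && echo "anonymous volume: would trigger the check"
```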