mirror of
https://github.com/dani-garcia/vaultwarden.git
synced 2026-04-26 09:46:00 +03:00
[GH-ISSUE #856] Database and all users missing after rebooting installation -- Next Steps? #604
Originally created by @kevdogg on GitHub (Feb 9, 2020).
Original GitHub issue: https://github.com/dani-garcia/vaultwarden/issues/856
Hi, I'm self-hosting bitwarden_rs.
I thought I did very well last week configuring things. I restarted the VM running the Docker image many times and all went well.
My host system is FreeNAS, and inside FreeNAS I'm running a Linux VM with Docker installed.
Over the weekend one of my ZFS disks died, forcing me to take down the system and replace the disk. It's a RAIDZ2 config, so I didn't experience any data loss that I know of -- until now.
So I rebooted FreeNAS and got it up and running. I restarted the two VMs within FreeNAS (the nginx reverse proxy VM and the bitwarden_rs VM) and attempted to log in. It keeps saying bad password. I looked at the admin page and there are no listed users. It's like the system is completely blank.
The data directory is mapped locally via docker-compose. The /var/data/bw directory exists on the host and I can see the db.sqlite3 file inside it.
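For context, a bind mount like the one described is typically declared in docker-compose roughly as follows. This is a sketch: the service name and image tag are assumptions; only the /var/data/bw host path comes from this report, and bitwarden_rs stores its database under /data inside the container.

```yaml
services:
  bitwarden:
    image: bitwardenrs/server:latest   # image name as of this issue's era; an assumption
    volumes:
      - /var/data/bw:/data             # host path -> container data dir (db.sqlite3 lives here)
    ports:
      - "80:80"
```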
When testing last week, I synced the Bitwarden iOS app against this instance; now it cannot sync. I logged out and back in -- it now says there are no items in my vault.
I'm using this as a testing rig for now -- but I can't tell you how much this data loss stinks. I'm not sure how best to restore the old data, but honestly this shouldn't have happened.
I'm not sure which logs to post.
I don't have a dedicated database backup, but I do have snapshots of the entire VM, since the VM runs on ZFS. Snapshots are taken every 12 hours.
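As a complement to whole-VM snapshots, SQLite's online backup API can take a consistent copy of the database even while the server is writing to it, which avoids restoring a file that was snapshotted mid-transaction. A minimal sketch using Python's stdlib (the paths are assumptions based on the bind mount above):

```python
import sqlite3

def backup_sqlite(src_path: str, dest_path: str) -> None:
    """Copy a live SQLite database using the online backup API.

    Unlike a plain file copy, this produces a consistent snapshot
    even if the server writes to the database during the copy.
    """
    src = sqlite3.connect(src_path)
    dest = sqlite3.connect(dest_path)
    src.backup(dest)  # page-by-page online backup (Python 3.7+)
    dest.close()
    src.close()

# Example (hypothetical paths; adjust to your bind mount):
# backup_sqlite("/var/data/bw/db.sqlite3", "/var/data/bw/db-backup.sqlite3")
```

The same effect is available from the sqlite3 CLI via its `.backup` command, so a cron job on the host can snapshot just the database far more often than every 12 hours.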
@mqus commented on GitHub (Feb 9, 2020):
Could you please provide some logs of the bitwarden_rs output [1] (showing startup, client login, etc.), the versions of bitwarden_rs and the clients, which clients you have tried, and which database you are using (I assume SQLite)? Did you try to look into the database file, e.g. with sqlite3 file.sqlite?
To use a backup, you only have to replace the database file, or you can pretty much roll back the snapshot -- but I assume you tried that already? Did you set up a new instance/VM for testing with the iOS app, and what exact steps did you take? Do I understand correctly that you installed bitwarden_rs via Docker in its own VM, i.e. in the Linux VM on FreeNAS, and that this bitwarden_rs-only VM has the snapshots?
It would have been nice if you had filled out the issue template, but without at least the logs there isn't anything we can do right now.
PS:
[1] https://github.com/dani-garcia/bitwarden_rs/wiki/Logging
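The "look into the database file" suggestion above can be made concrete with a few read-only sanity checks: an integrity check, a table listing, and a row count. A sketch in Python's stdlib (the `users` table name matches bitwarden_rs's SQLite schema, but treat it as an assumption and verify against the table list the function returns):

```python
import sqlite3

def inspect_db(path: str) -> dict:
    """Run quick, read-only sanity checks on a bitwarden_rs SQLite file."""
    # mode=ro ensures we cannot accidentally modify the database
    con = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
    cur = con.cursor()
    integrity = cur.execute("PRAGMA integrity_check;").fetchone()[0]
    tables = [r[0] for r in cur.execute(
        "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name;")]
    user_count = None
    if "users" in tables:  # account table in bitwarden_rs's schema (assumed)
        user_count = cur.execute("SELECT count(*) FROM users;").fetchone()[0]
    con.close()
    return {"integrity": integrity, "tables": tables, "user_count": user_count}
```

If `integrity` is anything other than `"ok"`, or the table list is empty, the file the container is reading is likely not the database you think it is (e.g. a wrong or empty bind mount), which would also explain a blank admin page.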
@kevdogg commented on GitHub (Feb 10, 2020):
I tried looking at the logs -- what a mess.
However, I had some success, though nothing that will help diagnose the cause. Snapshots of the VM were taken every 12 hours. I originally tried rolling back to an earlier snapshot without success. Because the VM is a block device running under FreeNAS, which uses ZFS, I was able to clone an earlier snapshot and boot the clone with an Arch Linux install disk. I manually mounted the drives and selectively picked out the database files I needed. After copying them to the /data directory, I restarted bitwarden_rs and things seem to have returned to normal. I wish I understood what went wrong, but at the same time it's good to know that at least one backup strategy works.
Thanks for your help.