[GH-ISSUE #1466] Import times out with a 524 when behind Cloudflare #972

Closed
opened 2026-03-03 02:05:11 +03:00 by kerem · 6 comments

Originally created by @defunctl on GitHub (Mar 6, 2021).
Original GitHub issue: https://github.com/dani-garcia/vaultwarden/issues/1466

Subject of the issue

When importing a Bitwarden JSON file behind Cloudflare, the import is handled as one long POST request, which ultimately times out once it hits Cloudflare's 100-second limit: https://support.cloudflare.com/hc/en-us/articles/115003011431-Error-524-A-timeout-occurred#524error

Deployment environment

  • bitwarden_rs version: docker latest as of this posting.
  • Install method: Docker

  • Clients used: web vault

  • Reverse proxy and version: Nginx via Cloudflare

  • MySQL/MariaDB or PostgreSQL version:

  • Other relevant details:

Steps to reproduce

Note: this also happens when purging or deleting a lot of items

  1. Log in to web vault
  2. Visit Tools > Import Data
  3. Select Bitwarden (json)
  4. Select a bitwarden json file that has around 2k passwords
  5. Click "Import Data"
  6. Wait 100 seconds for the Cloudflare 524 timeout, as processing takes longer than that.

Expected behaviour

The import would succeed, ideally via some kind of queue process or AJAX polling that checks progress, rather than leaving the connection open until the import is complete.
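
A minimal sketch of that flow, assuming a hypothetical `/api/ciphers/import/jobs` endpoint that neither vaultwarden nor upstream Bitwarden actually exposes; it only illustrates the shape of the request pattern:

```python
# Hypothetical queue-and-poll import flow. The /api/ciphers/import/jobs
# endpoints do NOT exist in vaultwarden or upstream Bitwarden; this only
# illustrates the request pattern being suggested.
import time
import requests

BASE = "https://vault.example.com"             # placeholder host
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder token
payload = {"ciphers": [], "folders": []}       # illustrative body only

# Submit the import; the server would enqueue it and return a job id
# immediately instead of holding the connection open.
job = requests.post(f"{BASE}/api/ciphers/import/jobs",
                    json=payload, headers=HEADERS).json()

# Each poll is a short request, so no single connection ever approaches
# Cloudflare's 100-second limit.
while True:
    status = requests.get(f"{BASE}/api/ciphers/import/jobs/{job['id']}",
                          headers=HEADERS).json()
    if status["state"] in ("done", "failed"):
        break
    time.sleep(2)

print("import finished with state:", status["state"])
```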

Actual behaviour

The request hits Cloudflare's 100-second limit, and Cloudflare returns an HTTP 524 error.

Troubleshooting data

When posting to: https://<host>:<port>/api/ciphers/import
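
To confirm it's this single request that exceeds the limit, you can replay and time it; a sketch, assuming the bearer token and request body were captured from the browser's dev tools while reproducing the import:

```python
# Time the import POST to see how long the origin actually takes.
# TOKEN and import-body.json are assumed to be captured from the
# browser's dev tools while reproducing the import; host and port
# are placeholders, as above.
import json
import time
import requests

URL = "https://<host>:<port>/api/ciphers/import"  # placeholder
TOKEN = "<bearer token from an authenticated web vault session>"

with open("import-body.json") as f:
    body = json.load(f)

start = time.monotonic()
resp = requests.post(URL, json=body,
                     headers={"Authorization": f"Bearer {TOKEN}"},
                     timeout=600)  # allow the origin to run past 100 s
print(f"HTTP {resp.status_code} after {time.monotonic() - start:.1f}s")
```

Running it once through Cloudflare (expect the 524 at around 100 seconds) and once directly against the origin separates the proxy timeout from the actual processing time.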

kerem closed this issue 2026-03-03 02:05:11 +03:00

@defunctl commented on GitHub (Mar 7, 2021):

I should also note that it appears to complete after some time regardless of the timeout.

@jjlin commented on GitHub (Mar 7, 2021):

Are you running on very slow hardware or a remote database or something like that? A couple of thousand entries shouldn't normally take over 100 seconds to process. The API behavior is basically dictated by how upstream works, so I don't think this is something we can really fix independently.

@defunctl commented on GitHub (Mar 7, 2021):

Yeah, it's a Synology DS918+ with WD red drives. It takes a lot longer than 100 seconds to process an import/purge of that many records.

Fair enough that it can't be fixed here, though. Do you think I should try opening an issue upstream?

It definitely completes though, because I can hear the NAS drives churning for a couple of minutes pretty loudly and when it stops I know it's done 😂

Thanks for the reply.

@BlackDex commented on GitHub (Mar 7, 2021):

Well, we can't fix the Cloudflare timeout, of course.
And if the hardware is slow to process the request, there's probably not much we can do to speed it up.

Accepting the request and then parsing it in the background doesn't match upstream behavior, so that's not an option either.

And reporting this upstream will probably not help, unless the import takes too long to process on the browser/client side.

@jjlin commented on GitHub (Mar 7, 2021):

I'm not sure if you're agreeing that a "Synology DS918+ with WD red drive" is slow hardware? It doesn't sound that slow per se, but maybe your RAID configuration is such that random disk writes are slow. If you can't fix your write speed, you could probably manually chunk your import file, or not use Cloudflare, at least temporarily. But I don't think upstream is going to be receptive to making a relatively complex change to accommodate someone's relatively unusual configuration.
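
For reference, a minimal sketch of the manual chunking suggested above, assuming an unencrypted Bitwarden JSON export with top-level `folders` and `items` arrays (verify your export's structure first):

```python
# Split an unencrypted Bitwarden JSON export into smaller files so each
# import finishes well under Cloudflare's 100-second limit.
# Assumes top-level "folders" and "items" arrays; adjust if your
# export's structure differs.
import json

CHUNK_SIZE = 500  # items per file; tune to what imports in under 100 s

with open("bitwarden_export.json") as f:
    export = json.load(f)

items = export["items"]
for n, i in enumerate(range(0, len(items), CHUNK_SIZE), start=1):
    chunk = dict(export, items=items[i:i + CHUNK_SIZE])
    out = f"bitwarden_export_part{n}.json"
    with open(out, "w") as f:
        json.dump(chunk, f)
    print(f"wrote {out} ({len(chunk['items'])} items)")
```

Note that the `folders` array is copied into every chunk; depending on how the importer handles duplicate folder names, you may want to keep it only in the first file.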

@defunctl commented on GitHub (Mar 7, 2021):

I was agreeing that it's not a super fast machine; it's powered by an Intel Celeron J3455 quad-core CPU. It could definitely be the RAID configuration: I'm using Synology Hybrid RAID (SHR) on the btrfs file system.

This was more of a user experience concern: the task ultimately does complete, but the user has no way of knowing, and it seemed strange that the request is left open regardless.

Maybe I'll try disabling write caching on the disks or some other configuration tweaks.

It does appear to be a disk issue, though, as CPU usage was low and disk activity increased quite a bit (I could both see it in the monitor and hear it, since this is a loud unit).

I'll close this issue out and report back if anything improved the performance.

Thanks for everyone's time, it's appreciated.
