[GH-ISSUE #603] Feature request: Supporting handling of S3 SlowDown response #340

Closed
opened 2026-03-04 01:44:32 +03:00 by kerem · 3 comments
Owner

Originally created by @colakong on GitHub (May 17, 2017).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/603

Additional Information

  • Version of s3fs being used (s3fs --version)

  • v1.82

  • Version of fuse being used (pkg-config --modversion fuse)

  • 2.8.4

  • System information (uname -a)

  • Linux ac80f64a48ed 4.4.0-77-generic #98-Ubuntu SMP Wed Apr 26 08:34:02 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

  • Distro (cat /etc/issue)

  • Ubuntu 16.04.2 LTS

  • s3fs command line used (if applicable)

s3fs c3tp-perf /mnt/s3_mount -f -o passwd_file=s3_cred -o url=https://ddn-wos-s3/ -o use_cache=/tmp/s3fs_cache -o del_cache -o use_path_request_style -o no_check_certificate -o sigv2 -o allow_other -o mp_umask=0000 -o umask=0000 -d -d
  • /etc/fstab entry (if applicable):
# From /proc/mounts
s3fs /mnt/s3_mount fuse.s3fs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0

Details about issue

If an S3 endpoint sends s3fs a SlowDown response (see http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html) for an operation, s3fs sees an HTTP 503 status code and immediately retries the operation until the retries limit is reached.

The feature would have s3fs throttle its request rate (then retry the operation) after receiving a SlowDown response.
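For context, a throttled request comes back from S3 with HTTP 503 and an XML error body whose Code element is SlowDown, along these lines (field values are illustrative, per the AWS error-responses documentation linked above):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>SlowDown</Code>
  <Message>Please reduce your request rate.</Message>
  <RequestId>...</RequestId>
  <HostId>...</HostId>
</Error>
```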

kerem closed this issue 2026-03-04 01:44:32 +03:00
Author
Owner

@gaul commented on GitHub (May 17, 2017):

@colakong Which s3fs operations did you execute such that you saw the SlowDown error code?

Author
Owner

@colakong commented on GitHub (May 17, 2017):

@andrewgaul Constant GetObject requests over a long period of time (several hours), due to the way that some of the software using the mounted filesystem was working.

Author
Owner

@sqlbot commented on GitHub (May 18, 2017):

This sounds like an excellent idea. 503 and 500 errors should be retried less aggressively. Note that there are two 503 error codes with the same implication -- something on S3 is being overwhelmed. 500 is also a candidate, as I have seen it occur during a batch of many writes of small files.

501 errors are unlikely to succeed if retried, but they are also pretty unlikely to occur with s3fs, since we aren't typically going to try impossible things.

So any 5XX might be a candidate for exponential backoff of, say, 2^(n-1) seconds plus a random 0-0.999 seconds of jitter.

http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
