mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #603] Feature request: Supporting handling of S3 SlowDown response #340
Originally created by @colakong on GitHub (May 17, 2017).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/603
Additional Information
- Version of s3fs being used (s3fs --version): v1.82
- Version of fuse being used (pkg-config --modversion fuse): 2.8.4
- System information (uname -a): Linux ac80f64a48ed 4.4.0-77-generic #98-Ubuntu SMP Wed Apr 26 08:34:02 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
- Distro (cat /etc/issue): Ubuntu 16.04.2 LTS
- s3fs command line used (if applicable):
Details about issue
If an S3 endpoint sends s3fs a SlowDown response for an operation, s3fs sees an HTTP 503 status code and immediately retries the operation until the retries limit is reached. The feature would have s3fs throttle its request rate (then retry the operation) after receiving a SlowDown response.
@gaul commented on GitHub (May 17, 2017):
@colakong Which s3fs operations did you execute such that you saw the SlowDown error code?

@colakong commented on GitHub (May 17, 2017):
@andrewgaul Constant GetObject requests over a long period of time (several hours), due to the way that some of the software using the mounted filesystem was working.

@sqlbot commented on GitHub (May 18, 2017):
This sounds like an excellent idea. 503 and 500 errors should retry less aggressively. Note that there are two 503 errors with the same implications -- something on S3 is being overwhelmed. 500 also is a candidate, as I have seen this one during a batch of many writes of small files.
501 errors are unlikely to succeed if retried, but they are also pretty unlikely to occur with s3fs, since we aren't typically going to try impossible things.
So any 5XX might be a candidate for exponential backoff of, say, 2^(n-1) seconds plus a random 0 to 0.999 seconds.
http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html