mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[PR #2673] [CLOSED] Fixed a logic in the multi head request retry count check #2698
📋 Pull Request Information
Original PR: https://github.com/s3fs-fuse/s3fs-fuse/pull/2673
Author: @ggtakec
Created: 5/31/2025
Status: ❌ Closed
Base: master ← Head: fix/retrymulti

📄 Description
Relevant Issue (if applicable)
n/a
Details
Found an issue in the retry limit check for multi head requests and multipart put head requests.
When s3fs executes a command (such as `ls`) on a directory containing many files and subdirectories, the head requests are processed in parallel as a multi head request.
Currently, the retry limit is shared across the entire batch of head requests.
Once the limit is reached while the batch is being processed, no further head requests (those queued after the failing one) are sent at all.
The problem is that if the directory being listed contains an object that cannot be read and its retries exhaust the limit, the user loses access to files and directories whose requests come after it. For example, running `ls` on a directory with 100 files, one of which keeps hitting the retry limit, may return only 80 entries (for example) instead of 99. To avoid this issue, a head request must be allowed to execute at least once even after the retry limit has been reached.
This can be avoided by deleting the first retry upper limit check in the `multi_head_req_threadworker()` and `multipart_put_head_req_threadworker()` functions. (When we previously added the retry upper limit check for multi requests, we did not take head requests like this case into consideration.)
A similar retry upper limit check is also performed in `parallel_get_object_req_threadworker()`, but since that is the parallel download of a single file, the current code is not a problem there.

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.