[PR #2673] [CLOSED] Fixed logic in the multi head request retry count check #2698

Closed
opened 2026-03-04 02:06:51 +03:00 by kerem · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/s3fs-fuse/s3fs-fuse/pull/2673
Author: @ggtakec
Created: 5/31/2025
Status: Closed

Base: master ← Head: fix/retrymulti


📄 Description

Relevant Issue (if applicable)

n/a

Details

Found an issue with the retry limit check for multi head requests and multipart put head requests.

When s3fs executes a command (such as ls) on a directory that contains many files and directories, the head requests are processed in parallel as a multi head request.

Currently, a single retry limit is applied across all of the parallel head requests.
Once that limit is reached while the batch is being processed, no new head requests (for objects after the failing one) are sent.

The problem is that if the directory being listed contains an object that cannot be read and the retry limit is reached, the user becomes unable to read some of the remaining files and directories in that listing.

For example, when a user runs ls on a directory with 100 files and one file keeps failing until the retry limit is reached, the ls result may contain only 80 (for example) entries instead of the expected 99.

To avoid this issue, in the case of a head request it is necessary to allow each head request to be executed at least once, even after the retry limit has been reached.
This is achieved by deleting the up-front retry limit check in the multi_head_req_threadworker() and multipart_put_head_req_threadworker() functions.
(When we previously added the retry limit check for multi requests, we did not take head requests like this case into consideration.)

A similar retry limit check is also performed in parallel_get_object_req_threadworker(), but since that is parallel execution of the transfer of a single file, the current code is fine there: giving up on one part means the whole file operation fails anyway.


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.
