[PR #783] [CLOSED] Issues with listing collections that contains over 10 000 objects #1640

Closed
opened 2026-03-04 02:01:26 +03:00 by kerem · 0 comments

📋 Pull Request Information

Original PR: https://github.com/s3fs-fuse/s3fs-fuse/pull/783
Author: @pawelmarkowski
Created: 7/2/2018
Status: Closed

Base: master ← Head: master


📝 Commits (3)

  • fc37162 Increase Cache Size
  • 3e2b724 Increase retries to 5 tries
  • a4b1b1d Increase max-keys to 15 thousands because listing of bigger collections is too slow (that should be configurable as run parameter)

📊 Changes

3 files changed (+3 additions, -3 deletions)

View changed files

📝 src/cache.cpp (+1 -1)
📝 src/curl.cpp (+1 -1)
📝 src/s3fs.cpp (+1 -1)

📄 Description

Relevant Issue (if applicable)

We were not able to perform ls in a bucket that contains over 10 000 objects. These objects share a common key prefix: for instance, ls /test/2018/06/02/ lists over 12 000 objects that simulate folders of our products.

Details

We increased the cache size because we faced a deadlock issue. We also changed the max-keys option, which in our opinion should be configurable as a run parameter (if the user does not set max-keys, it should default to 1 000, the S3 default). Our quick fix increases the max-keys value from 1 000 to 15 000 and the number of retries from 3 to 5; these are minor changes that should improve platform performance and stability.


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.
