[GH-ISSUE #2111] Size of mounted filesystem is 16 EiB #1075

Closed
opened 2026-03-04 01:51:11 +03:00 by kerem · 11 comments
Owner

Originally created by @ghost on GitHub (Feb 14, 2023).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2111

Additional Information

Version of s3fs being used (s3fs --version)

1.91

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse or dpkg -s fuse)

2.9.2

Minio version

RELEASE.2023-01-25T00-19-54Z

Kernel information (uname -r)

3.10.0-1160.81.1.el7.x86_64

GNU/Linux Distribution, if applicable (cat /etc/os-release)

CentOS 7 (host on which I mount the bucket)
AlmaLinux 8.7 (host running minio server)

How to run s3fs, if applicable

command line:

s3fs my-second-bucket /mnt/bucket -f -o passwd_file=${HOME}/.passwd-s3fs -o url=http://10.0.5.17:9000/ -o use_path_request_style -o dbglevel=info -f -o curldbg

s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)

2023-02-13T16:21:33.957Z [INF] s3fs version 1.91(e715b77) : s3fs -f -o passwd_file=/root/.passwd-s3fs -o url=http://10.0.5.17:9000 -o use_path_request_style -o dbglevel=info -f -o curldbg my-second-bucket /mnt/bucket
2023-02-13T16:21:33.960Z [CRT] s3fs_logger.cpp:LowSetLogLevel(239): change debug level from [CRT] to [INF]
2023-02-13T16:21:33.960Z [INF]     s3fs.cpp:set_mountpoint_attribute(4372): PROC(uid=0, gid=0) - MountPoint(uid=993, gid=991, mode=40777)
2023-02-13T16:21:33.962Z [INF] curl.cpp:InitMimeType(431): Loaded mime information from /etc/mime.types
2023-02-13T16:21:33.962Z [INF] fdcache_stat.cpp:CheckCacheFileStatTopDir(78): The path to cache top dir is empty, thus not need to check permission.
2023-02-13T16:21:33.962Z [INF] threadpoolman.cpp:StopThreads(195): Any threads are running now, then nothing to do.
2023-02-13T16:21:33.962Z [INF]       threadpoolman.cpp:Worker(76): Start worker thread in ThreadPoolMan.
2023-02-13T16:21:33.962Z [INF]       threadpoolman.cpp:Worker(76): Start worker thread in ThreadPoolMan.
2023-02-13T16:21:33.962Z [INF]       threadpoolman.cpp:Worker(76): Start worker thread in ThreadPoolMan.
2023-02-13T16:21:33.962Z [INF]       threadpoolman.cpp:Worker(76): Start worker thread in ThreadPoolMan.
2023-02-13T16:21:33.962Z [INF]       threadpoolman.cpp:Worker(76): Start worker thread in ThreadPoolMan.
2023-02-13T16:21:33.963Z [CRT] s3fs_cred.cpp:VersionS3fsCredential(60): Check why built-in function was called, the external credential library must have VersionS3fsCredential function.
2023-02-13T16:21:33.963Z [INF] s3fs.cpp:s3fs_init(4094): init v1.91(commit:e715b77) with OpenSSL, credential-library(built-in)
2023-02-13T16:21:33.963Z [INF] s3fs.cpp:s3fs_check_service(4238): check services.
2023-02-13T16:21:33.963Z [INF]       curl.cpp:CheckBucket(3667): check a bucket.
2023-02-13T16:21:33.963Z [WAN] curl.cpp:ResetHandle(1969): The CURLOPT_SSL_ENABLE_ALPN option could not be unset. S3 server does not support ALPN, then this option should be disabled to maximize performance. you need to use libcurl 7.36.0 or later.
2023-02-13T16:21:33.963Z [WAN] curl.cpp:ResetHandle(1972): The S3FS_CURLOPT_KEEP_SENDING_ON_ERROR option could not be set. For maximize performance you need to enable this option and you should use libcurl 7.51.0 or later.
2023-02-13T16:21:33.963Z [INF]       curl_util.cpp:prepare_url(257): URL is http://10.0.5.17:9000/my-second-bucket/
2023-02-13T16:21:33.963Z [INF]       curl_util.cpp:prepare_url(290): URL changed is http://10.0.5.17:9000/my-second-bucket/
2023-02-13T16:21:33.963Z [INF]       curl.cpp:insertV4Headers(2886): computing signature [GET] [/] [] []
2023-02-13T16:21:33.963Z [INF]       curl_util.cpp:url_to_host(334): url is http://10.0.5.17:9000
2023-02-13T16:21:33.963Z [CURL DBG] * About to connect() to 10.0.5.17 port 9000 (#0)
2023-02-13T16:21:33.963Z [CURL DBG] *   Trying 10.0.5.17...
2023-02-13T16:21:33.964Z [CURL DBG] * Connected to 10.0.5.17 (10.0.5.17) port 9000 (#0)
2023-02-13T16:21:33.964Z [CURL DBG] > GET /my-second-bucket/ HTTP/1.1
2023-02-13T16:21:33.964Z [CURL DBG] > User-Agent: s3fs/1.91 (commit hash e715b77; OpenSSL)
2023-02-13T16:21:33.964Z [CURL DBG] > Accept: */*
2023-02-13T16:21:33.964Z [CURL DBG] > Authorization: AWS4-HMAC-SHA256 Credential=HnyUr0Ujw8g3uFoy/20230213/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=78b7f1fabf044112e2ccb5f101ea97d614871716c76ec6eb95fbf03b36a64324
2023-02-13T16:21:33.964Z [CURL DBG] > host: 10.0.5.17:9000
2023-02-13T16:21:33.964Z [CURL DBG] > x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
2023-02-13T16:21:33.964Z [CURL DBG] > x-amz-date: 20230213T162133Z
2023-02-13T16:21:33.964Z [CURL DBG] >
2023-02-13T16:21:33.965Z [CURL DBG] < HTTP/1.1 200 OK
2023-02-13T16:21:33.965Z [CURL DBG] < Accept-Ranges: bytes
2023-02-13T16:21:33.965Z [CURL DBG] < Content-Length: 265
2023-02-13T16:21:33.965Z [CURL DBG] < Content-Security-Policy: block-all-mixed-content
2023-02-13T16:21:33.965Z [CURL DBG] < Content-Type: application/xml
2023-02-13T16:21:33.965Z [CURL DBG] < Server: MinIO
2023-02-13T16:21:33.965Z [CURL DBG] < Strict-Transport-Security: max-age=31536000; includeSubDomains
2023-02-13T16:21:33.965Z [CURL DBG] < Vary: Origin
2023-02-13T16:21:33.965Z [CURL DBG] < Vary: Accept-Encoding
2023-02-13T16:21:33.965Z [CURL DBG] < X-Amz-Request-Id: 17436EC88DED1FC5
2023-02-13T16:21:33.965Z [CURL DBG] < X-Content-Type-Options: nosniff
2023-02-13T16:21:33.965Z [CURL DBG] < X-Xss-Protection: 1; mode=block
2023-02-13T16:21:33.965Z [CURL DBG] < Date: Mon, 13 Feb 2023 16:21:33 GMT
2023-02-13T16:21:33.965Z [CURL DBG] <
2023-02-13T16:21:33.965Z [CURL DBG] * Connection #0 to host 10.0.5.17 left intact
2023-02-13T16:21:33.965Z [INF]       curl.cpp:RequestPerform(2514): HTTP response code 200
2023-02-13T16:21:41.320Z [INF] s3fs.cpp:s3fs_destroy(4138): destroy

Details about issue

Hello, I am experiencing a problem similar to issue #1870.

I am mounting a bucket with s3fs from a private single-node MinIO server.

The command I am using is:

s3fs my-second-bucket /mnt/bucket -f -o passwd_file=${HOME}/.passwd-s3fs -o url=http://10.0.5.17:9000/ -o use_path_request_style -o dbglevel=info -f -o curldbg

This doesn't throw any errors (see output above).
However, df -h shows a size of 16 EiB for the filesystem, while the MinIO server as a whole has a capacity of 33 GB.

This is problematic for me, since the software I am using (Puppet) cannot handle such a large number (16 EiB).

Is the size set to 16 EiB by design or is there a way to get the correct value?

Thanks in advance!

kerem closed this issue 2026-03-04 01:51:11 +03:00

@gaul commented on GitHub (Feb 15, 2023):

Could you file a bug with Puppet about how they handle large volume sizes? One workaround would be to write a simple LD_PRELOAD shim that changes the results of statvfs.


@ghost commented on GitHub (Feb 16, 2023):

Thank you for your answer and suggested workaround.

Unfortunately, it doesn't look like the max-integer limit in Puppet will be fixed anytime soon.

I was wondering whether getting 16 EiB when mounting a bucket from a private MinIO server with ~30 GB of disk space really is the desired behaviour.

Do you have some insights about this?

Thanks!


@gaul commented on GitHub (Feb 17, 2023):

s3fs just reports the maximum possible size. It does not actually compute the real volume size since this could be expensive. If you want a local workaround, just change stbuf->f_blocks in s3fs_statfs to any value you like and recompile s3fs:

https://github.com/s3fs-fuse/s3fs-fuse/blob/master/src/s3fs.cpp#L2803

Could you share the Puppet bug report?


@ghost commented on GitHub (Feb 17, 2023):

Thank you! I will recompile the code as you suggested!

Here is the old puppet issue about large integers: https://tickets.puppetlabs.com/projects/FACT/issues/FACT-1732?filter=allopenissues .


@p3lim commented on GitHub (Mar 1, 2023):

Could this perhaps be exposed as an option, so we don't have to recompile the software on every system or use separate binaries for separate buckets just to get accurate metrics?


@michaelsmoody commented on GitHub (Mar 3, 2023):

Ah, Puppet... the gift that I moved on from.

@p3lim Hmm. A better idea might be a "use only signed integers" option, for platforms that are older or simply don't support larger ints. While it's not terribly common, it might work. I doubt anyone else runs into the edge case, but ¯\_(ツ)_/¯

I confirmed that Puppet limits values to 64-bit signed integers and double-precision floats. Perhaps someone wants to submit a PR that limits integers to 64-bit signed, behind a command-line option (IF that's acceptable to the dev team, and hell, wouldn't break everything).


@p3lim commented on GitHub (Mar 3, 2023):

We ran into this issue: we can't get accurate metrics from another system using the s3fs directory, since it reports an incorrect max capacity, and we need that figure to be accurate. If the software can't determine it on its own, then we should be able to specify it ourselves without recompiling the software and maintaining multiple binaries for each bucket. Hence the suggestion to make it a configurable option.


@ggtakec commented on GitHub (Mar 26, 2023):

@balducciatix @michaelsmoody @p3lim
Could this problem be circumvented by calculating the maximum size of s3fs-mounted filesystems as a signed 64-bit value?

As @gaul already explained, s3fs does not report the actual maximum/used/free sizes when responding to a filesystem state request (statvfs); strictly speaking, they cannot be calculated.
Therefore, s3fs reports the maximum size and free size as the full unsigned 64-bit byte range (which is what the df command then shows).

This is because it is difficult or impossible to obtain the currently used size, available size, and maximum size from the remote S3 server.
To determine the number of bytes used, information on every existing object would have to be collected, which takes a lot of time.

If this problem can be circumvented by simply changing the maximum size to a signed value, the fix would not be difficult.
(Would adding an option for signed 64-bit calculation solve this issue?)


@ggtakec commented on GitHub (Apr 23, 2023):

@balducciatix @michaelsmoody @p3lim
I merged the PR from @OttaviaB, which adds a bucket_size option.
Please try it; it may solve this issue.
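For anyone finding this later, usage would look roughly like the command from this issue with the new option added. The exact accepted size syntax (e.g. 33GB vs 33GiB) is an assumption here and should be checked against the man s3fs of a release that contains the option:

```shell
s3fs my-second-bucket /mnt/bucket \
    -o passwd_file=${HOME}/.passwd-s3fs \
    -o url=http://10.0.5.17:9000/ \
    -o use_path_request_style \
    -o bucket_size=33GB
```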


@michaelsmoody commented on GitHub (May 10, 2023):

This appears to solve the issue on a few tests (though I wasn't specifically having it). Any chance of a new release tag in the near future? The last "release" was 14 months ago, and we generally rely upon tagged releases, as they're in our upstream repo pre-packaged (RPM/DEB).


@ggtakec commented on GitHub (May 10, 2023):

@michaelsmoody
I would like to make a new release after the ongoing PRs are sorted out.
I apologize for the delay of the new version; please wait a little longer.
(I will follow up on the new release in #2102.)

I will close this issue; if you still have problems, please reopen it.
