mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 21:35:58 +03:00
[GH-ISSUE #2111] Size of mounted filesystem is 16 EiB #1075
Originally created by @ghost on GitHub (Feb 14, 2023).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2111
Additional Information
Version of s3fs being used (`s3fs --version`): 1.91
Version of fuse being used (`pkg-config --modversion fuse`, `rpm -qi fuse`, or `dpkg -s fuse`): 2.9.2
Minio version: RELEASE.2023-01-25T00-19-54Z
Kernel information (`uname -r`): 3.10.0-1160.81.1.el7.x86_64
GNU/Linux Distribution, if applicable (`cat /etc/os-release`): CentOS 7 (host on which I mount the bucket); AlmaLinux 8.7 (host running minio server)
How to run s3fs, if applicable:
command line:
s3fs syslog messages (`grep s3fs /var/log/syslog`, `journalctl | grep s3fs`, or s3fs outputs):
Details about issue
Hello, I am experiencing a problem similar to issue #1870.
I am mounting a bucket with s3fs from a private single-node MinIO server.
The command I am using is:
This doesn't throw any errors (see output above).
However, `df -h` shows a size of 16 EiB for the filesystem, while the MinIO server as a whole has a size of 33G.
This is problematic for me, since the software I am using (Puppet) cannot handle such a large number (16 EiB).
Is the size set to 16 EiB by design or is there a way to get the correct value?
Thanks in advance!
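For context, 16 EiB is exactly 2^64 bytes, the largest size an unsigned 64-bit byte count can express: `df` multiplies the `f_frsize` and `f_blocks` fields of the `statvfs` result, and a maxed-out block count yields 16 EiB. A quick check (the 4096-byte fragment size and the function name here are illustrative assumptions, not taken from s3fs):

```cpp
#include <cstdint>

// How df arrives at 16 EiB: with frsize * blocks pushed to the top of
// the unsigned 64-bit range, the reported total is 2^64 bytes = 16 EiB.
// (The 4096-byte fragment size is an illustrative assumption.)
double reported_eib(uint64_t frsize) {
    const uint64_t blocks = UINT64_MAX / frsize;   // "as many blocks as fit"
    const double eib = 1152921504606846976.0;      // 1 EiB = 2^60 bytes
    return static_cast<double>(blocks) * frsize / eib;
}
```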
@gaul commented on GitHub (Feb 15, 2023):
Could you file a bug with Puppet about how they handle large volume sizes? One workaround you could make is to write a simple `LD_PRELOAD` shim that changes the results of `statvfs`.
@ghost commented on GitHub (Feb 16, 2023):
Thank you for your answer and suggested workaround.
Unfortunately it doesn't look like the max integer limit in puppet will be solved anytime soon.
I was wondering whether getting 16 EiB when mounting a bucket from a private MinIO server with ~30 GB of disk space really is the desired behaviour.
Do you have some insights about this?
Thanks!
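For illustration, the `LD_PRELOAD` approach @gaul suggests could be sketched roughly as below. This is an assumption-laden sketch, not part of s3fs: the 32 GiB cap, the file name, and the `clamp_to_cap` helper are all hypothetical choices.

```cpp
// Hypothetical LD_PRELOAD shim (not part of s3fs): intercepts statvfs()
// and clamps the reported counts so frsize * blocks never exceeds a cap.
// Build (sketch): g++ -shared -fPIC fsclamp.cpp -o libfsclamp.so
// Use   (sketch): LD_PRELOAD=./libfsclamp.so df -h /path/to/mountpoint
#include <dlfcn.h>
#include <sys/statvfs.h>
#include <cstdint>

static const uint64_t kCapBytes = 32ULL * 1024 * 1024 * 1024; // 32 GiB (arbitrary)

// Clamp all block counts so the totals stay within kCapBytes.
void clamp_to_cap(struct statvfs* st) {
    if (st->f_frsize == 0) return;
    const uint64_t max_blocks = kCapBytes / st->f_frsize;
    if (st->f_blocks > max_blocks) st->f_blocks = max_blocks;
    if (st->f_bfree  > max_blocks) st->f_bfree  = max_blocks;
    if (st->f_bavail > max_blocks) st->f_bavail = max_blocks;
}

// noexcept matches glibc's __THROW on the statvfs declaration.
extern "C" int statvfs(const char* path, struct statvfs* buf) noexcept {
    using real_fn = int (*)(const char*, struct statvfs*);
    static real_fn real = reinterpret_cast<real_fn>(dlsym(RTLD_NEXT, "statvfs"));
    const int rc = real(path, buf);
    if (rc == 0) clamp_to_cap(buf);
    return rc;
}
```

A real shim would likely also need to cover `fstatvfs` (and, on some systems, `statvfs64`), since tools differ in which entry point they call.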
@gaul commented on GitHub (Feb 17, 2023):
s3fs just reports the maximum possible size. It does not actually compute the real volume size since this could be expensive. If you want a local workaround, just change `stbuf->f_blocks` in `s3fs_statfs` to any value you like and recompile s3fs:
https://github.com/s3fs-fuse/s3fs-fuse/blob/master/src/s3fs.cpp#L2803
Could you share the Puppet bug report?
@ghost commented on GitHub (Feb 17, 2023):
Thank you! I will recompile the code as you suggested!
Here is the old puppet issue about large integers: https://tickets.puppetlabs.com/projects/FACT/issues/FACT-1732?filter=allopenissues .
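To make the recompile-and-patch route concrete: in a FUSE filesystem, the numbers `df` prints come from the statfs callback filling a `struct statvfs`. A schematic callback that hard-codes a fixed volume size might look like the sketch below; this is not the actual s3fs source, and the 33 GiB figure, field values, and function name are illustrative.

```cpp
#include <sys/statvfs.h>
#include <cstdint>

// Schematic FUSE statfs callback (illustrative, not the real s3fs code):
// instead of the "maximum possible" values, report a fixed 33 GiB volume.
int fixed_size_statfs(const char* /*path*/, struct statvfs* stbuf) {
    const uint64_t total_bytes = 33ULL * 1024 * 1024 * 1024; // pretend 33 GiB
    stbuf->f_bsize   = 4096;                        // preferred I/O block size
    stbuf->f_frsize  = 4096;                        // fragment size df multiplies by
    stbuf->f_blocks  = total_bytes / stbuf->f_frsize;
    stbuf->f_bfree   = stbuf->f_blocks;             // no real "used" count available
    stbuf->f_bavail  = stbuf->f_blocks;
    stbuf->f_namemax = 255;
    return 0;
}
```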
@p3lim commented on GitHub (Mar 1, 2023):
could this perhaps be exposed as an option, so we don't have to recompile the software on every system, or use separate binaries for separate buckets, just to get accurate metrics?
@michaelsmoody commented on GitHub (Mar 3, 2023):
Ah Puppet....the gift that I moved on from....
@p3lim Hmm. A better idea might be a "use only signed integers" option, for platforms that are older or simply don't support larger ints. While it's not terribly common, it might work. I doubt anyone else runs into the edge case, but ¯\_(ツ)_/¯
I confirmed that Puppet limits to 64-bit signed integers and double precision float. Perhaps someone wants to submit a PR that limits integers to 64-bit signed with an option on the command line (IF that's acceptable to the dev team, and hell, wouldn't break everything).
@p3lim commented on GitHub (Mar 3, 2023):
We ran into this issue, we can't get accurate metrics from another system using the s3fs directory since it reports incorrect max capacity, we need that to be accurate. If the software can't figure it out on its own then we should be able to specify it ourselves without having to recompile the software and using multiple binaries for each bucket. Hence the suggestion to make it a configurable option.
@ggtakec commented on GitHub (Mar 26, 2023):
@balducciatix @michaelsmoody @p3lim
Could this problem be circumvented by calculating the maximum size of s3fs mounted filesystems in signed 64 bit?
As @gaul already explained, s3fs does not report the actual max/used/free sizes when responding to a filesystem state request (`statvfs`); strictly speaking, they cannot be calculated.
Therefore, s3fs reports the maximum size and free size as the full unsigned 64-bit byte range (e.g. in the `df` output).
This is because it is not possible, or at least difficult, to get the currently used size, available size, maximum size, etc. from the remote S3 server.
In order to obtain the number of bytes used, information on all existing files would have to be collected, which takes a lot of time.
I think that if this problem can be circumvented by simply changing the maximum size to signed, the fix would not be difficult.
(Would adding an option for calculation with signed 64 bit solve this issue?)
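For reference on the signed 64-bit idea: clamping the reported size to `INT64_MAX` would make `df` show just under 8 EiB instead of 16 EiB, which is still huge but representable by tools that parse the value into a signed 64-bit integer. A small sketch (the function name is hypothetical):

```cpp
#include <cstdint>

// The signed 64-bit maximum, 2^63 - 1 bytes, is just under 8 EiB --
// half of the 16 EiB that an unsigned 64-bit byte count can express.
double signed_max_eib() {
    const double eib = 1152921504606846976.0; // 1 EiB = 2^60 bytes
    return static_cast<double>(INT64_MAX) / eib;
}
```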
@ggtakec commented on GitHub (Apr 23, 2023):
@balducciatix @michaelsmoody @p3lim
Merged the PR from @OttaviaB.
This adds a `bucket_size` option.
Please try the `bucket_size` option, as it may solve this issue.
@michaelsmoody commented on GitHub (May 10, 2023):
This appears to solve the issue on a few tests (though I wasn't specifically having it). Any chance of a new release tag in the near future? The last "release" was 14 months ago, and we generally rely upon tagged releases, as they're in our upstream repo pre-packaged (RPM/DEB).
@ggtakec commented on GitHub (May 10, 2023):
@michaelsmoody
I would like to make new release after the ongoing PR is sorted out.
I apologize for the delay of the new version, but please wait a little longer.
(I will follow about new release at #2102.)
This issue will be closed; if you still have problems, please reopen it.