[GH-ISSUE #1870] s3fs Filesystem Size is 16E with df -h #953

Closed
opened 2026-03-04 01:50:11 +03:00 by kerem · 4 comments
Owner

Originally created by @indychou on GitHub (Jan 24, 2022).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1870

s3fs is working very well on my VM.

It is likely that the reported bucket size is too large, causing errors in some applications: df -h shows the bucket size as 16E.

Is there any way to get a correct bucket size from the df command?

The following is information from my VM:
s3fs version: 1.90
FUSE version: 2.9.2
uname -a: 3.10.0-1160.45.1.el7.x86_64
cat /etc/os-release: CentOS 7.9

/etc/fstab:
mybucket /data fuse.s3fs _netdev,dbglevel=debug,enable_noobj_cache,allow_other,mp_umask=002,use_path_request_style,use_cache=/dev/shm,del_cache,big_writes,multipart_size=100,max_write=131072,max_stat_cache_size=1000000,parallel_count=100,url=https://s3.twcc.ai,nonempty 0 0

kerem 2026-03-04 01:50:11 +03:00
  • closed this issue
  • added the "need info" label

@gaul commented on GitHub (Jan 24, 2022):

s3fs is working as intended. AWS S3 has effectively "infinite" space, so s3fs reports the largest possible value. The S3 API does not expose either a current or a maximum size, whether as a number of objects or as their total size.

Which application cannot handle this value? Have you reported issues against these projects?
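For context (this is a sketch, not part of the original comment): df derives its figures from the statvfs(2) fields the filesystem fills in, so a filesystem that reports the maximum possible block count shows up as 16E. A minimal Python illustration of the powers-of-1024 scaling that df -h applies:

```python
import os

def human_size(nbytes: int) -> str:
    """Scale a byte count the way df -h does: powers of 1024, suffixes up to E."""
    for unit in ("B", "K", "M", "G", "T", "P", "E"):
        if nbytes < 1024 or unit == "E":
            return f"{nbytes:.1f}{unit}"
        nbytes /= 1024

# 2**64 bytes is exactly 16 EiB -- the "16E" seen in the report above.
print(human_size(2**64))  # -> 16.0E

# On a live mount, df's total comes from f_blocks * f_frsize
# ("/data" is the s3fs mount point from the fstab in this issue):
# st = os.statvfs("/data")
# print(human_size(st.f_blocks * st.f_frsize))
```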


@indychou commented on GitHub (Feb 11, 2022):

Thanks to your help, we modified our application and have solved the problem.
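The thread does not record how the application was changed. One plausible application-side guard, assuming the app checks free space via statvfs, is to treat an implausibly large total as "no meaningful limit" rather than a real capacity (the 1 EiB threshold below is an arbitrary illustrative choice, not from this issue):

```python
import os
from typing import Optional

# Hypothetical sentinel threshold: s3fs reports ~16E for S3 buckets,
# so any total at or above 1 EiB is treated as "unlimited".
UNLIMITED_THRESHOLD = 1 << 60  # 1 EiB

def usable_space(path: str) -> Optional[int]:
    """Return free bytes at `path`, or None when the filesystem
    reports a sentinel 'infinite' size."""
    st = os.statvfs(path)
    if st.f_blocks * st.f_frsize >= UNLIMITED_THRESHOLD:
        return None  # bucket-style mount: skip free-space checks
    return st.f_bavail * st.f_frsize
```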


@noaccident commented on GitHub (May 26, 2022):

@indychou I have encountered the same problem. How did you solve it, please?


@gaul commented on GitHub (May 26, 2022):

What is the same problem?
