[GH-ISSUE #575] Maximum size - clarification #328

Closed
opened 2026-03-04 01:44:26 +03:00 by kerem · 11 comments
Owner

Originally created by @mohanen on GitHub (May 3, 2017).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/575

The wiki says "Maximum file size=64GB (limited by s3fs, not Amazon)."

Correct me if I am wrong: the max supported size of a single file is 64 GB, not the mounted bucket size.

If so, does the mounted bucket have any size limitation? Say I have 5 TB of objects in a bucket: can it be mounted and used without any size limitation (other than the 64 GB single-file size)?

Thanks and Regards

kerem closed this issue 2026-03-04 01:44:27 +03:00
Author
Owner

@ggtakec commented on GitHub (May 6, 2017):

@mohanen Thanks for reporting.
The size limit description in the man page and wiki was out of date, so I updated it.
I'm closing this issue, but if you find a problem, please reopen it.
Regards,

Author
Owner

@okigan commented on GitHub (Aug 30, 2018):

The description is still very confusing (and leaks implementation details of PUT and multipart uploads).

Which flag does one need to use to get to 5TB objects supported? Conversely which flag limits it to 5GB?

@ggtakec ^

Author
Owner

@gaul commented on GitHub (Aug 30, 2018):

s3fs should allow 5 TB files by default via multipart uploads. If you specify -o nomultipart most providers will only allow 5 GB.
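For context, this can be sanity-checked against S3's documented limits: a single PUT is capped at 5 GB, while a multipart upload allows up to 10,000 parts and a 5 TB object maximum. The sketch below is illustrative arithmetic, not s3fs source code; the 10 MiB part size used as an example is an assumption about a typical multipart chunk size:

```python
# Illustrative arithmetic from S3's documented limits (not s3fs source code):
# - single PUT (the nomultipart path): 5 GiB max per object
# - multipart upload: up to 10,000 parts, 5 TiB max per object
GIB = 1024**3
MAX_PARTS = 10_000
MAX_OBJECT = 5 * 1024 * GIB  # 5 TiB

def max_file_size(part_size_bytes, multipart=True):
    """Largest object uploadable with a fixed part size."""
    if not multipart:
        return 5 * GIB  # single PUT limit
    return min(MAX_PARTS * part_size_bytes, MAX_OBJECT)

# With 10 MiB parts (an assumed typical chunk size) the ceiling is ~97 GiB;
# reaching the full 5 TiB would need parts of at least ~525 MiB.
print(max_file_size(10 * 1024**2) // GIB)    # -> 97
print(max_file_size(512 * 1024**2) // GIB)   # -> 5000
```

So "5 TB by default" assumes the multipart part size is large enough that 10,000 parts cover the file.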

Author
Owner

@okigan commented on GitHub (Aug 30, 2018):

Thanks!

I think this is much more informative than the current documentation.


Author
Owner

@okigan commented on GitHub (Aug 30, 2018):

@gaul @ggtakec

Still running into an error when copying a file that should be supported per the documentation and the description above.

Copying a 120 GB file fails at about 5 GB; see commands and log output:

```
$ time cp /mnt/.../7 7.s3fs
cp: error reading '/mnt/../7': Input/output error
```

```
$ ls -la *.s3fs
-rwxr-xr-x 1 .. .. 5190451200 Aug 30 17:57 7.s3fs
```

syslog:

```
Aug 30 00:16:22 ..-i-0f... s3fs[14652]: ParallelGetObjectRequest(1260): error occuered in multi request(errno=-5).
Aug 30 00:16:22 ..-i-0f.. s3fs[14652]: DownloadWriteCallback(728): write file error(28).
Aug 30 00:16:22 ..-i-0f..s3fs[14652]: DownloadWriteCallback(728): write file error(28).
```

We are running V1.79. Is this related to these issues (which seem like a major limitation for large files)?
https://github.com/s3fs-fuse/s3fs-fuse/issues/533
https://github.com/s3fs-fuse/s3fs-fuse/issues/269
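As an aside, the numeric codes in that syslog excerpt decode to standard errno values, which hints at the cause: errno=-5 is EIO (the I/O error that cp reports), and write file error(28) is ENOSPC, i.e. no space left on device, suggesting the local cache/temporary disk filled up. A quick check:

```python
import errno
import os

# Decode the numeric codes from the syslog excerpt above.
for code in (5, 28):
    print(code, errno.errorcode[code], os.strerror(code))
# 5  -> EIO    ("error occuered in multi request(errno=-5)")
# 28 -> ENOSPC ("write file error(28)": local disk likely out of space)
```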

Author
Owner

@nikaro commented on GitHub (Jan 2, 2019):

And what about the maximum "mounted bucket size"? Currently, under Ubuntu Bionic, `df` reports a size of 256T for the s3fs mount point. What if I have more than 256T of data?

Author
Owner

@kollyma commented on GitHub (Jan 17, 2019):

@nikaro: thanks for the question. I would also be interested in the maximum "mounted bucket size".
How does it scale when S3 buckets are mounted with s3fs as POSIX file systems? Where are the bottlenecks?

Author
Owner

@ggtakec commented on GitHub (Jan 20, 2019):

@nikaro s3fs implements the FUSE statfs interface, but neither s3fs nor S3 can determine the free size of a bucket.
Therefore, s3fs always reports the free size as 256TB.
If the total object size in the bucket exceeds 256TB, the df command may display usage above 100%.
However, I think this does not affect the behavior of s3fs or S3.
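The numbers df shows come from the statfs/statvfs call that s3fs answers, and they can be inspected directly. A minimal sketch (the mount path is illustrative; on an actual s3fs mount point the reported sizes would reflect the fixed 256TB figure described above, not real bucket usage):

```python
import os

def fs_sizes(path):
    """Return (total, free, used) in bytes -- the same numbers df reports."""
    st = os.statvfs(path)               # FUSE routes this to s3fs's statfs
    total = st.f_blocks * st.f_frsize   # reported capacity
    free = st.f_bfree * st.f_frsize     # reported free space
    return total, free, total - free

# On a real s3fs mount, e.g. fs_sizes("/mnt/bucket"), these values come from
# the hard-coded statfs reply, not from anything S3 knows about the bucket.
total, free, used = fs_sizes("/")
print(total, free, used)
```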

Author
Owner

@kollyma commented on GitHub (Jan 21, 2019):

@ggtakec: thanks for your reply. The S3 protocol allows an object key of at most 1024 bytes and an object size of at most 5TB, so the theoretical storage capacity is huge.
If mounted with s3fs, do we face other limitations (OS-level or POSIX)?

Author
Owner

@ggtakec commented on GitHub (Jan 21, 2019):

@kollyma
If s3fs deals with huge objects, you need local disk space.
If an object is larger than the free disk capacity, you will have to run s3fs configured not to use the cache.
Even then, you will need at least enough free space to hold one multipart chunk.
In that case, since s3fs does not use the cache, performance is not very good.

Author
Owner

@threadfly commented on GitHub (Jun 28, 2019):

@ggtakec I have the same problem. The upload speed tested with the dd tool is only 40 MB/s. I modified the source code to disable the cache, and the result was even slower.
