Mirror of https://github.com/s3fs-fuse/s3fs-fuse.git, synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #575] Maximum size - clarification #328
Originally created by @mohanen on GitHub (May 3, 2017).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/575
The wiki says "Maximum file size=64GB (limited by s3fs, not Amazon)."
Correct me if I am wrong: the maximum supported size of a single file is 64 GB, not the size of the mounted bucket.
If so, does the mounted bucket have any size limitation? For example, if I have 5 TB of objects in a bucket, can it be mounted and used without any size limits (other than the 64 GB single-file size)?
Thanks and Regards
@ggtakec commented on GitHub (May 6, 2017):
@mohanen Thanks for reporting.
The size limit description in the man page and wiki was out of date, so I updated it.
I'm closing this issue, but if you find a problem, please reopen it.
Regards,
@okigan commented on GitHub (Aug 30, 2018):
The description is still very confusing (and leaks implementation details of PUT and multi-part uploads)
Which flag does one need to use to get to 5TB objects supported? Conversely which flag limits it to 5GB?
@ggtakec ^
@gaul commented on GitHub (Aug 30, 2018):
s3fs should allow 5 TB files by default via multipart uploads. If you specify
-o nomultipart, most providers will only allow 5 GB.
@okigan commented on GitHub (Aug 30, 2018):
Thanks!
I think this is much more informative than the current documentation.
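gaul's two cases above can be written out as mount commands. This is only a sketch: the bucket name "mybucket" and mount point "/mnt/s3" are placeholders, and the exact provider limits are as described in the comment above.

```shell
# Default: multipart uploads are enabled, so single files up to 5 TB
# are possible (subject to the provider's own limits).
s3fs mybucket /mnt/s3

# With multipart disabled, every file becomes a single PUT request,
# which most providers cap at 5 GB.
s3fs mybucket /mnt/s3 -o nomultipart
```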
@okigan commented on GitHub (Aug 30, 2018):
@gaul @ggtakec
I'm still running into an error when copying a file that should be supported per the documentation and the description above.
Copying a 120 GB file fails at about 5 GB; see commands and log output:
syslog:
We are running v1.79; is that related to these issues (they seem like a major limitation for large files)?
https://github.com/s3fs-fuse/s3fs-fuse/issues/533
https://github.com/s3fs-fuse/s3fs-fuse/issues/269
@nikaro commented on GitHub (Jan 2, 2019):
And what about the maximum "mounted bucket size"? Currently, under Ubuntu Bionic,
df reports a size of 256T for the s3fs mount point. What if I have more than 256T of data?
@kollyma commented on GitHub (Jan 17, 2019):
@nikaro: thanks for the question. I would also be interested about the maximum "mounted bucket size".
How does it scale when S3 buckets are mounted with s3fs into POSIX file systems? Where are the bottlenecks?
@ggtakec commented on GitHub (Jan 20, 2019):
@nikaro s3fs implements the FUSE statfs interface, but s3fs and S3 cannot determine the free size of a bucket.
Therefore, s3fs always returns the free size as 256TB.
If the total object size in the bucket exceeds 256TB, the df command may display usage above 100%.
However, I think that it does not affect the behavior of s3fs or S3.
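ggtakec's explanation can be sketched numerically. The following is a hypothetical statfs-style handler, not s3fs's actual code: it just shows how reporting a fixed 256 TB as both total and free size produces the df output nikaro observed.

```python
# Hypothetical sketch (NOT s3fs's actual implementation) of a FUSE
# statfs handler that reports a fixed capacity, since S3 exposes no
# bucket size limit that s3fs could query.
BLOCK_SIZE = 4096
REPORTED_BYTES = 256 * 1024**4  # the fixed 256 TB that df displays

def statfs_sketch():
    blocks = REPORTED_BYTES // BLOCK_SIZE
    return {
        "f_bsize": BLOCK_SIZE,   # block size that df multiplies by
        "f_blocks": blocks,      # total blocks -> df "Size" column
        "f_bfree": blocks,       # free blocks: always "all free"
        "f_bavail": blocks,      # free blocks for unprivileged users
    }

# If the bucket actually holds more than 256 TB, "used" exceeds the
# reported total, which is why df can show usage above 100%.
```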
@kollyma commented on GitHub (Jan 21, 2019):
@ggtakec: thanks for your reply. The s3 protocol offers an object key of max 1024 bytes and an object size of max 5TB. The theoretical storage capacity is huge (8^1024 * 5TB).
If mounted with s3fs, do we face other limitations (OS-level or POSIX)?
@ggtakec commented on GitHub (Jan 21, 2019):
@kollyma
If s3fs deals with huge objects, you need local disk space.
If an object is larger than the free disk capacity, you will have to run s3fs configured not to use the cache.
Even then, you will need at least enough free space to hold one multipart part.
In this case, since s3fs does not use caches, performance is not very good.
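ggtakec's advice can be sketched as mount options. This is a hypothetical example: the bucket name, mount point, and cache path are placeholders, and the sizes shown are illustrative; use_cache and multipart_size are documented s3fs options.

```shell
# Local file cache enabled: needs free disk at least as large as the
# objects you read and write.
s3fs mybucket /mnt/s3 -o use_cache=/var/cache/s3fs

# No cache (the default when use_cache is unset): temp space is still
# needed per multipart part, so a smaller part size reduces the
# minimum free disk required.
s3fs mybucket /mnt/s3 -o multipart_size=10   # part size in MB
```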
@threadfly commented on GitHub (Jun 28, 2019):
@ggtakec I have the same problem. The upload speed tested with the dd tool is only 40MB/s. I modified the source code to disable the cache, and the result was even slower.
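For reference, threadfly's dd measurement can be reproduced with commands like the following. This is a sketch under assumptions: MOUNTPOINT should point at an s3fs mount, and it defaults to /tmp here only so the commands can be dry-run against a local disk.

```shell
# Hypothetical throughput check; set MOUNTPOINT to your s3fs mount.
MOUNTPOINT="${MOUNTPOINT:-/tmp}"

# Write 16 MiB of zeros; dd prints the transfer rate on its last
# status line. Against a real mount, use a much larger count
# (e.g. count=1024 for 1 GiB) to get a stable throughput number.
dd if=/dev/zero of="$MOUNTPOINT/s3fs_speedtest" bs=1048576 count=16 2>&1 | tail -n 1

# Clean up the test file.
rm -f "$MOUNTPOINT/s3fs_speedtest"
```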