mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #315] Is there a file size limitation? #162
Originally created by @Phantom-Studio on GitHub (Dec 7, 2015).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/315
Hi all,
As the title suggests, I wanted to know if there is a file size limitation.
On the GitHub page of s3fs-fuse I can read "Maximum file size=64GB (limited by s3fs, not Amazon).", but I suspect s3fs fails when trying to upload files larger than 5 GB; every time I get a write error.
Thanks for your help.
Regards.
@gaul commented on GitHub (Dec 7, 2015):
Can you provide the specific error via the debug flags (-d -d -f -o f2 -o curldbg), and share which S3 implementation you use, e.g., Amazon or Ceph? Errors at the 5 GB boundary imply some misconfiguration around multipart uploads.
@Phantom-Studio commented on GitHub (Dec 7, 2015):
Hi Andrew,
Thanks for your help!
Could you please explain the steps to retrieve the log? Is it:
s3fs $BUCKET $MOUNTPOINT -d -d -f -o f2 -o curldbg (I also need to add -o use_cache and -o allow_other)
Regards.
Jonathan
@gaul commented on GitHub (Dec 7, 2015):
Yes please invoke s3fs with those options then reproduce the symptoms with your application. When you encounter the error, please attach the relevant symptoms here or in a gist.
@ggtakec commented on GitHub (Dec 20, 2015):
@Phantom-Studio
s3fs has a file size limit that depends on the "multipart_size" option.
S3 limits multipart uploads to 10000 parts, and s3fs sets the size of each part with this option.
Therefore, the file size limit is (10000 * multipart_size).
(Low disk space is another restriction, but normally the limit is the size above.)
https://github.com/s3fs-fuse/s3fs-fuse/blob/master/src/fdcache.cpp#L1353
Please try setting the multipart_size option.
Thanks in advance for your help.
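ggtakec's formula above can be illustrated with a quick shell calculation. The part size below is a hypothetical example, not a value from this thread:

```shell
# With -o multipart_size=64 (MB per part), the s3fs file size cap becomes
# 10000 parts * 64 MB. A mount using it might look like:
#   s3fs mybucket /mnt/mybucket -o multipart_size=64
multipart_size_mb=64
max_parts=10000               # S3's multipart upload part-count limit
echo $(( multipart_size_mb * max_parts / 1024 ))   # cap in GB; prints 625
```

Raising multipart_size is how you push past the default cap; the trade-off is that each part needs a correspondingly larger buffer.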
@ggtakec commented on GitHub (Mar 30, 2019):
This issue has been open for a long time.
I will close it, but if the problem persists, please reopen it or post a new issue.
@alphainets commented on GitHub (Jan 20, 2020):
Hi ggtakec,
I have a problem uploading a 230 GB file to a Ceph S3 server.
I saw it being cached locally, but when the copy was done the file size in S3 became 0 and I got the error below:
fail to close: operation not supported
Below is how I mount my Ceph S3:
s3fs testing /mnt/testing -o passwd_file=/testing/passwd-s3fs -o url=http://192.168.0.100:7480 -o use_path_request_style -o dbglevel=dbg -f -o use_cache="/nova/tmp" -o curldbg
Are there any configuration mistakes?
I have no problem when the file size is small, say 20 GB.
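Not part of the original thread, but one plausible cause here, per ggtakec's earlier comment, is S3's 10000-part multipart cap: a 230 GB file needs a per-part size of at least 230 GB / 10000 parts. A sketch of that arithmetic:

```shell
# Minimum -o multipart_size (in MB) for a 230 GB upload, given S3's
# 10000-part limit on multipart uploads (rounded up to a whole MB).
file_size_gb=230
max_parts=10000
echo $(( (file_size_gb * 1024 + max_parts - 1) / max_parts ))   # prints 24
```

If the configured (or default) multipart_size is below this value, the upload cannot fit within 10000 parts, which is consistent with a large copy failing at close time while smaller files succeed.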
@ezman commented on GitHub (Nov 26, 2020):
@alphainets I am seeing the same issue with 100 GB and 432 GB files. Did you resolve this issue?
I am using 1.8.7 on an Ubuntu 18.04 server.
@gaul commented on GitHub (Nov 27, 2020):
s3fs 1.87 and earlier require temporary space equal to the object size. Please test with the latest master, which includes a large file optimization that reduces temporary space usage. If this symptom persists, please run with -f -d and open a new issue.