[GH-ISSUE #315] Is there a file size limitation? #162

Closed
opened 2026-03-04 01:42:45 +03:00 by kerem · 8 comments
Owner

Originally created by @Phantom-Studio on GitHub (Dec 7, 2015).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/315

Hi all,

As the title suggests, I wanted to know whether there is a file size limitation.
On the s3fs-fuse GitHub page I can read "Maximum file size=64GB (limited by s3fs, not Amazon)", but s3fs seems to fail when uploading files greater than 5 GB; every time I get a write error.

Thanks for your help.
Regards.

kerem closed this issue 2026-03-04 01:42:46 +03:00
Author
Owner

@gaul commented on GitHub (Dec 7, 2015):

Can you provide the specific error via the debug flags:

s3fs $BUCKET $MOUNTPOINT -d -d -f -o f2 -o curldbg

as well as share the S3 implementation you use, e.g., Amazon, Ceph? Errors at the 5 GB boundary imply some misconfiguration around multi-part uploads.

Author
Owner

@Phantom-Studio commented on GitHub (Dec 7, 2015):

Hi Andrew,

Thanks for your help!

Could you please explain in a bit more detail the steps to retrieve the log?

  • So correct me if I'm wrong, but I need to unmount my bucket first, then mount it with:

s3fs $BUCKET $MOUNTPOINT -d -d -f -o f2 -o curldbg (I also need to add -o use_cache and -o allow_other.)

Regards.
Jonathan

Author
Owner

@gaul commented on GitHub (Dec 7, 2015):

Yes, please invoke s3fs with those options, then reproduce the symptoms with your application. When you encounter the error, please attach the relevant log output here or in a gist.
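For reference, combining the debug flags with the other options mentioned above might look like the following sketch. The bucket name, mount point, and cache path are placeholders, not values from this thread:

```shell
# Illustrative only: unmount first, then remount with the debug flags
# plus the existing options (the use_cache path here is an assumption).
umount "$MOUNTPOINT"
s3fs "$BUCKET" "$MOUNTPOINT" -d -d -f -o f2 -o curldbg \
    -o use_cache=/tmp/s3fs-cache -o allow_other
```

Running with `-f` keeps s3fs in the foreground, so the debug output appears directly in the terminal.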

Author
Owner

@ggtakec commented on GitHub (Dec 20, 2015):

@Phantom-Studio
s3fs has a file size limit that depends on the "multipart_size" option.
S3 caps a multipart upload at 10,000 parts, and s3fs sets the size of each part via this option.
The file size limit is therefore (10000 * multipart_size).
(Low disk space can also impose a limit, but normally the figure above applies.)

https://github.com/s3fs-fuse/s3fs-fuse/blob/master/src/fdcache.cpp#L1353

Please try setting the multipart_size option.
Thanks in advance for your help.
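To illustrate the arithmetic above (a sketch, not part of s3fs itself): with S3's 10,000-part cap, the maximum file size works out to 10000 × multipart_size. Assuming an example multipart_size of 64 (MB):

```shell
# Max file size = S3's multipart part-count limit * part size (set via -o multipart_size).
max_parts=10000           # S3 multipart upload part-count limit
multipart_size_mb=64      # example value; would be passed as -o multipart_size=64
max_file_gb=$(( max_parts * multipart_size_mb / 1024 ))
echo "multipart_size=${multipart_size_mb}MB allows files up to ~${max_file_gb}GB"
```

So a larger multipart_size raises the ceiling proportionally; the 64 GB figure in the old documentation corresponds to a smaller default part size.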

Author
Owner

@ggtakec commented on GitHub (Mar 30, 2019):

We kept this issue open for a long time.
I will close this, but if the problem persists, please reopen or post a new issue.

Author
Owner

@alphainets commented on GitHub (Jan 20, 2020):

Hi ggtakec,
I have a problem uploading a 230GB file to a Ceph S3 server.
I saw it being cached locally,
but when the copy was done, the file size in S3 became 0
and I got the error below:
fail to close: operation not supported

Below is how I mount my Ceph S3:
s3fs testing /mnt/testing -o passwd_file=/testing/passwd-s3fs -o url=http://192.168.0.100:7480 -o use_path_request_style -o dbglevel=dbg -f -o use_cache="/nova/tmp" -o curldbg

Are there any mistakes in my configuration?

I have no problem when the file size is small, say 20GB.

Author
Owner

@ezman commented on GitHub (Nov 26, 2020):

@alphainets I am seeing the same issue with 100GB and 432GB files. Did you resolve it?
I am using 1.87 on an Ubuntu 18.04 server.

Author
Owner

@gaul commented on GitHub (Nov 27, 2020):

s3fs 1.87 and earlier require temporary space equal to the object size. Please test with the latest master, which includes a large-file optimization that reduces temporary space usage. If this symptom persists, please run with -f -d and open a new issue.
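Given that requirement, a quick way to check whether local temporary space is the bottleneck is to compare free space in the cache directory against the object size. This is a sketch: the 230 GB figure comes from the report above, and /tmp is an assumed location, so substitute your use_cache directory:

```shell
# Compare available space in the temp/cache directory against the object size.
object_size_gb=230      # size of the file being uploaded (from the report above)
cache_dir=/tmp          # assumption: replace with your -o use_cache path
avail_gb=$(df -BG --output=avail "$cache_dir" | tail -n 1 | tr -dc '0-9')
if [ "$avail_gb" -lt "$object_size_gb" ]; then
    echo "insufficient temp space: ${avail_gb}GB available, ${object_size_gb}GB needed"
else
    echo "temp space OK: ${avail_gb}GB available"
fi
```

If the available space is below the object size, a failed close (as reported above) is a plausible outcome on s3fs 1.87 and earlier.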
