[GH-ISSUE #2044] Uploading large files with multipart enabled / disabled is failing #1032

Closed
opened 2026-03-04 01:50:48 +03:00 by kerem · 3 comments

Originally created by @ghost on GitHub (Oct 5, 2022).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2044

Version of s3fs being used (s3fs --version)

V1.91 (commit:3e242d0)

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)

Version: 2.9.7-1ubuntu1

Kernel information (uname -r)

5.4.0-1084-aws

GNU/Linux Distribution, if applicable (cat /etc/os-release)

Ubuntu 18.04.6 LTS

/etc/fstab entry, if applicable

s3fs#<bucket_name> /mnt/s3/<bucket_name> fuse _netdev,allow_other,use_sse=1,endpoint=us-east-1,uid=1000,gid=1000,iam_role=auto 0 0

Details about issue

Hello Team,
We are facing issues when copying/downloading large files via s3fs, with or without multipart enabled.

With the fstab entry below:
s3fs#<bucket_name> /mnt/s3/<bucket_name> fuse _netdev,allow_other,use_sse=1,endpoint=us-east-1,uid=1000,gid=1000,iam_role=auto 0 0
we get an "operation not permitted" error with files around 50 MB.

As a workaround, we updated the fstab entry as below:
s3fs#<bucket_name> /mnt/s3/<bucket_name> fuse _netdev,allow_other,use_sse=1,endpoint=us-east-1,uid=1000,gid=1000,iam_role=auto,nomultipart 0 0
This resolved the issue with 50 MB files; however, when trying to download files that are 600 MB to close to 1 GB, we get a "file too large" error.

We have also tried a number of other options, such as multipart_size and multipart_copy_size, but none of them worked in our case. Can you please suggest a solution and explain why this issue is happening?
Also, can you please tell us the maximum file size we can copy/download without multipart enabled?

Thanks,
krishna
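For context on the second question: Amazon S3 itself caps a single (non-multipart) PUT at 5 GB, so with `nomultipart` set, uploads can never exceed that hard limit, and s3fs may reject large transfers earlier depending on available local temporary space. A sketch of keeping multipart enabled while tuning part sizes is below; the option values are illustrative only (values are in MB, per the s3fs man page), not a confirmed fix for this issue:

```shell
# Sketch: fstab entry with multipart left enabled but larger part sizes,
# so fewer requests are issued per large file. Values are illustrative.
s3fs#<bucket_name> /mnt/s3/<bucket_name> fuse _netdev,allow_other,use_sse=1,endpoint=us-east-1,uid=1000,gid=1000,iam_role=auto,multipart_size=64,multipart_copy_size=512 0 0
```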

kerem closed this issue 2026-03-04 01:50:49 +03:00

@ghost commented on GitHub (Oct 10, 2022):

Can someone please look into this issue?
I see a series of 403s and 404s in this process:

Oct 10 23:26:22 rgc-024082244 s3fs[1552]: curl.cpp:RequestPerform(2448): HTTP response code 403, returning EPERM. Body Text: <?xml version="1.0" encoding="UTF-8"?>#012<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>xxxxxxxxxxxxxxxxxxxx</RequestId><HostId>T/k/xxxxxxxxxxxxxxxxxxxxxxx=</HostId></Error>

Oct 10 23:26:22 rgc-024082244 s3fs[1552]: computing signature [HEAD] [xxxxxxxxxxxxxxxxxxxxxxxxxxx] [] []
Oct 10 23:26:22 rgc-024082244 s3fs[1552]: url is https://s3.amazonaws.com/
Oct 10 23:26:22 rgc-024082244 s3fs[1552]: HTTP response code 404 was returned, returning ENOENT

Thanks..
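To capture the exact request behind those 403/404 responses, s3fs can be run in the foreground with verbose debug logging. A sketch using the documented `dbglevel` and `curldbg` options is below (it assumes the same bucket, mount point, and credentials as the fstab entry above):

```shell
# Sketch: unmount, then remount in the foreground (-f) with debug output
# so each HTTP request/response is printed as it happens.
sudo umount /mnt/s3/<bucket_name>
s3fs <bucket_name> /mnt/s3/<bucket_name> \
    -o use_sse=1,endpoint=us-east-1,uid=1000,gid=1000,iam_role=auto \
    -o dbglevel=info -o curldbg -f
```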


@ggtakec commented on GitHub (Feb 12, 2023):

@krishna8189 Sorry for my late reply.
This issue may have the same cause as the issue fixed in #2091.
Could you try it with the current master branch code?
Thanks in advance for your assistance.
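Building the current master branch follows the standard autotools flow from the project README; a sketch is below (the apt package list is for Debian/Ubuntu hosts and assumes no prior build dependencies are installed):

```shell
# Sketch: build and install s3fs-fuse from the master branch.
sudo apt-get install build-essential automake autotools-dev \
    libcurl4-openssl-dev libfuse-dev libssl-dev libxml2-dev
git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse
./autogen.sh
./configure
make
sudo make install
s3fs --version   # confirm the newly built binary is on PATH
```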


@gaul commented on GitHub (Sep 8, 2023):

Please reopen if symptoms persist with the latest 1.93.
