mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #2044] Uploading large files with multipart enabled / disabled is failing #1032
Originally created by @ghost on GitHub (Oct 5, 2022).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2044
Version of s3fs being used (s3fs --version)
V1.91 (commit:3e242d0)
Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)
Version: 2.9.7-1ubuntu1
Kernel information (uname -r)
5.4.0-1084-aws
GNU/Linux Distribution, if applicable (cat /etc/os-release)
Ubuntu 18.04.6 LTS
/etc/fstab entry, if applicable
s3fs#<bucket_name> /mnt/s3/<bucket_name> fuse _netdev,allow_other,use_sse=1,endpoint=us-east-1,uid=1000,gid=1000,iam_role=auto 0 0
Details about issue
Hello Team,
We are facing issues when copying / downloading large files via s3fs, with or without multipart enabled.
When the fstab entry is as below:
s3fs#<bucket_name> /mnt/s3/<bucket_name> fuse _netdev,allow_other,use_sse=1,endpoint=us-east-1,uid=1000,gid=1000,iam_role=auto 0 0
we are getting an "Operation not permitted" error with a file size of 50 MB.
As a workaround, we updated the fstab entry as below:
s3fs#<bucket_name> /mnt/s3/<bucket_name> fuse _netdev,allow_other,use_sse=1,endpoint=us-east-1,uid=1000,gid=1000,iam_role=auto,nomultipart 0 0
which resolved the issue with 50 MB files; however, when trying to download files that are 600 MB to close to 1 GB, we get a "File too large" error.
We have also tried a number of other options such as multipart_size and multipart_copy_size, but none of them worked in our case. Can you please suggest a solution for this issue, and explain why it is happening?
Also, can you please tell me the maximum file size that can be copied / downloaded without multipart enabled?
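[Editor's note, for context on the size question: Amazon S3 itself caps a single non-multipart PUT at 5 GB, while multipart uploads can reach 5 TB in at most 10,000 parts, so disabling multipart puts a hard 5 GB ceiling on uploads. Rather than turning multipart off, the part size can be tuned. The sketch below uses real s3fs options (multipart_size, parallel_count), but the specific values, bucket name, and mount point are illustrative placeholders, not from this thread:]

```shell
# Illustrative only -- bucket name, mount point, and option values are placeholders.
# multipart_size is in MB; raising it means a 1 GB file needs fewer part requests.
s3fs <bucket_name> /mnt/s3/<bucket_name> \
    -o _netdev,allow_other,use_sse=1,endpoint=us-east-1,uid=1000,gid=1000 \
    -o iam_role=auto \
    -o multipart_size=64 -o parallel_count=10

# Note: with nomultipart set, anything larger than S3's 5 GB single-PUT
# limit cannot be uploaded at all.
```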
Thanks,
krishna
@ghost commented on GitHub (Oct 10, 2022):
Can someone please look into this issue?
I see a series of 403s and 404s in this process:
Oct 10 23:26:22 rgc-024082244 s3fs[1552]: curl.cpp:RequestPerform(2448): HTTP response code 403, returning EPERM. Body Text: #012
AccessDeniedAccess DeniedxxxxxxxxxxxxxxxxxxxxT/k/xxxxxxxxxxxxxxxxxxxxxxx=
Oct 10 23:26:22 rgc-024082244 s3fs[1552]: computing signature [HEAD] [xxxxxxxxxxxxxxxxxxxxxxxxxxx] [] []
Oct 10 23:26:22 rgc-024082244 s3fs[1552]: url is https://s3.amazonaws.com/
Oct 10 23:26:22 rgc-024082244 s3fs[1552]: HTTP response code 404 was returned, returning ENOENT
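[Editor's note: a 403 on a HEAD request usually points at missing IAM permissions rather than a multipart setting. One hedged way to narrow this down is to re-mount in the foreground with s3fs's documented debug options (dbglevel, curldbg, -f) and to test the instance role directly with the AWS CLI. Bucket name, mount point, and key below are placeholders:]

```shell
# Hypothetical debugging session -- <bucket_name> and <key> are placeholders.
# Run s3fs in the foreground with verbose s3fs and libcurl logging to see
# which request (HEAD, PUT, multipart initiate, ...) receives the 403:
s3fs <bucket_name> /mnt/s3/<bucket_name> \
    -o iam_role=auto,endpoint=us-east-1 \
    -o dbglevel=info -o curldbg -f

# Independently confirm the instance role can reach the object at all:
aws s3api head-object --bucket <bucket_name> --key <key>
```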
Thanks.
@ggtakec commented on GitHub (Feb 12, 2023):
@krishna8189 Sorry for my late reply.
This issue may have the same cause as the issue fixed in #2091.
Could you try it with the current master branch code?
Thanks in advance for your assistance.
@gaul commented on GitHub (Sep 8, 2023):
Please reopen if symptoms persist with the latest 1.93.