mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 21:35:58 +03:00
[GH-ISSUE #532] S3 Multipart and KMS Key #302
Originally created by @bjay1404 on GitHub (Feb 10, 2017).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/532
Hi,
I'm having trouble using a KMS key with multipart upload: I can't create objects larger than 20MB. I've tried turning off multipart upload and setting the parallel count to 1, but nothing has worked.
FSTAB
s3fs#bucket /s3/bucket fuse _netdev,allow_other,iam_role=role,dbglevel=dbg,use_sse_kmsid:xyz 0 0
Any ideas to get all of this working? I'd be fine with no multipart upload as well if I have to.
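For comparison, the s3fs man page documents the SSE-KMS option as `use_sse=kmsid:<kms id>` (with an `=`), whereas the fstab line above reads `use_sse_kmsid:xyz`, so the entry may simply be missing the `=`. A sketch of the corrected entry under that assumption ("bucket", "role", and "xyz" are placeholders from the original report):

```shell
# /etc/fstab — hypothetical corrected entry; note use_sse=kmsid:... rather
# than use_sse_kmsid:...
s3fs#bucket /s3/bucket fuse _netdev,allow_other,iam_role=role,dbglevel=dbg,use_sse=kmsid:xyz 0 0
```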
@pritambarhate commented on GitHub (Feb 24, 2017):
I am also facing the same issue. The moment I try to copy anything over 20MB, the process fails. This has become a showstopper for us, as our production environment requires encryption.
@pritambarhate commented on GitHub (Feb 24, 2017):
Here is some more info:
The command used to mount the bucket: sudo s3fs my-bucket /mnt/my-folder -ouse_cache=/tmp -oallow_other -ourl=https://s3.amazonaws.com -ouse_sse=kmsid:[my-kms-key-id]
Copying and creating small files works, but files over 20MB fail.
I switched to S3 server-side encryption instead of KMS-managed keys, and it works properly with s3fs for large files as well.
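For reference, the two encryption modes this comment contrasts can be sketched as mount commands (bucket name and key id are placeholders; option forms are taken from the s3fs man page):

```shell
# SSE-S3 (S3-managed keys) — the mode reported to work for large files
s3fs my-bucket /mnt/my-folder -o use_sse=1

# SSE-KMS with a specific key — the mode reported to fail beyond ~20MB
s3fs my-bucket /mnt/my-folder -o use_sse=kmsid:[my-kms-key-id]
```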
@ggtakec commented on GitHub (Mar 26, 2017):
@bjay1404 @pritambarhate
I tried to reproduce this problem on my EC2 instance with the following command (s3fs running in the foreground):
s3fs /mnt/s3 -o allow_other,use_sse=kmsid:,url=https://s3.amazonaws.com,dbglevel=info,curldbg -f
I tested with 10MB/40MB/400MB files, but did not get a failure for any size.
If you can, please run s3fs with debug options and capture any error messages.
We need them to solve this issue.
Thanks in advance for your assistance.
@bjay1404 commented on GitHub (Apr 3, 2017):
What would that command look like with fstab?
I will try and post some log information here as well.
@ggtakec commented on GitHub (Apr 9, 2017):
@bjay1404
When s3fs is mounted via fstab, you can see its messages in /var/log/messages, syslog, etc.
Please check there.
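A minimal way to pull s3fs messages out of the system log, as suggested above (which log file exists depends on the distribution, so both paths here are assumptions about the system):

```shell
# Raise verbosity via the fstab options first (e.g. dbglevel=dbg), then:
grep s3fs /var/log/syslog        # Debian/Ubuntu-style systems
grep s3fs /var/log/messages      # RHEL/CentOS-style systems

# On systemd systems, filtering the journal by syslog identifier also works:
journalctl -t s3fs
```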
Regards,
@jsalatiel commented on GitHub (Feb 1, 2018):
I am also having the same problem with DigitalOcean Spaces.
I can copy files < 20MB, but anything over that size fails.
Below is the debug log.
Any ideas?
@niallone commented on GitHub (Mar 3, 2018):
Multipart is what causes the 20MB limit; this works:
s3fs spacename /mnt/spacenamefolder -onomultipart -ourl=https://sgp1.digitaloceanspaces.com
That's if you want to deactivate it completely; otherwise refer to the multipart configs in the docs here: https://github.com/s3fs-fuse/s3fs-fuse/wiki/Fuse-Over-Amazon (though the documentation isn't correct about file sizes).
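Besides disabling multipart entirely with -onomultipart, s3fs also exposes a part-size knob; a sketch using options from the s3fs man page (the 100MB value is an arbitrary example, not a recommendation from this thread):

```shell
# Raise the per-part size (in MB) instead of disabling multipart
s3fs spacename /mnt/spacenamefolder -o multipart_size=100 -o url=https://sgp1.digitaloceanspaces.com

# Or disable multipart uploads altogether, as suggested above
s3fs spacename /mnt/spacenamefolder -o nomultipart -o url=https://sgp1.digitaloceanspaces.com
```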
@ggtakec commented on GitHub (Mar 30, 2019):
We have kept this issue open for a long time.
Does this problem persist?
We have released new version 1.86, which fixed some bugs.
Please use the latest version.
@gaul commented on GitHub (Apr 9, 2019):
Could you test with master? It includes a fix for #696 which may resolve your issue as well.
@gaul commented on GitHub (Apr 30, 2019):
Please reopen if you can reproduce these symptoms.