[GH-ISSUE #532] S3 Multipart and KMS Key #302

Closed
opened 2026-03-04 01:44:11 +03:00 by kerem · 10 comments
Owner

Originally created by @bjay1404 on GitHub (Feb 10, 2017).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/532

Hi,

I'm having trouble using a KMS key with multipart upload: I can't create objects larger than 20MB. I've tried turning off multipart upload and setting the parallel count to 1, but nothing has worked.

FSTAB
s3fs#bucket /s3/bucket fuse _netdev,allow_other,iam_role=role,dbglevel=dbg,use_sse_kmsid:xyz 0 0

Any ideas to get all of this working? I'd be fine with no multipart upload as well if I have to.
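One thing worth double-checking in the fstab line above: the s3fs man page documents the KMS option as `use_sse=kmsid:<key id>` (with an equals sign), while the line above spells it `use_sse_kmsid:xyz`. A corrected entry might look like this (sketch only; bucket, mount point, role name, and key id are the placeholders from the report):

```shell
# /etc/fstab sketch: note use_sse=kmsid:<key-id>, not use_sse_kmsid:<key-id>
s3fs#bucket /s3/bucket fuse _netdev,allow_other,iam_role=role,dbglevel=dbg,use_sse=kmsid:xyz 0 0
```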

kerem closed this issue 2026-03-04 01:44:11 +03:00

@pritambarhate commented on GitHub (Feb 24, 2017):

I am also facing this same issue. The moment I try to copy anything over 20MB, the process fails. This has become a showstopper for us, as our production environment requires encryption.

Author
Owner

@pritambarhate commented on GitHub (Feb 24, 2017):

Here is some more info:

The command used to mount the bucket: sudo s3fs my-bucket /mnt/my-folder -ouse_cache=/tmp -oallow_other -ourl=https://s3.amazonaws.com -ouse_sse=kmsid:[my-kms-key-id]

Copying and creating small files works, but files beyond 20MB don't.

After switching to S3 server-side encryption (SSE-S3) instead of KMS-managed keys, s3fs works properly with large files as well.

Author
Owner

@ggtakec commented on GitHub (Mar 26, 2017):

@bjay1404 @pritambarhate
I tried to reproduce this problem on my EC2 instance with the following command (s3fs running in the foreground):
s3fs <my bucket> /mnt/s3 -o allow_other,use_sse=kmsid:<my kms id>,url=https://s3.amazonaws.com,dbglevel=info,curldbg -f

I tested with 10MB, 40MB, and 400MB files, but did not get a failure for any size.
If you can, please run s3fs with debug options and capture any error messages.
We need them to solve this issue.

Thanks in advance for your assistance.
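To gather the debug output requested above, one approach (a sketch; the file names and mount point are assumptions, not from the original thread) is to generate fixed-size test files with dd and copy them into the mount while s3fs runs in the foreground:

```shell
# Create zero-filled test files of the sizes mentioned (10MB/40MB/400MB),
# plus a 21MB file just over the reported ~20MB failure threshold.
for size in 10 21 40 400; do
    dd if=/dev/zero of="/tmp/test_${size}M" bs=1M count="$size" 2>/dev/null
done

# Assumed mount point; adjust to your setup. With -f, errors appear on
# s3fs's stderr; when mounted via fstab, check syslog instead.
# cp /tmp/test_21M /s3/bucket/

ls -l /tmp/test_*M
```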

Author
Owner

@bjay1404 commented on GitHub (Apr 3, 2017):

What would that command look like with fstab?

I will try and post some log information here as well.

Author
Owner

@ggtakec commented on GitHub (Apr 9, 2017):

@bjay1404
When s3fs is mounted via fstab, its messages go to /var/log/messages, syslog, etc.
Please check there.
Regards,

Author
Owner

@jsalatiel commented on GitHub (Feb 1, 2018):

I am also having the same problem with DigitalOcean Spaces.
I can copy files < 20MB, but anything over that size fails.

Below is the debug log.
Any ideas?

[INF] s3fs.cpp:s3fs_getattr(808): [path=/]
[INF] s3fs.cpp:s3fs_getattr(808): [path=/21Mfile]
[INF] s3fs.cpp:s3fs_open(2019): [path=/21Mfile][flags=33281]
[INF]       cache.cpp:DelStat(549): delete stat cache entry[path=/21Mfile]
[INF]       curl.cpp:HeadRequest(2486): [tpath=/21Mfile]
[INF]       curl.cpp:PreHeadRequest(2423): [tpath=/21Mfile][bpath=][save=][sseckeypos=-1]
[INF]       curl.cpp:prepare_url(4175): URL is https://ams3.digitaloceanspaces.com/XXXXXXXXX/21Mfile
[INF]       curl.cpp:prepare_url(4207): URL changed is https://XXXXXXXXX.ams3.digitaloceanspaces.com/21Mfile
[INF]       curl.cpp:insertV4Headers(2237): computing signature [HEAD] [/21Mfile] [] []
[INF]       curl.cpp:url_to_host(100): url is https://ams3.digitaloceanspaces.com
* Connection 3 seems to be dead!
* Closing connection 3
*   Trying 5.101.110.225...
* TCP_NODELAY set
* Connected to XXXXXXXXX.ams3.digitaloceanspaces.com (5.101.110.225) port 443 (#4)
* found 166 certificates in /etc/ssl/certs/ca-certificates.crt
* found 664 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL re-using session ID
* SSL connection using TLS1.2 / ECDHE_RSA_AES_128_GCM_SHA256
* 	 server certificate verification OK
* 	 server certificate status verification SKIPPED
* 	 common name: *.ams3.digitaloceanspaces.com (matched)
* 	 server certificate expiration date OK
* 	 server certificate activation date OK
* 	 certificate public key: RSA
* 	 certificate version: #3
* 	 subject: C=US,ST=New York,L=New York,O=DigitalOcean\, LLC,CN=*.ams3.digitaloceanspaces.com
* 	 start date: Tue, 17 Oct 2017 00:00:00 GMT
* 	 expire date: Mon, 22 Oct 2018 12:00:00 GMT
* 	 issuer: C=US,O=DigiCert Inc,CN=DigiCert SHA2 Secure Server CA
* 	 compression: NULL
* ALPN, server did not agree to a protocol
> HEAD /21Mfile HTTP/1.1
host: XXXXXXXXX.ams3.digitaloceanspaces.com
User-Agent: s3fs/1.80 (commit hash unknown; GnuTLS(gcrypt))
Accept: */*
Authorization: AWS4-HMAC-SHA256 Credential=removed/20180201/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=removed
x-amz-content-sha256: removed
x-amz-date: 20180201T222042Z

< HTTP/1.1 200 OK
< Content-Length: 0
< Accept-Ranges: bytes
< Last-Modified: Thu, 01 Feb 2018 22:14:43 GMT
< ETag: "removed"
< x-amz-meta-gid: 0
< x-amz-meta-mode: 33188
< x-amz-meta-mtime: 1517523283
< x-amz-meta-uid: 0
< x-amz-request-id: removed-005a7392ba-a7bb5-ams3a
< Content-Type: application/octet-stream
< Date: Thu, 01 Feb 2018 22:20:42 GMT
< Strict-Transport-Security: max-age=15552000; includeSubDomains; preload
< 
* Curl_http_done: called premature == 0
* Connection #4 to host XXXXXXXXX.ams3.digitaloceanspaces.com left intact
[INF]       curl.cpp:RequestPerform(1910): HTTP response code 200
[INF]       cache.cpp:AddStat(346): add stat cache entry[path=/21Mfile]
[INF]       fdcache.cpp:SetMtime(936): [path=/21Mfile][fd=5][time=1517523283]
[INF] s3fs.cpp:s3fs_getxattr(3072): [path=/21Mfile][name=security.capability][value=(nil)][size=0]
[INF] s3fs.cpp:s3fs_getattr(808): [path=/21Mfile]
[INF] s3fs.cpp:s3fs_getxattr(3072): [path=/21Mfile][name=security.capability][value=(nil)][size=0]
...
[INF] s3fs.cpp:s3fs_flush(2141): [path=/21Mfile][fd=5]
[INF]       fdcache.cpp:RowFlush(1345): [tpath=][path=/21Mfile][fd=5]
[INF]       curl.cpp:ParallelMultipartUploadRequest(1202): [tpath=/21Mfile][fd=5]
[INF]       curl.cpp:PreMultipartPostRequest(2965): [tpath=/21Mfile]
[INF]       curl.cpp:prepare_url(4175): URL is https://ams3.digitaloceanspaces.com/XXXXXXXXX/21Mfile?uploads=
[INF]       curl.cpp:prepare_url(4207): URL changed is https://XXXXXXXXX.ams3.digitaloceanspaces.com/21Mfile?uploads=
[INF]       curl.cpp:insertV4Headers(2237): computing signature [POST] [/21Mfile] [uploads=] []
[INF]       curl.cpp:url_to_host(100): url is https://ams3.digitaloceanspaces.com
* Found bundle for host XXXXXXXXX.ams3.digitaloceanspaces.com: 0x7f5bdc003ee0 [can pipeline]
* Re-using existing connection! (#4) with host XXXXXXXXX.ams3.digitaloceanspaces.com
* Connected to XXXXXXXXX.ams3.digitaloceanspaces.com (5.101.110.225) port 443 (#4)
> POST /21Mfile?uploads= HTTP/1.1
host: XXXXXXXXX.ams3.digitaloceanspaces.com
User-Agent: s3fs/1.80 (commit hash unknown; GnuTLS(gcrypt))
Authorization: AWS4-HMAC-SHA256 Credential=removed/20180201/us-east-1/s3/aws4_request, SignedHeaders=accept;content-length;content-type;host;x-amz-acl;x-amz-content-sha256;x-amz-date;x-amz-meta-gid;x-amz-meta-mode;x-amz-meta-mtime;x-amz-meta-uid, Signature=removed
Content-Type: application/octet-stream
x-amz-acl: private
x-amz-content-sha256: removed
x-amz-date: 20180201T222042Z
x-amz-meta-gid: 0
x-amz-meta-mode: 33188
x-amz-meta-mtime: 1517523642
x-amz-meta-uid: 0

< HTTP/1.1 403 Forbidden
< Content-Length: 189
< x-amz-request-id: removed
< Accept-Ranges: bytes
< Content-Type: application/xml
< Date: Thu, 01 Feb 2018 22:20:42 GMT
< Strict-Transport-Security: max-age=15552000; includeSubDomains; preload
* HTTP error before end of send, stop sending
< 
* Curl_http_done: called premature == 0
* Closing connection 4
[INF]       curl.cpp:RequestPerform(1927): HTTP response code 403 was returned, returning EPERM
[INF] s3fs.cpp:s3fs_release(2194): [path=/21Mfile][fd=5]
[INF]       cache.cpp:DelStat(549): delete stat cache entry[path=/21Mfile]
[INF]       fdcache.cpp:GetFdEntity(1846): [path=/21Mfile][fd=5]
Author
Owner

@niallone commented on GitHub (Mar 3, 2018):

Multipart upload is what triggers the 20MB limit. This works:
s3fs spacename /mnt/spacenamefolder -onomultipart -ourl=https://sgp1.digitaloceanspaces.com
That's if you want to disable it completely; otherwise see the multipart configuration docs here: https://github.com/s3fs-fuse/s3fs-fuse/wiki/Fuse-Over-Amazon (though the documentation isn't correct about file sizes)
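For anyone wanting the same workaround from fstab (the question asked earlier in this thread), command-line `-o` options map directly onto the comma-separated options field of an fstab entry. A sketch, with the bucket and mount point names taken from the command above as placeholders:

```shell
# fstab equivalent of the command above: nomultipart disables multipart
# uploads entirely, so every object is sent as a single PUT
s3fs#spacename /mnt/spacenamefolder fuse _netdev,allow_other,nomultipart,url=https://sgp1.digitaloceanspaces.com 0 0
```

Note that disabling multipart caps the maximum uploadable object size at whatever the provider allows for a single PUT.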

Author
Owner

@ggtakec commented on GitHub (Mar 30, 2019):

We have kept this issue open for a long time.
Is this problem still occurring?
We have released version 1.86, which fixes several bugs.
Please try the latest version.

Author
Owner

@gaul commented on GitHub (Apr 9, 2019):

Could you test with master? It includes a fix for #696 which may resolve your issue as well.

Author
Owner

@gaul commented on GitHub (Apr 30, 2019):

Please reopen if you can reproduce these symptoms.
