[GH-ISSUE #2095] Multipart upload fails with Cloudflare R2 #1064

Closed
opened 2026-03-04 01:51:04 +03:00 by kerem · 14 comments

Originally created by @gaul on GitHub (Jan 15, 2023).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2095

Cloudflare supports multipart upload (https://developers.cloudflare.com/r2/data-access/s3-api/api/) but copying a 25 MB file fails:

2023-01-15T10:03:35.182Z [CURL DBG] > PUT /gaultest/25MB?partNumber=2&uploadId=XXX HTTP/1.1
2023-01-15T10:03:35.182Z [CURL DBG] > Host: gaultest.XXX.r2.cloudflarestorage.com
2023-01-15T10:03:35.182Z [CURL DBG] > User-Agent: s3fs/1.91 (commit hash unknown; OpenSSL)
2023-01-15T10:03:35.182Z [CURL DBG] > Authorization: AWS4-HMAC-SHA256 Credential=XXX/20230115/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=XXX
2023-01-15T10:03:35.182Z [CURL DBG] > x-amz-content-sha256: 9ddf536a76cd196e4d111b0d358ed5f0e462442112b5dfea41cb59c34af79e28
2023-01-15T10:03:35.182Z [CURL DBG] > x-amz-date: 20230115T100335Z
2023-01-15T10:03:35.182Z [CURL DBG] > Content-Length: 15728640
2023-01-15T10:03:35.182Z [CURL DBG] > Expect: 100-continue
...
2023-01-15T10:03:36.922Z [ERR] curl.cpp:RequestPerform(2371): HTTP response code 403, returning EPERM. Body Text: <?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your secret access key and signing method. </Message></Error>

A 5 MB file (single part request) succeeds.

Original bug report: https://twitter.com/menkatsukiroku/status/1602606940491759616

kerem closed this issue 2026-03-04 01:51:04 +03:00

@gaul commented on GitHub (Jan 15, 2023):

This seems to be caused by uploadIds that contain a wider set of characters than AWS usually uses:

AJnWgyTLmBrVSSKLVsNPghD3xz1N1ag5sgLEQtPKGVpNZXapKL/kqAxEfy1uanOqVc8h3eoI7119jlLKMuPwvmezTOegyqfgrqcnSbcWapyPD7ZI7P3r7cl5FIl9+ovFA1Sm+EUvPxuVsP3RS3DY+QTau6NQhcNOI1gX12Ujhxn6BVU3Zqz3HVIhI4zo7jfHRMezCk71XgT2zlmco6lf/8nD6NKro4KjJoCrl0ZmsPD4LCzNCRy22BGHFM3suSa/MESXFnIbOe5DQwPf06U9mh4VI0R4o8ObBdjDHL4yrm1ffRaaOGrAM//imhCSlaQP8mMkpLVp1LBIFfASZZGMYmM=

The characters /, +, and = need to be URL-encoded.
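The fix amounts to percent-encoding the uploadId before it is placed in the query string. A minimal sketch of such an encoder (a hypothetical helper, not s3fs's actual function), which keeps only the RFC 3986 unreserved characters:

```cpp
#include <string>

// Percent-encode every byte outside the RFC 3986 unreserved set
// (A-Z, a-z, 0-9, '-', '.', '_', '~'), so '/', '+', and '=' in an
// uploadId become %2F, %2B, and %3D.
// Hypothetical helper for illustration; not the s3fs implementation.
static std::string urlEncodeComponent(const std::string& in)
{
    static const char* kHex = "0123456789ABCDEF";
    std::string out;
    for (unsigned char c : in) {
        if (('A' <= c && c <= 'Z') || ('a' <= c && c <= 'z') ||
            ('0' <= c && c <= '9') ||
            c == '-' || c == '.' || c == '_' || c == '~') {
            out += static_cast<char>(c);
        } else {
            out += '%';
            out += kHex[c >> 4];
            out += kHex[c & 0x0F];
        }
    }
    return out;
}
```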


@ggtakec commented on GitHub (Jan 22, 2023):

@gaul
If the uploadId is URL-encoded in advance, it seems to work, but I'm a little worried.
When using an encoded uploadId, I don't know whether the encoded string should be used as the base string when creating the Signature for the header.

The fix is easy if it is no problem to use the encoded uploadId string when generating the Signature.
However, if the original uploadId string must be used, the modification becomes a little more cumbersome.


@ggtakec commented on GitHub (Jan 22, 2023):

@gaul I have checked the current source.
Although it is not a query string, the signature already uses the encoded string for the URL path.
Considering this, it seems to be no problem to pass the encoded query string (including the uploadId) to the signature calculation.
I will try to modify the source code according to this policy.
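Under this policy the encoded query string is both signed and sent on the wire. A rough sketch of building a SigV4-style canonical query string from decoded parameters (names and structure are illustrative, not s3fs's internals):

```cpp
#include <algorithm>
#include <cctype>
#include <string>
#include <utility>
#include <vector>

// Percent-encode everything outside the RFC 3986 unreserved set.
// Hypothetical helper for illustration.
static std::string encodeComponent(const std::string& in)
{
    static const char* kHex = "0123456789ABCDEF";
    std::string out;
    for (unsigned char c : in) {
        if (std::isalnum(c) || c == '-' || c == '.' || c == '_' || c == '~') {
            out += static_cast<char>(c);
        } else {
            out += '%';
            out += kHex[c >> 4];
            out += kHex[c & 0x0F];
        }
    }
    return out;
}

// SigV4-style canonical query string: percent-encode each key and
// value, sort the pairs, then join as key=value with '&'. The encoded
// form is what enters the signature calculation, matching the wire.
static std::string canonicalQueryString(
    std::vector<std::pair<std::string, std::string>> params)
{
    for (auto& kv : params) {
        kv.first  = encodeComponent(kv.first);
        kv.second = encodeComponent(kv.second);
    }
    std::sort(params.begin(), params.end());
    std::string out;
    for (const auto& kv : params) {
        if (!out.empty()) out += '&';
        out += kv.first;
        out += '=';
        out += kv.second;
    }
    return out;
}
```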


@ggtakec commented on GitHub (Jan 22, 2023):

@gaul
I posted PR #2095.
I don't have Cloudflare set up on hand; could you try it with this PR?


@ggtakec commented on GitHub (Jan 24, 2023):

@gaul
The following error reported in #2095 should now be fixed by this PR.

<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your secret access key and signing method. </Message>

However, we still got the error below that you and I detected.

<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your secret access key and signing method. </Message>

The reason for this error was that s3fs was running in MixUpload mode (the default).

In MixUpload mode the part sizes are not constant (see #1822).
In this mode, each range of data that needs to be uploaded is sent in parts of up to the multipart size, but the size of each part is not fixed on a case-by-case basis.
Cloudflare appears to reject the upload because the part sizes are not all the same.

To resolve this, specify either the nomixupload option or the streamupload option.
With these options (No-MixUpload or StreamUpload), every part size is fixed.
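The fixed-part layout those options guarantee can be sketched as follows (a hypothetical helper, not s3fs code; every part has the same size except possibly the final one):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Split fileSize bytes into parts of exactly partSize bytes each,
// except possibly the final part. This is the layout R2 appears to
// require, and what nomixupload / streamupload produce.
// Hypothetical helper for illustration.
static std::vector<int64_t> fixedPartSizes(int64_t fileSize, int64_t partSize)
{
    std::vector<int64_t> sizes;
    for (int64_t off = 0; off < fileSize; off += partSize) {
        sizes.push_back(std::min(partSize, fileSize - off));
    }
    return sizes;
}
```

For a 25 MiB file with 10 MiB parts this yields 10 MiB, 10 MiB, 5 MiB; MixUpload may instead emit parts of varying sizes, which is what triggers the error.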

Also, for Cloudflare, do not specify the enable_content_md5 option.
The upload fails because the ETag (= md5) computed at the PUT of each part does not match the ETag in the response (which is not an md5 in Cloudflare's case).


@barabo commented on GitHub (Jan 25, 2023):

Is the problem that MixUpload mode does not work with enable_content_md5 when using R2? I'm wondering if MixUpload mode will work when enable_content_md5 is disabled.


@ggtakec commented on GitHub (Jan 25, 2023):

@barabo
MixUpload mode and enable_content_md5 are not linked.
There is no difference in enable_content_md5 behavior between regular MultipartUpload, MixUpload, and StreamUpload.

enable_content_md5 is a feature that checks the consistency between the data before sending and the data the server received once the upload completes.

As for processing, if enable_content_md5 is specified, s3fs calculates the md5 value of the uploaded data and compares it with the ETag value returned in the upload response.
For AWS S3, the ETag in the response is an md5 value, so specifying enable_content_md5 improves the accuracy of content-integrity checking.

However, in Cloudflare R2 the ETag in the response is NOT the md5 value, so specifying enable_content_md5 results in an error. In other words, Cloudflare R2 cannot verify the integrity of the transmitted data using the md5 value.
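A client could guard this check by first testing whether the ETag even has the shape of an MD5 digest. A quick sketch (looksLikeMd5ETag is a hypothetical helper, not part of s3fs):

```cpp
#include <cctype>
#include <string>

// Hypothetical helper: an S3-style per-part ETag is a quoted
// 32-character hex string (an MD5 digest). The R2 ETags seen in this
// thread are long base64-like strings, so this returns false for them,
// signalling that an md5-based integrity check should be skipped.
static bool looksLikeMd5ETag(const std::string& etag)
{
    std::string s = etag;
    if (s.size() >= 2 && s.front() == '"' && s.back() == '"') {
        s = s.substr(1, s.size() - 2);   // strip surrounding quotes
    }
    if (s.size() != 32) return false;
    for (unsigned char c : s) {
        if (!std::isxdigit(c)) return false;
    }
    return true;
}
```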

The following is a summary of the issues raised in this issue:

  • s3fs bug (UploadId was not URL-encoded)
  • MixUpload cannot be used with Cloudflare R2 (because each part size must be fixed)
  • With Cloudflare R2, enable_content_md5 cannot be used either.

@barabo commented on GitHub (Jan 25, 2023):

@ggtakec - Thank you for the detailed response!

I was looking at the R2 API docs and it seems like the UploadPart operation should support providing a content MD5 for the uploaded part (https://developers.cloudflare.com/r2/data-access/s3-api/api/). Is the problem in the response afterward? You mentioned that the response ETag is not the md5, but I'm wondering if the md5 is in another header, or if there's some other signal to indicate that the uploaded part matched the checksum provided when the upload began.

FWIW - if you would like access to an R2 bucket to experiment with I have a personal account I can provide you with keys to play with.

Anyway, this is not an urgent issue for me - I was just curious! Thanks, again!


@ggtakec commented on GitHub (Jan 26, 2023):

@barabo
Below is the transmission/reception log of one part when performing a multipart upload to Cloudflare R2.

  • Send

PUT /<bucket-name>/testfile?partNumber=1&uploadId=AKlJIZfgRVOQwS2UEs3Uuv1WM%2FJFaNz4lSKmjc9eqD%2BEW4dqz%2F%2FkWHa7ugXUEqJfeSpECxfGSy0xy4TD4MUjuEa7de2G0TuLvMqTOd3%2FO4ZqY6COpsMpBYmTsJXzTBQPZH9gd7h0vL11LRTVgMWZtkTRPzRYr4Krg9rb6VS6je4M1iSJ%2FoSs2GTdyu7n%2FRgGwwJohY%2BDkije1wE00lLSbBQUHbPPAoULO1VyUHJEw8uumchFHL%2BVoFlsiN3Fy4po%2F4H1SJlubTO7VxtE2HHPL6jsEKMcBfLLijFwY8HrkWp8sgHXit3Hvxnl4F3mN4giyUOZ3KAOu5SsMcWAVhbUrjs%3D HTTP/1.1
Host: <host-part>.r2.cloudflarestorage.com
User-Agent: s3fs/1.91 (commit hash 16bc449; OpenSSL)
Authorization: AWS4-HMAC-SHA256 Credential=*************, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=**************
x-amz-content-sha256: 715f7678fc563de39cc31e8867e7bacbdb6db40ca0bdeb9b3129250fcebda169
x-amz-date: 20230126T110915Z
Content-Length: 10485760
Expect: 100-continue

  • Response

HTTP/1.1 200 OK
Date: Thu, 26 Jan 2023 11:09:17 GMT
Content-Length: 0
Connection: keep-alive
ETag: "AJ1VPukMfW6zAmd7C3TdI+g3MWlKGLoxa3D+/r2cdz1E4ubC8og1r13Y68/a9Q3ie7hB4FgFkVwe261gI8pDKJnMHlpiFEdeeS87l1vRLLiqcLq3nyhgDURPxeFu6g3BcThI1HEQQH2av+7LKTlJhQkWgfa/Qh0UQCcyeogROe/r5khU+S0uGE6Bi4IaOpWk/fUzqHMITxiP0HckYBjrCXBVlYxkNcmIoTwIE+CmNNP/tpg/zsLfOxbv1D+bD6bQ1w=="
Server: cloudflare
CF-RAY: 78f8cb9fb92fafca-NRT

As shown above, there is no md5 or similar value in the response.
It is possible that I'm missing some unknown option or parameter.
But I don't think s3fs (the client side) can validate the content, because the response carries no information about the received content (such as an MD5 value).


@gaul commented on GitHub (Jan 29, 2023):

This still fails:

2023-01-29T09:24:30.807Z [CURL DBG] > POST /25MB?uploadId=AABOQWNM0Vogo2iRZHGa1SeyAWkXbMbMGh8mvyvhOjzePWLXEvLQQ8hNJQjxiJzZuvVWshm40vBcfjtUUO%2B81GpKzSgpUPlVhhr5DABEfHvpp1SAQq4c8Sf%2BEYoH6dByXvOY6e1lqRubgt0rhwLWq0uZJPo%2F7FjjHM5YpQM6XsIz%2BmaL1cT3ZXQdqPgDe%2BFJagmSbk5PtQ5YDc%2FMdo%2FsLgeN83UWr0OWaP%2B3j7Qi1mF%2BzKb14UfptVV0DYVlwnOVVisqlryqcpt8RWqb9Xbrsk7DdYKL8Sjh0j4L38KQR%2BfGA4me7BKNAg%2FSSYLqdfrBODA4vSEzn81IOtG%2Fa3QP5XQ%3D HTTP/1.1
2023-01-29T09:24:30.807Z [CURL DBG] > Host: XXX.ff6ff4be7a24f1bef9408007c04e983f.r2.cloudflarestorage.com
2023-01-29T09:24:30.807Z [CURL DBG] > User-Agent: s3fs/1.91 (commit hash 22ecfa6; OpenSSL)
2023-01-29T09:24:30.807Z [CURL DBG] > Authorization: AWS4-HMAC-SHA256 Credential=XXX/20230129/us-east-1/s3/aws4_request, SignedHeaders=content-type;host;x-amz-content-sha256;x-amz-date, Signature=XXX
2023-01-29T09:24:30.807Z [CURL DBG] > Content-Type: application/xml
2023-01-29T09:24:30.807Z [CURL DBG] > x-amz-content-sha256: 071105f96627961bb36ab7131d9da6f8872c69f8124e9702c7f6ad76985be470
2023-01-29T09:24:30.807Z [CURL DBG] > x-amz-date: 20230129T092430Z
2023-01-29T09:24:30.807Z [CURL DBG] > Content-Length: 697
2023-01-29T09:24:30.807Z [CURL DBG] >
2023-01-29T09:24:30.807Z [CURL DBG] * TLSv1.2 (OUT), TLS header, Supplemental data (23):
2023-01-29T09:24:30.807Z [CURL DBG] * We are completely uploaded and fine
2023-01-29T09:24:31.487Z [CURL DBG] * TLSv1.2 (IN), TLS header, Supplemental data (23):
2023-01-29T09:24:31.487Z [CURL DBG] * Mark bundle as not supporting multiuse
2023-01-29T09:24:31.487Z [CURL DBG] < HTTP/1.1 400 Bad Request
2023-01-29T09:24:31.487Z [CURL DBG] < Date: Sun, 29 Jan 2023 09:24:31 GMT
2023-01-29T09:24:31.487Z [CURL DBG] < Content-Type: application/xml
2023-01-29T09:24:31.487Z [CURL DBG] < Content-Length: 142
2023-01-29T09:24:31.487Z [CURL DBG] < Connection: keep-alive
2023-01-29T09:24:31.487Z [CURL DBG] < Server: cloudflare
2023-01-29T09:24:31.487Z [CURL DBG] < CF-RAY: 7910ea4c9c77d5fd-NRT
2023-01-29T09:24:31.487Z [CURL DBG] <
2023-01-29T09:24:31.487Z [CURL DBG] * Connection #0 to host gaultest.ff6ff4be7a24f1bef9408007c04e983f.r2.cloudflarestorage.com left intact
2023-01-29T09:24:31.488Z [ERR] curl.cpp:RequestPerform(2412): HTTP response code 400, returning EIO. Body Text: <?xml version="1.0" encoding="UTF-8"?><Error><Code>InvalidPart</Code><Message>There was a problem with the multipart upload.</Message></Error>
2023-01-29T09:24:31.488Z [ERR] fdcache_entity.cpp:UploadPending(2431): failed to flush for file(/25MB) by(-5).
2023-01-29T09:24:31.488Z [ERR] s3fs.cpp:s3fs_release(2757): could not upload pending data(meta, etc) for pseudo_fd(2) / path(/25MB)

I don't have time to look at this now, but I believe s3fs should maintain query parameters in a list<pair<string, string> >. The encoding rules for the AWS signature and for HTTP URL encoding are different.
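One hazard with carrying a single pre-encoded query string is double encoding. A small illustration (encodeComponent is a hypothetical helper) of why storing decoded pairs and encoding exactly once per consumer avoids it:

```cpp
#include <cctype>
#include <string>

// Percent-encoding is not idempotent: applying it to an already-encoded
// value encodes the '%' signs again. Keeping parameters decoded in a
// list<pair<string, string>> and encoding once for the request URL and
// once for the signature's canonical request sidesteps this.
// Hypothetical helper for illustration.
static std::string encodeComponent(const std::string& in)
{
    static const char* kHex = "0123456789ABCDEF";
    std::string out;
    for (unsigned char c : in) {
        if (std::isalnum(c) || c == '-' || c == '.' || c == '_' || c == '~') {
            out += static_cast<char>(c);
        } else {
            out += '%';
            out += kHex[c >> 4];
            out += kHex[c & 0x0F];
        }
    }
    return out;
}
// encodeComponent("a/b")                  -> "a%2Fb"
// encodeComponent(encodeComponent("a/b")) -> "a%252Fb" (double-encoded)
```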


@ggtakec commented on GitHub (Jan 29, 2023):

@gaul
Does it occur even with the nomixupload (or streamupload) option specified?

In the case I confirmed, InvalidPart occurred when the part size was not fixed due to mixupload.
InvalidPart does not occur when the part size is fixed (excluding the final part).
Please specify nomixupload and investigate a little more to see whether the same phenomenon can be reproduced.


@ggtakec commented on GitHub (Feb 18, 2023):

#2097 has been merged.
When accessing Cloudflare R2 with code of master branch, you should specify the nomultipart or streamupload options.
I think that will solve this problem.


@ggtakec commented on GitHub (Mar 19, 2023):

This will be closed. If you still have problems, please reopen or post a new issue.


@abelbeck commented on GitHub (Jul 4, 2024):

> #2097 has been merged. When accessing Cloudflare R2 with code of master branch, you should specify the nomultipart or streamupload options. I think that will solve this problem.

Did you mean to say "you should specify the nomixupload or streamupload options", as you stated before?

I have been testing Cloudflare R2 with s3fs 1.94, and large uploads failed until I set -o nomixupload.

I think the wiki https://github.com/s3fs-fuse/s3fs-fuse/wiki/Non-Amazon-S3#cloudflare-r2 for "Cloudflare R2" should replace nomultipart with nomixupload.
