[GH-ISSUE #1865] Flush of dirty data w/ SSE custom key fails after first round #950

Closed
opened 2026-03-04 01:50:09 +03:00 by kerem · 4 comments

Originally created by @lmlsna on GitHub (Jan 21, 2022).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1865

Additional Information

The following information is very important in order to help us to help you. Omission of these details may delay your support request or cause it to receive no attention at all.
Keep in mind that the commands we provide to retrieve information are oriented to GNU/Linux distributions, so you may need different ones if you use s3fs on macOS or BSD.

Version of s3fs being used (s3fs --version)

1.89

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)

2.9.9

Kernel information (uname -r)

5.13.0-25-generic

GNU/Linux Distribution, if applicable (cat /etc/os-release)

PRETTY_NAME="Ubuntu 21.10"
NAME="Ubuntu"
VERSION_ID="21.10"
VERSION="21.10 (Impish Indri)"
VERSION_CODENAME=impish
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=impish

s3fs command line used, if applicable

s3fs -d -f -o max_dirty_data=-1 -o use_sse=custom:/home/ubuntu/s3fs/sse-key -o load_sse_c=/home/ubuntu/s3fs/sse-key -o passwd_file=/home/ubuntu/s3fs/passwd my-bucket:/ /mount/point

/etc/fstab entry, if applicable

s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)

If you execute s3fs with the dbglevel or curldbg options, you can get detailed debug messages.

s3fs[3822577]: curl.cpp:RequestPerform(2318): HTTP response code 400, returning EIO. Body Text:
<?xml version="1.0" encoding="UTF-8"?><Error>
<Code>InvalidRequest</Code>
<Message>The multipart upload initiate requested encryption. Subsequent part requests must include the appropriate encryption parameters.</Message>
</Error>

s3fs[3822577]: curl_multi.cpp:MultiPerform(135): thread failed - rc(-5)

Details about issue

If max_dirty_data is set to -1, or to a size larger than the file I am trying to upload (5 GB by default), it works fine.
However, if the file is larger than max_dirty_data, s3fs successfully writes an encrypted object of max_dirty_data bytes, but the subsequent PUT requests fail with the error message above.

I am on a non-AWS host, so I am using the -o url= option, which may be improperly relied on as a variable for the subsequent part requests? IDK.
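For scale: s3fs splits multipart uploads into parts of multipart_size MB (10 MB is the s3fs default, assumed here), so a file just past the 5 GB single-PUT limit already needs hundreds of UploadPart requests, and every one of them has to carry the encryption parameters. A quick back-of-the-envelope part count:

```python
# Sizes assumed from s3fs defaults: 10 MB parts (-o multipart_size=10).
file_bytes = 5 * 1024 * 1024 * 1024 + 1   # one byte past the 5 GB single-PUT limit
part_bytes = 10 * 1024 * 1024             # default multipart part size

# Ceiling division: number of UploadPart requests the upload needs.
parts = (file_bytes + part_bytes - 1) // part_bytes
print(parts)  # 513 parts, each of which must repeat the SSE-C headers
```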

kerem closed this issue 2026-03-04 01:50:09 +03:00

@gaul commented on GitHub (Jan 26, 2022):

s3fs has a blind spot in its testing: large files are blocked by #1543 and server-side encryption by gaul/s3proxy#402.


@gaul commented on GitHub (Jan 26, 2022):

I successfully wrote a 5 GB + 1 file with s3fs -o use_sse -o max_dirty_data=-1 using the test junk_data command. Could you give some other instructions to reproduce the symptom?


@lmlsna commented on GitHub (Jan 11, 2023):

> I successfully wrote a 5 GB + 1 file with s3fs -o use_sse -o max_dirty_data=-1 using the test junk_data command. Could you give some other instructions to reproduce the symptom?

According to the docs, -o use_sse defaults to the "sse-s3" key type, whereas I had the issue with the "sse-c[ustom]" type.

So providing a custom key with -o use_sse=custom:<custom_sse_key> (or more specifically -o use_sse=custom -o load_sse_c=/path/to/sse-c.key, where the file at that path contains the custom key) together with max_dirty_data=-1 will likely reproduce the problem.

Multipart SSE-C uploads have some extra headers that SSE-S3 multipart uploads don't, apparently. I'll look at the code and send a PR if that's what's up. Eventually. Probably. 😄
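The extra headers in question are documented by AWS: every UploadPart request in an SSE-C multipart upload must repeat x-amz-server-side-encryption-customer-algorithm, x-amz-server-side-encryption-customer-key (the base64-encoded raw key), and x-amz-server-side-encryption-customer-key-MD5 (the base64-encoded MD5 digest of the key), matching the values sent with the CreateMultipartUpload request. A minimal sketch of how those values are derived from a raw 256-bit key (the helper name is illustrative, not s3fs code):

```python
import base64
import hashlib

def sse_c_headers(raw_key: bytes) -> dict:
    """Build the SSE-C headers S3 expects on CreateMultipartUpload
    AND on every subsequent UploadPart request."""
    if len(raw_key) != 32:
        raise ValueError("SSE-C requires a 256-bit (32-byte) key")
    return {
        "x-amz-server-side-encryption-customer-algorithm": "AES256",
        "x-amz-server-side-encryption-customer-key":
            base64.b64encode(raw_key).decode(),
        "x-amz-server-side-encryption-customer-key-MD5":
            base64.b64encode(hashlib.md5(raw_key).digest()).decode(),
    }

# Dummy constant key for illustration only; never use a fixed key in practice.
headers = sse_c_headers(b"\x01" * 32)
print(headers["x-amz-server-side-encryption-customer-algorithm"])  # AES256
```

If s3fs attaches these headers only to the initiate request and omits them from the part requests, S3 returns exactly the InvalidRequest error shown in the log above.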


@ggtakec commented on GitHub (Jan 15, 2023):

The code for #2088 has been merged.
Please try the new code in master if you can.

This issue will be closed, but please reopen it if the problem still exists.
Thanks in advance for your assistance.
