Mirror of https://github.com/s3fs-fuse/s3fs-fuse.git, synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #1865] Flush of dirty data w/ SSE custom key fails after first round #950
Originally created by @lmlsna on GitHub (Jan 21, 2022).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1865
Additional Information
The following information is very important in helping us help you. Omitting these details may delay your support request or cause it to receive no attention at all.
Keep in mind that the commands we provide to retrieve this information are oriented toward GNU/Linux distributions, so you may need to use alternatives if you run s3fs on macOS or BSD.
Version of s3fs being used (s3fs --version)
1.89
Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)
2.9.9
Kernel information (uname -r)
5.13.0-25-generic
GNU/Linux Distribution, if applicable (cat /etc/os-release)
PRETTY_NAME="Ubuntu 21.10"
NAME="Ubuntu"
VERSION_ID="21.10"
VERSION="21.10 (Impish Indri)"
VERSION_CODENAME=impish
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=impish
s3fs command line used, if applicable
/etc/fstab entry, if applicable
s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)
If you execute s3fs with the dbglevel and curldbg options, you can get detailed debug messages.
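The version and environment checks listed above can be collected in one pass. A minimal sketch; the guards and section formatting are mine, and only the commands named in the template are probed:

```shell
#!/bin/sh
# Gather the diagnostics requested by the issue template in one pass.
# Each probe is guarded so the script still completes on systems
# where a tool is missing (e.g. a container without s3fs installed).

section() { printf '== %s ==\n' "$1"; }

report() {
    section "s3fs version"
    command -v s3fs >/dev/null 2>&1 && s3fs --version || echo "(s3fs not installed)"

    section "fuse version"
    command -v pkg-config >/dev/null 2>&1 && pkg-config --modversion fuse 2>/dev/null || echo "(unknown)"

    section "kernel"
    uname -r

    section "distribution"
    [ -r /etc/os-release ] && cat /etc/os-release || echo "(no /etc/os-release)"
}

report
```

Paste the output of `report` into the issue so all four items arrive together.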
Details about issue
If `max_dirty_data` is set to `-1` or to a size larger than the file I am trying to upload (5 GB by default), it works fine. However, if the file is larger, s3fs will successfully write an encrypted file the size of `max_dirty_data`, but subsequent PUT requests fail with the error message above.

I am on a non-AWS host, so I am using the `-o url=` option, which may be improperly relied on as a variable for subsequent part requests? IDK.

@gaul commented on GitHub (Jan 26, 2022):
s3fs has a blind spot in its testing: both large files (blocked by #1543) and server-side encryption (blocked by gaul/s3proxy#402).
@gaul commented on GitHub (Jan 26, 2022):
I successfully wrote a 5 GB + 1 file with `s3fs -o use_sse -o max_dirty_data=-1` using the test `junk_data` command. Could you give some other instructions to reproduce the symptom?

@lmlsna commented on GitHub (Jan 11, 2023):
According to the docs, `-o use_sse` defaults to the "sse-s3" key type, whereas I had the issue with the "sse-c[ustom]" type. So providing a custom key with `-o use_sse=custom:<custom_sse_key>` (or more specifically `-o use_sse=custom -o load_sse_c=/path/to/sse-c.key`, where the path contains the custom key) with `dirty_data=-1` will likely reproduce the problem.

Multipart SSE-C uploads have some extra headers that SSE-S3 multipart uploads don't, apparently. I'll look at the code and send a PR if that's what's up. Eventually. Probably. 😄
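A reproduction along the lines lmlsna describes might look like the following sketch. The bucket, endpoint, mount point, key file, and sizes are all placeholder assumptions, not values from the thread; it needs a reachable S3-compatible endpoint and an SSE-C key file in the format described in the s3fs man page:

```shell
#!/bin/sh
# Hypothetical reproduction sketch -- all names below are placeholders.
BUCKET=mybucket
MNT=/mnt/s3
KEYFILE=/path/to/sse-c.key   # custom SSE-C key file, one key per line (see man s3fs)

# Mount with a custom SSE-C key and a max_dirty_data smaller than the
# test file, so the flush is split into more than one round.
s3fs "$BUCKET" "$MNT" \
    -o url=https://s3.example.com \
    -o use_sse=custom -o load_sse_c="$KEYFILE" \
    -o max_dirty_data=1024          # value is in MB per the man page

# Write a file larger than max_dirty_data: per the report, the first
# flush round succeeds and subsequent PUT requests fail.
dd if=/dev/urandom of="$MNT/bigfile" bs=1M count=2048
```

Running with `-o dbglevel=info -o curldbg` while doing this should show the failing part requests in the log.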
@ggtakec commented on GitHub (Jan 15, 2023):
The code for #2088 has been merged.
Please try the new code in master if you can.
This issue will be closed, but please reopen it if the problem still exists.
Thanks in advance for your assistance.