mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #2650] Cannot list folders when using SSE-C on Scaleway #1264
Originally created by @lanormerh on GitHub (Mar 20, 2025).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2650
Additional Information
Version of s3fs being used (s3fs --version): V1.95 (commit:561ce1e)
Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse or dpkg -s fuse): fuse (2.9.9-6)
Kernel information (uname -r): 6.8.0-49-generic
GNU/Linux Distribution, if applicable (cat /etc/os-release): Debian GNU/Linux 12 (bookworm)
How to run s3fs, if applicable
s3fs -f -d -o curldbg -o allow_other,use_path_request_style,nocopyapi,sigv2 -o use_sse=custom:/etc/passwd-sse-c -o passwd_file=/etc/passwd-s3fs -o url=https://s3.fr-par.scw.cloud -o endpoint=fr-par bucket-name-redacted /mnt/s3fs
Details about issue
I'm trying to run s3fs with SSE-C using Scaleway object storage.
As soon as I add an encrypted file, I can't list folders anymore (the HEAD request returns 400 Bad Request).
@lanormerh commented on GitHub (Mar 25, 2025):
I noticed that the SSE-C headers are missing when s3fs retries the HEAD request
Longer logs: https://gist.github.com/lanormerh/e1078112f30fcc27fe0ea4f2f1b9dbfa
@ggtakec commented on GitHub (Mar 31, 2025):
I checked your log, and it looks like the request (and the retry request) were successful.
When SSE-C is configured, s3fs first sends the request without any SSE-C-related headers.
If that request results in an error (404), s3fs resends the request with the SSE-C headers added.
From your log, these steps appear to be working as intended.
If you still have logs in which the SSE-C headers were not added, please attach them.
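The probe-then-retry behavior described above can be sketched as follows. This is a minimal illustration with hypothetical names, not s3fs's actual C++ code: `send_head` stands in for issuing a HEAD request (with or without SSE-C headers) and returning the HTTP status.

```python
from typing import Callable, Optional

def head_object(send_head: Callable[[Optional[bytes]], int],
                ssec_keys: list[bytes]) -> Optional[bytes]:
    """Probe an object as described above: first a plain HEAD without
    SSE-C headers, then one retry per configured SSE-C key."""
    if send_head(None) == 200:   # object is not SSE-C encrypted
        return None
    for key in ssec_keys:        # retry with SSE-C headers, one key at a time
        if send_head(key) == 200:
            return key           # this key decrypts the object
    raise PermissionError("no configured SSE-C key matched this object")
```

Under this scheme, every encrypted object costs at least one failing HEAD before the successful retry, so an initial 404 in the trace log is expected rather than a bug.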
@BaptisteBdn commented on GitHub (Nov 27, 2025):
I am having the same problem with SSE-C on Scaleway.
It works when the bucket is empty, but as soon as an encrypted file is in the bucket, I get a permission denied error when using s3fs.
DEBIAN_VERSION_FULL=13.2
S3FS = V1.95
FUSE = Version: 3.17.2-3
Kernel = 6.12.48+deb13-cloud-amd64
Command to run s3fs:
Logs: https://gist.github.com/BaptisteBdn/ef29018c81e74ffe7ddc47b6c3f14104
The only thing I noticed is that the MD5 of the key (x-amz-server-side-encryption-customer-key-md5) does not match what is described in the Scaleway documentation (https://www.scaleway.com/en/docs/object-storage/api-cli/enable-sse-c/).
Generating the MD5 using:
openssl dgst -md5 -binary ssec.key | base64
gives something different from what I see in x-amz-server-side-encryption-customer-key-md5.
@ggtakec commented on GitHub (Dec 7, 2025):
@BaptisteBdn I apologize for my inaccurate answer to your initial question.
When s3fs-fuse checks an object (with a HEAD request), it first sends the HEAD request without an SSE-C key.
If that request fails and SSE-C keys are configured, it tries each key in turn.
I believe the log you sent shows this behavior.
Therefore, it appears that s3fs-fuse is working correctly.
It works this way because a bucket can contain a mixture of encrypted and unencrypted objects, so each key has to be tried.
(If you don't specify use_sse, which is the usual case, these fallbacks do not occur; they only happen when the option is specified.)
@BaptisteBdn commented on GitHub (Dec 8, 2025):
@ggtakec, thank you for your answer; however, s3fs with SSE-C currently does not work with Scaleway. It works as long as there is no encrypted file in the bucket, but once one is added, I can't access the mounted directory.
How do you generate the x-amz-server-side-encryption-customer-key-md5? When trying, I do see a difference:
Using the generation method given by Scaleway (which works when uploading with s3api).
When starting s3fs with the command:
s3fs redacted-bucket /root/s3-data -o allow_other -o passwd_file=$HOME/.passwd-s3fs -o use_path_request_style -o endpoint=fr-par -o parallel_count=15 -o multipart_size=128 -o nocopyapi -o use_cache=/root/s3fs-cache -o use_sse=custom:/root/ssec-b64.key -o load_sse_c=/root/ssec-b64.key -o url=https://s3.fr-par.scw.cloud -o dbglevel=info -f -o curldbg
the encryption key is the same, but the MD5 is not. When trying to use the MD5 generated by s3fs with s3api:
An error occurred (InvalidArgument) when calling the PutObject operation: The calculated MD5 hash of the key did not match the hash that was provided.
@BaptisteBdn commented on GitHub (Dec 8, 2025):
Never mind, I was just using the wrong key for s3fs: I used the base64-encoded key when I should have used the raw 32-byte key. I think the issue can be closed.
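The mix-up above is easy to reproduce: per the SSE-C header convention, both headers are derived from the raw 32-byte key, and feeding the base64 text in as the key changes both of them. A minimal sketch (hypothetical key material):

```python
import base64
import hashlib

def ssec_headers(raw_key: bytes) -> dict[str, str]:
    """Derive the SSE-C request headers from a raw 32-byte key."""
    assert len(raw_key) == 32, "SSE-C expects a 256-bit key"
    return {
        "x-amz-server-side-encryption-customer-algorithm": "AES256",
        # the key itself travels base64-encoded...
        "x-amz-server-side-encryption-customer-key":
            base64.b64encode(raw_key).decode(),
        # ...but the MD5 is taken over the RAW key, then base64-encoded
        "x-amz-server-side-encryption-customer-key-md5":
            base64.b64encode(hashlib.md5(raw_key).digest()).decode(),
    }

raw = bytes(range(32))              # hypothetical 32-byte key
b64_text = base64.b64encode(raw)    # its 44-character base64 form
# Treating the base64 text as the key yields a different MD5,
# which is exactly the mismatch the server rejects.
assert hashlib.md5(raw).digest() != hashlib.md5(b64_text).digest()
```

This is why the openssl one-liner quoted earlier only matches s3fs's header when it is run against the raw key file, not the base64-encoded one.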
@ggtakec commented on GitHub (Dec 8, 2025):
I tried the following for AWS S3 and it works fine.
First, I created the key as follows:
I saved this key in ~/sseckeys (mode 0600) and pass use_sse=custom:.../sseckeys when starting s3fs.
Then, when I uploaded a file, the API was called as shown below:
If I access this file, I see something like this:
In other words, both the base64 value and the MD5 value of the created key were specified without any problems.
The MD5 is created internally by the s3fs_base64 function, but I don't think there is any problem with it.
You also said that the mount point itself becomes invisible, but I can't imagine why.
I think only the object (file) is encrypted with SSE-C, but am I wrong?
I'm not using Scaleway, so I can't guess what the cause of this problem might be.
If possible, try starting s3fs with the dbglevel=dbg,curldbg options and check the detailed trace log.
@BaptisteBdn commented on GitHub (Dec 9, 2025):
So with further testing, I think the md5 hash is calculated differently depending on the version.
With Amazon Simple Storage Service File System V1.95 with GnuTLS(gcrypt):
With Amazon Simple Storage Service File System V1.97 (commit:0d96734 +untracked files) with OpenSSL (compiled from source):
Both are on Debian 13, with v1.95 being the latest in the Debian repository.
The second one is valid while the first is not, and s3fs works with the compiled version 1.97.
@ggtakec commented on GitHub (Dec 9, 2025):
@BaptisteBdn Thank you for your great reply.
It seems likely that there was a problem with the version using GnuTLS.
We will look into GnuTLS.
Currently, s3fs is unified to use OpenSSL (v3), but is it possible for you to use the latest version?
@gaul commented on GitHub (Dec 9, 2025):
IAAAAAAAAACsW2slnn8AAA== seems like an unlikely value for a hash.
@BaptisteBdn commented on GitHub (Dec 10, 2025):
@ggtakec Yes, I can use the latest version, thanks.
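The value @gaul flagged can be inspected directly. Decoding it is only a sanity check, not a confirmed diagnosis, but it supports the suspicion: the result has the right length for an MD5 digest yet is mostly zero bytes, with a tail that reads like a 64-bit user-space address rather than anything digest-like.

```python
import base64

# The suspicious key-MD5 header value from the GnuTLS build:
raw = base64.b64decode("IAAAAAAAAACsW2slnn8AAA==")

assert len(raw) == 16            # correct length for an MD5 digest, but...
assert raw[0] == 0x20            # ...it starts with 0x20 followed by zeros,
assert raw[1:7] == b"\x00" * 6   # unlike a pseudorandom-looking digest
# and the last eight bytes, read little-endian, fall in the typical
# x86-64 user-space address range (0x00007f...):
tail = int.from_bytes(raw[8:16], "little")
assert hex(tail).startswith("0x7f")
```

This proves nothing about the root cause in the GnuTLS code path; it only shows why the value cannot plausibly be the MD5 of a key.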
@ggtakec commented on GitHub (Dec 14, 2025):
@BaptisteBdn Sorry for the wait.
We've found an issue in the code when using GnuTLS and will be working on fixing it.
Please wait a little longer.
@gaul commented on GitHub (Dec 14, 2025):
Please test with the latest master.