[GH-ISSUE #2650] Cannot list folders when using SSE-C on Scaleway #1264

Closed
opened 2026-03-04 01:52:40 +03:00 by kerem · 13 comments
Owner

Originally created by @lanormerh on GitHub (Mar 20, 2025).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2650

Additional Information

Version of s3fs being used (s3fs --version)

V1.95 (commit:561ce1e)

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse or dpkg -s fuse)

fuse (2.9.9-6)

Kernel information (uname -r)

6.8.0-49-generic

GNU/Linux Distribution, if applicable (cat /etc/os-release)

Debian GNU/Linux 12 (bookworm)

How to run s3fs, if applicable

s3fs -f -d -o curldbg -o allow_other,use_path_request_style,nocopyapi,sigv2 -o use_sse=custom:/etc/passwd-sse-c -o passwd_file=/etc/passwd-s3fs -o url=https://s3.fr-par.scw.cloud -o endpoint=fr-par bucket-name-redacted /mnt/s3fs

Details about issue

I'm trying to run s3fs with SSE-C against Scaleway Object Storage.
As soon as I add an encrypted file, I can no longer list folders (the HEAD request fails with 400 Bad Request).

2025-03-20T14:54:22.972Z [CURL DBG] < content-type: application/xml
2025-03-20T14:54:22.972Z [CURL DBG] < date: Thu, 20 Mar 2025 14:54:22 GMT
2025-03-20T14:54:22.972Z [CURL DBG] < x-amz-id-2: txgeaa2fbb4361c4a059d61-0067dc2c1e
2025-03-20T14:54:22.972Z [CURL DBG] < x-amz-request-id: txgeaa2fbb4361c4a059d61-0067dc2c1e
2025-03-20T14:54:22.972Z [CURL DBG] <
2025-03-20T14:54:22.972Z [CURL DBG] * Connection #2 to host s3.fr-par.scw.cloud left intact
2025-03-20T14:54:22.972Z [ERR] curl.cpp:RequestPerform(2629): HEAD HTTP response code 400, returning EPERM.
2025-03-20T14:54:22.972Z [WAN] curl_multi.cpp:MultiPerform(167): thread terminated with non-zero return code: -1
2025-03-20T14:54:22.972Z [ERR] s3fs.cpp:readdir_multi_head(3290): error occurred in multi request(errno=-1).
2025-03-20T14:54:22.972Z [ERR] s3fs.cpp:s3fs_readdir(3380): readdir_multi_head returns error(-1).
2025-03-20T14:54:45.251Z [INF] s3fs.cpp:s3fs_opendir(3114): [path=/ludo/DSN/TEMPORAIRE][flags=0x18800][pid=231,uid=0,gid=0]
2025-03-20T14:54:45.251Z [INF] s3fs.cpp:s3fs_getattr(954): [path=/ludo/DSN/TEMPORAIRE][pid=231,uid=0,gid=0]
2025-03-20T14:54:45.251Z [INF] s3fs.cpp:s3fs_readdir(3355): [path=/ludo/DSN/TEMPORAIRE][pid=231,uid=0,gid=0]
2025-03-20T14:54:45.251Z [INF]   s3fs.cpp:list_bucket(3398): [path=/ludo/DSN/TEMPORAIRE]
2025-03-20T14:54:45.251Z [INF]       curl.cpp:ListBucketRequest(3840): [tpath=/ludo/DSN/TEMPORAIRE]
2025-03-20T14:54:45.251Z [INF]       curl_util.cpp:prepare_url(211): URL is https://s3.fr-par.scw.cloud/bucket-name-redacted?delimiter=/&max-keys=1000&prefix=ludo/DSN/TEMPORAIRE/
2025-03-20T14:54:45.251Z [INF]       curl_util.cpp:prepare_url(244): URL changed is https://s3.fr-par.scw.cloud/bucket-name-redacted/?delimiter=/&max-keys=1000&prefix=ludo/DSN/TEMPORAIRE/
2025-03-20T14:54:45.252Z [CURL DBG] * Found bundle for host: 0x73b768051e60 [serially]
2025-03-20T14:54:45.252Z [CURL DBG] * Can not multiplex, even if we wanted to
2025-03-20T14:54:45.252Z [CURL DBG] * Re-using existing connection #2 with host s3.fr-par.scw.cloud
2025-03-20T14:54:45.252Z [CURL DBG] > GET /bucket-name-redacted/?delimiter=/&max-keys=1000&prefix=ludo/DSN/TEMPORAIRE/ HTTP/1.1
2025-03-20T14:54:45.252Z [CURL DBG] > Host: s3.fr-par.scw.cloud
2025-03-20T14:54:45.252Z [CURL DBG] > User-Agent: s3fs/1.95 (commit hash (commit:561ce1e +untracked files); OpenSSL)
2025-03-20T14:54:45.252Z [CURL DBG] > Accept: */*
2025-03-20T14:54:45.252Z [CURL DBG] > Authorization: AWS SCW_ACCESS_KEY_REDACTED:ZJCE/h1QoYwydlRTf0gPCrx68Ug=
2025-03-20T14:54:45.252Z [CURL DBG] > Date: Thu, 20 Mar 2025 14:54:45 GMT
2025-03-20T14:54:45.252Z [CURL DBG] >
2025-03-20T14:54:45.252Z [CURL DBG] * TLSv1.3 (IN), TLS alert, close notify (256):
2025-03-20T14:54:45.252Z [CURL DBG] * Connection died, retrying a fresh connect (retry count: 1)
2025-03-20T14:54:45.252Z [CURL DBG] * Closing connection 2
2025-03-20T14:54:45.252Z [CURL DBG] * TLSv1.3 (OUT), TLS alert, close notify (256):
2025-03-20T14:54:45.254Z [CURL DBG] * Issue another request to this URL: 'https://s3.fr-par.scw.cloud/bucket-name-redacted/?delimiter=/&max-keys=1000&prefix=ludo/DSN/TEMPORAIRE/'
2025-03-20T14:54:45.254Z [CURL DBG] * Hostname s3.fr-par.scw.cloud was found in DNS cache
2025-03-20T14:54:45.254Z [CURL DBG] *   Trying 51.159.62.19:443...
2025-03-20T14:54:45.256Z [CURL DBG] * Connected to s3.fr-par.scw.cloud (51.159.62.19) port 443 (#3)
2025-03-20T14:54:45.257Z [CURL DBG] * SSL re-using session ID
2025-03-20T14:54:45.257Z [CURL DBG] * TLSv1.3 (OUT), TLS handshake, Client hello (1):
2025-03-20T14:54:45.301Z [CURL DBG] *  CAfile: /etc/ssl/certs/ca-certificates.crt
2025-03-20T14:54:45.301Z [CURL DBG] *  CApath: /etc/ssl/certs
2025-03-20T14:54:45.301Z [CURL DBG] * TLSv1.3 (IN), TLS handshake, Server hello (2):
2025-03-20T14:54:45.301Z [CURL DBG] * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
2025-03-20T14:54:45.301Z [CURL DBG] * TLSv1.3 (IN), TLS handshake, Certificate (11):
2025-03-20T14:54:45.302Z [CURL DBG] * TLSv1.3 (IN), TLS handshake, CERT verify (15):
2025-03-20T14:54:45.302Z [CURL DBG] * TLSv1.3 (IN), TLS handshake, Finished (20):
2025-03-20T14:54:45.302Z [CURL DBG] * TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
2025-03-20T14:54:45.302Z [CURL DBG] * TLSv1.3 (OUT), TLS handshake, Finished (20):
2025-03-20T14:54:45.302Z [CURL DBG] * SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
2025-03-20T14:54:45.302Z [CURL DBG] * Server certificate:
2025-03-20T14:54:45.302Z [CURL DBG] *  subject: CN=s3.fr-par.scw.cloud
2025-03-20T14:54:45.302Z [CURL DBG] *  start date: Feb  3 04:39:02 2025 GMT
2025-03-20T14:54:45.302Z [CURL DBG] *  expire date: May  4 04:39:01 2025 GMT
2025-03-20T14:54:45.302Z [CURL DBG] *  subjectAltName: host "s3.fr-par.scw.cloud" matched cert's "s3.fr-par.scw.cloud"
2025-03-20T14:54:45.302Z [CURL DBG] *  issuer: C=US; O=Let's Encrypt; CN=R11
2025-03-20T14:54:45.302Z [CURL DBG] *  SSL certificate verify ok.
2025-03-20T14:54:45.302Z [CURL DBG] > GET /bucket-name-redacted/?delimiter=/&max-keys=1000&prefix=ludo/DSN/TEMPORAIRE/ HTTP/1.1
2025-03-20T14:54:45.302Z [CURL DBG] > Host: s3.fr-par.scw.cloud
2025-03-20T14:54:45.302Z [CURL DBG] > User-Agent: s3fs/1.95 (commit hash (commit:561ce1e +untracked files); OpenSSL)
2025-03-20T14:54:45.302Z [CURL DBG] > Accept: */*
2025-03-20T14:54:45.302Z [CURL DBG] > Authorization: AWS SCW_ACCESS_KEY_REDACTED:ZJCE/h1QoYwydlRTf0gPCrx68Ug=
2025-03-20T14:54:45.302Z [CURL DBG] > Date: Thu, 20 Mar 2025 14:54:45 GMT
2025-03-20T14:54:45.302Z [CURL DBG] >
2025-03-20T14:54:45.303Z [CURL DBG] * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
2025-03-20T14:54:45.303Z [CURL DBG] * old SSL session ID is stale, removing
2025-03-20T14:54:45.304Z [CURL DBG] * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
2025-03-20T14:54:45.304Z [CURL DBG] * old SSL session ID is stale, removing
2025-03-20T14:54:45.435Z [CURL DBG] < HTTP/1.1 200 OK
2025-03-20T14:54:45.435Z [CURL DBG] < content-type: application/xml
2025-03-20T14:54:45.435Z [CURL DBG] < date: Thu, 20 Mar 2025 14:54:45 GMT
2025-03-20T14:54:45.435Z [CURL DBG] < x-amz-id-2: txg7c5bee271fec476e8c9e-0067dc2c35
2025-03-20T14:54:45.435Z [CURL DBG] < x-amz-request-id: txg7c5bee271fec476e8c9e-0067dc2c35
2025-03-20T14:54:45.435Z [CURL DBG] < content-length: 1113
2025-03-20T14:54:45.435Z [CURL DBG] <
2025-03-20T14:54:45.435Z [CURL DBG] * Connection #3 to host s3.fr-par.scw.cloud left intact
2025-03-20T14:54:45.435Z [INF]       curl.cpp:RequestPerform(2591): HTTP response code 200
2025-03-20T14:54:45.435Z [INF]   s3fs.cpp:readdir_multi_head(3227): [path=/ludo/DSN/TEMPORAIRE/][list=0]
2025-03-20T14:54:45.435Z [INF]       curl.cpp:PreHeadRequest(3348): [tpath=/ludo/DSN/TEMPORAIRE/debug.log][bpath=/ludo/DSN/TEMPORAIRE/debug.log][save=/ludo/DSN/TEMPORAIRE/debug.log][sseckeypos=18446744073709551615]
2025-03-20T14:54:45.435Z [INF]       curl_util.cpp:prepare_url(211): URL is https://s3.fr-par.scw.cloud/bucket-name-redacted/ludo/DSN/TEMPORAIRE/debug.log
2025-03-20T14:54:45.435Z [INF]       curl_util.cpp:prepare_url(244): URL changed is https://s3.fr-par.scw.cloud/bucket-name-redacted/ludo/DSN/TEMPORAIRE/debug.log
2025-03-20T14:54:45.435Z [INF]       curl_multi.cpp:Request(299): [count=1]
2025-03-20T14:54:45.435Z [CURL DBG] * Found bundle for host: 0x73b768051e60 [serially]
2025-03-20T14:54:45.435Z [CURL DBG] * Can not multiplex, even if we wanted to
2025-03-20T14:54:45.435Z [CURL DBG] * Re-using existing connection #3 with host s3.fr-par.scw.cloud
2025-03-20T14:54:45.435Z [CURL DBG] > HEAD /bucket-name-redacted/ludo/DSN/TEMPORAIRE/debug.log HTTP/1.1
2025-03-20T14:54:45.435Z [CURL DBG] > Host: s3.fr-par.scw.cloud
2025-03-20T14:54:45.435Z [CURL DBG] > User-Agent: s3fs/1.95 (commit hash (commit:561ce1e +untracked files); OpenSSL)
2025-03-20T14:54:45.435Z [CURL DBG] > Accept: */*
2025-03-20T14:54:45.435Z [CURL DBG] > Authorization: AWS SCW_ACCESS_KEY_REDACTED:l2w7PTpCmmUz2Wp6UwPkAdFBrqU=
2025-03-20T14:54:45.435Z [CURL DBG] > Date: Thu, 20 Mar 2025 14:54:45 GMT
2025-03-20T14:54:45.435Z [CURL DBG] >
2025-03-20T14:54:45.447Z [CURL DBG] < HTTP/1.1 400 Bad Request
2025-03-20T14:54:45.447Z [CURL DBG] < content-type: application/xml
2025-03-20T14:54:45.447Z [CURL DBG] < date: Thu, 20 Mar 2025 14:54:45 GMT
kerem closed this issue 2026-03-04 01:52:40 +03:00

@lanormerh commented on GitHub (Mar 25, 2025):

I noticed that the SSE-C headers are missing when s3fs retries the HEAD request.
Longer logs: https://gist.github.com/lanormerh/e1078112f30fcc27fe0ea4f2f1b9dbfa


@ggtakec commented on GitHub (Mar 31, 2025):

I checked your log and it looks like the request (and the retry request) succeeded.
When SSE-C is configured, s3fs first sends the request without any SSE-C-related headers.
If that request fails (404), s3fs resends it with the SSE-C headers added.
From your log, these actions appear to be working as intended.

> I noticed that the SSE-C headers are missing when s3fs retries the HEAD request

If you still have logs in which the SSE-C header was not added, please post them.


@BaptisteBdn commented on GitHub (Nov 27, 2025):

I am having the same problem with SSE-C on Scaleway.
It works when the bucket is empty, but as soon as an encrypted file is in the bucket, I get a permission denied error when using s3fs.

DEBIAN_VERSION_FULL=13.2
S3FS = V1.95
FUSE = Version: 3.17.2-3
Kernel = 6.12.48+deb13-cloud-amd64

Command to run s3fs :

s3fs redacted-bucket /root/s3-data -o allow_other -o passwd_file=$HOME/.passwd-s3fs -o use_path_request_style -o endpoint=fr-par -o parallel_count=15 -o multipart_size=128 -o nocopyapi -o use_cache=/root/s3fs-cache -o use_sse=custom:/root/ssec-b64.key -o load_sse_c=/root/ssec-b64.key -o url=https://s3.fr-par.scw.cloud -o dbglevel=info -f -o curldbg

Logs : https://gist.github.com/BaptisteBdn/ef29018c81e74ffe7ddc47b6c3f14104

The only thing I saw is that the MD5 of the key (x-amz-server-side-encryption-customer-key-md5) does not match what is described in the Scaleway documentation (https://www.scaleway.com/en/docs/object-storage/api-cli/enable-sse-c/).
Generating the MD5 with openssl dgst -md5 -binary ssec.key | base64 gives a value different from what I see in x-amz-server-side-encryption-customer-key-md5.
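As a side note, this kind of mismatch is easy to reproduce locally: the header must carry the MD5 of the raw 32-byte key, and hashing the base64 text of the key instead yields a different digest. A minimal sketch (assuming GNU base64 and openssl; ssec.key is a throwaway key):

```shell
# Sketch: the -customer-key-md5 header is the MD5 of the *raw* key bytes.
# Hashing the base64-encoded text of the key gives a different (invalid) value.
openssl rand -out ssec.key 32
md5_raw=$(openssl dgst -md5 -binary ssec.key | base64)
md5_b64=$(printf '%s' "$(base64 -w0 ssec.key)" | openssl dgst -md5 -binary | base64)
echo "MD5(raw key):       $md5_raw"
echo "MD5(base64 of key): $md5_b64"
```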


@ggtakec commented on GitHub (Dec 7, 2025):

@BaptisteBdn I apologize for my inaccurate answer to your initial question.

When s3fs-fuse checks an object (with a HEAD request), it first sends the request without an SSE-C key.
If that request fails and SSE-C keys are configured, it retries with each key in turn.
I believe the log you sent shows this behavior, so s3fs-fuse appears to be working correctly.

The reason it works this way is that a bucket can contain a mixture of encrypted and unencrypted objects, so s3fs has to try each possibility.
(Thus, if you don't specify use_sse, which is usually the case, these fallbacks do not occur; they only happen when the option is specified.)
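This retry order can be sketched as a small shell function. It is only an illustration of the behavior, not s3fs's actual implementation (which is C++); head_object and SSEC_KEYS are hypothetical names:

```shell
# Illustrative sketch of the HEAD fallback when use_sse=custom is set.
# head_object is a hypothetical wrapper around the real HEAD request.
head_with_fallback() {
  path=$1
  head_object "$path" && return 0       # first attempt: no SSE-C headers
  for key in "${SSEC_KEYS[@]}"; do      # on failure: retry with each key
    head_object "$path" "$key" && return 0
  done
  return 1                              # no attempt succeeded
}
```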


@BaptisteBdn commented on GitHub (Dec 8, 2025):

@ggtakec, thank you for your answer; however, s3fs with SSE-C currently does not work with Scaleway. It works as long as there is no encrypted file in the bucket, but once one is added, I can't access the mounted directory.

How do you generate the x-amz-server-side-encryption-customer-key-md5?
When trying it myself, I do see a difference:

Using the generation method given by Scaleway (which works when uploading with s3api):

root@scw-test:~# openssl rand -out ssec.key 32
root@scw-test:~# ENCRYPTION_KEY=$(cat ssec.key | base64)
root@scw-test:~# KEY_DIGEST=$(openssl dgst -md5 -binary ssec.key | base64)
root@scw-test:~# echo $ENCRYPTION_KEY
o1XToCZVVYQ5DNRLXidcZcCAUoLFVjldKb4Zft+cYUA=
root@scw-test:~# echo $KEY_DIGEST
iTy6h86P18A/mVQccWpnMg==

When starting s3fs with the command: s3fs redacted-bucket /root/s3-data -o allow_other -o passwd_file=$HOME/.passwd-s3fs -o use_path_request_style -o endpoint=fr-par -o parallel_count=15 -o multipart_size=128 -o nocopyapi -o use_cache=/root/s3fs-cache -o use_sse=custom:/root/ssec-b64.key -o load_sse_c=/root/ssec-b64.key -o url=https://s3.fr-par.scw.cloud -o dbglevel=info -f -o curldbg

2025-12-08T06:18:07.256Z [CURL DBG] > x-amz-server-side-encryption-customer-algorithm: AES256
2025-12-08T06:18:07.256Z [CURL DBG] > x-amz-server-side-encryption-customer-key: o1XToCZVVYQ5DNRLXidcZcCAUoLFVjldKb4Zft+cYUA=
2025-12-08T06:18:07.256Z [CURL DBG] > x-amz-server-side-encryption-customer-key-md5: LAAAAAAAAACsO6sVV38AAA==

The encryption key is the same, but the MD5 is not. When uploading with s3api using the MD5 generated by s3fs, I get:
An error occurred (InvalidArgument) when calling the PutObject operation: The calculated MD5 hash of the key did not match the hash that was provided.


@BaptisteBdn commented on GitHub (Dec 8, 2025):

Never mind, I was just using the wrong key for s3fs: I used the base64-encoded key when I should have used the raw 32-byte key. I think the issue can be closed.


@ggtakec commented on GitHub (Dec 8, 2025):

I tried the following for AWS S3 and it works fine.
First, I created the key as follows:

$ openssl rand -out s3_secret.key 32
$ base64 -w0 s3_secret.key
  YbH9V5bjyeN9uAUFcE+oOXB+a1CCjdth9/r3PTfIheg=
$ openssl dgst -md5 -binary s3_secret.key  | base64
  12hWLHzSef1mVJcO+YnI0Q==

I saved this key in ~/sseckeys (mode 0600) and passed use_sse=custom:.../sseckeys when starting s3fs.
Then, when I uploaded a file, the API was called as shown below:

PUT /encripted_file HTTP/1.1
...
...
x-amz-server-side-encryption-customer-algorithm: AES256
x-amz-server-side-encryption-customer-key: YbH9V5bjyeN9uAUFcE+oOXB+a1CCjdth9/r3PTfIheg=
x-amz-server-side-encryption-customer-key-md5: 12hWLHzSef1mVJcO+YnI0Q==

When I access this file, I see something like this:

HEAD /z3 HTTP/1.1
...
...
x-amz-server-side-encryption-customer-algorithm: AES256
x-amz-server-side-encryption-customer-key: YbH9V5bjyeN9uAUFcE+oOXB+a1CCjdth9/r3PTfIheg=
x-amz-server-side-encryption-customer-key-md5: 12hWLHzSef1mVJcO+YnI0Q==

In other words, both the base64 value and the MD5 value of the created key were specified without any problems.
The MD5 is created internally by the s3fs_base64 function, but I don't think there's any problem with it.
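The base64/MD5 header pair can also be sanity-checked offline: decoding the -customer-key header back to raw bytes and hashing those bytes should reproduce the -customer-key-md5 header. A minimal sketch with a throwaway key:

```shell
# Sketch: -customer-key-md5 should be the MD5 of the (decoded) -customer-key.
openssl rand -out s3_secret.key 32
key_hdr=$(base64 -w0 s3_secret.key)                          # ...-customer-key
md5_hdr=$(openssl dgst -md5 -binary s3_secret.key | base64)  # ...-customer-key-md5
check=$(printf '%s' "$key_hdr" | base64 -d | openssl dgst -md5 -binary | base64)
[ "$check" = "$md5_hdr" ] && echo "headers consistent"
```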

You also said that the mount point itself becomes invisible, but I can't imagine why.
I think only the objects (files) are encrypted with SSE-C, but am I wrong?
And since I'm not using Scaleway, I can't guess what the cause of this problem might be.

If possible, try starting s3fs with the dbglevel=dbg,curldbg options and check the detailed trace log.


@BaptisteBdn commented on GitHub (Dec 9, 2025):

So with further testing, I think the md5 hash is calculated differently depending on the version.

With `Amazon Simple Storage Service File System V1.95 with GnuTLS(gcrypt)`:

```
2025-12-09T06:39:33.262Z [CURL DBG] > x-amz-server-side-encryption-customer-key: X1/IagJkBToTHM/p1GnCtVvDNe0gK4xonN2Xw4if7fI=
2025-12-09T06:39:33.262Z [CURL DBG] > x-amz-server-side-encryption-customer-key-md5: IAAAAAAAAACsW2slnn8AAA==
```

With `Amazon Simple Storage Service File System V1.97(commit:0d96734 +untracked files) with OpenSSL` (compiled from source):

```
2025-12-09T06:38:24.106Z [CURL DBG] > x-amz-server-side-encryption-customer-key: X1/IagJkBToTHM/p1GnCtVvDNe0gK4xonN2Xw4if7fI=
2025-12-09T06:38:24.106Z [CURL DBG] > x-amz-server-side-encryption-customer-key-md5: JOD6Em8P6BirSJb7BYuseQ==
```

Both are on Debian 13, v1.95 being the latest available in the Debian repository.
The second MD5 value is valid while the first is not, and s3fs works with the compiled v1.97.

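The two header values above can be cross-checked without s3fs at all: the `-key-md5` header is the base64 of the MD5 digest of the raw (decoded) key. A short Python sketch, using the key from the logs above, shows which build produces the right value:

```python
import base64
import hashlib

# Key taken from the CURL DBG logs above (identical in both runs).
key_b64 = "X1/IagJkBToTHM/p1GnCtVvDNe0gK4xonN2Xw4if7fI="
key = base64.b64decode(key_b64)  # 32 raw key bytes

# x-amz-server-side-encryption-customer-key-md5 must be base64(MD5(raw key)).
expected_md5_b64 = base64.b64encode(hashlib.md5(key).digest()).decode()
print(expected_md5_b64)
```

If the logs above are accurate, this should print the value the OpenSSL build sent (`JOD6Em8P6BirSJb7BYuseQ==`), not the GnuTLS one.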
@ggtakec commented on GitHub (Dec 9, 2025):

@BaptisteBdn Thank you for your great reply.
It seems likely that there was a problem with the version using GnuTLS.
We will look into GnuTLS.

Currently, s3fs is unified to use OpenSSL (v3), but is it possible for you to use the latest version?

@gaul commented on GitHub (Dec 9, 2025):

IAAAAAAAAACsW2slnn8AAA== seems like an unlikely value for a hash.

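Decoding that value supports the suspicion. Reading the 16 bytes as two little-endian 64-bit integers is only an interpretation, not something confirmed against the code, but the result looks like a {length, pointer} pair rather than a digest:

```python
import base64
import struct

# The suspicious -key-md5 value from the GnuTLS build's logs.
raw = base64.b64decode("IAAAAAAAAACsW2slnn8AAA==")

# Interpreted as two little-endian uint64s: 32 (exactly the SSE-C key
# length) followed by what resembles an x86-64 userspace address,
# suggesting raw memory was base64-encoded instead of the MD5 digest.
length, pointer = struct.unpack("<QQ", raw)
print(length, hex(pointer))  # → 32 0x7f9e256b5bac
```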
@BaptisteBdn commented on GitHub (Dec 10, 2025):

@ggtakec Yes, I can use the latest version, thanks.

@ggtakec commented on GitHub (Dec 14, 2025):

@BaptisteBdn Sorry for the wait.
We've found an issue in the code when using GnuTLS and will be working on fixing it.
Please wait a little longer.

@gaul commented on GitHub (Dec 14, 2025):

Please test with the latest master.
