[GH-ISSUE #1193] EC2 instance of IAM user access S3 occur error #631

Closed
opened 2026-03-04 01:47:22 +03:00 by kerem · 12 comments
Owner

Originally created by @orangeSi on GitHub (Nov 7, 2019).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1193

Version of s3fs being used (s3fs --version)

V1.85

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)

2.9.4

Kernel information (uname -r)

4.14.146-93.123.amzn1.x86_64

GNU/Linux Distribution, if applicable (cat /etc/os-release)

NAME="Amazon Linux AMI"
VERSION="2018.03"
ID="amzn"
ID_LIKE="rhel fedora"
VERSION_ID="2018.03"
PRETTY_NAME="Amazon Linux AMI 2018.03"
ANSI_COLOR="0;33"
CPE_NAME="cpe:/o:amazon:linux:2018.03:ga"
HOME_URL="http://aws.amazon.com/amazon-linux-ami/"

s3fs command line used, if applicable

Hi, I created an EC2 instance in the cn-northwest-1 region (zone cn-northwest-1c) with an IAM user account, then created an S3 bucket in the cn-northwest-1 region and uploaded data to it with the same IAM user. I want to mount that S3 bucket on the EC2 instance, so I ran this command on the instance:

s3fs hq-atac-h9  /home/ec2-user/s3mnt -o uid=500 -o gid=500 -o umask=0077  \
-o use_cache=/tmp/s3cache  -o passwd_file=/home/ec2-user/.passwd-s3fs \
-o endpoint=cn-northwest-1   -f 

then got this error:

[INF] s3fs.cpp:s3fs_init(3476): init v1.85(commit:a07a533) with OpenSSL
[CRT] s3fs.cpp:s3fs_check_service(3841): The bucket region is not 'cn-northwest-1c',  \
it is correctly 'us-east-1'. You should specify 'endpoint=us-east-1' option.
[CRT] s3fs.cpp:s3fs_check_service(3866): Failed to connect by sigv4,  \
so retry to connect by signature version 2.
[CRT] s3fs.cpp:s3fs_check_service(3881): invalid credentials(host=https://s3.amazonaws.com) - result of checking service.

If I add `-o curldbg -d`:

s3fs hq-atac-h9  /home/ec2-user/s3mnt -o uid=500 -o gid=500 -o umask=0077  \
-o use_cache=/tmp/s3cache  -o passwd_file=/home/ec2-user/.passwd-s3fs \
-o endpoint=cn-northwest-1  -o curldbg -f -d

then I got this more detailed error:

[CRT] s3fs.cpp:set_s3fs_log_level(297): change debug level from [CRT] to [INF] 
[INF]     s3fs.cpp:set_mountpoint_attribute(4384): PROC(uid=500, gid=500) - MountPoint(uid=500, gid=500, mode=40775)
[INF] s3fs.cpp:s3fs_init(3476): init v1.85(commit:a07a533) with OpenSSL
[INF] s3fs.cpp:s3fs_check_service(3811): check services.
[INF]       curl.cpp:CheckBucket(3397): check a bucket.
[INF]       curl.cpp:prepare_url(4685): URL is https://s3.amazonaws.com/hq-atac-h9/
[INF]       curl.cpp:prepare_url(4718): URL changed is https://hq-atac-h9.s3.amazonaws.com/
[INF]       curl.cpp:insertV4Headers(2745): computing signature [GET] [/] [] []
[INF]       curl.cpp:url_to_host(99): url is https://s3.amazonaws.com
*   Trying 52.216.207.155...
* TCP_NODELAY set
* Connected to hq-atac-h9.s3.amazonaws.com (52.216.207.155) port 443 (#0)
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
  CApath: none
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* Server certificate:
*  subject: C=US; ST=Washington; L=Seattle; O=Amazon.com Inc.; CN=*.s3.amazonaws.com
*  start date: Nov  7 00:00:00 2018 GMT
*  expire date: Feb  7 12:00:00 2020 GMT
*  subjectAltName: host "hq-atac-h9.s3.amazonaws.com" matched cert's "*.s3.amazonaws.com"
*  issuer: C=US; O=DigiCert Inc; OU=www.digicert.com; CN=DigiCert Baltimore CA-2 G2
*  SSL certificate verify ok.
> GET / HTTP/1.1
Host: hq-atac-h9.s3.amazonaws.com
User-Agent: s3fs/1.85 (commit hash a07a533; OpenSSL)
Accept: */*
Authorization: AWS4-HMAC-SHA256 Credential=AKIAR6CVCH2CE5G3AC5X/20191104/cn-northwest-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=1d5196320df6587ffd0cc8434bb2d8f5ddf89e127dc6ac06a44bab28151c9cf2
x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date: 20191104T040949Z

< HTTP/1.1 400 Bad Request
< x-amz-request-id: 6495910D69A787C3
< x-amz-id-2: VyGI2WU7ZSIWtmaF3/5dnncU9NlWcYlssiJlSMPZZ7LHwXV4yeYxSzZlqqO5hON3JvKCmejlKw8=
< Content-Type: application/xml
< Transfer-Encoding: chunked
< Date: Mon, 04 Nov 2019 04:09:49 GMT
< Connection: close
< Server: AmazonS3
< 
* Closing connection 0
[ERR] curl.cpp:RequestPerform(2423): HTTP response code 400, returning EIO. Body Text: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AuthorizationHeaderMalformed</Code><Message>The authorization header is malformed; the region 'cn-northwest-1' is wrong; expecting 'us-east-1'</Message><Region>us-east-1</Region><RequestId>6495910D69A787C3</RequestId><HostId>VyGI2WU7ZSIWtmaF3/5dnncU9NlWcYlssiJlSMPZZ7LHwXV4yeYxSzZlqqO5hON3JvKCmejlKw8=</HostId></Error>
[ERR] curl.cpp:CheckBucket(3423): Check bucket failed, S3 response: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AuthorizationHeaderMalformed</Code><Message>The authorization header is malformed; the region 'cn-northwest-1' is wrong; expecting 'us-east-1'</Message><Region>us-east-1</Region><RequestId>6495910D69A787C3</RequestId><HostId>VyGI2WU7ZSIWtmaF3/5dnncU9NlWcYlssiJlSMPZZ7LHwXV4yeYxSzZlqqO5hON3JvKCmejlKw8=</HostId></Error>
[CRT] s3fs.cpp:s3fs_check_service(3841): The bucket region is not 'cn-northwest-1', it is correctly 'us-east-1'. You should specify 'endpoint=us-east-1' option.
[CRT] s3fs.cpp:s3fs_check_service(3866): Failed to connect by sigv4, so retry to connect by signature version 2.
[INF] curl.cpp:ReturnHandler(318): Pool full: destroy the oldest handler
[INF]       curl.cpp:CheckBucket(3397): check a bucket.
[INF]       curl.cpp:prepare_url(4685): URL is https://s3.amazonaws.com/hq-atac-h9/
[INF]       curl.cpp:prepare_url(4718): URL changed is https://hq-atac-h9.s3.amazonaws.com/
* Hostname hq-atac-h9.s3.amazonaws.com was found in DNS cache
*   Trying 52.216.207.155...
* TCP_NODELAY set
* Connected to hq-atac-h9.s3.amazonaws.com (52.216.207.155) port 443 (#1)
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
  CApath: none
* SSL re-using session ID
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* old SSL session ID is stale, removing
* Server certificate:
*  subject: C=US; ST=Washington; L=Seattle; O=Amazon.com Inc.; CN=*.s3.amazonaws.com
*  start date: Nov  7 00:00:00 2018 GMT
*  expire date: Feb  7 12:00:00 2020 GMT
*  subjectAltName: host "hq-atac-h9.s3.amazonaws.com" matched cert's "*.s3.amazonaws.com"
*  issuer: C=US; O=DigiCert Inc; OU=www.digicert.com; CN=DigiCert Baltimore CA-2 G2
*  SSL certificate verify ok.
> GET / HTTP/1.1
Host: hq-atac-h9.s3.amazonaws.com
User-Agent: s3fs/1.85 (commit hash a07a533; OpenSSL)
Accept: */*
Authorization: AWS AKIAR6CVCH2CE5G3AC5X:F7FQnfCbTudq92PZC0IEYK5cN40=
Date: Mon, 04 Nov 2019 04:09:50 GMT

< HTTP/1.1 403 Forbidden
< x-amz-request-id: 37A9B36446E6ED83
< x-amz-id-2: 076rKLIWIbpPL+1n0Ya+UdkhpSijk0ijn84C9drCKrGD+fjO+qISeie8hlIB6PFLvyZLVwHvqiU=
< Content-Type: application/xml
< Transfer-Encoding: chunked
< Date: Mon, 04 Nov 2019 04:09:52 GMT
< Server: AmazonS3
< 
* Connection #1 to host hq-atac-h9.s3.amazonaws.com left intact
[ERR] curl.cpp:RequestPerform(2428): HTTP response code 403, returning EPERM. Body Text: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>InvalidAccessKeyId</Code><Message>The AWS Access Key Id you provided does not exist in our records.</Message><AWSAccessKeyId>AKIAR6CVCH2CE5G3AC5X</AWSAccessKeyId><RequestId>37A9B36446E6ED83</RequestId><HostId>076rKLIWIbpPL+1n0Ya+UdkhpSijk0ijn84C9drCKrGD+fjO+qISeie8hlIB6PFLvyZLVwHvqiU=</HostId></Error>
[ERR] curl.cpp:CheckBucket(3423): Check bucket failed, S3 response: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>InvalidAccessKeyId</Code><Message>The AWS Access Key Id you provided does not exist in our records.</Message><AWSAccessKeyId>AKIAR6CVCH2CE5G3AC5X</AWSAccessKeyId><RequestId>37A9B36446E6ED83</RequestId><HostId>076rKLIWIbpPL+1n0Ya+UdkhpSijk0ijn84C9drCKrGD+fjO+qISeie8hlIB6PFLvyZLVwHvqiU=</HostId></Error>
[CRT] s3fs.cpp:s3fs_check_service(3881): invalid credentials(host=https://s3.amazonaws.com) - result of checking service.
[ERR] s3fs.cpp:s3fs_exit_fuseloop(3466): Exiting FUSE event loop due to errors

[INF] s3fs.cpp:s3fs_destroy(3529): destroy

s3fs seems to be forced to us-east-1 instead of recognizing the cn-northwest-1 region, ignoring the parameter `-o endpoint=cn-northwest-1`.

This confuses me a lot; thanks for any help~

Regards,
Si
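One possible reading of the log (a hedged note, not from the original report): the `InvalidAccessKeyId` response comes from `s3.amazonaws.com`, but the AWS China regions form a separate partition whose endpoints end in `.amazonaws.com.cn` and whose credentials are not valid in the global partition. A minimal sketch of confirming the bucket region and building the regional URL, assuming the AWS CLI is available on the instance:

```shell
# The bucket's real region can be confirmed before choosing an endpoint, e.g.:
#   aws s3api get-bucket-location --bucket hq-atac-h9
# China regions live in a separate partition (aws-cn): their endpoints end in
# .amazonaws.com.cn and their access keys do not exist at s3.amazonaws.com.
BUCKET="hq-atac-h9"
REGION="cn-northwest-1"
URL="https://s3.${REGION}.amazonaws.com.cn"   # note the .cn suffix

# The mount command would then become (shown here rather than executed):
echo "s3fs $BUCKET /home/ec2-user/s3mnt -o url=$URL -o endpoint=$REGION \
  -o passwd_file=/home/ec2-user/.passwd-s3fs"
```

Without an explicit `-o url`, s3fs contacts the default global endpoint, which is consistent with the "expecting 'us-east-1'" error in the log above.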

kerem 2026-03-04 01:47:22 +03:00
  • closed this issue
  • added the "need info" label

@bramevo commented on GitHub (Feb 20, 2020):

Try setting both -o url="https://s3-eu-west-1.amazonaws.com" and -o endpoint="eu-west-1"
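As a sketch, the suggested mount would look like the following; the bucket name and mount point are placeholders:

```shell
# Pass both the regional URL and a matching endpoint so the SigV4 signing
# region agrees with the host s3fs actually contacts.
REGION="eu-west-1"
CMD="s3fs my-bucket /mnt/s3 -o url=https://s3-${REGION}.amazonaws.com -o endpoint=${REGION}"

# Shown rather than executed:
echo "$CMD"
```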


@web-engineer commented on GitHub (Apr 29, 2020):

We are having the same problem, or something very similar:

url=http://s3-eu-west-2.amazonaws.com

HTTP response code 400

The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.

I am trying to find info on a fix, but answers seem to be centred around using a different endpoint that supports the older protocol. However, that seems flawed to me, since this is likely to become a problem for everyone eventually.

Has this been fixed in the repo? Or will there be another release soon? Does anyone know of any way to patch this?


@web-engineer commented on GitHub (Apr 29, 2020):

To add some more detail: I have pulled from git and tried this:

[root@id107616 s3fs-fuse]# /usr/local/bin/s3fs --version
Amazon Simple Storage Service File System V1.86 (commit:005a684) with OpenSSL
Copyright (C) 2010 Randy Rizun <rrizun@gmail.com>
License GPL2: GNU GPL version 2 <https://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

The result is the same when using the region above.

Also updated curl to the latest version (as curl was mentioned in the debug output), but that made no difference.

AWS is insisting on the updated V4 authentication protocol, so what is the state of play here?
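A hedged reading of the 400: the "Please use AWS4-HMAC-SHA256" rejection appears when a request ends up signed with signature version 2, and the thread's logs show s3fs falls back to V2 only after its V4 attempt fails, typically because the signing region (from `-o endpoint`, which defaults to `us-east-1`) disagrees with the regional host in `-o url`. A sketch keeping the two in sync, with placeholder bucket and mount point:

```shell
# Keep the signing region consistent with the regional host so the SigV4
# attempt succeeds and s3fs never falls back to the rejected signature v2.
REGION="eu-west-2"
URL="https://s3.${REGION}.amazonaws.com"
echo "s3fs my-bucket /mnt/s3 -o url=$URL -o endpoint=$REGION"
```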


@orangeSi commented on GitHub (Apr 30, 2020):

> We are having the same problem or very similar -
>
> url=http://s3-eu-west-2.amazonaws.com
>
> HTTP response code 400
>
> The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.
>
> Trying to find info on a fix but answers seem to be centred around the use a different endpoint that supports the older protocol. However - this seems to me to be flawed since this is likely to be a problem for everyone eventually.
>
> Has this been fixed in the repo? Or will there be another release soon? Does anyone know of any way to patch this?

I still have not resolved this yet.


@lpasselin commented on GitHub (Jul 17, 2020):

I had a similar issue. It was solved by removing endpoint argument and upgrading from 1.80 to 1.86.


@orangeSi commented on GitHub (Jul 27, 2020):

> I had a similar issue. It was solved by removing endpoint argument and upgrading from 1.80 to 1.86.

Thanks for the reply! Because I am not in that situation at the moment, I have not had a chance to try your solution to resolve the error.


@caleuanhopkins commented on GitHub (Aug 6, 2020):

> I had a similar issue. It was solved by removing endpoint argument and upgrading from 1.80 to 1.86.

Done this, but I still have a 403 coming back. When using an IAM role, does a passwd_file still have to be supplied?
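On the IAM-role question: a `passwd_file` should not be needed; s3fs can fetch temporary credentials from the instance metadata service via its documented `iam_role` option. A hedged sketch, where the bucket, mount point, and region are placeholders:

```shell
# With -o iam_role the passwd_file option is not used at all; s3fs obtains
# temporary credentials from the EC2 instance metadata service.
# iam_role=auto lets s3fs discover the role attached to the instance.
CMD="s3fs my-bucket /mnt/s3 -o iam_role=auto -o endpoint=eu-west-1"
echo "$CMD"   # shown rather than executed
```

A lingering 403 with a role usually points at the role's policy (or a bucket policy) rather than at s3fs itself.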


@gaul commented on GitHub (Oct 10, 2020):

What is the current status of this bug? Can someone test again with the latest version 1.87 and the suggested flags -o url="https://s3-eu-west-1.amazonaws.com" -o endpoint="eu-west-1"?


@kontrollanten commented on GitHub (Nov 6, 2020):

@gaul Since I upgraded from 1.86 to 1.87 I got it working. Thanks!

Ubuntu 20.04
kernel version: 5.4.0-52-generic
fuse version: 2.9.9-3


@gaul commented on GitHub (Nov 6, 2020):

Thanks for testing!


@vjmedina commented on GitHub (Feb 3, 2023):

I'm having the same issue connecting to buckets in the eu-west-1 region. Even using the endpoint and/or url flags, I still get errors. Policies are fine, as I can list and download files using the AWS CLI, and I also get successful results using the IAM policy simulator.

Right now I'm copying files over to an EC2 instance using the AWS CLI, but I'm using s3fs as part of an automatic script that requires mounting s3 folders inside the EC2 instance, so copying the files every time is not feasible as it takes a lot of time and disk space.

To provide a little more context, I'm using s3fs-fuse version 1.91 (built from source) on Ubuntu 18.04. Previously I tried with the pre-built s3fs version 1.82 with the same results.

I'm trying to access buckets on two different accounts, A and B (both in the same region, eu-west-1), from an EC2 instance in account A.

When I try to mount the bucket in account A, the process seems to finish (HTTP response code 200), but it just never releases the lock on the directory, so I can't do anything with it:

~$ sudo s3fs -f -o dbglevel=info -o endpoint=eu-west-1 -o url=https://s3.eu-west-1.amazonaws.com ************************ /home/ubuntu/workspace/s3/
2023-02-03T12:10:08.550Z [INF] s3fs version 1.91(38e8a83) : s3fs -f -o dbglevel=info -o endpoint=eu-west-1 -o url=https://s3.eu-west-1.amazonaws.com ************************ /home/ubuntu/workspace/s3/
2023-02-03T12:10:08.551Z [CRT] s3fs_logger.cpp:LowSetLogLevel(239): change debug level from [CRT] to [INF] 
2023-02-03T12:10:08.551Z [INF]     s3fs.cpp:set_mountpoint_attribute(4192): PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40755)
2023-02-03T12:10:08.552Z [INF] curl.cpp:InitMimeType(428): Loaded mime information from /etc/mime.types
2023-02-03T12:10:08.552Z [INF] fdcache_stat.cpp:CheckCacheFileStatTopDir(78): The path to cache top dir is empty, thus not need to check permission.
2023-02-03T12:10:08.552Z [INF] threadpoolman.cpp:StopThreads(195): Any threads are running now, then nothing to do.
2023-02-03T12:10:08.552Z [INF]       threadpoolman.cpp:Worker(76): Start worker thread in ThreadPoolMan.
2023-02-03T12:10:08.552Z [INF]       threadpoolman.cpp:Worker(76): Start worker thread in ThreadPoolMan.
2023-02-03T12:10:08.552Z [INF]       threadpoolman.cpp:Worker(76): Start worker thread in ThreadPoolMan.
2023-02-03T12:10:08.552Z [INF]       threadpoolman.cpp:Worker(76): Start worker thread in ThreadPoolMan.
2023-02-03T12:10:08.552Z [INF]       threadpoolman.cpp:Worker(76): Start worker thread in ThreadPoolMan.
2023-02-03T12:10:08.553Z [CRT] s3fs_cred.cpp:VersionS3fsCredential(60): Check why built-in function was called, the external credential library must have VersionS3fsCredential function.
2023-02-03T12:10:08.553Z [INF] s3fs.cpp:s3fs_init(3914): init v1.91(commit:38e8a83) with OpenSSL, credential-library(built-in)
2023-02-03T12:10:08.553Z [INF] s3fs.cpp:s3fs_check_service(4058): check services.
2023-02-03T12:10:08.553Z [INF]       curl.cpp:CheckBucket(3522): check a bucket.
2023-02-03T12:10:08.553Z [INF]       curl_util.cpp:prepare_url(257): URL is https://s3.eu-west-1.amazonaws.com/************************/
2023-02-03T12:10:08.553Z [INF]       curl_util.cpp:prepare_url(290): URL changed is https://************************.s3.eu-west-1.amazonaws.com/
2023-02-03T12:10:08.553Z [INF]       curl.cpp:insertV4Headers(2741): computing signature [GET] [/] [] []
2023-02-03T12:10:08.553Z [INF]       curl_util.cpp:url_to_host(334): url is https://s3.eu-west-1.amazonaws.com
2023-02-03T12:10:08.794Z [INF]       curl.cpp:RequestPerform(2369): HTTP response code 200
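A hedged note on the "lock": `-f` runs s3fs in the foreground for debugging, so the launching shell stays occupied while the mount itself is live and usable from another terminal. A background-mount sketch with a placeholder bucket name:

```shell
# Without -f, s3fs daemonizes and returns the shell immediately; the mount
# point is then usable in place.
MNT="/home/ubuntu/workspace/s3"
echo "s3fs my-bucket $MNT -o endpoint=eu-west-1 -o url=https://s3.eu-west-1.amazonaws.com"
echo "fusermount -u $MNT   # unmount when finished"
```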

When I try to access the bucket in account B, I get the following error and then s3fs exits:

$> sudo s3fs -f -o dbglevel=info -o endpoint=eu-west-1 -o url=https://s3.eu-west-1.amazonaws.com **************************** /home/ubuntu/workspace/s3/
2023-02-03T12:08:10.078Z [INF] s3fs version 1.91(38e8a83) : s3fs -f -o dbglevel=info -o endpoint=eu-west-1 -o url=https://s3.eu-west-1.amazonaws.com **************************** /home/ubuntu/workspace/s3/
2023-02-03T12:08:10.079Z [CRT] s3fs_logger.cpp:LowSetLogLevel(239): change debug level from [CRT] to [INF] 
2023-02-03T12:08:10.079Z [INF]     s3fs.cpp:set_mountpoint_attribute(4192): PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40755)
2023-02-03T12:08:10.080Z [INF] curl.cpp:InitMimeType(428): Loaded mime information from /etc/mime.types
2023-02-03T12:08:10.080Z [INF] fdcache_stat.cpp:CheckCacheFileStatTopDir(78): The path to cache top dir is empty, thus not need to check permission.
2023-02-03T12:08:10.080Z [INF] threadpoolman.cpp:StopThreads(195): Any threads are running now, then nothing to do.
2023-02-03T12:08:10.080Z [INF]       threadpoolman.cpp:Worker(76): Start worker thread in ThreadPoolMan.
2023-02-03T12:08:10.080Z [INF]       threadpoolman.cpp:Worker(76): Start worker thread in ThreadPoolMan.
2023-02-03T12:08:10.080Z [INF]       threadpoolman.cpp:Worker(76): Start worker thread in ThreadPoolMan.
2023-02-03T12:08:10.080Z [INF]       threadpoolman.cpp:Worker(76): Start worker thread in ThreadPoolMan.
2023-02-03T12:08:10.080Z [INF]       threadpoolman.cpp:Worker(76): Start worker thread in ThreadPoolMan.
2023-02-03T12:08:10.082Z [CRT] s3fs_cred.cpp:VersionS3fsCredential(60): Check why built-in function was called, the external credential library must have VersionS3fsCredential function.
2023-02-03T12:08:10.082Z [INF] s3fs.cpp:s3fs_init(3914): init v1.91(commit:38e8a83) with OpenSSL, credential-library(built-in)
2023-02-03T12:08:10.082Z [INF] s3fs.cpp:s3fs_check_service(4058): check services.
2023-02-03T12:08:10.082Z [INF]       curl.cpp:CheckBucket(3522): check a bucket.
2023-02-03T12:08:10.082Z [INF]       curl_util.cpp:prepare_url(257): URL is https://s3.eu-west-1.amazonaws.com/****************************/
2023-02-03T12:08:10.082Z [INF]       curl_util.cpp:prepare_url(290): URL changed is https://****************************.s3.eu-west-1.amazonaws.com/
2023-02-03T12:08:10.082Z [INF]       curl.cpp:insertV4Headers(2741): computing signature [GET] [/] [] []
<!-- gh-comment-id:1415797001 -->
@vjmedina commented on GitHub (Feb 3, 2023):

I'm having the same issue connecting to buckets in the eu-west-1 region. **Even with the `endpoint` and/or `url` flags I still get errors.** The policies are fine: I can list and download files with the AWS CLI, and the IAM policy simulator also reports success. Right now I'm copying files over to an EC2 instance with the AWS CLI, but I use s3fs as part of an automated script that mounts S3 folders inside the EC2 instance, so copying the files every time is not feasible; it takes too much time and disk space.

To provide a little more context: I'm using s3fs-fuse 1.91 (built from source) on Ubuntu 18.04. I previously tried the pre-built s3fs 1.82 with the same results. I'm trying to access buckets in two different accounts, **A and B** (both in eu-west-1), from an EC2 instance in account A.

When I try to mount the **bucket in account A**, the check seems to succeed (HTTP response code 200), but s3fs never releases the lock on the directory, so I can't do anything with it:

```
~$ sudo s3fs -f -o dbglevel=info -o endpoint=eu-west-1 -o url=https://s3.eu-west-1.amazonaws.com ************************ /home/ubuntu/workspace/s3/
2023-02-03T12:10:08.550Z [INF] s3fs version 1.91(38e8a83) : s3fs -f -o dbglevel=info -o endpoint=eu-west-1 -o url=https://s3.eu-west-1.amazonaws.com ************************ /home/ubuntu/workspace/s3/
2023-02-03T12:10:08.551Z [CRT] s3fs_logger.cpp:LowSetLogLevel(239): change debug level from [CRT] to [INF]
2023-02-03T12:10:08.551Z [INF] s3fs.cpp:set_mountpoint_attribute(4192): PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40755)
2023-02-03T12:10:08.552Z [INF] curl.cpp:InitMimeType(428): Loaded mime information from /etc/mime.types
2023-02-03T12:10:08.552Z [INF] fdcache_stat.cpp:CheckCacheFileStatTopDir(78): The path to cache top dir is empty, thus not need to check permission.
2023-02-03T12:10:08.552Z [INF] threadpoolman.cpp:StopThreads(195): Any threads are running now, then nothing to do.
2023-02-03T12:10:08.552Z [INF] threadpoolman.cpp:Worker(76): Start worker thread in ThreadPoolMan.
2023-02-03T12:10:08.552Z [INF] threadpoolman.cpp:Worker(76): Start worker thread in ThreadPoolMan.
2023-02-03T12:10:08.552Z [INF] threadpoolman.cpp:Worker(76): Start worker thread in ThreadPoolMan.
2023-02-03T12:10:08.552Z [INF] threadpoolman.cpp:Worker(76): Start worker thread in ThreadPoolMan.
2023-02-03T12:10:08.552Z [INF] threadpoolman.cpp:Worker(76): Start worker thread in ThreadPoolMan.
2023-02-03T12:10:08.553Z [CRT] s3fs_cred.cpp:VersionS3fsCredential(60): Check why built-in function was called, the external credential library must have VersionS3fsCredential function.
2023-02-03T12:10:08.553Z [INF] s3fs.cpp:s3fs_init(3914): init v1.91(commit:38e8a83) with OpenSSL, credential-library(built-in)
2023-02-03T12:10:08.553Z [INF] s3fs.cpp:s3fs_check_service(4058): check services.
2023-02-03T12:10:08.553Z [INF] curl.cpp:CheckBucket(3522): check a bucket.
2023-02-03T12:10:08.553Z [INF] curl_util.cpp:prepare_url(257): URL is https://s3.eu-west-1.amazonaws.com/************************/
2023-02-03T12:10:08.553Z [INF] curl_util.cpp:prepare_url(290): URL changed is https://************************.s3.eu-west-1.amazonaws.com/
2023-02-03T12:10:08.553Z [INF] curl.cpp:insertV4Headers(2741): computing signature [GET] [/] [] []
2023-02-03T12:10:08.553Z [INF] curl_util.cpp:url_to_host(334): url is https://s3.eu-west-1.amazonaws.com
2023-02-03T12:10:08.794Z [INF] curl.cpp:RequestPerform(2369): HTTP response code 200
```

When I try to mount the **bucket in account B**, I get the following error and then s3fs exits:

```
$> sudo s3fs -f -o dbglevel=info -o endpoint=eu-west-1 -o url=https://s3.eu-west-1.amazonaws.com **************************** /home/ubuntu/workspace/s3/
2023-02-03T12:08:10.078Z [INF] s3fs version 1.91(38e8a83) : s3fs -f -o dbglevel=info -o endpoint=eu-west-1 -o url=https://s3.eu-west-1.amazonaws.com **************************** /home/ubuntu/workspace/s3/
2023-02-03T12:08:10.079Z [CRT] s3fs_logger.cpp:LowSetLogLevel(239): change debug level from [CRT] to [INF]
2023-02-03T12:08:10.079Z [INF] s3fs.cpp:set_mountpoint_attribute(4192): PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40755)
2023-02-03T12:08:10.080Z [INF] curl.cpp:InitMimeType(428): Loaded mime information from /etc/mime.types
2023-02-03T12:08:10.080Z [INF] fdcache_stat.cpp:CheckCacheFileStatTopDir(78): The path to cache top dir is empty, thus not need to check permission.
2023-02-03T12:08:10.080Z [INF] threadpoolman.cpp:StopThreads(195): Any threads are running now, then nothing to do.
2023-02-03T12:08:10.080Z [INF] threadpoolman.cpp:Worker(76): Start worker thread in ThreadPoolMan.
2023-02-03T12:08:10.080Z [INF] threadpoolman.cpp:Worker(76): Start worker thread in ThreadPoolMan.
2023-02-03T12:08:10.080Z [INF] threadpoolman.cpp:Worker(76): Start worker thread in ThreadPoolMan.
2023-02-03T12:08:10.080Z [INF] threadpoolman.cpp:Worker(76): Start worker thread in ThreadPoolMan.
2023-02-03T12:08:10.080Z [INF] threadpoolman.cpp:Worker(76): Start worker thread in ThreadPoolMan.
2023-02-03T12:08:10.082Z [CRT] s3fs_cred.cpp:VersionS3fsCredential(60): Check why built-in function was called, the external credential library must have VersionS3fsCredential function.
2023-02-03T12:08:10.082Z [INF] s3fs.cpp:s3fs_init(3914): init v1.91(commit:38e8a83) with OpenSSL, credential-library(built-in)
2023-02-03T12:08:10.082Z [INF] s3fs.cpp:s3fs_check_service(4058): check services.
2023-02-03T12:08:10.082Z [INF] curl.cpp:CheckBucket(3522): check a bucket.
2023-02-03T12:08:10.082Z [INF] curl_util.cpp:prepare_url(257): URL is https://s3.eu-west-1.amazonaws.com/****************************/
2023-02-03T12:08:10.082Z [INF] curl_util.cpp:prepare_url(290): URL changed is https://****************************.s3.eu-west-1.amazonaws.com/
2023-02-03T12:08:10.082Z [INF] curl.cpp:insertV4Headers(2741): computing signature [GET] [/] [] []
2023-02-03T12:08:10.082Z [INF] curl_util.cpp:url_to_host(334): url is https://s3.eu-west-1.amazonaws.com
2023-02-03T12:08:10.120Z [ERR] curl.cpp:RequestPerform(2416): HTTP response code 403, returning EPERM. Body Text: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>BV90R1YYBCWF4A9Z</RequestId><HostId>g2CMhfEnJZRCSh1XLK0uTwqFzh+xE8oj0btozxXW9hClT+EBn9+QRUr/mSsw7cFHqRAgilV1Ah4=</HostId></Error>
2023-02-03T12:08:10.120Z [ERR] curl.cpp:CheckBucket(3569): Check bucket failed, S3 response: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>BV90R1YYBCWF4A9Z</RequestId><HostId>g2CMhfEnJZRCSh1XLK0uTwqFzh+xE8oj0btozxXW9hClT+EBn9+QRUr/mSsw7cFHqRAgilV1Ah4=</HostId></Error>
2023-02-03T12:08:10.120Z [CRT] s3fs.cpp:s3fs_check_service(4129): Failed to connect by sigv4, so retry to connect by signature version 2.
2023-02-03T12:08:10.120Z [INF] curl.cpp:CheckBucket(3522): check a bucket.
2023-02-03T12:08:10.120Z [INF] curl_util.cpp:prepare_url(257): URL is https://s3.eu-west-1.amazonaws.com/****************************/
2023-02-03T12:08:10.120Z [INF] curl_util.cpp:prepare_url(290): URL changed is https://****************************.s3.eu-west-1.amazonaws.com/
2023-02-03T12:08:10.129Z [ERR] curl.cpp:RequestPerform(2416): HTTP response code 403, returning EPERM. Body Text: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>BV974280VQ20SSHB</RequestId><HostId>KiwPIzV/LYCfTJHD5duZwE2FHJJETF3SGWiDlx+nRT8zxsMz1PHPI5wB5Gt3sLnwDKziEGYf8VQ=</HostId></Error>
2023-02-03T12:08:10.129Z [ERR] curl.cpp:CheckBucket(3569): Check bucket failed, S3 response: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>BV974280VQ20SSHB</RequestId><HostId>KiwPIzV/LYCfTJHD5duZwE2FHJJETF3SGWiDlx+nRT8zxsMz1PHPI5wB5Gt3sLnwDKziEGYf8VQ=</HostId></Error>
2023-02-03T12:08:10.129Z [CRT] s3fs.cpp:s3fs_check_service(4151): invalid credentials(host=https://s3.eu-west-1.amazonaws.com) - result of checking service.
2023-02-03T12:08:10.129Z [ERR] s3fs.cpp:s3fs_exit_fuseloop(3904): Exiting FUSE event loop due to errors
2023-02-03T12:08:10.129Z [INF] s3fs.cpp:s3fs_destroy(3958): destroy
```
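A frequent cause of this "invalid credentials" symptom when the AWS CLI works is that s3fs does not read the CLI's `~/.aws/credentials`: it takes its keys from the passwd file given via `-o passwd_file` (or from an IAM role via `-o iam_role`), in `ACCESS_KEY_ID:SECRET_ACCESS_KEY` format, and it refuses a file that is readable by group or others. A minimal sketch of creating such a file (the key values below are placeholders, not real credentials):

```shell
# Write an s3fs passwd file in ACCESS_KEY_ID:SECRET_ACCESS_KEY format and
# restrict its permissions; s3fs rejects files readable by group/others.
# Both key values here are placeholders.
PASSWD_FILE="${HOME}/.passwd-s3fs"
printf '%s:%s\n' 'AKIAEXAMPLEKEYID' 'exampleSecretAccessKey' > "${PASSWD_FILE}"
chmod 600 "${PASSWD_FILE}"
stat -c '%a' "${PASSWD_FILE}"   # prints: 600
```

If the CLI succeeds with one key pair while s3fs fails, it is worth double-checking that the passwd file contains that same pair, and that the `endpoint`/`url` options both point at the bucket's actual region.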
<!-- gh-comment-id:2098497576 -->
@c-imp11 commented on GitHub (May 7, 2024):

I'm having the same issue on Amazon Linux 2023, connecting to a bucket in eu-central-1. For context, I have an existing bucket in us-east-1 and s3fs works fine with that one. I'm on version 1.94.
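Several of the reports above boil down to a region mismatch: without a matching `endpoint`, s3fs falls back to the default `https://s3.amazonaws.com` (us-east-1) host. Passing both `endpoint` and a region-specific `url` explicitly avoids that. A small sketch that assembles the mount command for a given region (the bucket name and mount point are placeholders):

```shell
# Build a mount command whose url option matches the bucket's region.
# BUCKET and MOUNTPOINT are placeholders; REGION must be the region the
# bucket actually lives in (e.g. as reported in the s3fs error message).
REGION='eu-central-1'
BUCKET='my-bucket'
MOUNTPOINT="${HOME}/s3mnt"
URL="https://s3.${REGION}.amazonaws.com"
echo "s3fs ${BUCKET} ${MOUNTPOINT} -o endpoint=${REGION} -o url=${URL}"
```

The printed command uses the standard regional endpoint form `s3.<region>.amazonaws.com`, the same one visible in the logs above.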