[GH-ISSUE #693] Bucket name includes dots + use_path_request_style + eu-central-1; Gives 301 Moved Permanently #388

Open
opened 2026-03-04 01:45:04 +03:00 by kerem · 20 comments
Owner

Originally created by @catavan on GitHub (Nov 27, 2017).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/693

Version of s3fs being used

Amazon Simple Storage Service File System V1.82(commit:566961c) with OpenSSL

Version of fuse being used

2.9.7

System information

4.13.0-17-generic

Distro

Ubuntu 17.10

s3fs command line used

s3fs my.working.bucket.name /home/me/s3 -o use_path_request_style -o endpoint=eu-central-1 -o passwd_file=/etc/passwd-s3fs -o curldbg -f

s3fs syslog messages (grep s3fs /var/log/syslog, or s3fs outputs)

[INF] s3fs.cpp:s3fs_init(3371): init v1.82(commit:566961c) with OpenSSL

  • Trying 54.231.98.203...
  • TCP_NODELAY set
  • Connected to s3.amazonaws.com (54.231.98.203) port 443 (#0)
  • found 148 certificates in /etc/ssl/certs/ca-certificates.crt
  • found 595 certificates in /etc/ssl/certs
  • ALPN, offering http/1.1
  • SSL connection using TLS1.2 / ECDHE_RSA_AES_128_GCM_SHA256
  • server certificate verification OK
  • server certificate status verification SKIPPED
  • common name: s3.amazonaws.com (matched)
  • server certificate expiration date OK
  • server certificate activation date OK
  • certificate public key: RSA
  • certificate version: #3
  • subject: C=US,ST=Washington,L=Seattle,O=Amazon.com Inc.,CN=s3.amazonaws.com
  • start date: Tue, 26 Sep 2017 00:00:00 GMT
  • expire date: Thu, 20 Sep 2018 12:00:00 GMT
  • issuer: C=US,O=DigiCert Inc,OU=www.digicert.com,CN=DigiCert Baltimore CA-2 G2
  • compression: NULL
  • ALPN, server did not agree to a protocol

GET /my.working.bucket.name/ HTTP/1.1
host: s3.amazonaws.com
User-Agent: s3fs/1.82 (commit hash 566961c; OpenSSL)
Accept: */*
Authorization: AWS4-HMAC-SHA256 Credential=AKIAILS2DKV4WVE2MUBA/20171127/eu-central-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=364abd3e5b561a9a71a35f28264a5700f6e546d66f96b7648fa9102d0600b35c
x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date: 20171127T155745Z

< HTTP/1.1 301 Moved Permanently
< x-amz-bucket-region: eu-central-1
< x-amz-request-id: ACC928843271B61C
< x-amz-id-2: /zO0/tq1K1JyaIAoJmWmg0A94Iud8HAQMRIRJ/S/+N/3H4xV9m+ql2MqdxCSG4N3WM4qirwmBQQ=
< Content-Type: application/xml
< Transfer-Encoding: chunked
< Date: Mon, 27 Nov 2017 15:57:45 GMT
< Server: AmazonS3
<

  • Connection #0 to host s3.amazonaws.com left intact

Details about issue

Hi guys

I have a bucket in eu-central-1 with dots in the bucket name, and after some investigation I found the FAQ page (https://github.com/s3fs-fuse/s3fs-fuse/wiki/FAQ#q-https-connecting-failed-if-bucket-name-includes-dot-)

Tried to mount using "-o use_path_request_style -o endpoint=eu-central-1":
s3fs my.working.bucket.name /home/me/s3 -o use_path_request_style -o endpoint=eu-central-1 -o passwd_file=/etc/passwd-s3fs -o curldbg -f

It seems I get a 301 in the response and the mounted folder is "empty":

HTTP/1.1 301 Moved Permanently  
x-amz-bucket-region: eu-central-1

Am I doing something wrong with the options? Some help would be really appreciated!
THX!
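For context on the 301: with path-style addressing the bucket name lives in the URL path rather than the hostname, so the hostname carries no region hint, and the global endpoint s3.amazonaws.com only answers directly for us-east-1 buckets. A minimal sketch of the region-specific URL construction (the helper name is hypothetical, not s3fs code):

```python
def path_style_url(bucket, region, key=""):
    """Build a region-specific path-style S3 URL.

    Path-style requests sent to the global endpoint (s3.amazonaws.com)
    are only answered directly for us-east-1 buckets; buckets in other
    regions get 301 Moved Permanently, as seen in the log above.
    """
    host = "s3.amazonaws.com" if region == "us-east-1" else "s3.%s.amazonaws.com" % region
    return "https://%s/%s/%s" % (host, bucket, key)
```

Pointing s3fs at the regional host via `-o url=...` (instead of only setting `-o endpoint=...`) is essentially this fix, and it is what the later comments in this thread converge on.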

Author
Owner

@haydenyoung commented on GitHub (Jan 5, 2018):

Can confirm the same problem for eu-west-2 region (London).

HTTP/1.1 301 Moved Permanently

Using

s3fs bucket /media/bucket -o passwd_file=/etc/s3fs-pwd,use_path_request_style,endpoint=eu-west-2,curldbg -ouid=1001,gid=1001,allow_other,mp_umask=002 -o sigv2 -o nocopyapi

The problem causes any file I try to edit with vim to be read-only. When I try to exit the file, vim seems to get stuck in some kind of infinite loop and I have to kill it.

Author
Owner

@haydenyoung commented on GitHub (Jan 5, 2018):

Can confirm that changing my bucket from my.bucket.for.mounting to my-bucket-for-mounting fixes the problem, so it appears the issue is related to "use_path_request_style"?

Author
Owner

@Kusmarius commented on GitHub (Jan 22, 2018):

It has nothing to do with dots.
My bucket name is formatted like x-xxx-xxx and I am still getting 301 Moved Permanently.

Author
Owner

@vash1486 commented on GitHub (Jan 29, 2018):

same problem here with eu-west-1
s3fs my.bucket.name /mnt/s3/ -o passwd_file=/etc/passwd-s3fs -o dbglevel=info -f -o curldbg -o use_path_request_style -o endpoint=eu-west-1

P.S. OS and version are the same as @catavan's.

Author
Owner

@rymesaint commented on GitHub (Feb 19, 2018):

Try this:
s3fs bucketName /mountpoint -o passwd_file=/etc/passwd-s3fs -o use_path_request_style -o url=http://s3-<your-endpoint>.amazonaws.com

Author
Owner

@vash1486 commented on GitHub (Feb 19, 2018):

Hi @rymesaint! Unfortunately I have the same problem with your suggestion :(
s3fs bucket.name /mnt/s3 -o passwd_file=/etc/passwd-s3fs -o use_path_request_style -o url=http://s3-eu-west-1.amazonaws.com -o dbglevel=info -f -o curldbg

Author
Owner

@rymesaint commented on GitHub (Feb 20, 2018):

@vash1486 What's your log error? Can you share it?

Author
Owner

@vash1486 commented on GitHub (Feb 20, 2018):

Oops, my mistake! Trying again, it worked fine! Thanks @rymesaint 👍 :)

Author
Owner

@gaul commented on GitHub (Jan 16, 2019):

s3fs handles this properly for me:

$ aws s3 mb s3://gaul-eu-west-1 --region eu-west-1
...

$ s3fs gaul-eu-west-1 mnt/ -f -d
...
[INF] s3fs.cpp:s3fs_check_service(3738): check services.
[INF]       curl.cpp:CheckBucket(3094): check a bucket.
[INF]       curl.cpp:prepare_url(4325): URL is https://s3.amazonaws.com/gaul-eu-west-1/
[INF]       curl.cpp:prepare_url(4357): URL changed is https://gaul-eu-west-1.s3.amazonaws.com/
[INF]       curl.cpp:insertV4Headers(2422): computing signature [GET] [/] [] []
[INF]       curl.cpp:url_to_host(102): url is https://s3.amazonaws.com
[ERR] curl.cpp:RequestPerform(2088): HTTP response code 400, returning EIO. Body Text: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AuthorizationHeaderMalformed</Code><Message>The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'eu-west-1'</Message><Region>eu-west-1</Region><RequestId>58298A2285267CFD</RequestId><HostId>9+SP7BYjtpbi0XdoTlRkt0qKCXPYr34wq7UPf1UyfFvXucJPDZ6APh/8vfnc7FyKvPiTM9V3sWM=</HostId></Error>
[ERR] curl.cpp:CheckBucket(3122): Check bucket failed, S3 response: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AuthorizationHeaderMalformed</Code><Message>The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'eu-west-1'</Message><Region>eu-west-1</Region><RequestId>58298A2285267CFD</RequestId><HostId>9+SP7BYjtpbi0XdoTlRkt0qKCXPYr34wq7UPf1UyfFvXucJPDZ6APh/8vfnc7FyKvPiTM9V3sWM=</HostId></Error>
[CRT] s3fs.cpp:s3fs_check_service(3759): Could not connect wrong region us-east-1, so retry to connect region eu-west-1.

Can you provide exact steps to reproduce this issue?

Author
Owner

@gaul commented on GitHub (Jan 16, 2019):

Similarly with -o use_path_request_style:

[INF] s3fs.cpp:s3fs_check_service(3738): check services.
[INF]       curl.cpp:CheckBucket(3094): check a bucket.
[DBG] curl.cpp:GetHandler(286): Get handler from pool: 31
[INF]       curl.cpp:prepare_url(4325): URL is https://s3.amazonaws.com/gaul-eu-west-1/
[INF]       curl.cpp:prepare_url(4357): URL changed is https://s3.amazonaws.com/gaul-eu-west-1/
[INF]       curl.cpp:insertV4Headers(2422): computing signature [GET] [/] [] []
[INF]       curl.cpp:url_to_host(102): url is https://s3.amazonaws.com
[DBG] curl.cpp:RequestPerform(2059): connecting to URL https://s3.amazonaws.com/gaul-eu-west-1/
[INF]       curl.cpp:RequestPerform(2076): HTTP response code 301
[DBG] curl.cpp:ReturnHandler(310): Return handler to pool: 31
Author
Owner

@gaul commented on GitHub (Jan 17, 2019):

One problem is that s3fs does not flag the 301 as a failure. CheckBucket interprets AuthorizationHeaderMalformed correctly but also needs to consume PermanentRedirect responses.
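For anyone looking at the CheckBucket change: the PermanentRedirect error body already names the right host in its Endpoint element, so a client can surface it directly. A hedged standalone sketch of extracting it (illustration only, not the actual s3fs C++ code; assumes the standard S3 error XML shape):

```python
import xml.etree.ElementTree as ET

def redirect_endpoint(body):
    """Return the <Endpoint> from an S3 PermanentRedirect error body, or None."""
    root = ET.fromstring(body)
    if root.findtext("Code") != "PermanentRedirect":
        return None
    return root.findtext("Endpoint")
```

A caller could log this endpoint in the 301 error path instead of the generic "region is incorrect" hint.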

Author
Owner

@gaul commented on GitHub (Jan 24, 2019):

@catavan master has some fixes for this; could you test and report back?

Author
Owner

@gaul commented on GitHub (Apr 9, 2019):

Closing due to inactivity. Please reopen if symptoms persist.

Author
Owner

@HeshamMeneisi commented on GitHub (Jun 19, 2020):

This issue still exists:

docker-compose command for the s3fs container (just ubuntu:latest with s3fs latest)

bash -c
            "mkdir -p ~/.aws &&
             echo [default] > ~/.aws/credentials &&
             echo aws_access_key_id=${AWS_ACCESS_KEY_ID} >> ~/.aws/credentials &&
             echo aws_secret_access_key=${AWS_SECRET_ACCESS_KEY} >> ~/.aws/credentials &&
             echo [default] > ~/.aws/config &&
             echo region=${AWS_REGION:-us-west-2} >> ~/.aws/config &&
             echo output=${AWS_OUTPUT:-json} >> ~/.aws/config &&
             set -o xtrace &&
             s3fs -f -d ${AWS_BUCKET}:${AWS_BUCKET_PATH:-/} /data -o use_path_request_style"

Debug Log

s3fs_1  | [ERR] curl.cpp:RequestPerform(2426): The options of url and endpoint may be useful for solving, please try to use both options.
s3fs_1  | [ERR] s3fs.cpp:list_bucket(2632): ListBucketRequest returns with error.
s3fs_1  | [ERR] s3fs.cpp:s3fs_readdir(2561): list_bucket returns error(-5).
s3fs_1  | [INF] s3fs.cpp:s3fs_opendir(2404): [path=/][flags=0x18800]
s3fs_1  | [INF] s3fs.cpp:s3fs_getattr(876): [path=/]
s3fs_1  | [INF] s3fs.cpp:s3fs_readdir(2553): [path=/]
s3fs_1  | [INF]   s3fs.cpp:list_bucket(2596): [path=/]
s3fs_1  | [INF]       curl.cpp:ListBucketRequest(3446): [tpath=/]
s3fs_1  | [INF]       curl.cpp:prepare_url(4703): URL is https://s3.amazonaws.com/bucket.name.co?delimiter=/&max-keys=1000&prefix=
s3fs_1  | [INF]       curl.cpp:prepare_url(4736): URL changed is https://s3.amazonaws.com/bucket.name.co/?delimiter=/&max-keys=1000&prefix=
s3fs_1  | [INF]       curl.cpp:insertV4Headers(2753): computing signature [GET] [/] [delimiter=/&max-keys=1000&prefix=] []
s3fs_1  | [INF]       curl.cpp:url_to_host(99): url is https://s3.amazonaws.com
s3fs_1  | [ERR] curl.cpp:RequestPerform(2425): HTTP response code 301(Moved Permanently: also happens when bucket's region is incorrect), returning EIO. Body Text: <?xml version="1.0" encoding="UTF-8"?>

The only way to get it to work currently is to not use "use_path_request_style" and instead disable HTTPS:

...
s3fs -f -d ${AWS_BUCKET}:${AWS_BUCKET_PATH:-/} /data -o url=http://s3.amazonaws.com"
...
Author
Owner

@gpoole commented on GitHub (Dec 7, 2020):

I had this issue on 1.87 and had success with rymesaint's workaround. Worked for me with HTTPS too, which is probably a good option to try first:

s3fs my.bucket /mount/point -o use_path_request_style,url=https://s3-region.amazonaws.com
Author
Owner

@yaegor commented on GitHub (Mar 25, 2021):

I confirm that for a bucket name with dots in region eu-west-1, the only working command line from those discussed above is "s3fs name.with.dots /data -o passwd_file=${HOME}/.passwd-s3fs -o url=http://s3-eu-west-1.amazonaws.com -o use_path_request_style" (note the use of "http://").
"-o url=https://s3-eu-west-1.amazonaws.com" results in a "subjectAltName does not match name.with.dots.s3.amazonaws.com" TLS connection error.

The FAQ suggests that "s3fs name.with.dots /data -o passwd_file=${HOME}/.passwd-s3fs -o use_path_request_style -o endpoint=eu-west-1" should work, but this reports a 301 response ("The bucket you are attempting to access must be addressed using the specified endpoint", noting the "name.with.dots.amazonaws.com" endpoint) and hangs for me after "curl_handlerpool.cpp:ReturnHandler(110): Pool full: destroy the oldest handler".
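The subjectAltName failure is expected TLS behavior rather than an s3fs bug: the S3 certificate covers *.s3.amazonaws.com, and a DNS wildcard matches exactly one label, so a dotted bucket name pushed into the hostname can never match. A rough sketch of the RFC 6125 leftmost-label rule (simplified; real validation lives in the TLS library):

```python
def wildcard_matches(pattern, hostname):
    """Leftmost-label wildcard match per RFC 6125: '*' covers exactly one DNS label."""
    p_labels = pattern.split(".")
    h_labels = hostname.split(".")
    if len(p_labels) != len(h_labels):
        return False
    return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))
```

This is why dotted bucket names force either path-style requests or plain HTTP when using virtual-hosted addressing.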

Author
Owner

@gaul commented on GitHub (Apr 21, 2021):

I tested this and successfully mounted a bucket via:

s3fs name.with.dots mnt/ -f -d -o use_path_request_style -o url=https://s3.us-west-2.amazonaws.com -o endpoint=us-west-2

AWS actually returns a PermanentRedirect error with Endpoint but it seems tricky to fix this up in the presence of the path request style. Happy to accept PRs to improve the s3fs_check_service logic.

Author
Owner

@gaul commented on GitHub (Apr 25, 2021):

I fixed up the CheckBucket logic, which should give a more useful error. Can someone tell me if this helps?

Author
Owner

@nadyshalaby commented on GitHub (Sep 12, 2022):

I have condensed the final steps for connecting to an S3 bucket into these bash commands:

echo "<AccessKey>:<AccessSecret>" | sudo tee /etc/passwd-s3fs
sudo chmod 600 /etc/passwd-s3fs
sudo mkdir /mnt/<mnt-folder>
sudo s3fs <bucketName> /mnt/<mnt-folder> -f -d -o passwd_file=/etc/passwd-s3fs -o use_path_request_style -o url=https://s3.<s3-region>.amazonaws.com -o endpoint=<s3-region>
Author
Owner

@teveelnicks commented on GitHub (Mar 31, 2025):

A working example for fstab:

s3fs#<bucket.name> /mnt/<bucket.name> fuse _netdev,allow_other,use_path_request_style,url=https://s3.<s3-region>.amazonaws.com,endpoint=<s3-region> 0 0
