[GH-ISSUE #2093] Unable to mount bucket on private s3 server due to url rewriting #1060

Closed
opened 2026-03-04 01:51:02 +03:00 by kerem · 7 comments
Owner

Originally created by @joaoe on GitHub (Jan 11, 2023).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2093

Issue

My company runs a local S3 instance, and there is a bucket I'm trying to mount on my computer.
I've tried several approaches; all failed.

Option 1.

$ flags=(-f -s -d -d -o dbglevel=debug -o passwd_file=~/.ssh/s3-passwd)
$ s3fs my-bucket-name /home/me/fld "${flags[@]}" -o url=https://storage.apps.company.net

This fails with the error

[DBG] curl_handlerpool.cpp:GetHandler(81): Get handler from pool: rest = 31
[INF]       curl_util.cpp:prepare_url(254): URL is https://storage.apps.company.net/my-bucket-name/
[INF]       curl_util.cpp:prepare_url(287): URL changed is https://my-bucket-name.storage.apps.company.net/
[DBG] curl.cpp:RequestPerform(2283): connecting to URL https://my-bucket-name.storage.apps.company.net/
[INF]       curl.cpp:insertV4Headers(2680): computing signature [GET] [/] [] []
[INF]       curl_util.cpp:url_to_host(331): url is https://storage.apps.company.net
[ERR] curl.cpp:RequestPerform(2455): ### CURLE_SSL_CACERT
[INF] curl.cpp:RequestPerform(2515): ### retrying...
[INF]       curl.cpp:RemakeHandle(2107): Retry request. [type=5][url=https://my-bucket-name.storage.apps.company.net/][path=/]
[INF]       curl.cpp:insertV4Headers(2680): computing signature [GET] [/] [] []
[INF]       curl_util.cpp:url_to_host(331): url is https://storage.apps.company.net
[ERR] curl.cpp:RequestPerform(2455): ### CURLE_SSL_CACERT
[ERR] curl.cpp:RequestPerform(2466): curlCode: 60  msg: SSL peer certificate or SSH remote key was not OK
[ERR] curl.cpp:CheckBucket(3421): Check bucket failed, S3 response:
[CRT] s3fs.cpp:s3fs_check_service(3597): unable to connect(host=https://storage.apps.company.net) - result of checking service.

That URL rewrite seems to break everything.
I tried to work around the rewrite as follows:
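For context on what that rewrite does: by default s3fs moves the bucket name into the hostname (virtual-hosted style), so a TLS certificate issued only for the bare endpoint host no longer matches. Here is a minimal sketch of the two addressing styles (illustrative Python, not s3fs internals; the helper name is made up):

```python
from urllib.parse import urlsplit

def s3_request_url(endpoint: str, bucket: str, key: str = "", path_style: bool = False) -> str:
    """Build a request URL for bucket/key against a custom endpoint (illustrative only)."""
    parts = urlsplit(endpoint)
    if path_style:
        # Path-style: the bucket stays in the path, so the endpoint's own certificate suffices.
        return f"{parts.scheme}://{parts.netloc}/{bucket}/{key}"
    # Virtual-hosted style: the bucket becomes a subdomain, which needs a wildcard certificate.
    return f"{parts.scheme}://{bucket}.{parts.netloc}/{key}"

print(s3_request_url("https://storage.apps.company.net", "my-bucket-name"))
# https://my-bucket-name.storage.apps.company.net/
print(s3_request_url("https://storage.apps.company.net", "my-bucket-name", path_style=True))
# https://storage.apps.company.net/my-bucket-name/
```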

Option 2.

$ s3fs storage /home/me/fld "${flags[@]}" -o url=https://apps.company.net -o bucket=my-bucket-name

This fails even harder:

s3fs: unable to access MOUNTPOINT storage: No such file or directory

So I tried a variation

Option 3

$ s3fs storage /home/me/fld "${flags[@]}" -o url=https://apps.company.net/my-bucket-name

This gets further but fails as well:

[DBG] curl_handlerpool.cpp:GetHandler(81): Get handler from pool: rest = 31
[INF]       curl_util.cpp:prepare_url(254): URL is https://apps.company.net/my-bucket-name/storage/
[INF]       curl_util.cpp:prepare_url(287): URL changed is https://storage.apps.company.net/my-bucket-name/
[DBG] curl.cpp:RequestPerform(2283): connecting to URL https://storage.apps.company.net/my-bucket-name/
[INF]       curl.cpp:insertV4Headers(2680): computing signature [GET] [/] [] []
[INF]       curl_util.cpp:url_to_host(331): url is https://apps.company.net/my-bucket-name
[ERR] curl.cpp:RequestPerform(2363): HTTP response code 403, returning EPERM. Body Text: <?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your AWS secret access key and signing method. For more information, see REST Authentication and SOAP Authentication for details.</Message><Resource>/my-bucket-name/</Resource><RequestId>lcrncqpe-50l2jf-a6i</RequestId></Error>
[ERR] curl.cpp:CheckBucket(3421): Check bucket failed, S3 response: <?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your AWS secret access key and signing method. For more information, see REST Authentication and SOAP Authentication for details.</Message><Resource>/my-bucket-name/</Resource><RequestId>lcrncqpe-50l2jf-a6i</RequestId></Error>
...
[ERR] s3fs.cpp:s3fs_exit_fuseloop(3372): Exiting FUSE event loop due to errors

I see both the https://apps.company.net/ and https://storage.apps.company.net/ URLs in the log. It seems s3fs is confused about which URL to use, and perhaps that mismatch is what causes the 403 error.
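One plausible reading of the 403 (an assumption, not a confirmed diagnosis): in Signature Version 4 the Host header is part of the canonical request, so if the client signs one host while the server recomputes the signature for the host actually contacted, the two signatures diverge. A highly simplified sketch, signing only the Host header:

```python
import hashlib

def canonical_request_hash(method: str, path: str, host: str) -> str:
    # Highly simplified SigV4 canonical request: only the Host header is signed here.
    payload_hash = hashlib.sha256(b"").hexdigest()  # empty-body GET
    canonical = "\n".join([method, path, "", f"host:{host}", "", "host", payload_hash])
    return hashlib.sha256(canonical.encode()).hexdigest()

# What the client might sign vs. what the server recomputes (hosts from the log above):
signed_by_client = canonical_request_hash("GET", "/my-bucket-name/", "apps.company.net")
checked_by_server = canonical_request_hash("GET", "/my-bucket-name/", "storage.apps.company.net")
print(signed_by_client != checked_by_server)  # True: differing hosts make the signatures differ
```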

Option 4

To make sure my configuration works, I tried some Python code:

import s3fs

url = "https://storage.apps.company.net"
auth_key = "..."
auth_secret = "..."
bucket_name = "my-bucket-name"
api_version = "s3v4"

print(f"Accessing {url} :: {auth_key} :: {bucket_name}")

fs = s3fs.S3FileSystem(
    anon=False,
    key=auth_key,
    secret=auth_secret,
    client_kwargs={"endpoint_url": url},
    config_kwargs={"signature_version": api_version},
)

name_iter = (
    "/" + "/".join(filter(bool, (root, name))) + suffix
    for root, folders, files in fs.walk("/")
    for names, suffix in ((folders, "/"), (files, ""))
    for name in names
)
print("\n".join(name_iter))

Which prints successfully

/my-bucket-name/file1
/my-bucket-name/file2
/my-bucket-name/file3

So my configuration is correct, and the bucket is reachable from my workstation.

Additional Information

Version of s3fs being used (s3fs --version)

$ s3fs --version
Amazon Simple Storage Service File System V1.90 (commit:unknown) with GnuTLS(gcrypt)

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse or dpkg -s fuse)

FUSE statically compiled into WSL kernel

Kernel information (uname -r)

$ uname -r
5.10.16.3-microsoft-standard-WSL2

GNU/Linux Distribution, if applicable (cat /etc/os-release)

Running a docker container with WSL 2 on Windows.

kerem closed this issue 2026-03-04 01:51:03 +03:00

@joaoe commented on GitHub (Jan 12, 2023):

Update:
The solution was to pass -o use_path_request_style.
May I suggest that s3fs try both strategies, falling back to -o use_path_request_style when the bucket-as-subdomain request fails and the user has specified a URL?


@ggtakec commented on GitHub (Jan 15, 2023):

@joaoe
There are currently no plans to use use_path_request_style as a fallback.
If you need it, I think you should specify use_path_request_style explicitly.

Also, although it is not recommended (for security reasons), you may be able to get started by adding the no_check_certificate option to the Option 1 command you tried.


@joaoe commented on GitHub (Jan 15, 2023):

> There are currently no plans to use use_path_request_style as a fallback.
> If you need it, I think you should specify use_path_request_style explicitly.

Well, thank you :) That is the case right now, but the option is buried in the help.
Might I suggest at least printing a warning that advises the user to try that option when connecting fails?

> you may be able to start by adding the no_check_certificate option

As you well know, that is not a long term solution.


@MatteoAntolini commented on GitHub (Jan 25, 2023):

I managed to mount it with this command:

s3fs bucketname /mnt/s3-bucket/ -o passwd_file=/etc/passwd-s3fs -o url=https://host:port -o use_path_request_style -o ssl_verify_hostname=0 -o no_check_certificate

I have a bucket on MinIO on an internal server.

EDIT:

/etc/fstab

s3fs#bucketname /mnt/s3-bucket fuse _netdev,allow_other,url=https://host:port,use_path_request_style,ssl_verify_hostname=0,no_check_certificate,passwd_file=/etc/passwd-s3fs 0 0


@ggtakec commented on GitHub (Jan 29, 2023):

@joaoe
I decided against suggesting a fix in the runtime error messages, because the cause differs by situation and the right countermeasure is hard to narrow down.
Instead, I added a note about the url option and use_path_request_style to the man page. (I posted it as PR #2104.)
Thanks.


@ggtakec commented on GitHub (Feb 1, 2023):

The note about those options has been added to the man page.
For now, let me close this issue.
If you still have problems, please reopen it or post a new issue.


@ggtakec commented on GitHub (Feb 4, 2023):

I'll close this issue, but if you still have a problem with this, please reopen it or post a new issue.
Thanks,
