[GH-ISSUE #1452] Access Denied Error: Check bucket failed with full S3 Access #761

Closed
opened 2026-03-04 01:48:33 +03:00 by kerem · 6 comments

Originally created by @valerius21 on GitHub (Oct 15, 2020).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1452

Additional Information

Version of s3fs being used (s3fs --version)

1.86

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)

2.9.9-3

Kernel information (uname -r)

5.4.0-48-generic

GNU/Linux Distribution, if applicable (cat /etc/os-release)

NAME="Ubuntu"
VERSION="20.04.1 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.1 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal

s3fs command line used, if applicable

s3fs <testbucket> ${HOME}/s3/<data> -o profile=default -f -d

s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)

If you execute s3fs with the dbglevel or curldbg options, you can get detailed debug messages

Oct 15 06:49:52 candyland s3fs[311520]: init v1.86(commit:unknown) with GnuTLS(gcrypt)
Oct 15 06:49:52 candyland s3fs[311520]: s3fs.cpp:s3fs_check_service(3883): Failed to connect by sigv4, so retry to connect by signature version 2.
Oct 15 06:49:53 candyland s3fs[311520]: s3fs.cpp:s3fs_check_service(3898): invalid credentials(host=https://s3.amazonaws.com) - result of checking service.

Details about the issue

Using the ~/.aws/credentials file instead of the ~/.passwd-s3fs file produces the same error. The IAM role has full S3 access, and the permissions have been verified with the same profile via the AWS CLI.
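Since the same profile reportedly works via the AWS CLI, a quick sanity check outside of s3fs looks like this (a sketch; `<testbucket>` is the placeholder bucket name used throughout this report):

```shell
# Verify the profile can list the bucket outside of s3fs;
# if this also fails, the problem is IAM/propagation, not s3fs.
aws s3 ls s3://<testbucket> --profile default

# Confirm which identity the profile actually resolves to.
aws sts get-caller-identity --profile default
```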

No files show up in the desired folder.

I am getting the following error when trying to mount the s3 bucket to a folder in my home directory:

[CRT] s3fs.cpp:set_s3fs_log_level(297): change debug level from [CRT] to [INF] 
[INF]     s3fs.cpp:set_mountpoint_attribute(4400): PROC(uid=1000, gid=1000) - MountPoint(uid=1000, gid=1000, mode=40775)
[INF] s3fs.cpp:s3fs_init(3493): init v1.86(commit:unknown) with GnuTLS(gcrypt)
[INF] s3fs.cpp:s3fs_check_service(3828): check services.
[INF]       curl.cpp:CheckBucket(3413): check a bucket.
[INF]       curl.cpp:prepare_url(4703): URL is https://s3.amazonaws.com/<testbucket>/
[INF]       curl.cpp:prepare_url(4736): URL changed is https://<testbucket>.s3.amazonaws.com/
[INF]       curl.cpp:insertV4Headers(2753): computing signature [GET] [/] [] []
[INF]       curl.cpp:url_to_host(99): url is https://s3.amazonaws.com
[ERR] curl.cpp:RequestPerform(2436): HTTP response code 403, returning EPERM. Body Text: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>758692A7D0B75B21</RequestId><HostId>rR5KyjpVjOeSwIcNYJKqAH851ZACQ071DM3gY6JSOTlMDa1Q9W4AX+4xD49QFkopGsUtTxbNNJI=</HostId></Error>
[ERR] curl.cpp:CheckBucket(3439): Check bucket failed, S3 response: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>758692A7D0B75B21</RequestId><HostId>rR5KyjpVjOeSwIcNYJKqAH851ZACQ071DM3gY6JSOTlMDa1Q9W4AX+4xD49QFkopGsUtTxbNNJI=</HostId></Error>
[CRT] s3fs.cpp:s3fs_check_service(3883): Failed to connect by sigv4, so retry to connect by signature version 2.
[INF] curl.cpp:ReturnHandler(318): Pool full: destroy the oldest handler
[INF]       curl.cpp:CheckBucket(3413): check a bucket.
[INF]       curl.cpp:prepare_url(4703): URL is https://s3.amazonaws.com/<testbucket>/
[INF]       curl.cpp:prepare_url(4736): URL changed is https://<testbucket>.s3.amazonaws.com/
[ERR] curl.cpp:RequestPerform(2436): HTTP response code 403, returning EPERM. Body Text: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>C8DFAB144A328448</RequestId><HostId>79ouhJkZp2dFEdcYnCUkXNRvgZP5Z4D6ZE7a/N0Is/JcGVI57MB1A2AVBTtO653sJdB4/FIEcPQ=</HostId></Error>
[ERR] curl.cpp:CheckBucket(3439): Check bucket failed, S3 response: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>C8DFAB144A328448</RequestId><HostId>79ouhJkZp2dFEdcYnCUkXNRvgZP5Z4D6ZE7a/N0Is/JcGVI57MB1A2AVBTtO653sJdB4/FIEcPQ=</HostId></Error>
[CRT] s3fs.cpp:s3fs_check_service(3898): invalid credentials(host=https://s3.amazonaws.com) - result of checking service.
[ERR] s3fs.cpp:s3fs_exit_fuseloop(3483): Exiting FUSE event loop due to errors

[INF] s3fs.cpp:s3fs_destroy(3546): destroy
kerem closed this issue 2026-03-04 01:48:33 +03:00

@valerius21 commented on GitHub (Oct 15, 2020):

Apparently, you need to wait a few hours so amazon can update the permissions across their system.


@aamorozov commented on GitHub (Jan 14, 2021):

Did you have to do anything else aside from waiting? I am experiencing the exact same issue, but it has persisted for at least a day now. @valerius21


@gaul commented on GitHub (Jan 14, 2021):

A few things could be going on with the original issue -- creating a bucket in one region and recreating it in another might leave the DNS entries temporarily stale. Specifying the full URL, such as https://s3.us-west-1.amazonaws.com, and the endpoint, such as us-west-1, should always work though.
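Applied to the command line from the original report, this suggestion would look something like the following (a sketch: the bucket name and mountpoint are placeholders, and us-west-1 stands in for whatever region the bucket actually lives in):

```shell
# Pin both the endpoint URL and the region instead of relying
# on the global s3.amazonaws.com alias to resolve correctly.
s3fs <testbucket> ${HOME}/s3/<data> \
    -o profile=default \
    -o url=https://s3.us-west-1.amazonaws.com \
    -o endpoint=us-west-1 \
    -f -d
```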


@aamorozov commented on GitHub (Jan 14, 2021):

I tried with both the url and endpoint options but still no luck - same result with the iam_role option.
I've created a ticket describing the issue here: https://github.com/s3fs-fuse/s3fs-fuse/issues/1518


@christopherdalton commented on GitHub (Apr 16, 2021):

I can confirm I am experiencing the same behaviour, both with the -o endpoint and -o url parameters configured. I am able to connect to buckets that were created prior to the test bucket I am working with. There is clearly an issue in how s3fs passes the endpoint/url.

When interacting with newly created buckets using the AWS-CLI / PowerShell / Terraform, passing the region is sufficient.

The requirement for passing the region stems from DNS propagation, not IAM policy permissions.

Please re-open and investigate?
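If DNS propagation for a freshly created bucket is the suspect, one way to find the region to pass to s3fs is to query it directly (a sketch using the AWS CLI; the bucket name is a placeholder):

```shell
# Returns the bucket's region as LocationConstraint
# (a null value means us-east-1).
aws s3api get-bucket-location --bucket <testbucket> --profile default
```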
