mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #1452] Access Denied Error: Check bucket failed with full S3 Access #761
Originally created by @valerius21 on GitHub (Oct 15, 2020).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1452
Additional Information

- Version of s3fs being used (`s3fs --version`): 1.86
- Version of fuse being used (`pkg-config --modversion fuse`, `rpm -qi fuse`, `dpkg -s fuse`): 2.9.9-3
- Kernel information (`uname -r`): 5.4.0-48-generic
- GNU/Linux distribution, if applicable (`cat /etc/os-release`): not provided
- s3fs command line used, if applicable: not provided
- s3fs syslog messages (`grep s3fs /var/log/syslog`, `journalctl | grep s3fs`, or s3fs output): not provided; running s3fs with the `dbglevel` and `curldbg` options produces detailed debug messages
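For reference, a debug invocation along the lines the template suggests might look like this (a sketch; `mybucket` and the mount point are placeholders):

```shell
# Run s3fs in the foreground with verbose application and libcurl logging.
# -f keeps s3fs in the foreground, so messages go to stderr instead of syslog.
s3fs mybucket /path/to/mountpoint -f \
    -o dbglevel=info \
    -o curldbg
```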
Details about the issue
Using the `~/.aws/credentials` file instead of the `~/.passwd-s3fs` file produces the same error. The IAM role has full S3 access, and the permissions have been tested with the same profile on the AWS CLI. No files show up in the desired folder.
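For context, the two credential sources mentioned above are typically set up along these lines (a sketch; the keys shown are AWS's documentation placeholders, not real credentials):

```shell
# ~/.passwd-s3fs holds one line, ACCESS_KEY_ID:SECRET_ACCESS_KEY,
# and must not be readable by other users or s3fs will refuse it.
echo "AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs

# Alternatively, s3fs can read a named profile from ~/.aws/credentials:
#   [default]
#   aws_access_key_id = AKIAIOSFODNN7EXAMPLE
#   aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
# selected at mount time with -o profile=default.
```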
I am getting the following error when trying to mount the s3 bucket to a folder in my home directory:
@valerius21 commented on GitHub (Oct 15, 2020):
Apparently, you need to wait a few hours so Amazon can propagate the permission changes across their systems.
@aamorozov commented on GitHub (Jan 14, 2021):
Did you have to do anything else aside from waiting? I am experiencing the exact same issue but it's been the case for at least a day now @valerius21
@gaul commented on GitHub (Jan 14, 2021):
A few things could be going on with the original issue -- creating a bucket in one region and recreating it in another might leave the DNS entries temporarily stale. Specifying the full URL, such as https://s3.us-west-1.amazonaws.com, and the endpoint, such as us-west-1, should always work though.
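Concretely, that suggestion corresponds to mount options along these lines (a sketch; the bucket name, mount point, and region are placeholders):

```shell
# Pin both the endpoint URL and the region so requests go directly to the
# bucket's region instead of relying on redirects from s3.amazonaws.com.
s3fs my-bucket /path/to/mountpoint \
    -o url=https://s3.us-west-1.amazonaws.com \
    -o endpoint=us-west-1
```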
@aamorozov commented on GitHub (Jan 14, 2021):
I tried with both the `url` and `endpoint` options but still no luck; same result for the `iam_role` option. I've created a ticket here describing the issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1518
@christopherdalton commented on GitHub (Apr 16, 2021):
I can confirm I am experiencing the same behaviour, both with the `-o endpoint` and `-o url` parameters configured. I am able to connect to buckets that were created prior to the test bucket I am working with, so there is clearly an issue in how s3fs passes the endpoint/URL.
When interacting with newly created buckets using the AWS CLI, PowerShell, or Terraform, passing the region is sufficient.
The requirement for passing the region stems from DNS propagation, not IAM policy permissions.
Please re-open and investigate?
@Noorahoon commented on GitHub (May 5, 2025):