mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #807] response code 403 - Access denied - invalid credentials #465
Originally created by @LouNik1984 on GitHub (Aug 7, 2018).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/807
Additional Information
I'm testing s3fs on a virtual machine running Ubuntu 16.04. I already have a private S3 bucket and a user whose policy allows access to everything:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}
If I mount my already existing bucket, no issue at all, everything works as expected.
But then I created a new bucket and attached it to the same user/policy/group as the first one. When I try to mount this new bucket it fails with HTTP error 403 Access Denied (invalid credentials).
The credentials are the same as for the first bucket, and so is the AWS policy. How can it work with one bucket but not the other? How can I mount my second bucket?
Version of s3fs being used
Amazon Simple Storage Service File System V1.84 (commit:06032aa) with OpenSSL
Version of fuse being used
2.9.4
System information
4.13.0-36-generic
Distro
Ubuntu 16.04.4 LTS
s3fs command line used
/usr/bin/s3fs -f -d mybucketname /mnt/s3 -o passwd_file=/etc/passwd-s3fs
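For context, here is a sketch of how the credentials file referenced by -o passwd_file is laid out. The keys and the local path are placeholders; s3fs expects a single ACCESS_KEY_ID:SECRET_ACCESS_KEY line and, as far as I know, rejects the file if it is readable by group or other, hence the chmod.

```shell
# Sketch of the passwd_file used above (placeholder keys, local path).
# Format: ACCESS_KEY_ID:SECRET_ACCESS_KEY on one line.
passwd_file=./passwd-s3fs        # the issue uses /etc/passwd-s3fs
printf 'AKIAEXAMPLEKEY:exampleSecretAccessKey\n' > "$passwd_file"
chmod 600 "$passwd_file"         # s3fs refuses world/group-readable files
```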
@tbooth commented on GitHub (Dec 11, 2018):
Hi,
I think I am seeing the same problem. Connection to an existing bucket works, but if I make a new one, with the same credentials and permissions, it fails. I'm working in a non-default region (eu-west-1).
My guess here is that some time in the hours after the bucket is created, some metadata gets copied around the AWS system and the exact connection protocol for the bucket changes subtly. I can see that the initial connection to the existing bucket gets a "400 Bad Request" whereas a connection to the brand new bucket gets a "307 Temporary Redirect". In the first case, s3fs ends up correctly fixing its access credentials and logging in, but in the second case it never gets it right and ends up with a 404.
So it looks like there is some missing logic in s3fs regarding the authorization protocol. The following work-around, setting both options, worked for me:
-o endpoint=eu-west-1 -o url="https://s3-eu-west-1.amazonaws.com"
@tsmgeek commented on GitHub (Jan 17, 2019):
I have the same issue; I just passed the -o url option and it worked.
@ggtakec commented on GitHub (Jan 20, 2019):
@LouNik1984 @tbooth @tsmgeek
I merged new code in #912.
With this change, s3fs accesses the correct region without the url option, even when you specify the region with endpoint and receive a "307 Temporary Redirect".
Please try the master branch code if you can.
I'm closing this issue, but if you still have a problem, please reopen it.
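For releases without that fix, the work-around described in the comments above can be sketched as follows. The region and bucket name are placeholders; the idea is to point s3fs at the bucket's real regional endpoint explicitly so the first request never triggers the 307 redirect. The command is echoed rather than run, since mounting requires s3fs and valid credentials.

```shell
# Sketch of the endpoint/url work-around (placeholder region and bucket).
region="eu-west-1"
url="https://s3-${region}.amazonaws.com"   # legacy dashed regional endpoint
echo /usr/bin/s3fs mybucketname /mnt/s3 \
    -o passwd_file=/etc/passwd-s3fs \
    -o endpoint="${region}" \
    -o url="${url}"
```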
@ggtakec commented on GitHub (Jan 21, 2019):
@LouNik1984 @tbooth @tsmgeek I'm sorry about #912.
The code change in #912 was not correct.
We will revise it based on Issue #916.
Regards,