[GH-ISSUE #807] response code 403 - Access denied - invalid credentials #465

Closed
opened 2026-03-04 01:45:50 +03:00 by kerem · 4 comments

Originally created by @LouNik1984 on GitHub (Aug 7, 2018).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/807

### Additional Information

I'm testing s3fs on a virtual machine running Ubuntu 16.04. I already have a private S3 bucket and a user with a policy that allows access to everything:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}
```

If I mount my already existing bucket, no issue at all, everything works as expected.

But then I created a new bucket and attached it to the same user/policy/group as the previous one. When I try to mount this new bucket it doesn't work: I get HTTP error code 403, Access Denied, invalid credentials.

But the credentials are the same as for the first bucket, and so is the AWS policy. How can it work with one bucket but not the other? How can I mount my second bucket?

#### Version of s3fs being used

Amazon Simple Storage Service File System V1.84 (commit:06032aa) with OpenSSL

#### Version of fuse being used

2.9.4

#### System information

4.13.0-36-generic

#### Distro

Ubuntu 16.04.4 LTS

#### s3fs command line used

```
/usr/bin/s3fs -f -d mybucketname /mnt/s3 -o passwd_file=/etc/passwd-s3fs
```
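For reference, the `passwd_file` passed above must contain a single `ACCESS_KEY_ID:SECRET_ACCESS_KEY` line and must be readable only by its owner, or s3fs will also refuse it with a credentials error. A minimal sketch (the key values below are placeholders, not real credentials):

```shell
# Placeholder credentials, shown only to illustrate the expected
# ACCESS_KEY_ID:SECRET_ACCESS_KEY format of the s3fs password file.
echo 'AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY' > /tmp/passwd-s3fs

# s3fs requires the file to be private to its owner.
chmod 600 /tmp/passwd-s3fs

# Verify the permission bits (GNU stat on Linux).
stat -c '%a' /tmp/passwd-s3fs
```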

kerem closed this issue 2026-03-04 01:45:50 +03:00

@tbooth commented on GitHub (Dec 11, 2018):

Hi,

I think I am seeing the same problem. Connection to an existing bucket works, but if I make a new one, with the same credentials and permissions, it fails. I'm working in a non-default region (eu-west-1).

My guess here is that some time in the hours after the bucket is created, some metadata gets copied around the AWS system and the exact connection protocol for the bucket changes subtly. I can see that the initial connection to the existing bucket gets a "400 Bad Request" whereas a connection to the brand new bucket gets a "307 Temporary Redirect". In the first case, s3fs ends up correctly fixing its access credentials and logging in, but in the second case it never gets it right and ends up with a 404.

So it looks like there is some missing logic in s3fs regarding the authorization protocol. The following work-around worked for me, setting both options:

```
-o endpoint=eu-west-1 -o url="https://s3-eu-west-1.amazonaws.com"
```
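The URL tbooth builds by hand follows the legacy region-specific hostname scheme that was in use when this issue was filed. A small sketch of that mapping (not part of s3fs, just an illustration of how `-o url` relates to the region passed to `-o endpoint`):

```python
def s3_endpoint_url(region: str) -> str:
    """Return the legacy region-specific S3 endpoint URL for -o url."""
    # us-east-1 is the default endpoint; other regions used the
    # "s3-<region>" hostname form at the time of this issue.
    if region == "us-east-1":
        return "https://s3.amazonaws.com"
    return f"https://s3-{region}.amazonaws.com"

print(s3_endpoint_url("eu-west-1"))  # https://s3-eu-west-1.amazonaws.com
```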


@tsmgeek commented on GitHub (Jan 17, 2019):

> Hi,
>
> I think I am seeing the same problem. Connection to an existing bucket works, but if I make a new one, with the same credentials and permissions, it fails. I'm working in a non-default region (eu-west-1).
>
> My guess here is that some time in the hours after the bucket is created, some metadata gets copied around the AWS system and the exact connection protocol for the bucket changes subtly. I can see that the initial connection to the existing bucket gets a "400 Bad Request" whereas a connection to the brand new bucket gets a "307 Temporary Redirect". In the first case, s3fs ends up correctly fixing its access credentials and logging in, but in the second case it never gets it right and ends up with a 404.
>
> So it looks like there is some missing logic in s3fs regarding the authorization protocol. The following work-around worked for me, setting both options:
>
> `-o endpoint=eu-west-1 -o url="https://s3-eu-west-1.amazonaws.com"`

I have the same issue; I just passed `-o url` and it worked.


@ggtakec commented on GitHub (Jan 20, 2019):

@LouNik1984 @tbooth @tsmgeek
I merged new code in #912.
This change makes s3fs access the correct region without the url option, even when you specify the region with endpoint and receive a "307 Temporary Redirect".
Please try the master branch code if you can.
I'm closing this issue, but if you still have a problem, please reopen it.


@ggtakec commented on GitHub (Jan 21, 2019):

@LouNik1984 @tbooth @tsmgeek I'm sorry about #912.
The code change in #912 was not good.
We will revise it based on issue #916.
Regards,
