[GH-ISSUE #179] Invalid Security Credentials - Google Cloud #101

Closed
opened 2026-03-04 01:42:08 +03:00 by kerem · 8 comments
Owner

Originally created by @tortib on GitHub (Apr 30, 2015).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/179

Hello,

I'm trying to connect to my Google Cloud Storage bucket, using the format below in /etc/passwd-s3fs. Your documentation states to use accessKeyId:secretAccessKey as the format for the credentials file. However, when I try to connect using the Client ID and Client Secret provided by the Google Cloud API, I get this result:
```
$ s3fs storage1 /mnt/user_storage -oallow_other -o url=https://storage.googleapis.com -f -d
set_moutpoint_attribute(3539): PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40755)
s3fs_init(2713): init
s3fs_check_service(3072): check services.
CheckBucket(2525): check a bucket.
insertV4Headers(1961): computing signature [GET] [/] [] []
url_to_host(99): url is https://storage.googleapis.com
RequestPerform(1619): connecting to URL https://storage1.storage.googleapis.com/
RequestPerform(1653): HTTP response code 403 was returned, returning EPERM
CheckBucket(2563): Check bucket failed, S3 response: <?xml version='1.0' encoding='UTF-8'?><Error><Code>InvalidSecurity</Code><Message>The provided security credentials are not valid.</Message><Details>Incorrect Authorization header</Details></Error>
```

Any help would be appreciated.
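For reference, a minimal setup of the kind the reporter describes might look like the sketch below. The keys shown are placeholders for GCS interoperability (HMAC) credentials, not the OAuth Client ID/Secret; the bucket name is taken from this issue.

```shell
# Credentials file: one line in accessKeyId:secretAccessKey form (placeholder keys)
echo 'GOOGEXAMPLEACCESSKEY:exampleSecretKey' > /etc/passwd-s3fs
chmod 600 /etc/passwd-s3fs   # s3fs rejects credential files readable by others

# Mount the bucket against the GCS XML/interoperability endpoint
s3fs storage1 /mnt/user_storage -o allow_other -o url=https://storage.googleapis.com
```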

kerem closed this issue 2026-03-04 01:42:08 +03:00

@gaul commented on GitHub (Apr 30, 2015):

@tortib Did you configure your project for interoperable mode and use the provided S3-compatible credentials? The following wiki entry explains how:

https://github.com/s3fs-fuse/s3fs-fuse/wiki/Google-Cloud-Storage


@tortib commented on GitHub (Apr 30, 2015):

Yes, I have interoperable mode enabled. I have tried using the Access Key and Secret that you can assign in the interoperable interface. However, it still gives me the same error stating the security credentials are not valid.


@sqlbot commented on GitHub (Apr 30, 2015):

S3 has two different and incompatible signing algorithms for requests, known as V2 and V4. Within S3 itself, V4 works in all regions, but V2 only works in regions where S3 launched before 2014... yet the [Google "migrating" documentation](https://cloud.google.com/storage/docs/migrating) shows examples that use Signature Version 2, and there's no mention of or allusion to V4 support (as far as I can find) in the Google Cloud Storage docs... but the default for s3fs now appears to be V4, and this log entry suggests you are using that (default) setting:

`insertV4Headers(1961): computing signature`

Based on the docs here, I would suggest adding the s3fs option `-o sigv2` for s3fs to work with GCS.

If my speculation is correct, the wiki page @andrewgaul cited may be missing that information because, perhaps, it dates back to before V4 was implemented or before it was made the default signing mechanism.

There is some commit history of attempts to fall back to V2, but that doesn't appear to be happening here... perhaps it was removed (and if so, that's probably a good thing: correct configuration is better than simple configuration, particularly when you are paying for the failed requests).
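For illustration, the V2 scheme described above signs a newline-joined "string to sign" with HMAC-SHA1 of the secret key and Base64-encodes the result. A rough sketch with invented placeholder keys, a fixed date, and the bucket name from this issue:

```shell
#!/bin/sh
# Sketch of an S3 Signature Version 2 Authorization header (placeholder keys).
access_key="GOOGEXAMPLEACCESSKEY"
secret_key="exampleSecretKey"
date_hdr="Tue, 27 Mar 2007 19:36:42 +0000"   # fixed date for reproducibility

# V2 string-to-sign: VERB \n Content-MD5 \n Content-Type \n Date \n Resource
string_to_sign=$(printf 'GET\n\n\n%s\n/storage1/' "$date_hdr")

# Signature = Base64(HMAC-SHA1(secret_key, string_to_sign))
signature=$(printf '%s' "$string_to_sign" \
  | openssl dgst -sha1 -hmac "$secret_key" -binary | openssl base64)

echo "Authorization: AWS ${access_key}:${signature}"
```

V4, by contrast, derives a signing key from the secret, date, region, and service and uses HMAC-SHA256, which is why the two schemes produce incompatible Authorization headers.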


@gaul commented on GitHub (Apr 30, 2015):

@sqlbot This is correct and I updated the wiki to reflect this. I wonder why s3fs does not fall back from v4 to v2 correctly?


@kahing commented on GitHub (Apr 30, 2015):

s3fs only falls back to SigV2 on HTTP 400 (which is what AWS S3 returns), and it looks like Google is returning 403.


@kahing commented on GitHub (Apr 30, 2015):

In addition to that, the s3fs retry logic only kicks in if the endpoint is not explicitly specified.


@tortib commented on GitHub (Apr 30, 2015):

Thank you for helping me resolve this problem. It now works properly and I'm able to copy files.


@ggtakec commented on GitHub (Jun 20, 2015):

@tortib @kahing
I merged #200, which is based on @kahing's PR.
Please try the master branch, and if you find bugs, please post a new issue or reopen this one.
Regards,
