mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 05:16:00 +03:00
[GH-ISSUE #666] Cannot mount s3 bucket - Invalid Credentials CRT event. #379
Originally created by @shawnjrose on GitHub (Nov 1, 2017).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/666
s3fs version: V1.82
fuse: 2.9.4
os: Amazon Linux AMI release 2016.03
kernel: Linux ip-172-18-0-180 4.4.5-15.26.amzn1.x86_64 #1 SMP Wed Mar 16 17:15:34 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
I'm trying to mount an S3 bucket located in the same region as the machine. The bucket is set to private. I'm using the credentials in /etc/passwd-s3fs, copied and pasted from the IAM user. I run the following:
/usr/local/bin/s3fs stg-mount /stg-mount -o passwd_file=/etc/passwd-s3fs -d -d -f -o f2 -o curldbg
The following debug information is output:
@shawnjrose commented on GitHub (Nov 2, 2017):
The issue wasn't due to the username:password; it was caused by IAM permissions. Once I associated the proper IAM policy with the user, I could mount the filesystem.
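For anyone hitting the same permission-style 403: a minimal policy along these lines is usually enough for an s3fs mount. This is a sketch, not the policy from the issue; the bucket name "stg-mount" is taken from this report, and the file path and policy name are illustrative.

```shell
# Minimal IAM policy sketch for an s3fs mount. Bucket name "stg-mount" is
# taken from this issue; the file path and policy name are illustrative.
cat > /tmp/s3fs-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::stg-mount"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::stg-mount/*"
    }
  ]
}
EOF
# Attach it to the mounting user (requires the AWS CLI; not run here):
# aws iam put-user-policy --user-name <user> --policy-name s3fs-mount \
#     --policy-document file:///tmp/s3fs-policy.json
```

Note that ListBucket applies to the bucket ARN while the object actions apply to `bucket/*`; mixing those two resources up is a common cause of this exact failure mode.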
@ggtakec commented on GitHub (Nov 5, 2017):
@shawnjrose
Have you solved this problem by modifying the IAM policy?
If you have additional problems, please let me know the details.
If the problem is resolved, please close this issue.
Thanks in advance for your assistance.
@alasundkar commented on GitHub (Jan 30, 2018):
I followed the steps below but am getting an HTTP 403 RequestTimeTooSkewed error (logs shared below).
yum install automake fuse fuse-devel gcc-c++ git libcurl-devel libxml2-devel make openssl-devel
cd /usr/local/src
wget https://github.com/libfuse/libfuse/releases/download/fuse-2.9.7/fuse-2.9.7.tar.gz
tar -zxvf fuse-2.9.7.tar.gz
cd fuse-2.9.7
./configure --prefix=/usr/
make;make install
export PKG_CONFIG_PATH=/usr/lib/pkgconfig:/usr/lib64/pkgconfig/
ldconfig
pkg-config --modversion fuse
git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse
./autogen.sh
./configure
make; make install
s3fs --version
===============================================================================================
vi /etc/passwd-s3fs   # enter accesskey:secretkey
chmod 600 /etc/passwd-s3fs
mkdir /awss3
s3fs alasun /awss3 -o passwd_file=/etc/passwd-s3fs -d -d -f -o f2 -o curldbg
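As an aside on the credential step above, the passwd file can also be written non-interactively; s3fs refuses a credentials file that is readable by group or other, which is why the chmod 600 matters. The path and keys below are placeholders, not values from this thread.

```shell
# Non-interactive equivalent of the vi step above; the path and keys here
# are placeholders -- the real file in this thread is /etc/passwd-s3fs.
printf '%s:%s\n' "AKIAEXAMPLEACCESSKEY" "exampleSecretAccessKey" > /tmp/passwd-s3fs
# s3fs rejects credential files with group/other permissions:
chmod 600 /tmp/passwd-s3fs
stat -c '%a' /tmp/passwd-s3fs
```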
ERROR GETTING:
[root@localhost ~]# s3fs alasun /awss3 -o passwd_file=/etc/passwd-s3fs -d -d -f -o f2 -o curldbg
[CRT] s3fs.cpp:set_s3fs_log_level(271): change debug level from [CRT] to [INF]
[CRT] s3fs.cpp:set_s3fs_log_level(271): change debug level from [INF] to [DBG]
[INF] s3fs.cpp:set_mountpoint_attribute(4206): PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40755)
fuse: warning: library too old, some operations may not not work
FUSE library version: 2.8.3
nullpath_ok: 0
unique: 1, opcode: INIT (26), nodeid: 0, insize: 56
INIT: 7.14
flags=0x0000f07b
max_readahead=0x00020000
[INF] s3fs.cpp:s3fs_init(3371): init v1.83(commit:e1dafe7) with OpenSSL
[WAN] curl.cpp:InitShareCurl(501): curl_share_setopt(SSL SESSION) returns 1(Unknown share option), but continue without shared ssl session data.
[INF] s3fs.cpp:s3fs_check_service(3747): check services.
[INF] curl.cpp:CheckBucket(3068): check a bucket.
[DBG] curl.cpp:GetHandler(285): Get handler from pool: 31
[INF] curl.cpp:prepare_url(4253): URL is https://s3.amazonaws.com/alasun/
[INF] curl.cpp:prepare_url(4285): URL changed is https://alasun.s3.amazonaws.com/
[INF] curl.cpp:insertV4Headers(2400): computing signature [GET] [/] [] []
[INF] curl.cpp:url_to_host(101): url is https://s3.amazonaws.com
[DBG] curl.cpp:RequestPerform(2034): connecting to URL https://alasun.s3.amazonaws.com/
CApath: none
< HTTP/1.1 403 Forbidden
< x-amz-bucket-region: us-east-2
< x-amz-request-id: B6B97E40890CE7B3
< x-amz-id-2: 8EIwfY9c0noAoBunJF1a1TN0u7uhZ7kwZMNe/ZUrCmP3LXmLDOddEThnARU3WBSm6eK5E/BOCxU=
< Content-Type: application/xml
< Transfer-Encoding: chunked
< Date: Tue, 30 Jan 2018 10:07:05 GMT
< Server: AmazonS3
<
[INF] curl.cpp:RequestPerform(2068): HTTP response code 403 was returned, returning EPERM
[DBG] curl.cpp:RequestPerform(2069): Body Text:
<Error><Code>RequestTimeTooSkewed</Code><Message>The difference between the request time and the current time is too large.</Message><RequestTime>20180130T232642Z</RequestTime><ServerTime>2018-01-30T10:07:06Z</ServerTime><MaxAllowedSkewMilliseconds>900000</MaxAllowedSkewMilliseconds><RequestId>B6B97E40890CE7B3</RequestId><HostId>8EIwfY9c0noAoBunJF1a1TN0u7uhZ7kwZMNe/ZUrCmP3LXmLDOddEThnARU3WBSm6eK5E/BOCxU=</HostId></Error>
[ERR] curl.cpp:CheckBucket(3096): Check bucket failed, S3 response:
<Error><Code>RequestTimeTooSkewed</Code><Message>The difference between the request time and the current time is too large.</Message><RequestTime>20180130T232642Z</RequestTime><ServerTime>2018-01-30T10:07:06Z</ServerTime><MaxAllowedSkewMilliseconds>900000</MaxAllowedSkewMilliseconds><RequestId>B6B97E40890CE7B3</RequestId><HostId>8EIwfY9c0noAoBunJF1a1TN0u7uhZ7kwZMNe/ZUrCmP3LXmLDOddEThnARU3WBSm6eK5E/BOCxU=</HostId></Error>
[WAN] s3fs.cpp:s3fs_check_service(3788): Could not connect, so retry to connect by signature version 2.
[DBG] curl.cpp:ReturnHandler(308): Return handler to pool: 31
[INF] curl.cpp:CheckBucket(3068): check a bucket.
[DBG] curl.cpp:GetHandler(285): Get handler from pool: 31
[INF] curl.cpp:prepare_url(4253): URL is https://s3.amazonaws.com/alasun/
[INF] curl.cpp:prepare_url(4285): URL changed is https://alasun.s3.amazonaws.com/
[DBG] curl.cpp:RequestPerform(2034): connecting to URL https://alasun.s3.amazonaws.com/
< HTTP/1.1 403 Forbidden
< x-amz-bucket-region: us-east-2
< x-amz-request-id: 45D74B8C7BB64B94
< x-amz-id-2: kfjgRsFmRVUWMzb+0myDaZH2XLTABQYRx01Z+kflT6q98x+9xjBkLBbYqgD+GVMsIvxLhrxuIuE=
< Content-Type: application/xml
< Transfer-Encoding: chunked
< Date: Tue, 30 Jan 2018 10:07:06 GMT
< Server: AmazonS3
<
Connection #0 to host alasun.s3.amazonaws.com left intact
[INF] curl.cpp:RequestPerform(2068): HTTP response code 403 was returned, returning EPERM
[DBG] curl.cpp:RequestPerform(2069): Body Text:
<Error><Code>RequestTimeTooSkewed</Code><Message>The difference between the request time and the current time is too large.</Message><RequestTime>Tue, 30 Jan 2018 23:26:43 GMT</RequestTime><ServerTime>2018-01-30T10:07:07Z</ServerTime><MaxAllowedSkewMilliseconds>900000</MaxAllowedSkewMilliseconds><RequestId>45D74B8C7BB64B94</RequestId><HostId>kfjgRsFmRVUWMzb+0myDaZH2XLTABQYRx01Z+kflT6q98x+9xjBkLBbYqgD+GVMsIvxLhrxuIuE=</HostId></Error>
[ERR] curl.cpp:CheckBucket(3096): Check bucket failed, S3 response:
<Error><Code>RequestTimeTooSkewed</Code><Message>The difference between the request time and the current time is too large.</Message><RequestTime>Tue, 30 Jan 2018 23:26:43 GMT</RequestTime><ServerTime>2018-01-30T10:07:07Z</ServerTime><MaxAllowedSkewMilliseconds>900000</MaxAllowedSkewMilliseconds><RequestId>45D74B8C7BB64B94</RequestId><HostId>kfjgRsFmRVUWMzb+0myDaZH2XLTABQYRx01Z+kflT6q98x+9xjBkLBbYqgD+GVMsIvxLhrxuIuE=</HostId></Error>
[CRT] s3fs.cpp:s3fs_check_service(3803): invalid credentials(host=https://s3.amazonaws.com) - result of checking service.
[DBG] curl.cpp:ReturnHandler(308): Return handler to pool: 31
[ERR] s3fs.cpp:s3fs_exit_fuseloop(3361): Exiting FUSE event loop due to errors
INIT: 7.12
flags=0x00000011
max_readahead=0x00020000
max_write=0x00020000
unique: 1, success, outsize: 40
[INF] s3fs.cpp:s3fs_destroy(3434): destroy
Closing connection #0
[WAN] s3fs.cpp:s3fs_destroy(3438): Could not release curl library.
[root@localhost ~]#
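The RequestTimeTooSkewed body in this log makes the real cause visible, even though s3fs reports it as "invalid credentials". Comparing the RequestTime and ServerTime values copied from the response above (a sketch using GNU date) shows a skew of over 13 hours, far beyond what S3 tolerates:

```shell
# Quantify the skew using the RequestTime/ServerTime values from the
# RequestTimeTooSkewed response above (GNU date):
request_time="2018-01-30T23:26:42Z"   # time the client signed the request
server_time="2018-01-30T10:07:06Z"    # S3's clock when the request arrived
skew=$(( $(date -ud "$request_time" +%s) - $(date -ud "$server_time" +%s) ))
echo "skew: ${skew}s (S3 allows at most 900s)"
# Fix: sync the machine's clock (e.g. with chrony or ntpd) and retry the mount.
```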
@roopeshpk commented on GitHub (Mar 14, 2018):
Do we have any solution for this one yet? Even I am facing the same issue.
The s3fs version I have is 1.83. The IAM policies look good; I even tried giving complete access to the S3 bucket.
Still the same issue.
Mar 14 17:15:51 ip-10-216-21-159 s3fs[4852]: s3fs.cpp:s3fs_check_service(3803): invalid credentials(host=https://s3.amazonaws.com) - result of checking service.
Mar 14 17:20:35 ip-10-216-21-159 s3fs[4964]: init v1.83(commit:unknown) with OpenSSL
Mar 14 17:20:37 ip-10-216-21-159 s3fs[4964]: s3fs.cpp:s3fs_check_service(3803): invalid credentials(host=https://s3.amazonaws.com) - result of checking service
any Idea folks ?
@lrodri29 commented on GitHub (Nov 20, 2018):
Same issue as @roopeshpk here
@Shinrai commented on GitHub (Dec 30, 2018):
In case anyone else comes across this: I just spent a couple of hours trying to figure out why my newly created user wasn't able to mount the S3 bucket. It turns out I had the right policy attached to a group, and that group attached to the user, but I had to re-attach the policy to the group. Adding the policy directly to the user worked as well.
@mgbii commented on GitHub (Mar 21, 2019):
This was solved on my end by adding a url option:
-o url=https://s3-us-west-1.amazonaws.com
@ligett commented on GitHub (Aug 12, 2019):
This helped me also (I appended "-o url=https://s3.eu-central-1.amazonaws.com").
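There is a hint in the log above for picking the right value for -o url: the failing 403 response already names the bucket's real region in its x-amz-bucket-region header. A sketch that extracts it (the header lines are copied from the log; the commented mount command is illustrative):

```shell
# The failing 403 response already names the bucket's real region in its
# x-amz-bucket-region header; these lines are copied from the log above.
headers='HTTP/1.1 403 Forbidden
x-amz-bucket-region: us-east-2
Server: AmazonS3'
region=$(printf '%s\n' "$headers" | awk -F': ' 'tolower($1)=="x-amz-bucket-region" {print $2}')
echo "bucket region: ${region}"
# Mount with a matching endpoint (illustrative):
# s3fs alasun /awss3 -o passwd_file=/etc/passwd-s3fs \
#     -o url="https://s3.${region}.amazonaws.com" -o endpoint="${region}"
```

In a live setup the same header can be read with `curl -sI https://<bucket>.s3.amazonaws.com/`.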
@naweeng commented on GitHub (Feb 26, 2020):
In my case the command was failing because the server's time was out of sync. I think the error message could be made more meaningful.
@hopeseekr commented on GitHub (Aug 19, 2020):
Adding the -o url option is the ONLY thing that worked for me. I guess all the docs everywhere on the Internet except this page are out of date?
@tembaby commented on GitHub (Nov 14, 2020):
Using -o url helped me as well.
Does anybody know why it doesn't work with the default https://s3.amazonaws.com?
@ziXet commented on GitHub (Mar 23, 2021):
I have the same issue! I have some buckets for which I don't need to provide the URL, but recently I created a bucket that s3fs cannot mount without providing -o url with the correct region (both buckets are in the same account).
@JacksonChen63 commented on GitHub (Oct 6, 2021):
It works for me:
-o url=https://s3-us-west-1.amazonaws.com
@tuan-nguyen-ts commented on GitHub (Dec 23, 2022):
Same for me. I already had 2 buckets mounted without -o url, but the third one didn't work. The mount command output showed that the bucket was already mounted, but when I accessed the mount point, I got this error message:
# ls -l upload/
ls: reading directory upload/: Operation not permitted
total 0
Then I searched and found this ticket. The -o url=https://s3.ap-southeast-1.amazonaws.com option works for me. Thank you all, and especially @mgbii.