[GH-ISSUE #140] failed to access bucket or infinite getattr(691) #83

Closed
opened 2026-03-04 01:41:52 +03:00 by kerem · 1 comment

Originally created by @iriakhov on GitHub (Mar 5, 2015).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/140

Hello GitHub Community,

Has anyone experienced "failed to access bucket" while trying to mount an Amazon S3 bucket over s3fs? And what does s3fs_getattr(691): [path=/] mean?

I have the password file under /etc with appropriate permissions, my Amazon account is read-only and is not part of an IAM role, and the bucket holds about 400 GB of content.
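
Since the password file is one of the first things s3fs validates, here is a minimal sketch of setting one up. The path /tmp/passwd-s3fs-demo and the key pair are placeholders (the system-wide file would normally be /etc/passwd-s3fs); s3fs refuses a credentials file that is readable by others:

```shell
# Sketch: creating an s3fs credentials file. The path and keys below are
# placeholders, not the poster's actual file. The format is
# ACCESSKEYID:SECRETACCESSKEY, one line per key pair.
PASSWD_FILE=/tmp/passwd-s3fs-demo
printf '%s\n' 'AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY' > "$PASSWD_FILE"

# s3fs rejects the file if it is accessible to other users, so restrict it:
chmod 600 "$PASSWD_FILE"
```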

RedHat

$ s3fs --version
Amazon Simple Storage Service File System 1.74

rpm -qa:

libcurl-7.19.7-40.el6_6.4.x86_64
libcurl-devel-7.19.7-40.el6_6.4.x86_64
python-pycurl-7.19.0-8.el6.x86_64
curl-7.19.7-40.el6_6.4.x86_64

fuse-2.8.3-4.el6.x86_64

uname -a
Linux {box} 2.6.32-504.8.1.el6.x86_64 #1 SMP Fri Dec 19 12:09:25 EST 2014 x86_64 x86_64 x86_64 GNU/Linux

[]$ curl -I google.com
HTTP/1.1 302 Found
Cache-Control: private
Content-Type: text/html; charset=UTF-8
Location: http://www.google.ca/?gfe_rd=cr&ei=J5P4VLD2M4qN8QfYg4CIBw
Content-Length: 258
Date: Thu, 05 Mar 2015 17:32:23 GMT
Server: GFE/2.0
Alternate-Protocol: 80:quic,p=0.08
Proxy-Connection: Keep-Alive
Connection: Keep-Alive

[]$ sudo s3fs -f {bucket} /mnt/s3 -o allow_other -o use_cache=/tmp/cache
set_moutpoint_attribute(3291): PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40777)
s3fs_init(2595): init
s3fs_check_service(2894): check services.
CheckBucket(2228): check a bucket.
RequestPerform(1524): ### CURLE_OPERATION_TIMEDOUT
RequestPerform(1617): ### retrying...
RemakeHandle(1288): Retry request. [type=5][url=http://{bucket}.s3.amazonaws.com/][path=/]
RequestPerform(1524): ### CURLE_OPERATION_TIMEDOUT
RequestPerform(1617): ### retrying...
RemakeHandle(1288): Retry request. [type=5][url=http://{bucket}.s3.amazonaws.com/][path=/]
RequestPerform(1524): ### CURLE_OPERATION_TIMEDOUT
RequestPerform(1617): ### retrying...
RemakeHandle(1288): Retry request. [type=5][url=http://{bucket}.s3.amazonaws.com/][path=/]
RequestPerform(1624): ### giving up
s3fs: Failed to access bucket.
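
One detail worth noting: the curl output above contains a Proxy-Connection header, which suggests this host reaches the Internet through an HTTP proxy. sudo's default env_reset strips variables like http_proxy from root's environment, which could explain why the mount times out only under sudo. A hypothetical check (proxy address is a placeholder):

```shell
# Hypothetical diagnostic: report whether an HTTP proxy is configured in the
# current environment. Under sudo's default env_reset, http_proxy/https_proxy
# set for the login user are stripped for root, so s3fs would try to reach
# s3.amazonaws.com directly and time out.
check_proxy_env() {
    if [ -n "${http_proxy:-}${https_proxy:-}" ]; then
        echo "proxy configured: ${http_proxy:-${https_proxy}}"
    else
        echo "no proxy in environment"
    fi
}

# If a proxy is required, preserve the environment through sudo:
#   sudo -E s3fs {bucket} /mnt/s3 -o allow_other -o use_cache=/tmp/cache
# or pass it explicitly on the command line:
#   sudo http_proxy=http://proxy.example.com:8080 s3fs {bucket} /mnt/s3
```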

Now without sudo:

[]$ s3fs -f {bucket} /mnt/s3 -o allow_other -o use_cache=/tmp/cache
set_moutpoint_attribute(3291): PROC(uid=3290, gid=1111) - MountPoint(uid=0, gid=0, mode=40777)
s3fs_init(2595): init
s3fs_check_service(2894): check services.
CheckBucket(2228): check a bucket.
RequestPerform(1483): HTTP response code 200
s3fs_getattr(691): [path=/]
s3fs_getattr(691): [path=/]
s3fs_getattr(691): [path=/]
The message above keeps repeating indefinitely.

Any thoughts?

kerem closed this issue 2026-03-04 01:41:52 +03:00

@iriakhov commented on GitHub (Mar 5, 2015):

In addition, I can access the bucket through S3 Browser, but I had to add our bucket as an external bucket. I suspect there is a permission issue with the ID: s3:ListBucket might need to be allowed for the Amazon ID.
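
That hunch is plausible: s3fs's startup check probes the bucket root, so the key needs s3:ListBucket on the bucket ARN itself (not just on the objects under it). A sketch of a minimal read-only policy, with a placeholder bucket name, written to a temp file and syntax-checked:

```shell
# Sketch of a minimal read-only IAM policy for s3fs ("example-bucket" is a
# placeholder). s3:ListBucket applies to the bucket ARN itself; s3:GetObject
# applies to the objects under it ("/*").
cat > /tmp/s3fs-readonly-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::example-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
EOF

# Basic syntax check of the generated JSON:
python3 -m json.tool /tmp/s3fs-readonly-policy.json > /dev/null && echo "policy JSON is valid"
```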
