[GH-ISSUE #57] s3fs only works properly with -f option? #33

Closed
opened 2026-03-04 01:41:21 +03:00 by kerem · 14 comments
Owner

Originally created by @timbostop on GitHub (Sep 24, 2014).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/57

Hi

I've successfully setup s3fs working on an EC2 instance and connecting to S3 via an IAM role (using the iam_role option).

Everything works fine when I invoke s3fs using the -f option - and I see the log of what's going on in the terminal window.

However if I try to run this without the -f option or to add it to /etc/fstab so that it runs at boot and mounts automatically it doesn't work.
The command returns fine, but when I try to access the local directory containing the mount, I get:

s3fs: unable to access MOUNTPOINT /mnt/batches: Transport endpoint is not connected

I can get it working again by running umount and then s3fs with -f option again.

Any clues as to what I'm doing wrong?

kerem closed this issue 2026-03-04 01:41:21 +03:00

@ggtakec commented on GitHub (Oct 13, 2014):

Please check /etc/passwd-s3fs, $HOME/.passwd-s3fs, or the "-o passwd_file" option.
Perhaps s3fs used the wrong passwd file.
If it used the correct passwd file, you can see logs by running with the "-d" option.

Thanks
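
For reference, a minimal sketch of the passwd-file setup being described (the key pair, bucket name, and mount point are placeholders, not values from this thread):

```shell
# Create the credentials file s3fs looks for and verify its permissions.
# "ACCESS_KEY_ID:SECRET_ACCESS_KEY" is a placeholder, not a real key pair.
PASSWD_FILE="${HOME}/.passwd-s3fs"
echo "ACCESS_KEY_ID:SECRET_ACCESS_KEY" > "$PASSWD_FILE"
chmod 600 "$PASSWD_FILE"        # s3fs rejects passwd files readable by others
stat -c '%a' "$PASSWD_FILE"     # prints: 600

# Then mount with debug output to confirm the right file is picked up
# (mybucket and /mnt/batches are placeholders):
#   s3fs mybucket /mnt/batches -o passwd_file="$PASSWD_FILE" -d
```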


@jriehl commented on GitHub (Oct 14, 2014):

I'm pretty sure this is caused by a race condition between the network manager and the mounter. I've been using the following fstab entry (modulo the actual bucket name):

s3fs#MYBUCKET:  /test  fuse  _netdev,-d,-d,use_cache=/tmp,allow_other,url=https://s3.amazonaws.com/,rw  0  0

I think this is harder to reproduce when using the "-d" option, since the debug output slows things down a little, but after five reboots I got this to fail:

Last login: Tue Oct 14 21:05:22 2014 from ...
bubba@nitrcce:~$ ls /test/
bahr  foo
bubba@nitrcce:~$ sudo reboot NOW   # Fourth time's the charm!
...
Last login: Tue Oct 14 21:07:25 2014 from ...
bubba@nitrcce:~$ ls /test/
ls: cannot access /test/: Transport endpoint is not connected

Here are the syslog entries I think are appropriate to document this:

Oct 14 21:09:07 nitrcce s3fs: ### retrying...
Oct 14 21:09:07 nitrcce s3fs: Retry request. [type=5][url=https://MYBUCKET.s3.amazonaws.com/][path=/]
Oct 14 21:09:07 nitrcce s3fs: ### CURLE_COULDNT_RESOLVE_HOST
...
Oct 14 21:09:07 nitrcce NetworkManager[1001]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Configure Commit) scheduled...
...
Oct 14 21:09:08 nitrcce NetworkManager[1001]: <info> DNS: starting dnsmasq...
...
Oct 14 21:09:08 nitrcce NetworkManager[1001]: <info> (eth0): writing resolv.conf to /sbin/resolvconf
Oct 14 21:09:09 nitrcce dnsmasq[1376]: setting upstream servers from DBus
Oct 14 21:09:09 nitrcce dnsmasq[1376]: using nameserver 172.16.0.23#53
...
Oct 14 21:09:09 nitrcce NetworkManager[1001]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Commit) complete.
...
Oct 14 21:09:09 nitrcce s3fs: ### retrying...
Oct 14 21:09:09 nitrcce s3fs: Retry request. [type=5][url=https://MYBUCKET.s3.amazonaws.com/][path=/]
Oct 14 21:09:09 nitrcce s3fs: ### giving up

I don't know why it gives up when retrying after DNS comes up, but the CURL error should sum things up.

Regards,
-Jon
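
If the race Jon describes is the cause, one possible workaround (a sketch only; `wait_for_dns` is a hypothetical helper, not part of s3fs) is to delay the mount until the S3 endpoint resolves:

```shell
# wait_for_dns HOST TIMEOUT_SECONDS: poll until HOST resolves or time runs out.
# Returns 0 once DNS works, 1 otherwise.
wait_for_dns() {
    host="$1"
    timeout="${2:-30}"
    i=0
    while [ "$i" -lt "$timeout" ]; do
        if getent hosts "$host" > /dev/null 2>&1; then
            return 0
        fi
        sleep 1
        i=$((i + 1))
    done
    return 1
}

# Usage from a boot script (e.g. rc.local), with /test as in the fstab entry above:
#   wait_for_dns s3.amazonaws.com 30 && mount /test
```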


@ggtakec commented on GitHub (Oct 15, 2014):

Hi, Jon

It seems that s3fs gave up before resolving the FQDN.
So if you can, please try setting the "retries" option to more than 3 (please see the man page).
I hope it solves this problem.

Regards,
Takeshi


@jriehl commented on GitHub (Dec 10, 2014):

Hi Takeshi,

I finally had some time to look at this again.

I've tried setting retries=5; the current fstab entry now appears as follows:

s3fs#MYBUCKET:     /test   fuse    _netdev,-d,-d,use_cache=/tmp,allow_other,url=https://s3.amazonaws.com/,retries=5,rw        0       0

When I reboot, I do not see any output to the syslog from S3FS, and now it seems the system isn't attempting to mount the S3FS fstab entries at boot.

Are you aware of any recent changes in Ubuntu 12.04 that would change mounting behavior at boot time?

Just to confirm, the fstab entry does work; when I use sudo mount -a, the fstab entry for the bucket does mount (and dumps a lot of debug info to the console).

Thanks,
-Jon


@jriehl commented on GitHub (Dec 10, 2014):

As a side note: I tried the above with 1.77, 1.78, and building from master. All S3FS versions I tested had the same behavior, which again led me to think something has changed in Ubuntu 12.04.5.


@ggtakec commented on GitHub (Dec 23, 2014):

Hi, jriehl
(Sorry for my late reply.)

I tried mounting from fstab at reboot on Ubuntu 12.04.4 LTS, and it succeeded.
There is the following one-line log entry in syslog:
s3fs: HTTP response code 200

But if s3fs could not connect to the servers at boot, there was no log line.
Thus I think we need to set up an rsyslog configuration for this case, but I have not yet found how to do it.
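
One plausible rsyslog fragment for this (an assumption, not something verified in this thread; the file path is arbitrary) would route s3fs messages to a dedicated file:

```
# /etc/rsyslog.d/30-s3fs.conf (assumed path)
# Send everything logged by the "s3fs" program to its own file.
:programname, isequal, "s3fs"    /var/log/s3fs.log
& stop
# (on older rsyslog versions, use "& ~" instead of "& stop")
```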


@caztial commented on GitHub (Jan 26, 2015):

mount shows:
s3fs on /var/www/html/skwirk/uploadFiles type fuse.s3fs (rw,allow_other)
s3fs on /var/www/html/media/system/images/avatar/user_images type fuse.s3fs (rw,allow_other)

df -h shows:
df: `/var/www/html/media/system/images/avatar/user_images': Transport endpoint is not connected
Filesystem Size Used Avail Use% Mounted on
/dev/xvde1 99G 3.6G 90G 4% /
tmpfs 3.7G 0 3.7G 0% /dev/shm
s3fs 256T 0 256T 0% /var/www/html/skwirk/uploadFiles

uploadFiles is connecting, but user_images is not.


@caztial commented on GitHub (Jan 27, 2015):

It looks like you can't mount too deep a folder within a bucket:
s3fs -f skwirks3au:/media/system/images -o nonempty -o allow_other -o default_acl=public-read /var/www/html/media/system/images/avatar/userimg
set_moutpoint_attribute(3291): PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40755)
s3fs_init(2595): init
s3fs_check_service(2894): check services.
CheckBucket(2228): check a bucket.
RequestPerform(1595): ### CURLE_HTTP_RETURNED_ERROR
RequestPerform(1600): HTTP response code =404
s3fs: Failed to access bucket.

s3fs -f skwirks3au:/media/system -o nonempty -o allow_other -o default_acl=public-read /var/www/html/media/system/images/avatar/userimg
set_moutpoint_attribute(3291): PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40755)
s3fs_init(2595): init
s3fs_check_service(2894): check services.
CheckBucket(2228): check a bucket.
RequestPerform(1483): HTTP response code 200


@ggtakec commented on GitHub (Mar 9, 2015):

@caztial Thanks for your report.
I found some bugs related to setting a deep folder as the mount point.
We will need to check all of the code related to signatures (signature v4 too), so please wait a while.

Regards,


@nickstinger commented on GitHub (Sep 14, 2015):

Having the same problem here on Ubuntu 15.04.
I'm trying to mount the root of a bucket, and getting the same error.

s3fs: ### retrying...
s3fs: Retry request. [type=5][url=http://xxx.s3.amazonaws.com/][path=/]
s3fs: ### CURLE_COULDNT_RESOLVE_HOST

The url is correct (although obfuscated).
I wonder if the mount is occurring before the network is up.
Mounting from shell after login works fine.

Amazon Simple Storage Service File System V1.79(commit:489f9ed) with OpenSSL


@RobbKistler commented on GitHub (Sep 14, 2015):

You could try adding _netdev to the mount options, but I'm not sure that works for later versions of Ubuntu. s3fs-fuse retries a few times, but if it's a wi-fi network that doesn't come up until you log in....
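
For reference, a minimal fstab entry with _netdev might look like this (MYBUCKET and the mount point are placeholders, and the option set is a sketch, not a recommendation from this thread):

```
# /etc/fstab
s3fs#MYBUCKET  /mnt/mybucket  fuse  _netdev,allow_other,retries=5  0  0
```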


@nickstinger commented on GitHub (Sep 15, 2015):

I am sorry, I did not know about the _netdev option. I tried it after considering jriehl and RobbKistler's replies, and the bucket started mounting properly. My post is now off topic, but so as not to be a waste of mental effort, I will see if I can submit this information to the Wiki.


@ggtakec commented on GitHub (Sep 28, 2015):

@nickstinger I'm sorry for the late reply; I have added an FAQ entry about _netdev.
Sorry for taking your time on this issue.


@anskrish commented on GitHub (Mar 12, 2019):

These are updated steps to mount an S3 bucket on Red Hat 7:

  • yum update -y
  • sudo yum install automake fuse fuse-devel gcc-c++ git libcurl-devel libxml2-devel make openssl-devel
  • git clone https://github.com/s3fs-fuse/s3fs-fuse.git
  • cd s3fs-fuse
  • ./autogen.sh
  • ./configure
  • make
  • sudo make install
  • echo access-key:secret-key > ~/.passwd-s3fs
  • chmod 600 ~/.passwd-s3fs
  • mkdir /test
  • vim ~/.bash_profile
    ## add the line below
    PATH=$PATH:$HOME/bin:/usr/local/bin
  • source ~/.bash_profile
  • s3fs bucketname /test -o passwd_file=~/.passwd-s3fs
  • df -h
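
As a quick sanity check for the credentials step above, a small helper (illustrative only; `check_passwd_file` is not part of s3fs) can verify the passwd file before mounting:

```shell
# check_passwd_file FILE: verify the file exists, has 600 permissions,
# and contains an ACCESS_KEY:SECRET_KEY line, as the steps above set up.
check_passwd_file() {
    f="$1"
    [ -f "$f" ] || { echo "missing: $f" >&2; return 1; }
    [ "$(stat -c '%a' "$f")" = "600" ] || { echo "bad permissions on $f" >&2; return 1; }
    grep -Eq '^[^:]+:[^:]+$' "$f" || { echo "bad format in $f" >&2; return 1; }
    return 0
}

# Example:
#   check_passwd_file ~/.passwd-s3fs && s3fs bucketname /test -o passwd_file=~/.passwd-s3fs
```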