[GH-ISSUE #297] Can't mount s3fs on boot with Ubuntu 14.04 (used _netdev) #154

Closed
opened 2026-03-04 01:42:41 +03:00 by kerem · 16 comments
Owner

Originally created by @emschwar on GitHub (Nov 21, 2015).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/297

On Ubuntu 14.04 I tried setting s3fs to mount on boot in fstab with the following line (or near enough):

s3fs#my-bucket /opt/path/to/s3 fuse _netdev,allow_other,iam_role=some-iam-role,dbglevel=dbg 0 0

I particularly made sure to use _netdev, since I have seen in other issues that this fixes mount issues at boot. It did not here. Whenever I access it after rebooting, I get told ls: cannot access /opt/path/to/s3: Transport endpoint is not connected

I looked in /var/log/syslog, and found these lines that seem relevant:

Nov 21 00:01:20 localhost s3fs[404]: ### retrying...
Nov 21 00:01:20 localhost s3fs[404]:       Retry request. [type=-1][url=http://169.254.169.254/latest/meta-data/iam/security-credentials/some-iam-role][path=]
Nov 21 00:01:20 localhost s3fs[404]: Failed to reset handle and internal data for retrying.
Nov 21 00:01:20 localhost s3fs[404]: s3fs.cpp:s3fs_check_service(3657): Failed to check IAM role name(some-iam-role).

I suspect the Failed to reset handle and internal data for retrying line is relevant, but I'm not sure what it means, or how to fix it.

If I then remount (sudo umount /opt/path/to/s3; sudo mount /opt/path/to/s3), it works every time.

kerem closed this issue 2026-03-04 01:42:41 +03:00

@elmobp commented on GitHub (Nov 22, 2015):

Same issue, but our workaround for now is

umount /opt/path/to/s3; sleep 1; mount -a

in /etc/rc.local. Looking at it, the network is not fully ready at that point!
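The rc.local workaround above amounts to waiting for the network and retrying the mount. A generic retry helper (hypothetical; the attempt count and one-second sleep are assumptions) makes that a little more robust than a single fixed `sleep 1`:

```shell
#!/bin/sh
# retry: run a command up to N times, sleeping 1 second between attempts.
# Usage: retry <attempts> <command> [args...]
retry() {
    attempts=$1
    shift
    i=1
    while [ "$i" -le "$attempts" ]; do
        "$@" && return 0        # success: stop retrying
        i=$((i + 1))
        sleep 1
    done
    return 1                    # all attempts failed
}

# In /etc/rc.local this might be used as (paths taken from the issue):
# umount /opt/path/to/s3 2>/dev/null
# retry 5 mount /opt/path/to/s3
```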


@ggtakec commented on GitHub (Nov 24, 2015):

@emschwar It seems that the IAM role name you specified could not be found.
At boot, the system might not yet have access to http://169.254.169.254/latest/meta-data/iam/security-credentials/some-iam-role.
So we need to check whether s3fs is started after the network is up, and why it could not access the IAM role metadata URL.
I will try to check this.
Regards,


@MattFenelon commented on GitHub (Jan 18, 2016):

I've come across the same problem. I've tried running with the bootwait and _netdev options, with no success.

s3fs is running on an Ubuntu 14.04 instance on EC2.

Jan 18 09:32:58 ip-172-31-26-178 acpid: starting up with netlink and the input layer
Jan 18 09:32:58 ip-172-31-26-178 acpid: 1 rule loaded
Jan 18 09:32:58 ip-172-31-26-178 acpid: waiting for events: event logging is off
Jan 18 09:32:58 ip-172-31-26-178 cron[907]: (CRON) INFO (pidfile fd = 3)
Jan 18 09:32:58 ip-172-31-26-178 cron[962]: (CRON) STARTUP (fork ok)
Jan 18 09:32:58 ip-172-31-26-178 cron[962]: (CRON) INFO (Running @reboot jobs)
Jan 18 09:32:58 ip-172-31-26-178 /usr/sbin/irqbalance: Balancing is ineffective on systems with a single cache domain.  Shutting down
Jan 18 09:32:58 ip-172-31-26-178 pollinate[1030]: system was previously seeded at [2016-01-18 09:31:25.335924515 +0000]
Jan 18 09:32:58 ip-172-31-26-178 pollinate[1032]: To re-seed this system again, use the -r|--reseed option
Jan 18 09:32:58 ip-172-31-26-178 kernel: [   11.439299] init: plymouth-upstart-bridge main process (231) killed by TERM signal
Jan 18 09:32:59 ip-172-31-26-178 s3fs[479]: ### retrying...
Jan 18 09:32:59 ip-172-31-26-178 s3fs[479]:       Retry request. [type=-1][url=http://169.254.169.254/latest/meta-data/iam/security-credentials/some-iam-role][path=]
Jan 18 09:32:59 ip-172-31-26-178 s3fs[479]: Failed to reset handle and internal data for retrying.
Jan 18 09:32:59 ip-172-31-26-178 s3fs[479]: s3fs.cpp:s3fs_check_service(3677): Failed to check IAM role name(some-iam-role).
Jan 18 09:32:59 ip-172-31-26-178 s3fs[479]: s3fs.cpp:s3fs_exit_fuseloop(3304): Exiting FUSE event loop due to errors
Jan 18 09:32:59 ip-172-31-26-178 s3fs[479]: destroy
Jan 18 09:33:03 ip-172-31-26-178 ntpdate[665]: step time server 91.189.89.199 offset 0.601446 sec
Jan 18 09:33:10 ip-172-31-26-178 ntpdate[1129]: adjust time server 91.189.89.199 offset -0.000118 sec
Jan 18 09:57:04 ip-172-31-26-178 dhclient: DHCPREQUEST of 172.31.26.178 on eth0 to 172.31.16.1 port 67 (xid=0x7f980b74)
Jan 18 09:57:04 ip-172-31-26-178 dhclient: DHCPACK of 172.31.26.178 from 172.31.16.1
Jan 18 09:57:04 ip-172-31-26-178 dhclient: bound to 172.31.26.178 -- renewal in 1718 seconds.

@ggtakec commented on GitHub (Jan 18, 2016):

Could you fetch http://169.254.169.254/latest/meta-data/iam/security-credentials/some-iam-role manually with the curl command in a terminal after the system has booted?
If that succeeds, it may simply be that the IAM role is not reachable at boot time.
Please check it.

Thanks in advance for your assistance.


@MattFenelon commented on GitHub (Jan 18, 2016):

@ggtakec After boot it works fine, I can curl the metadata URL, and umount; mount -a mounts the share correctly.

I'm using cloud-init to initialise the instance. In cloud-init.log I see cloud-init accessing the metadata service successfully at 14:06:59, although not for the IAM path; in syslog I see s3fs fail to access the IAM metadata URL at 14:07:01. So perhaps it's to do with the IAM bit rather than the whole metadata service?


@ggtakec commented on GitHub (Jan 21, 2016):

@MattFenelon Thanks for your response.
If the IAM metadata is not reachable at boot, s3fs (with the iam_role option) cannot be started from fstab at boot.
(I think we do not want to start it manually from an rc script instead of fstab.)
So I will examine whether there is a solution for this.
Regards,


@ggtakec commented on GitHub (Jan 24, 2016):

I found a bug in the retry logic for connecting to the metadata endpoint (169.254.169.254) to fetch IAM role data.
This issue had two causes.

  1. A bug in retrying to fetch IAM role data.
    If s3fs failed to connect to the metadata endpoint at boot, it did not retry the connection.
    (The default value of the retries option is 2, but no retries were made.)
    This bug is fixed in #338.
    In most cases, the first attempt to connect to the metadata endpoint failed.

  2. The format of the s3fs line in fstab.
    On Ubuntu you need to use the following format in fstab:
    <bucket name> <mount point> fuse.s3fs _netdev,...... 0 0

If you use the following format instead, s3fs cannot run, because it is called with the wrong bucket name ("s3fs"):
s3fs#<bucket name> <mount point> fuse _netdev,...... 0 0

So you need to change the s3fs line in your fstab to:
my-bucket /opt/path/to/s3 fuse.s3fs _netdev,allow_other,iam_role=some-iam-role,dbglevel=dbg 0 0

Please try the latest code in the master branch.
If it still does not work, please set the "retries" option to three or more.
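Putting both fixes together, a working 14.04 fstab entry for the original poster's setup might look like the following (the retries=5 value is an assumption; raise it if the network comes up slowly on your instance):

```
my-bucket /opt/path/to/s3 fuse.s3fs _netdev,allow_other,iam_role=some-iam-role,retries=5,dbglevel=dbg 0 0
```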

Regards,


@MattFenelon commented on GitHub (Jan 25, 2016):

Thank you for the bug fix. The retries seem to be working now; I can see the mounts connecting at boot.

  2. I've tried changing the fstab format before (from s3fs# to fuse.s3fs); it resulted in my server becoming unbootable. Have you tested that setting?

@ggtakec commented on GitHub (Jan 31, 2016):

@MattFenelon Yes, I tested it on my Ubuntu 14.04 instance on EC2.
I built the same Ubuntu on EC2 again and tested fstab with the fuse.s3fs format; after that it worked with no problem.

However, as before, I could not mount on boot with an fstab entry in the "s3fs#bucket" form.
I examined this problem and found two strange things.
One is that s3fs is called more than once by mountall.
The first call uses the correct bucket name, but a later call uses the wrong bucket name "s3fs".
Thus the second attempt fails because the bucket name is "s3fs".
The other is that the first call fails with a CURLE_COULDNT_CONNECT error, despite the _netdev option being specified.

So my EC2 test environment succeeds in mounting only with the "fuse.s3fs" fstab format; the "s3fs#bucket" case fails.

If you can, please let me know the difference between your results and the format of your fstab.


@MattFenelon commented on GitHub (Feb 2, 2016):

@ggtakec Thanks again for your help. The settings I'm using are equivalent to:

_netdev,dbglevel=info,url=https://s3.amazonaws.com,iam_role=some-iam-role,use_sse,allow_other,mp_umask=0077,enable_content_md5,retries=10,enable_noobj_cache,stat_cache_expire=600,uid=1000,gid=1000

This is a sample entry from my fstab (some sensitive details changed):

s3fs#some-bucket-name:/some-bucket-path /var/sftp/some-folder fuse _netdev,dbglevel=info,url=https://s3.amazonaws.com,iam_role=some-iam-role,use_sse,allow_other,mp_umask=0077,enable_content_md5,retries=10,enable_noobj_cache,stat_cache_expire=600,uid=1000,gid=1000 0 0

/var/sftp/some-folder is owned by the user.

The box is on version: 'Ubuntu 14.04.3 LTS'

$ mountall --version
mountall 2.49

$ mount --version
mount from util-linux 2.20.1 (with libblkid and selinux support)

$ curl --version
curl 7.35.0 (x86_64-pc-linux-gnu) libcurl/7.35.0 OpenSSL/1.0.1f zlib/1.2.8 libidn/1.28 librtmp/2.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtmp rtsp smtp smtps telnet tftp
Features: AsynchDNS GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP

@ggtakec commented on GitHub (Feb 6, 2016):

@MattFenelon
My test environment (the fstab format and the versions you are using) is the same as yours.

I have tested the following fstab formats; they are all very simple.

<bucket>        /mnt/s3 fuse.s3fs   _netdev,allow_other             0 0
s3fs#<bucket>       /mnt/s3 fuse        _netdev,allow_other             0 0
s3fs#<bucket>       /mnt/s3 fuse        _netdev,allow_other,retries=5           0 0
s3fs#<bucket>       /mnt/s3 fuse        _netdev,bootwait,allow_other,retries=5      0 0
s3fs#<bucket>:<path>    /mnt/s3 fuse        _netdev,allow_other,retries=5           0 0
s3fs#<bucket>       /mnt/s3 fuse        _netdev,allow_other,nodnscache,retries=5    0 0

The result is that only the "mount.TYPE" format succeeds; the other entries fail.

All variants of "s3fs#bucket" fail because s3fs cannot resolve the hostname.
(I'm sorry: in my earlier reply I wrote CURLE_COULDNT_CONNECT where I meant CURLE_COULDNT_RESOLVE_HOST.)
The reason is probably that s3fs is called before the network is up.

It seems there is a timing difference between "mount.TYPE" and "s3fs#bucket".
I could not find any option pattern (_netdev, bootwait, etc.) that succeeds in resolving the hostname.

If the "mount.TYPE" format also fails for you, your only option is to start s3fs from an rc script.

Since I do not know mountall and Ubuntu in depth, I would welcome advice from someone familiar with them.
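For the rc-script route, on Upstart-based Ubuntu 14.04 one possible sketch (a hypothetical job file; the job name and mount point are assumptions) is to trigger the mount only once the networking jobs signal readiness:

```
# /etc/init/s3fs-mount.conf (hypothetical job name)
description "mount s3fs bucket once the network is up"

# static-network-up is emitted by Ubuntu's upstart networking jobs
start on static-network-up

task
exec mount /mnt/s3
```

This sidesteps mountall's early call entirely, at the cost of maintaining the mount outside fstab's boot-time processing.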

Regards,


@evil-c commented on GitHub (Jul 21, 2016):

Hi,

I think I'm having a similar, perhaps the same issue. When I run the s3fs command, everything is fine. On boot however, nothing gets mounted. I'm using ansible to mount the bucket, etc.

My /etc/fstab looks like:

<bucket_name> <mount_point> fuse.s3fs _netdev,allow_other,dbglevel=dbg,curldbg 0 0

Here is the output in /var/log/syslog with all the debug levels set:

Jul 21 04:06:47 ip-10-0-3-5 ansible-git: Invoked with executable=None force=False refspec=None reference=None dest=s3fs-fuse accept_hostkey=False clone=True verify_commit=False update=True ssh_opts=None repo=https://github.com/s3fs-fuse/s3fs-fuse.git depth=None version=HEAD bare=False recursive=True remote=origin key_file=None track_submodules=False
Jul 21 04:06:50 ip-10-0-3-5 ansible-command: Invoked with warn=True executable=None _uses_shell=False _raw_params=./autogen.sh removes=None creates=None chdir=s3fs-fuse
Jul 21 04:06:52 ip-10-0-3-5 ansible-command: Invoked with warn=True executable=None _uses_shell=False _raw_params=./configure --prefix=/usr removes=None creates=None chdir=s3fs-fuse
Jul 21 04:06:54 ip-10-0-3-5 ansible-command: Invoked with warn=True executable=None _uses_shell=False _raw_params=make removes=None creates=None chdir=s3fs-fuse
Jul 21 04:07:14 ip-10-0-3-5 ansible-command: Invoked with warn=True executable=None _uses_shell=False _raw_params=make install removes=None creates=None chdir=s3fs-fuse
Jul 21 04:07:14 ip-10-0-3-5 ansible-command: Invoked with warn=True executable=None chdir=None _raw_params=aws s3 --region us-east-1 cp s3://casper-ops-config/<bucket_name>_fuse.config /etc/passwd-s3fs removes=None creates=None _uses_shell=False
Jul 21 04:07:15 ip-10-0-3-5 ansible-file: Invoked with directory_mode=None force=False remote_src=None path=/etc/passwd-s3fs owner=None follow=False group=None state=file content=NOT_LOGGING_PARAMETER serole=None diff_peek=None setype=None dest=/etc/passwd-s3fs selevel=None original_basename=None regexp=None validate=None src=None seuser=None recurse=False delimiter=None mode=256 backup=None
Jul 21 04:07:15 ip-10-0-3-5 ansible-file: Invoked with directory_mode=None force=False remote_src=None path=data_dir owner=chaseftp follow=False group=chaseftp state=directory content=NOT_LOGGING_PARAMETER serole=None diff_peek=None setype=None selevel=None original_basename=None regexp=None validate=None src=None seuser=None recurse=False delimiter=None mode=448 backup=None
Jul 21 04:07:15 ip-10-0-3-5 ansible-mount: Invoked with src=<bucket_name> name=<mount_point> dump=0 fstab=/etc/fstab passno=0 fstype=fuse.s3fs state=mounted opts=_netdev,allow_other,dbglevel=dbg,curldbg
Jul 21 04:07:15 ip-10-0-3-5 s3fs[15411]: s3fs.cpp:set_s3fs_log_level(253): change debug level from [CRT] to [DBG]
Jul 21 04:07:15 ip-10-0-3-5 s3fs[15411]:     PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40755)
Jul 21 04:07:15 ip-10-0-3-5 s3fs[15414]: s3fs.cpp:s3fs_init(3358): init v1.80(commit:b76fc35) with OpenSSL
Jul 21 04:07:15 ip-10-0-3-5 s3fs[15414]: check services.
Jul 21 04:07:15 ip-10-0-3-5 s3fs[15414]:       check a bucket.
Jul 21 04:07:15 ip-10-0-3-5 s3fs[15414]: curl.cpp:GetHandler(272): Get handler from pool: 31
Jul 21 04:07:15 ip-10-0-3-5 s3fs[15414]:       URL is http://s3.amazonaws.com/<bucket_name>/
Jul 21 04:07:15 ip-10-0-3-5 s3fs[15414]:       URL changed is http://<bucket_name>.s3.amazonaws.com/
Jul 21 04:07:15 ip-10-0-3-5 s3fs[15414]:       computing signature [GET] [/] [] []
Jul 21 04:07:15 ip-10-0-3-5 s3fs[15414]:       url is http://s3.amazonaws.com
Jul 21 04:07:15 ip-10-0-3-5 s3fs[15414]: curl.cpp:RequestPerform(1893): connecting to URL http://<bucket_name>.s3.amazonaws.com/
Jul 21 04:07:15 ip-10-0-3-5 s3fs[15414]: * Hostname was NOT found in DNS cache
Jul 21 04:07:15 ip-10-0-3-5 s3fs[15414]: *   Trying <ip_address>...
Jul 21 04:07:15 ip-10-0-3-5 s3fs[15414]: * Connected to <bucket_name>.s3.amazonaws.com (<ip_address>) port 80 (#0)
Jul 21 04:07:15 ip-10-0-3-5 s3fs[15414]: > GET / HTTP/1.1#015#012User-Agent: s3fs/1.80 (commit hash b76fc35; OpenSSL)#015#012Accept: */*#015#012Authorization: AWS4-HMAC-SHA256 Credential=<access_key>/20160721/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=546233920480858732ba7dbe7b7d69eb2d925b02f8692aac95d798a8365fa40d#015#012host: <bucket_name>.s3.amazonaws.com#015#012x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855#015#012x-amz-date: 20160721T040715Z#015#012#015
Jul 21 04:07:15 ip-10-0-3-5 s3fs[15414]: < HTTP/1.1 200 OK#015
Jul 21 04:07:15 ip-10-0-3-5 s3fs[15414]: < x-amz-id-2: MLuxO9ytNe3ilDvODFg+CzfbH4wZG64z5xwM56V7RZTCJgaIRZER0iAcjyhvWNYMQQzGuk2A4hc=#015
Jul 21 04:07:15 ip-10-0-3-5 s3fs[15414]: < x-amz-request-id: 131D64D19A662061#015
Jul 21 04:07:15 ip-10-0-3-5 s3fs[15414]: < Date: Thu, 21 Jul 2016 04:07:16 GMT#015
Jul 21 04:07:15 ip-10-0-3-5 s3fs[15414]: < x-amz-bucket-region: us-east-1#015
Jul 21 04:07:15 ip-10-0-3-5 s3fs[15414]: < Content-Type: application/xml#015
Jul 21 04:07:15 ip-10-0-3-5 s3fs[15414]: < Transfer-Encoding: chunked#015
Jul 21 04:07:15 ip-10-0-3-5 s3fs[15414]: * Server AmazonS3 is not blacklisted
Jul 21 04:07:15 ip-10-0-3-5 s3fs[15414]: < Server: AmazonS3#015
Jul 21 04:07:15 ip-10-0-3-5 s3fs[15414]: < #015
Jul 21 04:07:15 ip-10-0-3-5 s3fs[15414]: * Connection #0 to host <bucket_name>.s3.amazonaws.com left intact
Jul 21 04:07:15 ip-10-0-3-5 s3fs[15414]:       HTTP response code 200
Jul 21 04:07:15 ip-10-0-3-5 s3fs[15414]: curl.cpp:ReturnHandler(295): Return handler to pool: 31

Thanks in advance for any insight you can provide.


@ggtakec commented on GitHub (Jul 21, 2016):

@evil-c
You can try specifying the "retries" option; I think it will help solve your problem.
I hope so, and thanks for your assistance.


@evil-c commented on GitHub (Jul 21, 2016):

@ggtakec
worked! Thanks for the quick reply!


@idlecool commented on GitHub (Oct 4, 2018):

adding _netdev as one of fstab options worked! thank you.


@ggtakec commented on GitHub (Mar 30, 2019):

We kept this issue open for a long time.
I will close this, but if the problem persists, please reopen or post a new issue.
