mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 21:35:58 +03:00
[GH-ISSUE #297] Can't mount s3fs on boot with Ubuntu 14.04 (used _netdev) #154
Originally created by @emschwar on GitHub (Nov 21, 2015).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/297
On Ubuntu 14.04 I tried setting s3fs to mount on boot in fstab with the following line (or near enough):
s3fs#my-bucket /opt/path/to/s3 fuse _netdev,allow_other,iam_role=some-iam-role,dbglevel=dbg 0 0

I particularly made sure to use `_netdev`, since I have seen in other issues that this fixes mount issues at boot. It did not here. Whenever I access the mount point after rebooting, I get told:

ls: cannot access /opt/path/to/s3: Transport endpoint is not connected

I looked in /var/log/syslog and found these lines that seem relevant:
I suspect the "Failed to reset handle and internal data for retrying" line is relevant, but I'm not sure what it means or how to fix it. If I then remount (`sudo umount /opt/path/to/s3; sudo mount /opt/path/to/s3`), it works every time.

@elmobp commented on GitHub (Nov 22, 2015):
Same issue here; our workaround for now is to mount from /etc/rc.local. Looking at it, the network is not fully up at that point!
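The rc.local snippet itself did not survive the mirror. A minimal sketch of such a workaround, assuming an EC2 instance and a fuse.s3fs fstab entry (the wait loop and timeouts here are illustrative, not the poster's actual script), might look like:

```shell
#!/bin/sh -e
# /etc/rc.local runs late in boot, when networking is usually up.
# Wait (up to ~60s) for the EC2 metadata service to answer, then mount
# any not-yet-mounted fuse.s3fs entries from fstab.
for i in $(seq 1 30); do
    if curl -sf --max-time 2 http://169.254.169.254/latest/meta-data/ > /dev/null; then
        break
    fi
    sleep 1
done
mount -a -t fuse.s3fs || true
exit 0
```

With `-e` set, `|| true` keeps a transient mount failure from aborting the rest of rc.local.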
@ggtakec commented on GitHub (Nov 24, 2015):
@emschwar It seems that the "IAM role name" you specified was not found.
At boot, the system might not yet have access to http://169.254.169.254/latest/meta-data/iam/security-credentials/some-iam-role.
So we need to check that s3fs runs after the network is up, and why it could not access the IAM role metadata URL.
I will try to check this.
Regards,
@MattFenelon commented on GitHub (Jan 18, 2016):
I've come across the same problem. I've tried running with the `bootwait` and `_netdev` options, with no success. s3fs is running on an Ubuntu 14.04 instance on EC2.
@ggtakec commented on GitHub (Jan 18, 2016):
Could you try getting http://169.254.169.254/latest/meta-data/iam/security-credentials/some-iam-role with the curl command manually in a terminal after the system has booted?
If that succeeds, the likely problem is that the IAM role cannot be fetched at boot time.
Please check it.
Thanks in advance for your assistance.
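The manual check ggtakec asks for can be scripted as follows. This is a sketch: `some-iam-role` is a placeholder for the role named in your `iam_role` option, and `build_cred_url`/`check_role` are hypothetical helper names, not part of s3fs.

```shell
#!/bin/sh
# Print the instance-metadata credentials URL for a given IAM role name --
# the same URL s3fs fetches when the iam_role option is set.
build_cred_url() {
    echo "http://169.254.169.254/latest/meta-data/iam/security-credentials/$1"
}

# Succeed only if the metadata service answers for that role.
# --max-time keeps curl from hanging when the network is not yet up.
check_role() {
    curl -sf --max-time 5 "$(build_cred_url "$1")" > /dev/null
}

# Usage on an EC2 instance after boot:
#   check_role some-iam-role && echo reachable || echo unreachable
```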
@MattFenelon commented on GitHub (Jan 18, 2016):
@ggtakec After boot it works fine: I can `curl` the metadata URL, and `umount; mount -a` mounts the share correctly.

I'm using cloud-init to initialise the instance. In `cloud-init.log` I see cloud-init accessing the metadata service successfully at 14:06:59, although not for the IAM path; in `syslog` I see s3fs fail to access the IAM metadata URL at 14:07:01. So perhaps it's to do with the IAM bit rather than the whole metadata service?

@ggtakec commented on GitHub (Jan 21, 2016):
@MattFenelon Thanks for your response.
If we cannot get the IAM metadata at boot, we cannot call s3fs (with the iam_role option) from fstab at boot.
(I think we do not want to have to start it manually from an rc script instead of fstab.)
So I will examine whether there is a solution for this.
Regards,
@ggtakec commented on GitHub (Jan 24, 2016):
I found a bug in the retry logic for connecting to the metadata service (169.254.169.254) to get the IAM role data.
This issue had two causes.

1. A bug in retrying to get the IAM role data.
If s3fs failed to connect to the metadata service at boot, it could not retry the connection.
(The default value of the retries option is 2, but no retries were made.)
This bug is fixed in #338.
In most cases I saw the connection to the metadata service fail once.
2. The fstab line format for s3fs.
On Ubuntu, you need to use the following format in fstab:

fuse.s3fs _netdev,...... 0 0

If you write the following format in fstab, s3fs cannot run because it is called with the wrong bucket name ("s3fs"):

s3fs# fuse _netdev,...... 0 0

So you need to change the s3fs line in fstab to the following:

my-bucket /opt/path/to/s3 fuse.s3fs _netdev,allow_other,iam_role=some-iam-role,dbglevel=dbg 0 0

Please try the latest code in the master branch.
If it does not work, please set the "retries" option to three or more.
Regards,
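Putting the corrected format and the retries suggestion together, a single fstab entry (reusing the bucket, mount point, and role name from the example above; `retries=5` is an illustrative value, not one prescribed in the thread) would look like:

```
my-bucket /opt/path/to/s3 fuse.s3fs _netdev,allow_other,iam_role=some-iam-role,retries=5,dbglevel=dbg 0 0
```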
@MattFenelon commented on GitHub (Jan 25, 2016):
Thank you for the bug fix. The retries seem to be working now, I can see the mounts connecting at boot.
@ggtakec commented on GitHub (Jan 31, 2016):
@MattFenelon Yes, I tested it on my Ubuntu 14.04 instance on EC2.
I built the same Ubuntu instance on EC2 again and tested the fstab (fuse.s3fs) entry; after that, it worked with no problem.
However, as before, I could not mount on boot with an fstab that had an "s3fs#bucket" entry.
I have examined this problem and found two strange things.
One is that s3fs is called more than once by mountall.
The first call uses the correct bucket name, but a later call uses the wrong bucket name, "s3fs".
Thus the second call fails because the bucket name is "s3fs".
The other is that the first call fails with a CURLE_COULDNT_CONNECT error, despite the _netdev option being specified.
So my EC2 test environment succeeds in mounting only with the "fuse.s3fs" fstab format; the "s3fs#bucket" case fails.
If you can, please let me know the difference between our results and the format of your fstab.
@MattFenelon commented on GitHub (Feb 2, 2016):
@ggtakec Thanks again for your help. The settings I'm using are equivalent to:
_netdev,dbglevel=info,url=https://s3.amazonaws.com,iam_role=some-iam-role,use_sse,allow_other,mp_umask=0077,enable_content_md5,retries=10,enable_noobj_cache,stat_cache_expire=600,uid=1000,gid=1000

This is a sample entry from my fstab (some sensitive details changed):

`/var/sftp/some-folder` is owned by the user. The box is on version 'Ubuntu 14.04.3 LTS'.
@ggtakec commented on GitHub (Feb 6, 2016):
@MattFenelon
My test environment (the fstab format and version you are using) is the same as yours.
I have tested the following fstab formats; they are all very simple.
The result is that only the "mount.TYPE" format succeeded; the other entries failed.
All variants of "s3fs#bucket" failed because s3fs could not resolve the hostname.
(I'm sorry, I previously wrote CURLE_COULDNT_CONNECT when I meant CURLE_COULDNT_RESOLVE_HOST.)
The reason is probably that s3fs was called before the network was up.
It seems that there is a timing difference between "mount.TYPE" and "s3fs#bucket".
I could not find any option pattern (_netdev, bootwait, etc.) that succeeds in resolving the hostname.
If the "mount.TYPE" format also fails for you, the only remaining option is to start s3fs from an rc script.
I do not know all of mountall's and Ubuntu's internals, so I would like advice from someone familiar with them.
Regards,
@evil-c commented on GitHub (Jul 21, 2016):
Hi,
I think I'm having a similar, perhaps the same, issue. When I run the `s3fs` command manually, everything is fine. On boot, however, nothing gets mounted. I'm using ansible to mount the bucket, etc.

My `/etc/fstab` looks like:

Here is the output in `/var/log/syslog` with all the debug levels set:

Thanks in advance for any insight you can provide.
@ggtakec commented on GitHub (Jul 21, 2016):
@evil-c
You can try specifying the "retries" option; I think it will help solve your problem.
I hope so, and thanks for your assistance.
@evil-c commented on GitHub (Jul 21, 2016):
@ggtakec That worked! Thanks for the quick reply!
@idlecool commented on GitHub (Oct 4, 2018):
Adding `_netdev` as one of the fstab options worked! Thank you.

@ggtakec commented on GitHub (Mar 30, 2019):
We kept this issue open for a long time.
I will close this, but if the problem persists, please reopen or post a new issue.