Mirror of https://github.com/s3fs-fuse/s3fs-fuse.git (synced 2026-04-25 21:35:58 +03:00)
[GH-ISSUE #412] Mounted s3 gone after rebooting EC2 instance #217
Originally created by @yuanzhou on GitHub (May 10, 2016).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/412
Here is my use case: I created a cluster on AWS using cfncluster and successfully mounted my S3 bucket on all the cluster nodes using s3fs. As suggested, I also edited /etc/fstab to add this mount point, and I was also prompted to enable "user_allow_other" in /etc/fuse.conf. But when I reboot or stop/start the master instance via the AWS web console, the S3 bucket is not mounted, and I have to mount it again manually using:
Is there anything that I missed?
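For reference, a typical s3fs line in /etc/fstab looks roughly like the following sketch; the bucket name, mount point, and credential-file path here are hypothetical, and `_netdev` asks the system to wait for the network before mounting:

```
mybucket /mnt/s3 fuse.s3fs _netdev,allow_other,passwd_file=/etc/passwd-s3fs 0 0
```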
@yuanzhou commented on GitHub (May 11, 2016):
Just to update, I also tried to add a cron job to mount this s3 bucket upon reboot. But it didn't work. I tested with other simple cron jobs on reboot and they worked though.
@ggtakec commented on GitHub (May 14, 2016):
@yuanzhou
Is there any log output from s3fs in /var/log/messages (or another log file) after the failure?
(You can change the log level with the dbglevel option or the -d option.)
Previously, we had to specify the retries option in fstab when using the iam_role option.
Please try to capture a log, and if you can, please set the retries option too.
Thanks in advance for your help.
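A sketch of the debugging invocation being suggested, run in the foreground so messages appear on the terminal (the bucket name and mount point are placeholders):

```
# mount in the foreground (-f) with debug output (-d), verbose logging, and retries
s3fs mybucket /mnt/s3 -o dbglevel=info -o retries=5 -f -d
```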
@selimnasrallah88 commented on GitHub (Jul 4, 2016):
Hello yuanzhou ,
S3 mounting should be executed after networking services
Solution: Create a service using chkconfig and add it to the last order on boot S99 per exemple and 80 order on kill
```sh
#!/bin/sh
# chkconfig: 2345 99 80
# description: Mounts an S3 bucket with s3fs after networking is up.

RETVAL=0
prog=s3

start() {
    echo -n $"Starting..."
    /usr/local/bin/s3fs -o allow_other -o use_cache=**** bucket /s3dbbackup
    ls -al /s3dbbackup
}

stop() {
    echo -n $"Stopping..."
    umount /s3dbbackup
}

reload() {
    umount /s3dbbackup
    /usr/local/bin/s3fs -o allow_other -o use_cache=**** bucket /s3dbbackup
    ls -al /s3dbbackup
}

status() {
    ls -al /s3dbbackup
}

# See how we were called.
case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    status)
        status
        ;;
    restart)
        stop
        start
        ;;
    reload)
        reload
        ;;
    *)
        echo $"Usage: $prog {start|stop|restart|status|reload}"
        RETVAL=2
esac
exit $RETVAL
```

Copy this file into /etc/init.d/ (as "s3"), then register and start it:

```sh
sudo chkconfig --add s3
sudo service s3 start
```

Optional: configure the required runlevels for the service.
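On distributions that use systemd instead of SysV init, the same "mount after networking" ordering can be expressed with a unit file. A minimal sketch, reusing the bucket and mount point from the script above (the unit name is hypothetical):

```
# /etc/systemd/system/s3fs-mount.service
[Unit]
Description=Mount S3 bucket with s3fs
After=network-online.target
Wants=network-online.target

[Service]
Type=forking
ExecStart=/usr/local/bin/s3fs -o allow_other bucket /s3dbbackup
ExecStop=/bin/umount /s3dbbackup
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

`Type=forking` matches s3fs's default behavior of daemonizing after a successful mount; enable the unit with `systemctl enable s3fs-mount.service`.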
@yuanzhou commented on GitHub (Jul 5, 2016):
@selimnasrallah88 thanks a lot for the suggestion. We've switched from S3 to EBS due to the performance issue. AWS also made EFS available recently.
@ngbranitsky commented on GitHub (Jul 5, 2016):
What was your use case for S3, given that EBS clearly does not support shared storage?
AWS EFS is restricted to a single VPC, doesn't support snapshots, and costs $0.30/GB/month.
If you are looking for a cross-VPC solution that does support snapshots,
consider a Virtual Private SAN (VPSA) from AWS partner Zadara, at $0.08/GB/month.
It's available in the AWS Marketplace.
Norman Branitsky
@ggtakec commented on GitHub (Jul 18, 2016):
I'm sorry for my late reply.
@selimnasrallah88, @ngbranitsky thanks for your help.
@yuanzhou
You can try specifying the "retries" option in fstab.
You might be able to resolve this problem with that option (e.g. retries=5, retries=10, ...).
Thanks in advance for your assistance.
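A sketch of what the suggested fstab entry might look like with retries set, here combined with an IAM role as mentioned earlier in the thread (bucket name, mount point, and retry count are hypothetical):

```
mybucket /mnt/s3 fuse.s3fs _netdev,allow_other,iam_role=auto,retries=5 0 0
```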
@daavve commented on GitHub (Jul 19, 2017):
I have the same problem with an Arch Linux server fstab. I have successfully mounted using s3fs from my user account, but I cannot get the mount working using fstab.
When I try to mount, I get the following:
I have lots of files inside my bucket, so ls should have shown something. dmesg looks normal and I cannot find any log messages.
-Thanks,
-Dave
@stormm2138 commented on GitHub (Feb 24, 2018):
I updated the init script provided by @selimnasrallah88 --
@ggtakec commented on GitHub (Mar 30, 2019):
We kept this issue open for a long time.
I will close this, but if the problem persists, please reopen or post a new issue.
@manikandan24f commented on GitHub (Jul 15, 2019):
4) Then chmod +x /etc/rc.local
This worked for me even after my instance got restarted!
Hope this helps.
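The earlier steps of this comment are not preserved here, but the rc.local approach usually amounts to appending the mount command to the file before making it executable. A minimal sketch, with a hypothetical bucket and mount point, and a guard so a reboot doesn't try to mount twice:

```
#!/bin/sh
# /etc/rc.local -- runs at the end of multi-user boot on many distributions
# mount only if the mount point is not already mounted
grep -qs ' /mnt/s3 ' /proc/mounts || /usr/local/bin/s3fs mybucket /mnt/s3 -o allow_other -o iam_role=auto
exit 0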
@BuffMcBigHuge commented on GitHub (Nov 14, 2022):
Just chiming in here. You can also auto-mount the filesystem for a specific user with the following method: