[GH-ISSUE #412] Mounted s3 gone after rebooting EC2 instance #217

Closed
opened 2026-03-04 01:43:20 +03:00 by kerem · 11 comments
Owner

Originally created by @yuanzhou on GitHub (May 10, 2016).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/412

Here is my use case: I created a cluster on AWS using cfncluster and successfully mounted my s3 bucket to all the cluster nodes using s3fs.

```sh
#!/bin/bash

# Install everything under the ec2-user home directory so we can log in and submit jobs later
cd /home/ec2-user/

# Create the mountpoint
mkdir s3mnt

# So ec2-user can access it when logged in via ssh
chmod 777 s3mnt

# Ensure we have all the dependencies
yum -y install automake fuse fuse-devel gcc-c++ git libcurl-devel libxml2-devel make openssl-devel

# Compile from master
git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse
./autogen.sh
./configure
make
make install

# Don't forget to jump back out
cd ..

# No need to keep the source files
rm -rf s3fs-fuse

# Put the S3 identity and credential in a file
echo xxxx:yyyyy > .passwd-s3fs

# Make ec2-user own this file
chown ec2-user:ec2-user .passwd-s3fs

# Make sure the file has proper permissions
chmod 600 .passwd-s3fs

# Actual mounting
# Need -o allow_other, otherwise we will see ???? in directory listings
s3fs my-s3 /home/ec2-user/s3mnt -o allow_other -o passwd_file=/home/ec2-user/.passwd-s3fs -o umask=0000
```
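A side note on the `chmod 600` step above: s3fs itself refuses a credentials file that is readable or writable by group or others, which is a common reason a scripted setup fails later. A minimal sketch of that permission rule (the `check_passwd_perms` helper is hypothetical, just to illustrate the check s3fs performs):

```shell
#!/bin/sh
# Hypothetical helper mirroring s3fs's rule: the passwd file must not be
# accessible by group or others (e.g. 600 or 400 are fine, 644 is not).
check_passwd_perms() {
    perms=$(stat -c '%a' "$1")   # octal permissions, GNU stat
    case "$perms" in
        600|400) echo "ok ($perms)" ;;
        *) echo "bad permissions ($perms); run: chmod 600 $1" >&2; return 1 ;;
    esac
}

f=$(mktemp)
chmod 644 "$f"
check_passwd_perms "$f" || true   # rejected: group/other can read it
chmod 600 "$f"
check_passwd_perms "$f"           # accepted
```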

And as suggested, I also edited `/etc/fstab` to add this mountpoint:

```
[ec2-user@ip-172-31-19-81 ~]$ sudo cat /etc/fstab
#
LABEL=/     /           ext4    defaults,noatime  1   1
tmpfs       /dev/shm    tmpfs   defaults        0   0
devpts      /dev/pts    devpts  gid=5,mode=620  0   0
sysfs       /sys        sysfs   defaults        0   0
proc        /proc       proc    defaults        0   0
/dev/disk/by-ebs-volumeid/vol-f5824650 /shared ext4 _netdev 0 0
s3fs#my-s3 /home/ec2-user/s3mnt fuse _netdev,allow_other 0 0
```

I was also prompted to enable `user_allow_other` in `/etc/fuse.conf`.

But when I reboot or stop/start the master instance via the AWS web console, the S3 bucket is not mounted. I have to mount it again manually using

```
s3fs my-s3 /home/ec2-user/s3mnt -o allow_other -o passwd_file=/home/ec2-user/.passwd-s3fs -o umask=0000
```

Is there anything that I missed?

kerem closed this issue 2026-03-04 01:43:20 +03:00

@yuanzhou commented on GitHub (May 11, 2016):

Just to update, I also tried to add a cron job to mount this s3 bucket upon reboot. But it didn't work. I tested with other simple cron jobs on reboot and they worked though.


@ggtakec commented on GitHub (May 14, 2016):

@yuanzhou
Is there any log about s3fs in /var/log/messages (or another log file) after the failure?
(You can change the log level with the dbglevel option or the -d option.)

Previously, we had to specify the retries option in fstab when using the iam_role option.

Please try to get a log, and if you can, please set the retries option too.

Thanks in advance for your help.


@selimnasrallah88 commented on GitHub (Jul 4, 2016):

Hello @yuanzhou,

The S3 mount should be executed after the networking services have started.

Solution: create a service with chkconfig and give it a late start order (e.g. S99) and kill order 80. Copy a script like the one below into `/etc/init.d/` (as `s3`), then:

```
sudo chkconfig --add s3
sudo service s3 start
```

```sh
#!/bin/sh
# chkconfig: 2345 99 80
# Optional: configure the required runlevels for the service

RETVAL=0

start() {
    echo -n $"Starting..."
    sudo /usr/local/bin/s3fs -o allow_other -o use_cache=**** bucket /s3dbbackup
    sudo ls -al /s3dbbackup
}

stop() {
    echo -n $"Stopping..."
    sudo umount /s3dbbackup
}

reload() {
    sudo umount /s3dbbackup
    sudo /usr/local/bin/s3fs -o allow_other -o use_cache=**** bucket /s3dbbackup
    sudo ls -al /s3dbbackup
}

status() {
    sudo ls -al /s3dbbackup
}

# See how we were called.
case "$1" in
start)
    start
    ;;
stop)
    stop
    ;;
status)
    status
    ;;
restart)
    stop
    start
    ;;
reload)
    reload
    ;;
*)
    echo $"Usage: $prog {start|stop|restart|status|reload}"
    RETVAL=2
esac

exit $RETVAL
```


@yuanzhou commented on GitHub (Jul 5, 2016):

@selimnasrallah88 thanks a lot for the suggestion. We've switched from S3 to EBS due to performance issues. AWS also made EFS available recently.


@ngbranitsky commented on GitHub (Jul 5, 2016):

What was your use case for S3, given that EBS clearly does not support shared storage?
AWS EFS is restricted to a single VPC, doesn't support snapshots, and costs $0.30/GB/month.
If you are looking for a cross-VPC solution that does support snapshots,
consider a Virtual Private SAN (VPSA), from AWS partner Zadara, at $0.08/GB/month.
It's available in the AWS Marketplace.


@ggtakec commented on GitHub (Jul 18, 2016):

I'm sorry for my late reply.

@selimnasrallah88, @ngbranitsky, thanks for your help.

@yuanzhou
You can try specifying the "retries" option in fstab.
You might be able to resolve this problem with that option (e.g. retries=5, retries=10, ...).

Thanks in advance for your assistance.
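The idea behind the retries option can be sketched as a small wrapper loop. The `mount_with_retries` function below is hypothetical (s3fs does its retrying internally when `retries` is set; this only illustrates the behavior being relied on), and the `fake_mount` stub stands in for a mount command that fails until the network is up:

```shell
#!/bin/sh
# Hypothetical retry wrapper: run a command until it succeeds or the
# attempt budget is used up.
mount_with_retries() {
    cmd=$1; tries=$2; delay=$3
    i=1
    while [ "$i" -le "$tries" ]; do
        if "$cmd"; then
            return 0
        fi
        i=$((i + 1))
        sleep "$delay"
    done
    return 1
}

# Demo with a stub that fails twice and then succeeds, mimicking a mount
# attempted before the network is fully available.
attempts_file=$(mktemp)
echo 0 > "$attempts_file"
fake_mount() {
    n=$(($(cat "$attempts_file") + 1))
    echo "$n" > "$attempts_file"
    [ "$n" -ge 3 ]
}
mount_with_retries fake_mount 5 0 && echo "mounted after $(cat "$attempts_file") attempts"
```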


@daavve commented on GitHub (Jul 19, 2017):

I have the same problem with fstab on an Arch Linux server. I can mount the bucket successfully by running s3fs from my user account, but I cannot get the mount working via fstab:

```
my-bucket /mnt/s3fs fuse.s3fs _netdev,allow_other,endpoint=us-west-2,use_cache=/tmp,storage_class=reduced_redundancy 0 0
```

When I try to mount, I get the following:

```
# mount -a -v
/                        : ignored
/mnt/s3fs                : successfully mounted
# lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda    254:0    0  10G  0 disk 
└─vda1 254:1    0  10G  0 part /
# ls -al /mnt/s3fs/
total 8
drwxr-xr-x 2 root root 4096 Jul 18 21:22 .
drwxr-xr-x 3 root root 4096 Jul 18 22:59 ..
```

I have lots of files inside my bucket, so ls should have shown something. Dmesg looks normal and I cannot find any log messages.
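An empty listing like the one above can mean either a genuinely empty mount or a mount that silently failed, leaving you looking at the underlying directory. One way to tell them apart is to compare device numbers, since a real mount sits on a different device than its parent directory (the `is_mounted` helper below is hypothetical, a sketch of roughly what `mountpoint(1)` checks):

```shell
#!/bin/sh
# Hypothetical check: a directory is a mountpoint if its device number
# differs from its parent's.
is_mounted() {
    dir=$1
    [ "$(stat -c '%d' "$dir")" != "$(stat -c '%d' "$dir/..")" ]
}

d=$(mktemp -d)
if is_mounted "$d"; then
    echo "$d is a mountpoint"
else
    echo "$d is a plain directory"
fi
```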

-Thanks,

-Dave


@stormm2138 commented on GitHub (Feb 24, 2018):

I updated the init script provided by @selimnasrallah88:

```bash
#!/bin/sh
# fuse_s3_mount          Mount / unmount an s3 bucket using fuse
#
# chkconfig: 2345 85 15
# description: Use Fuse to mount an s3 bucket locally -- https://github.com/s3fs-fuse/s3fs-fuse
#
### BEGIN INIT INFO
# Provides: fuse_s3_mount
# Required-Start: $local_fs $remote_fs $network $named $fuse
# Required-Stop: $local_fs $remote_fs $network
# Short-Description: start and stop s3 Fuse Mount
# Description: Use Fuse to mount an s3 bucket locally -- https://github.com/s3fs-fuse/s3fs-fuse
### END INIT INFO


# Source function library.
. /etc/init.d/functions

MOUNT_PATH=""
RETVAL=0

if [ -z "${MOUNT_PATH}" ]; then
    echo -n "MOUNT_PATH must be set"
    failure
    exit 1
fi

start() {
    echo "Starting Fuse s3 mount service to mount ${MOUNT_PATH} to s3"
    if mountpoint "${MOUNT_PATH}" >> /dev/null; then
        echo -n "${MOUNT_PATH} is already mounted, skipping start"
        warning
        echo
    else
        mount "${MOUNT_PATH}"
        mountpoint "${MOUNT_PATH}" >> /dev/null
        RETVAL=$?
        echo -n "Mounting fuse s3 mount... "
        if [ ${RETVAL} != 0 ]; then
            failure
            echo -e "\n You can debug the Fuse mounting using:\n"
            echo -e "s3fs BUCKET_NAME ${MOUNT_PATH} -o dbglevel=info -f -o curldbg"
        else
            success
            echo -e "\nMount contents: $(ls -l ${MOUNT_PATH}/)"
        fi
    fi
}

stop() {
    echo "Stopping Fuse s3 mount service."
    if ! mountpoint "${MOUNT_PATH}" >> /dev/null; then
        echo -n "${MOUNT_PATH} was not mounted, skipping stop"
        warning
    else
        umount "${MOUNT_PATH}"
        RETVAL=$?
        echo -n "Unmounting fuse s3 mount..."
        if [ ${RETVAL} != 0 ]; then
            failure
        else
            success
        fi
    fi
    echo
}

status() {
    mountpoint "${MOUNT_PATH}"
    echo "Mount contents: $(ls -l ${MOUNT_PATH}/)"
}

case "$1" in
start)
    start
    ;;
stop)
    stop
    ;;
status)
    status
    ;;
restart)
    stop
    start
    ;;
*)
    echo $"Usage: $prog {start|stop|restart|status}"
    RETVAL=2
    ;;
esac

exit $RETVAL
```
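For completeness: on distributions that have since moved from SysV init to systemd, the same "mount only after the network is up" ordering can be expressed as a unit file instead of an init script. This is a sketch, not from the thread; the unit name, bucket name, mountpoint, and passwd file path are all placeholders:

```
# /etc/systemd/system/s3fs-mount.service (hypothetical name)
[Unit]
Description=s3fs mount for BUCKET_NAME
After=network-online.target
Wants=network-online.target

[Service]
# s3fs daemonizes by default, hence Type=forking
Type=forking
ExecStart=/usr/local/bin/s3fs BUCKET_NAME /path/to/mountpoint -o allow_other -o passwd_file=/path/to/.passwd-s3fs
ExecStop=/bin/umount /path/to/mountpoint

[Install]
WantedBy=multi-user.target
```

Enabling it with `systemctl enable s3fs-mount.service` makes it start on every boot.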


@ggtakec commented on GitHub (Mar 30, 2019):

We kept this issue open for a long time.
I will close it, but if the problem persists, please reopen it or post a new issue.


@manikandan24f commented on GitHub (Jul 15, 2019):

1. Check `which s3fs` and note down the path.
2. Then `vi /etc/rc.local`.
3. Add the mount command, using the path from step 1:
   `s3fs your_bucketname -o use_cache=/tmp -o allow_other -o uid=1001 -o mp_umask=002 -o multireq_max=5 /mys3bucket`
4. Then `chmod +x /etc/rc.local`.

This worked for me even after my instance got restarted!
Hope this helps.


@BuffMcBigHuge commented on GitHub (Nov 14, 2022):

Just chiming in here. You can also auto-mount the filesystem as a specific user with the following method:

```
sudo su -
nano /etc/rc.local
```

```bash
#!/bin/bash

# Mount S3FS on boot with user 'ubuntu'
su ubuntu -c 's3fs my-folder ${HOME}/my-folder -o passwd_file=${HOME}/.passwd-s3fs'
```

```
chmod +x /etc/rc.local
```