[GH-ISSUE #407] Can't ssh into aws instance once instance is restarted #219

Closed
opened 2026-03-04 01:43:20 +03:00 by kerem · 2 comments

Originally created by @shawnFallon on GitHub (May 4, 2016).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/407

Hi Guys,

We have been using s3fs to mount an AWS S3 bucket on our AWS instances for an application. We have had trouble SSHing into instances once they are rebooted. I've had a lot of back and forth with AWS, and they say it is an s3fs issue.

We are using an IAM role for the s3fs setup rather than an IAM user.

Our ansible script for setting up s3fs is as follows:


```
---
  - name: Update apt cache
    apt: update_cache=yes
  - name: Install necessary packages for s3fs
    apt: name="{{ item }}" state=latest update_cache=yes
    with_items:
      - build-essential
      - libfuse-dev
      - libcurl4-openssl-dev
      - libxml2-dev
      - mime-support
      - automake
      - libtool
      - wget
      - tar

  - name: Download s3fs version {{s3fs_version}}
    get_url: url=https://github.com/s3fs-fuse/s3fs-fuse/archive/{{s3fs_tar}} dest={{user_dir}}{{s3fs_tar}}
  - name: Download and unpack s3fs archive files
    unarchive: src='{{user_dir}}{{s3fs_tar}}' dest={{user_dir}} copy=no
  - name: Build and install s3fs
    command: '{{ item }}'
    with_items:
      - ./autogen.sh
      - ./configure --prefix=/usr
      - make
      - make install
    args:
      chdir: '{{s3fs_dir}}'
  - name: Allow fuse to run as non-root
    shell: sed -i -e "s/#user_allow_other/user_allow_other/g" /etc/fuse.conf
```
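The sed edit in the last task is worth sanity-checking, since a typo in the pattern silently leaves `allow_other` disabled and the later mounts would then fail for non-root processes. A minimal sketch that exercises the same substitution against a scratch copy of fuse.conf (the scratch file is illustrative, not part of the playbook):

```shell
# Sketch: apply the same substitution the playbook runs, but against a
# scratch copy of fuse.conf, then confirm the directive is uncommented.
CONF=$(mktemp)
printf '#user_allow_other\n' > "$CONF"
sed -i -e "s/#user_allow_other/user_allow_other/g" "$CONF"
grep '^user_allow_other' "$CONF"
```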


Our ansible script to mount the bucket looks like this:

```
- name: Ensure that {{item}} directory exists
  file: path={{item}} state=directory
  with_items:
    - "{{mounted_dir}}"
    - "{{s3fs_config_dir}}"
- name: Add configuration into fstab in order to mount s3fs on reboot
  lineinfile:
    dest: /etc/fstab
    regexp: '^s3fs#{{s3_bucket}}'
    line: 's3fs#{{s3_bucket}} {{mounted_dir}} fuse use_cache=/tmp,noatime,allow_other 0 0'
    state: present
    insertafter: EOF
- name: mount bucket {{s3_bucket}} to {{mounted_dir}} using IAM role
  shell: s3fs {{s3_bucket}} {{mounted_dir}} -o iam_role={{ami_iam_profile}},use_cache=/tmp,noatime,nonempty,allow_other,umask=0
  tags: mount_s3fs
```
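One detail worth noting: the fstab line above carries neither the `iam_role` option (which the manual mount does pass) nor the generic mount options `_netdev` and `nofail`, so at boot the mount may be attempted before networking and the instance metadata service are reachable, and on some distributions a hanging network filesystem in fstab can stall the boot sequence — which would match the "can't SSH in after reboot" symptom. A hedged sketch of an fstab entry that keeps the two invocations consistent (bucket, mount point, and role names are placeholders for the playbook variables, and it writes to a scratch file rather than the real /etc/fstab):

```shell
#!/bin/sh
# Sketch: build an fstab entry matching the manual mount, adding _netdev
# so the mount waits for networking and nofail so a failed mount does
# not block boot. All names below are placeholders.
BUCKET=my-bucket          # stands in for {{s3_bucket}}
MOUNT=/mnt/s3bucket       # stands in for {{mounted_dir}}
ROLE=my-instance-role     # stands in for {{ami_iam_profile}}

LINE="s3fs#${BUCKET} ${MOUNT} fuse _netdev,nofail,iam_role=${ROLE},use_cache=/tmp,noatime,allow_other,umask=0 0 0"

# Idempotent append, like lineinfile, but into a scratch file.
FSTAB=$(mktemp)
grep -qF "s3fs#${BUCKET}" "$FSTAB" || printf '%s\n' "$LINE" >> "$FSTAB"
cat "$FSTAB"
```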

Just wondering if anyone else has had this problem and whether they have found a workaround...

Cheers

kerem closed this issue 2026-03-04 01:43:21 +03:00

@ggtakec commented on GitHub (May 14, 2016):

@shawnFallon
Could you log into the instance before the problem occurs?
And do you know its status (e.g. s3fs process, mount status, s3fs logs, etc.) after the problem occurs?

If you can, please check the s3fs status.
You can get logs from s3fs by running it with the dbglevel option.
We need the s3fs log; it will help us solve this issue.

Thanks in advance for your help.
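For reference, one way to capture the logs being asked for here is to run s3fs in the foreground with debug output (option names per the s3fs man page: `-f` keeps it in the foreground, `dbglevel` sets verbosity, `curldbg` adds libcurl traces; the bucket, mount point, and role names are placeholders):

```
s3fs my-bucket /mnt/s3bucket \
    -o iam_role=my-instance-role \
    -f -o dbglevel=info -o curldbg
```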


@ggtakec commented on GitHub (Mar 30, 2019):

We kept this issue open for a long time.
I will close this, but if the problem persists, please reopen or post a new issue.
