mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 21:35:58 +03:00
[GH-ISSUE #407] Can't ssh into aws instance once instance is restarted #219
Originally created by @shawnFallon on GitHub (May 4, 2016).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/407
Hi Guys,
We have been using s3fs to mount an AWS S3 bucket on our AWS instances for an application. We have had trouble being unable to SSH into instances once an instance is rebooted. After a lot of back and forth with AWS support, they say it is an s3fs issue.
We are using an IAM role for the s3fs setup rather than an IAM user.
Our Ansible script for setting up s3fs is as follows:
Our Ansible script to mount the bucket looks like this:
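(The original scripts were not preserved in this mirror. As a rough sketch of the setup being described, a mount that authenticates via an IAM role rather than static keys typically uses s3fs's `iam_role` option; the bucket name and mount point below are placeholders:)

```shell
# Hypothetical example — "my-bucket" and /mnt/s3 are placeholders.
# iam_role=auto tells s3fs to discover the role attached to the EC2 instance,
# so no access keys need to be stored on the machine.
s3fs my-bucket /mnt/s3 -o iam_role=auto -o allow_other -o url=https://s3.amazonaws.com

# Equivalent /etc/fstab entry so the mount is re-established on reboot;
# _netdev delays the mount until the network is up:
# my-bucket /mnt/s3 fuse.s3fs _netdev,iam_role=auto,allow_other 0 0
```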
Just wondering if anyone else has had this problem and whether they have found a workaround...
Cheers
@ggtakec commented on GitHub (May 14, 2016):
@shawnFallon
Could you log into the instance before the problem occurs?
And could you check its status (e.g. the s3fs process, mount status, and s3fs logs) after the problem occurs?
If you can, please check the s3fs status.
You can collect s3fs logs by running s3fs with the dbglevel option.
The s3fs log will help us solve this issue.
Thanks in advance for your help.
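(For reference, a typical debug invocation runs s3fs in the foreground with a raised log level; the bucket name and mount point are placeholders, and the IAM-role option matches the setup from the original report:)

```shell
# -f keeps s3fs in the foreground so log messages go to stderr instead of syslog.
# dbglevel accepts crit, err, warn, info, or dbg (most to least severe).
s3fs my-bucket /mnt/s3 -o iam_role=auto -o dbglevel=info -f
```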
@ggtakec commented on GitHub (Mar 30, 2019):
This issue has been open for a long time.
I will close it now, but if the problem persists, please reopen it or open a new issue.