mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #1196] Mounting S3 Bucket using IAM Role #634
Originally created by @aric49 on GitHub (Nov 12, 2019).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1196
Additional Information
In our AWS account, we have two EC2 instances configured with an IAM role that allows read and write access to our S3 bucket. One of the instances is able to mount the bucket successfully using the IAM role, while the other cannot. Both instances were working until we took one of them down for a reboot earlier today.
Version of s3fs being used (s3fs --version)
On both instances:
Amazon Simple Storage Service File System V1.85 (commit:cc4a307) with OpenSSL
Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)
On both instances:
Kernel information (uname -r)
GNU/Linux Distribution, if applicable (cat /etc/os-release)
s3fs command line used, if applicable
/etc/fstab entry, if applicable
s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)
If you execute s3fs with the dbglevel and curldbg options, you can get detailed debug messages
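For reference, a foreground debug invocation might look like the sketch below. The bucket name and mountpoint are placeholders; the role name matches the ecsInstanceRole discussed later in this issue.

```shell
# Run s3fs in the foreground (-f) with verbose s3fs and libcurl debugging.
# "mybucket" and /mnt/s3 are placeholders for a real bucket and mountpoint.
s3fs mybucket /mnt/s3 \
    -o iam_role=ecsInstanceRole \
    -o dbglevel=info \
    -o curldbg \
    -f
```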
Attempting to run directly from Bash:
Trying to mount from /etc/fstab:
Details about issue
Mounting the S3 bucket using the IAM role does not appear to work, but only on one instance; the other instance works fine. I have checked that both instances have the ecsInstanceRole IAM role attached, and I can see it in the EC2 metadata from inside the instance:

Non-working instance:
Working Instance:
I thought maybe something was up with networking on the non-working instance, but I can mount the S3 bucket if I specify a credential file:
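A typical credential-file mount looks like the following sketch (the access key, secret key, bucket name, and mountpoint are all placeholders):

```shell
# Create a credential file in ACCESS_KEY:SECRET_KEY format and mount with it
# instead of the IAM role. The key pair below is a placeholder.
echo "AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" > "${HOME}/.passwd-s3fs"
chmod 600 "${HOME}/.passwd-s3fs"   # s3fs refuses credential files readable by others
s3fs mybucket /mnt/s3 -o passwd_file="${HOME}/.passwd-s3fs"
```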
To me, the issue seems isolated to mounting the bucket with the IAM role, which was previously working. Any help is much appreciated!
@aric49 commented on GitHub (Nov 13, 2019):
On further investigation, after setting the log level to debug, I am seeing the following messages:
Hitting the metadata URL manually with curl, I am able to generate credentials from this box:
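The manual check in question can be sketched as the two IMDSv1 requests below; the role name is whatever role is attached to the instance (here, the ecsInstanceRole from earlier in the issue).

```shell
# List the IAM role attached to this instance (instance-metadata endpoint).
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/

# Fetch the temporary credentials for that role (JSON with AccessKeyId,
# SecretAccessKey, Token, and Expiration).
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/ecsInstanceRole
```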
So it looks like the credentials are created properly... Any idea what I might be missing?
@aric49 commented on GitHub (Nov 13, 2019):
So it looks like checking out the exact v1.85 revision solves the issue for me, per this comment: https://github.com/s3fs-fuse/s3fs-fuse/issues/1162#issuecomment-536864032
The autoscaling group we were using was simply cloning master and building it. Perhaps there is new code in master that breaks mounting with IAM roles?
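Pinning the build to the tagged release instead of master can be sketched as below, following the standard autotools build steps the s3fs-fuse README describes (build dependencies are assumed to be installed already):

```shell
# Build the tagged v1.85 release rather than whatever master currently is.
git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse
git checkout v1.85
./autogen.sh
./configure
make
sudo make install
```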
@rallister commented on GitHub (Jan 18, 2020):
There's recursion happening: if it's running as a daemon it segfaults and leaves the folder mounted with FUSE. The transport error is just a symptom.