[GH-ISSUE #30] Ok to use mount as a shared file directory across a cluster of servers? #19

Closed
opened 2026-03-04 01:41:13 +03:00 by kerem · 17 comments
Owner

Originally created by @curiosity26 on GitHub (Apr 3, 2014).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/30

What I'd like to do is use an S3 bucket as a mountpoint for a cluster of compute instances. The configuration would do more reading than writing, so I figure that would be OK. I thought of this solution while looking at auto-scaling and a Hadoop configuration: it seemed better for scaling and management to automount an S3 bucket via s3fs and be done. I expect the support for write-locking would prevent corruption, even though the system I use doesn't overwrite files, just renames them by appending _N. Any pros/cons to this solution?

kerem closed this issue 2026-03-04 01:41:14 +03:00

@timurb commented on GitHub (Apr 3, 2014):

The first thing that comes to my mind is that it could be slow.
If you keep large files (several GB) on S3, the delay before an actual read/write starts could be several minutes while the file is downloaded from, or uploaded to, S3.
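As a rough back-of-envelope check on those delays (link speed and file size below are illustrative assumptions, not s3fs measurements):

```python
def transfer_seconds(size_gb, mbps=100):
    """Rough time to move size_gb (decimal) gigabytes over an mbps-megabit link."""
    bits = size_gb * 8 * 1000**3       # gigabytes -> bits
    return bits / (mbps * 1000**2)     # divide by link speed in bits/second

# A 5 GB object on a 100 Mbit/s link needs about 400 s (~7 minutes),
# and s3fs pays that cost before the first read or write completes.
print(transfer_seconds(5))
```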


@curiosity26 commented on GitHub (Apr 3, 2014):

No, nothing that large. Typical web files: images, PHP files, and so on. It's going to be a Drupal installation, and I will be using MongoDB for session and front-end caching. There might be some videos, but I've been trying to move those to YouTube and stream them that way.



@ggtakec commented on GitHub (Apr 4, 2014):

Hi, curiosity26

I think you can specify an s3fs entry in /etc/fstab for automation.
(If I misunderstand this issue, please let me know.)
For example, specify an entry like the following in fstab:
s3fs#<your bucket> /mnt/s3 fuse _netdev,ro,allow_other,... 0 0

Regards,
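For later readers, a fuller sketch of such an fstab entry; the bucket name and cache directory here are placeholders, and recent s3fs versions also accept the `fuse.s3fs` filesystem type instead of the `s3fs#bucket` prefix:

```
s3fs#mybucket /mnt/s3 fuse _netdev,ro,allow_other,use_cache=/tmp 0 0
```

`_netdev` delays the mount until networking is up; credentials come from /etc/passwd-s3fs unless an IAM role supplies them.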


@curiosity26 commented on GitHub (Apr 4, 2014):

Thanks Takeshi! I was going to do the /etc/fstab mount, so thanks for the specific entry line. And thanks for building this software!



@zetas commented on GitHub (Apr 16, 2014):

I'm also planning on using s3fs for this. Migrating a dirty pre-packaged PHP CMS to Elastic Beanstalk, plus the lack of a stable store for user-uploaded content, has led me to this approach to persist files across the cluster. Let's hope it works.


@curiosity26 commented on GitHub (Apr 22, 2014):

Running into a weird problem; hopefully you can help. I have my S3 bucket set up, and the policy is configured so that the IAM role WebServers can access the objects in the bucket. The WebServers role also has full admin access over S3 buckets (just to take that out of the equation). I also configured a user with admin access to S3 buckets to use in the passwd-s3fs file. All of that is configured correctly, but after mounting the bucket I get the following error when I try to access the mount; nothing shows up in the logs.

Here's the command I mount with:

sudo s3fs -o use_cache=/tmp,allow_other,use_rrs,iam_role=arn:aws:iam::############:role/WebServer -d sageagewebroot /media/webroot

Here's the error when I try 'ls /media/webroot':

ls: cannot access webroot: Transport endpoint is not connected

How can I troubleshoot this? I've tried everything in the Wiki, but I don't get any logs or error messages.

Thanks
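"Transport endpoint is not connected" generally means the FUSE process exited after mounting, so the useful diagnostics come from running s3fs itself in the foreground. A debugging sketch (bucket and mountpoint reuse the names above; flags are standard s3fs/FUSE options):

```shell
# clear the dead FUSE endpoint first
sudo umount -l /media/webroot

# re-run in the foreground (-f) with debug output (-d) so the actual
# failure is printed to the terminal instead of disappearing
sudo s3fs sageagewebroot /media/webroot -f -d -o allow_other,use_cache=/tmp
```

Newer s3fs releases also accept `-o dbglevel=info` and `-o curldbg` for request-level detail.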


@ggtakec commented on GitHub (Apr 26, 2014):

Hi, curiosity26

s3fs supports IAM roles on EC2; please see the following URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

So you should specify only the IAM role name for the aim_role option, for example "aim_role=myrole" (where myrole is the role name).

Please try it.
Regards,


@curiosity26 commented on GitHub (Apr 26, 2014):

OK, I thought they wanted the full ARN. I'll try it with just the name.

Can you tell me if there's an inactivity timeout for the mount connection? My web servers threw a "transport not connected" error after sitting idle for a day.



@ggtakec commented on GitHub (Apr 26, 2014):

s3fs does not keep a connection to the S3 server alive; whenever a client (process) requires s3fs to send a request, it opens a new HTTP(S) connection. So s3fs should not have any problem after the web server has left the mount point idle (sent no requests) for a while. And if you use an IAM role, s3fs detects that the token has expired and gets a new token automatically before sending the request.
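That refresh-before-expiry behavior is the usual cached-credential pattern. A minimal illustrative sketch in Python (this is not s3fs's actual code; all names here are made up):

```python
REFRESH_MARGIN = 60  # renew this many seconds before expiry

def current_token(cached, now, fetch_fresh):
    """Return a usable credential, calling fetch_fresh() when the cached
    one is missing or within REFRESH_MARGIN seconds of expiring."""
    if cached is None or cached["expires_at"] - now <= REFRESH_MARGIN:
        cached = fetch_fresh()
    return cached

# usage, with a stand-in for the EC2 instance-metadata request:
fresh = lambda: {"token": "renewed", "expires_at": 5000}
tok = current_token({"token": "old", "expires_at": 1030}, now=1000, fetch_fresh=fresh)
# the old credential is inside the 60 s margin, so the renewed one is returned
```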


@curiosity26 commented on GitHub (Apr 26, 2014):

Thanks Takeshi! I'll try and get the IAM role working and see if that helps
prevent the tunnel fail issue.



@zetas commented on GitHub (May 1, 2014):

I'm not sure if this is helpful to anyone, but I have an update on my situation. I tried using s3fs-fuse in a clustered environment on Elastic Beanstalk, but it just wasn't reliable enough. Too many files failed to copy, or failed to become available quickly enough for the CloudFront CDN to pick them up on the user's next page load. I tried tweaking the retries and using a local cache; nothing seemed to help.

I ended up scrapping the idea and just going traditional. I set up a NAS using the SoftNAS AWS Marketplace option, and I just mount the NFS share with a script in the Elastic Beanstalk configuration. Every time a new node is added to the cluster, it automatically mounts the NFS share and everything is good to go. Zero issues so far; it's been running under load for about a week.


@tspicer commented on GitHub (Jun 29, 2015):

Just for clarification, in case anyone stumbles upon this (like me): the proper syntax is iam_role="role name", not aim_role="role name".

Example: -o iam_role="s3access-role"


@rorysavage77 commented on GitHub (Jul 1, 2015):

This works well - confirmed.

s3fs bucketName /target/mount/point -o iam_role="role name"


@ggtakec commented on GitHub (Jul 5, 2015):

@tspicer Thanks for catching my mistake.
That's correct: the option name is iam_role.
Regards,


@gl-lamhnguyen commented on GitHub (Oct 23, 2015):

If I want to mount on boot by editing the /etc/fstab file, where should I put iam_role?


@lvandeyar commented on GitHub (Dec 17, 2015):

I can't seem to set up /etc/fstab with an IAM role. Is there official documentation anywhere?


@ggtakec commented on GitHub (May 14, 2016):

@lvandeyar I updated the iam_role option in #411.
That change added an "auto" value for the iam_role option, which gets the IAM role name from the metadata on your EC2 instance.
So if the metadata is available at boot, I think it will work well.
Please try it.
Thanks in advance for your help.
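Combining this with the earlier fstab questions in the thread, an entry using the new value might look like the following (bucket name and mount point are placeholders, and iam_role=auto needs an s3fs build that includes #411):

```
mybucket /mnt/s3 fuse.s3fs _netdev,allow_other,iam_role=auto 0 0
```

With `auto`, s3fs asks the EC2 instance-metadata service for the attached role's name and credentials at mount time, so no passwd-s3fs file is required.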
