mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #30] Ok to use mount as a shared file directory across a cluster of servers? #19
Originally created by @curiosity26 on GitHub (Apr 3, 2014).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/30
What I'd like to do is use an S3 bucket as a mountpoint for a cluster of compute instances. The configuration would do more reading than writing, so I figure that would be OK. I thought of this solution when I was looking at auto-scaling and a Hadoop configuration, and it seemed better for scaling and management to automount an S3 bucket via s3fs and boom. I feel the support for write-locking would prevent corruption, even though the system I use doesn't overwrite files, just renames them by appending _N. Any pros/cons to this solution?
@timurb commented on GitHub (Apr 3, 2014):
The first thing that comes to my mind is that it could be slow.
If you keep large files (several GB) on S3, the delay before an actual read/write starts can be several minutes while the file is downloaded from or uploaded to S3.
@curiosity26 commented on GitHub (Apr 3, 2014):
No, nothing that large. Typical web files: images, PHP files. It's going to be via a Drupal installation, and I will be using MongoDB for session and front-end caching. There might be some videos, but I've been trying to move those to YouTube and stream them that way.
@ggtakec commented on GitHub (Apr 4, 2014):
Hi, curiosity26
I think you can specify an s3fs entry in /etc/fstab for automation.
(If I misunderstand this issue, please let me know.)
You would specify an entry like the example below in /etc/fstab:
s3fs#mybucket /mnt/s3 fuse _netdev,ro,allow_other,... 0 0
Regards,
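For reference, a fuller /etc/fstab sketch along the lines of the entry above (the bucket name and mount point are placeholders, and it assumes credentials are stored in /etc/passwd-s3fs with restrictive permissions):

```
# /etc/fstab — mount the bucket read-only at boot, once networking is up
# s3fs#<bucket>  <mountpoint>  fuse  <options>  0  0
s3fs#mybucket /mnt/s3 fuse _netdev,ro,allow_other,use_cache=/tmp 0 0
```

After editing the file, `sudo mount -a` should apply the entry without a reboot, which is a convenient way to test it before relying on boot-time mounting.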
@curiosity26 commented on GitHub (Apr 4, 2014):
Thanks Takeshi! I was going to do the /etc/fstab mount, so thanks for the specific entry line. And thanks for building this software!
@zetas commented on GitHub (Apr 16, 2014):
I'm also planning on using s3fs for this. I'm migrating a dirty pre-packaged PHP CMS to Elastic Beanstalk, and the lack of a stable store for user-uploaded content led me to this as a way to persist files across the cluster. Let's hope it works.
@curiosity26 commented on GitHub (Apr 22, 2014):
Running into a weird problem; hopefully you can help. I have my S3 bucket set up, and the policy is configured so that the IAM role Webservers can access the objects in the bucket. The IAM role Webservers also has full admin access over S3 buckets (just to take that out of the equation). I also configured a user with admin access over S3 buckets to use in the passwd-s3fs file. All of that is configured correctly, but when I try to access the mount I get the following error. There are no logged errors or anything; it only appears when I access the mount.
Here's the command I mount with:
sudo s3fs -o use_cache=/tmp,allow_other,use_rrs,iam_role=arn:aws:iam::############:role/WebServer -d sageagewebroot /media/webroot
Here's the error when I try 'ls /media/webroot':
ls: cannot access webroot: Transport endpoint is not connected
How can I troubleshoot this? I've tried everything in the Wiki, but I don't get any logs or error messages.
Thanks
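One common way to surface diagnostics for a "Transport endpoint is not connected" failure (a sketch, reusing the bucket and mount point from the command above; the `-f`/`-d` foreground and debug flags assume a reasonably recent s3fs-fuse build) is to clear the stale mount and remount in the foreground:

```shell
# A dead FUSE mount must be unmounted before it can be remounted
fusermount -u /media/webroot

# Remount in the foreground (-f) with debug output (-d) so that
# errors print to the terminal instead of being silently dropped
sudo s3fs sageagewebroot /media/webroot -f -d \
    -o use_cache=/tmp,allow_other,use_rrs
```

Run from a second terminal, `ls /media/webroot` should then produce visible request/response output in the foreground session, which usually narrows the failure down to credentials, region, or bucket-name issues.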
@ggtakec commented on GitHub (Apr 26, 2014):
Hi, curiosity26
s3fs supports IAM roles on EC2; please see the following URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
So you should specify only the IAM role name for the aim_role option, for example "aim_role=myrole" (where myrole is the role name).
Please try it.
Regards,
@curiosity26 commented on GitHub (Apr 26, 2014):
OK, I thought it wanted the full ARN. I'll try it with just the name.
Can you tell me if there's an inactivity timeout for the mount connection? My web servers threw a "transport not connected" error after sitting idle for a day.
@ggtakec commented on GitHub (Apr 26, 2014):
s3fs does not keep a connection to the S3 server alive; whenever a client (process) causes s3fs to send a request, it opens a new HTTP(S) connection each time.
So I don't think s3fs causes any problem after the web server has not accessed (sent requests to) the mount point for a while.
And if you use an IAM role, s3fs knows when the token has expired and automatically gets a new token before sending a request.
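As a sanity check that the role (and its refreshable token) is actually visible from the instance, one can query the EC2 instance-metadata service directly (a sketch; 169.254.169.254 is the standard metadata endpoint, and the role name appended to the path is whatever role is attached to the instance):

```shell
# List the IAM role attached to this instance (empty output means no role)
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/

# Fetch the temporary credentials for that role, including the
# Expiration timestamp that s3fs watches to decide when to refresh
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/WebServer
```

If the first call returns nothing, s3fs has no role name to use and cannot refresh tokens either, which points at the instance profile rather than s3fs.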
@curiosity26 commented on GitHub (Apr 26, 2014):
Thanks Takeshi! I'll try and get the IAM role working and see if that helps
prevent the tunnel fail issue.
@zetas commented on GitHub (May 1, 2014):
I'm not sure if this is helpful to anyone, but I have an update on my situation. I tried using s3fs-fuse in a clustered environment on Elastic Beanstalk, but it just wasn't reliable enough. Too many files failed to copy, or failed to become available quickly enough for the CloudFront CDN to pick them up on the user's subsequent page load. I tried tweaking the retries and using a local cache; nothing seemed to help.
I ended up scrapping the idea and just going traditional. I set up a NAS using the SoftNAS AWS Marketplace option, and I just mount the NFS share with a script in the Elastic Beanstalk configuration. Every time a new node is added to the cluster, it automatically mounts the NFS share and everything is good to go. Zero issues so far; it's been running under load for about a week.
@tspicer commented on GitHub (Jun 29, 2015):
Just for clarification, in case anyone stumbles upon this (like me): the proper syntax is iam_role="role name" and not aim_role="role name".
Example: -o iam_role="s3access-role"
@rorysavage77 commented on GitHub (Jul 1, 2015):
This works well - confirmed.
s3fs bucketName /target/mount/point -o iam_role="role name"
@ggtakec commented on GitHub (Jul 5, 2015):
@tspicer Thanks for catching my mistake.
You are correct: the option name is iam_role.
Regards,
@gl-lamhnguyen commented on GitHub (Oct 23, 2015):
If I want to mount on boot by editing the /etc/fstab file, where should I put the iam_role option?
@lvandeyar commented on GitHub (Dec 17, 2015):
I can't seem to set up /etc/fstab with an IAM role. Is there official documentation anywhere?
@ggtakec commented on GitHub (May 14, 2016):
@lvandeyar I updated the iam_role option in #411.
This change added an "auto" value for the iam_role option, which gets the IAM role name from the instance metadata on your EC2 instance.
So if the metadata is available at boot time, I think it will work well.
Please try it.
Thanks in advance for your help.
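Tying the thread together, an /etc/fstab entry using the "auto" value mentioned above might look like this (a sketch; the bucket name and mount point are placeholders, iam_role goes in the options column alongside the other mount options, and iam_role=auto assumes the instance metadata is reachable at boot):

```
# /etc/fstab — mount at boot using the instance's attached IAM role,
# so no passwd-s3fs credentials file is needed on the node
s3fs#mybucket /mnt/s3 fuse _netdev,allow_other,iam_role=auto 0 0
```

Because _netdev defers the mount until networking is up, the metadata service should be reachable by the time s3fs asks for the role name.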