Mirror of https://github.com/s3fs-fuse/s3fs-fuse.git (synced 2026-04-25 13:26:00 +03:00)
[GH-ISSUE #152] How to fix permanently transport end point is not connected in S3fs #88
Originally created by @mitosistech on GitHub (Mar 16, 2015).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/152
I have one S3 bucket and two EC2 instances. I have mounted the S3 bucket on the EC2 instances, and our app accesses all the files in the bucket through the mount. Lately, s3fs is frequently failing with "Transport endpoint is not connected". To fix this I unmount and remount the S3 bucket on the EC2 instance, but I need the root cause of this issue: why is it happening so frequently? Please help me.
@rehoehle commented on GitHub (Mar 17, 2015):
I have the same problem; it's killing my mount point. Is there a solution? It's very annoying.
@mitosistech, which version do you use?
@mitosistech commented on GitHub (Mar 17, 2015):
I have used s3fs 1.74
@rehoehle commented on GitHub (Mar 17, 2015):
ok :( i have used the newest version 1.78.. hmm
@mitosistech commented on GitHub (Mar 17, 2015):
Do you also get the same "Transport endpoint is not connected" error?
@rehoehle commented on GitHub (Mar 17, 2015):
Yes same problem
@mitosistech commented on GitHub (Mar 17, 2015):
If I unmount and then mount again, it works, but that is not a permanent fix. If the same problem keeps coming back many times, that is not good, so I am searching for a permanent solution. I don't think this is a version problem.
@boazrf commented on GitHub (Mar 18, 2015):
Some observations:
In any case - this is indeed an annoying issue that needs attention.
@mitosistech commented on GitHub (Mar 18, 2015):
Can you please tell me where this multireq_max parameter is set?
@rehoehle commented on GitHub (Mar 18, 2015):
I think you can set that parameter in your fstab file; I have tested it now.
Use `allow_other,multireq_max=5 0 0` at the end of the entry.
I don't know if it's correct, but it's working.
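For reference, a complete /etc/fstab entry of the kind being described might look like this (the bucket name and mountpoint are placeholders, not values from this thread):

```
# Hypothetical /etc/fstab entry mounting an S3 bucket via s3fs
your_bucket_name /mnt/s3storage fuse.s3fs _netdev,allow_other,multireq_max=5 0 0
```

The `_netdev` option delays mounting until the network is up, which matters for boot-time mounts on EC2.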
@mitosistech commented on GitHub (Mar 18, 2015):
But I have already set user_allow_other in fuse.conf.
@boazrf commented on GitHub (Mar 18, 2015):
I'm setting it when running s3fs from command line:
/usr/local/bin/s3fs [bucket] [mount-point] -o multireq_max=5
@mitosistech commented on GitHub (Mar 19, 2015):
I have used this command
$ sudo s3fs your_backet_name -o use_cache=/tmp -o allow_other /mnt/s3storage
you are telling ,I will give command like this
$ sudo s3fs your_backet_name -o use_cache=/tmp -o allow_other -o multireq_max=5 /mnt/s3storage
Am I right?
Have you know , how one S3 bucket will work for mount local folder of two EC2 instance?
@darrencruse commented on GitHub (Mar 19, 2015):
Just FWIW, we've been migrating a bunch of PHP content from some old Apache servers to Amazon S3 with s3fs, and I'd had this problem 2 or 3 times in the last few months.
But on Tuesday we changed our configuration to roughly double our request load, and this problem now happens about 2 or 3 times per day.
Glad to see the suggestion about multireq_max; am getting ready to try it.
Update: It seems to have helped, woo hoo!!!! No dropped mount points since I lowered the multireq_max setting to 5!!
Update to the update: It seems I'd woo-hoo'ed prematurely; I did have a mount point drop on me again yesterday even with the multireq_max setting. I wonder if anybody's running a monitor on their mount points to automatically detect and remount them? I think that's where I'm headed; not sure exactly how yet, any tips appreciated...
Does anybody know, though: if multireq_max refers to the number of simultaneous requests to Amazon S3, and the default is 20 but we reduce it to 5...
Will this hurt performance? i.e. response times?
Update: still curious about this; anybody know if lowering this hurts performance?
@darrencruse commented on GitHub (Apr 1, 2015):
I just finished putting a watchdog on this mount point that we keep losing - thought I'd share it in case it helps others here.
I used this blog as a guide it was helpful:
http://blog.eracc.com/2010/05/08/linux-monitor-a-service-with-a-watchdog-script/
Here's my script I put in root's crontab:
Hope this helps others here,
Darren
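The script body was not preserved above, so here is a minimal sketch of a crontab watchdog in the spirit of the one Darren describes. The path, bucket name, and mount options are all illustrative assumptions, not his original script.

```shell
#!/bin/sh
# Hypothetical s3fs watchdog (illustrative only). Run it from root's
# crontab, e.g. every minute:
#   * * * * * /usr/local/bin/s3fs-watchdog.sh /mnt/s3storage your_bucket_name

MOUNTPOINT="${1:-/}"            # mount point to check (demo default: /)
BUCKET="${2:-your_bucket_name}" # bucket to remount (placeholder)

# A live mount appears as the second field of a line in /proc/mounts.
is_mounted() {
    grep -qs " $1 " /proc/mounts
}

if is_mounted "$MOUNTPOINT"; then
    echo "$MOUNTPOINT is mounted"
else
    echo "$MOUNTPOINT is NOT mounted, remounting"
    if command -v s3fs >/dev/null 2>&1; then
        # Detach the stale FUSE endpoint first, then remount.
        fusermount -u "$MOUNTPOINT" 2>/dev/null
        s3fs "$BUCKET" "$MOUNTPOINT" -o allow_other -o multireq_max=5
    fi
fi
```

A dropped s3fs mount often still appears "busy", which is why the `fusermount -u` cleanup step comes before the remount.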
@dev-yohan commented on GitHub (Apr 7, 2015):
seems to work fine @darrencruse, thanks for the script!
@ggtakec commented on GitHub (Apr 12, 2015):
#167 may have solved this issue.
If the cause of this issue is a timeout while connecting/reading/writing, the changed default values may solve it.
If you still get the same error with the latest code, please try setting the connect_timeout and readwrite_timeout options.
Thanks in advance for your help.
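Concretely, those timeout options are passed at mount time. A hypothetical invocation (the bucket name, mountpoint, and timeout values are illustrative, not recommendations from this thread):

```sh
# Raise the network timeouts (in seconds); values here are examples only.
s3fs your_bucket_name /mnt/s3storage -o connect_timeout=300 -o readwrite_timeout=120 -o allow_other
```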
@chrisschaub commented on GitHub (Apr 14, 2015):
I'm using the latest master and can get the connect to work by specifying keys directly. But, using -o iam_role=somerole does not work, always get Transport endpoint is not connected. Does the iam_role stuff work?
@ggtakec commented on GitHub (Apr 18, 2015):
@chrisschaub I think the "iam_role" option works without problems.
Note that this option takes a "role name" parameter; please check the name and your role settings in the console.
Thanks in advance for your help.
@jackyq2015 commented on GitHub (Aug 20, 2015):
Thanks for the watchdog script. It's great!
Looking forward to a fix in the code base, though.
@Fei-Guang commented on GitHub (Jun 2, 2016):
This issue happened today; the version is ossfs_1.79.8_ubuntu14.04_amd64.
@stefandelaet commented on GitHub (Jan 12, 2017):
I have the same issue with s3fs version 1.80, using CentOS 7 on an EC2 instance. I am mounting via the default `bucket_name mount_point fuse.s3fs _netdev 0 0` entry in /etc/fstab.
It was running fine for 5 hours, then suddenly the mount was gone.
@ggtakec commented on GitHub (Jan 15, 2017):
@stefandelaet
Please try specifying the readwrite_timeout (and connect_timeout) options; the result may help us solve this issue.
And if you can, please use the latest version from the master branch (like #506).
Thanks in advance for your help.
@mgla commented on GitHub (Feb 10, 2017):
I am experiencing this issue with s3fs v1.80 (commit d40da2c, which is the newest version).
Mount options: `_netdev,allow_other,iam_role=auto,umask=0000`
readwrite_timeout is 60 seconds (the default, although the README.md states 30 is the default).
connect_timeout is 300 seconds (the default).
I found a rather old pull request (https://github.com/s3fs-fuse/s3fs-fuse/pull/167) which seemingly tried to address this issue, but I suppose it did not work.
Is there anything I can do to assist with this issue? I believe this bug was first reported in early 2015.
@davidfischer-ch commented on GitHub (May 3, 2017):
Hello, I implemented a watcher that tries to read a file inside the s3fs "directory" to detect any disconnection and then remounts the target on error. The check runs every second by default. Since then, I have never encountered the issue. I thought it was worth sharing my experiment.
@ClemensSchneider commented on GitHub (Jul 11, 2017):
Is there any logfile we could check to give a hint for the reason of the sudden disconnects?
@b0ku1 commented on GitHub (Aug 5, 2017):
Did anyone get to the bottom of this? Thanks.
@etwillbefine commented on GitHub (Aug 10, 2017):
Using latest s3fs master: for us it was a memory issue. I enabled debug output using
`-d` and found the relevant entries in the syslog (not in the messages log file as documented). Trying now to solve it following this comment; will update this post when it seems to be solved.
We are not interested in high performance / caching anyway for our use case. If you are, you might want to follow a different approach.
@ggtakec commented on GitHub (Mar 30, 2019):
We have kept this issue open for a long time.
We have released new version 1.86, which fixes some problems (bugs).
Please use the latest version.
I will close this, but if the problem persists, please reopen it or post a new issue.
@TheRealAnubis commented on GitHub (Apr 22, 2019):
Sorry for the thread necromancy, but for me the issue seemed to be caused by the updatedb daemon triggering the OOM killer on s3fs. Adding my S3 mountpoint to the PRUNEPATHS section of updatedb.conf seemed to resolve the issue.
@ggtakec commented on GitHub (Apr 22, 2019):
@TheRealAnubis The same point was mentioned in https://github.com/s3fs-fuse/s3fs-fuse/issues/417#issuecomment-291850522 .
We need to be careful about updatedb.
Thanks for your kindness.
@gaul We should describe this in the wiki.
I tried to add an FAQ to the wiki.
Please let me know if there is a problem.
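The updatedb exclusion @TheRealAnubis describes can be expressed like this in /etc/updatedb.conf (the mountpoint path is a placeholder; the other pruned paths are typical distro defaults):

```
# /etc/updatedb.conf — keep updatedb/mlocate from crawling the s3fs mount
PRUNEPATHS="/tmp /var/spool /media /mnt/s3storage"
# Alternatively, prune by filesystem type:
PRUNEFS="fuse.s3fs"
```

Without this, a nightly updatedb run walks the entire bucket through FUSE, which can balloon s3fs memory use.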
@vcombey commented on GitHub (Jan 3, 2021):
I got random disconnects using: https://github.com/efrecon/docker-s3fs-client/.
I'm using version 1.86.
issue link: https://github.com/efrecon/docker-s3fs-client/issues/8
@rohit-the-coder commented on GitHub (Feb 5, 2021):
I am facing this issue too. Has anybody found the permanent solution to it?
@gaul commented on GitHub (Apr 21, 2021):
Please test with the latest version 1.89 and open a new issue if symptoms persist.
@aarcro commented on GitHub (May 9, 2021):
I just hit it on 1.89, with a large (4.5 TB and growing) Wasabi bucket. Unmount/mount only recovered it briefly (a minute or two). I removed use_cache and it is no longer disconnecting (but it is using more bandwidth). Prior to the first instance of the failure I had hit 98% disk use and removed some older files from the use_cache dir.
@devopseze commented on GitHub (May 13, 2021):
Hi @aarcro, can you please elaborate on your solution?
@aarcro commented on GitHub (May 13, 2021):
Ultimate solution was to switch to goofys
@wcheek commented on GitHub (Apr 9, 2025):
I've been seeing the same random disconnects (after ~1 month) using mount-s3. Getting my inspiration from @darrencruse above, I made a working script solution for mount-s3 and Amazon Linux 2.