[GH-ISSUE #152] How to permanently fix "Transport endpoint is not connected" in s3fs #88

Closed
opened 2026-03-04 01:41:56 +03:00 by kerem · 37 comments
Owner

Originally created by @mitosistech on GitHub (Mar 16, 2015).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/152

I have one S3 bucket and two EC2 instances. I have mounted the S3 bucket in the EC2 instances, and our app accesses all files in the S3 bucket through the mount. Lately the s3fs error "Transport endpoint is not connected" is occurring frequently. To fix this I unmount and remount the S3 bucket in the EC2 instance, but I need the root cause of this issue. Why does it happen so frequently? Please help me.

kerem closed this issue 2026-03-04 01:41:56 +03:00

@rehoehle commented on GitHub (Mar 17, 2015):

I have the same problem; it's killing my mount point. Is there a solution? It's very annoying.

@mitosistech which version do you use?


@mitosistech commented on GitHub (Mar 17, 2015):

I have used s3fs 1.74


@rehoehle commented on GitHub (Mar 17, 2015):

ok :( i have used the newest version 1.78.. hmm


@mitosistech commented on GitHub (Mar 17, 2015):

Are you also getting the same "Transport endpoint is not connected" error?


@rehoehle commented on GitHub (Mar 17, 2015):

Yes same problem


@mitosistech commented on GitHub (Mar 17, 2015):

If I unmount and mount again then it works, but this is not a permanent fix. If the same problem keeps coming back, that is not good, so I am searching for a permanent solution. I don't think this is a version problem.


@boazrf commented on GitHub (Mar 18, 2015):

Some observations:

  1. The problem seems less frequent when using full debug + foreground mode (`-d -f -o curldbg -o f2`). This might be due to tiny delays caused by writing out the debug messages.
  2. From https://code.google.com/p/s3fs/issues/detail?id=426: it looks like it can be related to the multireq_max parameter. The default is 20, but it is recommended to try a smaller number (e.g. multireq_max=5).

In any case - this is indeed an annoying issue that needs attention.


@mitosistech commented on GitHub (Mar 18, 2015):

Can you please tell me where this multireq_max parameter is set?


@rehoehle commented on GitHub (Mar 18, 2015):

I think you can set that parameter in your fstab file; I have tested it now.

Use `allow_other,multireq_max=5 0 0`.

I don't know if it's correct, but it's working.
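For context, a complete /etc/fstab line combining these options might look like the following sketch; the bucket name and mount point are placeholders, not taken from this thread:

```
# /etc/fstab — hypothetical s3fs entry with the suggested multireq_max
mybucket /mnt/s3storage fuse.s3fs _netdev,allow_other,multireq_max=5 0 0
```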


@mitosistech commented on GitHub (Mar 18, 2015):

But I have set `user_allow_other` in fuse.conf.


@boazrf commented on GitHub (Mar 18, 2015):

I'm setting it when running s3fs from the command line:

```
/usr/local/bin/s3fs [bucket] [mount-point] -o multireq_max=5
```


@mitosistech commented on GitHub (Mar 19, 2015):

I have used this command:

```
$ sudo s3fs your_bucket_name -o use_cache=/tmp -o allow_other /mnt/s3storage
```

You are saying I should give the command like this:

```
$ sudo s3fs your_bucket_name -o use_cache=/tmp -o allow_other -o multireq_max=5 /mnt/s3storage
```

Am I right?

Do you know how one S3 bucket can be mounted on local folders of two EC2 instances?


@darrencruse commented on GitHub (Mar 19, 2015):

Just FWIW, we've been migrating a bunch of PHP content from some old Apache servers to Amazon S3 with s3fs, and I'd had this problem 2 or 3 times in the last few months.

But on Tuesday we changed our configuration to roughly double our request load, and this problem now happens about 2 or 3 times *per day*.

Glad to see the suggestion about multireq_max; I'm getting ready to try it.

Update: It seems to have helped, woo hoo!!!! No dropped mount points since I lowered the multireq_max setting to 5!!

Update to the update: It seems I'd woo-hoo'ed prematurely; I did have a mount point drop on me again yesterday, even with the multireq_max setting. I wonder if anybody's running a monitor on their mount points to automatically detect and remount them? I think that's where I'm headed; not sure exactly how yet, any tips appreciated...

Does anybody know, though: if multireq_max refers to the number of simultaneous requests to Amazon S3, and the default is 20 but we reduce it to 5...

Will this hurt performance, i.e. response times?

Update: still curious about this; anybody know if lowering this hurts performance?


@darrencruse commented on GitHub (Apr 1, 2015):

I just finished putting a watchdog on this mount point that we keep losing - thought I'd share it in case it helps others here.

I used this blog as a guide it was helpful:
http://blog.eracc.com/2010/05/08/linux-monitor-a-service-with-a-watchdog-script/

Here's my script I put in root's crontab:

```
#!/bin/bash
#
# s3fs-watchdog.sh
#
# Run from the root user's crontab to keep an eye on s3fs which should always
# be mounted.
#
# Note:  If getting the amazon S3 credentials from environment variables
#   these must be entered in the actual crontab file (otherwise use one
#   of the s3fs other ways of getting credentials).
#
# Example:  To run it once every minute getting credentials from environment
# variables enter this via "sudo crontab -e":
#
#   AWSACCESSKEYID=XXXXXXXXXXXXXX
#   AWSSECRETACCESSKEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
#   * * * * * /root/s3fs-watchdog.sh
#

NAME=s3fs
BUCKET=<yourbucket>
MOUNTPATH=<yourmountpath>
MOUNT=/bin/mount
UMOUNT=/bin/umount
NOTIFY=<whotoemail>
NOTIFYCC=<whoelsetoemail>
GREP=/bin/grep
PS=/bin/ps
NOP=/bin/true
DATE=/bin/date
MAIL=/bin/mail
RM=/bin/rm

$PS -ef|$GREP -v grep|$GREP $NAME|grep $BUCKET >/dev/null 2>&1
case "$?" in
   0)
   # It is running in this case so we do nothing.
   $NOP
   ;;
   1)
   echo "$NAME is NOT RUNNING for bucket $BUCKET. Remounting $BUCKET with $NAME and sending notices."
   $UMOUNT $MOUNTPATH >/dev/null 2>&1
   $MOUNT $MOUNTPATH >/tmp/watchdogmount.out 2>&1
   NOTICE=/tmp/watchdog.txt
   echo "$NAME for $BUCKET was not running and was started on `$DATE`" > $NOTICE
   $MAIL -n -s "$BUCKET $NAME mount point lost and remounted" -c $NOTIFYCC $NOTIFY < $NOTICE
   $RM -f $NOTICE
   ;;
esac

exit
```

Hope this helps others here,

Darren


@dev-yohan commented on GitHub (Apr 7, 2015):

seems to work fine @darrencruse, thanks for the script!


@ggtakec commented on GitHub (Apr 12, 2015):

#167 may have solved this issue.
If the cause of this issue is a timeout while connecting/reading/writing, the changed default values may solve it.
If you get the same error with the latest code, please try setting the connect_timeout and readwrite_timeout options.
Thanks in advance for your help.
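As a sketch, these timeout options are passed at mount time like any other s3fs option; the bucket, mount point, and values below are illustrative, not recommendations from this thread:

```
# Hypothetical invocation: give up on connects after 10s and on stalled
# reads/writes after 30s, instead of waiting on the defaults.
s3fs mybucket /mnt/s3storage -o connect_timeout=10 -o readwrite_timeout=30
```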


@chrisschaub commented on GitHub (Apr 14, 2015):

I'm using the latest master and can get the connection to work by specifying keys directly. But using `-o iam_role=somerole` does not work; I always get "Transport endpoint is not connected". Does the iam_role stuff work?


@ggtakec commented on GitHub (Apr 18, 2015):

@chrisschaub I think the "iam_role" option is working without problems.
Note that this option needs a "role name" parameter; please check the name and your role settings in the console.
Thanks in advance for your help.


@jackyq2015 commented on GitHub (Aug 20, 2015):

Thanks for the watchdog script. It's great!

Looking forward to a fix in the code base, though.


@Fei-Guang commented on GitHub (Jun 2, 2016):

This issue happened today; the version is ossfs_1.79.8_ubuntu14.04_amd64.


@stefandelaet commented on GitHub (Jan 12, 2017):

I have the same issue using s3fs version 1.80, on CentOS 7 on an EC2 instance. I am mounting via /etc/fstab with the default `bucket-name mount-point fuse.s3fs _netdev 0 0` style entry.
It was running OK for 5 hours, then suddenly the mount was gone.


@ggtakec commented on GitHub (Jan 15, 2017):

@stefandelaet
Please try specifying the readwrite_timeout (and connect_timeout) options; the result may help us solve this issue.
And if you can, please use the latest version from the master branch (like #506).

Thanks in advance for your help.


@mgla commented on GitHub (Feb 10, 2017):

I am experiencing this issue with s3fs v1.180 (commit d40da2c, which is the newest version).
Mount options: _netdev,allow_other,iam_role=auto,umask=0000

readwrite_timeout is 60 seconds (default, although the Readme.md states 30 is the default).
connect_timeout is 300 seconds (default).

I found a rather old pull request (https://github.com/s3fs-fuse/s3fs-fuse/pull/167), which seemingly tried to address this issue, but I suppose it did not work.

Is there anything I can do to assist with this issue? I do believe this bug was first reported in early 2015.


@davidfischer-ch commented on GitHub (May 3, 2017):

Hello, I implemented a watcher that tries to read a file inside the s3fs "directory" to detect any disconnection and then remounts the target in case of error. By default this check runs every second. Since then, I have never encountered the issue. I thought it was worth sharing my experiment.
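A minimal sketch of such a watcher, assuming a mount point of /mnt/s3storage, a canary file inside it, and an fstab entry so that `mount <mountpoint>` works; none of these names come from this thread:

```python
import os
import subprocess
import time

MOUNTPOINT = "/mnt/s3storage"                 # assumed mount point
CANARY = os.path.join(MOUNTPOINT, ".canary")  # assumed canary file

def mount_is_healthy(canary_path: str) -> bool:
    """Return True if the canary file can be opened and read."""
    try:
        with open(canary_path, "rb") as fh:
            fh.read(1)
        return True
    except OSError:
        # ENOTCONN ("Transport endpoint is not connected") lands here too.
        return False

def remount(mountpoint: str) -> None:
    # fusermount -u can detach a FUSE mount the kernel considers dead;
    # the mount call relies on an /etc/fstab entry for the mount point.
    subprocess.run(["fusermount", "-u", mountpoint], check=False)
    subprocess.run(["mount", mountpoint], check=True)

def watch(interval: float = 1.0) -> None:
    """Check the canary every `interval` seconds; remount on failure."""
    while True:
        if not mount_is_healthy(CANARY):
            remount(MOUNTPOINT)
        time.sleep(interval)
```

Run it under a supervisor (cron, systemd, etc.); any failed read of the canary file triggers the remount.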


@ClemensSchneider commented on GitHub (Jul 11, 2017):

Is there any log file we could check to get a hint about the reason for the sudden disconnects?


@b0ku1 commented on GitHub (Aug 5, 2017):

Did anyone get to the bottom of this? Thanks.


@etwillbefine commented on GitHub (Aug 10, 2017):

Using latest s3fs master: for us it was a memory issue. I enabled debug using `-d` and found this in the syslog (not in the messages log file, as documented):

```
kernel: [61022.091965] Out of memory: Kill process 2441 (s3fs) score 569 or sacrifice child
kernel: [61022.095825] Killed process 2441 (s3fs) total-vm:1292244kB, anon-rss:593880kB, file-rss:0kB
```

Trying now to solve it following [this comment](https://github.com/s3fs-fuse/s3fs-fuse/issues/340#issuecomment-195930065). Will update this post when it seems to be solved.
We are not interested in high performance / caching anyway for our use case. If you are, you might want to follow a different approach.


@ggtakec commented on GitHub (Mar 30, 2019):

We kept this issue open for a long time.
We have released the new version 1.86, which fixes some problems (bugs).
Please use the latest version.
I will close this, but if the problem persists, please reopen or post a new issue.


@TheRealAnubis commented on GitHub (Apr 22, 2019):

Sorry for the thread necromancy, but for me the issue seemed to be caused by the `updatedb` daemon triggering the OOM killer on s3fs. Adding my S3 mount point to the `PRUNEPATHS` section of `updatedb.conf` seemed to resolve the issue.
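For illustration, the change described would be a one-line edit to /etc/updatedb.conf; the mount path below is a placeholder, and the `PRUNEFS` line is an alternative assumption (pruning by filesystem type) rather than something stated in this thread:

```
# /etc/updatedb.conf — stop updatedb from crawling the S3 mount
PRUNEPATHS="/tmp /var/spool /media /mnt/s3storage"
# or prune every s3fs mount by filesystem type:
PRUNEFS="fuse.s3fs"
```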


@ggtakec commented on GitHub (Apr 22, 2019):

@TheRealAnubis The same point was mentioned in https://github.com/s3fs-fuse/s3fs-fuse/issues/417#issuecomment-291850522 .
We need to be careful about updatedb.
Thanks for your kindness.

@gaul We should describe this in the wiki.
I have added an [FAQ entry to the wiki](https://github.com/s3fs-fuse/s3fs-fuse/wiki/FAQ#q-becomes-unstable-using-updatedbkicked-oom-killer).
Please let me know if there is a problem.


@vcombey commented on GitHub (Jan 3, 2021):

I got random disconnects using: https://github.com/efrecon/docker-s3fs-client/.
I'm using version 1.86.

issue link: https://github.com/efrecon/docker-s3fs-client/issues/8


@rohit-the-coder commented on GitHub (Feb 5, 2021):

I am facing this issue too. Has anybody found the permanent solution to it?


@gaul commented on GitHub (Apr 21, 2021):

Please test with the latest version 1.89 and open a new issue if symptoms persist.


@aarcro commented on GitHub (May 9, 2021):

I just hit it on 1.89, with a large (4.5 TB and growing) Wasabi bucket. Unmount/mount only recovered it briefly (a minute or two). I removed use_cache and it is no longer disconnecting (but is using more bandwidth). Prior to the first instance of the failure I had hit 98% disk use and removed some older files from the use_cache dir.
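If the cache disk filling up is what kills the mount, an alternative to dropping use_cache entirely is s3fs's ensure_diskfree option, which reserves free space (in MB) on the cache filesystem; this is a hedged sketch with placeholder names and sizes, not something tested in this thread:

```
# Hypothetical: keep the local cache but have s3fs reserve ~10 GB free
s3fs mybucket /mnt/s3storage -o use_cache=/tmp/s3cache -o ensure_diskfree=10240
```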


@devopseze commented on GitHub (May 13, 2021):

Hi @aarcro, can you please elaborate more on your solution?


@aarcro commented on GitHub (May 13, 2021):

> Hi @aarcro, can you please elaborate more on your solution?

The ultimate solution was to switch to goofys.


@wcheek commented on GitHub (Apr 9, 2025):

I've been seeing the same random disconnects (after ~1 month) using [`mount-s3`](https://docs.aws.amazon.com/AmazonS3/latest/userguide/mountpoint.html). Getting my inspiration from @darrencruse [above](https://github.com/s3fs-fuse/s3fs-fuse/issues/152), I made a [working script solution for mount-s3 and Amazon Linux 2](https://gist.github.com/wcheek/c1a1347b660775866e70629bee6b29cc).
