[GH-ISSUE #2356] After running for several days, the error "Transport endpoint is not connected" frequently occurs. #1161
Originally created by @jestiny0 on GitHub (Oct 22, 2023).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2356
Additional Information

- Version of s3fs being used (`s3fs --version`):
- Version of fuse being used (`pkg-config --modversion fuse`, `rpm -qi fuse`, or `dpkg -s fuse`):
- Kernel information (`uname -r`): 5.10.196-185.743.amzn2.aarch64
- GNU/Linux Distribution, if applicable (`cat /etc/os-release`):
- How to run s3fs, if applicable:

```sh
mkdir /mnt-s3 && echo mypassword > /passwd-s3fs && chmod 600 /passwd-s3fs && \
  s3fs mys3bucket /mnt-s3 -o passwd_file=/passwd-s3fs -o stat_cache_expire=30 -o nosscache -o nodnscache
```

Details about issue
My online service, which mounts an AWS S3 bucket using the s3fs command above, runs stably for a period of time. However, after a few days, a large number of "Transport endpoint is not connected" errors occur, and the only solution is to restart the service. I reviewed previous issues and found that someone resolved this problem by adding the `-o nodnscache` option. That option did solve the problem for a while, but the issue has recently recurred. Is there a better solution available?
Note: I have several other online services that use the same command and are currently running stably.
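(For reference, recovering a wedged FUSE mount by hand looks roughly like the sketch below; the mount point and options are taken from the command above.)

```sh
# Any access to the broken mount fails with, e.g.:
#   ls: cannot access '/mnt-s3': Transport endpoint is not connected
# Unmount the dead endpoint (lazy unmount as a fallback), then remount:
fusermount -u /mnt-s3 2>/dev/null || umount -l /mnt-s3
s3fs mys3bucket /mnt-s3 -o passwd_file=/passwd-s3fs \
  -o stat_cache_expire=30 -o nosscache -o nodnscache
```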
@jestiny0 commented on GitHub (Nov 1, 2023):
Can someone please help take a look at this problem? It keeps happening every few days lately. Thank you so much in advance!
@nguyenminhdungpg commented on GitHub (Nov 3, 2023):
@jestiny0 Hi, I also get the error "Transport endpoint is not connected" quite often, but I cannot figure out exactly when it happens, so I use a workaround that may also help you.
I uploaded a small sentinel text file to the bucket, e.g. named "s3fs_connection_status.txt", with simple content like "s3fs connection status". Then, on the VM, I created a cronjob that runs every 5 minutes, cats the file from the mounted folder, and checks whether the content is "s3fs connection status". If it is not, I re-mount the bucket by unmounting and mounting it again.
The cronjob setup looks like:

```
*/5 * * * * root run-one ./check_s3fs_connection_and_fix.sh >> /var/log/s3fs-mountpoint-status.log
```

The cronjob script file check_s3fs_connection_and_fix.sh looks like the sketch below.
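A minimal sketch of the check script, reconstructed from the description above (the mount point, bucket name, and mount options are assumptions based on the original issue):

```sh
#!/bin/bash
# check_s3fs_connection_and_fix.sh -- minimal sketch; the paths, bucket
# name, and mount options below are assumptions, adjust to your setup.
MOUNTPOINT=/mnt-s3
SENTINEL="$MOUNTPOINT/s3fs_connection_status.txt"
EXPECTED="s3fs connection status"

# Read the sentinel file; a wedged mount fails with
# "Transport endpoint is not connected" and returns nothing.
CONTENT=$(cat "$SENTINEL" 2>/dev/null)

if [ "$CONTENT" != "$EXPECTED" ]; then
    echo "$(date -Is) s3fs mount broken, remounting"
    # Unmount the dead endpoint first (lazy unmount as a fallback).
    fusermount -u "$MOUNTPOINT" 2>/dev/null || umount -l "$MOUNTPOINT"
    s3fs mys3bucket "$MOUNTPOINT" -o passwd_file=/passwd-s3fs \
        -o stat_cache_expire=30 -o nosscache -o nodnscache
else
    echo "$(date -Is) s3fs mount OK"
fi
```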
In this script, you can add a timestamp to the echo messages and set up an observability stack to trace the s3fs log and the cronjob log. Based on those logs, you can add alerts to make failures noticeable.
@jestiny0 commented on GitHub (Nov 7, 2023):
@nguyenminhdungpg
Thank you for your suggestion. I plan to adopt a similar approach to yours, periodically checking and remounting. However, I still hope an official fix can provide a better resolution.
@nguyenminhdungpg commented on GitHub (Nov 7, 2023):
@jestiny0 I also hope so, but for now the workaround does its job quite well. Sometimes I get a Slack notification that the bucket has just been remounted, maybe twice in one random night, and sometimes there is no notification for 3 weeks...
@alphaxvzf commented on GitHub (Apr 4, 2024):
I had the same problem: after a while, the s3fs-mounted directory got disconnected. In my case, the cause was that s3fs had used all the free space on the device for its cache. Cleaning the cache or rebooting the device (in my case the cache was in /tmp, which is cleared on reboot) solved the problem for a while, so it is better to either avoid using the s3fs cache or put it on a larger disk.
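One way to keep the cache from filling the disk is to point it at a dedicated directory and set a free-space floor with s3fs's `use_cache` and `ensure_diskfree` options. A sketch, where the cache path and the 2048 MB threshold are assumptions:

```sh
# Mount with an explicit cache directory and a free-space floor.
# ensure_diskfree keeps at least the given number of MB free on the
# cache device; del_cache deletes cache files on startup/exit.
mkdir -p /var/cache/s3fs
s3fs mys3bucket /mnt-s3 -o passwd_file=/passwd-s3fs \
    -o use_cache=/var/cache/s3fs -o ensure_diskfree=2048 -o del_cache
```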
@Allan-Nava commented on GitHub (Jan 21, 2025):
Any good solutions?