[GH-ISSUE #441] After a few requests s3fs gets stuck #238

Closed
opened 2026-03-04 01:43:35 +03:00 by kerem · 4 comments

Originally created by @ekrako on GitHub (Jun 27, 2016).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/441

I am using s3fs, sharing the mount via unfs3.
After a few requests, s3fs gets stuck on requests, and to resolve it I have to kill the process and remount the bucket.

/etc/fstab:

```
#s3fs#pelephone-cdn-healthcheck /mnt/s3/healthcheck fuse allow_other,use_cache=/cache,url=https://s3.amazonaws.com,passwd_file=/root/.s3credentials,default_acl=public-read 0 0
```
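For reference, this fstab entry corresponds to the following manual mount command (a sketch, not from the report; same bucket, mountpoint, and options as above, with each fstab option mapped to `-o`):

```sh
# Hypothetical command-line equivalent of the fstab line above.
s3fs pelephone-cdn-healthcheck /mnt/s3/healthcheck \
    -o allow_other \
    -o use_cache=/cache \
    -o url=https://s3.amazonaws.com \
    -o passwd_file=/root/.s3credentials \
    -o default_acl=public-read
```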

/var/log/messages: [s3fserror.txt](https://github.com/s3fs-fuse/s3fs-fuse/files/334758/s3fserror.txt)

kerem closed this issue 2026-03-04 01:43:35 +03:00

@ggtakec commented on GitHub (Jul 18, 2016):

@ekrako
I could not find any s3fs problem in your log.
Could you try running s3fs with the `dbglevel` and `curldbg` options, in the foreground (the `-f` option)?
That way we can get more information about what s3fs is doing, which will help us solve this issue.
(Please note that the debug output can be large.)
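A sketch of the suggested debug invocation, reusing the reporter's mount options from the fstab line above (the `dbglevel` value and log path are assumptions; s3fs accepts crit/err/warn/info/dbg):

```sh
# Run in the foreground (-f) with verbose s3fs and libcurl logging.
s3fs pelephone-cdn-healthcheck /mnt/s3/healthcheck -f \
    -o dbglevel=info \
    -o curldbg \
    -o allow_other -o use_cache=/cache \
    -o url=https://s3.amazonaws.com \
    -o passwd_file=/root/.s3credentials \
    -o default_acl=public-read 2>&1 | tee /tmp/s3fs-debug.log
```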


@strk commented on GitHub (Nov 16, 2017):

I'm having a similar experience: `strace` shows the process stuck at `read(3`, and everything blocks in a bad way (I cannot even kill the process). During that blocking period you cannot even `ls` the directory *containing* the mountpoint directory (the `ls`, and the shell invoking it, block as well).

My experience was with version 1.80 of the software; I haven't tried 1.82 yet.

An interesting aspect is that the stuck process is reading from a file which has already been fully downloaded into the s3fs cache, so I don't understand why it needs to block at all. Running the same process directly on the cache file works fine, so is the cache not being used?
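For anyone hitting the same hang, attaching `strace` to the running daemon is one way to capture the blocking call described above (a sketch; assumes a single s3fs process on the host and a writable `/tmp`):

```sh
# Attach to the live s3fs process and follow all of its threads;
# a stuck thread typically shows an unfinished read()/futex() call.
sudo strace -f -p "$(pidof s3fs)" -o /tmp/s3fs-strace.log
```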


@ggtakec commented on GitHub (Mar 30, 2019):

We have kept this issue open for a long time.

s3fs can operate on cache files simultaneously from different threads, and it uses exclusive locking to control this; I suspect the blocking reported here comes from that locking.
We have released version 1.86, which fixes some bugs and slightly changes this logic.
Please try the latest version.

I will close this, but if the problem persists, please reopen or post a new issue.
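Building that release from source follows the project's standard autotools flow (a sketch based on the s3fs-fuse build instructions; the exact tag name is an assumption):

```sh
git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse
git checkout v1.86   # release tag mentioned above (assumed name)
./autogen.sh
./configure
make
sudo make install
s3fs --version       # confirm the upgrade took effect
```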


@camway commented on GitHub (Jun 29, 2020):

I realize this is an older issue, but it seems to be the only match for the problem I'm experiencing. Essentially, after s3fs has run for a few minutes, it seems to deadlock: all usage stats drop to near zero and it never recovers. I can't `ls` the directory it's mounted in, I can't kill the running command, and it never completes. The only way I've found to shut it down is by killing the Docker container it's running in.

This happens with the latest version in Ubuntu's repo (1.82), so I downloaded and compiled the project manually (1.86), which behaves identically.

Can anyone give me a direction to try? I'm really trying to avoid having to hand-code the gap s3fs was filling for me.
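Not a fix for the deadlock itself, but when the mountpoint is wedged like this, a lazy unmount can sometimes release the hang without killing the whole container (a sketch; the mountpoint path is taken from the original report and is an assumption here):

```sh
# Detach the FUSE mount lazily; already-blocked processes
# may still need to be killed afterwards.
fusermount -uz /mnt/s3/healthcheck || sudo umount -l /mnt/s3/healthcheck
```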
