Mirror of https://github.com/s3fs-fuse/s3fs-fuse.git (synced 2026-04-25 05:16:00 +03:00)
[GH-ISSUE #441] after few request s3fs get stuck #238
Originally created by @ekrako on GitHub (Jun 27, 2016).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/441
I am using s3fs, sharing the mount via unfs3.
After a few requests, s3fs gets stuck on requests; to resolve it I have to kill the process and remount the bucket.
/etc/fstab:
s3fs#pelephone-cdn-healthcheck /mnt/s3/healthcheck fuse allow_other,use_cache=/cache,url=https://s3.amazonaws.com,passwd_file=/root/.s3credentials,default_acl=public-read 0 0
/var/log/messages excerpt attached as s3fserror.txt.
@ggtakec commented on GitHub (Jul 18, 2016):
@ekrako
I did not find any s3fs problem in your log.
Could you run s3fs with the dbglevel and curldbg options, in the foreground (-f option)?
That way we can get more information about what s3fs is doing, which will help us solve this issue.
(Please note that the debug output is large.)
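For reference, a foreground debug run along the lines ggtakec asks for might look like this. The bucket name, mountpoint, and credentials path below are placeholders; -f, dbglevel, and curldbg are real s3fs options:

```shell
# Run s3fs in the foreground (-f) with verbose s3fs and libcurl debug output.
# "mybucket", /mnt/s3/mybucket, and the passwd_file path are placeholders;
# substitute the values from your own setup.
s3fs mybucket /mnt/s3/mybucket -f \
    -o dbglevel=dbg \
    -o curldbg \
    -o passwd_file=/root/.s3credentials 2>&1 | tee /tmp/s3fs-debug.log
```

Capturing the output with tee keeps a copy of the (large) debug log to attach to the issue.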
@strk commented on GitHub (Nov 16, 2017):
I'm having a similar experience.
strace shows the process stuck at read(3, ...). Everything blocks in a bad way (I cannot even kill the process), and while it is blocked you cannot even ls the directory containing the mountpoint (ls, and the shell invoking it, block as well). My experience was with version 1.80 of the software; I haven't tried 1.82 yet.
An interesting aspect of this is that the blocked process is reading a file that is already fully downloaded into the s3fs cache, so I don't understand why it needs to block at all. Running the same process directly on the cached file works fine, so why isn't the cache being used?
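A non-blocking way to check whether a mountpoint is hung like this, without running a bare ls that would itself block, is to probe it under a timeout. A minimal sketch, assuming the mountpoint from this report (substitute your own path):

```shell
# Probe a possibly-hung FUSE mountpoint without blocking the shell.
# /mnt/s3/healthcheck is the mountpoint from this report; substitute your own.
MNT="${1:-/mnt/s3/healthcheck}"
timeout 5 stat "$MNT" >/dev/null 2>&1
case $? in
    0)   STATUS="mount responsive" ;;
    124) STATUS="mount hung: stat blocked for 5 seconds" ;;  # timeout(1) exits 124 on expiry
    *)   STATUS="stat failed (mount missing or returned an error)" ;;
esac
echo "$STATUS"
```

Exit code 124 from timeout(1) specifically means stat never returned, which distinguishes a hung FUSE mount from a mountpoint that simply does not exist.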
@ggtakec commented on GitHub (Mar 30, 2019):
This issue has been open for a long time.
s3fs may operate on a cache file from several threads at once, and it uses exclusive locking to coordinate them; I suspect the blocking you saw came from that locking.
We have released version 1.86, which fixes some bugs and slightly changes this logic.
Please try the latest version.
I will close this issue; if the problem persists, please reopen it or post a new issue.
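The exclusive-locking explanation above can be illustrated with a generic lock-order-inversion sketch. This is not s3fs code; it uses flock(1) on two temporary files purely to demonstrate how two workers holding exclusive locks can block each other:

```shell
# Illustrative only (not s3fs code): two workers take two exclusive locks in
# opposite order, the classic lock-order inversion that makes exclusive
# locking block indefinitely. flock -w puts a timeout on the second acquire
# so the demo finishes instead of hanging.
A=$(mktemp); B=$(mktemp)
OUT=$(
    (
        flock -x 9                       # worker 1 locks A first
        sleep 1                          # give worker 2 time to lock B
        flock -w 2 -x 8 \
            && echo "worker1 got both locks" \
            || echo "worker1 gave up waiting for B"
    ) 9>"$A" 8>"$B" &
    (
        flock -x 8                       # worker 2 locks B first
        sleep 1                          # give worker 1 time to lock A
        flock -w 2 -x 9 \
            && echo "worker2 gave up waiting for A" \
            && : || echo "worker2 gave up waiting for A"
    ) 8>"$B" 9>"$A" &
    wait
)
echo "$OUT"
rm -f "$A" "$B"
```

With the sleeps overlapping, each worker typically ends up holding one lock while waiting for the other's, which is exactly the kind of mutual blocking that looks like a frozen mount from the outside.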
@camway commented on GitHub (Jun 29, 2020):
I realize this is an older issue, but it seems to be the only match for the problem I'm experiencing. Essentially, after s3fs has run for a few minutes it seems to deadlock: all usage stats drop to near zero and it never recovers. I can't ls the directory it's mounted in, I can't kill the running command, and it never completes. The only way I've found to shut it down is to kill the Docker container it runs in.
This is happening with the latest version in Ubuntu's repo (1.82), so I downloaded and compiled the project manually (1.86), which behaves identically.
Can anyone give me a direction to try? I'm really trying to avoid having to manually code the gap s3fs was filling for me.