Mirror of https://github.com/s3fs-fuse/s3fs-fuse.git (synced 2026-04-25 21:35:58 +03:00)
[GH-ISSUE #750] s3fs crashing on Alpine 3.7 #432
Originally created by @tobsch on GitHub (Apr 17, 2018).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/750
Additional Information
s3fs is crashing after a while on Alpine 3.7 with the following message in dmesg:
traps: s3fs[29880] general protection ip:7f8e452b085b sp:7f8e454c9a58 error:0 in ld-musl-x86_64.so.1[7f8e4528d000+89000]
Version of s3fs being used (s3fs --version)
1.8.3
Version of fuse being used (pkg-config --modversion fuse)
fusermount version: 2.9.7
System information (uname -r)
4.4.111+
Distro (cat /etc/issue)
Welcome to Alpine Linux 3.7
s3fs command line used (if applicable)
/etc/fstab entry (if applicable):
s3fs syslog messages (grep s3fs /var/log/syslog, or s3fs outputs)
If you execute s3fs with the dbglevel and curldbg options, you can get detailed debug messages.
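As a sketch, the debug options mentioned above can be passed on the s3fs command line (the bucket name and mount point here are placeholders, not taken from this report):

```shell
# Mount with verbose s3fs debug output plus libcurl debugging,
# running in the foreground so messages go to the terminal.
# "mybucket" and /mnt/s3 are illustrative; adjust to your setup.
s3fs mybucket /mnt/s3 -f -o dbglevel=info -o curldbg
```

Running with `-f` (foreground) keeps the log on stdout/stderr, which is often easier to capture inside a container than syslog.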
Details about issue
It's a bit hard to tell what makes it crash; it just happens randomly after a few minutes.
Any ideas on that?
@tobsch commented on GitHub (Apr 26, 2018):
This really is a pain for us and seems to be an issue with musl libc.
@ggtakec any ideas / hints?
@ggtakec commented on GitHub (May 2, 2018):
@tobsch
If you can capture s3fs logs, please try enabling them with the dbglevel option (e.g. "-o dbglevel=info -o curldbg").
We might be able to get hints about what s3fs was doing just before crashing, and about the operation/command being executed when it crashed.
We may also get hints by removing s3fs options one by one (such as use_cache) and trying to reproduce the crash.
Thanks in advance for your assistance.
@tobsch commented on GitHub (May 2, 2018):
@ggtakec it's unfortunately very hard to debug as it's running on k8s containers etc.
A few days ago I was able to reproduce it. The trigger was a file on GCS that several different servers tried to log to. We removed the "write heavy" scenario, and now it seems to work.
Does that ring any bell? Tobias
@ggtakec commented on GitHub (May 2, 2018):
@tobsch The problem may be caused by s3fs's mutual-exclusion (locking) handling.
What is the rate (and size, and degree of parallelism) of your "write heavy" scenario?
I will investigate whether I can reproduce it here.
@tobsch commented on GitHub (May 2, 2018):
@ggtakec I would say below 60 writes per minute. With a bit of luck there could have been a "collision" with two writes per second. That's why I put "write heavy" in quotes ;-)
@tobsch commented on GitHub (May 2, 2018):
If you end up with no ideas, I could re-enable the logging?
@ggtakec commented on GitHub (May 3, 2018):
@tobsch Thanks for the information.
Please try logging if it is possible.
I hope that will lead to a solution, although the log alone may not be enough to understand the problem.
Thanks in advance for your kindness.
@gaul commented on GitHub (Jan 26, 2019):
@tobsch Could you attach gdb and share the backtrace from the crash? Note that this may be due to musl shortcomings.
@gaul commented on GitHub (Jun 25, 2019):
Closing due to inactivity.
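For reference, the backtrace requested above could be captured along these lines (a sketch; it assumes gdb is available in the container and that a single s3fs process is running):

```shell
# Attach gdb to the running s3fs process and dump backtraces for
# all threads, then detach. The pidof lookup is illustrative and
# assumes exactly one s3fs process.
gdb -p "$(pidof s3fs)" -batch -ex "thread apply all bt"
```

Alternatively, enabling core dumps (`ulimit -c unlimited`) and loading the resulting core file into gdb after the crash would give the same backtrace without having to catch the process while it is alive.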