[GH-ISSUE #750] s3fs crashing on Alpine 3.7 #432

Closed
opened 2026-03-04 01:45:30 +03:00 by kerem · 9 comments
Owner

Originally created by @tobsch on GitHub (Apr 17, 2018).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/750

Additional Information

s3fs is crashing after a while on alpine 3.7 with the following message in dmesg:
traps: s3fs[29880] general protection ip:7f8e452b085b sp:7f8e454c9a58 error:0 in ld-musl-x86_64.so.1[7f8e4528d000+89000]

Version of s3fs being used (s3fs --version)

1.8.3

Version of fuse being used (pkg-config --modversion fuse)

fusermount version: 2.9.7

System information (uname -r)

4.4.111+

Distro (cat /etc/issue)

Welcome to Alpine Linux 3.7

s3fs command line used (if applicable)

```
s3fs xxx-staging /var/www/html/uploads/ -o max_stat_cache_size=100000 -o parallel_count=20 -o passwd_file=/secrets/s3fs/s3fs-credentials -o use_cache=/tmp -o url=https://storage.googleapis.com -o sigv2 -o nomultipart -o nonempty -o allow_other -o uid=100 -o gid=100 -o umask=0002
```

/etc/fstab entry (if applicable):

s3fs syslog messages (grep s3fs /var/log/syslog, or s3fs outputs)

If you execute s3fs with the dbglevel and curldbg options, you can get detailed debug messages.

Details about issue

It's a bit hard to tell what makes it crash. It just happens randomly after a few minutes.

Any ideas on that?

kerem closed this issue 2026-03-04 01:45:31 +03:00

@tobsch commented on GitHub (Apr 26, 2018):

This really is a pain for us and seems to be an issue with musl libc.
@ggtakec any ideas / hints?


@ggtakec commented on GitHub (May 2, 2018):

@tobsch
If you can capture s3fs logs, please enable logging with the dbglevel and curldbg options (e.g. "-o dbglevel=info -o curldbg").
We might be able to see what s3fs was doing just before crashing,
and get hints about which operation or command was being executed when s3fs crashed.
We may also get hints by removing s3fs options one at a time (such as use_cache) and trying to reproduce the crash.

Thanks in advance for your assistance.
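The suggestion above might look like the following in practice. This is a sketch based on the mount command from the original report; the bucket name, mount point, and credentials path are the reporter's placeholders, and the extra options are the real s3fs debug flags named in the comment:

```shell
# Re-run the reported mount command with verbose s3fs and curl logging,
# staying in the foreground (-f) so debug output goes to the terminal.
s3fs xxx-staging /var/www/html/uploads/ \
    -o passwd_file=/secrets/s3fs/s3fs-credentials \
    -o url=https://storage.googleapis.com \
    -o sigv2 -o nomultipart -o nonempty -o allow_other \
    -o dbglevel=info \
    -o curldbg \
    -f
```

Without -f, the same messages should land in syslog instead, which may be easier to collect inside a container.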


@tobsch commented on GitHub (May 2, 2018):

@ggtakec it's unfortunately very hard to debug as it's running on k8s containers etc.
A few days ago I was able to reproduce it. The trigger was a file on GCS that several different servers tried to write log output to. We removed that "write heavy" scenario, and now it seems to work.

Does that ring any bell? Tobias


@ggtakec commented on GitHub (May 2, 2018):

@tobsch The problem may be caused by s3fs's exclusion (locking) handling.
What is the write rate (and the size and degree of parallelism) of your "write heavy" scenario?
I will investigate whether I can reproduce it here.


@tobsch commented on GitHub (May 2, 2018):

@ggtakec I would say below 60 writes per minute. With a bit of bad luck there could have been a "collision" of two writes in the same second. That's why I put "write heavy" in quotes ;-)


@tobsch commented on GitHub (May 2, 2018):

If you end up with no ideas, I could re-enable the logging?


@ggtakec commented on GitHub (May 3, 2018):

@tobsch Thanks for the information.
Please enable logging if possible.
I hope it will lead to a solution, though the log alone may not be enough to understand the problem.
Thanks in advance for your kindness.


@gaul commented on GitHub (Jan 26, 2019):

@tobsch Could you attach gdb and share the backtrace from the crash? Note that this may be due to musl shortcomings.
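Capturing the requested backtrace might look like the following. This is a sketch, assuming gdb and musl debug symbols can be installed in the Alpine container (the apk package names are assumptions; the mount arguments reuse the reporter's placeholders):

```shell
# Install gdb and musl debug symbols (package names assumed for Alpine).
apk add gdb musl-dbg

# Option 1: attach to the already-running s3fs process, let it continue,
# and wait for the crash; then type "bt full" at the gdb prompt.
gdb -p "$(pidof s3fs)" -ex continue

# Option 2: start s3fs under gdb in the foreground from the beginning.
gdb --args s3fs xxx-staging /var/www/html/uploads/ -f -o allow_other
# In gdb: run
# After the crash: bt full
```

Attaching requires the container to allow ptrace (e.g. the SYS_PTRACE capability on Kubernetes), which may matter for the reporter's setup.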


@gaul commented on GitHub (Jun 25, 2019):

Closing due to inactivity.
