mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 05:16:00 +03:00
[GH-ISSUE #1162] s3fs crashes with segmentation fault error in amazon linux #608
Originally created by @ggegiya1 on GitHub (Sep 30, 2019).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1162
Additional Information
Unable to mount an S3 bucket locally using s3fs on an Amazon Linux AWS EC2 instance:
The s3fs mount command exits quietly, although an error message is reported in dmesg:
Version of s3fs being used (s3fs --version)
Amazon Simple Storage Service File System V1.85 (commit:1db94a0) with OpenSSL
Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)
2.9.4
Kernel information (uname -r)
4.14.138-89.102.amzn1.x86_64 #1 SMP Thu Aug 15 15:41:44 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
GNU/Linux Distribution, if applicable (cat /etc/os-release)
NAME="Amazon Linux AMI"
VERSION="2018.03"
ID="amzn"
ID_LIKE="rhel fedora"
VERSION_ID="2018.03"
PRETTY_NAME="Amazon Linux AMI 2018.03"
ANSI_COLOR="0;33"
CPE_NAME="cpe:/o:amazon:linux:2018.03:ga"
HOME_URL="http://aws.amazon.com/amazon-linux-ami/"
s3fs command line used, if applicable
/etc/fstab entry, if applicable
s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)
if you execute s3fs with the dbglevel and curldbg options, you can get detailed debug messages
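For reference, a foreground debug run of the kind the template asks for might look like this (a sketch; the bucket name and mountpoint are placeholders):

```
# Stay in the foreground (-f) and emit verbose s3fs and libcurl logs.
s3fs mybucket /mnt/s3fs -f -o dbglevel=dbg -o curldbg
```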
Details about issue
Unable to mount an S3 bucket locally using s3fs on an Amazon Linux AWS EC2 instance:
The s3fs mount command exits quietly, although an error message is reported in dmesg:
s3fs is built successfully from the git master branch:
@jphilipdevorigin commented on GitHub (Oct 1, 2019):
Hi,
We are having the same issue for any new environment we launch for our system based on s3fs-fuse.
One thing I tried in addition to the above is checking out an old version from around 6 months ago before installing; however, this made no difference and we get the same issue:
git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse
git checkout 895d5006bb
./autogen.sh
./configure
make
make install
An instance in an environment that was deployed 26 days ago is still working correctly, however when we increased our desired auto scaling instances so a new instance spun up it has the same problem, which suggests the problem is with creating new mounts - existing mounts are not affected.
Thanks
@iptizer commented on GitHub (Oct 1, 2019):
Hi,
same problem up here. The following version IS affected:
We fixed it by checking out & compiling from tag v1.85. So the following version IS NOT affected:
@cloud4t0r commented on GitHub (Oct 1, 2019):
Hi,
Same issue for me on the master branch, but re-compiling from the last tag is working.
git clone https://github.com/s3fs-fuse/s3fs-fuse.git s3fs-fuse
cd s3fs-fuse
git checkout tags/v1.85
./autogen.sh
./configure --prefix=/usr
make && make install => OK
@ggegiya1 commented on GitHub (Oct 1, 2019):
Thank you @iptizer , @Superbezo
I recompiled s3fs using the tag instead of master:
git checkout tags/v1.85
Everything is working fine now!
@iptizer commented on GitHub (Oct 2, 2019):
Uhm, but this doesn't solve the problem. Don't you want to keep this open? When a new release is tagged you probably want this to be fixed, don't you?
@cloud4t0r commented on GitHub (Oct 2, 2019):
Yes, it IS just a workaround but the bug still persists in the master branch; please reopen this issue.
@ggegiya1 commented on GitHub (Oct 2, 2019):
Reopening the issue as the bug still exists in the master branch
@MattFenelon commented on GitHub (Oct 7, 2019):
I believe https://github.com/s3fs-fuse/s3fs-fuse/issues/1164 is the same issue.
@pradeepnnv commented on GitHub (Oct 15, 2019):
Thanks for the workaround @iptizer , @Superbezo
@MattFenelon commented on GitHub (Oct 15, 2019):
As per #1164 - I've found that using the commit before github.com/s3fs-fuse/s3fs-fuse@58b3cce320 (github.com/s3fs-fuse/s3fs-fuse@81102a5963) fixes the issue for me.
@gaul commented on GitHub (Oct 24, 2019):
Could you run this under gdb and share the backtrace of the SIGABRT? Alternatively Valgrind should show the same thing.
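For anyone unsure how to produce that backtrace, one non-interactive way is gdb's batch mode (a sketch; the bucket name and mountpoint are placeholders):

```
# Run s3fs in the foreground under gdb and print a backtrace
# automatically once it hits the fatal signal.
sudo gdb -batch -ex run -ex bt \
    --args s3fs -f -d mybucket /mnt/s3fs -o dbglevel=info -o curldbg
```

Installing debug symbols for glibc, OpenSSL, and libcurl beforehand (e.g. via debuginfo-install on yum-based systems) makes the backtrace far more readable.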
@moonspell79 commented on GitHub (Oct 25, 2019):
Any news when the problem will be solved? We are running s3fs in production and new EC2 in ASG cannot work properly.
@hc1jshea commented on GitHub (Oct 25, 2019):
@gaul I think I have been experiencing the same error. I documented the valgrind output in my report: https://github.com/s3fs-fuse/s3fs-fuse/issues/1180
@CarlosDomingues commented on GitHub (Nov 7, 2019):
I had this issue as well. Solved by checking out tag v1.85 as advised by others. Details:
OS: Amazon Linux 2.0
s3fs version: current master / 3e7b728800c2670d4deeff8ce8be0bc2fc42d98d
Behaviour: I was able to mount buckets without errors. However, when trying any operation on the mounted folder (cd, ls) I got the following error:
My dmesg outputs:
@gaul commented on GitHub (Nov 7, 2019):
@moonspell79 @CarlosDomingues See https://github.com/s3fs-fuse/s3fs-fuse/issues/1162#issuecomment-546129528.
@gaul commented on GitHub (Feb 3, 2020):
Closing due to inactivity. Please reopen if symptoms persist.
@bramevo commented on GitHub (Feb 20, 2020):
This issue still exists.
#dmesg
[190695.504116] s3fs[29439]: segfault at 0 ip 00007f095a00e746 sp 00007f09533c3558 error 4 in libc-2.23.so[7f0959f83000+1c0000]
#uname -r
4.14.158-129.185.amzn2.x86_64
#s3fs --version
Amazon Simple Storage Service File System V1.86 (commit:b72f4b4) with OpenSSL
So do we really have to downgrade? Really need to figure out a fix for this.
@gaul commented on GitHub (Jun 23, 2020):
@bramevo Please run using gdb or Valgrind so we can trace the cause of the error.
@crbunney commented on GitHub (Sep 15, 2020):
I've also been running into this issue (or at least this symptom). Can you provide any instructions on how to run using gdb?
Or how to resolve this problem when using gdb:
(command was sudo gdb --args s3fs -f -d "mybucket" '/mnt/s3fs' -o ecs -o curldbg)
@gaul commented on GitHub (Sep 16, 2020):
After running gdb --args s3fs -f -d "mybucket" '/mnt/s3fs' -o ecs -o curldbg you need to type run. s3fs will continue until it hits a fatal condition. After this, type bt. You should also run the suggested debuginfo-install command beforehand. Thanks for your help!
@crbunney commented on GitHub (Sep 17, 2020):
Here's the output of gdb --args s3fs -f -d -d -o f2 -o curldbg -o ecs pr-107-sesar-data-processin-stagingbucketforbatch-1semb98e6andr /mnt/s3fs:
Here's the info about the system I was running on:
s3fs version: Amazon Simple Storage Service File System V1.87 (commit:194262c) with OpenSSL
fuse version: 2.9.2
kernel info: 4.14.193-149.317.amzn2.x86_64
Distro info:
Logged output from /var/log/messages:
Hopefully this helps you diagnose the issue! It's prevented me from getting s3fs set up on this system and unfortunately I'm going to have to move on, but if it helps you find and fix an issue, then at least it wasn't for nothing :)
@gaul commented on GitHub (Sep 17, 2020):
I have little familiarity with IAM and ECS but it appears that you are running with the latter due to the -o ecs flag. s3fs is crashing since the environment variable AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is not set. It looks like the ECS agent should set this automatically. I can change s3fs to more robustly check the environment and not crash with a NULL string, but this may not resolve the root cause, which is the missing environment variable.