[GH-ISSUE #964] s3fs: segfault error 4 in libc-2.27.so #539

Closed
opened 2026-03-04 01:46:29 +03:00 by kerem · 18 comments
Owner

Originally created by @woodcoder on GitHub (Feb 22, 2019).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/964

I'm seeing s3fs use very high CPU under load after a few days of running -- after switching on debug logging it segfaulted. Details below.

The other symptom is occasionally ending up with `Transport endpoint is not connected` errors. In all cases restarting the mount recovers the situation (although occasionally I get left with `target is busy` errors and have to `umount -l` to get things back).

Additional Information

The following information is very important in order to help us to help you. Omitting these details may delay your support request or cause it to receive no attention at all.
Keep in mind that the commands we provide to retrieve information are oriented to GNU/Linux distributions, so you may need to use different ones if you run s3fs on macOS or BSD.

Version of s3fs being used (s3fs --version)

Amazon Simple Storage Service File System V1.84(commit:unknown) with OpenSSL

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)

2.9.7

Kernel information (uname -r)

4.15.0-1029-aws

GNU/Linux Distribution, if applicable (cat /etc/os-release)

Ubuntu 18.04.1 LTS \n \l

systemd mount options

[Mount]
What=s3fs#bucket-name
Where=/mnt/bucket-name
Type=fuse
Options=allow_other,use_sse=1,iam_role=bucket-name-iam-role,host=https://s3.amazonaws.com,use_cache=/tmp/bucket-name-cache,retries=5,dbglevel=debug
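
For context, a `[Mount]` section like the one above normally lives inside a full systemd mount unit. A minimal sketch of such a unit follows -- the `[Unit]` and `[Install]` sections here are illustrative additions, not taken from the report:

```ini
# /etc/systemd/system/mnt-bucket\x2dname.mount
# (the unit file name must be the systemd-escaped mount path)

[Unit]
Description=s3fs mount for bucket-name
After=network-online.target
Wants=network-online.target

[Mount]
What=s3fs#bucket-name
Where=/mnt/bucket-name
Type=fuse
Options=allow_other,use_sse=1,iam_role=bucket-name-iam-role,host=https://s3.amazonaws.com,use_cache=/tmp/bucket-name-cache,retries=5,dbglevel=debug

[Install]
WantedBy=multi-user.target
```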

s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)

Feb 22 06:37:26 localhost s3fs[27354]: s3fs.cpp:s3fs_getattr(867): [path=/uploads/5ighisf7m42a/images] uid=33, gid=33, mode=40775
Feb 22 06:37:26 localhost s3fs[27354]: [path=/][mask=X_OK ]
Feb 22 06:37:26 localhost s3fs[27354]: s3fs.cpp:check_object_access(594): [path=/]
Feb 22 06:37:26 localhost s3fs[27354]: s3fs.cpp:get_object_attribute(447): [path=/]
Feb 22 06:37:26 localhost s3fs[27354]: [path=/uploads/5ighisf7m42a/images/download.jpeg]
Feb 22 06:37:26 localhost s3fs[27354]: s3fs.cpp:check_parent_object_access(700): [path=/uploads/5ighisf7m42a/images/download.jpeg]
Feb 22 06:37:26 localhost s3fs[27354]: s3fs.cpp:check_object_access(594): [path=/uploads/5ighisf7m42a/images]
Feb 22 06:37:26 localhost s3fs[27354]: s3fs.cpp:get_object_attribute(447): [path=/uploads/5ighisf7m42a/images]
Feb 22 06:37:26 localhost s3fs[27354]: cache.cpp:GetStat(276): stat cache hit [path=/uploads/5ighisf7m42a/images/][time=4351371.503592130][hit count=1086]
Feb 22 06:37:26 localhost s3fs[27354]: s3fs.cpp:check_object_access(594): [path=/uploads/5ighisf7m42a]
Feb 22 06:37:26 localhost s3fs[27354]: s3fs.cpp:get_object_attribute(447): [path=/uploads/5ighisf7m42a]
Feb 22 06:37:26 localhost s3fs[27354]: cache.cpp:GetStat(276): stat cache hit [path=/uploads/5ighisf7m42a/][time=4351371.483592839][hit count=2414]
Feb 22 06:37:26 localhost s3fs[27354]: s3fs.cpp:check_object_access(594): [path=/uploads]
Feb 22 06:37:26 localhost s3fs[27354]: s3fs.cpp:get_object_attribute(447): [path=/uploads]
Feb 22 06:37:26 localhost s3fs[27354]: cache.cpp:GetStat(276): stat cache hit [path=/uploads/][time=4351336.868821382][hit count=3809]
Feb 22 06:37:26 localhost s3fs[27354]: s3fs.cpp:check_object_access(594): [path=/]
Feb 22 06:37:26 localhost s3fs[27354]: s3fs.cpp:get_object_attribute(447): [path=/]
Feb 22 06:37:26 localhost s3fs[27354]: s3fs.cpp:check_object_access(594): [path=/uploads/5ighisf7m42a/images/download.jpeg]
Feb 22 06:37:26 localhost s3fs[27354]: s3fs.cpp:get_object_attribute(447): [path=/uploads/5ighisf7m42a/images/download.jpeg]
Feb 22 06:37:26 localhost s3fs[27354]: cache.cpp:GetStat(276): stat cache hit [path=/uploads/5ighisf7m42a/images/download.jpeg][time=4359996.428966772][hit count=4]
Feb 22 06:37:26 localhost s3fs[27354]: fdcache.cpp:ExistOpen(2118): [path=/uploads/5ighisf7m42a/images/download.jpeg][fd=-1][ignore_existfd=false]
Feb 22 06:37:26 localhost s3fs[27354]: fdcache.cpp:Open(2050): [path=/uploads/5ighisf7m42a/images/download.jpeg][size=-1][time=-1]
Feb 22 06:37:26 localhost s3fs[27354]: s3fs.cpp:s3fs_getattr(867): [path=/uploads/5ighisf7m42a/images/download.jpeg] uid=0, gid=0, mode=100644
Feb 22 06:37:26 localhost s3fs[27354]: [path=/uploads/5ighisf7m42a/images/download.jpeg][mask=R_OK ]
Feb 22 06:37:26 localhost s3fs[27354]: s3fs.cpp:check_object_access(594): [path=/uploads/5ighisf7m42a/images/download.jpeg]
Feb 22 06:37:26 localhost s3fs[27354]: s3fs.cpp:get_object_attribute(447): [path=/uploads/5ighisf7m42a/images/download.jpeg]
Feb 22 06:37:26 localhost s3fs[27354]: cache.cpp:GetStat(276): stat cache hit [path=/uploads/5ighisf7m42a/images/download.jpeg][time=4359996.428966772][hit count=5]
Feb 22 06:37:26 localhost s3fs[27354]: [path=/uploads/5ighisf7m42a/images/download.jpeg][flags=32768]
Feb 22 06:37:26 localhost s3fs[27354]:       delete stat cache entry[path=/uploads/5ighisf7m42a/images/download.jpeg]
Feb 22 06:37:26 localhost s3fs[27354]: s3fs.cpp:check_parent_object_access(700): [path=/uploads/5ighisf7m42a/images/download.jpeg]
Feb 22 06:37:26 localhost s3fs[27354]: s3fs.cpp:check_object_access(594): [path=/uploads/5ighisf7m42a/images]
Feb 22 06:37:26 localhost s3fs[27354]: s3fs.cpp:get_object_attribute(447): [path=/uploads/5ighisf7m42a/images]
Feb 22 06:37:26 localhost s3fs[27354]: cache.cpp:GetStat(276): stat cache hit [path=/uploads/5ighisf7m42a/images/][time=4351371.503592130][hit count=1087]
Feb 22 06:37:26 localhost s3fs[27354]: s3fs.cpp:check_object_access(594): [path=/uploads/5ighisf7m42a]
Feb 22 06:37:26 localhost s3fs[27354]: s3fs.cpp:get_object_attribute(447): [path=/uploads/5ighisf7m42a]
Feb 22 06:37:26 localhost s3fs[27354]: cache.cpp:GetStat(276): stat cache hit [path=/uploads/5ighisf7m42a/][time=4351371.483592839][hit count=2415]
Feb 22 06:37:26 localhost s3fs[27354]: s3fs.cpp:check_object_access(594): [path=/uploads]
Feb 22 06:37:26 localhost s3fs[27354]: s3fs.cpp:get_object_attribute(447): [path=/uploads]
Feb 22 06:37:26 localhost s3fs[27354]: cache.cpp:GetStat(276): stat cache hit [path=/uploads/][time=4351336.868821382][hit count=3810]
Feb 22 06:37:26 localhost s3fs[27354]: s3fs.cpp:check_object_access(594): [path=/]
Feb 22 06:37:26 localhost s3fs[27354]: s3fs.cpp:get_object_attribute(447): [path=/]
Feb 22 06:37:26 localhost s3fs[27354]: s3fs.cpp:check_object_access(594): [path=/uploads/5ighisf7m42a/images/download.jpeg]
Feb 22 06:37:26 localhost s3fs[27354]: s3fs.cpp:get_object_attribute(447): [path=/uploads/5ighisf7m42a/images/download.jpeg]
Feb 22 06:37:26 localhost s3fs[27354]:       [tpath=/uploads/5ighisf7m42a/images/download.jpeg]
Feb 22 06:37:26 localhost s3fs[27354]:       [tpath=/uploads/5ighisf7m42a/images/download.jpeg][bpath=][save=][sseckeypos=-1]
Feb 22 06:37:26 localhost s3fs[27354]: curl.cpp:GetHandler(285): Get handler from pool: 31
Feb 22 06:37:26 localhost s3fs[27354]:       URL is https://s3.amazonaws.com/bucket-name/uploads/5ighisf7m42a/images/download.jpeg
Feb 22 06:37:26 localhost s3fs[27354]:       URL changed is https://bucket-name.s3.amazonaws.com/uploads/5ighisf7m42a/images/download.jpeg
Feb 22 06:37:26 localhost s3fs[27354]:       computing signature [HEAD] [/uploads/5ighisf7m42a/images/download.jpeg] [] []
Feb 22 06:37:26 localhost s3fs[27354]:       url is https://s3.amazonaws.com
Feb 22 06:37:26 localhost s3fs[27354]: curl.cpp:RequestPerform(2045): connecting to URL https://bucket-name.s3.amazonaws.com/uploads/5ighisf7m42a/images/download.jpeg
Feb 22 06:37:26 localhost s3fs[27354]:       HTTP response code 200
Feb 22 06:37:26 localhost s3fs[27354]: curl.cpp:ReturnHandler(309): Return handler to pool: 31
Feb 22 06:37:26 localhost s3fs[27354]:       add stat cache entry[path=/uploads/5ighisf7m42a/images/download.jpeg]
Feb 22 06:37:26 localhost s3fs[27354]: cache.cpp:GetStat(276): stat cache hit [path=/uploads/5ighisf7m42a/images/download.jpeg][time=4394799.895610764][hit count=0]
Feb 22 06:37:26 localhost s3fs[27354]: s3fs.cpp:get_object_attribute(447): [path=/uploads/5ighisf7m42a/images/download.jpeg]
Feb 22 06:37:26 localhost s3fs[27354]: cache.cpp:GetStat(276): stat cache hit [path=/uploads/5ighisf7m42a/images/download.jpeg][time=4394799.895610764][hit count=1]
Feb 22 06:37:26 localhost s3fs[27354]: fdcache.cpp:Open(2050): [path=/uploads/5ighisf7m42a/images/download.jpeg][size=1676][time=1549567125]
Feb 22 06:37:26 localhost kernel: [4394955.981904] show_signal_msg: 15 callbacks suppressed
Feb 22 06:37:26 localhost kernel: [4394955.981907] s3fs[27428]: segfault at 7fd400000002 ip 00007fd4732ac207 sp 00007fd453ffe890 error 4 in libc-2.27.so[7fd473215000+1e7000]

Details about issue

sudo apport-unpack /var/crash/_usr_bin_s3fs.0.crash s3fscrash
cd s3fscrash
gdb `cat ExecutablePath` CoreDump
Core was generated by `s3fs bucket-name /mnt/bucket-name -o rw,allow_'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  tcache_get (tc_idx=19) at malloc.c:2943
2943	malloc.c: No such file or directory.
[Current thread is 1 (Thread 0x7fd453fff700 (LWP 27428))]
(gdb) bt
#0  tcache_get (tc_idx=19) at malloc.c:2943
#1  __GI___libc_malloc (bytes=328) at malloc.c:3050
#2  0x00007fd473ad01a8 in operator new(unsigned long) () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#3  0x000056484b09a76e in FdManager::Open (this=0x56484b2c3a60 <FdManager::singleton>, 
    path=0x7fd448027480 "/uploads/5ighisf7m42a/images/download.jpeg", pmeta=0x7fd453ffe9f0, size=1676, time=1549567125, 
    force_tmpfile=<optimized out>, is_create=true, no_fd_lock_wait=false) at fdcache.cpp:2087
#4  0x000056484b052ff2 in s3fs_open (path=0x7fd448027480 "/uploads/5ighisf7m42a/images/download.jpeg", fi=0x7fd453ffeca0)
    at s3fs.cpp:2099
#5  0x00007fd474887a40 in fuse_fs_open () from /lib/x86_64-linux-gnu/libfuse.so.2
#6  0x00007fd474887b22 in ?? () from /lib/x86_64-linux-gnu/libfuse.so.2
#7  0x00007fd474891f9c in ?? () from /lib/x86_64-linux-gnu/libfuse.so.2
#8  0x00007fd4748916c1 in ?? () from /lib/x86_64-linux-gnu/libfuse.so.2
#9  0x00007fd47488de68 in ?? () from /lib/x86_64-linux-gnu/libfuse.so.2
#10 0x00007fd47360d6db in start_thread (arg=0x7fd453fff700) at pthread_create.c:463
#11 0x00007fd47333688f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
kerem closed this issue 2026-03-04 01:46:29 +03:00

@woodcoder commented on GitHub (Feb 22, 2019):

May be related to #805, #547 and #759.


@gaul commented on GitHub (Feb 22, 2019):

Looks like heap corruption -- could you try running s3fs under Valgrind?


@woodcoder commented on GitHub (Feb 22, 2019):

That might be possible (it's a live service) -- how would I build s3fs to provide the most useful info? At the moment our s3fs install process is:

./autogen.sh
./configure --prefix=/usr
make
sudo make install

And then rather than using the systemd unit I presume we just run s3fs under valgrind from the command line (with the same options)?


@gaul commented on GitHub (Feb 23, 2019):

The default configuration will likely work but you may help by disabling optimization but retaining debug symbols:

CXXFLAGS='-O0 -g' ./configure

Testing against master will make it easier to correlate. Valgrind does impose CPU overhead but if s3fs is IO bound as it usually is then Valgrind should not slow access too much. We would really appreciate your assistance here since this issue affects several people and we cannot reproduce it ourselves.


@woodcoder commented on GitHub (Feb 25, 2019):

Hi!

Here're the results of a period of running s3fs under valgrind (`sudo valgrind --leak-check=full --log-file=valgrind.log s3fs...`).

This is the 1.84 s3fs version as downloaded from the releases and built as above. We didn't see any segfault (yet?) but I wondered if this [valgrind.log](https://github.com/s3fs-fuse/s3fs-fuse/files/2902801/valgrind.log) was what you were looking for?

I can run it again, but let me know if I need to use any further parameters (or update the source to master).

Many thanks!


@gaul commented on GitHub (Mar 3, 2019):

@woodcoder This output is very helpful! I am still puzzling over why this happens -- do you know which sequence of operations triggers it? Also which flags do you provide to s3fs?


@woodcoder commented on GitHub (Mar 3, 2019):

The flags are at the top of the logfile (although bucket-name isn't the real bucket name):
`-o rw,allow_other,use_sse=1,iam_role=bucket-name-iam-role,host=https://s3.amazonaws.com,use_cache=/tmp/bucket-name-cache,retries=5,dev,suid`

I'm not sure of the exact sequence of operations that triggers it -- however the filesystem is used for uploading image files, so in general it's a case of uploading an image, sometimes creating a thumbnail, and then lots of reads for downloads. I would imagine there might be some directory listing (to ensure unique filenames) and file attribute checking (to ensure up-to-dateness, before serving files) going on too.

The only other info that might be relevant is that:

  • the s3fs mount is being accessed via a clamfs mount, and
  • there's also a crontab deleting files from the cache after a few days.

The crontab is running `/usr/bin/find /tmp/bucket-name-cache -type f -daystart -mtime +5 -delete`.
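
That cron job just deletes cache files untouched for more than five days. As a rough illustration only (not code from this thread), the same pruning logic can be sketched in Python; note this sketch skips `find`'s `-daystart` rounding to midnight:

```python
import os
import tempfile
import time

def prune_cache(cache_dir, max_age_days=5):
    """Delete cached files whose mtime is older than max_age_days.

    A rough Python equivalent of
    `find /tmp/bucket-name-cache -type f -mtime +5 -delete`
    (without -daystart's midnight rounding)."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for root, _dirs, files in os.walk(cache_dir):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getmtime(path) < cutoff:
                os.remove(path)
                removed.append(name)
    return removed

# Demo on a throwaway directory: one stale file, one fresh file.
d = tempfile.mkdtemp()
old = os.path.join(d, "old.jpeg")
new = os.path.join(d, "new.jpeg")
for p in (old, new):
    open(p, "w").close()
os.utime(old, (time.time() - 10 * 86400,) * 2)  # backdate mtime by 10 days
print(prune_cache(d))  # → ['old.jpeg']
```

One relevant design detail: this prunes by mtime only, so a cache file that s3fs still holds open can be unlinked underneath it, which is the kind of interaction worth keeping in mind when a cleanup job runs against a live `use_cache` directory.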

Version 1.83, while memory hungry, doesn't seem to suffer the same problem (at least on Ubuntu 16.04 LTS -- we've rolled back to this version now to see if it's stable on Ubuntu 18.04 LTS too).


@gaul commented on GitHub (Mar 4, 2019):

I successfully reproduced these symptoms with a simple concurrent test creating and removing files. However, the reference counting and locking discipline confuses me and I will need some more time to investigate.
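
The exact reproducer isn't shown in the thread, but a concurrent create/remove workload of the kind described can be sketched as follows (the directory name and worker counts are arbitrary; pointing `target` at an s3fs mount instead of a temp directory would exercise the FUSE path):

```python
import os
import tempfile
import threading

def churn(dirpath, worker_id, iterations=200):
    # Each worker repeatedly creates and removes its own files,
    # mimicking concurrent create/unlink traffic on a mount point.
    for i in range(iterations):
        path = os.path.join(dirpath, f"w{worker_id}-{i}.tmp")
        with open(path, "wb") as f:
            f.write(b"x" * 64)
        os.remove(path)

target = tempfile.mkdtemp()  # replace with the s3fs mount point to stress it
threads = [threading.Thread(target=churn, args=(target, n)) for n in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("leftover files:", len(os.listdir(target)))  # → leftover files: 0
```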


@woodcoder commented on GitHub (Apr 19, 2019):

@gaul @ggtakec I wanted to say thank you for fixing this issue! We've been running 1.85 for a while now and it seems much more stable.

I still have a bit of a feeling that the memory usage is very slowly creeping up, so I ran valgrind again in the same way as above, but with the 1.85 release. It reports one error ([valgrind.log](https://github.com/s3fs-fuse/s3fs-fuse/files/3098528/valgrind.log)) -- I don't know if that's an issue?

Nonetheless, I've not seen any more segfaults so far so thank you!!!


@ggtakec commented on GitHub (Apr 22, 2019):

@woodcoder Thanks for your help.
There may still be problems; I will examine this in detail.


@gaul commented on GitHub (Apr 27, 2019):

@woodcoder I am glad that s3fs works better and the Valgrind feedback you provided suggests that this is not yet completely fixed. Reopening...


@woodcoder commented on GitHub (Jun 8, 2019):

Unfortunately I'm definitely still seeing memory usage grow with 1.85, so I've done some more logging in the hope it will be useful in locating the problem. Following the advice in another issue I tried running it with `valgrind --tool=massif` this time. Logs are below:

[valgrind-massif.log](https://github.com/s3fs-fuse/s3fs-fuse/files/3268466/valgrind-massif.log)
[massif.out.9798.txt](https://github.com/s3fs-fuse/s3fs-fuse/files/3268472/massif.out.9798.txt)
[massif.out.9801.txt](https://github.com/s3fs-fuse/s3fs-fuse/files/3268473/massif.out.9801.txt)

The first time I tried this I actually recreated the segfault, but didn't realise that massif created separate log files, so unfortunately only have the valgrind file for this:

[valgrind-massif.log](https://github.com/s3fs-fuse/s3fs-fuse/files/3268477/valgrind-massif.log)

Finally, I ran it again using `valgrind --leak-check=full` in the hope that this might help:

[valgrind.log](https://github.com/s3fs-fuse/s3fs-fuse/files/3269348/valgrind.log)

These are all running against the 1.85 release compiled from source as above.


@woodcoder commented on GitHub (Aug 13, 2019):

Hi @gaul and @ggtakec

I'm still seeing s3fs memory usage climbing from about 400M resident to 900M+ after several days.

I ran a couple more valgrind massif logging sessions against the latest source (as of 11 Aug 2019, commit ccc79ec139825ee18dcaa05435a8416235e6df52):

[valgrind-massif.log](https://github.com/s3fs-fuse/s3fs-fuse/files/3495987/valgrind-massif.log)
[massif.out.22949.txt](https://github.com/s3fs-fuse/s3fs-fuse/files/3495988/massif.out.22949.txt)
[massif.out.22952.txt](https://github.com/s3fs-fuse/s3fs-fuse/files/3495990/massif.out.22952.txt)

[valgrind-massif.log](https://github.com/s3fs-fuse/s3fs-fuse/files/3495994/valgrind-massif.log)
[massif.out.1411.txt](https://github.com/s3fs-fuse/s3fs-fuse/files/3495995/massif.out.1411.txt)
[massif.out.1413.txt](https://github.com/s3fs-fuse/s3fs-fuse/files/3495997/massif.out.1413.txt)

These logs (and the ones against the 1.85 release in the above comment) only show a fairly short period of time, so I don't know if they show the leak/help at all?

Very happy to run some longer/different valgrind settings to help resolve this.

@gaul commented on GitHub (Sep 4, 2019):

Sorry for the delayed response, but I still do not understand the error. Using `ms_print` shows the top memory consumer is GnuTLS:

```
89.98% (258,177,890B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc.
->34.97% (100,337,936B) 0x9492122: ??? (in /usr/lib/x86_64-linux-gnu/libtasn1.so.6.5.5)
| ->25.04% (71,842,192B) 0x948F49F: ??? (in /usr/lib/x86_64-linux-gnu/libtasn1.so.6.5.5)
| | ->12.89% (36,973,696B) 0x948EA7A: asn1_der_decoding2 (in /usr/lib/x86_64-linux-gnu/libtasn1.so.6.5.5)
| | | ->12.89% (36,973,696B) 0x7424C5A: gnutls_x509_crt_import (in /usr/lib/x86_64-linux-gnu/libgnutls.so.30.14.10)
| | | | ->12.89% (36,973,696B) 0x7428F03: gnutls_x509_crt_list_import (in /usr/lib/x86_64-linux-gnu/libgnutls.so.30.14.10)
| | | | | ->12.89% (36,973,696B) 0x74291E6: gnutls_x509_crt_list_import2 (in /usr/lib/x86_64-linux-gnu/libgnutls.so.30.14.10)
| | | | |   ->12.89% (36,973,696B) 0x7431CB0: gnutls_x509_trust_list_add_trust_mem (in /usr/lib/x86_64-linux-gnu/libgnutls.so.30.14.10)
| | | | |     ->12.89% (36,973,696B) 0x7432145: gnutls_x509_trust_list_add_trust_file (in /usr/lib/x86_64-linux-gnu/libgnutls.so.30.14.10)
| | | | |       ->09.65% (27,689,536B) 0x743240B: ??? (in /usr/lib/x86_64-linux-gnu/libgnutls.so.30.14.10)
| | | | |       | ->09.65% (27,689,536B) 0x7432459: gnutls_x509_trust_list_add_trust_dir (in /usr/lib/x86_64-linux-gnu/libgnutls.so.30.14.10)
| | | | |       |   ->09.65% (27,689,536B) 0x73D24F8: gnutls_certificate_set_x509_trust_dir (in /usr/lib/x86_64-linux-gnu/libgnutls.so.30.14.10)
| | | | |       |     ->09.65% (27,689,536B) 0x50CC356: ??? (in /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.5.0)
| | | | |       |       ->09.65% (27,689,536B) 0x50CE038: ??? (in /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.5.0)
| | | | |       |         ->09.65% (27,689,536B) 0x50807D0: ??? (in /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.5.0)
| | | | |       |           ->09.65% (27,689,536B) 0x508D0F4: ??? (in /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.5.0)
| | | | |       |             ->09.65% (27,689,536B) 0x50A2A44: ??? (in /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.5.0)
| | | | |       |               ->09.65% (27,689,536B) 0x50A39C1: curl_multi_perform (in /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.5.0)
| | | | |       |                 ->09.65% (27,689,536B) 0x50999E2: curl_easy_perform (in /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.5.0)
| | | | |       |                   ->09.65% (27,689,536B) 0x147196: S3fsCurl::RequestPerform() (curl.cpp:2240)
| | | | |       |                     ->09.16% (26,293,264B) 0x1579E2: S3fsMultiCurl::RequestPerformWrapper(void*) (curl.cpp:4269)
| | | | |       |                     | ->09.16% (26,293,264B) 0x60CD6D9: start_thread (pthread_create.c:463)
| | | | |       |                     |   ->09.16% (26,293,264B) 0x640688D: clone (clone.S:95)
| | | | |       |                     |
| | | | |       |                     ->00.49% (1,396,272B) in 1+ places, all below ms_print's threshold (01.00%)
```

100 MB of ASN.1-related data seems unlikely, but the way we use libcurl seems correct. I wonder whether running `s3fs -o no_check_certificate` works around your issue? Testing this might point us in the right direction.
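For reference, a call tree like the one above is produced by feeding one of the `massif.out.<pid>` files to `ms_print`, which ships with valgrind. The pid suffix here is just an example taken from the filenames earlier in the thread:

```shell
# Summarize a massif heap profile; the detailed snapshot section of the
# output contains the allocation tree quoted above. The pid is an example.
ms_print massif.out.9798 | less
```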

@woodcoder commented on GitHub (Sep 5, 2019):

Hi @gaul -- thank you for looking into this! I will give that option a try and let you know if it improves things. Would a valgrind massif log against the latest source be useful for you?

I see what you mean about 100MB of ASN data seeming unlikely! I'm using IAM and SSE options -- is that relevant at all (or does that all happen outside curl)?

@woodcoder commented on GitHub (Jan 26, 2020):

Hi @gaul and @ggtakec

Here's a further valgrind massif logging session against the source (as of 8 Sep 2019, commit 81102a5963):

[valgrind-massif.log](https://github.com/s3fs-fuse/s3fs-fuse/files/4113601/valgrind-massif.log)
[massif.out.19228.txt](https://github.com/s3fs-fuse/s3fs-fuse/files/4113602/massif.out.19228.txt)
[massif.out.19230.txt](https://github.com/s3fs-fuse/s3fs-fuse/files/4113603/massif.out.19230.txt)

This is using the `no_check_certificate` option. I'm not convinced it solves the problem. Please let me know if I can try any other options/logging with different versions etc. to help get to the bottom of this.

@woodcoder commented on GitHub (Jan 26, 2020):

BTW, while getting set up for the above valgrind run I was also seeing the `Transport endpoint is not connected` error (mentioned in #1228) from the 23 Sep 2019 commit 58b3cce320 onwards. If I build any version from source after that commit I get a segfault on startup, and the mount point requires a `umount -l` to clear.

Here's the valgrind logging from that version in case it's useful:
[valgrind-massif.log](https://github.com/s3fs-fuse/s3fs-fuse/files/4113819/valgrind-massif.log)
[massif.out.17875.txt](https://github.com/s3fs-fuse/s3fs-fuse/files/4113820/massif.out.17875.txt)
[massif.out.17877.txt](https://github.com/s3fs-fuse/s3fs-fuse/files/4113821/massif.out.17877.txt)

@woodcoder commented on GitHub (Mar 28, 2020):

This issue has been confirmed to be caused by excessive memory usage in curl's GnuTLS backend; see https://github.com/curl/curl/issues/5102.

The workaround is to use a libcurl built against OpenSSL instead of GnuTLS.
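As a quick check before rebuilding, the first line of `curl --version` names the TLS backend the installed libcurl was built with (the version string below is illustrative, not from this system):

```shell
# The first line names the TLS backend, e.g. (illustrative):
#   curl 7.58.0 (x86_64-pc-linux-gnu) libcurl/7.58.0 GnuTLS/3.5.18 ...
# A build reporting OpenSSL instead of GnuTLS avoids this leak.
curl --version | head -n 1
```

Note that the curl CLI reports the libcurl *it* links against; on Debian/Ubuntu, where both `libcurl4` (OpenSSL) and `libcurl3-gnutls` variants exist, what matters is the libcurl that s3fs loads, which `ldd $(which s3fs) | grep curl` will show.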
