[GH-ISSUE #254] s3fs is leaking memory for me #131

Closed
opened 2026-03-04 01:42:26 +03:00 by kerem · 22 comments
Owner

Originally created by @monken on GitHub (Aug 27, 2015).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/254

I installed s3fs from master like this

git clone https://github.com/s3fs-fuse/s3fs-fuse
cd s3fs-fuse/
git checkout cfdfecb4d17b52cdd88faaea53c64f098d0cceff
./autogen.sh
./configure --prefix=/usr --with-nss
make
make install

on an AWS Linux EC2 instance. libcurl is linked against NSS on this machine.

I have mounted a bucket like this

s3fs -o url=https://s3-eu-west-1.amazonaws.com -o use_sse=1 -o default_acl=bucket-owner-full-control -o enable_content_md5 -o endpoint=eu-west-1 testbucket testbucket

The bucket has roughly 4000 files in it. Each run of find testbucket/ | wc -l grows the s3fs process by ~60 MB.

Any advice would be highly appreciated!
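A minimal harness for measuring this (a sketch; the mountpoint name and loop count are placeholders) would run the listing repeatedly and log the s3fs resident set size after each pass:

```shell
#!/bin/sh
# Sketch: repeat the directory walk and log s3fs memory after each pass.
# MOUNTPOINT is a placeholder for the actual mount path.
MOUNTPOINT=testbucket

rss_kb() {                       # resident set size of a PID, in kB
    ps -o rss= -p "$1" | tr -d ' '
}

if pid=$(pidof s3fs 2>/dev/null); then
    for run in 1 2 3 4 5; do
        find "$MOUNTPOINT" >/dev/null
        echo "run $run: RSS $(rss_kb "$pid") kB"
    done
else
    echo "s3fs is not running"
fi
```

If RSS climbs by a similar amount on every pass, that points at a per-request leak rather than one-time cache growth.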

kerem closed this issue 2026-03-04 01:42:26 +03:00

@gaul commented on GitHub (Aug 27, 2015):

@monken Can you provide ps u output before and after your find command? I measured before, after one run, and after a second run against a bucket with 700 objects:

$ ps u `pidof s3fs`
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
gaul     31252  1.3  0.0 358172  7672 pts/28   Sl+  13:49   0:00 src/s3fs

$ ps u `pidof s3fs`
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
gaul     31252  1.1  0.1 1775412 9996 pts/28   Sl+  13:49   0:01 src/s3fs

$ ps u `pidof s3fs`
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
gaul     31252  0.8  0.1 1775412 9996 pts/28   Sl+  13:49   0:01 src/s3fs

Note that RSS only grew by 2 MB while VSZ grew by 140 MB. The latter is not actually in use, just virtual address space reserved by the process.
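To watch only the figure that matters for swapping and the OOM killer, the two columns can be sampled without the rest of the ps table (a sketch; assumes s3fs is already running):

```shell
#!/bin/sh
# Print only VSZ and RSS (both in kB) for the running s3fs process.
# RSS is actual resident memory; VSZ is mostly reserved address space.
if pid=$(pidof s3fs 2>/dev/null); then
    ps -o vsz=,rss= -p "$pid"
else
    echo "s3fs is not running"
fi
```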


@monken commented on GitHub (Aug 28, 2015):

Right, the virtual address space is not what I'm concerned about. The RSS grew by ~60 MB to the point where the server started swapping and the s3fs process was eventually killed.


@gaul commented on GitHub (Aug 31, 2015):

@monken Sorry I cannot reproduce your symptoms. Can you provide a test case, for example, something like:

for i in `seq 4000`; do touch /path/to/mountpoint/$i; done
ps u `pidof s3fs`
find /path/to/mountpoint
ps u `pidof s3fs`

@thilanga commented on GitHub (Sep 4, 2015):

I faced the same problem with s3fs 1.79 today:

[58667.246623] Out of memory: Kill process 2085 (s3fs) score 889 or sacrifice child
[58667.252140] Killed process 2085 (s3fs) total-vm:4669852kB, anon-rss:3708776kB, file-rss:0kB

@ggtakec commented on GitHub (Sep 13, 2015):

@monken @thilanga
Please see the following issue on Google Code (the old s3fs project):
https://code.google.com/p/s3fs/issues/detail?id=314#c19

When s3fs runs with libcurl built against NSS, libcurl 7.21.4 or later should be used; with versions older than 7.21.4 we have confirmed a memory leak. Please check your libcurl version against that report.
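A quick way to check this (a sketch; ver_ge relies on sort -V from GNU coreutils for version-string comparison) is to pull the version out of curl -V and compare it against the 7.21.4 threshold:

```shell
#!/bin/sh
# Sketch: report the libcurl version and TLS backend in use, and compare
# the version against the 7.21.4 threshold mentioned above.
ver_ge() {  # ver_ge A B: succeeds if version A >= version B
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

if command -v curl >/dev/null; then
    curl -V | head -n1
    libcurl_ver=$(curl -V | sed -n 's/^curl \([0-9.][0-9.]*\).*/\1/p')
    if ver_ge "$libcurl_ver" 7.21.4; then
        echo "libcurl $libcurl_ver is at or above 7.21.4"
    else
        echo "libcurl $libcurl_ver is older than 7.21.4 (known NSS leak)"
    fi
fi
```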

Regards,


@monken commented on GitHub (Sep 15, 2015):

The AWS Linux machine is running:

curl 7.40.0 (x86_64-redhat-linux-gnu) libcurl/7.40.0 NSS/3.19.1 Basic ECC zlib/1.2.8 libidn/1.18 libssh2/1.4.2

@tspicer commented on GitHub (Sep 16, 2015):

I have been experiencing this issue today as well. I have JMeter sending the same file for a single test user once a second via FTP to an S3 mount. This eventually causes the memory error described above.

The standard curl installed on CentOS 7 is 7.29, so I upgraded to 7.44 via:

rpm -Uvh http://www.city-fan.org/ftp/contrib/yum-repo/rhel7/x86_64/city-fan.org-release-1-13.rhel7.noarch.rpm

and ..

yum install curl curl-devel -y

This installs..

curl-7.44.0-1.0.cf.rhel7.x86_64
libmetalink.x86_64 0:0.1.2-8.rhel7
libssh2-devel.x86_64 0:1.6.0-2.0.cf.rhel7         
nspr-devel.x86_64 0:4.10.8-1.el7_1
nss-devel.x86_64 0:3.19.1-5.el7_1
nss-softokn-devel.x86_64 0:3.16.2.3-13.el7_1
nss-softokn-freebl-devel.x86_64 0:3.16.2.3-13.el7_1
nss-util-devel.x86_64 0:3.19.1-3.el7_1
libcurl-devel.x86_64 0:7.44.0-1.0.cf.rhel7

per the Wiki...

cd /usr/src/ ;\
    git clone https://github.com/s3fs-fuse/s3fs-fuse ;\
    cd s3fs-fuse/ ;\
    ./autogen.sh ;\
    ./configure --prefix=/usr --with-openssl ;\
    make ;\
    make install

running...

/usr/bin/s3fs -o url=https://s3-external-1.amazonaws.com s3bucket /mnt -o iam_role="role" -o enable_noobj_cache -o stat_cache_expire=30 -o enable_content_md5 -o max_stat_cache_size=50000 -o use_rrs=1 -o use_cache=/ebs/cache -o allow_other -o nonempty -o default_acl=bucket-owner-read

Then check ps u `pidof s3fs`:

ps u `pidof s3fs`
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
foo     19259 18.4 11.7 699356 119764 ?       Ssl  00:05   0:48 /usr/bin/s3fs -o url=https://s3-external-1.amazonaws.com s3bucket /mnt -o iam_role=foo -o enable_noobj_cache -o stat_cache_expire=30 -o enable_content_md5 -o max_stat_cache_size=50000 -o use_rrs=1 -o use_cache=/ebs/cache -o allow_other -o nonempty -o default_acl=bucket-owner-read

ps u `pidof s3fs`
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
foo     19259 18.4 16.8 699364 172360 ?       Ssl  00:05   1:09 /usr/bin/s3fs -o url=https://s3-external-1.amazonaws.com s3bucket /mnt -o iam_role=foo -o enable_noobj_cache -o stat_cache_expire=30 -o enable_content_md5 -o max_stat_cache_size=50000 -o use_rrs=1 -o use_cache=/ebs/cache -o allow_other -o nonempty -o default_acl=bucket-owner-read


ps u `pidof s3fs`
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
foo     19259 18.2 22.1 707556 226284 ?       Ssl  00:05   1:33 /usr/bin/s3fs -o url=https://s3-external-1.amazonaws.com s3bucket /mnt -o iam_role=foo -o enable_noobj_cache -o stat_cache_expire=30 -o enable_content_md5 -o max_stat_cache_size=50000 -o use_rrs=1 -o use_cache=/ebs/cache -o allow_other -o nonempty -o default_acl=bucket-owner-read

@RobbKistler commented on GitHub (Sep 16, 2015):

I find that valgrind is quite useful for investigating problems like this. It should be available in the package repos for any distro.

valgrind --leak-check=full --log-file=s3fs-valgrind.log /usr/bin/s3fs -f ...

The -f option keeps s3fs from forking. If you don't use -f, you can also add --trace-children=yes to the valgrind options.

valgrind will write the final summary log when you kill s3fs or umount the file system.


@tspicer commented on GitHub (Sep 16, 2015):

Ran valgrind and it did not report any errors. However, we started removing and adjusting some of the s3fs options. The one change that seemed to minimize/solve the issue was using url=http:// rather than url=https://.

UPDATE:
We burned this in for most of the day today and have not seen any of the memory issues we encountered when using url=https://.

All settings are the same as previously shared save for this one edit.


@ggtakec commented on GitHub (Sep 28, 2015):

The fact that valgrind did not report any leaks suggests there is no true leak. If memory use is still growing, it probably means the heap is fragmented. To confirm this, please run s3fs with the MALLOC_MMAP_MAX_=0 and MALLOC_TRIM_THRESHOLD_=0 environment variables set, and check ps u as before.
These variables change glibc's allocator behavior so that freed memory is returned to the system immediately.
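A launch command for that experiment might look like this (a sketch; the bucket name and mountpoint are placeholders, and note that glibc reads both variable names with a trailing underscore):

```shell
#!/bin/sh
# Sketch: run s3fs in the foreground with glibc malloc tuned so freed
# memory is returned to the kernel immediately, as suggested above.
# "testbucket" and /path/to/mountpoint are placeholders.
if command -v s3fs >/dev/null; then
    MALLOC_MMAP_MAX_=0 MALLOC_TRIM_THRESHOLD_=0 \
        s3fs testbucket /path/to/mountpoint -f \
        -o url=https://s3-eu-west-1.amazonaws.com
fi
```

If RSS stays flat with these settings but grows without them, fragmentation rather than a leak is the likely culprit.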

Thanks in advance for your help.


@ClemensSchneider commented on GitHub (Jun 3, 2016):

Seeing a constant increase in memory usage on our EC2 machine (Amazon Linux), too (currently using 1.80; same issue with 1.79; we updated because of several memory-leak related fixes in the release notes).
We're performing read operations only (~once per minute) on the mounted bucket and thus mounted it read-only:

s3fs bucket_name /mnt/s3-bucket -o url=https://s3.amazonaws.com,iam_role=our-iam-role,stat_cache_expire=300,max_stat_cache_size=10000,retries=10,allow_other,nonempty,ro

S3FS starts out at ~3 MB and is now using ~687 MB after two days running.

On another EC2 machine (Ubuntu 14.04), where s3fs (still running 1.79) is used far more heavily (continuous read/write operations), we do not see this leak. There, s3fs is started on boot via /etc/fstab:

bucket_name /mnt/s3-bucket fuse.s3fs _netdev,url=https://s3.amazonaws.com,enable_noobj_cache,max_stat_cache_size=10000,stat_cache_expire=300,retries=10,multireq_max=500,use_cache=/var/tmp/s3fs/s3fs_cache,allow_other,rw,nonempty,uid=1000,gid=1000 0 0

We will try to skip using https as suggested by @tspicer and report back here.

Any other suggestions?


@monken commented on GitHub (Jun 5, 2016):

Which versions of Amazon Linux and Ubuntu are you running?


@ClemensSchneider commented on GitHub (Jun 6, 2016):

Amazon Linux: 4.4.8-20.46.amzn1.x86_64 (2016/03)
Ubuntu: Ubuntu Server 14.04.03 LTS

Turning off the https endpoint seems to have fixed the issue for us (after 5 days of uptime only ~20 MB is in use, compared to >600 MB after 2 days of uptime before).

Maybe I just found the reason for the leak: on Amazon Linux, libcurl is linked against NSS/3.19.1, but we built s3fs using --with-openssl. On Ubuntu, libcurl is linked against OpenSSL, which matches our build switch...

@tspicer: Could you check if it's the same in your case?

I will report back again.


@ggtakec commented on GitHub (Jun 12, 2016):

Hi, @ClemensSchneider
If this problem is related to libnss, could you take a look at the following?
https://code.google.com/archive/p/s3fs/issues/314
It contains the earlier investigation of s3fs memory leaks from the old Google Code issue tracker.

Thanks in advance for your help.


@tspicer commented on GitHub (Jun 12, 2016):

@ClemensSchneider I'm using s3fs inside a CentOS 7 Docker container.

curl -V

curl 7.47.1 (x86_64-redhat-linux-gnu) libcurl/7.47.1 NSS/3.19.1 Basic ECC zlib/1.2.7 libidn/1.32 libpsl/0.7.0 (+libicu/50.1.2) libssh2/1.7.0

I'm pulling curl from here:
rpm -Uvh http://www.city-fan.org/ftp/contrib/yum-repo/rhel7/x86_64/city-fan.org-release-1-13.rhel7.noarch.rpm


@ClemensSchneider commented on GitHub (Jun 12, 2016):

Okay, so it seems to be somehow related to NSS.
curl -V on the Amazon Linux machine:
curl 7.40.0 (x86_64-redhat-linux-gnu) libcurl/7.40.0 NSS/3.19.1 Basic ECC zlib/1.2.8 libidn/1.18 libssh2/1.4.2

We are now using non-SSL endpoints on both machines, due to the memory leak on the Amazon Linux machine and huge performance hits on the Ubuntu 14.04 machine.


@tlevi commented on GitHub (Jun 23, 2016):

I am also having problems with leaks, without SSL. It seemed like max_stat_cache_size=0 was better, but I didn't run it long enough to be sure. When I get time I intend to run it through valgrind and see what shakes out.


@treestem commented on GitHub (Jul 12, 2016):

Another source of memory use is the FUSE inode cache. This shows up in incremental valgrind reports ("monitor leak_check full reachable changed") during an overnight run of writes. FUSE didn't start running its cache-cleaning routine until I added the "-o remember=60" option; before that, the FUSE name and ID caches just grew non-stop.

Hmm. Fuse still isn't cleaning up cache entries. It's as though forget isn't getting called, so fuse nodes aren't getting reaped.

You can see the patch I'm testing for this here: https://github.com/libfuse/libfuse/pull/61


@pascalbayer commented on GitHub (Nov 2, 2016):

I'm facing similar issues; memory consumption is growing steadily. In my opinion the issues could be related to:

- https://bugzilla.redhat.com/show_bug.cgi?id=1044666
- https://www.splyt.com/blog/2014-05-16-optimizing-aws-nss-softoken

@treestem commented on GitHub (Nov 11, 2016):

Hi, I have been away and will look at these in a few days. As I recall, libcurl was an issue. More soon.

Peter


@tgmedia-nz commented on GitHub (Feb 20, 2017):

Is there any progress on this?


@ggtakec commented on GitHub (Mar 30, 2019):

We kept this issue open for a long time.
We have now released version 1.86, which fixes several memory leaks.
Please use the latest version.
I will close this, but if the problem persists, please follow #340 or open a new issue.
