mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #254] s3fs is leaking memory for me #131
Originally created by @monken on GitHub (Aug 27, 2015).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/254
I installed s3fs from master like this
on an AWS Linux EC2 instance. libcurl is linked against NSS on this machine.
I have mounted a bucket like this
The bucket has roughly 4000 files in it. Each `find testbucket/ | wc -l` grows the s3fs process by ~60 MB. Any advice would be highly appreciated!
@gaul commented on GitHub (Aug 27, 2015):
@monken Can you provide `ps u` output before and after your `find` command? I measured before, after one run, and after a second run against a bucket with 700 objects. Note that RSS only grew by 2 MB while VSZ grew by 140 MB. The latter is not actually in use, just virtual address space reserved by the process.
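A minimal sketch of that measurement, assuming a Linux host with procps and `pidof`; the `/mnt/testbucket` path is a placeholder, not the reporter's exact mountpoint:

```shell
# Report the resident set size (RSS, in KB) of the running s3fs process
# before and after a full directory walk.
rss_of() { ps -o rss= -p "$1"; }

if S3FS_PID=$(pidof s3fs); then
  echo "RSS before: $(rss_of "$S3FS_PID") KB"
  find /mnt/testbucket/ | wc -l
  echo "RSS after:  $(rss_of "$S3FS_PID") KB"
else
  echo "s3fs is not running"
fi
```

Comparing the two RSS values across repeated runs shows whether resident memory (not just VSZ) keeps growing.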
@monken commented on GitHub (Aug 28, 2015):
Right, the virtual address space is not what I'm concerned about. The RSS grew by ~60 MB to a point where the server started swapping and eventually killed the s3fs process.
@gaul commented on GitHub (Aug 31, 2015):
@monken Sorry I cannot reproduce your symptoms. Can you provide a test case, for example, something like:
@thilanga commented on GitHub (Sep 4, 2015):
I faced the same problem with s3fs 1.79 today
@ggtakec commented on GitHub (Sep 13, 2015):
@monken @thilanga
Please see the following issue on Google Code (the old s3fs project):
https://code.google.com/p/s3fs/issues/detail?id=314#c19
When s3fs runs with libcurl + NSS, libcurl 7.21.4 or later should be used. With versions older than 7.21.4, we have confirmed the memory leak.
Please check which libcurl version you are running.
Regards,
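A quick way to check the installed libcurl against that threshold, sketched as a small shell helper (`sort -V` assumes GNU coreutils; 7.21.4 is the version cited above):

```shell
# True if dotted version $1 >= $2.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

CURL_VER=$(curl -V 2>/dev/null | awk 'NR==1 {print $2}')
if [ -n "$CURL_VER" ] && version_ge "$CURL_VER" "7.21.4"; then
  echo "libcurl $CURL_VER is at or above 7.21.4"
else
  echo "curl not found, or libcurl older than 7.21.4 (NSS leak possible)"
fi
```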
@monken commented on GitHub (Sep 15, 2015):
The AWS Linux instance is running with
@tspicer commented on GitHub (Sep 16, 2015):
I have been experiencing this issue today as well. I have JMeter sending the same file for a single test user once a second via FTP to an S3 mount. This eventually causes the memory error described above.
The standard curl installed on CentOS 7 is 7.29, so I upgraded to 7.44 via:
and ..
This installs..
per the Wiki...
running...
check `ps u`
`pidof s3fs`...
@RobbKistler commented on GitHub (Sep 16, 2015):
I find that valgrind is quite useful for investigating problems like this. It should be available in the package repos for any distro.
The `-f` option keeps s3fs from forking. If you don't use `-f`, you can also add `--trace-children=yes` to the valgrind options.
valgrind will write the final summary log when you kill s3fs or umount the file system.
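A sketch of such a run (bucket name, mountpoint, and options here are placeholders, not the reporter's exact command):

```shell
# Run s3fs in the foreground (-f) under valgrind; the leak summary is
# written to the log file when the filesystem is unmounted.
valgrind --leak-check=full --log-file=/tmp/s3fs-valgrind.%p.log \
  s3fs testbucket /mnt/testbucket -f \
  -o url=https://s3.amazonaws.com

# From a second shell, exercise the mount, then unmount to make
# valgrind emit its final report:
#   fusermount -u /mnt/testbucket
```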
@tspicer commented on GitHub (Sep 16, 2015):
Ran valgrind and it did not report any errors. However, we started to remove/adjust some of the s3fs options. The one change that seemed to minimize/solve the issue was using `url=http...` rather than `url=https...`.
UPDATE:
Burned this in for most of the day today. We have not seen any of the memory issues that we encountered when using `url=https...`.
All settings are the same as previously shared save for this one edit.
@ggtakec commented on GitHub (Sep 28, 2015):
I think that if valgrind did not report any leaks, there is really no leak.
If memory usage is still growing, it probably means the heap is fragmented.
To check for fragmentation, please run s3fs with the environment variables MALLOC_MMAP_MAX_=0 and MALLOC_TRIM_THRESHOLD_=0 set.
Then check `ps u` the same way as before.
These variables change glibc's malloc so that freed memory at the top of the heap is returned to the kernel immediately (via sbrk) instead of being kept around.
Thanks in advance for your help.
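Concretely, that means launching s3fs like this (note the trailing underscore in both glibc variable names; bucket and mountpoint are placeholders):

```shell
# Disable mmap-backed allocations and trim the heap eagerly, so that
# fragmentation (rather than a true leak) becomes visible in RSS.
MALLOC_MMAP_MAX_=0 MALLOC_TRIM_THRESHOLD_=0 \
  s3fs testbucket /mnt/testbucket -o url=https://s3.amazonaws.com
```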
@ClemensSchneider commented on GitHub (Jun 3, 2016):
Seeing a constant increase in memory usage on our EC2 machine (Amazon Linux), too (currently using 1.80, same issue with 1.79; we updated because of several release notes about memory-leak-related fixes).
We're performing read-operations only (~ once per minute) on the mounted bucket and thus mounted it read-only:
s3fs bucket_name /mnt/s3-bucket -o url=https://s3.amazonaws.com,iam_role=our-iam-role,stat_cache_expire=300,max_stat_cache_size=10000,retries=10,allow_other,nonempty,ro
S3FS starts out at ~3 MB and is now using ~687 MB after two days running.
On another EC2 machine (Ubuntu 14.04), where S3FS (still running 1.79) is used way more heavily (continuous read/write operations), we do not face this leakage. Here, S3FS is started on boot using /etc/fstab:
bucket_name /mnt/s3-bucket fuse.s3fs _netdev,url=https://s3.amazonaws.com,enable_noobj_cache,max_stat_cache_size=10000,stat_cache_expire=300,retries=10,multireq_max=500,use_cache=/var/tmp/s3fs/s3fs_cache,allow_other,rw,nonempty,uid=1000,gid=1000 0 0
We will try to skip using https as suggested by @tspicer and report back here.
Any other suggestions?
@monken commented on GitHub (Jun 5, 2016):
Which versions of Amazon Linux and Ubuntu are you running?
@ClemensSchneider commented on GitHub (Jun 6, 2016):
Amazon Linux: 4.4.8-20.46.amzn1.x86_64 (2016/03)
Ubuntu: Ubuntu Server 14.04.03 LTS
Turning off the https endpoint seems to have fixed the issue for us (after 5 days of uptime only ~20 MB are in use, compared to >600 MB after 2 days of uptime before).
Maybe I just found the reason for the leak: On Amazon Linux, `libcurl` is linked against NSS/3.19.1, but we built S3FS using `--with-openssl`. On Ubuntu, `libcurl` is linked against OpenSSL, where it matched our build switch. @tspicer: Could you check if it's the same in your case?
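One way to see the mismatch is the first line of `curl -V`, which names the TLS backend libcurl was built against. A small illustrative helper (the sample banner string is an assumption, not captured from the reporter's machine):

```shell
# Extract the TLS library token (e.g. NSS/3.19.1) from a curl -V banner.
tls_backend() {
  printf '%s\n' "$1" | grep -oE '(NSS|OpenSSL|GnuTLS)/[0-9][0-9a-z.]*'
}

sample='curl 7.40.0 (x86_64-redhat-linux-gnu) libcurl/7.40.0 NSS/3.19.1 Basic ECC zlib/1.2.8'
tls_backend "$sample"    # prints NSS/3.19.1

# Against the live system:
tls_backend "$(curl -V 2>/dev/null | head -n1)" || echo "backend not detected"
```

If libcurl reports NSS while s3fs was configured with `--with-openssl`, two crypto stacks are mixed in one process, which matches the pattern reported here.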
I will report back again.
@ggtakec commented on GitHub (Jun 12, 2016):
Hi, @ClemensSchneider
If this problem is related to libnss, or you can see the contents of the following?
https://code.google.com/archive/p/s3fs/issues/314
Previously, the contents of the examination of leak of s3fs in the Issue of google code.
Thanks in advance for your help.
@tspicer commented on GitHub (Jun 12, 2016):
@ClemensSchneider I'm using s3fs inside a CentOS 7 docker container
curl -V
curl 7.47.1 (x86_64-redhat-linux-gnu) libcurl/7.47.1 NSS/3.19.1 Basic ECC zlib/1.2.7 libidn/1.32 libpsl/0.7.0 (+libicu/50.1.2) libssh2/1.7.0
I'm pulling curl from here:
rpm -Uvh http://www.city-fan.org/ftp/contrib/yum-repo/rhel7/x86_64/city-fan.org-release-1-13.rhel7.noarch.rpm
@ClemensSchneider commented on GitHub (Jun 12, 2016):
Okay, so it seems to be somehow related to NSS. `curl -V` on the Amazon Linux machine:
curl 7.40.0 (x86_64-redhat-linux-gnu) libcurl/7.40.0 NSS/3.19.1 Basic ECC zlib/1.2.8 libidn/1.18 libssh2/1.4.2
We are now using non-SSL endpoints on both machines, due to the memory leak on the Amazon Linux machine and huge performance hits on the Ubuntu 14.04 machine.
@tlevi commented on GitHub (Jun 23, 2016):
I am also having problems with leaks, without SSL. It seemed like `max_stat_cache_size=0` was better, but I didn't run it long enough to be sure. When I get time I intend to run it through valgrind and see what shakes out.
@treestem commented on GitHub (Jul 12, 2016):
Another source of memory use is the fuse inode cache. This shows up in incremental valgrind reports ("monitor leak_check full reachable changed") for an overnight run of writes. Fuse didn't start running its cache cleaning routine until I added the "-o remember=60" option. The fuse name and id cache were just growing non-stop before that.
Hmm. Fuse still isn't cleaning up cache entries. It's as though forget isn't getting called, so fuse nodes aren't getting reaped.
You can see the patch I'm testing for this here: https://github.com/libfuse/libfuse/pull/61
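For reference, the incremental report mentioned above can be requested from a live valgrind session without stopping s3fs (this assumes s3fs was started under valgrind with its default `--vgdb=yes`):

```shell
# From a second shell: ask memcheck for a leak report limited to what
# changed since the previous check.
vgdb leak_check full reachable changed
```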
@pascalbayer commented on GitHub (Nov 2, 2016):
I'm facing similar issues, memory consumption is growing steadily. In my opinion the issues could be related to:
@treestem commented on GitHub (Nov 11, 2016):
hi
i have been away and will look at these in a few days.
as i recall libcurl was an issue.
more soon.
peter
@tgmedia-nz commented on GitHub (Feb 20, 2017):
Is there any progress on this??
@ggtakec commented on GitHub (Mar 30, 2019):
We kept this issue open for a long time.
We have released the new version 1.86, which fixed some memory leaks.
Please use the latest version.
I will close this, but if the problem persists, please use #340 or post a new issue.