[GH-ISSUE #340] Memory Leak - s3fs using over 12GB memory #176

Closed
opened 2026-03-04 01:42:54 +03:00 by kerem · 48 comments

Originally created by @justinfalk on GitHub (Jan 25, 2016).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/340

Hi,

First, thank you for the great product. I've really enjoyed working with it thus far.

We have s3fs running on an AWS instance connected to a bucket with ~110 million small objects. No single directory contains more than a few hundred files, and our application accesses these files by their full path, so there should be no traversal of the directory structure. s3fs is used only for retrieval of these objects (no new objects are written) and occasional deletion. On an average night there are only a few thousand downloads (say 5-20k objects). I am not using a disk cache because objects are rarely downloaded more than once.

After running for about a week, the kernel killed s3fs because it was consuming all available system memory (~12GB). See logs below.

I've searched and found other memory-leak issues related to old libcurl versions and non-SSL usage, but those don't seem to apply to my environment. I'm not certain whether this is a configuration error or a memory leak, but it looks like a leak because I can easily reproduce it by downloading files via my application and watching s3fs memory use grow. Please let me know if there is any other information that would be helpful.

Environment:
OS: Amazon Linux 4.1.13-18.26
s3fs: 1.79
Fuse: 2.9.4-1.17
libcurl: 7.40.0-3.52

fstab config:

defaults,noatime,allow_other,umask=022,url=https://s3.amazonaws.com,use_sse,_netdev

Log snippet (full /var/log/messages output attached)

Jan 25 02:09:42 ip-10-2-0-77 kernel: [3926969.438013] Out of memory: Kill process 18320 (s3fs) score 686 or sacrifice child
Jan 25 02:09:42 ip-10-2-0-77 kernel: [3926969.441504] Killed process 18320 (s3fs) total-vm:12493224kB, anon-rss:11604912kB, file-rss:0kB

oom.txt

kerem closed this issue 2026-03-04 01:42:54 +03:00

@hryang commented on GitHub (Jan 27, 2016):

Hi,

I hit the same problem. In my scenario, the memory leak is at fdcache.cpp:325, where the page is erased from the list but never deleted.

The code may also have another logic error. Please see my comments in the following piece of code.

for(fdpage_list_t::iterator iter = pages.begin(); iter != pages.end(); ){
  if(static_cast<size_t>((*iter)->next()) <= size){
    ++iter;
  }else{
    if(size <= static_cast<size_t>((*iter)->offset)){
      // XXX memory leak: the page is erased from the list, but not deleted!
      iter = pages.erase(iter);
    }else{
      // XXX How to terminate the loop once we enter this branch?
      (*iter)->bytes = size - static_cast<size_t>((*iter)->offset);
    }
  }
}
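For illustration, here is a minimal, self-contained sketch of the corrected loop. The `fdpage` struct and `truncate_pages` name are hypothetical stand-ins for s3fs's real types; the point is the added `delete` before `erase`, which is the fix the comment above calls for.

```cpp
#include <cassert>
#include <cstddef>
#include <list>

// Hypothetical minimal stand-in for s3fs's fdpage: a byte range in a file.
struct fdpage {
    size_t offset;
    size_t bytes;
    size_t next() const { return offset + bytes; }
};

typedef std::list<fdpage*> fdpage_list_t;

// Truncate the page list to `size` bytes, freeing each dropped page object
// before erasing its pointer (the missing `delete` is the leak noted above).
void truncate_pages(fdpage_list_t& pages, size_t size) {
    for (fdpage_list_t::iterator iter = pages.begin(); iter != pages.end(); ) {
        if ((*iter)->next() <= size) {
            ++iter;                      // page lies entirely below the cut
        } else if (size <= (*iter)->offset) {
            delete *iter;                // free the page object itself...
            iter = pages.erase(iter);    // ...then remove its pointer
        } else {
            (*iter)->bytes = size - (*iter)->offset;  // shorten the straddling page
            ++iter;
        }
    }
}
```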

@RobbKistler commented on GitHub (Jan 27, 2016):

@justinfalk have you tried with this with the tip of s3fs-fuse/master instead of the 1.79 release?


@justinfalk commented on GitHub (Jan 27, 2016):

Hi hryang,

Thanks for the response. In your situation, were you able to resolve the memory leak? It looks like this was not resolved because trunk still contains the same logic.

RobbKistler,

I have only used the 1.79 release. I browsed through commits and didn't see anything that looked like it would have resolved this issue but I will give it a try and report back.

Thanks again.


@justinfalk commented on GitHub (Jan 27, 2016):

@RobbKistler I tried building from the latest src and the result was the same. I have 12 threads downloading small files and see a very steady progression of memory use.

I have a video of the top output to demonstrate:
https://www.dropbox.com/s/jrojipbw83zojx1/top.mov?dl=0&preview=top.mov

It sounds like @hryang may have identified the cause of the leak. Can you confirm?


@justinfalk commented on GitHub (Jan 28, 2016):

Thanks @hryang ! I'll test today and report back.


@justinfalk commented on GitHub (Jan 28, 2016):

Regrettably, the fix @hryang provided did not resolve the memory leak I am observing. The memory growth is unaffected by whether a disk cache is configured and by the stat cache settings; it seems to grow linearly with the number of files downloaded.

Any other ideas?


@hryang commented on GitHub (Jan 29, 2016):

Hi,

Could you please use valgrind to find the source of the memory leak? In my scenario, the patch fixes the leak and passes the valgrind check.

The usage:

valgrind --tool=memcheck --leak-check=full --log-file=v.log s3fs -f your_mount_options

Then run a small batch of your workload and unmount.

Finally, you will get the valgrind report, v.log, which will show you the detailed information.


@justinfalk commented on GitHub (Jan 31, 2016):

I apologize, but I've tried to run valgrind several times on different computers and it keeps failing. What am I doing wrong?

[xxx@xxx-app-1 ~]$ sudo valgrind --tool=memcheck --leak-check=full --log-file=v.log s3fs -f my-bucket /mnt/my-bucket -o url=https://s3.amazonaws.com -o use_sse 
[xxx@xxx-app-1 ~]$ cat v.log 
==19431== Memcheck, a memory error detector
==19431== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
==19431== Using Valgrind-3.9.0 and LibVEX; rerun with -h for copyright info
==19431== Command: s3fs -f my-bucket /mnt/my-bucket -o url=https://s3.amazonaws.com -o use_sse
==19431== Parent PID: 19430
==19431== 
vex amd64->IR: unhandled instruction bytes: 0x66 0xF 0x1B 0x4 0x24 0x66 0xF 0x1B
vex amd64->IR:   REX=0 REX.W=0 REX.R=0 REX.X=0 REX.B=0
vex amd64->IR:   VEX=0 VEX.L=0 VEX.nVVVV=0x0 ESC=0F
vex amd64->IR:   PFX.66=1 PFX.F2=0 PFX.F3=0
==19431== valgrind: Unrecognised instruction at address 0x4015237.
==19431==    at 0x4015237: _dl_runtime_resolve (in /lib64/ld-2.17.so)
==19431==    by 0x5D432F8: __exp_finite (in /lib64/libm-2.17.so)
==19431==    by 0x400B75A: _dl_relocate_object (in /lib64/ld-2.17.so)
==19431==    by 0x4003AD9: dl_main (in /lib64/ld-2.17.so)
==19431==    by 0x40167D4: _dl_sysdep_start (in /lib64/ld-2.17.so)
==19431==    by 0x4004CC0: _dl_start (in /lib64/ld-2.17.so)
==19431==    by 0x4001437: ??? (in /lib64/ld-2.17.so)
==19431==    by 0x7: ???
==19431==    by 0xFFF0006BA: ???
==19431==    by 0xFFF0006BF: ???
==19431==    by 0xFFF0006C2: ???
==19431==    by 0xFFF0006CD: ???
==19431== Your program just tried to execute an instruction that Valgrind
==19431== did not recognise.  There are two possible reasons for this.
==19431== 1. Your program has a bug and erroneously jumped to a non-code
==19431==    location.  If you are running Memcheck and you just saw a
==19431==    warning about a bad jump, it's probably your program's fault.
==19431== 2. The instruction is legitimate but Valgrind doesn't handle it,
==19431==    i.e. it's Valgrind's fault.  If you think this is the case or
==19431==    you are not sure, please let us know and we'll try to fix it.
==19431== Either way, Valgrind will now raise a SIGILL signal which will
==19431== probably kill your program.

If I run the command outside of valgrind it works fine

[xxx@xxx-app-1 ~]$ sudo s3fs -f my-bucket /mnt/my-bucket -o url=https://s3.amazonaws.com -o use_sse 
    set_moutpoint_attribute(4052): PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40755)
s3fs_init(3236): init
s3fs_check_service(3595): check services.
    CheckBucket(2537): check a bucket.
    insertV4Headers(1973): computing signature [GET] [/] [] []
    url_to_host(99): url is https://s3.amazonaws.com
    RequestPerform(1648): HTTP response code 200

@gaul commented on GitHub (Jan 31, 2016):

@justinfalk Please test with Valgrind 3.10 or newer which includes support for MPX instructions.


@justinfalk commented on GitHub (Jan 31, 2016):

Ahh, ok. Thanks @andrewgaul. I was using version 3.9, which is what's available in the Amazon Linux repo. I manually built and installed valgrind 3.11 and it worked.

Here is the result of letting it run for a little over an hour. Memory was over 2GB by the time I stopped it.

==5879== Memcheck, a memory error detector
==5879== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
==5879== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info
==5879== Command: s3fs -f my-bucket /mnt/my-bucket -o url=https://s3.amazonaws.com -o use_sse
==5879== Parent PID: 5878
==5879== 
==5879== 
==5879== HEAP SUMMARY:
==5879==     in use at exit: 174,711 bytes in 2,671 blocks
==5879==   total heap usage: 98,818,564 allocs, 98,815,893 frees, 56,553,435,184 bytes allocated
==5879== 
==5879== 1,307 (48 direct, 1,259 indirect) bytes in 1 blocks are definitely lost in loss record 105 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43EBAC: allocate (new_allocator.h:104)
==5879==    by 0x43EBAC: _M_get_node (stl_tree.h:370)
==5879==    by 0x43EBAC: _M_create_node (stl_tree.h:380)
==5879==    by 0x43EBAC: _M_insert_ (stl_tree.h:1023)
==5879==    by 0x43EBAC: _M_insert_unique_ (stl_tree.h:1482)
==5879==    by 0x43EBAC: insert (stl_map.h:648)
==5879==    by 0x43EBAC: operator[] (stl_map.h:469)
==5879==    by 0x43EBAC: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:278)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DEFF: check_object_access(char const*, int, stat*) (s3fs.cpp:504)
==5879==    by 0x40E8EC: s3fs_getattr(char const*, stat*) (s3fs.cpp:746)
==5879==    by 0x4E3F8F7: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E3FB2C: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E47178: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x6255DC4: start_thread (in /lib64/libpthread-2.17.so)
==5879==    by 0x6560BDC: clone (in /lib64/libc-2.17.so)
==5879== 
==5879== 1,307 (48 direct, 1,259 indirect) bytes in 1 blocks are definitely lost in loss record 106 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43F0A5: allocate (new_allocator.h:104)
==5879==    by 0x43F0A5: _M_get_node (stl_tree.h:370)
==5879==    by 0x43F0A5: _M_create_node (stl_tree.h:380)
==5879==    by 0x43F0A5: _M_insert_ (stl_tree.h:1023)
==5879==    by 0x43F0A5: _M_insert_unique_ (stl_tree.h:1482)
==5879==    by 0x43F0A5: insert (stl_map.h:648)
==5879==    by 0x43F0A5: operator[] (stl_map.h:469)
==5879==    by 0x43F0A5: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:282)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DEFF: check_object_access(char const*, int, stat*) (s3fs.cpp:504)
==5879==    by 0x40E7AD: s3fs_open(char const*, fuse_file_info*) (s3fs.cpp:1955)
==5879==    by 0x4E40F1F: fuse_fs_open (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E41036: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4AE2B: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E47178: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x6255DC4: start_thread (in /lib64/libpthread-2.17.so)
==5879==    by 0x6560BDC: clone (in /lib64/libc-2.17.so)
==5879== 
==5879== 1,307 (48 direct, 1,259 indirect) bytes in 1 blocks are definitely lost in loss record 107 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43EBAC: allocate (new_allocator.h:104)
==5879==    by 0x43EBAC: _M_get_node (stl_tree.h:370)
==5879==    by 0x43EBAC: _M_create_node (stl_tree.h:380)
==5879==    by 0x43EBAC: _M_insert_ (stl_tree.h:1023)
==5879==    by 0x43EBAC: _M_insert_unique_ (stl_tree.h:1482)
==5879==    by 0x43EBAC: insert (stl_map.h:648)
==5879==    by 0x43EBAC: operator[] (stl_map.h:469)
==5879==    by 0x43EBAC: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:278)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DEFF: check_object_access(char const*, int, stat*) (s3fs.cpp:504)
==5879==    by 0x40E8EC: s3fs_getattr(char const*, stat*) (s3fs.cpp:746)
==5879==    by 0x4E3FF5C: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E49E74: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E47178: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x6255DC4: start_thread (in /lib64/libpthread-2.17.so)
==5879==    by 0x6560BDC: clone (in /lib64/libc-2.17.so)
==5879== 
==5879== 1,307 (48 direct, 1,259 indirect) bytes in 1 blocks are definitely lost in loss record 108 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43F2D0: allocate (new_allocator.h:104)
==5879==    by 0x43F2D0: _M_get_node (stl_tree.h:370)
==5879==    by 0x43F2D0: _M_create_node (stl_tree.h:380)
==5879==    by 0x43F2D0: _M_insert_ (stl_tree.h:1023)
==5879==    by 0x43F2D0: _M_insert_unique_ (stl_tree.h:1482)
==5879==    by 0x43F2D0: insert (stl_map.h:648)
==5879==    by 0x43F2D0: operator[] (stl_map.h:469)
==5879==    by 0x43F2D0: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:284)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DEFF: check_object_access(char const*, int, stat*) (s3fs.cpp:504)
==5879==    by 0x40E8EC: s3fs_getattr(char const*, stat*) (s3fs.cpp:746)
==5879==    by 0x4E3FF5C: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E49E74: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E47178: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x6255DC4: start_thread (in /lib64/libpthread-2.17.so)
==5879==    by 0x6560BDC: clone (in /lib64/libc-2.17.so)
==5879== 
==5879== 1,307 (48 direct, 1,259 indirect) bytes in 1 blocks are definitely lost in loss record 109 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43F0A5: allocate (new_allocator.h:104)
==5879==    by 0x43F0A5: _M_get_node (stl_tree.h:370)
==5879==    by 0x43F0A5: _M_create_node (stl_tree.h:380)
==5879==    by 0x43F0A5: _M_insert_ (stl_tree.h:1023)
==5879==    by 0x43F0A5: _M_insert_unique_ (stl_tree.h:1482)
==5879==    by 0x43F0A5: insert (stl_map.h:648)
==5879==    by 0x43F0A5: operator[] (stl_map.h:469)
==5879==    by 0x43F0A5: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:282)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DD32: get_object_sseckey_md5(char const*) (s3fs.cpp:647)
==5879==    by 0x431A1E: S3fsCurl::GetObjectRequest(char const*, int, long, long) (curl.cpp:2519)
==5879==    by 0x448197: FdEntity::Load(long, long) (fdcache.cpp:818)
==5879==    by 0x44860B: FdEntity::Read(char*, long, unsigned long, bool) (fdcache.cpp:946)
==5879==    by 0x408B9D: s3fs_read(char const*, char*, unsigned long, long, fuse_file_info*) (s3fs.cpp:2004)
==5879==    by 0x4E41416: fuse_fs_read_buf (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E415C1: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E49B4D: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879== 
==5879== 1,307 (48 direct, 1,259 indirect) bytes in 1 blocks are definitely lost in loss record 110 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43F2D0: allocate (new_allocator.h:104)
==5879==    by 0x43F2D0: _M_get_node (stl_tree.h:370)
==5879==    by 0x43F2D0: _M_create_node (stl_tree.h:380)
==5879==    by 0x43F2D0: _M_insert_ (stl_tree.h:1023)
==5879==    by 0x43F2D0: _M_insert_unique_ (stl_tree.h:1482)
==5879==    by 0x43F2D0: insert (stl_map.h:648)
==5879==    by 0x43F2D0: operator[] (stl_map.h:469)
==5879==    by 0x43F2D0: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:284)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DD32: get_object_sseckey_md5(char const*) (s3fs.cpp:647)
==5879==    by 0x431A1E: S3fsCurl::GetObjectRequest(char const*, int, long, long) (curl.cpp:2519)
==5879==    by 0x448197: FdEntity::Load(long, long) (fdcache.cpp:818)
==5879==    by 0x44860B: FdEntity::Read(char*, long, unsigned long, bool) (fdcache.cpp:946)
==5879==    by 0x408B9D: s3fs_read(char const*, char*, unsigned long, long, fuse_file_info*) (s3fs.cpp:2004)
==5879==    by 0x4E41416: fuse_fs_read_buf (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E415C1: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E49B4D: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879== 
==5879== 2,590 (96 direct, 2,494 indirect) bytes in 2 blocks are definitely lost in loss record 124 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43F0A5: allocate (new_allocator.h:104)
==5879==    by 0x43F0A5: _M_get_node (stl_tree.h:370)
==5879==    by 0x43F0A5: _M_create_node (stl_tree.h:380)
==5879==    by 0x43F0A5: _M_insert_ (stl_tree.h:1023)
==5879==    by 0x43F0A5: _M_insert_unique_ (stl_tree.h:1482)
==5879==    by 0x43F0A5: insert (stl_map.h:648)
==5879==    by 0x43F0A5: operator[] (stl_map.h:469)
==5879==    by 0x43F0A5: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:282)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DEFF: check_object_access(char const*, int, stat*) (s3fs.cpp:504)
==5879==    by 0x40E8EC: s3fs_getattr(char const*, stat*) (s3fs.cpp:746)
==5879==    by 0x4E3FF5C: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E49E74: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E47178: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x6255DC4: start_thread (in /lib64/libpthread-2.17.so)
==5879==    by 0x6560BDC: clone (in /lib64/libc-2.17.so)
==5879== 
==5879== 2,602 (96 direct, 2,506 indirect) bytes in 2 blocks are definitely lost in loss record 125 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43F2D0: allocate (new_allocator.h:104)
==5879==    by 0x43F2D0: _M_get_node (stl_tree.h:370)
==5879==    by 0x43F2D0: _M_create_node (stl_tree.h:380)
==5879==    by 0x43F2D0: _M_insert_ (stl_tree.h:1023)
==5879==    by 0x43F2D0: _M_insert_unique_ (stl_tree.h:1482)
==5879==    by 0x43F2D0: insert (stl_map.h:648)
==5879==    by 0x43F2D0: operator[] (stl_map.h:469)
==5879==    by 0x43F2D0: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:284)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DEFF: check_object_access(char const*, int, stat*) (s3fs.cpp:504)
==5879==    by 0x41A53B: s3fs_flush(char const*, fuse_file_info*) (s3fs.cpp:2055)
==5879==    by 0x4E43236: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E434B0: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E49835: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E47178: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x6255DC4: start_thread (in /lib64/libpthread-2.17.so)
==5879==    by 0x6560BDC: clone (in /lib64/libc-2.17.so)
==5879== 
==5879== 2,602 (432 direct, 2,170 indirect) bytes in 2 blocks are definitely lost in loss record 126 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43E4F0: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:261)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DD32: get_object_sseckey_md5(char const*) (s3fs.cpp:647)
==5879==    by 0x431A1E: S3fsCurl::GetObjectRequest(char const*, int, long, long) (curl.cpp:2519)
==5879==    by 0x448197: FdEntity::Load(long, long) (fdcache.cpp:818)
==5879==    by 0x44860B: FdEntity::Read(char*, long, unsigned long, bool) (fdcache.cpp:946)
==5879==    by 0x408B9D: s3fs_read(char const*, char*, unsigned long, long, fuse_file_info*) (s3fs.cpp:2004)
==5879==    by 0x4E41416: fuse_fs_read_buf (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E415C1: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E49B4D: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879== 
==5879== 2,602 (432 direct, 2,170 indirect) bytes in 2 blocks are definitely lost in loss record 127 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43E4F0: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:261)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DEFF: check_object_access(char const*, int, stat*) (s3fs.cpp:504)
==5879==    by 0x40E8EC: s3fs_getattr(char const*, stat*) (s3fs.cpp:746)
==5879==    by 0x4E3FDF9: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E49E74: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E47178: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x6255DC4: start_thread (in /lib64/libpthread-2.17.so)
==5879==    by 0x6560BDC: clone (in /lib64/libc-2.17.so)
==5879== 
==5879== 3,909 (648 direct, 3,261 indirect) bytes in 3 blocks are definitely lost in loss record 132 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43E4F0: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:261)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DEFF: check_object_access(char const*, int, stat*) (s3fs.cpp:504)
==5879==    by 0x40E8EC: s3fs_getattr(char const*, stat*) (s3fs.cpp:746)
==5879==    by 0x4E3FF5C: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E49E74: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E47178: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x6255DC4: start_thread (in /lib64/libpthread-2.17.so)
==5879==    by 0x6560BDC: clone (in /lib64/libc-2.17.so)
==5879== 
==5879== 3,921 (144 direct, 3,777 indirect) bytes in 3 blocks are definitely lost in loss record 133 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43EEB5: allocate (new_allocator.h:104)
==5879==    by 0x43EEB5: _M_get_node (stl_tree.h:370)
==5879==    by 0x43EEB5: _M_create_node (stl_tree.h:380)
==5879==    by 0x43EEB5: _M_insert_ (stl_tree.h:1023)
==5879==    by 0x43EEB5: _M_insert_unique_ (stl_tree.h:1482)
==5879==    by 0x43EEB5: insert (stl_map.h:648)
==5879==    by 0x43EEB5: operator[] (stl_map.h:469)
==5879==    by 0x43EEB5: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:276)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DEFF: check_object_access(char const*, int, stat*) (s3fs.cpp:504)
==5879==    by 0x40E8EC: s3fs_getattr(char const*, stat*) (s3fs.cpp:746)
==5879==    by 0x4E3FF5C: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E49E74: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E47178: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x6255DC4: start_thread (in /lib64/libpthread-2.17.so)
==5879==    by 0x6560BDC: clone (in /lib64/libc-2.17.so)
==5879== 
==5879== 3,922 (648 direct, 3,274 indirect) bytes in 3 blocks are definitely lost in loss record 134 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43E4F0: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:261)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DEFF: check_object_access(char const*, int, stat*) (s3fs.cpp:504)
==5879==    by 0x40E8EC: s3fs_getattr(char const*, stat*) (s3fs.cpp:746)
==5879==    by 0x4E3F8F7: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E3FB2C: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E47178: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x6255DC4: start_thread (in /lib64/libpthread-2.17.so)
==5879==    by 0x6560BDC: clone (in /lib64/libc-2.17.so)
==5879== 
==5879== 6,511 (240 direct, 6,271 indirect) bytes in 5 blocks are definitely lost in loss record 137 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43ED9D: allocate (new_allocator.h:104)
==5879==    by 0x43ED9D: _M_get_node (stl_tree.h:370)
==5879==    by 0x43ED9D: _M_create_node (stl_tree.h:380)
==5879==    by 0x43ED9D: _M_insert_ (stl_tree.h:1023)
==5879==    by 0x43ED9D: _M_insert_unique_ (stl_tree.h:1482)
==5879==    by 0x43ED9D: insert (stl_map.h:648)
==5879==    by 0x43ED9D: operator[] (stl_map.h:469)
==5879==    by 0x43ED9D: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:280)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DEFF: check_object_access(char const*, int, stat*) (s3fs.cpp:504)
==5879==    by 0x40E7AD: s3fs_open(char const*, fuse_file_info*) (s3fs.cpp:1955)
==5879==    by 0x4E40F1F: fuse_fs_open (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E41036: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4AE2B: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E47178: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x6255DC4: start_thread (in /lib64/libpthread-2.17.so)
==5879==    by 0x6560BDC: clone (in /lib64/libc-2.17.so)
==5879== 
==5879== 11,739 (432 direct, 11,307 indirect) bytes in 9 blocks are definitely lost in loss record 139 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43EEB5: allocate (new_allocator.h:104)
==5879==    by 0x43EEB5: _M_get_node (stl_tree.h:370)
==5879==    by 0x43EEB5: _M_create_node (stl_tree.h:380)
==5879==    by 0x43EEB5: _M_insert_ (stl_tree.h:1023)
==5879==    by 0x43EEB5: _M_insert_unique_ (stl_tree.h:1482)
==5879==    by 0x43EEB5: insert (stl_map.h:648)
==5879==    by 0x43EEB5: operator[] (stl_map.h:469)
==5879==    by 0x43EEB5: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:276)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DEFF: check_object_access(char const*, int, stat*) (s3fs.cpp:504)
==5879==    by 0x40E7AD: s3fs_open(char const*, fuse_file_info*) (s3fs.cpp:1955)
==5879==    by 0x4E40F1F: fuse_fs_open (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E41036: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4AE2B: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E47178: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x6255DC4: start_thread (in /lib64/libpthread-2.17.so)
==5879==    by 0x6560BDC: clone (in /lib64/libc-2.17.so)
==5879== 
==5879== 18,190 (672 direct, 17,518 indirect) bytes in 14 blocks are definitely lost in loss record 143 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43EBAC: allocate (new_allocator.h:104)
==5879==    by 0x43EBAC: _M_get_node (stl_tree.h:370)
==5879==    by 0x43EBAC: _M_create_node (stl_tree.h:380)
==5879==    by 0x43EBAC: _M_insert_ (stl_tree.h:1023)
==5879==    by 0x43EBAC: _M_insert_unique_ (stl_tree.h:1482)
==5879==    by 0x43EBAC: insert (stl_map.h:648)
==5879==    by 0x43EBAC: operator[] (stl_map.h:469)
==5879==    by 0x43EBAC: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:278)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DEFF: check_object_access(char const*, int, stat*) (s3fs.cpp:504)
==5879==    by 0x40E7AD: s3fs_open(char const*, fuse_file_info*) (s3fs.cpp:1955)
==5879==    by 0x4E40F1F: fuse_fs_open (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E41036: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4AE2B: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E47178: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x6255DC4: start_thread (in /lib64/libpthread-2.17.so)
==5879==    by 0x6560BDC: clone (in /lib64/libc-2.17.so)
==5879== 
==5879== 22,159 (816 direct, 21,343 indirect) bytes in 17 blocks are definitely lost in loss record 144 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43F2D0: allocate (new_allocator.h:104)
==5879==    by 0x43F2D0: _M_get_node (stl_tree.h:370)
==5879==    by 0x43F2D0: _M_create_node (stl_tree.h:380)
==5879==    by 0x43F2D0: _M_insert_ (stl_tree.h:1023)
==5879==    by 0x43F2D0: _M_insert_unique_ (stl_tree.h:1482)
==5879==    by 0x43F2D0: insert (stl_map.h:648)
==5879==    by 0x43F2D0: operator[] (stl_map.h:469)
==5879==    by 0x43F2D0: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:284)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DEFF: check_object_access(char const*, int, stat*) (s3fs.cpp:504)
==5879==    by 0x40E7AD: s3fs_open(char const*, fuse_file_info*) (s3fs.cpp:1955)
==5879==    by 0x4E40F1F: fuse_fs_open (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E41036: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4AE2B: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E47178: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x6255DC4: start_thread (in /lib64/libpthread-2.17.so)
==5879==    by 0x6560BDC: clone (in /lib64/libc-2.17.so)
==5879== 
==5879== 45,613 (7,560 direct, 38,053 indirect) bytes in 35 blocks are definitely lost in loss record 146 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43E4F0: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:261)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DEFF: check_object_access(char const*, int, stat*) (s3fs.cpp:504)
==5879==    by 0x40E7AD: s3fs_open(char const*, fuse_file_info*) (s3fs.cpp:1955)
==5879==    by 0x4E40F1F: fuse_fs_open (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E41036: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4AE2B: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E47178: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x6255DC4: start_thread (in /lib64/libpthread-2.17.so)
==5879==    by 0x6560BDC: clone (in /lib64/libc-2.17.so)
==5879== 
==5879== LEAK SUMMARY:
==5879==    definitely lost: 12,504 bytes in 103 blocks
==5879==    indirectly lost: 121,698 bytes in 2,472 blocks
==5879==      possibly lost: 0 bytes in 0 blocks
==5879==    still reachable: 40,509 bytes in 96 blocks
==5879==         suppressed: 0 bytes in 0 blocks
==5879== Reachable blocks (those to which a pointer was found) are not shown.
==5879== To see them, rerun with: --leak-check=full --show-leak-kinds=all
==5879== 
==5879== For counts of detected and suppressed errors, rerun with: -v
==5879== ERROR SUMMARY: 18 errors from 18 contexts (suppressed: 0 from 0)
<!-- gh-comment-id:177376986 -->
@justinfalk commented on GitHub (Jan 31, 2016):

Ahh, ok. Thanks @andrewgaul. I was using version 3.9, which is what's available in the Amazon Linux repo. I manually built and installed valgrind 3.11 and it worked. Here is the result of letting it run for a little over an hour. Memory was over 2GB by the time I stopped it.

```
==5879== Memcheck, a memory error detector
==5879== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
==5879== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info
==5879== Command: s3fs -f my-bucket /mnt/my-bucket -o url=https://s3.amazonaws.com -o use_sse
==5879== Parent PID: 5878
==5879== 
==5879== 
==5879== HEAP SUMMARY:
==5879==     in use at exit: 174,711 bytes in 2,671 blocks
==5879==   total heap usage: 98,818,564 allocs, 98,815,893 frees, 56,553,435,184 bytes allocated
==5879== 
==5879== 1,307 (48 direct, 1,259 indirect) bytes in 1 blocks are definitely lost in loss record 105 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43EBAC: allocate (new_allocator.h:104)
==5879==    by 0x43EBAC: _M_get_node (stl_tree.h:370)
==5879==    by 0x43EBAC: _M_create_node (stl_tree.h:380)
==5879==    by 0x43EBAC: _M_insert_ (stl_tree.h:1023)
==5879==    by 0x43EBAC: _M_insert_unique_ (stl_tree.h:1482)
==5879==    by 0x43EBAC: insert (stl_map.h:648)
==5879==    by 0x43EBAC: operator[] (stl_map.h:469)
==5879==    by 0x43EBAC: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:278)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DEFF: check_object_access(char const*, int, stat*) (s3fs.cpp:504)
==5879==    by 0x40E8EC: s3fs_getattr(char const*, stat*) (s3fs.cpp:746)
==5879==    by 0x4E3F8F7: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E3FB2C: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E47178: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x6255DC4: start_thread (in /lib64/libpthread-2.17.so)
==5879==    by 0x6560BDC: clone (in /lib64/libc-2.17.so)
==5879== 
==5879== 1,307 (48 direct, 1,259 indirect) bytes in 1 blocks are definitely lost in loss record 106 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43F0A5: allocate (new_allocator.h:104)
==5879==    by 0x43F0A5: _M_get_node (stl_tree.h:370)
==5879==    by 0x43F0A5: _M_create_node (stl_tree.h:380)
==5879==    by 0x43F0A5: _M_insert_ (stl_tree.h:1023)
==5879==    by 0x43F0A5: _M_insert_unique_ (stl_tree.h:1482)
==5879==    by 0x43F0A5: insert (stl_map.h:648)
==5879==    by 0x43F0A5: operator[] (stl_map.h:469)
==5879==    by 0x43F0A5: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:282)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DEFF: check_object_access(char const*, int, stat*) (s3fs.cpp:504)
==5879==    by 0x40E7AD: s3fs_open(char const*, fuse_file_info*) (s3fs.cpp:1955)
==5879==    by 0x4E40F1F: fuse_fs_open (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E41036: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4AE2B: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E47178: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x6255DC4: start_thread (in /lib64/libpthread-2.17.so)
==5879==    by 0x6560BDC: clone (in /lib64/libc-2.17.so)
==5879== 
==5879== 1,307 (48 direct, 1,259 indirect) bytes in 1 blocks are definitely lost in loss record 107 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43EBAC: allocate (new_allocator.h:104)
==5879==    by 0x43EBAC: _M_get_node (stl_tree.h:370)
==5879==    by 0x43EBAC: _M_create_node (stl_tree.h:380)
==5879==    by 0x43EBAC: _M_insert_ (stl_tree.h:1023)
==5879==    by 0x43EBAC: _M_insert_unique_ (stl_tree.h:1482)
==5879==    by 0x43EBAC: insert (stl_map.h:648)
==5879==    by 0x43EBAC: operator[] (stl_map.h:469)
==5879==    by 0x43EBAC: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:278)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DEFF: check_object_access(char const*, int, stat*) (s3fs.cpp:504)
==5879==    by 0x40E8EC: s3fs_getattr(char const*, stat*) (s3fs.cpp:746)
==5879==    by 0x4E3FF5C: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E49E74: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E47178: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x6255DC4: start_thread (in /lib64/libpthread-2.17.so)
==5879==    by 0x6560BDC: clone (in /lib64/libc-2.17.so)
==5879== 
==5879== 1,307 (48 direct, 1,259 indirect) bytes in 1 blocks are definitely lost in loss record 108 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43F2D0: allocate (new_allocator.h:104)
==5879==    by 0x43F2D0: _M_get_node (stl_tree.h:370)
==5879==    by 0x43F2D0: _M_create_node (stl_tree.h:380)
==5879==    by 0x43F2D0: _M_insert_ (stl_tree.h:1023)
==5879==    by 0x43F2D0: _M_insert_unique_ (stl_tree.h:1482)
==5879==    by 0x43F2D0: insert (stl_map.h:648)
==5879==    by 0x43F2D0: operator[] (stl_map.h:469)
==5879==    by 0x43F2D0: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:284)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DEFF: check_object_access(char const*, int, stat*) (s3fs.cpp:504)
==5879==    by 0x40E8EC: s3fs_getattr(char const*, stat*) (s3fs.cpp:746)
==5879==    by 0x4E3FF5C: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E49E74: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E47178: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x6255DC4: start_thread (in /lib64/libpthread-2.17.so)
==5879==    by 0x6560BDC: clone (in /lib64/libc-2.17.so)
==5879== 
==5879== 1,307 (48 direct, 1,259 indirect) bytes in 1 blocks are definitely lost in loss record 109 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43F0A5: allocate (new_allocator.h:104)
==5879==    by 0x43F0A5: _M_get_node (stl_tree.h:370)
==5879==    by 0x43F0A5: _M_create_node (stl_tree.h:380)
==5879==    by 0x43F0A5: _M_insert_ (stl_tree.h:1023)
==5879==    by 0x43F0A5: _M_insert_unique_ (stl_tree.h:1482)
==5879==    by 0x43F0A5: insert (stl_map.h:648)
==5879==    by 0x43F0A5: operator[] (stl_map.h:469)
==5879==    by 0x43F0A5: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:282)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DD32: get_object_sseckey_md5(char const*) (s3fs.cpp:647)
==5879==    by 0x431A1E: S3fsCurl::GetObjectRequest(char const*, int, long, long) (curl.cpp:2519)
==5879==    by 0x448197: FdEntity::Load(long, long) (fdcache.cpp:818)
==5879==    by 0x44860B: FdEntity::Read(char*, long, unsigned long, bool) (fdcache.cpp:946)
==5879==    by 0x408B9D: s3fs_read(char const*, char*, unsigned long, long, fuse_file_info*) (s3fs.cpp:2004)
==5879==    by 0x4E41416: fuse_fs_read_buf (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E415C1: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E49B4D: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879== 
==5879== 1,307 (48 direct, 1,259 indirect) bytes in 1 blocks are definitely lost in loss record 110 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43F2D0: allocate (new_allocator.h:104)
==5879==    by 0x43F2D0: _M_get_node (stl_tree.h:370)
==5879==    by 0x43F2D0: _M_create_node (stl_tree.h:380)
==5879==    by 0x43F2D0: _M_insert_ (stl_tree.h:1023)
==5879==    by 0x43F2D0: _M_insert_unique_ (stl_tree.h:1482)
==5879==    by 0x43F2D0: insert (stl_map.h:648)
==5879==    by 0x43F2D0: operator[] (stl_map.h:469)
==5879==    by 0x43F2D0: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:284)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DD32: get_object_sseckey_md5(char const*) (s3fs.cpp:647)
==5879==    by 0x431A1E: S3fsCurl::GetObjectRequest(char const*, int, long, long) (curl.cpp:2519)
==5879==    by 0x448197: FdEntity::Load(long, long) (fdcache.cpp:818)
==5879==    by 0x44860B: FdEntity::Read(char*, long, unsigned long, bool) (fdcache.cpp:946)
==5879==    by 0x408B9D: s3fs_read(char const*, char*, unsigned long, long, fuse_file_info*) (s3fs.cpp:2004)
==5879==    by 0x4E41416: fuse_fs_read_buf (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E415C1: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E49B4D: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879== 
==5879== 2,590 (96 direct, 2,494 indirect) bytes in 2 blocks are definitely lost in loss record 124 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43F0A5: allocate (new_allocator.h:104)
==5879==    by 0x43F0A5: _M_get_node (stl_tree.h:370)
==5879==    by 0x43F0A5: _M_create_node (stl_tree.h:380)
==5879==    by 0x43F0A5: _M_insert_ (stl_tree.h:1023)
==5879==    by 0x43F0A5: _M_insert_unique_ (stl_tree.h:1482)
==5879==    by 0x43F0A5: insert (stl_map.h:648)
==5879==    by 0x43F0A5: operator[] (stl_map.h:469)
==5879==    by 0x43F0A5: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:282)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DEFF: check_object_access(char const*, int, stat*) (s3fs.cpp:504)
==5879==    by 0x40E8EC: s3fs_getattr(char const*, stat*) (s3fs.cpp:746)
==5879==    by 0x4E3FF5C: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E49E74: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E47178: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x6255DC4: start_thread (in /lib64/libpthread-2.17.so)
==5879==    by 0x6560BDC: clone (in /lib64/libc-2.17.so)
==5879== 
==5879== 2,602 (96 direct, 2,506 indirect) bytes in 2 blocks are definitely lost in loss record 125 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43F2D0: allocate (new_allocator.h:104)
==5879==    by 0x43F2D0: _M_get_node (stl_tree.h:370)
==5879==    by 0x43F2D0: _M_create_node (stl_tree.h:380)
==5879==    by 0x43F2D0: _M_insert_ (stl_tree.h:1023)
==5879==    by 0x43F2D0: _M_insert_unique_ (stl_tree.h:1482)
==5879==    by 0x43F2D0: insert (stl_map.h:648)
==5879==    by 0x43F2D0: operator[] (stl_map.h:469)
==5879==    by 0x43F2D0: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:284)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DEFF: check_object_access(char const*, int, stat*) (s3fs.cpp:504)
==5879==    by 0x41A53B: s3fs_flush(char const*, fuse_file_info*) (s3fs.cpp:2055)
==5879==    by 0x4E43236: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E434B0: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E49835: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E47178: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x6255DC4: start_thread (in /lib64/libpthread-2.17.so)
==5879==    by 0x6560BDC: clone (in /lib64/libc-2.17.so)
==5879== 
==5879== 2,602 (432 direct, 2,170 indirect) bytes in 2 blocks are definitely lost in loss record 126 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43E4F0: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:261)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DD32: get_object_sseckey_md5(char const*) (s3fs.cpp:647)
==5879==    by 0x431A1E: S3fsCurl::GetObjectRequest(char const*, int, long, long) (curl.cpp:2519)
==5879==    by 0x448197: FdEntity::Load(long, long) (fdcache.cpp:818)
==5879==    by 0x44860B: FdEntity::Read(char*, long, unsigned long, bool) (fdcache.cpp:946)
==5879==    by 0x408B9D: s3fs_read(char const*, char*, unsigned long, long, fuse_file_info*) (s3fs.cpp:2004)
==5879==    by 0x4E41416: fuse_fs_read_buf (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E415C1: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E49B4D: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879== 
==5879== 2,602 (432 direct, 2,170 indirect) bytes in 2 blocks are definitely lost in loss record 127 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43E4F0: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:261)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DEFF: check_object_access(char const*, int, stat*) (s3fs.cpp:504)
==5879==    by 0x40E8EC: s3fs_getattr(char const*, stat*) (s3fs.cpp:746)
==5879==    by 0x4E3FDF9: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E49E74: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E47178: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x6255DC4: start_thread (in /lib64/libpthread-2.17.so)
==5879==    by 0x6560BDC: clone (in /lib64/libc-2.17.so)
==5879== 
==5879== 3,909 (648 direct, 3,261 indirect) bytes in 3 blocks are definitely lost in loss record 132 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43E4F0: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:261)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DEFF: check_object_access(char const*, int, stat*) (s3fs.cpp:504)
==5879==    by 0x40E8EC: s3fs_getattr(char const*, stat*) (s3fs.cpp:746)
==5879==    by 0x4E3FF5C: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E49E74: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E47178: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x6255DC4: start_thread (in /lib64/libpthread-2.17.so)
==5879==    by 0x6560BDC: clone (in /lib64/libc-2.17.so)
==5879== 
==5879== 3,921 (144 direct, 3,777 indirect) bytes in 3 blocks are definitely lost in loss record 133 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43EEB5: allocate (new_allocator.h:104)
==5879==    by 0x43EEB5: _M_get_node (stl_tree.h:370)
==5879==    by 0x43EEB5: _M_create_node (stl_tree.h:380)
==5879==    by 0x43EEB5: _M_insert_ (stl_tree.h:1023)
==5879==    by 0x43EEB5: _M_insert_unique_ (stl_tree.h:1482)
==5879==    by 0x43EEB5: insert (stl_map.h:648)
==5879==    by 0x43EEB5: operator[] (stl_map.h:469)
==5879==    by 0x43EEB5: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:276)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DEFF: check_object_access(char const*, int, stat*) (s3fs.cpp:504)
==5879==    by 0x40E8EC: s3fs_getattr(char const*, stat*) (s3fs.cpp:746)
==5879==    by 0x4E3FF5C: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E49E74: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E47178: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x6255DC4: start_thread (in /lib64/libpthread-2.17.so)
==5879==    by 0x6560BDC: clone (in /lib64/libc-2.17.so)
==5879== 
==5879== 3,922 (648 direct, 3,274 indirect) bytes in 3 blocks are definitely lost in loss record 134 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43E4F0: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:261)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DEFF: check_object_access(char const*, int, stat*) (s3fs.cpp:504)
==5879==    by 0x40E8EC: s3fs_getattr(char const*, stat*) (s3fs.cpp:746)
==5879==    by 0x4E3F8F7: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E3FB2C: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E47178: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x6255DC4: start_thread (in /lib64/libpthread-2.17.so)
==5879==    by 0x6560BDC: clone (in /lib64/libc-2.17.so)
==5879== 
==5879== 6,511 (240 direct, 6,271 indirect) bytes in 5 blocks are definitely lost in loss record 137 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43ED9D: allocate (new_allocator.h:104)
==5879==    by 0x43ED9D: _M_get_node (stl_tree.h:370)
==5879==    by 0x43ED9D: _M_create_node (stl_tree.h:380)
==5879==    by 0x43ED9D: _M_insert_ (stl_tree.h:1023)
==5879==    by 0x43ED9D: _M_insert_unique_ (stl_tree.h:1482)
==5879==    by 0x43ED9D: insert (stl_map.h:648)
==5879==    by 0x43ED9D: operator[] (stl_map.h:469)
==5879==    by 0x43ED9D: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:280)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DEFF: check_object_access(char const*, int, stat*) (s3fs.cpp:504)
==5879==    by 0x40E7AD: s3fs_open(char const*, fuse_file_info*) (s3fs.cpp:1955)
==5879==    by 0x4E40F1F: fuse_fs_open (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E41036: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4AE2B: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E47178: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x6255DC4: start_thread (in /lib64/libpthread-2.17.so)
==5879==    by 0x6560BDC: clone (in /lib64/libc-2.17.so)
==5879== 
==5879== 11,739 (432 direct, 11,307 indirect) bytes in 9 blocks are definitely lost in loss record 139 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43EEB5: allocate (new_allocator.h:104)
==5879==    by 0x43EEB5: _M_get_node (stl_tree.h:370)
==5879==    by 0x43EEB5: _M_create_node (stl_tree.h:380)
==5879==    by 0x43EEB5: _M_insert_ (stl_tree.h:1023)
==5879==    by 0x43EEB5: _M_insert_unique_ (stl_tree.h:1482)
==5879==    by 0x43EEB5: insert (stl_map.h:648)
==5879==    by 0x43EEB5: operator[] (stl_map.h:469)
==5879==    by 0x43EEB5: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:276)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DEFF: check_object_access(char const*, int, stat*) (s3fs.cpp:504)
==5879==    by 0x40E7AD: s3fs_open(char const*, fuse_file_info*) (s3fs.cpp:1955)
==5879==    by 0x4E40F1F: fuse_fs_open (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E41036: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4AE2B: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E47178: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x6255DC4: start_thread (in /lib64/libpthread-2.17.so)
==5879==    by 0x6560BDC: clone (in /lib64/libc-2.17.so)
==5879== 
==5879== 18,190 (672 direct, 17,518 indirect) bytes in 14 blocks are definitely lost in loss record 143 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43EBAC: allocate (new_allocator.h:104)
==5879==    by 0x43EBAC: _M_get_node (stl_tree.h:370)
==5879==    by 0x43EBAC: _M_create_node (stl_tree.h:380)
==5879==    by 0x43EBAC: _M_insert_ (stl_tree.h:1023)
==5879==    by 0x43EBAC: _M_insert_unique_ (stl_tree.h:1482)
==5879==    by 0x43EBAC: insert (stl_map.h:648)
==5879==    by 0x43EBAC: operator[] (stl_map.h:469)
==5879==    by 0x43EBAC: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:278)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DEFF: check_object_access(char const*, int, stat*) (s3fs.cpp:504)
==5879==    by 0x40E7AD: s3fs_open(char const*, fuse_file_info*) (s3fs.cpp:1955)
==5879==    by 0x4E40F1F: fuse_fs_open (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E41036: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4AE2B: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E47178: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x6255DC4: start_thread (in /lib64/libpthread-2.17.so)
==5879==    by 0x6560BDC: clone (in /lib64/libc-2.17.so)
==5879== 
==5879== 22,159 (816 direct, 21,343 indirect) bytes in 17 blocks are definitely lost in loss record 144 of 146
==5879==    at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333)
==5879==    by 0x43F2D0: allocate (new_allocator.h:104)
==5879==    by 0x43F2D0: _M_get_node (stl_tree.h:370)
==5879==    by 0x43F2D0: _M_create_node (stl_tree.h:380)
==5879==    by 0x43F2D0: _M_insert_ (stl_tree.h:1023)
==5879==    by 0x43F2D0: _M_insert_unique_ (stl_tree.h:1482)
==5879==    by 0x43F2D0: insert (stl_map.h:648)
==5879==    by 0x43F2D0: operator[] (stl_map.h:469)
==5879==    by 0x43F2D0: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:284)
==5879==    by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461)
==5879==    by 0x40DEFF: check_object_access(char const*, int, stat*) (s3fs.cpp:504)
==5879==    by 0x40E7AD: s3fs_open(char const*, fuse_file_info*) (s3fs.cpp:1955)
==5879==    by 0x4E40F1F: fuse_fs_open (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E41036: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4AE2B: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4)
==5879==    by 0x4E47178: ???
(in /lib64/libfuse.so.2.9.4) ==5879== by 0x6255DC4: start_thread (in /lib64/libpthread-2.17.so) ==5879== by 0x6560BDC: clone (in /lib64/libc-2.17.so) ==5879== ==5879== 45,613 (7,560 direct, 38,053 indirect) bytes in 35 blocks are definitely lost in loss record 146 of 146 ==5879== at 0x4C29105: operator new(unsigned long) (vg_replace_malloc.c:333) ==5879== by 0x43E4F0: StatCache::AddStat(std::string&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, bool) (cache.cpp:261) ==5879== by 0x40D709: get_object_attribute(char const*, stat*, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*) (s3fs.cpp:461) ==5879== by 0x40DEFF: check_object_access(char const*, int, stat*) (s3fs.cpp:504) ==5879== by 0x40E7AD: s3fs_open(char const*, fuse_file_info*) (s3fs.cpp:1955) ==5879== by 0x4E40F1F: fuse_fs_open (in /lib64/libfuse.so.2.9.4) ==5879== by 0x4E41036: ??? (in /lib64/libfuse.so.2.9.4) ==5879== by 0x4E4AE2B: ??? (in /lib64/libfuse.so.2.9.4) ==5879== by 0x4E4A55A: ??? (in /lib64/libfuse.so.2.9.4) ==5879== by 0x4E47178: ??? (in /lib64/libfuse.so.2.9.4) ==5879== by 0x6255DC4: start_thread (in /lib64/libpthread-2.17.so) ==5879== by 0x6560BDC: clone (in /lib64/libc-2.17.so) ==5879== ==5879== LEAK SUMMARY: ==5879== definitely lost: 12,504 bytes in 103 blocks ==5879== indirectly lost: 121,698 bytes in 2,472 blocks ==5879== possibly lost: 0 bytes in 0 blocks ==5879== still reachable: 40,509 bytes in 96 blocks ==5879== suppressed: 0 bytes in 0 blocks ==5879== Reachable blocks (those to which a pointer was found) are not shown. ==5879== To see them, rerun with: --leak-check=full --show-leak-kinds=all ==5879== ==5879== For counts of detected and suppressed errors, rerun with: -v ==5879== ERROR SUMMARY: 18 errors from 18 contexts (suppressed: 0 from 0) ```

@ggtakec commented on GitHub (Feb 6, 2016):

I'm sorry for the late reply.

I reviewed s3fs's cache eviction ("cache out") logic and concluded that it needed to be changed, so I updated the master branch via #350. I hope it resolves this issue. (Please see the #350 comments for an explanation of why the old cache-out logic was wrong.)

Please test with the latest master code.
Thanks in advance for your kindness.

<!-- gh-comment-id:180726479 -->

@justinfalk commented on GitHub (Feb 6, 2016):

Thanks @ggtakec I'll try it out now.

<!-- gh-comment-id:180848562 -->

@justinfalk commented on GitHub (Feb 6, 2016):

Using the latest from master I can't even do a directory listing. Note all of the substituted variables in the debug logs, like `$folder$` and `_%24folder%24`.

fstab

s3fs#xxx-xxxxx  /mnt/xxx-xxxxx     fuse    defaults,noatime,allow_other,umask=022,url=https://s3.amazonaws.com,use_cache=/s3fscache,del_cache,use_sse,stat_cache_expire=300,dbglevel=debug,_netdev 0 0

version

Feb  6 15:57:22 ip-10-2-0-77 s3fs[7577]: s3fs.cpp:s3fs_init(3316): init v1.79(commit:938554e) with OpenSSL
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:   [path=/dir3/2015/2]
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       [tpath=/dir3/2015/2]
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       URL is https://s3.amazonaws.com/xxx-xxxxx?delimiter=/&max-keys=2&prefix=dir3/2015/2/
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       URL changed is https://xxx-xxxxx.s3.amazonaws.com?delimiter=/&max-keys=2&prefix=dir3/2015/2/
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       computing signature [GET] [/] [delimiter=/&max-keys=2&prefix=dir3/2015/2/] []
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       url is https://s3.amazonaws.com
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       HTTP response code 200
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]: s3fs.cpp:append_objects_from_xml_ex(2513): contents_xp->nodesetval is empty.
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       add stat cache entry[path=/dir3/2015/2/]
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       delete stat cache entry[path=/dir3/2015/2/]
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       [tpath=/dir3/2015]
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       [tpath=/dir3/2015][bpath=][save=][sseckeypos=-1]
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       URL is https://s3.amazonaws.com/xxx-xxxxx/dir3/2015
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       URL changed is https://xxx-xxxxx.s3.amazonaws.com/dir3/2015
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       computing signature [HEAD] [/dir3/2015] [] []
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       url is https://s3.amazonaws.com
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       HTTP response code 404 was returned, returning ENOENT
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       [tpath=/dir3/2015/]
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       [tpath=/dir3/2015/][bpath=][save=][sseckeypos=-1]
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       URL is https://s3.amazonaws.com/xxx-xxxxx/dir3/2015/
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       URL changed is https://xxx-xxxxx.s3.amazonaws.com/dir3/2015/
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       computing signature [HEAD] [/dir3/2015/] [] []
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       url is https://s3.amazonaws.com
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       HTTP response code 404 was returned, returning ENOENT
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       [tpath=/dir3/2015_$folder$]
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       [tpath=/dir3/2015_$folder$][bpath=][save=][sseckeypos=-1]
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       URL is https://s3.amazonaws.com/xxx-xxxxx/dir3/2015_%24folder%24
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       URL changed is https://xxx-xxxxx.s3.amazonaws.com/dir3/2015_%24folder%24
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       computing signature [HEAD] [/dir3/2015_$folder$] [] []
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       url is https://s3.amazonaws.com
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       HTTP response code 404 was returned, returning ENOENT
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:   [path=/dir3/2015]
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       [tpath=/dir3/2015]
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       URL is https://s3.amazonaws.com/xxx-xxxxx?delimiter=/&max-keys=2&prefix=dir3/2015/
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       URL changed is https://xxx-xxxxx.s3.amazonaws.com?delimiter=/&max-keys=2&prefix=dir3/2015/
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       computing signature [GET] [/] [delimiter=/&max-keys=2&prefix=dir3/2015/] []
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       url is https://s3.amazonaws.com
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       HTTP response code 200
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]: s3fs.cpp:append_objects_from_xml_ex(2513): contents_xp->nodesetval is empty.
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       add stat cache entry[path=/dir3/2015/]
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       delete stat cache entry[path=/dir3/2015/]
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       [tpath=/dir3]
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       [tpath=/dir3][bpath=][save=][sseckeypos=-1]
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       URL is https://s3.amazonaws.com/xxx-xxxxx/dir3
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       URL changed is https://xxx-xxxxx.s3.amazonaws.com/dir3
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       computing signature [HEAD] [/dir3] [] []
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       url is https://s3.amazonaws.com
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       HTTP response code 404 was returned, returning ENOENT
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       [tpath=/dir3/]
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       [tpath=/dir3/][bpath=][save=][sseckeypos=-1]
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       URL is https://s3.amazonaws.com/xxx-xxxxx/dir3/
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       URL changed is https://xxx-xxxxx.s3.amazonaws.com/dir3/
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       computing signature [HEAD] [/dir3/] [] []
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       url is https://s3.amazonaws.com
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       HTTP response code 404 was returned, returning ENOENT
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       [tpath=/dir3_$folder$]
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       [tpath=/dir3_$folder$][bpath=][save=][sseckeypos=-1]
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       URL is https://s3.amazonaws.com/xxx-xxxxx/dir3_%24folder%24
Feb  6 16:02:21 ip-10-2-0-77 s3fs[7577]:       URL changed is https://xxx-xxxxx.s3.amazonaws.com/dir3_%24folder%24
<!-- gh-comment-id:180863939 -->

@justinfalk commented on GitHub (Feb 6, 2016):

I should also note that the memory leak was even more pronounced using the latest from master. I was able to get the s3fs process up to nearly 7GB resident memory after only 20 minutes or so. That may have been the result of the other issue though. Just thought I would mention in case it's relevant. Thanks.

<!-- gh-comment-id:180867934 -->

@ggtakec commented on GitHub (Feb 7, 2016):

@justinfalk
I don't think this is a memory leak; rather, s3fs uses a large amount of memory for the stat cache.
Presumably you are listing files in a directory that contains many files (objects)?

Because you specify the "stat_cache_expire=300" option, stat cache entries are not cleaned up until 300 seconds have elapsed since their last access. So I think s3fs keeps accumulating cache entries, and the cache grows very large.

My suggestion is to specify the max_stat_cache_size option along with your current options.
Its value is the number of stat cache entries to keep; the default is 1000.

With this, I think you can set an upper limit on the cache size.

Please try setting this option.
Thanks in advance for your help.
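The accumulation described above can be modeled roughly as follows. This is an illustrative Python sketch of a stat cache with a TTL and an entry-count cap, not s3fs's actual C++ implementation (StatCache in cache.cpp); the class name and eviction details are assumptions for illustration only:

```python
import time
from collections import OrderedDict


class StatCacheModel:
    """Toy model of a stat cache with an entry-count cap and a TTL.

    Illustrative only -- s3fs's real StatCache (cache.cpp) is C++ and
    differs in detail. Without a sensible max_entries, entries accessed
    within the TTL simply accumulate, which is how memory grows.
    """

    def __init__(self, max_entries=1000, expire_sec=300):
        self.max_entries = max_entries
        self.expire_sec = expire_sec
        self.entries = OrderedDict()  # path -> (meta, last-access time)

    def add(self, path, meta):
        now = time.monotonic()
        # Evict expired entries first...
        for p in [p for p, (_, t) in self.entries.items()
                  if now - t > self.expire_sec]:
            del self.entries[p]
        # ...then the least recently used, if still at the cap.
        while len(self.entries) >= self.max_entries:
            self.entries.popitem(last=False)
        self.entries[path] = (meta, now)

    def get(self, path):
        hit = self.entries.get(path)
        if hit is None:
            return None
        meta, _ = hit
        self.entries[path] = (meta, time.monotonic())  # refresh access time
        self.entries.move_to_end(path)                 # mark most recently used
        return meta
```

With a cap of 3, adding a fourth entry evicts the oldest one, so lookups for it miss and memory stays bounded.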

<!-- gh-comment-id:180891981 -->

@justinfalk commented on GitHub (Feb 7, 2016):

> Probably, you list files in the directory which has many files (objects), don't you?

No, my use case actually never does directory listings. I have full paths stored in the database and the application just occasionally accesses them for downloads. I manually do "ls" on a directory with a couple hundred files just to test the mount, but there is no other directory listing happening anywhere.

With 1.79 these exact same settings worked, albeit with a fairly large memory leak. What about the 404 and the variables shown in the log I just posted?

<!-- gh-comment-id:180902662 -->

@justinfalk commented on GitHub (Feb 7, 2016):

Hi @ggtakec , I tried it again with the addition of the following two options:

stat_cache_expire=1
max_stat_cache_size=1

I literally just did the following:

mount /mnt/xxx-xxxxx
ls -lah /mnt/xxx-xxxxx/dir3/2015/2/6/0/837B4FEE/939AD159

This directory has only 89 files. s3fs memory use is at 250 MB, it has been hanging for 10 minutes, and /var/log/messages has > 10k lines of s3fs debug output.

If I run exactly the same on the 1.79 release it works as expected.

<!-- gh-comment-id:180905266 -->

@ggtakec commented on GitHub (Feb 7, 2016):

@justinfalk
I tested on my EC2 instance, but s3fs used only about 84 MB of memory after listing 1,000 files in a directory (with neither stat_cache_expire nor max_stat_cache_size set).
For a listing of about 100 files, it is hard to believe s3fs would use that much memory; there may be another cause.
If you can, please try specifying only max_stat_cache_size (e.g. =100), or no stat cache options at all.

Also, were the objects in your bucket created by s3fs, or by another S3 tool (e.g. the S3 console, s3cmd, ...)?

Note: the 404 errors occur because of compatibility with other S3 tools; S3 tools, including s3fs, differ subtly in how they create directory objects.

Regards,
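The directory-object compatibility probing mentioned here is visible in the debug log earlier in this thread: for each directory, s3fs issues HEAD requests against several candidate keys. A minimal sketch of that candidate list (the function name is mine, not s3fs's):

```python
def directory_probe_keys(path):
    """Candidate object keys probed when resolving a directory, per the
    HEAD sequence in the debug log above: the bare name, the name with a
    trailing slash, and the legacy "_$folder$" marker that some other S3
    tools use to represent folders."""
    base = path.rstrip("/")
    return [base, base + "/", base + "_$folder$"]
```

Each candidate that returns 404 is expected noise when the bucket was populated by a tool with a different directory convention.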

<!-- gh-comment-id:180911140 -->

@grutherford commented on GitHub (Mar 11, 2016):

@justinfalk I'm having a similar issue. Memory keeps being used until eventually the process is killed by the kernel because the server is out of memory. Below are the fstab settings I'm using with version 1.79; the syslog shows normal s3fs logs, no errors, etc.

bucket /mnt/s3 fuse.s3fs _netdev,noatime,allow_other,uid=1001,gid=1001,dbglevel=debug,curldbg 0 0

Edit: Also, based on other issues reported, I'm running curl 7.35.0 on Ubuntu 14.04, in case that matters.

<!-- gh-comment-id:195265531 -->

@ggtakec commented on GitHub (Mar 13, 2016):

Hi @justinfalk, @grutherford,
I tested s3fs (latest code) with the following options:

  • stat_cache_expire=1
  • max_stat_cache_size=1

But I could not reproduce this issue, and I could not find a bug that would leak stat cache entries.
I will keep trying to reproduce it.

Regards,

<!-- gh-comment-id:195930065 -->

@grantrutherford commented on GitHub (Mar 14, 2016):

Hi @ggtakec looks like I don't have the same issue as @justinfalk , I correctly set stat_cache_expire=300 and my memory issues seem to have gone. Thank you for your help!

<!-- gh-comment-id:196275096 -->

@ggtakec commented on GitHub (Mar 22, 2016):

@grantrutherford Thanks for reporting your result. I'm glad it no longer leaks.

@justinfalk I'm sorry that I have not been able to reproduce this yet.
Are you still seeing this bug?

<!-- gh-comment-id:199657070 -->

@barsk commented on GitHub (Apr 8, 2016):

Hi, I also have the memory leak issues with 1.79. I do heavy directory listings when scanning for and loading huge amounts of data into an Elasticsearch index: about 500,000 files in a few thousand directories. I end up with s3fs using 800 MB after the indexing; that's 20% of my available memory.

So, what is the solution? Setting stat_cache_expire=300, or compiling the latest sources?

<!-- gh-comment-id:207268592 -->

@wytcld commented on GitHub (May 12, 2016):

Had a couple of buckets, each with a few million files, mounted with s3fs v1.79 (commit:d16d616). I was uploading files to them with s3cmd, and used s3fs only to occasionally list a directory within the space, for convenience. A couple of systems set up this way ran out of memory, and I initially thought the culprit was s3cmd. But I then found one of them, with 4 GB RAM and 8 GB swap, with all memory exhausted while nothing much besides s3fs was running, and htop showed s3fs using all the memory. Stopping s3fs freed it all up. It was just sitting there idle. Very dangerous.

<!-- gh-comment-id:218772664 -->

@ggtakec commented on GitHub (May 15, 2016):

@barsk and @wytcld
You can set the max_stat_cache_size option; its value is the number of file stat entries to cache.
The stat cache entry for one file is somewhat over 200 bytes (see struct stat_cache_entry); its exact size depends on the number of headers (metadata) of the object (file).
Please adjust max_stat_cache_size and stat_cache_expire.

Thanks in advance for your assistance.
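As a rough, hypothetical illustration of this tuning (the bucket name, mount point, and chosen limits below are placeholders, and the estimate leans on the ~200+ bytes per entry figure above, so real usage will vary): capping the cache at, say, 100,000 entries bounds stat-cache memory to roughly tens of MB instead of letting millions of entries accumulate. An fstab line might look like:

```
s3fs#mybucket /mnt/mybucket fuse _netdev,allow_other,umask=022,url=https://s3.amazonaws.com,use_sse,max_stat_cache_size=100000,stat_cache_expire=300 0 0
```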

<!-- gh-comment-id:219267621 -->

@jamessoubry commented on GitHub (Jun 6, 2016):

I found that running over https ate up my memory, so I switched back to using http.

<!-- gh-comment-id:224017063 -->

@ggtakec commented on GitHub (Jun 12, 2016):

Hi, @jamessoubry
I think this may have the same cause as #254.
If you can, please see following old issue on googlecode.
https://code.google.com/archive/p/s3fs/issues/314

Thanks in advance for your assistance.

<!-- gh-comment-id:225405323 -->

@tlevi commented on GitHub (Jun 28, 2016):

I'd like to invite anybody having leaks unrelated to SSL to try my patch and provide feedback. I'm not certain yet this is a complete fix (or at all) so at this point I'm going to test it under load for a while longer before making a PR.

<!-- gh-comment-id:228953002 -->

@ggtakec commented on GitHub (Jul 3, 2016):

@tlevi thanks for your PR.
I merged it; please try the latest code.
Regards,

<!-- gh-comment-id:230133220 -->

@tlevi commented on GitHub (Jul 3, 2016):

Yes, I haven't had any more memory issues since applying this to production.

<!-- gh-comment-id:230144400 -->

@ggtakec commented on GitHub (Jul 18, 2016):

I'm sorry for my late reply.

@tlevi Thanks for your reply.
@justinfalk Can I close this issue?

<!-- gh-comment-id:233323021 -->

@murainwood commented on GitHub (Feb 15, 2017):

We are also seeing a similar issue.

@ggtakec commented on GitHub (May 5, 2017):

@murainwood I'm sorry for my late reply.
Which s3fs version did you use? (If you use the master branch, please let us know the commit SHA-1.)
If you can, please try the latest code in the master branch.
Thanks in advance for your assistance.

@nbalakrishnan commented on GitHub (Oct 6, 2017):

Release v1.82 is still leaking memory (pretty rapidly too). Here's my setup:

  1. RHEL 7.4 (Maipo) 64 bit
  2. FUSE 2.9.7 (compiled from source)
  3. libcurl 7.56.0 (compiled from source)
  4. nss 3.33 (compiled from source)
  5. nspr 4.17.0 (bundled with nss 3.33, compiled from source)

I tried compiling --with-nss and also (separately) using --with-openssl. Options I'm using with s3fs:

allow_other,umask=0002,max_stat_cache_size=10000,stat_cache_expire=30,multireq_max=50,use_sse

The setup I have involves listing bucket/folder contents and reading objects (files), and is equivalent to the following:

cd <root folder where bucket is mounted>
find -type f -exec cat {} \;

Some stats:

  1. Number of list operations is ~10 per minute.
  2. Each list op returns between 50 and 100 files.
  3. Files are small (less than 100MB in the worst case, most commonly 1-10MB). Each file is then read fully.

The attached valgrind output corresponds to an 8-hour run of this setup. At the end of the run, s3fs was occupying about 1.5GB of resident memory and 24GB of virtual memory. I'm continuing to probe this; in the meantime, if anyone has any insights, please share.

Thanks...

vg.log (https://github.com/s3fs-fuse/s3fs-fuse/files/1361502/vg.log)
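For anyone wanting to reproduce a log like vg.log: a minimal sketch (not from this thread) of running s3fs in the foreground under valgrind, assuming a hypothetical bucket `mybucket` mounted at `/mnt/mybucket` with the options quoted above; adjust paths and options to your environment.

```shell
# Run s3fs in the foreground (-f) under valgrind so that leak records
# are written when the process exits cleanly on unmount.
# "mybucket" and "/mnt/mybucket" are placeholders.
valgrind --leak-check=full --show-leak-kinds=definite --log-file=vg.log \
  s3fs mybucket /mnt/mybucket -f \
  -o allow_other,umask=0002,max_stat_cache_size=10000,stat_cache_expire=30

# In another terminal, drive the workload, then unmount cleanly so
# valgrind can emit its final leak summary:
#   find /mnt/mybucket -type f -exec cat {} \; > /dev/null
#   fusermount -u /mnt/mybucket
```

Note that a clean unmount matters: if s3fs is killed instead, valgrind cannot distinguish still-reachable allocations from true leaks.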

@PVikash commented on GitHub (Oct 27, 2017):

I am also facing the memory leak issue with v1.82.
To work around it, I have set minimum values for the following options:
max_stat_cache_size=1
stat_cache_expire=1

With the above settings I have not encountered the memory leak for 6 hours, but they significantly slow down performance.
Uploading a file of a few KB now takes more than a minute :( ; with the default settings the same file uploaded within 15 seconds.

@ggtakec, could you please shed some light on this?
Any suggestion would be much appreciated.

Regards,
Vikash

@PVikash commented on GitHub (Oct 30, 2017):

I was trying to tune the max_stat_cache_size and stat_cache_expire parameters.
I started with max_stat_cache_size=1 and stat_cache_expire=1; per the documentation, with these values cache entries should expire after 1 second.

stat_cache_expire (default is no expire)

  • specify expire time(seconds) for entries in the stat cache.

max_stat_cache_size (default="10000" entries (about 4MB))

  • maximum number of entries in the stat cache

But I found that all the files I copied to the mount point still persist in the cache even after 10 minutes.

I provided both properties at mount time with the -o option, as follows:
sudo /usr/local/bin/s3fs mybucket /mnt/dev/s3/mybucket -o allow_other -o passwd_file=/etc/passwd-s3fs -o use_path_request_style -o url=https://myregion.amazonaws.com -o endpoint=myregion -o uid=uid -o gid=gid -o umask=007 -o max_stat_cache_size=1 -o stat_cache_expire=1 -o use_cache=/tmp/dev/s3/cache/mybucket -o use_sse=kmsid:mykmsid

I am checking /tmp/dev/s3/cache/mybucket for the cache.

Am I missing something or using any property incorrectly?
Can someone please comment on this?

Thank you in advance.

Regards,
Vikash

@anilkumardesai commented on GitHub (Mar 30, 2018):

@PVikash
Were you able to get rid of this memory leak? We are still facing the issue, so I wanted to know if there is any fix other than the workaround of unmounting and remounting.

@byvalentino commented on GitHub (Apr 13, 2018):

Same here. It doesn't seem stable at all.

@ggtakec commented on GitHub (May 7, 2018):

@PVikash @byvalentino Sorry for my late reply.
The same phenomenon has also occurred in #748, and we are investigating the cause.
I think it should be possible to fix the leak problem; please wait several days.
Thanks in advance for your assistance.

@anilkumardesai commented on GitHub (May 7, 2018):

We fixed the issue by changing the s3fs url from https (the default) to http. To my surprise, I have not seen the issue for the last 3 weeks.

@nbalakrishnan commented on GitHub (May 7, 2018):

@anilkumardesai I tried switching from HTTPS to HTTP because there was some blame on openssl, but the leak remained (please see my previous message in this thread for details).

@ggtakec commented on GitHub (May 27, 2018):

@PVikash @byvalentino @anilkumardesai @nbalakrishnan
I merged #768 for this issue (memory leak).
If you can, please build the latest code in the master branch and try it (and see the comments in #748).
Thanks in advance for your help.

@gaul commented on GitHub (Feb 2, 2019):

@justinfalk did this commit resolve your issue? If so please close this issue.

@srflaxu40 commented on GitHub (Feb 8, 2019):

What version contains the fix for this error?

I am getting this in syslog from a crashing SFTP server that uses s3fs:

Feb  8 06:26:19 ip-10-1-7-221 kernel: [45003.174068] [ pid ]   uid  tgid total_vm      rss nr_ptes nr_pmds swapents oom_score_adj name
Feb  8 06:26:19 ip-10-1-7-221 kernel: [45003.174074] [  404]     0   404     8818      595      21       3        0             0 systemd-journal
Feb  8 06:26:19 ip-10-1-7-221 kernel: [45003.174077] [  443]     0   443    23693      147      17       3        0             0 lvmetad
Feb  8 06:26:19 ip-10-1-7-221 kernel: [45003.174081] [  469]     0   469    10693      685      23       3        0         -1000 systemd-udevd
Feb  8 06:26:19 ip-10-1-7-221 kernel: [45003.174085] [  539]   100   539    25081      447      19       3        0             0 systemd-timesyn
Feb  8 06:26:19 ip-10-1-7-221 kernel: [45003.174088] [  998]     0   998     4030      533      11       3        0             0 dhclient
Feb  8 06:26:19 ip-10-1-7-221 kernel: [45003.174091] [ 1111]     0  1111     1305       29       9       4        0             0 iscsid
Feb  8 06:26:19 ip-10-1-7-221 kernel: [45003.174095] [ 1112]     0  1112     1430      881      11       4        0           -17 iscsid
Feb  8 06:26:19 ip-10-1-7-221 kernel: [45003.174098] [ 1116]     0  1116     1099      160       8       3        0             0 acpid
Feb  8 06:26:19 ip-10-1-7-221 kernel: [45003.174102] [ 1120]     0  1120    56094     3144      31       5        0          -900 snapd
Feb  8 06:26:19 ip-10-1-7-221 kernel: [45003.174105] [ 1122]     0  1122     6511      417      19       3        0             0 atd
Feb  8 06:26:19 ip-10-1-7-221 kernel: [45003.174108] [ 1124]     0  1124     6932      492      17       3        0             0 cron
Feb  8 06:26:19 ip-10-1-7-221 kernel: [45003.174112] [ 1131]     0  1131    68649      493      35       3        0             0 accounts-daemon
Feb  8 06:26:19 ip-10-1-7-221 kernel: [45003.174115] [ 1141]   104  1141    65157      470      29       4        0             0 rsyslogd
Feb  8 06:26:19 ip-10-1-7-221 kernel: [45003.174119] [ 1146]   107  1146    10724      515      26       3        0          -900 dbus-daemon
Feb  8 06:26:19 ip-10-1-7-221 kernel: [45003.174122] [ 1157]     0  1157     7154      493      17       3        0             0 systemd-logind
Feb  8 06:26:19 ip-10-1-7-221 kernel: [45003.174125] [ 1166]     0  1166    23842      297      16       3        0             0 lxcfs
Feb  8 06:26:19 ip-10-1-7-221 kernel: [45003.174128] [ 1170]     0  1170    16378      293      37       4        0         -1000 sshd
Feb  8 06:26:19 ip-10-1-7-221 kernel: [45003.174131] [ 1190]     0  1190     3343       35      10       3        0             0 mdadm
Feb  8 06:26:19 ip-10-1-7-221 kernel: [45003.174135] [ 1199]     0  1199    69272      583      39       3        0             0 polkitd
Feb  8 06:26:19 ip-10-1-7-221 kernel: [45003.174138] [ 1215]     0  1215  2958438  1968525    5483      14        0             0 s3fs
Feb  8 06:26:19 ip-10-1-7-221 kernel: [45003.174141] [ 1253]     0  1253     4868      429      15       3        0             0 irqbalance
Feb  8 06:26:19 ip-10-1-7-221 kernel: [45003.174145] [ 1267]     0  1267     3618      395      11       3        0             0 agetty
Feb  8 06:26:19 ip-10-1-7-221 kernel: [45003.174148] [ 1268]     0  1268     3664      344      12       3        0             0 agetty
Feb  8 06:26:19 ip-10-1-7-221 kernel: [45003.174151] [ 2667]     0  2667    12235      479      27       3        0             0 cron
Feb  8 06:26:19 ip-10-1-7-221 kernel: [45003.174155] [ 2668]     0  2668     1126      149       7       3        0             0 sh
Feb  8 06:26:19 ip-10-1-7-221 kernel: [45003.174158] [ 2669]     0  2669     1091      354       8       3        0             0 run-parts
Feb  8 06:26:19 ip-10-1-7-221 kernel: [45003.174161] [ 2756]     0  2756     2808      432       9       3        0             0 mlocate
Feb  8 06:26:19 ip-10-1-7-221 kernel: [45003.174164] [ 2761]     0  2761     2563       80      11       3        0             0 flock
Feb  8 06:26:19 ip-10-1-7-221 kernel: [45003.174167] [ 2762]     0  2762     1507       57       8       3        0             0 updatedb.mlocat
Feb  8 06:26:19 ip-10-1-7-221 kernel: [45003.174170] Out of memory: Kill process 1215 (s3fs) score 937 or sacrifice child
Feb  8 06:26:19 ip-10-1-7-221 kernel: [45003.179577] Killed process 1215 (s3fs) total-vm:11833752kB, anon-rss:7874100kB, file-rss:0kB
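A note on reading the oom-killer table above: the total_vm and rss columns are counted in 4 KiB pages, not kB, which is why they look smaller than the figures in the final "Killed process" line. A small sketch converting the s3fs row:

```shell
# oom-killer process-table columns total_vm and rss are in 4 KiB pages;
# the final "Killed process" line reports kB directly.
pages_to_kb() { echo $(( $1 * 4 )); }

pages_to_kb 2958438   # s3fs total_vm -> 11833752 kB (matches total-vm above)
pages_to_kb 1968525   # s3fs rss      -> 7874100 kB (matches anon-rss above)
```

So the table and the kill line agree: s3fs had grown to roughly 7.5 GiB of anonymous resident memory before being killed.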

@ggtakec commented on GitHub (Feb 11, 2019):

@srflaxu40 Which version do you use?
The latest version is 1.84, which fixes some memory leaks.
But we found two more memory leaks in that code and fixed them in the master branch.
If you can build s3fs in your local environment, please use the latest master branch code.
Otherwise, please wait for the next version, which will be released once the last issue is fixed.
Thanks in advance for your assistance.

@hudac commented on GitHub (Apr 24, 2019):

Hi @ggtakec, you mentioned in #254 that this issue should be fixed in 1.86, which is not available yet.
I'm using commit 381835e for testing and cannot reproduce this bug anymore.

Are there any plans for releasing 1.86?

@gaul commented on GitHub (Apr 24, 2019):

1.86 was a typo; 1.85 is the latest version. There are a few reports of memory leaks, but I cannot reproduce the symptoms.

@gaul commented on GitHub (Jul 9, 2019):

I am closing this issue since the symptoms appear to be addressed. I appreciate that a memory leak could manifest in several different ways which may not yet be fixed, but it would be better to open a new, scoped issue including the s3fs version and a description of your workload. FWIW 455e29cbea addresses the suggestion in https://github.com/s3fs-fuse/s3fs-fuse/issues/340#issuecomment-175399227.
