[GH-ISSUE #1935] [ERR] fdcache.cpp:CleanupCacheDirInternal(819) #974

Closed
opened 2026-03-04 01:50:21 +03:00 by kerem · 3 comments

Originally created by @fikipollo on GitHub (Apr 19, 2022).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1935

Additional Information

Version of s3fs being used (s3fs --version)

V1.91

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)

2.9.2

Kernel information (uname -r)

3.13.0-52-generic

GNU/Linux Distribution, if applicable (cat /etc/os-release)

Ubuntu 14.04.2 LTS

s3fs command line used, if applicable

s3fs archive-s3fs-bucket /archive -o endpoint=eu-west-1 -o use_sse=1 -o use_cache=/s3fs-cache -o ensure_diskfree=2000 -o del_cache -o multipart_size=512 -o multipart_copy_size=512 -o logfile=/var/log/s3fs.log -o dbglevel=warn -o parallel_count=6 -o allow_other

s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)

If you execute s3fs with the dbglevel or curldbg option, you can get detailed debug messages.

2022-04-19T07:58:07.264Z [ERR] fdcache.cpp:CleanupCacheDirInternal(819): could not get fd_manager_lock when clean up file(/somefile/analysis/results/whatever/sge/a.out)
2022-04-19T07:58:07.264Z [ERR] fdcache.cpp:CleanupCacheDirInternal(819): could not get fd_manager_lock when clean up file(/somefile/analysis/results/whatever/sge/fb.err)
2022-04-19T07:58:07.264Z [ERR] fdcache.cpp:CleanupCacheDirInternal(819): could not get fd_manager_lock when clean up file(/somefile/analysis/results/whatever/sge/cd.err)
2022-04-19T07:58:07.264Z [ERR] fdcache.cpp:CleanupCacheDirInternal(819): could not get fd_manager_lock when clean up file(/somefile/analysis/results/whatever/sge/a.err)
2022-04-19T07:58:07.264Z [ERR] fdcache.cpp:CleanupCacheDirInternal(819): could not get fd_manager_lock when clean up file(/somefile/analysis/results/whatever/sge/fg.err)
2022-04-19T07:58:07.264Z [ERR] fdcache.cpp:CleanupCacheDirInternal(819): could not get fd_manager_lock when clean up file(/somefile/analysis/results/whatever/sge/o.err)
2022-04-19T07:58:07.264Z [ERR] fdcache.cpp:CleanupCacheDirInternal(819): could not get fd_manager_lock when clean up file(/somefile/analysis/results/whatever/sge/fgfg.err)

Details about issue

Hi there,

First, thanks for this awesome tool, I love it.

I'm getting a lot of error messages when copying new data (using rsync) into my mounted directory.
I have no clue what is going wrong; could it be a misconfiguration of the cache?

I'm not sure about the benefits of using or not using the cache. It would be great to have some details about the default behavior and the situations in which the cache is recommended. I looked in the documentation, but what I found wasn't enough.

Thanks for all!!

kerem closed this issue 2026-03-04 01:50:21 +03:00

@ggtakec commented on GitHub (May 18, 2022):

@fikipollo
I think this error message should be output at the warning or information level.
I will post a PR later to change the message level.

The message means that a cache file still in use was detected while s3fs was trying to delete cache files, because the disk space s3fs uses for the cache was insufficient.
This should have been a warning- or information-level message, not an error.
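The skip-on-busy behavior described here can be sketched roughly as follows (a simplified model with assumed names and a minimal non-blocking lock; not the actual s3fs code):

```cpp
#include <atomic>
#include <string>
#include <vector>

// Minimal stand-in for fd_manager_lock: a non-blocking try-lock.
std::atomic_flag fd_manager_busy = ATOMIC_FLAG_INIT;
bool try_lock_fd_manager() { return !fd_manager_busy.test_and_set(); }
void unlock_fd_manager()   { fd_manager_busy.clear(); }

// The cleanup pass tries to take the lock without blocking and simply
// skips files that are still in use ("could not get fd_manager_lock ..."),
// leaving them for a later pass. Returns the paths actually cleaned.
std::vector<std::string> cleanup_cache_dir(const std::vector<std::string>& files)
{
    std::vector<std::string> cleaned;
    for (const std::string& path : files) {
        if (!try_lock_fd_manager()) {
            continue;   // file in use: skip, not an error
        }
        cleaned.push_back(path);   // a real cleanup would unlink the file here
        unlock_fd_manager();
    }
    return cleaned;
}
```

Because the skip is an expected outcome rather than a failure, logging it at warning or information level fits better than ERR.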

FYI:
s3fs caches objects to prevent unnecessary downloads.
For example, when a user browses an object (file), s3fs downloads the file.
Later, when the user browses the same file again (assuming the file has not changed), s3fs would otherwise need to download it again.
So we think it should cache the downloaded file, both for performance and to reduce the request count.
This is the cache logic that runs when you specify the use_cache option of s3fs.
(We would like to continue improving the documentation.)
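The cache idea above can be sketched as follows (a hypothetical model with an in-memory fake remote store and ETag-style change detection; not the real s3fs implementation):

```cpp
#include <map>
#include <string>
#include <utility>

// First read of an object downloads it and keeps a local copy; later reads
// of an unchanged object (same ETag) are served from the copy, saving both
// latency and S3 request count.
struct CachedReader {
    // key -> {etag, body}: a stand-in for the S3 bucket.
    std::map<std::string, std::pair<std::string, std::string>> remote;
    // key -> {etag, body}: the local cache (use_cache directory in s3fs).
    std::map<std::string, std::pair<std::string, std::string>> cache;
    int downloads = 0;   // counts simulated S3 GETs

    std::string read(const std::string& key) {
        const auto& obj = remote.at(key);
        auto it = cache.find(key);
        if (it != cache.end() && it->second.first == obj.first) {
            return it->second.second;   // cache hit: no download
        }
        ++downloads;                    // miss or stale ETag: download
        cache[key] = obj;
        return obj.second;
    }
};
```

Repeated reads of an unchanged object hit the cache, while a changed ETag forces a fresh download, which is the trade-off the use_cache option buys at the cost of local disk space (bounded here by options like ensure_diskfree).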

<!-- gh-comment-id:1130059186 -->

@ggtakec commented on GitHub (May 22, 2022):

@fikipollo I merged the code (#1946) that changed the message level.
If you can, try using the master code.
This issue will be closed, but if you still have problems, please reopen it.

<!-- gh-comment-id:1133823152 -->

@fikipollo commented on GitHub (May 23, 2022):

Great, thanks for the fix and the explanations.

<!-- gh-comment-id:1134326083 -->