mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #2125] IO error after write about 100,000 files #1081
Originally created by @huntersman on GitHub (Mar 10, 2023).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2125
Additional Information
Version of s3fs being used (s3fs --version): V1.91 (commit: unknown)
Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, or dpkg -s fuse): 2.9.2
Kernel information (uname -r): 5.4.213-1.el7.elrepo.x86_64
GNU/Linux Distribution, if applicable (cat /etc/os-release): CentOS 7
How to run s3fs, if applicable
s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs): As the debug messages are very large, I have only included the last part.
Details about issue
I upload a lot of small files to s3fs and find that s3fs eventually becomes unusable.
This usually happens after about 100,000 files have been written. I would like to know why s3fs raises an IO error when writing many files. Thank you for your help!
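The workload can be sketched as a simple write loop. This is an assumed reproduction, not the reporter's actual script; the mount point is hypothetical (the thread later mentions /root/demo), and MOUNT_DIR defaults to a temporary directory here so the sketch runs even without an s3fs mount.

```shell
# Hypothetical reproduction of the workload described above.
# Point MOUNT_DIR at the s3fs mount (e.g. /root/demo) to exercise s3fs;
# it falls back to a temporary directory so the script is self-contained.
MOUNT_DIR="${MOUNT_DIR:-$(mktemp -d)}"

for i in $(seq 1 100000); do
    echo "small file payload" > "$MOUNT_DIR/file_$i"
done

ls "$MOUNT_DIR" | wc -l   # expected: 100000
```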
@ggtakec commented on GitHub (Mar 12, 2023):
Does s3fs not terminate in this case, and is s3fs still doing any communication or processing?
You may need to adjust the max_stat_cache_size, stat_cache_expire, stat_cache_interval_expire, etc. options. (Are all 100,000 files in the same /root/demo directory?)
I also wonder about the use_cache option. I assume there are 100,000 files (as cache files) in the cache directory. It's hard to believe that these cache files affect the ls command, but I can imagine that having 100,000 files in one local directory would have a big impact. (But I'm not sure that this many files contributes to this issue.)
@huntersman commented on GitHub (Mar 13, 2023):
The s3fs process does not exit but it seems to be stuck, and I can't get any response from it.
Yes, they are in the same directory.
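The cache-tuning options ggtakec suggests are passed to s3fs with -o at mount time. A minimal sketch: the bucket name, mount point, and credential file path are placeholders, and the values shown are illustrative, not recommendations; the option names themselves (max_stat_cache_size, stat_cache_expire, stat_cache_interval_expire, use_cache) are the ones discussed in this thread.

```shell
# Illustrative s3fs mount with a larger stat cache; "mybucket" and
# the paths are placeholders, the numeric values are examples only.
s3fs mybucket /root/demo \
    -o passwd_file=/etc/passwd-s3fs \
    -o max_stat_cache_size=1000000 \
    -o stat_cache_expire=300 \
    -o stat_cache_interval_expire=300 \
    -o use_cache=/var/cache/s3fs
```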
@huntersman commented on GitHub (Mar 14, 2023):
@ggtakec Thank you very much. I set max_stat_cache_size=1000000 and stat_cache_expire=300, and the IO error never happens again.
@ggtakec commented on GitHub (Mar 16, 2023):
@huntersman Thanks for confirming.
I think the stat cache overflowed, and the error occurred because the stat information had to be reacquired every time.
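The thrashing effect ggtakec describes can be illustrated with a toy LRU cache: when the working set (100,000 files) exceeds the cache capacity, a sequential scan misses on every access, so each stat must be reacquired; once the capacity exceeds the working set, repeat accesses hit. This is a simplified sketch of the mechanism, not s3fs's actual stat-cache implementation.

```python
from collections import OrderedDict

class StatCache:
    """Toy LRU cache illustrating the overflow behavior described
    above (not s3fs's real stat cache)."""

    def __init__(self, max_size):
        self.max_size = max_size
        self.entries = OrderedDict()
        self.misses = 0

    def get(self, path):
        if path in self.entries:
            self.entries.move_to_end(path)  # refresh LRU position
            return self.entries[path]
        # Cache miss: in s3fs this would mean re-fetching stat info from S3.
        self.misses += 1
        self.entries[path] = {"path": path}
        if len(self.entries) > self.max_size:
            self.entries.popitem(last=False)  # evict least recently used
        return self.entries[path]

# Cache smaller than the working set: a sequential scan evicts each
# entry long before it is revisited, so every access is a miss.
small = StatCache(max_size=1000)
for _ in range(2):
    for i in range(100_000):
        small.get(f"file_{i}")
small_misses = small.misses  # 200000: both passes miss on every file

# Cache larger than the working set (like max_stat_cache_size=1000000):
# only the first pass misses, the second pass hits entirely.
large = StatCache(max_size=1_000_000)
for _ in range(2):
    for i in range(100_000):
        large.get(f"file_{i}")
large_misses = large.misses  # 100000: only the initial pass misses
```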