[GH-ISSUE #2027] CPU usage and I/O are high #1019
Originally created by @bhuvanp1305 on GitHub (Sep 2, 2022).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2027
Details about the issue
When we call s3fs with the following options:
As per the load model applied, file collection happens every 5 minutes; each time the next collection runs and fetches the objects again, CPU usage and I/O become high.
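(The actual options were not captured in this mirror. For illustration only, an invocation of this general shape is typical; the bucket name, mount point, and every option below are assumptions, not the reporter's settings.)
# Hypothetical invocation - the reporter's real options were not recorded
s3fs mybucket /mnt/s3bucket \
    -o passwd_file=/etc/passwd-s3fs \
    -o use_cache=/tmp/s3fs_cache \
    -o stat_cache_expire=300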
Version of s3fs being used (s3fs --version)
sh-4.4# s3fs --version
Amazon Simple Storage Service File System V1.91 (commit:unknown) with OpenSSL
Copyright (C) 2010 Randy Rizun rrizun@gmail.com
License GPL2: GNU GPL version 2 <https://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)
sh-4.4# rpm -qi fuse
Name : fuse
Version : 2.9.7
Release : 15.el8
Architecture: x86_64
Install Date: Fri Jul 22 20:33:52 2022
Group : Unspecified
Size : 208300
License : GPL+
Signature : RSA/SHA256, Fri Feb 25 18:38:22 2022, Key ID 199e2f91fd431d51
Source RPM : fuse-2.9.7-15.el8.src.rpm
Build Date : Thu Feb 24 17:57:08 2022
Build Host : x86-vm-55.build.eng.bos.redhat.com
Relocations : (not relocatable)
Packager : Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>
Vendor : Red Hat, Inc.
URL : http://fuse.sf.net/
Summary : File System in Userspace (FUSE) v2 utilities
Description :
With FUSE it is possible to implement a fully functional filesystem in a
userspace program. This package contains the FUSE v2 userspace tools to
mount a FUSE filesystem.
Kernel information (uname -r)
sh-4.4# uname -r
4.18.0-305.49.1.el8_4.x86_64
GNU/Linux Distribution, if applicable (cat /etc/os-release)
[admin@slab743prov ~]$ cat /etc/os-release
NAME="Red Hat Enterprise Linux"
VERSION="8.4 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.4"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux 8.4 (Ootpa)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:8.4:GA"
HOME_URL="https://www.redhat.com/"
DOCUMENTATION_URL="https://access.redhat.com/documentation/red_hat_enterprise_linux/8/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_BUGZILLA_PRODUCT_VERSION=8.4
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.4"
s3fs command line used, if applicable
/etc/fstab entry, if applicable
s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)
If you execute s3fs with the dbglevel and curldbg options, you can get detailed debug messages.
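For example, a debug run along these lines prints detailed messages to the terminal (bucket name and mount point are placeholders):
# Run in the foreground (-f) with verbose s3fs and libcurl debugging
s3fs mybucket /mnt/s3bucket -f -o dbglevel=info -o curldbg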
@ggtakec commented on GitHub (Sep 4, 2022):
The reason for this seems to be poor performance in s3fs's internal processing when running some number of fio jobs.
I have found the problem function and am currently reviewing it.
This issue is likely the same as #2019.
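As a rough reproduction of "some number of fio jobs", a command like the following against the mount point would generate that kind of parallel I/O load (all fio parameters here are assumptions, not the reporter's actual job file):
# Hypothetical fio load: several parallel jobs doing buffered random writes
fio --name=s3fs-load --directory=/mnt/s3bucket \
    --rw=randwrite --bs=4k --size=256M --numjobs=8 --ioengine=psync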
@ggtakec commented on GitHub (Sep 4, 2022):
@bhuvanp1305
I have posted PR #2028 for this issue.
If you can build and test it, try https://github.com/ggtakec/s3fs-fuse/tree/fix_fdcache_page.
Thanks in advance for your help.
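For reference, building that branch follows the usual s3fs-fuse autotools steps (build dependencies vary by distribution and are not shown):
git clone https://github.com/ggtakec/s3fs-fuse.git
cd s3fs-fuse
git checkout fix_fdcache_page
./autogen.sh
./configure
make
sudo make install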
@ggtakec commented on GitHub (Sep 25, 2022):
PR https://github.com/s3fs-fuse/s3fs-fuse/pull/2028 was merged.
Please try it with the code on master, if you can.
We will close this issue, but if the problem still exists, please reopen it.
The follow-up issue about high memory usage continues in #2035.