mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-26 05:45:57 +03:00
[GH-ISSUE #2077] Bad performance when concurrent write #1051
Originally created by @huntersman on GitHub (Dec 12, 2022).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2077
Version of s3fs being used (s3fs --version): V1.91
Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, or dpkg -s fuse): 2.9.2
Kernel information (uname -r): 5.4.213-1.el7.elrepo.x86_64
GNU/Linux Distribution, if applicable (cat /etc/os-release): CentOS 7
How to run s3fs, if applicable:
s3fs test /mnt/test -o passwd_file=${HOME}/.passwd-s3fs -o url=http://ip -o use_path_request_style -o noxmlns -o dbglevel=error -o default_acl=public-read -o logfile=/var/log/s3fs.log -o allow_other -o nocopyapi -o use_cache="/buffer" -o del_cache

Details about issue:
I use FIO to test the concurrent write performance of s3fs:
fio --name=write-test --directory=/mnt/test --ioengine=libaio --rw=write --bs=1m --size=4g --numjobs=20 --direct=1 --group_reporting

While the test runs, every command against /mnt/test hangs, including ll /mnt/test, df -h, and cd /mnt/test. I tried the parallel_count and multipart_size options, but they hardly improved anything.

@huntersman commented on GitHub (Dec 13, 2022):
It looks like s3fs sends 410 multipart requests for the 4 GB file, and MultiRead has to wait for MultiPerform, so s3fs hangs; once those 410 multipart requests finish, s3fs returns to normal.

@huntersman commented on GitHub (Dec 14, 2022):
Solved by setting parallel_count=10000.
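As a cross-check of the numbers in the thread (a sketch, not from the original report): assuming s3fs's default multipart_size of 10 MB, each 4 GiB file that the fio job writes splits into ceil(4096 / 10) = 410 parts, which matches the 410 requests observed in the Dec 13 comment.

```shell
# Sanity check of the 410-request figure.
# Assumptions: s3fs default multipart_size is 10 MB, and the fio job above
# writes size=4g (4096 MiB) per file.
size_mib=4096
part_mib=10
# Ceiling division: number of multipart upload parts per 4 GiB file.
parts=$(( (size_mib + part_mib - 1) / part_mib ))
echo "parts per file: $parts"
```

With that part count in mind, the Dec 14 workaround amounts to remounting with -o parallel_count=10000 added to the original s3fs command line, so far more of those parts can upload concurrently instead of queuing.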