[GH-ISSUE #2077] Bad performance when concurrent write #1051

Closed
opened 2026-03-04 01:50:58 +03:00 by kerem · 2 comments

Originally created by @huntersman on GitHub (Dec 12, 2022).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2077

Version of s3fs being used (s3fs --version)

V1.91

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse or dpkg -s fuse)

2.9.2

Kernel information (uname -r)

5.4.213-1.el7.elrepo.x86_64

GNU/Linux Distribution, if applicable (cat /etc/os-release)

CentOS 7

How to run s3fs, if applicable

s3fs test /mnt/test -o passwd_file=${HOME}/.passwd-s3fs -o url=http://ip -o use_path_request_style -o noxmlns -o dbglevel=error -o default_acl=public-read -o logfile=/var/log/s3fs.log -o allow_other -o nocopyapi -o use_cache="/buffer" -o del_cache

Details about issue

I use FIO to test concurrent write performance of s3fs.

fio --name=write-test --directory=/mnt/test --ioengine=libaio --rw=write --bs=1m --size=4g --numjobs=20 --direct=1 --group_reporting

And every command touching /mnt/test hangs, including ll /mnt/test, df -h, and cd /mnt/test.

I tried the parallel_count and multipart_size options, but they barely improved performance.

kerem closed this issue 2026-03-04 01:50:58 +03:00

@huntersman commented on GitHub (Dec 13, 2022):

It looks like s3fs sends 410 multipart requests for the 4 GB file, and MultiRead has to wait for MultiPerform, so s3fs hangs; once those 410 requests finish, s3fs returns to normal.
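The 410 figure lines up with the part count a 4 GiB fio file would produce under s3fs's default multipart_size of 10 MB (the 10 MB default is an assumption from the s3fs documentation; adjust if your mount overrides it). A quick sanity check:

```sh
# Estimate how many multipart-upload parts s3fs needs for one file.
file_mib=$((4 * 1024))   # fio writes a 4 GiB file per job
part_mib=10              # assumed s3fs default multipart_size (MB)

# Ceiling division: round up to a whole number of parts.
parts=$(( (file_mib + part_mib - 1) / part_mib ))
echo "$parts"            # → 410
```

With numjobs=20, each job generates a batch like this, so the request queue grows far faster than the default parallelism can drain it.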


@huntersman commented on GitHub (Dec 14, 2022):

Solved by setting parallel_count=10000
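For reference, the workaround amounts to appending the option to the mount command from the report above (a sketch; 10000 is the value the reporter used, and a smaller parallel_count may suffice depending on bandwidth and memory):

```sh
# Mount command from the report, with the reported workaround appended.
s3fs test /mnt/test \
  -o passwd_file=${HOME}/.passwd-s3fs \
  -o url=http://ip \
  -o use_path_request_style \
  -o noxmlns \
  -o dbglevel=error \
  -o default_acl=public-read \
  -o logfile=/var/log/s3fs.log \
  -o allow_other \
  -o nocopyapi \
  -o use_cache="/buffer" \
  -o del_cache \
  -o parallel_count=10000
```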
