[GH-ISSUE #2112] Use FIO to generate smaller file, but file doesn't change #1076

Closed
opened 2026-03-04 01:51:11 +03:00 by kerem · 1 comment

Originally created by @huntersman on GitHub (Feb 17, 2023).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2112

Additional Information

Version of s3fs being used (s3fs --version)

V1.91

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse or dpkg -s fuse)

2.9.2

Kernel information (uname -r)

5.4.213-1.el7.elrepo.x86_64

GNU/Linux Distribution, if applicable (cat /etc/os-release)

CentOS Linux 7

How to run s3fs, if applicable

s3fs test /root/test -o passwd_file=${HOME}/.passwd-s3fs -o url=http://xx.xx.xx.xx -o use_path_request_style -o noxmlns -o dbglevel=error -o default_acl=public-read -o logfile=/var/log/s3fs.log -o allow_other -o multireq_max=500 -o nocopyapi -o use_cache="/xxx/xxx/"

Details about issue

  1. Use FIO to write a file in the mount path:
fio --name=sequential-write --directory=/root/test --rw=write --bs=4M --size=300m
  2. Use FIO to write again with a smaller size; the file in the bucket is still 300 MB:
fio --name=sequential-write --directory=/root/test --rw=write --bs=4M --size=100m

BTW, the dd command works as expected.

cd /root/test
dd if=/dev/zero of=test bs=1M count=300
dd if=/dev/zero of=test bs=1M count=100

With the dd commands, the file ends up at 100 MB as expected.
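The contrast above can be reproduced on any plain filesystem, independent of s3fs. This is a hypothetical local demo (the /tmp path is illustrative, not from the issue): rewriting fewer bytes without truncating leaves the old tail of the file in place, while dd's default behavior truncates on open.

```shell
# Create a 3 MiB file, then overwrite 1 MiB WITHOUT truncating
# (conv=notrunc skips O_TRUNC, similar to rewriting an existing file in place).
dd if=/dev/zero of=/tmp/trunc-demo bs=1M count=3 2>/dev/null
dd if=/dev/zero of=/tmp/trunc-demo bs=1M count=1 conv=notrunc 2>/dev/null
stat -c %s /tmp/trunc-demo    # still 3145728: the old tail survives

# dd's default open truncates first, so the file shrinks as in this issue.
dd if=/dev/zero of=/tmp/trunc-demo bs=1M count=1 2>/dev/null
stat -c %s /tmp/trunc-demo    # now 1048576
```

This suggests the 300 MB leftover is ordinary POSIX semantics for a writer that does not truncate, rather than an s3fs-specific bug.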

kerem closed this issue 2026-03-04 01:51:11 +03:00

@huntersman commented on GitHub (Mar 7, 2023):

I found out it is probably an issue of FIO.
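This is consistent with fio's file layout behavior: if a file of at least the requested size already exists, fio may reuse it without truncating, so the smaller run never shrinks the object. A hedged workaround sketch, assuming fio's default output naming of "<jobname>.<jobnum>.<filenum>" (the exact path here is an assumption, not taken from the issue):

```shell
# Hypothetical workaround: remove fio's previous output file before the
# smaller run, so fio lays out a fresh 100 MB file instead of reusing
# the existing 300 MB one. (-f keeps this safe if the file is absent.)
rm -f /root/test/sequential-write.0.0
# then re-run the smaller job:
#   fio --name=sequential-write --directory=/root/test --rw=write --bs=4M --size=100m
```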
