[GH-ISSUE #1776] truncate error dd: failed to truncate to 104857600000 bytes in output file 'test': Input/output error #917

Closed
opened 2026-03-04 01:49:53 +03:00 by kerem · 1 comment
Owner

Originally created by @abserari on GitHub (Oct 11, 2021).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1776

Additional Information

time dd if=/dev/zero of=test bs=1M count=0 seek=100000
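For context, this dd invocation copies no data at all: with count=0 it only seeks past block 100000 and extends the file with a single ftruncate(2), which is where the 104857600000-byte figure in the error comes from (100000 * 1 MiB = 104857600000 bytes). A minimal sketch reproducing the same sparse-extend on a local filesystem (the /tmp path is an assumption for illustration):

```shell
# count=0 means dd writes nothing; seek=100000 with bs=1M makes dd call
# ftruncate() to grow the file to 100000 MiB = 104857600000 bytes.
dd if=/dev/zero of=/tmp/test bs=1M count=0 seek=100000

# The resulting file is sparse: logical size 104857600000, near-zero disk usage.
stat -c %s /tmp/test

# GNU truncate achieves the same thing with one command.
truncate -s 104857600000 /tmp/test
```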

Version of s3fs being used (s3fs --version)

Amazon Simple Storage Service File System V1.90 (commit:unknown) with OpenSSL

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)

example: 2.9.4

Kernel information (uname -r)

4.18.0

GNU/Linux Distribution, if applicable (cat /etc/os-release)

command result: cat /etc/os-release

s3fs command line used, if applicable

s3fs -o bucket=backup:/backup /data/36000c29928c97879ea1f274bdc0be73d_005056977c60/backup/ -d -d -f -o f2 -o curldbg -o nonempty -o passwd_file=/root/.passwd-s3fs  -o use_path_request_style -o url=http://10.103.242.23:9000 -o allow_other -o umask=0 -o max_write=131072 -o big_writes -o enable_noobj_cache -o sigv2 -o del_cache

/etc/fstab entry, if applicable

Details about issue

It seems that truncate always fails, and the reported reason is that the file does not exist.
Is creating a large file on s3fs so slow that the truncate operation fails?

kerem closed this issue 2026-03-04 01:49:54 +03:00
Author
Owner

@abserari commented on GitHub (Oct 29, 2021):

It seems this error happens because:
updating a file transfers the whole file to the remote S3 backend.
When a file is created on s3fs and then truncated, there is latency before the file can be read back, and that causes this error.
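A minimal timing sketch of this theory (the local path and the /mnt mount point are assumptions, not taken from the report above): the same sparse-extend that completes almost instantly on a local disk can return EIO on an s3fs mount, because s3fs has to fetch the object and re-upload the full content on flush.

```shell
# Local filesystem: count=0 dd is a single ftruncate(), effectively instant.
time dd if=/dev/zero of=/tmp/local-test bs=1M count=0 seek=100000

# s3fs mount (hypothetical path): the same command forces s3fs to read the
# object and upload ~100 GB of zeros, so it may be very slow or fail with
# "Input/output error" as in the title.
time dd if=/dev/zero of=/mnt/s3fs/test bs=1M count=0 seek=100000
```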
