[GH-ISSUE #2621] Data corruption occurs when uploading large files #1249

Open
opened 2026-03-04 01:52:33 +03:00 by kerem · 0 comments
Owner

Originally created by @hbao0915 on GitHub (Dec 17, 2024).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2621

Additional Information

Version of s3fs being used (s3fs --version)

1.89, 1.91, 1.95

How to run s3fs, if applicable

s3fs test-bucket /mnt -o passwd_file=/root/.passwd-s3fs -o url=https://s3xxx.com -o nonempty -o big_writes -o max_write=131072 -o ensure_diskfree=10240

Details about issue

Environment: 25GB of available disk space.
Test case: start two s3fs processes with ensure_diskfree=10240, then upload a 20GB file from local disk to the S3 server. Verify the md5 checksum each time the copy finishes, then repeat the copy and checksum.
Result: the first 15GB of the uploaded file is filled with zero padding.
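The copy-and-verify loop from the test case can be sketched as a small shell helper (the mount command is taken from the report; the source/destination paths and the function name are hypothetical placeholders):

```shell
#!/bin/sh
# Mount as in the report (run twice, once per mountpoint, to get the two
# s3fs processes described in the test case; adjust bucket/URL for your setup):
#   s3fs test-bucket /mnt -o passwd_file=/root/.passwd-s3fs \
#        -o url=https://s3xxx.com -o nonempty -o big_writes \
#        -o max_write=131072 -o ensure_diskfree=10240

# copy_and_verify SRC DST: copy SRC to DST, then compare md5 checksums.
copy_and_verify() {
    src="$1"; dst="$2"
    cp "$src" "$dst"
    src_sum=$(md5sum "$src" | awk '{print $1}')
    dst_sum=$(md5sum "$dst" | awk '{print $1}')
    if [ "$src_sum" = "$dst_sum" ]; then
        echo "MATCH"
    else
        echo "MISMATCH: $src_sum vs $dst_sum"
    fi
}

# Repeat the copy and checksum, as in the test case (paths are examples):
# for i in 1 2 3; do
#     copy_and_verify /data/large-20GB.bin /mnt/large-20GB.bin
# done
```

On an affected s3fs version, the report implies the second and later iterations would print MISMATCH once free disk space drops below the ensure_diskfree threshold.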
