Mirror of https://github.com/s3fs-fuse/s3fs-fuse.git, synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #2621] Data corruption occurs when uploading large files #1249
Originally created by @hbao0915 on GitHub (Dec 17, 2024).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2621
Additional Information
Version of s3fs being used (s3fs --version): 1.89, 1.91, 1.95
How to run s3fs, if applicable
s3fs test-bucket /mnt -o passwd_file=/root/.passwd-s3fs -o url=https://s3xxx.com -o nonempty -o big_writes -o max_write=131072 -o ensure_diskfree=10240
Details about issue
Env info: 25GB of available disk space.
Testcase: start two s3fs processes with ensure_diskfree=10240, then upload a 20GB file from the local disk to the S3 server. Check the MD5 sum each time the copy finishes, then repeat the copy and checksum.
Result: the first 15GB of the uploaded file is filled with zero padding!
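The copy-and-verify loop described in the testcase can be sketched as a small script. This is a sketch, not part of the original report: the source file, destination directory, and iteration count are hypothetical placeholders with small defaults so the logic can be exercised anywhere; in the actual reproduction, SRC is a 20GB file and DST is an s3fs mount started with the command shown above (including -o ensure_diskfree=10240).

```shell
#!/bin/sh
# Copy a file repeatedly and compare MD5 checksums after each copy,
# as in the report's testcase. All paths below are placeholders; in
# the real reproduction DST is an s3fs mountpoint.
set -eu

SRC=${SRC:-$(mktemp)}        # source file (a 20GB file in the report)
DST=${DST:-$(mktemp -d)}     # destination (an s3fs mount in the report)
ITERATIONS=${ITERATIONS:-3}

# Seed the placeholder source with random data if it is empty.
[ -s "$SRC" ] || head -c 1048576 /dev/urandom > "$SRC"

expected=$(md5sum "$SRC" | awk '{print $1}')

i=0
while [ "$i" -lt "$ITERATIONS" ]; do
    cp "$SRC" "$DST/testfile"
    actual=$(md5sum "$DST/testfile" | awk '{print $1}')
    if [ "$expected" != "$actual" ]; then
        echo "MD5 mismatch on iteration $i: expected $expected got $actual"
        exit 1
    fi
    i=$((i + 1))
done
echo "all $ITERATIONS copies matched"
```

Against a healthy filesystem every iteration matches; under the reported bug the checksum of the copy through the s3fs mount diverges because the head of the uploaded file comes back as zeros.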