mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #375] Can't write after lseek() #194
Originally created by @RobbKistler on GitHub (Mar 16, 2016).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/375
Version:
s3fs-fuse master at cf56b35766

Repro:
As part of a bigger test (cp -R /usr/lib abucket/), I found that copying a certain file into an s3fs-fuse mount point fails. The cp always fails on the file locale-archive because the source file is sparse: it has "holes" in it because it was written using lseek() to advance past the end of the file. The cp command is smart and tries to do the same lseek() when writing the file into the s3fs-fuse mount point. Since this file might not be sparse on all systems, I will create a PR with a test that uses lseek() directly.
Results:
The file is partially written, and then cp reports an error, while s3fs-fuse logs "failed to load uninitialized area before writing". That message appears because S3fsCurl::PreGetObjectRequest() returns an error when the value of its size parameter is 0.
This is what a strace of cp looks like. fd=3 is the source file, fd=4 is the target file on the s3fs-fuse mount. The FS_IOC_FIEMAP ioctl is a big hint that cp has detected a sparse source file.
Expected:
s3fs-fuse should write all 0's for the bytes in the "hole" created by the lseek().
@ggtakec commented on GitHub (Mar 22, 2016):
@RobbKistler Fixed this issue in #379.
s3fs did not take into account the case of sparse files.
When FUSE copies a sparse file, it seems to call truncate internally, or to open the file with its full size, in order to seek.
s3fs creates a cache file (a temporary file) that is itself sparse (using the truncate system call), so s3fs must seek within that cache file when writing to it before uploading to S3.
We also have to change the file size that is used for loading contents before uploading, when there is an area that has not been downloaded yet.
I tested the following:
Regards,
@RobbKistler commented on GitHub (Mar 23, 2016):
Thank you so much @ggtakec!
@ggtakec commented on GitHub (Apr 10, 2016):
Merged #376
@ggtakec commented on GitHub (Apr 12, 2016):
@RobbKistler #379 fixed this issue, but it introduced a new bug (it did not create correct cache files).
So I reverted #379 and will make new patches for this.
Thus I am reopening this issue.
@ggtakec commented on GitHub (Apr 12, 2016):
@RobbKistler I merged #395, which fixes the broken cache.
I have re-closed this issue; if you find a bug, please reopen it.
Thanks in advance for your help.