mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #446] Files not completely downloaded (v1.80) #241
Originally created by @JD557 on GitHub (Jul 7, 2016).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/446
After upgrading from v1.79 to v1.80, I've noticed that most of the files downloaded by s3fs-fuse are incomplete, i.e. only the first bytes are downloaded. I never had this problem with v1.79.
If I delete the file locally and re-upload it to S3, the download eventually succeeds, so this seems to be a transient error.
@ggtakec commented on GitHub (Jul 18, 2016):
@JD557
I think this issue is the same as #435, and I summarized it there.
Please check #435, and if it is not the same issue, please let me know.
Thanks in advance for your assistance.
@JD557 commented on GitHub (Jul 18, 2016):
I think my issue might be different: #435 seems to be about S3FS having problems uploading files, while my problem only appears when downloading with S3FS (the files in S3 are OK, although I don't use S3FS to upload them).
@ggtakec commented on GitHub (Jul 21, 2016):
@JD557 Thanks for your reply.
If you can reproduce this problem, please run s3fs with the dbglevel option and capture the log messages around the failure.
(The log can grow quite large, so please be careful; running s3fs in the foreground makes logging easier.)
I hope the log helps us solve this issue.
Thanks in advance for your help.
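For reference, a foreground debug run along the lines ggtakec suggests might look like this. This is a sketch, not the exact command from the thread: the bucket name, mount point, and cache path are placeholders, while `-f` (FUSE foreground mode) and `-o dbglevel` are existing s3fs options.

```shell
# Run s3fs in the foreground (-f) with the most verbose log level.
# "mybucket" and the paths below are placeholders; substitute your own.
# The output can be very large, so capture it to a file with tee.
s3fs mybucket /mnt/s3/deploy \
    -f \
    -o dbglevel=dbg \
    -o allow_other,umask=022,use_cache=/tmp/s3mnt-deploy \
    2>&1 | tee /tmp/s3fs-debug.log
```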
@andresilva commented on GitHub (Jul 21, 2016):
I managed to replicate the issue while running with debug logging (I work with @JD557). You can find the log here: https://gist.github.com/andrebeat/c6628594ebb92f65db9986e62184df5f. I just anonymized the name of the bucket.
@andresilva commented on GitHub (Jul 21, 2016):
I've made some extra tests to try to figure out what's happening. If I edit an existing file and make it bigger, it seems that only the content past the original length is appended to the local file. If I update a file and make it smaller, the file seems to just get truncated to the new size.
@ggtakec commented on GitHub (Jul 24, 2016):
@andrebeat and @JD557 Thanks for your reply.
If you specified the stat cache and cache directory options, please try removing the /tmp/..stat and / directories.
It seems that the stat information for each file does not match the cache files, so I think the cache files (in particular the stat caches) should be removed.
Thanks in advance for your assistance.
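As a concrete sketch of clearing the on-disk cache: the layout below assumes the mount used -o use_cache=/tmp/s3mnt-deploy and a bucket named "mybucket" (both placeholders, since the thread anonymizes the bucket name), with cached object data kept under the cache directory.

```shell
# Hypothetical cache-clearing sketch; adjust CACHE_DIR and BUCKET to
# match your own use_cache option and bucket name.
CACHE_DIR=/tmp/s3mnt-deploy
BUCKET=mybucket

# Simulate an existing cache directory (s3fs creates this on a real mount).
mkdir -p "$CACHE_DIR/$BUCKET"

# Unmount first so s3fs does not race with the removal:
# fusermount -u /mnt/s3/deploy   # uncomment on a real mount

# Remove the cached data (and any stat cache directories next to it).
rm -rf "$CACHE_DIR/$BUCKET"

# Verify the cache is gone before remounting.
test ! -d "$CACHE_DIR/$BUCKET" && echo "cache cleared"
```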
@andresilva commented on GitHub (Jul 25, 2016):
@ggtakec Indeed, if I remove the cache and restart s3fs, it works (although the bug is triggered again once I re-upload something to S3). By the way, this is how I'm mounting:
s3fs -o allow_other,umask=022,use_cache=/tmp/s3mnt-deploy *********** /mnt/s3/deploy
@ggtakec commented on GitHub (Dec 4, 2016):
@andrebeat @JD557 I'm sorry for my late reply.
I merged code into the master branch (#511) to fix the #435 bug, which is very similar to this issue.
Please try the latest code in the master branch.
I'm going to close this issue, but if the problem continues, please reopen this issue or post a new one.
Thanks in advance for your help.