mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #406] Problem access files in s3 uploaded through the S3 console #215
Originally created by @andrewrutter on GitHub (Apr 29, 2016).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/406
I am trying to use s3fs to handle files uploaded to an S3 bucket by a different process - currently they are being manually uploaded. I can see the files listed in the mounted directory, correctly timestamped and with the correct size. But opening the files, either in code or with vi or nano, shows only garbage (nulls) such as ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
The files are public, and if I curl the S3 link I do see content, so the data is available and there do not seem to be permission issues over HTTP.
I am able to create a new file in this directory; its content saves fine and I can view it in the S3 explorer.
@gaul commented on GitHub (Apr 29, 2016):
Can you provide a minimized test case, e.g., s3cmd put file; s3fs cat file? Also which version of s3fs are you using?
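The minimized test case @gaul asks for could be sketched roughly as follows (a hedged sketch: the bucket name, mount point, and file name are placeholders, not taken from the thread; `passwd_file` is a standard s3fs option):

```shell
# Upload an object out-of-band, i.e. NOT through s3fs (s3cmd here,
# standing in for the S3 console upload described in the issue):
s3cmd put testfile.txt s3://my-bucket/testfile.txt

# Mount the same bucket with s3fs:
mkdir -p /mnt/s3
s3fs my-bucket /mnt/s3 -o passwd_file=${HOME}/.passwd-s3fs

# Read the object back through the mount; NUL bytes (^@) here
# instead of the real content would reproduce the report:
cat /mnt/s3/testfile.txt
```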
@ggtakec commented on GitHub (May 14, 2016):
@andrewrutter What s3fs version do you use?
After v1.79 we fixed some bugs in the local cache used by s3fs.
So you should use the latest code from the master branch if you are running v1.79 or older.
If you are already using the latest code, please check the following:
You can find a bucket-name directory under the use_cache path.
You can find a ..stat file under the use_cache path.
If you overwrite an existing file on S3 from the S3 console while the bucket is mounted in s3fs, this problem can occur.
In particular, it can occur when you overwrite the file with a smaller one.
I think this happens because the file size recorded in the stat cache differs from the size in the file cache.
In that case, please try setting the max_stat_cache_size option to 0 (= no stat cache).
Thanks in advance for your assistance.
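The workaround described above amounts to a mount invocation like the following (a sketch: the bucket name and mount point are illustrative; `max_stat_cache_size` and `use_cache` are real s3fs options):

```shell
# Disable the stat cache entirely, so file metadata (including size)
# is re-fetched from S3 on each lookup rather than served from a
# possibly stale cache entry:
s3fs my-bucket /mnt/s3 -o max_stat_cache_size=0

# Note: the local file cache is only active when use_cache is given,
# e.g. -o use_cache=/tmp/s3fs-cache; omitting it avoids the file
# cache that the fixed bug affected.
```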
@andrewrutter commented on GitHub (May 15, 2016):
I disabled the cache completely and went with very basic options, which resolved the problem. Caching is not needed in our current use case, since we are using S3 as a transactional store. It is entirely possible that the change you mention above would have resolved our issue.
@ggtakec commented on GitHub (May 29, 2016):
Hi, @andrewrutter
When you upload an object (file), s3fs creates a temporary local file to hold it.
If the s3fs cache is enabled, this temporary file is kept as a permanent cache entry.
There was a bug in how this cache file was created, and we fixed it after the v1.79 release.
I think the problem you saw when uploading a file through s3fs may have been caused by this bug.
Since the problem is no longer appearing, I believe it was indeed caused by this bug.
I'll close this issue, but if your problem recurs, please reopen it.
Thanks in advance for your assistance.