Mirror of https://github.com/s3fs-fuse/s3fs-fuse.git (synced 2026-04-25 13:26:00 +03:00)
[GH-ISSUE #2120] File content lost after write failure #1080
Originally created by @adamqqqplay on GitHub (Feb 22, 2023).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2120
When the user's IAM role does not have permission to write to the S3 bucket, a file loses its original data after a failed overwrite.
Additional Information

- Version of s3fs being used (`s3fs --version`): V1.91 (commit:f8a825e)
- Version of fuse being used (`pkg-config --modversion fuse`, `rpm -qi fuse`, or `dpkg -s fuse`): 2.9.9
- Kernel information (`uname -r`): 5.15.0-43-generic
- GNU/Linux distribution, if applicable (`cat /etc/os-release`): PRETTY_NAME="Ubuntu 22.04.1 LTS"
- How to run s3fs, if applicable: command line
- s3fs syslog messages (`grep s3fs /var/log/syslog`, `journalctl | grep s3fs`, or s3fs outputs): not provided

Details about issue
In this case, the user's IAM role has only the AmazonS3ReadOnlyAccess permission, which should allow only mounting and reading the contents of the bucket. In fact, however, the root user has read and write permissions on the successfully mounted directory.
Reproduce steps:
1. `cat ./test.txt`: the file's data is read correctly.
2. `echo qweqwe > ./test.txt`: the data cannot be written and an error is returned.
3. `cat ./test.txt`: the file's data can no longer be read; it may be a cache error.
Expected behavior:
The original data should not be lost after the write fails. Alternatively, root should not have write permission on the file when mounting.
However, if you change the `>` in step 2 to `>>`, the correct data can still be read in step 3, but the user does not get the Operation not permitted error in step 2.

@adamqqqplay commented on GitHub (Feb 27, 2023):
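A plausible explanation for the difference between `>` and `>>` (my reading, not confirmed in this thread): the shell's `>` redirection opens the file with O_TRUNC, so the locally cached copy is truncated at open time, before any write or upload is attempted, while `>>` opens with O_APPEND and never truncates. The open-time truncation can be demonstrated on any ordinary filesystem, with no s3fs involved:

```python
import os
import tempfile

# Create a file with some original data.
fd, path = tempfile.mkstemp()
os.write(fd, b"original data")
os.close(fd)

# Shell '>' redirection is roughly open(path, O_WRONLY | O_CREAT | O_TRUNC):
# truncation happens at open time, before any write is attempted.
fd = os.open(path, os.O_WRONLY | os.O_TRUNC)
os.close(fd)  # no write ever happens, yet the old data is already gone

with open(path, "rb") as f:
    print(f.read())  # b'' -- the original contents were lost at open()

# Shell '>>' is O_WRONLY | O_APPEND instead: no truncation occurs,
# so the existing data survives even if the subsequent write fails.
os.remove(path)
```

On s3fs this would mean the truncated state lands in the local cache, and once the upload is rejected there is no copy of the original data left locally.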
@ggtakec @gaul Hi, could you take a look please?
@ggtakec commented on GitHub (Mar 12, 2023):
@adamqqqplay Sorry for my late reply.
I've tried a few patterns, but it's been hard to reproduce exactly the same result as yours.
However, when I manually changed the state of the local cache (to a wrong state), I saw almost the same behavior.
Currently, s3fs does not clear the local cache when an error occurs during the upload.
Below are my predictions:
When reading after an error, if the cache state (the cached object file itself and that cache's state file) is consistent, I think s3fs reads from the local cache without reloading.
Your output appears to show the file contents, but you are probably reading 6 bytes filled with 0x00 data. (These become hidden characters on the console screen and look like nothing is displayed.)
In any case, the local cache (which is in an edited state) cannot be repaired if the upload fails, so we plan to fix s3fs to clear the local cache.
I have submitted a corresponding PR #2127 for it, so please try it and confirm if you can.
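The fix described above (clear the local cache when the upload fails, so later reads reload from the server) can be sketched in simplified form. This is a minimal illustration with hypothetical names, not s3fs's actual code, which is C++ and manages cache state files as well:

```python
def flush(path, cache, upload):
    """Upload the locally cached copy of `path`. On failure, drop the
    now-inconsistent cache entry so a later read reloads from the server
    instead of returning stale or truncated local data."""
    try:
        upload(path, cache[path])
    except Exception:
        # The cached data no longer matches the object on the server
        # (e.g. it was truncated or edited locally), so discard it.
        cache.pop(path, None)
        raise

# Usage with a fake uploader that always fails, as with read-only credentials:
cache = {"/test.txt": b""}  # local copy already truncated by the open()

def denied(path, data):
    raise PermissionError("Operation not permitted")

try:
    flush("/test.txt", cache, denied)
except PermissionError:
    pass

print("/test.txt" in cache)  # False -- the stale cache entry was cleared
```

The key design point is that the error is still propagated to the caller; invalidating the cache only ensures the next read is served from the authoritative copy in S3.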
@adamqqqplay commented on GitHub (Mar 13, 2023):
@ggtakec Thanks, I believe your fix should do the trick, I'll test it right away.
In addition, I proposed another fix internally, which also deletes the incorrect cache after the upload fails. The code is in my own repo: https://github.com/adamqqqplay/s3fs-fuse/tree/fix-cache-inconsistency ; you could take a look when you are free. However, I think your fix is more general.
@adamqqqplay commented on GitHub (Mar 13, 2023):
@ggtakec Hi, I just tested your commit and it behaved the same way as ours did; both fix our previous issue. Thanks a lot!
@ggtakec commented on GitHub (Mar 13, 2023):
@adamqqqplay Thank you for your help.
I merged PR #2127.
If you still have a problem, please reopen or post a new issue.