mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 05:16:00 +03:00
[GH-ISSUE #2583] Dell ECS storage 409 (ObjectUnderRetention) response issue #1241
Originally created by @santea on GitHub (Oct 31, 2024).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2583
Additional Information

- Version of s3fs being used (`s3fs --version`): V1.94
- Version of fuse being used (`pkg-config --modversion fuse`, `rpm -qi fuse`, or `dpkg -s fuse`): 2.9.7
- Kernel information (`uname -r`): 4.18.0-372.9.1.el8.x86_64
- GNU/Linux Distribution, if applicable (`cat /etc/os-release`):

      NAME="Red Hat Enterprise Linux"
      VERSION="8.6 (Ootpa)"
      ID="rhel"
      ID_LIKE="fedora"
      VERSION_ID="8.6"
      PLATFORM_ID="platform:el8"
      PRETTY_NAME="Red Hat Enterprise Linux 8.6 (Ootpa)"
      ANSI_COLOR="0;31"
      CPE_NAME="cpe:/o:redhat:enterprise_linux:8::baseos"
      HOME_URL="https://www.redhat.com/"
      DOCUMENTATION_URL="https://access.redhat.com/documentation/red_hat_enterprise_linux/8/"
      BUG_REPORT_URL="https://bugzilla.redhat.com/"
      REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
      REDHAT_BUGZILLA_PRODUCT_VERSION=8.6
      REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
      REDHAT_SUPPORT_PRODUCT_VERSION="8.6"

- How to run s3fs, if applicable:
  - [ ] command line: `s3fs {bucketname} /appdata/appuser/s3 -o passwd_file=${HOME}/.passwd-s3fs -o dbglevel=info -o use_path_request_style -o url=http://{dell ecs storage ip:port}/`
  - [X] /etc/fstab
- s3fs syslog messages (`grep s3fs /var/log/syslog`, `journalctl | grep s3fs`, or s3fs outputs)

Details about issue
After mounting, the error seems to occur because a zero-byte file is created on S3 first and the actual content is uploaded afterward. Because of ECS retention (maybe...), the second upload is rejected and a 409 response is returned. If I write the file again, it uploads normally. It looks like the two uploads happen back to back, so the second one arrives while the object is still under 'retention'. Can't s3fs upload the file in a single request?
@gaul commented on GitHub (Nov 3, 2024):
I don't think we have ever tested object locking before, but I don't understand this series of operations. The documentation says:

So I would expect s3fs to allow updating an object version by creating a new version (not tested, speculation). Does the ECS behavior differ?

But s3fs should not create two objects in your example. We removed the zero-byte temporary object in #1013, so I suspect something else is going on. Can you run this again with `-o curldbg` to see the series of requests sent to the server?

Lastly, I don't think 409 ObjectUnderRetention should translate to `EIO`. Perhaps `EACCES` or `EPERM` would be more appropriate?
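The errno suggestion above could look something like the following. This is a minimal sketch, not s3fs's actual error-handling code: `s3_error_to_errno` is a hypothetical helper, and only the idea of special-casing 409 ObjectUnderRetention comes from this thread; the other mappings are common conventions for S3 error responses.

```cpp
#include <cerrno>
#include <string>

// Hypothetical helper (not s3fs's actual code): map an S3 HTTP status
// and error code to a POSIX errno. Today s3fs reports most failures as
// EIO; the suggestion here is that a retention refusal is really a
// permission problem, so EACCES describes it better than EIO.
static int s3_error_to_errno(long http_status, const std::string& s3_code)
{
    switch (http_status) {
    case 403:
        return EACCES;                     // AccessDenied
    case 404:
        return ENOENT;                     // NoSuchKey / NoSuchBucket
    case 409:
        if (s3_code == "ObjectUnderRetention") {
            return EACCES;                 // object-lock retention refusal
        }
        return EIO;                        // other conflicts: generic fallback
    default:
        return EIO;                        // unchanged behavior for the rest
    }
}
```

The S3 error code string (e.g. `ObjectUnderRetention`) comes from the XML error body, so a mapping like this needs the response body parsed, not just the HTTP status line.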