[GH-ISSUE #574] Can s3fs delete versions from a folder with Versioning enabled? #326

Closed
opened 2026-03-04 01:44:25 +03:00 by kerem · 4 comments
Owner

Originally created by @ngbranitsky on GitHub (May 3, 2017).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/574

Additional Information

The following information is very important in order for us to help you. Omitting these details may delay your support request, or it may receive no attention at all.

  • Version of s3fs being used (s3fs --version)

  • V1.79(commit:8162d49) with OpenSSL

  • Version of fuse being used (pkg-config --modversion fuse)

  • 2.9.2

  • System information (uname -a)

  • Linux ip-10-242-1-196 3.10.0-229.el7.x86_64 #1 SMP Thu Jan 29 18:37:38 EST 2015 x86_64 x86_64 x86_64 GNU/Linux

  • Distro (cat /etc/issue)

  • Red Hat Enterprise Linux Server release 7.1 (Maipo)

  • s3fs command line used (if applicable)

  • /usr/bin/s3fs xxdr /repositories /mnt/repositories -o nonempty,allow_other,uid=306,gid=306,endpoint=us-west-2a,passwd_file=/etc/CONFIG/xxdr/passwd-s3fs

  • /etc/fstab entry (if applicable): none provided

  • s3fs syslog messages (grep s3fs /var/log/syslog, or s3fs outputs): none provided
    If you run s3fs with the dbglevel and curldbg options, you can get detailed debug messages.

Details about issue

Over 1 TB of files disappeared this morning from a source S3 bucket in us-east-1 that has Cross-Region Replication enabled to a target S3 bucket in us-west-2. Both the source and target buckets have Versioning enabled; Cross-Region Replication cannot be configured unless the target bucket has Versioning enabled.
The us-east-1 bucket is mounted with s3fs on data-center servers in the East and on EC2 instances in us-west-2.
AWS Support says that deleting versions requires the DeleteObjectVersions API call.
Does s3fs 1.79 even know how to call DeleteObjectVersions?

kerem closed this issue 2026-03-04 01:44:25 +03:00

@ggtakec commented on GitHub (May 4, 2017):

@ngbranitsky
s3fs is not affected by S3 bucket versioning.
That is, s3fs does not know whether versioning is enabled on the bucket, and it does not operate on versions at all.

I think a function in s3fs to delete version history directly is unnecessary.
The history also serves to recover data after accidental misoperation by clients such as s3fs, so I hope you will not delete it from s3fs but work from the console instead.

(If you really need it, I think it should be a separate command or option, like the "-u" option that displays and deletes the parts of incomplete multipart uploads.)
Regards,

Author
Owner

@sqlbot commented on GitHub (May 4, 2017):

> Does s3fs 1.79 even know how to call DeleteObjectVersions

@ngbranitsky it sounds like your specific question is "is it even possible that s3fs could have somehow, inadvertently, deleted these object versions?"

The answer there is no. Object versioning in S3 was designed in such a way that it is fully compatible with utilities -- like s3fs -- that remain entirely unaware of versioning. As @ggtakec correctly indicates, s3fs does not have -- and should not have -- any implementation of any versioning-related operations. It only sees and can only touch the "current" version of anything. Deleting the current version of an object using the ordinary DELETE operation creates a delete marker in the bucket, which hides the previous version from tools like s3fs... but it remains in the bucket. If the previous versions are indeed missing, s3fs should not have been involved in removing them, since it lacks the capability of removing objects by version-id.
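The delete-marker behavior described above can be illustrated with a toy in-memory model (this is a sketch of the semantics, not the AWS SDK or the s3fs code): a plain DELETE, which is all a version-unaware client like s3fs ever issues, appends a delete marker that hides the object, while removing data for real requires naming a specific version-id.

```python
# Toy model of S3 versioning semantics (NOT the AWS SDK): a plain DELETE
# adds a delete marker that hides the object from version-unaware clients
# such as s3fs, while older versions survive in the bucket. Permanent
# removal requires naming a version-id, which s3fs never does.
import itertools

class VersionedBucket:
    def __init__(self):
        self.versions = {}           # key -> list of (version_id, payload)
        self._ids = itertools.count(1)

    def put(self, key, data):
        vid = f"v{next(self._ids)}"
        self.versions.setdefault(key, []).append((vid, data))
        return vid

    def get(self, key):
        """What a version-unaware client (e.g. s3fs) sees: only the latest entry."""
        history = self.versions.get(key, [])
        if not history:
            return None
        _, data = history[-1]
        return data                  # None means the latest entry is a delete marker

    def delete(self, key):
        """Plain DELETE: appends a delete marker, keeps all old versions."""
        self.versions.setdefault(key, []).append((f"v{next(self._ids)}", None))

    def delete_version(self, key, version_id):
        """Permanent removal; only possible when the caller knows the version-id."""
        self.versions[key] = [
            (vid, d) for vid, d in self.versions.get(key, []) if vid != version_id
        ]

bucket = VersionedBucket()
v1 = bucket.put("report.csv", b"old")
v2 = bucket.put("report.csv", b"new")
bucket.delete("report.csv")                     # what s3fs does on unlink()
assert bucket.get("report.csv") is None         # the object looks gone...
assert len(bucket.versions["report.csv"]) == 3  # ...but both versions + marker remain
bucket.delete_version("report.csv", v2)         # only a version-id removes data for real
```

This is why 1 TB "disappearing" through s3fs alone would leave the previous versions (plus delete markers) intact in a versioned bucket.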


@ngbranitsky commented on GitHub (May 4, 2017):

As I suspected.
Thank you.
Much to my relief, the data was never actually uploaded!
The data was finally uploaded early this morning. They are currently running a script to check that all the files were successfully uploaded. The script does a test -f on the s3fs target for each file on the source. Metadata slowness means this script is taking a long time.
I am wondering if there is a faster way?
The S3 Bucket Inventory says it could be 48 hours before the first report is available.

Of the first 15K files, about 200 failed to upload during the first massive transfer.
Manual cp commands have no problem transferring these files individually. Are the initial transfer errors due to "eventual consistency" issues?

Content by Norman. Spelling by iPhone.



@ggtakec commented on GitHub (Mar 30, 2019):

We have kept this issue open for a long time.
Is this problem still occurring?
We have released version 1.86, which fixes several bugs.
Please use the latest version.
I will close this, but if the problem persists, please reopen it or post a new issue.
