mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #380] Share same bucket s3fs v1.61 with current master #198
Originally created by @chrisgo on GitHub (Mar 26, 2016).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/380
We have some old servers (Debian 7) using s3fs v1.61, mounted via /etc/fstab.
On the old server with the 1.61 code, we did the following as a non-root user and checked the permissions:
On the Amazon S3 Console (https://aws.amazon.com), everything is there. In the 1.61 version, there is a "duplicate" file (in this case "old") carrying most of the properties of the "old" folder, sitting beside it; the "old" folder itself only has a Details section.
Obviously this is super old code, and the goal is to use s3fs "master" on the new servers. However, every time we tried to mount a bucket on a server using an s3fs version > 1.61, we could not get the mounts to work together properly.
On the new server (Debian 8 with s3fs "master"), we are able to mount it with:
Non-root user
On the AWS S3 Console, I noticed the new "master" code does NOT create a secondary entry in the file system (so there is no metadata); the "new" folder looks the same.
Things that don't work:
I am thinking this is because of the missing second file?
Are there any settings we can apply on the old (1.61) and new (master) servers so that the directories and files can co-exist with each other?
Some settings in /etc/fstab involving gid, uid, and umask, perhaps, but I am not really sure how to proceed.
Or am I pretty much out of luck? What would be the proper way to migrate from a bucket mounted/created with the old s3fs v1.61 to a new one while keeping them both operating?
@ggtakec commented on GitHub (Apr 10, 2016):
@chrisgo
s3fs older than 1.62 creates the directory object using the bare directory name ("dir").
s3fs 1.62 and later instead creates the object with a trailing slash ("dir/").
The latest s3fs can handle the "dir" type object for compatibility, but old s3fs cannot handle "dir/".
This problem is most likely due to this handling of the directory object, since s3fs up to 1.61 is not forward-compatible with later versions.
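To make the difference concrete, here is a minimal sketch (plain Python over a list of key names, not the real S3 API; the key names are made up) of how the two marker styles could be told apart in a bucket listing:

```python
def classify_directory_markers(keys):
    """Given all object keys in a bucket, guess which keys are directory
    markers in each style: a bare key "dir" that has children under "dir/"
    is old-style (s3fs <= 1.61); a key ending in "/" is new-style (>= 1.62)."""
    key_set = set(keys)
    old_style = set()
    new_style = set()
    for k in key_set:
        if k.endswith("/"):
            new_style.add(k)   # "dir/" marker written by s3fs 1.62 and later
        elif any(other.startswith(k + "/") for other in key_set):
            old_style.add(k)   # "dir" marker written by s3fs 1.61 and earlier
    return old_style, new_style

# Hypothetical bucket listing mixing both conventions:
keys = ["old", "old/file.txt", "new/", "new/file.txt", "plain.txt"]
print(classify_directory_markers(keys))  # → ({'old'}, {'new/'})
```

This only models the naming convention; in a real bucket you would feed it the keys returned by a full listing.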
One way to solve this is to rename each "dir" object to "dir/" with a tool such as s3cmd
(without changing the object attributes).
But I do not recommend this method.
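As an illustration of that renaming step only (this is pure key manipulation over a listing; it does not touch S3, and which keys are directory markers is an assumption the caller must supply, e.g. from an inspection of the bucket):

```python
def build_rename_plan(keys, directory_markers):
    """Map old-style "dir" marker keys to their new-style "dir/" names.
    `directory_markers` is the set of keys known to be s3fs <= 1.61
    directory objects; all other keys are left untouched."""
    plan = {}
    for k in keys:
        if k in directory_markers and not k.endswith("/"):
            plan[k] = k + "/"  # real migration: copy "dir" -> "dir/", then delete "dir"
    return plan

keys = ["old", "old/file.txt", "plain.txt"]
print(build_rename_plan(keys, {"old"}))  # → {'old': 'old/'}
```

Executing the plan (server-side copy preserving metadata, then delete) would be done with s3cmd or an S3 SDK, which is exactly the part the comment above warns is risky.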
If you can, I recommend upgrading s3fs from 1.61 to the latest version.
Thanks in advance for your assistance.
@chrisgo commented on GitHub (May 5, 2016):
@ggtakec
Thanks for your response. The main issue I have is that if I upgrade to the latest s3fs, I will not be able to write to any existing folders ... more thinking