mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #408] Running 2 versions of s3fs on same server #218
Originally created by @chrisgo on GitHub (May 5, 2016).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/408
Using fstab, Debian 8.
Due to some compatibility issues (currently running 1.61), I need to run 2 versions of s3fs - the 1.61 version and the "latest" s3fs. Is this possible? How do I use this with fstab?
Compatibility issues here: https://github.com/s3fs-fuse/s3fs-fuse/issues/380
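A minimal sketch of what such an fstab could look like, assuming the 1.61 binary has been renamed to s3fs161 (so the two builds can coexist in PATH) and that bucket name, mount points, and cache paths are placeholders; each mount gets its own use_cache directory so the two versions never share a cache:

```
# old 1.61 binary: the "program#source" form makes mount.fuse exec "s3fs161"
s3fs161#mybucket  /mnt/s3-old  fuse       _netdev,allow_other,use_cache=/var/cache/s3fs-old  0 0

# current s3fs binary, using the fuse.s3fs fstab type
mybucket          /mnt/s3-new  fuse.s3fs  _netdev,allow_other,use_cache=/var/cache/s3fs-new  0 0
```

This is only an illustration of the idea, not a tested configuration; credentials and other options would still need to match each version's expectations.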
@ggtakec commented on GitHub (May 14, 2016):
@chrisgo
I think you can run both s3fs versions at once. If you use the local cache, you will need to give each version its own use_cache directory, and you should also watch out for other conflicting options.
I think you should try the following procedure.
However, what are the compatibility issues you refer to?
s3fs maintains backward compatibility as much as possible, so please tell us if you have a problem with the latest version.
I think that there may be other ways.
Thanks in advance for your help.
@chrisgo commented on GitHub (May 16, 2016):
@ggtakec thank you for your response
The main issue that I have found is that the old version (1.61) creates extra "0 byte" files when you create a folder structure inside the bucket
So, for example:
bucket-name/subfolder/file.txt would get created as 2 objects instead of just file.txt.
In the Properties tab (AWS console), the subfolder entry above will be called "Object: subfolder". In the new s3fs (and other tools), subfolder will instead show as "Folder: subfolder" and file.txt as "File: file.txt".
So my current problem is to find a way to migrate these objects to the new s3fs binary. However, since all my current servers (fewer than 20, all mounting the same bucket) are running the old s3fs 1.61, I have to do a bunch of testing to see what happens if I just upgrade the binary -- will everything work? The last time I tested it (maybe a year ago), some of the permissions got strange (code stopped being able to write to the mounts), so I had to back out and keep using the old 1.61.
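The two directory-marker conventions described here can be sketched with a small helper; the keys are illustrative, and the version labels are assumptions based on this thread (1.61 wrote markers without a trailing slash, newer s3fs writes them with one):

```shell
# Illustrative only: guess which s3fs generation created a 0-byte
# directory-marker object, based solely on its S3 key.
classify_marker() {
  case "$1" in
    */) echo "new-style directory marker (newer s3fs)" ;;
    *)  echo "old-style directory marker (s3fs 1.61)" ;;
  esac
}

classify_marker "subfolder"    # key without trailing slash: old style
classify_marker "subfolder/"   # key ending in '/': new style
```

Listing the raw object keys (e.g. via the AWS console Properties tab, as above) and checking for a trailing slash is enough to tell which tool created each marker.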
Thanks,
Chris
@ggtakec commented on GitHub (May 29, 2016):
@chrisgo
Since s3fs is a file system, it needs permissions on each directory.
For this reason, s3fs creates a 0-byte object for each directory and stores the permissions on that directory object.
On the other hand, the S3 console (and other tools) does not create an object for a directory.
Old s3fs (1.61) uses "dir" as the directory object name, but new s3fs uses "dir/".
Because of this difference, old s3fs (1.61) cannot recognize a "dir/" object as a directory.
However, new s3fs can recognize a "dir" object as a directory (it keeps backward compatibility).
To migrate to the new s3fs, I think you should take the following steps. Please note:
If you change the permissions of a directory object ("dir") while using new s3fs, the object name is changed from "dir" to "dir/".
A directory object ("dir/") created by new s3fs cannot be recognized as a directory by old s3fs (1.61).
Until all of your s3fs installations are upgraded, I recommend that you do not create directories and do not change existing directory permissions.
After updating, if you continue to use s3fs, I recommend changing the permissions of each directory so that the object is renamed from "dir" to "dir/".
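The permission-change step could be scripted along these lines; this is a hedged sketch, not a tested migration tool. It assumes (per the comment above) that any chmod through a new-s3fs mount rewrites the marker object under the "dir/" name, and that the mount point path is a placeholder. Run it only after every host has been upgraded, since old s3fs 1.61 cannot see the renamed "dir/" markers:

```shell
# Re-apply each directory's existing permission bits through a new-s3fs
# mount. The mode does not change, but the chmod should make s3fs rewrite
# the backing marker object, renaming "dir" to "dir/".
rename_dir_markers() {
  find "$1" -type d -exec sh -c '
    for d; do
      chmod "$(stat -c %a "$d")" "$d"
    done' sh {} +
}

# Example (hypothetical mount point):
# rename_dir_markers /mnt/s3bucket
```

Testing this on a scratch bucket first would be prudent, given the permission problems Chris saw in earlier upgrade attempts.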
I hope that your upgrade goes well.
Regards,
@ggtakec commented on GitHub (Mar 30, 2019):
We kept this issue open for a long time.
I will close this, but if the problem persists, please reopen or post a new issue.