mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #80] Operation not permitted on subdirectory #45
Originally created by @ad-m on GitHub (Nov 11, 2014).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/80
Hello,
I would like to connect to the S3-compatible API of e24cloud, but I am unable to list any subdirectories or read any files; I can only list the root directory.
I ran s3fs:
And did some activity:
@ggtakec commented on GitHub (Nov 16, 2014):
s3fs uses several HTTP headers (x-amz-meta-uid, x-amz-meta-gid, x-amz-meta-mode, x-amz-meta-mtime) to store file/directory object attributes.
Your S3 objects do not have these headers, so s3fs displays their permissions as 0000.
I suggest you use the umask option, which grants a given set of permissions to all files and directories.
If you can touch (or otherwise modify) the objects through s3fs, they will have these HTTP headers afterwards.
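As a hedged sketch of that suggestion, a mount using the umask option might look like the following. The bucket name, mountpoint, credentials file, and endpoint URL are placeholders, not values from this thread; use_path_request_style is often needed for non-AWS, S3-compatible endpoints:

```shell
# Mount with umask so objects lacking the x-amz-meta-mode header still get
# usable permissions (e.g. 0777 & ~0022 = 0755 for directories).
# All names below are placeholders -- adjust them for your setup.
s3fs mybucket /mnt/s3bucket \
  -o passwd_file=${HOME}/.passwd-s3fs \
  -o url=https://s3.example.com \
  -o use_path_request_style \
  -o umask=0022
```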
@mattzuba commented on GitHub (Nov 20, 2014):
So is it a fair assumption that an existing bucket with lots of files that has never been used with s3fs would need chmod and chown run on the entire bucket, once mounted, before it could really be used?
@ad-m commented on GitHub (Feb 3, 2015):
I connected to e24cloud and everything now seems to work. I created a new empty bucket and moved the data into it instead of connecting to the old bucket.
@ggtakec commented on GitHub (Mar 4, 2015):
@mattzuba Sorry for the late reply.
As long as a directory (or file) does not have attributes that s3fs can recognize (the x-amz-meta-mode headers, etc.), s3fs must treat the permissions of that directory (file) as 000.
I think this is important to prevent unauthorized access through s3fs.
So you can avoid the problem either by using the umask option as an exception, or by setting these headers on the existing directories (files).
A tool that sets the headers s3fs understands on an existing directory is located at test/mergedir.sh.
This tool changes attributes (adds the headers) on directory objects.
Please check it and try to use it.
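As a hedged sketch, an invocation might look like the following; check the script's built-in usage first, since its options may differ between versions, and the mountpoint path below is a placeholder:

```shell
# Run from a checkout of the s3fs-fuse repository, against an active
# s3fs mountpoint. The -h flag prints the script's actual usage;
# /mnt/s3bucket is a placeholder path.
bash test/mergedir.sh -h
bash test/mergedir.sh /mnt/s3bucket
```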
Thanks in advance for your assistance.
@et304383 commented on GitHub (Dec 11, 2015):
Can you provide detailed instructions on how to get around this issue? What does one have to pass to the s3fs mount call to ensure existing directories can be navigated with s3fs?
I have a client that needs to do this very soon (mount existing S3 bucket) so we need a solution as soon as possible. Thanks in advance.
@ggtakec commented on GitHub (Dec 20, 2015):
@eric-tucker
When s3fs has only just been started, it cannot write the appropriate attributes onto each object.
When s3fs manipulates a target directory (or file), the new attributes (HTTP headers) are written by that operation.
If no object corresponding to a directory exists, an object for that directory is created.
The correct workaround when using s3fs is: after mounting, detect the files (or directories) with permission 000 and update them with correct attributes.
You will need to do this sequentially, starting from the mount point.
Currently, s3fs does not support updating file attributes automatically.
If s3fs supported that, there would be many challenges, including traffic, performance, and access control.
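The sequential fix-up described above can be sketched as a walk over the mountpoint. The directory here is a local scratch directory standing in for a real s3fs mount so the sketch is self-contained, and the chosen modes (755/644) are illustrative, not prescribed by this thread:

```shell
# Walk the tree and give sane modes to everything that shows up as 000.
# MOUNT stands in for a real s3fs mountpoint; a scratch directory is used
# here so the sketch is self-contained.
MOUNT=$(mktemp -d)

# Simulate objects that s3fs would report with mode 000
# (i.e. objects lacking the x-amz-meta-mode header).
mkdir -p "$MOUNT/logs"
touch "$MOUNT/logs/app.log"
chmod 000 "$MOUNT/logs/app.log" "$MOUNT/logs"

# Fix directories first; "-exec ... \;" applies chmod to each directory
# before find descends into it, so the traversal can continue.
find "$MOUNT" -type d -perm 0000 -exec chmod 755 {} \;
# Then fix plain files.
find "$MOUNT" -type f -perm 0000 -exec chmod 644 {} \;
```

Through a real s3fs mount, each chmod rewrites the object's x-amz-meta-mode header, which is exactly the "update the correct attributes" step described above.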
Regards,
@et304383 commented on GitHub (Dec 20, 2015):
So basically treat it as a local file system where everything is 000 and set the mode ourselves to 755 or something similar?
@ggtakec commented on GitHub (Dec 20, 2015):
If you cannot update the objects, s3fs can use the value of the umask option to give permission to directories (or files) that have none.
Because those directories (files) also lack a UID and GID, you should specify the uid and gid options as well.
These options take effect when an object does not have the default attributes (uid/gid/mode).
And for updating directory permissions, another way is to use test/mergedir.sh.
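For a persistent mount, the same umask/uid/gid combination can be written as an /etc/fstab entry; this is a hedged config sketch in which the bucket name, mountpoint, ids, and credentials path are placeholders:

```
# /etc/fstab entry -- placeholder values throughout.
mybucket /mnt/s3bucket fuse.s3fs _netdev,allow_other,umask=0022,uid=1000,gid=1000,passwd_file=/etc/passwd-s3fs 0 0
```

Note that allow_other requires user_allow_other to be enabled in /etc/fuse.conf when mounting as a non-root user.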
Regards,
@ggtakec commented on GitHub (Jan 17, 2016):
I added an entry to the FAQ (https://github.com/s3fs-fuse/s3fs-fuse/wiki/FAQ) about objects with no permissions.
Regards,