mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 05:16:00 +03:00
[GH-ISSUE #537] Issue with access Policy (or maybe my config error?) #304
Originally created by @lyquix-owner on GitHub (Feb 23, 2017).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/537
Additional Information
The following information is very important in helping us help you. Omitting these details may delay your support request or leave it unanswered.
Version of s3fs being used (s3fs --version)
V1.80(commit:d40da2c)
Version of fuse being used (pkg-config --modversion fuse)
2.9.2
System information (uname -a)
Linux lyquix 4.8.3-x86_64-linode76 #1 SMP Thu Oct 20 19:05:39 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
Distro (cat /etc/issue)
Ubuntu 14.04.5 LTS \n \l
s3fs command line used (if applicable)
s3fs lyquix-s3backup /s3backup -o passwd_file=/etc/passwd-s3fs
/etc/fstab entry (if applicable):
lyquix-s3backup /s3backup fuse.s3fs _netdev,allow_other 0 0
s3fs syslog messages (grep s3fs /var/log/syslog, or s3fs output)
If you execute s3fs with the dbglevel and curldbg options, you can get detailed debug messages.
No error shown when mounting
No errors shown in /var/log/syslog
Details about issue
I have one AWS account with multiple S3 buckets. I am trying to set up separate users that each have access to only one bucket, and I have created a separate policy for each user. When using the default "AmazonS3FullAccess" policy, s3fs works perfectly:
However, when I specify the bucket, as follows:
I get the error "Operation not permitted" when I try any read/write/delete file operation:
and when I try a list I get "No such file or directory" errors:
When I test this policy in the Amazon Policy Simulator, I get no errors and all operations are reported as allowed.
I am new to s3fs so maybe this is a dumb configuration error on my part.
Thank you for your help.
@ggtakec commented on GitHub (Apr 2, 2017):
@lyquix-owner
Your access policy seems fine.
First of all, we need to determine whether this problem is caused by s3fs or by the policy settings.
To that end, please run s3fs with debug output enabled (you can use the dbglevel and curldbg options; see the man page) and perform a simple operation.
That will cause s3fs to output a lot of logs.
I think these logs will give us hints for solving this problem.
Thanks in advance for your help.
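The debug options ggtakec refers to can be passed at mount time. A minimal sketch, assuming the bucket and mount point from the original report (the dbglevel value and running in the foreground are illustrative choices, not from the thread):

```shell
# Unmount first if the bucket is already mounted (path from the report).
fusermount -u /s3backup

# Remount with verbose s3fs and libcurl debug output.
# dbglevel=info raises the log verbosity; curldbg dumps the HTTP
# exchanges with S3. -f keeps s3fs in the foreground so messages
# go to the terminal instead of syslog.
s3fs lyquix-s3backup /s3backup \
    -o passwd_file=/etc/passwd-s3fs \
    -o dbglevel=info \
    -o curldbg \
    -f
```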
@lyquix-owner commented on GitHub (Apr 3, 2017):
@ggtakec, thank you for your response.
I have mounted the s3fs drive using the debug options provided. Please note that the name of my bucket has changed since last time; it is now lyquix-s3backup-rothman, and the policy file is the following:
Here is a log from the command line:
You can find the extract from syslog here: https://gist.github.com/lyquix-owner/deecec57cd784e9789035a1465994062
Please note that I have redacted all instances of
Authorization header (access credential and signature)
x-amz-content-sha256
x-amz-id-2
@ggtakec commented on GitHub (Apr 9, 2017):
@lyquix-owner
It seems that the "*.tar.gz" files, which were not uploaded by s3fs, do not have any attributes (mode/uid/gid).
s3fs needs the file permission mode, owner, and group to be present (as x-amz-meta-* headers).
I think these files were uploaded by another S3 tool (e.g. s3cmd).
If you can, run s3fs with the uid/gid/umask options; these options set the uid/gid/mode for such files.
Then the "cp" command should work.
Please try it.
Regards,
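ggtakec's suggestion above can be expressed as a mount command. A sketch, assuming the bucket and mount point from the report; the numeric uid/gid (1000) and umask are placeholder values for whichever local user should own the files:

```shell
# Objects uploaded by other tools (e.g. s3cmd) carry no x-amz-meta-*
# mode/uid/gid headers, so tell s3fs what to present for them:
#   uid/gid - owner and group reported for all objects
#   umask   - mask applied to the default permission bits
s3fs lyquix-s3backup-rothman /s3backup \
    -o passwd_file=/etc/passwd-s3fs \
    -o uid=1000,gid=1000,umask=022
```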
@sqlbot commented on GitHub (Apr 9, 2017):
I believe the error actually is in the policy.
"Resource": "arn:aws:s3:::lyquix-s3backup-rothman"That's only permission on the bucket, not the objects in the bucket. Listing objects should work, but fetching metadata would not, and that may be a more likely explanation for the strange directory listing, since my assumption from the messages above is that this bucket's contents were created by s3fs, so the metadata should be there.
I believe a correct policy resource would look more like this:
"Resource": [ "arn:aws:s3:::lyquix-s3backup-rothman", "arn:aws:s3:::lyquix-s3backup-rothman/*" ]@ggtakec commented on GitHub (Apr 9, 2017):
@sqlbot Thank you for your kindness.
@lyquix-owner Please try changing your policy as in the sample.
Thanks in advance for your assistance.
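Putting sqlbot's corrected Resource list into a complete policy document gives something like the following sketch. Only the two ARNs come from the thread; the broad s3:* action is an assumption, since the original policy's action list was not shown:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::lyquix-s3backup-rothman",
        "arn:aws:s3:::lyquix-s3backup-rothman/*"
      ]
    }
  ]
}
```

The bucket ARN (without /*) covers bucket-level calls such as ListBucket, while the /* ARN covers object-level calls such as GetObject and PutObject, which is why both entries are needed.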
@lyquix-owner commented on GitHub (Apr 10, 2017):
Thanks for your help @sqlbot @ggtakec !
I am happy to report that changing the policy as recommended solved the problem.
@ggtakec commented on GitHub (Apr 23, 2017):
@lyquix-owner I'm closing this issue; thanks for your report.
Regards,