mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #333] Permission denied trying to access file in bucket #174
Originally created by @martyychang on GitHub (Jan 13, 2016).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/333
After mounting the bucket with the following command, I'm getting "permission denied" errors when trying to access files in the bucket. For example:
If I look at the permissions on a file through the mount point, I see:
What might be causing this symptom? The command I'm using to mount S3 is shown below.
@ggtakec commented on GitHub (Jan 16, 2016):
@martyychang
Probably, mount/point/anyfile.txt was uploaded by another S3 tool (s3cmd, the S3 console, etc.).
s3fs needs the x-amz-meta-* (mode, uid, gid, mtime) HTTP headers to provide filesystem-like permissions.
The "----------" permission you saw is caused by the object (anyfile.txt) missing the x-amz-meta-mode header, or by there being no object for the "point" directory.
To solve the problem, you need to set the HTTP headers that s3fs requires.
You can re-upload the objects with custom headers using another tool, or you can set the attributes for the objects through s3fs (e.g. chmod 777 mount/point; chmod 666 mount/point/anyfile.txt).
s3fs supports objects uploaded by other tools as much as possible, but at the moment this issue can only be resolved in this way.
Regards,
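To make the header requirement concrete: the x-amz-meta-mode value s3fs stores is the full POSIX st_mode as a decimal number, i.e. the file-type bits OR'd with the permission bits. A minimal sketch of the arithmetic (S_IFREG = 0100000 and S_IFDIR = 040000 are the standard octal type bits):

```shell
# x-amz-meta-mode holds st_mode in decimal: file-type bits plus permission bits.
# Regular file (S_IFREG = 0100000 octal) with 0644 permissions:
echo $(( 0100644 ))   # prints 33188
# Directory (S_IFDIR = 040000 octal) with 0755 permissions:
echo $(( 040755 ))    # prints 16877
```

These decimal values are what you would put in an x-amz-meta-mode header when uploading with another tool.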
@timduly4 commented on GitHub (Aug 6, 2016):
Hi there,
I was having trouble with these permissions and I thought I would give a concrete solution example of uploading a file with boto3 (important: version 1.4; see https://github.com/boto/boto3/issues/389#issuecomment-237610668).
The mode, uid, and gid numbers (33204, 1000, and 1000, respectively) are a bit mysterious to me. I was able to discern that (33204)_10 ~ (664)_8, but can't understand '1000'.
I was able to obtain these numbers by reverse engineering: I ran `chmod` and `chown` on the s3fs-mounted file to my liking (e.g., 664 and ubuntu), then looked at the resulting metadata.
Hope this helps!
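The mystery numbers above decode cleanly: 33204 is the decimal form of octal 100664, i.e. a regular file (S_IFREG = 0100000) with 0664 permissions, and 1000 is simply the uid/gid of the first regular user account on many Linux distributions (e.g. the `ubuntu` user on a default Ubuntu install). A quick sketch of the conversion:

```shell
# Convert the decimal x-amz-meta-mode back to octal to expose the
# file-type and permission bits: 100664 = S_IFREG (100000) + 0664.
printf '%o\n' 33204    # prints 100664
# The uid/gid of 1000 match the first regular user on most distros:
# id -u ubuntu   ->  1000 on a default Ubuntu install
```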
@skliarie commented on GitHub (Sep 7, 2016):
There is a way to replace metadata without re-uploading the whole object:
http://stackoverflow.com/questions/269840/is-it-possible-to-change-headers-on-an-s3-object-without-downloading-the-entire
BTW, the chmod suggestion above does not work for files bigger than 5 GB, and even below that limit it takes time proportional to the file size.
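For completeness, the trick in that Stack Overflow link is a same-key server-side copy with replaced metadata. With the AWS CLI it looks roughly like the sketch below; the bucket and key names are hypothetical, the actual call is commented out since it needs configured credentials, and note that the same 5 GB single-copy limit applies to `copy-object`:

```shell
# Hypothetical names -- substitute your own bucket and key.
BUCKET=my-bucket
KEY=point/anyfile.txt
# Server-side copy of the object onto itself, replacing its metadata with
# the headers s3fs expects (no download/upload of the object body):
CMD="aws s3api copy-object --bucket $BUCKET --key $KEY \
  --copy-source $BUCKET/$KEY --metadata-directive REPLACE \
  --metadata mode=33188,uid=1000,gid=1000"
echo "$CMD"
# eval "$CMD"   # uncomment once AWS credentials are configured
```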
@a2f0 commented on GitHub (Dec 11, 2016):
@ggtakec I can't get the chmod commands to work on OS X (for S3 objects), even with the bucket mounted as the AWS superuser. I get `Operation not permitted`. I've tried chmodding the mount point to 777 and re-mounting; still no dice. I'm mounting with `s3fs bucket /mnt/bucket`. I'd be interested in how to get this to work.
@wenhongqiang commented on GitHub (Dec 26, 2016):
Can anyone figure out how to set x-amz-meta-* headers on files/directories via s3cmd? I am stuck on this.
The alternative solution, using chmod, doesn't work on new directories, so it's not a better choice.
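s3cmd can rewrite headers in place with its `modify` command. A sketch under the assumption of a hypothetical bucket and key; the actual calls are shown commented out since they need configured credentials:

```shell
# Decimal st_mode for a regular file with 0644 permissions (see above).
MODE=$(( 0100644 ))   # 33188
echo "s3cmd modify --add-header=x-amz-meta-mode:${MODE} s3://my-bucket/some/file.txt"
# With credentials configured, the real calls would look like:
# s3cmd modify --add-header=x-amz-meta-mode:33188 \
#              --add-header=x-amz-meta-uid:1000 \
#              --add-header=x-amz-meta-gid:1000 s3://my-bucket/some/file.txt
```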
@ggtakec commented on GitHub (Jan 7, 2017):
@wenhongqiang
If you are willing to ignore existing permissions, you can specify the umask, mp_umask, gid and uid options for s3fs.
With these options, s3fs applies the permissions to all objects (files and directories) under the mount point.
Please see the man pages for s3fs and mount.
Thanks for your assistance.
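To make the umask-based options concrete: the effective permissions come from clearing the umask bits out of 0777, in the usual POSIX way. A sketch of the arithmetic plus a hypothetical mount command (the mount itself is commented out since it needs s3fs installed and credentials configured):

```shell
# Effective permission = 0777 & ~umask. With umask=022 everything under
# the mount point appears as 0755:
printf '%03o\n' $(( 0777 & ~0022 ))   # prints 755
# Hypothetical mount using the current user's numeric ids:
# s3fs my-bucket /mnt/my-bucket -o uid=$(id -u),gid=$(id -g),umask=022
```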
@wenhongqiang commented on GitHub (Jan 9, 2017):
@ggtakec
Thanks, will try it.
@pelamfi commented on GitHub (Mar 26, 2017):
For those unfamiliar with how the options mentioned above work, here is an example:
`-o uid=1000,umask=077,gid=1000,...` (Replace 1000 with your own numeric user id.)

@pifou42 commented on GitHub (Apr 4, 2017):
It's working, at last ! Thanks dudes.
I used the following command line (change the bucket and mount point names, of course):
`s3fs -o allow_other,uid=33,gid=33,umask=227,use_cache=/root/cache MY-BUCKET-NAME /mnt/MY-MOUNT-POINT`
Also, while most of the examples I found were using a cache dir, no one recommended emptying it occasionally. I'll probably set up a nightly cron to remove "old" files very soon, since I don't want my whole HDD to get filled.
Here are two other important things I had to figure out by myself:
Here's the /etc/fstab line I used:
`s3fs#MY-BUCKET-NAME /mnt/MY-MOUNT-POINT fuse _netdev,allow_other,umask=227,uid=33,gid=33,use_cache=/root/cache 0 0`
I had to modify both my /etc/updatedb.conf and /etc/cron.daily/locate, adding " /mnt/my-bucket-name" to PRUNEPATHS and " fuse.s3fs" to PRUNEFS.
I suppose adding fuse.s3fs should be enough, but... no time to take risks right now :)
I'm also wondering if the allow_other option is still necessary, but I suppose it is. Gotta check this later.
@gaul commented on GitHub (Apr 4, 2017):
#554 should address this.
@pkerpedjiev commented on GitHub (Aug 30, 2017):
In case anybody is wondering, you can get your uid and gid by running the `id` command (e.g. `id -u` and `id -g`).

@javilumbrales commented on GitHub (Nov 16, 2017):
In case someone lands here using Mac and having the same issue, I managed to be able to read folders and files with the below command:
@myfitment commented on GitHub (Jan 29, 2019):
I'm trying to set this up in a chrooted environment. All subdirectories are 755 and they can't be changed with chmod. What is the command to alter subdirectories to 777?
@ggtakec commented on GitHub (Mar 30, 2019):
We kept this issue open for a long time.
@myfitment Probably you can use the `umask` and `mp_umask` options for s3fs. If you need more information or want to report a bug, please open a new issue. (This issue will be closed.)
@Signant commented on GitHub (Nov 8, 2021):
Hi,
We are using s3fs v1.82 and we are facing the same read/write permission issue. What can be done to resolve this? Are there any bugs in this version?
@gaul commented on GitHub (Nov 8, 2021):
@Signant please stop spamming random issues with the same comments. Try upgrading to the latest s3fs. I will lock this issue now.