[GH-ISSUE #673] Mounting is successful, but get permission denied #380

Closed
opened 2026-03-04 01:44:58 +03:00 by kerem · 28 comments
Owner

Originally created by @avnerbarr on GitHub (Nov 15, 2017).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/673

Additional Information

The following information is very important in order to help us help you. Omitting these details may delay your support request or cause it to receive no attention at all.

  • Version of s3fs being used (s3fs --version): 1.82
  • Version of fuse being used (pkg-config --modversion fuse): 2.9.7
  • System information (uname -a): 17.0.0 Darwin Kernel Version 17.0.0
  • Distro (cat /etc/issue): not provided

Details about issue

I am able to mount the s3 bucket but all of the folders have permissions denied.

Why aren't the credentials working on the nested folders? Should I add a flag?

kerem closed this issue 2026-03-04 01:44:58 +03:00

@strk commented on GitHub (Nov 16, 2017):

You can set the `umask` option upon mounting, for example:

```
s3fs ... -o umask=0007
```
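As a quick sanity check of what a given mask yields, here is a small sketch; it assumes the mask is simply subtracted from a full 0777 permission base, which is how a umask conventionally works.

```shell
# Compute the effective permission bits left after applying umask=0007.
mask=0007
perms=$(( 0777 & ~mask ))
printf 'effective permissions: %04o\n' "$perms"   # prints: effective permissions: 0770
```

So `umask=0007` leaves rwx for the owner and group and nothing for others.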

@TT-JMay commented on GitHub (Nov 17, 2017):

I am having the same issue.

Amazon Simple Storage Service File System V1.82(commit:259f028) with OpenSSL
Linux ip-192-168-101-125 4.9.58-18.55.amzn1.x86_64 #1 SMP Thu Nov 2 04:38:47 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
fuse-2.9.4-1.17.amzn1.x86_64
Amazon Linux AMI release 2017.09

```
[root@ip-192-168-101-125 ~]# touch /mnt/test
touch: setting times of ‘/mnt/test’: No such file or directory
[root@ip-192-168-101-125 ~]#
```

It is just odd that it will mount and everything looks good, but then I can't interact with the bucket at all.


@avnerbarr commented on GitHub (Nov 22, 2017):

I tried it like this:

```
s3fs the-bucket-name ~/.s3/local-bucket-path/ -o passwd_file=~/.s3/passwd -o umask=0007
```

but still can't do anything, e.g.:

```
ls ~/.s3/local-bucket-path/inside-bucket/
ls: : Permission denied
```

@strk commented on GitHub (Nov 23, 2017):

Show the output of:

```
ls -l ~/.s3/local-bucket-path/
```

and of:

```
id
```

Does your user own the file? Or you can pass the `uid` option to make sure it does:

```
-o umask=0007,uid=1001  # replace 1001 with your uid
```

Or change `umask` to be more widely open, like 0000.
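Putting the diagnosis and the fix together, a minimal sketch; the bucket name and mount path are placeholders taken from the earlier comment, not literal values.

```shell
# Take the uid from `id` and pass it alongside umask in the mount options.
mount_uid=$(id -u)                      # numeric uid of the current user
opts="umask=0007,uid=${mount_uid}"
echo "s3fs the-bucket-name ~/.s3/local-bucket-path/ -o ${opts}"
```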


@ADTC commented on GitHub (Nov 23, 2017):

```
-o umask=0007,uid=1001  # replace 1001 with your id
```

This is the only thing that worked for me... It needs to go into the readme!


@strk commented on GitHub (Nov 23, 2017):

> This is the only thing that worked for me... It needs to go into the readme!

+1

And maybe it should also be the default (not sure why a non-root user can mount without being the owner of what's mounted, and without having at least full read permission on it).


@ggtakec commented on GitHub (Nov 26, 2017):

@avnerbarr @ADTC @strk I added an FAQ entry about this case.
https://github.com/s3fs-fuse/s3fs-fuse/wiki/FAQ#q-could-not-access-filesdirectories-by-permission-denied
Please see/check it, and if you have a problem please reopen this issue.
Thanks in advance for your assistance.


@avnerbarr commented on GitHub (Nov 28, 2017):

I don't understand how I should mount using the `uid` option.

Can you please elaborate?

I tried taking the id from the passwd file and putting it in the mount command (I tried several variations of the following):

```
s3fs bucket ~/.s3/local_bucket/ -o passwd_file=~/.s3/passwd -o umask=0007 -o uid='my token'
s3fs bucket ~/.s3/local_bucket/ -o passwd_file=~/.s3/passwd -o uid='my token' -o umask=0007
s3fs bucket ~/.s3/local_bucket/ -o passwd_file=~/.s3/passwd -o umask=0007,uid='my token'
s3fs bucket ~/.s3/local_bucket/ -o passwd_file=~/.s3/passwd -o umask=0007 uid='my token'
```

and always get this error:

```
fuse: invalid parameter in option `uid=my token'
```

@strk commented on GitHub (Nov 28, 2017):

What's the value of 'my token'? It should be an integer: the 3rd field in /etc/passwd, or what is reported as "uid" by the `id` command.
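In other words, the value must be numeric. One way to cross-check the two sources named above (a sketch, assuming the current user actually has an /etc/passwd entry):

```shell
# Compare `id -u` with the 3rd field of /etc/passwd for the current user.
me=$(id -un)
uid_from_id=$(id -u)
uid_from_passwd=$(awk -F: -v u="$me" '$1 == u { print $3 }' /etc/passwd)
echo "id -u says ${uid_from_id}; /etc/passwd says ${uid_from_passwd}"
```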


@zewt commented on GitHub (Jun 27, 2018):

The behavior when there's no s3fs metadata is confusing. It should use reasonable defaults (owned by the mounting user, 0600 permissions), so you can mount any bucket without having to search through the FAQ to figure out why you're seeing this:

```
09:50 PM user@linux/7 [~] ls test
---------- 1 root root  3096 Mar 15  2017 test.txt
---------- 1 root root 59183 Mar 15  2017 test2.txt
```

which is what everyone currently sees when they follow the examples to mount an existing bucket.


@momania commented on GitHub (Jul 12, 2018):

Agree with the above. The documentation makes it look like this is a no-brainer, easy to set up and use, but in practice I can't get anything to work; the permissions are one big mess and too confusing to get working.


@solomonxie commented on GitHub (Oct 31, 2018):

```
-o umask=0007,uid=1000,gid=1000
```

This works for me, and it has to have both `uid` and `gid`.
My id is:

```
$ id
>>> uid=1000(ubuntu) gid=1000(ubuntu) groups=1000(ubuntu)...
```

Without specifying `gid`, permission is denied every time.
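The same idea without hard-coding the numbers, using command substitution; the bucket name and mount point here are placeholders.

```shell
# Derive both uid and gid from the current user instead of hard-coding 1000.
opts="umask=0007,uid=$(id -u),gid=$(id -g)"
echo "s3fs my-bucket /mnt/my-bucket -o ${opts}"
```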


@AndresPineros commented on GitHub (Nov 1, 2018):

This shouldn't be placed in the FAQ; it should be in the usage steps.


@solomonxie commented on GitHub (Nov 1, 2018):

> This shouldn't be placed in the FAQ, this should be in the usage steps.

That's what I'm thinking as well; better to put it in the README.md for the majority who encounter this problem.


@zewt commented on GitHub (Nov 1, 2018):

It's not a documentation problem, it's just not great default behavior. It seems like the current defaults assume that people are mostly using this with buckets that are only used with S3FS and its special permission metadata exists. But, surely the most common use is to mount an existing S3 bucket created elsewhere, as a convenient way to access a webpage bucket, etc.

It should just default to 0600 permissions and the mounting user's UID when permission metadata doesn't exist, so you don't need to jump through hoops with mount options.

I think saving file ownership to metadata shouldn't be enabled by default, either. It's just something to cause problems when you mount a bucket on multiple systems. This is a view into an S3 bucket, after all, not an NFS mount...


@KES777 commented on GitHub (Nov 14, 2018):

This can also belong to the FUSE configuration itself. Read this: https://stackoverflow.com/a/30119806/4632019


@krisnova commented on GitHub (Dec 16, 2018):

On a MacBook:

```bash
# Will tell you your user ID
id

# Then you can mount in one command
s3fs nivenly-photos /Users/nova/Photos/ -o passwd_file=${HOME}/.passwd-s3fs -o umask=0007,uid=501
```

@polvoazul commented on GitHub (Jan 11, 2019):

Just use the `$UID` variable!

```
s3fs bucket /mount/point -o umask=0007,uid=$UID
```


@gaul commented on GitHub (Jan 11, 2019):

> It should just default to 0600 permissions and the mounting user's UID when permission metadata doesn't exist, so you don't need to jump through hoops with mount options.
>
> I think saving file ownership to metadata shouldn't be enabled by default, either. It's just something to cause problems when you mount a bucket on multiple systems. This is a view into an S3 bucket, after all, not an NFS mount...

@zewt Agree that permissions frustrate users but s3fs is trying to give the highest-fidelity POSIX filesystem that the S3 API allows. This is obviously not possible in all situations and some of the defaults probably do more harm than good. As a counterpoint, goofys makes different tradeoffs in POSIX vs. performance and ease of use. I opened #890 to track changing the defaults.


@adnangul commented on GitHub (Sep 13, 2019):

Still unable to get it working. I'm using a role instead of a secret, and I'm unable to write or copy anything in the folder.

```
s3fs -o iam_role="liferay-ec2" -o url="https://s3.us-east-2.amazonaws.com" -o endpoint=us-east-2 -o dbglevel=info -o curldbg -o allow_other -o use_cache=/tmp -o nonempty -o umask=0007,uid=1000,gid=1000 bucket /home/ec2-user/liferaymedia/document_library
```

The uid and gid are correct, obtained using the `id` command.


@99aulas commented on GitHub (Oct 2, 2019):

I am not using umask=0007; I'm using umask=0000. In my case the directory is a subdirectory of a web server that sends some files to S3. The owner must be ec2-user (the pipeline user) and the group must be apache (the user the web server runs as).


@squalsoft commented on GitHub (Mar 31, 2020):

> Show the output of `ls -l ~/.s3/local-bucket-path/` and of `id`. Does your user own the file? Or you can pass the `uid` option to make sure it does: `-o umask=0007,uid=1001` (replace 1001 with your id). Or change umask to be more open.

You can get your uid with the command `echo $UID`.


@lysukhin commented on GitHub (Nov 2, 2020):

In my case the problem did not go away even after I added all the necessary masks & UIDs.
The reason was an accidental `sudo` in the mount command. Removing it made everything OK.


@anthonymobile commented on GitHub (Jan 4, 2021):

> In my case the problem did not go away even after I added all the necessary masks & UIDs. The reason was an accidental `sudo` in the mount command. Removing it made everything OK.

me too


@ognjen-it commented on GitHub (Jan 29, 2021):

I have a similar problem. I have the Linux user "myusername" and his uid is 1001.
When I try to `cat` a file from this folder I get an error:

```
cat: s3dir/test.txt: Input/output error
```

My command for mounting is:

```
s3fs "mybackentrealname123" /home/myusername/s3dir -o passwd_file=/etc/passwd-s3fs,use_path_request_style,gid=1001,uid=1001,mp_umask=0007,allow_other,rw,dbglevel=info -f
```

IAM policy:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*"
        }
    ]
}
```

error:

```
[INF] s3fs.cpp:s3fs_getattr(876): [path=/]
[INF] curl.cpp:HeadRequest(3049): [tpath=/]
[INF] curl.cpp:PreHeadRequest(3009): [tpath=/][bpath=][save=][sseckeypos=-1]
[INF] curl.cpp:prepare_url(4703): URL is https://s3.amazonaws.com/mybackentrealname123/
[INF] curl.cpp:prepare_url(4736): URL changed is https://s3.amazonaws.com/mybackentrealname123/
[INF] curl.cpp:insertV4Headers(2753): computing signature [HEAD] [/] [] []
[INF] curl.cpp:url_to_host(99): url is https://s3.amazonaws.com
[INF] curl.cpp:RequestPerform(2441): HTTP response code 404 was returned, returning ENOENT
[INF] curl.cpp:HeadRequest(3049): [tpath=//]
[INF] curl.cpp:PreHeadRequest(3009): [tpath=//][bpath=][save=][sseckeypos=-1]
[INF] curl.cpp:prepare_url(4703): URL is https://s3.amazonaws.com/mybackentrealname123//
[INF] curl.cpp:prepare_url(4736): URL changed is https://s3.amazonaws.com/mybackentrealname123//
[INF] curl.cpp:insertV4Headers(2753): computing signature [HEAD] [//] [] []
[INF] curl.cpp:url_to_host(99): url is https://s3.amazonaws.com
[INF] curl.cpp:RequestPerform(2416): HTTP response code 200
[INF] cache.cpp:AddStat(371): add stat cache entry[path=//]
[INF] s3fs.cpp:s3fs_getattr(876): [path=/test.txt]
[INF] curl.cpp:HeadRequest(3049): [tpath=/test.txt]
[INF] curl.cpp:PreHeadRequest(3009): [tpath=/test.txt][bpath=][save=][sseckeypos=-1]
[INF] curl.cpp:prepare_url(4703): URL is https://s3.amazonaws.com/mybackentrealname123/test.txt
[INF] curl.cpp:prepare_url(4736): URL changed is https://s3.amazonaws.com/mybackentrealname123/test.txt
[INF] curl.cpp:insertV4Headers(2753): computing signature [HEAD] [/test.txt] [] []
[INF] curl.cpp:url_to_host(99): url is https://s3.amazonaws.com
[INF] curl.cpp:RequestPerform(2416): HTTP response code 200
[INF] cache.cpp:AddStat(371): add stat cache entry[path=/test.txt]
[INF] s3fs.cpp:s3fs_open(2170): [path=/test.txt][flags=0x8000]
[INF] s3fs.cpp:s3fs_flush(2302): [path=/test.txt][fd=0]
[INF] cache.cpp:DelStat(579): delete stat cache entry[path=/test.txt]
[INF] curl.cpp:HeadRequest(3049): [tpath=/test.txt]
[INF] curl.cpp:PreHeadRequest(3009): [tpath=/test.txt][bpath=][save=][sseckeypos=-1]
[INF] curl.cpp:prepare_url(4703): URL is https://s3.amazonaws.com/mybackentrealname123/test.txt
[INF] curl.cpp:prepare_url(4736): URL changed is https://s3.amazonaws.com/mybackentrealname123/test.txt
[INF] curl.cpp:insertV4Headers(2753): computing signature [HEAD] [/test.txt] [] []
[INF] curl.cpp:url_to_host(99): url is https://s3.amazonaws.com
[INF] curl.cpp:RequestPerform(2416): HTTP response code 200
[INF] cache.cpp:AddStat(371): add stat cache entry[path=/test.txt]
[INF] fdcache.cpp:SetMtime(1469): [path=/test.txt][fd=7][time=1611920916]
[INF] curl.cpp:GetObjectRequest(3384): [tpath=/test.txt][start=0][size=4]
[INF] curl.cpp:PreGetObjectRequest(3332): [tpath=/test.txt][start=0][size=4]
[INF] curl.cpp:prepare_url(4703): URL is https://s3.amazonaws.com/mybackentrealname123/test.txt
[INF] curl.cpp:prepare_url(4736): URL changed is https://s3.amazonaws.com/mybackentrealname123/test.txt
[INF] curl.cpp:GetObjectRequest(3403): downloading... [path=/test.txt][fd=7]
[INF] curl.cpp:insertV4Headers(2753): computing signature [GET] [/test.txt] [] []
[INF] curl.cpp:url_to_host(99): url is https://s3.amazonaws.com
[ERR] curl.cpp:RequestPerform(2466): ### CURLE_WRITE_ERROR
[INF] curl.cpp:RequestPerform(2577): ### retrying...
[INF] curl.cpp:RemakeHandle(2209): Retry request. [type=4][url=https://s3.amazonaws.com/mybackentrealname123/test.txt][path=/test.txt]
[ERR] curl.cpp:RequestPerform(2466): ### CURLE_WRITE_ERROR
[INF] curl.cpp:RequestPerform(2577): ### retrying...
[INF] curl.cpp:RemakeHandle(2209): Retry request. [type=4][url=https://s3.amazonaws.com/mybackentrealname123/test.txt][path=/test.txt]
[ERR] curl.cpp:RequestPerform(2466): ### CURLE_WRITE_ERROR
[INF] curl.cpp:RequestPerform(2577): ### retrying...
[INF] curl.cpp:RemakeHandle(2209): Retry request. [type=4][url=https://s3.amazonaws.com/mybackentrealname123/test.txt][path=/test.txt]
[ERR] curl.cpp:RequestPerform(2466): ### CURLE_WRITE_ERROR
[INF] curl.cpp:RequestPerform(2577): ### retrying...
[INF] curl.cpp:RemakeHandle(2209): Retry request. [type=4][url=https://s3.amazonaws.com/mybackentrealname123/test.txt][path=/test.txt]
[ERR] curl.cpp:RequestPerform(2466): ### CURLE_WRITE_ERROR
[INF] curl.cpp:RequestPerform(2577): ### retrying...
[INF] curl.cpp:RemakeHandle(2209): Retry request. [type=4][url=https://s3.amazonaws.com/mybackentrealname123/test.txt][path=/test.txt]
[ERR] curl.cpp:RequestPerform(2595): ### giving up
[ERR] fdcache.cpp:Read(2107): could not download. start(0), size(4096), errno(-5)
[WAN] s3fs.cpp:s3fs_read(2255): failed to read file(/test.txt). result=-5
[INF] curl.cpp:GetObjectRequest(3384): [tpath=/test.txt][start=0][size=4]
[INF] curl.cpp:PreGetObjectRequest(3332): [tpath=/test.txt][start=0][size=4]
[INF] curl.cpp:prepare_url(4703): URL is https://s3.amazonaws.com/mybackentrealname123/test.txt
[INF] curl.cpp:prepare_url(4736): URL changed is https://s3.amazonaws.com/mybackentrealname123/test.txt
[INF] curl.cpp:GetObjectRequest(3403): downloading... [path=/test.txt][fd=7]
[INF] curl.cpp:insertV4Headers(2753): computing signature [GET] [/test.txt] [] []
[INF] curl.cpp:url_to_host(99): url is https://s3.amazonaws.com
[ERR] curl.cpp:RequestPerform(2466): ### CURLE_WRITE_ERROR
[INF] curl.cpp:RequestPerform(2577): ### retrying...
[INF] curl.cpp:RemakeHandle(2209): Retry request. [type=4][url=https://s3.amazonaws.com/mybackentrealname123/test.txt][path=/test.txt]
[ERR] curl.cpp:RequestPerform(2466): ### CURLE_WRITE_ERROR
[INF] curl.cpp:RequestPerform(2577): ### retrying...
[INF] curl.cpp:RemakeHandle(2209): Retry request. [type=4][url=https://s3.amazonaws.com/mybackentrealname123/test.txt][path=/test.txt]
[ERR] curl.cpp:RequestPerform(2466): ### CURLE_WRITE_ERROR
[INF] curl.cpp:RequestPerform(2577): ### retrying...
[INF] curl.cpp:RemakeHandle(2209): Retry request. [type=4][url=https://s3.amazonaws.com/mybackentrealname123/test.txt][path=/test.txt]
[ERR] curl.cpp:RequestPerform(2466): ### CURLE_WRITE_ERROR
[INF] curl.cpp:RequestPerform(2577): ### retrying...
[INF] curl.cpp:RemakeHandle(2209): Retry request. [type=4][url=https://s3.amazonaws.com/mybackentrealname123/test.txt][path=/test.txt]
[ERR] curl.cpp:RequestPerform(2466): ### CURLE_WRITE_ERROR
[INF] curl.cpp:RequestPerform(2577): ### retrying...
[INF] curl.cpp:RemakeHandle(2209): Retry request. [type=4][url=https://s3.amazonaws.com/mybackentrealname123/test.txt][path=/test.txt]
[ERR] curl.cpp:RequestPerform(2595): ### giving up
[ERR] fdcache.cpp:Read(2107): could not download. start(0), size(4096), errno(-5)
[WAN] s3fs.cpp:s3fs_read(2255): failed to read file(/test.txt). result=-5
[INF] s3fs.cpp:s3fs_flush(2302): [path=/test.txt][fd=7]
[INF] fdcache.cpp:RowFlush(1885): [tpath=][path=/test.txt][fd=7]
[INF] s3fs.cpp:s3fs_release(2357): [path=/test.txt][fd=7]
[INF] fdcache.cpp:GetFdEntity(2505): [path=/test.txt][fd=7]
```
<!-- gh-comment-id:769819360 --> @ognjen-it commented on GitHub (Jan 29, 2021): I have a simular problem. I have linux user "myusername" and his uid is 1001. When I try to cat file from this folder I get error: `cat: s3dir/test.txt: Input/output error` My command for mounting is: s3fs "mybackentrealname123" /home/myusername/s3dir -o passwd_file=/etc/passwd-s3fs,use_path_request_style,gid=1001,uid=1001,mp_umask=0007,allow_other,rw,dbglevel=info -f IAM policy: > { > "Version": "2012-10-17", > "Statement": [ > { > "Effect": "Allow", > "Action": "s3:*", > "Resource": "*" > } > ] > } error: > [INF] s3fs.cpp:s3fs_getattr(876): [path=/] > [INF] curl.cpp:HeadRequest(3049): [tpath=/] > [INF] curl.cpp:PreHeadRequest(3009): [tpath=/][bpath=][save=][sseckeypos=-1] > [INF] curl.cpp:prepare_url(4703): URL is https://s3.amazonaws.com/mybackentrealname123/ > [INF] curl.cpp:prepare_url(4736): URL changed is https://s3.amazonaws.com/mybackentrealname123/ > [INF] curl.cpp:insertV4Headers(2753): computing signature [HEAD] [/] [] [] > [INF] curl.cpp:url_to_host(99): url is https://s3.amazonaws.com > [INF] curl.cpp:RequestPerform(2441): HTTP response code 404 was returned, returning ENOENT > [INF] curl.cpp:HeadRequest(3049): [tpath=//] > [INF] curl.cpp:PreHeadRequest(3009): [tpath=//][bpath=][save=][sseckeypos=-1] > [INF] curl.cpp:prepare_url(4703): URL is https://s3.amazonaws.com/mybackentrealname123// > [INF] curl.cpp:prepare_url(4736): URL changed is https://s3.amazonaws.com/mybackentrealname123// > [INF] curl.cpp:insertV4Headers(2753): computing signature [HEAD] [//] [] [] > [INF] curl.cpp:url_to_host(99): url is https://s3.amazonaws.com > [INF] curl.cpp:RequestPerform(2416): HTTP response code 200 > [INF] cache.cpp:AddStat(371): add stat cache entry[path=//] > [INF] s3fs.cpp:s3fs_getattr(876): [path=/test.txt] > [INF] curl.cpp:HeadRequest(3049): [tpath=/test.txt] > [INF] curl.cpp:PreHeadRequest(3009): [tpath=/test.txt][bpath=][save=][sseckeypos=-1] > [INF] 
> [INF] curl.cpp:prepare_url(4703): URL is https://s3.amazonaws.com/mybackentrealname123/test.txt
> [INF] curl.cpp:prepare_url(4736): URL changed is https://s3.amazonaws.com/mybackentrealname123/test.txt
> [INF] curl.cpp:insertV4Headers(2753): computing signature [HEAD] [/test.txt] [] []
> [INF] curl.cpp:url_to_host(99): url is https://s3.amazonaws.com
> [INF] curl.cpp:RequestPerform(2416): HTTP response code 200
> [INF] cache.cpp:AddStat(371): add stat cache entry[path=/test.txt]
> [INF] s3fs.cpp:s3fs_open(2170): [path=/test.txt][flags=0x8000]
> [INF] s3fs.cpp:s3fs_flush(2302): [path=/test.txt][fd=0]
> [INF] cache.cpp:DelStat(579): delete stat cache entry[path=/test.txt]
> [INF] curl.cpp:HeadRequest(3049): [tpath=/test.txt]
> [INF] curl.cpp:PreHeadRequest(3009): [tpath=/test.txt][bpath=][save=][sseckeypos=-1]
> [INF] curl.cpp:prepare_url(4703): URL is https://s3.amazonaws.com/mybackentrealname123/test.txt
> [INF] curl.cpp:prepare_url(4736): URL changed is https://s3.amazonaws.com/mybackentrealname123/test.txt
> [INF] curl.cpp:insertV4Headers(2753): computing signature [HEAD] [/test.txt] [] []
> [INF] curl.cpp:url_to_host(99): url is https://s3.amazonaws.com
> [INF] curl.cpp:RequestPerform(2416): HTTP response code 200
> [INF] cache.cpp:AddStat(371): add stat cache entry[path=/test.txt]
> [INF] fdcache.cpp:SetMtime(1469): [path=/test.txt][fd=7][time=1611920916]
> [INF] curl.cpp:GetObjectRequest(3384): [tpath=/test.txt][start=0][size=4]
> [INF] curl.cpp:PreGetObjectRequest(3332): [tpath=/test.txt][start=0][size=4]
> [INF] curl.cpp:prepare_url(4703): URL is https://s3.amazonaws.com/mybackentrealname123/test.txt
> [INF] curl.cpp:prepare_url(4736): URL changed is https://s3.amazonaws.com/mybackentrealname123/test.txt
> [INF] curl.cpp:GetObjectRequest(3403): downloading... [path=/test.txt][fd=7]
> [INF] curl.cpp:insertV4Headers(2753): computing signature [GET] [/test.txt] [] []
> [INF] curl.cpp:url_to_host(99): url is https://s3.amazonaws.com
> [ERR] curl.cpp:RequestPerform(2466): ### CURLE_WRITE_ERROR
> [INF] curl.cpp:RequestPerform(2577): ### retrying...
> [INF] curl.cpp:RemakeHandle(2209): Retry request. [type=4][url=https://s3.amazonaws.com/mybackentrealname123/test.txt][path=/test.txt]
> [ERR] curl.cpp:RequestPerform(2466): ### CURLE_WRITE_ERROR
> [INF] curl.cpp:RequestPerform(2577): ### retrying...
> [INF] curl.cpp:RemakeHandle(2209): Retry request. [type=4][url=https://s3.amazonaws.com/mybackentrealname123/test.txt][path=/test.txt]
> [ERR] curl.cpp:RequestPerform(2466): ### CURLE_WRITE_ERROR
> [INF] curl.cpp:RequestPerform(2577): ### retrying...
> [INF] curl.cpp:RemakeHandle(2209): Retry request. [type=4][url=https://s3.amazonaws.com/mybackentrealname123/test.txt][path=/test.txt]
> [ERR] curl.cpp:RequestPerform(2466): ### CURLE_WRITE_ERROR
> [INF] curl.cpp:RequestPerform(2577): ### retrying...
> [INF] curl.cpp:RemakeHandle(2209): Retry request. [type=4][url=https://s3.amazonaws.com/mybackentrealname123/test.txt][path=/test.txt]
> [ERR] curl.cpp:RequestPerform(2466): ### CURLE_WRITE_ERROR
> [INF] curl.cpp:RequestPerform(2577): ### retrying...
> [INF] curl.cpp:RemakeHandle(2209): Retry request. [type=4][url=https://s3.amazonaws.com/mybackentrealname123/test.txt][path=/test.txt]
> [ERR] curl.cpp:RequestPerform(2595): ### giving up
> [ERR] fdcache.cpp:Read(2107): could not download. start(0), size(4096), errno(-5)
> [WAN] s3fs.cpp:s3fs_read(2255): failed to read file(/test.txt). result=-5
> [INF] curl.cpp:GetObjectRequest(3384): [tpath=/test.txt][start=0][size=4]
> [INF] curl.cpp:PreGetObjectRequest(3332): [tpath=/test.txt][start=0][size=4]
> [INF] curl.cpp:prepare_url(4703): URL is https://s3.amazonaws.com/mybackentrealname123/test.txt
> [INF] curl.cpp:prepare_url(4736): URL changed is https://s3.amazonaws.com/mybackentrealname123/test.txt
> [INF] curl.cpp:GetObjectRequest(3403): downloading... [path=/test.txt][fd=7]
> [INF] curl.cpp:insertV4Headers(2753): computing signature [GET] [/test.txt] [] []
> [INF] curl.cpp:url_to_host(99): url is https://s3.amazonaws.com
> [ERR] curl.cpp:RequestPerform(2466): ### CURLE_WRITE_ERROR
> [INF] curl.cpp:RequestPerform(2577): ### retrying...
> [INF] curl.cpp:RemakeHandle(2209): Retry request. [type=4][url=https://s3.amazonaws.com/mybackentrealname123/test.txt][path=/test.txt]
> [ERR] curl.cpp:RequestPerform(2466): ### CURLE_WRITE_ERROR
> [INF] curl.cpp:RequestPerform(2577): ### retrying...
> [INF] curl.cpp:RemakeHandle(2209): Retry request. [type=4][url=https://s3.amazonaws.com/mybackentrealname123/test.txt][path=/test.txt]
> [ERR] curl.cpp:RequestPerform(2466): ### CURLE_WRITE_ERROR
> [INF] curl.cpp:RequestPerform(2577): ### retrying...
> [INF] curl.cpp:RemakeHandle(2209): Retry request. [type=4][url=https://s3.amazonaws.com/mybackentrealname123/test.txt][path=/test.txt]
> [ERR] curl.cpp:RequestPerform(2466): ### CURLE_WRITE_ERROR
> [INF] curl.cpp:RequestPerform(2577): ### retrying...
> [INF] curl.cpp:RemakeHandle(2209): Retry request. [type=4][url=https://s3.amazonaws.com/mybackentrealname123/test.txt][path=/test.txt]
> [ERR] curl.cpp:RequestPerform(2466): ### CURLE_WRITE_ERROR
> [INF] curl.cpp:RequestPerform(2577): ### retrying...
> [INF] curl.cpp:RemakeHandle(2209): Retry request. [type=4][url=https://s3.amazonaws.com/mybackentrealname123/test.txt][path=/test.txt]
> [ERR] curl.cpp:RequestPerform(2595): ### giving up
> [ERR] fdcache.cpp:Read(2107): could not download. start(0), size(4096), errno(-5)
> [WAN] s3fs.cpp:s3fs_read(2255): failed to read file(/test.txt). result=-5
> [INF] s3fs.cpp:s3fs_flush(2302): [path=/test.txt][fd=7]
> [INF] fdcache.cpp:RowFlush(1885): [tpath=][path=/test.txt][fd=7]
> [INF] s3fs.cpp:s3fs_release(2357): [path=/test.txt][fd=7]
> [INF] fdcache.cpp:GetFdEntity(2505): [path=/test.txt][fd=7]
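For reference, a trace like the one above can be reproduced by running s3fs in the foreground with its logging options turned up (`-f`, `dbglevel`, and `curldbg` are documented s3fs options; the bucket name and mountpoint below are placeholders, not values from this issue):

```shell
# Sketch: mount in the foreground with verbose s3fs and libcurl logging,
# so [INF]/[ERR] messages like those above are printed to the terminal.
# "mybucket" and "/path/to/mountpoint" are placeholders for your own values.
s3fs mybucket /path/to/mountpoint \
  -o passwd_file="${HOME}/.passwd-s3fs" \
  -o dbglevel=info -f -o curldbg
```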

@sitzbrau commented on GitHub (Apr 4, 2024):

I also fixed it this way:

> just use the `$UID` variable: `s3fs bucket /mount/point -o umask=0007,uid=$UID`

This is my complete command (replace the `<>` placeholders):

`sudo s3fs <PUT S3 ID HERE> <PUT MOUNT FOLDER PATH HERE> -o iam_role=<IAM ROLE FOR EC2> -o use_cache=/opt/dlami/nvme -o allow_other -o uid=1000 -o mp_umask=0007 -o multireq_max=5 -o use_path_request_style`

You need to create an IAM role with "AmazonS3FullAccess" for EC2 instances.

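The `umask=0007` / `mp_umask=0007` options control which permission bits s3fs reports for entries that carry no permission metadata; the numbers follow ordinary umask masking. A minimal sketch of that arithmetic (plain shell, not s3fs itself; the exact mode s3fs starts from before masking is an s3fs implementation detail):

```shell
# Sketch: umask bits are *cleared* from the base mode.
# With umask=0007, "other" loses all access while owner and group keep theirs.
umask_opt=0007
printf 'directories: %o\n' $(( 0777 & ~umask_opt ))   # 770 -> rwxrwx---
printf 'files:       %o\n' $(( 0666 & ~umask_opt ))   # 660 -> rw-rw----
```

This is why `umask=0007` is usually paired with `uid=`/`gid=`: only the owner and group retain access, so they must match the user who needs it.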

@KES777 commented on GitHub (Apr 22, 2024):

Strange, why are issues closed when they have not been resolved yet?


@aza1200 commented on GitHub (Jul 30, 2024):

In my case (macOS), regarding the permission issue: the command line

```
s3fs [MY_BUCKET] ~/mnt/ -o passwd_file=${HOME}/.passwd-s3fs -o allow_other -o use_cache=/tmp -o umask=0007,uid=[$UID],gid=[$GID]
```

was not working in my Mac **iTerm terminal**, but it did work when I changed to the **macOS Terminal**.

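A note on the `[$UID]` / `[$GID]` placeholders above: bash sets `$UID` for you but generally not `$GID`, so resolving both with `id` is more portable across shells. A sketch (the mount command is only echoed here, not run; the option string mirrors the comments above):

```shell
# Sketch: derive the numeric uid/gid of the current user with id(1),
# which works even in shells that do not predefine $UID or $GID.
uid=$(id -u)
gid=$(id -g)
echo "s3fs options: -o umask=0007,uid=${uid},gid=${gid}"
```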