[GH-ISSUE #197] ls: reading directory .: Input/output error on top level directory #111

Closed
opened 2026-03-04 01:42:12 +03:00 by kerem · 10 comments

Originally created by @kabads on GitHub (Jun 19, 2015).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/197

root@myserver:/srv/s3# ls
ls: reading directory .: Input/output error

This only happens for the top-level directory. I can cd into the next directory down and ls there without problems.

I have seen other bugs that are similar to this - but not exactly the same issue as mine. This is a stock ubuntu 14.04 machine with s3fs version 1.78 with openssl.

Any ideas what can be wrong with this top level dir?

kerem closed this issue 2026-03-04 01:42:12 +03:00

@gaul commented on GitHub (Jun 19, 2015):

@kabads Can you run s3fs with the -d -d -f -o f2 -o curldbg flags and report any errors?
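
A minimal sketch of the suggested debug invocation, assuming a hypothetical bucket name "mybucket" mounted at /srv/s3 (substitute your own values):

```shell
# -d -d raises the debug level and -f keeps s3fs in the foreground,
# while -o f2 and -o curldbg enable the extra FUSE/libcurl tracing
# suggested above. "mybucket" and "/srv/s3" are placeholders.
s3fs mybucket /srv/s3 -d -d -f -o f2 -o curldbg 2>&1 | tee /tmp/s3fs-debug.log
```

Running in the foreground with tee keeps the trace on screen while saving a copy to attach to the issue.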


@kabads commented on GitHub (Jun 20, 2015):

I have redacted my bucket name:

    * Connection #0 to host mybucket.s3-eu-west-1.amazonaws.com left intact
    MultiRead(3481): failed a request(403: http://mybucket.s3-eu-west-1.amazonaws.com/%7EVersionArchive/)
    multi_head_retry_callback(2170): Over retry count(3) limit(/~VersionArchive/).
    readdir_multi_head(2246): error occuered in multi request(errno=-5).
    s3fs_readdir(2301): readdir_multi_head returns error(-5).
    unique: 24, error: -5 (Input/output error), outsize: 16
    unique: 25, opcode: RELEASEDIR (29), nodeid: 1, insize: 64, pid: 0
    unique: 25, success, outsize: 16

@enmatt commented on GitHub (Jun 25, 2015):

I've noticed that when the directory contains only folders, ls works fine. If there are regular files inside the directory, I get the error too. I can touch a file and still receive the error, yet I can see the file was uploaded to the S3 bucket on Amazon's side.


@ggtakec commented on GitHub (Jan 17, 2016):

I'm sorry this issue was left unattended for such a long period of time.

This problem occurs when s3fs times out while getting the object head information (multipart HEAD request).
Please use the latest code, which fixes the multipart request problem, and try setting the "retries" parameter for s3fs.
If you still get the same problem, please run s3fs with the dbglevel (and curldbg) options.
These options will help us solve this issue.

Thanks in advance for your help.
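
As a hedged illustration of the advice above: "retries", "dbglevel", and "curldbg" are documented s3fs options, while the bucket name and mount point below are placeholders.

```shell
# Mount with an increased retry count and verbose logging, per the
# suggestion above. Replace "mybucket" and "/srv/s3" with your own values.
# -f keeps s3fs in the foreground so the log is visible on the terminal.
s3fs mybucket /srv/s3 -o retries=5 -o dbglevel=info -o curldbg -f
```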


@albac commented on GitHub (Feb 25, 2016):

I am having the exact same problem, and I believe I am using the latest code:

    s3fs --version
    Amazon Simple Storage Service File System V1.79 with OpenSSL

fstab:

    s3fs#my-bucket /mnt/my-folder fuse -d,iam_role=my-role,allow_other,curldbg,retries=5,endpoint=us-west-2,url=https://s3-us-west-2.amazonaws.com

And I am getting the following:

    ls: reading directory .: Input/output error

The instance has the IAM role, and I am able to use "aws s3 --region us-west-2 ls s3://my-bucket/".
The strange thing is that /var/log/messages shows the list of folders along with these messages:

    s3fs: failed a request(403: https://s3-bucket-url/file-visible)..
    s3fs: Over retry count(5) limit(/document.tgz).
    s3fs: error occuered in multi request(errno=-5)
    s3fs: readdir_multi_head returns error(-5).

So s3fs is actually able to show the filenames in /var/log/messages.

Another strange thing: a recently set up instance works with the same role, and the failing instance was working some time ago. I wonder if there is a service running that I need to restart.

I have unmounted and remounted, but the problem persists. I have also tried unmounting with fusermount -u, with no difference.
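
The unmount/remount cycle described above might look like the following, using the mount point from the fstab line as a placeholder:

```shell
# Unmount the FUSE mount; plain "umount" also works if run as root.
fusermount -u /mnt/my-folder
# Remount using the existing fstab entry.
mount /mnt/my-folder
```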


@ggtakec commented on GitHub (Mar 6, 2016):

First, I want to know whether s3fs failed to mount.
(Did the df command fail?)

From your results, s3fs seems unable to access the bucket because of S3 access control.

We need to know the reason for this failure, so if possible, please set the dbglevel/curldbg options and capture the detailed debug log.

Regards,
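
A quick way to answer the "did the mount fail?" question above, assuming the placeholder mount point from the earlier fstab line:

```shell
# If the mount is broken, df typically fails (e.g. "Transport endpoint
# is not connected") or omits the s3fs filesystem entirely.
df -h /mnt/my-folder
# Confirm a fuse.s3fs entry is present for the mount point.
grep s3fs /proc/mounts
```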


@ggtakec commented on GitHub (Mar 30, 2019):

We kept this issue open for a long time.
We have released a new version, 1.86, which fixes some problems (bugs).
Please use the latest version.
I will close this, but if the problem persists, please reopen or post a new issue.


@ivan-ayala-mx commented on GitHub (Jun 21, 2019):

We are having the same issue:

    ls: reading directory .: Input/output error
    total 0

There is a mounted bucket, and during operation (probably more than 5 times per day) we get this message. I have to unmount and mount it again to resolve it, and sometimes I need to reboot the server.

We are using version 1.85.

This related ticket is open: https://github.com/s3fs-fuse/s3fs-fuse/issues/1040


@Davidrjx commented on GitHub (Jul 8, 2022):

Same problem using V1.91 (commit:df7bbb2).


@dumpyFox commented on GitHub (Sep 4, 2023):

Same here. V1.93 (commit:3f64c72) with OpenSSL
