[GH-ISSUE #2081] use_path_request_style always results in Input/output error #1055

Closed
opened 2026-03-04 01:50:59 +03:00 by kerem · 5 comments
Owner

Originally created by @nick-youngblut on GitHub (Dec 22, 2022).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2081

Additional Information

Version of s3fs being used (s3fs --version)

V1.86 (newest on Ubuntu 20.04)

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse or dpkg -s fuse)

2.9.9-3

Kernel information (uname -r)

5.4.0-1094-azure

GNU/Linux Distribution, if applicable (cat /etc/os-release)

NAME="Ubuntu"
VERSION="20.04.5 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.5 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal

How to run s3fs, if applicable

sudo mkdir -p /mnt/my_bucket
sudo s3fs -o endpoint=us-west-2 \
  -o allow_other \
  -o use_cache=/tmp \
  -o dbglevel=info \
  -o passwd_file=${HOME}/.passwd-s3fs \
  -o use_path_request_style \
  my.bucket /mnt/my_bucket
ls /mnt/my_bucket
# ls: reading directory '/mnt/my_bucket': Input/output error
sudo ls /mnt/my_bucket
# ls: reading directory '/mnt/my_bucket': Input/output error
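Since syslog showed nothing, one way to surface the failing request (a debugging sketch, not part of the original report) is to run s3fs in the foreground with curl-level debug output, so errors print directly to the terminal:

```shell
# Same mount as above, but foreground (-f) with full debug logging.
# dbglevel=debug and curldbg print each S3/HTTP request and response,
# which usually pinpoints the cause of an Input/output error.
sudo s3fs my.bucket /mnt/my_bucket \
  -o endpoint=us-west-2 \
  -o passwd_file=${HOME}/.passwd-s3fs \
  -o use_path_request_style \
  -f -o dbglevel=debug -o curldbg
```

Unmount with `sudo umount /mnt/my_bucket` (or Ctrl-C in foreground mode) before retrying with different options.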

s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)

No logging info

Details about issue

All of the buckets for my organization include dots in the name, so I use -o use_path_request_style.
However, that results in the Input/output error.
If I create a new bucket lacking any dots in the name and mount without -o use_path_request_style,
the command successfully executes, but nothing is actually mounted.
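For context on why dots matter: with virtual-hosted-style requests the bucket name becomes part of the hostname (`my.bucket.s3.amazonaws.com`), which fails TLS validation against the `*.s3.amazonaws.com` wildcard certificate, so path-style is indeed the usual workaround for dotted names. A variant worth trying (an assumption, not confirmed in this thread) is pairing `use_path_request_style` with an explicit regional `url`, since older s3fs versions can send path-style requests to the wrong endpoint otherwise:

```shell
# Hypothetical variant: path-style requests aimed directly at the
# bucket's regional endpoint instead of the global one.
sudo s3fs my.bucket /mnt/my_bucket \
  -o passwd_file=${HOME}/.passwd-s3fs \
  -o use_path_request_style \
  -o url=https://s3.us-west-2.amazonaws.com
```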

I'm using the GitHub codespaces with the following devcontainer config:

{
    "name": "nfcore",
    "image": "nfcore/gitpod:2.7.1",
    "postCreateCommand": "python -m pip install --upgrade -r requirements-dev.txt -e ../ && pre-commit install --install-hooks",
    "remoteUser": "gitpod",
    "postCreateCommand": "bash .devcontainer/setup.sh",
    // Configure tool-specific properties.
    "customizations": {
        // Configure properties specific to VS Code.
        "vscode": {
            // Set *default* container specific settings.json values on container create.
            "settings": {
                "python.defaultInterpreterPath": "/opt/conda/bin/python",
                "python.linting.enabled": true,
                "python.linting.pylintEnabled": true,
                "python.formatting.autopep8Path": "/opt/conda/bin/autopep8",
                "python.formatting.yapfPath": "/opt/conda/bin/yapf",
                "python.linting.flake8Path": "/opt/conda/bin/flake8",
                "python.linting.pycodestylePath": "/opt/conda/bin/pycodestyle",
                "python.linting.pydocstylePath": "/opt/conda/bin/pydocstyle",
                "python.linting.pylintPath": "/opt/conda/bin/pylint"
            },

            // Add the IDs of extensions you want installed when the container is created.
            "extensions": ["ms-python.python", "ms-python.vscode-pylance", "REditorSupport.r", "EditorConfig.EditorConfig",
                           "codezombiech.gitignore", "Gruntfuggly.todo-tree", "ms-vsliveshare.vsliveshare",
                           "nextflow.nextflow", "redhat.vscode-yaml", "streetsidesoftware.code-spell-checker"]
        }
	},
	"features": {
		"ghcr.io/devcontainers/features/docker-in-docker:2": {
            "version": "latest"
        },
        "ghcr.io/devcontainers/features/aws-cli:1": {
            "version": "latest"
        }
	}
}

My AWS credentials work, given that aws s3 ls my.bucket works with the same credentials.

kerem closed this issue 2026-03-04 01:50:59 +03:00

@nick-youngblut commented on GitHub (Dec 22, 2022):

The same error occurs if I use a "blank" GitHub codespace (no devcontainer.json) running Ubuntu 20.04.5.


@nick-youngblut commented on GitHub (Dec 22, 2022):

Also, https://github.com/s3fs-fuse/s3fs-fuse#examples states:

s3fs supports the standard AWS credentials file stored in ${HOME}/.aws/credentials

...however, that results in the following error:

s3fs: could not determine how to establish security credentials.

My credentials:

$ cat ~/.aws/credentials 
[default]
region = us-west-2
aws_access_key_id = MY_KEY
aws_secret_access_key = MY_SECRET_KEY

@ggtakec commented on GitHub (Jan 7, 2023):

@nick-youngblut
It seems that the ${HOME}/.aws/credentials file is not found.
The ${HOME} in the documentation means the home directory of the user who runs the s3fs process (i.e., the home directory from the passwd file).
Note that it is not the HOME environment variable.

Please check the s3fs execution user and path.
Thanks in advance for your assistance.
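The distinction above can be checked directly: under sudo, $HOME typically resolves to /root, so s3fs looks for /root/.aws/credentials rather than the invoking user's file. A quick sketch (the /etc/passwd-s3fs workaround is an assumption based on s3fs's documented system-wide passwd file, not something from this thread):

```shell
# Show which home directory (and thus which credentials file)
# the sudo'd s3fs process would actually see.
sudo sh -c 'echo "$HOME"; ls -l "$HOME/.aws/credentials"'

# One workaround: put the credentials in the system-wide passwd
# file that s3fs reads regardless of the executing user's home.
echo "MY_KEY:MY_SECRET_KEY" | sudo tee /etc/passwd-s3fs >/dev/null
sudo chmod 600 /etc/passwd-s3fs
```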


@nick-youngblut commented on GitHub (Jan 9, 2023):

Thanks @ggtakec for the explanation. I've since been able to fix the issue. Moreover, I'm now using fig to handle all of my secrets, so no need for .aws/credentials.

It would be great to see a "best practices" example of using s3fs in a GitHub codespace, but maybe that is a bit too hard to document clearly?


@ggtakec commented on GitHub (Jan 15, 2023):

@nick-youngblut Thanks for confirming.
I would also like to provide s3fs "best practices" for GitHub Codespaces, but I'm not familiar enough with them yet.
I wish someone could compile such information.
