[GH-ISSUE #2096] s3fs options don't include access and secret key #1063

Open
opened 2026-03-04 01:51:04 +03:00 by kerem · 5 comments

Originally created by @chrisbecke on GitHub (Jan 20, 2023).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2096

Additional Information

Version of s3fs being used (s3fs --version)

Amazon Simple Storage Service File System V1.91 (commit:unknown) with OpenSSL

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse or dpkg -s fuse)

```
Version     : 2.9.9
Release     : 15.fc37
```

Kernel information (uname -r)

5.15.49-linuxkit

GNU/Linux Distribution, if applicable (cat /etc/os-release)

```
NAME="Red Hat Enterprise Linux Server"
VERSION="7.9 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.9"
PRETTY_NAME="Red Hat Enterprise Linux Server 7.9 (Maipo)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.9:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"

REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.9
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.9"
```

How to run s3fs, if applicable

Docker supports mounting volumes via its local driver, which takes options that are compatible with fstab entries:

```
volumes:
  bucket:
    driver: local
    type: s3fs
    driver_options:
      device=my-bucket
      o=url=http://minio.example.com,passwd_file=no_possible_value,access_key=not-supported,secret_key=not-supported
```

n/a
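For reference, a Docker local-driver volume definition normally nests the fstab-style fields under `driver_opts` rather than `driver_options`; a sketch of what the requested feature would look like (the endpoint, bucket name, and key values are placeholders, and `access_key=`/`secret_key=` are the *proposed* options from this issue, not ones s3fs currently accepts):

```yaml
volumes:
  bucket:
    driver: local
    driver_opts:
      type: s3fs
      device: my-bucket
      # access_key/secret_key below are the options being requested in this issue
      o: url=http://minio.example.com,access_key=AKIA_EXAMPLE,secret_key=SECRET_EXAMPLE
```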

Details about issue

s3fs will accept the access key and secret key via environment variables or via a file, but only accepts the S3 URL via `-o url=`. Docker can mount volumes using installed filesystem drivers, but this directive cannot populate a file on the host or set environment variables. So without explicit `-o access_key=,secret_key=` support, it is not possible to mount s3fs shares as Docker volumes, which should be easy for any locally available filesystem driver.
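For context, the file-based mechanism the issue refers to is the `passwd_file` option; a minimal sketch of the expected file format (the path and key values here are placeholders):

```shell
# Write the credential file in the ACCESS_KEY_ID:SECRET_ACCESS_KEY format s3fs expects
printf '%s:%s\n' "AKIA_EXAMPLE" "SECRET_EXAMPLE" > /tmp/passwd-s3fs

# s3fs rejects credential files readable by other users, so tighten permissions
chmod 600 /tmp/passwd-s3fs
stat -c '%a' /tmp/passwd-s3fs   # prints 600
```

The mount itself would then be something like `s3fs my-bucket /mnt/bucket -o url=http://minio.example.com -o passwd_file=/tmp/passwd-s3fs` — which is exactly the step a Docker volume directive cannot prepare, since it cannot create that file on the host.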


@ggtakec commented on GitHub (Jan 29, 2023):

@chrisbecke, @gaul and s3fs users:
Implementing this option is not difficult.

Until now, s3fs has not supported this option for fear of secret/token leakage.
(If the user specifies credentials via environment variables they face the same exposure, so this concern may already be moot.)
Certainly, as you say, starting s3fs at container startup would be easier if credentials could be specified as an option.

But I'm not sure yet whether I should add this option.
Users would have to decide for themselves whether to take the risk of passing secrets/tokens as options.
(This may be a moot concern since we already provide environment variables.)

I would like your opinion on this matter.


@bonjour-py commented on GitHub (Nov 26, 2024):

Really need this.
I use a podman-systemd.unit(5) (systemd-managed podman container) volume:

```
[Volume]
Options = url=http://minio.example.com,passwd_file=no_possible_value,access_key=not-supported,secret_key=not-supported
Type = fuse.s3fs
Device = my-bucket
[Service]
Environment = AWS_ACCESS_KEY_ID=**************
Environment = AWS_SECRET_ACCESS_KEY=*************
```

which should work like:

```
export AWS_ACCESS_KEY_ID=**************
export AWS_SECRET_ACCESS_KEY=*************
podman volume create -t fuse.s3fs --device=my-bucket --opt=url=http://minio.example.com,passwd_file=no_possible_value,access_key=not-supported,secret_key=not-supported
```

but it does not work: podman apparently cannot pass environment variables through to mount (s3fs).
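One possible workaround under these constraints, sketched below on the assumption that something on the host can write a root-only file before the volume is created (the paths and names are illustrative), is to route the credentials through `passwd_file` instead of the environment:

```shell
# Write the credentials where the host-side mount helper can read them
printf '%s:%s\n' "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" > /etc/passwd-s3fs
chmod 600 /etc/passwd-s3fs

# Point s3fs at the file rather than at env vars the mount helper never sees
podman volume create \
  --opt type=fuse.s3fs \
  --opt device=my-bucket \
  --opt o=url=http://minio.example.com,passwd_file=/etc/passwd-s3fs \
  my-bucket
```

This still requires write access to the host filesystem, which is exactly what the multi-tenant scenario in this thread rules out, so it does not replace the requested `-o access_key=,secret_key=` support.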


@chrisbecke commented on GitHub (Nov 26, 2024):

My case is that of a multi-tenant Docker swarm. One tenant deploys a stack with a service that uses their S3 bucket; another deploys a different stack that uses their own S3 bucket.

I would not want to expose either tenant's bucket credentials or contents to the other tenant, so each must have the option of configuring the bucket credentials on their own container. The environment is set on the host, so that is unacceptable. In this case `-o` options are the only viable mechanism, as the containers run isolated and can't influence the host environment where the filesystem drivers run.


@gaul commented on GitHub (Nov 26, 2024):

Why can't you put the authentication keys in a file?

https://github.com/s3fs-fuse/s3fs-fuse?tab=readme-ov-file#examples


@chrisbecke commented on GitHub (Nov 27, 2024):

When deploying containers with a container orchestrator such as swarm, you don't get to install arbitrary files on the host filesystem.
