Mirror of https://github.com/s3fs-fuse/s3fs-fuse.git (synced 2026-04-25 13:26:00 +03:00)
[GH-ISSUE #2096] s3fs options don't include access and secret key #1063
Originally created by @chrisbecke on GitHub (Jan 20, 2023).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2096
Additional Information

- Version of s3fs being used (`s3fs --version`): Amazon Simple Storage Service File System V1.91 (commit:unknown) with OpenSSL
- Version of fuse being used (`pkg-config --modversion fuse`, `rpm -qi fuse` or `dpkg -s fuse`): (not provided)
- Kernel information (`uname -r`): 5.15.49-linuxkit
- GNU/Linux Distribution, if applicable (`cat /etc/os-release`): (not provided)
- How to run s3fs, if applicable: n/a. Docker supports mounting volumes via its local driver, which takes options that are compatible with fstab entries.
Details about issue
s3fs will accept the access key and secret key via environment variables or via a file, but will only accept the S3 URL via `-o url=`. Docker can mount volumes using installed file system drivers, but cannot populate a file on the host, or environment variables, via this directive. So without explicit `-o access_key=,secret_key=` support it is not possible to mount s3fs shares as Docker volumes using what should be easy support for any locally available fs driver.

@ggtakec commented on GitHub (Jan 29, 2023):
@chrisbecke , @gaul and s3fs users
Implementing this option is not difficult.
Until now, s3fs has not supported it, for fear of secret/token leakage.
(If the user specifies credentials via environment variables they have the same exposure, so that concern is probably already moot.)
Certainly, as you say, starting s3fs at container startup would be easier if credentials could be specified as options.
But I'm not sure yet whether I should add this option.
I'm unsure whether users should be allowed to take on the risk of passing secrets/tokens as options, at their own risk.
(This may already be a moot concern, since we provide environment variables.)
I would like your opinion on this matter.
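The environment-variable mechanism ggtakec refers to can be sketched as follows. The variable names `AWSACCESSKEYID`/`AWSSECRETACCESSKEY` are the ones s3fs documents; the key values are placeholders, and the mount command itself is commented out since it requires s3fs, a real bucket, and real credentials:

```shell
# Existing mechanism: s3fs reads credentials from environment variables.
# Placeholder values only -- never put real keys in scripts.
export AWSACCESSKEYID="AKIAEXAMPLEKEY"
export AWSSECRETACCESSKEY="exampleSecretKey"

# With the variables set, the mount itself needs no credential options:
# s3fs mybucket /mnt/s3 -o url=https://s3.amazonaws.com
echo "credentials exported for s3fs"
```

This is exactly the path that container volume drivers cannot use, since the driver runs in the host's environment, not the container's.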
@bonjour-py commented on GitHub (Nov 26, 2024):
really need it
I use a podman-systemd.unit(5) (systemd-managed podman container) volume.
it works like
it does not work
podman seems unable to pass environment variables through to the mount (s3fs).
@chrisbecke commented on GitHub (Nov 26, 2024):
My case is a multi-tenant Docker swarm. One tenant deploys a stack with a service that uses their S3 bucket; another deploys a different stack that uses their own S3 bucket.
I would not want to expose either tenant's bucket credentials or contents to the other tenant, so each must have the option of configuring the bucket credentials on their own container. The environment is set on the host, so that is unacceptable. In this case `-o` options are the only viable mechanism, as the containers run isolated and can't influence the host environment where the filesystem drivers run.
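A hedged sketch of what this scenario is asking for: passing credentials as mount options through Docker's local volume driver. The `access_key=`/`secret_key=` options shown are the requested feature, not existing s3fs flags (as of v1.91). The script only prints the `docker volume create` command rather than running it, since executing it would need a Docker daemon and real credentials:

```shell
# Hypothetical: the requested -o access_key=/secret_key= options would let a
# tenant define an s3fs-backed volume entirely from their own stack definition.
# These two options are the feature request, NOT existing s3fs flags.
OPTS="url=https://s3.amazonaws.com,access_key=AKIAEXAMPLEKEY,secret_key=exampleSecretKey"

# Print (don't run) the volume definition; running it needs a Docker daemon.
echo docker volume create \
  --driver local \
  --opt type=fuse.s3fs \
  --opt device=mybucket \
  --opt o="${OPTS}"
```

Because the whole definition is a single options string, it can live in a per-tenant compose file without touching the host's files or environment.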
@gaul commented on GitHub (Nov 26, 2024):
Why can't you put the authentication keys in a file:
https://github.com/s3fs-fuse/s3fs-fuse?tab=readme-ov-file#examples
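The README approach gaul points to can be sketched as below. A temp file is used here for illustration (s3fs's conventional locations are `${HOME}/.passwd-s3fs` or `/etc/passwd-s3fs`), and the credentials are placeholders:

```shell
# Create an s3fs credential file in ACCESS_KEY:SECRET_KEY format.
# Placeholder credentials; real usage writes ${HOME}/.passwd-s3fs instead.
PASSWD_FILE="$(mktemp)"
echo "AKIAEXAMPLEKEY:exampleSecretKey" > "${PASSWD_FILE}"
chmod 600 "${PASSWD_FILE}"   # s3fs rejects credential files readable by others

# The mount would then reference the file explicitly:
# s3fs mybucket /mnt/s3 -o passwd_file="${PASSWD_FILE}" -o url=https://s3.amazonaws.com
```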
@chrisbecke commented on GitHub (Nov 27, 2024):
When deploying containers using container orchestrators like swarm, you don't get to install arbitrary files onto the host file system.