[GH-ISSUE #864] s3fs not seeing AWS ID and SECRET as environment variables #503

Closed
opened 2026-03-04 01:46:10 +03:00 by kerem · 0 comments
Owner

Originally created by @Alexithymia2014 on GitHub (Nov 27, 2018).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/864

Version of s3fs being used (s3fs --version)

1.84-3.el7

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)

Name : fuse
Arch : x86_64
Version : 2.9.2
Release : 10.el7

Kernel information (uname -r)

3.10.0-862.3.2.el7.x86_64

GNU/Linux Distribution, if applicable (cat /etc/os-release)

NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

s3fs command line used, if applicable

s3fs $BUCKET_NAME $MOUNT_LOCATION

/etc/fstab entry, if applicable

s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)

If you execute s3fs with the dbglevel and curldbg options, you can get detailed debug messages.

Details about issue

Using s3fs v1.83 (and v1.84), I source environment variables from a file in a script I've written around the s3fs command. Unfortunately, s3fs does not see these variables even though they appear set in the shell, and it fails with the error message "could not determine how to establish security credentials." I'm using CentOS 7.5 with the latest package updates.

Passing the AWS variables below as assignments on the s3fs command line works, and exporting them works too, but sourcing the file does not.

BUCKET_NAME=[bucketname]
MOUNT_LOCATION=[mountlocation]
AWSACCESSKEYID=XXXXXXXXXX
AWSSECRETACCESSKEY=XXXXXXXXX
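A likely cause, not stated in the issue itself: `source` runs the file in the current shell, so plain `VAR=value` assignments create shell variables only. They are not copied into the environment that child processes such as s3fs inherit unless each one is marked with `export`. A minimal sketch of a config file that would propagate them (variable names taken from the issue; values are placeholders):

```shell
# Hypothetical /root/.aws_backup_config
# `export` marks each variable for inheritance by child processes;
# a bare VAR=value assignment stays local to the sourcing shell.
export BUCKET_NAME=my-bucket
export MOUNT_LOCATION=/mnt/s3
export AWSACCESSKEYID=XXXXXXXXXX
export AWSSECRETACCESSKEY=XXXXXXXXX
```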

After sourcing the file, in my script I just call s3fs with the bucket name and mount location. Here is a snippet of the script:

CONFIG_FILE_LOCATION=/root/.aws_backup_config
if [ -e "$CONFIG_FILE_LOCATION" ]; then
    source "$CONFIG_FILE_LOCATION"
else
    echo "AWS Backup configuration does not exist at $CONFIG_FILE_LOCATION"
    exit 1
fi
echo "Mounting S3 bucket at $MOUNT_LOCATION"
echo "$AWSACCESSKEYID"
echo "$AWSSECRETACCESSKEY"

# Mount the S3 bucket at MOUNT_LOCATION
s3fs "$BUCKET_NAME" "$MOUNT_LOCATION"
S3FS_RC=$?
if [ "$S3FS_RC" != 0 ]; then
    echo "Failed to mount bucket! Something isn't configured correctly! Check server messages log."
    exit "$S3FS_RC"
fi

What seems to be the issue?

kerem closed this issue 2026-03-04 01:46:10 +03:00