[GH-ISSUE #1214] AWS CloudWatch Agent overriding credentials #644

Closed
opened 2026-03-04 01:47:29 +03:00 by kerem · 1 comment

Originally created by @usernamemikem on GitHub (Dec 6, 2019).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1214

This may be an FYI, because I'm not sure it's 100% your issue.

Using:
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.3 LTS
Release: 18.04
Codename: bionic

#### Version of s3fs being used (s3fs --version)

Amazon Simple Storage Service File System V1.85 (commit:bdfb9ee) with OpenSSL

#### Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)

Version: 2.9.7-1ubuntu1

#### Kernel information (uname -r)

command result: uname -r

4.15.0-1056-aws

I installed the S3FS software to mount an S3 Bucket. I also installed the CloudWatch Agent to capture the logs on my EC2.
Created two IAM Users:

CloudWatch-EC2Log-Agent and SFTP-Bucket-User, both using Access Key IDs.

The Access Key ID for SFTP-Bucket-User is stored in /etc/passwd-s3fs.

The Access Key ID for CloudWatch-EC2Log-Agent is stored in /root/.aws/credentials.

I mounted the S3 Bucket first, then installed the CloudWatch agent. It ran for a couple of weeks without issue until I rebooted.
After the reboot, the bucket wasn't mounted. When I tried to mount it manually with `sudo mount -a`, I saw these errors:

s3fs.cpp:s3fs_check_service(3866): Failed to connect by sigv4, so retry to connect by signature version 2.
s3fs.cpp:s3fs_check_service(3881): invalid credentials(host=https://s3.us-east-1.amazonaws.com) - result of checking service.
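For reference, a setup like the one described can pin the mount to its own credential file by passing `passwd_file` explicitly in /etc/fstab, so `mount -a` never falls back to another source. This is a minimal sketch; the bucket name, mount point, and key values are hypothetical:

```shell
# /etc/passwd-s3fs — credentials for this mount only (must be mode 0600)
#   ACCESS_KEY_ID:SECRET_ACCESS_KEY

# Hypothetical /etc/fstab entry, naming the credential file explicitly:
#   s3fs#my-sftp-bucket /mnt/sftp fuse _netdev,allow_other,passwd_file=/etc/passwd-s3fs 0 0

chmod 600 /etc/passwd-s3fs
mount -a
```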

Basically, the CloudWatch credentials were being used for both services. Even though the credentials for the SFTP Bucket user are stored in a completely different place, they were ignored or overridden.

I stopped the CloudWatch Agent and renamed the /root/.aws/credentials file. The bucket was able to mount again.

Now I'm using only one user, the SFTP Bucket User, and gave it the same permissions as the CloudWatch agent, so there is no confusion between the credentials. The S3 Bucket is mounted, and CloudWatch is receiving logs and creating alarms as expected.

Of course, I was banging my head against the wall because I thought I had caused it with an update I ran. But I was able to reproduce it on a temporary EC2 instance.

kerem closed this issue 2026-03-04 01:47:29 +03:00

@gaul commented on GitHub (Oct 10, 2020):

I believe that 0569cec3ea addresses your symptom, included in 1.86. Now s3fs checks the password file and environment before the ${HOME}/.aws/credentials file.
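The change gaul describes can be illustrated with a small sketch. This is not s3fs's actual C++ code, just a model of the lookup order in 1.86+: the s3fs password file and the environment are consulted before `${HOME}/.aws/credentials`, so an unrelated agent's credential file can no longer shadow the mount's own keys. The function name and key tuples here are hypothetical:

```python
def resolve_credentials(passwd_file_keys, env, aws_credentials_keys):
    """Return (source, key_pair) using the first match, in s3fs 1.86+ priority order."""
    # 1. The s3fs password file (e.g. /etc/passwd-s3fs) wins if present.
    if passwd_file_keys:
        return ("passwd_file", passwd_file_keys)
    # 2. Environment variables are checked next.
    if "AWSACCESSKEYID" in env and "AWSSECRETACCESSKEY" in env:
        return ("environment", (env["AWSACCESSKEYID"], env["AWSSECRETACCESSKEY"]))
    # 3. ~/.aws/credentials is now the *last* resort, not the first.
    if aws_credentials_keys:
        return ("aws_credentials", aws_credentials_keys)
    return (None, None)

# The reporter's scenario: both files exist -> the passwd file now wins.
source, _ = resolve_credentials(
    passwd_file_keys=("AKIA...SFTP", "secret1"),
    env={},
    aws_credentials_keys=("AKIA...CWAGENT", "secret2"),
)
print(source)  # passwd_file
```

Before this change, the `aws_credentials` branch was consulted earlier, which is why the CloudWatch agent's keys took over the mount.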
