[GH-ISSUE #896] root disk is full after I upload my data to mounted s3 bucket #522

Closed
opened 2026-03-04 01:46:19 +03:00 by kerem · 1 comment

Originally created by @rongqiibri on GitHub (Jan 16, 2019).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/896

Additional Information

The following information is very important in order to help us help you. Omitting these details may delay your support request or cause it to receive no attention at all.
Keep in mind that the commands we provide to retrieve information are oriented toward GNU/Linux distributions, so you may need to use different ones if you run s3fs on macOS or BSD.

Version of s3fs being used (s3fs --version)

V1.84(commit:b68d97c) with OpenSSL

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)

2.9.2

Kernel information (uname -r)

4.14.88-88.76.amzn2.x86_64

GNU/Linux Distribution, if applicable (cat /etc/os-release)

NAME="Amazon Linux"
VERSION="2"
ID="amzn"
ID_LIKE="centos rhel fedora"
VERSION_ID="2"
PRETTY_NAME="Amazon Linux 2"
ANSI_COLOR="0;33"
CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2"
HOME_URL="https://amazonlinux.com/"

Command to mount
s3fs nameofmys3buckett -o use_cache=/tmp -o allow_other -o uid=1001 -o mp_umask=002 -o multireq_max=5 /mys3bucket
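For reference, a sketch of the same mount with the local cache bounded, assuming the `ensure_diskfree` and `del_cache` options are available in the installed s3fs version; the bucket name and mountpoint are the placeholders from the command above:

```shell
# Same mount as above, but keep at least 10 GiB free on the filesystem
# holding the cache directory (ensure_diskfree takes MB), and delete
# stale cached objects at startup (del_cache).
s3fs nameofmys3buckett /mys3bucket \
    -o use_cache=/tmp \
    -o ensure_diskfree=10240 \
    -o del_cache \
    -o allow_other -o uid=1001 -o mp_umask=002 -o multireq_max=5
```

Without a bound like this, `use_cache` will keep a local copy of every object read or written through the mount, which is what filled the root disk here.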

Details about issue

I launched an EC2 instance with 100 GB of storage and mounted S3 on it with the command above. Then I uploaded roughly 100 GB of data to my S3 bucket. Afterwards I was told there was no more space left on my EC2 instance.
With df -h, I found: /dev/xvda1 100G 99G 1.1G 99% /
But all of that data is actually in my S3 bucket.
I guess there must be something wrong with my configuration. Please help!

kerem closed this issue 2026-03-04 01:46:19 +03:00

@rongqiibri commented on GitHub (Jan 16, 2019):

I figured it out. All the data from S3 was being copied to /tmp (the use_cache directory), and that is why the root disk is full.
This can be closed.
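A minimal sketch of how the cache space could be inspected and reclaimed, assuming the usual layout where `-o use_cache=/tmp` stores cached objects under `/tmp/<bucketname>` (the bucket name below is the placeholder from the mount command above):

```shell
# Assumption: with -o use_cache=/tmp, s3fs keeps cached objects under
# /tmp/<bucketname>.
du -sh /tmp/nameofmys3buckett   # how much root-disk space the cache uses
fusermount -u /mys3bucket       # unmount before deleting the cache
rm -rf /tmp/nameofmys3buckett   # reclaim the space
```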
