mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #896] root disk is full after I upload my data to mounted s3 bucket #522
Originally created by @rongqiibri on GitHub (Jan 16, 2019).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/896
Additional Information
The following information is very important in helping us help you. Omitting any of these details may delay your support request or cause it to receive no attention at all.
Keep in mind that the commands we suggest for retrieving this information are oriented toward GNU/Linux distributions, so you may need to use different ones if you run s3fs on macOS or BSD.
Version of s3fs being used (s3fs --version)
V1.84(commit:b68d97c) with OpenSSL
Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)
2.9.2
Kernel information (uname -r)
4.14.88-88.76.amzn2.x86_64
GNU/Linux Distribution, if applicable (cat /etc/os-release)
NAME="Amazon Linux"
VERSION="2"
ID="amzn"
ID_LIKE="centos rhel fedora"
VERSION_ID="2"
PRETTY_NAME="Amazon Linux 2"
ANSI_COLOR="0;33"
CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2"
HOME_URL="https://amazonlinux.com/"
### Command to mount:
s3fs nameofmys3buckett -o use_cache=/tmp -o allow_other -o uid=1001 -o mp_umask=002 -o multireq_max=5 /mys3bucket
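Note that `use_cache=/tmp` tells s3fs to keep a local copy of every object it reads or writes under the cache directory, so a bucket-sized workload can consume roughly as much local disk as the data itself. A quick way to see how much space the cache is taking (assuming the default layout of `<cache dir>/<bucket name>`; the bucket name below is the one from the mount command above):

```shell
# Inspect local disk usage of the s3fs file cache for this bucket.
# s3fs stores cached objects under <use_cache dir>/<bucket name>.
du -sh /tmp/nameofmys3buckett
```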
Details about issue
I launched an EC2 instance with 100 GB of storage and mounted my S3 bucket on it with the command above. I then uploaded about 100 GB of data to the bucket, after which I was told there was no space left on the EC2 instance.
Running `df -h`, I found: /dev/xvda1 100G 99G 1.1G 99% /
But all of that data is actually in my S3 bucket.
I guess there must be something wrong with my configuration. Please help!
@rongqiibri commented on GitHub (Jan 16, 2019):
I figured it out. All of the data written through the mount was copied to /tmp (the `use_cache` directory), which is why the disk filled up.
This issue can be closed.
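For anyone hitting the same problem, a sketch of ways to keep the file cache from filling the root disk (option names are from the s3fs man page; the bucket name, mount point, and sizes below are taken from this issue or are illustrative):

```shell
# Option 1: mount without use_cache at all, so no persistent
# local cache of objects is kept.
s3fs nameofmys3buckett -o allow_other -o uid=1001 -o mp_umask=002 /mys3bucket

# Option 2: keep the cache, but have s3fs preserve free space on the
# cache filesystem (ensure_diskfree takes a size in MB; 10240 = 10 GB
# is an illustrative value).
s3fs nameofmys3buckett -o use_cache=/tmp -o ensure_diskfree=10240 \
    -o allow_other -o uid=1001 -o mp_umask=002 /mys3bucket

# Option 3: keep the cache during operation, but delete cached files
# when s3fs starts and exits.
s3fs nameofmys3buckett -o use_cache=/tmp -o del_cache \
    -o allow_other -o uid=1001 -o mp_umask=002 /mys3bucket
```

Pointing `use_cache` at a separate, larger volume instead of /tmp on the root filesystem is another common workaround.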