[GH-ISSUE #2071] df command shows wrong values (Used=0) #1046
Originally created by @Framsfex on GitHub (Nov 30, 2022).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2071
Version of s3fs being used (`s3fs --version`): V1.86 (commit:unknown)
Version of fuse being used (`pkg-config --modversion fuse`, `rpm -qi fuse` or `dpkg -s fuse`): 2.9.9-3
Kernel information (`uname -r`): 5.4.0-132-generic
GNU/Linux Distribution, if applicable (`cat /etc/os-release`): Ubuntu 20.04.5
How to run s3fs, if applicable:
`s3fs s3fs-test /S3 -o passwd_file=/root/.passwd-s3fs -o url=https://s3.tik.uni-stuttgart.de -o use_path_request_style`
Details about issue
root@obertux:/S3# mount | grep S3
s3fs on /S3 type fuse.s3fs (rw,nosuid,nodev,relatime,user_id=0,group_id=0)
root@obertux:/S3# df -TH .
Filesystem Type Size Used Avail Use% Mounted on
s3fs fuse.s3fs 282T 0 282T 0% /S3
root@obertux:/S3# du -sh
41G .
root@obertux:/S3# find . | wc -l
108
==> df shows Used=0 but there IS at least 41 GB of file data!
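For context, `df` does not inspect any files: its Size/Used/Avail columns come straight from the filesystem's `statvfs(2)` reply, so it can only show whatever the FUSE layer reports. A minimal C sketch (assuming the `/S3` mount point from this report) that reads the same fields `df` uses:

```c
/* Sketch: read the statvfs(2) fields that df derives its columns from.
 * Build: cc -o vfscheck vfscheck.c ; run: ./vfscheck /S3 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/statvfs.h>

int main(int argc, char *argv[])
{
    const char *path = (argc > 1) ? argv[1] : "/S3"; /* mount point from the report */
    struct statvfs vfs;

    if (statvfs(path, &vfs) != 0) {
        perror("statvfs");
        return EXIT_FAILURE;
    }

    /* df uses f_frsize when it is nonzero, else f_bsize */
    unsigned long long frsize = vfs.f_frsize ? vfs.f_frsize : vfs.f_bsize;
    unsigned long long total  = (unsigned long long)vfs.f_blocks * frsize;
    unsigned long long used   = (unsigned long long)(vfs.f_blocks - vfs.f_bfree) * frsize;
    unsigned long long avail  = (unsigned long long)vfs.f_bavail * frsize;

    printf("total=%llu used=%llu avail=%llu bytes\n", total, used, avail);
    return EXIT_SUCCESS;
}
```

Run against the mount above, this prints used=0: since Used is computed as f_blocks - f_bfree, the df output in the report implies s3fs answers `statvfs` with f_bfree equal to f_blocks.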
@gaul commented on GitHub (Dec 1, 2022):
s3fs cannot support space used via `statvfs` since it does not/cannot store the number of bytes used and S3 does not provide it. You need to use `du` instead of `df`.
@Framsfex commented on GitHub (Dec 1, 2022):
Please look at my report: I have used df! THIS IS the problem! df reports wrong numbers!
@gaul commented on GitHub (Dec 1, 2022):
I corrected my comment. Again, this is not something s3fs will ever support since S3 does not provide this information and it is expensive to calculate.
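To make the mechanics concrete: a filesystem that advertises every block as free will always show Used=0 in `df`, no matter how much data the bucket holds. Below is a hypothetical FUSE 2.x filesystem illustrating that behavior. This is not s3fs's actual code and the names are illustrative, though the constants are chosen so the total works out to 2^48 bytes, which `df -H` prints as the 282T seen above.

```c
/* Hypothetical FUSE filesystem whose statfs handler reproduces the
 * df output in this report (Used = 0 regardless of contents).
 * Build (FUSE 2.x, as in the report):
 *   gcc demo.c $(pkg-config --cflags --libs fuse) -o demo
 * Run: ./demo /mnt/demo ; then `df -TH /mnt/demo` shows Used=0. */
#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <errno.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/statvfs.h>

/* Only the root directory exists; that is enough for df to work. */
static int demo_getattr(const char *path, struct stat *st)
{
    memset(st, 0, sizeof(*st));
    if (strcmp(path, "/") == 0) {
        st->st_mode  = S_IFDIR | 0755;
        st->st_nlink = 2;
        return 0;
    }
    return -ENOENT;
}

static int demo_statfs(const char *path, struct statvfs *stbuf)
{
    (void)path;
    memset(stbuf, 0, sizeof(*stbuf));
    stbuf->f_bsize   = 16UL * 1024 * 1024;  /* 16 MiB blocks (illustrative) */
    stbuf->f_frsize  = stbuf->f_bsize;
    stbuf->f_blocks  = 0x1000000;           /* fixed fictitious capacity: 2^48 bytes total */
    stbuf->f_bfree   = stbuf->f_blocks;     /* every block "free" -> df Used = 0 */
    stbuf->f_bavail  = stbuf->f_blocks;
    stbuf->f_namemax = 255;
    return 0;
}

static struct fuse_operations demo_ops = {
    .getattr = demo_getattr,
    .statfs  = demo_statfs,
};

int main(int argc, char *argv[])
{
    return fuse_main(argc, argv, &demo_ops, NULL);
}
```

Reporting real usage instead would require summing the size of every object in the bucket on each `statvfs` call, which is the expense gaul refers to.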