mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 21:35:58 +03:00
[GH-ISSUE #559] Ensure diskfree doesn't work at all #313
Originally created by @davidfischer-ch on GitHub (Apr 2, 2017).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/559
Additional Information
Details about issue
s3fs-fuse keeps caching files even though the filesystem is running out of free space; with less than 500 MB free (at times under 200 MB), it was still caching files! So I disabled this feature.
@ggtakec commented on GitHub (Apr 9, 2017):
@davidfischer-ch Thanks for your report.
When s3fs starts uploading an object (file), it compares the free disk space with the file size and decides whether or not to use the cache directory.
So s3fs should no longer use the cache directory if the file size exceeds the free disk space, and s3fs does not do a multipart upload when it is not using the cache directory.
If you check with the debug log (dbglevel=info and the curldbg option), I think you will be able to see this difference.
I tried the case in this issue again, and the capacity specified by ensure_diskfree was protected.
However, there was another bug in the code on the master branch.
I am fixing it in #560. (That code is only a temporary measure and will be overwritten later; please see the comment on #560.)
If you can, please test again with the latest code on the master branch.
Thanks in advance for your assistance.
@davidfischer-ch commented on GitHub (Apr 11, 2017):
OK, I will test it.
@davidfischer-ch commented on GitHub (Apr 11, 2017):
options: _netdev,allow_other,noatime,rw,enable_noobj_cache,endpoint=eu-west-1,iam_role=auto,max_stat_cache_size=60000,use_sse,ensure_diskfree=3400,use_cache=/tmp/s3fs_cache
What is the unit of ensure_diskfree: bytes or kilobytes?
I am still not sure what the unit is, because my test fails again with master.
@ggtakec commented on GitHub (Apr 30, 2017):
@davidfischer-ch
The unit of ensure_diskfree is MB.
(I think there is nothing wrong with your configuration options.)
s3fs uses the statvfs function.
If you can modify the code for debugging, please try adding a debug message in FdManager::IsSafeDiskSpace().
Regards,
@ggtakec commented on GitHub (Mar 30, 2019):
We have kept this issue open for a long time.
Is this problem still occurring?
We have released new version 1.86, which fixed some problems (bugs).
Please use the latest version.
I will close this, but if the problem persists, please reopen this issue or post a new one.
@benzaam commented on GitHub (Jun 8, 2020):
@ggtakec I'd like to understand how the cache is working. Let's say I mount a bucket with 10GB of data in it, with the following fstab entry:
s3fs#bucket_name:/folder/data /mnt/data fuse allow_other,_netdev,use_path_request_style,umask=0,uid=1007,gid=1007,sync,passwd_file=/etc/passwd-s3fs,url=http://s3-eu-west-1.amazonaws.com 0 0
Then I'll have no local cache, right? So when du -h /mnt/data/* -d 0 returns this:
9.8G /mnt/data/folder1
it's not actually counting against my disk space, because the data is not stored locally, right?
Thanks!
Benjamin
@ggtakec commented on GitHub (Aug 2, 2020):
@benzaam It will never use the local disk permanently unless you use the use_cache option.
However, it may use the disk temporarily while uploading or downloading files. (When the upload or download completes, that space is released.)