[GH-ISSUE #2616] Slow file reading when disk space is low #1246
Originally created by @xqc-xiong on GitHub (Dec 9, 2024).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2616
Additional Information
Version of s3fs being used (s3fs --version): 1.91
Kernel information (uname -r): ubuntu20.04
GNU/Linux Distribution, if applicable (cat /etc/os-release):
How to run s3fs, if applicable:
[ ] command line
[ ] /etc/fstab
Details about issue
I encountered a problem when using s3fs with the use_cache option. When disk space is insufficient and two concurrent reads of the same file are in flight, the data downloaded by one read request gets deleted by the other, so the same data is downloaded over and over in a loop. Each attempt reads only a small part of the file, which makes reading very slow. Is there a good solution for this?
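For context, the exact mount options are not included in the report; a minimal sketch of a mount with the local file cache enabled (bucket name, mount point, and cache path are placeholders, not values from the thread) looks roughly like this:

    # Mount a bucket with the local file cache enabled; downloaded data is
    # kept under the use_cache directory and reused on later reads.
    s3fs mybucket /mnt/s3 -o use_cache=/tmp/s3fs-cache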
@ggtakec commented on GitHub (Jan 19, 2025):
s3fs keeps downloaded files locally.
However, if the cache disk runs out of space, s3fs will create temporary files instead, which results in the problem you are experiencing.
Unfortunately, the only way to avoid this is to delete unnecessary files in the cache directory (e.g. periodically), or to point the cache directory at a location with a large amount of free disk space.
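A hedged sketch of the two workarounds suggested above, assuming a bucket named mybucket and a large volume mounted at /data (all names, paths, and the retention period are placeholders, not values from the thread):

    # Workaround 1: keep the cache on a volume with plenty of free space.
    s3fs mybucket /mnt/s3 -o use_cache=/data/s3fs-cache

    # Workaround 2: periodically prune cache files that have not been read
    # recently (example crontab entry; runs daily at 03:00, 7-day retention).
    0 3 * * * find /data/s3fs-cache/mybucket -type f -atime +7 -delete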