[GH-ISSUE #2616] Slow file reading when disk space is low #1246

Open
opened 2026-03-04 01:52:32 +03:00 by kerem · 1 comment
Owner

Originally created by @xqc-xiong on GitHub (Dec 9, 2024).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2616

Additional Information

Version of s3fs being used (s3fs --version)

1.91

Kernel information (uname -r)

ubuntu20.04

GNU/Linux Distribution, if applicable (cat /etc/os-release)

How to run s3fs, if applicable

[] command line
[] /etc/fstab

-ouse_cache=/ramdisk -odel_cache -oparallel_count=32 -omultipart_size=50
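For context, a full mount invocation using these options might look like the following. The bucket name and mount point are placeholders, not taken from the report; the options themselves are copied from the entry above.

```shell
# Hypothetical bucket name and mount point; options as reported in this issue.
s3fs mybucket /mnt/s3 \
  -o use_cache=/ramdisk \
  -o del_cache \
  -o parallel_count=32 \
  -o multipart_size=50
```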

Details about issue

I encountered a problem when using s3fs with use_cache enabled. When disk space is insufficient and two reads of the same file run concurrently, the data downloaded by one read request can be deleted by the other, so the same data is downloaded over and over in a loop. Only a small part of the data is read on each pass, which makes reading very slow. Is there any good solution for this?

Author
Owner

@ggtakec commented on GitHub (Jan 19, 2025):

s3fs keeps downloaded files locally.
However, if you run out of disk space, temporary files will be created, resulting in the same problem you're experiencing.
Unfortunately, the only way to avoid this is to delete unnecessary files in the cache directory (ex. periodically), or to point the cache directory to a location with a large amount of disk space.
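The periodic-deletion workaround suggested above could be sketched as a small cleanup function run from cron. This is a minimal sketch, not part of s3fs itself: the function name, the `/ramdisk` default, and the 60-minute threshold are all assumptions for illustration.

```shell
#!/bin/sh
# Hedged sketch of the maintainer's suggestion: periodically delete
# cache files that have not been accessed recently, so the cache
# directory does not fill the disk. The helper name and defaults are
# hypothetical, not an s3fs API.
evict_stale_cache() {
    dir="$1"    # s3fs cache directory, e.g. /ramdisk
    min="$2"    # age threshold in minutes

    # -amin +N matches files last *accessed* more than N minutes ago,
    # so files a reader is actively using are left alone.
    find "$dir" -type f -amin +"$min" -delete
}

# Example: evict files in /ramdisk untouched for over an hour.
# evict_stale_cache /ramdisk 60
```

A crontab entry such as `*/10 * * * * /usr/local/bin/evict_stale_cache.sh` (every 10 minutes) would keep the cache bounded; tune the threshold to your disk size and workload.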
