mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #160] Read performance problem: pages merge is not effective due to fuse max pages per request #91
Originally created by @boazrf on GitHub (Mar 30, 2015).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/160
S3fs's page merging during reads (in GetUninitPages) should provide a performance optimization when reading large chunks from a file. Page merging was recently fixed so that pages are merged according to the read request size; this fix prevents reading the entire file as a single page when only parts of it are requested (issue #112).
However, it looks like s3fs page merging never occurs and the optimization is never gained, due to FUSE's limit on the maximum pages per request. FUSE breaks a single read into multiple read requests, each limited to a buffer of at most page-size (4 KB) × max-pages-per-request (32 pages). The maximum request size is therefore 131072 bytes (128 KiB), much smaller than a reasonable s3fs page.
The result: when reading large chunks from a file, s3fs sends multiple GET requests to S3 instead of a single (or parallel) request for the entire requested buffer. Since my app uses an s3fs page size of 1 MB instead of the default 50 MB, the problem is much worse for us.
So far I haven't found a solution or workaround for this issue. Ideas are welcome.