Mirror of https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[PR #114] [CLOSED] Disable pages merge #1349
📋 Pull Request Information
Original PR: https://github.com/s3fs-fuse/s3fs-fuse/pull/114
Author: @fcand-anevia
Created: 1/26/2015
Status: ❌ Closed
Base: master ← Head: range

📝 Commits (10+)
- c3ea03b disable pages merge to download only what is necessary and lower part size to 1MB
- 7b30736 add possibility to disable pages merge to download only what is asked (rounded to page size of course) very useful when dealing with very big objects in order not to wait for the whole object to be downloaded to start working with small amount of data (takes much more sense when cache is not used)
- b56d7ff SetMultipartSize method now takes value in bytes (instead of mbytes) and minimum is lowered to 64kb (instead of 1MB)
- c6393ed debian files for making debian packages
- 9f86fc8 do not update cache date when reading cache; it should be updated only when reading actual data over network
- 3c3ba2c first commit for prefetch threaded downloads
- 6830523 add dbg package
- 2bfe686 fix deadlock
- 2cb15c5 set correct git-hub urls
- 0602acc prepare changelog before merge on master

📊 Changes
12 files changed (+230 additions, -35 deletions)
➕ debian/changelog (+13 -0)
➕ debian/compat (+1 -0)
➕ debian/control (+23 -0)
➕ debian/copyright (+20 -0)
➕ debian/rules (+23 -0)
📝 src/cache.cpp (+0 -2)
📝 src/curl.cpp (+9 -3)
📝 src/curl.h (+3 -0)
📝 src/fdcache.cpp (+108 -23)
📝 src/fdcache.h (+4 -0)
📝 src/s3fs.cpp (+18 -7)
📝 src/s3fs_util.cpp (+8 -0)

📄 Description
I need to run software that performs random access on parts of objects. I cannot wait for the whole object to be downloaded when I only need to access the first megabyte of data.
The current implementation merges available pages and downloads the object to the end. If a program needs to read a small piece of data, it has to wait for the whole download to complete.
It would be convenient to have an option that disables this behavior. That is what this pull request is about.
Note: in order to see the benefit of this, I had to lower the page size, but the unit of the multipart size was megabytes, so I took the liberty of changing the unit of the multipart_size option from mbytes to bytes. This change can be rejected and made the subject of a separate pull request.
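A minimal sketch of the two behaviors the PR describes. The function names, constants, and structure here are illustrative assumptions, not the PR's actual code: reads rounded outward to page boundaries (per commit 7b30736) and a multipart size taken in bytes with a 64 KB floor (per commit b56d7ff).

```cpp
#include <cstddef>
#include <utility>

// Illustrative constants; the PR lowers the part size to 1 MB and the
// minimum multipart size to 64 KB (values taken from the commit messages).
static const size_t kPageSize         = 1024 * 1024;  // 1 MB
static const size_t kMinMultipartSize = 64 * 1024;    // 64 KB

// With page merging disabled, a read is rounded outward to page
// boundaries so only the pages covering the request are downloaded,
// rather than the remainder of the object.
std::pair<size_t, size_t> RoundRangeToPages(size_t offset, size_t length)
{
    size_t start = (offset / kPageSize) * kPageSize;
    size_t end   = ((offset + length + kPageSize - 1) / kPageSize) * kPageSize;
    return std::make_pair(start, end - start);  // aligned (offset, length)
}

// The PR changes SetMultipartSize to take bytes instead of megabytes;
// this clamp mirrors the new 64 KB lower bound.
size_t ClampMultipartSize(size_t bytes)
{
    return bytes < kMinMultipartSize ? kMinMultipartSize : bytes;
}
```

With this scheme, a 1-byte read at offset 0 fetches only the first 1 MB page instead of the whole object, which is the latency win the description is after.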
🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.