[PR #114] [CLOSED] Disable pages merge #1349

Closed
opened 2026-03-04 01:53:30 +03:00 by kerem · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/s3fs-fuse/s3fs-fuse/pull/114
Author: @fcand-anevia
Created: 1/26/2015
Status: Closed

Base: master ← Head: range


📝 Commits (10+)

  • c3ea03b disable pages merge to download only what is necessary and lower part size to 1MB
  • 7b30736 add possibility to disable pagesmerge to download only what is asked (rounded to page size of course) very useful when dealing with very big objects in order not to wait for the whole object to be downloaded to start working with small amount of data (takes much more sense when cache is not used)
  • b56d7ff SetMultipartSize method now takes value in bytes (instead of mbytes) and minimum is lowered to 64kb (instead of 1MB)
  • c6393ed debian files for making debian packages
  • 9f86fc8 do not update cache date when reading cache; it should be updated only when reading actual data over network
  • 3c3ba2c first commit for prefetch threaded downloads
  • 6830523 add dbg package
  • 2bfe686 fix deadlock
  • 2cb15c5 set correct git-hub urls
  • 0602acc prepare changelog before merge on master

📊 Changes

12 files changed (+230 additions, -35 deletions)

View changed files

debian/changelog (+13 -0)
debian/compat (+1 -0)
debian/control (+23 -0)
debian/copyright (+20 -0)
debian/rules (+23 -0)
📝 src/cache.cpp (+0 -2)
📝 src/curl.cpp (+9 -3)
📝 src/curl.h (+3 -0)
📝 src/fdcache.cpp (+108 -23)
📝 src/fdcache.h (+4 -0)
📝 src/s3fs.cpp (+18 -7)
📝 src/s3fs_util.cpp (+8 -0)

📄 Description

I have software that does random access to parts of objects. It cannot wait for the whole object to be downloaded when it only needs the first megabyte of data.

The current implementation merges available pages and downloads the object through to the end. If a program needs to read a small piece of data, it must wait for that download to complete.

It would be convenient to have an option that disables this behavior. That is what this pull request is about.

Note: to see the benefit of this, I had to lower the page size, but the unit of the multipart size was megabytes, so I took the liberty of changing the unit of the multipart_size option from mbytes to bytes. This change can be rejected and made the subject of another pull request.


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.
