mirror of
https://github.com/librespot-org/librespot.git
synced 2026-04-27 08:15:50 +03:00
[PR #402] [MERGED] Limit number of requests for pre-fetching #912
📋 Pull Request Information
Original PR: https://github.com/librespot-org/librespot/pull/402
Author: @kaymes
Created: 11/19/2019
Status: ✅ Merged
Merged: 12/2/2019
Merged by: @sashahilton00
Base: dev ← Head: limit-request-number

📝 Commits (2)
- db0e4a0 Limit number of prefetch requests.
- e550b7f rustfmt

📊 Changes
1 file changed (+45 additions, -21 deletions)
- audio/src/fetch.rs (+45 -21)

📄 Description
I noticed some occasional buffer under-runs as a consequence of PR #393, for which I made a fix here.
The issue is that the pre-fetch algorithm can potentially fire off lots of small requests to keep the amount of pending bytes at the desired level. If bandwidth limits are hit, this leads to responses being interleaved: all requests are served simultaneously, so the total data rate is high but each individual request sees a low data rate. As a consequence, the first request may not be answered in time, and we get a buffer under-run.
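To illustrate the interleaving effect described above (the numbers are made up for the example, not taken from the PR): when a fixed amount of bandwidth is shared fairly across simultaneous requests, each request only sees a fraction of it.

```rust
// Illustrative arithmetic only, with assumed numbers: a bandwidth cap shared
// across N interleaved requests gives each request roughly 1/N of the rate.
fn per_request_rate(total_kbps: f64, open_requests: u32) -> f64 {
    total_kbps / open_requests as f64
}

fn main() {
    // With many small interleaved requests, each one is slow, so the most
    // urgent request may not finish before playback catches up.
    assert_eq!(per_request_rate(320.0, 16), 20.0);
    // Capping the number of open requests keeps each one reasonably fast.
    assert_eq!(per_request_rate(320.0, 4), 80.0);
    println!("ok");
}
```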
I solved this by limiting the number of open requests when pre-fetching: a pre-fetch request is only sent when fewer than 4 requests are open. This results in fewer, larger requests, so individual requests should still get decent download rates.
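The gating condition can be sketched as follows. This is a minimal illustration of the idea, not the actual code in audio/src/fetch.rs; the struct and field names are hypothetical.

```rust
// Hypothetical sketch of the pre-fetch gate described in this PR.
const MAX_PREFETCH_REQUESTS: usize = 4; // limit chosen experimentally in the PR

struct Prefetcher {
    open_requests: usize, // requests currently in flight
    pending_bytes: usize, // bytes requested but not yet received
    target_bytes: usize,  // how far ahead of the read position to stay
}

impl Prefetcher {
    /// Issue another pre-fetch request only if we are below the pending-bytes
    /// target AND fewer than MAX_PREFETCH_REQUESTS requests are open.
    fn should_prefetch(&self) -> bool {
        self.pending_bytes < self.target_bytes
            && self.open_requests < MAX_PREFETCH_REQUESTS
    }
}

fn main() {
    // At the limit of 4 open requests, no further pre-fetch is issued.
    let at_limit = Prefetcher { open_requests: 4, pending_bytes: 0, target_bytes: 1024 };
    assert!(!at_limit.should_prefetch());

    // Below the limit and below the target, another request goes out.
    let below = Prefetcher { open_requests: 2, pending_bytes: 512, target_bytes: 1024 };
    assert!(below.should_prefetch());
    println!("ok");
}
```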
I experimented with different values: a limit of 3 led to significantly lower overall download rates, so I chose 4.
I also increased the amount of data that is requested ahead of the current read position.
So far I haven't had any more buffer under-runs with these changes. Fingers crossed.
🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.