Mirror of https://github.com/librespot-org/librespot.git (synced 2026-04-27 08:15:50 +03:00)
[GH-ISSUE #215] librespot crashing after x songs #145
Originally created by @kutmasterk on GitHub (May 7, 2018).
Original GitHub issue: https://github.com/librespot-org/librespot/issues/215
For the past few days, librespot has been crashing after playing a number of songs from a playlist.
Everything was working before and nothing has changed on my side, so my guess is that Spotify must have changed something.
Steps to reproduce:
Here is a verbose log from the crash:
I tried this with the pipe backend and the ALSA backend on a Raspberry Pi 3. It crashes with both backends. I do not have PulseAudio installed.
I also tried this with the cache disabled - still crashing :(
EDIT: I use a self-compiled binary of the latest version, but I also checked out the pre-compiled raspotify binary and some older versions of librespot from the last 2 months which I had in backup. They all show the same behavior.
@mrwsl commented on GitHub (May 8, 2018):
I can confirm this behaviour as well. I haven't had the time to provide a proper log. I'm using a self-compiled librespot (latest version) on a Raspberry Pi 3, too.
@berrywhite96 commented on GitHub (May 8, 2018):
Same error here. I tried the "raspotify - that JUST WORKS" package and the Spotify daemon. I just came from the Spotify Connect version built in Python, which is maybe 3 years old; that also stopped working.
I think Spotify is currently trying to shut out third-party tools like librespot...
@plietar commented on GitHub (May 8, 2018):
Does this happen with some tracks in particular, or do the same tracks sometimes work and sometimes not?
PS: Fixed it for you, but in the future please quote logs with triple backticks, not single
@berrywhite96 commented on GitHub (May 8, 2018):
It's not really dependent on the song; the more songs are played, the higher the chance of a crash. The Python Spotify Connect implementation I mentioned worked for years; since Friday it crashes just like the newer Spotify projects.
So are there still people for whom one of these projects doesn't crash?!
@kutmasterk commented on GitHub (May 8, 2018):
The same tracks sometimes work and sometimes don't. It really depends on the number of tracks played.
@kutmasterk commented on GitHub (May 9, 2018):
So, as of today I cannot reproduce the crashes anymore. I just played 30 tracks without interruption.
The only useful thing I have found is this:
https://github.com/spotify/web-api/issues/859#issuecomment-386214823
As of this month, Spotify has enabled a load balancer on their Google-provided distribution network and now requires a Content-Length header to be set on requests. This is a bug report for the Spotify Web API, but I am not sure whether librespot even uses this API.
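If the new load balancer really does insist on a Content-Length header, any plain-HTTP request would need one even for an empty body. A minimal illustrative sketch; the function name, path, and host below are assumptions for the example, not librespot code:

```rust
// Hypothetical sketch: build a raw HTTP request that always carries an
// explicit Content-Length header, as the new load balancer reportedly
// requires. Not taken from librespot's sources.
fn build_request(host: &str, path: &str, body: &[u8]) -> String {
    format!(
        "POST {} HTTP/1.1\r\nHost: {}\r\nContent-Length: {}\r\nConnection: close\r\n\r\n{}",
        path,
        host,
        body.len(), // computed from the actual body, never omitted
        String::from_utf8_lossy(body)
    )
}

fn main() {
    let req = build_request("api.spotify.com", "/v1/example", b"{\"q\":1}");
    assert!(req.contains("Content-Length: 7"));
    // Even an empty body should carry "Content-Length: 0" so a strict
    // load balancer does not reject or stall the request.
    let empty = build_request("api.spotify.com", "/v1/example", b"");
    assert!(empty.contains("Content-Length: 0"));
}
```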
@berrywhite96 commented on GitHub (May 9, 2018):
Can confirm this; I ran tracks on Spotify overnight and the program doesn't seem to crash. I started it with my credentials; maybe that helps.
But I notice that every track takes 5-10 seconds before it starts playing. My internet connection isn't the problem; do others have this problem?
@mrwsl commented on GitHub (May 9, 2018):
@berrywhite96 With the Linux client I have the same problem. When connecting from my Android client, librespot plays the song almost instantly.
@berrywhite96 commented on GitHub (May 9, 2018):
@herrwusel Interesting, maybe some internet provider thing?! I'll try later to run my Pi over a mobile network connection.
@cortegedusage commented on GitHub (May 9, 2018):
@berrywhite96 I too have the same 5 to 8 second delay when changing songs as of this week,
plus the occasional crash. I control it with the Android client.
@mrwsl commented on GitHub (May 9, 2018):
@berrywhite96 Same network connection, just different clients.
@berrywhite96 commented on GitHub (May 9, 2018):
@herrwusel I tried it yesterday on two Pis, a Pi 3 and a Pi 2. The Pi 3 runs Raspbian Jessie and the Pi 2 runs Raspbian Stretch; both have the same problem. I can't really pin down the cause.
@michaelherger commented on GitHub (May 9, 2018):
While I'm using my own librespot-based application, I've seen crashing behaviour recently, too. So it's likely not tied to any particular provider you are using :-).
Frequent crashes, and very, very slow playback start.
Spotify mentioned in a bug report I commented on (https://github.com/spotify/web-api/issues/859#issuecomment-386214823) that they had switched CDN provider (from Akamai to Google?). The new CDN seems to behave differently with respect to some aspects of the HTTP implementation. I'm wondering whether we are seeing some other side effects of this change here, too.
@kutmasterk commented on GitHub (May 9, 2018):
I can confirm these delays too. If you start a song, album, or playlist from the app (desktop Mac and Android here), there is a 10-second delay before the song starts playing on librespot.
Maybe we should open a new bug report specifically for this problem, as I am not sure it is connected to the crashes.
@berrywhite96 commented on GitHub (May 9, 2018):
Now I don't have these delays, but the crashes are back. This time it crashed on the fifth track:
```
INFO:librespot_playback::player: Loading track "Top Off" with Spotify URI "spotify:track:1W6wxOOYyJyyok8fqYSZ3m"
ERROR:librespot_core::channel: channel error: 2 1
thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: ChannelError', libcore/result.rs:945:5
stack backtrace:
   0: 0x9d9333 - std::sys::unix::backtrace::tracing::imp::unwind_backtrace::h4ef6490c84cef3d0
                 at libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
   1: 0x9d308f - std::sys_common::backtrace::_print::hea150ee4f33bb5de
                 at libstd/sys_common/backtrace.rs:71
```
The "Loaded" message didn't come up, so it happens while the track is loading.
I don't understand Rust, so maybe this info helps someone.
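The backtrace points at an `unwrap()` on a `Result` carrying a `ChannelError`. As a sketch of the failure mode only (using a mock `ChannelError`, not librespot's real types), matching on the `Result` would let the player skip the track instead of aborting the whole process:

```rust
// Mock error type standing in for librespot_core's ChannelError.
#[derive(Debug)]
struct ChannelError;

// Stand-in for fetching a track's audio data over the channel; `fail`
// simulates the server dropping the channel mid-load.
fn load_track(fail: bool) -> Result<Vec<u8>, ChannelError> {
    if fail { Err(ChannelError) } else { Ok(vec![0u8; 4]) }
}

// Returns true if the track could be loaded; on a channel error it logs
// and skips rather than panicking like the .unwrap() in the backtrace.
fn play_next(fail: bool) -> bool {
    match load_track(fail) {
        Ok(_data) => true, // decode and play
        Err(e) => {
            eprintln!("channel error while loading track: {:?}, skipping", e);
            false
        }
    }
}

fn main() {
    assert!(play_next(false));
    assert!(!play_next(true)); // error path no longer aborts the process
}
```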
@plietar commented on GitHub (May 9, 2018):
The desktop client (and probably the mobile one too) downloads audio files over HTTP, whereas librespot downloads them using the same TCP socket used for everything else. It is possible Spotify is slowly shutting down the latter as they move to some new CDN.
Fixing this would require some more reverse engineering, to figure out how the client learns about the URL to use to download audio files.
My spotify-analyze scripts would be the first step for this. Additionally, the desktop client can be started in verbose mode, in which case it prints the URL to the logs. You should compare those logs with the packet dumps and see if any identifier matches.
Does anyone know if speakers which were shipped with Spotify support ~3/4 years ago still work? I originally used their firmware to figure out the protocol.
@WhiteHatTux commented on GitHub (May 10, 2018):
Regarding the delay: I think this is related to loading, and I'm working on it over at issue #210. I'm not sure, as I've only just started using and understanding the library; I had presumed it was connected to my network.
@kingosticks commented on GitHub (May 10, 2018):
I am not seeing any delays with the Linux desktop client. I couldn't work out how to start the client in verbose mode, but when using an HTTP/HTTPS proxy I only see traffic for the album art; nothing else.
Libspotify is still working if that's any help.
However, if they were making changes it's likely they'd move people over in stages and maybe I just haven't been moved yet...
@berrywhite96 commented on GitHub (May 10, 2018):
@kingosticks At the moment there aren't any delays, so maybe there were just some slow servers over the last few days.
Besides the different error message, the crash looks like the ones before.
The last crash I had was with this error:
@l3d00m commented on GitHub (May 11, 2018):
@berrywhite96 I reported that crash in #217 too as I thought this was unrelated.
@berrywhite96 commented on GitHub (May 11, 2018):
@l3d00m I think it's all the same bug; they all happen when the receiver tries to load the next track. Maybe it's unrelated, but I have no experience with the source code.
Since yesterday evening, the receiver works perfectly. I don't know why, but maybe the problem is time-related?! Do we know of other servers to get the tracks from?
@plietar commented on GitHub (May 11, 2018):
The server used (the Access Point) is printed on one of the first lines. You could hardcode that value in `apresolve.rs` and see if it then always works. We could add a flag to use a fixed AP instead of apresolve.
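A minimal sketch of what such a flag could look like: use a fixed Access Point when one is supplied, otherwise fall back to a lookup. The function names, flag, and default address below are assumptions for illustration, not apresolve.rs's real API:

```rust
// Stand-in for the network lookup apresolve.rs performs; the default
// address here is a placeholder, not a known-good Spotify AP.
fn apresolve_default() -> String {
    "ap.spotify.com:4070".to_string()
}

// Hypothetical resolver honoring a fixed AP (e.g. from a --ap-address
// flag) so users can pin a server that is known to work.
fn resolve_ap(fixed_ap: Option<&str>) -> String {
    match fixed_ap {
        Some(ap) => ap.to_string(), // user-supplied, skip apresolve
        None => apresolve_default(),
    }
}

fn main() {
    assert_eq!(resolve_ap(Some("example-ap.spotify.com:4070")),
               "example-ap.spotify.com:4070");
    assert_eq!(resolve_ap(None), "ap.spotify.com:4070");
}
```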
@sashahilton00 commented on GitHub (May 11, 2018):
With regards to how the URL is generated: I had a very quick look using spotify-analyze, and a search for the downloaded file yields a couple of commands tagged with 0x9 from Spotify's servers, which are then sent to librespot's channel.rs. Those commands contain the full URL for downloading. I haven't investigated what triggers the server to send the URL, though.
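If the 0x9 commands really do carry the download URL as plain ASCII, a speculative first step in the reverse engineering could be to scan the raw payload for it. Everything below is an assumption for illustration, not librespot code:

```rust
// Speculative helper: look for an ASCII "http://" marker in a channel
// payload and take printable characters from there as the URL.
fn find_url(payload: &[u8]) -> Option<String> {
    let needle: &[u8] = b"http://";
    let start = payload.windows(needle.len()).position(|w| w == needle)?;
    // Collect printable ASCII (0x21..=0x7e) until the first control byte,
    // space, or end of payload.
    let url: Vec<u8> = payload[start..]
        .iter()
        .take_while(|&&b| (0x21..=0x7e).contains(&b))
        .copied()
        .collect();
    String::from_utf8(url).ok()
}

fn main() {
    // Fabricated packet: a 0x9-tagged header followed by an embedded URL.
    let mut packet = vec![0x09, 0x00, 0x01];
    packet.extend_from_slice(b"http://audio-fa.spotify.com/audio/abc123");
    packet.push(0x00);
    assert_eq!(
        find_url(&packet).as_deref(),
        Some("http://audio-fa.spotify.com/audio/abc123")
    );
}
```

Comparing what such a scan finds against the URLs printed by the desktop client's verbose logs would be one way to confirm the hypothesis.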
There appear to be two endpoints that one can download the file through:
```
http://audio-fa.spotify.com/audio/a694d4d3e32927e1148...3206_BgCXZ+ig5P4x1xJet/GXFt7/MxNdQLaFI5rH1DPSb0M=
http://audio-akp.spotify.com.edgesuite.net/audio/a694d4d3e32927e1148...2071ec57b228c0256e50a7611f
```
@l3d00m commented on GitHub (May 17, 2018):
This issue does appear to be kind of resolved for me; librespot crashes are now back to normal (once a day or so) instead of multiple crashes a day.
@kingosticks commented on GitHub (May 17, 2018):
It may be worth noting that with libspotify, periods of random ChannelErrors were not unheard of and would correspond to something being temporarily broken at Spotify's end. I guess the take-away here is that we don't want to crash, but I think there's already an issue tracking improvements to this sort of thing.
@roderickvd commented on GitHub (May 25, 2021):
Channel errors fixed upstream by Spotify on their end.
Reconnection issues tracked in #609.