[GH-ISSUE #172] Random crash in result.rs #117
Originally created by @l3d00m on GitHub (Feb 27, 2018).
Original GitHub issue: https://github.com/librespot-org/librespot/issues/172
The following crash happened randomly in the middle of playback. (If it helps: the raspotify service did not restart, which usually happens after a crash; that is not the issue here, though.) There is no additional information available.
@sashahilton00 commented on GitHub (Feb 27, 2018):
Hmm, that binary is stripped, so the backtrace isn't of much use. Do you use a beta client by any chance?
@l3d00m commented on GitHub (Feb 27, 2018):
I wondered why it looks like that. I'm using the raspotify deb packages provided by dtcooper. Is there any way I can provide you a mapping?
@sashahilton00 commented on GitHub (Feb 27, 2018):
There is, but it requires that you build the package yourself. If you clone the raspotify repo but, before building, delete this line:
github.com/dtcooper/raspotify@f354732ecc/build.sh (L49)
and then follow the normal instructions and install the compiled result, it'll give a proper backtrace.
@l3d00m commented on GitHub (Feb 27, 2018):
Yeah, I'm building it right now so I can provide you a better stacktrace in the future. This crash happened pretty randomly, though. There are a couple more crashes that occur for me from time to time; the stacktrace tip allows me to report them with better stacktraces, thanks.
Also, big thanks to you guys for maintaining the repo; I've been using librespot on a daily basis for half a year now.
You may close this issue and I'll reopen it as soon as it reoccurs (with a better stack trace).
@sashahilton00 commented on GitHub (Feb 27, 2018):
No problem, thanks for making the effort to report them :) If you have trouble reproducing that MercuryError bug, it could well be that it was being caused by a new value Spotify added to the proto definitions, with a fix pushed a couple of days ago https://github.com/librespot-org/librespot/pull/167. I assume the crash occurs when connecting from a mobile device to librespot?
@l3d00m commented on GitHub (Feb 27, 2018):
Ah, that may be possible; building it provides me with the new version anyway 👍
@sashahilton00 commented on GitHub (Feb 28, 2018):
Did you manage to recreate this bug? I'm fairly sure the enum fix was all you needed; if so, please close the issue :)
@plietar commented on GitHub (Feb 28, 2018):
Assuming Spotify use HTTP error codes for mercury (I'm pretty sure they do), 503 is "Service Unavailable", so I'd say this is just a one-off server-side error.
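For illustration, here is a minimal Rust sketch of the idea @plietar describes: treating the mercury status field like an HTTP status code, where anything outside the 2xx range becomes an error value instead of a crash. The names (check_status, MercuryError) are hypothetical stand-ins, not librespot's actual types.

```rust
// Hypothetical sketch, not librespot's actual code: map a mercury response
// status to a Result, HTTP-style.
#[derive(Debug)]
struct MercuryError(i32);

fn check_status(status: i32) -> Result<(), MercuryError> {
    match status {
        200..=299 => Ok(()),       // success range
        s => Err(MercuryError(s)), // e.g. 503 Service Unavailable
    }
}

fn main() {
    // A one-off server-side 503 surfaces as an Err instead of a panic.
    println!("{:?}", check_status(503)); // Err(MercuryError(503))
}
```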
@l3d00m commented on GitHub (Feb 28, 2018):
I'll just reopen if it should ever occur again, now with a better stack trace 👍
@l3d00m commented on GitHub (Mar 6, 2018):
It just happened again, this time with a newer build (and with a proper stacktrace). It happened a few minutes after librespot had another crash and I reconnected to it (I'll open an issue for that later; see #183).
@ComlOnline commented on GitHub (Mar 6, 2018):
As @plietar said above, you got
WARN:librespot_core::mercury: error 503 for uri hm://remote/3/user/[username here]/
again here. I'm not sure there's much to fix here sadly, other than that it happens. @sashahilton00 Is there anything stopping us from retrying this 3 times and then failing, or would that be best left in the realm of librespotd?
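A minimal sketch of the "retry 3 times, then fail" idea floated here, assuming a hypothetical send_request() standing in for the real mercury call; this is the shape of the proposal, not librespot code.

```rust
use std::{thread, time::Duration};

// Stand-in for the real request; simulates the transient 503 seen above.
fn send_request() -> Result<(), String> {
    Err("error 503 for uri hm://remote/...".to_string())
}

fn send_with_retries(max_attempts: u32, delay: Duration) -> Result<(), String> {
    let mut last_err = String::new();
    for attempt in 1..=max_attempts {
        match send_request() {
            Ok(()) => return Ok(()),
            Err(e) => {
                eprintln!("attempt {attempt}/{max_attempts} failed: {e}");
                last_err = e;
                thread::sleep(delay);
            }
        }
    }
    Err(last_err) // all retries exhausted; caller decides whether to panic
}

fn main() {
    let _ = send_with_retries(3, Duration::from_secs(1));
}
```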
@l3d00m commented on GitHub (Mar 6, 2018):
Ah, sorry if I've misunderstood you so far. Am I right in assuming this is just how librespot handles errors, by throwing them like this?
@ComlOnline commented on GitHub (Mar 6, 2018):
So basically the server is sending you this error, like the 503 error pages you may have seen on websites from time to time. librespot doesn't handle this well, which results in the above crash. So it's a bit of both Spotify and librespot.
(I could be wrong, but I'm pretty sure that's what's going on. The question is how we want to handle this going forward.)
@maufl commented on GitHub (Mar 6, 2018):
I have this error too. My problem is that librespot does not completely crash, i.e. the process does not exit, so it won't be restarted. By the way, librespotd does not yet have a description; is it supposed to replace the librespot binary? Which one am I supposed to use?
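For context on why a hang is worse here than a panic: a typical systemd unit only restarts the service when the process actually exits, so a librespot that locks up without exiting is never restarted automatically. A hedged example below; raspotify's actual unit file and paths may differ.

```ini
# Illustrative systemd service fragment (not raspotify's real unit).
[Unit]
Description=Librespot

[Service]
ExecStart=/usr/bin/librespot --name Librespot
# Restart only fires when the process exits; a hung process stays hung.
Restart=on-failure
RestartSec=5
```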
@l3d00m commented on GitHub (Mar 6, 2018):
Does it crash if you try to resume the playback on your client @maufl?
If so, it may be related to #183
@maufl commented on GitHub (Mar 6, 2018):
I don't think so. So far the only way to make it work again has been to restart the librespot systemd service. Unfortunately this error is hard to reproduce :/ I'm currently running a debug build and waiting for it to happen again.
@ComlOnline commented on GitHub (Mar 6, 2018):
@maufl that is a bug then, if it's not handling the error correctly. At the moment librespotd is still a work in progress.
@ComlOnline commented on GitHub (Mar 7, 2018):
Here is the if-statement that throws an error when a response code of 400 or higher is received.
@maufl commented on GitHub (Mar 7, 2018):
I think it might be this unwrap that causes the panic; I'll try to investigate. Unfortunately, the debug build is too slow to be usable, so I'll try compiling the release build with debug symbols today.
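As a small aside on why a stray unwrap() matters: calling unwrap() on an Err panics the thread. A minimal illustration, with a hypothetical fetch() in place of the real call site:

```rust
// Stand-in for the failing request.
fn fetch() -> Result<u32, &'static str> {
    Err("error 503 for uri hm://remote/...")
}

fn main() {
    // fetch().unwrap() would panic here with:
    //   called `Result::unwrap()` on an `Err` value: "error 503 ..."

    // Handling the Err explicitly keeps the process alive instead:
    match fetch() {
        Ok(value) => println!("got {value}"),
        Err(e) => eprintln!("request failed, will retry later: {e}"),
    }
}
```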
@sashahilton00 commented on GitHub (Mar 8, 2018):
@ComlOnline I think your solution of retrying it a few times is a good idea. Do you want to implement a runoff feature (retries with increasing intervals until success), or do you want to have it try a set number of times at a set interval, then panic if it fails? I agree that, as it currently stands, our handling of this error is poor, since it basically locks up librespot until it is manually restarted.
@maufl librespotd is currently a placeholder; we're not working on it atm. Use librespot; we'll make it clear when development shifts to librespotd. Currently we need to get librespot packaged up and published, and iron out some of the more pressing issues, such as this one, session reconnection (which is sort of related to this one), and precaching audio.
@maufl commented on GitHub (Mar 9, 2018):
As expected, I now have a well-running release build with debug symbols, but the error has not occurred again yet :/
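For anyone reproducing this: one way to get a release build with debug symbols under a standard Cargo setup is the profile override below. This assumes the raspotify build script also skips the binary-stripping step mentioned earlier in the thread.

```toml
# Cargo.toml: keep release optimizations but emit debug symbols,
# so a crash produces a readable backtrace.
[profile.release]
debug = true
```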
@ComlOnline commented on GitHub (Mar 10, 2018):
@sashahilton00 It's a tough call depending on the functionality we want. I think a set number of retries and then panicking is the way we should go.
I like the idea of retrying (in a runoff fashion) for ten minutes and then failing. If you're still getting errors after ten minutes, there must be a bigger problem with Spotify.
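A sketch of that "runoff" proposal: retry with increasing intervals and give up after ten minutes. send_request() and the interval choices are hypothetical stand-ins, not librespot code.

```rust
use std::thread;
use std::time::{Duration, Instant};

// Stand-in for the real mercury call; simulates a persistent server error.
fn send_request() -> Result<(), String> {
    Err("503 Service Unavailable".to_string())
}

fn retry_with_runoff() -> Result<(), String> {
    let deadline = Instant::now() + Duration::from_secs(10 * 60);
    let mut delay = Duration::from_secs(1);
    loop {
        match send_request() {
            Ok(()) => return Ok(()),
            // Past the ten-minute deadline: give up and surface the error.
            Err(e) if Instant::now() >= deadline => return Err(e),
            Err(e) => {
                eprintln!("retrying in {delay:?} after: {e}");
                thread::sleep(delay);
                delay = (delay * 2).min(Duration::from_secs(60)); // cap backoff
            }
        }
    }
}

fn main() {
    // Note: with a permanently failing send_request(), this demo really does
    // retry for the full ten minutes before returning Err.
    let _ = retry_with_runoff();
}
```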
@sashahilton00 commented on GitHub (Mar 14, 2018):
Agreed, a runoff for 10 mins or so should suffice. The problem I have is that I am uncertain as to where the request is called from; the complete_request function here just looks to process the mercury response header, so I'm not too sure where the initial request is called from without spending some time reading through the mercury part of the code to understand how it works. Thus, until I have some free time (exam season atm) to learn about it, either someone else can implement the runoff feature/provide some info on where it would need to be implemented, or, as a stopgap, we could just change that warn! into a panic!. Would appreciate any input @plietar on whether changing that warn to a panic is going to have a serious impact on librespot usability (not sure how often it errors in normal usage).
@ComlOnline commented on GitHub (Mar 21, 2018):
I changed it to a panic and it seems to be running fine (so far), as it should only be triggered if that error comes up. Like @sashahilton00 said, this would be a temp fix until it can be sorted properly.
@sashahilton00 commented on GitHub (Apr 6, 2018):
I've created #198 to catch 5xx errors and panic if it encounters them. It's lazy, but I don't have time to learn enough Rust to implement the runoff functionality.
@ComlOnline commented on GitHub (Apr 11, 2018):
That works for me.
@l3d00m commented on GitHub (May 9, 2018):
I am running librespot
INFO:librespot: librespot (raspotify) deb240c (2018-05-01). Built on 2018-05-04. Build ID: GeKsU3jL
on my Pi in combination with snapcast. As my wifi connection is quite unreliable, librespot crashes very often (5 times a day) with the above-described ChannelError. The error is not caught immediately, exactly as described in the OP, so #198 had no effect for me.
I've added the log again, but I think it's exactly the same; it even crashes with identical errors. It is a bit more unreadable thanks to the snapcast logger, but I think it's still okay-ish.
@l3d00m commented on GitHub (May 10, 2018):
Reported in #215 too
@roderickvd commented on GitHub (Aug 7, 2021):
Closing this for the same reasons as in #215 (Spotify fixed channel errors on their end).