[GH-ISSUE #521] Tracks skip after few seconds when piping passthrough audio #332

Closed
opened 2026-02-27 19:30:03 +03:00 by kerem · 19 comments
Owner

Originally created by @magnetised on GitHub (Aug 29, 2020).
Original GitHub issue: https://github.com/librespot-org/librespot/issues/521

This is a re-surfacing of

https://github.com/plietar/librespot/issues/231 and
https://github.com/librespot-org/librespot/issues/54

I'm trying to use librespot to stream audio from spotify over a pipe backend into my multiroom audio system (in Elixir). The data is flowing and everything basically works from an integration point of view (data flows into the elixir process and nothing crashes), but the spotify player skips to the next track every few seconds.

Reading the above issues, it seems that this is because librespot sends an inappropriate notification to the spotify player once it has downloaded all the data for a song, which in the case of the pipe backend happens within a few seconds (as the pipe backend just pulls the data in as fast as the network will let it).

I'm using librespot master `HEAD` as my source and compiling with `cargo build --release --no-default-features`

Really keen to get this working; please let me know how I can help.

kerem 2026-02-27 19:30:03 +03:00
  • closed this issue
  • added the bug, audio labels

@tadly commented on GitHub (Sep 27, 2020):

After some time I finally found the problem: you have to "pace" reading from stdout.
I have to sleep for 2 seconds before reading the next batch. Less than 2 sec. and the skipping happens again.

Edit: I lied, it seems. Less works as well, as long as the amount of data read from stdout matches. Still experimenting :)


@rystaf commented on GitHub (Oct 4, 2020):

@tadly Do you have an example of how to achieve that?


@tadly commented on GitHub (Oct 4, 2020):

@rystaf hope you can read python :P

This is what I used to test some stuff and is not optimal but it works.

It spawns an HTTP server which you can connect to and have audio playing.
Either use your browser and go to http://localhost:8000 or use any other audio player that supports streaming.

Edit:
The relevant part from below regarding the pacing aspect is this:

```python
...
        chunk_size = self.samplesize // 4
        while True:
            if self._exit.is_set():
                break

            # Read 250 ms of audio data and add it to all buffers
            chunk = self._proc.stdout.read(chunk_size)
            for handle, q in self._queues.items():
                q.extend(chunk)

            # Wait 250 ms before reading the next chunk as otherwise
            # spotify would start skipping tracks
            time.sleep(.25)
...
```

Fully working example implementation

```python
#!/usr/bin/env python
import threading
import time
import collections
from http.server import BaseHTTPRequestHandler, HTTPServer
import uuid
from socketserver import ThreadingMixIn
from subprocess import PIPE, Popen


class LibreSpot:
    __instance = None

    samplerate = 44100
    channels = 2
    bits = 16

    samplesize = None

    _thread = None
    _proc = None
    _exit = None
    _queues = None

    def __init__(self, device_name, bitrate=160, initial_volume=50):
        self.device_name = device_name
        self.bitrate = bitrate
        self.initial_volume = initial_volume

        # Calculate byte-size for 1 sec. of audio
        self.samplesize = int(self.samplerate * self.bits * self.channels / 8)

        self._exit = threading.Event()
        self._queues = {}

    @classmethod
    def get(cls) -> 'LibreSpot':
        if not cls.__instance:
            cls.__instance = LibreSpot()
        return cls.__instance

    @property
    def is_running(self):
        return not self._exit.is_set()

    def subscribe(self):
        handle = str(uuid.uuid4())
        self._queues[handle] = collections.deque(maxlen=self.samplesize)
        return handle

    def unsubscribe(self, handle):
        del self._queues[handle]

    def run(self):
        self._proc = Popen(
            [
                'librespot',
                '--disable-audio-cache',
                '-n',
                self.device_name,
                '-b',
                str(self.bitrate),
                '--initial-volume',
                str(self.initial_volume),
                '--backend',
                'pipe',
            ],
            stdout=PIPE,
        )

        chunk_size = self.samplesize // 4
        while True:
            if self._exit.is_set():
                break

            # Read 250 ms of audio data and add it to all buffers
            chunk = self._proc.stdout.read(chunk_size)
            for handle, q in self._queues.items():
                q.extend(chunk)

            # Wait 250 ms before reading the next chunk as otherwise
            # spotify would start skipping tracks
            time.sleep(.25)

    def start(self):
        self._thread = threading.Thread(target=self.run)
        self._thread.start()

    def stop(self):
        self._exit.set()
        self._thread.join()

    def read(self, handle):
        if len(self._queues[handle]) < self.samplesize:
            return b''

        chunk = bytearray(bytes(self._queues[handle]))
        self._queues[handle].clear()
        return chunk


class StreamHandler(BaseHTTPRequestHandler):
    protocol_version = 'HTTP/1.1'

    def _wav_header(self, sample_rate, bits, channels):
        # For streaming the data-size has to be 0 as otherwise clients don't
        # stream but try to buffer everything which is not possible
        datasize = 0

        header = bytes('RIFF', 'ascii')
        header += (datasize + 36).to_bytes(4, 'little')
        header += bytes('WAVE', 'ascii')
        header += bytes('fmt ', 'ascii')
        header += (16).to_bytes(4, 'little')
        header += (1).to_bytes(2, 'little')
        header += (channels).to_bytes(2, 'little')
        header += (sample_rate).to_bytes(4, 'little')
        header += (sample_rate * channels * bits // 8).to_bytes(4, 'little')
        header += (channels * bits // 8).to_bytes(2, 'little')
        header += (bits).to_bytes(2, 'little')
        header += bytes('data', 'ascii')
        header += (datasize).to_bytes(4, 'little')

        return header

    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-Type', 'audio/wav')
        self.end_headers()

        print('New client:', self.client_address)

        librespot = self.server.librespot

        self.wfile.write(
            self._wav_header(librespot.samplerate, librespot.bits,
                             librespot.channels))

        handle = librespot.subscribe()
        while librespot.is_running:
            chunk = librespot.read(handle)

            # Prevent high cpu-loads
            if not chunk:
                time.sleep(.1)
                continue

            try:
                self.wfile.write(chunk)
            except (ConnectionResetError, BrokenPipeError):
                break

        print('bye bye:', self.client_address)
        librespot.unsubscribe(handle)


class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
    librespot = None

    def __init__(self, *args, librespot, **kwargs):
        self.librespot = librespot

        HTTPServer.__init__(self, *args, **kwargs)


if __name__ == '__main__':
    librespot = LibreSpot('Python', 320)
    librespot.start()

    httpd = ThreadedHTTPServer(('', 8000), StreamHandler, librespot=librespot)

    try:
        httpd.serve_forever()
    except KeyboardInterrupt:
        pass

    librespot.stop()
    httpd.server_close()
```

@magnetised commented on GitHub (Oct 7, 2020):

I've been poking around at this too. IMHO the problem is that the player uses the byte-offset within the audio stream to directly represent the playback position. This works when the sink write is basically synchronous (I've done some measurements, and the time taken to write to the rodio backend is basically the duration of the sample being written). But obviously with the pipe backend, the duration of the write is basically zero.

The approach above adds a delay to the pipe write by slowing down the read - since the pipe will block until read. My system is using erlang to read from a fifo pipe and I just receive messages when new data comes in, so I don't have that option.

Instead I've tried adding a delay to the pipe write which tries to match the expected duration of the audio being written - basically emulating the behaviour of the other sinks.

You can see the change here: https://github.com/librespot-org/librespot/compare/dev...magnetised:pipe-playback-delay

This works well enough - the backend occasionally falls behind the expected position, I'm guessing due to cumulative delays because of imprecise sleep durations - but I've minimised this by using a tight loop of small sleeps, rather than just a single sleep call of the required duration.

I'm mystified why I can't just use the right value for the duration of the sample - ~22.6 microseconds per sample - if I use that then the playback is constantly rewinding. Maybe there's something obvious I'm missing there though.

In reality I would prefer to re-write the player to use time-based calculations for the track position. This would allow me to do some buffering within my application since I'd have access to excess audio data, rather than just enough to play.

I suspect that the reason for using the byte position, enforced by the synchronous sink write function, is to make sure that the system stays in sync with the actual playback speed of the audio system, rather than assuming a 1:1 ratio between the system clock and the playback speed. Would you agree?

If I moved to using time-based position, I think it would involve tweaking the player to add a clock based position measurement in the play loop and removing any handling of the end of the audio download. Is that something you'd accept, or is it not worth my effort? (The player code is fairly involved and I have a horrible feeling there would be a lot of edge cases to handle).
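The linked branch implements this inside librespot's pipe backend; the same idea can be sketched in isolation. The following is a minimal, hypothetical `PacedWriter` (the name and structure are mine, not librespot's) that throttles writes to real-time rate. Crucially, it computes the sleep target from the cumulative byte count against a fixed start instant, so the jitter of individual `sleep` calls doesn't accumulate into the kind of drift mentioned above:

```rust
use std::io::{self, Write};
use std::thread;
use std::time::{Duration, Instant};

/// Wraps any writer and paces writes so the byte stream flows at
/// (approximately) real-time audio rate.
struct PacedWriter<W: Write> {
    inner: W,
    bytes_per_sec: u64,
    start: Option<Instant>, // set on the first write
    written: u64,           // total bytes written so far
}

impl<W: Write> PacedWriter<W> {
    fn new(inner: W, sample_rate: u64, channels: u64, bytes_per_sample: u64) -> Self {
        PacedWriter {
            inner,
            // e.g. 44100 Hz * 2 ch * 2 bytes = 176,400 bytes/sec
            bytes_per_sec: sample_rate * channels * bytes_per_sample,
            start: None,
            written: 0,
        }
    }

    fn write_paced(&mut self, data: &[u8]) -> io::Result<()> {
        let start = *self.start.get_or_insert_with(Instant::now);
        self.inner.write_all(data)?;
        self.written += data.len() as u64;

        // Sleep until wall-clock time catches up with the audio duration
        // written so far. Anchoring to `start` (instead of sleeping a fixed
        // per-chunk duration) means sleep inaccuracies don't accumulate.
        let target = Duration::from_secs_f64(self.written as f64 / self.bytes_per_sec as f64);
        let elapsed = start.elapsed();
        if target > elapsed {
            thread::sleep(target - elapsed);
        }
        Ok(())
    }
}
```

This is only a sketch of the pacing approach, not librespot's actual sink API; a real sink would hook this into its `write` implementation.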


@michaelherger commented on GitHub (Oct 11, 2020):

@magnetised - thanks for the analysis! I've been struggling with this issue, too. In the end I did a dirty hack. Yours seems nicer to me. Have you been using it successfully?


@magnetised commented on GitHub (Oct 12, 2020):

Have you been using it successfully?

@michaelherger a bit... my rough tests over a couple of hours were pretty successful - the tracks kept playing until the progress bar in the Spotify app reached the end and there weren't that many skips backwards. So at that level it works (for me).

I don't like the "14" magic number though so would like to be sure I wasn't just solving for my particular laptop rather than a generic solution. Maybe if you get a chance you could see how it works for you - that would double the amount of data available 😅

One problem is the lack of info sent to the sink (it just gets PCM data) which is fair enough but prevents stuff like analysing the track duration and buffering.

I suppose I could experiment with beefing up the sink API a little to enable this stuff (for most backends the additional calls would just be no-ops), but it feels bad to mess up the super simple version that's there.


@kingosticks commented on GitHub (Oct 12, 2020):

I agree we should pace the pipe writes a bit; it doesn't need to be super accurate because no consumer should be relying on that. And since the pipe output doesn't need to support anything except playback, changing it to write at the normal playback rate (ish) should be fine. The current system of pacing the writes (and therefore librespot's apparent current playback position) by relying on the pipe getting full is not very nice. Isn't it a bit surprising nobody has complained about this until now?

Assuming the writing is paced, why do you need extra information to implement an input buffer (other than the fixed sample rate and size)? Does your program really not already have an input buffer between the read end of the pipe and the output device you are writing? What would you do with that extra information?


@codetheweb commented on GitHub (Oct 26, 2020):

I'm having a similar problem.

I'm writing an adaptation layer in Rust using [Neon](https://neon-bindings.com/) for Node.js so that code in Node.js can call librespot functions. I wrote a custom sink for `Player` like so:

```rust
struct EmittedSink {
    emitter: mpsc::Sender<Event>
}

impl audio_backend::Sink for EmittedSink {
    fn start(&mut self) -> std::result::Result<(), std::io::Error> {
        Ok(())
    }

    fn stop(&mut self) -> std::result::Result<(), std::io::Error> {
        Ok(())
    }

    fn write(&mut self, data: &[i16]) -> std::result::Result<(), std::io::Error> {
        self.emitter.send(Event::AudioData {
            data: data.to_vec()
        }).expect("event emitted");

        Ok(())
    }
}
```

Another part then uses the Receiver side of the mpsc channel to hand off audio data to Node.js when a poll function is called from JS. The issue is that I can't figure out how to put backpressure on the sink / channel; streaming with Spotify Connect results in track skipping. Any ideas on how to fix this? @magnetised's solution didn't seem to work for me.

Apologies if this is the wrong place to ask, I'm very new to Rust.


@ashthespy commented on GitHub (Oct 26, 2020):

@codetheweb That is a cool project, I'd be quite interested in something similar. Is your code up somewhere I can take a peek at it?


@codetheweb commented on GitHub (Oct 26, 2020):

I'll try to clean it up and post it later tonight.

I accidentally committed my credentials in my local repo at some point... 😛

I plan to make a high-level interface available with play/pause/seek/enable connect/event emitters etc. Use case is a Spotify Connect Discord music bot. I've been wanting to learn Rust for a while and this seemed like a good project to get my feet wet.


@codetheweb commented on GitHub (Oct 26, 2020):

@ashthespy check it out here: https://github.com/codetheweb/librespot-node.
Like I said, it's still extremely rough around the edges. Playing a song by track ID works fine (because it doesn't matter how much is buffered) but spirc is kinda broken.
I based it on [this repo](https://github.com/navad/librespot-node), which played audio directly using the default sink instead of passing it back to Node.js.

An example script (put in `dev.js` in `src/`):

```typescript
import Spotify, {ESpotifyQuality, ESpotifyConnectDeviceType, ESpotifyVolumeCtrl} from '.';
import Speaker from 'speaker'

const spotify = new Spotify({
  username: '',
  password: '',
  quality: ESpotifyQuality.Bitrate96
});

// Load specified track (by id) and starts playing
spotify.play('15O20RQyWJgKrkHID9ynT9');

setTimeout(() => {
  spotify.enableConnect({
    deviceName: "testdevice",
    deviceType: ESpotifyConnectDeviceType.Tablet,
    initialVolume: 50,
    volumeCtrl: ESpotifyVolumeCtrl.Fixed
  });
}, 2000);

const speaker = new Speaker({
    channels: 2,
    bitDepth: 16,
    sampleRate: 44100
})

spotify.stream.pipe(speaker);

setInterval(() => {
    console.log('playing? ', spotify.isPlaying());
}, 1000);
```

If you wanna talk further, feel free to open an issue over there or use one of the contact methods listed on [my site](https://maxisom.me/).


@Burningstone91 commented on GitHub (Dec 10, 2020):

I think I have the same issue. After a few days of fiddling around I finally managed to get librespot running, and the data is fed to a pipe which is read by snapserver running on the same machine. However, as soon as I start playing a song it skips immediately to the next song, not even 1 second in, and the pipe grew to a few gigs in less than a minute. I can provide some logs if this helps.
I'd really like to get this to work; it's the last piece missing for my multiroom solution, and I'd highly appreciate any help with this.


@gjdawson commented on GitHub (Mar 1, 2021):

I was curious about this, as it's impacting something I've been working on as well. A little testing has shown that the problem may only emerge when the `--passthrough` flag is present.


@Johannesd3 commented on GitHub (Mar 1, 2021):

Well, that's not possible. When the issue was created, `--passthrough` did not even exist.


@gjdawson commented on GitHub (Mar 1, 2021):

Well ain't that something. It's what I'm seeing at the moment, though: without passthrough, piping audio out works fine. With it, I get skipping. I'm at a loss to explain it.


@Johannesd3 commented on GitHub (Mar 1, 2021):

It may depend on where you pipe it to. Something like `pacat` has just a little buffer, so everything will be fine. If you use a fifo or a file, they could grow very fast, and after a short while the player reaches the end of the track.


@roderickvd commented on GitHub (Jun 14, 2021):

Can you let us know what you're piping to? Or if this is no longer an issue?


@Johannesd3 commented on GitHub (Jun 14, 2021):

Try `/dev/null` to reproduce it quickly.


@roderickvd commented on GitHub (Aug 7, 2021):

Closing, no further feedback.
