mirror of
https://github.com/lox-audioserver/lox-audioserver.git
synced 2026-04-26 06:45:47 +03:00
[GH-ISSUE #98] Testing v4.x #43
Labels: bug, enhancement, pull-request, released, released on @beta
Originally created by @mr-manuel on GitHub (Dec 31, 2025).
Original GitHub issue: https://github.com/lox-audioserver/lox-audioserver/issues/98
I just installed the latest testing branch and there were a lot of changes. Some are very nice, some break my setup completely.
I have Squeezelite players in Music Assistant and cannot use them anymore with v4 of Lox Audio Server.
Is there a reason why Squeezelite players are not supported anymore? I always thought that was the best option to have music in sync. How would you recommend a setup to have the best synced audio?
@rudyberends commented on GitHub (Dec 31, 2025):
The new 4.x branch takes control over the audio stream.
Previously playback relied entirely on the MA player. We now register our zones as players in MA, retrieve the stream, and play it ourselves. This gives lox-audioserver control over clock drift and allows proper internal grouping and sync.
If you prefer to use existing MA players, this is still possible. Each zone has an MA input. By default it registers the zone in MA, but you can enable offload to select an existing MA player. In offload mode, audio does not pass through lox-audioserver.
I will add squeezelite players as an output source.
For best sync, Snapcast / Sendspin outputs are preferred. You can also use Chromecast devices as Snapcast/Sendspin clients or the built-in web player.
@mr-manuel commented on GitHub (Dec 31, 2025):
Should I then see Music Assistant in the Output routing? The bridge is set up successfully.
Thanks!
I'm using a Raspberry Pi 3B with a HiFiBerry HAT, so I'm pretty flexible. I will try that out once 4.x is stable.
@rudyberends commented on GitHub (Dec 31, 2025):
No, the output is what you select there. You can use offload with or without an output. If you have an output and you use offload, it will still honor the offload setting.
@mr-manuel commented on GitHub (Dec 31, 2025):
Now I saw the button under Zones -> Zone - Music Assistant - Configure -> Use offload. Missed that before. 😄
@DiskoDisko404 commented on GitHub (Dec 31, 2025):
I'm testing the new branch!
My bridge to Music Assistant works fine and I have Spotify Connect, that's cool!
But how can I get back my TuneIn radio stations from Music Assistant?
Thank you
@DiskoDisko404 commented on GitHub (Dec 31, 2025):
I added a TuneIn account and it works, but on each radio station I get an error ("Playback Error") and the stream doesn't start!
@DiskoDisko404 commented on GitHub (Dec 31, 2025):
And I hear nothing on my Sonos.
The Music Assistant Bridge is online and I can stream from MA to the Sonos???
@rudyberends commented on GitHub (Dec 31, 2025):
Right now, you have Music Assistant on "internal player", but you do not have an output configured, so the audio has no route.
Either select an output (AirPlay or Chromecast on a Sonos?) or use the offload feature on the Music Assistant input and select your Sonos players there.
The TuneIn issue seems to be related to the static ffmpeg build. I will fix this right away.
@DiskoDisko404 commented on GitHub (Dec 31, 2025):
Okay, and when I route it to AirPlay, I don't need the Music Assistant Bridge and I can connect a Spotify account directly on the audioserver?
The second question: when I use offload, can I then select the player in MA?
@rudyberends commented on GitHub (Dec 31, 2025):
TuneIn fix: e86c102
@rudyberends commented on GitHub (Dec 31, 2025):
In theory, yes. However, at the moment Spotify does not let you create a new app, and you need that to set up the account.
@DiskoDisko404 commented on GitHub (Dec 31, 2025):
Okay, I understand, but where can I select the player in MA?
@DiskoDisko404 commented on GitHub (Dec 31, 2025):
Will the container be updated automatically?
@rudyberends commented on GitHub (Dec 31, 2025):
Either configure an output, or use an offloaded player. I would suggest you use an output.
@rudyberends commented on GitHub (Dec 31, 2025):
The container is updated on every commit to the testing branch. You do have to pull it locally.
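Pulling can be scripted; a minimal sketch, assuming a docker-compose setup and a hypothetical /opt/lox-audioserver path (adjust to wherever your compose file lives):

```shell
#!/bin/sh
# Sketch: pull the newest image for the :testing tag and recreate the
# container. COMPOSE_DIR is a hypothetical location, not from this thread.
COMPOSE_DIR="${COMPOSE_DIR:-/opt/lox-audioserver}"
if command -v docker >/dev/null 2>&1 && [ -f "$COMPOSE_DIR/docker-compose.yml" ]; then
    cd "$COMPOSE_DIR" || exit 1
    docker compose pull    # fetch the latest image for the tag in the compose file
    docker compose up -d   # recreate the container only if the image changed
else
    echo "docker or $COMPOSE_DIR/docker-compose.yml not found; run this on the Docker host"
fi
```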
@christophpichlmaier commented on GitHub (Jan 1, 2026):
Hi
Lox-audioserver -> Bridge MA -> offload output to a Sonos speaker from MA should work, right?
I can't get it to work (no audio to be heard on the Sonos speaker).
@rudyberends commented on GitHub (Jan 1, 2026):
MA offload acts only on the MA input and therefore only offloads MA library content.
Radio is not part of the MA library and is routed directly to an output, which you currently do not have configured.
We could technically route every source through the offloaded MA instance, but that would add significant complexity. I may remove the offload option altogether to avoid confusion.
The main goal of the new release is to make lox-audioserver less dependent on MA and to treat MA as just another content source, rather than the solution driving all audio.
@christophpichlmaier commented on GitHub (Jan 1, 2026):
Is it possible to choose Sonos as a direct output? The old Play:1 has no AirPlay functionality.
@mr-manuel commented on GitHub (Jan 1, 2026):
This is indeed a good thought. Is Spotify Connect maybe also on the roadmap? It seems to be quite buggy in other applications and not that easy to implement.
@rudyberends commented on GitHub (Jan 1, 2026):
I did create a Sonos output based on the MA implementation, but I don’t own any Sonos speakers myself, so it’s completely untested. Because of that, I left it out of the current release.
If you’re willing to test it, I can release a build tomorrow that includes it. We’ll likely need to iterate on the code to get it fully functional, so let me know if you’re interested.
@rudyberends commented on GitHub (Jan 1, 2026):
I had it running perfectly until Spotify decided to tighten their API/DRM implementation. They appear to be rolling this out gradually, so it may still work for you.
This module is used for Spotify routing and the Connect instance:
https://github.com/rudyberends/node-librespot
@christophpichlmaier commented on GitHub (Jan 1, 2026):
@tokylo commented on GitHub (Jan 1, 2026):
Thanks for the new test version! It would be great if you could consider adding support for Squeezelite players.
My current setup involves four USB sound cards passed through to a Proxmox VM. I previously spent a lot of time trying to get this to work reliably with Snapcast, but unfortunately without success. Today, I tested the sendspin-go (Resonate) and CLI player on Debian 13, but I haven't been able to get it running correctly with my multi-card setup yet.
For now, I am sticking with Squeezelite as it remains the most stable solution for my hardware. If anyone has tips or a working configuration for running four instances of the Sendspin player with dedicated USB sound cards on a Proxmox VM, I would be very interested to hear them!
@wiking-at commented on GitHub (Jan 1, 2026):
I hope it’s okay to hijack this issue for some discussion around the testing branch.
Over the holidays, I retired my old LMS installation (running on Proxmox with USB DACs/amps) and switched to Music Assistant (MA). MA is also running in a Proxmox VM and uses the same audio hardware. So far, MA works very well and seems to be under active development. I mainly chose Music Assistant because it is supported by lox-audioserver and because its grouping functionality is quite flexible.
I use Squeezelite groups for both audio playback and visualization (e.g. controlling LEDs and other lighting via LedFX, with Squeezelite acting as the audio sink). The Loxone Audio Server alone does not provide enough flexibility for this setup, which is why I tend to tinker a bit 😉
It would be great if I could easily map a Loxone zone to a Music Assistant player group, assuming this fits into the current architecture. At the moment, I wasn't able to create a custom Spotify app because the Spotify dashboard is down for maintenance, so I might be missing some features of the current lox-audioserver implementation. For now, I created the external MA bridge and linked my Spotify account in the Loxone app. I was able to select the "Music Assistant" Spotify account and start an MA playlist from the Loxone app, but besides that the control was not easy to understand or use.
My intended use of Loxone zones would mainly be:
For music selection, I could live with using Music Assistant directly to start the zones. However, it would be nice to still use Loxone Touch controls for:
Do you think these use cases will be possible with lox-audioserver once version 4.0 is ready for production?
@mr-manuel commented on GitHub (Jan 1, 2026):
During my tests of v4.x I was able to assign a MA player group to a Loxone Audio Player (zone). I'm also using Squeezelite as players, so this should also be possible for you.
@rudyberends commented on GitHub (Jan 1, 2026):
No problem, all input is welcome. I do think, however, that we should split this into separate issues so we can address them individually.
I’m not sure if I fully understand your question about mapping. Are you looking to map a zone to a group of players, or is this meant to be a one-to-one mapping?
Based on the responses here, there seems to be some confusion about how everything fits together. I’m very open to suggestions on how to make this clearer.
Although I really like Music Assistant and think it’s the best music server available, I don’t think we should make lox-audioserver as dependent on MA as it was in previous versions. If all that’s needed is Spotify (or another streaming service) and player support, it would actually be simpler to remove MA from the setup entirely. MA should be used for specific use cases that are either not possible or not desirable to implement in lox-audioserver itself.
On the Loxone side, we already need to keep up with their development to ensure compatibility. Relying heavily on MA as a core dependency means we also have to closely track their development, which may not be the right long-term choice.
The use cases you describe are basic functionality of the Loxone Audio Server and are already implemented. The only major missing piece is DSP / equalizer configuration. Other than that, it should do everything the official server does. If something isn’t working, that’s likely due to me not explaining the feature set clearly enough, or making the setup too complex. I want to fix both.
The 4.x release is far more capable than previous versions, which were essentially a proxy for an MA instance. I’ve updated the README and added as much context as possible to the admin UI, but it may still not be clear enough. If something is unclear, please let me know and I’ll update it.
In hindsight, adding offload may have been a mistake—or at least too hard to understand. The intended model is to treat MA purely as a content source, which requires our own outputs. Offload was added to mimic 3.x behavior, but it only works when playing content from the MA library. Just like Spotify offload only applies to Spotify content sent to an external Spotify Connect device. For everything outside that source, you still need an output.
A lack of outputs might be the real issue. I see many of you are using LMS/Squeezelite. While it’s technically possible to implement this similar to MA, I personally don’t see a strong advantage. From a code and architectural standpoint, it feels outdated and less suited to the Loxone use case. I may be wrong, as I haven’t used it myself.
I would prefer moving towards newer Snapcast / Sendspin–based protocols. That said, @tokylo, I’m willing to spend time on your specific use case to better understand whether LMS is truly needed. If it turns out to be the better solution, I will implement it.
@simon2207 commented on GitHub (Jan 2, 2026):
@rudyberends
Hi Rudy, I wish you a happy new year - and many thanks for your hard work.
I tried 4.x testing during the last days... I tried hard... but there are several issues I ran into:
First: After following this thread I had hoped that TuneIn radio stations would work after the TuneIn fix (e86c102), but for me that is not the case. I still get the error message. As you can see in the screenshot, I had added them manually to Lox-Audioserver.
Second: Sonos via AirPlay as a direct output from Lox-Audioserver doesn't work. I'm not able to discover any Sonos speaker when choosing AirPlay (screenshot). It only shows one Squeezelite player (there are more in my network).
Third: I wasn't able to group/pair any Lox-Audioserver speaker/group to an existing original Loxone room/group.
Fourth: Covers don't match. Spotify playlist/album cover art is not matching. For example, I start a "Michael Jackson" playlist/album within the Loxone app directly from my Spotify account, but it shows a "Beatles" cover.
So I came to the conclusion: why so complicated? I love the idea of developing Lox-Audioserver as a 99% copy of the original Loxone Audioserver and keeping it simple, but let's add a 1% bonus here...
All ideas and feature requests are already here and worked fine during the last branches, sometimes with issues, sometimes without. The only thing missing is to get them all together in one release.
Within Lox-Audioserver we should be able to add our Spotify account - done, like the original Loxone
Add our TuneIn account - or direct radio streams from our own favorites - which are routed to the Loxone (app) - done
Radio stations should be searchable from the Loxone app - done
Add a network folder / library with our own music - done
Add TTS and other Loxone features, like the original Audioserver - done
But as a bonus, we could add several new output features and pair them to our own speakers of choice... for example:
Add a new virtual Stereo Extension with up to 4 stereo outputs. We link them to Sonos speakers, Snapcast / Google Cast, Squeezelite players (like Lox-Audioserver 2.x/3.x), like many people ask for in this thread.
And as a cherry on top - add Music Assistant as an input source, so that everyone using MA could connect their favorite source to MA and stream that to Lox-Audioserver.
Personal:
After getting to the Lox-Audioserver IP, we should at first be greeted by a setup/welcome page (IP, MAC address) - we can then add that virtual Loxone Audioserver to our own Loxone Config. Adding Spotify, and maybe radio stations and a music folder, adding some AirPlay, Cast, or Squeezelite speakers - done...
As a bonus there could be an Expert / Experimental tab - with MA integrations and the complicated stuff... like changing input/output devices...
What you have made here so far is excellent in my personal opinion, and all services and functions are already here. Lox-Audioserver should be the first and best way to go to add other speakers to the original Loxone setup, but at first only with the original services like Spotify / folders and so on - everything built into Lox-Audioserver itself to make it less dependent on MA or other solutions... but make it optional...
Kind regards
Simon
@mr-manuel commented on GitHub (Jan 2, 2026):
From my experience, Squeezelite may be an aging solution, but it remains the most reliable and stable multiroom audio playback system I have tested, particularly for long, uninterrupted playback sessions (for example, continuous playback over many hours or an entire day). I have been using Squeezelite for approximately eight years and currently operate seven players distributed throughout my house.
In comparative testing with other multiroom protocols - most notably AirPlay and Snapcast - I consistently observed gradual clock drift over time. After several hours of playback, individual rooms would no longer remain sample-accurate, and resynchronization would not occur automatically. In practice, the only way to restore synchronization was to stop playback entirely and restart the stream.
Protocol-level comparison
Squeezelite / Logitech Media Server (LMS)
Squeezelite operates with LMS acting as a central timing authority. Audio is streamed with explicit timing metadata, and each player continuously reports its buffer state and local clock behavior back to the server. LMS compensates for clock drift by instructing players to perform very small, continuous rate adjustments (fractional sample rate correction), rather than large buffer resets.
As a result:
This design is fundamentally optimized for permanent multiroom synchronization rather than ad-hoc streaming.
AirPlay (RAOP / AirPlay 1)
AirPlay is primarily designed as a consumer streaming protocol, not as a long-running, multiroom synchronization system. Each receiver maintains its own local clock and buffer, with synchronization relying on initial timestamps and relatively coarse correction mechanisms.
Key limitations:
This makes AirPlay acceptable for short listening sessions but unreliable for all-day or permanent multiroom playback.
Snapcast
Snapcast improves on AirPlay by using a server-driven model with timestamps and a shared audio stream. However, its synchronization model is still largely buffer-based rather than clock-adaptive.
In practice:
While Snapcast is suitable for many use cases, it lacks the fine-grained, continuous clock correction that LMS/Squeezelite provides.
Sendspin
Sendspin is less publicly documented, but from observed behavior it appears to operate as a low-latency audio transport rather than a fully clock-disciplined multiroom protocol. Synchronization is likely handled at session start with limited ongoing drift correction.
This typically implies:
Loxone Audio Server context
I also noticed a statement in your web interface indicating that the Loxone Audio Server itself is not responsible for synchronization. This raises architectural questions regarding how synchronization is handled when Music Assistant is not used. Specifically:
Based on discussions with Loxone Alpha testers, there are still reports of audible distortions and synchronization issues with the current Loxone Audio Server. This is particularly notable because the previous Loxone Music Server was based on Logitech Media Server with Squeezelite clients—an architecture that, at least in my experience, delivered superior long-term synchronization reliability.
Given this background, I would appreciate a more detailed explanation of the synchronization model used by the current Loxone Audio Server, especially in terms of clock discipline, drift correction strategy, and how it compares architecturally to the LMS/Squeezelite approach.
@rudyberends commented on GitHub (Jan 2, 2026):
Previous iterations of the code did not touch a single byte of audio. All it did was call a MA player over the API and instruct it to play. All audio processing was done by MA. This makes the entire chain fully dependent on MA. And because we did nothing with audio, we had no control over synchronization behavior whatsoever.
The 4.x release does its own audio processing. That gives us full control over the audio stream, the buffering model, and timing decisions on the server side. The statement that “lox-audioserver is not responsible for synchronization” is only true in the offload scenario (which effectively reverts back to the old 3.x flow). In offload mode you are delegating playback and sync to the external player, so the audio server cannot be the clock authority by definition. I removed offload for now to avoid those discussions and keep the behavior consistent and simple: if audio flows through lox-audioserver, then lox-audioserver owns the stream and the timing model end-to-end.
When routing audio through an output, sync capability depends on the protocol used. Every protocol has its own way of establishing a clock reference, distributing timestamps, buffering, and handling drift inside a group. That’s why if you want perfect sync, you must use the same protocol/output for all group members. Mixing protocols inside one group means you are effectively trying to bridge two independent timing domains, and that will always degrade to “best effort” at some point.
The protocol statements in that text are mostly incorrect. The AirPlay section is the closest to reality, but still needs framing:
AirPlay 1 (RAOP)
was not designed for multiroom grouping by Apple clients and does not provide a native, end-to-end multiroom grouping model comparable to SlimProto. Multiroom “grouping” in the open-source ecosystem is implemented by extensions/workarounds on top of RAOP receivers. It can work reasonably well, but it’s not comparable to a protocol that was designed around multi-client synchronization as a first-class feature. AirPlay 2 is a different story (native multiroom), but it’s not realistically implementable as a third-party receiver stack in a fully compatible way.
Snapcast
is explicitly designed for synchronized playback. It distributes timestamped chunks and clients continuously synchronize their local notion of server time. Drift correction is done by playing slightly faster/slower, typically implemented via single-sample insert/drop (or equivalent rate correction) depending on backend. In a stable setup (wired network, consistent sample rates, ALSA/CoreAudio/Wasapi behaving well), Snapcast can remain extremely tight for long sessions. If someone sees degradation over hours, that’s usually a configuration/backend issue (buffer settings, resampling, Wi-Fi jitter, heterogeneous sample rate paths), not a fundamental limitation of the protocol.
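As a back-of-the-envelope illustration of how small that correction is (illustrative numbers, not Snapcast internals): cancelling a measured offset of 10 ms at a 48 kHz sample rate means dropping 0.010 × 48000 = 480 samples. Spread over a 60-second correction window, that is one dropped sample every 125 ms, far below the threshold of audibility.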
Sendspin
is also built around explicit time synchronization and timestamped media. The spec formalizes clock sync (offset + drift tracking) and defines scheduling against the server time domain, with continuous correction rather than “restart to resync”. Conceptually it is in the same family as Snapcast and LMS-style models: server timeline authority + client-side clock discipline + continuous micro-corrections. If you implement Sendspin correctly, long-running sync stability is an explicit design goal, not an afterthought. Sendspin is still new, but is documented very well https://github.com/Sendspin/spec. In the long run, I believe this to be the protocol of choice. It's already adopted by MA as their preferred protocol, and we already use it to register our players with MA.
From a protocol perspective, Snapcast and Sendspin do fit our architecture better, and this would be my choice. Also, every protocol we add needs to be tested and maintained. If we can keep the number of outputs to a minimum, that helps a lot with keeping things as clean and simple as possible.
I am not saying we should ditch LMS straight away. You clearly have a lot of experience with it, and you are also using it over long runs (not just an incidental group session for a single track). But the statements made about Sendspin and Snapcast are not valid. Both protocols are designed specifically for synchronized playback, just like the SlimProto protocol, only on a more modern architecture. You would be the ideal candidate to test it :-)
@mr-manuel commented on GitHub (Jan 2, 2026):
Thanks for the detailed response! As you noted, I'm not that familiar with Snapcast and Sendspin.
Do you maybe have a recommendation for a prepared image like PiCorePlayer or Max2play, but for Snapcast or Sendspin? And which of the two would you recommend?
@rudyberends commented on GitHub (Jan 2, 2026):
Thanks for your input — it is appreciated.
I agree that keeping things as simple as possible is important. That said, aspects that feel obvious or self-explanatory during development to me are not always immediately clear to others. All feedback helps to identify and improve those areas.
Based on the screenshot you shared of your radio configuration, I still see the demo stations listed under the TuneIn presets. Those demo stations are only shown when no account is configured, which suggests that your TuneIn account is currently not being recognized.
Please try the latest release. In this version, the admin UI explicitly checks whether your username can be resolved and will report how many presets were found if the account lookup succeeds. That should make it clearer whether the account is detected correctly. It could also mean you do not have any presets set. In that case it will also revert to the demo stations.
I will get back to you on your other points. In the meantime, feel free to open them as separate issues so we can address them individually. This helps keep things more manageable on my end.
@simon2207 commented on GitHub (Jan 2, 2026):
How did you do that?
@tokylo commented on GitHub (Jan 4, 2026):
I am unable to see my Snapcast clients in the lox-audioserver interface. While Airplay and Sonos devices are discovered without issues, the Snapclients remain invisible, even when manually pointed to the lox-audioserver IP.
My Setup
Current Status: All 4 clients work perfectly when connected to a standalone Snapserver (MusicAssistant).
Technical details: I have tried pointing the Snapclients to the lox-audioserver IP, with and without specifying port 1704.
Questions:
Docker-Compose:
```yaml
version: '3.8'
services:
  lox-audioserver:
    image: ghcr.io/rudyberends/lox-audioserver:testing
    container_name: lox-audioserver
    network_mode: host
    ports:
      - "7090:7090"
      - "7091:7091"
      - "7095:7095"
    volumes:
      - /opt/lox-audioserver/data:/app/data
      - /opt/lox-audioserver/config:/config
      - /opt/lox-audioserver/audio:/audio
    environment:
      - TZ=Europe/Zurich
    restart: unless-stopped
```
@rudyberends commented on GitHub (Jan 4, 2026):
Lox-audioserver runs its own snapserver instance; we do not rely on the binary. Only the WebSocket transport is implemented, and it runs on port 7090.
Snapcast zones are not yet discovered via autodiscovery; instead the UI lists connected clients, so you might need to refresh to see them. You can also use the embedded web player as a zone, or to tap into an existing zone.
Connect to the server like this:
snapclient ws://audioserverip:7090
When the client is connected it will show up (else try a refresh).
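For reference, a tiny sketch that assembles that invocation from variables (SERVER and ZONE are placeholders I made up, not values from this thread):

```shell
#!/bin/sh
# Sketch: build the snapclient command line for the audioserver's
# WebSocket transport on port 7090. SERVER and ZONE are placeholders.
SERVER="${SERVER:-192.0.2.10}"   # your audioserver IP (TEST-NET placeholder)
ZONE="${ZONE:-livingroom}"       # gives the client a readable name in the UI
OPTS="ws://${SERVER}:7090 --hostID snapclient-${ZONE}"
echo "snapclient ${OPTS}"
```

Run the printed command on the client machine; after it connects, the client should appear in the zone's Snapcast output list (refresh if needed).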
If you have any other issues with snapcast, please open a separate issue. Every 4.x issue in one thread is really hard for me to manage.
@simon2207 commented on GitHub (Jan 5, 2026):
Any news on that? I tried Snapcast today... really frustrating after several hours of trial and error.
@Re4DeR commented on GitHub (Jan 7, 2026):
⸻
What’s your problem?
For me it wasn’t Snapcast itself – it was the lack of clear documentation.
Almost everything is answered in GitHub issues, but you have to piece it together yourself.
There is no need for PiCorePlayer, Max2Play, or any “audio distro” when using lox-audioserver v4. Those are built around LMS or legacy Snapcast setups and only add confusion here.
Below is a minimal, working setup that removed all the frustration.
⸻
Quick manual – Raspberry Pi Snapclient for lox-audioserver v4
TL;DR
• ❌ Don’t use PiCorePlayer / Max2Play
• ❌ Don’t run your own snapserver
• ✅ Use plain Raspberry Pi OS Lite
• ✅ Use Snapclient 0.32+ with WebSocket support
• ✅ lox-audioserver already is the Snapcast server (WS on port 7090)
⸻
Flash OS
Flash Raspberry Pi OS Lite 64-bit using Raspberry Pi Imager.
Enable:
• SSH
• Wi-Fi
• Hostname (e.g. snapclient-room)
Nothing else.
⸻
Install Snapclient (WebSocket capable)
lox-audioserver v4 requires Snapclient ≥ 0.32.
Example for Debian Trixie / ARM:
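A hedged sketch of the install step (the version number and .deb file name below are placeholders; check https://github.com/badaix/snapcast/releases for the actual current package for your distro/arch):

```shell
#!/bin/sh
# Sketch: locate a WebSocket-capable snapclient (>= 0.32) on the Snapcast
# releases page. VER and DEB are hypothetical; verify on the releases page.
VER="0.32.3"
DEB="snapclient_${VER}-1_arm64_bookworm.deb"
URL="https://github.com/badaix/snapcast/releases/download/v${VER}/${DEB}"
echo "Download: ${URL}"
# Then, on the Pi (network + root required):
#   wget "$URL"
#   sudo apt install "./${DEB}"   # apt resolves the dependencies
```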
Verify:
snapclient --version
⸻
Configure Snapclient (THIS is the key part)
lox-audioserver v4 runs its own Snapcast WebSocket server on port 7090.
You must connect Snapclient via WS, not via the classic TCP Snapcast port.
Edit:
sudo nano /etc/default/snapclient
Example configuration:
SNAPCLIENT_OPTS="ws://<ip_of_lox-audioserver_v4>:7090 -s plughw:CARD=DEVICE,DEV=0 --latency 80 --hostID snapclient-room --logsink=system"
Notes:
• ws://<ip>:7090 ← mandatory
• --hostID = human-readable client name shown in UI
• plughw avoids ALSA conversion issues
• latency 50–120 ms is typical
⸻
Start Snapclient
If it connects successfully, you’re done.
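A sketch of this step, assuming the systemd unit that ships with the Debian package (the commands are printed rather than executed here, since they need root on the Pi):

```shell
#!/bin/sh
# Sketch: start snapclient via the systemd unit shipped with the Debian
# package. Commands are printed instead of run because they require root.
CMDS=$(cat <<'EOF'
sudo systemctl enable --now snapclient      # start now and on every boot
sudo systemctl status snapclient --no-pager # should report active (running)
journalctl -u snapclient -n 20 --no-pager   # recent log lines
EOF
)
printf '%s\n' "$CMDS"
```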
⸻
Map client in lox-audioserver UI
• Open lox-audioserver v4 UI
• Zones → Output routing → Snapcast
• Your Snapclient appears automatically
• Click it → mapped → audio plays
No manual discovery, no snapserver config, no AirPlay hacks.
⸻
@rudyberends awesome work! Thank you so much!
@wiking-at commented on GitHub (Jan 11, 2026):
@rudyberends thanks for clearing up the confusion - so in lox-audioserver 4.0, MA is just used as a content source, not for playback. That's why I struggled to use my MA Squeezelite players ;-). Currently I only run Squeezelite players, so I would need to convert them to either Snapcast or Sendspin.
To sum your info up: Sendspin will very likely be the preferred player in the future, so it might be a good idea to convert my Squeezelite players to Sendspin instead of Snapcast. Is there already any documentation or notes on what is needed to connect a Sendspin player to lox-audioserver? Or should I just read up on Sendspin and try to get it working that way?
Edit: I just got Sendspin working with a minimal Ubuntu 24.04 VM and the Wondom KAB9 audio card on Proxmox. It is currently only integrated in Music Assistant, as my lox-audioserver container is currently lacking the possibility for mDNS (only one container in host network mode, I suppose). Here are my notes - maybe helpful for others, or for you if you want to document Sendspin installation in your project as well: https://gist.github.com/wiking-at/d7789afb9107c3f6a16501e1914bbe41