[GH-ISSUE #143] [Bug Report] ValueError: Track not in List for Long Playlist #128

Closed
opened 2026-02-27 04:57:42 +03:00 by kerem · 10 comments
Owner

Originally created by @carlyd95 on GitHub (Jan 6, 2026).
Original GitHub issue: https://github.com/Googolplexed0/zotify/issues/143

Originally assigned to: @Googolplexed0 on GitHub.

Zotify Version
v0.11.15 (efficient-api branch)

Bug Description
When downloading a playlist via the -f $FILENAME flag, some playlists result in an error:
ValueError: <zotify.api.Track object at 0x7f8c641d10> is not in list
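For context, the "is not in list" message is what Python's list.index() raises when no element compares equal to its argument; classes that don't define __eq__ compare by identity, so two separately constructed Track objects for the same song never match. A minimal sketch of that likely mechanism (the Track class below is a hypothetical stand-in, not zotify's actual class):

```python
# Likely failure mode: list.index() compares with ==, which for classes
# without __eq__ falls back to object identity. A Track built during
# metadata preloading and a Track re-fetched during pagination therefore
# never match, even when they represent the same Spotify track.
class Track:  # hypothetical stand-in for zotify.api.Track
    def __init__(self, track_id):
        self.track_id = track_id

preloaded = [Track("example-track-id")]  # built while parsing metadata
refetched = Track("example-track-id")    # same track, new object

try:
    preloaded.index(refetched)  # identity comparison -> no match
except ValueError as err:
    print(err)                  # "<...Track object at 0x...> is not in list"
```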

Bug Triggering Command
sudo -u pi /home/pi/.local/bin/zotify -f zotifylist.txt --download-format mp3 --download-quality very_high --root-path /media/pi/X9/DJ/Music --output "{artist} - {song_name}" --download-real-time True --md-save-genres True --client-id XXX--REDACTED--XXX

zotifylist.txt:
https://open.spotify.com/playlist/1eQ0j0QcpnipT0HClwzFSn

Error Traceback / Logs
zotify_DEBUG_2026-01-06_09-24-11.log (https://github.com/user-attachments/files/24455076/zotify_DEBUG_2026-01-06_09-24-11.log)

Standard Output Error Message
zotify_error_stout.txt (https://github.com/user-attachments/files/24455090/zotify_error_stout.txt)

Config File
config_DEBUG.json (https://github.com/user-attachments/files/24455065/config_DEBUG.json)

Additional Context
Example playlist causing the issue: https://open.spotify.com/playlist/1eQ0j0QcpnipT0HClwzFSn

kerem 2026-02-27 04:57:42 +03:00
  • closed this issue
  • added the bug label

@RGPZ commented on GitHub (Jan 6, 2026):

I'm also getting this error, and I assume it's being caused by one of two things that I also have going on:

  1. Your playlist has tracks that can't be listened to, so there's no audio "object" for Zotify to grab in that list.
  2. You're running two instances of Zotify at once, which possibly can't work. (I'm not sure if this is the case, but it's what I'm doing while I get this error on my second instance.)

I'm not 100% sure either of these is the issue, but it might be worth looking at.

@carlyd95 commented on GitHub (Jan 6, 2026):

> I'm also getting this error, and I assume it's being caused by one of two things that I also have going on:
>   1. Your playlist has tracks that can't be listened to, so there's no audio "object" for Zotify to grab in that list.
>   2. You're running two instances of Zotify at once, which possibly can't work.
> I'm not 100% sure either of these is the issue, but it might be worth looking at.

I am only running one instance, so maybe it's not that, but perhaps the other could be a cause. When you say it has no audio object, are you referring to tracks that are no longer available in your market?

If that is what you mean, I think I tested this by placing some of those tracks in another playlist and the issue did not arise.

Tonight I will try to determine which track(s) in the playlist are causing the issue and perhaps that will help identify the cause.

Thanks for the additional thoughts! 😄


@RGPZ commented on GitHub (Jan 6, 2026):

> When you say it has no audio object are you referring to tracks that are no longer available in your market?

Yeah, basically, any song in your playlist that you can't listen to on Spotify via regular means, probably can't be downloaded via Zotify.


@carlyd95 commented on GitHub (Jan 6, 2026):

> > When you say it has no audio object are you referring to tracks that are no longer available in your market?
>
> Yeah, basically, any song in your playlist that you can't listen to on Spotify via regular means, probably can't be downloaded via Zotify.

Yes, that is what I tested yesterday. Below is the output showing how Zotify correctly handles that situation with this test playlist: https://open.spotify.com/playlist/25XTlrBUJ7iPrnD4oZGk0G

### SKIPPING: "HUGEL - Forever (feat. Malou & Yuna)" (TRACK ALREADY EXISTS) ###
### SKIPPING: "Skeler - Arcadia - Heimanu Remix" (TRACK IS UNAVAILABLE) ###
### SKIPPING: "Tearz - HEARTBEAT" (TRACK IS UNAVAILABLE) ###
### SKIPPING: "&friends - Ode Ireti - Sam Zloty Remix" (TRACK IS UNAVAILABLE) ###
### SKIPPING: "Alexander - Distraction" (TRACK IS UNAVAILABLE) ###


@carlyd95 commented on GitHub (Jan 7, 2026):

@Googolplexed0 I think I have discovered the issue: my guess is that it relates to Spotify's API batch-call limit.

Any playlist that contains more than 100 unique tracks will cause this error; any playlist with 100 or fewer unique tracks will not.

Example Playlists (first 100 tracks are the same in both):

100 tracks: https://open.spotify.com/playlist/6dfSw7RRf3q1iXGmIpZepJ
101 tracks: https://open.spotify.com/playlist/25XTlrBUJ7iPrnD4oZGk0G

Let me know if there is anything else that I can do to help! Thanks again!

Additional support for theory:

https://stackoverflow.com/questions/79151675/spotify-api-get-tracks-information-from-playlist-with-over-100-tracks-using-pyth
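The 100-item ceiling described above matches Spotify's documented page size for playlist items: the first response embeds at most 100 entries, and each page's "next" field holds the URL of the following page (or null on the last page). A minimal sketch of the usual pagination loop, where get_json stands in for an authenticated GET helper (an assumption, not zotify's real code):

```python
# Follow the "next" links of Spotify's paginated playlist response until
# every page has been collected. playlist_resp is the initial playlist
# object; get_json is a hypothetical callback performing an authed GET.
def fetch_all_items(playlist_resp, get_json):
    tracks = playlist_resp["tracks"]
    items = list(tracks["items"])   # first page, at most 100 items
    next_url = tracks.get("next")
    while next_url:                 # None/absent means no further pages
        page = get_json(next_url)
        items.extend(page["items"])
        next_url = page.get("next")
    return items
```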


@carlyd95 commented on GitHub (Jan 7, 2026):

While I am unsure whether it is the best fix, it does work:

✅ Easiest Fix (recommended)

Do not preload tracks in parse_metadata() when pagination is needed.
Change this:
if TRACKS in playlist_resp and ITEMS in playlist_resp[TRACKS]:
To this:
if TRACKS in playlist_resp and ITEMS in playlist_resp[TRACKS] and playlist_resp[TRACKS].get(NEXT) is None:

Why this works

  • Spotify embeds only the first 100 items
  • If NEXT exists → playlist is paginated
  • You skip creating Track objects early
  • fetch_items() becomes the single source of truth
  • Prevents object identity mismatch
  • Fixes the ValueError without touching pagination logic

🔧 Also fix this (1 line, optional but correct)
Replace:
self.needs_expansion = NEXT not in playlist_resp[TRACKS] or playlist_resp[TRACKS][NEXT] is not None
With:
self.needs_expansion = playlist_resp[TRACKS].get(NEXT) is not None

🧠 Why this is better than clearing lists

  • No wasted work
  • No duplicated API calls
  • No side effects
  • Keeps behavior identical for ≤100-track playlists
  • Zero changes to fetch_items()

TL;DR
Guard against preloading when tracks.next exists.
That’s it. One condition, bug gone.

I hope this helps. Cheers!
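The two changes proposed above can be condensed as follows. The constants mirror the TRACKS/ITEMS/NEXT names quoted in the comment, and the free functions are an abbreviated sketch of the class logic, not zotify's actual code:

```python
# Sketch of the proposed guard: only preload embedded track items when the
# playlist response is NOT paginated (tracks.next is None), and derive
# needs_expansion directly from the presence of a next-page URL.
TRACKS, ITEMS, NEXT = "tracks", "items", "next"

def should_preload(playlist_resp):
    return (
        TRACKS in playlist_resp
        and ITEMS in playlist_resp[TRACKS]
        and playlist_resp[TRACKS].get(NEXT) is None  # fully embedded
    )

def needs_expansion(playlist_resp):
    # replacement for the original double-negative expression
    return playlist_resp[TRACKS].get(NEXT) is not None
```

With this guard, small playlists behave exactly as before, while paginated ones defer all Track creation to the pagination path, avoiding the duplicate-object identity mismatch.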


@Googolplexed0 commented on GitHub (Jan 20, 2026):

@carlyd95 see if the new-hierarchy branch (https://github.com/Googolplexed0/zotify/tree/new-hierarchy) fixes this issue.


@carlyd95 commented on GitHub (Jan 20, 2026):

@Googolplexed0 Okay, will do thank you!

I have noticed a few other issues and plan to submit them, have you gone ahead and fixed anything related to the following:

  • Continually having to reauthenticate via the URL
  • unpack requires a buffer of 4 bytes error on program start (immediate fail)
  • ConnectionRefusedError: [Errno 111] Connection refused error on program start (immediate fail)
  • (I assume) Librespot socket times out over the course of really long downloads (-d 25-playlists.txt) with ERROR: UNEXPECTED ERROR DURING DOWNLOADS (OSError: [Errno 9] Bad file descriptor) (may be related to the need to reauthenticate via URL)
  • zotify gets stuck when parsing playlist data for bulk downloads (-d 25-playlists.txt)
  • general increase in time it takes to parse playlists and fetch disc/track data

One other note that may be specific to my workflow.
I store all tracks in a single flat folder and use playlists only as references to the same files. When batch-downloading many playlists (e.g. 20–30 at once), multiple playlists often contain the same track.
In this case, Zotify attempts to save duplicate copies of the same song into the same folder using the _{count}.{ext} suffix. This creates a large number of unnecessary duplicate files and also logs them repeatedly in both .song_ids and .song_archive.
Ideally, duplicate playlist references would resolve to the existing file and skip cleanly, without creating suffixed copies or additional archive entries.
I’ve tested --disable-directory-archives, which partially helps: it prevents logging to .song_ids, but .song_archive is still written. Previously I was relying solely on .song_ids for deduplication, so I may be misunderstanding the intended roles of these two files.
One more edge case: when a new track is introduced for the first time and that same track appears in multiple playlists within a single batch run, I then encounter an UNEXPECTED ERROR DURING DOWNLOADS, even when using --disable-directory-archives.

Please advise if any of these have been fixed or if they are worth opening new issues/feature requests for.

Thank you for your help. Let me know if there is something I can help with.

Much agape,
Carlton


@Googolplexed0 commented on GitHub (Jan 21, 2026):

> • Continually having to reauthenticate via the URL
> • unpack requires a buffer of 4 bytes error on program start (immediate fail)

I don't experience these, and would need to be able to reproduce them to find a fix.

> • (I assume) Librespot socket times out over the course of really long downloads (-d 25-playlists.txt) with ERROR: UNEXPECTED ERROR DURING DOWNLOADS (OSError: [Errno 9] Bad file descriptor) (may be related to the need to reauthenticate via URL)

I could probably work on a more active reconnect system. Make a new feature request and try to give as much detail as possible on where in the control flow you see this error being raised.

> • zotify gets stuck when parsing playlist data for bulk downloads (-d 25-playlists.txt)
> • general increase in time it takes to parse playlists and fetch disc/track data

Longer fetching will always happen with larger requests. Hopefully after new-hierarchy the increase in time will no longer be so noticeable (almost exponential). Let me know if you see the large improvements I'm hoping for after all the kinks are ironed out.

> In this case, Zotify attempts to save duplicate copies of the same song into the same folder using the _{count}.{ext} suffix. This creates a large number of unnecessary duplicate files and also logs them repeatedly in both .song_ids and .song_archive.

Do you have either of the skipping-related configs active? That should add them to .m3u8s without doing any cloning or archive duplication. Even if the archiving function still ran, if an ID is already in an archive it shouldn't be readded to it. Make a Bug Report for this, with a transcript of an archive file pre-and-post duplicate entries.

> Ideally, duplicate playlist references would resolve to the existing file and skip cleanly, without creating suffixed copies or additional archive entries.

This should probably be default behavior for within a single playlist, but would have conflicts if {playlist_num} makes an appearance in the filename or if a user doesn't use .m3u8 files. Make this a simple Bug Report and I'll see what I can do. Might make this a config option just to safeguard other workflows dependent on file cloning in all cases.

> One more edge case: when a new track is introduced for the first time and that same track appears in multiple playlists within a single batch run, I then encounter an UNEXPECTED ERROR DURING DOWNLOADS, even when using --disable-directory-archives.

No idea about this one. Does this still happen in new-hierarchy? If so, throw another Bug Report onto the pile (lol).


@carlyd95 commented on GitHub (Jan 22, 2026):

> > • Continually having to reauthenticate via the URL
> > • unpack requires a buffer of 4 bytes error on program start (immediate fail)
>
> I don't experience these, and would need to be able to reproduce them to find a fix.

I have not seen this error pop up in new-hierarchy so maybe it is fixed!

> > • (I assume) Librespot socket times out over the course of really long downloads (-d 25-playlists.txt) with ERROR: UNEXPECTED ERROR DURING DOWNLOADS (OSError: [Errno 9] Bad file descriptor) (may be related to the need to reauthenticate via URL)
>
> I could probably work on a more active reconnect system. Make a new feature request and try to give as much detail as possible on where in the control flow you see this error being raised.

This would be great! I want to get the file copies issue sorted out and then I will submit a feature request for this.

> > • zotify gets stuck when parsing playlist data for bulk downloads (-d 25-playlists.txt)
> > • general increase in time it takes to parse playlists and fetch disc/track data
>
> Longer fetching will always happen with larger requests. Hopefully after new-hierarchy the increase in time will no longer be so noticeable (almost exponential). Let me know if you see the large improvements I'm hoping for after all the kinks are ironed out.

Sweet!

> > In this case, Zotify attempts to save duplicate copies of the same song into the same folder using the _{count}.{ext} suffix. This creates a large number of unnecessary duplicate files and also logs them repeatedly in both .song_ids and .song_archive.
>
> Do you have either of the skipping-related configs active? That should add them to .m3u8s without doing any cloning or archive duplication. Even if the archiving function still ran, if an ID is already in an archive it shouldn't be readded to it. Make a Bug Report for this, with a transcript of an archive file pre-and-post duplicate entries.

Yes, both.

"SKIP_EXISTING": "True",
"SKIP_PREVIOUSLY_DOWNLOADED": "True"

The situation is likely more involved; previously I had a fix in place that simply prevented it.

Scenario 1:
Bulk download 2 or more playlists that share a common track that has yet to be downloaded.

Possible Fix: ???

Scenario 2:
Download a playlist containing a track that has more than one Spotify track ID, with all versions downloading to the same path in the format {artist} - {track}.{ext}, where one of them has been downloaded before.

Possible Fix: hard skip if path_exists

> > Ideally, duplicate playlist references would resolve to the existing file and skip cleanly, without creating suffixed copies or additional archive entries.
>
> This should probably be default behavior for within a single playlist, but would have conflicts if {playlist_num} makes an appearance in the filename or if a user doesn't use .m3u8 files. Make this a simple Bug Report and I'll see what I can do. Might make this a config option just to safeguard other workflows dependent on file cloning in all cases.

m3u8-wise, I use my own script to populate them, because I keep tracks that have been removed from my Spotify market but that I still have downloaded. Sounds good. I created an issue for it (#158). Thank you!

> > One more edge case: when a new track is introduced for the first time and that same track appears in multiple playlists within a single batch run, I then encounter an UNEXPECTED ERROR DURING DOWNLOADS, even when using --disable-directory-archives.
>
> No idea about this one. Does this still happen in new-hierarchy? If so, throw another Bug Report onto the pile (lol).

With optimized downloading off, it throws an error; with it on, it does not throw an error but saves a copy (_{c}.{ext}).
