mirror of
https://github.com/AliAkhtari78/SpotifyScraper.git
synced 2026-04-25 19:45:49 +03:00
[GH-ISSUE #1] Only download the information for first 30 songs for a playlist. #59
Originally created by @yuhaian on GitHub (Jul 2, 2020).
Original GitHub issue: https://github.com/AliAkhtari78/SpotifyScraper/issues/1
Originally assigned to: @AliAkhtari78 on GitHub.
Describe the bug
get_playlist_url_info() only downloads the first 30 songs (the first page?) of a given playlist. My playlist has 192 songs, but this function only downloaded the information for the first 30.
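This truncation is consistent with the playlist page only server-rendering its first batch of tracks: a plain HTTP fetch parses whatever is in the initial HTML, and the remaining tracks are injected by JavaScript as the page scrolls. A minimal, stdlib-only illustration with synthetic HTML (the real page structure differs; the `class="track"` markup here is invented for the demo):

```python
from html.parser import HTMLParser

class TrackCounter(HTMLParser):
    """Counts <li class="track"> elements in a page, as a static scraper would."""
    def __init__(self):
        super().__init__()
        self.tracks = 0

    def handle_starttag(self, tag, attrs):
        if tag == "li" and ("class", "track") in attrs:
            self.tracks += 1

# Synthetic stand-in for the playlist page: the server rendered only 30
# of the 192 tracks; the rest would be lazy-loaded by JavaScript on scroll.
page = "<ul>" + '<li class="track"></li>' * 30 + "</ul>"

parser = TrackCounter()
parser.feed(page)
print(parser.tracks)  # 30 — a static fetch never sees the other 162 tracks
```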
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Expected all 192 songs in the pl variable, not just the first 30.
Screenshots
Not applicable.
Additional context
None
@AliAkhtari78 commented on GitHub (Jul 11, 2020):
@yuhaian Hi.
I am very happy you used my library, and I appreciate your feedback based on your experience.
SpotifyScraper uses Requests to fetch web pages, so it can only load as many tracks as the initial response contains.
To load all the tracks of a playlist, the page has to be scrolled in a web browser, so I have to add a Selenium-based scraper alongside the Requests-based one.
It isn't a problem!
It's an additional capability I can add by using Selenium alongside Requests to fetch web pages.
I will release a new version of the library soon so you can download and use it.
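The scroll-until-nothing-new-loads approach described above can be sketched roughly like this. This is a minimal sketch, not SpotifyScraper's actual implementation: `execute_script` matches Selenium's WebDriver method, while `get_track_titles` and the fake driver are illustrative stand-ins so the loop logic can be shown (and exercised) without a real browser:

```python
import time

def collect_all_tracks(driver, pause=1.0, max_rounds=50):
    """Scroll the playlist page until no new tracks appear, then return them.

    ``driver`` is any Selenium-style object exposing ``execute_script`` and a
    ``get_track_titles`` helper (an assumed name, not a real Selenium method).
    """
    seen = 0
    for _ in range(max_rounds):
        driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        time.sleep(pause)  # give the page time to lazy-load the next batch
        count = len(driver.get_track_titles())
        if count == seen:  # nothing new loaded: we reached the end of the list
            break
        seen = count
    return driver.get_track_titles()

class FakePlaylistDriver:
    """Simulates a page that reveals 30 more tracks per scroll (demo only)."""
    def __init__(self, total):
        self.total = total
        self.loaded = 30  # the initial HTML carries only the first 30 tracks

    def execute_script(self, _script):
        self.loaded = min(self.loaded + 30, self.total)

    def get_track_titles(self):
        return [f"track {i}" for i in range(self.loaded)]

driver = FakePlaylistDriver(total=192)
tracks = collect_all_tracks(driver, pause=0)
print(len(tracks))  # 192 — all pages collected, not just the first 30
```

With a real Selenium driver, `get_track_titles` would instead query the DOM (e.g. via `find_elements`), and `pause` would stay around a second so the lazy-loading has time to fire.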