mirror of
https://github.com/spotipy-dev/spotipy.git
synced 2026-04-27 08:35:49 +03:00
[GH-ISSUE #718] Writing integration tests for spotipy-based app #432
Originally created by @joebonneau on GitHub (Aug 19, 2021).
Original GitHub issue: https://github.com/spotipy-dev/spotipy/issues/718
Hi there,
I'm writing integration tests for a CLI tool and I'm trying to determine the best way to structure my tests. For things like testing whether a search was run successfully, it makes sense to test the response itself.
But what about testing something like whether `next_track` was successful? I haven't been able to figure out whether I can extract the response code, so instead I would need to first make a call to `current_playback`, then call it again and test whether the responses differ.

Of course, I could always just test whether something like a `SpotifyException` is raised, but I'm not sure I would encounter this error unless my authentication was unsuccessful. I do hit this error when there is no active device and `device_id` isn't passed in, but I'm thinking I could get some wonky results in my CI builds if there happens to be an active device that I didn't notice before running the build.

So I guess my questions are:

`Spotify` object after making a call?

Thanks!
@stephanebruckert commented on GitHub (Aug 19, 2021):
Agree!
Why would that be different? I believe it would also directly return a response. For example, show playlist tracks, do "next" once, and verify that you get the second page?
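A paging check like the one described above can be sketched with stdlib-only stand-ins (the real client exposes `Spotify.playlist_items` and `Spotify.next`; the page shape below is a simplified mock of Spotify's paging objects, not the exact API schema):

```python
# Simplified mock of Spotify's paging objects: each page carries its items
# and a "next" cursor (None on the last page). Shapes are illustrative.
PAGES = {
    None: {"items": ["track-1", "track-2"], "next": "page-2"},
    "page-2": {"items": ["track-3", "track-4"], "next": None},
}

def playlist_items(cursor=None):
    """Stand-in for a page fetch such as Spotify.playlist_items."""
    return PAGES[cursor]

def next_page(page):
    """Stand-in for Spotify.next: fetch the page after `page`, or None."""
    return playlist_items(page["next"]) if page["next"] else None

# The test idea: fetch page one, call "next" once, and verify that the
# second page really differs from the first.
first = playlist_items()
second = next_page(first)
assert second["items"] == ["track-3", "track-4"]
assert next_page(second) is None  # the last page has no successor
```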
Okay, in this case you would like to force spotipy to fail. Have you heard of mocking? It can help you fake the result of methods you don't have access to. Since the spotipy object is an instance of a class, you could search for "instance method mock python"; here are some ideas: https://stackoverflow.com/q/5036920/1515819
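The mocking idea can be sketched with the standard library's `unittest.mock.patch.object`. Everything here is a stand-in for illustration: the `Spotify` and `SpotifyException` classes below are minimal fakes (with the real library you would patch the installed `spotipy.Spotify` class), and `skip_current_song` is a hypothetical piece of app code under test:

```python
from unittest import mock

class SpotifyException(Exception):
    """Simplified stand-in for spotipy's SpotifyException."""

class Spotify:
    """Minimal stand-in for spotipy.Spotify, just enough to patch."""
    def next_track(self, device_id=None):
        raise NotImplementedError("would call the real Web API")

def skip_current_song(client):
    """Hypothetical app code under test: report success or failure."""
    try:
        client.next_track()
        return True
    except SpotifyException:
        return False

client = Spotify()

# Force the "no active device" failure without a network call or a device.
with mock.patch.object(Spotify, "next_track",
                       side_effect=SpotifyException("no active device")):
    assert skip_current_song(client) is False

# Fake a success the same way.
with mock.patch.object(Spotify, "next_track", return_value=None):
    assert skip_current_song(client) is True
```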
@Peter-Schorn commented on GitHub (Aug 19, 2021):
spotipy does not always return the response code. If the underlying API request returns a non-successful status code, then the method (e.g., `Spotify.next_track`) will raise an exception (which may contain the HTTP status code), which you'll probably want to interpret as a test failure. If a successful status code is returned, then an exception will not be raised. For the purposes of your tests, this is all the information you need.

There are plenty of other error conditions that are unrelated to authentication. For example, if you provide an invalid id to a method such as `Spotify.track`, then an exception will be raised.

@joebonneau commented on GitHub (Aug 20, 2021):
Thanks for the input @stephanebruckert and @Peter-Schorn ! I think that I've got my head on my shoulders now as far as approaching the problem goes. I am still quite new to the concept of mocking (though it's making more and more sense as I go) and still wrapping my head around when I actually need to use it.
My thought here was that a method like `Spotify.search()` returns data in the response but `Spotify.next_track()` does not, and I was unsure how to handle that. But as Peter mentioned, an exception will be raised if the method is unsuccessful, and that should be sufficient for my purposes.

@joebonneau commented on GitHub (Aug 21, 2021):
So I was able to write the integration tests, but I'm now running into another issue that perhaps one of you has experience with. In CI, my integration tests seem to get held up at the step where the authentication is made, and I've narrowed it down to the point where the redirect URI is opened in the browser. In the CI environment, the browser can't open, but I'm also not sure how to enter the URL automatically (I saw somewhere that you can get the URL using `urllib3`).

I know this is a bit out of scope, but if you have any thoughts, they would be appreciated!
@Peter-Schorn commented on GitHub (Aug 21, 2021):
Don't test the authorization process. Authorize your app in advance and then save the token info in persistent storage.
@joebonneau commented on GitHub (Aug 21, 2021):
I've already authorized the app on my local machine and the tests run just fine, so I'm trying to work through what this might look like. I assume that you don't mean to commit the `.cache` file that is generated? I'd assume that would be as bad as committing the `CLIENT_SECRET`, but I may be thinking about this incorrectly.

@Peter-Schorn commented on GitHub (Aug 21, 2021):
Yes, committing the cache file to git is as bad as committing the client secret. But there are ways of securely storing the token info. JSON is just a string, so you can store this string as a secret environment variable, just like your client secret. If you're using github actions to run your tests, then you can use a repository secret.
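Treating the token JSON like any other secret might look like this in a test setup (stdlib only; the `TOKEN_INFO` variable name and the fail-fast behaviour are assumptions, not part of spotipy):

```python
import json
import os
import sys

def load_token_info():
    """Read the cached-token JSON from a secret environment variable."""
    raw = os.environ.get("TOKEN_INFO")  # assumed secret name
    if raw is None:
        # Fail fast with a clear message instead of trying to open a
        # browser for interactive authorization in CI.
        sys.exit("TOKEN_INFO is not set; add it as a repository secret")
    return json.loads(raw)  # JSON string -> dict

# Demo: pretend CI injected the secret.
os.environ["TOKEN_INFO"] = '{"access_token": "demo"}'
assert load_token_info()["access_token"] == "demo"
```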
@joebonneau commented on GitHub (Aug 21, 2021):
Ah, very cool! I'm using Travis CI, but I'm thinking something like this might work: define the token info as an environment variable `TOKEN_INFO`, then:

I'll give it a go sometime this weekend, but let me know if this seems like a reasonable approach. Thanks again for all of the help; it's really invaluable for a new dev like myself.
@Peter-Schorn commented on GitHub (Aug 21, 2021):
That won't work.
`cache_path` should be a file path; `CacheFileHandler` reads and writes to a file. You should use `MemoryCacheHandler` instead. Make sure to convert the JSON string to a Python dict before passing it in to `MemoryCacheHandler.__init__`.

@joebonneau commented on GitHub (Aug 21, 2021):
Finally got it! Phew!
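For anyone landing on this thread later, the approach it converges on can be sketched as follows. This is a sketch, not a tested setup: the `TOKEN_INFO` secret name is an assumption, the token fields are demo values, and the spotipy calls are shown in comments so the parsing step runs on its own:

```python
import json
import os

# 1. Authorize once locally; copy the JSON from the generated .cache file
#    into a CI secret named TOKEN_INFO (assumed name, demo value here).
os.environ["TOKEN_INFO"] = '{"access_token": "demo-token"}'

# 2. In CI, parse the secret back into a dict -- MemoryCacheHandler expects
#    a dict, not the raw JSON string.
token_info = json.loads(os.environ["TOKEN_INFO"])

# 3. Hand the dict to spotipy instead of a cache file (requires spotipy):
#    from spotipy import Spotify
#    from spotipy.cache_handler import MemoryCacheHandler
#    from spotipy.oauth2 import SpotifyOAuth
#    sp = Spotify(auth_manager=SpotifyOAuth(
#        cache_handler=MemoryCacheHandler(token_info=token_info)))
print(token_info["access_token"])
```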