[GH-ISSUE #1745] [feature]: Support "Doing something" with the responses from web requests in tests - artifacts #553

Open
opened 2026-03-16 15:57:37 +03:00 by kerem · 8 comments
Owner

Originally created by @stellarpower on GitHub (Jul 19, 2021).
Original GitHub issue: https://github.com/hoppscotch/hoppscotch/issues/1745

Is your feature request related to a problem? Please describe.
I'm not a web-stack developer, so I spend a lot of my time avoiding anything to do with it like the plague. I may therefore simply be googling the wrong things, because it's not the sort of development I spend my time on. But for a project I'm working on, I have had to expose a computationally expensive process via an HTTP server. I had been using Postman to run basic testing of this, as a more controllable process than using my browser. Finally giving up on that, and looking for a replacement piece of software, I arrived at Hoppscotch. Given that I can't seem to find a package that does what I want/need to do, I thought I would suggest this "feature" - or rather, describe my use case; how to implement this, if you wanted to, I'd leave to you.

I need to perform large regression tests on this computationally expensive process the web server is exposing, not just on the server itself. I therefore need to run batches of requests overnight, and test the accuracy and validity of the data returned in them. It seems this is not easy to do in Postman or competing software, and I have been around the houses searching for the right tool that does make it easy. This process takes over 5 minutes per request, and the JSON returned is over 5 MiB per request. I need to run hundreds of these every day.

I arrived at Hoppscotch via this process, and it seems that most, if not all, of the web API testing frameworks I have come across only concentrate on tests that return a binary pass/fail status. They seem (to me) to be geared towards testing the API itself, more concerned with the format than with the data values - this makes sense for many applications, but not as much for mine. It's great to log response codes, response times, whether certain data values are present, etc. I'm trying to avoid simply writing a shell script around curl, but I also want to use one and the same tool to run these regressions, and that means I need to get data out of Hoppscotch (or the other tools I have tried) in a controlled way; dumping all the responses into a file isn't going to work in the long run - I need to extract, process, and keep certain values.

Describe the solution you'd like

As the testing panel in the UI states that this part is in beta, I'd like to add that some ability to manipulate, or at least output to a file in an orderly way, all or part of the response data from the requests would be a much-needed feature for my use case. I don't want to write scripts from scratch in a chosen scripting language, but at this stage I will probably fall back to something like curl, because no piece of modern API-testing software I have found lets me keep the returned data in a useful format - they have all been designed around pass/fail tests so far. Postman/Newman can log the response, but for me this resulted in a single JSON/CSV file ~200 MiB in size to trawl through. And I had to use a number of `console.log`s, or hacky methods like appending the results to the test name, to get that far - that is what I saw recommended online.

In the simplest form, I think Hoppscotch could support writing out each response to a file, with some basic metadata, from the CLI version (perhaps giving the option to output the whole collection to one file, or to use a filename template to log each response to its own file). I would also like to be able to return values from my tests, which would then be collected, rather than just indicating pass or fail:

```js
pw.test("Mean width from samples", () => {
    var j = JSON.parse(pw.response);
    var total = 0;
    for (var i = 0; i < j.samples.length; i++) {
        total += j.samples[i].width;
    }
    return (total / j.samples.length);
});
```

as a toy example.
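The filename-template option suggested above could be sketched roughly like this. The `expandTemplate` helper and its placeholder names are invented for illustration; they are not part of any existing Hoppscotch CLI:

```js
// Hypothetical sketch of the filename-template idea: every placeholder
// name here is made up for illustration, not an existing option.
function expandTemplate(template, ctx) {
  // Replace each {name} placeholder with the matching context value,
  // leaving unknown placeholders intact.
  return template.replace(/\{(\w+)\}/g, (match, key) =>
    key in ctx ? String(ctx[key]) : match
  );
}

const path = expandTemplate("{collection}/{request}-{index}.json", {
  collection: "regressions",
  request: "mean-width",
  index: 7,
});
console.log(path); // "regressions/mean-width-7.json"
```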

Describe alternatives you've considered
I'll leave that open for others, as I'm not familiar enough with the software at this stage to make meaningful comments.


@liyasthomas commented on GitHub (Jul 19, 2021):

@stellarpower thank you for the detailed explanation - we'll get back to you with an appropriate answer as fast as possible.

<!-- gh-comment-id:882657278 -->

@stellarpower commented on GitHub (Jul 19, 2021):

Sure, thanks very much! Sorry it's a bit vague - it's not really my area, and this is just one of those situations where I need something to work and I don't really care how, as long as it doesn't come back to bite me later down the line. Your tool looks nice and simple, as I can play with it online, and since it's growing quickly I wanted to contribute in some way.

<!-- gh-comment-id:882661534 -->

@AndrewBastin commented on GitHub (Jul 19, 2021):

Well, we are toying around with an idea called "artifacts" for the test scripts, which lets people define a function that returns a value that can be saved to a JSON file. This is sorta the API we have in mind:

```js
pw.artifact("meanWidthFromSamples", () => {
    var j = JSON.parse(pw.response);
    var total = 0;
    for (var i = 0; i < j.samples.length; i++) {
        total += j.samples[i].width;
    }
    return (total / j.samples.length);
});
```

Once you do this, it will run alongside your tests and emit a file containing all the artifact values in JSON format. Something like this:

```json
{
   "meanWidthFromSamples": 22
}
```

We have test revamps as part of our roadmap, but it's a bit further down the line.

I will update this issue as soon as there is progress made. But this will take some time to be done as we are tackling other features and requests at the moment.
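As a rough illustration of the proposed API, the artifact registry could collect values like this. This is only a minimal sketch: the `pw` object below is a stand-in mock with a hard-coded response, not the real Hoppscotch implementation:

```js
// Sketch of how an artifact registry might work under the hood.
// "pw" here is a mock for illustration, not the shipped Hoppscotch API.
const artifacts = {};

const pw = {
  // Hard-coded sample response body, standing in for a real request.
  response: JSON.stringify({ samples: [{ width: 20 }, { width: 24 }] }),
  artifact(name, fn) {
    // Run the user-supplied function and record its return value by name.
    artifacts[name] = fn();
  },
};

pw.artifact("meanWidthFromSamples", () => {
  const j = JSON.parse(pw.response);
  const total = j.samples.reduce((sum, s) => sum + s.width, 0);
  return total / j.samples.length;
});

// After the run, the collected artifacts can be emitted as a JSON report.
console.log(JSON.stringify(artifacts)); // {"meanWidthFromSamples":22}
```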

<!-- gh-comment-id:882775009 -->

@stellarpower commented on GitHub (Jul 19, 2021):

That's perfect, that's precisely all I ever needed! I couldn't understand why Postman didn't support something just like that. If it's also possible to save each response to a separate file inside a folder (in some relevant way - i.e., this makes obvious sense for the CLI, but I'm not sure how it would be managed from the web UI), that would be exactly what I was looking for. I was also looking to serve a page that would, in effect, allow others on my team to run these tests - as I wrote the bases in Postman, I currently log in and run Newman under GNU screen, and Hoppscotch offers this off the bat. So I'll certainly keep an eye on the repo, and thanks for the rapid response!

<!-- gh-comment-id:882815294 -->

@stellarpower commented on GitHub (Aug 15, 2021):

I was thinking about my testing again today; perhaps I'm just going over what I already said again, but I think what I need right now is less of a tool that can test my API, and more of a tool that can test what my API is doing.

I do the former in other ways to check my deployment, and it seems that Hoppscotch's competitors, and anything my googling for automated testing of web APIs turns up, focus on the former - e.g. does my authentication work, do I get JSON back with the right structure, do I 404 for invalid values, etc.? If this means Hoppscotch isn't the tool I'm looking for, then fair enough. However, as it's FOSS, I think that if it could tick both boxes, it could be a very powerful tool: "Does my data come back okay, and is the data itself sensible and correct?"

I do need to check that everything is okay when I redeploy, but semantically the value in my situation is also in the numbers the API returns, and I think it would certainly be helpful to have one tool to do both, especially as, with a very pretty UI, I could potentially just serve Hoppscotch to my team at large with some instructions. (I guess real bonus points down the line would be some less involved UI wrapper on top that could provide more domain-specific information: I could create test suites using Hoppscotch, but present a simpler interface to external team members that reduced the number of options and allowed selecting variables specific to our project.)

In this sense, if test data for runs can be loaded from a JSON file, then the infrastructure is kinda already there; if outputting it via artifacts, or something else, is available, then it's perhaps more semantic. Some test cases will be checking that everything works, and some will be seeing whether it works well, and those two are probably best handled slightly separately and slightly differently, IMO. Postman let me pass variables into a collection run, but I think variables for my setup and variables for my data are perhaps two different channels; Postman had input for both, but not output. A series of transformations also comes to mind: test cases are a bit like a data pipeline of predicates. Transforming the raw JSON brought back, in steps, into an array of numbers, and finally into a useful CSV file for my team, might be helpful. If I want to indicate an error, it would probably make sense to add columns to that array as I go. If our results are off by 100%, that's less a test failure, as it would be if the infrastructure failed, and more a red cell in a spreadsheet that we need to look into for our algorithm next time.

Sorry for making a brain dump! Just a bit of a user story.
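The pipeline-of-transformations idea above could be sketched roughly like this. The tolerance threshold, column names, and expected value are all made up for illustration:

```js
// Hedged sketch: turning a raw JSON response into a CSV row for a report,
// where an out-of-range value becomes a flagged cell rather than a test failure.
const response = { samples: [{ width: 20 }, { width: 24 }] };

// Step 1: raw JSON -> array of numbers.
const widths = response.samples.map((s) => s.width);

// Step 2: derive the value of interest and a quality flag (thresholds invented).
const mean = widths.reduce((a, b) => a + b, 0) / widths.length;
const expected = 22;
const withinTolerance = Math.abs(mean - expected) / expected <= 0.5;

// Step 3: emit a CSV row; a "red cell" is just a flag column, not a failed test.
const row = [mean.toFixed(2), withinTolerance ? "ok" : "check"].join(",");
console.log("mean_width,status");
console.log(row); // "22.00,ok"
```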

<!-- gh-comment-id:898973411 -->

@stellarpower commented on GitHub (Feb 23, 2022):

Hi, just checking if the artifacts feature has made any progress recently. Is this something that's made its way into the main branch? Cheers!

<!-- gh-comment-id:1049294003 -->

@AndrewBastin commented on GitHub (Feb 24, 2022):

@stellarpower This is a low-priority issue as of right now; we have plans for a completely revamped `hs` API to replace the current `pw` scripting API, with support for more things like artifacts and a more solid foundation and structure for script authors.

We recently did make some changes to make this easy to implement under the hood though!

<!-- gh-comment-id:1050000094 -->

@stellarpower commented on GitHub (Feb 24, 2022):

Okay, thanks for the update! I will check back over time. If you're talking about better scripting with this, then at a high level it sounds like that might tick the boxes for this feature(?)

<!-- gh-comment-id:1050160085 -->