[GH-ISSUE #1323] Is there a way to see response data for streaming API calls (e.g. endpoints meant for receiving server-sent events)? #1317

Open
opened 2026-03-03 19:50:22 +03:00 by kerem · 41 comments

Originally created by @gamerkhang on GitHub (Aug 5, 2022).
Original GitHub issue: https://github.com/ProxymanApp/Proxyman/issues/1323

Originally assigned to: @NghiaTranUIT on GitHub.

Currently you do not see the response data for an API call until it's completed.
However, there are API calls that are meant to stay active to intercept server-sent events, where the response data is continuously streamed.

Is there a way to see this information on Proxyman?


@NghiaTranUIT commented on GitHub (Aug 6, 2022):

We implemented this feature in the past, but it didn't work so well, so we completely removed it 😿

Can you elaborate on what type of streaming API you'd like to check? (Content-Type?)


@ivanmoskalev commented on GitHub (Nov 23, 2022):

I'm guessing server-sent events (`text/event-stream`)


@wesbos commented on GitHub (May 24, 2023):

This would be a neat feature to have. As streaming becomes more popular in the browser, there isn't a single tool I've found that will let you see the streamed response as it's coming in; all of them wait for the request to close before showing the entire payload of data.

A common use case right now is the many GPT chat apps that stream the response in from OpenAI. Many use Web Streams (https://developer.mozilla.org/en-US/docs/Web/API/Streams_API) and others use server-sent events.


@NghiaTranUIT commented on GitHub (May 25, 2023):

I guess I can support the Streaming Body by looking at the Response Headers:

  • If the Content-Length is absent
  • Or Transfer-Encoding: chunked is present

Let me play around and send you a Beta build 👍

Reference: https://gist.github.com/CMCDragonkai/6bfade6431e9ffb7fe88
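That heuristic could be sketched as a small predicate (illustrative JavaScript; the plain-object header map is an assumption, not Proxyman's actual scripting API):

```javascript
// Sketch: classify a response as "streaming" from its headers, per the
// heuristic above. Header lookup is case-insensitive.
function looksLikeStreamingResponse(headers) {
  const get = (name) => {
    const key = Object.keys(headers).find(
      (k) => k.toLowerCase() === name.toLowerCase()
    );
    return key === undefined ? undefined : headers[key];
  };
  const contentType = (get('Content-Type') || '').toLowerCase();
  if (contentType.includes('text/event-stream')) return true; // SSE
  const transferEncoding = (get('Transfer-Encoding') || '').toLowerCase();
  if (transferEncoding.includes('chunked')) return true; // chunked body
  return get('Content-Length') === undefined; // no length -> likely streamed
}
```

Note that the missing-Content-Length case also matches responses that are simply terminated by closing the connection, so this can over-report.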


@SOVRON commented on GitHub (Jul 13, 2023):

Any update on this? I too am trying to intercept our ChatGPT stream API (https://platform.openai.com/docs/api-reference/chat/create#chat/create-stream) request, but the Proxyman Mac app shows nothing. Thanks


@farmisen commented on GitHub (Aug 23, 2023):

I would also be super interested in that feature. Trying to debug our in-house SSE streamed events, and being able to see them as they come in instead of all at once when the last one is sent, would help a ton. I'll definitely be able to help QA that feature if needed.


@reubn commented on GitHub (Nov 5, 2023):

Currently facing this issue as well


@ChristianWeyer commented on GitHub (Dec 21, 2023):

Oh yeah, this is a super helpful feature @NghiaTranUIT - any updates on this?

Thanks!


@NghiaTranUIT commented on GitHub (Dec 22, 2023):

@ChristianWeyer not yet 😢 I tried to implement it, but it breaks our current flow and doesn't meet our requirements. Thus, we postponed it until we find a better solution.

For example:

  • For a single request/response, Proxyman receives a lot of chunks in a very short time (milliseconds) -> causing the UI to update too many times -> lag and unresponsiveness.
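One plausible mitigation for that chunk rate is to buffer incoming chunks and flush them to the UI on a timer, so many chunks cost one repaint. A minimal sketch (the render callback is hypothetical, not Proxyman code):

```javascript
// Sketch: coalesce rapid chunk arrivals into batched UI updates.
function makeChunkBuffer(render, intervalMs = 100) {
  let pending = [];
  let timer = null;
  return function onChunk(chunk) {
    pending.push(chunk);
    if (timer === null) {
      timer = setTimeout(() => {
        timer = null;
        const batch = pending;
        pending = [];
        render(batch); // one UI update for the whole batch
      }, intervalMs);
    }
  };
}
```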

@ChristianWeyer commented on GitHub (Dec 22, 2023):

Thanks for getting back with the details @NghiaTranUIT - do you know of any similar HTTPS debugging proxy tool running on macOS that can handle response streaming?


@NghiaTranUIT commented on GitHub (Dec 22, 2023):

@ChristianWeyer you can use Charles Proxy. However, it's hard to set up, and you might need to follow some tutorials on Google 👍


@ChristianWeyer commented on GitHub (Dec 22, 2023):

Charles is too slow and cumbersome... 😅


@NghiaTranUIT commented on GitHub (Mar 8, 2024):

Good news everyone 🎉

  • Proxyman now supports ServerSentEvent and displays the body as soon as there is new stream data
  • Works if it's SSE with Content-Type: text/event-stream
  • Beta build: https://download.proxyman.io/beta/Proxyman_5.0.0_Support_SSE_v2.dmg

Video

https://github.com/ProxymanApp/Proxyman/assets/5878421/fba011b2-576e-4fcd-9d34-a5e489d19400


@ChristianWeyer @reubn @farmisen @SOVRON please give it a try and share with me the result 👍 I appreciate it 🙇


@lennondotw commented on GitHub (Mar 13, 2024):

It's working! 🎉

But there's an issue if scripting is enabled. I use scripts to add custom headers and don't change the HTTP body data. With scripting enabled, SSE data shows as a stream in Proxyman, but Chrome isn't receiving any data from Proxyman until the request is done. Chrome receives all the SSE data at once.

Can we have an option to tell Proxyman that a script will not modify the HTTP body, so it doesn't have to wait for the entire request to end, but instead returns the data to the client in real time?


@NghiaTranUIT commented on GitHub (Mar 15, 2024):

@reekystive I'm not sure how to implement Scripting with SSE yet.

Currently, when a request matches with Scripting/Breakpoint, the script is executed once the body is fully received -> so we can modify the body (response.body) -> then it writes the entire HTTP Response to the client.


@NghiaTranUIT commented on GitHub (Mar 15, 2024):

@reekystive I'm working on this change. May I ask:

  • Do you use Scripting to modify the header of the SSE Request or the Response part?

@lennondotw commented on GitHub (Mar 15, 2024):

@NghiaTranUIT I only modify the request header with scripts to test APIs in production and test environment. But maybe someone will want to modify the response header, who knows?


@DeniDoman commented on GitHub (Oct 1, 2024):

Thank you for the implementation! The only UX issue I have is the data representation. GPT-like streaming APIs output data word by word, and the current output looks like this:

(screenshot: word-by-word SSE output)

While I'd prefer it to look more natural, like combined text.


If someone else faces this issue, I created a simple online converter that merges data: {"type":"Content","event_type":"data","content":" prompt"} lines into readable output: https://brown-celinka-85.tiiny.site


@ivanmoskalev commented on GitHub (Oct 1, 2024):

> look more natural, like a combined text

Not all SSE APIs would benefit from this. I have worked with an instant messaging product that utilized SSE for messages that were not intended to be joined.


@DeniDoman commented on GitHub (Oct 1, 2024):

> Not all SSE APIs would benefit from this. I have worked with an instant messaging product that utilized SSE for messages that were not intended to be joined.

Sure, I agree.


@Swimburger commented on GitHub (Mar 25, 2025):

There's another format that's commonly streamed: NDJSON (newline-delimited JSON).
https://github.com/ndjson/ndjson-spec
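Since NDJSON is one JSON document per line, an incremental viewer only needs to buffer the trailing partial line between chunks. A minimal sketch:

```javascript
// Sketch: incremental NDJSON parsing. Buffer partial lines until a
// newline arrives, then parse each complete line as one JSON record.
function makeNdjsonParser(onRecord) {
  let buffer = '';
  return function feed(chunk) {
    buffer += chunk;
    const lines = buffer.split('\n');
    buffer = lines.pop(); // keep the trailing partial line for the next chunk
    for (const line of lines) {
      if (line.trim() === '') continue; // skip blank lines
      onRecord(JSON.parse(line));
    }
  };
}
```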


@NghiaTranUIT commented on GitHub (Mar 25, 2025):

FYI, you can prettify each JSON Streaming message by selecting a JSON string -> Right-click -> View as -> Prettify JSON

https://github.com/user-attachments/assets/bf7b4858-ac82-46de-ad6b-5690206f0263


@Swimburger commented on GitHub (Mar 25, 2025):

Oh, I forgot to mention: in my use case using NDJSON, it's actually in the HTTP request, not the response (as opposed to the rest of this thread).
I think streamed requests should receive the same real-time UX in Proxyman.


@avarayr commented on GitHub (Apr 14, 2025):

If anyone here has a use case of debugging OpenAI-like outputs, here's a Custom Tab that shows the content deltas; use this script for a custom tab:

response.customPreviewerTabs["OpenAI SSE"] = (() => {
	const lines = response.body.split('\n');
	let output = '';
	
	for (const line of lines) {
	  if (line.startsWith('data: ')) {
	    try {
	      const json = JSON.parse(line.slice(6));
	      const delta = json.choices?.[0]?.delta;
	      if (delta?.content) {
	        output += delta.content;
	      }
	    } catch (e) {
	      // Ignore malformed JSON lines
	    }
	  }
	}
	
	return output;
})();

@johnib commented on GitHub (Jun 5, 2025):

> if anyone here has a usecase of debugging OpenAI-like outputs Custom Tab that shows the content deltas, use this script for a custom tab
>
> response.customPreviewerTabs["OpenAI SSE"] = (() => {
>   const lines = response.body.split('\n');
>   let output = '';
>
>   for (const line of lines) {
>     if (line.startsWith('data: ')) {
>       try {
>         const json = JSON.parse(line.slice(6));
>         const delta = json.choices?.[0]?.delta;
>         if (delta?.content) {
>           output += delta.content;
>         }
>       } catch (e) {
>         // Ignore malformed JSON lines
>       }
>     }
>   }
>
>   return output;
> })();

Does this still work for you? The concept of a function being invoked directly, I mean. I can't get the Previewer tabs to work in SSE responses.


@NghiaTranUIT commented on GitHub (Jun 5, 2025):

@johnib Maybe I will introduce a native SSE Tab for Open API. I will trim the prefix data: and prettify each JSON part. Does it work for you?


@ChristianWeyer commented on GitHub (Jun 5, 2025):

> @johnib Maybe I will introduce a native SSE Tab for Open API. I will trim the prefix data: and prettify each JSON part. Does it work for you?

Actually, for OpenAI-compatible endpoints, not just OpenAI ;-)


@johnib commented on GitHub (Jun 5, 2025):

I think my problem is different. Even the simplest values aren't being set on the previewer tab.

Not sure how to debug this; even the provided example from Proxyman doesn't work.

The request is getting processed by the script; I can see this in the logs.


@NghiaTranUIT commented on GitHub (Jun 5, 2025):

@ChristianWeyer @johnib let's try this beta build: https://download.proxyman.io/beta/Proxyman_5.20.0_Support_SSE_Tab_for_openapi_endpoints.dmg

Changelogs

  • Add SSE tab to the custom Tabs
  • Auto prettify JSON in the event

Screenshots

(screenshot: Capture ChatGPT completion API with Proxyman)

(screenshot: Capture ChatGPT completion API with Proxyman)


@ChristianWeyer commented on GitHub (Jun 5, 2025):

Thanks, just tried it. It works.
But... how can we see the final response - for humans?


@NghiaTranUIT commented on GitHub (Jun 5, 2025):

> Thanks, just tried it. It works. But... how can we see the final response - for humans?

Just open the Body Tab. It shows the raw SSE events.


@ChristianWeyer commented on GitHub (Jun 5, 2025):

No, I mean the final response. The 'assembled' response.


@NghiaTranUIT commented on GitHub (Jun 5, 2025):

Can you give me an example of "the 'assembled' response"?

It's SSE; the server sends a bunch of events during the connection. There is no "assembled" response.


@ChristianWeyer commented on GitHub (Jun 5, 2025):

Yes, exactly. But especially for LLM calls (like OpenAI), the actual interesting trace is the final response.

When I send a request to OpenAI and get back an answer, it is the text of the LLM response which is of interest. In streamed mode, they use SSE; this is fine, and it's nice how your tool currently handles it.

But the 'assembled' final response is the concatenated content values of all events. We would need a way to 'prettify' this.

Does that make sense?


@NghiaTranUIT commented on GitHub (Jun 5, 2025):

Thanks, I understand it, but each event has a different JSON key-value.

How can I assemble a new one?

Input:

event: delta_encoding
data: "v1"

event: delta
data: {"p": "", "o": "add", "v": {"message": {"id": "93b01adf-5682-4deb-b29a-ef4b41c8500a", "author": {"role": "assistant", "name": null, "metadata": {}}, "create_time": 1749105849.183922, "update_time": null, "content": {"content_type": "text", "parts": [""]}, "status": "in_progress", "end_turn": null, "weight": 1.0, "metadata": {"citations": [], "content_references": [], "message_type": "next", "model_slug": "gpt-4o", "default_model_slug": "gpt-4o", "parent_id": "e1c0683e-cbe4-4428-aca1-c97575e274d5", "model_switcher_deny": []}, "recipient": "all", "channel": null}, "conversation_id": "68413ca4-6a68-8005-923b-e488a003b02e", "error": null}, "c": 0}     

event: delta
data: {"p": "/message/content/parts/0", "o": "append", "v": "Just"}   

event: delta
data: {"v": " checking in \u2014 do you need"}

event: delta
data: {"v": " help with something specific? Let me know!"}    

event: delta
data: {"p": "", "o": "patch", "v": [{"p": "/message/content/parts/0", "o": "append", "v": " \ud83d\ude0a"}, {"p": "/message/status", "o": "replace", "v": "finished_successfully"}, {"p": "/message/end_turn", "o": "replace", "v": true}, {"p": "/message/metadata", "o": "append", "v": {"is_complete": true, "finish_details": {"type": "stop", "stop_tokens": [200002]}}}]}       

data: {"type": "message_stream_complete", "conversation_id": "68413ca4-6a68-8005-923b-e488a003b02e"}

data: {"type": "conversation_detail_metadata", "banner_info": null, "blocked_features": [], "model_limits": [], "limits_progress": [{"feature_name": "deep_research", "remaining": 25, "reset_after": "2025-07-05T06:44:10.212026+00:00"}], "default_model_slug": "gpt-4o", "conversation_id": "68413ca4-6a68-8005-923b-e488a003b02e"}

data: [DONE]

I might need a sample output here.
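For what it's worth, the sample input above could be flattened by treating explicit and implicit "append" operations as text fragments and recursing into "patch" lists. A rough sketch inferred from the sample (the field meanings are assumptions, not documented):

```javascript
// Sketch: reconstruct the final text from ChatGPT-style delta events.
// Based on the sample above: "append" ops (explicit, or implicit when only
// "v" is present) carry text fragments; "patch" carries nested ops.
function assembleDeltas(rawBody) {
  let text = '';
  for (const line of rawBody.split('\n')) {
    if (!line.startsWith('data: ')) continue; // skip event:/blank lines
    const payload = line.slice(6).trim();
    if (payload === '[DONE]') break; // end-of-stream sentinel
    let json;
    try { json = JSON.parse(payload); } catch (e) { continue; }
    text += extractText(json);
  }
  return text;
}

function extractText(op) {
  if (typeof op !== 'object' || op === null) return '';
  // "patch" carries a list of nested operations
  if (op.o === 'patch' && Array.isArray(op.v)) {
    return op.v.map(extractText).join('');
  }
  // explicit or implicit append carries a text fragment in "v"
  if (typeof op.v === 'string' && (op.o === 'append' || op.o === undefined)) {
    return op.v;
  }
  return ''; // "add"/"replace"/metadata events contribute no text
}
```

Status and metadata events (the "add", "replace", and "message_stream_complete" entries) contribute nothing, so the result is just the concatenated message text.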


@ChristianWeyer commented on GitHub (Jun 5, 2025):

See attached a sample answer from OpenAI GPT-4o for a simple RAG query.

The final text response is:

The monitor to which the provided information refers is an OLED monitor with a 21:9 aspect ratio and a screen resolution of 3440x1440 pixels. The screen diagonal is 99.2 cm, corresponding to 39 inches, and the visible screen area is 35.0 dm². These details can be found in sections 10 to 15.

Regarding energy efficiency, the monitor has an on-mode power consumption of 37.8 W for both standard dynamic range (SDR) and high dynamic range (HDR), with the HDR energy efficiency class rated F. Power consumption is 0.3 W in off mode and 0.5 W in standby mode. This information can be found in sections 3 to 8.

The monitor is powered by an external power supply included in the retail packaging. The power supply has an input voltage of 230 V and an output voltage of 19.5 V. The monitor offers no automatic brightness control (ABC) and no voice recognition sensor. These details can be found in sections 24 to 25.3 and 17 to 18.

Response - api.openai.com_v1_chat_completions.txt


@NghiaTranUIT commented on GitHub (Jun 6, 2025):

Sorry, I don't understand how your attached file is assembled. From what I see, it has many individual events.

If you don't mind, it'd be great if you could share with me the expected output for the sample input in my previous comment.


@ChristianWeyer commented on GitHub (Jun 7, 2025):

> Sorry, I don't understand how your attached file is assembled? From what I see, it has many individual events.

This is an export from your tool :-)


@NghiaTranUIT commented on GitHub (Jun 13, 2025):

@ChristianWeyer I understand your suggestion. Working on it now 👍

  • The new view will auto-merge all content keys from the OpenAI stream events, making the OpenAI response much easier to read 👍

@NghiaTranUIT commented on GitHub (Jun 13, 2025):

@ChristianWeyer can you try this beta build: https://download.proxyman.io/beta/Proxyman_5.21.0_Try_to_merge_openai_response.dmg

Changelog

  • Auto merge the content for the /completion and /response endpoints from OpenAI

Demo

https://github.com/user-attachments/assets/2ef7afd6-cf30-4813-b3b3-bf13bb238cdf


@ChristianWeyer commented on GitHub (Jun 17, 2025):

Nice, this is great. Thank you!
