[GH-ISSUE #12] Unexpected token : in JSON at position 426 #12

Closed
opened 2026-03-03 13:52:10 +03:00 by kerem · 6 comments

Originally created by @AtzaMan on GitHub (Apr 4, 2024).
Original GitHub issue: https://github.com/jehna/humanify/issues/12

After running the following command:

npm start -- --key="sk-_your-token_" -o deobfuscated.js obfuscated.js

I get the following error:

SyntaxError: Unexpected token : in JSON at position 426
    at JSON.parse (<anonymous>)
    at codeToVariableRenames (file:///C:/Users/Alexander/Documents/GitHub/humanify/src/openai/openai.ts:76:59)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async file:///C:/Users/Alexander/Documents/GitHub/humanify/src/openai/openai.ts:21:23
    at async Promise.all (index 4)
    at async mapPromisesParallel (file:///C:/Users/Alexander/Documents/GitHub/humanify/src/openai/run-promises-in-parallel.ts:17:5)
    at async client.createChatCompletion.model (file:///C:/Users/Alexander/Documents/GitHub/humanify/src/openai/openai.ts:20:5)
    at async file:///C:/Users/Alexander/Documents/GitHub/humanify/src/index.ts:68:25

kerem 2026-03-03 13:52:10 +03:00
  • closed this issue
  • added the bug label

@jehna commented on GitHub (Jun 19, 2024):

I'm pretty sure this is because OpenAI does not guarantee that the function calls are valid JSON. We should probably implement quick retry logic for it 🤔
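
Such retry logic could look roughly like the sketch below (a hypothetical helper, not the project's actual code): re-run the completion whenever JSON.parse fails, and give up after a few attempts:

```typescript
// Hypothetical retry wrapper: re-requests the completion whenever the
// model's output fails to parse as JSON. Names are illustrative only.
async function parseJsonWithRetry<T>(
  requestCompletion: () => Promise<string>,
  maxAttempts = 3
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const raw = await requestCompletion();
    try {
      return JSON.parse(raw) as T;
    } catch (error) {
      lastError = error; // invalid JSON: try again with a fresh completion
    }
  }
  throw lastError;
}
```

Because each attempt issues a fresh (non-deterministic) completion, a retry has a real chance of producing valid JSON even when the previous attempt did not.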


@0xdevalias commented on GitHub (Jun 20, 2024):

Can you use the new JSON mode or tool choice or similar to force it?

  • https://platform.openai.com/docs/guides/text-generation/json-mode
    • A common way to use Chat Completions is to instruct the model to always return a JSON object that makes sense for your use case, by specifying this in the system message. While this does work in some cases, occasionally the models may generate output that does not parse to valid JSON objects.

      To prevent these errors and improve model performance, when using gpt-4o, gpt-4-turbo, or gpt-3.5-turbo, you can set response_format to { "type": "json_object" } to enable JSON mode. When JSON mode is enabled, the model is constrained to only generate strings that parse into valid JSON objects.

    • https://platform.openai.com/docs/api-reference/chat/create#chat-create-response_format
      • response_format
        An object specifying the format that the model must output. Compatible with GPT-4 Turbo and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106.

        Setting to { "type": "json_object" } enables JSON mode, which guarantees the message the model generates is valid JSON.

        Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.

  • https://platform.openai.com/docs/guides/function-calling
    • https://platform.openai.com/docs/guides/function-calling/function-calling-behavior
      • Function calling behavior
        The default behavior for tool_choice is tool_choice: "auto". This lets the model decide whether to call functions and, if so, which functions to call.

        We offer three ways to customize the default behavior depending on your use case:

        • To force the model to always call one or more functions, you can set tool_choice: "required". The model will then select which function(s) to call.
        • To force the model to call only one specific function, you can set tool_choice: {"type": "function", "function": {"name": "my_function"}}.
        • To disable function calling and force the model to only generate a user-facing message, you can set tool_choice: "none".
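
To make the two options above concrete, here is a sketch of the request body shapes. Field names follow the public Chat Completions API; the model name, prompts, and the rename_variable function are placeholders, not humanify's actual code:

```typescript
// Option A: JSON mode — constrain free-form output to valid JSON.
// Per the docs above, the prompt must still ask for JSON explicitly.
const jsonModeRequest = {
  model: "gpt-4-turbo", // placeholder model name
  response_format: { type: "json_object" },
  messages: [
    { role: "system", content: "Reply only with a JSON object." },
    { role: "user", content: "Suggest a better name for variable `a`." },
  ],
};

// Option B: force a specific function call via tool_choice, so the
// arguments come back shaped by the function's JSON Schema.
const forcedToolRequest = {
  model: "gpt-4-turbo",
  messages: [{ role: "user", content: "Suggest a better name for variable `a`." }],
  tools: [
    {
      type: "function",
      function: {
        name: "rename_variable", // hypothetical function
        parameters: {
          type: "object",
          properties: { newName: { type: "string" } },
          required: ["newName"],
        },
      },
    },
  ],
  tool_choice: { type: "function", function: { name: "rename_variable" } },
};
```

Note that neither option guaranteed schema conformance at the time: JSON mode only guaranteed syntactically valid JSON, and forced tool calls could still produce malformed arguments.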

@0xdevalias commented on GitHub (Jun 24, 2024):

Potentially related:

  • https://github.com/jehna/humanify/issues/17
  • https://github.com/jehna/humanify/issues/18
  • https://github.com/jehna/humanify/issues/22

@0xdevalias commented on GitHub (Jul 3, 2024):

Can you use the new JSON mode or tool choice or similar to force it?

I'm not sure which version of the SDK response_format: { "type": "json_object" } / tool_choice became available in. This project currently seems to use openai 3.3.0, whereas the latest version is 4.52.3 (at time of writing). I created a more specific issue about upgrading the library, which may end up being a prerequisite to using tool_choice or similar:

  • https://github.com/jehna/humanify/issues/19

@jehna commented on GitHub (Aug 7, 2024):

OpenAI API now guarantees structured output:

https://openai.com/index/introducing-structured-outputs-in-the-api/

(should fix this issue properly)
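
A sketch of what that could look like: Structured Outputs adds a json_schema variant of response_format with strict schema enforcement, so the response is guaranteed to conform to the supplied schema rather than merely being valid JSON. The schema and model name below are illustrative, not the project's actual code:

```typescript
// Structured Outputs: the API guarantees the message conforms to this
// strict JSON Schema. Schema content and model name are placeholders.
const structuredRequest = {
  model: "gpt-4o-2024-08-06", // first model generation with Structured Outputs
  messages: [{ role: "user", content: "Suggest a better name for variable `a`." }],
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "variable_rename",
      strict: true, // enforce exact schema conformance
      schema: {
        type: "object",
        properties: { newName: { type: "string" } },
        required: ["newName"],
        additionalProperties: false,
      },
    },
  },
};
```

With strict schema enforcement, the JSON.parse failure in this issue's stack trace should no longer be reachable, which is why this closes the issue properly rather than papering over it with retries.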


@0xdevalias commented on GitHub (Aug 12, 2024):

This should now be fixed in v2 since there's the long-awaited JSON mode with the new structured outputs. Please take a look and reopen if anything comes up.

Originally posted by @jehna in https://github.com/jehna/humanify/issues/22#issuecomment-2282876269

See also:

  • https://github.com/jehna/humanify/issues/31