[GH-ISSUE #4] Add support for OpenAI's GPT-4 turbo (gpt-4-1106-preview) model + GPT-4o + GPT-4o mini #2

Closed
opened 2026-03-03 13:52:04 +03:00 by kerem · 6 comments
Owner

Originally created by @0xdevalias on GitHub (Nov 13, 2023).
Original GitHub issue: https://github.com/jehna/humanify/issues/4

Since OpenAI released the GPT-4 Turbo model (`gpt-4-1106-preview`) and reduced the prices of GPT-4 at their recent dev day, it would be cool if this tool was able to support using those as well.

## Further Reading

- https://openai.com/blog/new-models-and-developer-products-announced-at-devday
- https://platform.openai.com/docs/models
  - > New models launched at DevDay
    >
    > We are excited to announce the preview release of [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) (128k context window) and an updated [GPT-3.5 Turbo](https://platform.openai.com/docs/models/gpt-3-5) (16k context window). Among other things, both models come with improved instruction following, JSON mode, more reproducible outputs, and parallel function calling.
- https://openai.com/pricing

  | Model | Input | Output |
  | -- | -- | -- |
  | gpt-4-1106-preview | $0.01 / 1K tokens | $0.03 / 1K tokens |

## See Also

- https://github.com/jehna/humanify/issues/5
kerem closed this issue 2026-03-03 13:52:04 +03:00
Author
Owner

@0xdevalias commented on GitHub (Nov 13, 2023):

A quick search of the repo suggests changes would need to be made to at least the following files:

- https://github.com/jehna/humanify/blob/main/src/index.ts#L38-L44
- https://github.com/jehna/humanify/blob/main/src/openai/openai.ts#L30
Author
Owner

@jehna commented on GitHub (Nov 13, 2023):

Yes, great point! I think it would make sense to even parametrize the model as a command line argument 🤔

Author
Owner

@0xdevalias commented on GitHub (Nov 13, 2023):

> I think it would make sense to even parametrize the model as a command line argument

Yeah, I was thinking that too. Though maybe it could still have some 'friendly aliases' built in or similar, so that the end user doesn't need to know the exact model name they need (specifically thinking about the current preview version of `gpt-4-turbo` in particular).

There are APIs for querying the available models too, so if you wanted to get really fancy and not hardcode things, the CLI could potentially fetch the available models from that endpoint and cache them, then tell the user which ones could be used. Though that might be overkill.
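The 'friendly aliases' idea above could be sketched roughly like this. Note this is purely illustrative — the `MODEL_ALIASES` table and `resolveModel` helper are hypothetical names, not part of humanify's actual code, and the preview-model mapping reflects the names current at the time of the comment:

```typescript
// Hypothetical sketch: map friendly aliases to concrete OpenAI model IDs,
// so users can pass e.g. `--model gpt-4-turbo` without knowing the exact
// preview name. Unknown names pass through unchanged, so exact model IDs
// still work.
const MODEL_ALIASES: Record<string, string> = {
  "gpt-4-turbo": "gpt-4-1106-preview", // preview name at the time of writing
  "gpt-3.5": "gpt-3.5-turbo-1106",
};

function resolveModel(nameOrAlias: string): string {
  return MODEL_ALIASES[nameOrAlias] ?? nameOrAlias;
}
```

The passthrough for unknown names keeps the alias table optional rather than a gatekeeper: new models work immediately, and aliases only add convenience.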
Author
Owner

@0xdevalias commented on GitHub (Jul 19, 2024):

GPT-4o mini was announced today; more than 60% cheaper than GPT-3.5 Turbo:

- https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/
  - > GPT-4o mini scores 82% on MMLU and currently outperforms GPT-4 on chat preferences in the LMSYS leaderboard. It is priced at 15 cents per million input tokens and 60 cents per million output tokens, an order of magnitude more affordable than previous frontier models and more than 60% cheaper than GPT-3.5 Turbo.
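As a back-of-the-envelope check on what those quoted prices mean for a deobfuscation run, a tiny cost helper (hypothetical, using only the per-million-token figures quoted above):

```typescript
// Estimate gpt-4o-mini cost in USD from the quoted pricing:
// $0.15 per 1M input tokens, $0.60 per 1M output tokens.
function costUsd(inputTokens: number, outputTokens: number): number {
  return (inputTokens * 0.15 + outputTokens * 0.6) / 1_000_000;
}
```

For example, a bundle that consumes 100k input tokens and produces 20k output tokens would cost about `costUsd(100_000, 20_000)` ≈ $0.027, i.e. renaming a sizeable minified file costs a few cents.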
Author
Owner

@0xdevalias commented on GitHub (Aug 12, 2024):

> Hey! Seems that this PR got auto-closed because of the release of v2.
>
> Thank you for your hard work and interest in improving humanify, especially @bilalba for maintaining a modernised fork while I've been unresponsive.
>
> The v2 now has dependabot and automated merges for dependency updates that pass the tests, so dependencies should be easier to keep up to date. I've also made `gpt-4o-mini` the default model and added the long-awaited JSON mode with the new [structured outputs](https://openai.com/index/introducing-structured-outputs-in-the-api/).
>
> The v2 does not count tokens anymore, but it uses the same AST-based precise approach that's been working for local renames. This should ensure that all the variable names are renamed, while not overloading the context limit of gpt-4o and others. I've experienced that some models (like Claude Opus) are much better at utilizing the full context window than others (like gpt-4o). Directing the LLM's focus to a small part that's only a fraction of its context window has worked the best in my testing. The v2 window size is super small now, but I'd be happy to increase it (or make it configurable via a CLI flag) if needed.
>
> _Originally posted by @jehna in https://github.com/jehna/humanify/issues/21#issuecomment-2282875396_
Author
Owner

@0xdevalias commented on GitHub (Aug 12, 2024):

Since the v2 CLI's OpenAI feature seems to allow the model name to be specified via `--model`, and defaults to `gpt-4o-mini`, I consider this issue implemented now:

https://github.com/jehna/humanify/blob/a6b09993dd93843a4a76556e4ed91b073f6a50b1/src/commands/openai.ts#L9-L12

See also:

- https://github.com/jehna/humanify/issues/31