
Add support for OpenAI's GPT-4 turbo (gpt-4-1106-preview) model + GPT-4o + GPT-4o mini #4

Closed
0xdevalias opened this issue Nov 13, 2023 · 6 comments


0xdevalias commented Nov 13, 2023

Since OpenAI released the GPT-4 Turbo model (gpt-4-1106-preview) and reduced GPT-4 prices at their recent dev day, it would be great if this tool supported those models as well.

Further Reading

See Also

@0xdevalias 0xdevalias changed the title Add support for gpt-4 turbo (gpt-4-1106-preview) model Add support for OpenAI's gpt-4 turbo (gpt-4-1106-preview) model Nov 13, 2023
0xdevalias (Author)

A quick search of the repo suggests changes would need to be made to at least the following files:

jehna (Owner) commented Nov 13, 2023

Yes, great point! I think it would make sense to even parametrize the model as a command line argument 🤔

0xdevalias (Author)

> I think it would make sense to even parametrize the model as a command line argument

Yeah, I was thinking that too. Though maybe it could still have some 'friendly aliases' built in, so that the end user doesn't need to know the exact model name (I'm specifically thinking of the current preview version of gpt-4-turbo here).
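A minimal sketch of the 'friendly aliases' idea; the alias names and mappings below are purely illustrative, not anything the project has committed to:

```typescript
// Map stable, user-facing names to the exact (possibly preview)
// model identifiers, so users don't need to track preview names.
const MODEL_ALIASES: Record<string, string> = {
  "gpt-4-turbo": "gpt-4-1106-preview", // preview name at the time of writing
  "gpt-4o-mini": "gpt-4o-mini",
};

// Resolve an alias to a concrete model id; unknown names pass through
// unchanged so users can still specify any model directly.
function resolveModel(name: string): string {
  return MODEL_ALIASES[name] ?? name;
}
```

This keeps the CLI forward-compatible: new or custom model names work immediately, and only the aliases need updating when a preview graduates.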

There are also APIs for querying the available models, so if you wanted to get really fancy and not hardcode things, the CLI could fetch the available models from that endpoint, cache them, and tell the user which ones can be used. Though that might be overkill.
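The model-listing idea could look roughly like this, assuming OpenAI's `GET /v1/models` endpoint and an `OPENAI_API_KEY` environment variable; the function names are hypothetical:

```typescript
type Model = { id: string };

// Pure helper: pick GPT model ids out of a /v1/models response body.
function gptModelIds(models: Model[]): string[] {
  return models
    .map((m) => m.id)
    .filter((id) => id.startsWith("gpt-"))
    .sort();
}

// Fetch the model list from OpenAI's "list models" endpoint and
// filter it down to GPT models the user could pass to --model.
async function fetchGptModels(apiKey: string): Promise<string[]> {
  const res = await fetch("https://api.openai.com/v1/models", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`OpenAI /v1/models failed: ${res.status}`);
  const body = (await res.json()) as { data: Model[] };
  return gptModelIds(body.data);
}
```

Caching the result (e.g. to a file with a short TTL) would avoid an extra network round-trip on every CLI invocation.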

@0xdevalias 0xdevalias changed the title Add support for OpenAI's gpt-4 turbo (gpt-4-1106-preview) model Add support for OpenAI's GPT-4 turbo (gpt-4-1106-preview) model + GPT-4o + GPT-4o mini Jul 19, 2024
0xdevalias (Author)

GPT-4o mini was announced today; ~60% cheaper than GPT-3.5 Turbo:

  • https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/
    • "GPT-4o mini scores 82% on MMLU and currently outperforms GPT-4 on chat preferences in LMSYS leaderboard. It is priced at 15 cents per million input tokens and 60 cents per million output tokens, an order of magnitude more affordable than previous frontier models and more than 60% cheaper than GPT-3.5 Turbo."

0xdevalias (Author)

Hey! Seems that this PR got auto-closed because of the release of v2.

Thank you for your hard work and interest in improving humanify, especially @bilalba for maintaining a modernised fork while I've been unresponsive.

The v2 has now dependabot and automated merges for dependency updates that pass the tests, so dependencies should be easier to be kept up to date. I've also made gpt-4o-mini the default model and added the long awaited JSON mode with the new structured outputs.

The v2 does not count tokens anymore, but it uses the same AST-based precise approach that's been working for local renames. This should ensure that all the variable names are renamed, while not overloading the context limit of gpt-4o and others. I've found that some models (like Claude Opus) are much better at utilizing the full context window than others (like gpt-4o). Directing the LLM's focus to a small part that's only a fraction of its context window has worked best in my testing. The v2 window size is super small now, but I'd be happy to increase it (or make it configurable via a CLI flag) if needed.

Originally posted by @jehna in #21 (comment)

0xdevalias (Author) commented Aug 12, 2024

Since the v2 CLI's OpenAI feature seems to allow the model name to be specified via --model, and defaults to gpt-4o-mini, I consider this issue implemented now:

```typescript
export const openai = cli()
  .name("openai")
  .description("Use OpenAI's API to unminify code")
  .option("-m, --model <model>", "The model to use", "gpt-4o-mini")
```
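To illustrate how a `--model` flag with a default resolves, here is a minimal stand-in (not the real CLI's parser, which the snippet above suggests is commander-style):

```typescript
// Sketch: resolve the --model flag from argv, falling back to the
// default ("gpt-4o-mini") when the flag is absent.
function resolveModelFlag(argv: string[], fallback = "gpt-4o-mini"): string {
  const i = argv.findIndex((a) => a === "-m" || a === "--model");
  return i >= 0 && argv[i + 1] !== undefined ? argv[i + 1] : fallback;
}
```

So `humanify openai file.js` would use gpt-4o-mini, while passing `--model gpt-4o` overrides it.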

See also:
