Background: I developed a rudimentary way to reduce token count for long prompts by concatenating adjacent words above a certain length. This can cut prompt tokens by a few percent, which can be significant for companies whose API costs are driven by prompt token usage (independent of their completion token costs, which remain constant).
Question for tokenizing: Does this approach have any negative effect? The output seems unaffected, and completions return normally.
See this thread for some of the pros/cons: https://community.openai.com/t/removing-spaces-from-prompts-to-maximize-character-limits-i-e-in-gpt-config/684125
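For anyone who wants to check the token impact on their own prompts, here is a minimal sketch (not the exact method described above; the `min_len` threshold and the concatenation rule are illustrative assumptions) that measures prompt token counts before and after joining adjacent long words, using OpenAI's tiktoken library:

```python
# Sketch: compare token counts before and after removing the space between
# adjacent "long" words. The length threshold is a hypothetical parameter,
# not the author's actual rule.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")  # cl100k_base tokenizer

def concat_long_words(text: str, min_len: int = 6) -> str:
    """Drop the space between two adjacent words when both are at least min_len chars."""
    words = text.split(" ")
    out = [words[0]] if words else []
    for prev, word in zip(words, words[1:]):
        if len(prev) >= min_len and len(word) >= min_len:
            out[-1] += word      # concatenate: no space between the two words
        else:
            out.append(word)     # keep the normal space
    return " ".join(out)

prompt = ("Summarize the following quarterly financial statement and highlight "
          "significant variances against the previous reporting period.")
compressed = concat_long_words(prompt)

print(len(enc.encode(prompt)), "tokens before")
print(len(enc.encode(compressed)), "tokens after")
```

Because the BPE vocabulary typically folds the leading space into the following word's token (e.g. " the" is a single token), removing a space does not always reduce the count, so measuring per prompt like this is the safest way to see whether the saving is real.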