Feat/add ability to give custom token modifier costs per ai model #563
Conversation
- Implemented a configuration object to store custom token ratios for each AI model.
- Included functionality to adjust token ratios for each model.
- Default token ratio set to 1 if no custom ratio is provided for a model.

This change allows for charging different token costs for each model, enabling flexible pricing strategies (e.g., higher rates for GPT-4, discounted rates for GPT-3.5).
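For illustration, a minimal sketch of what such a configuration object could look like; the actual shape and property names in ServerConfig.ts may differ.

```typescript
// Hypothetical per-model ratio map; the keys and values are illustrative only.
const tokenModifierRatio: Record<string, number> = {
    'gpt-4': 2, // charge double for GPT-4
    'gpt-3.5-turbo': 0.5, // discounted rate for GPT-3.5
};

// Models without an entry fall back to a ratio of 1.
const ratioFor = (model: string) => tokenModifierRatio[model] ?? 1;
```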
change: removed 'TOKEN_RATIOS' object from AIController.ts
feat: Added tokenModifierRatio to ServerBuilder.ts
@TroyceGowdy Look at these lines:
casualos/src/aux-records/AIController.ts, lines 453 to 459 in b45397c:

    if (result.totalTokens > 0) {
        await this._metrics.recordChatMetrics({
            userId: request.userId,
            createdAtMs: Date.now(),
            tokens: result.totalTokens,
        });
    }
The tokenModifierRatio needs to be looked up for the model that was used and multiplied against result.totalTokens, and the result of that multiplication needs to be used for recording the chat metrics. This needs to happen for both chat() and chatStream(). Obviously, if no modifier exists for the model, then it needs to behave as if the modifier is 1.
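A sketch of what the adjusted block could look like, assuming a hypothetical _getTokenModifierRatio() helper and that the request object exposes the model name; this illustrates the requested change, not the actual diff.

```typescript
// Sketch of the adjusted snippet inside chat() (and likewise chatStream()).
// _getTokenModifierRatio() is a hypothetical helper that returns the configured
// ratio for the model, or 1 when none exists.
const tokenModifierRatio = this._getTokenModifierRatio(request.model);
if (result.totalTokens > 0) {
    await this._metrics.recordChatMetrics({
        userId: request.userId,
        createdAtMs: Date.now(),
        // Record the token count scaled by the per-model modifier.
        tokens: result.totalTokens * tokenModifierRatio,
    });
}
```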
I also noticed that the AIController doesn't get any token modifiers for images even though they can be configured in ServerConfig.
Additionally, you need to actually use the token modifiers for generateImage() by multiplying the square pixels by the modifier for the model.
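Along the same lines, a hedged sketch for the image path, assuming the request exposes width/height and that image metrics are recorded analogously to chat metrics; the actual metric call and field names in AIController.ts may differ.

```typescript
// Hypothetical fragment for generateImage(); the metrics call and field names
// are assumptions modeled on the chat path above.
const imageModifierRatio = this._getTokenModifierRatio(request.model);
const squarePixels = request.width * request.height;
await this._metrics.recordImageMetrics({
    userId: request.userId,
    createdAtMs: Date.now(),
    // Charge the pixel area scaled by the per-model modifier.
    squarePixels: squarePixels * imageModifierRatio,
});
```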
Finally, you need to run all the tests and fix any that were broken by adding the tokenModifierRatio property to AIChatOptions.
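For reference, a sketch of what the new option could look like on the interface; the exact type of tokenModifierRatio in AIChatOptions (a per-model map here) is an assumption.

```typescript
// Hypothetical excerpt of AIChatOptions; only the new property is shown.
export interface AIChatOptions {
    // ...existing options omitted...

    /**
     * The token cost multiplier for each chat model.
     * Models without an entry behave as if the ratio is 1.
     */
    tokenModifierRatio?: Record<string, number>;
}
```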
chore: add token modifier tests
- Implemented 'tokenModifierRatio' for AI models in ServerConfig.ts
- Added `_calculateTokenCost` function in AIController.ts to calculate the modifier ratio for a given model.
- Added tests to AIController.spec.ts to ensure proper operation.

This allows us to set custom token-to-cost ratios for each model, enabling differential pricing or discounts based on the AI model in use.
fixes #393
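A standalone sketch of the `_calculateTokenCost` idea described in the commit message above; the real method lives on AIController and its exact signature is an assumption.

```typescript
// Standalone illustration of the _calculateTokenCost() idea; not the actual method.
type TokenRatios = Record<string, number>;

function calculateTokenCost(
    ratios: TokenRatios | undefined,
    model: string,
    totalTokens: number
): number {
    // Fall back to a ratio of 1 when the model has no configured modifier.
    return totalTokens * (ratios?.[model] ?? 1);
}

// Example: GPT-4 billed at double the base rate, other models unchanged.
calculateTokenCost({ 'gpt-4': 2 }, 'gpt-4', 100); // => 200
calculateTokenCost({ 'gpt-4': 2 }, 'gpt-3.5-turbo', 100); // => 100
```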