Right now we have a fairly ugly setup for switching between Llama, XAI, Claude, OpenAI, and local models.
We probably want completions to be a service, with more effort put into moving it out of the runtime.
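A minimal sketch of what "completions as a service" could look like, assuming a TypeScript codebase. The names here (`CompletionService`, `CompletionRequest`, `EchoCompletionService`) are hypothetical, not existing code:

```typescript
// Hypothetical interface: the runtime would depend on this service
// instead of branching on provider-specific logic inline.
interface CompletionRequest {
  prompt: string;
  temperature?: number;
  maxTokens?: number;
}

interface CompletionService {
  complete(req: CompletionRequest): Promise<string>;
}

// Example: a trivial in-memory implementation, useful for running the
// bot with no API keys at all (e.g. in tests).
class EchoCompletionService implements CompletionService {
  async complete(req: CompletionRequest): Promise<string> {
    return `echo: ${req.prompt}`;
  }
}
```

Each provider (OpenAI, XAI, Claude, Llama, local) would then be one implementation of the same interface, and the runtime only ever sees `CompletionService`.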
Make sure the bot works with no API keys.
Make sure it works with all OpenAI keys.
Add a helper function that correctly figures out the endpoint, etc., without needing to juggle variables in the .env file.
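A sketch of such a helper. The base URLs for OpenAI, xAI, and Anthropic are their public API endpoints; the Llama/local fallback assumes a llama.cpp-style server on its default port, and the env variable names are illustrative assumptions:

```typescript
// Hypothetical helper: map a provider to its endpoint and the env var
// that holds its key, so callers stop juggling .env entries directly.
type Provider = "openai" | "xai" | "claude" | "llama" | "local";

interface Endpoint {
  baseUrl: string;
  apiKeyEnv: string | null; // env var holding the key, or null for local
}

function resolveEndpoint(provider: Provider): Endpoint {
  switch (provider) {
    case "openai":
      return { baseUrl: "https://api.openai.com/v1", apiKeyEnv: "OPENAI_API_KEY" };
    case "xai":
      return { baseUrl: "https://api.x.ai/v1", apiKeyEnv: "XAI_API_KEY" };
    case "claude":
      return { baseUrl: "https://api.anthropic.com/v1", apiKeyEnv: "ANTHROPIC_API_KEY" };
    case "llama":
      // Hosted Llama endpoints vary; assume an env override with a
      // llama.cpp-style local server as the fallback.
      return {
        baseUrl: process.env.LLAMA_BASE_URL ?? "http://localhost:8080/v1",
        apiKeyEnv: "LLAMA_API_KEY",
      };
    case "local":
      return { baseUrl: "http://localhost:8080/v1", apiKeyEnv: null };
  }
}
```

A null `apiKeyEnv` is what lets the "works with no API keys" requirement fall out naturally: local providers simply never look one up.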
The character file should configure which models are used where, and from which company.
Maybe we have a "fast/cheap" and a "slow/powerful" option for each provider, so that shouldRespond and other frequently hit APIs run on fast/cheap, while responses use slow/powerful.
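The two points above could be sketched as a single per-character config shape. Everything here (field names, the example model names) is a hypothetical illustration, not the actual character file format:

```typescript
// Hypothetical shape: the character file names the company and the model
// for each tier, so high-traffic calls like shouldRespond can use the
// fast/cheap model while user-facing replies use the slow/powerful one.
type ModelTier = "fast" | "slow";

interface CharacterModelConfig {
  provider: "openai" | "xai" | "claude" | "llama" | "local";
  models: Record<ModelTier, string>;
}

const character: { name: string; modelConfig: CharacterModelConfig } = {
  name: "example",
  modelConfig: {
    provider: "openai", // which company serves this character
    models: { fast: "gpt-4o-mini", slow: "gpt-4o" }, // example model names
  },
};

// Callers ask for a tier, never a concrete model name:
function modelFor(cfg: CharacterModelConfig, tier: ModelTier): string {
  return cfg.models[tier];
}
```

Keeping tier selection at the call site (`modelFor(cfg, "fast")` in shouldRespond, `modelFor(cfg, "slow")` for replies) avoids adding any new classes, in line with the minimal-abstraction goal below.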
Generally clean up the response handling and make everything nice.
Make sure frequency penalty and presence penalty work for Grok and OpenAI, and that repetition penalty works for Llama.
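A sketch of normalizing those settings per provider. OpenAI's Chat Completions API (and Grok's OpenAI-compatible API) take `frequency_penalty` and `presence_penalty`; `repeat_penalty` is the llama.cpp convention, assumed here for the Llama/local side:

```typescript
// Hypothetical normalizer: one penalty struct in, provider-specific
// request parameters out.
interface Penalties {
  frequency: number;  // OpenAI/Grok frequency_penalty
  presence: number;   // OpenAI/Grok presence_penalty
  repetition: number; // llama.cpp-style repeat_penalty
}

function penaltyParams(
  provider: "openai" | "xai" | "llama" | "local",
  p: Penalties
): Record<string, number> {
  if (provider === "llama" || provider === "local") {
    // llama.cpp-style backends use a single multiplicative repeat_penalty.
    return { repeat_penalty: p.repetition };
  }
  // OpenAI and Grok share the OpenAI parameter names.
  return { frequency_penalty: p.frequency, presence_penalty: p.presence };
}
```

The merged object can then be spread into the request body, so the rest of the response handling never branches on provider.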
Add as LITTLE abstraction as possible, with as few new classes or files as absolutely necessary. Complexity is our enemy.