The Feature

If the service is provided a model name of gpt-4o or gpt-4o-mini, it would be validated. However, if the service were provided a model name of gpt-100x, it wouldn't be validated, since that model doesn't exist in the proxy config.
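sample config — a minimal sketch of what the referenced proxy config might look like, using litellm's standard model_list layout; the exact entries and key references here are illustrative assumptions, not the original config:

```yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
  - model_name: gpt-4o-mini
    litellm_params:
      model: openai/gpt-4o-mini
      api_key: os.environ/OPENAI_API_KEY
```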
I was trying to use utils.get_valid_models() like this:

from litellm import utils  # get_valid_models lives in litellm's utils module

# model_name and logger come from the surrounding service code
valid_models = utils.get_valid_models()
if model_name not in valid_models:
    logger.error(f"Invalid model name: {model_name}. Valid models are: {valid_models}")
Motivation, pitch
Are you an ML Ops Team?
Yes
Twitter / LinkedIn details
No response
…models based on key (#7538)
* test(test_utils.py): initial test for valid models
Addresses #7525
* fix: test
* feat(fireworks_ai/transformation.py): support retrieving valid models from fireworks ai endpoint
* refactor(fireworks_ai/): support checking model info on `/v1/models` route
* docs(set_keys.md): update docs to clarify check llm provider api usage
* fix(watsonx/common_utils.py): support 'WATSONX_ZENAPIKEY' for iam auth
* fix(watsonx): read in watsonx token from env var
* fix: fix linting errors
* fix(utils.py): fix provider config check
* style: cleanup unused imports
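Based on the commit above, retrieving valid models directly from a provider endpoint looks roughly like this. This is a sketch: the check_provider_endpoint argument and the FIREWORKS_AI_API_KEY variable name are inferred from the commit and docs changes referenced here, so details may differ across litellm versions.

```python
import os
from litellm import utils

# get_valid_models() returns models for providers whose API keys are set
# in the environment (key-based discovery).
os.environ["FIREWORKS_AI_API_KEY"] = "fw-..."  # placeholder key

# With check_provider_endpoint=True, litellm also queries the provider's
# models endpoint (e.g. Fireworks AI's /v1/models) for a live model list.
valid_models = utils.get_valid_models(check_provider_endpoint=True)
print(valid_models)
```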
rajatvig pushed a commit to rajatvig/litellm that referenced this issue on Jan 16, 2025
…models based on key (BerriAI#7538)
* test(test_utils.py): initial test for valid models
Addresses BerriAI#7525
* fix: test
* feat(fireworks_ai/transformation.py): support retrieving valid models from fireworks ai endpoint
* refactor(fireworks_ai/): support checking model info on `/v1/models` route
* docs(set_keys.md): update docs to clarify check llm provider api usage
* fix(watsonx/common_utils.py): support 'WATSONX_ZENAPIKEY' for iam auth
* fix(watsonx): read in watsonx token from env var
* fix: fix linting errors
* fix(utils.py): fix provider config check
* style: cleanup unused imports