Deduplicate logging errors for "model not found" #65
Comments
Hey @cvarrei, many thanks for reporting this! I agree with the first proposed solution as a quick fix, although we may think of a more in-depth solution in the near future for handling the difference between providers (such as Mistral in this case) and clients (such as LiteLLM here). Concerning the second solution, …
A more in-depth solution for the first case, which simplifies the task and merges with the initialisation-parameters PR, could be to require the user to specify at initialisation the tracer they will be using, for example:
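A minimal sketch of what that could look like (the `providers` argument is a hypothetical name for illustration, not the actual API):

```python
# Hypothetical usage: the user declares which tracer(s) to instrument
# at initialisation instead of relying on auto-detection.
from ecologits import EcoLogits

EcoLogits.init(providers=["litellm"])  # only the LiteLLM tracer is set up
```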
And by modifying the init() in ecologits.py along these lines:
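The original code block did not survive here; a hedged reconstruction of the idea, with assumed import paths and instrumentor names, might look like:

```python
# Sketch only: dispatch explicitly on the providers the user asked for,
# instead of auto-detecting installed packages via importlib.util.find_spec.
# The import paths and class names below are assumptions.
from ecologits.tracers.openai_tracer import OpenAIInstrumentor
from ecologits.tracers.litellm_tracer import LiteLLMInstrumentor

_INSTRUMENTORS = {
    "openai": OpenAIInstrumentor,
    "litellm": LiteLLMInstrumentor,
}

def init(providers: list[str]) -> None:
    # Instrument only what the user explicitly requested.
    for provider in providers:
        if provider not in _INSTRUMENTORS:
            raise ValueError(f"Unknown provider: {provider!r}")
        _INSTRUMENTORS[provider]().instrument()
```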
Thus, you can remove init_instruments() and the importlib find_spec call. In this way, there is no automatic initialisation depending on which modules are installed, only on where the user wants to integrate it. With this setting, there is no unwanted secondary initialisation (as was the case with …).
Two things here: …
@samuelrince @cvarrei A proper logger should now be implemented via #78, thereby closing this issue, but please re-open it if you consider that the problem is not solved.
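For context, one common shape for deduplicating repeated log messages (not necessarily what #78 does) is a logging.Filter that drops records it has already seen:

```python
import logging

class DedupFilter(logging.Filter):
    """Drop log records whose formatted message was already emitted once."""

    def __init__(self) -> None:
        super().__init__()
        self._seen: set[str] = set()

    def filter(self, record: logging.LogRecord) -> bool:
        msg = record.getMessage()
        if msg in self._seen:
            return False  # suppress exact duplicates
        self._seen.add(msg)
        return True

logging.getLogger("ecologits").addFilter(DedupFilter())
```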
Describe the bug
The script checks, every time it computes an impact, whether the model exists for a specific provider. This leads EcoLogits to frantically print the message ‘Could not find model “xxxxx” for provider xxxx’ whenever the model/provider is not implemented or the correct tracer is not uniquely instantiated (see reason 2).
The message comes from the llm_impacts function in utils.py:
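The referenced snippet is not reproduced above; the relevant pattern, sketched from the error message (the find_model call and the exact signature are assumptions), is roughly:

```python
# Sketch of the pattern in llm_impacts; names are assumptions, not the
# verbatim source. Every call re-does the lookup and logs on failure.
def llm_impacts(provider: str, model_name: str, output_token_count: int,
                request_latency: float):
    model = models.find_model(provider=provider, model_name=model_name)
    if model is None:
        # Fires on every call, and on every chunk in stream mode,
        # hence the log spam described below.
        logger.warning(
            f'Could not find model "{model_name}" for {provider} provider.'
        )
        return None
    ...
```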
Reason 1: In an architecture where the user wants to include a model that is not yet implemented in EcoLogits, this message will spam the logs.
Reason 2: This behaviour is also present when using the LiteLLM integration with MistralAI models and some other providers (e.g. Groq). LiteLLM relies on an OpenAI configuration for these providers, which also instantiates the OpenAI tracer. So for ‘mistral/open-mistral-nemo-2407’ we get ‘Could not find model “open-mistral-nemo-2407” for openai provider’ (see https://github.com/BerriAI/litellm/blob/b376ee71b01e3e8c6453a3dd21421b365aaaf9f8/litellm/llms/openai.py).

To Reproduce
Use Mistral models with the LiteLLM tracer (mistral/open-mistral-nemo-2407) or an unimplemented provider and/or model (e.g., groq/llama3-8b-8192). The behaviour is more pronounced in stream mode, where the message is printed for each token; a minimal reproduction is sketched below.
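A minimal reproduction sketch, assuming the standard litellm completion API (the exact EcoLogits initialisation call may differ between versions):

```python
# Reproduction sketch: the EcoLogits.init() call is assumed; adapt to
# the version you are running.
import litellm
from ecologits import EcoLogits

EcoLogits.init()

# LiteLLM routes this Mistral model through its OpenAI-style client, so
# the OpenAI tracer computes impacts for every streamed chunk and logs
# 'Could not find model "open-mistral-nemo-2407" for openai provider'
# once per token.
response = litellm.completion(
    model="mistral/open-mistral-nemo-2407",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
)
for chunk in response:
    pass  # each chunk triggers another lookup failure in the logs
```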
Proposed solution
To avoid this interaction between the OpenAI tracer and the LiteLLM tracer, adopting the LiteLLM tracer's strategy as the default strategy for finding the provider could be useful:

`provider = models.find_provider(model_name=model_name)`

For the message itself, I suggest either not displaying it when no model is returned, or adding a verbosity parameter when initialising the EcoLogits object (False by default) to give the user the choice of whether to display it. A hedged sketch of that second option follows.
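A sketch of the verbosity idea combined with deduplication, in line with the issue title (the flag and helper names are illustrative, not the actual API):

```python
import logging

logger = logging.getLogger("ecologits")

# Illustrative module-level state and flag; not the actual EcoLogits API.
_warned: set[tuple[str, str]] = set()
VERBOSE_MODEL_WARNINGS = False  # could be wired to an init() parameter

def warn_model_not_found(provider: str, model_name: str) -> None:
    """Log the 'model not found' message at most once per (provider, model),
    and only when the user opted in at initialisation."""
    key = (provider, model_name)
    if not VERBOSE_MODEL_WARNINGS or key in _warned:
        return
    _warned.add(key)
    logger.warning('Could not find model "%s" for %s provider.',
                   model_name, provider)
```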