For my use case, I am using dynamic routes that perform LLM calls to determine input arguments. While reviewing the code, I found that these are the supported LLMs:

I only have access to Google Gemini, which also follows the OpenAI spec but has a different client setup. Is there a way to use Gemini with the current setup? Alternatively, does this library have any planned extensions for Gemini in the future, or would we need to extend the base class in our own code to implement Gemini-specific functionality?