LLM Tools support for Google Generative AI integration #117644
Conversation
Hey there @tronikos, mind taking a look at this pull request as it has been labeled with an integration (google_generative_ai_conversation) you are listed as a code owner for? Thanks!
        genai.get_model, entry.options.get(CONF_CHAT_MODEL, DEFAULT_CHAT_MODEL)
    )
)
await hass.async_add_executor_job(partial(genai.get_model, DEFAULT_CHAT_MODEL))
I changed this because when I had configured a bad model, the config entry refused to start up.
Why is that bad? Isn't it better to fail as soon as possible?
The config entry wouldn't set up and the user couldn't change the model anymore 🙈
(happened to me)
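The fix discussed above validates the default model at setup instead of the user-selected one, so a bad option can't block the entry from loading. The underlying pattern, running a blocking SDK call off the event loop via functools.partial, can be sketched generically. Here get_model is a local stand-in for the blocking genai.get_model call, and loop.run_in_executor mirrors what hass.async_add_executor_job does:

```python
import asyncio
from functools import partial


def get_model(name: str) -> str:
    # Stand-in for a blocking SDK call such as genai.get_model; the real call
    # performs network I/O and must never run directly on the event loop.
    return f"resolved:{name}"


async def validate_default_model() -> str:
    loop = asyncio.get_running_loop()
    # partial() binds the argument so the executor receives a zero-argument
    # callable, mirroring hass.async_add_executor_job(partial(genai.get_model, ...)).
    return await loop.run_in_executor(None, partial(get_model, "models/gemini-pro"))


result = asyncio.run(validate_default_model())
print(result)  # resolved:models/gemini-pro
```

Because only the always-valid default model is checked here, a misconfigured model surfaces later as a runtime error the user can fix in the options flow, rather than a setup failure that locks them out of it.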
Resolved review threads:
homeassistant/components/google_generative_ai_conversation/conversation.py (outdated)
homeassistant/components/google_generative_ai_conversation/conversation.py (outdated)
homeassistant/components/google_generative_ai_conversation/conversation.py
Please take a look at the requested changes, and use the Ready for review button when you are done, thanks 👍
)
return conversation.ConversationResult(
    response=intent_response, conversation_id=conversation_id
LOGGER.debug("Tool call: %s(%s)", tool_input.name, tool_input.tool_args)
tool_input.tool_name
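The review comment above points out that the attribute is tool_name, not name. A minimal sketch of the corrected debug log, with ToolInput reduced to an assumed dataclass shape mirroring Home Assistant's llm.ToolInput:

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class ToolInput:
    # Assumed minimal shape of Home Assistant's llm.ToolInput:
    # the attribute is tool_name, not name, hence the review comment.
    tool_name: str
    tool_args: dict[str, Any] = field(default_factory=dict)


tool_input = ToolInput("HassTurnOn", {"name": "kitchen light"})
# Corrected logging call: tool_input.tool_name instead of tool_input.name
message = "Tool call: %s(%s)" % (tool_input.tool_name, tool_input.tool_args)
print(message)  # Tool call: HassTurnOn({'name': 'kitchen light'})
```

With the original tool_input.name, this line would raise AttributeError at the first tool call.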
@@ -80,6 +135,11 @@ async def async_process(
    self, user_input: conversation.ConversationInput
) -> conversation.ConversationResult:
    """Process a sentence."""
tools: list[dict[str, Any]] | None = None
Do the tools need to be constructed on every interaction with the conversation entity? How about constructing them once in init, and maybe even sharing them among the multiple conversation entities?
It's up to the LLM API objects to decide if they want to cache them. Here we just fetch them.
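The answer above places caching responsibility on the API object, not the caller. One way an API object could opt into that (a sketch, not the actual Home Assistant implementation) is functools.cached_property, so every conversation turn fetches the tools but the construction runs once:

```python
from functools import cached_property


class LLMAPI:
    """Stand-in for an LLM API object; the real class shape is assumed."""

    calls = 0  # counts how many times the tool list is actually built

    @cached_property
    def tools(self) -> list[dict]:
        # The API object decides whether to cache; callers just fetch.
        type(self).calls += 1
        return [{"name": "HassTurnOn"}, {"name": "HassTurnOff"}]


api = LLMAPI()
_ = api.tools
_ = api.tools  # second fetch hits the per-instance cache
print(LLMAPI.calls)  # 1
```

This keeps the conversation entity simple: it asks for tools on every turn, and whether that is cheap or expensive is entirely the API object's choice.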
Resolved review threads:
homeassistant/components/google_generative_ai_conversation/conversation.py (outdated)
homeassistant/components/google_generative_ai_conversation/config_flow.py
homeassistant/components/google_generative_ai_conversation/config_flow.py (outdated)
homeassistant/components/google_generative_ai_conversation/config_flow.py
Don't forget to update the documentation and link to it here. It says "This conversation agent is unable to control your house"
Marked as draft as we want to change the LLM helper slightly.
"models/gemini-1.0-pro", # duplicate of gemini-pro | ||
"models/gemini-1.5-flash-latest", | ||
) | ||
and "Vision" not in api_model.display_name |
Not sure if the API is translating the display name. It seems safer to check api_model.name instead.
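The suggestion is to filter on the stable machine-readable name rather than display_name, which the API could localize. A sketch with an assumed minimal stand-in for the SDK's model-info object:

```python
from dataclasses import dataclass


@dataclass
class ApiModel:
    # Minimal stand-in for the google-generativeai model info object.
    name: str
    display_name: str


models = [
    ApiModel("models/gemini-pro", "Gemini 1.0 Pro"),
    ApiModel("models/gemini-pro-vision", "Gemini 1.0 Pro Vision"),
    ApiModel("models/gemini-1.5-flash-latest", "Gemini 1.5 Flash"),
]

# Filter on api_model.name: it is an API identifier and will not change
# with the user's locale, unlike the human-readable display_name.
supported = [m.name for m in models if "vision" not in m.name]
print(supported)  # ['models/gemini-pro', 'models/gemini-1.5-flash-latest']
```

The display_name check in the diff above would silently stop working if the API ever returned a translated "Vision" label.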
…t#117644)

* initial commit
* Undo prompt changes
* Move format_tool out of the class
* Only catch HomeAssistantError and vol.Invalid
* Add config flow option
* Fix type
* Add translation
* Allow changing API access from options flow
* Allow model picking
* Remove allowing HASS Access in main flow
* Move model to the top in options flow
* Make prompt conditional based on API access
* convert only once to dict
* Reduce debug logging
* Update title
* re-order models
* Address comments
* Move things
* Update labels
* Add tool call tests
* coverage
* Use LLM APIs
* Fixes
* Address comments
* Reinstate the title to not break entity name

---------

Co-authored-by: Paulus Schoutsen <[email protected]>
Proposed change
Implement the LLM Tools feature (#115464) for google_generative_ai_conversation. Added a model picker and an API picker in the options flow.

Type of change
Additional information

Checklist
- The code has been formatted using Ruff (ruff format homeassistant tests)

If user exposed functionality or configuration variables are added/changed:

If the code communicates with devices, web services, or third-party tools:
- Updated and included derived files by running: python3 -m script.hassfest.
- New or updated dependencies have been added to requirements_all.txt. Updated by running python3 -m script.gen_requirements_all.
- Untested files have been added to .coveragerc.

To help with the load of incoming pull requests: