Use gemini-1.5-flash-latest in google_generative_ai_conversation.generate_content #118594
Breaking change
Proposed change
The google_generative_ai_conversation.generate_content service couldn't be tied to a config entry earlier because it had to switch to the dedicated vision model whenever an image was passed. Now that all Gemini 1.5 models are multimodal, switch to the default gemini-1.5-flash-latest. The docs already mention that custom prompt and model settings don't apply to this service. A later PR could tie the service to a config entry, but that would be a breaking change; better to wait until we figure out a generic way to handle multimodal input across the different integrations.
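For context, here is a minimal sketch of the call pattern this change relies on, using the google-generativeai SDK directly rather than the integration's actual code; the API key, prompts, and image path are placeholders, not values from the integration:

```python
"""Sketch: one multimodal 1.5 model for text and image prompts.

Not the PR's actual diff; assumes the google-generativeai SDK.
"""
import google.generativeai as genai
import PIL.Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# A single multimodal model handles both cases, so the service no
# longer needs to swap in a separate vision model for images.
model = genai.GenerativeModel("gemini-1.5-flash-latest")

# Text-only prompt.
response = model.generate_content("Write a haiku about home automation.")
print(response.text)

# Text plus image in the same call, with the same model.
image = PIL.Image.open("doorbell_snapshot.jpg")  # placeholder path
response = model.generate_content(["Who is at the door?", image])
print(response.text)
```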
Type of change
Additional information
Checklist
- The code has been formatted using Ruff (ruff format homeassistant tests)

If user exposed functionality or configuration variables are added/changed:

If the code communicates with devices, web services, or third-party tools:

- Updated and included derived files by running: python3 -m script.hassfest.
- requirements_all.txt updated by running python3 -m script.gen_requirements_all.
- Untested files have been added to .coveragerc.

To help with the load of incoming pull requests: