Allow Part instantiation from function call #3194
Comments
Thank you for the feature request and the PR.

user_content1 = Content...
model_content1 = model.generate_content([user_content1]).candidates[0].content  # function_call
user_content2 = Part.from_function_response(...)
model_content2 = model.generate_content([user_content1, model_content1, user_content2]).candidates[0].content

P.S. One reason I'm hesitant to add this feature is that I'm afraid users can get confused and misunderstand function_call vs function_response. The whole function calling feature is already pretty complex and confusing for people who are just starting to use it.
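To make the elided pieces of that snippet concrete, here is a hedged sketch of the same manual round trip using the `vertexai.generative_models` classes; the weather tool, its schema, the model name, and the response payload are illustrative assumptions rather than anything specified in the comment above.

```python
from vertexai.generative_models import (
    Content,
    FunctionDeclaration,
    GenerativeModel,
    Part,
    Tool,
)

# Declare a tool the model is allowed to call (illustrative schema).
get_weather = FunctionDeclaration(
    name="get_current_weather",
    description="Get the current weather for a location",
    parameters={
        "type": "object",
        "properties": {"location": {"type": "string"}},
    },
)
model = GenerativeModel(
    "gemini-1.0-pro",  # assumed model name
    tools=[Tool(function_declarations=[get_weather])],
)

# Turn 1: the user asks; the model answers with a function_call part.
user_content1 = Content(
    role="user", parts=[Part.from_text("What is the weather in Boston?")]
)
model_content1 = model.generate_content([user_content1]).candidates[0].content

# Turn 2: run the function yourself, then send the result back as a function_response.
user_content2 = Content(
    role="user",
    parts=[
        Part.from_function_response(
            name="get_current_weather",
            response={"content": {"temperature_c": 21, "conditions": "sunny"}},
        )
    ],
)
model_content2 = model.generate_content(
    [user_content1, model_content1, user_content2]
).candidates[0].content
```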
By the way, could you please describe your use cases for function calling?
Hey, I am already using generate_content successfully with the code I put in the PR, but I thought I might not be the only person building systems on top of multiple LLMs, so maybe someone else can use this feature too :) As for your question: I use function calling a lot, but I don't think I can discuss particular use cases in public; think of use cases such as get_weather. I am also missing the ability to combine vision and function calling a bit 🤓
This is how it's supposed to work:

function_call_part = Part.from_dict({
    "function_call": {
        "name": "get_current_weather",
        "args": {
            "location": "Boston, MA",
        },
    },
})
That would work for me 😄
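To show where a Part built with `from_dict` would end up, here is a hedged sketch of manually reconstructed history around it; the surrounding turns and the response payload are illustrative assumptions, not taken from the thread.

```python
from vertexai.generative_models import Content, Part

# Reconstruct the call the model proposed earlier (e.g. loaded from storage).
function_call_part = Part.from_dict(
    {
        "function_call": {
            "name": "get_current_weather",
            "args": {"location": "Boston, MA"},
        }
    }
)

history = [
    Content(role="user", parts=[Part.from_text("What is the weather in Boston?")]),
    # The model's earlier turn, rebuilt from stored data instead of a live response.
    Content(role="model", parts=[function_call_part]),
    Content(
        role="user",
        parts=[
            Part.from_function_response(
                name="get_current_weather",
                response={"content": {"temperature_c": 21}},
            )
        ],
    ),
]
# history can now be passed to model.generate_content(history) to continue the exchange.
```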
Hey @Ark-kun, do you have any info about a timeline?
…ionResponse`, `Candidate`, `Content`, `Part`)
Workaround for issue in the proto-plus library: googleapis/proto-plus-python#424
Fixes #3194
PiperOrigin-RevId: 604893347
@ehaca Thank you for your patience. I made the fix/workaround some time ago, but it only got submitted today. The next release (expected very soon) will have the fix. P.S. To save/load history it's probably simpler to use …
Is your feature request related to a problem? Please describe.
As a user, I would like to be able to handle chat sessions from generative models (especially Gemini) on my own side without using a Chat instance. To achieve this, I need to be able to create a Part instance from a function call that was proposed by the LLM, the same way I can create one from a function response.
Describe the solution you'd like
I can use Part.from_function_call(fn_name, arguments)
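A minimal sketch of how such a helper could be layered on top of the existing `Part.from_dict`; `part_from_function_call` is a hypothetical name, not the SDK's actual API, and `arguments` is assumed to be a JSON-serializable dict.

```python
from vertexai.generative_models import Part


def part_from_function_call(fn_name: str, arguments: dict) -> Part:
    """Build a model-role Part carrying a function_call, e.g. for replayed history."""
    return Part.from_dict({"function_call": {"name": fn_name, "args": arguments}})


# Usage: reconstruct the call the model proposed in an earlier turn.
weather_call = part_from_function_call("get_current_weather", {"location": "Boston, MA"})
```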
Describe alternatives you've considered
Implementing the same snippet on my side, but a) I had to do quite a bit of research to find out how to do it, and I think other people could use such functionality, and b) I think the snippet belongs more in the SDK than in my code.
Additional context
I already implemented a function (5 lines of code or so), see linked PR 😄