DOC: <Issue related to /v0.2/docs/how_to/extraction_examples/> #23383
Comments
I can't reproduce your issue following the doc. Please give a complete script to reproduce your issue.
Could you please reformat your code using the "code block" functionality?
I have reformatted the code. I am using the following packages: Python version: 3.11.9
My apologies. I still can't reproduce. My env: GPT_MODEL = "gpt-3.5-turbo-0125"
I have just created a conda env based on your configuration. Model "gpt-3.5-turbo-0125" gives the following results:
Model "gpt-4o" gives following results:
Does the error depend on the Python version?
I tried Python 3.11 as well; it has no problem. The problem is with openai version 1.35.3.
I am also getting the error with the following env (Python 3.11.9):
Error:
You're right. Whether the error is triggered in a particular run depends on the response that OpenAI returns. My current investigation shows it has to do with which data schema OpenAI decides to use. For example, in one run it returns the JSON as an array of people and decides to use the "Data" schema, which results in correct output parsing; in another run, it returns the JSON as a single person and decides to use the "Person" schema, which is not available (because we pass schema=Data to the LLM).
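For context, this is roughly the setup from the linked doc that is in play here. A minimal sketch, assuming the doc's `Person`/`Data` schema names and `gpt-3.5-turbo-0125`; field descriptions are abbreviated, and the doc itself may use a different pydantic import:

```python
from typing import List, Optional

from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field


class Person(BaseModel):
    """Information about a person."""

    name: Optional[str] = Field(default=None, description="The name of the person")
    hair_color: Optional[str] = Field(default=None, description="Hair color, if known")
    height_in_meters: Optional[str] = Field(default=None, description="Height in meters")


class Data(BaseModel):
    """Extracted data about people."""

    people: List[Person]


llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
# Only "Data" is bound as a tool/schema here, so a tool call named
# "Person" has no matching schema at parse time.
structured_llm = llm.with_structured_output(schema=Data)
```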
See https://smith.langchain.com/public/800cd078-115c-495b-9280-fedbd2e83c4f/r. The LLM knows that it only has the "Data" tool, but since the examples in our code show a function call to "Person", this confuses the LLM and causes it to return a function call to "Person" as well. So maybe you can try fixing the examples as sketched below?
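Per the description of the merged fix (#23393, quoted further down in this thread), the idea is to wrap each few-shot example output in `Data` so that the example tool calls reference the schema that is actually bound. A sketch of what that looks like; example sentences are paraphrased from the doc and details are approximate:

```python
# Before (confusing): example outputs were Person objects, so the few-shot
# messages contained tool calls to a "Person" tool the LLM was never given.
# After (fixed): wrap each example output in Data, matching schema=Data.
examples = [
    (
        "The ocean is vast and blue. It's more than 20,000 feet deep.",
        Data(people=[]),  # a "nothing to extract" example
    ),
    (
        "Fiona traveled far from France to Spain.",
        Data(people=[Person(name="Fiona", height_in_meters=None, hair_color=None)]),
    ),
]
```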
Now it is working fine. Thank you very much.
…23393) Description: currently the [doc](https://python.langchain.com/v0.2/docs/how_to/extraction_examples/) sets "Data" as the LLM's structured output schema; however, the examples given to the LLM output "Person", which confuses the LLM and might occasionally cause it to return "Person" as the function to call. Issue: #23383. Co-authored-by: Lifu Wu <[email protected]>
@eyurtsev I think this issue can be closed; an update for the issue has been merged.
URL
https://python.langchain.com/v0.2/docs/how_to/extraction_examples/
Checklist
Issue with current documentation:
Getting the following error:
Traceback (most recent call last):
File "Z:\llm_images\extract_info.py", line 148, in
response = chain.invoke(
^^^^^^^^^^^^^
File "Z:\conda_env\ccs\Lib\site-packages\langchain_core\runnables\base.py", line 2504, in invoke
input = step.invoke(input, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "Z:\conda_env\ccs\Lib\site-packages\langchain_core\output_parsers\base.py", line 169, in invoke
return self._call_with_config(
^^^^^^^^^^^^^^^^^^^^^^^
File "Z:\conda_env\ccs\Lib\site-packages\langchain_core\runnables\base.py", line 1598, in _call_with_config
context.run(
File "Z:\conda_env\ccs\Lib\site-packages\langchain_core\runnables\config.py", line 380, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "Z:\conda_env\ccs\Lib\site-packages\langchain_core\output_parsers\base.py", line 170, in
lambda inner_input: self.parse_result(
^^^^^^^^^^^^^^^^^^
File "Z:\conda_env\ccs\Lib\site-packages\langchain_core\output_parsers\openai_tools.py", line 196, in parse_result
pydantic_objects.append(name_dict[res["type"]](**res["args"]))
                        ~~~~~~~~~^^^^^^^^^^^^^
KeyError: 'Person'
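To illustrate the failing lookup (a simplified sketch, not the library source verbatim): the parser builds a name-to-class map from the schemas it was given, and the returned tool call's name indexes into that map.

```python
# Using the Person/Data classes from the sketch earlier in the thread.
# With schema=Data, the parser only knows one tool name:
name_dict = {"Data": Data}
# But the model occasionally returns a tool call named "Person":
res = {"type": "Person", "args": {"name": "Fiona"}}
name_dict[res["type"]](**res["args"])  # raises KeyError: 'Person'
```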
Idea or request for content:
No response