ValidationError #1545
The error you're encountering can be resolved by making sure your custom LLM wrapper returns an `LLMResult` from both its synchronous and asynchronous generation methods. In your wrapper class, define:

```python
# Inside your custom LLM wrapper class
def generate_text(
    self,
    prompt: PromptValue,
    n: int = 1,
    temperature: float = 1e-8,
    stop: t.Optional[t.List[str]] = None,
    callbacks: t.Optional[t.List] = None,  # use None instead of a mutable default
) -> LLMResult:
    # Synchronous generation logic
    text = self._call(prompt)
    return LLMResult(generations=[[Generation(text=text)]])

async def agenerate_text(
    self,
    prompt: PromptValue,
    n: int = 1,
    temperature: float = 1e-8,
    stop: t.Optional[t.List[str]] = None,
    callbacks: t.Optional[t.List] = None,
) -> LLMResult:
    # Simulate an asynchronous call; for now this simply delegates to the sync path
    text = self._call(prompt)
    return LLMResult(generations=[[Generation(text=text)]])
```

By ensuring that both methods build the `LLMResult` from `Generation` objects, the `ValidationError` should be resolved.
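As a side note (not part of the answer above): if the underlying model is already exposed as a LangChain LLM or chat model, Ragas ships a built-in wrapper that implements `generate_text`/`agenerate_text` for you, which sidesteps this class of `ValidationError`. A minimal sketch, assuming a LangChain OpenAI chat model as a placeholder:

```python
from langchain_openai import ChatOpenAI
from ragas.llms import LangchainLLMWrapper

# The model name is only a placeholder; use whatever LangChain-compatible model you have.
evaluator_llm = LangchainLLMWrapper(ChatOpenAI(model="gpt-4o-mini"))
```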
@dosu
To resolve the `ValidationError`, make sure your custom LLM wrapper implements both generation methods and that each of them returns an `LLMResult`:

```python
def generate_text(
    self,
    prompt: PromptValue,
    n: int = 1,
    temperature: float = 1e-8,
    stop: t.Optional[t.List[str]] = None,
    callbacks: t.Optional[t.List] = None,
) -> LLMResult:
    text = self._call(prompt)
    return LLMResult(generations=[[Generation(text=text)]])

async def agenerate_text(
    self,
    prompt: PromptValue,
    n: int = 1,
    temperature: float = 1e-8,
    stop: t.Optional[t.List[str]] = None,
    callbacks: t.Optional[t.List] = None,
) -> LLMResult:
    text = self._call(prompt)
    return LLMResult(generations=[[Generation(text=text)]])
```
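For completeness, here is a rough, hypothetical sketch (not from this thread) of how these methods can sit on a class that subclasses Ragas's `BaseRagasLLM`, with a genuinely non-blocking async path. The class name and the echo-style `_call` are illustrative only; adapt `_call` to your actual model or API, and note that the exact shape of `BaseRagasLLM` can vary between Ragas versions:

```python
import asyncio
import typing as t

from langchain_core.outputs import Generation, LLMResult
from langchain_core.prompt_values import PromptValue
from ragas.llms import BaseRagasLLM


class MyCustomLLM(BaseRagasLLM):
    """Minimal custom LLM wrapper for Ragas (illustrative)."""

    def _call(self, prompt: PromptValue) -> str:
        # Replace this with the real call to your model or API.
        return f"echo: {prompt.to_string()}"

    def generate_text(
        self,
        prompt: PromptValue,
        n: int = 1,
        temperature: float = 1e-8,
        stop: t.Optional[t.List[str]] = None,
        callbacks: t.Optional[t.List] = None,
    ) -> LLMResult:
        text = self._call(prompt)
        return LLMResult(generations=[[Generation(text=text)]])

    async def agenerate_text(
        self,
        prompt: PromptValue,
        n: int = 1,
        temperature: float = 1e-8,
        stop: t.Optional[t.List[str]] = None,
        callbacks: t.Optional[t.List] = None,
    ) -> LLMResult:
        # Run the blocking call in a worker thread so the event loop is not blocked
        # (asyncio.to_thread requires Python 3.9+).
        text = await asyncio.to_thread(self._call, prompt)
        return LLMResult(generations=[[Generation(text=text)]])
```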
@dosu.
The "Failed to parse output" error suggests that the output parser is unable to process the generated text correctly. This can happen if the output does not meet the expected format or constraints defined in the prompt. Here are a few steps you can take to troubleshoot and resolve this issue:
By addressing these areas, you should be able to identify and fix the issue causing the "Failed to parse output" error [1][2]. |
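One quick way to see why parsing fails (a hypothetical debugging sketch, not from the thread; `MyCustomLLM` refers to the illustrative wrapper sketched earlier) is to call the wrapper directly with a JSON-style instruction and inspect the raw text it returns:

```python
from langchain_core.prompt_values import StringPromptValue

llm = MyCustomLLM()  # illustrative custom wrapper from the sketch above
prompt = StringPromptValue(text='Answer with a JSON object such as {"verdict": 1}.')

result = llm.generate_text(prompt)
raw_text = result.generations[0][0].text
print(repr(raw_text))  # if this is not valid JSON, the metric's output parser will fail
```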
@amin-kh96 did you solve this?
@jjmachan
@dosu.
The "Failed to parse output" error you're encountering might be related to the absence of ground truth in your dataset, which can affect the output generation and parsing process. Here are some steps to address this issue:
By following these steps, you should be able to identify and resolve the issue causing the "Failed to parse output" error. If the problem persists, consider reviewing the specific requirements and constraints of the Ragas framework to ensure full compatibility. |
@dosu.
There are a few things to consider here, along the lines already discussed above (the output format, the presence of ground truth, and the LLM wrapper itself).
If the issue persists, consider reaching out to the maintainers or community for more specific guidance.
@dosu.
In the Ragas framework, 'ground truth' refers to the expected or correct answers used to evaluate the correctness of generated responses. It is used to classify answer statements into true positives, false positives, and false negatives, which helps in assessing the accuracy of the model's outputs [1]. To include ground truth in your dataset, populate the `reference` (and optionally `reference_contexts`) fields of each sample.

Here's an example of creating a `SingleTurnSample` that includes ground truth:

```python
from ragas import SingleTurnSample

sample1 = SingleTurnSample(
    user_input="What is the capital of Germany?",
    retrieved_contexts=["Berlin is the capital and largest city of Germany."],
    reference_contexts=["Berlin is the capital of Germany."],
    response="The capital of Germany is Berlin.",
    multi_responses=["Berlin", "The capital is Berlin."],
    reference="Berlin",  # the ground-truth answer
    rubrics={"accuracy": "high"},
)
```

Regarding metrics that do not require ground truth, the Ragas framework also provides metrics such as faithfulness and answer relevancy, which only need the question, retrieved contexts, and response.
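To illustrate the last point, here is a hedged sketch (assuming `sample1` from above, and an evaluator LLM/embeddings configured via the `llm=`/`embeddings=` arguments of `evaluate` or your environment) of evaluating only with metrics that do not need a `reference`:

```python
from ragas import EvaluationDataset, evaluate
from ragas.metrics import answer_relevancy, faithfulness

dataset = EvaluationDataset(samples=[sample1])

# faithfulness and answer_relevancy score the response against the question and
# retrieved contexts, so they do not require a ground-truth `reference`.
results = evaluate(dataset=dataset, metrics=[faithfulness, answer_relevancy])
print(results)
```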
@dosu.
When dealing with specialized use cases like medical machines, where ground truth information might not be readily available on the internet, you can consider approaches such as having domain experts (for example, clinicians or service engineers) write and review reference answers, or deriving them from the equipment's manuals and internal documentation.
By employing these strategies, you can build a reliable ground truth dataset tailored to your specialized domain, which will help in evaluating and improving your RAG pipeline effectively.
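As one concrete illustration of the expert-annotation route (a hypothetical sketch; the record fields and content are made up for the example):

```python
from ragas import EvaluationDataset, SingleTurnSample

# Q&A records written or reviewed by domain experts; purely illustrative content.
expert_records = [
    {
        "question": "What does error code E-42 on the infusion pump mean?",
        "expert_answer": "E-42 indicates an occlusion in the downstream line.",
        "source_passage": "Error E-42: downstream occlusion detected; check the line.",
    },
]

samples = [
    SingleTurnSample(
        user_input=record["question"],
        reference=record["expert_answer"],              # expert-provided ground truth
        reference_contexts=[record["source_passage"]],  # supporting excerpt from the manual
    )
    for record in expert_records
]
dataset = EvaluationDataset(samples=samples)
```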
I saw this error:
I think the cause of the error might be somewhere here:
This is the code from the first line; maybe it helps for better understanding: