Enable autorag to automatically generate the evaluation dataset and evaluate the RAG system #36
Conversation
Could you add a document covering usage and a brief description of autorag?
evals/evaluation/autorag/evaluation/ragas_evaluation_benchmark.py
Shall we have a document for autorag, and write a unit test if necessary?
It seems the unit test can only be run via the script in the benchmark folder; an automated unit test would be hard to maintain.
We need a document; tests can be considered later.
Description
Conditioned on user-provided input file(s), autorag generates an evaluation dataset and then evaluates the RAG system with Ragas.
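The description maps to a two-step flow: build a QA-style dataset from the input files, then score it with Ragas metrics. Below is a minimal sketch under stated assumptions: `generate_eval_dataset` is a hypothetical placeholder for autorag's generation step (not the PR's actual API), the column names follow the schema Ragas expects, and `ragas.evaluate` relies on an LLM judge (OpenAI by default), so an API key must be configured.

```python
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import (
    answer_relevancy,
    context_precision,
    context_recall,
    faithfulness,
)


def generate_eval_dataset(input_files):
    # Hypothetical stand-in for autorag's dataset-generation step: in the real
    # flow, QA pairs are derived from the user-given files. A tiny hardcoded
    # sample is returned here purely for illustration.
    return {
        "question": ["What does autorag do?"],
        "answer": ["It generates an evaluation dataset and scores a RAG system."],
        "contexts": [["autorag builds an evaluation set from input files and runs Ragas."]],
        "ground_truth": ["autorag generates evaluation data and evaluates RAG with Ragas."],
    }


# Step 1: generate the evaluation dataset from the user-given input file(s).
records = generate_eval_dataset(["docs/my_corpus.txt"])  # hypothetical path
eval_dataset = Dataset.from_dict(records)

# Step 2: score the RAG system with standard Ragas metrics.
# `contexts` holds the retrieved passages per question; `answer` holds the
# RAG system's generated responses.
result = evaluate(
    eval_dataset,
    metrics=[faithfulness, answer_relevancy, context_precision, context_recall],
)
print(result)
```

In this sketch the four metrics cover both halves of a RAG pipeline: context_precision and context_recall assess retrieval quality, while faithfulness and answer_relevancy assess generation quality.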