
Add Support for Auto Retrieval Evaluation of Different RAG Techniques #5

Open
adithya-s-k opened this issue Jan 3, 2025 · 0 comments
Assignees

Comments

@adithya-s-k
Owner

Issue: Add .evaluate and .generate_eval_dataset Methods for RAG Pipelines

Description:
Enhance the RAG framework by implementing .evaluate and .generate_eval_dataset methods to enable easy and automated evaluation of different RAG pipelines.

Requirements:

  1. .evaluate Method:

    • Allow users to benchmark retrieval performance using standard metrics (e.g., Precision, Recall, F1).
    • Support configuration for top-k retrieval and similarity scoring.
  2. .generate_eval_dataset Method:

    • Automatically create datasets for evaluation with annotated ground truth queries and responses.
    • Include support for custom datasets.
  3. Integration:

    • Integrate seamlessly with existing RAG pipelines (see the API sketch after this list).
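A minimal sketch of what the proposed API could look like, assuming a Python codebase. The class name `RAGPipeline`, the `EvalExample` / `RetrievalReport` containers, and all signatures are hypothetical and only illustrate the metrics and top-k configuration described above, not the final implementation.

```python
from dataclasses import dataclass


@dataclass
class EvalExample:
    query: str
    relevant_doc_ids: set  # annotated ground-truth documents for the query


@dataclass
class RetrievalReport:
    precision: float
    recall: float
    f1: float


class RAGPipeline:
    def retrieve(self, query: str, top_k: int) -> list:
        """Return ids of the top-k retrieved documents (implemented per RAG technique)."""
        raise NotImplementedError

    def generate_eval_dataset(self, documents: list) -> list:
        """Build annotated (query, relevant docs) pairs, e.g. via an LLM question generator,
        or load a user-supplied custom dataset."""
        raise NotImplementedError

    def evaluate(self, dataset: list, top_k: int = 5) -> RetrievalReport:
        """Benchmark retrieval with Precision / Recall / F1 at the configured top_k.
        Assumes `dataset` is a non-empty list of EvalExample."""
        precisions, recalls = [], []
        for example in dataset:
            retrieved = set(self.retrieve(example.query, top_k=top_k))
            hits = len(retrieved & example.relevant_doc_ids)
            precisions.append(hits / top_k)
            recalls.append(hits / max(len(example.relevant_doc_ids), 1))
        p = sum(precisions) / len(precisions)
        r = sum(recalls) / len(recalls)
        f1 = 2 * p * r / (p + r) if (p + r) else 0.0
        return RetrievalReport(precision=p, recall=r, f1=f1)
```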

Impact:
Simplifies evaluation workflows and makes it easier to compare the performance of different RAG pipelines.
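For example, comparing two techniques could then reduce to a few lines. `NaiveRAGPipeline`, `HybridRAGPipeline`, and `corpus` below are placeholders for concrete pipeline subclasses and a document list, not existing names in the repo:

```python
# Hypothetical usage: compare two RAG techniques on the same auto-generated dataset.
naive, hybrid = NaiveRAGPipeline(), HybridRAGPipeline()  # placeholder RAGPipeline subclasses
dataset = naive.generate_eval_dataset(documents=corpus)  # corpus: list of source documents

for name, pipeline in [("naive", naive), ("hybrid", hybrid)]:
    report = pipeline.evaluate(dataset, top_k=5)
    print(name, report)
```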

Priority: Medium

@adithya-s-k adithya-s-k self-assigned this Jan 3, 2025