Add musique adapter base #525
base: dev
Conversation
Important: Review skipped — draft detected. Please check the settings in the CodeRabbit UI or the repository's CodeRabbit configuration file. You can disable this status message in the CodeRabbit configuration.

Walkthrough
The changes introduce a new enumeration class, `BenchmarkAdapter`, alongside a MuSiQue benchmark adapter; string answers are converted to lowercase while loading the corpus.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Adapter as MusiqueQAAdapter
    Client->>Adapter: Call load_corpus()
    Adapter->>Adapter: Process each question-answer pair
    Adapter->>Adapter: If answer is a string, convert to lowercase
    Adapter->>Client: Return processed question-answer pairs
```
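The lowercase step in the diagram corresponds to a small guard inside `load_corpus`. A minimal sketch of just that step, assuming the item dict shape from the reviewed hunk (`answer = item.get("answer", "")`); the helper name and the handling of non-string answers are assumptions:

```python
def normalize_item_answer(item: dict):
    """Return the item's answer, lowercased when it is a plain string.

    Sketch only: mirrors the "if answer is a string, convert to lowercase"
    step from the sequence diagram; non-string answers (e.g. lists of
    aliases) are assumed to pass through unchanged.
    """
    answer = item.get("answer", "")
    if isinstance(answer, str):
        answer = answer.lower()
    return answer
```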
Actionable comments posted: 0
🧹 Nitpick comments (3)
evals/eval_framework/benchmark_adapters/benchmark_adapters.py (2)
10-10: Fix formatting. Remove the extra empty line to comply with formatting standards.
11-24: LGTM! Consider standardizing string literals. The `BenchmarkAdapter` enum provides a clean and structured way to manage different benchmark adapters. However, there's an inconsistency in string literal usage: single quotes for 'Musique' vs. double quotes for other values. Apply this diff to standardize string literals:

```diff
- MUSIQUE = ('Musique', MusiqueQAAdapter)
+ MUSIQUE = ("Musique", MusiqueQAAdapter)
```
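For readers unfamiliar with tuple-valued enums, here is a minimal sketch of how a member like `MUSIQUE = ("Musique", MusiqueQAAdapter)` is typically wired up and consumed. The `__init__` unpacking and the import path are assumptions, since the PR diff only shows the member itself:

```python
from enum import Enum

# Assumed import path, taken from the files under review.
from evals.eval_framework.benchmark_adapters.musique_adapter import MusiqueQAAdapter


class BenchmarkAdapter(Enum):
    """Maps a human-readable benchmark label to its adapter class."""

    MUSIQUE = ("Musique", MusiqueQAAdapter)

    def __init__(self, label, adapter_class):
        # Enum passes the tuple elements to __init__, so each member
        # carries both its display label and the class to instantiate.
        self.label = label
        self.adapter_class = adapter_class


# Usage: resolve and instantiate an adapter through the enum.
adapter = BenchmarkAdapter.MUSIQUE.adapter_class()
corpus_list, qa_pairs = adapter.load_corpus(limit=5)
```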
evals/eval_framework/benchmark_adapters/musique_adapter.py (1)

83-84: Fix formatting. Remove the extra empty lines to comply with formatting standards.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- evals/eval_framework/benchmark_adapters/benchmark_adapters.py (1 hunks)
- evals/eval_framework/benchmark_adapters/musique_adapter.py (1 hunks)
🧰 Additional context used
🪛 GitHub Actions: lint | ruff format
evals/eval_framework/benchmark_adapters/benchmark_adapters.py
[warning] 1-1: File would be reformatted. Please ensure the code adheres to formatting standards.
evals/eval_framework/benchmark_adapters/musique_adapter.py
[warning] 1-1: File would be reformatted. Please ensure the code adheres to formatting standards.
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: docker-compose-test
🔇 Additional comments (1)
evals/eval_framework/benchmark_adapters/musique_adapter.py (1)

85-91: LGTM! Verify impact on downstream processing. Converting string answers to lowercase improves data consistency. However, ensure that downstream code handling these answers is aware of this change, especially if it relies on case-sensitive matching.
Run the following script to check for case-sensitive answer handling in the codebase:
✅ Verification successful
🏁 Script executed:
```bash
#!/bin/bash
# Description: Search for potential case-sensitive answer handling.

# Search for string operations on answers that might be case-sensitive
rg -A 5 'answer.*==' || rg -A 5 'answer.*in' || rg -A 5 'answer.*match'

# Search for test assertions that might expect specific case
rg -A 5 'assert.*answer'
```

Length of output: 464
Action Required: Verify Downstream Normalization in Answer Comparisons
The search results confirm that the downstream evaluation in evals/official_hotpot_metrics.py uses normalization (e.g., via `normalize_answer`) to compare answers. This indicates that converting string answers to lowercase in musique_adapter.py is consistent with the existing normalization logic. Please double-check that the implementation of `normalize_answer` (or any related normalization functions) indeed applies a case-insensitive comparison, so that the lower-casing in the adapter does not introduce unexpected behavior in evaluations.
Actionable comments posted: 0
🧹 Nitpick comments (2)
evals/eval_framework/tests/unit/benchmark_adapters_test.py (2)
34-56: Enhance test documentation and error messages. Consider the following improvements:

- Move the expectation comment to the docstring
- Add actual values to assertion messages
- Remove redundant comments

Apply this diff to enhance the test:

```diff
 @pytest.mark.parametrize("AdapterClass", ADAPTER_CLASSES)
 def test_adapter_returns_some_content(AdapterClass):
     """
     Verify that the adapter returns some data and that each QA dict
     at least has a 'question' and 'answer' key (you can extend or remove as needed).
+    We expect at least 1 item regardless of the dataset size.
     """
     adapter = AdapterClass()
     corpus_list, qa_pairs = adapter.load_corpus(limit=3)  # small limit

-    # We don't know how large the dataset is, but we expect at least 1 item
-    assert len(corpus_list) > 0, f"{AdapterClass.__name__} returned an empty corpus_list."
-    assert len(qa_pairs) > 0, f"{AdapterClass.__name__} returned an empty question_answer_pairs."
+    assert len(corpus_list) > 0, f"{AdapterClass.__name__} returned an empty corpus_list when expecting at least 1 item"
+    assert len(qa_pairs) > 0, f"{AdapterClass.__name__} returned empty question_answer_pairs when expecting at least 1 item"

-    # Check the shape
     assert len(corpus_list) == len(qa_pairs), (
         f"{AdapterClass.__name__} corpus_list and question_answer_pairs "
-        "should typically be the same length. Adjust if your adapter differs."
+        f"have different lengths: {len(corpus_list)} != {len(qa_pairs)}"
     )
```
58-76: Enhance limit testing with randomization and additional assertions. Consider improving the test by:

- Using random limits to catch edge cases
- Verifying that both lists have the same length
- Making the docstring more specific about expectations

Apply this diff to enhance the test:

```diff
 @pytest.mark.parametrize("AdapterClass", ADAPTER_CLASSES)
 def test_adapter_limit(AdapterClass):
     """
     Check that the `limit` parameter correctly restricts
     the amount of data returned.
-    We'll test with limit=5.
+    The test ensures that:
+    1. The number of items returned doesn't exceed the limit
+    2. Both corpus_list and qa_pairs have the same length
+    3. The limit is respected even with different values
     """
     adapter = AdapterClass()
-    limit = 5
+    limit = random.randint(3, 10)  # Test with random limits
     corpus_list, qa_pairs = adapter.load_corpus(limit=limit)

-    # Confirm that we didn't receive more than 'limit'
-    # (Some adapters might be allowed to return fewer if the dataset is small)
     assert len(corpus_list) <= limit, (
-        f"{AdapterClass.__name__} returned more items than requested limit={limit}."
+        f"{AdapterClass.__name__} returned {len(corpus_list)} items when limit was {limit}"
     )
-    assert len(qa_pairs) <= limit, (
-        f"{AdapterClass.__name__} returned more QA items than requested limit={limit}."
+    assert len(corpus_list) == len(qa_pairs), (
+        f"{AdapterClass.__name__} returned lists of different lengths: "
+        f"corpus_list={len(corpus_list)}, qa_pairs={len(qa_pairs)}"
     )
```
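One caveat about the suggestion above: `random.randint` is used without the corresponding import appearing in the diff, so adopting it also requires adding to the test module (if not already present):

```python
import random

# Optionally seed for reproducible test runs, since a randomized
# limit otherwise makes failures harder to reproduce.
random.seed(0)
```

Seeding is a judgment call: unseeded random limits widen coverage across runs at the cost of determinism.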
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- evals/eval_framework/tests/unit/benchmark_adapters_test.py (1 hunks)
🧰 Additional context used
🪛 GitHub Actions: lint | ruff format
evals/eval_framework/tests/unit/benchmark_adapters_test.py
[warning] 1-1: File would be reformatted. Please adhere to the formatting standards.
⏰ Context from checks skipped due to timeout of 90000ms (21)
- GitHub Check: run_dynamic_steps_example_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: run_simple_example_test / test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: run_multimedia_example_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: test
- GitHub Check: run_simple_example_test / test
- GitHub Check: test
- GitHub Check: windows-latest
- GitHub Check: test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: Build Cognee Backend Docker App Image
- GitHub Check: docker-compose-test
- GitHub Check: profiler
🔇 Additional comments (2)
evals/eval_framework/tests/unit/benchmark_adapters_test.py (2)
1-14: Fix formatting issues flagged by ruff. The pipeline indicates formatting issues; run `ruff format` to fix them automatically. The imports are well-organized, and the ADAPTER_CLASSES list is well-structured for parameterized testing.
🧰 Tools
🪛 GitHub Actions: lint | ruff format
[warning] 1-1: File would be reformatted. Please adhere to the formatting standards.
17-31: LGTM! Well-structured smoke test. The test function is well-designed, with:

- A clear docstring explaining the purpose
- Comprehensive assertions with descriptive error messages
- Proper type checking of return values (a sketch of such a test follows this list)
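A hedged reconstruction of what such a smoke test plausibly looks like, inferred from the bullet points above; ADAPTER_CLASSES, the test name, and the exact assertions are assumptions about benchmark_adapters_test.py, not a copy of it:

```python
import pytest

# Assumed import and list; the real test module builds ADAPTER_CLASSES
# from the repo's adapters.
from evals.eval_framework.benchmark_adapters.musique_adapter import MusiqueQAAdapter

ADAPTER_CLASSES = [MusiqueQAAdapter]


@pytest.mark.parametrize("AdapterClass", ADAPTER_CLASSES)
def test_adapter_smoke(AdapterClass):
    """Smoke test: load_corpus returns a (list, list) pair without raising."""
    adapter = AdapterClass()
    result = adapter.load_corpus(limit=1)

    assert isinstance(result, tuple), f"{AdapterClass.__name__} should return a tuple"
    corpus_list, qa_pairs = result
    assert isinstance(corpus_list, list), f"{AdapterClass.__name__} corpus_list is not a list"
    assert isinstance(qa_pairs, list), f"{AdapterClass.__name__} qa_pairs is not a list"
```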
```diff
@@ -80,6 +80,8 @@ def load_corpus(
     # If you have a known 'answer' key, or sometimes it's "answer_aliases", adapt accordingly
     answer = item.get("answer", "")
```
I tried the Musique adapter, and it's not working for me. Since one version is already on dev, I will comment here:

- The unzip creates its output in a "data" directory, but we are trying to read from the working directory.
- I wouldn't necessarily delete the zip, and I would only download it if it's not already there.
- Small thing: I would delete the comments, but that's not a must.
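A sketch of the two fixes suggested above (skip the download when the zip already exists, and extract into and read from the "data" directory). The URL, file names, and directory layout are placeholders, not the adapter's actual constants:

```python
import os
import urllib.request
import zipfile

MUSIQUE_ZIP_URL = "https://example.com/musique.zip"  # hypothetical URL
zip_path = "musique.zip"
data_dir = "data"

# Download only when the archive is not already present; keep the zip afterwards.
if not os.path.exists(zip_path):
    urllib.request.urlretrieve(MUSIQUE_ZIP_URL, zip_path)

# Unzipping writes into the "data" directory...
os.makedirs(data_dir, exist_ok=True)
with zipfile.ZipFile(zip_path) as zf:
    zf.extractall(data_dir)

# ...so read from that directory, not from the working directory.
dataset_path = os.path.join(data_dir, "musique_dev.jsonl")  # hypothetical file name
```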
Description
DCO Affirmation
I affirm that all code in every commit of this pull request conforms to the terms of the Topoteretes Developer Certificate of Origin
Summary by CodeRabbit

New Features

- `BenchmarkAdapter` class to define multiple benchmark constants.

Bug Fixes

Tests