Calculate official hotpot EM and F1 scores #292
Conversation
Walkthrough: This pull request introduces new evaluation metrics and enhances existing functionality in the evaluation code.
Actionable comments posted: 3
🧹 Outside diff range and nitpick comments (4)
evals/deepeval_metrics.py (1)

49-51: Docstring Formatting

The docstring does not follow PEP 257 conventions. The first line should be a concise summary.

Reformat the docstring for clarity:

-"""Exact Match score taken directly from the official hotpot benchmark
-implementation and wrapped into a deepeval metric."""
+"""
+Exact Match score metric from the official HotpotQA benchmark, wrapped into a Deepeval metric.
+"""

evals/official_hotpot_metrics.py (1)
10-11: Consider Using the Standard json Module

The use of ujson might introduce compatibility issues, and it does not support all features of the standard json module.

Unless there is a specific performance requirement, consider using the built-in json module:

-import ujson as json
+import json

evals/eval_on_hotpot.py (2)
115-116: Avoid Backslashes in Multiline Strings

Using backslashes for line continuation inside strings can lead to syntax errors or accidental whitespace issues.

Use implicit string concatenation within parentheses for cleaner code:

 parser.add_argument("--metric", type=str, default="correctness_metric",
-                    help="Valid options are Deepeval metrics (e.g. AnswerRelevancyMetric) \
-                        and metrics defined in evals/deepeval_metrics.py, e.g. f1_score_metric")
+                    help=("Valid options are Deepeval metrics (e.g., AnswerRelevancyMetric) "
+                          "and metrics defined in evals/deepeval_metrics.py, e.g., f1_score_metric"))
125-126: Ensure Consistent Metric Instantiation

Currently, the code checks whether metric is a type before instantiating it. This can be simplified for clarity. Assuming all metrics should be instantiated, remove the type check (a standalone sketch of the resulting pattern follows below):

 try:
     metric = getattr(deepeval.metrics, args.metric)
 except AttributeError:
     metric = getattr(evals.deepeval_metrics, args.metric)
-if isinstance(metric, type):
-    metric = metric()
+metric = metric()

This ensures that metric is always an instance of the metric class.
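To illustrate the name-based lookup discussed above, here is a minimal, self-contained sketch of resolving a metric class by name and instantiating it unconditionally. The stand-in namespaces and the F1ScoreMetric class are hypothetical placeholders for deepeval.metrics and evals.deepeval_metrics, not the project's actual code.

```python
import argparse
import types

class F1ScoreMetric:
    """Hypothetical metric class standing in for the real implementations."""
    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold

# Stand-in namespaces mimicking deepeval.metrics and evals.deepeval_metrics.
builtin_metrics = types.SimpleNamespace()
custom_metrics = types.SimpleNamespace(f1_score_metric=F1ScoreMetric)

parser = argparse.ArgumentParser()
parser.add_argument("--metric", type=str, default="f1_score_metric")
args = parser.parse_args([])

# Resolve the class by name, falling back to the custom metrics namespace.
try:
    metric = getattr(builtin_metrics, args.metric)
except AttributeError:
    metric = getattr(custom_metrics, args.metric)

# Without the isinstance(metric, type) guard, every resolved name is assumed
# to be a class and is instantiated unconditionally.
metric = metric()
print(type(metric).__name__, metric.threshold)   # F1ScoreMetric 0.5
```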
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (3)
- evals/deepeval_metrics.py (1 hunks)
- evals/eval_on_hotpot.py (2 hunks)
- evals/official_hotpot_metrics.py (1 hunks)
class f1_score_metric(BaseMetric):

    """F1 score taken directly from the official hotpot benchmark
    implementation and wrapped into a deepeval metric."""

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold

    def measure(self, test_case: LLMTestCase):
        f1, precision, recall = f1_score(
            prediction=test_case.actual_output,
            ground_truth=test_case.expected_output,
        )
        self.score = f1
        self.success = self.score >= self.threshold
        return self.score

    # Reusing regular measure as async F1 score is not implemented
    async def a_measure(self, test_case: LLMTestCase):
        return self.measure(test_case)

    def is_successful(self):
        return self.success

    @property
    def __name__(self):
        return "Official hotpot F1 score"
🛠️ Refactor suggestion

Potential Blocking in Asynchronous Methods

The a_measure methods in both the f1_score_metric and em_score_metric classes directly call the synchronous measure method. This could block the event loop if measure involves long-running operations.

Consider making the measure method asynchronous if it performs I/O-bound or time-consuming tasks. Alternatively, offload the synchronous method to a thread using asyncio.to_thread:

async def a_measure(self, test_case: LLMTestCase):
    return await asyncio.to_thread(self.measure, test_case)
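For context, a self-contained sketch of the asyncio.to_thread pattern this suggestion relies on (available in Python 3.9+). SlowMetric is a hypothetical stand-in for a metric whose measure() does blocking work, not a class from this repository.

```python
import asyncio
import time

class SlowMetric:
    """Hypothetical metric whose measure() performs blocking work."""

    def measure(self, text: str) -> float:
        time.sleep(0.1)                      # simulate a blocking computation
        return float(bool(text.strip()))

    async def a_measure(self, text: str) -> float:
        # Offload the synchronous measure() to a worker thread so the
        # event loop is not blocked while it runs.
        return await asyncio.to_thread(self.measure, text)

async def main():
    metric = SlowMetric()
    # Several evaluations now run concurrently instead of blocking one another.
    scores = await asyncio.gather(*(metric.a_measure("some answer") for _ in range(3)))
    print(scores)                            # [1.0, 1.0, 1.0]

asyncio.run(main())
```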
    @property
    def __name__(self):
        return "Official hotpot F1 score"
🛠️ Refactor suggestion

Rename the __name__ Property to Avoid Conflicts

Overriding the special __name__ attribute can lead to confusion and unexpected behavior, since __name__ is a built-in attribute in Python.

Rename the property to avoid shadowing the built-in attribute:

 @property
-def __name__(self):
+def name(self):
     return "Official hotpot F1 score"

Make sure to update any references to __name__ accordingly.
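A short illustration of the confusion this describes, using a stripped-down, hypothetical stand-in class: the property takes effect on instances, while the built-in class attribute is still visible on the class itself, so the two lookups report different things.

```python
class DemoMetric:
    """Made-up example showing the effect of overriding __name__."""

    @property
    def __name__(self):
        return "Official hotpot F1 score"

m = DemoMetric()
print(m.__name__)        # "Official hotpot F1 score" -- the property wins on instances
print(type(m).__name__)  # "DemoMetric" -- the built-in class attribute is unaffected
```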
    def measure(self, test_case: LLMTestCase):
        self.score = exact_match_score(
            prediction=test_case.actual_output,
            ground_truth=test_case.expected_output,
        )
        self.success = self.score >= self.threshold
        return self.score
Boolean Comparison with Float Threshold

In em_score_metric, self.score is a boolean value returned by exact_match_score, but it is compared against a float threshold. This can lead to unintended behavior.

Convert the boolean to a float for a meaningful comparison:

 def measure(self, test_case: LLMTestCase):
     self.score = exact_match_score(
         prediction=test_case.actual_output,
         ground_truth=test_case.expected_output,
     )
+    self.score = float(self.score)
     self.success = self.score >= self.threshold
     return self.score

Alternatively, set the threshold to 1.0 to reflect that only a perfect match is considered successful.
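For reference, the official HotpotQA exact-match computation looks roughly like the sketch below (reconstructed along the lines of the public hotpot_evaluate_v1.py and its SQuAD-style normalization; treat the exact details as an approximation rather than a verbatim copy). It shows why exact_match_score yields a bool and why casting to float before the threshold comparison is reasonable.

```python
import re
import string

def normalize_answer(s):
    """Lowercase, strip punctuation and articles, and collapse whitespace."""
    def remove_articles(text):
        return re.sub(r"\b(a|an|the)\b", " ", text)

    def white_space_fix(text):
        return " ".join(text.split())

    def remove_punc(text):
        exclude = set(string.punctuation)
        return "".join(ch for ch in text if ch not in exclude)

    return white_space_fix(remove_articles(remove_punc(s.lower())))

def exact_match_score(prediction, ground_truth):
    # Returns a bool; callers that treat it as a score should cast to float.
    return normalize_answer(prediction) == normalize_answer(ground_truth)

print(exact_match_score("The Eiffel Tower", "eiffel tower!"))   # True
print(float(exact_match_score("Paris", "London")))              # 0.0
```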