Optimize view callbacks for model evaluation panel #5268
Conversation
**Walkthrough**

The changes in this pull request involve modifications to the …

**📜 Recent review details**

- Configuration used: `.coderabbit.yaml`
- 📒 Files selected for processing (2)
- 🚧 Files skipped from review as they are similar to previous changes (1)
- 🧰 Additional context used: 📓 Path-based instructions (1): `app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx`
- 🔇 Additional comments (2): `app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx`
The click handler correctly includes the …

Changed the condition from checking an exact length (`=== 1`) to `>= 1`. Let's verify the impact of this change:

✅ **Verification successful**

Let me gather more context about the specific file and its usage of stages. The change from an exact length check to `>= 1` is safe: the condition is used to safely access the first stage of a filter to determine if it's a …
🏁 **Scripts executed**

The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Description: Check for any other length-specific conditions in filter-related code
# that might need similar updates for consistency.

# Search for similar length checks in filter-related code
rg -A 3 'stages.*length.*=== 1'
rg -A 3 'stages.*length.*== 1'

# Search for FilterLabels usage to ensure compatibility
ast-grep --pattern 'FilterLabels'
```

Length of output: 3688

```shell
#!/bin/bash
# Get more context around the changed condition
rg -B 5 -A 10 "if \(stages\?\.length >= 1\)" app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx

# Look for other stage length checks in the same file
rg "stages.*length" app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx
```

Length of output: 907
Length of output: 907 Thank you for using CodeRabbit. We offer it for free to the OSS community and would appreciate your support in helping us grow. If you find it useful, would you consider giving us a shout-out on your favorite social media? 🪧 TipsChatThere are 3 ways to chat with CodeRabbit:
Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments. CodeRabbit Commands (Invoked using PR comments)
Other keywords and placeholders
Documentation and Community
|
Force-pushed from `1c5ffde` to `62dd24c` (Compare)
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (1)

fiftyone/operators/builtins/panels/model_evaluation/__init__.py (1)

**445-468: Consider extracting common filtering patterns in detection view logic.**

The detection view filtering logic for the 'class' and 'matrix' cases contains similar patterns that could be extracted into a helper method to reduce code duplication and improve maintainability.
Consider refactoring like this:
```diff
+    def _filter_detection_labels(self, view, pred_field, gt_field, pred_label, gt_label=None):
+        _, pred_root = self._get_label_field_path(pred_field)
+        _, gt_root = self._get_label_field_path(gt_field)
+
+        filtered_view = (
+            view.filter_labels(
+                pred_field, F("label") == pred_label, only_matches=False
+            )
+            .filter_labels(
+                gt_field, F("label") == (gt_label or pred_label), only_matches=False
+            )
+            .match(
+                (F(pred_root).length() > 0) | (F(gt_root).length() > 0)
+            )
+        )
+        return filtered_view

     def load_view(self, ctx):
         # ... existing code ...
         elif info.config.type == "detection":
             if view_type == "class":
-                view = (
-                    eval_view.filter_labels(
-                        pred_field, F("label") == x, only_matches=False
-                    )
-                    .filter_labels(
-                        gt_field, F("label") == x, only_matches=False
-                    )
-                    .match(
-                        (F(pred_root).length() > 0) | (F(gt_root).length() > 0)
-                    )
-                )
+                view = self._filter_detection_labels(eval_view, pred_field, gt_field, x)
             elif view_type == "matrix":
-                view = (
-                    eval_view.filter_labels(
-                        gt_field, F("label") == y, only_matches=False
-                    )
-                    .filter_labels(
-                        pred_field, F("label") == x, only_matches=False
-                    )
-                    .match(
-                        (F(pred_root).length() > 0) | (F(gt_root).length() > 0)
-                    )
-                )
+                view = self._filter_detection_labels(eval_view, pred_field, gt_field, x, y)
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
fiftyone/operators/builtins/panels/model_evaluation/__init__.py (2 hunks)
🔇 Additional comments (3)
fiftyone/operators/builtins/panels/model_evaluation/__init__.py (3)
**111-114: LGTM! Clean and efficient binary classification check.**

The binary classification condition is well-structured and clearly separates the classification logic from detection cases.
**423-431: LGTM! Improved classification view filtering.**

The switch to `match()` for classification evaluations and the inclusion of both ground truth and predicted labels in filters aligns well with the PR objectives for optimizing view callbacks.
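The reviewed trade-off between `match()` and `filter_labels()` can be illustrated with a toy sketch. This is plain Python over dicts, not the real FiftyOne API: the function names mirror the FiftyOne stages, but the data model and signatures here are illustrative only. The point is that a sample-level `match()` only decides whether each sample stays in the view, while `filter_labels()` additionally rewrites each sample's label list, which is unnecessary work for single-label classification fields.

```python
# Toy semantic sketch (NOT the real FiftyOne API) of match() vs filter_labels().

def match(samples, predicate):
    # Sample-level: keep or drop whole samples; labels are left untouched.
    return [s for s in samples if predicate(s)]

def filter_labels(samples, field, predicate, only_matches=True):
    # Label-level: copy each sample with a filtered label list, then
    # optionally drop samples whose filtered list is empty.
    out = []
    for s in samples:
        filtered = dict(s)
        filtered[field] = [l for l in s[field] if predicate(l)]
        if filtered[field] or not only_matches:
            out.append(filtered)
    return out

samples = [
    {"id": 1, "gt": [{"label": "cat"}], "pred": [{"label": "cat"}]},
    {"id": 2, "gt": [{"label": "dog"}], "pred": [{"label": "cat"}]},
    {"id": 3, "gt": [{"label": "dog"}], "pred": [{"label": "dog"}]},
]

# Classification-style check: one label per sample, so a sample-level match
# on the label value suffices; no per-label rewriting is required.
cats = match(
    samples,
    lambda s: s["gt"][0]["label"] == "cat" or s["pred"][0]["label"] == "cat",
)
print([s["id"] for s in cats])  # [1, 2]
```

Both calls select the same samples here, but `filter_labels()` copies and rewrites every sample it keeps, which is the overhead the PR avoids for classification evaluations.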
**421-431: Verify handling of model comparison in Class Performance callbacks.**

As mentioned in the PR objectives, there's a remaining task to filter both sets of predictions when comparing two models in the Class Performance callbacks. Please ensure this is addressed in a follow-up change.
Force-pushed from `62dd24c` to `981c42d` (Compare)
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (2)
fiftyone/operators/builtins/panels/model_evaluation/__init__.py (2)
**445-455: Consider abstracting common filtering patterns in detection views.**

The class and matrix view types share similar filtering patterns. Consider extracting the common logic into a helper method to reduce code duplication.
```diff
+    def _filter_detection_labels(self, view, pred_field, gt_field, pred_label, gt_label=None):
+        _, pred_root = self._get_label_field_path(pred_field)
+        _, gt_root = self._get_label_field_path(gt_field)
+
+        filtered_view = (
+            view.filter_labels(
+                pred_field, F("label") == pred_label, only_matches=False
+            )
+            .filter_labels(
+                gt_field, F("label") == (gt_label or pred_label), only_matches=False
+            )
+        )
+
+        if gt_label:
+            return filtered_view.match(
+                (F(pred_root).length() > 0) & (F(gt_root).length() > 0)
+            )
+        return filtered_view.match(
+            (F(pred_root).length() > 0) | (F(gt_root).length() > 0)
+        )
```

Usage:

```diff
-        if view_type == "class":
-            view = (
-                eval_view.filter_labels(
-                    pred_field, F("label") == x, only_matches=False
-                )
-                .filter_labels(
-                    gt_field, F("label") == x, only_matches=False
-                )
-                .match(
-                    (F(pred_root).length() > 0) | (F(gt_root).length() > 0)
-                )
-            )
+        if view_type == "class":
+            view = self._filter_detection_labels(eval_view, pred_field, gt_field, x)
-        elif view_type == "matrix":
-            view = (
-                eval_view.filter_labels(
-                    gt_field, F("label") == y, only_matches=False
-                )
-                .filter_labels(
-                    pred_field, F("label") == x, only_matches=False
-                )
-                .match(
-                    (F(pred_root).length() > 0) & (F(gt_root).length() > 0)
-                )
-            )
+        elif view_type == "matrix":
+            view = self._filter_detection_labels(eval_view, pred_field, gt_field, x, y)
```

Also applies to: 457-468
**470-491: Fix inconsistent indentation in detection field filtering.**

The indentation in the detection field filtering section is inconsistent with the rest of the code.

```diff
         elif view_type == "field":
             if field == "tp":
                 view = eval_view.filter_labels(
                     gt_field, F(computed_eval_key) == field, only_matches=False,
-                    ).filter_labels(
+                ).filter_labels(
                     pred_field, F(computed_eval_key) == field, only_matches=True,
-                    )
+                )
             elif field == "fn":
                 view = eval_view.filter_labels(
                     gt_field, F(computed_eval_key) == field, only_matches=True,
-                    )
+                )
             else:
                 view = eval_view.filter_labels(
                     pred_field, F(computed_eval_key) == field, only_matches=True,
-                    )
+                )
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
fiftyone/operators/builtins/panels/model_evaluation/__init__.py (2 hunks)
🔇 Additional comments (2)
fiftyone/operators/builtins/panels/model_evaluation/__init__.py (2)
**111-114: LGTM! Clean and efficient binary classification check.**

The binary classification condition is well-structured and uses explicit checks for both type and method, making the code more maintainable and less prone to errors.
**421-440: LGTM! Improved classification view filtering.**

The switch to using `match()` with combined conditions for both ground truth and predicted labels is more efficient than the previous approach.
Actionable comments posted: 1
🧹 Nitpick comments (3)
fiftyone/operators/builtins/panels/model_evaluation/__init__.py (3)
**315-316: Consider using a more descriptive default value for missing state.**

The current default value "(none)" could be more descriptive. Consider using something like "MISSING" or "NO_LABEL" to make the intent clearer.

```diff
-    missing = ctx.panel.get_state("missing", "(none)")
+    missing = ctx.panel.get_state("missing", "MISSING")
```
**440-464: Consider extracting classification view filtering logic into a separate method.**

The classification view filtering logic is complex with multiple nested conditions. Extracting it into a separate method would improve readability and maintainability.

```diff
+    def _build_classification_view(self, view_type, eval_view, x, y, field, gt_field, pred_field, gt_field2, pred_field2, eval_key):
+        if view_type == "class":
+            expr = F(f"{gt_field}.label") == x
+            expr |= F(f"{pred_field}.label") == x
+            if gt_field2 is not None:
+                expr |= F(f"{gt_field2}.label") == x
+            if pred_field2 is not None:
+                expr |= F(f"{pred_field2}.label") == x
+            return eval_view.match(expr)
+        elif view_type == "matrix":
+            expr = F(f"{gt_field}.label") == y
+            expr &= F(f"{pred_field}.label") == x
+            return eval_view.match(expr)
+        elif view_type == "field":
+            if field == "fn":
+                expr = F(f"{gt_field}.{eval_key}") == field
+                return eval_view.match(expr)
+            else:
+                expr = F(f"{pred_field}.{eval_key}") == field
+                return eval_view.match(expr)
+        return None

     def load_view(self, ctx):
         # ... existing code ...
         if info.config.type == "classification":
-            if view_type == "class":
-                # ... existing code ...
-            elif view_type == "matrix":
-                # ... existing code ...
-            elif view_type == "field":
-                # ... existing code ...
+            view = self._build_classification_view(
+                view_type, eval_view, x, y, field,
+                gt_field, pred_field, gt_field2, pred_field2,
+                eval_key,
+            )
```
**472-493: Consider simplifying the detection class filters by iterating over the fields.**

The detection class filtering logic involves multiple similar operations that could be collapsed into a single loop. (Note: the loop must initialize the combined expression before the first iteration.)

```diff
-        if view_type == "class":
-            view = eval_view.filter_labels(
-                gt_field, F("label") == x, only_matches=False
-            )
-            expr = F(gt_root).length() > 0
-            view = view.filter_labels(
-                pred_field, F("label") == x, only_matches=False
-            )
-            expr |= F(pred_root).length() > 0
-            if gt_field2 is not None:
-                view = view.filter_labels(
-                    gt_field2, F("label") == x, only_matches=False
-                )
-                expr |= F(gt_root2).length() > 0
-            if pred_field2 is not None:
-                view = view.filter_labels(
-                    pred_field2, F("label") == x, only_matches=False
-                )
-                expr |= F(pred_root2).length() > 0
-            view = view.match(expr)
+        if view_type == "class":
+            fields = [(gt_field, gt_root), (pred_field, pred_root)]
+            if gt_field2 is not None:
+                fields.append((gt_field2, gt_root2))
+            if pred_field2 is not None:
+                fields.append((pred_field2, pred_root2))
+
+            view = eval_view
+            expr = None
+            for field, root in fields:
+                view = view.filter_labels(
+                    field, F("label") == x, only_matches=False
+                )
+                length_expr = F(root).length() > 0
+                expr = length_expr if expr is None else expr | length_expr
+
+            view = view.match(expr)
```
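The "fold an OR over a variable number of optional clauses" pattern in the suggestion above can be sketched generically. This is a hedged illustration with plain Python predicates rather than FiftyOne `F`-expressions; the names are made up for the example. Seeding the fold from the first element (here via `reduce`) avoids the pitfall of OR-ing into a variable that is undefined on the first iteration.

```python
# Generic OR-fold over predicates; reduce() seeds the accumulator with the
# first predicate, so no "undefined expr on first iteration" bug is possible.
from functools import reduce

def any_of(predicates):
    # Combine callables p1, p2, ... into one predicate: p1(x) or p2(x) or ...
    return reduce(lambda f, g: (lambda x: f(x) or g(x)), predicates)

preds = [lambda n: n > 10, lambda n: n % 2 == 0]
combined = any_of(preds)
print(combined(4), combined(11), combined(3))  # True True False
```

The same shape applies when the optional second-model fields are present: append their clauses to the list, then fold once at the end.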
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
fiftyone/operators/builtins/panels/model_evaluation/__init__.py (3 hunks)
🔇 Additional comments (1)
fiftyone/operators/builtins/panels/model_evaluation/__init__.py (1)
**111-114: LGTM! Efficient implementation of binary classification check.**

The integration of binary classification logic directly into `get_tp_fp_fn` improves code organization while maintaining efficient numpy operations for performance.
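For context on what a binary TP/FP/FN computation reduces to, here is a minimal sketch. It is a hedged illustration, not the panel's actual `get_tp_fp_fn` (which operates on numpy arrays); the function name and label values here are made up for the example.

```python
# Minimal binary-classification TP/FP/FN count (illustrative; the real
# implementation vectorizes these comparisons with numpy).
def tp_fp_fn(gt_labels, pred_labels, positive_class):
    tp = fp = fn = 0
    for gt, pred in zip(gt_labels, pred_labels):
        if pred == positive_class and gt == positive_class:
            tp += 1  # predicted positive, actually positive
        elif pred == positive_class:
            fp += 1  # predicted positive, actually negative
        elif gt == positive_class:
            fn += 1  # predicted negative, actually positive
    return tp, fp, fn

gt = ["pos", "pos", "neg", "neg", "pos"]
pred = ["pos", "neg", "pos", "neg", "pos"]
print(tp_fp_fn(gt, pred, "pos"))  # (2, 1, 1)
```

With numpy the same logic becomes three boolean-mask reductions, which is why folding the binary check into `get_tp_fp_fn` keeps the hot path vectorized.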
```python
eval_key2 = view_state.get("compareKey", None)
pred_field2 = None
gt_field2 = None
if eval_key2 is not None:
    info2 = ctx.dataset.get_evaluation_info(eval_key2)
    pred_field2 = info2.config.pred_field
    if info2.config.gt_field != gt_field:
        gt_field2 = info2.config.gt_field
```
Add error handling for invalid comparison key.
When loading a second evaluation for comparison, there's no error handling if the evaluation info cannot be loaded. This could lead to unclear errors if `eval_key2` is invalid.

```diff
 eval_key2 = view_state.get("compareKey", None)
 pred_field2 = None
 gt_field2 = None
 if eval_key2 is not None:
+    try:
-    info2 = ctx.dataset.get_evaluation_info(eval_key2)
-    pred_field2 = info2.config.pred_field
-    if info2.config.gt_field != gt_field:
-        gt_field2 = info2.config.gt_field
+        info2 = ctx.dataset.get_evaluation_info(eval_key2)
+        pred_field2 = info2.config.pred_field
+        if info2.config.gt_field != gt_field:
+            gt_field2 = info2.config.gt_field
+    except Exception as e:
+        ctx.ops.notify(
+            f"Failed to load comparison evaluation: {str(e)}",
+            variant="error"
+        )
+        return
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
eval_key2 = view_state.get("compareKey", None)
pred_field2 = None
gt_field2 = None
if eval_key2 is not None:
    try:
        info2 = ctx.dataset.get_evaluation_info(eval_key2)
        pred_field2 = info2.config.pred_field
        if info2.config.gt_field != gt_field:
            gt_field2 = info2.config.gt_field
    except Exception as e:
        ctx.ops.notify(
            f"Failed to load comparison evaluation: {str(e)}",
            variant="error"
        )
        return
```
Force-pushed from `23a63a3` to `94ed26c` (Compare)

Force-pushed from `94ed26c` to `a9ea1c3` (Compare)
Actionable comments posted: 0
🧹 Nitpick comments (5)
fiftyone/operators/builtins/panels/model_evaluation/__init__.py (3)
**315-316: Add a comment explaining the missing state.**

Consider adding a brief comment explaining why this state is being set and how it's used in the UI.

```diff
+    # Store missing label placeholder for UI to handle missing predictions/ground truth
     ctx.panel.set_state("missing", results.missing)
```
**435-459: Consider extracting classification view logic into a separate method.**

The classification view handling logic is complex and would benefit from being extracted into a dedicated method for better maintainability.

```diff
+    def _handle_classification_view(self, view_type, eval_view, info, x, y, field, gt_field, pred_field, gt_field2=None, pred_field2=None):
+        if view_type == "class":
+            expr = F(f"{gt_field}.label") == x
+            expr |= F(f"{pred_field}.label") == x
+            if gt_field2 is not None:
+                expr |= F(f"{gt_field2}.label") == x
+            if pred_field2 is not None:
+                expr |= F(f"{pred_field2}.label") == x
+            return eval_view.match(expr)
+        # ... rest of the classification logic ...
```
**467-487: Consider simplifying the label filtering by iterating over the fields.**

The repeated `filter_labels` calls could be collapsed into a single loop for better readability. (Note: the loop must initialize the combined expression before the first iteration.)

```diff
-        view = eval_view.filter_labels(
-            gt_field, F("label") == x, only_matches=False
-        )
-        expr = F(gt_root).length() > 0
-        view = view.filter_labels(
-            pred_field, F("label") == x, only_matches=False
-        )
+        fields_to_filter = [(gt_field, gt_root), (pred_field, pred_root)]
+        if gt_field2 is not None:
+            fields_to_filter.append((gt_field2, gt_root2))
+        if pred_field2 is not None:
+            fields_to_filter.append((pred_field2, pred_root2))
+
+        view = eval_view
+        expr = None
+        for field, root in fields_to_filter:
+            view = view.filter_labels(field, F("label") == x, only_matches=False)
+            length_expr = F(root).length() > 0
+            expr = length_expr if expr is None else expr | length_expr
```

app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx (2)
**1220-1226: Add TypeScript types for click handler parameters.**

Consider adding proper TypeScript types for the `points` parameter to improve type safety.

```diff
-          onClick={({ points }) => {
+          onClick={({ points }: { points: Array<{ x: string; y: string }> }) => {
```
**1608-1608: Document the reason for using `>= 1` in the filter check.**

Add a comment explaining why we check for one or more stages instead of exactly one stage.

```diff
+    // Check for one or more stages to handle cases where additional filters might be applied
     if (stages?.length >= 1) {
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx (2 hunks)
fiftyone/operators/builtins/panels/model_evaluation/__init__.py (3 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx (1)

Pattern `**/*.{ts,tsx}`: Review the Typescript and React code for conformity with best practices in React, Recoil, Graphql, and Typescript. Highlight any deviations.
🔇 Additional comments (1)
fiftyone/operators/builtins/panels/model_evaluation/__init__.py (1)
**111-114: LGTM! Efficient binary classification check.**

The binary classification check is well-implemented using clear and concise conditions.
LGTM
Force-pushed from `68a5210` to `5603258` (Compare)
Tested locally. LGTM.
Change log
Improves the views loaded by the model evaluation panel in the following callbacks:
Specific improvements:
- Uses `match()` rather than `filter_labels()` for classification evaluations

TODO
- … (`options["key"] == compare_eval_key`) and I wired up the `type == "matrix"` code path to use the same key. So I believe that once the events start firing, `load_view()` should work as expected.

Summary by CodeRabbit
New Features
Refactor