
Optimize view callbacks for model evaluation panel #5268

Merged

merged 3 commits on Dec 18, 2024

Conversation

Contributor

@brimoor brimoor commented Dec 13, 2024

Change log

Improves the views loaded by the model evaluation panel in the following callbacks:

  • clicking the TP/FP/FN icons in the Summary card
  • clicking classes in the Class Performance card
  • clicking cells in the Confusion Matrix card

Specific improvements:

  • Include both ground truth and predicted labels in the filters where appropriate
  • Use match() rather than filter_labels() for classification evaluations
  • When comparing two models, filter both sets of predictions in the Class Performance callbacks
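To make the classification behavior concrete, here is a plain-Python sketch of the match() semantics described above (no FiftyOne required; the sample dicts and the `ground_truth`/`predictions` field names are hypothetical stand-ins for a classification evaluation's fields):

```python
# Hypothetical samples; each dict stands in for one sample's label fields
samples = [
    {"ground_truth": "cat", "predictions": "cat"},   # correct prediction
    {"ground_truth": "cat", "predictions": "dog"},   # confusion cell (gt=cat, pred=dog)
    {"ground_truth": "dog", "predictions": "dog"},
    {"ground_truth": "bird", "predictions": "cat"},
]

def class_view(samples, x):
    """Clicking class `x` in Class Performance: keep samples where the
    ground truth OR the prediction is `x` (an OR-expression match)."""
    return [s for s in samples if s["ground_truth"] == x or s["predictions"] == x]

def matrix_view(samples, x, y):
    """Clicking cell (x, y) in the Confusion Matrix: keep samples where
    the ground truth is `y` AND the prediction is `x` (an AND-expression match)."""
    return [s for s in samples if s["ground_truth"] == y and s["predictions"] == x]

print(len(class_view(samples, "cat")))       # 3
print(len(matrix_view(samples, "dog", "cat")))  # 1
```

Including both the ground truth and predicted labels in the OR-expression is what makes a class click surface false negatives and false positives for that class, not just the correct predictions.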

TODO

  • Events are not firing when cells are clicked in the "compare to" confusion matrix?
    • The "compare to" callbacks are working for the TP/FP/FN buttons (in which case options["key"] == compare_eval_key), and I wired up the type == "matrix" code path to use the same key, so once the events start firing, load_view() should work as expected
  • I believe the frontend will need to be updated so that the selection state of the Class Performance chart updates as expected when you click on a bar

Summary by CodeRabbit

  • New Features

    • Enhanced evaluation logic for model performance, simplifying binary classification checks.
    • Improved handling of different view types for classification and detection scenarios.
    • Expanded filtering capabilities for evaluation stages in the confusion matrix.
  • Refactor

    • Consolidated binary classification logic into the main evaluation method.
    • Updated view loading process to enhance flexibility and clarity.
    • Improved interaction with the confusion matrix for better user experience.

@brimoor brimoor requested a review from imanjra December 13, 2024 06:22

coderabbitai bot commented Dec 13, 2024

Walkthrough

The changes in this pull request involve modifications to the EvaluationPanel class in the fiftyone/operators/builtins/panels/model_evaluation/__init__.py file. Key updates include the removal of the is_binary_classification method and integration of its logic into the get_tp_fp_fn method. The load_view method has been refactored to utilize a new eval_view variable for improved filtering of labels based on evaluation types. Additionally, the load_evaluation method has been updated to enhance error handling for missing results. Overall, these changes improve the clarity and maintainability of the evaluation logic.

Changes

File Path Change Summary
fiftyone/operators/builtins/panels/model_evaluation/__init__.py - Removed is_binary_classification method.
- Integrated binary classification logic into get_tp_fp_fn method.
- Refactored load_view method to use eval_view for filtering labels.
- Updated load_evaluation method to enhance state management for missing results.
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx - Updated useActiveFilter function to allow broader filtering.
- Modified onClick handler for confusion matrix plot to ensure correct parameters are passed.

Possibly related PRs

  • various model evaluation fixes and enhancements #5123: This PR modifies the Evaluation.tsx component, which is related to the EvaluationPanel class in the main PR, specifically enhancing the logic for filtering and handling evaluation metrics.
  • model evaluation bug fixes #5166: Changes in this PR also affect the Evaluation.tsx component, focusing on dialog functionalities for configuring evaluation settings, which ties into the overall evaluation logic being modified in the main PR.
  • fix confusion matrix in model evaluation panel #5186: This PR addresses issues with the confusion matrix in the Evaluation.tsx component, which is directly related to the evaluation logic being refined in the main PR.
  • fix pending evaluation also being listed as completed #5194: This PR introduces a new method in the EvaluationPanel class that enhances the loading of evaluations, which is relevant to the changes made in the main PR regarding evaluation handling.
  • Fixing #5254 #5267: This PR modifies the get_tp_fp_fn method in the EvaluationPanel, which is directly related to the changes made in the main PR where the logic for binary classification was integrated into this method.
  • Model evaluation panel fixes and enhancements #5281: This PR introduces a new constant for supported evaluation types in the EvaluationPanel, which aligns with the changes made in the main PR to improve evaluation handling and error management.

Suggested labels

bug

Suggested reviewers

  • brimoor

🐇 In the code where logic flows,
A panel's heart now clearly shows.
With methods merged, the path is bright,
Evaluation shines with clearer light.
Let's hop along, with joy we sing,
For clarity is a wonderful thing! 🌟


📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 68a5210 and 5603258.

📒 Files selected for processing (2)
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx (2 hunks)
  • fiftyone/operators/builtins/panels/model_evaluation/__init__.py (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • fiftyone/operators/builtins/panels/model_evaluation/__init__.py
🧰 Additional context used
📓 Path-based instructions (1)
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx (1)

Pattern **/*.{ts,tsx}: Review the Typescript and React code for conformity with best practices in React, Recoil, Graphql, and Typescript. Highlight any deviations.

🔇 Additional comments (2)
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx (2)

1220-1227: LGTM: Proper handling of comparison matrix click events.

The click handler correctly includes the compareKey parameter when loading the view for the comparison matrix, which aligns with the PR's objective of optimizing view callbacks.


1609-1609: LGTM: Enhanced filter condition for better flexibility.

Changed the condition from checking exact length to >= 1, which allows for more flexible filtering scenarios while maintaining the logic's correctness.

Let's verify the impact of this change:

✅ Verification successful

Let me gather more context about the specific file and its usage of stages.


The change from exact length check to >= 1 is safe and appropriate

The condition is used to safely access the first stage of a filter to determine if it's a FilterLabels operation. The change to >= 1 is more flexible and maintains correctness since:

  • The code only accesses stages[0] and its properties
  • The logic specifically handles FilterLabels operations on the first stage
  • Other length-specific checks in the codebase (viewBarMachine.ts) serve different purposes and don't interact with this component's filter logic
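As a rough illustration of why the relaxed guard is sufficient, the pattern can be sketched in plain Python (standing in for the TypeScript; the `_cls` key is a hypothetical stand-in for however stages are tagged in the serialized view):

```python
def first_stage_is_filter_labels(stages):
    """Return True if the first view stage is a FilterLabels stage.

    Only stages[0] is ever inspected, so `len(stages) >= 1` is enough;
    requiring exactly one stage would reject views that append extra
    stages after the FilterLabels stage.
    """
    if len(stages) >= 1:
        return stages[0].get("_cls") == "FilterLabels"
    return False

print(first_stage_is_filter_labels([{"_cls": "FilterLabels"}, {"_cls": "Limit"}]))  # True
print(first_stage_is_filter_labels([]))  # False
```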
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for any other length-specific conditions in filter-related code
# that might need similar updates for consistency.

# Search for similar length checks in filter-related code
rg -A 3 'stages.*length.*=== 1' 
rg -A 3 'stages.*length.*== 1'

# Search for FilterLabels usage to ensure compatibility
ast-grep --pattern 'FilterLabels'

Length of output: 3688


Script:

#!/bin/bash
# Get more context around the changed condition
rg -B 5 -A 10 "if \(stages\?\.length >= 1\)" app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx

# Look for other stage length checks in the same file
rg "stages.*length" app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx

Length of output: 907



Base automatically changed from model-eval-fixes to release/v1.2.0 December 13, 2024 23:02

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (1)
fiftyone/operators/builtins/panels/model_evaluation/__init__.py (1)

445-468: Consider extracting common filtering patterns in detection view logic.

The detection view filtering logic for 'class' and 'matrix' cases contains similar patterns that could be extracted into a helper method to reduce code duplication and improve maintainability.

Consider refactoring like this:

+    def _filter_detection_labels(self, view, pred_field, gt_field, pred_label, gt_label=None):
+        _, pred_root = self._get_label_field_path(pred_field)
+        _, gt_root = self._get_label_field_path(gt_field)
+        
+        filtered_view = (
+            view.filter_labels(
+                pred_field, F("label") == pred_label, only_matches=False
+            )
+            .filter_labels(
+                gt_field, F("label") == (gt_label or pred_label), only_matches=False
+            )
+            .match(
+                (F(pred_root).length() > 0) | (F(gt_root).length() > 0)
+            )
+        )
+        return filtered_view

     def load_view(self, ctx):
         # ... existing code ...
         
         elif info.config.type == "detection":
             if view_type == "class":
-                view = (
-                    eval_view.filter_labels(
-                        pred_field, F("label") == x, only_matches=False
-                    )
-                    .filter_labels(
-                        gt_field, F("label") == x, only_matches=False
-                    )
-                    .match(
-                        (F(pred_root).length() > 0) | (F(gt_root).length() > 0)
-                    )
-                )
+                view = self._filter_detection_labels(eval_view, pred_field, gt_field, x)
             elif view_type == "matrix":
-                view = (
-                    eval_view.filter_labels(
-                        gt_field, F("label") == y, only_matches=False
-                    )
-                    .filter_labels(
-                        pred_field, F("label") == x, only_matches=False
-                    )
-                    .match(
-                        (F(pred_root).length() > 0) | (F(gt_root).length() > 0)
-                    )
-                )
+                view = self._filter_detection_labels(eval_view, pred_field, gt_field, x, y)
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between f627b82 and 62dd24c.

📒 Files selected for processing (1)
  • fiftyone/operators/builtins/panels/model_evaluation/__init__.py (2 hunks)
🔇 Additional comments (3)
fiftyone/operators/builtins/panels/model_evaluation/__init__.py (3)

111-114: LGTM! Clean and efficient binary classification check.

The binary classification condition is well-structured and clearly separates the classification logic from detection cases.


423-431: LGTM! Improved classification view filtering.

The switch to match() for classification evaluations and the inclusion of both ground truth and predicted labels in filters aligns well with the PR objectives for optimizing view callbacks.


421-431: Verify handling of model comparison in Class Performance callbacks.

As mentioned in the PR objectives, there's a remaining task to filter both sets of predictions when comparing two models in the Class Performance callbacks. Please ensure this is addressed in a follow-up change.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (2)
fiftyone/operators/builtins/panels/model_evaluation/__init__.py (2)

445-455: Consider abstracting common filtering patterns in detection views.

The class and matrix view types share similar filtering patterns. Consider extracting the common logic into a helper method to reduce code duplication.

+    def _filter_detection_labels(self, view, pred_field, gt_field, pred_label, gt_label=None):
+        _, pred_root = self._get_label_field_path(pred_field)
+        _, gt_root = self._get_label_field_path(gt_field)
+        
+        filtered_view = (
+            view.filter_labels(
+                pred_field, F("label") == pred_label, only_matches=False
+            )
+            .filter_labels(
+                gt_field, F("label") == (gt_label or pred_label), only_matches=False
+            )
+        )
+        
+        if gt_label:
+            return filtered_view.match(
+                (F(pred_root).length() > 0) & (F(gt_root).length() > 0)
+            )
+        return filtered_view.match(
+            (F(pred_root).length() > 0) | (F(gt_root).length() > 0)
+        )

Usage:

-            if view_type == "class":
-                view = (
-                    eval_view.filter_labels(
-                        pred_field, F("label") == x, only_matches=False
-                    )
-                    .filter_labels(
-                        gt_field, F("label") == x, only_matches=False
-                    )
-                    .match(
-                        (F(pred_root).length() > 0) | (F(gt_root).length() > 0)
-                    )
-                )
+            if view_type == "class":
+                view = self._filter_detection_labels(eval_view, pred_field, gt_field, x)

-            elif view_type == "matrix":
-                view = (
-                    eval_view.filter_labels(
-                        gt_field, F("label") == y, only_matches=False
-                    )
-                    .filter_labels(
-                        pred_field, F("label") == x, only_matches=False
-                    )
-                    .match(
-                        (F(pred_root).length() > 0) & (F(gt_root).length() > 0)
-                    )
-                )
+            elif view_type == "matrix":
+                view = self._filter_detection_labels(eval_view, pred_field, gt_field, x, y)

Also applies to: 457-468


470-491: Fix inconsistent indentation in detection field filtering.

The indentation in the detection field filtering section is inconsistent with the rest of the code.

             elif view_type == "field":
                 if field == "tp":
                     view = eval_view.filter_labels(
                         gt_field,
                         F(computed_eval_key) == field,
                         only_matches=False,
-                     ).filter_labels(
+                    ).filter_labels(
                         pred_field,
                         F(computed_eval_key) == field,
                         only_matches=True,
-                     )
+                    )
                 elif field == "fn":
                     view = eval_view.filter_labels(
                         gt_field,
                         F(computed_eval_key) == field,
                         only_matches=True,
-                     )
+                    )
                 else:
                     view = eval_view.filter_labels(
                         pred_field,
                         F(computed_eval_key) == field,
                         only_matches=True,
-                     )
+                    )
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 62dd24c and 981c42d.

📒 Files selected for processing (1)
  • fiftyone/operators/builtins/panels/model_evaluation/__init__.py (2 hunks)
🔇 Additional comments (2)
fiftyone/operators/builtins/panels/model_evaluation/__init__.py (2)

111-114: LGTM! Clean and efficient binary classification check.

The binary classification condition is well-structured and uses explicit checks for both type and method, making the code more maintainable and less prone to errors.


421-440: LGTM! Improved classification view filtering.

The switch to using match() with combined conditions for both ground truth and predicted labels is more efficient than the previous approach.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (3)
fiftyone/operators/builtins/panels/model_evaluation/__init__.py (3)

315-316: Consider using a more descriptive default value for missing state.

The current default value "(none)" could be more descriptive. Consider using something like "MISSING" or "NO_LABEL" to make the intent clearer.

-    missing = ctx.panel.get_state("missing", "(none)")
+    missing = ctx.panel.get_state("missing", "MISSING")

440-464: Consider extracting classification view filtering logic into a separate method.

The classification view filtering logic is complex with multiple nested conditions. Extracting it into a separate method would improve readability and maintainability.

+def _build_classification_view(self, view_type, eval_view, x, y, field, gt_field, pred_field, gt_field2, pred_field2, eval_key):
+    if view_type == "class":
+        expr = F(f"{gt_field}.label") == x
+        expr |= F(f"{pred_field}.label") == x
+        if gt_field2 is not None:
+            expr |= F(f"{gt_field2}.label") == x
+        if pred_field2 is not None:
+            expr |= F(f"{pred_field2}.label") == x
+        return eval_view.match(expr)
+    elif view_type == "matrix":
+        expr = F(f"{gt_field}.label") == y
+        expr &= F(f"{pred_field}.label") == x
+        return eval_view.match(expr)
+    elif view_type == "field":
+        if field == "fn":
+            expr = F(f"{gt_field}.{eval_key}") == field
+            return eval_view.match(expr)
+        else:
+            expr = F(f"{pred_field}.{eval_key}") == field
+            return eval_view.match(expr)
+    return None

 def load_view(self, ctx):
     # ... existing code ...
     if info.config.type == "classification":
-        if view_type == "class":
-            # ... existing code ...
-        elif view_type == "matrix":
-            # ... existing code ...
-        elif view_type == "field":
-            # ... existing code ...
+        view = self._build_classification_view(
+            view_type, eval_view, x, y, field,
+            gt_field, pred_field, gt_field2, pred_field2,
+            eval_key
+        )

472-493: Consider using list comprehension for building detection class filters.

The detection class filtering logic involves multiple similar operations that could be simplified using list comprehension.

-            if view_type == "class":
-                view = eval_view.filter_labels(
-                    gt_field, F("label") == x, only_matches=False
-                )
-                expr = F(gt_root).length() > 0
-                view = view.filter_labels(
-                    pred_field, F("label") == x, only_matches=False
-                )
-                expr |= F(pred_root).length() > 0
-                if gt_field2 is not None:
-                    view = view.filter_labels(
-                        gt_field2, F("label") == x, only_matches=False
-                    )
-                    expr |= F(gt_root2).length() > 0
-                if pred_field2 is not None:
-                    view = view.filter_labels(
-                        pred_field2, F("label") == x, only_matches=False
-                    )
-                    expr |= F(pred_root2).length() > 0
-                view = view.match(expr)
+            if view_type == "class":
+                fields = [(gt_field, gt_root), (pred_field, pred_root)]
+                if gt_field2 is not None:
+                    fields.append((gt_field2, gt_root2))
+                if pred_field2 is not None:
+                    fields.append((pred_field2, pred_root2))
+                
+                view = eval_view
+                expr = None
+                for field, root in fields:
+                    view = view.filter_labels(
+                        field, F("label") == x, only_matches=False
+                    )
+                    length_expr = F(root).length() > 0
+                    expr = length_expr if expr is None else expr | length_expr
+
+                view = view.match(expr)
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 981c42d and 23a63a3.

📒 Files selected for processing (1)
  • fiftyone/operators/builtins/panels/model_evaluation/__init__.py (3 hunks)
🔇 Additional comments (1)
fiftyone/operators/builtins/panels/model_evaluation/__init__.py (1)

111-114: LGTM! Efficient implementation of binary classification check.

The integration of binary classification logic directly into get_tp_fp_fn improves code organization while maintaining efficient numpy operations for performance.

Comment on lines 420 to 428
eval_key2 = view_state.get("compareKey", None)
pred_field2 = None
gt_field2 = None
if eval_key2 is not None:
info2 = ctx.dataset.get_evaluation_info(eval_key2)
pred_field2 = info2.config.pred_field
if info2.config.gt_field != gt_field:
gt_field2 = info2.config.gt_field



⚠️ Potential issue

Add error handling for invalid comparison key.

When loading a second evaluation for comparison, there's no error handling if the evaluation info cannot be loaded. This could lead to unclear errors if eval_key2 is invalid.

 eval_key2 = view_state.get("compareKey", None)
 pred_field2 = None
 gt_field2 = None
 if eval_key2 is not None:
+    try:
         info2 = ctx.dataset.get_evaluation_info(eval_key2)
         pred_field2 = info2.config.pred_field
         if info2.config.gt_field != gt_field:
             gt_field2 = info2.config.gt_field
+    except Exception as e:
+        ctx.ops.notify(
+            f"Failed to load comparison evaluation: {str(e)}",
+            variant="error"
+        )
+        return
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
eval_key2 = view_state.get("compareKey", None)
pred_field2 = None
gt_field2 = None
if eval_key2 is not None:
info2 = ctx.dataset.get_evaluation_info(eval_key2)
pred_field2 = info2.config.pred_field
if info2.config.gt_field != gt_field:
gt_field2 = info2.config.gt_field
eval_key2 = view_state.get("compareKey", None)
pred_field2 = None
gt_field2 = None
if eval_key2 is not None:
try:
info2 = ctx.dataset.get_evaluation_info(eval_key2)
pred_field2 = info2.config.pred_field
if info2.config.gt_field != gt_field:
gt_field2 = info2.config.gt_field
except Exception as e:
ctx.ops.notify(
f"Failed to load comparison evaluation: {str(e)}",
variant="error"
)
return


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (5)
fiftyone/operators/builtins/panels/model_evaluation/__init__.py (3)

315-316: Add a comment explaining the missing state.

Consider adding a brief comment explaining why this state is being set and how it's used in the UI.

+            # Store missing label placeholder for UI to handle missing predictions/ground truth
             ctx.panel.set_state("missing", results.missing)

435-459: Consider extracting classification view logic into a separate method.

The classification view handling logic is complex and would benefit from being extracted into a dedicated method for better maintainability.

+    def _handle_classification_view(self, view_type, eval_view, info, x, y, field, gt_field, pred_field, gt_field2=None, pred_field2=None):
+        if view_type == "class":
+            expr = F(f"{gt_field}.label") == x
+            expr |= F(f"{pred_field}.label") == x
+            if gt_field2 is not None:
+                expr |= F(f"{gt_field2}.label") == x
+            if pred_field2 is not None:
+                expr |= F(f"{pred_field2}.label") == x
+            return eval_view.match(expr)
+        # ... rest of the classification logic ...

467-487: Consider using list comprehension for label filtering.

The repeated filter_labels calls could be simplified using list comprehension for better readability.

-            view = eval_view.filter_labels(
-                gt_field, F("label") == x, only_matches=False
-            )
-            expr = F(gt_root).length() > 0
-            view = view.filter_labels(
-                pred_field, F("label") == x, only_matches=False
-            )
+            fields_to_filter = [(gt_field, gt_root), (pred_field, pred_root)]
+            if gt_field2 is not None:
+                fields_to_filter.append((gt_field2, gt_root2))
+            if pred_field2 is not None:
+                fields_to_filter.append((pred_field2, pred_root2))
+
+            view = eval_view
+            expr = None
+            for field, root in fields_to_filter:
+                view = view.filter_labels(field, F("label") == x, only_matches=False)
+                length_expr = F(root).length() > 0
+                expr = length_expr if expr is None else expr | length_expr
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx (2)

1220-1226: Add TypeScript types for click handler parameters.

Consider adding proper TypeScript types for the points parameter to improve type safety.

-                      onClick={({ points }) => {
+                      onClick={({ points }: { points: Array<{ x: string; y: string }> }) => {

1608-1608: Document the reason for using >= 1 in the filter check.

Add a comment explaining why we check for one or more stages instead of exactly one stage.

+  // Check for one or more stages to handle cases where additional filters might be applied
   if (stages?.length >= 1) {
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between a9ea1c3 and 68a5210.

📒 Files selected for processing (2)
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx (2 hunks)
  • fiftyone/operators/builtins/panels/model_evaluation/__init__.py (3 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx (1)

Pattern **/*.{ts,tsx}: Review the Typescript and React code for conformity with best practices in React, Recoil, Graphql, and Typescript. Highlight any deviations.

🔇 Additional comments (1)
fiftyone/operators/builtins/panels/model_evaluation/__init__.py (1)

111-114: LGTM! Efficient binary classification check.

The binary classification check is well-implemented using clear and concise conditions.

imanjra
imanjra previously approved these changes Dec 18, 2024

@imanjra imanjra left a comment


LGTM


@imanjra imanjra left a comment


Tested locally. LGTM.
