Adds correct/incorrect rows to summary card for classification/non-binary evaluations #5388
Conversation
Walkthrough

The pull request introduces enhancements to model evaluation metrics, specifically for non-binary classification tasks, spanning the frontend summary card (Evaluation.tsx) and the Python panel backend (plugins/panels/model_evaluation/__init__.py).
Actionable comments posted: 0
🧹 Nitpick comments (1)
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx (1)
470-487: Consider setting `lesserIsBetter` to true for incorrect counts.

While the implementation is good, the incorrect count should ideally be minimized. Consider this change:
```diff
 {
   id: false,
   property: "Incorrect",
   value: evaluationMetrics.num_incorrect,
   compareValue: compareEvaluationMetrics.num_incorrect,
-  lesserIsBetter: false,
+  lesserIsBetter: true,
   filterable: true,
   hide: !isNoneBinaryClassification,
 },
```
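If `lesserIsBetter` behaves as its name suggests, flipping it to `true` would make a side-by-side comparison of two evaluations highlight the run with fewer incorrect predictions as the better one.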
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx (2 hunks)
- plugins/panels/model_evaluation/__init__.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx (1)
Pattern `**/*.{ts,tsx}`: Review the TypeScript and React code for conformity with best practices in React, Recoil, GraphQL, and TypeScript. Highlight any deviations.
🔇 Additional comments (3)
plugins/panels/model_evaluation/__init__.py (2)
328-332: Well-implemented counting logic!

The implementation efficiently uses numpy's vectorized operations to count correct and incorrect predictions, which is optimal for performance.
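As a rough illustration of the vectorized approach being praised here (the array names and values are hypothetical, not taken from the PR), counting matches with numpy avoids a Python-level loop:

```python
import numpy as np

# Hypothetical ground-truth and predicted label arrays
ytrue = np.array(["cat", "dog", "cat", "bird"])
ypred = np.array(["cat", "cat", "cat", "bird"])

# Vectorized elementwise comparison produces a boolean mask
matches = ytrue == ypred

num_correct = int(np.count_nonzero(matches))
num_incorrect = int(matches.size - num_correct)

print(num_correct, num_incorrect)  # 3 1
```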
371-379: Clean integration of correct/incorrect metrics!

The conditional logic appropriately handles non-binary classification cases, and the integration with the existing metrics dictionary is well-structured.
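A minimal sketch of what such a conditional integration could look like, assuming hypothetical `eval_type`/`method` descriptors (the actual panel derives these from the stored evaluation config):

```python
# Hypothetical evaluation descriptors for illustration only
eval_type = "classification"
method = "top-k"  # any non-binary classification method

num_correct, num_incorrect = 3, 1

metrics = {"accuracy": num_correct / (num_correct + num_incorrect)}

# Expose the new counts only for non-binary classification evaluations
if eval_type == "classification" and method != "binary":
    metrics["num_correct"] = num_correct
    metrics["num_incorrect"] = num_incorrect
```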
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx (1)
213-214: Good flag implementation!

The boolean flag is well-named and correctly implements the condition for non-binary classification evaluations.
LGTM
What changes are proposed in this pull request?
https://voxel51.atlassian.net/browse/FOEPD-254
For classification (non-binary) evaluations, this adds Correct/Incorrect rows to the summary card.
Clicking the icon in a row filters the view by correct/incorrect predictions.
(demo video)
https://github.com/voxel51/fiftyone-teams/pull/1174
One note, potentially outside the scope of this ticket: all existing evaluations will show Correct/Incorrect rows in the summary card, but the values will be missing because they were never saved when the evaluation was computed.
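To make the "missing values" case concrete, here is a hedged sketch (the key names mirror the `num_correct`/`num_incorrect` fields in the frontend diff above; the dict itself is hypothetical): an evaluation computed before this change simply lacks the keys, so lookups fall back to `None` and the rows render without values.

```python
# Metrics saved by an evaluation run that predates this change
legacy_metrics = {"accuracy": 0.91}

# .get() returns None rather than raising, so the summary card can still
# show Correct/Incorrect rows, just with empty values
num_correct = legacy_metrics.get("num_correct")      # None
num_incorrect = legacy_metrics.get("num_incorrect")  # None
```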
Note: the correct/incorrect icons are still clickable and work correctly
How is this patch tested? If it is not, please explain why.
(Details)
Release Notes

Is this a user-facing change that should be mentioned in the release notes?

Yes. Give a description of this change to be included in the release notes for FiftyOne users.

(Details in 1-2 sentences. You can just refer to another PR with a description if this PR is part of a larger change.)
What areas of FiftyOne does this PR affect?
fiftyone - Python library changes

Summary by CodeRabbit

New Features
- Adds Correct/Incorrect rows to the evaluation summary card for classification (non-binary) evaluations, with click-to-filter support.

Improvements
- Evaluation metrics now record counts of correct and incorrect predictions.