
Model evaluation panel fixes and enhancements #5281

Merged
merged 5 commits into release/v1.2.0 from bugfix/me-panel-tweaks
Dec 17, 2024

Conversation

imanjra
Contributor

@imanjra imanjra commented Dec 16, 2024

What changes are proposed in this pull request?

(Please fill in changes proposed in this fix)

How is this patch tested? If it is not, please explain why.

(Details)

Release Notes

Is this a user-facing change that should be mentioned in the release notes?

  • No. You can skip the rest of this section.
  • Yes. Give a description of this change to be included in the release
    notes for FiftyOne users.

(Details in 1-2 sentences. You can just refer to another PR with a description
if this PR is part of a larger change.)

What areas of FiftyOne does this PR affect?

  • App: FiftyOne application changes
  • Build: Build and test infrastructure changes
  • Core: Core fiftyone Python library changes
  • Documentation: FiftyOne documentation changes
  • Other

Summary by CodeRabbit

  • New Features

    • Introduced Error component for displaying evaluation errors.
    • Added ErrorIcon component for visual representation of errors.
    • Implemented support for new evaluation types: classification, detection, and segmentation.
  • Bug Fixes

    • Enhanced error handling in the Evaluation component to provide user feedback for unsupported evaluation types.
  • Documentation

    • Updated type definitions for better clarity and usage in components.

Contributor

coderabbitai bot commented Dec 16, 2024

Warning

Rate limit exceeded

@imanjra has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 9 minutes and 28 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between 0142a6e and 3443b87.

📒 Files selected for processing (2)
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx (24 hunks)
  • fiftyone/operators/builtins/panels/model_evaluation/__init__.py (4 hunks)

Walkthrough

The pull request introduces enhancements to the model evaluation view in a machine learning application. New components like Error and ErrorIcon have been added to improve error handling and user interface. The Evaluation component now includes more robust error management, checking for unsupported evaluation types and displaying appropriate feedback. A utility function in utils.ts has been slightly modified to improve numeric value formatting. In the backend, a new constant SUPPORTED_EVALUATION_TYPES was introduced to validate dataset evaluation types.

Changes

File | Change Summary
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Error.tsx | New React component for displaying error messages with Material-UI components, plus a type definition for its props.
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/ErrorIcon.tsx | New SVG icon component representing an error state.
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx | Enhanced error handling, added error states, and conditional rendering based on evaluation errors.
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/utils.ts | Updated formatValue function with an additional numeric value check.
fiftyone/operators/builtins/panels/model_evaluation/__init__.py | Added SUPPORTED_EVALUATION_TYPES constant for evaluation type validation.
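A minimal sketch of how the frontend pieces described above might fit together, assuming the data key and prop names implied by the walkthrough (the real logic lives in Evaluation.tsx and Error.tsx):

// Minimal sketch, not the actual implementation: the panel writes an
// `evaluation_<key>_error` entry when the type is unsupported, and the
// component renders the error view instead of the metrics UI.
import React, { useMemo } from "react";
import EvaluationError from "./Error"; // local alias to avoid shadowing the global Error

type UnsupportedError = { error: string; info: Record<string, unknown> };

function EvaluationSketch(props: {
  data: Record<string, unknown>;
  name: string;
  navigateBack: () => void;
}) {
  const { data, name, navigateBack } = props;

  // Derive the error state once per data/name change
  const evaluationError = useMemo(
    () => data?.[`evaluation_${name}_error`] as UnsupportedError | undefined,
    [data, name]
  );

  if (evaluationError) {
    return <EvaluationError error={evaluationError} onBack={navigateBack} />;
  }

  return <>{/* ...normal evaluation UI... */}</>;
}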

Poem

🐰 A Rabbit's Ode to Error Handling 🐰
In code's wild garden, errors bloom bright,
With icons and messages, we set things right,
Validation dances, types checked with care,
Our model's journey, no longer a scare!
Hop, hop, debugging with glee! 🚀



@imanjra imanjra changed the base branch from develop to release/v1.2.0 December 16, 2024 19:06
@imanjra imanjra marked this pull request as ready for review December 16, 2024 21:16
@imanjra imanjra requested a review from a team December 16, 2024 21:17
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (2)
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/ErrorIcon.tsx (1)

7-31: Allow the icon's color to be customized via props

The ErrorIcon component currently uses hardcoded fill colors in the SVG paths, which limits its reusability and theming capabilities. To align with Material-UI best practices and allow the icon's color to be controlled via props, consider replacing the hardcoded fill attributes with fill="currentColor".

Apply this diff to update the SVG paths:

 <SvgIcon width="54" height="61" viewBox="0 0 54 61" fill="none" {...props}>
   <path
-    d="M27 3.16667C34.0667 3.16667 40.6667 3.9 45.5334 5.23333C48.4667 6.03333 50 6.83333 50.6667 7.3V53.7C50.0667 54.1667 48.5334 54.9667 45.5334 55.7667C40.6667 57.1 34.0667 57.8333 27 57.8333C19.9334 57.8333 13.3334 57.1 8.46669 55.7667C5.53335 54.9667 4.00002 54.1667 3.33335 53.7V7.3C3.93335 6.83333 5.46669 6.03333 8.46669 5.23333C13.3334 3.9 19.9334 3.16667 27 3.16667ZM27 0.5C19.7334 0.5 12.8667 1.23333 7.73335 2.63333C2.86669 3.96667 0.666687 5.56667 0.666687 6.56667V54.4333C0.666687 55.4333 2.86669 57.0333 7.73335 58.3667C12.8667 59.7 19.7334 60.5 27 60.5C34.2667 60.5 41.1334 59.7 46.2667 58.3667C51.1334 57.0333 53.3334 55.4333 53.3334 54.4333V6.56667C53.3334 5.56667 51.1334 3.96667 46.2667 2.63333C41.1334 1.23333 34.2667 0.5 27 0.5Z"
-    fill="#FFC59B"
+    fill="currentColor"
   />
   <path
     d="M18.2664 27.6333C14.3331 27.6333 11.0664 24.4333 11.0664 20.4333C11.0664 16.4333 14.3331 13.2999 18.2664 13.2999C22.1997 13.2999 25.3997 16.4999 25.3997 20.4333C25.3997 24.3666 22.1997 27.6333 18.2664 27.6333ZM18.2664 15.9666C15.7997 15.9666 13.7331 17.9666 13.7331 20.4999C13.7331 23.0333 15.7331 25.0333 18.2664 25.0333C20.7997 25.0333 22.7997 23.0333 22.7997 20.4999C22.7997 17.9666 20.7331 15.9666 18.2664 15.9666Z"
-    fill="#FFC59B"
+    fill="currentColor"
   />
   {/* Repeat for all paths */}
 </SvgIcon>

This change allows the icon's color to be controlled by the color prop of the SvgIcon, enhancing reusability and consistency with theming.
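If that change is applied, a call site could then pick the color through standard SvgIcon props (hypothetical usage, not code from this PR):

// Hypothetical usage once the paths use fill="currentColor": the icon
// inherits the MUI palette color or the parent's CSS color.
<ErrorIcon color="warning" />
<ErrorIcon sx={{ color: "text.secondary" }} />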

fiftyone/operators/builtins/panels/model_evaluation/__init__.py (1)

303-310: Consider adding logging for unsupported evaluation types

While the error handling is good, adding logging would help track occurrences of unsupported types in production.

 if evaluation_type not in SUPPORTED_EVALUATION_TYPES:
+    logger.warning(
+        "Unsupported evaluation type: %s for key: %s",
+        evaluation_type,
+        computed_eval_key
+    )
     ctx.panel.set_data(
         f"evaluation_{computed_eval_key}_error",
         {"error": "unsupported", "info": serialized_info},
     )
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f627b82 and cf54fc2.

📒 Files selected for processing (5)
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Error.tsx (1 hunks)
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/ErrorIcon.tsx (1 hunks)
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx (19 hunks)
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/utils.ts (1 hunks)
  • fiftyone/operators/builtins/panels/model_evaluation/__init__.py (3 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/utils.ts (1)

Pattern **/*.{ts,tsx}: Review the Typescript and React code for conformity with best practices in React, Recoil, Graphql, and Typescript. Highlight any deviations.

app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/ErrorIcon.tsx (1)

Pattern **/*.{ts,tsx}: Review the Typescript and React code for conformity with best practices in React, Recoil, Graphql, and Typescript. Highlight any deviations.

app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Error.tsx (1)

Pattern **/*.{ts,tsx}: Review the Typescript and React code for conformity with best practices in React, Recoil, Graphql, and Typescript. Highlight any deviations.

app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx (1)

Pattern **/*.{ts,tsx}: Review the Typescript and React code for conformity with best practices in React, Recoil, Graphql, and Typescript. Highlight any deviations.

🪛 Biome (1.9.4)
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/utils.ts

[error] 38-38: isNaN is unsafe. It attempts a type coercion. Use Number.isNaN instead.

See the MDN documentation for more details.
Unsafe fix: Use Number.isNaN instead.

(lint/suspicious/noGlobalIsNan)

app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Error.tsx

[error] 6-6: Do not shadow the global "Error" property.

Consider renaming this variable. It's easy to confuse the origin of variables when they're named after a known global.

(lint/suspicious/noShadowRestrictedNames)

app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx

[error] 45-45: Do not shadow the global "Error" property.

Consider renaming this variable. It's easy to confuse the origin of variables when they're named after a known global.

(lint/suspicious/noShadowRestrictedNames)

🔇 Additional comments (2)
fiftyone/operators/builtins/panels/model_evaluation/__init__.py (2)

35-35: LGTM: Well-defined constant for supported evaluation types

The constant follows proper naming conventions and includes common ML model evaluation types.


323-323: Verify serialized evaluation info structure

While using pre-serialized info is good for consistency, consider validating its structure to prevent potential issues.
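On the consuming side, a small runtime guard (hypothetical helper, not part of this PR) could verify the payload shape before the frontend renders it:

// Hypothetical type guard for the payload written by the panel:
// { "error": "unsupported", "info": <serialized evaluation info> }
type UnsupportedEvaluationError = {
  error: "unsupported";
  info: Record<string, unknown>;
};

function isUnsupportedEvaluationError(
  value: unknown
): value is UnsupportedEvaluationError {
  if (typeof value !== "object" || value === null) return false;
  const candidate = value as { error?: unknown; info?: unknown };
  return (
    candidate.error === "unsupported" &&
    typeof candidate.info === "object" &&
    candidate.info !== null
  );
}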

@@ -42,6 +42,7 @@ import {
 } from "@mui/material";
 import React, { useEffect, useMemo, useState } from "react";
 import { useRecoilState, useSetRecoilState } from "recoil";
+import Error from "./Error";

🛠️ Refactor suggestion

Avoid shadowing the global 'Error' object

Importing Error shadows the global Error object, which can lead to confusion or unexpected behavior. Consider renaming the imported component to prevent this issue.

Apply this diff to rename the import and update its usage:

-import Error from "./Error";
+import EvaluationError from "./Error";

...

-  return <Error error={evaluationError} onBack={navigateBack} />;
+  return <EvaluationError error={evaluationError} onBack={navigateBack} />;

Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 Biome (1.9.4)

[error] 45-45: Do not shadow the global "Error" property.

Consider renaming this variable. It's easy to confuse the origin of variables when they're named after a known global.

(lint/suspicious/noShadowRestrictedNames)

@@ -35,7 +35,7 @@ export function getNumericDifference(
 export function formatValue(value: string | number, fractionDigits = 3) {
   const numericValue =
     typeof value === "number" ? value : parseFloat(value as string);
-  if (!isNaN(numericValue)) {
+  if (!isNaN(numericValue) && numericValue == value) {

⚠️ Potential issue

Use Number.isNaN instead of global isNaN

The global isNaN function performs type coercion, which can lead to unexpected results. It's safer to use Number.isNaN, which does not coerce the argument and ensures accurate checks for NaN.

Apply this diff to update the code:

 export function formatValue(value: string | number, fractionDigits = 3) {
   const numericValue =
     typeof value === "number" ? value : parseFloat(value as string);
-  if (!isNaN(numericValue) && numericValue == value) {
+  if (!Number.isNaN(numericValue) && numericValue == value) {
     return parseFloat(numericValue.toFixed(fractionDigits));
   }
   return value;
 }

Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 Biome (1.9.4)

[error] 38-38: isNaN is unsafe. It attempts a type coercion. Use Number.isNaN instead.

See the MDN documentation for more details.
Unsafe fix: Use Number.isNaN instead.

(lint/suspicious/noGlobalIsNan)
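For intuition, the added numericValue == value check (loose equality, so numeric strings still pass) keeps strings that merely begin with digits from being silently reformatted. A rough illustration, assuming the function as shown in the diffs above:

// Rough illustration of formatValue as shown in the diffs above (assumed behavior):
formatValue(0.876543);   // 0.877  - number input, rounded to 3 fraction digits
formatValue("0.876543"); // 0.877  - numeric string, "0.876543" == 0.876543 holds
formatValue("95%");      // "95%"  - parseFloat gives 95, but 95 == "95%" is false, so the string is returned unchanged
formatValue("N/A");      // "N/A"  - parseFloat gives NaN, so the original value is returned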

import React from "react";
import ErrorIcon from "./ErrorIcon";

export default function Error(props) {

🛠️ Refactor suggestion

Avoid shadowing the global 'Error' object

Naming the component Error shadows the global Error object, which can lead to confusion or unexpected behavior. Consider renaming the component to avoid this issue.

Apply this diff to rename the component:

-export default function Error(props) {
+export default function EvaluationError(props) {
   const { onBack } = props;
   return (
     <Stack sx={{ height: "100%", p: 2 }} spacing={1}>
       ...
     </Stack>
   );
 }

Also, update the component's usage and props type accordingly:

-type ErrorProps = {
+type EvaluationErrorProps = {
   onBack: () => void;
   error: any;
 };

Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 Biome (1.9.4)

[error] 6-6: Do not shadow the global "Error" property.

Consider renaming this variable. It's easy to confuse the origin of variables when they're named after a known global.

(lint/suspicious/noShadowRestrictedNames)

@imanjra imanjra force-pushed the bugfix/me-panel-tweaks branch from cf54fc2 to f15a383 Compare December 17, 2024 17:09
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (5)
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Error.tsx (1)

26-32: Enhance accessibility with ARIA attributes

The error message should be properly labeled for screen readers.

Apply this diff to improve accessibility:

-          <Typography color="secondary">
+          <Typography color="secondary" role="status" aria-live="polite">
             Analyze and improve models collaboratively with your team
           </Typography>
-          <Typography sx={{ fontWeight: 600 }}>
+          <Typography sx={{ fontWeight: 600 }} role="alert">
             The Model Evaluation panel currently supports only classification,
             detection, and segmentation evaluations
           </Typography>
fiftyone/operators/builtins/panels/model_evaluation/__init__.py (1)

35-35: Consider using an Enum for evaluation types

Using a list for supported types is error-prone. An Enum would provide better type safety and maintainability.

Apply this diff to use an Enum:

-SUPPORTED_EVALUATION_TYPES = ["classification", "detection", "segmentation"]
+from enum import Enum
+
+class EvaluationType(str, Enum):
+    CLASSIFICATION = "classification"
+    DETECTION = "detection"
+    SEGMENTATION = "segmentation"
+
+SUPPORTED_EVALUATION_TYPES = [t.value for t in EvaluationType]
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx (3)

94-101: Extract error handling logic into custom hook

The error state management logic should be extracted into a custom hook for better reusability and separation of concerns.

Create a new hook:

function useEvaluationError(data: any, name: string, compareKey?: string) {
  const evaluationError = useMemo(() => {
    return data?.[`evaluation_${name}_error`];
  }, [data, name]);

  const compareEvaluationError = useMemo(() => {
    return data?.[`evaluation_${compareKey}_error`];
  }, [data, compareKey]);

  return { evaluationError, compareEvaluationError };
}
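A hypothetical call site for the hook sketched above (names assumed from the surrounding code):

// Hypothetical usage inside the Evaluation component:
const { evaluationError, compareEvaluationError } = useEvaluationError(
  data,
  name,
  compareKey
);

if (evaluationError) {
  return <Error error={evaluationError} onBack={navigateBack} />;
}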

157-159: Add loading state for error condition

The error state should show a loading indicator while the error data is being fetched.

Apply this diff:

   if (evaluationError) {
+    if (!data) {
+      return (
+        <Box sx={{ height: "50vh", display: "flex", alignItems: "center", justifyContent: "center" }}>
+          <CircularProgress />
+        </Box>
+      );
+    }
     return <Error error={evaluationError} onBack={navigateBack} />;
   }

629-634: Extract common table header styles

The same table header styles are duplicated in multiple places. Extract them into a styled component.

Create a styled component:

const StyledTableHeader = styled(TableRow)(({ theme }) => ({
  'th p': {
    color: theme.palette.text.secondary,
    fontSize: '1rem',
    fontWeight: 600,
  },
}));

Then use it in place of the duplicated styles.

Also applies to: 902-907, 1085-1090, 1277-1282

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cf54fc2 and f15a383.

📒 Files selected for processing (4)
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Error.tsx (1 hunks)
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/ErrorIcon.tsx (1 hunks)
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx (19 hunks)
  • fiftyone/operators/builtins/panels/model_evaluation/__init__.py (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/ErrorIcon.tsx
🧰 Additional context used
📓 Path-based instructions (2)
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx (1)

Pattern **/*.{ts,tsx}: Review the Typescript and React code for conformity with best practices in React, Recoil, Graphql, and Typescript. Highlight any deviations.

app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Error.tsx (1)

Pattern **/*.{ts,tsx}: Review the Typescript and React code for conformity with best practices in React, Recoil, Graphql, and Typescript. Highlight any deviations.

🪛 Biome (1.9.4)
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx

[error] 45-45: Do not shadow the global "Error" property.

Consider renaming this variable. It's easy to confuse the origin of variables when they're named after a known global.

(lint/suspicious/noShadowRestrictedNames)

app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Error.tsx

[error] 6-6: Do not shadow the global "Error" property.

Consider renaming this variable. It's easy to confuse the origin of variables when they're named after a known global.

(lint/suspicious/noShadowRestrictedNames)

🔇 Additional comments (2)
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Error.tsx (2)

39-42: 🛠️ Refactor suggestion

Add TypeScript type definition for props

The props type should be properly defined using TypeScript for better type safety and documentation.

Add this type definition:

+type ModelEvaluationErrorProps = {
+  onBack: () => void;
+  error: {
+    error: 'unsupported';
+    info: Record<string, unknown>;
+  };
+};

-type ErrorProps = {
-  onBack: () => void;
-  error: any;
-};

Likely invalid or redundant comment.


6-6: ⚠️ Potential issue

Rename component to avoid shadowing global Error object

The component name shadows the global Error object, which can lead to confusion and potential issues.

Apply this diff to rename the component:

-export default function Error(props) {
+export default function ModelEvaluationError(props) {

Likely invalid or redundant comment.

🧰 Tools
🪛 Biome (1.9.4)

[error] 6-6: Do not shadow the global "Error" property.

Consider renaming this variable. It's easy to confuse the origin of variables when they're named after a known global.

(lint/suspicious/noShadowRestrictedNames)


export default function Error(props) {
  const { onBack, error } = props;
  console.log(">>>", { error });

🛠️ Refactor suggestion

Remove console.log statement

Debug logging should not be present in production code.

Apply this diff to remove the console.log:

-  console.log(">>>", { error });

Comment on lines 303 to 310
serialized_info = info.serialize()
evaluation_type = info.config.type
if evaluation_type not in SUPPORTED_EVALUATION_TYPES:
    ctx.panel.set_data(
        f"evaluation_{computed_eval_key}_error",
        {"error": "unsupported", "info": serialized_info},
    )
    return

🛠️ Refactor suggestion

Improve error handling in load_evaluation method

The error handling could be more informative by including the actual evaluation type in the error message.

Apply this diff to enhance error feedback:

             serialized_info = info.serialize()
             evaluation_type = info.config.type
             if evaluation_type not in SUPPORTED_EVALUATION_TYPES:
+                supported_types = ", ".join(SUPPORTED_EVALUATION_TYPES)
                 ctx.panel.set_data(
                     f"evaluation_{computed_eval_key}_error",
-                    {"error": "unsupported", "info": serialized_info},
+                    {
+                        "error": "unsupported",
+                        "info": serialized_info,
+                        "message": f"Evaluation type '{evaluation_type}' is not supported. Supported types are: {supported_types}"
+                    },
                 )
                 return
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-            serialized_info = info.serialize()
-            evaluation_type = info.config.type
-            if evaluation_type not in SUPPORTED_EVALUATION_TYPES:
-                ctx.panel.set_data(
-                    f"evaluation_{computed_eval_key}_error",
-                    {"error": "unsupported", "info": serialized_info},
-                )
-                return
+            serialized_info = info.serialize()
+            evaluation_type = info.config.type
+            if evaluation_type not in SUPPORTED_EVALUATION_TYPES:
+                supported_types = ", ".join(SUPPORTED_EVALUATION_TYPES)
+                ctx.panel.set_data(
+                    f"evaluation_{computed_eval_key}_error",
+                    {
+                        "error": "unsupported",
+                        "info": serialized_info,
+                        "message": f"Evaluation type '{evaluation_type}' is not supported. Supported types are: {supported_types}"
+                    },
+                )
+                return

Comment on lines +318 to +325
hide: isSegmentation,
},
{
id: "iou",
property: "IoU Threshold",
value: evaluationConfig.iou,
compareValue: compareEvaluationConfig.iou,
hide: !isObjectDetection,

🛠️ Refactor suggestion

Use TypeScript discriminated union for evaluation types

The evaluation type checks should use TypeScript's discriminated unions for better type safety.

Add these types:

type EvaluationType = 'classification' | 'detection' | 'segmentation';

interface BaseEvaluationConfig {
  type: EvaluationType;
}

interface ClassificationConfig extends BaseEvaluationConfig {
  type: 'classification';
  method: 'binary' | 'multiclass';
}

interface DetectionConfig extends BaseEvaluationConfig {
  type: 'detection';
  iou: number;
}

interface SegmentationConfig extends BaseEvaluationConfig {
  type: 'segmentation';
}

type EvaluationConfig = ClassificationConfig | DetectionConfig | SegmentationConfig;
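A hypothetical narrowing example showing the benefit: fields like iou or method only type-check inside the matching branch, and the switch is exhaustively checked by the compiler.

// Hypothetical example: TypeScript narrows `config` in each case, so
// `config.iou` is only accessible in the 'detection' branch.
function describeConfig(config: EvaluationConfig): string {
  switch (config.type) {
    case "classification":
      return `classification (${config.method})`;
    case "detection":
      return `detection at IoU ${config.iou}`;
    case "segmentation":
      return "segmentation";
  }
}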

@imanjra imanjra force-pushed the bugfix/me-panel-tweaks branch from f15a383 to 0142a6e Compare December 17, 2024 17:17
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx (2)

318-325: Consider extracting metric filtering logic

The metric filtering logic is repeated in multiple places. Consider extracting it into a utility function for better maintainability.

+const filterMetricsByType = (metrics, isSegmentation) => {
+  return metrics.filter(m => !m.hide);
+};

-const computedMetricPerformance = metricPerformance.filter((m) => !m.hide);
+const computedMetricPerformance = filterMetricsByType(metricPerformance, isSegmentation);

Also applies to: 876-877, 886-887


629-638: Consider extracting table header styles

The table header styling is duplicated across multiple table components. Consider extracting it into a styled component or theme constant.

+const TableHeaderCell = styled(TableCell)(({ theme }) => ({
+  "& p": {
+    color: theme.palette.text.secondary,
+    fontSize: "1rem",
+    fontWeight: 600,
+  }
+}));

-<TableCell sx={{ "th p": { /* ... */ } }}>
+<TableHeaderCell>
   <Typography>Metric</Typography>
-</TableCell>
+</TableHeaderCell>

Also applies to: 902-911, 1085-1094, 1277-1286

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f15a383 and 0142a6e.

📒 Files selected for processing (4)
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Error.tsx (1 hunks)
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/ErrorIcon.tsx (1 hunks)
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx (19 hunks)
  • fiftyone/operators/builtins/panels/model_evaluation/__init__.py (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • fiftyone/operators/builtins/panels/model_evaluation/__init__.py
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/ErrorIcon.tsx
🧰 Additional context used
📓 Path-based instructions (2)
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx (1)

Pattern **/*.{ts,tsx}: Review the Typescript and React code for conformity with best practices in React, Recoil, Graphql, and Typescript. Highlight any deviations.

app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Error.tsx (1)

Pattern **/*.{ts,tsx}: Review the Typescript and React code for conformity with best practices in React, Recoil, Graphql, and Typescript. Highlight any deviations.

🪛 Biome (1.9.4)
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx

[error] 45-45: Do not shadow the global "Error" property.

Consider renaming this variable. It's easy to confuse the origin of variables when they're named after a known global.

(lint/suspicious/noShadowRestrictedNames)

app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Error.tsx

[error] 6-6: Do not shadow the global "Error" property.

Consider renaming this variable. It's easy to confuse the origin of variables when they're named after a known global.

(lint/suspicious/noShadowRestrictedNames)

🔇 Additional comments (5)
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Error.tsx (2)

6-6: Avoid shadowing the global 'Error' object

The component name shadows the global Error object, which can lead to confusion or unexpected behavior.

Also applies to: 38-40

🧰 Tools
🪛 Biome (1.9.4)

[error] 6-6: Do not shadow the global "Error" property.

Consider renaming this variable. It's easy to confuse the origin of variables when they're named after a known global.

(lint/suspicious/noShadowRestrictedNames)


8-35: LGTM! Well-structured error UI

The error component has a clean and user-friendly layout with:

  • Clear navigation with a back button
  • Centered error icon and messages
  • Informative text about supported evaluation types
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx (3)

94-101: LGTM! Well-implemented error state management

Good use of useMemo for deriving error states from the data object, preventing unnecessary recalculations.


157-159: LGTM! Proper error handling flow

Good implementation of early return pattern when evaluation error is detected.


1312-1324: LGTM! Improved table row rendering

Good use of null check to filter out hidden rows before rendering.
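The pattern being praised is roughly the following (a sketch, not the exact code from Evaluation.tsx):

// Sketch: drop hidden or absent rows before rendering instead of rendering and hiding them.
{rows
  .filter((row) => Boolean(row) && !row.hide)
  .map((row) => (
    <TableRow key={row.id}>{/* ...cells... */}</TableRow>
  ))}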

manivoxel51 previously approved these changes Dec 17, 2024
Contributor

@manivoxel51 manivoxel51 left a comment


🍨
