
Add absolute error and MAE as custom metrics #195

Merged · 10 commits · Jan 18, 2025

Conversation

@manushreegangwar (Contributor) commented Jan 10, 2025

This PR adds `absolute_error` and `mean_absolute_error` metric operators (similar to this):

```python
import random
import numpy as np
import fiftyone as fo
import fiftyone.zoo as foz

dataset = foz.load_zoo_dataset("quickstart").select_fields().clone()

# Populate some fake regression data
for idx, sample in enumerate(dataset, 1):
    ytrue = random.random() * idx
    ypred = ytrue + np.random.randn() * np.sqrt(ytrue)
    confidence = random.random()
    sample["ground_truth"] = fo.Regression(value=ytrue)
    sample["predictions"] = fo.Regression(value=ypred, confidence=confidence)
    sample.save()

custom_metrics = {}
custom_metrics.update({"@voxel51/metric-examples/example_metric": dict(value="hello")})
custom_metrics.update({"@voxel51/metric-examples/absolute_error": {}})
custom_metrics.update({"@voxel51/metric-examples/mean_absolute_error": dict(error_eval_key="eval_absolute_error")})

results = dataset.evaluate_regressions(
    "predictions",
    gt_field="ground_truth",
    eval_key="eval",
    custom_metrics=custom_metrics,
    metric="absolute_error",
)

results.custom_metrics_report()
# {'Example metric': 'hello', 'Mean Absolute Error Metric': 5.0326606526479445}
```
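For intuition, the value reported by the `mean_absolute_error` metric can be cross-checked by hand. This is a minimal sketch using made-up arrays (not data from the example above): the metric is simply the per-sample absolute error averaged over the dataset.

```python
import numpy as np

# Made-up ground-truth and predicted regression values
ytrue = np.array([1.0, 2.0, 3.0, 4.0])
ypred = np.array([1.5, 1.0, 3.5, 2.0])

# Per-sample absolute error (what the absolute_error operator stores per sample)
abs_err = np.abs(ypred - ytrue)

# Mean absolute error (what the mean_absolute_error operator reports)
mae = abs_err.mean()
print(mae)  # 1.0
```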

This also works for video datasets.

```python
dataset = foz.load_zoo_dataset("quickstart-video")

# Populate some fake regression data
for idx, sample in enumerate(dataset, 1):
    for fidx, frame in enumerate(sample.frames.values()):
        ytrue = random.random() * fidx
        ypred = ytrue + np.random.randn() * np.sqrt(ytrue)
        confidence = random.random()
        frame["ground_truth"] = fo.Regression(value=ytrue)
        frame["predictions"] = fo.Regression(value=ypred, confidence=confidence)
    sample.save()

results = dataset.evaluate_regressions(
    "frames.predictions",
    gt_field="frames.ground_truth",
    eval_key="eval",
    custom_metrics=custom_metrics,
    metric="absolute_error",
)
```

Review threads on plugins/metric-examples/__init__.py were resolved.
@manushreegangwar force-pushed the manushree/custom-metrics branch from 97c3db6 to 71fe421 on January 17, 2025
@brimoor merged commit fb38e4c into custom-metrics on Jan 18, 2025
@brimoor deleted the manushree/custom-metrics branch on January 18, 2025
2 participants