Creating evaluation metrics for object detection projects takes a surprising amount of time. This repo contains code we've found useful for speeding up results analysis in object detection projects. It provides:
- Easy creation of a pandas inference dataframe enabling detailed analysis.
- Summary statistics for easy plotting.
- Calculation of COCO metrics from the same pandas dataframe (uses pycocotools).
To see a quick example of the functionality have a look at the starter notebook.
```shell
pip install git+https://github.com/alexhock/object-detection-metrics
```
Imports:

```python
from objdetecteval.metrics import (
    image_metrics as im,
    coco_metrics as cm,
)
```
Take predictions in a pandas dataframe and a similar labels dataframe (the same columns, minus the score), and calculate an 'inference' dataframe:

```python
infer_df = im.get_inference_metrics_from_df(preds_df, labels_df)
infer_df.head()
```
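The exact schema comes from the library (see the starter notebook); as an illustration only, a minimal pair of input dataframes might look like this — the column names below are assumptions, not the documented layout:

```python
import pandas as pd

# Hypothetical column layout for predictions: one row per predicted box.
# Check the starter notebook for the exact schema the library expects.
preds_df = pd.DataFrame({
    "image_id": [0, 0, 1],
    "xmin": [10.0, 50.0, 20.0],
    "ymin": [10.0, 60.0, 30.0],
    "xmax": [40.0, 90.0, 60.0],
    "ymax": [40.0, 95.0, 70.0],
    "label": ["cat", "dog", "cat"],
    "score": [0.9, 0.75, 0.6],
})

# The labels dataframe has the same columns minus the confidence score.
labels_df = preds_df.drop(columns=["score"]).copy()
```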
The inference dataframe enables easy analysis of the results, for example:
- IoU stats by class and failure category
- Highest-scoring false positive predictions
- Comparison of bounding box distributions for FPs and TPs
- etc.
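For context, the IoU values behind these stats are the standard intersection-over-union of a predicted box and a ground-truth box. A minimal reference implementation (for `(xmin, ymin, xmax, ymax)` boxes; not the library's internal code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (xmin, ymin, xmax, ymax) boxes."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```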
Summarise the inference metrics by class:

```python
class_summary_df = im.summarise_inference_metrics(infer_df)
class_summary_df
```
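The Precision and Recall columns follow the usual detection definitions; assuming the summary holds per-class TP/FP/FN counts, they reduce to:

```python
def precision_recall(tp, fp, fn):
    """Standard detection precision/recall from per-class counts."""
    # Precision: fraction of predictions that were correct.
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    # Recall: fraction of ground-truth objects that were found.
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    return precision, recall
```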
This makes it easy to plot:

```python
figsize = (5, 5)
fontsize = 16

fig_confusion = (
    class_summary_df[["TP", "FP", "FN"]]
    .plot(kind="bar", figsize=figsize, width=1, align="center", fontsize=fontsize)
    .get_figure()
)

fig_pr = (
    class_summary_df[["Precision", "Recall"]]
    .plot(kind="bar", figsize=figsize, width=1, align="center", fontsize=fontsize)
    .get_figure()
)
```
Use the dataframes to calculate full COCO metrics:

```python
res = cm.get_coco_from_dfs(preds_df, labels_df, False)
res
```
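Under the hood, COCO average precision is (roughly) the area under an interpolated precision-recall curve at a given IoU threshold, averaged over thresholds and classes. A simplified sketch of the per-class AP step — pycocotools uses 101 recall points and considerably more bookkeeping:

```python
def average_precision(recalls, precisions):
    """Area under an interpolated PR curve (all-point interpolation).

    `recalls` must be sorted ascending; precision is made monotonically
    non-increasing before integrating, as in standard AP calculations.
    """
    # Pad the curve at recall 0 and 1.
    r = [0.0] + list(recalls) + [1.0]
    p = [0.0] + list(precisions) + [0.0]
    # Interpolate: each precision becomes the max of itself and
    # everything to its right.
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Sum rectangle areas where recall increases.
    return sum((r[i + 1] - r[i]) * p[i + 1] for i in range(len(r) - 1))
```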