Hi Team,

I was wondering how you compute the metrics for evaluation. I was going through the metrics.py file and came across the ap_per_class function, which seems to compute the average precision for each class. (FYI: my custom dataset has only one class, with many objects of that class per image.)
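(For reference, the core of an ap_per_class-style computation is the standard per-class precision-recall integration sketched below. This is the common VOC/YOLO-family approach, not necessarily DynamicDet's exact code, and the function name here is illustrative. With a single class, the per-class loop collapses to one pass like this one.)

```python
import numpy as np

def average_precision(tp, conf, n_gt):
    """AP for one class (illustrative sketch).

    tp:   (num_preds,) bool, whether each prediction matched a ground truth
    conf: (num_preds,) confidence scores
    n_gt: total number of ground-truth boxes of this class
    """
    order = np.argsort(-conf)            # sort predictions by descending confidence
    tp = np.asarray(tp)[order]
    tpc = np.cumsum(tp)                  # cumulative true positives
    fpc = np.cumsum(1 - tp)              # cumulative false positives
    recall = tpc / (n_gt + 1e-16)
    precision = tpc / (tpc + fpc)

    # Append sentinel values and take the monotonically decreasing
    # precision envelope, then integrate where recall changes
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([1.0], precision, [0.0]))
    mpre = np.flip(np.maximum.accumulate(np.flip(mpre)))
    i = np.where(mrec[1:] != mrec[:-1])[0]
    return np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1])
```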
I also wanted to understand what *stats (the argument passed to the function in test.py) is. How does it help assign a prediction to a ground truth?
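(For context, in YOLOv7-style test loops, which DynamicDet appears to follow, stats is usually a list of per-image tuples that gets concatenated into four dataset-wide arrays; *stats is then just argument unpacking of those four arrays into ap_per_class. A sketch under that assumption, with illustrative names:)

```python
import numpy as np

def accumulate_stats(per_image_results):
    """per_image_results: iterable of (correct, conf, pred_cls, target_cls)
    tuples, one per image (names are illustrative, not DynamicDet's exact code).

    correct:    (num_preds, num_iou_thresholds) bool TP flags at IoU 0.5:0.95
    conf:       (num_preds,) confidence scores
    pred_cls:   (num_preds,) predicted class indices
    target_cls: (num_gt,) ground-truth class indices
    """
    stats = [tuple(np.asarray(a) for a in img) for img in per_image_results]
    # Concatenate across images so each element is one flat array over the dataset
    return [np.concatenate(x, 0) for x in zip(*stats)]

# ap_per_class(*stats) then unpacks the four arrays as
# ap_per_class(tp, conf, pred_cls, target_cls).
```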
Also, I wanted to know how you associate a particular prediction with a ground truth. Is it based solely on the highest IoU value? If so, what happens if you assign a ground truth to a particular prediction (eliminating it from the iteration once assigned) and then find a higher IoU with another prediction further down the iteration?
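(For reference, most YOLO-style evaluators resolve exactly this case with a greedy, confidence-ordered match: once a ground truth is claimed, it stays claimed, and a later prediction with a higher IoU to that ground truth is simply counted as a false positive. This is the standard VOC/COCO protocol, accepting a slightly suboptimal assignment in exchange for a simple, deterministic pass. A minimal sketch of the idea, not DynamicDet's exact code:)

```python
import numpy as np

def greedy_match(iou_matrix, iou_thresh=0.5):
    """Greedy matching sketch (illustrative, not DynamicDet's exact code).

    iou_matrix: (num_preds, num_gt) pairwise IoUs, with predictions already
                sorted by descending confidence.
    Returns a boolean true-positive flag per prediction.
    """
    num_preds, num_gt = iou_matrix.shape
    tp = np.zeros(num_preds, dtype=bool)
    if num_gt == 0:
        return tp
    gt_taken = np.zeros(num_gt, dtype=bool)
    for p in range(num_preds):                  # highest-confidence prediction first
        g = int(np.argmax(iou_matrix[p]))       # its best-overlapping ground truth
        if iou_matrix[p, g] >= iou_thresh and not gt_taken[g]:
            gt_taken[g] = True                  # GT is consumed permanently
            tp[p] = True
        # A later prediction with a higher IoU to an already-claimed GT
        # is left as a false positive.
    return tp
```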
Thanks!
Thanks for your response. I just wanted to understand what is happening in the backend of DynamicDet, specifically in the metrics.py and test.py files.
If I run the model on test data, how can I access the precision, recall, etc. scores?
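(Assuming DynamicDet keeps the YOLOv7-style ap_per_class signature, the scores can typically be pulled out of the test loop like this; stats here is the accumulated list described above, and the variable names follow the YOLOv7 convention rather than anything DynamicDet-specific:)

```python
# Hypothetical usage, assuming ap_per_class(tp, conf, pred_cls, target_cls)
p, r, ap, f1, ap_class = ap_per_class(*stats)
# p, r, f1: per-class precision / recall / F1 at the best-F1 confidence threshold
# ap:       (num_classes, num_iou_thresholds); ap[:, 0] is AP@0.5
mp, mr = p.mean(), r.mean()                        # mean precision / recall
map50, map_all = ap[:, 0].mean(), ap.mean(1).mean()
print(f'P={mp:.3f} R={mr:.3f} mAP@0.5={map50:.3f} mAP@0.5:0.95={map_all:.3f}')
```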