Evaluation metrics per class #42

Hi @Lucaweihs, @mattdeitke!
Can you please tell me whether there are any existing evaluation metrics per class or per object type (pickupable/openable/...)?
Best regards,
Mariia

Comments
Hi @mariiak2021,

Unfortunately there are no per-class evaluation metrics currently defined but, if you're interested, these would be very easy to add. You'd want to update the metrics function to return such values. E.g., in pseudo-code you could do something like:

```python
for object_type in object_types_whose_position_was_different_at_the_start_of_the_unshuffle_phase:
    metrics[f"{object_type}_fixed"] = end_energy_of_object == 0.0
```

Let me know if you'd need any help getting this working.
Hi @Lucaweihs, thanks a lot for your reply! I might need your help, if possible. :) Will it look like this to compute the end energy of each object that was misplaced?

Please correct me if this is the wrong way. Also, which metrics can I reuse per class besides end energy?
Best,
Hi @Lucaweihs, sorry to disturb you, but have you had time to look into my question? :) Thank you!
Hi @mariiak2021,

Just to double check: the number that you want reported per class is the average end energy for objects of that class? If so, you could extend the metrics function along these lines:

```python
def metrics(self) -> Dict[str, Any]:
    metrics = ...  # Old UnshuffleTask metrics code

    # New, per-class, metrics code
    key_to_count = defaultdict(lambda: 0)
    for object_type, end_energy in zip([gp["type"] for gp in gps], end_energies):
        if end_energy > 0.0:
            key = f"end_energy__{object_type}"
            # Undo the running average across object type and recompute it with the new value/count
            metrics[key] = metrics.get(key, 0.0) * key_to_count[key] + end_energy
            key_to_count[key] += 1
            metrics[key] /= key_to_count[key]
    return metrics
```

Note that I'm reusing the `gps` and `end_energies` variables from the existing metrics code.

Let me know if that helps!
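(For reference, the same running-average update can be pulled out into a small, self-contained function. The name `add_per_type_end_energy` and the `goal_poses`/`end_energies` arguments below are illustrative stand-ins for the `gps` and `end_energies` values computed by the existing metrics code.)

```python
from collections import defaultdict
from typing import Any, Dict, List


def add_per_type_end_energy(
    metrics: Dict[str, Any],
    goal_poses: List[Dict[str, Any]],
    end_energies: List[float],
) -> Dict[str, Any]:
    """Add an average `end_energy__<type>` entry per object type (illustrative)."""
    key_to_count: Dict[str, int] = defaultdict(lambda: 0)
    for object_type, end_energy in zip([gp["type"] for gp in goal_poses], end_energies):
        if end_energy > 0.0:
            key = f"end_energy__{object_type}"
            # Undo the running average for this type and fold in the new value.
            metrics[key] = metrics.get(key, 0.0) * key_to_count[key] + end_energy
            key_to_count[key] += 1
            metrics[key] /= key_to_count[key]
    return metrics


# Example with made-up poses/energies: two misplaced Mugs and one fully-fixed Book.
goal_poses = [{"type": "Mug"}, {"type": "Mug"}, {"type": "Book"}]
end_energies = [0.5, 1.5, 0.0]
print(add_per_type_end_energy({}, goal_poses, end_energies))
# {'end_energy__Mug': 1.0}
```

(Note that, as in the snippet above, object types whose end energy is zero never get an entry; if you want an explicit 0.0 reported for fully-fixed types, you would need to drop the `end_energy > 0.0` filter or seed the keys beforehand.)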