Automatically find metric attribute when logging #8656
Comments
Hey @samgelman, good catch. I will do that next week unless you want to. Should be fairly simple. Best,
Dear @samgelman, seems to work fine. Am I missing something?
Hmm, thanks for looking into it. In my setup, I was getting:
I'll try to recreate it with a simple example.
Dear @samgelman, I added a test. Feel free to add a test which currently breaks. Best,
Hey @tchaton, it seems this fails:
Two things to note.
Thanks,
After looking a bit at this I found the following:

```python
from torchmetrics import Accuracy, PearsonCorrcoef
from torch.nn import ModuleList

m1 = ModuleList([Accuracy() for _ in range(5)])
m2 = ModuleList([PearsonCorrcoef() for _ in range(5)])

list(m1.named_modules())
# [('',
#   ModuleList(
#     (0): Accuracy()
#     (1): Accuracy()
#     (2): Accuracy()
#     (3): Accuracy()
#     (4): Accuracy()
#   )),
#  ('0', Accuracy()),
#  ('1', Accuracy()),
#  ('2', Accuracy()),
#  ('3', Accuracy()),
#  ('4', Accuracy())]

list(m2.named_modules())
# [('',
#   ModuleList(
#     (0): PearsonCorrcoef()
#     (1): PearsonCorrcoef()
#     (2): PearsonCorrcoef()
#     (3): PearsonCorrcoef()
#     (4): PearsonCorrcoef()
#   )),
#  ('0', PearsonCorrcoef())]
```

It seems like metrics where the metric state is of list type only show up once in `named_modules()`.
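The deduplication above comes from the memo set that `nn.Module.named_modules()` keeps: if metric instances hash and compare equal (the bug in question), only the first one is yielded. A minimal pure-Python sketch, using stand-in classes rather than torch or torchmetrics, that reproduces the behavior:

```python
class FakeModule:
    """Minimal stand-in for nn.Module, just enough to mirror how
    named_modules() walks children and skips entries already in its memo set."""

    def __init__(self, children=None):
        self._modules = dict(children or {})

    def named_modules(self, memo=None, prefix=""):
        # mirrors torch.nn.Module.named_modules: a memo *set* skips any
        # module that compares equal to one already seen
        if memo is None:
            memo = set()
        if self not in memo:
            memo.add(self)
            yield prefix, self
            for name, module in self._modules.items():
                child_prefix = prefix + ("." if prefix else "") + name
                yield from module.named_modules(memo, child_prefix)


class BadHashMetric(FakeModule):
    """Mimics the old torchmetrics bug: all instances hash and compare equal."""

    def __hash__(self):
        return hash("BadHashMetric")

    def __eq__(self, other):
        return isinstance(other, BadHashMetric)


root = FakeModule({str(i): BadHashMetric() for i in range(5)})
names = [name for name, _ in root.named_modules()]
print(names)  # ['', '0'] -- the four duplicates are swallowed by the memo set
```

This is why `m2.named_modules()` above lists only `('0', PearsonCorrcoef())` instead of all five children.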
This was an issue due to a bug in the hashing of metrics, which has been fixed in Lightning-AI/torchmetrics#478.
🚀 Feature
Motivation
When logging metrics stored in a `ModuleList`, it's somewhat clunky because you have to manually specify the `metric_attribute` in the call to `self.log()`.
Pitch
It seems like Lightning should be able to find the `metric_attribute` automatically, given the `Metric` object. This would make logging metrics stored in a `ModuleList` a bit cleaner.
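The lookup could work by scanning the model's attributes and matching the metric by object identity. A minimal sketch with stand-in classes, not Lightning's actual implementation; `find_metric_attribute` and both classes are hypothetical names for illustration:

```python
def find_metric_attribute(root, metric, prefix=""):
    """Hypothetical helper: recover the attribute path of a metric by
    comparing object identity. Simplified sketch -- only direct attributes
    and lists of metrics are handled, not arbitrary nesting."""
    for name, value in vars(root).items():
        path = f"{prefix}.{name}" if prefix else name
        if value is metric:
            return path
        if isinstance(value, list):
            for i, item in enumerate(value):
                if item is metric:
                    return f"{path}.{i}"
    return None


class Metric:  # stand-in for torchmetrics.Metric
    pass


class Model:  # stand-in for a LightningModule holding metrics in a list
    def __init__(self):
        self.train_acc = Metric()
        self.val_metrics = [Metric() for _ in range(3)]


m = Model()
print(find_metric_attribute(m, m.train_acc))       # 'train_acc'
print(find_metric_attribute(m, m.val_metrics[2]))  # 'val_metrics.2'
```

With something like this, `self.log()` could resolve the attribute path itself whenever it receives a `Metric` value, instead of requiring the caller to pass `metric_attribute`.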
Alternatives
Additional context