Describe the bug
When training large models with a lot of data, the confusion-matrix (conf_mat) and F1-heatmap callbacks run into concurrency problems. This is because they all accumulate predictions in shared lists that are not synchronized across processes. PyTorch Lightning already has a fix for this in its changelog and on master, so we will adopt it after the new version is released.
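For context, here is a minimal sketch of the failure mode and of the general fix (keeping metric state in torchmetrics, which reduces it across DDP processes on compute). The module, class, and shape names below are illustrative assumptions, not the actual cb55_full_unet experiment code, and this is not a claim about what the linked PL patch does internally.

```python
import torch
from torch import nn
import pytorch_lightning as pl
from torchmetrics import ConfusionMatrix


class LitClassifier(pl.LightningModule):
    """Illustrative module; names and shapes are assumptions."""

    def __init__(self, num_classes: int = 10, in_features: int = 32):
        super().__init__()
        self.model = nn.Linear(in_features, num_classes)

        # Buggy pattern: plain Python lists shared by the logging hooks.
        # Under DDP every process appends only its own predictions and
        # nothing reduces the lists, so the confusion matrix / F1 heatmap
        # built from them is incomplete and subject to races.
        self.val_preds = []
        self.val_targets = []

        # Safer pattern: a torchmetrics metric keeps its state as
        # registered tensors and synchronizes them across processes in
        # compute(). (Newer torchmetrics versions also require
        # task="multiclass" in this constructor.)
        self.val_conf_mat = ConfusionMatrix(num_classes=num_classes)

    def forward(self, x):
        return self.model(x)

    def validation_step(self, batch, batch_idx):
        x, y = batch
        preds = self(x).argmax(dim=1)

        # Unsynchronized accumulation (what the broken callbacks effectively do).
        self.val_preds.append(preds.cpu())
        self.val_targets.append(y.cpu())

        # Synchronized accumulation handled by torchmetrics.
        self.val_conf_mat.update(preds, y)

    def on_validation_epoch_end(self):
        # Aggregated over all ranks; build the heatmap from this tensor.
        conf_mat = self.val_conf_mat.compute()
        self.val_conf_mat.reset()
        self.val_preds.clear()
        self.val_targets.clear()
```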
Fixed in Lightning-AI/pytorch-lightning#6886
To Reproduce
Run the cb55_full_unet.yaml experiment with a conf_mat_*_wandb.yaml callback.