[Bug] Duplicate logging #1012
Thanks for reporting. As a side tip:
Could this be reopened? I'm having the same issue, please see this MWE:
```yaml
# config.yaml
defaults:
  - hydra/job_logging: disabled

hydra:
  output_subdir: null
  run:
    dir: .
```
```python
import hydra
from omegaconf import OmegaConf
import pytorch_lightning as pl
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F


class Model(pl.LightningModule):
    def __init__(self, in_size, hid_size, out_size):
        super().__init__()
        self.fc1 = nn.Linear(in_size, hid_size)
        self.fc2 = nn.Linear(hid_size, out_size)

    def forward(self, x):
        return self.fc2(self.fc1(x))

    def training_step(self, batch, batch_idx):
        x, y = batch
        logits = self(x)
        loss = F.cross_entropy(logits, y)
        return loss

    def configure_optimizers(self):
        return optim.Adam(self.parameters(), lr=1e-3)


@hydra.main(config_name="config")
def main(cfg):
    print(OmegaConf.to_yaml(cfg))
    model = Model(5, 10, 2)
    trainer = pl.Trainer()


if __name__ == "__main__":
    main()
```

Having `- hydra/job_logging: disabled` in the defaults list suppresses the PyTorch Lightning output as well. Commenting out that line brings the Lightning output back. As you can see, with `disabled` I lose the Lightning messages entirely. In my case, I would want Hydra logging to be disabled, while still being able to see the output from PyTorch Lightning.
Thanks for the repro, we will take a look. About `disabled` as a logging config: there is a pull request that added support for not configuring logging at all in Hydra 1.1 (which is not yet released). Generally speaking, Hydra configures the logging by default.
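If I read that correctly, opting out entirely in Hydra 1.1 would look something like the sketch below. The config name `none` is my assumption about how the unreleased option is exposed; Hydra 1.0 only ships options like `default` and `disabled`.

```yaml
# Sketch for Hydra 1.1 (unreleased at the time of writing); assumes the
# new "leave logging untouched" option is exposed as `none`.
defaults:
  - hydra/job_logging: none    # do not configure the job's logging
  - hydra/hydra_logging: none  # do not configure Hydra's own logging
```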
I am interested in community help identifying the root cause; if you want to do some digging, it would help.
Thanks for the very quick response and the directions. I will try them out and get back to you with the root cause as soon as I have found it!
Ok, multiple things to go on:
I don't want to disable existing loggers by default because there are many scenarios where you would want to preserve them. Did you find what in Lightning is configuring the logging?
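For anyone digging into this: a quick way to see which loggers already own handlers (and would therefore double up once Hydra configures the root logger) is to walk the logging module's registry. The helper below is my own sketch, not from the thread:

```python
import logging

import pytorch_lightning  # noqa: F401  (importing triggers Lightning's logging setup)


def dump_logging_state() -> None:
    """Print the root logger plus every named logger that owns handlers."""
    root = logging.getLogger()
    print(f"root: level={logging.getLevelName(root.level)} handlers={root.handlers}")
    for name, logger in logging.Logger.manager.loggerDict.items():
        # loggerDict also holds PlaceHolder objects for intermediate names
        if isinstance(logger, logging.Logger) and logger.handlers:
            print(f"{name}: handlers={logger.handlers} propagate={logger.propagate}")


dump_logging_state()
```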
Running into a related issue with Mephisto as well. In my case, Hydra doesn't simply duplicate logs; it turns logging on globally for all loggers at the INFO level (see line 46 at commit bc3567d).
This leads me to get all kinds of spam from the other packages in use... Is this actually the desired behavior? If so, I'll have to have Mephisto add the `disabled` override to all of its defaults lists, but I'm not sure if I'm losing important logging info by doing this. Am I missing something?
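For what it's worth, the global effect described here is easy to see with the standard library alone; a minimal sketch (the package name is made up):

```python
import logging

chatty = logging.getLogger("some.third_party.package")

chatty.info("hidden")  # not printed: no handler or level configured yet

# Roughly what a job logging config does: attach a root handler and
# raise the root level to INFO.
logging.basicConfig(level=logging.INFO)

chatty.info("now visible")  # propagates to the root handler and is printed
```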
@JackUrb, since the problem you are describing is different, we should discuss it in its own issue.
@Huizerd, thanks for the update!
I provide my observations here for anyone who encounters similar problems.

Short answer: make sure only the root logger has a StreamHandler. For example, if you use PyTorch Lightning:

```python
import pytorch_lightning as pl

# Drop Lightning's own handler and let its messages propagate up to the
# root logger (which Hydra configures), so each message is emitted once.
pl._logger.handlers = []
pl._logger.propagate = True
```
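These two lines need to run after `pytorch_lightning` is imported but before anything is logged, so placing them directly below the import is the simplest option.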
Long answer and explanation
```python
# pytorch_lightning/__init__.py, version 1.2.5
import logging

_root_logger = logging.getLogger()
_logger = logging.getLogger(__name__)
_logger.setLevel(logging.INFO)

# if root logger has handlers, propagate messages up and let root logger process them
if not _root_logger.hasHandlers():
    _logger.addHandler(logging.StreamHandler())
    _logger.propagate = False
```

Because Hydra only configures logging inside `hydra.main()`, the root logger still has no handlers at the moment `pytorch_lightning` is imported, so Lightning attaches its own StreamHandler and switches propagation off. The two lines in the short answer simply undo that. Consider a program like this:
```python
import hydra
import pytorch_lightning
from omegaconf import DictConfig


@hydra.main(config_path='conf', config_name='config')
def main(cfg: DictConfig) -> None:
    ...


if __name__ == '__main__':
    main()
```
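The duplication mechanism itself can be reproduced with the standard library alone; here is a minimal sketch (no Hydra or Lightning, names made up) of a library logger that keeps its own handler while still propagating to a root logger configured later:

```python
import logging

# A "library" that attached its own handler at import time but left
# propagation on -- the situation the short answer above guards against.
lib_logger = logging.getLogger("some_library")
lib_logger.setLevel(logging.INFO)
lib_logger.addHandler(logging.StreamHandler())

# Later, the application (standing in here for Hydra) configures the root logger.
logging.basicConfig(level=logging.INFO)

lib_logger.info("hello")  # printed twice: once by the library's own handler,
                          # once by the root handler it propagates to
```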
Thanks for the comment @hughplay. One thing you can try is to use |
🐛 Bug
Description
I am using PyTorch Lightning with Hydra, and the output from PyTorch Lightning is duplicated by Hydra.
Here is an example