Evaluation results logging is inconsistent #1058

Closed
Tracked by #1191 ...
hokmund opened this issue May 29, 2021 · 3 comments
hokmund commented May 29, 2021

Describe the Issue

Metrics calculated by the evaluation hook are sometimes logged as train/{metric_name} and sometimes as val/{metric_name}.

More precisely, imagine that your evaluation interval is 250 iterations and your logging interval is 20 iterations. On the 250th iteration your evaluation results are logged as val/{metric_name}. After that, on the 500th iteration both train loss logging and evaluation occur, and your evaluation results are logged as train/{metric_name}.

This is extremely frustrating, especially if you use the TensorBoard logger, which builds two different charts for train/{metric_name} and val/{metric_name}.
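For concreteness, an MMCV-style config along these lines reproduces the scenario (the metric name and the exact hook types here are assumptions for illustration, not taken from this issue):

```python
# Hypothetical MMCV-style config reproducing the intervals described above.
# Logger hooks flush every 20 iterations; evaluation runs every 250 iterations.
# Iteration 250 is not a multiple of 20, so only evaluation fires there and the
# metrics are tagged val/. Iteration 500 is a multiple of both, so train-loss
# logging and evaluation fire together and the metrics end up tagged train/.
evaluation = dict(interval=250, metric='accuracy')  # assumed metric
log_config = dict(
    interval=20,
    hooks=[
        dict(type='TextLoggerHook'),
        dict(type='TensorboardLoggerHook'),
    ])
```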

Bug fix

This issue is caused by this line of code: https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/hooks/logger/base.py#L61

When a training iteration is logged, time is in the buffer; otherwise it is not.
This line comes from the PR that applied the get_mode method everywhere, even though it was previously only used in the text logger, which logs the mode for the whole iteration rather than for each data point separately. Some of the issues caused by that PR have already been fixed.
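For reference, the logic at the linked line boils down to roughly the following (a paraphrased sketch of LoggerHook.get_mode, not a verbatim copy of the source):

```python
# Paraphrased sketch of LoggerHook.get_mode (not a verbatim copy).
# During training the mode is inferred from whether 'time' is in the log
# buffer, which only happens when a training iteration has just been logged.
def get_mode(self, runner):
    if runner.mode == 'train':
        # 'time' is only in the buffer when a train iteration was logged,
        # so a flush triggered purely by evaluation falls through to 'val'.
        mode = 'train' if 'time' in runner.log_buffer.output else 'val'
    elif runner.mode == 'val':
        mode = 'val'
    else:
        raise ValueError(
            f"runner mode should be 'train' or 'val', but got {runner.mode}")
    return mode
```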

Before that pull request was merged, evaluation metrics were always logged with the train tag. If such behavior is acceptable, I am willing to create a PR.

However, if you think that we need to always log evaluation results with the val tag, it will require a lot of redesign, because we will either need to create separate hook methods for evaluation or make EvalHook explicitly set the val mode and flush the logger in the same way the runner's val(...) method currently does; a rough sketch of that second option follows.
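The sketch below only illustrates the idea; the class name, constructor arguments, and the flushed hook name are assumptions about how it could be wired, not an actual implementation:

```python
from mmcv.runner import Hook

# Hypothetical sketch of the second option: an eval hook that switches the
# runner into val mode and flushes the logger hooks itself, mirroring what
# runner.val() does.
class ValModeEvalHook(Hook):

    def __init__(self, evaluate_fn, interval=250):
        self.evaluate_fn = evaluate_fn  # callable(runner) -> dict of metrics
        self.interval = interval

    def after_train_iter(self, runner):
        if not self.every_n_iters(runner, self.interval):
            return
        results = self.evaluate_fn(runner)
        prev_mode = runner.mode
        runner.mode = 'val'                    # guarantees the val/ prefix in loggers
        runner.log_buffer.output.update(results)
        runner.log_buffer.ready = True
        runner.call_hook('after_val_epoch')    # logger hooks flush while mode == 'val'
        runner.log_buffer.clear_output()
        runner.mode = prev_mode
```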

zhouzaida commented Jun 10, 2021

Thanks for your feedback. I think we need to always log evaluation results with the val tag.

@zhouzaida

Are you willing to create a PR?

@zhouzaida

closed by #1252
