
Don't suppress all warnings in train and eval #88

Closed

apmcleod opened this issue Nov 1, 2019 · 3 comments
Labels
bug Something isn't working

Comments

@apmcleod
Collaborator

apmcleod commented Nov 1, 2019

In particular, warning the user about --seq_len being too small is very important.

I tried to filter that warning with 'once', but:

  1. Python only filters the first occurrence of each exact warning message, even if multiple messages match the same filter's regex (see the sketch below).
  2. For some reason, it's not working anyway, probably because the warnings are being printed by the dataloader workers?

These seem to be the only warnings printed during train or test (when everything goes smoothly), so I think notifying the user is important, and removing the blanket ignore filter would let warnings from actual errors through.
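A minimal sketch of point 1 (not the project's code, and the --seq_len message text here is made up): the 'once' filter dedupes on the exact message string, so warnings that embed a varying value each get printed. And since DataLoader workers are separate processes with their own warning registries, filters set in the main process may not dedupe their output at all.

```python
import warnings

# 'once' keeps a registry keyed on (message text, category), so only
# byte-identical messages are suppressed after their first occurrence.
warnings.simplefilter("once")

for seq_len in (10, 20, 10):
    # Hypothetical message mirroring the --seq_len warning: the value is
    # interpolated, so the 10 and 20 variants are distinct messages.
    warnings.warn(f"--seq_len {seq_len} is shorter than this piece; it will be truncated")

# Output: the seq_len=10 and seq_len=20 messages each print once;
# only the repeated seq_len=10 warning is filtered out.
```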

@apmcleod apmcleod mentioned this issue Nov 1, 2019
Merged
@apmcleod apmcleod added the bug Something isn't working label Nov 1, 2019
@apmcleod apmcleod added this to the Dataset v1.0 Release milestone Nov 1, 2019
@JamesOwers
Owner

Ultimately I think switching to the logging package is probably a better plan for the future. Given that we don't expect many people to use our code directly to create models, is this one seq_len case important?

I would err on the side of 'warnings all off' unless debug mode is on, just so that logging is at a minimum for the casual user.
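Something like this sketch of the suggestion, assuming a hypothetical --debug flag: route warnings through logging and keep them quiet unless debug mode is on.

```python
import argparse
import logging
import warnings

parser = argparse.ArgumentParser()
parser.add_argument("--debug", action="store_true")
args = parser.parse_args()

# Redirect warnings.warn(...) to the 'py.warnings' logger, then set the
# root level so those warnings only surface when --debug is passed.
logging.captureWarnings(True)
logging.basicConfig(level=logging.DEBUG if args.debug else logging.ERROR)

warnings.warn("--seq_len may be too small")  # silenced for the casual user
```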

@apmcleod
Collaborator Author

I dunno really. If we don't expect users to create models, they won't see the --seq_len warning anyway. It only shows up when training/testing models (and it really is important in those cases, I'd say, since we're giving them bad data).

@JamesOwers
Owner

Since this isn't actually affecting the creation of ACME v1.0, I'm going to kick it to the next milestone. We should discuss via Skype then (I think it'll be quicker) and record the resolution here.
