Update test_validate_nwb_path_grouping #1157
Codecov Report: Base: 88.29% // Head: 88.27% // Decreases project coverage by 0.03%.
Coverage Diff (impacted files):
## master #1157 +/- ##
==========================================
- Coverage 88.29% 88.27% -0.03%
==========================================
Files 73 73
Lines 8800 8800
==========================================
- Hits 7770 7768 -2
- Misses 1030 1032 +2
It wasn't expected to fail; it was just expected to give a warning, which is asserted two lines later. We only fail on ERRORs, not on WARNINGs, and what happened is that where there used to be a warning there is now an error. As for why it fails, I am not sure: we did not change our code in that period, nor did the obvious packages that might be involved in NWB validation (nwbinspector, pynwb) get new releases in that time period (latest releases are from October).
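The ERROR-vs-WARNING distinction described above can be sketched as a small filter. This is a minimal illustration, not dandi-cli's actual implementation; the `Severity` enum and `should_fail` helper are hypothetical names:

```python
from enum import Enum
from typing import Iterable


class Severity(Enum):
    # Hypothetical severity levels mirroring the WARNING/ERROR split
    # discussed above; not dandi-cli's real classes.
    WARNING = "WARNING"
    ERROR = "ERROR"


def should_fail(results: Iterable[Severity]) -> bool:
    """Return True only if at least one result is an ERROR.

    WARNINGs alone never cause a nonzero exit, which is why a check
    that used to emit a WARNING suddenly breaking the test suggests
    it was escalated to an ERROR.
    """
    return any(r is Severity.ERROR for r in results)
```

Under this model, a test that asserts the presence of a warning still passes as long as `should_fail` stays false; the moment the same finding is reported as an ERROR, the command exits nonzero and the test breaks.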
It was not expected to fail (I guess the docstring is misleading/needs to be fixed, if that is what gave you the idea that it should fail). It is the upgrade of nwbinspector from 0.4.17 to 0.4.19 that makes that test fail. Running that validation manually brings up the reason:
So I guess nwbinspector started to demand that subject_id be specified. More specifically, it was NeurodataWithoutBorders/nwbinspector#303, released in 0.4.18. It made the missing subject_id not a warning (does not result in error exit) but an error (error exit). I will leave it to @TheChymera to decide on what would be the most appropriate fix -- make
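The behavioral change described above (a missing subject_id switching from warning to error between nwbinspector 0.4.17 and 0.4.18) can be illustrated with a version-dependent check. The names `check_subject_id` and `Importance` here are illustrative, not nwbinspector's actual API:

```python
from enum import IntEnum
from typing import Optional, Tuple


class Importance(IntEnum):
    # Illustrative importance levels; nwbinspector defines its own enum.
    BEST_PRACTICE_SUGGESTION = 1
    BEST_PRACTICE_VIOLATION = 2
    CRITICAL = 3


def check_subject_id(
    metadata: dict, inspector_version: Tuple[int, int, int]
) -> Optional[Importance]:
    """Flag a missing subject_id; the severity depends on the tool version.

    Before 0.4.18 the missing field was reported at warning level;
    from 0.4.18 on it is treated as critical (error exit), which is
    what flipped the test from passing to failing.
    """
    if metadata.get("subject_id"):
        return None  # field present: nothing to report
    if inspector_version >= (0, 4, 18):
        return Importance.CRITICAL
    return Importance.BEST_PRACTICE_VIOLATION
```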
yeah, let's just fix the test rather than assert it to be broken :3
ok, please fix the test the way you want to have it fixed (as we briefly discussed -- add more clearly failing validation files and assess grouping). For now, to proceed, let's just merge -- we just need to keep in mind that it would be failing with prior versions of nwbinspector.
Keep in mind too that since #1108, any time validation or upload is done with an active internet connection, the latest version of the inspector becomes a requirement, so that should be pretty safe.
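The "latest inspector becomes a requirement" behavior from #1108 could be sketched as a simple version gate. These helpers are hypothetical; dandi-cli's real logic lives elsewhere and queries the package index when online:

```python
from typing import Tuple


def version_tuple(version: str) -> Tuple[int, ...]:
    """Parse a dotted version string like '0.4.19' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))


def inspector_is_current(installed: str, latest: str) -> bool:
    """Return True if the installed nwbinspector is at least the latest release.

    With an active internet connection, validation/upload would refuse to
    proceed (or demand an upgrade) when this returns False, per the
    behavior described around #1108.
    """
    return version_tuple(installed) >= version_tuple(latest)
```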
That's one of the oldest checks that even predates the inspector. The failing test case should have always been capturing that case (that is, true expected behavior is
To be clear, when I left #1108, if anything was ever returned by this line: https://github.com/dandi/dandi-cli/blob/master/dandi/files/bases.py#L504-L509 (hence termed '

Note also that the actual filtering of checks based on configured importance was not affected by the issue and hotfix of NeurodataWithoutBorders/nwbinspector#303, which was an unfortunate issue in the display of the results. Put simply, the true underlying importance was always used correctly under the hood, especially here in DANDI, but the importance attached to the returned message was the default, non-configured importance, and I guess that caused an issue through its interaction with the new warning/error classification (which I've not had time to look through in detail). Sorry about not catching that sooner.
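The display bug described above -- filtering honors the configured importance, but the emitted message mistakenly carries the default importance -- can be sketched as follows. All names here are hypothetical, not nwbinspector's or dandi-cli's actual API:

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Message:
    check_name: str
    importance: str  # the importance attached to the emitted message


def run_checks(
    configured: Dict[str, str], defaults: Dict[str, str], failed_checks: List[str]
) -> List[Message]:
    """Illustrate the bug pattern: the filter uses the configured importance
    (correct), but each returned Message carries the default importance
    (the display bug)."""
    messages = []
    for name in failed_checks:
        effective = configured.get(name, defaults[name])
        if effective == "SKIP":
            continue  # filtering uses the *configured* importance (correct)
        # Bug: the message reports the default, not the configured, importance.
        messages.append(Message(check_name=name, importance=defaults[name]))
    return messages
```

A downstream consumer that classifies messages into warnings and errors by the attached importance would then misclassify a check whose configured importance differed from its default, even though the filtering itself was correct.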
The test in question was failing because the `validate` command was exiting nonzero.

@TheChymera Seeing as the sample file was expected to fail validation, exactly why was the command expected to exit successfully, and what changed?
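The failure mode in question (the test tolerating warnings but not a nonzero exit) might be asserted along these lines. This is an illustrative pytest-style sketch, not the actual test from dandi-cli's suite, and `validate_exit_code` is a hypothetical stand-in for the command's exit logic:

```python
from typing import List, Tuple


def validate_exit_code(results: List[Tuple[str, str]]) -> int:
    """Hypothetical stand-in for the `validate` command's exit logic:
    exit 1 only if any result is an ERROR; warnings alone exit 0."""
    return 1 if any(level == "ERROR" for level, _ in results) else 0


def test_warning_does_not_fail_validation():
    # A missing subject_id used to be a WARNING: exit code stays 0.
    results = [("WARNING", "Subject is missing subject_id")]
    assert validate_exit_code(results) == 0

    # After nwbinspector 0.4.18 the same finding is an ERROR: exit 1,
    # which is exactly what broke the test.
    results = [("ERROR", "Subject is missing subject_id")]
    assert validate_exit_code(results) == 1
```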