Shouldn't tests in the various language tracks test the same cases and be consistently named? #104
We're working on this; have a look at what's going on in the https://github.com/exercism/x-common repository. The exercises that are consistent probably have a shared definition there already. You can help by comparing what the different tracks do and creating such a shared definition, or by updating the tests in the various tracks to match the existing one.
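(For context, here is a minimal sketch of what a shared test definition for one exercise might look like. The leap example and the field names are illustrative assumptions, not the actual x-common format:)

```python
# Hypothetical shared ("canonical") test data for the leap exercise.
# Field names are illustrative only; see exercism/x-common for the real layout.
LEAP_CANONICAL_CASES = [
    {"description": "year not divisible by 4", "input": 2015, "expected": False},
    {"description": "year divisible by 4 but not by 100", "input": 1996, "expected": True},
    {"description": "year divisible by 100 but not by 400", "input": 1900, "expected": False},
    {"description": "year divisible by 400", "input": 2000, "expected": True},
]
```

Each track could then generate or hand-check its test suite against the same list of cases.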
When canonical data is updated, would it be reasonable to notify the track maintainers (possibly by opening issues through blazon)?
That's a good idea, but I think the answer is "not now". Canonical data currently gets tweaked a lot, and it's important not to cause blazon fatigue, so that important issues don't get lost in the noise. Track maintainers can check for themselves whether canonical data has been updated, and it's not critical if the same problem has different test cases in different tracks.
I think that if a test suite gets a big overhaul (interesting edge cases, a number of tests added or removed), then it's probably worth opening an issue via blazon. These are great issues for new contributors, so we could also make sure to add the "good first patch" label. If a change only touches the descriptions, I wouldn't bother submitting issues via blazon.
At this point, I think we've agreed that this is a good idea, and that we can break it into two parts.
Looking at the test suites, I find cases where the test names are very descriptive and consistent across the tracks (e.g. HAMMING).
But there are also cases where not only does the number of tests differ between tracks, but the names of those tests are inconsistent as well. LEAP is one example.
I'd prefer to have the same test cases in all tracks, and I'd suggest giving them really descriptive names (along the lines of unit_under_test_data_sent_in_expected_behaviour).
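(As a rough sketch of that naming scheme, reusing the LEAP example above; the `leap` module and `is_leap_year` function are assumed names for illustration, not any track's actual API:)

```python
import unittest

from leap import is_leap_year  # assumed module and function name


class LeapTest(unittest.TestCase):
    # Test names follow unit_under_test_data_sent_in_expected_behaviour.
    def test_is_leap_year_1996_returns_true(self):
        self.assertTrue(is_leap_year(1996))

    def test_is_leap_year_1900_returns_false(self):
        self.assertFalse(is_leap_year(1900))

    def test_is_leap_year_2000_returns_true(self):
        self.assertTrue(is_leap_year(2000))


if __name__ == "__main__":
    unittest.main()
```

With both the cases and the names shared, the same suite would read near-identically across tracks, modulo each language's naming idiom.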