
Shouldn't tests in the various language tracks test the same cases and be consistently named? #104

Closed
guntbert opened this issue Nov 26, 2016 · 5 comments


@guntbert

Looking at the test suites, I find cases where the test names are very descriptive and consistent across the tracks (e.g. HAMMING).

But there are also cases where the tracks not only have very different numbers of tests, but the names of those tests are inconsistent across tracks as well. As an example, consider LEAP:

  • C#: very descriptive, 4 tests
  • C++: not quite as descriptive, 5 tests
  • Python: similar to C++
  • JavaScript: the descriptions are not at all clear to me, and there are a lot more tests

I'd prefer to have the same test cases in all tracks, and I'd suggest giving them really descriptive names (along the lines of unit_under_test_data_sent_in_expected_behaviour).
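To make the naming convention concrete, here is a minimal sketch of what such descriptively named tests could look like, using a hypothetical is_leap_year function (the function name and the test data are illustrative, not taken from any particular track):

```python
import unittest


# Hypothetical implementation, shown only so the tests below are runnable.
def is_leap_year(year):
    # A year is a leap year if it is divisible by 4,
    # except centuries, which must also be divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)


class LeapTest(unittest.TestCase):
    # Names follow the unit_under_test_data_sent_in_expected_behaviour pattern.

    def test_is_leap_year_divisible_by_4_not_by_100_is_leap(self):
        self.assertTrue(is_leap_year(1996))

    def test_is_leap_year_divisible_by_100_not_by_400_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_is_leap_year_divisible_by_400_is_leap(self):
        self.assertTrue(is_leap_year(2000))

    def test_is_leap_year_not_divisible_by_4_is_not_leap(self):
        self.assertFalse(is_leap_year(1997))
```

Run with `python -m unittest`; each test name alone tells you what was sent and what was expected.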

@Insti

Insti commented Nov 27, 2016

We're working on this; have a look at what's going on in the https://github.com/exercism/x-common repository. The exercises that are consistent probably have a canonical_data.json file describing all the tests.

You can help by comparing what the different tracks do and creating a canonical_data.json file, containing all the needed tests, for exercises that are missing one.

Or by updating the tests in the various tracks to match the canonical_data.json.
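For illustration, a canonical data file for LEAP might look roughly like the following; the exact schema is defined in the x-common repository, so the field names and structure below are an approximation rather than the authoritative format:

```json
{
  "exercise": "leap",
  "cases": [
    {
      "description": "year divisible by 4, not divisible by 100, is a leap year",
      "property": "leapYear",
      "input": 1996,
      "expected": true
    },
    {
      "description": "year divisible by 100, not divisible by 400, is not a leap year",
      "property": "leapYear",
      "input": 1900,
      "expected": false
    }
  ]
}
```

Each track's test generator or maintainer can then derive its test suite from the shared cases, which is what keeps names and counts consistent across tracks.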

@stevejb71

When canonical data is updated, would it be reasonable to update the track maintainers (possibly by adding issues through blazon)?

@Insti

Insti commented Nov 27, 2016

That's a good idea, but I think the answer is 'not now'.

Currently canonical data gets tweaked a lot, and it's important not to cause blazon fatigue so that important issues don't get lost in the noise.

Track maintainers can check for themselves whether canonical data has been updated, and it's not super critical if the same problem has different test cases in different tracks.

@kytrinyx
Member

When canonical data is updated, would it be reasonable to update the track maintainers [...]?

I think that if the test suite gets a big overhaul (interesting edge cases, adding or removing a number of tests), then it's probably worth adding an issue via blazon. These are great issues for new contributors, so we could also make sure to add the "good first patch" label.

If it's a change only to the descriptions, then I wouldn't bother submitting issues via blazon.

@petertseng
Member

At this point, I think we've agreed that this is a good idea. It breaks down into two parts:

  1. Actually having a canonical data file for each exercise: Implement canonical data for all non-deprecated exercises problem-specifications#552
  2. Ensuring propagation so that tracks know about the common data: Ensure canonical-data.json updates are propagated to all tracks problem-specifications#524
