Write generators for all exercises #605
Comments
The Ruby track is taking on a similar project at exercism/ruby#396. If, in the course of writing a generator, we find significant duplicate code between generators that could be extracted into https://github.com/exercism/xgo/blob/master/gen/gen.go, we should extract it, so that writing each generator is as little work as possible.
Please note the new exercise file structure for generators: #610
The checklist in this issue is due to be replaced by separate issues. Relevant comment by @petertseng from #611:

Y'all are right about the difficulty of keeping checklists up to date. We've seen it done both ways before in this repo too! For #275 we split up into many different issues; I think this was when Blazon was new and I was very much still in that mindset. In #414 and in #470 we had a checklist. Of these two, I was managing the checklist in #414 and not #470, but #414 is much smaller, so I don't have much experience with that. I can say that when #414 was filed there were only eight exercises affected and I didn't want to set up filing a per-exercise issue for just eight of them. The recommendation is to use https://github.com/github/hub ; I simply didn't have that set up (I used a modified version of Blazon for #275).

Which approach makes contributors' lives easier (we shouldn't think only of ourselves)? Individual issues can seem more tractable; with a checklist it's harder for someone to know they need only take one item. The disadvantage is the chance of contributors seeing "oh, this project has 60 open issues, they're obviously not very active", but perhaps that is not serious.

If we agree on splitting them up, feel free to repurpose #604 and #605 as the progress issues that the per-exercise issues link back to. I can't get to splitting them up until at least this weekend, if not longer, so feel free to beat me to it. I'll say something if I start so that we don't duplicate work.
I'd like to suggest this issue be put on hold until #604 is closed. I'm worried that if this is also split into separate issues, it will create more long-running issues than is good for the repo. It also seems a logical step to ensure our current generators are up to date before we start adding more.
@leenipper @robphoenix: may be worth throwing the experience into some documentation?
I'm using a template-based approach for the OCaml generator. It has proved quite successful so far (30 out of 40 OCaml exercises have auto-generated tests, as does 1 PureScript exercise). The templates are here: https://github.com/exercism/xocaml/tree/master/tools/test-generator/templates. The template language is a bit ad hoc; I'm sure it could be improved.
Thanks @stevejb71, looks interesting, will check it out!
#803 is a temporary fix for pangram until a generator is made for it.
For exercism#605. I did not have a way to revive exercism#695, so I started a new one. This generator mimics the output of the connect generator, so the output is a little nicer IMHO. I bumped the `testVersion`.
Add .meta/gen.go to generate cases_test.go. Update test program to use generated test case array. For exercism#605.
Add .meta/gen.go to generate cases_test.go. Update test program to use generated test case array. Make one correction to pass the 'X is only valid as a check digit' test case from canonical-data.json. Output FAIL and PASS in test output. For exercism#605.
Add .meta/gen.go to generate cases_test.go. Update test program to use generated test case array. Put FAIL and PASS in test result output. Keep TestGivesPositiveRequiredError redundant test on zero since it verifies ErrOnlyPositive. Update example solution to use int64 for input type since -1 test case in canonical-data.json. For #605.
Add .meta/gen.go to generate cases_test.go. Update test program to use generated test case array. Put FAIL and PASS in test result output. For #605.
Add .meta/gen.go to generate cases_test.go. Make use of zero-value of expectError bool. Update test program to use generated test case array. For exercism#605.
Add .meta/gen.go to generate cases_test.go. Make use of zero-value of expectError bool. Update test program to use generated test case array. The test case value for the 'Total()' is retained in TestTotal() since there wasn't an array of cases in the canonical-data.json for the 'total' property, and there is only one valid return value. For exercism#605.
Add .meta/gen.go to generate cases_test.go. Update test program to use generated test case array. Update example solution to pass test which includes limit as prime. For exercism#605.
Add .meta/gen.go to generate cases_test.go. Update test program to use generated test case array. Output FAIL and PASS in test output. For exercism#605.
* implemented generator and new example for pig latin
* go fmt
* small refactoring on @ferhatelmas suggestions
* each test gets a description printed and PASS or FAIL
* implemented test generator for nucleotide-count exercise (issue #605)
* added hints.md and changed .expected type in generator
* restored original nucleotide_count.go stub and adjusted expected type for tests
* small refactoring in example.go
* problem specification changed format; check for unexpected type in generator
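The commit messages above all follow the same pattern: a generated `cases_test.go` holds a test case array, and a hand-written test file loops over it, reporting PASS or FAIL per case. As a rough illustration only, here is a minimal sketch of that pattern for the pangram exercise; the struct fields, case values, and the assumption that the solution provides `func IsPangram(string) bool` are placeholders, not the actual generated code of any exercise.

```go
// cases_test.go (sketch of generated output; real generated files carry a
// "do not edit" header written by the generator)
package pangram

// hypothetical shape of a generated test case array
var testCases = []struct {
	description string
	input       string
	expected    bool
}{
	{"empty sentence", "", false},
	{"pangram with only lower case", "the quick brown fox jumps over the lazy dog", true},
}
```

```go
// pangram_test.go (hand-written): iterate over the generated array and
// report PASS/FAIL per case, as the commit messages above describe.
package pangram

import "testing"

func TestPangram(t *testing.T) {
	for _, tc := range testCases {
		if got := IsPangram(tc.input); got != tc.expected {
			t.Errorf("FAIL: %s\nIsPangram(%q) = %t, want %t", tc.description, tc.input, got, tc.expected)
		} else {
			t.Logf("PASS: %s", tc.description)
		}
	}
}
```

With this split, re-running the generator only rewrites `cases_test.go`; the hand-written test file and the exercise stub stay untouched.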
In x-common, we have various exercises with canonical-data.json representing the recommended set of tests for that exercise. We created generators to more easily keep up with changes in x-common. Whenever x-common changes, the task is as simple as re-running the generator.
The current documentation is at https://github.com/exercism/xgo#generating-test-cases - if at any point we find that we should explain something better, we should add any necessary documentation.
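For concreteness, here is a rough sketch of the shape a `.meta/gen.go` can take, following the pattern described in the documentation linked above. It assumes the shared gen package exposes a `Gen(exerciseName, data, template)` helper that unmarshals the exercise's canonical-data.json into the given struct and executes the template; the exercise name ("leap"), the struct fields, the import path, and the template fields (`.Header`, `.J`) are assumptions to be checked against gen/gen.go and the exercise's canonical data, not a definitive implementation.

```go
// .meta/gen.go (sketch): regenerate cases_test.go from canonical-data.json.
package main

import (
	"log"
	"text/template"

	"../../../gen" // path to the shared gen package; adjust to the repo layout
)

func main() {
	t, err := template.New("").Parse(tmpl)
	if err != nil {
		log.Fatal(err)
	}
	var j js
	// Assumed behaviour: gen.Gen locates the canonical data for the exercise,
	// unmarshals it into j, executes the template, and writes cases_test.go.
	if err := gen.Gen("leap", &j, t); err != nil {
		log.Fatal(err)
	}
}

// js mirrors the part of canonical-data.json this exercise needs (placeholder fields).
type js struct {
	Cases []struct {
		Description string
		Year        int
		Expected    bool
	}
}

// tmpl renders the test case array consumed by the hand-written test file.
var tmpl = `package leap

{{.Header}}

var testCases = []struct {
	description string
	year        int
	expected    bool
}{
{{range .J.Cases}}{ {{printf "%q" .Description}}, {{.Year}}, {{.Expected}} },
{{end}}}
`
```

Running it (typically `go run .meta/gen.go` from the exercise directory, with x-common available where gen.go expects to find it) rewrites `cases_test.go`, so keeping up with x-common changes is just a matter of re-running the generator and committing the result.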
Preliminaries:
- Running `ls exercises/*/cases_test.go | cut -d/ -f2` from the root of the xgo directory lists the exercises that already have generated test cases.
- Running `ls exercises/*/canonical-data.json | cut -d/ -f2` from the root of the x-common directory lists the exercises that have canonical data.

This issue concerns exercises that don't yet have a generator.
For each exercise, first we should decide whether we even want a generator.
An example where we don't want one is diamond, since property-based tests aren't represented in canonical-data.json.
So don't be afraid to suggest that we pass on having a generator if it creates more work than it saves.
After all, the point is to make maintenance of this track easy.
Or, we may decide to lazy-evaluate the generator:
If our current tests already match x-common, there is no immediate need for a generator, since it would make no difference right now.
So we may decide that we'll make a generator the next time the x-common JSON changes, and until then we'll stick with what we have.
If we decide we do want a generator for a given exercise:
First, we should check whether there are any cases that our track has that the common JSON doesn't have.
For each of those, we should consider the options:
Exercises we have decided not to generate:
- binary: deprecated in this track.
- diamond: uses property-based tests, so won't be generated.
- hello-world: will only ever have one test for the foreseeable future; seems unnecessary to generate.
- trinary: deprecated in this track.