Proposal #32
The existing exercises run their tests against `example.sml`; they should run against the implementation in another file. Taking this into account, and taking some of the other track repos as a reference, I suggest the structure sketched below, or a second variant if we want to have more examples (like Haskell):
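Both layout listings were lost in extraction; here is a plausible reconstruction, using `hello-world` as a placeholder exercise:

```
hello-world/
├── hello-world.sml      (stub the student completes)
├── example.sml          (reference solution)
└── test.sml             (test cases; loads hello-world.sml)

hello-world/             (variant with room for several examples)
├── hello-world.sml
├── examples/
│   ├── example1.sml
│   └── example2.sml
└── test.sml
```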
The `{exercise name}.sml` file should contain the function's type and argument types, raising an error by default, as in the sketch below. Anyone who tries to run the tests before implementing the function will get that error. One advantage of having this "template" is that the implementer will have an idea of what to do.
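The template itself was dropped from the post; a minimal sketch of what it could look like, using the atbash cipher's `encode` as a placeholder:

```sml
(* atbash-cipher.sml: stub shipped with the exercise.
   The type annotations document what the student must implement;
   running the tests against the untouched stub raises this error. *)
fun encode (phrase : string) : string =
  raise Fail "encode is not implemented"
```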
Some of the exercises have a definition and a JSON file with canonical data (a.k.a. test data). We should use that whenever possible. Most of them have `description`, `expected`, and a third key that changes from exercise to exercise; for instance, the atbash cipher has `description`, `expected`, and `phrase`. The test cases can have a structure similar to the cases found in the JSON file, as in the sketch below.
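The inline example was lost; a sketch with hand-made atbash cases (the descriptions and values are illustrative), using `input` for the exercise-specific key so the records match the `run_tests` signature quoted further down:

```sml
val test_cases = [
  { description = "encode yes", expected = "bvh", input = "yes" },
  { description = "encode no",  expected = "ml",  input = "no"  }
]
```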
If the exercise has more than one function to be implemented:
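The snippet for this case is also missing; one plausible shape is a separate list of cases per function (names and values are placeholders):

```sml
val encode_test_cases = [
  { description = "encode yes", expected = "bvh", input = "yes" }
]

val decode_test_cases = [
  { description = "decode bvh", expected = "yes", input = "bvh" }
]
```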
Unfortunately, Standard ML doesn't have a testing library such as Hspec, pytest, etc. We could consider https://github.com/kvalle/sml-testing.
For now, this can do the work:
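The function's body did not survive extraction. Below is a reconstruction consistent with the signature quoted in the note further down; the helper name `aux` comes from that note, while the exact output format is an assumption:

```sml
(* Run f over every test case, print a PASSED/FAILED line per case
   (the "nice side effect"), and return the list of outcomes. *)
fun run_tests f test_cases =
  let
    fun aux {description, expected, input} =
      let
        val passed = f input = expected   (* requires an equality type ''b *)
        val _ = print (description ^ ": " ^
                       (if passed then "PASSED" else "FAILED") ^ "\n")
      in
        passed
      end
  in
    List.map aux test_cases
  end
```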
It's almost the same function found in all the exercises, but it has a nice side effect: it reports each test case as it runs.
Note: `run_tests` receives a function and a list of test cases:

`val run_tests = fn : ('a -> ''b) -> {description: string, expected: ''b, input: 'a} list -> bool list`

`aux` should use the same keys as the test cases' records. `expl` could be a concatenation of the function's name with its arguments, e.g. `foo called with 1 2 3: PASSED`.
Or:

`val test_cases = [ { expected = , input = } ]`

I prefer the first one (with `description`).
**Running the tests**

`SETUP.md` suggests importing the test file in the REPL; I think that's not very helpful, and it is hard to know which test case failed. If we implement the suggested changes, we could just run the test file directly, as sketched below.
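The exact command was lost in extraction. A plausible sketch, assuming SML/NJ (whose `sml` command loads any `.sml` files passed as arguments) and the `run_tests`/`test_cases` names from above: end `test.sml` with a top-level call, then invoke `sml test.sml`.

```sml
(* at the end of test.sml: run the whole suite when the file is loaded,
   printing one PASSED/FAILED line per case *)
val results = run_tests encode test_cases
```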
We could even add some color to `PASSED` and `FAILED`.

What do you think @mcmillhj?
cc @kytrinyx
---

Comments

@snahor This is really cool! I like the idea of adding more consistency to this track. Since I have been mostly working on this track alone, and am not an expert in SML, I was kind of doing just the minimum effort. That being said, everything you pointed out is spot on, and seeing it laid out like this just shows how lazy I was being 😦

@mcmillhj don't be too hard on yourself; after all, you started from nothing. If more people contribute to this repo, we'll have more ideas on how to improve it.

Thanks :) I will draft some PRs that work toward the goal you have proposed.

Been super busy of late, but I am finally going to have time for this over the next few days.

One interesting thing that I just encountered is how we should handle functions that take multiple arguments in […]

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.