
Proposal #32

Closed
snahor opened this issue Oct 24, 2016 · 7 comments

snahor commented Oct 24, 2016

The existing exercises run their tests against example.sml; they should instead run against the user's implementation in a separate file. With that in mind, and taking some of the other track repos as reference, I suggest this structure:

├── {exercise name}/
│   ├── example.sml
│   ├── test_{exercise name}.sml
│   └── {exercise name}.sml

Or, if we want to allow more than one example (like the Haskell track does):

├── {exercise name}/
│   ├── examples/
│   │   ├── bfs_recursive.sml
│   │   └── bfs_using_a_queue.sml
│   ├── test_{exercise name}.sml
│   └── {exercise name}.sml

The {exercise name}.sml file should contain the function stub, with its argument and return types spelled out:

fun foo (bar: string): string =
  (* Remove this line and implement the function *)
  raise Fail "'foo' is not implemented"

If someone runs the tests without implementing the function, they will get that error. One advantage of having this "template" is that it gives the implementer an idea of what to do.

Some of the exercises have a definition and a JSON file with canonical data (a.k.a. test data). We should use that whenever possible. Most of them have description, expected, and a third key that changes from exercise to exercise. For instance, the atbash cipher has description, expected, and phrase.

The test cases can have a similar structure to the cases found in the json file, for instance:

val test_cases = [
  {
    description = "Lorem ipsum dolor sit amet",
    expected = "Sed at imperdiet ligula",
    input = [1, 2, 3, 4]
  }
]

If the exercise has more than one function to be implemented:

val {function name 0}_test_cases = [...]

val {function name 1}_test_cases = [...]
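For instance, a sketch based on the atbash cipher, whose canonical data covers both encoding and decoding (the encode/decode names here are illustrative):

val encode_test_cases = [
  {
    description = "encode yes",
    expected = "bvh",
    phrase = "yes"
  }
]

val decode_test_cases = [
  {
    description = "decode bvh",
    expected = "yes",
    phrase = "bvh"
  }
]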

Unfortunately, Standard ML doesn't have a test lib such as HSpec, PyTest, etc. We could consider https://github.com/kvalle/sml-testing.

For now, this will do the job:

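(* Runs f against every test case, printing "description: PASSED/FAILED"
   for each one and returning the list of boolean outcomes. *)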
fun run_tests _ [] = []
  | run_tests f (x :: xs) =
      let
        fun aux { description, phrase, expected } =
          let
            val output = f phrase
            val is_correct = output = expected
            val expl = description ^ ": " ^
              (if is_correct then "PASSED" else "FAILED") ^ "\n"
          in
            (print (expl); is_correct)
          end
      in
        (aux x) :: run_tests f xs
      end

It's almost the same function found in all the exercises, but it has the nice side effect of printing each result as it runs.
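For illustration, a usage sketch (encode is stubbed with a deliberately wrong identity implementation, just so the snippet runs as-is and demonstrates a FAILED line):

(* Stub standing in for a real atbash-cipher encode. *)
fun encode (phrase : string) : string = phrase

val results =
  run_tests encode [
    { description = "encode yes", phrase = "yes", expected = "bvh" }
  ]
(* prints: encode yes: FAILED *)

(* The returned booleans can be folded into an overall verdict. *)
val all_passed = List.all (fn b => b) results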

Note:

  • run_tests receives a function and a list of test cases: val run_tests = fn: ('a -> ''b) -> {description: string, expected: ''b, input: 'a} list -> bool list
  • aux should destructure the same keys used in the test cases' records.
  • if there's no description, expl could be a concatenation of the function's name and its arguments, e.g. foo called with 1 2 3: PASSED (see the sketch after this list).
  • if there's no canonical data, the records should be:
val test_cases = [
  {
    description = "...",
    expected = ,
    input = 
  }
]

or

val test_cases = [
  {
    expected = ,
    input = 
  }
]

I prefer the first one.
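For the no-description fallback mentioned in the note above, a minimal sketch (call_label is a hypothetical helper; the function name has to be passed in explicitly, since Standard ML has no runtime reflection):

fun call_label name args =
  name ^ " called with " ^ String.concatWith " " args

val example = call_label "foo" ["1", "2", "3"]
(* "foo called with 1 2 3" *)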

Running the tests

SETUP.md suggests importing the test file in the REPL; I don't think that's very helpful, and it makes it hard to know which test case failed. If we implement the suggested changes, we could just do:

$ poly -q < test_{exercise name}.sml
foo bar baz: PASSED
bar baz qux: FAILED

We could even add some color to PASSED and FAILED.
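For example (a sketch assuming an ANSI-capable terminal; \027 is the SML escape for the ESC character):

val green = "\027[32m"
val red   = "\027[31m"
val reset = "\027[0m"

fun colorize is_correct =
  if is_correct
  then green ^ "PASSED" ^ reset
  else red ^ "FAILED" ^ reset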

What do you think @mcmillhj ?

cc @kytrinyx

@mcmillhj

@snahor This is really cool! I like the idea of adding more consistency to this track. Since I have been working on this track mostly alone and am not an expert with SML, I was doing just the minimum effort. That being said, everything you pointed out is spot on, and seeing it laid out like this just shows how lazy I was being 😦


snahor commented Oct 25, 2016

@mcmillhj Don't be too hard on yourself; after all, you started from nothing. If more people contribute to this repo, we'll have more ideas on how to improve it.

@mcmillhj

Thanks :) I will draft some PRs that work toward the goal you have proposed.

@kytrinyx

I'm late to the party, but wanted to say that I really like where this is going. Thanks @mcmillhj and @snahor for taking the time to dig into this so deeply.

@mcmillhj

Been super busy of late, but I am finally going to have time for this over the next few days.


mcmillhj commented Dec 30, 2016

One interesting thing I just encountered: how should we handle functions that take multiple arguments in run_tests? The exercises anagram, accumulate, and allergies all take 2 arguments, and other unimplemented exercises might take 2 or more. run_tests can definitely be customized in each of these cases; I wasn't sure if there was a better, more generic way. One option is sketched below.
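One generic option (a sketch only, assuming a run_tests variant whose aux destructures { description, input, expected } as in the note above): tuple the arguments into the single input value and uncurry at the call site, so run_tests itself stays unary. anagrams_for is a throwaway stub standing in for the anagram exercise's real two-argument function, and the test case mirrors its canonical "no matches" data:

(* Stub so the snippet runs as-is; the real implementation goes here. *)
fun anagrams_for (_ : string) (_ : string list) : string list = []

val anagram_test_cases = [
  {
    description = "no matches",
    expected = [] : string list,
    input = ("diaper", ["hello", "world", "zombies", "pants"])
  }
]

(* Uncurrying adapts the two-argument function to run_tests' unary f. *)
val results =
  run_tests (fn (subject, candidates) => anagrams_for subject candidates)
            anagram_test_cases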


stale bot commented Jul 10, 2017

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot closed this as completed Jul 18, 2017