
Schema optimises for testing f(inputs) = output. What of other types of tests? #1225

Closed
petertseng opened this issue Apr 13, 2018 · 5 comments

Comments

@petertseng commented Apr 13, 2018 (Member)

Our JSON schema right now most ergonomically expresses tests of the form:

The code under test, given this input, produces this exact output.

This is expressed in https://github.com/exercism/problem-specifications/blob/master/canonical-schema.json#L6-L8, which today reads:

   , " optimising for the ability to represent example-based tests"
   , " of the form 'function (input) == output'. Future expansions"
   , " may allow for other types of tests such as property-based. "

What other types might we care about:

The number of use cases for each of these alternate test types is not large enough that I feel any proposal is worth making at this time. Therefore I'm closing this issue immediately and taking no action on it.

If you want it to be easy to make a proposal when the time is right, it is in your interest to continue reporting any such use cases here.

@coriolinus (Member)

It feels to me like expanding into any of those categories puts us squarely into property-based testing territory.

All property-based test frameworks with which I am familiar (1, 2, 3) have a programmatic interface. Expressing a property-based test suitable for multiple languages via a JSON file would be an ambitious undertaking. I suspect that for our cases we'll do better to construct function-based tests for the canonical data, and to write comments in English about particular property-based tests which tracks may wish to implement manually.
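As a sketch of that comments-in-English approach (the exercise and wording are invented here; only the "comments", "description", "property", "input", and "expected" field names follow existing conventions), a case could keep its example-based data and describe the property only in prose:

    {
      "comments": [
        "Tracks with a property-based testing framework may also want to",
        "check that reversing a list twice yields the original list for",
        "arbitrary inputs; that test cannot be expressed in this file."
      ],
      "description": "reverse a non-empty list",
      "property": "reverse",
      "input": {
        "list": [1, 3, 2]
      },
      "expected": [2, 3, 1]
    }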

@cmccandless (Contributor)

@coriolinus what about adding an "implementManually": true or "noAuto": true flag for cases that may require human implementation?
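For illustration only (the flag name and its placement are hypothetical, taken from the suggestion above), such a case might carry the flag instead of machine-usable input/expected data:

    {
      "description": "decoding an encoded value returns the original value",
      "implementManually": true,
      "comments": [
        "Property-based test: tracks should hand-write this check using",
        "whatever framework is available to them."
      ]
    }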

@coriolinus (Member)

I prefer comments.

Some test generators written before inputs were standardized infer the problem inputs: everything which isn't a known field must be an input. "comments" and "input" are known fields, so nothing is broken as yet.

Adding a new field breaks those generators.
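To make the breakage concrete: in the pre-standardization layout those generators target, every field other than the known ones is assumed to be a function argument, so a new flag would be misread as an input (the exercise and field names below are only illustrative):

    {
      "description": "6 is a perfect number",
      "number": 6,
      "expected": "perfect",
      "implementManually": true
    }

A generator using that inference would try to pass both number and implementManually to the function under test.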

@SleeplessByte (Member)

Adding a new field breaks those generators.

Yeah, I don't think this is a good reason not to add new fields; I actually think it's one of the worst reasons. In this case, we can reduce the workload for all tracks significantly by adding new properties, and perhaps by versioning the schema.

This is a one-time cost for generators, which just need to ignore unknown properties, and it adds a lot of benefit both immediately and in the future.
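One possible shape for that versioning (the schemaVersion field is purely hypothetical; exercise, version, and cases are the existing top-level fields) would be a declaration at the top of each canonical-data.json telling generators which revision of the schema the file follows, so they know which properties to expect and which to ignore:

    {
      "exercise": "example",
      "version": "1.2.0",
      "schemaVersion": 2,
      "cases": []
    }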

@sshine commented Apr 3, 2019 (Contributor)

I also prefer comments to describe properties, for the reason given by @coriolinus:

Because property-based test frameworks [...] have a programmatic interface.

Edit: Clarified reason, as @coriolinus gave two.
