chore(ci): Test run examples as well as the regression suite #349

Merged · 1 commit merged into lunarmodules:master from ci-examples on Sep 26, 2020

Conversation

@alerque (Member) commented on Sep 26, 2020

Related to but does not solve #348; see my comment there.

@alerque mentioned this pull request on Sep 26, 2020

@alerque (Member, Author) commented on Sep 26, 2020

In case it wasn't clear, the tests on this PR are expected to fail and reflect the fact that the 1.9.1 release and master branch are in fact broken. I believe this can be merged as-is and the fix for #348 should bring things back into the green.

@alerque merged commit 1579013 into lunarmodules:master on Sep 26, 2020
@alerque deleted the ci-examples branch on September 26, 2020 at 19:32
@@ -24,6 +24,7 @@ install:
 script:
   - luacheck .
   - lua run.lua tests --luacov
+  - lua run.lua examples
A project member commented on this change:

I think we should have a single line here:

    lua run.lua --luacov

This will run everything, and add coverage reporting for everything as well.
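
For reference, under that suggestion the script section shown in the diff above would collapse to something like this (a sketch, assuming the rest of .travis.yml stays as it is):

    script:
      - luacheck .
      - lua run.lua --luacov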

@alerque (Member, Author) replied on Sep 27, 2020:

I deliberately ran it in two steps because the Travis output log then makes it clear which step failed; the output isn't very clear when they are both run together. It also allowed coverage testing to be kept separate.

I did consider adding coverage reporting to the examples, but my experience with testing on other projects has led me to believe that it's better to measure test coverage via the most specific unit testing possible. Big monolithic examples that cover lots of ground can be very useful to end users, and making sure they work can catch some kinds of issues, but they don't do a great job at identifying the specific expectations of each interface along the way. It's surprising what can slip through as 'working overall' but actually be broken inside. I would rather specific unit tests be the baseline for coverage reporting. Examples should work too (hence adding them to the CI run), but that shouldn't count as really testing the expectations on each function.
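
As a purely hypothetical illustration of that distinction (none of this code is from this repository), a monolithic example-style check can pass overall while a focused unit test of a single expectation exposes the breakage inside:

    -- Hypothetical code, for illustration only.
    local function parse_port(s)
      return tonumber(s) or 80  -- bug: silently falls back to 80 on bad input
    end

    -- Example-style check: only the happy path runs, so this passes and the
    -- silent fallback above "works overall" without being noticed.
    assert(parse_port("8080") == 8080)

    -- Focused unit test of one expectation: this assertion fails against the
    -- buggy implementation above, exposing what the broad check let through.
    assert(parse_port("garbage") == nil, "non-numeric input should yield nil")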
