
chore(ci): Test run examples as well as the regression suite #349

Merged (1 commit) on Sep 26, 2020
.travis.yml (1 change: 1 addition & 0 deletions)

@@ -24,6 +24,7 @@ install:
 script:
 - luacheck .
 - lua run.lua tests --luacov
+- lua run.lua examples
Member:

I think we should have a single line here:

lua run.lua --luacov

This will run everything, and add coverage reporting for everything as well.
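For reference, a minimal sketch of what that consolidated script section might look like in .travis.yml, on the assumption (stated in the comment above, not verified here) that run.lua executes both the tests and the examples when no target is named:

script:
- luacheck .
# assumed: with no explicit target, run.lua runs every suite, with coverage
- lua run.lua --luacov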

@alerque (Member Author), Sep 27, 2020:

I deliberately ran it in two steps because then it is clear from the Travis output log which step failed; the output isn't very clear when both are run together. It also allowed coverage testing to be kept separate.

I did consider adding coverage reporting to the examples, but my experience with testing on other projects has led me to believe that it's better to measure test coverage via the most specific unit testing possible. Big monolithic examples that cover lots of ground can be very useful to end users, and making sure they work can catch some kinds of issues, but they don't do a great job of identifying the specific expectations of each interface along the way. It's surprising what can slip through as "working overall" while actually being broken inside. I would rather specific unit tests be the baseline for coverage reporting. Examples should work too (hence adding them to the CI run), but that shouldn't count as really testing the expectations on each function.
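To make that distinction concrete, here is a hypothetical sketch in Lua (mylib and trim are invented names for illustration, not taken from this project): a focused unit test pins down one expectation of one function, so a regression is reported at the exact interface that broke, whereas a monolithic example only reports that the overall run failed.

-- Hypothetical unit test: asserts one specific expectation of one function.
-- 'mylib' and 'trim' are invented names for illustration only.
local mylib = {
  trim = function (s) return (s:gsub("^%s*(.-)%s*$", "%1")) end,
}

assert(mylib.trim("  hello  ") == "hello", "trim should strip surrounding whitespace")
assert(mylib.trim("") == "", "trim should leave an empty string empty")

-- A monolithic example, by contrast, might call trim() incidentally among many
-- other functions; it can still appear to 'work overall' even if trim() quietly
-- starts returning a trailing space that nothing downstream checks.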


 after_success:
 - luacov-coveralls