feat: reintroduce coverage and upload to codecov.io #229
Conversation
Force-pushed from 2ecf336 to 31708c1
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff             @@
##           master     #229      +/-   ##
==========================================
- Coverage   91.76%   91.52%   -0.25%
==========================================
  Files          44       50       +6
  Lines        7360    10332    +2972
==========================================
+ Hits         6754     9456    +2702
- Misses        606      876     +270

☔ View full report in Codecov by Sentry.
It appears that tarpaulin is accurately reading tests that rely on the …
I don't have any experience with tarpaulin, but it seems straightforward enough. Some actions are outdated though.
Force-pushed from d702f11 to 7f83dcf
Should we get coverage back above 90% before the release, or stop failing CI so the release can go out? Note: since v14 we've been below 90%; maybe change the threshold to 80%?
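If we do want CI itself to enforce a floor, tarpaulin has a `--fail-under` flag. A minimal sketch of what that step could look like; the 80% threshold and the step name are placeholders, not something this PR decides:

```yaml
# Hypothetical workflow step: fail the job if total coverage drops below 80%.
# The threshold value is illustrative only.
- name: Check coverage threshold
  run: cargo tarpaulin --fail-under 80 --out Xml
```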
Does it make sense to enforce a minimum coverage of x% as part of the CI pipeline for every PR? Codecov will create a report (see above) for all PRs, which makes it obvious whether coverage increases or decreases. Setting an arbitrary minimum could also lead to a situation like "current coverage is 95%, so it's fine if this PR drops it a little". But maybe I'm overthinking this. Higher coverage could still be a goal for the next release (or the one after that) even without the explicit CI check.
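On the Codecov side, the per-PR status check can be tuned, or made purely informational, in codecov.yml. A rough sketch, assuming we wanted a soft 80% project target rather than a hard gate (the target and threshold values are illustrative, not decided in this PR):

```yaml
# Hypothetical codecov.yml sketch: report a project target without blocking merges.
coverage:
  status:
    project:
      default:
        target: 80%          # desired overall coverage
        threshold: 1%        # allow small drops before the check turns red
        informational: true  # never fail the check, just report
```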
BTW: it looks like moving from nightly to stable caused a permissions error. Could be xd009642/tarpaulin#406 and may be solved by moving to a non-docker solution. FYI
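For reference, a non-docker setup would install tarpaulin with cargo and run it directly on the runner, sidestepping the container permission issue. A minimal sketch, assuming the default cobertura.xml output path and codecov-action v3 (neither is fixed by this PR):

```yaml
# Hypothetical non-docker steps: install tarpaulin on the runner instead of
# using the Docker Hub container.
- name: Install cargo-tarpaulin
  run: cargo install cargo-tarpaulin
- name: Generate coverage report
  run: cargo tarpaulin --out Xml   # writes cobertura.xml in the working directory
- name: Upload to codecov.io
  uses: codecov/codecov-action@v3
  with:
    files: cobertura.xml
```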
Note: still looking into this option, but I force-pushed to test the action. While looking into taiki-e/install-action I found their tooling and the wrapper they developed for compiling with instrument-coverage …
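For context, a rough sketch of what the taiki-e/install-action route with cargo-llvm-cov (their instrument-coverage wrapper) might look like; the flags, file name, and upload step are assumptions, not what this PR ended up using:

```yaml
# Hypothetical sketch of the instrument-coverage route via taiki-e's tooling.
- uses: taiki-e/install-action@cargo-llvm-cov
- name: Generate coverage (LLVM instrument-coverage)
  run: cargo llvm-cov --all-features --lcov --output-path lcov.info
- name: Upload to codecov.io
  uses: codecov/codecov-action@v3
  with:
    files: lcov.info
```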
doc: remove codecov badge before release to crates.io
badge is pending its correctness, see PR #229 for progress
I've let this sit a while; let's try it out.
Bringing back codecov #147
I went with tarpaulin because it seemed easy to set up and use, especially with the container on Docker Hub. I don't believe that coverage reports need the same level of confidence as the tests themselves passing, but they can be useful in deciding where to focus efforts.
But I'm new to this, so feedback would be appreciated.
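For anyone reading along, a minimal sketch of the docker-based setup described above. The image tag, seccomp option, and job layout follow tarpaulin's documented GitHub Actions example and are assumptions about this repo's workflow, not a copy of it:

```yaml
# Hypothetical job sketch: run tarpaulin from the xd009642/tarpaulin image on Docker Hub.
coverage:
  runs-on: ubuntu-latest
  container:
    image: xd009642/tarpaulin:develop-nightly
    options: --security-opt seccomp=unconfined   # tarpaulin needs ptrace-style access
  steps:
    - uses: actions/checkout@v3
    - name: Generate coverage report
      run: cargo tarpaulin --out Xml
    - name: Upload to codecov.io
      uses: codecov/codecov-action@v3
```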