Use coverage data to decide which functions to mutate and which tests to run #24
So some thoughts: coverage stats done per test would currently need a run per test, which would likely be slower. There might be a way to do it by implementing a custom test runner and the profiler built-ins to get the stats with a single-threaded runner, but that feels like a lot of extra work and another tool entirely. It could be worked out statically to a degree, but things like dynamic dispatch, generics, etc. make this also pretty complex. With Tarpaulin is working on adding

Generally, I think keeping the collection of coverage stats to the users, and using that to filter the mutants, is a good first step. That means it can be plugged into existing setups that may use grcov, kcov, tarpaulin, or cargo-llvm-cov. Unfortunately, I think the option to make it an easy "always on" thing won't work for a large number of projects that would need bespoke setup. As things start to stabilise and grow in maturity this should be possible, but I think there'll always be a selection of users that have less conventional coverage needs. And for those users, being able to provide a pre-generated coverage report to cargo-mutants would probably be the preferred UX. One example of something I've been planning is on-device embedded coverage using

Just my 2¢ 😁
Thanks @xd009642. I can't currently think of any practical way to do this, so I'm going to close the bug for now.
I thought about this some more after adding nextest support (#85), which does run one test at a time (more or less) and so would be a foundation for collecting coverage one test at a time.

I agree that getting coverage working well on any tree seems a bit fiddly today, so this might be hard to make work out of the box.

For the case originally suggested, of just entirely skipping uncovered code, it seems like the best thing would be for users to either add tests for that code, or manually mark it skipped in cargo-mutants. However, perhaps they want to parallelize working towards better tests using both coverage and mutants, rather than one after the other.

**Skipping spans**

I think it could make sense to have an option like

Also, if this just accepted a format-independent list of

**Accepting test->span maps**

If we do run one test at a time, perhaps using nextest, and they emit coverage, then we can collect a map from test name to lines covered by that test. (Again, with the caveat that the coverage data is not 100% exact, and that some kinds of test might not collect coverage well.) By inverting this map we could see which tests could potentially catch a bug in some given line, and then run only those tests. For very large crates this might give a significant improvement in performance, especially if they already expect to be tested under nextest and so already pay the one-test-at-a-time performance cost.
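The inversion step described above could be sketched roughly like this. This is a minimal sketch, not code from cargo-mutants; the function name and the map representations are hypothetical:

```rust
use std::collections::HashMap;

/// Invert a map from test name -> lines covered by that test into a map
/// from line -> tests covering it. A mutant on a given line then only
/// needs the tests listed for that line to be run.
fn invert_coverage(per_test: &HashMap<String, Vec<u32>>) -> HashMap<u32, Vec<String>> {
    let mut by_line: HashMap<u32, Vec<String>> = HashMap::new();
    for (test, lines) in per_test {
        for &line in lines {
            by_line.entry(line).or_default().push(test.clone());
        }
    }
    // Sort test names per line for deterministic output.
    for tests in by_line.values_mut() {
        tests.sort();
    }
    by_line
}

fn main() {
    let mut per_test = HashMap::new();
    per_test.insert("tests::parse".to_string(), vec![10, 11, 42]);
    per_test.insert("tests::format".to_string(), vec![42, 43]);
    let by_line = invert_coverage(&per_test);
    // A mutant on line 42 is potentially caught by both tests;
    // a mutant on line 10 only by tests::parse.
    println!("{:?}", by_line.get(&42));
}
```

The caveat from the text applies: since coverage is not 100% exact, a real implementation would probably want to fall back to running the full suite when a mutated line appears in no test's coverage.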
So just a small comment on some playing around with ideas in this area. Recently I overhauled tarpaulin's reporting to better get function/method names, and to do so in a way that matches cargo-mutants. I then generated an lcov coverage report, as that format currently has function names, grabbed all the functions with 0 hits, and put them in a

Abridged version of the lcov coverage report
Generated mutants.toml
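The zero-hit extraction described above could be sketched like this, assuming lcov's `FNDA:<hits>,<name>` function records. This is a hypothetical sketch, not tarpaulin or cargo-mutants code; a real version would also track `SF:` records to qualify names by file:

```rust
/// Collect the names of functions with zero hits from an lcov report.
/// lcov records function hit counts as lines of the form "FNDA:<hits>,<name>".
fn zero_hit_functions(lcov: &str) -> Vec<String> {
    let mut out = Vec::new();
    for line in lcov.lines() {
        if let Some(rest) = line.strip_prefix("FNDA:") {
            if let Some((hits, name)) = rest.split_once(',') {
                if hits.trim() == "0" {
                    out.push(name.to_string());
                }
            }
        }
    }
    out
}

fn main() {
    let report = "SF:src/lib.rs\nFNDA:5,covered_fn\nFNDA:0,untested_fn\nend_of_record";
    // Each zero-hit function name could then be written into an
    // exclusion list for cargo-mutants to skip.
    for name in zero_hit_functions(report) {
        println!("skip: {name}");
    }
}
```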
Discussed in #23
Originally posted by xd009642 February 13, 2022
So this may be worth creating an issue for, but this is largely an idle thought I had yesterday. Mutation testing improves on things like coverage by making sure the tests are actually useful, not just hitting a ton of lines/conditions without checking any of the values. Now, if we already have code coverage results for our tests and can see that some functions aren't tested at all, we could save time by not applying mutations to them; after all, none of the mutations would be caught.
For maximum usability this should probably take the form of accepting an optional argument in some open coverage format, like an lcov report or cobertura.xml, which a number of coverage tools already output.
Yeah, good idea. `mutagen` optionally does something like this according to its documentation, but I have not looked at the implementation.

We could take it a step further by understanding which tests run which function under test. Functions not reached by any test are apparently just not tested. Functions that are reached by some tests, we can mutate and then run only the relevant tests. This would potentially be dramatically faster on some trees.
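"Run only the relevant tests" could, as one option, lean on libtest's exact-match filter. A minimal sketch, assuming a plain `cargo test` runner; the helper name is hypothetical, and nextest or a custom runner would filter differently:

```rust
use std::process::Command;

/// Build a `cargo test` invocation that runs only the one named test,
/// using libtest's `--exact` filter after the `--` separator.
fn single_test_command(test_name: &str) -> Command {
    let mut cmd = Command::new("cargo");
    cmd.args(["test", "--", "--exact", test_name]);
    cmd
}

fn main() {
    let cmd = single_test_command("tests::parse_works");
    let args: Vec<String> = cmd
        .get_args()
        .map(|a| a.to_string_lossy().into_owned())
        .collect();
    // Inspect the argument list rather than actually spawning cargo here.
    println!("{args:?}");
}
```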
That said, I think there are a few things that might make this annoying to implement reliably, though perhaps my preconceptions are out of date. In the past, getting coverage files out of Rust tended to be a bit platform-dependent and fiddly to set up, and the output was in a platform-dependent format that required external preprocessing. Both of these are in tension with my goal of a very easy start with cargo-mutants.
However, there is now https://blog.rust-lang.org/inside-rust/2020/11/12/source-based-code-coverage.html providing `-Z instrument-coverage`, which is moving towards stabilization as `-C instrument-coverage`. So if this ends up with a way to just directly get a platform-independent coverage representation out of `cargo`, this might be pretty feasible.

Coverage may still raise some edge cases if the test suite starts subprocesses, potentially in different directories, as both cargo-mutants and cargo-tarpaulin seem to do. Will we still collect all the aggregate coverage info? But we could still offer it for trees where it does work well. And maybe it will be fine.
There might also be a hairy bit about mapping from a function name back to the right `cargo test` invocation to hit it. But that also can probably be done: if nothing else, perhaps by just running the test binary directly...

Possibly this could be done with https://github.com/taiki-e/cargo-llvm-cov