Expensive tests should be skipped by default #104
These aren't categorically the same thing. Since these benchmarks don't verify time or space complexities, they are mostly useless for automatic testing, so the high-iteration variants could be commented out (though a low-iteration variant should be kept, as it can still detect some bugs). Someone investigating performance could temporarily uncomment them, or write a proper framework that fits the results to a performance model and checks the goodness of fit (with an option to disable these in settings where reproducibility is essential).

If slow tests remain after removing the benchmarks, I think it's better to look into optimising the tests before disabling them. E.g. if it's one of those tests that run something in a loop, check whether reducing the number of iterations preserves the power of the test to detect the potential bug, and if so, reduce that number of iterations.
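To make the "fit the results to a performance model and check for goodness of fit" idea concrete, here is a minimal sketch in Python (a hypothetical helper, not anything in fibers or Guix): it fits log(time) against log(size) by least squares, so a benchmark's growth exponent and fit quality can be checked automatically rather than eyeballed.

```python
import math

def fit_exponent(sizes, times):
    """Least-squares fit of log(time) = k*log(size) + c.

    Returns (k, r_squared); an r_squared near 1 means the power-law
    model explains the timings well, and k estimates the complexity
    exponent (k ~ 1 for linear, k ~ 2 for quadratic, etc.).
    """
    xs = [math.log(n) for n in sizes]
    ys = [math.log(t) for t in times]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    k = sxy / sxx
    c = my - k * mx
    ss_res = sum((y - (k * x + c)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1 - ss_res / ss_tot if ss_tot else 1.0
    return k, r2

# Illustrative timings that grow roughly linearly with input size:
# the fitted exponent should come out near 1 with a good fit.
k, r2 = fit_exponent([100, 200, 400, 800], [0.011, 0.021, 0.040, 0.082])
```

A test could then assert that the exponent stays below some bound (say, flagging a regression from linear to quadratic), with the caveat from this thread that timing-based checks hurt reproducibility and should be off by default.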
This is indeed excessive for something of the size of fibers.
Yay, thank you! And since we are not immune to slow tests in Guix, you may want to check https://issues.guix.gnu.org/74394, which follows suit :-). The pre-push hook should be tested more.
Hi!
Some tests take a very long time (for example, some individual tests take about 192s on a very fast machine), resulting in a total test time of something like 30 minutes to an hour or worse, depending on the performance of the build machine.
This seems a bit excessive for a test suite that is run on `make check` in the context of packaging. I'd suggest only fast tests are run by `make check`, with benchmarks/slow tests kept for another target such as `make check-extensive` or similar.

Other test runner solutions such as Pytest typically deal with this with markers, e.g. 'slow' tests are skipped by default but can be selected if desired. I'm not sure if such an approach can be accomplished with the Autotools tests, hence my proposition of a different target above.
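For reference, the marker idea can be sketched with Python's standard-library `unittest` (test names and the `RUN_SLOW_TESTS` variable are illustrative, not anything fibers defines): slow tests are skipped by default and only run when explicitly opted in, which is roughly what a `make check` / `make check-extensive` split would give.

```python
import os
import unittest

# Opt-in switch for expensive tests; unset by default, so "make check"
# style runs would skip them.
RUN_SLOW = os.environ.get("RUN_SLOW_TESTS") == "1"

def slow(test_func):
    """Mark a test as slow; it only runs when RUN_SLOW_TESTS=1 is set."""
    return unittest.skipUnless(RUN_SLOW, "slow; set RUN_SLOW_TESTS=1")(test_func)

class ExampleTests(unittest.TestCase):
    def test_fast(self):
        self.assertEqual(sum(range(10)), 45)

    @slow
    def test_expensive_benchmark(self):
        # Stand-in for a high-iteration benchmark variant.
        self.assertEqual(sum(range(1_000_000)), 499999500000)

# Run the suite programmatically so the skip behaviour is visible:
# without RUN_SLOW_TESTS=1, one test runs and one is skipped.
result = unittest.TestResult()
unittest.TestLoader().loadTestsFromTestCase(ExampleTests).run(result)
```

Pytest offers the same thing more directly with `@pytest.mark.slow` plus a collection hook; how to express an equivalent default-off set of tests under the Autotools harness is the open question above.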
Thanks for this useful concurrency library!