
Expensive tests should be skipped by default #104

Closed · Apteryks opened this issue Feb 19, 2024 · 3 comments

Comments

@Apteryks

Apteryks commented Feb 19, 2024

Hi!

Some tests take a very long time (for example, some individual tests take about 192 s on a very fast machine), resulting in a total test time of something like 30 minutes, an hour, or worse, depending on the performance of the build machine.

This seems a bit excessive for a test suite run via make check in the context of packaging. I'd suggest that make check run only the fast tests, with benchmarks/slow tests kept for another target such as make check-extensive or similar.

Other test runners such as Pytest typically handle this with markers, e.g. 'slow' tests are skipped by default but can be selected if desired. I'm not sure whether such an approach can be accomplished with Autotools tests, hence my proposal of a separate target above.
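For what it's worth, a minimal sketch of how the marker idea could translate to an Automake-driven Guile test script (the FIBERS_EXPENSIVE_TESTS variable name is hypothetical, not something fibers defines). Automake's parallel test driver reports a test that exits with status 77 as SKIP, so a check-extensive target could simply re-run make check with the variable set:

```scheme
;; Hypothetical sketch, not fibers' actual test layout: skip an expensive
;; test unless the caller opts in.  The Automake test driver treats exit
;; status 77 as SKIP rather than FAIL.
(unless (getenv "FIBERS_EXPENSIVE_TESTS")
  (display "skipping expensive test; set FIBERS_EXPENSIVE_TESTS=1 to run it\n")
  (exit 77))

;; ... expensive test body would go here ...
```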

Thanks for this useful concurrency library!

@emixa-d
Collaborator

emixa-d commented Oct 30, 2024

> benchmarks/slow tests

These aren't categorically the same thing.

Since these benchmarks don't verify time/space complexities, they are mostly useless for automated testing, so the high-iteration variants could be commented out (though a low-iteration variant should be kept, as it can still detect some bugs). Someone investigating performance could temporarily uncomment them, or write a proper framework for fitting the results to a performance model and checking goodness of fit (with an option to disable these in settings where reproducibility is essential).
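As a sketch of what that could look like (the variable name, the counts, and the run-send-receive! procedure are assumptions for illustration, not taken from the fibers test suite):

```scheme
;; Hypothetical sketch: the cheap variant always runs under make check and
;; can still catch gross bugs; the full benchmark only runs when requested.
(define %iterations
  (if (getenv "FIBERS_BENCHMARK")
      1000000   ; full benchmark run, opt-in only
      1000))    ; low-iteration smoke test for the default run

(run-send-receive! %iterations)  ; stand-in for the actual benchmark body
```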

If slow tests remain after removing the benchmarks, I think it's better to look into optimising them before disabling them. E.g., if it's one of those tests that runs something in a loop, check whether reducing the number of iterations preserves the test's power to detect the potential bug, and if so, reduce the iteration count; a sketch of that kind of change follows.
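Purely illustrative sketch (the count, the constant name, and exercise-scheduler! are assumptions): factoring the iteration count into one named constant makes reducing it a one-line, documented change.

```scheme
;; Hypothetical sketch: a looped regression test with its iteration count
;; factored out, so it can be tuned (and justified) in a single place.
(define %stress-iterations 500)  ; assumed to be enough to reproduce the original bug

(let loop ((i 0))
  (when (< i %stress-iterations)
    (exercise-scheduler!)        ; stand-in for the operation under test
    (loop (+ i 1))))
```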

> total test time of something like 30 minutes

This is indeed excessive for something the size of fibers.

@civodul
Collaborator

civodul commented Nov 23, 2024

@Apteryks Done with #113.

@civodul civodul closed this as completed Nov 23, 2024
@Apteryks
Author

Apteryks commented Nov 24, 2024

Yay, thank you! And since we are not immune to slow tests in Guix, you may want to check https://issues.guix.gnu.org/74394, which follows suit :-). The pre-push hook should be tested more.
