
Reduce test runtimes #633

Open · rnelson0 opened this issue Jan 19, 2017 · 7 comments

@rnelson0 (Member) commented Jan 19, 2017

It has been pointed out that the collectd test suite takes a long time to run, approximately 15+ minutes on Travis CI and possibly longer elsewhere. This makes iterating on a PR a test of patience. Average test runtimes for Vox Pupuli modules are in the 3-5 minute range, which makes collectd's 3-5x longer. We need to investigate ways to reduce this time.

We would like to solicit some solutions. Two that have already been discussed as low-hanging fruit are:

  • Parallelized rspec tests
  • Drop some OS support

We are investigating the first already. The second is a bit more controversial. Per the current metadata.json, most of the listed OSes are still supported. Only Debian 6 looks to be fully past end of life. However, the Enterprise Linux 5 family is coming up on its end of life (2017-03-31), and some others may be nearing it as well. Most of the tests loop over each supported OS, so removing even a single OS from the matrix should have a measurable effect on runtime.
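For reference, dropping an OS amounts to removing its entry from the operatingsystem_support array in metadata.json. The fragment below is an illustrative sketch, not the module's actual list:

```json
{
  "operatingsystem_support": [
    {
      "operatingsystem": "Debian",
      "operatingsystemrelease": ["7", "8"]
    },
    {
      "operatingsystem": "RedHat",
      "operatingsystemrelease": ["6", "7"]
    }
  ]
}
```

Tests that build their matrix from this file (e.g. via rspec-puppet-facts) shrink automatically once an entry is removed.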

We are proposing to drop Debian 6 support now. We also plan to drop the EL5 family at the end of March.

That leads us to the more difficult target:

  • Refactor tests for improved runtime without dropping coverage

We would like help from frequent contributors and users with this step. Specifically: highlighting some of the slowest tests, identifying the easiest tests to refactor, and flagging any tests that are unnecessary.
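One low-effort way to find the slowest tests is RSpec's built-in profiling. As a sketch, adding the standard --profile flag to the project's .rspec file prints the slowest examples after each run (the count of 10 is arbitrary):

```
# .rspec — print the 10 slowest examples and example groups after each run
--profile 10
```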

@bastelfreak (Member)

I'm fine with dropping Ubuntu 12.04, RHEL 5, and Debian 6.

@rnelson0 (Member, Author)

Parallelization does seem to help, though the output is much different:

$ time be rake parallel_test

2296 examples, 1 failure

Took 533 seconds (8:53)
Tests Failed

real    8m56.077s
user    11m13.905s
sys     0m5.794s
$ time be rake test

Finished in 13 minutes 2 seconds (files took 16.3 seconds to load)
2296 examples, 1 failure

real    13m51.740s
user    13m32.719s
sys     0m6.207s

@rnelson0 (Member, Author)

#634 shows the reduced runtime: ~10 minutes vs ~16, and hopefully less on average, since one of the jobs seemed to hang longer than the others. It gets 32 parallel threads on the Travis VMs, which makes it FAST. That's the positive.

The negative is that the logs are broken up into clumps per thread. We could decrease the parallelism with some overrides, but that would only make the output less obfuscated, not unobfuscated. When there's a failure, finding it is difficult. (1532 when you give up)
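On "decrease the parallelism with some overrides": the parallel_tests gem honours the PARALLEL_TEST_PROCESSORS environment variable, so the worker count could be capped without code changes. A sketch, with an arbitrary value of 4:

```shell
# Cap the parallel_tests worker count via its documented env var.
export PARALLEL_TEST_PROCESSORS=4
echo "workers capped at: ${PARALLEL_TEST_PROCESSORS}"
# bundle exec rake parallel_test   # would now spawn 4 workers instead of 32
```

Fewer workers means fewer interleaved log clumps, at the cost of some of the speedup.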

There's definitely a tradeoff here that would need to be evaluated to see if it's worthwhile.

@bastelfreak (Member)

As discussed in the modulesync_config PR, I'm fine with parallel_rspec.

@alexjfisher (Member)

@rnelson0 Is there any issue with simplecov/coveralls?
https://travis-ci.org/voxpupuli/puppet-collectd/jobs/193436952 has 32 instances of [Coveralls] Submitting to https://coveralls.io/api/v1

@juniorsysadmin (Member)

The biggest problem is that all of the RSpec unit tests run against every supported operating system for no good reason, especially when the code has no logic that behaves differently on specific operating systems. For instance, if there are no (or few) conditional statements that generate a different configuration file when a certain fact is detected, there is little value in running the same test case against different mocked fact sets. It is a textbook example of an increase in the number of tests not actually providing more confidence that things work as expected.
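The idea above can be sketched in plain Ruby (the fact data below is made up for illustration and is not the module's actual spec helper): when a class has no OS-specific branches, run the full assertions against one representative fact set per OS family rather than every release.

```ruby
# Hypothetical fact sets, one hash per supported OS release.
fact_sets = [
  { os: 'CentOS', release: '6',     family: 'RedHat' },
  { os: 'CentOS', release: '7',     family: 'RedHat' },
  { os: 'Debian', release: '7',     family: 'Debian' },
  { os: 'Debian', release: '8',     family: 'Debian' },
  { os: 'Ubuntu', release: '14.04', family: 'Debian' },
]

# One representative per osfamily keeps coverage of the code paths that
# actually branch (if any), while shrinking the test matrix.
representatives = fact_sets.group_by { |f| f[:family] }
                           .map { |_family, sets| sets.first }

puts representatives.map { |f| "#{f[:os]} #{f[:release]}" }.join(', ')
# → CentOS 6, Debian 7
```

The full per-OS loop could then be reserved for the handful of specs that exercise genuinely OS-dependent behaviour.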

@bastelfreak (Member)
