add stress tests and benchmarks to Qubes automated tests #5740
Comments
Our current automated tests already put a lot of stress on the system, such as creating many VMs, making many qrexec calls, etc. And we do find issues that way that are much harder to reproduce in normal usage (things happening on ~1% of VM starts or so). So I don't think we need specific improvements here right now. Even not very severe memory leaks are also covered this way, because there is so much activity that they pile up enough to cause out-of-memory errors (we specifically run the tests with limited RAM for this reason). As for benchmarks, that could be a good idea, but the current test setup runs under nested virtualization (to have easy control over the environment), so the results will not be representative of a physical installation at all. Testing directly on real hardware is hard to automate. We have some ideas, like using Intel AMT, but we're not there yet. Also, specific hardware may influence performance results a lot, so results from different machines will not be comparable.
Now that testing on real hardware is being done, I wonder if the Phoronix Test Suite would be usable. At a minimum, it should be able to test our performance on various I/O- and CPU-bound workloads.
Add simple connection latency and throughput tests. Run them with different types of services (scripts, sockets, via fork-server or not). They print the test run time for comparison; the lower, the better. The tests can also be started outside of the full test run by calling /usr/lib/qubes/tests/qrexec_perf.py, which requires the names of two existing and running VMs. QubesOS/qubes-issues#5740
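For context, a minimal sketch of what such a connection-latency measurement could look like, run from inside a VM; this is not the actual qrexec_perf.py, and the target VM name, service name, and iteration count below are placeholders:

```python
#!/usr/bin/env python3
"""Sketch of a qrexec connection-latency measurement (illustrative only)."""
import subprocess
import time

ITERATIONS = 100        # hypothetical number of calls to average over
TARGET_VM = "test-vm"   # hypothetical running VM
SERVICE = "test.Echo"   # hypothetical qrexec service installed in TARGET_VM

start = time.monotonic()
for _ in range(ITERATIONS):
    # Each call sets up a full qrexec connection, runs the service, and tears
    # the connection down, so the total time is dominated by connection latency.
    subprocess.run(
        ["qrexec-client-vm", TARGET_VM, SERVICE],
        stdin=subprocess.DEVNULL,
        stdout=subprocess.DEVNULL,
        check=True,
    )
elapsed = time.monotonic() - start

# Lower is better, matching the convention used by the tests above.
print(f"{ITERATIONS} calls in {elapsed:.2f}s "
      f"({elapsed / ITERATIONS * 1000:.1f} ms/call)")
```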
* origin/pr/647: tests: add qrexec performance tests
Pull request description: Add simple connection latency and throughput tests. Run them with different types of services (scripts, sockets, via fork-server or not). They print the test run time for comparison; the lower, the better. QubesOS/qubes-issues#5740 This will be especially useful to measure the impact of QubesOS/qubes-core-qrexec#141 and similar changes.
Add a few simple storage performance tests using the fio tool. The tests check dom0's root and varlibqubes pools (by default the same thing, but in XFS or BTRFS setups they are different), as well as a VM's root/private/volatile volumes. The last one is tested by creating an ext4 filesystem on it first. This isn't very representative (normally volatile is used for swap and CoW data), but it allows comparing the results with the other volumes. The tests can also be started outside of the full test run by calling /usr/lib/qubes/tests/storage_perf.py, which requires the name of a VM to test (which may be dom0). The test scenarios are modeled after what the kdiskmark tool uses. QubesOS/qubes-issues#5740
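For comparison with the scenarios above, a minimal sketch of driving fio directly and reading the bandwidth back from its JSON output; the job parameters and target file are illustrative, not the ones used by storage_perf.py:

```python
#!/usr/bin/env python3
"""Sketch of a single fio run (illustrative only, not storage_perf.py)."""
import json
import subprocess

# A 4k random-read job, roughly in the spirit of kdiskmark-style scenarios.
result = subprocess.run(
    [
        "fio",
        "--name=randread-test",          # hypothetical job name
        "--filename=/var/tmp/fio.bin",   # hypothetical file on the volume under test
        "--size=256M",
        "--rw=randread",
        "--bs=4k",
        "--iodepth=32",
        "--ioengine=libaio",
        "--direct=1",
        "--runtime=10",
        "--time_based",
        "--output-format=json",
    ],
    capture_output=True,
    text=True,
    check=True,
)

data = json.loads(result.stdout)
job = data["jobs"][0]
# fio reports bandwidth in KiB/s; convert to MiB/s for readability.
print(f"read bandwidth: {job['read']['bw'] / 1024:.1f} MiB/s")
```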
VMs and dom0 should be stress tested, perhaps using stress-ng or whatever other tools are suitable.
The reason is that stress can lead to crashes and similar problems, which are better noticed during testing than by users of a stable release.
Benchmarks would also be useful, because some changes might degrade performance, and that too is better found during testing. Maybe the Phoronix Test Suite? Other tools?
It would be useful to run benchmarks first, then the stress tests, then the benchmarks again, and compare whether the benchmark results got noticeably worse, which is possible if there is a memory leak or a similar issue.
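A minimal sketch of that benchmark/stress/benchmark workflow, assuming stress-ng is installed; the benchmark command, the placeholder VM names, and the 10% regression threshold are examples, and any benchmark that prints a single run time (lower is better) would work here:

```python
#!/usr/bin/env python3
"""Sketch of the benchmark -> stress -> benchmark comparison (illustrative)."""
import subprocess
import time

# Placeholder benchmark: the qrexec performance script mentioned above,
# with two hypothetical running VMs.
BENCHMARK_CMD = ["/usr/lib/qubes/tests/qrexec_perf.py", "vm1", "vm2"]
STRESS_CMD = ["stress-ng", "--cpu", "4", "--vm", "2", "--timeout", "300s"]
REGRESSION_THRESHOLD = 1.10  # flag results more than 10% worse (arbitrary example)

def run_benchmark() -> float:
    """Time one benchmark run; lower is better."""
    start = time.monotonic()
    subprocess.run(BENCHMARK_CMD, check=True)
    return time.monotonic() - start

before = run_benchmark()
subprocess.run(STRESS_CMD, check=True)   # stress the system in between
after = run_benchmark()

print(f"before: {before:.2f}s, after: {after:.2f}s")
if after > before * REGRESSION_THRESHOLD:
    # A noticeably worse result after stress may indicate a leak or similar issue.
    print("WARNING: benchmark degraded after stress test")
```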