
add stress tests and benchmarks to Qubes automated tests #5740

Open
adrelanos opened this issue Mar 22, 2020 · 2 comments
Labels: C: infrastructure, P: default (default priority for new issues, to be replaced given sufficient information)

Comments

@adrelanos
Member

VMs and dom0 should be stress tested, perhaps using stress-ng or whatever other tools are suitable.

The reason is that stress can lead to crashes and similar failures, which are better noticed during testing than by users of a stable release.

Benchmarks would also be useful, because some changes might degrade performance, and that too is better found out during testing. Maybe the Phoronix Test Suite? Others?

It would be useful to run benchmarks first, then stress tests, then benchmarks again, and compare whether the results got significantly worse, which is possible if there is, for example, a memory leak.
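Purely as an illustration of that benchmark/stress/benchmark workflow, here is a minimal sketch. It assumes stress-ng is installed in the VM under test and uses a trivial CPU loop as a stand-in for a real benchmark suite; nothing here is existing Qubes test code.

```python
#!/usr/bin/env python3
"""Sketch: benchmark, stress, benchmark again, compare (assumptions noted above)."""
import subprocess
import time

REGRESSION_THRESHOLD = 1.10  # flag results more than 10% slower than baseline


def run_benchmark() -> float:
    """Placeholder benchmark: time a fixed CPU-bound loop (seconds)."""
    start = time.monotonic()
    total = 0
    for i in range(10_000_000):
        total += i * i
    return time.monotonic() - start


def run_stress(seconds: int = 300) -> None:
    """Run stress-ng CPU and memory stressors for the given duration."""
    subprocess.run(
        ["stress-ng", "--cpu", "4", "--vm", "2", "--vm-bytes", "75%",
         "--timeout", f"{seconds}s"],
        check=True,
    )


def main() -> None:
    before = run_benchmark()
    run_stress()
    after = run_benchmark()
    print(f"before={before:.2f}s after={after:.2f}s")
    if after > before * REGRESSION_THRESHOLD:
        raise SystemExit("benchmark got significantly worse after the stress run")


if __name__ == "__main__":
    main()
```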

References:

adrelanos added the P: default and T: enhancement labels on Mar 22, 2020
@marmarek
Member

Our current automated tests already put a lot of stress on the system, like creating many VMs, making many qrexec calls, etc. We do find issues that way that are much harder to reproduce in normal usage (things that happen on roughly 1% of VM starts, or similar). So I don't think we need specific improvements here right now. Even not-very-severe memory leaks are covered this way, because there is so much activity that they pile up enough to cause out-of-memory errors (we specifically run the tests with limited RAM for this reason).

As for benchmarks, that could be a good idea, but the current test setup runs under nested virtualization (to have easy control over the environment), so the results will not be representative of a physical installation at all. Testing directly on real hardware is hard to automate. We have some ideas, like using Intel AMT, but we're not there yet. Also, the specific hardware can influence performance results a lot, so results from different machines will not be comparable...
Severe performance degradation is caught by timeouts. That is currently the case, for example, for HVM grub on R4.1...
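As a rough illustration of how a timeout can flag a severe performance regression in an automated test, here is a minimal sketch; the start_vm() coroutine is a hypothetical stand-in for the real VM start, and the 60-second limit is an assumption, not a value from the Qubes test suite.

```python
import asyncio
import unittest


class TC_VMStartTimeout(unittest.IsolatedAsyncioTestCase):
    async def test_vm_start_within_timeout(self):
        async def start_vm():
            # Hypothetical placeholder for the real VM start work.
            await asyncio.sleep(1)

        # If a change makes startup far slower than expected, wait_for raises
        # TimeoutError and the test fails, flagging the regression.
        await asyncio.wait_for(start_vm(), timeout=60)


if __name__ == "__main__":
    unittest.main()
```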

@DemiMarie

Now that testing on real hardware is being done, I wonder whether the Phoronix Test Suite would be usable. At a minimum, it should be able to test our performance on various I/O- and CPU-bound workloads.

andrewdavidwong removed this from the Non-release milestone on Aug 13, 2023
marmarek added a commit to marmarek/qubes-core-admin that referenced this issue on Jan 21, 2025:

Add simple connection latency and throughput tests. Run them with
different types of services (script, socket, via fork-server or not).
They print a test run time for comparison - the lower the better.

QubesOS/qubes-issues#5740
marmarek added a commit to marmarek/qubes-core-admin that referenced this issue on Jan 25, 2025:

Add simple connection latency and throughput tests. Run them with
different types of services (script, socket, via fork-server or not).
They print a test run time for comparison - the lower the better.

The tests can also be started outside of the full test run by calling
/usr/lib/qubes/tests/qrexec_perf.py. It requires giving the names of two
existing and running VMs.

QubesOS/qubes-issues#5740
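The actual test code lives in the referenced commit; purely as an illustration of the latency-measurement idea, here is a minimal standalone dom0 sketch that times repeated trivial qrexec round-trips using qvm-run --pass-io. This is an assumption-laden simplification, not the qrexec_perf.py implementation.

```python
#!/usr/bin/env python3
"""Minimal dom0 sketch of a qrexec connection-latency measurement (not qrexec_perf.py)."""
import subprocess
import sys
import time


def measure_latency(vm: str, iterations: int = 100) -> float:
    start = time.monotonic()
    for _ in range(iterations):
        # Each qvm-run --pass-io invocation performs a full qrexec connection
        # setup and teardown, so the loop measures per-call latency.
        subprocess.run(
            ["qvm-run", "--pass-io", vm, "true"],
            check=True,
            stdout=subprocess.DEVNULL,
        )
    return time.monotonic() - start


if __name__ == "__main__":
    vm_name = sys.argv[1]  # name of an existing, running VM
    elapsed = measure_latency(vm_name)
    print(f"{vm_name}: 100 qrexec calls in {elapsed:.2f}s (lower is better)")
```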
marmarek added a commit to marmarek/qubes-core-admin that referenced this issue on Jan 26, 2025:

Add a few simple storage performance tests using the fio tool.
The tests check dom0's root and varlibqubes pools (by default the same
thing, but in XFS or BTRFS setups they are different), as well as the VM's
root/private/volatile volumes. The last one is tested by creating an ext4
filesystem on it first. This isn't very representative (normally volatile
is used for swap and CoW data), but it allows comparing results with the
other volumes.

The tests can also be started outside of the full test run by calling
/usr/lib/qubes/tests/storage_perf.py. It requires giving the name of a VM
to test (which may be dom0).

QubesOS/qubes-issues#5740
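For illustration only, here is a minimal sketch of one fio scenario of this kind: it runs a short 4 KiB random-read job against a throwaway file and reports read IOPS from fio's JSON output. The file path, job size, and runtime are assumptions, not the scenarios used by storage_perf.py.

```python
#!/usr/bin/env python3
"""Minimal sketch of a single fio random-read scenario (not storage_perf.py)."""
import json
import subprocess


def run_fio_randread(path: str = "/var/tmp/fio-testfile") -> float:
    result = subprocess.run(
        [
            "fio",
            "--name=randread",
            f"--filename={path}",
            "--rw=randread",
            "--bs=4k",
            "--size=256M",
            "--runtime=15",
            "--time_based",
            "--direct=1",
            "--ioengine=libaio",
            "--output-format=json",
        ],
        check=True,
        capture_output=True,
        text=True,
    )
    data = json.loads(result.stdout)
    # fio's JSON output reports read IOPS under jobs[0]["read"]["iops"].
    return data["jobs"][0]["read"]["iops"]


if __name__ == "__main__":
    print(f"random 4k read: {run_fio_randread():.0f} IOPS")
```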
marmarek added a commit to QubesOS/qubes-core-admin that referenced this issue on Jan 26, 2025:

* origin/pr/647:
  tests: add qrexec performance tests

Pull request description:

Add simple connection latency and throughput tests. Run them with
different types of services (script, socket, via fork-server or not).
They print a test run time for comparison - the lower the better.

QubesOS/qubes-issues#5740

This will be especially useful to measure the impact of QubesOS/qubes-core-qrexec#141 and similar changes.
marmarek added a commit to marmarek/qubes-core-admin that referenced this issue on Feb 14, 2025:

Add a few simple storage performance tests using the fio tool.
The tests check dom0's root and varlibqubes pools (by default the same
thing, but in XFS or BTRFS setups they are different), as well as the VM's
root/private/volatile volumes. The last one is tested by creating an ext4
filesystem on it first. This isn't very representative (normally volatile
is used for swap and CoW data), but it allows comparing results with the
other volumes.

The tests can also be started outside of the full test run by calling
/usr/lib/qubes/tests/storage_perf.py. It requires giving the name of a VM
to test (which may be dom0).

The test scenarios are modeled after what the kdiskmark tool uses.

QubesOS/qubes-issues#5740