
Adjust configuration in subsystem regression tests to Kusama network #3528

Closed
AndreiEres opened this issue Feb 29, 2024 · 1 comment · Fixed by #3583
Assignees
Labels
T12-benchmarks This PR/Issue is related to benchmarking and weights.

Comments

@AndreiEres
Contributor

Subsystem regression tests for availability read and write were introduced in #3311.

However, their configuration was simply copied from the benchmark examples. For more representative results, we should simulate the Kusama network.

@AndreiEres AndreiEres self-assigned this Feb 29, 2024
@AndreiEres AndreiEres added the T12-benchmarks This PR/Issue is related to benchmarking and weights. label Feb 29, 2024
@AndreiEres
Contributor Author

Copied from what @sandreim wrote here: #3311 (comment)

```rust
let mut config = TestConfiguration::default();
config.n_cores = 20;
```

This value should be set to the number of candidates validators are expected to check per relay chain block. This depends on the approval voting parameters. We should also factor in async backing; that is, we'd expect to have one included candidate at every block.

We can calibrate these in a follow-up PR.
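For illustration only, here is a minimal sketch of how `n_cores` could be derived from the validator set size rather than hard-coded. It assumes `TestConfiguration` exposes fields by the names used in the configuration quoted further down this thread (`n_validators`, `max_validators_per_core`); treat it as a sketch, not the crate's actual API.

```rust
// Hypothetical helper: derive the number of occupied cores from the size of
// the validator set and the per-core validator group size. With async backing
// we expect roughly one included candidate per core at every block, so the
// number of cores to simulate is the number of validator groups.
fn kusama_like_n_cores(n_validators: usize, max_validators_per_core: usize) -> usize {
    n_validators / max_validators_per_core
}

let mut config = TestConfiguration::default();
config.n_cores = kusama_like_n_cores(300, 5); // 60 for a Kusama-sized validator set
```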

github-merge-queue bot pushed a commit that referenced this issue Mar 1, 2024
…#3311)

### What's been done
- `subsystem-bench` has been split into two parts: a CLI benchmark runner and a library.
- The CLI runner is quite simple: it just allows us to run `.yaml`-based test sequences. For now it should only be used to run benchmarks during development.
- The library is used by the CLI runner and by the regression tests. Some code was changed to make the library independent of the runner.
- Added the first regression tests for availability read and write; they replicate the existing test sequences.

### How we run regression tests
- Regression tests are simply Rust integration tests without harnesses.
- They should only be compiled under the `subsystem-benchmarks` feature to prevent them from running alongside other tests.
- This doesn't work when running tests with `nextest` in CI, so additional filters have been added to the `nextest` runs.
- Each benchmark run takes a different amount of time at the beginning, so we "warm up" the tests until their CPU usage differs by only 1%.
- After the warm-up, we run the benchmarks a few more times and compare the average with the expected value within a given precision (see the sketch after this list).
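Not the actual harness code, just a sketch of the warm-up-then-compare idea described above. `run_benchmark`, the 1% warm-up threshold, the number of measured runs, and `assert_within` are illustrative stand-ins for whatever the real tests use:

```rust
/// Rerun the benchmark until two consecutive runs agree within 1%,
/// then average a few more runs (the "measured" phase).
fn warmed_up_average(mut run_benchmark: impl FnMut() -> f64) -> f64 {
    let mut previous = run_benchmark();
    loop {
        let current = run_benchmark();
        if (current - previous).abs() / previous < 0.01 {
            break; // consecutive runs differ by less than 1%
        }
        previous = current;
    }
    // After the warm-up, average a handful of measured runs.
    let samples: Vec<f64> = (0..5).map(|_| run_benchmark()).collect();
    samples.iter().sum::<f64>() / samples.len() as f64
}

/// Compare the measured average against the expected value within a precision.
fn assert_within(average: f64, expected: f64, precision: f64) {
    assert!(
        (average - expected).abs() / expected <= precision,
        "benchmark regressed: got {average}, expected {expected} within {precision}",
    );
}
```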

### What is still wrong?
- I haven't managed to set up approval voting tests. The spread of their results is too large and can't be narrowed down within a reasonable warm-up time.
- The tests start an unconfigurable Prometheus endpoint inside, which causes errors because they all use the same port 9999. I disable it with a flag, but I think it would be better to move launching the endpoint outside the tests, as we already do with `valgrind` and `pyroscope`. For now, `prometheus` is still used inside the tests (a generic sketch of avoiding the fixed port follows).
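As a side note on the port clash: one generic way to avoid hard-coding port 9999 in tests is to let the OS hand out a free port. This is only a standard-library sketch of that idea, not what `subsystem-bench` actually does:

```rust
use std::net::TcpListener;

// Bind to port 0 so the OS assigns any free port, then reuse that port
// for the metrics endpoint instead of a fixed 9999.
fn free_local_port() -> std::io::Result<u16> {
    let listener = TcpListener::bind("127.0.0.1:0")?;
    Ok(listener.local_addr()?.port())
}
```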

### Future work
* #3528
* #3529
* #3530
* #3531

---------

Co-authored-by: Alexander Samusev <[email protected]>
skunert pushed a commit to skunert/polkadot-sdk that referenced this issue Mar 4, 2024
github-merge-queue bot pushed a commit that referenced this issue Mar 11, 2024
Fixes #3528

```rust
latency:
    mean_latency_ms = 30 // common sense
    std_dev = 2.0 // common sense
n_validators = 300 // max number of validators, from chain config
n_cores = 60 // 300/5
max_validators_per_core = 5 // default
min_pov_size = 5120 // max
max_pov_size = 5120 // max
peer_bandwidth = 44040192 // from the Parity's kusama validators
bandwidth = 44040192 // from the Parity's kusama validators
connectivity = 90 // we need to be connected to 90-95% of peers
```
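A rough sketch of what these values might look like when applied to the `TestConfiguration` from the snippet earlier in this thread. The exact field names beyond `n_cores` and the shape of the latency struct are assumptions for illustration, not the crate's real API:

```rust
// Kusama-like parameters mirroring the values listed above.
// Field names other than `n_cores` are assumed for illustration.
let mut config = TestConfiguration::default();
config.n_validators = 300;          // max validator count, from the chain config
config.n_cores = 60;                // 300 validators / 5 validators per core
config.max_validators_per_core = 5; // default group size
config.min_pov_size = 5120;         // always use the maximum PoV size
config.max_pov_size = 5120;
config.peer_bandwidth = 44_040_192; // observed on Parity's Kusama validators
config.bandwidth = 44_040_192;
config.connectivity = 90;           // connected to ~90-95% of peers
config.latency = Some(PeerLatency { mean_latency_ms: 30, std_dev: 2.0 }); // hypothetical latency struct
```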
dharjeezy pushed a commit to dharjeezy/polkadot-sdk that referenced this issue Mar 24, 2024