docs(perf): Added new CLI flags for iterations + updated docs (#258)
maschad authored Aug 24, 2023
1 parent 947ad8a commit eebc07b
Showing 3 changed files with 50 additions and 23 deletions.
53 changes: 38 additions & 15 deletions perf/README.md
@@ -15,7 +15,14 @@ Benchmark results can be visualized with https://observablehq.com/@libp2p-worksp
3. Wait for action run to finish and to push a commit to your branch.
4. Visualize results on https://observablehq.com/@libp2p-workspace/performance-dashboard.

## Running via local machine
## Running with Terraform on AWS manually

### Prerequisites

- Terraform 1.5.5 or later
- Node.js 18 or later
- [an AWS IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html)


### Provision infrastructure

@@ -26,13 +33,28 @@ Benchmark results can be visualized with https://libp2p-worksp
5. `CLIENT_IP=$(terraform output -raw client_ip)`
6. `SERVER_IP=$(terraform output -raw server_ip)`

### Build and run implementations
**Notes**
- If you encounter errors, you may need to reset the infrastructure; you can do that by running `terraform destroy` followed by `terraform apply`.
- While running Terraform you may encounter the following error:
```bash
Error: collecting instance settings: reading EC2 Launch Template versions: couldn't find resource
│ with module.short_lived_server[0].aws_instance.perf,
│ on ../../modules/short_lived/main.tf line 15, in resource "aws_instance" "perf":
│ 15: resource "aws_instance" "perf" {
```
- If you set the *TF_VAR* [`long_lived_enabled`](./terraform/configs/local/terraform.tf#L42) environment variable to **true**, Terraform will also spin up the long-lived resources that the short-lived resources depend on (see the sketch after these notes).
- It's best to destroy the infrastructure once you're done testing; you can do that by running `terraform destroy`.
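
A minimal sketch of the reset-and-reapply flow described in the notes above, assuming the commands are run from the same Terraform working directory used for the initial `terraform apply`:

```bash
# Tear down whatever half-created state is left over
terraform destroy

# Terraform reads TF_VAR_* environment variables as input variables,
# so this enables the long-lived resources for the next apply
export TF_VAR_long_lived_enabled=true

# Re-create the infrastructure and re-read the outputs
terraform apply
CLIENT_IP=$(terraform output -raw client_ip)
SERVER_IP=$(terraform output -raw server_ip)
```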
### Build and run libp2p implementations
_WARNING_: Running the perf tests might take a while.
Once you have provisioned your infrastructure, you can build and run the libp2p implementations on the AWS instances.
1. `cd runner`
2. `npm ci`
3. `npm run start -- --client-public-ip $CLIENT_IP --server-public-ip $SERVER_IP`
* Note: The default number of iterations perf will run is 10; a different count can be set with the `--iterations <value>` option, as shown below.
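
For example, to run each benchmark with 20 iterations instead of the default (the count here is purely illustrative):

```bash
npm run start -- \
  --client-public-ip "$CLIENT_IP" \
  --server-public-ip "$SERVER_IP" \
  --iterations 20
```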
### Deprovision infrastructure
@@ -41,27 +63,28 @@ _WARNING_: Running the perf tests might take a while.
## Adding a new implementation or a new version
1. Add implementation to `impl/`.
- For a new implementation, create a folder `impl/<your-implementation-name>/`
1. Add the implementation to new subdirectory in [`impl/*`](./impl/).
- For a new implementation, create a folder `impl/<your-implementation-name>/` e.g. `go-libp2p`
- For a new version of an existing implementation, create a folder `impl/<your-implementation-name>/<your-implementation-version>`.
- In that folder include a `Makefile` that builds an executable and stores it next to the `Makefile` under the name `perf`.
- Requirements for the executable:
- Running as a libp2p-perf server
- Command line flags
- `--run-server`
- Running as a libp2p-perf server:
- The perf server must not exit as it will be closed by the test runner.
- The executable must accept the command-line flag `--run-server`, which indicates it should run as a server.
- Running as a libp2p-perf client
- Given that perf is a client-driven set of benchmarks, performance is measured by the client.
- Input via command line
- `--server-ip-address`
- `--transport` (see `runner/versions.ts` for possible variants)
- `--server-address`
- `--transport` (see [`runner/versions.ts`](./runner/src/versions.ts#L7-L43) for possible variants)
- `--upload-bytes` number of bytes to upload per stream.
- `--download-bytes` number of bytes to download per stream.
- Output
- Logging MUST go to stderr.
- Measurement output is printed to stdout as JSON in the form of:
- Logging MUST go to `stderr`.
- Measurement output is printed to **stdout** as JSON in the form of:
```json
{"latency": 0.246442851}
```
Note that the measurement includes the time to (1) establish the
connection, (2) upload the bytes, and (3) download the bytes. Example server and client invocations are sketched after this list.
2. For a new implementation, in `impl/Makefile` include your implementation in the `all` target.
3. For a new version, reference version in `runner/src/versions.ts`.
2. For a new implementation, include your implementation in the `all` target in [`impl/Makefile`](./impl/Makefile#L7).
3. For a new version, reference the version in [`runner/src/versions.ts`](./runner/src/versions.ts#L7-L43).
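
As a rough illustration of the executable contract described above (the binary path, server port, and transport value are placeholders, not a definitive interface):

```bash
# Server side: keeps running until the test runner shuts it down
./perf --run-server

# Client side: uploads 100 MiB (104857600 bytes), downloads nothing,
# and prints a single JSON measurement to stdout
./perf \
  --server-address "$SERVER_IP:4001" \
  --transport tcp \
  --upload-bytes 104857600 \
  --download-bytes 0
# Expected stdout, e.g.: {"latency": 0.246442851}
```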
4 changes: 1 addition & 3 deletions perf/runner/src/benchmark-result-type.ts
@@ -36,9 +36,7 @@ export type Result = {
};

export type ResultValue = {
connectionEstablishedSeconds: number,
uploadSeconds: number,
downloadSeconds: number,
latency: number
};

export type Comparison = {
16 changes: 11 additions & 5 deletions perf/runner/src/index.ts
@@ -4,7 +4,7 @@ import yargs from 'yargs';
import fs from 'fs';
import { BenchmarkResults, Benchmark, Result, IperfResults, PingResults, ResultValue } from './benchmark-result-type';

async function main(clientPublicIP: string, serverPublicIP: string) {
async function main(clientPublicIP: string, serverPublicIP: string, iterations: number) {
const pings = runPing(clientPublicIP, serverPublicIP);
const iperf = runIPerf(clientPublicIP, serverPublicIP);

@@ -19,7 +19,7 @@ async function main(clientPublicIP: string, serverPublicIP: string) {
uploadBytes: 100 << 20,
downloadBytes: 0,
unit: "bit/s",
iterations: 10,
iterations,
}),
runBenchmarkAcrossVersions({
name: "Single Connection throughput – Download 100 MiB",
@@ -28,7 +28,7 @@ async function main(clientPublicIP: string, serverPublicIP: string) {
uploadBytes: 0,
downloadBytes: 100 << 20,
unit: "bit/s",
iterations: 10,
iterations,
}),
runBenchmarkAcrossVersions({
name: "Connection establishment + 1 byte round trip latencies",
@@ -37,7 +37,7 @@ async function main(clientPublicIP: string, serverPublicIP: string) {
uploadBytes: 1,
downloadBytes: 1,
unit: "s",
iterations: 100,
iterations: iterations * 10,
}),
];

@@ -232,8 +232,14 @@ const argv = yargs
demandOption: true,
description: 'Server public IP address',
},
'iterations': {
type: 'number',
default: 10,
description: 'Number of iterations to run',
demandOption: false,
}
})
.command('help', 'Print usage information', yargs.help)
.parseSync();

main(argv['client-public-ip'] as string, argv['server-public-ip'] as string);
main(argv['client-public-ip'] as string, argv['server-public-ip'] as string, argv['iterations'] as number);
