
Ampere Performance Toolkit

Ampere Performance Toolkit (APT) is a fork of PerfKitBenchmarker from Google Cloud Platform (GCP): https://github.com/GoogleCloudPlatform/PerfKitBenchmarker

  • APT Version: 1.0.0
  • Upstream Google PerfKitBenchmarker commit SHA: 0fc45c45a25657aa0634ae06cace08cb79e7803b

Features Added

  • Oracle Cloud Infrastructure (OCI) support as a provider
    • APT can automatically provision/cleanup VMs, VCNs, etc. through OCI for workload runs
  • Additional support for BareMetal testing
  • IRQ binding for experimentation with network-intensive workloads
  • A global tuning module that enables declarative bash commands from YAML configs on all systems involved in a test
  • Max throughput mode for key workloads to determine the best throughput possible under a given SLA

In summary, APT is great at capturing all workload parameters and system-under-test parameters in a single, replayable YAML file.

Licensing

Ampere Performance Toolkit provides wrappers and workload definitions around popular benchmark tools. It is designed to be simple to use and to automate everything it can: it instantiates VMs on the cloud provider of your choice, automatically installs benchmarks, and runs the workloads without user interaction.

Due to this level of automation, you will not see license prompts for software installed as part of a benchmark run. You must therefore accept the license of each benchmark individually, and take responsibility for using it, before you use the Ampere Performance Toolkit.

In its current release, these are the benchmarks that are executed and their associated license terms:

APT: Overview, Setup, and Usage

Overview

APT runs on a separate system from the system-under-test (SUT) and sends commands over SSH to the SUT to perform benchmarks. The steps in this guide will help you prepare a new APT runner system.

Test Topology

A minimum of two systems is required for APT.

The simplest configuration would consist of one runner system and one system-under-test (SUT) for single-node tests.

A more involved configuration might consist of one runner system, one SUT, and one or more clients (depending on the workload).

flowchart LR
subgraph clients [" "]
    direction TB
        b(Client1) ~~~c(Client2) ~~~d(...)
end
subgraph server [" "]
    direction TB
        e[(SUT)]
end
a(Runner) -->clients
a(Runner) -->server
clients<-.->server

Prerequisites

For BareMetal / Static VM Tests

  • Passwordless SSH configured
    • Runner -> SUT
    • Runner -> Client(s)
    • See the sketch after this list for an example setup
  • Passwordless sudo granted to user on...
    • SUT
    • Client(s)
    • Required for package installation / builds
  • Firewall disabled between SUT and Client(s)
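A minimal sketch of the passwordless SSH and sudo prerequisites, run from the runner system (the user name apt and the host names are placeholders for your environment):

# Generate a key pair on the runner (skip if one already exists)
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""

# Copy the public key to the SUT and to each client
ssh-copy-id apt@sut.example.com
ssh-copy-id apt@client1.example.com

# Verify that login now works without a password prompt
ssh apt@sut.example.com true

# On the SUT and each client, grant passwordless sudo to the test user
echo "apt ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/apt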

For Cloud-based Tests

  • Given a valid cloud YAML config, APT will automatically create the system(s) and SSH keys required for connection under the hood

Dependencies

APT requires Python 3.11 or later, pip for package management, and a virtual environment for dependencies. Check the current version on your runner system with python3 --version.

If the system does not already have Python 3.11 or later, install it explicitly, e.g. on Fedora 38:

sudo dnf install python3.11

Setup APT

Create a new virtual environment

python3.11 -m venv apt_venv

Activate the virtual environment

source apt_venv/bin/activate

Upgrade pip inside the virtual environment (important)

python3.11 -m pip install --upgrade pip

Clone the Ampere Performance Toolkit (APT) repository

Next, cd into the root of the project directory
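For example (the clone URL is inferred from the repository name; adjust it if you are working from a fork or mirror):

git clone https://github.com/AmpereComputing/AmperePerformanceToolkit.git
cd AmperePerformanceToolkit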

Then, install the requirements (while the venv is active!)

pip install -r requirements.txt
  • There may be a warning during install about "timeout-decorator being installed using legacy 'setup.py install' method", which is safe to ignore

Usage

The 5 Stages of a PerfKitBenchmarker run

flowchart LR
Provision -->Prepare-->Run-->Cleanup-->Teardown

To initiate all phases, simply call APT with the workload and config of your choice. Run from the root of the project directory and be sure the virtual environment is active.

./pkb.py --benchmarks=<benchmark_name> --benchmark_config_file=<path_to_config>

e.g. to run NGINX and wrk with an existing YAML config:

./pkb.py --benchmarks=ampere_nginx_wrk --benchmark_config_file=./ampere/pkb/configs/example_nginx.yml

For more details about setting up a BareMetal run, see the BareMetal Getting Started Guide

For more details about setting up cloud-based runs on OCI, see the OCI Getting Started Guide

YAML Configs

Each YAML config file represents a workload configuration for a particular set of systems and environment.

  • The path to the configuration file in the run command can be relative or absolute
  • The benchmark name passed via --benchmarks must match the name defined in the YAML config (see the sketch below)
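As a rough illustration of that naming rule, here is a hypothetical static-VM config written from the runner's shell. The keys shown follow upstream PerfKitBenchmarker's static VM convention and are an assumption here; treat the example configs under ./ampere/pkb/configs as the authoritative reference:

cat > ./my_nginx.yml <<'EOF'
# Top-level key must match the name passed via --benchmarks
ampere_nginx_wrk:
  vm_groups:
    servers:                      # group names vary per workload (assumed here)
      static_vms:
        - ip_address: 10.0.0.10   # hypothetical SUT address
          user_name: apt          # user with passwordless SSH and sudo
          ssh_private_key: ~/.ssh/id_ed25519
EOF

./pkb.py --benchmarks=ampere_nginx_wrk --benchmark_config_file=./my_nginx.yml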

A few useful PerfKitBenchmarker flags

  • --run_stage_iterations=<n>: Execute the run stage N times in a row
  • --run_stage=<provision,prepare,run,cleanup,teardown>: Run only the listed stages, useful for monitoring/debugging between runs
  • --helpmatch=ampere: Search for every flag implemented by Ampere, with a description of how to use it. Use dot notation to drill down into specific flags you're interested in, e.g. ./pkb.py --helpmatch=ampere.pkb.linux_packages.redis returns all the associated ampere_redis_server flags for running ampere_redis_memtier

Usage example:

  1. Pass --run_stage=provision,prepare
  2. Save the run_uri generated at the end of this first pass
  3. Connect to SUT for debugging/monitoring
  4. Pass --run_stage=run --run_uri=<run_uri> to repeat testing manually N times
  5. Pass --run_stage=cleanup,teardown --run_uri=<run_uri> when ready to finish
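Putting those steps together, a minimal sketch using the NGINX example from above (abc123 stands in for the run_uri printed by the first pass):

# 1. Provision systems and prepare the workload; note the run_uri printed at the end
./pkb.py --benchmarks=ampere_nginx_wrk --benchmark_config_file=./ampere/pkb/configs/example_nginx.yml --run_stage=provision,prepare

# 2. Repeat the run stage as many times as needed against the same systems
./pkb.py --benchmarks=ampere_nginx_wrk --benchmark_config_file=./ampere/pkb/configs/example_nginx.yml --run_stage=run --run_uri=abc123

# 3. Clean up and tear down when finished
./pkb.py --benchmarks=ampere_nginx_wrk --benchmark_config_file=./ampere/pkb/configs/example_nginx.yml --run_stage=cleanup,teardown --run_uri=abc123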

To deactivate the virtual environment when finished:

deactivate

Results

All test results, logs, Ampere System Dump results, etc. can be found in

/tmp/perfkitbenchmarker/runs/<run_uri>

This directory (with the correct run_uri) is printed to the console at the end of each test run.
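If you have lost track of the run_uri, a generic shell one-liner (not an APT feature) shows the most recent run directory:

ls -t /tmp/perfkitbenchmarker/runs | head -1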
