This is a set of benchmark tests that track js-ipfs performance in a Grafana dashboard.
The IPFS team needs a historical view of various performance metrics around js-ipfs and how it compares to the reference implementation written in Go. This project implements benchmark tests for js-ipfs and publishes the results in a dashboard. The artifacts are also made available on the IPFS network. Over time the historical view will show how js-ipfs is (hopefully) approaching the Go implementation and which areas need improvement.
The goal is to provide developers and the community with immediate feedback and long-term tracking of performance, with an extremely low barrier to entry. The CI system that integrates code changes will trigger benchmark runs, as well as a scheduled run every night. Each run provides a URL where the results are visible.
This project also makes it possible to run the tests locally against a development version of js-ipfs. Developers can then examine individual output files before submitting code to the community.
- The dashboard documentation
- Architecture of the js-ipfs benchmark system
- Reference on how this repository is organized
- Using the Runner to manage benchmark runs remotely, which includes an API available here
- Description of tests
- Convenience scripts for the docker-compose deployment
- Overview video hosted on the IPFS network.
- Introduction to Clinic.js in the context of IPFS (recording)
The dashboard is available at https://benchmarks.ipfs.team and can be viewed without a user account.
A Continuous Integration server can trigger benchmark runs using the endpoint exposed at https://benchmarks.ipfs.team/runner. A commit from the js-ipfs repository can be supplied to run the benchmarks against. An API key is also required to trigger a run; please check the Runner docs on how to configure an API key for the runner. An example invocation using curl is provided below.
> curl -XPOST -d '{"commit":"adfy3hk"}' \
-H "Content-Type: application/json" \
-H "x-ipfs-benchmarks-api-key: <api-key>" \
https://benchmarks.ipfs.team/runner
The response provides links to the output produced by the benchmark tests:
TBD
For more details about the dashboard see the Grafana doc.
Clone the benchmark tests and install the dependencies:
> git clone https://github.com/ipfs/benchmarks.git
> cd benchmarks/runner
> npm install
> cd ../tests
> npm install
The test files are defined in fixtures. Generate them with:
> npm run generateFiles
Here is the file object for a single test:
{ size: KB, name: 'OneKBFile' }
To add multiple test files, add a count property:
{ size: KB, name: 'OneHundredKBFile', count: 100 }
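For illustration, these definitions might be collected into a fixtures list along the following lines; the KB constant and the module layout here are assumptions, not the actual fixtures file:

// Illustrative sketch of a fixtures list (not the actual file)
const KB = 1024

module.exports = [
  { size: KB, name: 'OneKBFile' },
  { size: KB, name: 'OneHundredKBFile', count: 100 }
]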
From the benchmarks/tests directory:
> node local-add
> node local-extract
> node local-transfer
Run all benchmarks:
> npm run benchmark
Create a pre-generated key:
> node util/create-privateKey
Use the env variable FILESET to run tests against a specific set of files only. The options for FILESET are defined in the config.
> FILESET="One64MBFile" node local-add
Use the env variable VERIFYOFF=true to skip the pre-generation of test files.
> VERIFYOFF=true node local-add
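The two variables are read independently, so they can typically be combined:
> FILESET="One64MBFile" VERIFYOFF=true node local-add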
Inside the benchmarks/tests dir is a script to pull down the master branch of js-ipfs and install it:
> ./getIpfs.sh ../
The directory structure now looks like this:
├── benchmarks
│   ├── js-ipfs
│   └── tests
Run tests against the branch:
> cd benchmarks/tests
> STAGE=local REMOTE=true node local-add
Below is a list of optional flags used by the tests to run a specific strategy or transport module in Libp2p; an example invocation follows the list.
- -s DAG strategy (balanced | trickle)
- -t Transport (tcp | ws)
- -m Stream muxer (mplex | spdy)
- -e Connection encryption (secio)
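Assuming the flags are passed directly to a test script, a hypothetical invocation could look like:
> node local-transfer -t ws -s trickle -m mplex -e secio   # hypothetical flag combination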
See README.
Results will be written to the out directory under benchmarks/tests. Each result contains the following fields (an illustrative example record follows the list):
- name: Name of the test
- warmup: Flag for whether we warm up the db
- description: Description of the benchmark
- fileSet: Set of files to be used in a test
- date: Date of the benchmark
- file: Name of the file used in the benchmark
- meta.project: Repo that was benchmarked
- meta.commit: Commit used to trigger the benchmark
- meta.version: Version of js-ipfs
- duration.s: The number of seconds the benchmark took
- duration.ms: The number of milliseconds the benchmark took
- cpu: Information about the CPU the benchmark was run on
- loadAvg: The load average of the machine
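For illustration only, a result record with these fields might look roughly like this; all values below are made up and the exact value types may differ:

// Illustrative example only: these values are made up
{
  name: 'local-add',
  warmup: 'on',
  description: 'Add a file to a local repo',
  fileSet: 'One64MBFile',
  date: '2019-03-01T12:00:00.000Z',
  file: 'One64MBFile',
  meta: { project: 'js-ipfs', commit: 'adfy3hk', version: '0.34.0' },
  duration: { s: 12, ms: 12345 },
  cpu: 'Intel(R) Core(TM) i7',
  loadAvg: [0.5, 0.4, 0.3]
}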
Copyright (c) Protocol Labs, Inc. under the MIT license. See LICENSE file for details.