Telemetry is the performance testing framework used by Chrome. It allows you to perform arbitrary actions on a set of web pages (or any Android application!) and report metrics about them. The framework abstracts:
- Launching a browser with arbitrary flags on any platform.
- Opening a tab and navigating to the page under test.
- Launching an Android application with intents through ADB.
- Fetching data via the Inspector timeline and traces.
- Using Web Page Replay to cache real-world websites so they don’t change when used in benchmarks.
Run

```
catapult/telemetry/bin/run_tests --help
```

and see the usage info at the top.
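For example (assuming `run_tests` accepts test-name substrings as positional filters; the filter shown is illustrative), you can run a subset of the unit tests with:

```
catapult/telemetry/bin/run_tests browser_unittest
```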
Telemetry's design principles:
- Write one performance test that runs on major platforms - Windows, Mac, Linux, Chrome OS, and Android for both Chrome and ContentShell.
- Run on browser binaries, without a full Chromium checkout, and without having to build the browser yourself.
- Use Web Page Replay to get repeatable test results.
- Clean architecture for writing benchmarks that keeps measurements and use cases separate.
Telemetry is designed for measuring performance rather than checking correctness. If you want to check for correctness, browser tests are your friend.
If you are a Chromium developer looking to add a new Telemetry benchmark to `src/tools/perf/`, please make sure to read our Benchmark Policy first.
Telemetry provides two major functionality groups: those that provide test automation, and those that provide the capability to collect data.
The test automation facilities of Telemetry provide Python wrappers for a number of different system concepts.
- Platforms use a variety of libraries & tools to abstract away the OS specific logic.
- Browser wraps Chrome's DevTools Remote Debugging Protocol to perform actions and extract information from the browser.
- Android App is a Python wrapper around `adb shell`.
The Telemetry framework lives in `src/third_party/catapult/telemetry/` and performance benchmarks that use Telemetry live in `src/tools/perf/`.
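To make this concrete, here is a minimal sketch of a story (the class name, URL, and actions are illustrative, not taken from this README). The `action_runner` passed to `RunPageInteractions` drives the browser through the DevTools protocol described above:

```python
from telemetry.page import page


class ExampleStory(page.Page):
  """Illustrative story: load a page, then interact with it."""

  def __init__(self, page_set):
    super(ExampleStory, self).__init__(
        url='https://example.com',  # illustrative URL
        page_set=page_set,
        name='example')

  def RunPageInteractions(self, action_runner):
    # Each action is issued to the browser over the DevTools protocol.
    action_runner.ScrollPage()
    action_runner.ExecuteJavaScript('console.timeStamp("interactions done");')
```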
Telemetry offers a framework for collecting metrics that quantify the performance of automated actions in terms of benchmarks, measurements, and story sets.
- A benchmark combines a measurement together with a story set, and optionally a set of browser options.
  - We strongly discourage benchmark authors from using command-line flags to specify the behavior of benchmarks, since benchmarks should be cross-platform.
  - Benchmarks are discovered and run by the benchmark runner, which is wrapped by scripts like `run_benchmark` in `tools/perf`.
- A measurement (called `StoryTest` in the code) is responsible for setting up and tearing down the testing platform, and for collecting metrics that quantify the application scenario under test.
  - Measurements need to work with all story sets, to provide consistency and prevent benchmark rot.
  - You probably don't need to override `StoryTest` (see "Timeline Based Measurement" below). If you think you do, please talk to us.
- A story set is a set of stories together with a shared state that describes application-level configuration options.
- A story is an application scenario and a set of actions to run in that scenario. In the typical Chromium use case, this will be a web page together with actions like scrolling, clicking, or executing JavaScript.
- There are two major ways to collect data (often referred to as measurements or metrics) about the stories; a minimal benchmark sketch follows this list.
  - Ad hoc measurements: These are measurements that do not require traces, for example when a metric is calculated directly in the test page in JavaScript and we simply want to extract and report this number. Currently `PressBenchmark` and the associated `PressStory` subclasses are examples that use ad hoc measurements. (In reality, `PressBenchmark` uses the `DualMetricMeasurement` StoryTest, where you can have both ad hoc and timeline based metrics.)
  - Timeline Based Measurements: These are measurements that require recording a timeline of events, for example a Chrome trace. Telemetry collects traces and other artifacts as it interacts with the page and stores them in the form of test results; the Results Processor then computes metrics from these test results. New metrics should generally be timeline based measurements: computing metrics on the trace makes it easy to compute many different metrics from the same run, and the collected trace is useful for debugging metric values.
    - The currently supported programming model is known as Timeline Based Measurements v2 (TBMv2). This is the recommended method of adding metrics to Telemetry.
    - TBMv3, a new version of Timeline Based Measurement based on Perfetto, is currently under development. It is not ready for general use yet, but there are active experiments on the FYI bots. Ideally TBMv3 will eventually replace TBMv2, but this will only happen once there is an easy migration path from current TBMv2 metrics to TBMv3.
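As a rough sketch of how these pieces fit together, the hypothetical benchmark below combines a story set with TBMv2 options. The class names and the metric name are illustrative, not from this README (`ExampleStory` is the story sketched earlier on this page):

```python
from telemetry import benchmark
from telemetry import story
from telemetry.web_perf import timeline_based_measurement


class ExampleBenchmark(benchmark.Benchmark):
  """Illustrative benchmark: a story set plus a timeline based measurement."""

  @classmethod
  def Name(cls):
    return 'example_benchmark'  # hypothetical benchmark name

  def CreateStorySet(self, options):
    story_set = story.StorySet()
    # ExampleStory is the illustrative story sketched above.
    story_set.AddStory(ExampleStory(story_set))
    return story_set

  def CreateCoreTimelineBasedMeasurementOptions(self):
    tbm_options = timeline_based_measurement.Options()
    # 'sampleMetric' stands in for a real TBMv2 metric name.
    tbm_options.SetTimelineBasedMetrics(['sampleMetric'])
    return tbm_options
```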
Next steps:

- Run Telemetry benchmarks locally
- Record a story set with Web Page Replay
- Feature guidelines
- Profile generation
- Telemetry unittests
If you have questions, please email telemetry@chromium.org.
You can keep up with Telemetry related discussions by joining the telemetry group.
If you get an error when you try to use recorded story sets: the recordings are not included in the Chromium source tree. If you are a Google partner, run `gsutil config` to authenticate, then try running the test again. If you don't have `gsutil` installed on your machine, you can find it in `build/third_party/gsutil/gsutil`.
If you are not a Google partner, you can run on live sites with `--use-live-sites` or record your own story set archive.
If you are having problems with the forwarder: your forwarder binary may be outdated. If you have built the forwarder in `src/out`, that one will be used; if there isn't anything there, Telemetry will default to downloading a pre-built binary. Try re-building the forwarder, or alternatively wiping the contents of `src/out/` and running `run_benchmark`, which should download the latest binary.
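One way to do the latter (paths assume a standard Chromium checkout; the benchmark name is a placeholder):

```
rm -rf src/out/
src/tools/perf/run_benchmark <benchmark_name>
```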
If Telemetry is hanging on Mac, make sure that your keychain is correctly configured.