# Capturing benchmark test data
We're trying a new experiment: running renderer tests to prove out the new GL-based renderer. These are very much a work in progress, which we'll talk about more when they're fully baked.
This wiki doc explains how to capture the 'canned' .json data that is fed into the benchmarker code.
- Navigate Rapid to a place or area where you want to test the renderer.
- Refresh the browser to ensure that you have the minimal set of graph information and renderable entities needed for the test.
- Wait for the renderer to load everything.
- Set a breakpoint in the `modules/pixi/PixiLayerOsm.js` file, on the line that starts with `if (this._saveCannedData ...`.
- Pan the map a tad; this forces a re-render, and your breakpoint should be hit.
- Modify `this._saveCannedData` to `true`, then hit Continue.
- A file named something like `[zoom]_[lat]_[lng]_canned_osm_data.json` should get saved to disk.
This file contains all the renderable entities that we're displaying on the screen, plus every OSM entity in the deep graph that we have in memory. The renderable entities are a subset of the graph; the graph contains many more entities that might be offscreen, invisible (like relations), and so on.
We also save some metadata about the scene: the zoom, lat, lng, window width and height, and the projection in use at the time the snapshot was taken.
This is all saved in .json format; our benchmarking code will load and rehydrate all these entities later.
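For reference, a curated file might look roughly like the sketch below. This is a hypothetical shape: the exact field names the capture code writes (and the projection representation) are assumptions, not the actual Rapid output.

```javascript
// Hypothetical sketch of a curated canned-data file; the real field names
// written by Rapid's capture code may differ.
var tokyo_16 = {
  placename: 'Tokyo',     // added by hand during curation
  zoom: 16,
  lat: 35.6762,
  lng: 139.6503,
  width: 1280,            // window size at snapshot time
  height: 720,
  projection: {},         // projection state at snapshot time
  entities: [],           // every OSM entity in the deep graph
  renderData: []          // the renderable subset shown on screen
};
```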
Now we need to curate the data a bit for test consumption. For each file you save, you will want to do three things:
- Rename the file with a placename at the front of the filename. For example, `16_35.6762_139.6503_canned_osm_data.json` would become `tokyo_16_35.6762_139.6503_canned_osm_data.json`.
- Add a data member like `placename: 'Tokyo'` inside the file. This will allow us to have nice printouts in the benchmarker later!
- Change the file extension from `.json` to `.js`, and assign the entire blob to a variable called `${placename}_${zoom}`. For this file, that means sticking `var tokyo_16 =` at the very beginning of the file, assigning the captured JSON blob to that variable.
- Move each canned data file into the Rapid folder `test/benchmark`.
- Ensure that each data file is included by the `bench.html` benchmarking page. This page should load each canned data script starting at the line commented with `put all canned data here`. For this example, add the following directive to the page:
```html
<script src="tokyo_16_35.6762_139.6503_canned_osm_data.js"></script>
```
This will load the canned data into the global variable we specified, so that the benchmark test can use it for the render step.
The amount of data we load at low zooms, such as zoom 15 over dense urban areas, can run to hundreds of megabytes. Checking a file like that into GitHub in raw form doesn't work, so we have added a zip/unzip step both in the npm scripts and as part of the benchmark.yml workflow in GitHub Actions.
Right now, there is a Tokyo zip file with three different zooms' worth of data that the benchmark tests use. The zoom 15 data unpacks to well over 100 MB, which violates GitHub's file size limit for a committed file!
Just know that the `npm run benchmark` command will automagically unzip and clean up the raw .json files each time you run it.
The benchmarking code (in the form of a `bench.html` file) picks up all the Tokyo data from each .js file by using `<script src>` tags, assuming that the files have been unpacked. Each of those files loads its data into a global variable, which `test/benchmark/tests/bench.js` then uses in each test suite via calls such as `setup(tokyo_15)`, which unpacks the global variable into zoom, projection, graph data, and renderable entity data.
The benchmark is then captured and emitted to the console.
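Conceptually, that `setup(...)` call unpacks one of the canned-data globals. The sketch below is hypothetical; the real helper in `bench.js` may be structured differently, and the field names are assumptions.

```javascript
// Hypothetical sketch of a setup()-style helper; field names are assumed.
function setup(cannedData) {
  return {
    zoom: cannedData.zoom,
    projection: cannedData.projection,
    graphEntities: cannedData.entities,   // full deep graph
    renderables: cannedData.renderData    // on-screen subset fed to the renderer
  };
}

// Usage: the <script> tag for the canned file has already defined the global.
var tokyo_15 = { zoom: 15, projection: {}, entities: [], renderData: [] }; // stand-in
const scene = setup(tokyo_15);
console.log(scene.zoom);  // 15
```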