Knative performance tests are tests geared towards producing useful performance metrics of the Knative system. As such, they can take a blackbox point of view of the system and exercise it just as an end user would, or they can take a more whitebox approach to narrow down the components under test.
Knative has a load generator library that can be used to generate load. The load generator uses Fortio to generate load based on the generator options set in the test, and returns the generator results, which include all the data points for the load generation request as well as any calculated latencies.
For example:

opts := loadgenerator.GeneratorOptions{Duration: 1 * time.Minute, NumThreads: 1}
resp, err := opts.RunLoadTest(false /* resolvableDomain */)
if err != nil {
	t.Fatalf("Generating traffic via fortio failed: %v", err)
}
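The returned generator results expose Fortio's duration histogram, which makes it easy to pull out client-side latency percentiles. A minimal sketch, assuming the results embed Fortio's result types under a Result field (verify the exact shape against the loadgenerator package):

// Log each latency percentile reported by Fortio.
// Fortio reports durations in seconds; convert to milliseconds here.
for _, p := range resp.Result.DurationHistogram.Percentiles {
	t.Logf("p%d latency: %.3f ms", int(p.Percentile), p.Value*1000)
}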
Knative provides a Prometheus wrapper with methods to wait for Prometheus to scrape the metrics once the test is finished. It also provides a way to query the Prometheus server for any server-side metrics and then display them on Testgrid.
For example:

promAPI, err := prometheus.PromAPI()
if err != nil {
	logger.Error("Cannot setup prometheus API")
}
query := fmt.Sprintf("%s{namespace_name=%q, configuration_name=%q, revision_name=%q}", metric, test.ServingNamespace, names.Config, names.Revision)
val, err := prometheus.RunQuery(context.Background(), logger, promAPI, query)
if err != nil {
	logger.Infof("Error querying metric %s: %v", metric, err)
}
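Before running a query like the one above, give Prometheus a chance to scrape the freshly emitted metrics. A sketch using the wrapper's sync helper (the AllowPrometheusSync name is an assumption; check the prometheus wrapper for the exact method):

// Block until Prometheus has had time to scrape the latest metrics.
prometheus.AllowPrometheusSync(logger)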
Once the test is done, each test can define which metrics it wants captured and shown on Testgrid. For each metric, create a Testgrid test case by using the CreatePerfTestCase() method.
For example:

testName := "TestPerformanceLatency"
var tc []testgrid.TestCase
for name, val := range metrics {
	tc = append(tc, CreatePerfTestCase(val, name, testName))
}
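The metrics map above can come from anywhere; for instance, it could be built from the load generator's latency percentiles. A sketch, again assuming Fortio's result field names:

// Build a metrics map from Fortio's latency percentiles,
// converting seconds to milliseconds.
metrics := make(map[string]float32)
for _, p := range resp.Result.DurationHistogram.Percentiles {
	metrics[fmt.Sprintf("p%d(ms)", int(p.Percentile))] = float32(p.Value * 1000)
}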
Once we have created all the test cases for the metrics, we can use the CreateTestgridXML() method to create the output XML that will be used as input to Testgrid. If the test is run locally, it will create a file called junit_<testName>.xml and save it locally under knative/serving/test/performance/artifacts/ (note that this will create the artifacts directory if it is not present, or use the existing one if it is). When the test is run with Prow, this file will be stored with the other artifacts generated by Prow, like the build log.
For example:

if err = testgrid.CreateTestgridXML(tc, testName); err != nil {
	t.Fatalf("Cannot create output xml: %v", err)
}
All the metrics are appended to the junit_knative.xml file and can be seen on Testgrid.
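Putting the pieces together, a complete performance test might look like the following sketch. The import paths, the result field names, and the same-package CreatePerfTestCase helper are assumptions for illustration; the flow mirrors the snippets above:

package performance

import (
	"fmt"
	"testing"
	"time"

	// Import paths are assumptions; adjust to the actual package locations.
	"github.com/knative/serving/test/performance/loadgenerator"
	"github.com/knative/test-infra/shared/testgrid"
)

func TestPerformanceLatency(t *testing.T) {
	testName := "TestPerformanceLatency"

	// Generate load against the service for one minute on a single thread.
	opts := loadgenerator.GeneratorOptions{Duration: 1 * time.Minute, NumThreads: 1}
	resp, err := opts.RunLoadTest(false /* resolvableDomain */)
	if err != nil {
		t.Fatalf("Generating traffic via fortio failed: %v", err)
	}

	// Turn each client-side latency percentile into a Testgrid test case.
	var tc []testgrid.TestCase
	for _, p := range resp.Result.DurationHistogram.Percentiles {
		name := fmt.Sprintf("p%d(ms)", int(p.Percentile))
		tc = append(tc, CreatePerfTestCase(float32(p.Value*1000), name, testName))
	}

	// Write the junit XML that Testgrid consumes.
	if err = testgrid.CreateTestgridXML(tc, testName); err != nil {
		t.Fatalf("Cannot create output xml: %v", err)
	}
}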
Performance tests are simple Go tests that use Fortio to generate load and Testgrid to track the metrics on a continuous basis. For whitebox tests, the performance tests also bring up the Prometheus service along with knative-serving, so you can query Prometheus for server-side metrics alongside the load generator metrics.
Knative uses Testgrid to show all its metrics. Performance metrics are shown on a separate tab for each repo, such as serving. Testgrid picks up the junit_knative.xml file generated by the tests and automatically displays each metric and its value in the grid on the performance tab.