Allow running performance test server and simulations separately #3514

Merged · 4 commits · Feb 12, 2024
9 changes: 7 additions & 2 deletions build.sbt
@@ -499,6 +499,8 @@ lazy val tests: ProjectMatrix = (projectMatrix in file("tests"))
)
.dependsOn(core, files, circeJson, cats)

lazy val flightRecordingJavaOpts = "-XX:StartFlightRecording=filename=recording.jfr,dumponexit=true,duration=120s"

lazy val perfTests: ProjectMatrix = (projectMatrix in file("perf-tests"))
.enablePlugins(GatlingPlugin)
.settings(commonJvmSettings)
@@ -513,7 +515,8 @@ lazy val perfTests: ProjectMatrix = (projectMatrix in file("perf-tests"))
"com.fasterxml.jackson.module" %% "jackson-module-scala" % "2.16.1",
"nl.grons" %% "metrics4-scala" % Versions.metrics4Scala % Test,
"com.lihaoyi" %% "scalatags" % Versions.scalaTags % Test,
"com.github.scopt" %% "scopt" % "4.1.0",
// Needs to match version used by Gatling
"com.github.scopt" %% "scopt" % "3.7.1",
"io.github.classgraph" % "classgraph" % "4.8.165" % Test,
"org.http4s" %% "http4s-core" % Versions.http4s,
"org.http4s" %% "http4s-dsl" % Versions.http4s,
@@ -526,7 +529,9 @@ lazy val perfTests: ProjectMatrix = (projectMatrix in file("perf-tests"))
.settings(Gatling / scalaSource := sourceDirectory.value / "test" / "scala")
.settings(
fork := true,
-connectInput := true
+connectInput := true,
+Compile / run / javaOptions += flightRecordingJavaOpts,
+Test / run / javaOptions -= flightRecordingJavaOpts
)
.jvmPlatform(scalaVersions = List(scala2_13))
.dependsOn(core, pekkoHttpServer, http4sServer, nettyServer, nettyServerCats, playServer, vertxServer, vertxServerCats)
23 changes: 20 additions & 3 deletions perf-tests/README.md
@@ -3,13 +3,14 @@
Performance tests are executed by running `PerfTestSuiteRunner`, which is a standard "Main" Scala application, configured by command line parameters. It executes a sequence of tests, where
each test consists of:

-1. Starting a HTTP server (Like Tapir-based Pekko, Vartx, http4s, or a "vanilla", tapirless one)
+1. Starting an HTTP server if specified (like Tapir-based Pekko, Vert.x, http4s, or a "vanilla", tapirless one)
2. Running a simulation in warm-up mode (5 seconds, 3 concurrent users)
3. Running a simulation with user-defined duration and concurrent user count
4. Closing the server
5. Reading Gatling's simulation.log and building simulation results

The sequence is repeated for the full set of servers multiplied by simulations. Afterwards, all individual simulation results are aggregated into a single report.
If no test servers are specified, only the simulations are run, assuming a server has been started externally (see example 4 below).
Command parameters can be viewed by running:

```
perfTests/Test/runMain sttp.tapir.perf.PerfTestSuiteRunner
```

@@ -27,6 +28,15 @@
which displays help similar to:

```
[error] -g, --gatling-reports  Generate Gatling reports for individual sims, may significantly affect total time (disabled by default)
```

If you want to run a test server separately from simulations, use a separate sbt session and start it using `ServerRunner`:

```
perfTests/runMain sttp.tapir.perf.apis.ServerRunner http4s.TapirMulti
```

This is useful when profiling: `perfTests/runMain` runs in a forked JVM, isolated from the JVM that runs Gatling, and configured with additional options like `-XX:StartFlightRecording=filename=recording.jfr,...`.
After the simulations, you can open `recording.jfr` in Java Mission Control to analyze performance metrics like heap and CPU usage.
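
Alternatively, a recording can be inspected from the command line with the JDK's `jfr` tool (shipped with JDK 12 and newer; the event names below are standard JDK events, not anything defined by this project):

```
jfr summary recording.jfr
jfr print --events jdk.CPULoad,jdk.GCHeapSummary recording.jfr
```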

## Examples

1. Run all sims on all pekko-http servers with other options set to default:
@@ -36,13 +46,20 @@
```
perfTests/Test/runMain sttp.tapir.perf.PerfTestSuiteRunner -s pekko.* -m *
```

2. Run all sims on http4s servers, with each simulation running for 5 seconds:
```
-perfTests/Test/runMain sttp.tapir.perf.PerfTestSuiteRunner -s http4s.Tapir,http4s.TapirMulti,http4s.Vanilla,http4s.VanillaMulti -s * -d 5
+perfTests/Test/runMain sttp.tapir.perf.PerfTestSuiteRunner -s http4s.Tapir,http4s.TapirMulti,http4s.Vanilla,http4s.VanillaMulti -m * -d 5
```

3. Run some simulations on some servers, with 3 concurrent users instead of the default 1, each simulation running for 15 seconds,
and with Gatling report generation enabled:
```
-perfTests/Test/runMain sttp.tapir.perf.PerfTestSuiteRunner -s http4s.Tapir,netty.future.Tapir,play.Tapir -s PostLongBytes,PostFile -d 15 -u 3 -g
+perfTests/Test/runMain sttp.tapir.perf.PerfTestSuiteRunner -s http4s.Tapir,netty.future.Tapir,play.Tapir -m PostLongBytes,PostFile -d 15 -u 3 -g
```

4. Run a netty-cats server with profiling, and then run the PostBytes and PostLongBytes simulations in a separate sbt session, for 25 seconds:
```
perfTests/runMain sttp.tapir.perf.apis.ServerRunner netty.cats.TapirMulti
// in a separate sbt session:
perfTests/Test/runMain sttp.tapir.perf.PerfTestSuiteRunner -m PostBytes,PostLongBytes -d 25
```

## Reports
20 changes: 20 additions & 0 deletions perf-tests/src/main/scala/sttp/tapir/perf/apis/ServerName.scala
@@ -0,0 +1,20 @@
package sttp.tapir.perf.apis

import sttp.tapir.perf.Common._

sealed trait ServerName {
def shortName: String
def fullName: String
}
case class KnownServerName(shortName: String, fullName: String) extends ServerName

/** Used when running a suite without specifying servers and assuming that a server has been started externally. */
case object ExternalServerName extends ServerName {
override def shortName: String = "External"
override def fullName: String = "External"
}

object ServerName {
def fromShort(shortName: String): ServerName =
KnownServerName(shortName, s"${rootPackage}.${shortName}Server")
}
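
As a usage sketch (assuming `rootPackage` from `Common` is `sttp.tapir.perf`, consistent with the fully-qualified names shown elsewhere in this PR), `fromShort` maps a short name to the fully-qualified server object name:

```
import sttp.tapir.perf.apis.ServerName

val name = ServerName.fromShort("http4s.TapirMulti")
// name.shortName == "http4s.TapirMulti"
// name.fullName  == "sttp.tapir.perf.http4s.TapirMultiServer"
```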
40 changes: 38 additions & 2 deletions perf-tests/src/main/scala/sttp/tapir/perf/apis/ServerRunner.scala
@@ -1,11 +1,47 @@
package sttp.tapir.perf.apis

-import cats.effect.IO
+import cats.effect.{ExitCode, IO, IOApp}
import sttp.tapir.perf.Common._

import scala.reflect.runtime.universe

trait ServerRunner {
def start: IO[ServerRunner.KillSwitch]
}

-object ServerRunner {
+/** Can be used as a Main object to run a single server using its short name. Running perfTests/runMain
+  * [[sttp.tapir.perf.apis.ServerRunner]] will load special javaOptions configured in build.sbt, enabling JFR metrics recording. This is
+  * useful when you want to guarantee that the server runs in a different JVM than the test runner, so that memory and CPU metrics are
+  * recorded only in the scope of the server JVM.
+  */
+object ServerRunner extends IOApp {
type KillSwitch = IO[Unit]
val NoopKillSwitch = IO.pure(IO.unit)
private val runtimeMirror = universe.runtimeMirror(getClass.getClassLoader)

def run(args: List[String]): IO[ExitCode] = {
val shortServerName = args.head
for {
killSwitch <- startServerByTypeName(ServerName.fromShort(shortServerName))
_ <- IO.never.guarantee(killSwitch)
} yield ExitCode.Success
}

def startServerByTypeName(serverName: ServerName): IO[ServerRunner.KillSwitch] = {
serverName match {
case ExternalServerName => NoopKillSwitch
case _ =>
try {
val moduleSymbol = runtimeMirror.staticModule(serverName.fullName)
val moduleMirror = runtimeMirror.reflectModule(moduleSymbol)
val instance: ServerRunner = moduleMirror.instance.asInstanceOf[ServerRunner]
instance.start
} catch {
case e: Throwable =>
IO.raiseError(
new IllegalArgumentException(s"ERROR! Could not find object ${serverName.fullName} or it doesn't extend ServerRunner", e)
)
}
}
}
}
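
For reference, a minimal sketch of what a loadable server object must look like (hypothetical package and object name; the real server objects live in this repo): a top-level object named `<rootPackage>.<shortName>Server` extending `ServerRunner`, whose `start` allocates the server and returns the kill switch that shuts it down.

```
package sttp.tapir.perf.example

import cats.effect.IO
import sttp.tapir.perf.apis.{ServerName, ServerRunner}

// Hypothetical server: ServerRunner.startServerByTypeName(ServerName.fromShort("example.Tapir"))
// would reflectively resolve "sttp.tapir.perf.example.TapirServer" and invoke start on it.
object TapirServer extends ServerRunner {
  // start allocates the server and returns a "kill switch": an IO that shuts it down
  def start: IO[ServerRunner.KillSwitch] =
    IO.println("server started").as(IO.println("server stopped"))
}
```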
GatlingLogProcessor.scala
@@ -5,6 +5,7 @@ import cats.syntax.all._
import com.codahale.metrics.{Histogram, MetricRegistry}
import fs2.io.file.{Files => Fs2Files}
import fs2.text
import sttp.tapir.perf.apis.ServerName

import java.nio.file.{Files, Path, Paths}
import java.util.stream.Collectors
@@ -20,7 +21,7 @@ object GatlingLogProcessor {

/** Searches for the last modified simulation.log in all simulation logs and calculates results.
*/
-def processLast(simulationName: String, serverName: String): IO[GatlingSimulationResult] = {
+def processLast(simulationName: String, serverName: ServerName): IO[GatlingSimulationResult] = {
for {
lastLogPath <- IO.fromTry(findLastLogFile)
_ <- IO.println(s"Processing results from $lastLogPath")
@@ -48,7 +49,7 @@
val throughput = (state.histogram.getCount().toDouble / state.totalDurationMs) * 1000
GatlingSimulationResult(
simulationName,
-serverName,
+serverName.shortName,
state.totalDurationMs.millis,
meanReqsPerSec = throughput.toLong,
latencyP99 = snapshot.get99thPercentile,
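
As a quick sanity check of the throughput formula above: if the histogram recorded 50000 requests over a `totalDurationMs` of 10000, the computed mean throughput is (50000 / 10000) * 1000 = 5000 requests per second.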
50 changes: 28 additions & 22 deletions perf-tests/src/test/scala/sttp/tapir/perf/PerfTestSuiteParams.scala
@@ -1,10 +1,12 @@
package sttp.tapir.perf

+import scopt.OptionParser
+import sttp.tapir.perf.apis.{ExternalServerName, ServerName}

import scala.concurrent.duration._
import scala.util.{Failure, Success}

import Common._
-import scopt.OParser
-import scala.util.Failure
-import scala.util.Success

/** Parameters to customize a suite of performance tests. */
case class PerfTestSuiteParams(
@@ -14,18 +16,17 @@ case class PerfTestSuiteParams(
durationSeconds: Int = PerfTestSuiteParams.defaultDurationSeconds,
buildGatlingReports: Boolean = false
) {
-/**
- * Handles server names passed as groups like netty.*, pekko.*, etc. by expanding them into lists of actual server names.
- * Similarly, handles '*' as a short simulation name, expanding it to a list of all simulations.
+/** Handles server names passed as groups like netty.*, pekko.*, etc. by expanding them into lists of actual server names. Similarly,
+  * handles '*' as a short simulation name, expanding it to a list of all simulations.
* @return
*/
def adjustWildcards: PerfTestSuiteParams = {
val withAdjustedServer: PerfTestSuiteParams = {
val expandedShortServerNames = shortServerNames.flatMap { shortServerName =>
if (shortServerName.contains("*")) {
TypeScanner.allServers.filter(_.startsWith(shortServerName.stripSuffix("*")))
-}
-else List(shortServerName)
+} else List(shortServerName)
}
copy(shortServerNames = expandedShortServerNames)
}
Expand All @@ -41,9 +42,9 @@ case class PerfTestSuiteParams(

def minTotalDuration: FiniteDuration = ((duration + WarmupDuration) * totalTests.toLong).toMinutes.minutes

-/** Returns pairs of (fullServerName, shortServerName), for example: (sttp.tapir.perf.pekko.TapirServer, pekko.Tapir)
+/** Returns the list of server names, or the external server if none were specified
  */
-def serverNames: List[(String, String)] = shortServerNames.map(s => s"${rootPackage}.${s}Server").zip(shortServerNames).distinct
+def serverNames: List[ServerName] = if (shortServerNames.nonEmpty) shortServerNames.map(ServerName.fromShort).distinct else List(ExternalServerName)

/** Returns pairs of (fullSimulationName, shortSimulationName), for example: (sttp.tapir.perf.SimpleGetSimulation, SimpleGet)
*/
Expand All @@ -54,31 +55,36 @@ case class PerfTestSuiteParams(
object PerfTestSuiteParams {
val defaultUserCount = 1
val defaultDurationSeconds = 10
-val builder = OParser.builder[PerfTestSuiteParams]
-import builder._
-val argParser = OParser.sequence(
-programName("perf"),
+val argParser = new OptionParser[PerfTestSuiteParams]("perf") {
opt[Seq[String]]('s', "server")
-.required()
.action((x, c) => c.copy(shortServerNames = x.toList))
.text(s"Comma-separated list of short server names, or groups like 'netty.*', 'pekko.*', etc. Available servers: ${TypeScanner.allServers.mkString(", ")}"),
.text(
s"Comma-separated list of short server names, or groups like 'netty.*', 'pekko.*'. If empty, only simulations will be run, assuming already running server. Available servers: ${TypeScanner.allServers
.mkString(", ")}"
): Unit

opt[Seq[String]]('m', "sim")
.required()
.action((x, c) => c.copy(shortSimulationNames = x.toList))
.text(s"Comma-separated list of short simulation names, or '*' for all. Available simulations: ${TypeScanner.allSimulations.mkString(", ")}"),
.text(
s"Comma-separated list of short simulation names, or '*' for all. Available simulations: ${TypeScanner.allSimulations.mkString(", ")}"
): Unit

opt[Int]('u', "users")
.action((x, c) => c.copy(users = x))
.text(s"Number of concurrent users, default is $defaultUserCount"),
.text(s"Number of concurrent users, default is $defaultUserCount"): Unit

opt[Int]('d', "duration")
.action((x, c) => c.copy(durationSeconds = x))
.text(s"Single simulation duration in seconds, default is $defaultDurationSeconds"),
.text(s"Single simulation duration in seconds, default is $defaultDurationSeconds"): Unit

opt[Unit]('g', "gatling-reports")
.action((_, c) => c.copy(buildGatlingReports = true))
.text("Generate Gatling reports for individuals sims, may significantly affect total time (disabled by default)")
)
.text("Generate Gatling reports for individuals sims, may significantly affect total time (disabled by default)"): Unit
}

def parse(args: List[String]): PerfTestSuiteParams = {
-OParser.parse(argParser, args, PerfTestSuiteParams()) match {
+argParser.parse(args, PerfTestSuiteParams()) match {
case Some(p) =>
val params = p.adjustWildcards
TypeScanner.enusureExist(params.shortServerNames, params.shortSimulationNames) match {
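
A short sketch of how these pieces combine (hypothetical values; the actual names come from `TypeScanner` at runtime): parsing CLI arguments produces a `PerfTestSuiteParams`, and `adjustWildcards` expands group patterns by prefix-matching against the known server list.

```
// Parse the example-4 arguments from the README; no -s given, so the
// suite targets an externally started server:
val params = PerfTestSuiteParams.parse(List("-m", "PostBytes,PostLongBytes", "-d", "25"))
// params.serverNames == List(ExternalServerName)

// Wildcard expansion: "netty.*" keeps every known server whose name starts with "netty."
val expanded = PerfTestSuiteParams(shortServerNames = List("netty.*")).adjustWildcards
// e.g. List("netty.future.Tapir", "netty.cats.TapirMulti", ...) depending on TypeScanner.allServers
```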
PerfTestSuiteRunner.scala
@@ -11,17 +11,15 @@ import java.nio.file.Paths
import java.time.LocalDateTime
import java.time.format.DateTimeFormatter
import scala.concurrent.duration.FiniteDuration
-import scala.reflect.runtime.universe

/** Main entry point for running suites of performance tests and generating aggregated reports. A suite represents a set of Gatling
* simulations executed on a set of servers, with some additional parameters like concurrent user count. One can run a single simulation on
* a single server, as well as a selection of (servers x simulations). The runner then collects Gatling logs from simulation.log files of
* individual simulation runs and puts them together into an aggregated report comparing results for all the runs.
* individual simulation runs and puts them together into an aggregated report comparing results for all the runs. If no servers are
* provided in the arguments, the suite will only execute simulations, assuming a server has been started separately.
*/
object PerfTestSuiteRunner extends IOApp {

-val runtimeMirror = universe.runtimeMirror(getClass.getClassLoader)

def run(args: List[String]): IO[ExitCode] = {
val params = PerfTestSuiteParams.parse(args)
println("===========================================================================================")
@@ -37,10 +35,10 @@ object PerfTestSuiteRunner extends IOApp {
val currentTime = LocalDateTime.now().format(formatter)
((params.simulationNames, params.serverNames)
.mapN((x, y) => (x, y)))
-.traverse { case ((simulationName, shortSimulationName), (serverName, shortServerName)) =>
+.traverse { case ((simulationName, shortSimulationName), serverName) =>
for {
-serverKillSwitch <- startServerByTypeName(serverName)
-_ <- IO.println(s"Running server $shortServerName, simulation $simulationName")
+serverKillSwitch <- ServerRunner.startServerByTypeName(serverName)
+_ <- IO.println(s"Running server ${serverName.shortName}, simulation $simulationName")
_ <- (for {
_ <- IO.println("======================== WARM-UP ===============================================")
_ = setSimulationParams(users = WarmupUsers, duration = WarmupDuration, warmup = true)
@@ -51,7 +49,7 @@ object PerfTestSuiteRunner extends IOApp {
} yield simResultCode)
.guarantee(serverKillSwitch)
.ensureOr(errCode => new Exception(s"Gatling failed with code $errCode"))(_ == 0)
-serverSimulationResult <- GatlingLogProcessor.processLast(shortSimulationName, shortServerName)
+serverSimulationResult <- GatlingLogProcessor.processLast(shortSimulationName, serverName)
_ <- IO.println(serverSimulationResult)
} yield (serverSimulationResult)
}
@@ -60,18 +58,6 @@ object PerfTestSuiteRunner extends IOApp {
.as(ExitCode.Success)
}

-private def startServerByTypeName(serverName: String): IO[ServerRunner.KillSwitch] = {
-try {
-val moduleSymbol = runtimeMirror.staticModule(serverName)
-val moduleMirror = runtimeMirror.reflectModule(moduleSymbol)
-val instance: ServerRunner = moduleMirror.instance.asInstanceOf[ServerRunner]
-instance.start
-} catch {
-case e: Throwable =>
-IO.raiseError(new IllegalArgumentException(s"ERROR! Could not find object $serverName or it doesn't extend ServerRunner", e))
-}
-}

/** Gatling doesn't allow to pass parameters to simulations when they are run using `Gatling.fromMap()`, that's why we're using system
* parameters as global variables to customize some params.
*/
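
A sketch of the mechanism described above (the property keys here are made-up placeholders, not the actual keys read by the simulations):

```
import scala.concurrent.duration.FiniteDuration

// Hypothetical: hand run parameters to Gatling simulations through JVM system
// properties, since Gatling.fromMap() offers no way to pass them directly.
def setSimulationParams(users: Int, duration: FiniteDuration, warmup: Boolean): Unit = {
  System.setProperty("tapir.perf.users", users.toString)
  System.setProperty("tapir.perf.duration-seconds", duration.toSeconds.toString)
  System.setProperty("tapir.perf.warmup", warmup.toString)
  ()
}
```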