
# Instrument Your Application

Conprof works by collecting profiles in pprof format from HTTP endpoints. All an application has to do is use one of the pprof client libraries and expose an HTTP endpoint that serves the profiles. Pprof client libraries exist for various languages:

| Language/runtime | CPU | Heap | Allocations | Blocking | Mutex Contention | Extra |
| ---------------- | --- | ---- | ----------- | -------- | ---------------- | ----- |
| Go               | Yes | Yes  | Yes         | Yes      | Yes              | Goroutine, fgprof |
| Rust             | Yes | No   | No          | No       | No               | |
| Python           | Yes | Yes  | No          | No       | No               | |
| NodeJS           | Yes | Yes  | No          | No       | No               | |
| JVM              | Yes | No   | No          | No       | No               | |
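
For Go, for example, the standard library's `net/http/pprof` package registers the profiling endpoints automatically. A minimal sketch (the port and program are illustrative, not part of Conprof itself):

```go
package main

import (
	"log"
	"net/http"
	// Importing net/http/pprof for its side effect registers the
	// /debug/pprof/* handlers on http.DefaultServeMux.
	_ "net/http/pprof"
)

func main() {
	// Conprof can then scrape profiles from this process, e.g.
	// http://localhost:8080/debug/pprof/heap or /debug/pprof/profile.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Other runtimes follow the same pattern: the client library produces pprof-formatted profiles, and an HTTP handler serves them.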

## Guides

### Generic Profiling

Additionally, any perf profile can be converted to pprof using perf_data_converter, so even programs without native pprof support can benefit from continuous profiling with Conprof. We do, however, recommend using native instrumentation when possible, as it allows language- and runtime-specific nuances to be encoded in the respective libraries.
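
As a rough sketch of that workflow (the program name is a placeholder, and the exact `perf_to_profile` invocation may differ; consult the perf_data_converter documentation):

```sh
# Record a CPU profile with Linux perf; no pprof support is needed in the target program.
perf record -g -o perf.data -- ./my-program

# Convert the recorded data to pprof format using perf_data_converter's
# perf_to_profile tool (flags shown here are illustrative; check its README).
perf_to_profile -i perf.data -o profile.pb
```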

Once there is an HTTP endpoint that serves profiles in pprof format, all that remains is to configure Conprof to collect the profiles at a regular interval. See examples/conprof.yaml for an example configuration.
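
A minimal sketch of what such a configuration might look like, modeled on Prometheus-style scrape configuration (the job name, interval, and target address are illustrative; examples/conprof.yaml is the authoritative reference):

```yaml
scrape_configs:
  - job_name: 'example-app'       # illustrative name for the scraped application
    scrape_interval: 1m           # how often Conprof collects profiles
    scrape_timeout: 1m
    static_configs:
      - targets: ['localhost:8080']  # address of the pprof-serving HTTP endpoint
```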