kvserver: add storage.compactions.duration timeseries metric
Add a new storage.compactions.duration metric that describes the cumulative
time spent in compactions since process start. This may be used to calculate
the effective compaction concurrency over an interval. See
cockroachdb/pebble#1934.

Epic: none
Release note (ops change): exposes a new metric `storage.compactions.duration`
computed by the storage engine that provides the cumulative time the storage
engine has spent in compactions. This duration may exceed the elapsed
wall-clock time because compactions run concurrently, which makes the metric
useful for monitoring compaction concurrency.
jbowens committed May 19, 2023
1 parent f2ff9b4 commit b01cc37
13 changes: 13 additions & 0 deletions pkg/kv/kvserver/metrics.go
@@ -691,6 +691,16 @@ See storage.AggregatedIteratorStats for more details.`,
		Measurement: "Iterator Ops",
		Unit:        metric.Unit_COUNT,
	}
	metaStorageCompactionsDuration = metric.Metadata{
		Name: "storage.compactions.duration",
		Help: `Cumulative sum of all compaction durations.

The rate of this value provides the effective compaction concurrency of a store,
which can be useful to determine whether the maximum compaction concurrency is
fully utilized.`,
		Measurement: "Processing Time",
		Unit:        metric.Unit_NANOSECONDS,
	}
	metaStorageCompactionsKeysPinnedCount = metric.Metadata{
		Name: "storage.compactions.keys.pinned.count",
		Help: `Cumulative count of storage engine KVs written to sstables during flushes and compactions due to open LSM snapshots.
@@ -2047,6 +2057,7 @@ type StoreMetrics struct {
	SharedStorageBytesWritten     *metric.Gauge
	StorageCompactionsPinnedKeys  *metric.Gauge
	StorageCompactionsPinnedBytes *metric.Gauge
	StorageCompactionsDuration    *metric.Gauge
	IterBlockBytes                *metric.Gauge
	IterBlockBytesInCache         *metric.Gauge
	IterBlockReadDuration         *metric.Gauge
@@ -2669,6 +2680,7 @@ func newStoreMetrics(histogramWindow time.Duration) *StoreMetrics {
		SharedStorageBytesWritten:     metric.NewGauge(metaSharedStorageBytesWritten),
		StorageCompactionsPinnedKeys:  metric.NewGauge(metaStorageCompactionsKeysPinnedCount),
		StorageCompactionsPinnedBytes: metric.NewGauge(metaStorageCompactionsKeysPinnedBytes),
		StorageCompactionsDuration:    metric.NewGauge(metaStorageCompactionsDuration),
		FlushableIngestCount:          metric.NewGauge(metaFlushableIngestCount),
		FlushableIngestTableCount:     metric.NewGauge(metaFlushableIngestTableCount),
		FlushableIngestTableSize:      metric.NewGauge(metaFlushableIngestTableBytes),
@@ -3014,6 +3026,7 @@ func (sm *StoreMetrics) updateEngineMetrics(m storage.Metrics) {
	sm.IterInternalSteps.Update(int64(m.Iterator.InternalSteps))
	sm.StorageCompactionsPinnedKeys.Update(int64(m.Snapshots.PinnedKeys))
	sm.StorageCompactionsPinnedBytes.Update(int64(m.Snapshots.PinnedSize))
	sm.StorageCompactionsDuration.Update(int64(m.Compact.Duration))
	sm.SharedStorageBytesRead.Update(m.SharedStorageReadBytes)
	sm.SharedStorageBytesWritten.Update(m.SharedStorageWriteBytes)
	sm.RdbL0Sublevels.Update(int64(m.Levels[0].Sublevels))
