feat: record metrics from rules and export to remote #3861
base: main
Conversation
Force-pushed from be2d95a to c90f289.
Good work, Alberto! 🚀
I'd like to discuss the queries we aim to answer. Have you analyzed how the exported metrics will be used? Even a few example use cases would help.
pkg/experiment/block/compaction.go (outdated)
```go
func pyroscopeInstanceHash(shard uint32, id uuid.UUID) string {
    buf := make([]byte, 0, 40)
    buf = append(buf, byte(shard>>24), byte(shard>>16), byte(shard>>8), byte(shard))
    buf = append(buf, id.String()...)
    return fmt.Sprintf("%x", xxhash.Sum64(buf))
}
```
I'm not sure why we're using a UUID generated by the compaction worker.

First of all, it is not helpful and will cause data duplication. Jobs might be retried multiple times: each attempt may result in exported samples carrying their own `__pyroscope_instance__` label, which prevents deduplication in the metrics backend. Second, it will result in cardinality issues: there might be dozens or hundreds of compaction workers, and each of them can handle any block (i.e., we get Rules x Shards x Workers series, where each rule may produce multiple series, depending on the aggregation dimensions).

Note that compaction job source blocks always belong to the same shard but may be produced by a set of segment writers. This is a typical situation when the shard ownership/affinity changes due to a topology change (node added or removed), when the primary owner is not available, or when the placement rules for the dataset change.

It's possible that we have two segments with identical timestamps (given the millisecond precision of ULIDs). If we decide to use segment timestamps, the most important question is whether we want to handle that issue in the very first version. I'd say no, we don't have to. And if we were to, we would need to ensure that data sent from different segment origins is not mixed. The segment origin is determined by a combination of the `Shard` and `CreatedBy` metadata attributes and the timestamp of segment creation. We assume that within `Shard/CreatedBy` a timestamp collision is not possible (strictly speaking, this is not guaranteed). `Shard/CreatedBy` cardinality is bounded and is typically a 1:1 mapping. However, the worst case is N*M – therefore we may want to get rid of it (e.g., by aggregating data in the backend with recording rules).
I see the following ways to solve/mitigate it:

- Add `Shard/CreatedBy` (a hash of it) as a series label. We could probably be fine with just `CreatedBy`, but we need to make sure a timestamp collision is not possible in the segment writer: imagine a series is moved from one shard to another, hosted by the same segment writer, and the timestamps of the segments that include this "transition" match. Such samples would be deduplicated in the time series (Prometheus-like) backend.
- Add an explicit metadata attribute that includes the timestamp with nanosecond precision, which is sufficient for our needs in practice. The timestamp is the real local time of the segment writer that produced the block.
- Handle this in the compaction planner: we could probably somehow "guess" the timestamp, provided that we have all the information needed there.

It may be tempting to implement option 2. However, before we go further, I'd like to see an analysis of the access patterns – basically, what queries we expect: for example, which aggregation functions are supported. Do we want to support functions without the associative property (e.g., mean/average)?
Fixed to use `CreatedBy` instead of the worker id.
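A minimal sketch of what that could look like, assuming `createdBy` is the ULID string taken from the block's `CreatedBy` metadata attribute and the existing xxhash dependency is reused (names are illustrative, not the code in the PR):

```go
import (
    "fmt"

    "github.com/cespare/xxhash/v2"
)

// pyroscopeInstanceHash derives the instance label from Shard/CreatedBy rather
// than from a per-worker UUID, so retried jobs produce the same series.
func pyroscopeInstanceHash(shard uint32, createdBy string) string {
    buf := make([]byte, 0, 4+len(createdBy))
    buf = append(buf, byte(shard>>24), byte(shard>>16), byte(shard>>8), byte(shard))
    buf = append(buf, createdBy...)
    return fmt.Sprintf("%x", xxhash.Sum64(buf))
}
```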
pkg/experiment/block/compaction.go (outdated)
```diff
@@ -62,6 +66,7 @@ func Compact(
 	ctx context.Context,
 	blocks []*metastorev1.BlockMeta,
 	storage objstore.Bucket,
+	workerId uuid.UUID,
```
This is OK for a prototype, but I believe we should come up with a more elaborate component design: we should not be concerned with metrics export when dealing with compaction – an already complex area gets complicated further, and it violates the single-responsibility principle.

I propose adding a new interface `SampleRecorder` or `SampleObserver` (or any other name that communicates the purpose) that is an option of the `Compact` call. It should not appear at the planning stage, but be passed explicitly to the `(*CompactionPlan) Compact` call (we could find a better name, though).

The interface should be defined by the consumer (in the `block` package, before we declare the `Compact` function).
```go
type SampleObserver interface {
    // Observe is called before the compactor appends the entry
    // to the output block. This method must not modify the entry.
    Observe(ProfileEntry)

    // Flush is called before the compactor flushes the output dataset.
    // This call invalidates all references (such as symbols) to the source
    // and output blocks. Any error returned by the call terminates the
    // compaction job: it's the caller's responsibility to suppress errors.
    Flush() error
}
```
Alternatively, there might be an abstract factory providing an observer/recorder for a certain scope (batch) of samples. In our case, the dataset scope is a perfect fit.

We will extend it later to also provide symbols for filtering (something like `SetSymbols` or just `Reset` will do). The tenant can be obtained through the `Dataset` member of the entry. The timestamp override should be injected outside the compaction.
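To make the proposal concrete, here is a rough sketch of how such an option might be wired through compaction; `CompactionOption`, `compactionConfig`, and `writeDataset` are hypothetical names used for illustration only:

```go
// Hypothetical functional option carrying the observer into the Compact call.
type CompactionOption func(*compactionConfig)

type compactionConfig struct {
    observer SampleObserver
}

func WithSampleObserver(o SampleObserver) CompactionOption {
    return func(c *compactionConfig) { c.observer = o }
}

// writeDataset sketches where the observer hooks in: every entry is observed
// before it is appended to the output block, and Flush runs before the output
// dataset is flushed.
func writeDataset(entries []ProfileEntry, observer SampleObserver) error {
    for _, e := range entries {
        if observer != nil {
            observer.Observe(e) // must not modify the entry
        }
        // ... append e to the output dataset ...
    }
    if observer != nil {
        if err := observer.Flush(); err != nil {
            return err // any error terminates the compaction job
        }
    }
    return nil
}
```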
Great, I'll be rewriting this next. Thank you for the tips
pkg/experiment/metrics/recorder.go (outdated)
```go
func (r *Recorder) RecordRow(fp model.Fingerprint, lbls phlaremodel.Labels, totalValue int64) {
    labelsMap := r.getOrCreateLabelsMap(fp, lbls)

    for _, recording := range r.Recordings {
        aggregatedFp, matches := recording.matches(fp, labelsMap)
        if !matches {
            continue
        }
        if aggregatedFp == nil {
            // first time this series appears
            exportedLabels := r.generateExportedLabels(labelsMap, recording)

            sort.Sort(exportedLabels)
            f := AggregatedFingerprint(exportedLabels.Hash())
            aggregatedFp = &f

            recording.fps[fp] = aggregatedFp
            recording.data[*aggregatedFp] = newTimeSeries(exportedLabels, r.recordingTime)
        }
        recording.data[*aggregatedFp].Samples[0].Value += float64(totalValue)
    }
}
```
We can optimize it so that we only call it once per series and keep the aggregators prepared.

This is a tight loop – if we can avoid allocations, loops, lookups, etc., we should go for it. We don't have to implement low-level optimizations, but we should avoid wasteful operations (such as creating the `labelsMap`).
We talked in private; I'll rewrite it to take advantage of the ordering property of the rows.
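A hedged sketch of that direction, assuming rows arrive grouped by series fingerprint so rule evaluation only happens when the fingerprint changes; the field and helper names (`lastFp`, `current`, `seriesFor`) are illustrative, not the final code:

```go
// RecordRow, reworked to exploit row ordering: rows for the same series are
// assumed to be adjacent, so rule matching runs once per series rather than
// once per row, and no labels map is built in the hot path.
func (r *Recorder) RecordRow(fp model.Fingerprint, lbls phlaremodel.Labels, totalValue int64) {
    if fp != r.lastFp {
        // New series: resolve the target time series for every recording rule once.
        r.lastFp = fp
        r.current = r.current[:0]
        for _, rec := range r.Recordings {
            if ts, ok := rec.seriesFor(lbls); ok { // hypothetical per-series matcher
                r.current = append(r.current, ts)
            }
        }
    }
    // Hot path: accumulate the value into the cached targets.
    for _, ts := range r.current {
        ts.Samples[0].Value += float64(totalValue)
    }
}
```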
```go
func (r *Recording) matches(fp model.Fingerprint, labelsMap map[string]string) (*AggregatedFingerprint, bool) {
    aggregatedFp, seen := r.fps[fp]
    if seen {
        // we've seen this series before
        return aggregatedFp, seen
    }
    if r.rule.profileType != labelsMap["__profile_type__"] {
        return nil, false
    }
    for _, matcher := range r.rule.matchers {
        // assume labels.MatchEqual for every matcher:
        if labelsMap[matcher.Name] != matcher.Value {
            return nil, false
        }
    }
    return nil, true
}
```
This is a very reduced version of the syntax we use in queries. I think we should support the full one.
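For instance, if `r.rule.matchers` were `[]*labels.Matcher` from `github.com/prometheus/prometheus/model/labels`, the full syntax (`=`, `!=`, `=~`, `!~`) could be honored with something like the sketch below (a suggestion, not the PR's code):

```go
import "github.com/prometheus/prometheus/model/labels"

// ruleMatches reports whether the series labels satisfy all rule matchers,
// honoring every matcher type instead of assuming equality matches.
func ruleMatches(matchers []*labels.Matcher, labelsMap map[string]string) bool {
    for _, m := range matchers {
        if !m.Matches(labelsMap[m.Name]) {
            return false
        }
    }
    return true
}
```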
Force-pushed from ec8eba9 to 7d1e59b.
Force-pushed from 7d1e59b to 1700a83.
In this PR, we introduce a first version of the metrics recorder and metrics exporter.
Every level 1 compaction job will record metrics from profiles in the form of time series. The recording follows recording rules provided by config or an external service (for now, a single recording rule is hardcoded). The recorded metrics are exported to a remote endpoint after the compaction process.
Generated metrics are aggregations of the total values of a given profile type over a set of dimensions. The aggregation process works as follows:

- A recording rule defines a profile type (T), a set of label filters (F), and a set of exported dimensions (E).
- For every combination of exported dimension values, one sample is produced with time = blockTime and a value equal to the sum of all totalValues that match (T, F, E).

Example:
Let's consider the following profiles present in some blocks being compacted:

1. {service_name="worker", job="batch_compress", region="eu"}
2. {service_name="worker", job="batch_compress", region="eu"}
3. {service_name="API", region="eu"}
4. {service_name="worker", job="batch_compress", region="ap"}
5. {service_name="worker", job="batch_compress", region="us"}
6. {service_name="worker", job="batch_compress", region="eu"}
And the following recording rule:

Name = "cpu_usage_compress_workers"
T = cpu samples
F = {service_name="worker", job="batch_compress"}
E = "region"
This will result in the following exported series and samples.
{__name__="cpu_usage_compress_workers", service_name="worker", job="batch_compress", region="eu"} = (t, 120)
{__name__="cpu_usage_compress_workers", service_name="worker", job="batch_compress", region="ap"} = (t, 30)
{__name__="cpu_usage_compress_workers", service_name="worker", job="batch_compress", region="us"} = (t, 40)
Note that Profile 1 was discarded by profile type, Profiles 2 and 6 were aggregated, and Profile 3 was discarded by the filter. For all three exported samples, t = blockTime.
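A compact sketch of the aggregation semantics described above; the `Rule` shape and the `matchesFilters` helper are illustrative, not the PR's actual types:

```go
// Rule is an illustrative recording-rule shape: profile type (T),
// label filters (F), and exported dimensions (E).
type Rule struct {
    Name               string
    ProfileType        string
    Filters            map[string]string // e.g. {"service_name": "worker", "job": "batch_compress"}
    ExportedDimensions []string          // e.g. ["region"]
}

type profile struct {
    profileType string
    labels      map[string]string
    totalValue  int64
}

// aggregate sums totalValue per distinct combination of exported dimensions,
// keeping only profiles that match the rule's profile type and filters.
// Each resulting entry corresponds to one exported sample with time = blockTime.
func aggregate(rule Rule, profiles []profile) map[string]int64 {
    out := make(map[string]int64)
    for _, p := range profiles {
        if p.profileType != rule.ProfileType || !matchesFilters(rule.Filters, p.labels) {
            continue
        }
        key := rule.Name
        for _, dim := range rule.ExportedDimensions {
            key += "," + dim + "=" + p.labels[dim]
        }
        out[key] += p.totalValue
    }
    return out
}

func matchesFilters(filters, lbls map[string]string) bool {
    for k, v := range filters {
        if lbls[k] != v {
            return false
        }
    }
    return true
}
```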
Given the distributed architecture and concurrent nature of compactors, and the chosen timestamp for samples, time collisions may happen. For that reason, an extra `__pyroscope_instance__` label has been added, so that two compaction jobs may write to Prometheus without overwriting each other's samples. This instance id is computed from a worker id and a shard id.

Next steps:
Out of scope right now: