
feat: record metrics from rules and export to remote #3861

Open
wants to merge 2 commits into main from alsoba13/metrics-from-profiles-record-and-export
Conversation

alsoba13 (Contributor)

In this PR, we introduce a first version of the metrics recorder and metrics exporter.

Every level 1 compaction job will record metrics from profiles in the form of time series. Recording follows recording rules provided by config or an external service (for now, a single recording rule is hardcoded). The recorded metrics are exported to a remote endpoint after compaction.

Generated metrics are aggregations of profile total values, grouped by a set of dimensions, for a given profile type. The aggregation process works as follows:

  • Given a recording rule with a profile type T, a filter F (a set of key-value pairs) and a set of labels E to export.
  • Every profile seen during the compaction that matches T and F is considered for the aggregation.
  • To aggregate, profiles are grouped by E, resulting in multiple time series.
  • Every time series has a single sample with time = blockTime and a value equal to the sum of all totalValues that match (T, F, E).
  • Hence, as we add up all totalValues that fulfill the conditions, we are conceptually aggregating over time (we discard the original profile timestamp and use the block time), resulting in a single sample per series per compaction job.

Example:

Let's consider the following profiles present in some blocks being compacted

profile | profile type       | labels                                                      | totalValue | stacktraces (ignored) | timestamp (ignored)
1       | memory alloc_space | {service_name="worker", job="batch_compress", region="eu"} | 10         | ...                   | ...
2       | cpu samples        | {service_name="worker", job="batch_compress", region="eu"} | 20         | ...                   | ...
3       | cpu samples        | {service_name="API", region="eu"}                           | 1          | ...                   | ...
4       | cpu samples        | {service_name="worker", job="batch_compress", region="ap"} | 30         | ...                   | ...
5       | cpu samples        | {service_name="worker", job="batch_compress", region="us"} | 40         | ...                   | ...
6       | cpu samples        | {service_name="worker", job="batch_compress", region="eu"} | 100        | ...                   | ...

And the following recording rule:
Name = "cpu_usage_compress_workers"
T = cpu samples
F = {service_name="worker", job="batch_compress"}
E = "region"

This will result in the following exported series and samples.
{__name__="cpu_usage_compress_workers", service_name="worker", job="batch_compress", region="eu"} = (t, 120)
{__name__="cpu_usage_compress_workers", service_name="worker", job="batch_compress", region="ap"} = (t, 30)
{__name__="cpu_usage_compress_workers", service_name="worker", job="batch_compress", region="us"} = (t, 40)

Note that Profile 1 was discarded by profile type, Profile 3 was discarded by the filter, and Profiles 2 and 6 were aggregated into a single series. For all three exported samples, t = blockTime.
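
For illustration, here is a minimal, self-contained Go sketch of this aggregation (the types and function names are ours, not the PR's actual implementation):

package rulesdemo

import (
    "fmt"
    "sort"
    "strings"
)

// RecordingRule mirrors the (T, F, E) triple described above.
type RecordingRule struct {
    Name        string
    ProfileType string            // T
    Filter      map[string]string // F
    ExportedBy  []string          // E
}

// Profile is a simplified stand-in for a profile row seen during compaction.
type Profile struct {
    ProfileType string
    Labels      map[string]string
    TotalValue  int64
}

// aggregate groups matching profiles by the exported labels E and sums their
// totalValue, producing one value per exported series.
func aggregate(rule RecordingRule, profiles []Profile) map[string]int64 {
    out := make(map[string]int64)
    for _, p := range profiles {
        if p.ProfileType != rule.ProfileType {
            continue // discarded by profile type
        }
        matches := true
        for k, v := range rule.Filter {
            if p.Labels[k] != v {
                matches = false // discarded by filter
                break
            }
        }
        if !matches {
            continue
        }
        // Build the series key from the exported labels only.
        parts := make([]string, 0, len(rule.ExportedBy))
        for _, name := range rule.ExportedBy {
            parts = append(parts, fmt.Sprintf("%s=%q", name, p.Labels[name]))
        }
        sort.Strings(parts)
        out[strings.Join(parts, ",")] += p.TotalValue
    }
    return out
}

Running this over the six example profiles above with the cpu_usage_compress_workers rule yields region="eu" → 120, region="ap" → 30 and region="us" → 40, matching the exported samples listed.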

Given the distributed architecture and concurrent nature of compactors, and the chosen timestamp for samples, time collisions may happen. For that reason, an extra __pyroscope_instance__ label has been added, so that two compaction jobs may write to Prometheus without causing overwrites. This instance id is computed from a worker id and a shard id.

Next steps:

  • Get the export config programmatically so every metric is exported to the expected datasource (tenant-wise)
  • Read rules from external service (tenant-settings?) and config.
  • Error handling: the lack of error handling is evident. There's a lot of room for improvement here, but we should strive not to interfere with compaction and weigh retries against metrics loss.

Out of scope right now:

  • functions/stacktraces processing

alsoba13 requested a review from a team as a code owner on January 21, 2025, 14:52
alsoba13 force-pushed the alsoba13/metrics-from-profiles-record-and-export branch from be2d95a to c90f289 on January 21, 2025, 14:57
alsoba13 marked this pull request as draft on January 21, 2025, 22:02
kolesnikovae (Collaborator) left a comment:

Good work, Alberto! 🚀

I'd like to discuss the queries we aim to answer. Have you analyzed how the exported metrics will be used? Just some example use cases would help.

Comment on lines 372 to 377
func pyroscopeInstanceHash(shard uint32, id uuid.UUID) string {
    // Hash the shard ID (big-endian bytes) together with the worker UUID
    // to derive a stable value for the __pyroscope_instance__ label.
    buf := make([]byte, 0, 40) // 4 bytes for the shard + 36 for the UUID string
    buf = append(buf, byte(shard>>24), byte(shard>>16), byte(shard>>8), byte(shard))
    buf = append(buf, id.String()...)
    return fmt.Sprintf("%x", xxhash.Sum64(buf))
}
kolesnikovae (Collaborator) commented on Jan 22, 2025:

I'm not sure why we're using a UUID generated by the compaction worker.

First of all, it is not helpful and will cause data duplication. Jobs might be retried multiple times: each attempt may produce exported samples with their own __pyroscope_instance__ label, which prevents deduplication in the metrics backend. Second, it will cause cardinality issues: there might be dozens or hundreds of compaction workers, each of which can handle any block (i.e., we get Rules x Shards x Workers series, where each rule may produce multiple series based on the aggregation dimensions).

Note that compaction job source blocks always belong to the same shard but may be produced by a set of segment writers. This is a typical situation when the shard ownership/affinity changes due to a topology change (node added or removed), when the primary owner is not available, or when the placement rules for the dataset change.

It's possible that we have two segments with identical timestamps (given the millisecond precision of ULIDs). Whether we want to handle this issue in the very first version is probably the most important question, if we decide to use segment timestamps. I'd say no, we don't have to. And if we were to, we would need to ensure that data sent from different segment origins is not mixed. The segment origin is determined by a combination of the Shard and CreatedBy metadata attributes and the timestamp of segment creation. We assume that within a Shard/CreatedBy pair a timestamp collision is not possible (strictly speaking, this is not guaranteed). Shard/CreatedBy cardinality is bounded and is typically a 1:1 mapping. However, the worst-case scenario is N*M, therefore we may want to get rid of it (e.g., by aggregating data in the backend with recording rules).

I see the following ways to solve/mitigate it:

  1. Add Shard/CreatedBy as a series label (a hash of it). We could probably be fine with just CreatedBy, but we need to make sure a timestamp collision is not possible in the segment writer: imagine a series is moved from one shard to another hosted by the same segment-writer, and the timestamps of the segments that include this "transition" match. Such samples would be deduplicated in the time series (Prometheus-like) backend.
  2. Add an explicit metadata attribute that includes the timestamp with nanosecond precision, which is sufficient for our needs in practice. The timestamp is the real local time of the segment-writer that produced the block.
  3. Handle this in the compaction planner: we could probably somehow "guess" the timestamp, provided that we have all the information needed there.

It may be tempting to implement p.2. However, before we go further, I'd like to see an analysis of the access patterns – basically, what queries we expect: for example, which aggregation functions should be supported. Do we want to support functions without the associative property (e.g., mean/average)?

alsoba13 (Contributor, Author):

Fixed to use CreatedBy instead of worker id

@@ -62,6 +66,7 @@ func Compact(
ctx context.Context,
blocks []*metastorev1.BlockMeta,
storage objstore.Bucket,
workerId uuid.UUID,
kolesnikovae (Collaborator):

This is OK for a prototype, but I believe we should come up with a more elaborate component design: we should not be concerned with metrics export when dealing with compaction, as an already complex area gets complicated further and the single-responsibility principle is violated.

I propose adding a new interface, SampleRecorder or SampleObserver (or any other name that communicates the purpose), that is an option of the Compact call. It should not appear at the planning stage, but should be passed explicitly to the (*CompactionPlan) Compact call (we could find a better name, though).

The interface should be defined by the consumer (in the block package, before we declare the Compact function).

type SampleObserver interface {
    // Observe is called before the compactor appends the entry
    // to the output block. This method must not modify the entry.
    Observe(ProfileEntry)

    // Flush is called before the compactor flushes the output dataset.
    // This call invalidates all references (such as symbols) to the source
    // and output blocks. Any error returned by the call terminates the
    // compaction job: it's the caller's responsibility to suppress errors.
    Flush() error
}

Alternatively, there might be an abstract factory providing an observer/recorder for a certain scope (batch) of samples. In our case, the dataset scope is a perfect fit.

We will extend it later to also provide symbols for filtering (something like SetSymbols or just Reset will do). The tenant can be obtained through the Dataset member of the entry. The timestamp override should be injected outside the compaction.
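
To make the proposal concrete, here is a self-contained sketch of the pattern (everything besides the SampleObserver interface itself is illustrative, not the PR's API):

package observer

import "fmt"

// ProfileEntry stands in for the compactor's row type.
type ProfileEntry struct {
    Labels     map[string]string
    TotalValue int64
}

// SampleObserver is the consumer-defined interface proposed above.
type SampleObserver interface {
    // Observe is called before the entry is appended to the output block.
    Observe(ProfileEntry)
    // Flush is called before the output dataset is flushed.
    Flush() error
}

// sumObserver is a toy implementation that sums values per service.
type sumObserver struct {
    totals map[string]int64
}

func (o *sumObserver) Observe(e ProfileEntry) {
    o.totals[e.Labels["service_name"]] += e.TotalValue
}

func (o *sumObserver) Flush() error {
    // A real implementation would build time series and export them here,
    // suppressing errors so they never fail the compaction job.
    fmt.Println(o.totals)
    return nil
}

// compact sketches how the compactor would drive the observer: observe each
// entry as it is appended, then flush once before the dataset is written out.
func compact(entries []ProfileEntry, obs SampleObserver) error {
    for _, e := range entries {
        obs.Observe(e)
        // ... append the entry to the output block ...
    }
    return obs.Flush()
}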

alsoba13 (Contributor, Author):

Great, I'll be rewriting this next. Thank you for the tips.

Comment on lines 54 to 78
func (r *Recorder) RecordRow(fp model.Fingerprint, lbls phlaremodel.Labels, totalValue int64) {
    labelsMap := r.getOrCreateLabelsMap(fp, lbls)

    for _, recording := range r.Recordings {
        aggregatedFp, matches := recording.matches(fp, labelsMap)
        if !matches {
            continue
        }
        if aggregatedFp == nil {
            // first time this series appears
            exportedLabels := r.generateExportedLabels(labelsMap, recording)

            sort.Sort(exportedLabels)
            f := AggregatedFingerprint(exportedLabels.Hash())
            aggregatedFp = &f

            recording.fps[fp] = aggregatedFp
            recording.data[*aggregatedFp] = newTimeSeries(exportedLabels, r.recordingTime)
        }
        recording.data[*aggregatedFp].Samples[0].Value += float64(totalValue)
    }
}
kolesnikovae (Collaborator) commented on Jan 22, 2025:

We can optimize this so that it is only called once per series, keeping the aggregators prepared.

This is a tight loop – if we can avoid allocations, loops, lookups, etc., we should go for it. We don't have to implement low-level optimizations, but we should avoid wasteful operations (such as creating the labelsMap).

alsoba13 (Contributor, Author):

We talked in private; I'll rewrite it to take advantage of the ordering property of the rows.
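
For reference, a rough sketch of what that could look like, assuming rows arrive grouped by series fingerprint (the fields and the seriesFor helper are illustrative, not the final code):

// RecordRow assumes rows are ordered by series: rule matching happens only
// when the fingerprint changes, and the matched targets are reused for the
// whole run of rows belonging to the same series.
func (r *Recorder) RecordRow(fp model.Fingerprint, lbls phlaremodel.Labels, totalValue int64) {
    if fp != r.currentFp {
        r.currentFp = fp
        r.currentTargets = r.currentTargets[:0]
        for _, recording := range r.Recordings {
            // seriesFor is a hypothetical helper that matches the rule against
            // the labels once and returns (or creates) the aggregated series.
            if ts, ok := recording.seriesFor(lbls); ok {
                r.currentTargets = append(r.currentTargets, ts)
            }
        }
    }
    for _, ts := range r.currentTargets {
        ts.Samples[0].Value += float64(totalValue)
    }
}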

Comment on lines +120 to +139
func (r *Recording) matches(fp model.Fingerprint, labelsMap map[string]string) (*AggregatedFingerprint, bool) {
    aggregatedFp, seen := r.fps[fp]
    if seen {
        // we've seen this series before
        return aggregatedFp, seen
    }
    if r.rule.profileType != labelsMap["__profile_type__"] {
        return nil, false
    }
    for _, matcher := range r.rule.matchers {
        // assume labels.MatchEqual for every matcher:
        if labelsMap[matcher.Name] != matcher.Value {
            return nil, false
        }
    }
    return nil, true
}
kolesnikovae (Collaborator):

This is a very reduced version of the syntax we use in queries. I think we should support the full one.
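
For what it's worth, supporting the full matcher syntax could be as simple as delegating to the Prometheus matcher types (a sketch, not the PR's code):

package rules

import "github.com/prometheus/prometheus/model/labels"

// matchesAll evaluates every matcher type (=, !=, =~, !~) against the series
// labels instead of assuming labels.MatchEqual for all of them.
func matchesAll(matchers []*labels.Matcher, labelsMap map[string]string) bool {
    for _, m := range matchers {
        if !m.Matches(labelsMap[m.Name]) {
            return false
        }
    }
    return true
}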

alsoba13 force-pushed the alsoba13/metrics-from-profiles-record-and-export branch 2 times, most recently from ec8eba9 to 7d1e59b on January 22, 2025, 09:41
alsoba13 force-pushed the alsoba13/metrics-from-profiles-record-and-export branch from 7d1e59b to 1700a83 on January 22, 2025, 09:42
alsoba13 marked this pull request as ready for review on January 22, 2025, 13:58