
Allow user to pass in metrics namespace for prometheus exporter #5692

Closed
splunkericl opened this issue Jul 15, 2022 · 12 comments

@splunkericl (Contributor) commented Jul 15, 2022

Is your feature request related to a problem? Please describe.

Currently, all metrics emitted by the otel collector have the namespace set to otelcol. If otel collector users add their own metrics, those metrics end up with the same prefix. Our team does not want this prefix on our metric names.

From the collector user's perspective, the metrics namespace should be customizable. Dropping the namespace entirely to make metric names more readable should also be an option.

Describe the solution you'd like

Add a new field to ServiceTelemetryMetrics in go.opentelemetry.io/collector/config/service.go:

type ServiceTelemetryMetrics struct {
	...
	// Namespace is prepended to the collector's own metric names;
	// an empty value would mean no prefix at all.
	Namespace string `mapstructure:"namespace"`
}

The namespace is then passed into the Prometheus exporter options used for the collector's own telemetry:

opts := prometheus.Options{
	// Prefix applied to the collector's internal metrics exposed on the telemetry endpoint.
	Namespace: col.service.config.Telemetry.Metrics.Namespace,
}
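
For example, the collector configuration could then look something like this (just a sketch; the namespace key is the new field proposed above and does not exist today):

service:
  telemetry:
    metrics:
      address: ":8888"
      # Proposed new key; an empty string would mean no prefix on internal metric names.
      namespace: ""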

Describe alternatives you've considered

The user can strip the name prefix when scraping the metrics; see the relabeling sketch below.
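
For example, with a plain Prometheus scrape config the prefix could be dropped via a metric_relabel_configs rule (a sketch assuming the default otelcol_ prefix and the standard :8888 telemetry endpoint):

scrape_configs:
  - job_name: otel-collector
    static_configs:
      - targets: ["localhost:8888"]
    metric_relabel_configs:
      # Rewrite otelcol_<name> to <name> after scraping.
      - source_labels: [__name__]
        regex: "otelcol_(.+)"
        target_label: __name__
        replacement: "$1"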

Additional context

N/A

@dmitryax (Member) commented Aug 1, 2022

Hey @splunkericl. I'm curious what your use case is.

If otel collector users add their own metrics, those metrics end up with the same prefix

What metrics do you want to add that end up with the otelcol prefix? And how do you add them? Can you please provide an example?

The otelcol namespace is applied only to a specific set of collector's own metrics, not the metrics that the collector collects.

@dmitryax (Member) commented Aug 1, 2022

Also, the issue title mentions the prometheus exporter, which is not part of this repo.

@tigrannajaryan (Member)

@dmitryax the use case is custom builds or custom distros of the Collector that have a name different from "OpenTelemetry Collector". In such cases you may want to change the collector's own metric names so that they don't use the "otelcol" namespace, to avoid confusion (in some environments the canonical OTel Collector and custom distros co-exist, so differentiating their metrics can be useful). The hard-coded "otelcol" namespace value is currently used as a prefix for all own metrics that the Collector exposes (by default we expose them as Prometheus metrics on http://localhost:8888/metrics).

I think the capability to customize the metric namespace fits into our general customization story, where we already allow, for example, changing the short name and full name of the Collector (via BuildInfo.Command and BuildInfo.Description).
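
For reference, a custom distro already customizes those names roughly like this (a rough sketch against the service API of that time; exact package and field names have moved around between collector versions, and "mycol" is just a placeholder):

package main

import (
	"log"

	"go.opentelemetry.io/collector/component"
	"go.opentelemetry.io/collector/service"
)

func main() {
	// Custom distros already override the binary's short and full names here;
	// the feature requested in this issue would extend the same idea to the
	// namespace used for the collector's own metrics.
	info := component.BuildInfo{
		Command:     "mycol", // hypothetical distro name instead of "otelcol"
		Description: "My Custom OpenTelemetry Collector Distribution",
		Version:     "1.0.0",
	}

	// A real distro would register its receiver/processor/exporter factories here.
	factories := component.Factories{}

	settings := service.CollectorSettings{BuildInfo: info, Factories: factories}
	if err := service.NewCommand(settings).Execute(); err != nil {
		log.Fatal(err)
	}
}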

@dmitryax (Member) commented Aug 2, 2022

It makes sense now. Thanks for clarifying.

@bogdandrutu (Member)

@tigrannajaryan as mentioned in the PR, this is only supported in OpenCensus, which we are trying to move away from. Should we just remove the "otelcol" namespace from the metrics for the moment? This will make the transition to OpenTelemetry even harder.

@dmitryax (Member) commented Aug 2, 2022

@bogdandrutu I don't think we can just remove the namespace. The collector's metrics are widely adopted in their current form. It'll be a significant breaking change for collector users who care about the collector's observability. Also, removing the namespace can introduce conflicts with other systems, given that we would get metric names like process_uptime.

Can't we align the metrics exposed with the telemetry.useOtelForInternalMetrics feature gate to use the same otelcol namespace?

@tigrannajaryan (Member)

I don't think we can just remove the namespace. The collector's metrics are widely adopted in their current form. It'll be a significant breaking change for collector users who care about the collector's observability.

I agree with @dmitryax.

Can't we align the metrics exposed with the telemetry.useOtelForInternalMetrics feature gate to use the same otelcol namespace?

When we migrate to useOtelForInternalMetrics=true in the future, I think we will at least need to make sure there is a way to produce backward-compatible metrics that carry the same otelcol namespace, even if that namespace is not added by default.

If there is a strong desire to get rid of the otelcol namespace, then I would announce that it is deprecated but keep it as a backward-compatible option for a very long time (at least 12 months) to give people who observe the Collector time to migrate. Also, maybe by then we can add full support for Schemas, and this metric renaming will be just a Schema version change, which will help people who migrate.

@bogdandrutu (Member)

I still believe that we should not allow users to change it (i.e. not expose this option). We made the decision to have the "otelcol" prefix; fine, we will not break observability, but we should not allow users to change it and have to support this "option".

@bogdandrutu (Member)

The problem is that if we add that capability and otel-go does not support it, we are stuck on OpenCensus. Without it, we can just go and change all metrics to have the otelcol prefix in their names (when changing to otel-go) and we are good.

@dmitryax (Member) commented Aug 8, 2022

I still believe that we should not allow users to change it (i.e. not expose this option). We made the decision to have the "otelcol" prefix; fine, we will not break observability, but we should not allow users to change it and have to support this "option".

I agree with you. I recommended to @splunkericl a couple of approaches that can address the need without introducing any configuration options here. Once applied, I think we can close this issue.

@dmitryax (Member)

As discussed with @splunkericl, solving #5882 will make this issue unnecessary.

@codeboten (Contributor)

Closing this issue as #5882 is now resolved.
