
[receiver/nginxreceiver] Add additional labels for nginxreceiver #33069

Closed · aofodo opened this issue May 15, 2024 · 9 comments

aofodo commented May 15, 2024

Component(s)

receiver/nginx

Is your feature request related to a problem? Please describe.

The metrics look like this:

nginx_connections_current{__tenant_id__="default", state="active", tenant="default"} 255
nginx_connections_current{__tenant_id__="default", state="reading", tenant="default"} 0
nginx_connections_current{__tenant_id__="default", state="waiting", tenant="default"} 222
nginx_connections_current{__tenant_id__="default", state="writing", tenant="default"} 3

But in this example I am scraping 2 nginx instances, and the metrics from both are aggregated into the same series. As a result, the graph looks like this:

[graph screenshot]

On the first nginx I have ~200 active connections, and on the second ~3.

Describe the solution you'd like

I suggest adding additional labels to the metrics with information about the nginx endpoint, for example http_scheme, instance, net_host_name and net_host_port.
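
For illustration only (the net_host_name and net_host_port values below are taken from the example configuration under Additional context), the series from the two instances would then be distinguishable:

nginx_connections_current{http_scheme="http", net_host_name="first.nginx", net_host_port="9080", state="active"} 200
nginx_connections_current{http_scheme="http", net_host_name="second.nginx", net_host_port="9080", state="active"} 3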

Describe alternatives you've considered

No response

Additional context

Example of my collector configuration for the 2 nginx instances:

receivers:
  nginx/first:
    endpoint: "http://first.nginx:9080"
    collection_interval: 60s
  nginx/second:
    endpoint: "http://second.nginx:9080"
    collection_interval: 60s
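
As a possible interim workaround (a sketch only, not part of the receiver itself; the attribute name nginx.instance and the debug exporter are placeholders), each receiver can run in its own pipeline with a resource processor that inserts an identifying attribute:

processors:
  resource/first:
    attributes:
      # mark all metrics from the first receiver
      - key: nginx.instance
        value: first.nginx
        action: insert
  resource/second:
    attributes:
      # mark all metrics from the second receiver
      - key: nginx.instance
        value: second.nginx
        action: insert

exporters:
  debug: {}

service:
  pipelines:
    metrics/first:
      receivers: [nginx/first]
      processors: [resource/first]
      exporters: [debug]
    metrics/second:
      receivers: [nginx/second]
      processors: [resource/second]
      exporters: [debug]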
aofodo added the enhancement and needs triage labels on May 15, 2024

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

djaglowski (Member)

I'm not sure endpoint related information should be used as identifying attributes because they may be volatile. Are there any other identifiers available via the API? Perhaps a UUID of some sort?


aofodo commented May 16, 2024

I'm not sure endpoint related information should be used as identifying attributes because they may be volatile. Are there any other identifiers available via the API? Perhaps a UUID of some sort?

There is no additional information from the API for identification. Maybe add an optional job_name: <string> to the nginxreceiver configuration for identification?

djaglowski (Member)

That's an interesting idea. If we want to do something along those lines I'd suggest we use the semantic conventions for service.*.

Config could look like:

receivers:
  nginx:
    ...
    service:
      name: foo
      instance.id: 627cc493-f310-47de-96bd-71410b7dec09

or maybe more generic:

receivers:
  nginx:
    ...
    additional_resource_attributes: # (probably a better name)
      service.name: foo
      service.instance.id: 627cc493-f310-47de-96bd-71410b7dec09
      custom.attribute: bar

@dmitryax, I think this issue could be relevant to many different metric scrapers. Do you think this is a reasonable solution we might want to use more broadly?
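
For context (a sketch under the assumption that these would be attached as resource attributes): an exporter such as the prometheus exporter could then surface them as labels on the exported series by enabling resource_to_telemetry_conversion, which is what would make the two nginx instances distinguishable on the graph:

exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"
    # copy resource attributes (e.g. service.name, service.instance.id) onto each metric as labels
    resource_to_telemetry_conversion:
      enabled: true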

crobert-1 (Member)

Removing needs triage as the code owner has proposed a possible solution, with the understanding that specific implementation and configuration details may still need to be discussed.

crobert-1 removed the needs triage label on May 29, 2024

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

github-actions bot added the Stale label on Jul 29, 2024

aofodo commented Jul 31, 2024

This is also relevant for receiver/kafkametricsreceiver.

[graph screenshot]

And this issue is still relevant.

djaglowski removed the Stale label on Jul 31, 2024

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

github-actions bot added the Stale label on Sep 30, 2024

This issue has been closed as inactive because it has been stale for 120 days with no activity.

github-actions bot closed this as not planned on Nov 29, 2024