
Weird opentelemetry-collector errors #26343

Closed

ghevge opened this issue Aug 30, 2023 · 21 comments
ghevge commented Aug 30, 2023

Component(s)

exporter/prometheus

What happened?

I'm trying to set up the opentelemetry-collector with Prometheus, Tempo, and Loki for a dockerized Java Spring Boot 3 application.

I've managed to start all the containers, but once I generate some activity in my app, the OTel collector starts producing errors like the ones below:

Any idea what could be causing this?

Collector version

0.82.0

Environment information

Environment

OS: (e.g., "Ubuntu 20.04")
Compiler (if manually compiled): (e.g., "go 14.2")

OpenTelemetry Collector configuration

receivers:
  otlp:
    protocols:
      http:
      grpc:

processors:
  # batch metrics before sending to reduce API usage
  batch:

exporters:
  logging:
    loglevel: debug

  loki:
    endpoint: "http://loki:3100/loki/api/v1/push"

  prometheus:
    endpoint: "0.0.0.0:8889"
    const_labels:
      label1: value1

  otlp:
    endpoint: tempo:4317
    tls:
      insecure: true

# https://github.com/open-telemetry/opentelemetry-collector/blob/main/extension/README.md
extensions:
  # responsible for responding to health check calls on behalf of the collector.
  health_check:
  # fetches the collector’s performance data
  pprof:
  # serves as an http endpoint that provides live debugging data about instrumented components.
  zpages:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheus]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
    logs:
      receivers: [otlp]
      exporters: [loki]

Log output

otel-collector_1  | 2023-08-30T15:24:45.452Z    error   [email protected]/log.go:23    error gathering metrics: 5 error(s) occurred:
otel-collector_1  | * collected metric log4j2_events label:{name:"application" value:"testapp"} label:{name:"job" value:"unknown_service"} label:{name:"label1" value:"value1"} label:{name:"level" value:"debug"} counter:{value:599} has help "Number of debug level log events" but should have "Number of trace level log events"
otel-collector_1  | * collected metric log4j2_events label:{name:"application" value:"testapp"} label:{name:"job" value:"unknown_service"} label:{name:"label1" value:"value1"} label:{name:"level" value:"warn"} counter:{value:2} has help "Number of warn level log events" but should have "Number of trace level log events"
otel-collector_1  | * collected metric log4j2_events label:{name:"application" value:"testapp"} label:{name:"job" value:"unknown_service"} label:{name:"label1" value:"value1"} label:{name:"level" value:"fatal"} counter:{value:0} has help "Number of fatal level log events" but should have "Number of trace level log events"
otel-collector_1  | * collected metric log4j2_events label:{name:"application" value:"testapp"} label:{name:"job" value:"unknown_service"} label:{name:"label1" value:"value1"} label:{name:"level" value:"error"} counter:{value:0} has help "Number of error level log events" but should have "Number of trace level log events"
otel-collector_1  | * collected metric log4j2_events label:{name:"application" value:"testapp"} label:{name:"job" value:"unknown_service"} label:{name:"label1" value:"value1"} label:{name:"level" value:"info"} counter:{value:12} has help "Number of info level log events" but should have "Number of trace level log events"
otel-collector_1  |     {"kind": "exporter", "data_type": "metrics", "name": "prometheus"}
otel-collector_1  | github.com/open-telemetry/opentelemetry-collector-contrib/exporter/prometheusexporter.(*promLogger).Println
otel-collector_1  |     github.com/open-telemetry/opentelemetry-collector-contrib/exporter/[email protected]/log.go:23
otel-collector_1  | github.com/prometheus/client_golang/prometheus/promhttp.HandlerForTransactional.func1
otel-collector_1  |     github.com/prometheus/[email protected]/prometheus/promhttp/http.go:144
otel-collector_1  | net/http.HandlerFunc.ServeHTTP
otel-collector_1  |     net/http/server.go:2122
otel-collector_1  | net/http.(*ServeMux).ServeHTTP
otel-collector_1  |     net/http/server.go:2500
otel-collector_1  | go.opentelemetry.io/collector/config/confighttp.(*decompressor).ServeHTTP
otel-collector_1  |     go.opentelemetry.io/collector/config/[email protected]/compression.go:147
otel-collector_1  | go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*Handler).ServeHTTP
otel-collector_1  |     go.opentelemetry.io/contrib/instrumentation/net/http/[email protected]/handler.go:212
otel-collector_1  | go.opentelemetry.io/collector/config/confighttp.(*clientInfoHandler).ServeHTTP
otel-collector_1  |     go.opentelemetry.io/collector/config/[email protected]/clientinfohandler.go:28
otel-collector_1  | net/http.serverHandler.ServeHTTP
otel-collector_1  |     net/http/server.go:2936
otel-collector_1  | net/http.(*conn).serve
otel-collector_1  |     net/http/server.go:1995
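
For context, this error comes from the Prometheus client library's gather-time consistency check: every series under one metric name must share a single HELP string, while OTLP lets each metric stream carry its own description. Below is a minimal Go sketch that reproduces the same error with prometheus/client_golang, reusing the metric name and help strings from the log above; it is illustrative only, not the exporter's actual code:

package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

// inconsistentCollector emits two series under the same metric name but with
// different help strings, mimicking what the exporter ends up serving when an
// application reports the same OTLP metric name with differing descriptions.
type inconsistentCollector struct{}

// Describe sends nothing, so the registry treats this collector as unchecked
// and defers all consistency checks to gather time.
func (inconsistentCollector) Describe(chan<- *prometheus.Desc) {}

func (inconsistentCollector) Collect(ch chan<- prometheus.Metric) {
	trace := prometheus.NewDesc("log4j2_events",
		"Number of trace level log events", nil, prometheus.Labels{"level": "trace"})
	debug := prometheus.NewDesc("log4j2_events",
		"Number of debug level log events", nil, prometheus.Labels{"level": "debug"})
	ch <- prometheus.MustNewConstMetric(trace, prometheus.CounterValue, 0)
	ch <- prometheus.MustNewConstMetric(debug, prometheus.CounterValue, 599)
}

func main() {
	reg := prometheus.NewRegistry()
	reg.MustRegister(inconsistentCollector{})
	// Gather fails with an error like: collected metric log4j2_events ...
	// has help "Number of debug level log events" but should have
	// "Number of trace level log events"
	if _, err := reg.Gather(); err != nil {
		fmt.Println(err)
	}
}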

Additional context

My docker-compose looks something like this:

........
version: '3.8'

services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:0.82.0
    restart: always
    command:
      - --config=/etc/otelcol-contrib/otel-collector.yml
    volumes:
      - /data/configs/otel-collector.yml:/etc/otelcol-contrib/otel-collector.yml
    ports:
      - "1888:1888" # pprof extension
      - "8888:8888" # Prometheus metrics exposed by the collector
      - "8889:8889" # Prometheus exporter metrics
      - "13133:13133" # health_check extension
      - "4317:4317" # OTLP gRPC receiver
      - "4318:4318" # OTLP http receiver
      - "55679:55679" # zpages extension
    networks:
      mw-network:
        aliases:
          - otel-collector

  prometheus:
    container_name: prometheus
    image: prom/prometheus
    restart: always
    command:
      - --config.file=/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
    volumes:
      - /data/configs/prometheus.yml:/etc/prometheus/prometheus.yml
    networks:
      mw-network:
        aliases:
          - prometheus

  loki:
    image: grafana/loki:latest
    ports:
      - 3100:3100
    command: -config.file=/etc/loki/local-config.yaml
    networks:
      mw-network:
        aliases:
          - loki

  tempo:
    image: grafana/tempo:latest
    command: [ "-config.file=/etc/tempo.yml" ]
    volumes:
      - /data/configs/tempo.yml:/etc/tempo.yml
    ports:
      - "3200:3200"   # tempo
      - "4317"  # otlp grpc
    networks:
      mw-network:
        aliases:
          - tempo

  grafana:
    image: grafana/grafana-enterprise
    container_name: grafana
    restart: unless-stopped
    ports:
      - "3000:3000"
    volumes:
      - /data/grafana:/var/lib/grafana 
..........

Java config class:

@Configuration
public class OpenTelemetryConfig {
    @Bean
    OpenTelemetry openTelemetry(SdkLoggerProvider sdkLoggerProvider, SdkTracerProvider sdkTracerProvider, ContextPropagators contextPropagators) {
        OpenTelemetrySdk openTelemetrySdk = OpenTelemetrySdk.builder()
                .setLoggerProvider(sdkLoggerProvider)
                .setTracerProvider(sdkTracerProvider)
                .setPropagators(contextPropagators)
                .build();
        OpenTelemetryAppender.install(openTelemetrySdk);
        return openTelemetrySdk;
    }

    @Bean
    SdkLoggerProvider otelSdkLoggerProvider(Environment environment, ObjectProvider<LogRecordProcessor> logRecordProcessors) {
        String applicationName = environment.getProperty("spring.application.name", "application");
        Resource springResource = Resource.create(Attributes.of(ResourceAttributes.SERVICE_NAME, applicationName));
        SdkLoggerProviderBuilder builder = SdkLoggerProvider.builder()
                .setResource(Resource.getDefault().merge(springResource));
        logRecordProcessors.orderedStream().forEach(builder::addLogRecordProcessor);
        return builder.build();
    }

    @Bean
    LogRecordProcessor otelLogRecordProcessor() {
        return BatchLogRecordProcessor
                .builder(OtlpGrpcLogRecordExporter.builder()
                                                  .setEndpoint("http://otel-collector:4317")
                                                  .build())
                .build();
    }
}

application.yaml:

logging:
  config: classpath:/log4j2-spring.xml

management:
  endpoints:
    web:
      exposure:
        include: metrics
  otlp:
    metrics:
      export:
        url: http://otel-collector:4318/v1/metrics
        step: 10s
    tracing:
      endpoint: http://otel-collector:4318/v1/traces
  tracing:
    sampling:
      probability: 1.0

spring:
  application:
    name: kelteu
...............
ghevge added the bug and needs triage labels on Aug 30, 2023
github-actions bot commented:

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

crobert-1 (Member) commented:

Also filed here: open-telemetry/opentelemetry-collector#8340

frzifus (Member) commented Sep 14, 2023

Hm... I am sure I have seen this in the past, but looking at the stack trace and implementation I can only guess what might have caused it.
Since promLogger is just a wrapper around zap.Logger, which is thread-safe, I assume the error is somehow related to the object(s) passed to the logger.

A closer look with a real debugger might help. Could you provide some details about the passed objects?

Frapschen removed the needs triage label on Sep 15, 2023
github-actions bot commented:

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.


andrm commented Nov 23, 2023

Same problem here with WildFly 30.


fede843 commented Nov 29, 2023

Getting the same error too. I have tried collector versions from 0.82.0 to 0.90.0; all the same.


crobert-1 commented Nov 29, 2023

@andrm, @fede843, or @ghevge: Would you be able to provide detailed information about the objects the logger is hitting these errors on, as @frzifus requested above? This would be very helpful in debugging; otherwise it will be hard to make progress.


ghevge commented Nov 29, 2023

@crobert-1 it will probably be faster to change otel-collector to log those packages in debug mode.

crobert-1 (Member) commented:

> @crobert-1 it will probably be faster to change otel-collector to log those packages in debug mode.

Pardon my ignorance, but I don't understand what you mean here. Since this isn't failing every time, the goal is to find what data is causing this exception to be hit, helping us understand the root cause and fix it.

Are you suggesting adding more logging to the collector to understand which packages are causing the failure? Either way, any more information that can be provided here would be helpful.


ghevge commented Nov 30, 2023

> Are you suggesting adding more logging to the collector to understand which packages are causing the failure?

Yes. This will also make any future investigations much faster, IMO.
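
For reference, the collector configuration in the issue description already defines a logging exporter but never wires it into the metrics pipeline. Adding it there dumps the incoming metric payloads without any code change (a sketch based on that same configuration):

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, prometheus]  # logging is the debug exporter defined earlier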


andrm commented Dec 1, 2023

I'm getting this from wildfly 30:

Dec 01 07:56:42  t-02 otelcol[41089]: 2023-12-01T07:56:42.739Z        error        [email protected]/log.go:23        error gathering metrics: 2 error(s) occurred:
Dec 01 07:55:42 t-02 otelcol[41089]: * collected metric undertow_request_count_total label:{name:"app"  value:"wildfly"}  label:{name:"deployment"  value:""}  label:{name:"job"  value:"wildfly"}  label:{name:"name"  value:"default"}  label:{name:"subdeployment"  value:""}  label:{name:"type"  value:"http-listener"}  counter:{value:0} has help "The number of requests this listener has served" but should have "Number of all requests"
Dec 01 07:55:42 t-02 otelcol[41089]: * collected metric undertow_request_count_total label:{name:"app"  value:"wildfly"}  label:{name:"deployment"  value:""}  label:{name:"job"  value:"wildfly"}  label:{name:"name"  value:"https"}  label:{name:"subdeployment"  value:""}  label:{name:"type"  value:"https-listener"}  counter:{value:0} has help "The number of requests this listener has served" but should have "Number of all requests"
Dec 01 07:55:42 t-02 otelcol[41089]:         {"kind": "exporter", "data_type": "metrics", "name": "prometheus"}
Dec 01 07:55:42 t-02 otelcol[41089]: github.com/open-telemetry/opentelemetry-collector-contrib/exporter/prometheusexporter.(*promLogger).Println
Dec 01 07:55:42 t-02 otelcol[41089]:         github.com/open-telemetry/opentelemetry-collector-contrib/exporter/[email protected]/log.go:23
Dec 01 07:55:42 t-02 otelcol[41089]: github.com/prometheus/client_golang/prometheus/promhttp.HandlerForTransactional.func1
Dec 01 07:55:42 t-02 otelcol[41089]:         github.com/prometheus/[email protected]/prometheus/promhttp/http.go:144
Dec 01 07:55:42 t-02 otelcol[41089]: net/http.HandlerFunc.ServeHTTP
Dec 01 07:55:42 t-02 otelcol[41089]:         net/http/server.go:2136
Dec 01 07:55:42 t-02 otelcol[41089]: net/http.(*ServeMux).ServeHTTP
Dec 01 07:55:42 t-02 otelcol[41089]:         net/http/server.go:2514
Dec 01 07:55:42 t-02 otelcol[41089]: go.opentelemetry.io/collector/config/confighttp.(*decompressor).ServeHTTP
Dec 01 07:55:42 t-02 otelcol[41089]:         go.opentelemetry.io/collector/config/[email protected]/compression.go:147
Dec 01 07:55:42 t-02 otelcol[41089]: go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*middleware).serveHTTP
Dec 01 07:55:42 t-02 otelcol[41089]:         go.opentelemetry.io/contrib/instrumentation/net/http/[email protected]/handler.go:229
Dec 01 07:55:42 t-02 otelcol[41089]: go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.NewMiddleware.func1.1
Dec 01 07:55:42 t-02 otelcol[41089]:         go.opentelemetry.io/contrib/instrumentation/net/http/[email protected]/handler.go:81
Dec 01 07:55:42 t-02 otelcol[41089]: net/http.HandlerFunc.ServeHTTP
Dec 01 07:55:42 t-02 otelcol[41089]:         net/http/server.go:2136
Dec 01 07:55:42 t-02 otelcol[41089]: go.opentelemetry.io/collector/config/confighttp.(*clientInfoHandler).ServeHTTP
Dec 01 07:55:42 t-02 otelcol[41089]:         go.opentelemetry.io/collector/config/[email protected]/clientinfohandler.go:28
Dec 01 07:55:42 t-02 otelcol[41089]: net/http.serverHandler.ServeHTTP
Dec 01 07:55:42 t-02 otelcol[41089]:         net/http/server.go:2938
Dec 01 07:55:42 t-02 otelcol[41089]: net/http.(*conn).serve
Dec 01 07:55:42 t-02 otelcol[41089]:         net/http/server.go:2009

It's very confusing: how does the collector know the HELP comment? From Prometheus?


andrm commented Dec 1, 2023

It looks like a concurrency issue. I issue the same curl command twice and I get different entries for the same counter:

:~$ date && curl http://t-02:9989/metrics | grep undertow_request_count_total              
Fr 1. Dez 09:13:18 CET 2023
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 52413    0 52413    0     0  10.0M      0 --:--:-- --:--:-- --:--:-- 12.4M
# HELP undertow_request_count_total The number of requests this listener has served
# TYPE undertow_request_count_total counter
undertow_request_count_total{app="wildfly",deployment="",job="wildfly",name="default",subdeployment="",type="http-listener"} 0
undertow_request_count_total{app="wildfly",deployment="",job="wildfly",name="https",subdeployment="",type="https-listener"} 0
:~$ date && curl http://t-02:9989/metrics | grep undertow_request_count_total              
Fr 1. Dez 09:13:21 CET 2023
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 52320    0 52320    0     0  8948k      0 --:--:-- --:--:-- --:--:--  9.9M
# HELP undertow_request_count_total Number of all requests
# TYPE undertow_request_count_total counter
undertow_request_count_total{app="tb.war",deployment="tb.war",job="wildfly",name="com.XXXX.rest.RestApplication",subdeployment="tb.war",type="servlet"} 0
:~$  

Is this a WildFly problem?
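
A small Go sketch (assuming the same endpoint as in the curl session above) that polls the exporter and prints the HELP line for the affected metric, to watch it flip between scrapes:

package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
	"time"
)

func main() {
	// Endpoint taken from the curl session above.
	const url = "http://t-02:9989/metrics"
	for i := 0; i < 5; i++ {
		resp, err := http.Get(url)
		if err != nil {
			fmt.Println(err)
			return
		}
		sc := bufio.NewScanner(resp.Body)
		for sc.Scan() {
			// Print only the HELP comment for the affected metric.
			if strings.HasPrefix(sc.Text(), "# HELP undertow_request_count_total") {
				fmt.Println(time.Now().Format(time.RFC3339), sc.Text())
			}
		}
		resp.Body.Close()
		time.Sleep(2 * time.Second)
	}
}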


andrm commented Dec 1, 2023

Is there a way to debug what WildFly is sending?

github-actions bot commented:

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

github-actions bot added the Stale label Jan 31, 2024
eaisling commented:

I'm having the same issue with WildFly 30 publishing to the collector.

2024-01-31T12:51:54.544Z	error	[email protected]/log.go:23	error gathering metrics: 3 error(s) occurred:
* collected metric undertow_request_count_total label:{name:"app"  value:"uvw.ear"}  label:{name:"deployment"  value:"uvw.ear"}  label:{name:"host"  value:"foooo"}  label:{name:"instance"  value:"bob"}  label:{name:"job"  value:"wildfly"}  label:{name:"maininstance"  value:"false"}  label:{name:"name"  value:"de.bob.JaxRsCoreApplication"}  label:{name:"subdeployment"  value:"rest-api.war"}  label:{name:"type"  value:"servlet"}  counter:{value:0} has help "Number of all requests" but should have "The number of requests this listener has served"
* collected metric undertow_request_count_total label:{name:"app"  value:"uvw.ear"}  label:{name:"deployment"  value:"uvw.ear"}  label:{name:"host"  value:"foooo"}  label:{name:"instance"  value:"bob"}  label:{name:"job"  value:"wildfly"}  label:{name:"maininstance"  value:"false"}  label:{name:"name"  value:"de.bob.JaxRsCoreApplication"}  label:{name:"subdeployment"  value:"webs.war"}  label:{name:"type"  value:"servlet"}  counter:{value:0} has help "Number of all requests" but should have "The number of requests this listener has served"
* collected metric undertow_request_count_total label:{name:"app"  value:"uvw.ear"}  label:{name:"deployment"  value:"uvw.ear"}  label:{name:"host"  value:"foooo"}  label:{name:"instance"  value:"bob"}  label:{name:"job"  value:"wildfly"}  label:{name:"maininstance"  value:"false"}  label:{name:"name"  value:"de.bob.MyRestApplication"}  label:{name:"subdeployment"  value:"rest-api.war"}  label:{name:"type"  value:"servlet"}  counter:{value:0} has help "Number of all requests" but should have "The number of requests this listener has served"
	{"kind": "exporter", "data_type": "metrics", "name": "prometheus"}
github.com/open-telemetry/opentelemetry-collector-contrib/exporter/prometheusexporter.(*promLogger).Println
	github.com/open-telemetry/opentelemetry-collector-contrib/exporter/[email protected]/log.go:23
github.com/prometheus/client_golang/prometheus/promhttp.HandlerForTransactional.func1
	github.com/prometheus/[email protected]/prometheus/promhttp/http.go:144
net/http.HandlerFunc.ServeHTTP
	net/http/server.go:2136
net/http.(*ServeMux).ServeHTTP
	net/http/server.go:2514
go.opentelemetry.io/collector/config/confighttp.(*decompressor).ServeHTTP
	go.opentelemetry.io/collector/config/[email protected]/compression.go:147
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*middleware).serveHTTP
	go.opentelemetry.io/contrib/instrumentation/net/http/[email protected]/handler.go:225
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.NewMiddleware.func1.1
	go.opentelemetry.io/contrib/instrumentation/net/http/[email protected]/handler.go:83
net/http.HandlerFunc.ServeHTTP
	net/http/server.go:2136
go.opentelemetry.io/collector/config/confighttp.(*clientInfoHandler).ServeHTTP
	go.opentelemetry.io/collector/config/[email protected]/clientinfohandler.go:28
net/http.serverHandler.ServeHTTP
	net/http/server.go:2938
net/http.(*conn).serve
	net/http/server.go:2009

otel-collector-config

extensions:
  health_check:
  pprof:
    endpoint: 0.0.0.0:1777
  zpages:
    endpoint: 0.0.0.0:55679
    
receivers:
  otlp:
    protocols:
      grpc:
      http:
      
processors:
  batch:
  
exporters:
  prometheus:
    endpoint: "0.0.0.0:1234"
service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheus]
  extensions: [health_check, pprof, zpages]

github-actions bot removed the Stale label Feb 1, 2024

stmlange commented Feb 6, 2024

I bumped into this problem too, while using the Jenkins OpenTelemetry plugin.
Not sure if it is related to jenkinsci/opentelemetry-plugin#380.

Config:

....
  prometheus:
    endpoint: "opentelemetry-collector:9464"
    resource_to_telemetry_conversion:
      enabled: true
    enable_open_metrics: true
    add_metric_suffixes: false
....
      exporters:
        - logging # use for debugging only
        # Disabled until https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/26343 is fixed
        - prometheus

yields (note: censored output):

2024-02-06T16:25:34.490Z	info	MetricsExporter	{"kind": "exporter", "data_type": "metrics", "name": "logging", "resource metrics": 1, "metrics": 47, "data points": 128}
2024-02-06T16:25:34.490Z	info	ResourceMetrics #0
Resource SchemaURL: 
Resource attributes:
     -> container.id: Str(some-docker-hash-image-value)
     -> host.arch: Str(amd64)
     -> host.name: Str(some-docker-hash-image-value)
     -> jenkins.opentelemetry.plugin.version: Str(2.13.0)
     -> jenkins.url: Str(https://some-url/)
     -> jenkins.version: Str(2.387.3)
     -> os.description: Str(Linux)
     -> os.type: Str(linux)
     -> process.runtime.description: Str(Eclipse Adoptium OpenJDK 64-Bit Server VM 11)
     -> process.runtime.name: Str(OpenJDK Runtime Environment)
     -> process.runtime.version: Str(11)
     -> service.instance.id: Str(some-instance-id)
     -> service.name: Str(some-name)
     -> service.namespace: Str(some-name)
     -> service.version: Str(2.387.3)
     -> telemetry.sdk.language: Str(java)
     -> telemetry.sdk.name: Str(opentelemetry)
     -> telemetry.sdk.version: Str(1.25.0)
ScopeMetrics #0
ScopeMetrics SchemaURL: 
InstrumentationScope io.opentelemetry.sdk.trace 
Metric #0
Descriptor:
     -> Name: queueSize
     -> Description: The number of spans queued
     -> Unit: 1
     -> DataType: Gauge
NumberDataPoints #0
Data point attributes:
     -> spanProcessorType: Str(BatchSpanProcessor)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
Metric #1
Descriptor:
     -> Name: processedSpans
     -> Description: The number of spans processed by the BatchSpanProcessor. [dropped=true if they were dropped due to high throughput]
     -> Unit: 1
     -> DataType: Sum
     -> IsMonotonic: true
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
Data point attributes:
     -> dropped: Bool(false)
     -> spanProcessorType: Str(BatchSpanProcessor)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 5641
ScopeMetrics #1
ScopeMetrics SchemaURL: 
InstrumentationScope io.jenkins.opentelemetry 2.13.0
Metric #0
Descriptor:
     -> Name: ci.pipeline.run.launched
     -> Description: Job launched
     -> Unit: 1
     -> DataType: Sum
     -> IsMonotonic: true
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 318
Metric #1
Descriptor:
     -> Name: ci.pipeline.run.success
     -> Description: Job succeed
     -> Unit: 1
     -> DataType: Sum
     -> IsMonotonic: true
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 216
Metric #2
Descriptor:
     -> Name: jenkins.queue.time_spent_millis
     -> Description: Total time spent in queue by the tasks that have been processed
     -> Unit: ms
     -> DataType: Sum
     -> IsMonotonic: true
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 93000420
Metric #3
Descriptor:
     -> Name: jenkins.agents.online
     -> Description: Number of online agents
     -> Unit: 1
     -> DataType: Gauge
NumberDataPoints #0
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 2
Metric #4
Descriptor:
     -> Name: jenkins.executor.available
     -> Description: Available executors
     -> Unit: 1
     -> DataType: Gauge
NumberDataPoints #0
Data point attributes:
     -> label: Str(ubuntu)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #1
Data point attributes:
     -> label: Str(jenkins-slave-06)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #2
Data point attributes:
     -> label: Str(release-slave-ubuntu-06)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #3
Data point attributes:
     -> label: Str(built-in)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 2
NumberDataPoints #4
Data point attributes:
     -> label: Str(dpa)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #5
Data point attributes:
     -> label: Str(jenkins)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #6
Data point attributes:
     -> label: Str(release)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #7
Data point attributes:
     -> label: Str(slave-ubuntu-06)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
Metric #5
Descriptor:
     -> Name: ci.pipeline.run.aborted
     -> Description: Job aborted
     -> Unit: 1
     -> DataType: Sum
     -> IsMonotonic: true
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 7
Metric #6
Descriptor:
     -> Name: jenkins.scm.event.completed_tasks
     -> Description: Number of completed SCM Event tasks
     -> Unit: 1
     -> DataType: Sum
     -> IsMonotonic: true
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
Metric #7
Descriptor:
     -> Name: jenkins.agents.total
     -> Description: Number of agents
     -> Unit: 1
     -> DataType: Gauge
NumberDataPoints #0
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 4
Metric #8
Descriptor:
     -> Name: jenkins.queue.waiting
     -> Description: Number of tasks in the queue with the status 'waiting', 'buildable' or 'pending'
     -> Unit: 1
     -> DataType: Gauge
NumberDataPoints #0
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 1
Metric #9
Descriptor:
     -> Name: ci.pipeline.run.completed
     -> Description: Job completed
     -> Unit: 1
     -> DataType: Sum
     -> IsMonotonic: true
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 300
Metric #10
Descriptor:
     -> Name: jenkins.executor.busy
     -> Description: Busy executors
     -> Unit: 1
     -> DataType: Gauge
NumberDataPoints #0
Data point attributes:
     -> label: Str(ubuntu)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 12
NumberDataPoints #1
Data point attributes:
     -> label: Str(jenkins-slave-06)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 12
NumberDataPoints #2
Data point attributes:
     -> label: Str(release-slave-ubuntu-06)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #3
Data point attributes:
     -> label: Str(built-in)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #4
Data point attributes:
     -> label: Str(dpa)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 12
NumberDataPoints #5
Data point attributes:
     -> label: Str(jenkins)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 12
NumberDataPoints #6
Data point attributes:
     -> label: Str(release)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #7
Data point attributes:
     -> label: Str(jenkins-slave-ubuntu-06)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
Metric #11
Descriptor:
     -> Name: ci.pipeline.run.failed
     -> Description: Job failed
     -> Unit: 1
     -> DataType: Sum
     -> IsMonotonic: true
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 84
Metric #12
Descriptor:
     -> Name: jenkins.executor.queue
     -> Description: Defined executors
     -> Unit: 1
     -> DataType: Gauge
NumberDataPoints #0
Data point attributes:
     -> label: Str(ubuntu)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #1
Data point attributes:
     -> label: Str(jenkins-slave-06)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #2
Data point attributes:
     -> label: Str(release-slave-ubuntu-06)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #3
Data point attributes:
     -> label: Str(built-in)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #4
Data point attributes:
     -> label: Str(dpa)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 1
NumberDataPoints #5
Data point attributes:
     -> label: Str(jenkins)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #6
Data point attributes:
     -> label: Str(release)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #7
Data point attributes:
     -> label: Str(jenkins-slave-ubuntu-06)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
Metric #13
Descriptor:
     -> Name: jenkins.executor.idle
     -> Description: Idle executors
     -> Unit: 1
     -> DataType: Gauge
NumberDataPoints #0
Data point attributes:
     -> label: Str(ubuntu)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #1
Data point attributes:
     -> label: Str(jenkins-slave-06)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #2
Data point attributes:
     -> label: Str(release-slave-ubuntu-06)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #3
Data point attributes:
     -> label: Str(built-in)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 2
NumberDataPoints #4
Data point attributes:
     -> label: Str(dpa)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #5
Data point attributes:
     -> label: Str(jenkins)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #6
Data point attributes:
     -> label: Str(release)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #7
Data point attributes:
     -> label: Str(jenkins-slave-ubuntu-06)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
Metric #14
Descriptor:
     -> Name: jenkins.scm.event.pool_size
     -> Description: Number of threads handling SCM Events
     -> Unit: 1
     -> DataType: Sum
     -> IsMonotonic: true
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
Metric #15
Descriptor:
     -> Name: jenkins.executor.online
     -> Description: Online executors
     -> Unit: 1
     -> DataType: Gauge
NumberDataPoints #0
Data point attributes:
     -> label: Str(ubuntu)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 12
NumberDataPoints #1
Data point attributes:
     -> label: Str(jenkins-slave-06)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 12
NumberDataPoints #2
Data point attributes:
     -> label: Str(release-slave-ubuntu-06)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #3
Data point attributes:
     -> label: Str(built-in)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 2
NumberDataPoints #4
Data point attributes:
     -> label: Str(dpa)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 12
NumberDataPoints #5
Data point attributes:
     -> label: Str(jenkins)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 12
NumberDataPoints #6
Data point attributes:
     -> label: Str(release)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #7
Data point attributes:
     -> label: Str(jenkins-slave-ubuntu-06)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
Metric #16
Descriptor:
     -> Name: jenkins.queue.blocked
     -> Description: Number of blocked tasks in the queue. Note that waiting for an executor to be available is not a reason to be counted as blocked
     -> Unit: 1
     -> DataType: Gauge
NumberDataPoints #0
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
Metric #17
Descriptor:
     -> Name: jenkins.agents.offline
     -> Description: Number of offline agents
     -> Unit: 1
     -> DataType: Gauge
NumberDataPoints #0
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 2
Metric #18
Descriptor:
     -> Name: login
     -> Description: Logins
     -> Unit: 1
     -> DataType: Sum
     -> IsMonotonic: true
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 72
Metric #19
Descriptor:
     -> Name: jenkins.queue.buildable
     -> Description: Number of tasks in the queue with the status 'buildable' or 'pending'
     -> Unit: 1
     -> DataType: Gauge
NumberDataPoints #0
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 1
Metric #20
Descriptor:
     -> Name: jenkins.scm.event.active_threads
     -> Description: Number of threads actively handling SCM Events
     -> Unit: 1
     -> DataType: Sum
     -> IsMonotonic: false
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
Metric #21
Descriptor:
     -> Name: jenkins.executor.connecting
     -> Description: Connecting executors
     -> Unit: 1
     -> DataType: Gauge
NumberDataPoints #0
Data point attributes:
     -> label: Str(ubuntu)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #1
Data point attributes:
     -> label: Str(jenkins-slave-06)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #2
Data point attributes:
     -> label: Str(release-slave-ubuntu-06)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #3
Data point attributes:
     -> label: Str(built-in)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #4
Data point attributes:
     -> label: Str(dpa)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #5
Data point attributes:
     -> label: Str(jenkins)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #6
Data point attributes:
     -> label: Str(release)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #7
Data point attributes:
     -> label: Str(jenkins-slave-ubuntu-06)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
Metric #22
Descriptor:
     -> Name: jenkins.scm.event.queued_tasks
     -> Description: Number of queued SCM Event tasks
     -> Unit: 1
     -> DataType: Sum
     -> IsMonotonic: false
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
Metric #23
Descriptor:
     -> Name: ci.pipeline.run.started
     -> Description: Job started
     -> Unit: 1
     -> DataType: Sum
     -> IsMonotonic: true
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 318
Metric #24
Descriptor:
     -> Name: jenkins.queue.left
     -> Description: Total count of tasks that have been processed
     -> Unit: 1
     -> DataType: Sum
     -> IsMonotonic: true
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 1014
Metric #25
Descriptor:
     -> Name: login_success
     -> Description: Successful logins
     -> Unit: 1
     -> DataType: Sum
     -> IsMonotonic: true
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 72
Metric #26
Descriptor:
     -> Name: jenkins.executor.defined
     -> Description: Defined executors
     -> Unit: 1
     -> DataType: Gauge
NumberDataPoints #0
Data point attributes:
     -> label: Str(ubuntu)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 16
NumberDataPoints #1
Data point attributes:
     -> label: Str(jenkins-slave-06)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 12
NumberDataPoints #2
Data point attributes:
     -> label: Str(release-slave-ubuntu-06)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 2
NumberDataPoints #3
Data point attributes:
     -> label: Str(built-in)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 2
NumberDataPoints #4
Data point attributes:
     -> label: Str(dpa)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 12
NumberDataPoints #5
Data point attributes:
     -> label: Str(jenkins)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 16
NumberDataPoints #6
Data point attributes:
     -> label: Str(release)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 2
NumberDataPoints #7
Data point attributes:
     -> label: Str(jenkins-slave-ubuntu-06)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 4
ScopeMetrics #2
ScopeMetrics SchemaURL: 
InstrumentationScope io.opentelemetry.runtime-metrics 1.24.0-alpha
Metric #0
Descriptor:
     -> Name: process.runtime.jvm.threads.count
     -> Description: Number of executing threads
     -> Unit: 1
     -> DataType: Sum
     -> IsMonotonic: false
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
Data point attributes:
     -> daemon: Bool(false)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 21
NumberDataPoints #1
Data point attributes:
     -> daemon: Bool(true)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 47
Metric #1
Descriptor:
     -> Name: process.runtime.jvm.memory.limit
     -> Description: Measure of max obtainable memory
     -> Unit: By
     -> DataType: Sum
     -> IsMonotonic: false
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
Data point attributes:
     -> pool: Str(CodeHeap 'non-profiled nmethods')
     -> type: Str(non_heap)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 122912768
NumberDataPoints #1
Data point attributes:
     -> pool: Str(CodeHeap 'non-nmethods')
     -> type: Str(non_heap)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 5836800
NumberDataPoints #2
Data point attributes:
     -> pool: Str(G1 Old Gen)
     -> type: Str(heap)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 11045699584
NumberDataPoints #3
Data point attributes:
     -> pool: Str(Compressed Class Space)
     -> type: Str(non_heap)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 1073741824
NumberDataPoints #4
Data point attributes:
     -> pool: Str(CodeHeap 'profiled nmethods')
     -> type: Str(non_heap)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 122908672
Metric #2
Descriptor:
     -> Name: process.runtime.jvm.buffer.limit
     -> Description: Total capacity of the buffers in this pool
     -> Unit: By
     -> DataType: Sum
     -> IsMonotonic: false
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
Data point attributes:
     -> pool: Str(mapped)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 1
NumberDataPoints #1
Data point attributes:
     -> pool: Str(direct)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 578575
Metric #3
Descriptor:
     -> Name: process.runtime.jvm.memory.committed
     -> Description: Measure of memory committed
     -> Unit: By
     -> DataType: Sum
     -> IsMonotonic: false
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
Data point attributes:
     -> pool: Str(G1 Eden Space)
     -> type: Str(heap)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 3925868544
NumberDataPoints #1
Data point attributes:
     -> pool: Str(G1 Survivor Space)
     -> type: Str(heap)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 37748736
NumberDataPoints #2
Data point attributes:
     -> pool: Str(CodeHeap 'non-profiled nmethods')
     -> type: Str(non_heap)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 59506688
NumberDataPoints #3
Data point attributes:
     -> pool: Str(CodeHeap 'non-nmethods')
     -> type: Str(non_heap)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 2818048
NumberDataPoints #4
Data point attributes:
     -> pool: Str(G1 Old Gen)
     -> type: Str(heap)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 7082082304
NumberDataPoints #5
Data point attributes:
     -> pool: Str(Compressed Class Space)
     -> type: Str(non_heap)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 27131904
NumberDataPoints #6
Data point attributes:
     -> pool: Str(Metaspace)
     -> type: Str(non_heap)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 214499328
NumberDataPoints #7
Data point attributes:
     -> pool: Str(CodeHeap 'profiled nmethods')
     -> type: Str(non_heap)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 73990144
Metric #4
Descriptor:
     -> Name: process.runtime.jvm.gc.duration
     -> Description: Duration of JVM garbage collection actions
     -> Unit: ms
     -> DataType: Histogram
     -> AggregationTemporality: Cumulative
HistogramDataPoints #0
Data point attributes:
     -> action: Str(end of minor GC)
     -> gc: Str(G1 Young Generation)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Count: 462
Sum: 54105.000000
Min: 4.000000
Max: 3849.000000
ExplicitBounds #0: 0.000000
ExplicitBounds #1: 5.000000
ExplicitBounds #2: 10.000000
ExplicitBounds #3: 25.000000
ExplicitBounds #4: 50.000000
ExplicitBounds #5: 75.000000
ExplicitBounds #6: 100.000000
ExplicitBounds #7: 250.000000
ExplicitBounds #8: 500.000000
ExplicitBounds #9: 750.000000
ExplicitBounds #10: 1000.000000
ExplicitBounds #11: 2500.000000
ExplicitBounds #12: 5000.000000
ExplicitBounds #13: 7500.000000
ExplicitBounds #14: 10000.000000
Buckets #0, Count: 0
Buckets #1, Count: 5
Buckets #2, Count: 39
Buckets #3, Count: 88
Buckets #4, Count: 157
Buckets #5, Count: 61
Buckets #6, Count: 25
Buckets #7, Count: 43
Buckets #8, Count: 26
Buckets #9, Count: 8
Buckets #10, Count: 1
Buckets #11, Count: 7
Buckets #12, Count: 2
Buckets #13, Count: 0
Buckets #14, Count: 0
Buckets #15, Count: 0
Metric #5
Descriptor:
     -> Name: process.runtime.jvm.cpu.utilization
     -> Description: Recent cpu utilization for the process
     -> Unit: 1
     -> DataType: Gauge
NumberDataPoints #0
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0.003514
Metric #6
Descriptor:
     -> Name: process.runtime.jvm.system.cpu.load_1m
     -> Description: Average CPU load of the whole system for the last minute
     -> Unit: 1
     -> DataType: Gauge
NumberDataPoints #0
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0.090000
Metric #7
Descriptor:
     -> Name: process.runtime.jvm.classes.loaded
     -> Description: Number of classes loaded since JVM start
     -> Unit: 1
     -> DataType: Sum
     -> IsMonotonic: true
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 36111
Metric #8
Descriptor:
     -> Name: process.runtime.jvm.classes.current_loaded
     -> Description: Number of classes currently loaded
     -> Unit: 1
     -> DataType: Sum
     -> IsMonotonic: false
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 30102
Metric #9
Descriptor:
     -> Name: process.runtime.jvm.classes.unloaded
     -> Description: Number of classes unloaded since JVM start
     -> Unit: 1
     -> DataType: Sum
     -> IsMonotonic: true
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 6009
Metric #10
Descriptor:
     -> Name: process.runtime.jvm.buffer.usage
     -> Description: Memory that the Java virtual machine is using for this buffer pool
     -> Unit: By
     -> DataType: Sum
     -> IsMonotonic: false
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
Data point attributes:
     -> pool: Str(mapped)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 1
NumberDataPoints #1
Data point attributes:
     -> pool: Str(direct)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 578575
Metric #11
Descriptor:
     -> Name: process.runtime.jvm.buffer.count
     -> Description: The number of buffers in the pool
     -> Unit: {buffers}
     -> DataType: Sum
     -> IsMonotonic: false
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
Data point attributes:
     -> pool: Str(mapped)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 1
NumberDataPoints #1
Data point attributes:
     -> pool: Str(direct)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 52
Metric #12
Descriptor:
     -> Name: process.runtime.jvm.memory.init
     -> Description: Measure of initial memory requested
     -> Unit: By
     -> DataType: Sum
     -> IsMonotonic: false
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
Data point attributes:
     -> pool: Str(G1 Eden Space)
     -> type: Str(heap)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 35651584
NumberDataPoints #1
Data point attributes:
     -> pool: Str(G1 Survivor Space)
     -> type: Str(heap)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #2
Data point attributes:
     -> pool: Str(CodeHeap 'non-profiled nmethods')
     -> type: Str(non_heap)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 2555904
NumberDataPoints #3
Data point attributes:
     -> pool: Str(CodeHeap 'non-nmethods')
     -> type: Str(non_heap)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 2555904
NumberDataPoints #4
Data point attributes:
     -> pool: Str(G1 Old Gen)
     -> type: Str(heap)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 656408576
NumberDataPoints #5
Data point attributes:
     -> pool: Str(Compressed Class Space)
     -> type: Str(non_heap)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #6
Data point attributes:
     -> pool: Str(Metaspace)
     -> type: Str(non_heap)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #7
Data point attributes:
     -> pool: Str(CodeHeap 'profiled nmethods')
     -> type: Str(non_heap)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 2555904
Metric #13
Descriptor:
     -> Name: process.runtime.jvm.memory.usage
     -> Description: Measure of memory used
     -> Unit: By
     -> DataType: Sum
     -> IsMonotonic: false
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
Data point attributes:
     -> pool: Str(G1 Eden Space)
     -> type: Str(heap)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 413138944
NumberDataPoints #1
Data point attributes:
     -> pool: Str(G1 Survivor Space)
     -> type: Str(heap)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 37748736
NumberDataPoints #2
Data point attributes:
     -> pool: Str(CodeHeap 'non-profiled nmethods')
     -> type: Str(non_heap)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 58739840
NumberDataPoints #3
Data point attributes:
     -> pool: Str(CodeHeap 'non-nmethods')
     -> type: Str(non_heap)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 2641280
NumberDataPoints #4
Data point attributes:
     -> pool: Str(G1 Old Gen)
     -> type: Str(heap)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 1402521008
NumberDataPoints #5
Data point attributes:
     -> pool: Str(Compressed Class Space)
     -> type: Str(non_heap)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 20638872
NumberDataPoints #6
Data point attributes:
     -> pool: Str(Metaspace)
     -> type: Str(non_heap)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 186798808
NumberDataPoints #7
Data point attributes:
     -> pool: Str(CodeHeap 'profiled nmethods')
     -> type: Str(non_heap)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 71704320
Metric #14
Descriptor:
     -> Name: process.runtime.jvm.system.cpu.utilization
     -> Description: Recent cpu utilization for the whole system
     -> Unit: 1
     -> DataType: Gauge
NumberDataPoints #0
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0.022191
Metric #15
Descriptor:
     -> Name: process.runtime.jvm.memory.usage_after_last_gc
     -> Description: Measure of memory used after the most recent garbage collection event on this pool
     -> Unit: By
     -> DataType: Sum
     -> IsMonotonic: false
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
Data point attributes:
     -> pool: Str(G1 Eden Space)
     -> type: Str(heap)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 0
NumberDataPoints #1
Data point attributes:
     -> pool: Str(G1 Survivor Space)
     -> type: Str(heap)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 37748736
NumberDataPoints #2
Data point attributes:
     -> pool: Str(G1 Old Gen)
     -> type: Str(heap)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 1217955216
ScopeMetrics #3
ScopeMetrics SchemaURL: 
InstrumentationScope io.opentelemetry.exporters.otlp-grpc-okhttp 
Metric #0
Descriptor:
     -> Name: otlp.exporter.exported
     -> Description: 
     -> Unit: 
     -> DataType: Sum
     -> IsMonotonic: true
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
Data point attributes:
     -> success: Bool(true)
     -> type: Str(span)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 5641
NumberDataPoints #1
Data point attributes:
     -> success: Bool(false)
     -> type: Str(span)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 1034
Metric #1
Descriptor:
     -> Name: otlp.exporter.seen
     -> Description: 
     -> Unit: 
     -> DataType: Sum
     -> IsMonotonic: true
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
Data point attributes:
     -> type: Str(span)
StartTimestamp: 2024-01-31 09:06:33.004223 +0000 UTC
Timestamp: 2024-02-06 16:25:33.02364 +0000 UTC
Value: 6675
	{"kind": "exporter", "data_type": "metrics", "name": "logging"}
2024-02-06T16:25:37.383Z	error	[email protected]/log.go:23	error gathering metrics: collected metric queueSize label:{name:"container_id" value:"some-container-id"} label:{name:"host_arch" value:"amd64"} label:{name:"host_name" value:"some-hostname"} label:{name:"instance" value:"some-intance-value"} label:{name:"jenkins_opentelemetry_plugin_version" value:"2.18.0"} label:{name:"jenkins_url" value:"https://some-url/"} label:{name:"jenkins_version" value:"2.387.3"} label:{name:"job" value:"some-job"} label:{name:"logRecordProcessorType" value:"BatchLogRecordProcessor"} label:{name:"os_description" value:"Linux"} label:{name:"os_type" value:"linux"} label:{name:"process_runtime_description" value:"Eclipse Adoptium OpenJDK 64-Bit Server VM 11"} label:{name:"process_runtime_name" value:"OpenJDK Runtime Environment"} label:{name:"process_runtime_version" value:"11"} label:{name:"service_instance_id" value:"some-instance-id"} label:{name:"service_name" value:"jenkins"} label:{name:"service_namespace" value:"jenkins"} label:{name:"service_version" value:"2.387.3"} label:{name:"telemetry_sdk_language" value:"java"} label:{name:"telemetry_sdk_name" value:"opentelemetry"} label:{name:"telemetry_sdk_version" value:"1.30.1"} gauge:{value:0} has help "The number of logs queued" but should have "The number of spans queued"
	{"kind": "exporter", "data_type": "metrics", "name": "prometheus"}
github.com/open-telemetry/opentelemetry-collector-contrib/exporter/prometheusexporter.(*promLogger).Println
	github.com/open-telemetry/opentelemetry-collector-contrib/exporter/[email protected]/log.go:23
github.com/prometheus/client_golang/prometheus/promhttp.HandlerForTransactional.func1
	github.com/prometheus/[email protected]/prometheus/promhttp/http.go:144
net/http.HandlerFunc.ServeHTTP
	net/http/server.go:2136
net/http.(*ServeMux).ServeHTTP
	net/http/server.go:2514
go.opentelemetry.io/collector/config/confighttp.(*decompressor).ServeHTTP
	go.opentelemetry.io/collector/config/[email protected]/compression.go:147
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*middleware).serveHTTP
	go.opentelemetry.io/contrib/instrumentation/net/http/[email protected]/handler.go:225
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.NewMiddleware.func1.1
	go.opentelemetry.io/contrib/instrumentation/net/http/[email protected]/handler.go:83
net/http.HandlerFunc.ServeHTTP
	net/http/server.go:2136
go.opentelemetry.io/collector/config/confighttp.(*clientInfoHandler).ServeHTTP
	go.opentelemetry.io/collector/config/[email protected]/clientinfohandler.go:28
net/http.serverHandler.ServeHTTP
	net/http/server.go:2938
net/http.(*conn).serve
	net/http/server.go:2009


github-actions bot commented Apr 8, 2024

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

github-actions bot added the Stale label Apr 8, 2024
crobert-1 removed the Stale label Apr 8, 2024
github-actions bot commented:

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

github-actions bot added the Stale label Jun 10, 2024
crobert-1 removed the Stale label Jun 10, 2024

thoraage commented Jun 20, 2024

It seems that this is the error that the WildFly users are experiencing: https://issues.redhat.com/browse/WFLY-18300. It would probably not hurt to give it a vote.
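
Until the instrumented applications agree on a single description per metric name, one possible collector-side mitigation is to normalize the description before it reaches the Prometheus exporter, for example with the transform processor. A sketch only, not verified against this exact setup; the statement syntax follows the transformprocessor README, and the processor would still need to be added to the metrics pipeline:

processors:
  transform:
    metric_statements:
      - context: metric
        statements:
          - set(description, "The number of requests this listener has served") where name == "undertow_request_count_total"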

github-actions bot commented:

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

github-actions bot added the Stale label Aug 20, 2024
github-actions bot commented:

This issue has been closed as inactive because it has been stale for 120 days with no activity.

github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) Oct 19, 2024