Some metrics started being reported as zero when memory usage was high #22361

Open

ShahroZafar opened this issue Feb 4, 2025 · 0 comments

Labels: sink: kafka, source: kubernetes_logs, type: bug

ShahroZafar commented Feb 4, 2025

A note for the community

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Problem

We have Vector running as a sidecar that ships the logs of a single pod. The Vector process reads that pod's logs with the kubernetes_logs source and pushes them to a kafka sink.

Due to an issue on the Kafka side, events were buffered in memory and memory usage increased. The queue.buffering.max.kbytes limit was also reached.

However, once the Kafka issue was resolved, some metrics started reporting a value of 0 and other metrics vanished entirely.

I ran tcpdump and can see that traffic is still flowing to Kafka.
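For context, the zeroed and missing series can be checked directly by scraping the prometheus_exporter endpoint defined in the configuration below. A minimal sketch of that check (the component_id label and grep pattern are illustrative and may vary by Vector version):

    # Scrape Vector's own Prometheus endpoint (0.0.0.0:9090 per the config below)
    # and filter for the kafka sink's series. During the incident some of these
    # reported 0 while others were missing from the output entirely.
    curl -s http://localhost:9090/metrics | grep 'component_id="kafka"'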

Configuration

vector.yaml: |
    acknowledgements:
      enabled: true
    api:
      address: 0.0.0.0:8686
      enabled: true
      playground: false
    data_dir: /vector-data-dir
    expire_metrics_secs: 300
    sinks:
      kafka:
        batch:
          max_bytes: 1000000
          max_events: 10000
          timeout_secs: 2
        bootstrap_servers: kafka:9092
        buffer:
          max_events: 5000
          type: memory
          when_full: block
        compression: zstd
        encoding:
          codec: json
        inputs:
        - kubernetes_logs
        librdkafka_options:
          client.id: vector
          queue.buffering.max.kbytes: "75000"
          request.required.acks: "1"
        message_timeout_ms: 0
        topic: vector-topic
        type: kafka
      prometheus_exporter:
        address: 0.0.0.0:9090
        buffer:
          max_events: 500
          type: memory
          when_full: block
        flush_period_secs: 60
        inputs:
        - internal_metrics
        type: prometheus_exporter
    sources:
      internal_metrics:
        type: internal_metrics
      kubernetes_logs:
        glob_minimum_cooldown_ms: 5000
        include_paths_glob_patterns:
        - /var/log/pods/${VECTOR_SELF_POD_NAMESPACE}_${VECTOR_SELF_POD_NAME}_*/container/*
        ingestion_timestamp_field: ingest_timestamp
        namespace_annotation_fields:
          namespace_labels: ""
        node_annotation_fields:
          node_labels: ""
        pod_annotation_fields:
          container_id: ""
          container_image_id: ""
          pod_annotations: ""
          pod_labels: ""
          pod_owner: ""
          pod_uid: ""
        type: kubernetes_logs
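Since the API is enabled in this configuration (address 0.0.0.0:8686), per-component throughput can also be cross-checked against it independently of the exported metrics. A rough sketch, assuming the commands are run inside the sidecar container:

    # Health check against the Vector API enabled above.
    curl -s http://localhost:8686/health

    # `vector top` talks to the same API and shows live per-component event
    # rates, useful for confirming the kafka sink is still sending even when
    # the exported metrics read 0 or are absent.
    vector top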

Version

0.41.1-alpine

Debug Output


Example Data

No response

Additional Context

(Three screenshots attached to the original issue.)

References

No response

ShahroZafar added the type: bug label on Feb 4, 2025
pront added the sink: kafka and source: kubernetes_logs labels on Feb 4, 2025