Some metrics started being reported as zero when memory usage is high #22361
Labels
- `sink: kafka` (anything `kafka` sink related)
- `source: kubernetes_logs` (anything `kubernetes_logs` source related)
- `type: bug` (a code related bug)
Problem
We run Vector as a sidecar that ships the logs of a single pod: the `kubernetes_logs` source reads that pod's logs and the `kafka` sink pushes them to Kafka.
Due to an issue on the Kafka side, events were buffered in memory and memory usage increased; the `queue.buffering.max.kbytes` limit was also reached.
However, once the Kafka issue was resolved, some metrics started reporting a value of 0 and other metrics disappeared entirely, even though a tcpdump capture shows that traffic is still flowing to Kafka.
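For reference, a minimal sketch of the kind of setup described above. The report does not include the actual configuration, so all component names, the broker address, the field selector, the exporter address, and the option values here are assumptions, not the reporter's settings:

```toml
# Hypothetical sidecar configuration matching the description above.
# Every name and value is illustrative only.

[sources.pod_logs]
type = "kubernetes_logs"
# Assumed way of restricting collection to the single pod the sidecar
# runs next to (pod name injected via the POD_NAME env var).
extra_field_selector = "metadata.name=${POD_NAME}"

# Internal metrics source, so the component metrics that reportedly
# dropped to zero or vanished can be observed.
[sources.internal]
type = "internal_metrics"

[sinks.kafka_out]
type = "kafka"
inputs = ["pod_logs"]
bootstrap_servers = "kafka-broker:9092"
topic = "pod-logs"
encoding.codec = "json"
# librdkafka producer option mentioned in the report; the value is an example.
librdkafka_options."queue.buffering.max.kbytes" = "1048576"

[sinks.metrics_out]
type = "prometheus_exporter"
inputs = ["internal"]
address = "0.0.0.0:9598"
```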
Configuration
Version
0.41.1-alpine
Debug Output
Example Data
No response
Additional Context
References
No response