[metricbeat] Failed to parse kubernetes.labels.app #12638
Comments
Thanks @nerddelphi, can you also paste the error message you saw here, please?
@kaiyan-sheng These errors: [error screenshots]
@nerddelphi Thank you! I suspect it's because of how we define …
@kaiyan-sheng OK. I appreciate it and I hope we can fix that :)
Got the same issue. Is there any fix/workaround available yet? The Filebeat fixes (drop fields, rename) don't work for Metricbeat.
I'm still having this error with Metricbeat 7.6.0 monitoring GKE. Any clues here?

    {"type":"mapper_parsing_exception","reason":"failed to parse field [kubernetes.pod.labels.app] of type [keyword] in document with id 'MECzcnABNHSy86_GVDf1'. Preview of field's value: '{kubernetes={io/instance=vault, io/name=vault}}'","caused_by":{"type":"illegal_state_exception","reason":"Can't get text on a START_OBJECT at 1:338"}}
Having the same issue on AWS EKS:

    {"type":"mapper_parsing_exception","reason":"failed to parse field [kubernetes.labels.statefulset] of type [keyword] in document with id 'HfFwzXABACATvuNpI5wp'. Preview of field's value: '{kubernetes={io/pod-name=elastic-operator-0}}'","caused_by":{"type":"illegal_state_exception","reason":"Can't get text on a START_OBJECT at 1:795"}}
Please re-open this issue. The Beats v7 template …
Some k8s containers have these kinds of labels (an illustrative example follows below) …
Inserting these triggers a mapping error.
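For context, here is a minimal illustration of the kind of dotted label keys involved. The label names and values are taken from the vault errors quoted earlier in this thread; the rest of the manifest fragment is an assumption for illustration only:

```yaml
# Illustrative pod metadata, not taken verbatim from the issue.
# Dotted label keys like app.kubernetes.io/name are what get expanded
# into nested objects in the Beats event and then clash with the
# keyword mapping in the index template.
metadata:
  labels:
    app.kubernetes.io/name: vault
    app.kubernetes.io/instance: vault
```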
Sorry, #16857 has only been backported to 7.7.0 for now.
My issue was not related to this bug. I added the setting "labels.dedot: true" to solve it. Sorry for the noise.
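For reference, a minimal sketch of one place a setting like labels.dedot can live in a Metricbeat kubernetes module configuration. The hosts value is copied from the event in the original report below; the metricset, period, and exact placement of the option are assumptions, not the reporter's actual config:

```yaml
# Sketch only: enabling label de-dotting for the kubernetes module.
metricbeat.modules:
  - module: kubernetes
    metricsets: ["state_deployment"]
    hosts: ["kube-state-metrics.kube-system.svc.cluster.local:8080"]
    period: 10s
    # Replace the dots in label keys (e.g. app.kubernetes.io/name) with
    # underscores so they are indexed as flat keyword fields instead of
    # being expanded into nested objects that break the mapping.
    labels.dedot: true
```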
Hi everyone. Based on the closed issue #8773 - Filebeat and Kubernetes pods with labels (helm deployments, etc.) - I guess it wasn't fixed after all. I'm getting the same behavior even when using the latest version of Metricbeat (7.1.1) and this config:
When Metricbeat tries to push events like the one below to Elasticsearch:

    { "@timestamp": "2019-06-21T14:30:06.333Z", "@metadata": { "beat": "metricbeat", "type": "_doc", "version": "7.1.1" }, "ecs": { "version": "1.0.0" }, "cloud": { "availability_zone": "us-east1-b", "instance": { "id": "123123213213", "name": "gke-staging-preemptible-pool-xxxxxxx" }, "machine": { "type": "n1-standard-8" }, "project": { "id": "kubernetes-staging-220222" }, "provider": "gcp" }, "kubernetes": { "deployment": { "name": "shared-queue-preemptive", "replicas": { "unavailable": 0, "updated": 1, "desired": 1, "available": 1 }, "paused": false }, "namespace": "shared-queue", "labels": { "app": { "kubernetes": { "io/instance": "shared-queue", "io/managed-by": "Tiller", "io/name": "shared-queue", "io/version": "1.3.25" } }, "helm": { "sh/chart": "microservice-0.1.2" } } }, "metricset": { "name": "state_deployment" }, "service": { "address": "kube-state-metrics.kube-system.svc.cluster.local:8080", "type": "kubernetes" }, "event": { "module": "kubernetes", "duration": 384732698, "dataset": "kubernetes.deployment" }, "host": { "name": "gke-staging-preemptible-pool-xxxxxx" }, "agent": { "hostname": "gke-staging-preemptible-pool-xxxxxxx", "id": "d6e948a1-419e-440c-bdd3-c110209e5942", "version": "7.1.1", "type": "metricbeat", "ephemeral_id": "8810147c-0e44-4583-a663-b91c788309c4" } }
Obviously, this is because of these labels:

    "kubernetes": {
      "deployment": {
        "name": "shared-queue-preemptive",
        "replicas": {
          "unavailable": 0,
          "updated": 1,
          "desired": 1,
          "available": 1
        },
        "paused": false
      },
      "namespace": "shared-queue",
      "labels": {
        "app": {
          "kubernetes": {
            "io/instance": "shared-queue",
            "io/managed-by": "Tiller",
            "io/name": "shared-queue",
            "io/version": "1.3.25"
          }
        },
        "helm": {
          "sh/chart": "microservice-0.1.2"
        }
      }
    }
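For comparison, with label de-dotting enabled (the labels.dedot: true workaround mentioned earlier in this thread) the same labels would be indexed as flat keys with the dots replaced by underscores, roughly like this illustrative fragment (values taken from the event above):

```json
{
  "kubernetes": {
    "labels": {
      "app_kubernetes_io/instance": "shared-queue",
      "app_kubernetes_io/managed-by": "Tiller",
      "app_kubernetes_io/name": "shared-queue",
      "app_kubernetes_io/version": "1.3.25",
      "helm_sh/chart": "microservice-0.1.2"
    }
  }
}
```

Each label then maps to a single flat field, which matches what the keyword mapping expects instead of the nested objects that trigger the mapper_parsing_exception.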