[stable/fluent-bit] Document how to parse JSON logs produced by an application #10424
Comments
Thank you for the link @kfox1111. I have tried multiple configuration combinations, including:

```yaml
backend:
  type: es
  es:
    host: elasticsearch-client
image:
  fluent_bit:
    tag: 1.0.1
parsers:
  json:
    - extraEntries: |-
        Decode_Field_As escaped log do_next
        Decode_Field_As json log
```
and

```yaml
backend:
  type: es
  es:
    host: elasticsearch-client
image:
  fluent_bit:
    tag: 1.0.1
parsers:
  json:
    - extraEntries: |-
        Decode_Field_As escaped_utf8 log do_next
        Decode_Field_As json log
```
but it continues to come through as a plain-text log:

```
{
  "_index": "kubernetes_cluster-2019.01.07",
  "_type": "flb_type",
  "_id": "HnceJmgBINsxlb1YnF_W",
  "_score": 1,
  "_source": {
    "@timestamp": "2019-01-07T02:22:46.865Z",
    "log": "2019-01-07T02:22:46.865426429Z stdout F {\"context\":{\"package\":\"slonik\",\"logLevel\":20,\"executionTime\":\"1.28 ms\",\"rowCount\":1,\"sql\":\"SELECT id FROM event_seat_state_change WHERE event_id = ? ORDER BY id DESC LIMIT 1\"},\"message\":\"query\",\"sequence\":47781,\"time\":1546827766865,\"version\":\"1.0.0\"}",
    "kubernetes": {
      "pod_name": "adm-do-event-seating-lookups-7555bf849c-h4qlc",
      "namespace_name": "default",
      "pod_id": "563e747d-1208-11e9-b648-42010aa40038",
      "labels": {
        "chart-name": "data-manager",
        "chart-version": "1.0.0",
        "heritage": "Tiller",
        "pod-template-hash": "3111694057",
        "release": "adm-do-event-seating-lookups"
      },
      [...]
    },
    "fields": {
      "@timestamp": [
        "2019-01-07T02:22:46.865Z"
      ]
    }
}
```
|
@gajus but your key. |
My understanding is that the prefix to which you are referring is added by Docker. I have SSHed to one of the GKE nodes and checked the raw logs to confirm this:

```
$ tail -1 /var/log/pods/6829cea2-1208-11e9-b648-42010aa40038/data-manager/0.log
2019-01-07T10:52:37.095818517Z stdout F {"context":{"package":"@applaudience/cinema-data-scraper-http-client","namespace":"httpClient","logLevel":20},"message":"using proxy http://proxy-gateway:8080","sequence":180167,"time":1546858357095,"version":"1.0.0"}
```
The app itself simply outputs a JSON line, i.e. just:
My understanding is that fluent-bit is supposed to search the log message for occurrences satisfying the JSON parser and extract that information. |
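For context on why that fails here: the raw line shown above carries a `<timestamp> <stream> <flag>` prefix in front of the JSON payload, so a parser that expects pure JSON never matches. A minimal sketch of a regex parser that splits the prefix from the payload and then decodes the payload as JSON; the parser name and capture-group names are illustrative, not the chart's defaults:

```
[PARSER]
    Name        prefixed-json
    Format      regex
    # split "<time> <stream> <flag>" from the remaining payload
    Regex       ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<flag>[^ ]*) (?<log>.*)$
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S.%L%z
    # second pass: interpret the captured payload as structured JSON
    Decode_Field_As json log
```

Such a parser would then be referenced from the tail input (or a parser filter) in place of the default docker parser.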
I see. FWIW, this works for us with chart version
But it only works with top-level JSON keys as far as I can tell. And here's one line from an nginx container that is set to log in JSON:
This works for us in Splunk, not ES. |
Do you mind telling me what the contents of yours is? Mine is:

```json
{
  "disable-legacy-registry": true,
  "live-restore": true,
  "storage-driver": "overlay2"
}
```
Shouldn't this say |
Never mind. It appears that:

```
$ docker info | grep 'Logging Driver: json-file'
Logging Driver: json-file
```
|
That is strange. I logged in to another cluster that I own to see what the output format is, and it is indeed just plain JSON. Something somewhere is broken in my setup. |
Mine is the default for an EKS node on AWS:
I think the default GKE node has the same |
I have identified the cause. The reason the log format is off is because of what I am using. I have not discovered the fix yet. |
There is something utterly broken with fluent-bit (or poorly documented). |
After many hours... it looks like I needed to configure the Kubernetes filter: https://docs.fluentbit.io/manual/filter/kubernetes Everything else works after configuring it. |
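The option in that filter that does the JSON lifting is Merge_Log. A minimal sketch of the filter section, with the match pattern and merge key chosen for illustration rather than taken from the chart:

```
[FILTER]
    Name                kubernetes
    Match               kube.*
    # parse the "log" field as JSON and add its keys to the record
    Merge_Log           On
    # optional: nest the merged keys under this key instead of the record root
    Merge_Log_Key       log_processed
    K8S-Logging.Parser  On
    K8S-Logging.Exclude On
```

With Merge_Log On, the filter parses the log field as JSON and appends the resulting keys to the record (under log_processed when Merge_Log_Key is set).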
@gajus I had the same issue, and I can see it works only if you have a JSON map; if you have nested JSON, the decoder will still fail.
|
An issue was created several months ago regarding this matter: fluent/fluent-bit#1130. Who knows C? :) |
There are many issues for this, just search. In this one they have a list of them. It's a fluent-bit problem though, not a chart issue. |
I found this to be a working configuration to properly format JSON:
|
I am also experiencing the same problem as others in this thread, and have tried the solutions presented here without success. In every case I'm seeing the same result. @TarekAS, @gajus: since you both have this working, is it still working for you? |
Hi, I'm facing this same problem ("log": "2019-09-17T18:17:07.799810563Z stdout F { ... }") using a managed cluster from IBM Cloud. I've tried all the options in this post without success. |
I am facing the same issue ("log": "2019-09-17T18:17:07.799810563Z stdout F { ... }") on an OpenShift cluster. Any suggestions? |
same issue on k3s cluster |
Same here:
|
I tried the same, but it still appears to have problems with the |
Log messages from app containers in an OpenShift cluster are modified before they are saved to log files. If the log message from the app container is like the prefixed examples above, then in the fluent-bit config, have one
and have
This will change the log message to look like:
|
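The second half of that setup is typically a parser filter that applies a prefix-stripping parser (like the one sketched earlier in the thread) to the log key while keeping the rest of the record; the names below are placeholders:

```
[FILTER]
    Name         parser
    Match        kube.*
    # field that holds the raw, prefixed line
    Key_Name     log
    # parser defined in parsers.conf, e.g. the prefixed-json sketch above
    Parser       prefixed-json
    # keep the other keys (kubernetes metadata, etc.) on the record
    Reserve_Data On
```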
@sgujrati-up I got it partly working by simply setting that up. However, some of my messages contain logs in JSON which I would also like to parse. Some look like this:
Perfect, nothing to change here. But some look like this:
Here I would also love to parse the JSON and merge it. All logs come from Kubernetes applications. |
Soooo, I got it working. In addition to the above, I just needed to add another filter to the configuration.
Now it works! |
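For anyone reading along, the extra filter in a case like this is typically a second parser pass that runs a plain JSON parser over whichever key still holds the serialized JSON; the parser name and key name here are guesses, not from the original comment:

```
[PARSER]
    Name    app-json
    Format  json

[FILTER]
    Name         parser
    Match        kube.*
    # key that still contains a JSON string after the first pass
    Key_Name     message
    Parser       app-json
    Reserve_Data On
```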
@boxcee, could you show us the full config for fluent-bit where the log is parsed as JSON, please? |
So I spent some time getting this working.
filters.conf
inputs.conf
To get the Docker container logging working, I added tagging (the forward plugin does not support Tag), so these are the options I have on my Docker container:
|
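For the Docker side, the fluentd logging driver's documented tag option is the usual way to attach a tag when fluent-bit's forward input is used; the address, tag, and image below are placeholders:

```
docker run \
  --log-driver=fluentd \
  --log-opt fluentd-address=127.0.0.1:24224 \
  --log-opt tag=docker.my-app \
  my-app:latest
```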
Passing these values to the Helm chart solves this issue for me in k3d clusters:
|
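A minimal sketch of values along those lines, assuming the chart version in use exposes a filter.mergeJSONLog switch for the Kubernetes filter's Merge_Log option and an input.tail.parser key (verify the key names against the chart's values.yaml):

```yaml
backend:
  type: es
  es:
    host: elasticsearch-client
filter:
  # maps to Merge_Log On in the kubernetes filter
  mergeJSONLog: true
input:
  tail:
    # k3d/containerd writes CRI-format logs; 'cri' must exist in parsers.conf
    parser: cri
```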
Which chart:
stable/fluent-bit
What happened:
An application produces a JSON log, e.g.
but ES receives the log as a plain text message (log field):
What you expected to happen:
I expect fluent-bit to parse the JSON message and provide the parsed message to ES.
How to reproduce it (as minimally and precisely as possible):
Using default configuration.
CC @naseemkullah @jknipper @vroyer (Recent contributors to stable/fluent-bit Chart).
What is the configuration for parsing JSON logs produced by an application running in a Kubernetes Pod?