
[Infra UI] Ensure correct message reconstruction for all official filebeat modules #26759

Closed
weltenwort opened this issue Dec 6, 2018 · 20 comments

@weltenwort
Member

weltenwort commented Dec 6, 2018

The heuristics used to reconstruct the message from the documents created by the official filebeat modules should support all kinds of log events.

Known issues with pre-ECS formats are covered by the following issues:

Compatibility with various modules in ECS format has been improved in #31120.

@weltenwort weltenwort added the Feature:Logs UI label Dec 6, 2018
@elasticmachine
Contributor

Pinging @elastic/infrastructure-ui

@mathieudz

Same thing for nginx error log (access log is fine).

@bvader

bvader commented Dec 11, 2018

Here is my example.

I followed the "Add log data" instructions for MySQL in Kibana.

Added the Filebeat MySQL module to:
MySQL ver. 5.7.24-0
ubuntu0.18.04.1

Sample Log Lines

2018-12-07T02:19:36.564599Z 29 [Note] Access denied for user 'petclinicdd'@'47.153.152.234' (using password: YES)
2018-12-07T02:19:38.607311Z 30 [Note] Access denied for user 'petclinicdd'@'47.153.152.234' (using password: YES)

What the lines look like in Discover:
[screenshot of the Discover view]

What they look like in the Logs viewer:

2018-12-06 18:19:36.564 failed to format message from /var/log/mysql/error.log
2018-12-06 18:19:38.607 failed to format message from /var/log/mysql/error.log

@welderpb

Same for Logstash module:
failed to format message from /var/log/logstash/logstash-plain.log

@wixxerd

wixxerd commented Dec 21, 2018

This is impacting Filebeat as well: I'm getting this when I ship IIS logs. It produces a lot of log entries, which also consumes space in the deployment.

@omaryoussef

Seeing the same here with Filebeat: MySQL and nginx error logs all fail to format, but there are no problems with nginx access logs. Are there any temporary fixes for this?

@hadleylion

+1 urgency

@weltenwort @welderpb

[screenshot]

Having the same issue with Filebeat for the Logstash module.

@jasonsattler

Not sure if this is the issue or not, but I found that the Filebeat modules use the field log instead of message. In Logstash I added a mutate to rename the log field to message, and the entries then started to show up in Kibana Logs.

@hadleylion

Not sure if this is the issue or not, but I found that the Filebeat modules use the field log instead of message. In Logstash I added a mutate to rename the log field to message, and the entries then started to show up in Kibana Logs.

Did the trick! Thanks @jasonsattler

@simianhacker
Member

I'm adding rules for MySQL slow and error logs via #28219

@paltaa

paltaa commented Jan 10, 2019

Not sure if this is the issue or not, but I found that the Filebeat modules use the field log instead of message. In Logstash I added a mutate to rename the log field to message, and the entries then started to show up in Kibana Logs.
@jasonsattler
Could you show your solution? I'm facing the same problem: Filebeat isn't processing logs from the Kibana pod.

@jasonsattler

You should be able to rename the field either in filebeat or logstash.

In Filebeat just add the following to your inputs (the rename processor takes a list of from/to pairs, each in a single list item):

processors:
  - rename:
      fields:
        - from: "log"
          to: "message"

Or in logstash use mutate in your filters

filter {
  mutate {
    rename => { "log" => "message" }
  }
}

@paltaa

paltaa commented Jan 10, 2019

Tried it with Filebeat and it didn't work. Does it go into filebeat.yml or kubernetes.yml? My configs:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        path: ${path.config}/inputs.d/*.yml
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        reload.enabled: false
    processors:
      - add_cloud_metadata:
      - drop_fields:
          when:
            has_fields: ['kubernetes.labels.app']
          fields:
            - 'kubernetes.labels.app'
      - rename:
          fields:
            - from: "log"
              to: "message"

    output.elasticsearch:
      hosts: ['http://elasticsearch.whitenfv.svc.cluster.local:9200']
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: docker
      json.keys_under_root: false
      json.add_error_key: false
      json.ignore_decoding_error: true
      containers.ids:
        - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true

---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: {{ filebeat_image_full }}
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""]
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat

@jasonsattler

@paltaa make sure "to" is part of the same list item as "from" (indented under it, without its own leading -):

          processors:
            - add_kubernetes_metadata:
                in_cluster: true
            - rename:
                fields:
                  - from: "log"
                    to: "message"

@paltaa

paltaa commented Jan 14, 2019

@jasonsattler Did that, and I'm still getting these errors:

2019-01-14 15:19:32.726
failed to format message from /var/lib/docker/containers/f6883893ebb064518104318835d88e6c6fb9077f5a9369922066e9b004d9ee0f/f6883893ebb064518104318835d88e6c6fb9077f5a9369922066e9b004d9ee0f-json.log

@cawoodm

cawoodm commented Jan 24, 2019

I have a logs-prod index with documents that look like this:

{
    "raw": "No such bean definition found to exist",
    "source": "console",
    "timestamp": "2018-09-24T04:42:51.478Z"
    ...
}

kibana.yml:

xpack.infra.sources.default.logAlias: "logs-*"
xpack.infra.sources.default.fields.timestamp: "timestamp"
xpack.infra.sources.default.fields.message: ['raw']

Result in Logs UI:

failed to format message from console

@weltenwort
Member Author

There is a problem with the message setting not working correctly, sorry 🙈 The only workaround right now is to move or copy the text into the message field during ingestion or reindexing. We're working on fixing this and improving the configurability.
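As a sketch of that workaround, an Elasticsearch ingest pipeline can move the original text into the message field while reindexing. The pipeline name, the source field (log), and the index names below are assumptions; adapt them to your own data:

```console
PUT _ingest/pipeline/rename-log-to-message
{
  "description": "Move the log field into message so the Logs UI can format it",
  "processors": [
    { "rename": { "field": "log", "target_field": "message" } }
  ]
}

POST _reindex
{
  "source": { "index": "logs-prod" },
  "dest": { "index": "logs-prod-fixed", "pipeline": "rename-log-to-message" }
}
```

The same pipeline can also be applied at ingest time instead of reindexing, for example by configuring it as the pipeline in the Filebeat Elasticsearch output.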

@Colennn

Colennn commented Jan 25, 2019

Filebeat for Elasticsearch. The problem appears when I search the IIS logs in Kibana.

@weltenwort
Member Author

Many problems have been addressed via #30398 and #31120. Please feel free to open separate issues for other problems with particular modules.
