How to restart or kill logstash service when kafka output plugin failed #8996

Open
devopsberlin opened this issue Jan 21, 2018 · 4 comments

@devopsberlin

I am using docker.elastic.co/logstash/logstash-oss:6.0.0 with the kafka output plugin. The plugin stops pushing data into Kafka when one of the Kafka nodes goes down or comes back with a different broker id.

1/19/2018 10:36:47 PM[2018-01-19T20:36:47,283][WARN ][org.apache.kafka.clients.NetworkClient] Connection to node 1 could not be established. Broker may not be available.
1/19/2018 11:16:44 PM[2018-01-19T21:16:44,320][INFO ][logstash.outputs.kafka   ] Sending batch to Kafka failed. Will retry after a delay. {:batch_size=>1, :failures=>1, :sleep=>0.01}
1/19/2018 11:46:44 PM[2018-01-19T21:46:44,876][INFO ][logstash.outputs.kafka   ] Sending batch to Kafka failed. Will retry after a delay. {:batch_size=>1, :failures=>1, :sleep=>0.01}
1/21/2018 5:08:48 PM[2018-01-21T15:08:48,645][INFO ][logstash.outputs.kafka   ] Sending batch to Kafka failed. Will retry after a delay. {:batch_size=>1, :failures=>1, :sleep=>0.01}
1/21/2018 5:08:49 PM[2018-01-21T15:08:49,241][INFO ][logstash.outputs.kafka   ] Sending batch to Kafka failed. Will retry after a delay. {:batch_size=>1, :failures=>1, :sleep=>0.01}

The `retries` parameter might not help in this case, because the broker ids can differ from the ones the Logstash container started with (for example, the set of Kafka brokers can change from [1,2,3] to [1,2,4]).

  # If you choose to set `retries`, a value greater than zero will cause the
  # client to only retry a fixed number of times. This will result in data loss
  # if a transient error outlasts your retry count.
  #

https://www.elastic.co/guide/en/logstash/5.6/plugins-outputs-kafka.html

Is there a way to force Logstash to exit / kill the process in this case? That way a new Logstash container would be launched with the new broker ids and the service would start properly.

output {
  kafka {
    bootstrap_servers => "kafka:9092"
    topic_id => "topic"
    codec => "json"
    message_key => "key"
  }
  #stdout { codec => "rubydebug" }
}

Thanks

@ebuildy
Contributor

ebuildy commented Mar 20, 2018

Same issue with RabbitMQ: it is impossible to stop the Logstash container or to execute any command inside it.

Did you try docker kill also?
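
For what it's worth, a quick sketch when a plain stop hangs (the container name "logstash" is just a placeholder):

# docker stop sends SIGTERM and only falls back to SIGKILL after a grace period;
# docker kill sends SIGKILL immediately.
docker kill logstash

# Or force-remove the container, which also kills it first.
docker rm -f logstash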

@bgeesaman

Is there any way to tell via a script or API that Logstash no longer has any connections to Kafka? If so, that may open the possibility of a liveness probe running on an interval inside Kubernetes to do just that.
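
One possibility, sketched here with several assumptions: the Logstash monitoring API (on port 9600 by default) exposes per-plugin event counters under /_node/stats/pipelines, so a probe could check whether the kafka output's cumulative "out" counter is still moving. This assumes curl and jq are available in the image (jq is likely not in the stock logstash-oss image), that the field layout matches your Logstash version, and that the pipeline normally has steady traffic; it is a stall check, not a real connection count.

#!/bin/sh
# Liveness sketch: report unhealthy when the kafka output's cumulative "out"
# counter has not moved since the previous probe run.
# Assumptions: monitoring API on localhost:9600 (default), curl and jq present,
# 6.x /_node/stats/pipelines field layout, steady event traffic.

STATE=/tmp/kafka_out_count

CURRENT=$(curl -s http://localhost:9600/_node/stats/pipelines \
  | jq '[.pipelines[].plugins.outputs[] | select(.name == "kafka") | .events.out] | add')

# API unreachable, bad JSON, or no kafka output found -> treat as unhealthy.
[ -n "$CURRENT" ] && [ "$CURRENT" != "null" ] || exit 1

PREVIOUS=""
[ -f "$STATE" ] && PREVIOUS=$(cat "$STATE")
echo "$CURRENT" > "$STATE"

# Healthy as long as the counter moved since the last probe run.
if [ "$CURRENT" != "$PREVIOUS" ]; then
  exit 0
fi

echo "kafka output counter stuck at $CURRENT, reporting unhealthy"
exit 1

Wired in as a Kubernetes exec livenessProbe, with periodSeconds long enough that "counter did not move" really means stuck rather than just quiet.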

@bbotte

bbotte commented Sep 6, 2018

The situation I encountered was that Logstash's indexer.conf and shipper.conf configurations were both sitting in /etc/logstash/conf.d/, so they were loaded together as one pipeline. The fix was to run them separately; see the sketch below.
Hope it is useful.
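
A sketch of what running them separately could look like, assuming the two file names above; the data paths are only placeholders, but two instances on one host do need distinct path.data settings:

# Run the shipper and the indexer as two independent Logstash processes
# instead of letting Logstash merge everything under conf.d into one pipeline.
logstash -f /etc/logstash/conf.d/shipper.conf --path.data /var/lib/logstash/shipper &
logstash -f /etc/logstash/conf.d/indexer.conf --path.data /var/lib/logstash/indexer &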

@lhzw

lhzw commented Sep 5, 2021

Here is mine, adapted from @buch11 at logstash-plugins/logstash-output-kafka#155:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: log2es
  labels:
    app: log2es
spec:
  replicas: 1
  selector:
    matchLabels:
      app: log2es
  template:
    metadata:
      labels:
        app: log2es
    spec:
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
      initContainers:
      - name: create-liveness-script
        image: busybox
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
          runAsUser: 1000
        command: ["/bin/sh"]
        args: ["-c", 'echo "if ! grep \"Broker may not be available\" /tmp/log >/dev/null 2>&1; then exit 0; else echo \"kafka is out of reach, need to restart.\"; exit 1; fi; size=\$(stat -c %s /tmp/log); if [ \$size -gt 10485760 ]; then > /tmp/log; fi; " > /livenessdir/live.sh; chmod +x /livenessdir/live.sh ']
        volumeMounts:
          - mountPath: /livenessdir
            name: livenessdir
      containers:
      - name: log2es
        image: .....
        command: ["/bin/sh"]
        args: ["-c", "logstash -f /etc/logstash/conf.d/logserver.conf | tee /tmp/log"]
        livenessProbe:
          exec:
            command:
            - /bin/sh
            - /livenessdir/live.sh
          initialDelaySeconds: 60
          periodSeconds: 5
          failureThreshold: 3
        volumeMounts:
          - mountPath: /livenessdir
            name: livenessdir

      volumes:
        - name: livenessdir
          emptyDir: {}
---
