How to restart or kill logstash service when kafka output plugin failed #8996
Same issue with RabbitMQ: it is impossible to stop the Logstash container or execute any command inside it. Did you try …
Is there any way to tell, via a script or API, that Logstash no longer has any connections to Kafka? If so, that may open the possibility of a …
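One hedged option: the Logstash monitoring API (port 9600) exposes per-plugin event counters under `/_node/stats/pipelines`. It does not report connection counts directly, but if the kafka output's `out` counter stops advancing while the pipeline is otherwise alive, the output is likely stuck. A minimal sketch; the exact JSON layout shown (`pipelines -> <name> -> plugins -> outputs`) is an assumption based on 6.x-era responses and should be verified against your version:

```python
import json
import urllib.request

# Assumption: default monitoring API endpoint for a local Logstash instance.
STATS_URL = "http://localhost:9600/_node/stats/pipelines"


def kafka_out_count(stats: dict) -> int:
    """Sum the 'out' event counters of all kafka output plugins.

    Assumes the stats JSON shape:
    pipelines -> <name> -> plugins -> outputs -> [{name, events: {out}}].
    """
    total = 0
    for pipeline in stats.get("pipelines", {}).values():
        for output in pipeline.get("plugins", {}).get("outputs", []):
            if output.get("name") == "kafka":
                total += output.get("events", {}).get("out", 0)
    return total


def is_stalled(prev_out: int, curr_out: int) -> bool:
    """Treat the output as stalled when its counter did not advance."""
    return curr_out <= prev_out


def fetch_stats() -> dict:
    """Fetch the current pipeline stats from the monitoring API."""
    with urllib.request.urlopen(STATS_URL, timeout=5) as resp:
        return json.load(resp)
```

A watchdog could call `fetch_stats()` twice with a sleep in between and exit nonzero when `is_stalled(...)` is true, letting a supervisor or `docker run --restart` relaunch the container.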
The situation I encountered was that Logstash's indexer.conf and shipper.conf configurations were both in /etc/logstash/conf.d/.
My case — see @buch11 at logstash-plugins/logstash-output-kafka#155:
I am using
docker.elastic.co/logstash/logstash-oss:6.0.0
with the kafka output plugin. The Logstash kafka output plugin stops pushing data into Kafka when one of the Kafka nodes goes down or comes back with a different broker id.
The `retries` parameter might not help in this case, because the broker ids can differ from the ones the Logstash container started with (for example, the Kafka brokers can change from [1,2,3] to [1,2,4]). See https://www.elastic.co/guide/en/logstash/5.6/plugins-outputs-kafka.html
Is there a way to force Logstash to exit / kill the process in this case? That way a new Logstash container would be launched with the new broker ids and the service would start properly.
Thanks
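As far as I know, Logstash 6.0 has no built-in "exit when the kafka output fails" option, so a common workaround is an external watchdog that kills the Logstash process once a stall is detected; with `--restart=always` (or an orchestrator), the container then comes back up and resolves the current broker ids. A minimal sketch of just the kill step — discovering the pid and deciding that the output is stalled are assumed to happen elsewhere:

```python
import os
import signal
import time


def terminate(pid: int, grace_seconds: float = 10.0) -> bool:
    """Ask a process to stop with SIGTERM; escalate to SIGKILL after a grace period.

    Returns True if the process exited within the grace period (or was
    already gone), False if SIGKILL had to be sent.
    """
    try:
        os.kill(pid, signal.SIGTERM)
    except ProcessLookupError:
        return True  # already gone
    deadline = time.monotonic() + grace_seconds
    while time.monotonic() < deadline:
        try:
            os.kill(pid, 0)  # probe only: raises if the process has exited
        except ProcessLookupError:
            return True
        time.sleep(0.5)
    try:
        os.kill(pid, signal.SIGKILL)  # hard kill after the grace period
    except ProcessLookupError:
        pass
    return False
```

Inside a container this targets pid 1 (or the Logstash java pid); killing it ends the container, and the restart policy does the rest.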