
Handle RecordTooLargeException #191

Closed
wants to merge 2 commits

Conversation

davehagler

RecordTooLargeException is similar to SerializationException: retrying can never succeed, so the send gets stuck in an endless retry loop.

See issue #190
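
For reference, here is a minimal sketch of the kind of change this PR makes; the surrounding structure and the logger call are assumptions based on the diff context below, not the exact patch. The per-record rescue chain in retrying_send gets an arm that logs oversized records and drops them instead of re-queuing them for retry:

```ruby
# Sketch only: assumes retrying_send collects producer.send futures per record,
# as in the diff hunk below; `producer`, `failures`, and `@logger` are the
# plugin's existing objects.
futures = batch.collect do |record|
  begin
    producer.send(record)
  rescue org.apache.kafka.common.errors.InterruptException => e
    failures << record   # transient: keep the record for the next retry pass
    nil
  rescue org.apache.kafka.common.errors.RecordTooLargeException => e
    # an oversized record can never succeed on retry, so log loudly and drop it
    @logger.error("Record exceeds the broker's maximum message size; dropping event",
                  :exception => e.class.name, :message => e.message)
    nil
  end
end
```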

Retrying will fail so catch the exception
@jsvd jsvd self-requested a review May 23, 2018 14:37
Previous location was returning error
@@ -262,6 +262,9 @@ def retrying_send(batch)
    rescue org.apache.kafka.common.errors.InterruptException => e
      failures << record
      nil
    rescue org.apache.kafka.common.errors.RecordTooLargeException => e
@praseodym praseodym Jun 13, 2018

KafkaProducer::send cannot throw a RecordTooLargeException, so this PR does not make sense.

@davehagler davehagler (Author)
It is an unchecked RuntimeException, so it can still be raised even though it is not declared.

@praseodym praseodym Jun 13, 2018

I would expect it to be thrown from the future.get() call a little down the code. Either way this PR does not help because the same (by default infinite) retry logic is still there; #193 proposes a better solution by only retrying sending events when a RetriableException is thrown.
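
For readers following along: the Java producer's send() is asynchronous, so an error like this typically surfaces when the plugin later resolves the returned future, wrapped in an ExecutionException whose cause is the Kafka error. A rough sketch of that shape (names assumed, not the plugin's exact code):

```ruby
# Sketch: errors from an async send surface when the future is resolved,
# wrapped in java.util.concurrent.ExecutionException.
futures.each_with_index do |future, i|
  begin
    future.get unless future.nil?
  rescue java.util.concurrent.ExecutionException => e
    # e.cause is the underlying Kafka error, e.g. RecordTooLargeException
    failures << batch[i]
  end
end
```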

@praseodym praseodym commented Jun 13, 2018

I do agree that a RecordTooLargeException is not retriable and should therefore be treated as a permanent failure, resulting in a high-severity log message and a dropped event.

However, this plugin currently has no way to distinguish permanent from temporary failures; there is only a configurable retry limit (:retries), which applies to all failure modes.

#193 proposes a better solution to handle these kinds of failures.
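
As a sketch of the idea behind that approach (the helper name here is hypothetical, not code from #193): Kafka marks transient errors by subclassing org.apache.kafka.common.errors.RetriableException, so a send failure can be classified before deciding whether to re-queue or drop it.

```ruby
# Hypothetical helper: true only for errors Kafka itself marks as retriable.
def retriable?(java_exception)
  retriable = org.apache.kafka.common.errors.RetriableException
  java_exception.is_a?(retriable) ||
    (java_exception.respond_to?(:cause) && java_exception.cause.is_a?(retriable))
end
```

With a check like this, a RecordTooLargeException (which is not a RetriableException) would be logged and dropped, while a transient error such as a TimeoutException would still be re-queued for another attempt.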

@roaksoax

Closing as this has been fixed in logstash-plugins/logstash-integration-kafka#29, in Logstash Kafka Integration plugin version 10.5.0

@roaksoax roaksoax closed this Jul 27, 2020