rd_kafka_consumer_close/rd_kafka_destroy remain blocked indefinitely if the broker is unreachable #4519
Comments
Maybe try with the latest release, 2.3.0, first.
Thank you for your answer. I get the same problem with version 2.3.0.
Lately, we have also encountered this.
It seems like it is waiting for some thread to join, but either the target thread refused to stop or it did not exist (blocked for over an hour!).
Version
Debugging
Some more GDB context:
Looks like it is waiting for another thread!
This time the target thread "rdk:broker237" does not seem to be waiting for someone else, but is instead in a condition wait:
So perhaps the main handler failed to wake this thread up from the condition wait?
Description
I'm using librdkafka 2.0.2, and I'm facing the rd_kafka_consumer_close(myHandle) call blocking once the broker becomes unreachable.
The same problem appears with rd_kafka_destroy(myHandle).
If I skip both API calls, a memory leak appears each time the connection is lost or the broker is unreachable.
Is there a way to avoid this blocking (without running into the memory leak)?
Thank you in advance.
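For reference, librdkafka 2.0.0 and later also expose an asynchronous close path, rd_kafka_consumer_close_queue() together with rd_kafka_consumer_closed(), which lets the application serve the close events itself and bound how long it waits. Whether this avoids the hang when the broker is unreachable is not confirmed in this thread; the sketch below is only illustrative, assuming an existing consumer handle rk and an arbitrary 30-second upper bound.

```c
/* Sketch only: bounded-wait consumer shutdown via the async close API
 * (rd_kafka_consumer_close_queue, available since librdkafka v2.0.0).
 * Assumes `rk` is an existing RD_KAFKA_CONSUMER handle; the 30 s limit
 * is arbitrary and not taken from this issue. */
rd_kafka_queue_t *close_q = rd_kafka_queue_new(rk);
rd_kafka_error_t *err = rd_kafka_consumer_close_queue(rk, close_q);
if (err) {
        fprintf(stderr, "close failed: %s\n", rd_kafka_error_string(err));
        rd_kafka_error_destroy(err);
} else {
        int waited_ms = 0;
        while (!rd_kafka_consumer_closed(rk) && waited_ms < 30000) {
                /* Serve rebalance/offset-commit events triggered by the close. */
                rd_kafka_event_t *ev = rd_kafka_queue_poll(close_q, 100);
                if (ev)
                        rd_kafka_event_destroy(ev);
                waited_ms += 100;
        }
}
rd_kafka_queue_destroy(close_q);
rd_kafka_destroy(rk); /* may still block if internal broker threads do not join */
```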
How to reproduce
Simulate a lost connection (e.g. using Clumsy) before invoking rd_kafka_consumer_close or rd_kafka_destroy. A minimal sketch follows.
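A minimal, self-contained reproduction sketch (assumptions: the topic name "some_topic" and the poll loop length are illustrative; the configuration values are taken from the checklist below). With connectivity to the broker cut before shutdown, the last two calls are where the hang is reported:

```c
#include <stdio.h>
#include <librdkafka/rdkafka.h>

int main(void) {
        char errstr[512];
        rd_kafka_conf_t *conf = rd_kafka_conf_new();
        rd_kafka_conf_set(conf, "bootstrap.servers", "my_Ip", errstr, sizeof(errstr));
        rd_kafka_conf_set(conf, "group.id", "somevalue", errstr, sizeof(errstr));
        rd_kafka_conf_set(conf, "client.id", "somevalue", errstr, sizeof(errstr));
        rd_kafka_conf_set(conf, "statistics.interval.ms", "1000", errstr, sizeof(errstr));

        rd_kafka_t *rk = rd_kafka_new(RD_KAFKA_CONSUMER, conf, errstr, sizeof(errstr));
        if (!rk) {
                fprintf(stderr, "rd_kafka_new failed: %s\n", errstr);
                return 1;
        }
        rd_kafka_poll_set_consumer(rk);

        rd_kafka_topic_partition_list_t *topics = rd_kafka_topic_partition_list_new(1);
        rd_kafka_topic_partition_list_add(topics, "some_topic", RD_KAFKA_PARTITION_UA);
        rd_kafka_subscribe(rk, topics);
        rd_kafka_topic_partition_list_destroy(topics);

        /* Consume briefly, then cut connectivity to the broker
         * (e.g. with Clumsy or a firewall rule) before shutdown. */
        for (int i = 0; i < 100; i++) {
                rd_kafka_message_t *msg = rd_kafka_consumer_poll(rk, 100);
                if (msg)
                        rd_kafka_message_destroy(msg);
        }

        fprintf(stderr, "closing...\n");
        rd_kafka_consumer_close(rk); /* reported to block indefinitely here */
        rd_kafka_destroy(rk);        /* ...or here */
        fprintf(stderr, "done\n");
        return 0;
}
```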
Checklist
Please provide the following information:
librdkafka version (release number or git tag): v2.0.2
Apache Kafka version: v2.8.2
librdkafka client configuration: client.id="somevalue", bootstrap.servers="my_Ip", group.id="somevalue", statistics.interval.ms=1000
Operating system: Windows 10
Provide logs (with debug=.. as necessary) from librdkafka