producer returns Queue full #117
Comments
Did you set batch.num.messages to a value larger than queue.buffering.max.messages? librdkafka is asynch on the inside; the produce() API cannot easily trigger the broker thread to send off the currently buffered messages.

I set batch.num.messages <= queue.buffering.max.messages.
Currently, no. May I ask what the reason is for decreasing the internal queue size (queue.buffering.max.messages)? If you need to decrease the internal queues, for whatever reason, you need to calculate proper values for batch.num.messages, queue.buffering.max.messages and queue.buffering.max.ms according to your expected produce() rate. E.g., if you think you will produce() 1000 messages/s, then set the values below; this will make sure one batch of messages is sent to the broker every 100 ms (containing 100 messages or less).
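The concrete settings were elided above; judging by the "one batch every 100 ms, containing 100 messages or less" description, they would presumably have been along the lines of:

```
batch.num.messages=100
queue.buffering.max.ms=100
queue.buffering.max.messages=1000
```

(The last value is an assumption: roughly one second of headroom at the expected 1000 msg/s produce() rate.)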
I was playing with these parameters to see what happens with the producer.
Okay, then I suggest you keep queue.buffering.max.messages at a high value. Think of it as your safety harness if the broker or broker connection starts acting up.
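In configuration terms (the value here is illustrative, not from the thread), that advice amounts to leaving the local queue large enough to absorb produce() bursts while the broker connection recovers:

```
queue.buffering.max.messages=100000
```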
Let me know if this solved it for you so I can close the issue. Thanks.
You can close the issue.

Thanks.
@edenhill I got through the consumer issues and noticed the topic message count was greater than the Elasticsearch index count. Grepping the logs, I found these errors, and the error count almost matches the missing record count. I notice you recommend some settings above, but it's not clear exactly where to apply the librdkafka settings using the …

If I edit the …
I'm guessing this is related to the confluent-kafka-python issue (not calling poll() frequently enough).
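A minimal sketch of that pattern, assuming confluent-kafka-python (the broker address, topic name and message loop are illustrative): produce() only enqueues the message locally and raises BufferError when the queue is full, so poll() has to be called regularly to serve delivery reports and drain the queue:

```python
from confluent_kafka import Producer

# Illustrative configuration; queue.buffering.max.messages is the
# librdkafka property discussed above.
p = Producer({
    "bootstrap.servers": "localhost:9092",  # assumption: local broker
    "queue.buffering.max.messages": 100000,
})

def on_delivery(err, msg):
    if err is not None:
        print("delivery failed: %s" % err)

for i in range(100000):
    try:
        p.produce("mytopic", b"message %d" % i, callback=on_delivery)
    except BufferError:
        # Local queue is full ("Queue full"): serve delivery reports to
        # free queue space, then retry this message once.
        p.poll(0.5)
        p.produce("mytopic", b"message %d" % i, callback=on_delivery)
    p.poll(0)  # serve delivery callbacks without blocking

p.flush()  # drain the queue before exiting
```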
If I set queue.buffering.max.messages=10 and queue.buffering.max.ms=60000, the producer returns "Queue full" after sending 10 messages. It seems that the producer doesn't send the messages to Kafka. I would expect that when the queue is full (or maybe 3/4 full) the messages are sent to Kafka even if queue.buffering.max.ms has not expired.
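For concreteness, a sketch that reproduces this, assuming confluent-kafka-python (broker address and topic name are illustrative):

```python
from confluent_kafka import Producer

p = Producer({
    "bootstrap.servers": "localhost:9092",  # assumption: local broker
    "queue.buffering.max.messages": 10,     # tiny local queue, as reported
    "queue.buffering.max.ms": 60000,        # batch timer: up to 60 s
})

for i in range(20):
    try:
        p.produce("mytopic", b"message %d" % i)
    except BufferError as e:
        # Fires once 10 messages are queued: with batch.num.messages at its
        # (much larger) default, nothing is sent until the 60 s timer expires.
        print("message %d: %s" % (i, e))
```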
Maybe it would be better to define a max and a min value: when the size of the queue exceeds buffering.min, the producer tries to send the data to Kafka.