Description
We are currently using Confluent Kafka with librdkafka (C#). We found that during rebalancing we get hundreds of "Timed out MetadataRequest in flight" messages, many of them with the same timestamp.
After about 30 seconds (once the rebalance completes), the consumer is assigned to another partition. The partition assignment strategy is the default, "RangeAssignor".
Here are the logs:
As you can see, the consumer is revoked from partition 3 and then assigned to partition 2. In between there are hundreds of timeouts; we don't show all of them here because we applied a distinct filter in the log query.
2022-10-17T06:53:47.4836418Z | [06:53:47.483 INF] Assigned to following partitions: 2 Started
2022-10-17T06:53:47.3243943Z | 2 request(s) timed out: disconnect (after 27345798ms in state UP)
2022-10-17T06:53:47.3242361Z | 2 request(s) timed out: disconnect (after 27345798ms in state UP)
2022-10-17T06:53:47.3240675Z | Timed out 2 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
-- we get the following lines hundreds of times, spamming the console output
2022-10-17T06:53:47.3240559Z | Timed out MetadataRequest in flight (after 60010ms, timeout #1)
2022-10-17T06:53:47.3240137Z | Timed out MetadataRequest in flight (after 61010ms, timeout #0)
2022-10-17T06:53:16.2152438Z | [06:53:16.215 INF] Revoked from partitions: 3 Started
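For context, our consumer is wired up roughly like the sketch below. This is a minimal illustration, not our production code: the bootstrap server, group id, and topic names are placeholders. It shows the two places involved here, the rebalance handlers that produce the "Assigned"/"Revoked" lines, and a custom log handler through which the repeated librdkafka "Timed out MetadataRequest" lines could at least be throttled as a workaround.

```csharp
using System;
using Confluent.Kafka;

class RebalanceLogging
{
    static void Main()
    {
        // Placeholder connection details -- not taken from the report.
        var config = new ConsumerConfig
        {
            BootstrapServers = "mynamespace.servicebus.windows.net:9093",
            GroupId = "my-consumer-group",
            // Range is the default strategy mentioned above.
            PartitionAssignmentStrategy = PartitionAssignmentStrategy.Range,
        };

        using var consumer = new ConsumerBuilder<Ignore, string>(config)
            // Route librdkafka logs through our own handler so the repeated
            // "Timed out MetadataRequest in flight" lines can be dropped
            // instead of spamming the console.
            .SetLogHandler((_, log) =>
            {
                if (log.Message.Contains("Timed out MetadataRequest"))
                    return; // drop the noisy duplicates
                Console.WriteLine($"[{log.Level}] {log.Facility}: {log.Message}");
            })
            .SetPartitionsAssignedHandler((_, partitions) =>
                Console.WriteLine(
                    $"Assigned to following partitions: {string.Join(", ", partitions)}"))
            .SetPartitionsRevokedHandler((_, partitions) =>
                Console.WriteLine(
                    $"Revoked from partitions: {string.Join(", ", partitions)}"))
            .Build();

        consumer.Subscribe("my-topic");
        // Consume loop omitted.
    }
}
```

Filtering in the log handler only hides the symptom; the underlying metadata-request timeouts during the rebalance still occur.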
How to reproduce
We don't have exact reproduction steps. We consume and produce against Microsoft Event Hubs using the Confluent Kafka C# libraries, on the current versions.
During rebalancing, these messages are generated in large numbers.
Confluent.Kafka 1.9.3
librdkafka.redist 1.9.2
Docker 6.0-alpine
Default Confluent Consumer Settings