
Spamming the logs during rebalancing #4028

Open · benzmarkus opened this issue Oct 19, 2022 · 0 comments

Description

We are currently using Confluent Kafka with librdkafka (C#). We found that during rebalancing we get a huge number of "Timed out MetadataRequest in flight" messages, easily hundreds, many of them with the same timestamp.
After about 30 seconds, once the rebalance has completed, the consumer is assigned to another partition. The partition assignment strategy is the default, RangeAssignor.
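For context, the consumer is set up roughly like this (the endpoint, group id, topic, and credentials below are placeholders, not our real values):

```csharp
using Confluent.Kafka;

var config = new ConsumerConfig
{
    // Placeholder Event Hubs Kafka endpoint and credentials.
    BootstrapServers = "mynamespace.servicebus.windows.net:9093",
    SecurityProtocol = SecurityProtocol.SaslSsl,
    SaslMechanism = SaslMechanism.Plain,
    SaslUsername = "$ConnectionString",
    SaslPassword = "<event-hubs-connection-string>",
    GroupId = "my-consumer-group",
    // We rely on the defaults; Range is spelled out here only for clarity.
    PartitionAssignmentStrategy = PartitionAssignmentStrategy.Range,
};

using var consumer = new ConsumerBuilder<string, string>(config).Build();
consumer.Subscribe("my-topic");
```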

Here are the logs (newest entries first):
As you can see, we get revoked from partition 3 and a new assignment to partition 2. In between there are timeouts, hundreds of them. Most are not shown below because we used a distinct in the query parameters.

2022-10-17T06:53:47.4836418Z | [06:53:47.483 INF] Assigned to following partitions: 2 Started
2022-10-17T06:53:47.3243943Z | 2 request(s) timed out: disconnect (after 27345798ms in state UP)
2022-10-17T06:53:47.3242361Z | 2 request(s) timed out: disconnect (after 27345798ms in state UP)
2022-10-17T06:53:47.3240675Z | Timed out 2 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
-- lines like the following appear hundreds of times and spam the console output
2022-10-17T06:53:47.3240559Z | Timed out MetadataRequest in flight (after 60010ms, timeout #1)
2022-10-17T06:53:47.3240137Z | Timed out MetadataRequest in flight (after 61010ms, timeout #0)
2022-10-17T06:53:16.2152438Z | [06:53:16.215 INF] Revoked from partitions: 3 Started

How to reproduce

We don't know exact steps, because we use Microsoft Event Hubs with the Confluent Kafka C# libraries to consume and produce, on the current versions.
During rebalancing, the messages are generated en masse.

Confluent.Kafka 1.9.3
librdkafka.redist 1.9.2
Docker image: 6.0-alpine
Default Confluent consumer settings
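As a stopgap (it does not address the underlying request timeouts), the spam can probably be reduced client-side with a lower librdkafka log_level and a custom log handler; a minimal sketch, where the filter string is just an example:

```csharp
using System;
using Confluent.Kafka;

var config = new ConsumerConfig
{
    BootstrapServers = "mynamespace.servicebus.windows.net:9093", // placeholder
    GroupId = "my-consumer-group",                                // placeholder
};
// Lower librdkafka's syslog-style log level (default 6 = info).
config.Set("log_level", "4"); // 4 = warning and above

using var consumer = new ConsumerBuilder<string, string>(config)
    // Route client logs through our own handler so the repeated
    // "Timed out ... in flight" lines can be dropped or sampled.
    .SetLogHandler((_, logMessage) =>
    {
        if (logMessage.Message.Contains("Timed out") &&
            logMessage.Message.Contains("in flight"))
        {
            return; // drop the noisy timeout lines (example filter only)
        }
        Console.WriteLine($"{logMessage.Level} {logMessage.Name}: {logMessage.Message}");
    })
    .Build();
```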
