Proposal
Allow configuring maxReplicaCount to be larger than the partition count of a topic for a scaled job with a Kafka trigger, so that more jobs can effectively run than there are partitions in the topic.
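For reference, a minimal sketch of the kind of ScaledJob this proposal would enable (names, image, and broker address are placeholders, not from a real deployment; today the Kafka scaler caps the effective replica count at the topic's partition count):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: pipeline-stage-1
spec:
  jobTargetRef:
    template:
      spec:
        containers:
          - name: worker
            image: example.registry/stage-1-worker:latest
        restartPolicy: Never
  # Proposal: honor this even when it exceeds the topic's partition count.
  maxReplicaCount: 50
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: broker:9092
        consumerGroup: pipeline-stage-1
        topic: stage-1-topic      # could be a single-partition topic
        lagThreshold: "1"         # one job per pending message
```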
Use-Case
I use Kafka as a trigger for long-running jobs. These jobs read a message, perform an action, and complete. At the moment I'm limited to as many jobs as there are partitions, even when the jobs consume the message and close the consumer before performing the actual operations. Once the message is consumed and committed and the consumer is closed (I can verify that no consumer is registered for the given consumer group in Confluent Cloud), I cannot trigger any more jobs if the number of running jobs equals the partition count.
Allowing the scaler limit to be maxReplicaCount, even when maxReplicaCount > topic partition count, would let me use even a single-partition topic to trigger many jobs, limited only by how fast each job consumes its message and commits it.
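To illustrate the consume-then-release pattern described above, here is a rough sketch of the worker's logic, assuming Python with the confluent-kafka client (broker, topic, and group names are made up):

```python
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker:9092",
    "group.id": "pipeline-stage-1",
    "auto.offset.reset": "earliest",
    "enable.auto.commit": False,  # commit explicitly, only after the read succeeds
})
consumer.subscribe(["stage-1-topic"])

msg = consumer.poll(timeout=30.0)
if msg is not None and msg.error() is None:
    # Commit synchronously, then close the consumer so the partition
    # is released before the long-running work even starts.
    consumer.commit(message=msg, asynchronous=False)
    consumer.close()
    run_long_operation(msg.value())  # hypothetical long-running work
else:
    consumer.close()
```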
Anything else?
I acknowledge the risk of expecting more consumers than partitions, and I value the safe defaults; I think those are the way to go. But the more advanced use cases should also be allowed, with proper documentation, so the safe defaults don't become a limitation for the tool.
Right now I'm forced to run a bigger cluster than otherwise needed: I have many topics orchestrating a multi-stage pipeline of "step functions", and to support the required number of stages I'm forced to use large partition counts on each topic.