kpack doesn't scale for large clusters with the current client side rate limits
Currently kpack has a bunch of limiters that prevent it from scaling effectively for large clusters (e.g., we have a cluster with over 3000 image resources).
The main limiters are:

- `kpack/cmd/controller/main.go`, line 66 (commit `93a7c2c`)
- `kpack/cmd/controller/main.go`, lines 318 to 319 (commit `93a7c2c`)
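For readers without the repo open: in controllers built on client-go, the limiters in question are typically the `rest.Config` QPS/Burst settings and the workqueue rate limiter. A minimal illustration of that general pattern, as an assumption about the shape of the code rather than a copy of the kpack lines above:

```go
package main

import (
	"k8s.io/client-go/rest"
	"k8s.io/client-go/util/workqueue"
)

// throttledConfig shows the two client-go knobs that bound request rate
// to the API server; 5/10 are client-go's fallback defaults when unset.
func throttledConfig(cfg *rest.Config) *rest.Config {
	cfg.QPS = 5    // steady-state requests per second
	cfg.Burst = 10 // short-term burst allowance
	return cfg
}

func main() {
	cfg := throttledConfig(&rest.Config{})
	_ = cfg

	// DefaultControllerRateLimiter combines per-item exponential backoff
	// with an overall token bucket (10 qps / 100 burst by default), which
	// caps how fast the workqueue re-feeds items to the reconciler.
	queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
	defer queue.ShutDown()
}
```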
We have an internal patch on the kpack controller that bumps up these rate limits so that it scales to large multi-tenant clusters, and we would love to contribute the change upstream. It largely involves changing the three vars referenced above via a single `scale-factor` CLI arg that multiplies the default values by the user-provided scale factor, roughly as sketched below.
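For illustration, here is a minimal sketch of what such a flag could look like. The names and baseline values (`defaultQPS`, `defaultBurst`, `defaultThreads`) are hypothetical stand-ins, not kpack's actual identifiers at commit `93a7c2c`:

```go
package main

import (
	"flag"
	"fmt"

	"k8s.io/client-go/rest"
)

// Baseline values are illustrative; the real defaults live in
// cmd/controller/main.go.
const (
	defaultQPS     float64 = 5  // client-side QPS limit (assumed)
	defaultBurst           = 10 // client-side burst limit (assumed)
	defaultThreads         = 2  // worker routines per controller (assumed)
)

func main() {
	scaleFactor := flag.Float64("scale-factor", 1.0,
		"multiplier applied to the default client-side rate limits")
	flag.Parse()

	// All three knobs scale together from the single CLI arg.
	cfg := &rest.Config{
		QPS:   float32(defaultQPS * *scaleFactor),
		Burst: int(defaultBurst * *scaleFactor),
	}
	threadsPerController := int(defaultThreads * *scaleFactor)

	fmt.Printf("QPS=%v Burst=%d threads=%d\n", cfg.QPS, cfg.Burst, threadsPerController)
}
```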
Would y'all be open to a small PR to make these rates configurable? We have been able to scale to close to 3k images, and our reconcile time improved from ~4 minutes to under 1 second after our changes.