According to the document:

"You should configure your workloads' minReadySeconds and terminationGracePeriodSeconds values to be 60 seconds or higher to ensure that the service is not disrupted due to workload rollouts."

In my test, even with much smaller minReadySeconds/terminationGracePeriodSeconds values, there was no impact on response latency. Setting minReadySeconds/terminationGracePeriodSeconds to 60 slows down our development deployments. How should the value of minReadySeconds be tuned? Why 60 seconds rather than 10 or 20?
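For context, this is roughly where the two fields sit in a Deployment spec (a minimal sketch; the names, image, port, and probe path are placeholders, not taken from any real workload). minReadySeconds is a Deployment-level field, while terminationGracePeriodSeconds belongs to the Pod template:

```yaml
# Minimal sketch of a Deployment using the two fields discussed above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-web
spec:
  replicas: 3
  minReadySeconds: 60        # a new Pod must stay Ready this long before it counts as available
  selector:
    matchLabels:
      app: example-web
  template:
    metadata:
      labels:
        app: example-web
    spec:
      terminationGracePeriodSeconds: 60   # time between SIGTERM and SIGKILL during shutdown
      containers:
      - name: web
        image: example/web:1.0
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
```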
Thanks,
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
For versions with pod readiness feedback enabled, minReadySeconds is no longer needed, but terminationGracePeriodSeconds is still required if the rolling update needs to be seamless.
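One common way to keep the rollout seamless is sketched below, under the assumption that the Pods sit behind a Service or load balancer that needs a few seconds to stop routing traffic to a terminating Pod. The container name, image, and the 15-second sleep are illustrative values, not prescribed by the documentation; the key point is that terminationGracePeriodSeconds must cover the preStop delay plus the application's own shutdown time.

```yaml
# Hedged sketch of a Pod template fragment: a preStop hook delays SIGTERM so
# in-flight traffic can drain, and terminationGracePeriodSeconds is sized to
# cover the preStop sleep plus the application's graceful shutdown.
spec:
  terminationGracePeriodSeconds: 30   # > preStop delay + app shutdown time
  containers:
  - name: web
    image: example/web:1.0
    lifecycle:
      preStop:
        exec:
          command: ["sleep", "15"]    # give endpoints/load balancer time to deregister the Pod
```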