Change default minReadySeconds to 5 seconds
After tests on several platforms, we decided to change the default minReadySeconds of ScyllaCluster Pods from 10s to 5s.

The test consisted of spawning multiple single-node ScyllaClusters in parallel to overload kube-proxy reconciling Endpoints and iptables rules. After a ScyllaCluster passed Available=True,Progressing=False,Degraded=False, the test validated how long it took until a connection via the identity ClusterIP Service succeeded. This measures how big the discrepancy is between when we call a ScyllaCluster Available and when it's actually available (a sketch of such a probe is included after the results).

On different platforms and setups the results were as follows (in seconds):

* GKE with kube-proxy iptables mode:
```
0.004304-2.272 74.6% █████▏ 1067
2.272-4.54 13.7% █ 196
4.54-6.808 7.34% ▌ 105
6.808-9.075 3.92% ▎ 56
9.075-11.34 0.28% ▏ 4
11.34-13.61 0.0699% ▏ 1
13.61-15.88 0% ▏
15.88-18.15 0% ▏
18.15-20.41 0% ▏
20.41-22.68 0.0699% ▏ 1
```
* GKE with Dataplane V2 enabled (Cilium):
```
0.004604-0.08347 94.3% █████▏ 943
0.08347-0.1623 3.7% ▏ 37
0.1623-0.2412 1.3% ▏ 13
0.2412-0.3201 0.1% ▏ 1
0.3201-0.3989 0.1% ▏ 1
0.3989-0.4778 0.2% ▏ 2
0.4778-0.5567 0.2% ▏ 2
0.5567-0.6355 0.1% ▏ 1
```
* EKS with kube-proxy iptables mode:
```
0.003163-0.129 95.6% █████▏ 956
0.129-0.2549 0.9% ▏ 9
0.2549-0.3807 0.3% ▏ 3
0.3807-0.5066 0.8% ▏ 8
0.5066-0.6324 1.4% ▏ 14
0.6324-0.7583 0% ▏
0.7583-0.8841 0.6% ▏ 6
0.8841-1.01 0.4% ▏ 4
```

After reproducing it locally, the root cause of the slowness on the GKE kube-proxy setup seems to be slow iptables commands. kube-proxy logs a trace when an iptables execution takes long, and the logs contained lots of such traces, some taking as much as 15s. Because there's an alternative on GKE (Dataplane V2), we lower minReadySeconds to 5s to give kube-proxy enough time while not delaying rollouts too much.
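For reference, a minimal sketch of the kind of connectivity probe described above, written in Go. The CQL port (9042), the polling interval, and the example ClusterIP are illustrative assumptions, not the exact harness that produced the numbers:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// probeClusterIP returns how long it takes until a TCP connection to the
// member's identity ClusterIP Service succeeds on the CQL port (9042).
// It is meant to be started right after the ScyllaCluster reports
// Available=True,Progressing=False,Degraded=False.
func probeClusterIP(clusterIP string) time.Duration {
	start := time.Now()
	for {
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(clusterIP, "9042"), time.Second)
		if err == nil {
			conn.Close()
			return time.Since(start)
		}
		// Back off briefly before retrying the connection.
		time.Sleep(50 * time.Millisecond)
	}
}

func main() {
	// Hypothetical ClusterIP of a member's identity Service.
	fmt.Println(probeClusterIP("10.96.0.42"))
}
```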
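A hedged sketch of what the new default amounts to, assuming the value is propagated to the rendered StatefulSet's spec.minReadySeconds; the function and parameter names are illustrative and not the operator's actual code path:

```go
package defaulting

import (
	appsv1 "k8s.io/api/apps/v1"
)

// defaultMinReadySeconds is the new default for ScyllaCluster Pods,
// lowered from 10 to 5 seconds.
const defaultMinReadySeconds int32 = 5

// setDefaultMinReadySeconds fills in MinReadySeconds on the StatefulSet
// rendered for a ScyllaCluster when no explicit value is provided.
func setDefaultMinReadySeconds(sts *appsv1.StatefulSet, override *int32) {
	if override != nil {
		sts.Spec.MinReadySeconds = *override
		return
	}
	sts.Spec.MinReadySeconds = defaultMinReadySeconds
}
```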