Change default minReadySeconds to 5 seconds
After tests on several platforms, we decided to change the default minReadySeconds of ScyllaCluster Pods from 10s to 5s.

The test consisted of spawning multiple single-node ScyllaClusters in parallel to overload kube-proxy reconciling Endpoints and iptables rules.
After a ScyllaCluster reached Available=True,Progressing=False,Degraded=False, the test validated how long it took until it was possible to connect via the identity ClusterIP Service.
This measures how big the discrepancy is between when we call a ScyllaCluster Available and when it is actually reachable.
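
For illustration, a minimal sketch of that measurement, assuming a plain TCP dial against the identity Service ClusterIP on the CQL port; the address, timeout, and polling interval below are hypothetical and not the actual test harness:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// measureConnectLatency polls a TCP dial against addr and returns how long it
// took until the first successful connection; the caller invokes it right after
// the ScyllaCluster reports Available=True,Progressing=False,Degraded=False.
func measureConnectLatency(addr string, timeout time.Duration) (time.Duration, error) {
	start := time.Now()
	for {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			conn.Close()
			return time.Since(start), nil
		}
		if time.Since(start) > timeout {
			return 0, fmt.Errorf("no connection to %s within %v: %w", addr, timeout, err)
		}
		time.Sleep(100 * time.Millisecond)
	}
}

func main() {
	// Hypothetical ClusterIP of a member's identity Service, CQL port 9042.
	d, err := measureConnectLatency("10.96.0.42:9042", 30*time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Printf("connectable after %s\n", d)
}
```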

Results on different platforms and setups were as follows (in seconds):
* GKE with kube-proxy iptables mode:

  ```
  0.004304-2.272  74.6%    █████▏  1067
  2.272-4.54      13.7%    █       196
  4.54-6.808      7.34%    ▌       105
  6.808-9.075     3.92%    ▎       56
  9.075-11.34     0.28%    ▏       4
  11.34-13.61     0.0699%  ▏       1
  13.61-15.88     0%       ▏
  15.88-18.15     0%       ▏
  18.15-20.41     0%       ▏
  20.41-22.68     0.0699%  ▏       1
  ```

* GKE with Dataplane V2 enabled (Cilium):

  ```
  0.004604-0.08347  94.3%  █████▏  943
  0.08347-0.1623    3.7%   ▏       37
  0.1623-0.2412     1.3%   ▏       13
  0.2412-0.3201     0.1%   ▏       1
  0.3201-0.3989     0.1%   ▏       1
  0.3989-0.4778     0.2%   ▏       2
  0.4778-0.5567     0.2%   ▏       2
  0.5567-0.6355     0.1%   ▏       1
  ```

* EKS with kube-proxy iptables mode:

  ```
  0.003163-0.129  95.6%  █████▏  956
  0.129-0.2549    0.9%   ▏       9
  0.2549-0.3807   0.3%   ▏       3
  0.3807-0.5066   0.8%   ▏       8
  0.5066-0.6324   1.4%   ▏       14
  0.6324-0.7583   0%     ▏
  0.7583-0.8841   0.6%   ▏       6
  0.8841-1.01     0.4%   ▏       4
  ```

After reproducing it locally, the root cause of the slowness in the GKE kube-proxy setup seems to be slow iptables commands.
kube-proxy logs a trace whenever iptables execution takes too long, and the logs contained lots of such traces, some of them taking as long as 15s.
Because there's an alternative on GKE, we lower minReadySeconds to 5s to give kube-proxy enough time without delaying rollouts too much.
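
For context, a minimal sketch of where that value lands, assuming a simplified version of the resource generation; the helper and field wiring below are hypothetical illustrations, not the operator's actual code (the real one-line change is in the diff below):

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

// rackStatefulSetSpec is a hypothetical, trimmed-down illustration: the period
// we assume kube-proxy needs to reconcile Endpoints after a readiness change
// becomes the StatefulSet's MinReadySeconds, so a Pod only counts as available
// once kube-proxy has likely caught up.
func rackStatefulSetSpec() appsv1.StatefulSetSpec {
	// Assume kube-proxy notices a readiness change and reconciles Endpoints within this period.
	kubeProxyEndpointsSyncPeriodSeconds := int32(5)

	return appsv1.StatefulSetSpec{
		MinReadySeconds: kubeProxyEndpointsSyncPeriodSeconds,
	}
}

func main() {
	fmt.Printf("minReadySeconds: %d\n", rackStatefulSetSpec().MinReadySeconds)
}
```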
zimnx committed Feb 29, 2024
1 parent b1a8db0 commit 2b5e8f4
Showing 2 changed files with 3 additions and 3 deletions.

pkg/controller/scyllacluster/resource.go (1 addition, 1 deletion)

```diff
@@ -344,7 +344,7 @@ func StatefulSetForRack(r scyllav1.RackSpec, c *scyllav1.ScyllaCluster, existing
 	}
 
 	// Assume kube-proxy notices readiness change and reconcile Endpoints within this period
-	kubeProxyEndpointsSyncPeriodSeconds := 10
+	kubeProxyEndpointsSyncPeriodSeconds := 5
 	loadBalancerSyncPeriodSeconds := 60
 
 	readinessFailureThreshold := 1
```

pkg/controller/scyllacluster/resource_test.go (2 additions, 2 deletions)

```diff
@@ -671,7 +671,7 @@ func TestStatefulSetForRack(t *testing.T) {
 						"scylla/rack": "rack",
 					},
 				},
-				MinReadySeconds: 10,
+				MinReadySeconds: 5,
 				Template: corev1.PodTemplateSpec{
 					ObjectMeta: metav1.ObjectMeta{
 						Labels: newBasicStatefulSetLabels(0),
@@ -984,7 +984,7 @@ func TestStatefulSetForRack(t *testing.T) {
 							"-O",
 							"inherit_errexit",
 							"-c",
-							"nodetool drain & sleep 20 & wait",
+							"nodetool drain & sleep 15 & wait",
 						},
 					},
 				},
```
