diff --git a/tests/zero-downtime-scaling/results/1.1.0/1.1.0.md b/tests/zero-downtime-scaling/results/1.1.0/1.1.0.md
index 7c0f6a3141..6b1e4a35a9 100644
--- a/tests/zero-downtime-scaling/results/1.1.0/1.1.0.md
+++ b/tests/zero-downtime-scaling/results/1.1.0/1.1.0.md
@@ -522,4 +522,6 @@ Logs:
 
 ## Future Improvements
 
-1. When applying the 10/25node values manifest, try with a larger sleep value and termination grace period.
+1. When applying the 10/25node values manifest, try with a larger sleep value and termination grace period to see
+   if a larger delay can prevent traffic loss when scaling down. It could be that the chosen delay is not large
+   enough to allow the fronting load balancer to gracefully drain the node with the terminating pod.
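
For context, the two settings this improvement refers to correspond to standard Kubernetes pod-spec fields: `terminationGracePeriodSeconds` and a `preStop` sleep hook. Below is a minimal sketch of how they might look on a Deployment; the resource name, image, and numeric values are illustrative assumptions, not taken from the test's values manifests, and the actual chart may expose them under different keys.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app                        # illustrative name, not from the test manifests
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      # Upper bound on how long Kubernetes waits after SIGTERM before force-killing the pod.
      terminationGracePeriodSeconds: 120   # assumed value; must exceed the preStop sleep
      containers:
        - name: app
          image: example/app:1.0           # illustrative image
          lifecycle:
            preStop:
              exec:
                # Delay shutdown so the fronting load balancer has time to drain
                # connections from the terminating pod before the container exits.
                command: ["sleep", "90"]   # assumed value; this is the delay being tuned
```

The sleep has to complete before the grace period expires, so the two values should be increased together when experimenting with longer drain times.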