By default, Go sets $GOMAXPROCS to the total number of CPUs on the physical host machine, without regard to any cgroup-imposed CPU limits. In a typical Vitess setup, a single VM is likely to host at least mysql + vttablet, and in many cases many more containers/processes.
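To make the default concrete, here's a minimal sketch (illustrative Go, not Vitess code):

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// runtime.GOMAXPROCS(0) queries the current setting without changing it.
	// On, say, a 64-core host running a container limited to 2 CPUs via
	// cgroups, both lines print 64: the runtime sizes itself off the host's
	// CPUs and never consults the cgroup quota.
	fmt.Println("NumCPU:    ", runtime.NumCPU())
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
}
```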
Uber has benchmarked the problem (uber-go/automaxprocs#12 (comment)), showing increased mutex contention when $GOMAXPROCS is left at the host CPU count rather than set appropriately.
I don't have high enough QPS throughput right now to benchmark this accurately in Vitess, so I can't say for sure whether it would have an impact, though it feels like it would.
If we do find out that this is an issue, I'm not sure where the appropriate place to tackle it is. A few options:
1. Use Uber's package to set it automatically: https://github.com/uber-go/automaxprocs. This is the easiest implementation, but I'm not sure how easy it is to override if users disagree with its behavior (see the sketch after this list).
2. Just add documentation. This gives people a chance to set it appropriately, but seems unlikely to happen.
3. Set it in the Helm chart / operator / whatever other installation mechanisms we provide. Instead of handling it in code, we can use the Kubernetes limit setting to set the $GOMAXPROCS env var.

cc @danieltahara @sougou @yangxuanjia
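For option 1, the integration is just a blank import; a minimal sketch of what that would look like:

```go
package main

import (
	"fmt"
	"runtime"

	// The blank import runs automaxprocs's init(), which reads the
	// container's CPU quota and adjusts GOMAXPROCS to match at startup.
	_ "go.uber.org/automaxprocs"
)

func main() {
	// In a container limited to 2 CPUs this now prints 2,
	// regardless of how many cores the host has.
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
}
```

On the override question: if I read the package correctly, it leaves GOMAXPROCS alone whenever the env var is already set, so users who disagree with its choice could still opt out that way.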
My initial reaction is that this is likely not an issue with Vitess, at least not for existing use cases. The typical expectation for a Vitess process, be it vttablet or vtgate, is that a single process should generally be kept at around a few thousand QPS. Pushed to the limit, these have scaled up to the high tens of thousands of QPS.
We generally don't recommend pushing beyond those limits, and one shouldn't have to. I think going beyond them will be problematic no matter what GOMAXPROCS value you set, because there are code paths exercised by every query that acquire and release mutexes, and they will all start to contend.
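That kind of contention is easy to demonstrate in isolation. A minimal benchmark sketch (hypothetical, not Vitess code) where every goroutine serializes on one lock, as those shared per-query code paths would:

```go
package contention_test

import (
	"sync"
	"testing"
)

var (
	mu      sync.Mutex
	counter int64
)

// BenchmarkSharedMutex makes all goroutines serialize on a single mutex,
// standing in for the locks on shared per-query code paths. Running with
// `go test -bench=SharedMutex -cpu=4,16,64` shows the per-op cost rising
// as the -cpu value (which sets GOMAXPROCS for the benchmark) grows.
func BenchmarkSharedMutex(b *testing.B) {
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			mu.Lock()
			counter++
			mu.Unlock()
		}
	})
}
```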