
Investigate setting $GOMAXPROCS to Linux limits #4302

Open

@derekperkins (Member) opened this issue Oct 23, 2018 · 1 comment
By default, Go sets $GOMAXPROCS to the total number of CPUs on the physical host machine, without regard to any external limits such as cgroup CPU quotas. In a typical Vitess setup, a single VM is likely to host at least mysql + vttablet, and in many cases many more containers / processes.
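For illustration, here's a minimal sketch (not from this issue) of what the runtime reports. Inside a container capped at, say, 2 CPUs by a cgroup quota on a 32-core host, both calls still return 32:

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// The Go runtime sizes GOMAXPROCS from the host's CPU count and does
	// not consult cgroup CPU quotas, so a container limited to 2 CPUs on
	// a 32-core host still sees 32 from both calls below.
	fmt.Println("NumCPU:", runtime.NumCPU())
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0)) // 0 = query without changing
}
```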

Uber has benchmarked the problem (uber-go/automaxprocs#12 (comment)), showing increased mutex contention when $GOMAXPROCS is left at the host default rather than set appropriately.

I don't have high enough QPS throughput right now to benchmark this accurately in Vitess, so I can't say for sure whether it would have an impact, though it feels like it would.

If we do find that this is an issue, I'm not sure of the appropriate place to tackle it:

  1. Use Uber's package to set it automatically - https://github.com/uber-go/automaxprocs (see the sketch after this list). This is the easiest implementation, but I'm not sure how easy it is to override if users disagree with its heuristic.
  2. Just add documentation - this gives people a chance to set it appropriately themselves, but that seems unlikely to happen in practice.
  3. Set it in the helm chart / operator or whatever other installation mechanisms we provide - instead of handling it in code, we can use the k8s limit setting to set the $GOMAXPROCS env var.
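For option 1, usage would look roughly like the sketch below, based on the package's documented API; `startServer` is a hypothetical placeholder for the vttablet/vtgate serving loop:

```go
package main

import (
	"log"

	"go.uber.org/automaxprocs/maxprocs"
)

func main() {
	// maxprocs.Set matches GOMAXPROCS to the container's cgroup CPU quota.
	// Per the package docs, it honors an explicitly set GOMAXPROCS env var,
	// which would give users an override if they disagree with the heuristic.
	undo, err := maxprocs.Set(maxprocs.Logger(log.Printf))
	if err != nil {
		log.Printf("automaxprocs: %v", err)
	}
	defer undo()

	startServer() // hypothetical placeholder for the serving loop
}

func startServer() { /* ... */ }
```

Option 3 amounts to the same thing done in deployment config: the Go runtime reads the GOMAXPROCS env var at startup, so setting it in the pod spec's env from the k8s CPU limit would work without any code change.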

cc @danieltahara @sougou @yangxuanjia

@sougou (Contributor) commented Oct 27, 2018

My initial reaction is that this is likely not an issue for Vitess, at least not for existing use cases. The expectation for a Vitess process, be it vttablet or vtgate, is that a single process should generally be kept to around a few thousand QPS. Pushed to the limit, these processes have scaled up to the high tens of thousands of QPS.

We generally don't recommend pushing beyond those limits, and one shouldn't have to. I think going beyond them will be problematic no matter what GOMAXPROCS value you set, because there are code paths, hit by every query, that acquire and release mutexes. They will all start to contend.
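As a rough, hypothetical illustration of that effect (not Vitess code), a parallel benchmark over a single shared mutex shows per-operation cost rising as parallelism grows:

```go
package contention_test

import (
	"sync"
	"testing"
)

// counter is a hypothetical stand-in for shared state guarded by a mutex
// on a per-query code path.
type counter struct {
	mu sync.Mutex
	n  int64
}

func (c *counter) inc() {
	c.mu.Lock()
	c.n++
	c.mu.Unlock()
}

// Compare ns/op across parallelism levels, e.g.:
//   go test -bench=SharedMutex -cpu=1,4,32
// (-cpu varies GOMAXPROCS per run; cost per op rises as more Ps fight
// over the same lock.)
func BenchmarkSharedMutex(b *testing.B) {
	var c counter
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			c.inc()
		}
	})
}
```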

@tirsen may be able to validate this.
