
Change director server from Thin to Puma #1800

Merged (6 commits), Nov 6, 2017
Conversation

@Kiemes (Contributor) commented Oct 2, 2017

[#150958544](https://www.pivotaltracker.com/story/show/150958544)

Signed-off-by: Kai Hofstetter [email protected]

@cfdreddbot

Hey Kiemes!

Thanks for submitting this pull request! I'm here to inform the recipients of the pull request that you and the commit authors have already signed the CLA.

@cppforlife (Contributor) commented Oct 2, 2017 via email

@Kiemes (Contributor, Author) commented Oct 2, 2017

@voelzmo wanted this as a starting point for further discussions. We have a follow-up story to gather more numbers. At the moment we can only say that Puma can handle more load than Thin.
There is a test available that fails with Thin but runs green with Puma.

@drnic (Contributor) commented Oct 3, 2017

As an aside, Puma has been the default web server for new Rails 5 apps since mid-2016: https://richonrails.com/articles/the-rails-5-0-default-files

Puma began as a fork of the unmaintained Mongrel, a threaded web server, by Rubinius creator @evanphx. It has gone from strength to strength.

@evanphx (Contributor) commented Oct 3, 2017

If you have questions about tuning puma, just ask!


@beyhan (Member) commented Oct 13, 2017

We ran jmeter load tests against both a thin director and a puma director (3 puma processes), querying the GET /info endpoint. This gives us a basic performance comparison between the two servers. The setup was:

  • 3 jmeter workers
  • 500 threads per jmeter worker, each with a loop count of 1000 => 3 × 500 × 1000 = 1,500,000 requests in total

Puma produces significantly fewer 502 Bad Gateway errors, lower response times, and higher throughput than Thin.

Thin: [jmeter results screenshot]

Puma: [jmeter results screenshot]

Review thread on the diff:

  thin_server.stop!
end
puma_configuration = Puma::Configuration.new do |user_config|
  user_config.workers 3

Contributor: Should this be configurable?

Contributor (reply): At some point in the future, probably. Right now we don't have clear guidance on when to increase the number of workers and what that would mean for other things, such as your database connection pool. Therefore, I'd like to not expose this number for now.
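(For illustration only: if the worker count were exposed later, the change would be small. A minimal sketch, assuming a hypothetical puma_workers key in the director config; this is not the PR's actual code.)

puma_configuration = Puma::Configuration.new do |user_config|
  # 'puma_workers' is a hypothetical config key; the PR hard-codes 3.
  user_config.workers config.fetch('puma_workers', 3)
end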

@dpb587-pivotal (Contributor):

The performance results are a nice addition to the PR!

@voelzmo (Contributor) commented Oct 17, 2017

@evanphx we don't seem to be doing anything fancy with puma. How do you feel about our configuration?

puma_configuration = Puma::Configuration.new do |user_config|
  user_config.workers 3
  user_config.bind 'tcp://127.0.0.1'
  user_config.port config.port
  user_config.app rack_app
  user_config.preload_app!
end
puma_launcher = Puma::Launcher.new(puma_configuration)

  • we've decided to stick to TCP sockets to allow moving nginx to a different machine, if necessary
  • we've used preload_app! to reduce the memory footprint
  • we're using 3 workers and 0:16 threads in clustered mode
  • we're not sure how the number of nginx workers and the number of puma workers should evolve together. If I increase the number of nginx workers from the current 3 to, say, 10, would I need to adapt my puma workers as well?
  • we ensure a separate DB connection pool per puma worker like this: b0bb46d (a sketch of the pattern follows below). Is this a recommended pattern?

Thanks!
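For readers without access to commit b0bb46d: a minimal sketch of the disconnect-before-fork pattern described above, assuming Sequel for database access. The names here (db, DB_URL) are illustrative, not the director's actual code.

require 'sequel'
require 'puma/configuration'

db = Sequel.connect(ENV.fetch('DB_URL'))  # connection pool in the parent process

puma_configuration = Puma::Configuration.new do |user_config|
  user_config.workers 3
  user_config.threads 0, 16

  # Runs in the parent just before the worker processes are forked.
  # Dropping the parent's connections here means each forked worker
  # lazily opens its own pool instead of sharing inherited sockets.
  user_config.before_fork do
    db.disconnect
  end
end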

@voelzmo (Contributor) commented Oct 17, 2017

Don't merge yet; we have to fix some controller responses that fail with puma first.

friegger and others added 2 commits October 23, 2017 15:05
DB connections are disconnected before puma forks its own worker
processes. This ensures that all puma workers will get new
DB connections.

[#150958544](https://www.pivotaltracker.com/story/show/150958544)

Signed-off-by: Beyhan Veli <[email protected]>
@friegger (Contributor):

We have fixed the controller responses and the DB concurrency issues. The PR is now ready to be merged.

@evanphx (Contributor) commented Oct 24, 2017

@voelzmo Some quick notes on your config:

  1. The bind and port settings are mutually exclusive. If you want to bind to localhost like that, you'll need to interpolate the port into the tcp:// URL.
  2. No need to change the nginx config when you change the number of puma workers/threads; they're totally independent variables.
  3. Disconnecting the pool before fork so that each worker spawns a new pool is a common pattern; that is fine.

Basically, fix that url/port issue and you're good!
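Concretely, applying note 1 to voelzmo's configuration would look something like this (a sketch; only the binding changes):

puma_configuration = Puma::Configuration.new do |user_config|
  user_config.workers 3
  # bind and port are mutually exclusive, so the port is interpolated
  # into the tcp:// URL instead of being passed to user_config.port.
  user_config.bind "tcp://127.0.0.1:#{config.port}"
  user_config.app rack_app
  user_config.preload_app!
end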

@beyhan (Member) commented Oct 25, 2017

We executed the following performance tests:

  • bosh directors:

    • latest bosh director from the master branch with the thin setup and without the /deployments endpoint optimization (see Optimize performance of /deployments endpoint #1793)
    • latest bosh director from the master branch with the thin setup and with the /deployments endpoint optimization
    • latest bosh director from the master branch with the puma setup, three workers, and the /deployments endpoint optimization
  • each director ran on a machine with 4 CPUs and 16 GB RAM on OpenStack

  • with 10 deployments

  • tested against the /deployments endpoint

  • we used the throughputramp tool from the cf-routing team's routing-perf-release: https://github.com/cloudfoundry-incubator/routing-perf-release. The benchmark started with 10000 requests on one thread and incrementally ramped up to 10000 requests with 20 threads at a rate limit of 1000.

Results are attached.

[attached charts: requests per second and response time for thin without fix vs. puma with fix; thin vs. thin with fix; thin with fix vs. puma with fix]

manno added 2 commits October 25, 2017 15:34
Puma's bind and port config settings are mutually exclusive.
Previously the bind setting was ignored and puma listened on all interfaces.

[#151788691](https://www.pivotaltracker.com/story/show/151788691)

Signed-off-by: Beyhan Veli <[email protected]>
@beyhan (Member) commented Oct 26, 2017

@evanphx thank you for the feedback! We changed the binding as suggested. This should now be ready to merge.
