Best Practices for Benchmarking and Performance Analysis in the Cloud
Benjamin Oakes edited this page Nov 15, 2013
- Robert Barnes, AWS (slides will be available)
- Has a background in aerospace measurements (first job?)
- Lots of ways to measure; have to think about calibration, accuracy, relevance, and correlation of results with other measurement tools
- Best benchmark: your app
- Benchmarking in the cloud is different because of layers of abstraction -> more variability
- Use a good AMI -- some vary a lot, but highly tested ones don't
- Comparing on-premise vs. cloud (same benchmarks)
- Choosing a benchmark (Geekbench, ...)
- Know what you're actually measuring with your tool
- How do you know when you're done?
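The variability point and the "when are you done?" question suggest a simple stopping rule: repeat the benchmark until run-to-run variability settles. A minimal sketch of that idea (the `bench` callable, the thresholds, and the stand-in workload are all hypothetical, not from the talk):

```python
import random
import statistics

def run_until_stable(bench, min_runs=5, max_runs=30, cv_threshold=0.02):
    """Run `bench` repeatedly until the coefficient of variation
    (stdev / mean) of the collected scores drops below cv_threshold,
    bounded by max_runs. Returns the list of scores."""
    scores = []
    for _ in range(max_runs):
        scores.append(bench())
        if len(scores) >= min_runs:
            cv = statistics.stdev(scores) / statistics.mean(scores)
            if cv < cv_threshold:
                break
    return scores

# Stand-in workload; a real run would invoke the benchmark tool instead.
random.seed(42)
scores = run_until_stable(lambda: 1000 + random.gauss(0, 5))
print(len(scores), round(statistics.mean(scores), 1))
```

The coefficient of variation is unitless, so the same threshold works whether the score is seconds, operations per second, or a synthetic index.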
Tests
- 10 instances...
- Geekbench (black box)
- Testing at scale means you may need to plan for data storage and parsing
- Filesystem, system calls, etc. can greatly influence CPU results
- SPEC CPU2006 (http://spec.org) has results
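One way to handle the storage-and-parsing point above is to emit one JSON record per instance run and aggregate afterwards. A sketch with made-up records and metric names (nothing here reflects Geekbench's actual output format; a real harness would collect these from each instance):

```python
import json
import statistics
from collections import defaultdict

# Hypothetical result lines, one JSON object per instance measurement.
raw_lines = [
    '{"instance": "i-01", "metric": "int_score", "value": 2950}',
    '{"instance": "i-02", "metric": "int_score", "value": 3010}',
    '{"instance": "i-03", "metric": "int_score", "value": 2890}',
    '{"instance": "i-01", "metric": "fp_score", "value": 2700}',
    '{"instance": "i-02", "metric": "fp_score", "value": 2760}',
]

def aggregate(lines):
    """Group values by metric and summarize across instances."""
    by_metric = defaultdict(list)
    for line in lines:
        rec = json.loads(line)
        by_metric[rec["metric"]].append(rec["value"])
    return {
        m: {"n": len(v),
            "mean": statistics.mean(v),
            "stdev": statistics.stdev(v) if len(v) > 1 else 0.0}
        for m, v in by_metric.items()
    }

summary = aggregate(raw_lines)
for metric, stats in sorted(summary.items()):
    print(metric, stats)
```

One line of JSON per result keeps parsing trivial even when thousands of instances report in, since files can be concatenated and streamed.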
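The point about system calls influencing CPU results can be shown directly: the same arithmetic loop, timed with and without a `write()` syscall per iteration, produces very different numbers. A hypothetical illustration, not part of the talk:

```python
import os
import time

def cpu_only(n):
    """Pure arithmetic loop: no system calls in the timed region."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def cpu_with_syscalls(n, fd):
    """Same arithmetic, but issuing a write() syscall every iteration."""
    total = 0
    for i in range(n):
        total += i * i
        os.write(fd, b"x")
    return total

N = 200_000
fd = os.open(os.devnull, os.O_WRONLY)
t0 = time.perf_counter()
cpu_only(N)
t1 = time.perf_counter()
cpu_with_syscalls(N, fd)
t2 = time.perf_counter()
os.close(fd)
print(f"pure compute: {t1 - t0:.3f}s, with syscalls: {t2 - t1:.3f}s")
```

Both functions compute the identical result, so any difference in the timings is measurement environment, not workload -- exactly the trap the talk warns about when a "CPU" benchmark quietly touches the filesystem.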