Find out how a 3-person performance analysis team kept up with a 200-person development organization with little more than some spreadsheets and a little light queuing theory!

Suppose you had a complex application stack of 70+ services that you knew you'd have to scale out by four orders of magnitude over the next two years. You don't have auto scaling on your private cloud (although few things auto-scale *that* far anyway). The developers are somehow too busy implementing features to add instrumentation to their code, and it's nearly impossible to obtain a representative full-stack load environment because the agile teams are changing things constantly! What would you do?

As virtualized infrastructure becomes an integral part of companies' strategies, it is increasingly important for organizations to apply an effective capacity and scalability approach to this new kind of infrastructure.

The main players in the private cloud space still expose relatively few performance metrics to performance engineers. This in turn makes it challenging for performance engineering as a discipline to adapt its approaches and deliver viable capacity and scaling solutions for a private cloud.

The approach I am proposing is simple to implement, yet it has proven to yield a rapid and reliable performance and scalability model. The key insight is that an entire system can be modeled quickly by applying the Universal Scalability Law to each component and then tying the modeled components together using basic queuing theory.
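To make the idea concrete, here is a minimal sketch of how the two pieces fit together. All service names, USL coefficients, and load figures below are hypothetical placeholders, not numbers from the team's actual models: each service gets a Universal Scalability Law curve (single-node rate, contention coefficient α, coherency coefficient β), and the services are then chained as a simple series of M/M/1 queues to estimate end-to-end response time at a given offered load.

```python
def usl_throughput(n, lam, alpha, beta):
    """Universal Scalability Law: capacity at n nodes, given the
    single-node rate lam, contention alpha, and coherency beta."""
    return lam * n / (1 + alpha * (n - 1) + beta * n * (n - 1))

def mm1_response_time(service_time, arrival_rate):
    """M/M/1 residence time R = S / (1 - rho); valid only for rho < 1."""
    rho = arrival_rate * service_time
    assert rho < 1, "offered load exceeds capacity; queue is unstable"
    return service_time / (1 - rho)

# Hypothetical per-service USL fits: (name, single-node req/s, alpha, beta)
services = [
    ("api-gateway", 500.0, 0.02, 0.0001),
    ("auth",        800.0, 0.05, 0.0002),
    ("db-proxy",    300.0, 0.03, 0.0005),
]

arrival_rate = 900.0  # offered load in requests/sec (illustrative)
for nodes in (4, 8, 16):
    total_r = 0.0
    for name, lam, alpha, beta in services:
        capacity = usl_throughput(nodes, lam, alpha, beta)
        # Treat each service as an M/M/1 queue in series:
        total_r += mm1_response_time(1.0 / capacity, arrival_rate)
    print(f"{nodes} nodes/service: end-to-end R = {total_r * 1000:.1f} ms")
```

Running the loop shows the modeled end-to-end response time shrinking as nodes are added, and the same functions can be inverted to answer the capacity question: how many nodes per service keep response time under a target at a projected load.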
