Abstract/Description

Ironically, much of the Agile process is based on intuition. Folklore. Anecdotes. Tradition. Faith.

NOW, FOR THE FIRST TIME IN AGILE HISTORY, there is solid research backed by the hard numbers of tens of thousands of teams from a pool of hundreds of thousands of projects. Not surprisingly, that disturbs some existing foundations, rebuilding them with facts, evidence, and insights.
This talk builds on ongoing research. Last year's results included these insights:

TEAMS THAT AGGRESSIVELY CONTROL WiP:
* Cut Time in Process in half
* Have ¼ as many defects

STABLE TEAMS RESULT IN UP TO:
* 60% better Productivity
* 40% better Predictability
* 60% better Responsiveness

In last year's research, we looked at only a few parameters that we could heuristically extract from the data we already had. It was a great start, primarily because it established our framework for quantitatively evaluating performance. However, the parameters we looked at accounted for only 7% of the variation in performance.

THIS YEAR, we are looking at 35 total variables and have a fully predictive model of performance based upon the team's context as well as their behaviors and practices. With this model, we can make context-sensitive recommendations to target improvement efforts.

If last year's work was the first flight at Kitty Hawk, this year's is landing on the moon. And there's a great view of the big picture from up there. Come join us.

THIS SESSION WILL BE:
* A presentation of general findings in our research correlating agile practices to performance along the dimensions of Productivity, Predictability, Quality, and Time-to-market. These can be used to make general decisions about what to focus on; however...
* This session will also introduce you to the first-ever quantified decision framework for targeting improvement and making agile practice decisions. This fully predictive model of performance can be used to do “what-if” analyses to target improvement efforts, giving specific recommendations along with the likely results of the recommended changes.
* The model is tailored for both context (age of codebase, regulatory needs, etc.) and economic model (e.g., quality is paramount for medical device manufacturers, but time-to-market is most critical for mobile apps). Recommendations are personalized to each team’s specific set of practices.
* Best of all, no specific ALM tool is required to get these benefits. It’s agnostic. Any team will be able to take a survey on their practices and get concrete suggestions for immediate improvement.
* The numbers, so that we don't just say A is better than B. Rather, we can say that A is a 24% improvement in Quality but a 10% reduction in Productivity compared to B. This allows you to plug it into your own economic model and make informed trade-off decisions.
* Visually accompanied in the style of a pulp detective graphic novel
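As a rough illustration of the trade-off arithmetic described above, here is a minimal sketch of plugging the quoted numbers (+24% Quality, -10% Productivity) into a simple weighted economic model. This is not the speakers' actual model; the function, weights, and team profiles are hypothetical, chosen only to mirror the medical-device vs. mobile-app contrast mentioned earlier.

```python
# Hypothetical sketch of an "economic model" trade-off calculation.
# quality_change / productivity_change are fractional deltas for
# practice A relative to practice B; the weights express how much
# each dimension matters to a given team's business context.

def economic_delta(quality_change, productivity_change,
                   quality_weight, productivity_weight):
    """Weighted net benefit of adopting A over B (positive = adopt A)."""
    return (quality_weight * quality_change
            + productivity_weight * productivity_change)

# +24% Quality, -10% Productivity, as in the example above.
# A medical-device team might weight quality heavily...
medical = economic_delta(0.24, -0.10,
                         quality_weight=0.8, productivity_weight=0.2)
# ...while a mobile-app team weights throughput far more.
mobile = economic_delta(0.24, -0.10,
                        quality_weight=0.2, productivity_weight=0.8)

print(f"medical-device net benefit: {medical:+.3f}")  # +0.172 -> adopt A
print(f"mobile-app net benefit:     {mobile:+.3f}")   # -0.032 -> stay with B
```

The same measured numbers yield opposite recommendations under different economic weights, which is exactly why the talk emphasizes reporting magnitudes rather than a single "A beats B" verdict.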

WARNING: DO NOT COME TO THIS SESSION UNLESS YOU ARE PREPARED TO HAVE YOUR MOST CLOSELY HELD BELIEFS ABOUT AGILE CHALLENGED WITH DATA.

Additional Resources

About the Speaker(s)

Larry Maccherone is an industry-recognized thought leader on agile, metrics, and cybersecurity. He currently helps a number of companies including Comcast, AgileCraft, and Agility Health. Previously, Larry led the insights product line at Rally Software which enabled better decisions with data, leveraged big data techniques to conduct ground-breaking research, and offered the first-ever agile performance benchmarking capability. Before Rally, Larry worked at Carnegie Mellon with the Software Engineering Institute (SEI) and CyLab for seven years conducting research on cybersecurity and software engineering metrics with a particular focus on reintroducing quantitative insight back into the agile communities. Contact Larry on his LinkedIn page: https://www.linkedin.com/in/larrymaccherone