RESOURCES

In Search of Excellence — Exploring Real-Time Performance Measurement for Driving Continuous Improvement in Agile Teams

About this Publication

Today, team leadership is a craft that is not systematically supported by data. New technology, however, makes it possible to access real-time performance data, empowering individuals, teams and organizations to take action based on data-driven insights into their current ways of working.

1. INTRODUCTION

“We measure time, cost and quality. These are the results of work already done. How can I help my teams be proactive and know how things are going sooner, and in ways that enable them to improve their working conditions and performance on a continuous basis that would also result in improvements in our general metrics?” This question from the head of projects (PMO) at one of the world’s largest product development companies marks the start of the story we share here. His question reflects two very common measurement problems in product development today. First, existing metrics and KPIs often focus on lagging indicators, measuring the results of actions already performed, when it is often too late to fix things. Second, these measurements are used by senior managers to control and monitor work progress and the consumption of resources. Missing is a systematic way to empower teams with measurements that help them proactively create the right conditions for high productivity and innovation in their own context. Hence, a key to proactive improvement is to (a) identify leading indicators that affect the performance of lagging indicators and (b) complement strategic measurements with operational measurements supporting continuous improvement at the team level.

The concerns with measurement raised by our partner led to a national research and innovation project that resulted in a cloud-based measurement and analytics tool aimed at supporting continuous improvement through a weekly, iterative engagement loop. In this report, we explore the experiences of using the tool and of measuring scientifically verified success factors for performance in agile teams. By complementing traditional KPIs (e.g. time, cost, scope, quality) with measurements for the team, the tool is intended to help teams identify, analyze and act on operational improvements continuously, to drive successful development outcomes. Here is what we have learned so far.

2. Background

Prindit (Prindit, 2017) is a cloud-based measurement and analytics platform for measuring team performance. It is the result of Proactability, a joint research and innovation project between five product development firms and RISE SICS Västerås (Swedish Institute of Computer Science) in Sweden. The measurement tool, originally developed for teams working with traditional waterfall, gate-phased product development, attracted interest, and one of our partners wanted a version adapted to agile development teams. Hence, together we set out to tweak the measurement to meet the needs of these teams.

2.1 Our partner

Our partner, ABB Industrial Automation Control Technologies (ABB IACT), develops products in the automation industry. ABB has 130,000 employees around the globe. We have worked with four teams at ABB IACT that apply a mix of Scrum and XP practices. Each team has 4-6 members, all engineers. Each team started using the measurement tool in early 2016; measurements are ongoing and conducted on a weekly basis. Prior to using the tool, none of the teams had used similar ways of measuring team performance to drive continuous improvement.

2.2 Performance measurement in continuous improvement

Performance measurement is the process of quantifying the effectiveness and efficiency of actions (Neely, 1999). Its purpose is to monitor and improve the performance of these actions on a continuous basis. In other words, performance measurement is a tool for continuous improvement.

Kaizen, Lean production and Six Sigma all rely on systematic closed-loop systems using measurement as a tool for continuous improvement. The underlying principles and mechanisms of these approaches have also heavily influenced continuous improvement in agile development. The essence of continuous improvement is to constantly seek ways in which products and processes can be improved, so that greater value can be delivered to customers at ever greater levels of efficiency. Before any organization can determine what it needs to improve, however, it has to establish where and why its current performance falls short. Hence, the need for performance measures (Neely, 1999). And yet, measures and measurement alone do not produce improvements. Improvements need to be actively identified and implemented by individuals, organizations and management to add value.

A key to successful continuous improvement is identifying leading indicators that affect the performance of lagging indicators (Nudurupati & Bititci, 2002). Lagging indicators measure the results of actions already performed, while leading indicators measure the factors that impact those results. Leading indicators can thus be used to drive a certain behavior or to change direction. Usually, leading indicators relate to the input of the process or to activities within the process. It is these indicators that need to be measured and monitored to ensure proactive improvement.

To drive continuous improvement, teams should thus work with the right set of leading and lagging indicators to measure and monitor the performance of their activities and to focus their improvement efforts. The team should use the leading indicators to drive improvements that result in improvements in the lagging performance measures (e.g. time, cost, scope, quality) (Nudurupati & Bititci, 2002). Establishing the relationship between leading and lagging indicators, and how predictive measurements can be used to create preconditions for teams to perform better, is critical to the overall experience here, since leading indicators offer early feedback that allows teams to correct course.
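
To make the idea concrete, the sketch below checks whether a leading indicator anticipates a lagging outcome one week later. It is a hypothetical illustration only: the factor, the scores, the assumed 1-5 response scale and the outcome series are all made up, not data from the tool.

```python
# A minimal sketch (illustration only): does a leading indicator anticipate
# a lagging outcome? All numbers below are made up; the tool's real data
# and models are not shown in this report.
from statistics import correlation  # available in Python 3.10+

# Weekly team-level mean scores for a leading indicator, e.g. "Goal clarity"
# (assuming a 1-5 response scale).
goal_clarity = [4.2, 4.0, 3.6, 3.1, 3.3, 3.8, 4.1, 4.3]

# A lagging outcome observed one period later, e.g. the share of tasks
# delivered on time in the following week (hypothetical values). The series
# are already aligned one week apart.
on_time_next_week = [0.90, 0.85, 0.70, 0.55, 0.60, 0.75, 0.88, 0.92]

# Pearson correlation between this week's leading score and next week's
# lagging outcome; a strong positive value suggests the indicator leads.
r = correlation(goal_clarity, on_time_next_week)
print(f"lead/lag correlation: {r:.2f}")
```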

2.3 Finding leading performance indicators for our agile teams

How do we find leading indicators? Rapid change requires development organizations to continuously learn and convert knowledge into new market value. In this context, organizations must find ways for individuals to work together to solve complex tasks and problems under uncertainty. Hence, the team, and how well its members collaborate, has become central to success. So, if collaboration is key to dealing with complexity in dynamic contexts, then it is a useful starting point for where to look for data relevant to teams, and thus for exploring what to measure and how.

The measurement we have developed originates from decades of scientific research on success factors for team performance in product development. Measurements are survey-based and cover ten success factors: Goal clarity, Task clarity, Support, Fragmentation, Motivation, Energy level, Workload, Risk level, On-time delivery of tasks and Quality of deliverables (Cedergren, 2011).

Agile development approaches use various measurement practices, e.g. effort estimation, velocity, burndown charts and WIP. Additionally, practices such as continuous retrospectives indicate how things are in the moment, enabling continuous reflection and ongoing experimentation to drive continuous improvement (Rising, 2015). Many different measurement practices and metrics are used in agile development. For example, in a review of measurement in industrial agile development, a team of researchers found that, on top of typical agile metrics, many teams also use custom metrics (Kupiainen et al., 2015). In fact, they found 102 metrics, most of which were software-related and thus applied to products, features, requirements, code, builds, tests and defects. Their results indicate that the reasons for, and the effects of, using these metrics lie in sprint planning, progress tracking, software quality assurance, fixing process problems, and motivating people. Most of these metrics are also generated automatically by development tools. However, automated support for measurements done by the team, for the team, such as happiness indexes, emotional seismographs or continuous retrospectives, is scarcer. Rarer still are tools that actually measure leading performance indicators, capture team performance data during the execution of development activities, and drive continuous improvements that result in improvements in the lagging performance measures.

Work has been done in the area of leading and lagging indicators for agile teams. Cisco, for example, presents a set of metrics for understanding how work is flowing by distinguishing between leading and trailing indicators (Power, 2014). Whereas they promote indicators that measure and visualize workflow based on the different states that work passes through in an organization, including cycle time and throughput analysis, we have a slightly different set of indicators focusing on team collaboration and performance. Our approach also builds on a tool that integrates these measurements with various analytics features, e.g. for predicting outputs, that can further support continuous improvement (Nessen et al., 2017).

2.4 Developing the Agile Pulse

Starting from the original version of the tool, designed for traditional product projects and measuring ten success factors for team collaboration and performance, we discussed the factors with the teams at ABB IACT to adapt them to their needs. It was a straightforward process. The idea was to preserve simplicity and not add too many new factors. The teams kept the original factors and then proposed supplementary success factors that they considered important to agile work. We ended up with a set of thirteen leading performance indicators; three factors were added to the original ten: Focus, Fun at work and Continuous improvement. Finally, we translated these into questions, implemented in the form of a survey that served as the data collection feature (Figure 1).

Figure 1. Agile Pulse survey.
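
For readers who want to see the factor set in concrete form, here is a minimal sketch of how the thirteen indicators and a weekly response could be represented. The factor names come from the text above; the 1-5 response scale and all class and field names are assumptions made for this illustration, not the tool's actual implementation.

```python
# Sketch of the Agile Pulse factor set as a simple data structure. The
# thirteen factor names come from the report; everything else is assumed.
from dataclasses import dataclass
from datetime import date

FACTORS = [
    # The original ten success factors (Cedergren, 2011):
    "Goal clarity", "Task clarity", "Support", "Fragmentation",
    "Motivation", "Energy level", "Workload", "Risk level",
    "On-time delivery of tasks", "Quality of deliverables",
    # The three factors added for agile teams:
    "Focus", "Fun at work", "Continuous improvement",
]

@dataclass
class PulseResponse:
    """One team member's weekly survey answers: factor name -> score."""
    member_id: str
    week: date
    scores: dict[str, int]  # assumed 1-5 scale

    def __post_init__(self) -> None:
        # Guard against unknown factors and out-of-range scores.
        for factor, score in self.scores.items():
            if factor not in FACTORS:
                raise ValueError(f"unknown factor: {factor}")
            if not 1 <= score <= 5:
                raise ValueError(f"score outside assumed 1-5 scale: {score}")

# Example: one (made-up) weekly response.
response = PulseResponse("member-1", date(2016, 3, 7),
                         {"Motivation": 4, "Workload": 2, "Focus": 5})
```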

2.5 The measurement tool

The Agile Pulse was added to a tool that had already been developed to administer online surveys via the web and an app for mobile devices. The tool automatically sends the survey and notifications to its users on a weekly basis. It features a feedback function consisting of a dashboard that visualizes the data and measurement results every week (Figure 2). Longitudinal data can also be accessed in graphs showing historical data and changes over time (Figure 3). The feedback enables individuals, teams and organizations to use the measurements for continuous analysis, and to take action to correct low scores and amplify good ones, and thus prevent poor performance.

Figure 2. Dashboard depicting seven factors and their change compared to the previous week (right side). The factors relate to Ability to deliver and Work situation in a team.

Figure 3. Trend graph showing a 3-month overview of weekly measurements of Motivation in a team, including the mean value and the standard deviation of the responses. The standard deviation is intended to help the team avoid the “Halo Effect” during analysis and retrospective meetings. In this example, the standard deviation indicates that some members’ responses are “red”, so the team needs to find the root cause in order to fix or sustain motivation. The mean value may be “green”, but that alone does not help the team in the long run.
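
The following sketch illustrates the kind of weekly aggregation that sits behind views like Figures 2 and 3: per-factor mean, standard deviation and change versus the previous week, plus simple flags for discussion. The thresholds and red/green rules are hypothetical assumptions; the tool's actual analysis logic is not described in this report.

```python
# Minimal sketch of a weekly per-factor aggregation. All thresholds and
# rules below are assumptions for illustration, not the tool's logic.
from statistics import mean, stdev

def summarize(factor: str, this_week: list[int], last_week: list[int]) -> dict:
    """Aggregate one factor's individual responses into the weekly view."""
    m = mean(this_week)
    return {
        "factor": factor,
        "mean": round(m, 2),
        "delta": round(m - mean(last_week), 2),  # change vs. previous week
        "stdev": round(stdev(this_week), 2) if len(this_week) > 1 else 0.0,
    }

def flags(summary: dict, low: float = 3.0, spread: float = 1.0) -> list[str]:
    """Assumed discussion triggers: a low mean, or a wide spread that may
    hide 'red' individual responses behind a 'green' mean (halo effect)."""
    found = []
    if summary["mean"] < low:
        found.append("low score: analyze the root cause")
    if summary["stdev"] >= spread:
        found.append("high spread: some members may be 'red'")
    return found

# Example with made-up responses from a five-person team: the mean looks
# acceptable, but the spread triggers a retrospective discussion.
s = summarize("Motivation", this_week=[5, 4, 2, 5, 2], last_week=[4, 4, 4, 5, 4])
print(s, flags(s))
```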

3. Exploring Experiences of Real-Time Measurement for Driving Continuous Improvement

This story is based on an ongoing journey involving many experiences; here we highlight a few insights and lessons learned from its first year.

3.1 The “Now, what?” moment

Having developed and introduced the Agile Pulse survey and the measurement tool in four development teams, and now having access to performance data, we encountered an interesting problem that none of us (teams included) had anticipated. Having the data visualized turned into a “now, what?” moment: the teams were not sure what to do with the data or how to proceed. Clearly, we had underestimated both the power of these measurements and the need for more support on how to approach and work with them in productive ways.

The teams reported valuable experiences from using real-time measurement, both pros and cons. In terms of the former, all teams had positive experiences of the tool and agreed that the measurement worked well. The tool was simple to use; measurements were quick and effective; feedback in the form of dashboards and trend graphs was useful; and the tool provided the team with a heat map visualizing the general level of feelings about how the team was doing. This type of measurement increased everyone’s awareness of the state of work, made them more responsive in fixing things, and as a result helped them adjust faster, both individually and as a team. For example, if weekly reports indicated low or deviating scores on some factor, the team discussed and analyzed the reasons behind the score and identified ways to fix it. If motivation, energy or fun factors were low, they introduced culture-strengthening activities such as team lunches and team outings. Some scrum masters also pointed out the importance of scoring high on goal clarity; low scores prompted them to increase communication in different ways to ensure that goals were clear to everyone.

Essentially, the teams stated that the tool helped them detect things that were not always visible: factors that were not on the regular Scrum or Kanban boards or not always clearly spelled out by team members. Trend graphs especially helped detect tendencies or movements over time that would otherwise go unnoticed. This applied to high scores as well, where teams identified and emphasized good practices. More specifically, the teams indicated that measuring leading indicators helped prevent unwanted effects in lagging indicators. Managers in particular stated that the trends were “the real key”. The value of a specific indicator at a given point in time could be “green”, but trend graphs clearly visualized changes that could potentially lead to project delays or higher costs and impact lagging indicators. Generally, the teams’ experience was that measuring leading indicators gave them an opportunity to proactively correct course in ways that traditional metrics and KPIs had not enabled.

On the downside, various problems were encountered. Most pointed to the fact that although data was captured and feedback visualized in effective ways, teams had difficulties using the data. They did not always know how to analyze and act on the feedback provided by the tool. When analyzing the underlying reasons for this, we found three things: skills, support and time.

Concerning skills, it was not always clear what a factor purported to measure, and teams thus felt that interpreting the data was beyond their ability. Despite discussions about the definitions of factors and what was producing a certain score, the next step was sometimes a showstopper. Turning insights into action is complicated. It requires skills in driving and managing change, including domain-specific knowledge, leadership and social skills. It is worth noting that the team members were all engineers; their core competence lies in technology, not necessarily in team or organizational development. Consequently, the tool’s underlying methodology is being further developed, which brings us to the second aspect, support. Although the tool provides some general guidance and recommendations on each factor and how it can be improved, certain contexts call for more hands-on support to teams and users. In this case, training and workshops involving scenarios and examples of measurement, and of how to achieve continuous improvement, served the teams well. Time is a critical aspect. Filling out the survey and collecting data was effortless. However, reaping the full benefits of measurement also requires that the data be analyzed and acted upon when the scores and the tool’s feedback indicate such needs. Teams did not always prioritize the time to take all the necessary steps to close the improvement loop. Hence, prioritizing improvement activities, and the time they require, must be a deliberate act. Overall, the experiences here suggest that more than automated measurement and feedback is needed to close the loop and drive continuous improvement.

3.2 No easy fixes

Now, there are no easy fixes, so we are experimenting our way forward. A measurement tool alone cannot improve a team’s performance, but it can facilitate continuous improvement. In this case, the tool helps capture data and, by amplifying certain signals, brings further insight into what needs to improve or change. To close the improvement loop, human interaction is required: teams must act on the data and insights provided by the tool, as discussed above. Furthermore, some improvements can be easily addressed in the team and by the team, while others require actions outside the team. The former are often triggered by situational or event-driven needs, while the latter reflect structural and systemic ones and call for organization-wide actions. A situational need is typically the result of internal events, planning or organization within the team, such as high workload. Once identified, teams and individuals can discuss and analyze the root cause of the score and then act to repair low scores; a high workload might imply that the team must re-prioritize tasks or split assignments. Systemic needs are the result of external, cross-functional problems impacting many organizational units and teams, and require actions and interventions outside the team. For example, high levels of fragmentation might only be resolved by re-organizing entire organizational structures, thus necessitating the involvement of other stakeholders such as line management or upper management.

4. What We Learned

Measurement alone will not drive continuous improvement, but measurement can help. The measurement system needs to be complemented by, and integrated with, a management system. Hence, the system as a whole consists of two components: the performance measurement system and the performance management system. The measurement system should be able to operate as a simple thermostat, but should also support other functions, such as questioning the standards, strategies, processes and assumptions of the organization. These two components form one integrated system, a system that does not operate in an organizational, strategic or environmental vacuum (Melnyk et al., 2014). In taking such an approach, we need to recognize that it must encompass both single- and double-loop learning. Tools like these work well for capturing data and providing feedback based on the measurement results, but other mechanisms and features are clearly needed to close the improvement loop, repeatedly.

Based on the experiences reported here, we also found that for small agile teams already working according to agile practices and principles, measurement and analytics tools may not provide significant added value beyond what such a team generally already knows or surfaces in daily interactions, activities, meetings and retrospectives. However, this holds from a short-term, even day-to-day, perspective. The measurement in itself creates a documented record of how the team or organization has felt and perceived work over time. The “living memory”, or collective memory, is notoriously short. Storing the measurements and trends in tools like this can help teams and organizations remember when and why things changed, and thus support organizational learning. Access to data is critical to retrospectives and the continuous improvement process.

Additionally, changes implied by the introduction of a new tool need to be addressed to ensure a successful implementation process. Thus, by including organizational culture, context and ways of working, we also recognize that performance measurement is both a technical process and a social one.

The development of a complete and effective measurement system for continuous improvement is not a trivial task, and the same is true of keeping it up to date. In summary, we identified some aspects that could increase the likelihood of succeeding with data-driven continuous improvement:

Organization

  • Understand the relationships between the performance measurement system and performance management system
  • Understand how the environment, strategy and culture relate to each other and to the performance measurement and management systems
  • Recognize the need for single- and double-loop learning

Process

  • Adapt measurement practice and process to team context and needs
  • Align measurement and analytics process with ways of working and work cycles
  • Manage change with new tools/methods (e.g. manage sponsorship, practices, attitudes)

Tool

  • Adapt and contextualize indicators, survey design and measurement design (and adapt over time, e.g. through machine learning and gamification)
  • Provide guidance to support data analysis and taking action in tool
  • Provide self-help in tool (add online material on how to manage scores, e.g. high stress)
  • Provide training/coaching (package tool offer with additional services and how-to training)

5. Acknowledgements

We would like to thank ABB Industrial Automation Control Technologies for excellent collaboration and invaluable contributions to this project. We also want to thank VINNOVA, Sweden’s Innovation Agency, for sponsorship and financial support. Jutta Eckstein, thank you for shepherding us through this experience report with profound guidance, keen insights, questions and suggestions.

REFERENCES

Cedergren, S. “Evaluating Performance in Product Development – The Case of Complex Products”. PhD Thesis, Mälardalen University, 2011.

Kupiainen, E., Mäntylä, M.V. & Itkonen, J. “Using metrics in Agile and Lean Software Development – A systematic literature review of industrial studies”. Information and Software Technology, Vol. 62, pp. 143–163, 2015.

Melnyk, S.A., Bititci, U., Platts, K., Tobias, J. & Andersen, B. “Is performance measurement and management fit for the future?” Management Accounting Research, Vol. 25, No. 2, pp. 173–186, 2014.

Neely, A. “The performance measurement revolution: why now and what next?” International Journal of Operations & Production Management, Vol. 19, No. 2, pp. 205–228, 1999.

Nessen, T., Larsson, S., Cedergren, S., Olsson, T., Andrén, D. & Nyfjord, J. “Towards a predictive model for delivering product development projects on time”. Paper to be presented at the 24th Innovation and Product Development Management Conference, Reykjavik, Iceland, June 11-13, 2017.

Nudurupati, S. & Bititci, U. “Driving continuous improvement”. Manufacturing Engineer, Vol. 81, No. 5, pp. 230–23, 2002.

Power, K. “Metrics for Understanding Flow”. Experience report, Agile2014, Orlando, FL, USA, July 28-Aug 1, 2014.

Prindit, 2017. Prindit’s website: http://www.prindit.com

Rising, L. “Continuous Retrospectives”. Keynote at XP2015, Helsinki, Finland, May 26, 2015.

About the Author

Over the past two decades, I have been deeply engaged in the efforts of various companies to become better software development organizations. They range from small IT departments in Bangladesh to startups and global enterprises. My work has been carried out in various roles: pioneer, innovator, early adopter, expert, catalyst and, at times, a mix of all of these at once.

For the last 20+ years, I have been involved in product and service development, always focused on helping organizations take better advantage of the Lean Product Development and Agile mindset and methods. I am dedicated to building great teams, products and solutions, and have been involved in building and transforming worldwide organizations. I am experienced with multi-cultural teams and organizations within telecom, industrial automation and gaming. Over the years, my work has taken me through different roles, from R&D and enterprise architecture to business development and software management, always with an agile mindset.