Experience Report

Sensemaking Applications for Agile: Combining Qualitative and Quantitative Metrics

About this Publication

Sensemaking is a form of distributed ethnography that helps organizations detect emergent patterns, trends, and weak signals. The narrative-based approach links qualitative data with quantitative data in order to understand, manage, and measure situations that are complex, uncertain, and ambiguous. This experience report focuses on how sensemaking was used to understand impediments to Agile adoption and improve employee engagement.

1. INTRODUCTION

Leaders face unprecedented challenges as business environments and ecosystems become increasingly volatile and uncertain. How do we measure progress and make sense of complex adaptive systems like corporate culture and business ecosystems? How do we detect weak signals and unarticulated, emergent user needs quickly, before the competition? How do we know if our interventions have desirable effects or harmful unintended consequences? As organizations encounter more “unknown-unknowns,” different approaches are needed to measure progress and sense emergent opportunities under conditions of extreme uncertainty.

In prior roles as a change agent, engineer, and manager at Intel Corporation, I worked on increasingly complex situations across a variety of domains, including culture change, innovation, sales, strategic planning, and product development. I am excited to share one approach for managing complex adaptive systems called sensemaking. Sensemaking helps decision makers at all levels make sense of situations in order to take effective action. I hope you will find the approach insightful, useful, and intriguing enough to want to learn more.

2. Background

While there is no common definition of complexity, some characteristics of complex systems are that they have

  • many interdependent parts that interact in nonlinear and surprising ways
  • emergent behavior and higher-order properties that are different from the resultant properties of the constituent parts
  • limited predictability, where small changes in conditions can lead to very different dynamics over time
  • the ability to self-organize and operate without central control

Many aspects of change management, conflict resolution, trust building, and employee morale and engagement are complex. Most socio-technical problems involving humans and technology are complex as well. When leaders attempt to take an ordered approach (e.g. roadmaps, Gantt charts, detailed project plans) to solve unordered, complex problems, things never go as planned. Organizations struggle with the lack of information, ambiguity, high levels of uncertainty, and unforeseen circumstances. Increased investment in more detailed analysis or a more rigorous planning process rarely helps. Different approaches are needed when the situation is complex.

Sensemaking is a narrative-based methodology that captures and analyzes a large number of experiences and observations in order to understand, measure, and manage change within a complex adaptive system. The sensemaking approach used in this report was developed by David Snowden (founder and chief scientific officer of Cognitive Edge) and is based on concepts from anthropology, neuroscience, and complexity theory. The methodology bridges the gap between qualitative data (e.g. narratives, observations, experiences) and quantitative data (e.g. questionnaires, surveys) by linking narratives with quantitative data provided by the participants. The combination of narrative and metadata provides a nuanced and holistic perspective that enables leadership teams to identify emergent patterns and trends in behaviors and perceptions. Teams can also use the patterns and underlying narratives to inform action plans and affect change.

Like traditional surveys, sensemaking can be structured as either a one-time pulse or as a continuous process over time (weeks to months). The degree of anonymity afforded to participants is another design choice. Anonymous processes promote transparency and perceptions of safety. Non-anonymous processes make it easier to track the participation of individual participants.

While both sensemaking and traditional surveys often use questions, demographic data, and free-form text, there are several significant differences between the two. The sensemaking narrative collection process often relies on multiple entries from the same participant. The collection process is like keeping a continuous journal, diary, or captain’s logbook of small observations, in which participants capture their daily experiences, observations, events, and interactions. “Watercooler” conversations, informal hallway discussions, gossip, and rumors are a few examples of what may inspire an entry from a participant.

Participants also tag their entries by answering several questions about the experience or observation they share. What feelings are associated with their story? What is the main theme of the story, or what is the experience mostly about? What is the effect on business or morale? These questions enable the participant to add layers of meaning beyond the plain text of the narrative. This is significant because narratives frequently contain sarcasm, metaphor, and innuendo, which are often misunderstood or misinterpreted by others unfamiliar with the situation. The questions function as metadata that is used for quantitative analysis and for identifying emergent patterns across a large population. By answering questions about their experience, participants link their qualitative narrative with quantitative data.

Sensemaking works well for understanding and managing almost any complex adaptive human system. This experience report contains examples and lessons learned from several sensemaking initiatives that we worked on at Intel to help understand some complex challenges including

  • What are the impediments and barriers to the adoption of Agile and Lean methods and principles across a large multi-national enterprise?
  • How can organizations improve employee retention rates and increase employee engagement levels?
  • How can organizations improve diversity and foster a more inclusive culture?
  • What are the emergent needs across a large, distributed group of technology users?

3. Sensemaking Applications to understand Agile adoption impediments and improve employee engagement

Part of our team’s charter was to promote the adoption of Lean and Agile methods, cultivate a community of practice, and improve product development teams. After learning about sensemaking, we felt that it could yield insights about Agile and development work that we had not gotten from prior approaches, anecdotal evidence, or traditional surveys.

Our sensemaking work began by identifying the primary domains of interest and defining a few high-level research objectives. We needed to align on what we wanted to learn or discover from this sensemaking effort. For example, we were interested in understanding the factors related to intrinsic motivation and employee engagement. The table below contains a few of our research objectives.

After we aligned on the high-level research objectives, our next step was to develop a set of questions or sensemaking framework for capturing the relevant data. The three primary components of our sensemaking framework were

  • an opening question to prompt participants to share an experience
  • a set of questions asking participants to describe or evaluate what they shared
  • a set of demographic questions about the participant providing the response
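
To make these components concrete, here is a minimal sketch of how a single response record could be represented. The field names, types, and example values are illustrative assumptions, not the actual schema of the collection tool we used.

```python
from dataclasses import dataclass, field

@dataclass
class Response:
    """One participant entry: a micro-narrative plus self-signified metadata.
    All field names are illustrative, not the schema of any specific tool."""
    prompt: str          # which of the three opening prompts was chosen
    narrative: str       # free-form text of the experience or observation
    # Triad answers: weights over three labeled vertices, or absent for [N/A]
    triads: dict = field(default_factory=dict)   # e.g. {"motivation": (0.2, 0.5, 0.3)}
    # Dyad answers: a position between 0.0 and 1.0, or absent for [N/A]
    dyads: dict = field(default_factory=dict)    # e.g. {"business_effect": 0.8}
    # Demographic descriptors of the person submitting the response
    demographics: dict = field(default_factory=dict)

example = Response(
    prompt="Share a specific situation or moment at work that gives you hope "
           "or concern for the future of Agile.",
    narrative="Our team skipped the retrospective again because of schedule pressure...",
    triads={"motivation": (0.1, 0.6, 0.3)},   # autonomy, mastery, purpose
    dyads={"business_effect": 0.25},          # 0 = very negative, 1 = very positive
    demographics={"role": "engineer", "tenure": "5-10 years", "geo": "US"},
)
```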

We used open-ended prompt questions, worded to constrain the range of possible responses to our domain of interest. The prompts were designed to trigger memories and solicit specific experiences. We were not interested in just any story, only those related to Agile and events at work. We asked participants to submit experiences by responding to one of the following three prompts of their own choosing.

  • Share a specific situation or moment at work that gives you hope or concern for the future of Agile.
  • Describe a specific event or activity that happened during work that inspired or bothered you.
  • Tell a story about work that you would share with a close friend.

Participants then responded to the prompt they chose with free-form, narrative text (not a list of bullets). Participant responses to these prompts could be either positive or negative. There were no “right” answers to the ambiguous, open-ended prompt questions but participant responses were hopefully related to Agile and events at work.

The next section was a set of questions designed to address our research objectives. We created a set of sixteen questions informed by theory from the fields of psychology and neuroscience or by the knowledge and experience of Agile and Lean subject matter experts. For example, we wanted to understand the factors that influenced the intrinsic motivation of employees. One set of concepts we used was adapted from Daniel Pink’s (2012) research on autonomy, mastery, and purpose. According to Pink and related research, the intrinsic motivation of knowledge workers can be improved by increasing their autonomy, helping them develop skills and mastery, and creating connections to a purpose greater than themselves. This research informed the design of one of our questions, shown below.

This triad question type was composed of a balanced set of three related concepts. All three concepts were either positive or negative. We asked participants to position a marker within the triangle where they felt it best described or reflected their experience, story, or observation. The closer they positioned the marker to any one vertex or description, the stronger that statement was in the context of their experience. For example, if a participant positioned a marker in the center of the triangle, it meant that people in their story were motivated equally by [autonomy] AND [mastering their skills] AND [a greater purpose].

We also offered a not applicable [N/A] option to participants if the question did not apply to the situation they shared. In this example, participants were asked if people in the story or observation they shared were motivated by some combination of [autonomy], [mastering their skills], and [a greater purpose]. Again, there were no “right” answers to this question. Participants were simply asked to describe the event or experience they shared. Question responses added layers of meaning beyond the narrative text provided in their response. Note that the description and evaluation questions were completed by the person submitting the response, not by our team or any other researchers, algorithms, or third parties.

Another question type we used was the dyad, which asked participants to position a marker where they felt it best described their experience, story, or observation. The closer the marker was to either side or description, the stronger that statement was in the context of their experience. In our dyad example below, we asked participants to evaluate the overall effect on business based on the events that transpired in the story or experience they shared. Not applicable [N/A] was again offered as an option if the question did not apply to their situation or if they did not know how to answer.
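
For quantitative analysis, a triad answer can be treated as three weights over its vertices (summing to 1) and a dyad answer as a single position between its two poles. The sketch below shows one way such a conversion could work if marker positions were captured as (x, y) coordinates inside an equilateral triangle; the vertex layout, labels, and function names are illustrative assumptions rather than the actual implementation of the tool we used.

```python
import numpy as np

# Assumed layout of an equilateral triad with unit sides; the labels mirror
# the motivation triad described above (autonomy, mastery, purpose).
VERTICES = np.array([[0.0, 0.0],              # autonomy: bottom-left
                     [1.0, 0.0],              # mastery: bottom-right
                     [0.5, np.sqrt(3) / 2]])  # purpose: top

def triad_weights(x, y):
    """Convert a marker position inside the triangle into barycentric weights
    that sum to 1. Equal weights (1/3 each) mean the participant placed the
    marker in the center, i.e. all three concepts applied equally."""
    a, b, c = VERTICES
    T = np.column_stack((a - c, b - c))   # 2x2 system for the first two weights
    wa, wb = np.linalg.solve(T, np.array([x, y]) - c)
    return wa, wb, 1.0 - wa - wb

def dyad_score(position, length=1.0):
    """Normalize a dyad marker to [0, 1]; 0 is one pole, 1 is the other."""
    return position / length

print(triad_weights(0.5, np.sqrt(3) / 6))  # center of the triangle -> (1/3, 1/3, 1/3)
print(dyad_score(0.8))                     # marker near the right-hand pole
```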

The last section of our sensemaking framework contained a set of six demographic questions. We felt it was important for responses to be collected anonymously to promote transparency, so the demographic questions provided us with a generic description of the person submitting the response. The demographic questions were also important for analysis, to understand the similarities and differences across populations. Demographic data also provided additional context and helped people who were reading the responses understand where the person submitting the response might be coming from (e.g. biases). Here are a few examples of the demographic data we collected: gender, role, business unit, geography, grade level, and tenure. The appendix of this report contains several additional examples of questions that we used in the sensemaking frameworks.

With a draft framework developed, we tested it with a small population of participants prior to collecting responses at a larger scale. This was done initially with a few subject matter experts using paper forms. After a couple of iterations, we then tested the framework on small groups of about five to ten people. Several iterations were needed to refine and tune the framework to make it efficient and effective. Here are a few criteria we evaluated during the framework testing phase:

  • Are the prompt questions producing responses relevant to our research objectives? Or are we getting responses that are unrelated?
  • Are the questions worded as clearly and simply as possible? Are any questions confusing participants, or are there any that take participants a while to comprehend? Are the questions self-explanatory?
  • Are there any questions with a large number of people selecting not applicable? If so, what edits would make the question more applicable? Or should the question be dropped entirely from the framework?
  • How long did it take for a participant to complete a response? Are there too many questions?
  • How could the usability and overall user experience for participants be improved?

After our framework was sufficiently tested and validated, we began the execution phases of the sensemaking methodology which included narrative collection, emergent pattern analysis, and intervention design and deployment. The diagram below depicts our execution process at a high-level starting with story collection.

3.1  Collection

We collected experiences from people anonymously using a web application (via a smartphone, PC, or other device). Printed forms were also considered, but we decided against them because of the extra effort needed to transcribe responses. Participants responded to the prompt question(s) by sharing a short story (i.e. a fragmented micro-narrative). Most responses were not fully constructed stories with a beginning, middle, and end. They were mostly simple observations, rumors (e.g. café conversations or hallway chats), or feelings about a specific event or incident. A majority of the responses ranged from a few sentences to a couple of paragraphs.

We then asked participants to answer several questions about what they shared. The questions enabled participants to add additional meaning to their story beyond what they shared in plain text. This process of self-signification by participants eliminated the need for a researcher or algorithm to tag the response, which often introduces bias and errors. We found that it took about ten to twenty minutes for participants to complete a response the first time. However, subsequent responses took less time as participants became familiar with the framework.

3.2 Emergent Pattern Analysis

Once we had collected about two hundred responses for our sensemaking initiative on Agile adoption barriers, the metadata from the participants’ answers was analyzed with statistical software. The data was also visualized graphically to look for coherent patterns, and correlations between question parameters were calculated. In the following pattern analysis example, we wanted to understand which Agile and Lean principles correlated most with stories that had positive effects on business. We asked participants which principles were present or lacking in the story they shared. The chart ranked principles along the y-axis according to how well they correlated with stories that had a positive effect on business. The coloring reflected whether the story had a positive (green) or negative (red) effect on business. The x-axis represented the percentage of stories in each of the seven business-effect categories.

According to this data set, 75% of the stories that were about the principle of [Respect for Individuals] also had a strongly positive (dark green) effect on business according to the person that submitted the story. Stories about the principles of [trust] and [learning] also had strong correlations with effects on business. When these principles were present in a story, the story’s effect on business was positive. When these principles were lacking but needed in a story, the story’s effect on business was negative, according to the participant submitting the story.
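
As a rough illustration of the cross-tabulation behind a chart like that, the sketch below computes, for each principle, the percentage of its stories falling into each business-effect category and ranks principles by how often they co-occur with strongly positive stories. The toy data, column names, and category labels are assumptions, not our actual data set.

```python
import pandas as pd

# Toy data: one row per story, the principles the participant tagged, and the
# self-signified effect on business (a few of the seven ordered categories).
stories = pd.DataFrame({
    "principles": [["respect", "trust"], ["learning"], ["trust"], ["respect"]],
    "business_effect": ["strong_positive", "positive", "negative", "strong_positive"],
})

# One row per (story, principle) pair, then the percentage of stories in each
# business-effect category for every principle.
exploded = stories.explode("principles")
pct = (exploded.groupby("principles")["business_effect"]
               .value_counts(normalize=True)
               .unstack(fill_value=0.0) * 100)

# Rank principles by co-occurrence with strongly positive stories, mirroring
# the y-axis ordering of the chart described above.
print(pct.sort_values("strong_positive", ascending=False).round(1))
```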

We also analyzed patterns around attitudes toward change. After submitting an experience about work or the future of Agile, participants were asked whether the attitude toward change in their story was some combination of [thoughtful consideration], [enthusiastic], or [keep as is]. Each of the dots within the triad represents a specific experience, observation, or story shared by a participant. The hashed lines represent the average position for each set of stories. Again, the coloring reflects whether the story had a positive (green) or negative (red) effect on business. The size of the dot reflects whether the event was rare (small) or common (large). Stories where the attitude toward change was [keep as is] correlated with negative effects on business, while stories with a [thoughtful] or [enthusiastic] attitude correlated with positive effects on business. For example, the dot on the right triad closest to the top was a story from a participant that had a positive effect on business (green), was somewhat common (medium-sized dot), and reflected a [thoughtful consideration] attitude toward change.
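
The average positions marked by the hashed lines amount to a group mean over each story’s triad weights. A minimal sketch, with assumed column names and toy values:

```python
import pandas as pd

# Toy triad answers: weights over [thoughtful consideration, enthusiastic,
# keep as is], plus the self-signified effect on business for each story.
attitudes = pd.DataFrame({
    "thoughtful":   [0.6, 0.2, 0.1, 0.7],
    "enthusiastic": [0.3, 0.1, 0.2, 0.2],
    "keep_as_is":   [0.1, 0.7, 0.7, 0.1],
    "business_effect": ["positive", "negative", "negative", "positive"],
})

# Average triad position per group: where the marker for the positive stories
# sits versus the marker for the negative stories.
print(attitudes.groupby("business_effect")[["thoughtful", "enthusiastic", "keep_as_is"]].mean())
```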

As part of this effort we also wanted to find a way to measure effects related to managers who were resistant to change. While working on Agile and Lean transformations, we encountered middle managers who resisted change even though frontline employees and executive sponsors were advocates for adopting Agile and Lean. We metaphorically called this the “permafrost” layer, which acted as if it were frozen, perhaps due to bureaucratic policies, status quo mindsets, or conflicting priorities.

The following graph was one attempt to measure and make sense of permafrost related stories. The y-axis represents the attitude towards change in the story. The x-axis measures the alignment between the participant and their management. For example, if a participant strongly supported the events in the story they shared and they thought that their management did not, then it would result in a low degree of alignment along the x-axis.

We found a strong correlation between stories with a negative effect on business (i.e. colored red) and a [keep as is] attitude towards change combined with a perceived lack of alignment between employees and their managers (lower left corner).
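
One way such a pattern could be quantified is sketched below: flag stories that fall in the lower-left region (low perceived alignment combined with a strong [keep as is] attitude) and compare their business-effect signification with the remaining stories. Column names, thresholds, and toy values are assumptions for illustration.

```python
import pandas as pd

stories = pd.DataFrame({
    "alignment":  [0.2, 0.8, 0.1, 0.9, 0.3],   # 0 = no alignment with management
    "keep_as_is": [0.8, 0.1, 0.7, 0.2, 0.9],   # weight on the [keep as is] vertex
    "business_effect": [-1, 1, -1, 1, -1],     # -1 negative, +1 positive (self-signified)
})

# "Permafrost" region: low alignment plus a strong [keep as is] attitude
# toward change (thresholds chosen purely for illustration).
permafrost = (stories["alignment"] < 0.4) & (stories["keep_as_is"] > 0.6)

print("negative share, permafrost stories:",
      (stories.loc[permafrost, "business_effect"] < 0).mean())
print("negative share, all other stories:",
      (stories.loc[~permafrost, "business_effect"] < 0).mean())
```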

3.3 Intervention Design and Deployment

The next phase of our Agile adoption sensemaking was to make sense of the emergent patterns and trends in order to take action. As part of a facilitated action-planning workshop, several patterns, trends, and weak signals were shared with participants. The narratives associated with the patterns were purposefully withheld while participants were asked to come up with several guesses for what was behind these patterns and trends. For example, participants speculated that the stories with a [keep as is] attitude toward change and a negative effect on business were from a particular part of the organization. After their guesses were captured, participants were allowed to read the individual stories that generated the patterns and trends. We challenged the participants to find evidence that validated or invalidated their guesses. This process generated insights and learning that could then be turned into ideas for interventions and action plans.

Here is a simple example of how the data and stories were used to develop interventions. One aspect of self-organization that we wanted to understand was the focus on teams versus individuals. Using a dyad, we asked participants to describe whether recognition or consequences in the story were focused on [teams over individuals] or [individuals over teams]. Several of the stories skewed toward [individuals over teams] were negative. One of the negative stories heavily skewed toward [individuals over teams] was titled “How Individual Contributor Culture can be blocker for Success.” Insights from this pattern and this specific story inspired an intervention to reduce the predominance of the phrase “individual contributor” in the company’s language. Instead, our team of Agile coaches would try to use the phrase “team contributor” over “individual contributor” to help shift the culture toward team success rather than individual success. While “individual contributor” was still used at the company, we felt that discussions about the individual contributor culture helped improve Agile adoption and development work in general.

As part of another sensemaking workshop to understand employee engagement, a portfolio of safe-to-fail interventions was developed by participants to amplify beneficial patterns and dampen negative ones. The interventions were designed as safe-to-fail (vs. fail-safe) because changes within a complex adaptive system are unpredictable and unintended consequences were expected for every intervention. One of the key questions we asked participants during ideation and brainstorming was “How do we get more stories like these and fewer like those?” What might the organization do to get more stories, or dots, about [enthusiastic] attitudes towards change that have positive effects on business? What interventions might reduce or help prevent some of the red negative stories about a [keep as is] attitude toward change? These safe-to-fail interventions also went through a process of constructive, critical feedback to make them more robust and improve the chances of success. Several interventions were then deployed in parallel. As new experiences and observations were shared, the narratives changed and new patterns emerged.

4. What We Learned

Here are some of our insights from the sensemaking initiative on Agile and Lean adoption impediments:

  • Stories that were about the principle of Respect for Individuals had the strongest correlation with business results. Learning and Trust were strongly correlated as well.
  • Stories that were about Lean and/or Agile had disproportionately higher rates of Respect for Individuals, Learning and Trust present.
  • The sensemaking method was able to measure some aspects of the permafrost problem which was correlated with negative effects on business.
  • The sensemaking method was able to measure some aspects of self-organization and team cohesion which were correlated with positive effects on business.
  • We need to improve the degree of alignment between people and their management.
  • We need to dampen a [keep as is] attitude towards change.
  • We should focus on long-term benefits despite short-term pain.
  • We should push for more team-level commitments and team recognition.
  • We need to help teams understand and connect with their higher purpose.

4.1 Reflecting on the benefits of the sensemaking methodology

There were several unique benefits we experienced from the sensemaking methodology that we didn’t get from traditional surveys and focus groups. The combination of quantitative metrics connected to rich narratives helped leaders understand the challenges in context and created a sense of urgency. A series of personal, compelling, and emotional stories from participants was difficult for leaders to ignore or dismiss. After reading one story, a vice-president commented that it didn’t accurately reflect the whole situation. The vice-president asserted that the person who submitted the story only understood part of the problem. However, the leader quickly realized that the employee’s perception of the troubling situation was genuine and that the leadership team had a role to play to improve transparency and more effectively communicate with employees.

Leaders at all levels of the organization used the data from the same sensemaking initiative to take action and affect change. The sensemaking data set was fractal, or self-similar, at different levels of the organization. This meant that some of the team-level patterns were similar to the patterns of the larger group. The whole organization was engaged to develop and deploy interventions to try and get “more stories like these and fewer like those.”

The ability to quantify and measure progress within a complex adaptive system without knowing what success looks like in advance was another powerful advantage of sensemaking. For example, in a previous graphic about attitudes towards change, about half of the stories were on the left-hand side of the triad toward the [keep as is] corner and were signified by participants as having a negative effect on business (red). The organization could set a goal to reduce the fraction of the stories on the left side from 50% to 40% over the next quarter. Several safe-to-fail interventions could then be deployed to shift the pattern toward the right-hand side and progress could be measured. “More stories like these and fewer like those” could be converted into a set of quantitative goals and metrics for the organization.
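
As a hedged sketch of how such a goal could be tracked, the metric below computes the share of stories that both lean toward the [keep as is] corner and are signified as negative; the threshold, column names, and toy data are illustrative assumptions.

```python
import pandas as pd

def keep_as_is_negative_share(df, threshold=0.5):
    """Fraction of stories whose triad answer leans toward [keep as is]
    and whose self-signified business effect is negative."""
    flagged = (df["keep_as_is"] > threshold) & (df["business_effect"] < 0)
    return flagged.mean()

quarter_1 = pd.DataFrame({"keep_as_is": [0.8, 0.2, 0.7, 0.6, 0.1],
                          "business_effect": [-1, 1, -1, -1, 1]})
quarter_2 = pd.DataFrame({"keep_as_is": [0.7, 0.2, 0.3, 0.1, 0.2],
                          "business_effect": [-1, 1, 1, 1, 1]})

# The organizational goal would be to see this share fall quarter over quarter
# as safe-to-fail interventions shift the pattern toward the other corners.
print(keep_as_is_negative_share(quarter_1))  # 0.6
print(keep_as_is_negative_share(quarter_2))  # 0.2
```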

4.2 Recommendations for future sensemaking initiatives

After leading many sensemaking initiatives, we learned a few things about the methodology that other practitioners may find useful in future work. For instance, we found it particularly difficult to keep the frameworks short. If possible, limit the number of questions to about three or four triads, four to six dyads, and three to four multiple-choice questions at most. We had anecdotal evidence that people did not submit responses frequently if the framework was too long. We also found that presenting all of the questions at once improved the completion rate and user experience for participants versus splitting the questions across separate pages.

One of the more useful questions asked participants to give their entry a name or headline: “If this were a news story, what would the headline read?” The names provided interesting insights and enabled us to scan entries quickly. In one framework, we did not ask participants to name their stories in order to reduce the number of questions, and we greatly regretted it later during analysis. Names are a powerful way to add meaning.

One challenge we encountered was getting people to submit stories. Many organizations suffered from survey fatigue, and it was difficult to get people’s attention through mass emails and electronic marketing campaigns. Word of mouth and leveraging personal relationships worked better than mass-media communication. Another technique that worked well was to have people volunteer to be story champions for a short period of time, typically one or two weeks. During this time, it was their job to share experiences and observations on a daily basis. When they were done, they passed the responsibility to another volunteer. We also believed that integrating the collection process with some part of a work process helped (e.g. collecting responses as part of every retrospective and demo).

A lesson we learned from the statistical analysis work was that we had to be careful with statistical claims because story collection was based on convenience sampling. The samples were not truly random; we mostly received responses from participants who felt like submitting a story. Rather than frame patterns as statistically significant, we instead asked whether there was enough evidence to take action.

Another technique that worked well was to develop dashboards and automated reports that enabled participants to do their own quantitative analysis. This distributed approach to analyzing the data improved our overall ability to scan for weak signals and patterns. It also helped engage a large group in affecting change.
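
A minimal sketch of the kind of per-group summary such a self-service dashboard could expose, so that any team could scan its own slice of the data; all names and values here are assumptions.

```python
import pandas as pd

stories = pd.DataFrame({
    "business_unit": ["BU-A", "BU-A", "BU-B", "BU-B", "BU-B"],
    "business_effect": [-1, 1, 1, 1, -1],
    "keep_as_is": [0.7, 0.2, 0.1, 0.3, 0.8],
})

def unit_summary(df, unit):
    """Summary statistics a team could pull for its own slice of the data."""
    s = df[df["business_unit"] == unit]
    return {
        "stories": len(s),
        "negative_share": float((s["business_effect"] < 0).mean()),
        "mean_keep_as_is": float(s["keep_as_is"].mean()),
    }

for unit in stories["business_unit"].unique():
    print(unit, unit_summary(stories, unit))
```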

5. Acknowledgements

I’d like to thank and acknowledge my mentors, coaches and co-travelers on this sensemaking journey. I owe a debt of gratitude to David Snowden for his thought leadership on sensemaking and complexity methods, and his mentoring over the years. I’d also like to thank Michael Cheveldave for his coaching as we came up to speed on the design and execution of sensemaking initiatives. A big thank you is due to Rhea Staddick as my co-traveler and confidante for much of the sensemaking work at Intel. Thanks also to Nanette Brown for being my shepherd on this experience report for the Agile 2017 conference – your advice was highly valued and greatly appreciated!

6. APPENDIX

REFERENCES:

Pink, D. H. (2012). Drive: the surprising truth about what motivates us. New York: Riverhead Books.

Snowden, D. SenseMaker™ website, http://cognitive-edge.com/sensemaker/
