RESOURCES

Things Are Broken: A case study in moving too fast

About this Publication

My company presented an exciting opportunity to spend some time with a few teams and learn why delivery was taking longer than expected, so I created a Retrospective Value Mapping survey to use during team retrospectives. What I uncovered was not hugely surprising, but it did allow me to have some tough conversations with company leadership. In a sense, we were moving too fast for our own good.

1.0 INTRODUCTION

“Move fast and break things.” — Mark Zuckerberg

He made that quote famous years ago, and it lit a fire under many in the segment of software I work in. Mobile development is no longer a hobby for companies today, and in that world speed is key. If you can’t innovate and change fast enough, you’ll get left in the dust. Bottle Rocket Studios is one of the original third-party app development companies, founded the day after Steve Jobs announced the creation of the iOS App Store. The company has spent the last eight years perfecting the art of moving fast and breaking things along the way.

My entrance to the company began three years ago. My boss hired me as a project manager, but secretly he was bringing me in to serve as our internal Agile transformation consultant. After my first year, I was elevated to a lead position and told of the desire for me to function as Bottle Rocket’s Agile Coach. The first thing I did was form a cross-disciplinary team to examine how we operated on teams, with an emphasis on steady improvement. I reported to the VP of Solutions and Delivery while still performing my duties inside the PMO. The nearly two years I spent in this role are the highlight of my career.

In early 2015, I was presented with an interesting coaching opportunity. One of the divisions of the company was having a problem delivering releases on time for clients and wanted to find out why. My boss gathered me, the program manager of this group, and a few other key members of leadership together to discuss some options. It was then that the decision was made to have me spend a few months with the group and see what I could find wrong.

2.0 BACKGROUND

This division consisted of several teams. A framework team on each platform (iOS and Android) provided the main functionality of the apps that were made for brands. They were mostly autonomous, with tech leads and QA providing the day-to-day direction. Depending on the main functionality they were writing, their sprints tended to be longer than those of other teams (three to four weeks). Retrospectives were brief, and stand-ups were long and disorganized. A product manager would show up for demos and planning, providing high-level stories for teams to break down on their own later.

There was also a back-end web services team that performed a few functions. They maintained the APIs that fueled the framework teams, as well as the services provided by brands. The team also provided production support. Consisting only of engineers and QA, they were also autonomous from the rest of the division.

Implementation teams made up the majority of the division, divided up to support the various brands that utilized our app platform. These were more traditionally structured, with project managers (who served mainly as scrum masters and client support leads), engineers, testers, and user-interface specialists. Implementation teams would take new releases from the framework teams, integrate new services from the web services team, and skin the app using branding provided by our clients. Usually, the release would also include some custom functionality implemented outside of the framework. The release would also coincide with marketing campaigns the brands had planned, meaning it had fixed ship dates.

This final set of teams also had some unique challenges that other teams did not have to deal with. Many of the assets provided in client APIs were still being actively updated as implementations began. The clients had their own internal deadlines, which often were not communicated to our teams. As a result, teams would be left waiting on client services, causing a stop-and-go flow of work. “Hurry up and wait” was not just a slogan for these teams. It was a way of life.

This presented several challenges that enterprise organizations might be familiar with. Many of the teams functioned in silos without proper team leadership to shield them from outside stakeholder interruption. The teams that did have protection (the more client-facing implementation teams) were not properly enabled to speak for team commitments. The PMO often felt as if they were only order takers.

In years past, this division had tried a few different tactics to improve productivity. They had experimented with utilizing some Scrum events, but had never fully given it a shot. There was a standing Scrum of Scrums meeting on the calendar, but it was sparsely attended by leaders and often amounted to status reports to each other with little collaboration.

After the initial meeting to discuss the issue in early 2015, I set up several interview sessions with development and quality assurance leads to get their take on the challenges this group faced. It allowed team members to have a voice separate from company leadership and the PMO. I completed more than a dozen interviews over the course of two weeks. Many issues were mentioned, but three areas in particular kept coming up:

  1. Teams were often unable to commit to deliverable timelines because of what they described as “an unusual amount of uncertainty and delays.”
  2. Project management needed a deeper understanding of progress and issues during an implementation.
  3. Teams felt they never ended a day working on what they intended to at the start.

Armed with all of this information, I formulated a strategy and kicked off the effort.

3.0 MY STORY

Early on, I made the decision not to spend a ton of time exploring and understanding the full history of this group of teams. The company is small, so I had already heard a few stories leading up to the start of my coaching. Once some team members heard I was coming on board, my inbox filled up quickly with opinions. I felt that created a decent amount of recency bias. Like many external agile coaching consultants, I wanted to come in with as open a mind as possible.

The first thing I did was call a meeting including some tech leads, project managers, and executive sponsors. Those conversations revealed a somewhat clear path to me, but I was encouraged to be creative in my research. As such, I decided to take a two-pronged approach to change.

The first involved a traditional Scrum implementation by the book (or guide, if you will). The way I sold it, this was not a process that I invented off the top of my head. It was something that had been baking in the community for a while and had a proven track record of success. My Scrum Alliance certifications also presented me as a leader in this practice, so company leadership was confident in my ability to run things appropriately.

The second part included a survey that teams would use as an addendum to their team retrospectives. Scrum masters running the event would have the option to have teams either take the survey in advance and utilize it as an anonymous way of gathering information to review, or run retrospectives their own way and have teams fill out the survey afterwards. A few of the teams began the Scrum implementation with me facilitating all events, and for those I chose to have teams fill out the survey in advance.

To create a successful survey, I took our analytics expert at Bottle Rocket (L.B.) out to lunch to discuss how she would collect the data if this team were like one of our clients. Our usability testing practice at the time was going through a transition, and L.B. was trailblazing the way we would be conducting interviews in the future. Since she holds a master’s degree in Organizational Psychology, I knew she would point me in the right direction.

We came up with the idea of using aspects of Net Promoter Score (NPS), value-stream mapping, and Jeff Sutherland’s Happiness Metric surveys to collect our data. Based on my experience, this seemed like an organized approach to quantifying discussion.

L.B. and I settled on five ideas to query in what I referred to as Retrospective Value Mapping:

  1. Rate satisfaction with the current sprint.
  2. Rate team productivity.
  3. Rate team communication.
  4. Rate personal productivity.
  5. Rate the quality of work delivered.

Each of the five questions asked the team member to pick a number from one to ten, with one being the lowest and ten being the highest. After each rating, we included an open-ended text box for the user to state why they felt that way. At the end of the survey, we asked one final open-ended question to see if there was anything else the team member wanted to add.
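
To make the survey format concrete, here is a minimal sketch of how one anonymous response could be modeled in code. The field and question names are my own illustration of the structure described above, not the actual Survey Monkey configuration.

    from dataclasses import dataclass, field
    from typing import Optional

    # The five Retrospective Value Mapping questions, each rated 1 (lowest) to 10 (highest).
    QUESTIONS = [
        "sprint_satisfaction",
        "team_productivity",
        "team_communication",
        "personal_productivity",
        "quality_of_work",
    ]

    @dataclass
    class RetroResponse:
        """One anonymous team member's answers for a single sprint."""
        team: str
        sprint: int
        ratings: dict                                 # question name -> rating from 1 to 10
        reasons: dict = field(default_factory=dict)   # question name -> open-ended "why"
        closing_comment: Optional[str] = None         # final catch-all question

        def __post_init__(self):
            for question, rating in self.ratings.items():
                if question not in QUESTIONS:
                    raise ValueError(f"Unknown question: {question}")
                if not 1 <= rating <= 10:
                    raise ValueError(f"{question} rating must be between 1 and 10")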

3.1 Scrum Implementation

The transition for all teams to a standard Scrum implementation was slowly rolled out across the division. I took two teams at a time for three to four sprints, then moved on to other teams while still shadowing some of the previous teams’ ceremonies. Once designated Scrum masters had a couple of sprints with me facilitating all of the events, I gave them the option of taking over a few when they felt comfortable. Naturally, they felt most comfortable with stand-ups first because of the sheer number of repetitions they had observing my behavior.

Most teams agreed to have demos, retrospectives, and planning all in one day to allow for nine straight days without team meetings. Implementation team backlogs were fairly well defined and repetitious depending on the specific brand’s needs, so a refinement meeting was not necessary.

One of the biggest adjustments I made out of the box was to how PMs ran stand-ups for their teams. Before I took over events for the teams, there was no board to reference in the morning. As expected, this caused ambiguous communication about progress toward the sprint goal.

Also, there was no follow-up on blockers from day to day. Throughout each day, project managers would sometimes speak to individual team members about issues that were resolved, but the rest of the team would never hear about it. Some blockers were never resolved at all, mainly because the PMs who stayed silent during stand-ups never wrote them down.

Once I incorporated physical Kanban boards and took a more active role in communication during daily scrums, my peers shadowing me saw immediate improvements in team participation. Team members started communicating which specific tasks they would have resolved that day. Some would even point to the board to reference items on their radar. They also expressed appreciation for blockers being resolved when I took my turn.

As compared to previous implementations of the framework, my co-workers took to Scrum rather easily. It was clear they were ready for something new in their daily workflow, and I was proud of the early strides that teams made in the transition.

3.2 Results from the Survey

It was interesting to observe how teams would use the survey in their retrospectives. I did not provide any specific direction on how it had to be filled out when I was facilitating the event for teams, only that everyone needed to complete it before we left the room. Three patterns emerged:

  • One team was very diligent about filling the survey out in advance, leading to what I observed to be prepared team discussion. I would start the event by reading the averages for each question and some of the reasons why. We would then start probing into the root cause. The perceived benefit for them was making the discussion time more solutions-oriented, because the questions allowed team members to understand how they felt in advance.
  • Another team would spend the first 10 minutes of the retro filling the form out. They referred to it as their “decompression period” of the meeting. This method created the same solutions-oriented discussion much sooner in the event, while still allowing any last-minute items in the sprint to be addressed.
  • A third team would begin with open discussion time to verbalize their thoughts and feelings on the recent sprint, and would then end the session answering the survey questions. This precluded the team from having the benefit of their own data to aid in discussion, but the PM for this team created a very useful tool as a result: a template for organizing the data in our internal wiki, giving teams an organized space to review some of the larger discussion points. This template is used by much of Bottle Rocket’s PMO today as a result of its success.

Note: When I facilitated the retrospectives, I had teams fill out the survey using the first method. I did so only to cut down on meeting time. I found it fascinating that the other two methods grew organically on other teams.

At the end of the fifth sprint, I started reviewing the survey data.

The main benefit of using Survey Monkey as my collection tool was that I could filter and download spreadsheets of data for any team or sprint I desired. There were other teams outside this division of the company that wanted to be a part of the pilot of this survey, so I needed the ability to create several filters for groups.

The challenge for me was that I could quickly take averages for teams and sprints, but the open-ended questions were difficult to parse. A colleague’s wife works in data analysis, so I interviewed her to see if there were simple ways to parse the data. Unfortunately, the tools that she uses took longer to set up than I had available in my timeline. The cost of those tools was also prohibitive for my budget. As a result, I ended up blocking off two full days of work to simply read through the information. Positive and negative results were color-coded in a spreadsheet to allow for easy reference. I also separated sprints and teams into separate sheets.
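
For reference, the numeric side of that spreadsheet work can be approximated with a short script. This is a minimal sketch assuming the Survey Monkey export has been flattened into a CSV with hypothetical column names (team, sprint, and one column per rating question); it only reproduces the averaging and filtering, not the reading of open-ended comments.

    import csv
    from collections import defaultdict

    RATING_COLUMNS = [
        "sprint_satisfaction",
        "team_productivity",
        "team_communication",
        "personal_productivity",
        "quality_of_work",
    ]

    def average_ratings(csv_path, sprints=None, teams=None):
        """Average each rating column, optionally filtered by sprint numbers and/or team names."""
        totals = defaultdict(float)
        counts = defaultdict(int)
        with open(csv_path, newline="") as handle:
            for row in csv.DictReader(handle):
                if sprints is not None and int(row["sprint"]) not in sprints:
                    continue
                if teams is not None and row["team"] not in teams:
                    continue
                for column in RATING_COLUMNS:
                    if row[column]:  # skip blank answers
                        totals[column] += float(row[column])
                        counts[column] += 1
        return {c: round(totals[c] / counts[c], 2) for c in RATING_COLUMNS if counts[c]}

    # Example: division-wide averages for the first five sprints.
    # print(average_ratings("retro_value_mapping.csv", sprints={1, 2, 3, 4, 5}))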

Here are the results from sprints one through five:

  • Sprint satisfaction: 6.7
  • Team productivity: 7.7
  • Team communication: 7.25
  • Personal productivity: 7.6
  • Quality of work: 7.1

Since I facilitated the retrospective events, these numbers were not surprising as a whole. However, seeing them aggregated did strike me as interesting. A few initial observations that stood out to me were:

  1. The number averages were close, much closer than one would imagine from a data perspective. The distance between the lowest and highest rated categories was just a single point. Since I am not a data scientist, I couldn’t really quantify why this was the case. As a result, I simply chalked it up to the sample size.
  2. All of the averages were high, period. Since the surveys were taken in a variety of environmental settings, I did not make any conclusions as to the reason. My use of the numbers was consequently in relation to each other, because the questions were the main constant I had.
  3. Sprint satisfaction was the lowest average, which I was expecting because of the frustration teams had with the current workflow.
  4. Productivity (team and personal) rated the highest, which I believed to be the product of passionate people working in an agile format. Teams were empowered to communicate throughout the day, and often would be able to identify and resolve issues before PMO caught on.
  5. The low number that was a surprise, though, had to do with quality of work. One of the company’s points of pride is our quality assurance discipline and its ability to work hand-in-hand with engineering to crank out an amazing product.

The low quality number led me to investigate the comments around that section. Below are a few of them:

  • “Rushing through tasks because of tight deadlines are effecting the overall quality of our product.”
  • “I’d say no real change in quality of work, but 2 week intervals can cause a hurried sensation towards the end of the sprint and may create a decline in quality.”
  • “I don’t feel like very much went out. What we did make looks alright.”
  • “More time needs to be spent on regression and bug fixes, but circumstances have prevented us that luxury.”
  • “In some cases, we were more likely to give up on an issue that was going to take too long, or put it off until the next release, because of the time crunch. Maybe it’s better that we don’t bend over backwards to meet every request all the time, anyway.”

This concerned me, so I gathered the leads and PMO together to review this information. In advance, I sent the data in the meeting invite and asked them to come prepared with feedback from their peers as to the cause of this drop in perceived quality.

Many team members felt like they started in a hole, mainly due to deadlines and goals dictated by the client and internal account management. Aggressive timelines gave them the impression that they needed to move at warp speed. This is not new in the client services business, so I started to discount their comments. I kept digging, though, and found something to work with.

As I mentioned above, our brands often won’t have all of their APIs and associated data ready for our consumption. Many of the requirements for these web services were fluid, depending on the campaign our release would accompany. What I was not aware of was how our teams responded to that change.

4.0 WHAT WE LEARNED

As many in our industry know, the definitions of ready and done should be established before teams begin working on a product. These checkpoints on either side of the iteration exist not only for the team’s sanity, but also to ensure peak productivity for the time box. During the meeting with team leadership, we identified that there was not any sort of release checklist to ensure work could begin.

This led to teams accepting user stories and implementation tasks into the sprint that weren’t actually ready to be worked on. If there was work that could be done, it would only be partially completed. In some cases, half-finished work would sit in the product backlog for another iteration or two before it could be picked back up again. Interestingly enough, these delays in client deliverables did not allow the release deadline to be pushed back. We would simply have to make up the time on the back end.

From a quality assurance perspective, we also identified that testers were jumping from app to app too much to get into a rhythm. Because the platform itself is meant to service many different brands with the same code base, implementation teams might sometimes be working on up to four different apps at a time. Engineering did not experience a challenge in context switching, mainly because if there was a bug in one app, there was a high likelihood it was underneath the UI and could be fixed at the framework level of the code.

So, QA would sometimes test two or three apps in a given day of work and would lose context for which issue was where. When asked why they felt this was needed, I was pointed back to deliverable dates and the perceived need for output.

4.1 A tweak in the workflow

Without making wholesale changes in our process, we decided to try a couple of things:

  1. Backlog owners (usually PMO) were required to show up to the next sprint planning session with a draft of definitions of ready and done (DoR and DoD). There was not much discussion over what “done-done” meant, because many team members had done this exercise before. The “ready-ready” part, however, brought up plenty of spirited debate, because teams were very excited to finally have this checkpoint in place (a minimal sketch of such a ready check appears after this list).
  2. We kept the daily workflow the same for engineering, but QA was asked to only test one brand at a time. On some teams, this meant we needed to alter the number of testers from sprint to sprint to accommodate current deliverable schedules, but it allowed each member to be the subject matter expert for a brand until it shipped.
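
As promised above, here is a minimal sketch of the kind of gate a definition of ready creates at sprint planning. The checklist entries are hypothetical examples drawn from the dependencies mentioned earlier (client APIs, branding assets), not the division’s actual DoR.

    # Hypothetical Definition of Ready items; the real checklist was drafted by backlog owners.
    DEFINITION_OF_READY = [
        "client_api_delivered",       # the brand's web services are available to integrate
        "branding_assets_received",   # skins and assets from the client are in hand
        "acceptance_criteria_agreed",
        "story_estimated_by_team",
    ]

    def is_ready(story):
        """A story is ready only when every checklist item is checked off."""
        return all(story.get(item, False) for item in DEFINITION_OF_READY)

    def plan_sprint(candidate_stories):
        """Split candidates into stories the team accepts now and stories deferred to a later sprint."""
        accepted = [s for s in candidate_stories if is_ready(s)]
        deferred = [s for s in candidate_stories if not is_ready(s)]
        return accepted, deferred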

The first item created a unique challenge, because only accepting work that was ready meant that clients would need to be given some tough love. If they didn’t deliver on time, PMO told them we couldn’t start on the work until the next sprint. While the conversations around this topic were difficult the first couple of times, clients eventually came around to the benefit once they saw the results.

In a sense, we had been sacrificing quality in the name of speed, or at least the perception of it. We weren’t really delivering work any sooner or later, because most of the time it hinged on teams receiving their release dependencies. What we could now quantify was a more fluid flow of work, which improved our quality.

Take a look at the averages in scores for sprints six through ten:

  • Sprint satisfaction: 7.3
  • Team productivity: 7.8
  • Team communication: 7.6
  • Personal productivity: 7.9
  • Quality of work: 7.6
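
A quick way to see the movement is to compute the per-question deltas between the two blocks of sprints; the sketch below simply hard-codes the averages reported above.

    # Averages from sprints 1-5 versus sprints 6-10, as reported above.
    before = {"sprint_satisfaction": 6.7, "team_productivity": 7.7,
              "team_communication": 7.25, "personal_productivity": 7.6,
              "quality_of_work": 7.1}
    after = {"sprint_satisfaction": 7.3, "team_productivity": 7.8,
             "team_communication": 7.6, "personal_productivity": 7.9,
             "quality_of_work": 7.6}

    for question in before:
        print(f"{question}: {before[question]} -> {after[question]} "
              f"({after[question] - before[question]:+.2f})")
    # Sprint satisfaction (+0.60) and quality of work (+0.50) show the largest gains.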

Scores rose across the board, with the largest gains in sprint satisfaction and quality of work. Armed with this data, I put together a presentation for company leadership. Many of us felt like we had all the data needed to make a successful argument for a new way to work at Bottle Rocket.

4.2 The Value In Numbers

Going in, I had a few preconceived notions about what the results of the survey would yield. I knew that the frenetic pace of their work was the result of poor boundaries with clients in relation to change within the sprint. While logic dictates that quality would suffer as a result, I was not expecting the team to recognize it in terms of metrics. The numbers were the most powerful argument I could make.

If I had simply presented the quotes from team members without the benefit of hard data to back up our improvement, the concerns could have been chalked up to whining. Having presented qualitative research in the past, I know this all too well. Instead, we could point to an issue that could be partially solved on individual teams, but couldn’t be improved at scale without support from up the food chain.

We pitched leadership on the notion of having a definition of ready for the entire division of the company. When brands come armed with an implementation date, it would be beneficial if we could present them with a checklist of items to fulfill before we committed to the delivery date. Again, the conversations with existing brands were sometimes difficult, but the data we provided gave us all the ammunition needed.

Today, this checklist is used by account and PMO teams to help our clients align with the best way to deliver quality work in a reasonable time frame. It does not limit our risk or exposure, but it has created a more peaceful environment for team members. They are also empowered to raise their hands if a task does not meet the DoR. Project leads also spoke about the value of this proposition to their peers across the company, which has, in some small part, created an increase in sanity for delivery teams.

4.3 Slow Down

Dave Snowden, the creator of the Cynefin framework, once said in a presentation, “creating systems where people slow down and pay attention, from a cognitive perspective, turns out to be absolutely critical.” His example was a section of road in Swindon, England, called the Magic Roundabout. It contains five smaller roundabouts surrounding a sixth.

Many would consider it the most dangerous stretch of road in the world, but they would be wrong. According to Wikipedia, it has amazing traffic throughput and an even better safety record. Snowden says this is because drivers are forced to slow down and pay attention once they enter.

In my opinion, this example highlights how the same approach is just as important for our teams. If we can all learn to slow down and pay attention to our work, quality and efficiency will increase significantly.

5.0 ACKNOWLEDGEMENTS

Monte Masters. Long before I was hired, we met and managed to hit it off in a way that was unique to me at that time in my career. Along the way, he managed to mentor me in ways that I can’t articulate without getting emotional. He saw something in me that I wasn’t sure even existed at the time.

He also provided the organizational shielding I needed to run experiments like this in the organization. I received very little pushback from the executive sponsor, as a result.

Leaders like him train when needed, but for the most part I was enabled to do what was necessary and learn my own lessons along the way. He wasn’t concerned with the specifics of our plan, just that we were thinking about one as we moved through project work. His organizational support of my coaching also provided the covering I needed to walk around looking for holes in our daily approach to work. His “How We Work Initiative” is the best thing I have ever worked on to date.

Thanks, Monte, for everything you did for me!

REFERENCES

Scrum Guide, http://www.scrumguides.org/

Net Promoter Network, https://www.netpromoter.com/know/

Sutherland, Jeff, Scrum Inc., https://www.scruminc.com/happiness-metric-wave-of-future-2/

Survey created on Survey Monkey, https://www.surveymonkey.com

Scrum board created on Jira by Atlassian, https://jira.atlassian.com/

Dave Snowden and Cynefin, http://cognitive-edge.com/

Magic Roundabout, https://en.wikipedia.org/wiki/Magic_Roundabout_(Swindon)

About the Author

Chris’s first job out of college was as the weekend sports anchor at an NBC affiliate. If he had only known what was in store for his career! Interestingly enough, he still loves telling the stories of others around him every day. Each interaction is an opportunity to learn what makes you unique and understand where you came from. Chris thinks that if we got to know each other more on a personal level, the tough conversations would be easier to have. Come tell him your story!