It was interesting to observe how teams used the survey in their retrospectives. When I facilitated the event, I did not give teams any specific direction on how to fill it out, only that everyone needed to complete it before we left the room. Three patterns emerged:
- One team was very diligent about filling out the survey in advance, which led to what I observed to be a more prepared team discussion. I would start the event by reading the averages for each question and some of the reasons behind them, and we would then start probing into the root cause. The perceived benefit for them was that discussion time became more solutions-oriented, because the questions allowed team members to understand how they felt in advance.
- Another team would spend the first 10 minutes of the retro filling out the form. They referred to it as the “decompression period” of the meeting. This method created the same solution-oriented discussion much sooner in the event, while still allowing any last-minute items from the sprint to be addressed.
- A third team would begin with open discussion time to verbalize their thoughts and feelings on the recent sprint, and would then end the session by answering the survey questions. This meant the team did not have the benefit of their own data to aid the discussion, but the PM for this team created a very useful tool as a result: a template for organizing the data in our internal wiki, which gave teams an organized space to review the larger discussion points. Because of its success, this template is used by much of Bottle Rocket’s PMO today.
Note: When I facilitated the retrospectives, I had teams fill out the survey using the first method, purely to cut down on meeting time. I found it fascinating that the other two methods grew organically on teams.
At the end of the fifth sprint, I started reviewing the survey data.
The main benefit of using Survey Monkey as my collection tool was that I could filter and download spreadsheets of data for any team or sprint I desired. Other teams outside this division of the company wanted to be a part of the survey pilot, so I needed the ability to create filters for several groups.
The challenge was that while I could quickly take averages for teams and sprints, the open-ended questions were difficult to parse. A colleague’s wife works in data analysis, so I interviewed her to see if there were simple ways to parse the data. Unfortunately, the tools she uses took longer to set up than my timeline allowed, and their cost was prohibitive for my budget. As a result, I ended up blocking off two full days of work to simply read through the information. I color-coded positive and negative results in a spreadsheet for easy reference, and separated sprints and teams into separate sheets.
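For anyone wanting to skip the manual averaging step, the numeric portion of an export like this can be summarized with a few lines of scripting. The sketch below is purely illustrative and assumes a hypothetical CSV layout (a "team" and "sprint" column plus one column per rating question); the actual Survey Monkey export will be laid out differently.

```python
# Minimal sketch: average each rating question per team and per sprint,
# assuming hypothetical column names in a downloaded CSV export.
import pandas as pd

RATING_COLUMNS = [
    "sprint_satisfaction",
    "team_productivity",
    "team_communication",
    "personal_productivity",
    "quality_of_work",
]

# Load the exported responses (file name is an assumption).
responses = pd.read_csv("survey_export.csv")

# Averages broken out by team and sprint, for per-group review.
by_team_and_sprint = (
    responses.groupby(["team", "sprint"])[RATING_COLUMNS].mean().round(2)
)

# Overall averages across all teams and sprints.
overall = responses[RATING_COLUMNS].mean().round(2)

print(by_team_and_sprint)
print(overall)
```

The open-ended comments would still need a human read-through, but a summary like this takes seconds rather than days for the numeric questions.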
Here are the results from sprints one through five:
- Sprint satisfaction: 6.7
- Team productivity: 7.7
- Team communication: 7.25
- Personal productivity: 7.6
- Quality of work: 7.1
Since I facilitated the retrospective events, these numbers were not surprising as a whole. However, seeing them aggregated did strike me as interesting. A few initial observations stood out:
- The averages were close, much closer than one might expect; the distance between the lowest- and highest-rated categories was just a single point. Since I am not a data scientist, I couldn’t really quantify why this was the case, so I chalked it up to the sample size.
- All of the averages were high, period. Since the surveys were taken in a variety of environmental settings, I did not draw any conclusions as to the reason. Consequently, I used the numbers only in relation to each other, because the questions were the main constant I had.
- Sprint satisfaction was the lowest average, which I was expecting because of the frustration teams had with the current workflow.
- Productivity (team and personal) rated the highest, which I believed to be the product of passionate people working in an agile format. Teams were empowered to communicate throughout the day and often identified and resolved issues before the PMO caught on.
- The surprising low number, though, was quality of work. One of the company’s points of pride is our quality assurance discipline and its ability to work hand-in-hand with engineering to crank out an amazing product.
The low quality-of-work number led me to investigate the comments in that section. Below are a few of them:
- “Rushing through tasks because of tight deadlines are effecting the overall quality of our product.”
- “I’d say no real change in quality of work, but 2 week intervals can cause a hurried sensation towards the end of the sprint and may create a decline in quality.”
- “I don’t feel like very much went out. What we did make looks alright.”
- “More time needs to be spent on regression and bug fixes, but circumstances have prevented us that luxury.”
- “In some cases, we were more likely to give up on an issue that was going to take too long, or put it off until the next release, because of the time crunch. Maybe it’s better that we don’t bend over backwards to meet every request all the time, anyway.”
This concerned me, so I gathered the leads and PMO together to review the information. I sent it out in advance with the meeting invite and asked them to come prepared with feedback from their peers on the cause of this perceived drop in quality.
Many team members felt like they started in a hole, mainly due to deadlines and goals dictated by the client and internal account management. Aggressive timelines gave them the impression that they needed to move at warp speed. This is not new in the client services business, so I started to discount their comments. I kept digging, though, and found something to work with.
As I mentioned above, our brands often won’t have all of their APIs and associated data ready for our consumption. Many of the requirements for these web services were fluid, depending on the campaign our release would accompany. What I was not aware of was how our teams responded to that change.