RESOURCES

Making the Change: Going Agile at the Department of Labor

About this Publication

How did a small division within the United States Department of Labor manage to break through its silos while improving its performance? This paper demonstrates how three teams worked together, using the Kanban Method’s practices, to become a unified software development service.

1. INTRODUCTION

The United States Department of Labor (DOL) is the cabinet-level department that administers and enforces more than 180 federal laws and more than 1,000 federal regulations. The Department’s mission is “To foster, promote, and develop the welfare of the wage earners, job seekers, and retirees of the United States; improve working conditions; advance opportunities for profitable employment; and assure work-related benefits and rights.” Within the Department, the Office of Public Affairs’ Division of Electronic Communications is responsible for managing and developing the Department’s web presence, www.dol.gov, as well as many web and data-driven applications that support the Department’s mission.

Responding quickly to changes in demand among the Department’s agencies was an ongoing challenge for the Division’s three-person Technical Team of developers. The manager of the team began to look for a better way to support his team while managing the demand for change requests, new development, and maintenance work. The manager’s and team’s first choice was to use Scrum to develop and release a high-priority project known as “Quarry.” I was brought on to help stabilize the Scrum implementation while encouraging greater collaboration among three separate silo teams within the Division: the Technical Team, the Governance and Security Team, and the Quality Assurance (QA) Team.

This experience report covers how three teams, composed mostly of contract staff, came together to evolve their way of working over a two-year period while using concepts and practices from the Scrum Framework and the Kanban Method. My goal in this report is to share the concepts and practices that benefited, and sometimes challenged, the teams as they managed and evolved their way of working. This report is written from my perspective and conveys only my opinions and observations about the experience. I speak only for myself when I write about the perspective of the managers and teams involved.

2. Background

2.1  The Technical Team

The Technical Team was responsible for requirements analysis and software delivery, including support for releases into production. It consisted of a senior developer, a mid-level developer, and a junior developer. Management and support of the team included the federal manager, me as the lead, a systems administrator, and an occasional designer who would help with web page layouts. Prior to my arrival, this team was using Scrum or traditional project management techniques to deliver their work for security and governance review. They would deliver work in batches to the Governance and Security Team and the QA Team.

2.2 The Governance and Security Team

The Governance and Security Team was responsible for multiple aspects of the contract and associated projects. The team ensured that contracts were performing successfully relative to regulatory requirements. They also oversaw and managed a mid-level security specialist who was responsible for performing security tests and inspections against the code base and server configurations. The mid-level security specialist depended on the Technical Team to support delivery to the QA Team.

2.3 The Quality Assurance Team

The Quality Assurance Team was responsible for black box testing and accessibility testing for the Quarry project and other development changes performed by the Technical Team. The team consisted of a mid-level testing specialist and a Section 508 compliance expert who performed 508 compliance, manual, and automated testing. The QA Team depended on the Technical Team to support their server configurations.

2.4 Quarry

Initially, the high-priority focus was on Quarry, an application that would convert a database into a RESTful API. It was being developed in part to meet the requirements of President Obama’s M-13-13 memo titled “Open Data Policy—Managing Information as an Asset.”
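
For readers unfamiliar with the pattern, the sketch below shows one minimal way a database table can be exposed as a read-only RESTful endpoint, here in Python with Flask and SQLite. The database file, table name, route, and technology stack are illustrative assumptions, not Quarry’s actual design.

    # Minimal sketch of exposing a database table as a read-only RESTful endpoint.
    # The database file, table name, and route are illustrative assumptions;
    # they are not Quarry's actual schema or technology stack.
    import sqlite3
    from flask import Flask, jsonify

    app = Flask(__name__)
    DB_PATH = "example.db"  # hypothetical database file

    @app.route("/api/v1/records", methods=["GET"])
    def list_records():
        """Return every row of a hypothetical 'records' table as JSON."""
        conn = sqlite3.connect(DB_PATH)
        conn.row_factory = sqlite3.Row  # rows behave like dictionaries
        rows = conn.execute("SELECT * FROM records").fetchall()
        conn.close()
        return jsonify([dict(row) for row in rows])

    if __name__ == "__main__":
        app.run()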

2.5  Aging Applications

The Technical Team was also responsible for a large maintenance backlog. This backlog included rewriting twenty applications that were at risk of major security vulnerabilities due to legacy code bases such as ColdFusion, ASP, or .NET.

2.6 Quality Assurance and Release Management Support

The three teams were also involved in releasing applications on behalf of other agencies that were required to go through the Division for security and quality assurance review. During the two-year period of this report, the three teams supported numerous releases targeting production environments such as web servers, the Apple iTunes Store, and Google Play store.

One of the biggest challenges with the work was how to address and manage it with a limited number of developers, testers, and support staff. The federal manager of the Technical Team was responsible for managing approximately 120 stories for eight projects. This experience report covers how a group of six individuals supported this demand.

2.7 The Five Questions

In August of 2014, I interviewed for the Team Lead position of the Technical Team. During my interview, the federal manager asked five questions that captured the challenges he and the Division were facing. The questions were:

  1. “How can you help me to see into the work?”
  2. “How long will it take for work to complete once the team starts on a story?”
  3. “When will a feature or project be done once the team starts?”
  4. “How can you help me to express to upper management the demands on the team and their capacity or capability to address the demand?”
  5. “How can you get QA to talk with the developers?”

These five questions were to become the focus of my work with the Division over the next two years. My first task was to help them stabilize their usage of the Scrum Framework.

3. Stabilizing Scrum

My initial charge was to stabilize and mature the team’s implementation of the Scrum Framework. The Technical Team had started to use Scrum in the Spring of 2014 to develop Quarry. They chose Scrum because they felt it would offer them the opportunity to focus and deliver on their project commitments. The Technical Team’s attempt at using Scrum was their way of signaling they were adopting an agile way of working to manage their commitments, focus, and changing demands. The team’s Scrum implementation included:

  • A Product Backlog
  • A series of columns (To Do, Doing, and Done) to visualize work in progress

However, the team was struggling to meet the spirit of the Scrum framework (e.g., Sprint planning, backlog refinement, Sprint review and retrospectives, etc.). There was no performance monitoring or reporting taking place. Some of the challenges they cited included turnover within the team and an aging development infrastructure that had failed them at least once with regard to source control.

The developers on the Technical Team were also experiencing challenges while collaborating and working with other teams or silos within the Division. Those silos were the Governance and Security team and the Quality Assurance team. Until my arrival, the three teams had operated using a waterfall approach, sending their work downstream using traditional project management meetings such as a handoff meeting.

While the Technical Team was focused on delivering the Quarry project to completion, they were also challenged on a weekly basis to manage multiple internal and external customer requests. These requests ranged from simple production bug fixes to new features and sometimes requests for new applications. During my interview and initial few days attempting to stabilize the Scrum implementation, the Technical Team manager made it clear he would also need an approach to balance the capability of the team against the requests and changing priorities he was responsible for managing. That request prompted my use of the Kanban Method.

I chose to implement a proto-Kanban in mid-September to visualize the end-to-end aspects of the Division’s development and maintenance demand and to capture the capability of the teams. The proto-Kanban had a handful of features (a sketch of such a board layout follows the list below), which included:

  • Two swim lanes that contained user stories: one for the Quarry (high priority/high risk) project and one for the next most important project.
  • A subordinate Scrum board used by the Technical Team to track their tasks needed to develop the stories
  • Limits for the number of active stories in process for a given area such as analysis, development, etc.
  • No limits (an unbounded number) on stories that could sit in a completed state for a given area
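
A minimal sketch of how such a board layout might be represented is shown below. The column names, swim lanes, and limit values are illustrative, not the Division’s actual configuration.

    # Sketch of a proto-Kanban board definition. Column names and limit values
    # are illustrative; None marks the unbounded "complete" areas that made this
    # a proto-Kanban rather than a full Kanban system.
    PROTO_KANBAN = {
        "swim_lanes": ["Quarry (high priority / high risk)", "Next most important project"],
        "columns": [
            {"name": "Analysis",          "wip_limit": 2},
            {"name": "Analysis Done",     "wip_limit": None},  # unbounded
            {"name": "Development",       "wip_limit": 3},
            {"name": "Development Done",  "wip_limit": None},  # unbounded
            {"name": "Testing",           "wip_limit": 2},
            {"name": "Ready for Release", "wip_limit": None},  # unbounded
        ],
    }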

The proto-Kanban supported a larger team view of the project work. It helped bring everyone together during a 10:00 a.m. stand-up meeting. While using the proto-Kanban, I noticed changes in behavior by team members such as:

  • discussions between the security and testing teams to resolve or address risks associated with the stories
  • coordination between the development, security, and quality assurance team members on how to address issues and handoffs associated with user stories

4. Practice and Process – August 2014 through December 2015

From the Fall of 2014 through the early Winter of 2015, we made several changes to our proto-Kanban. We continued to use Scrum throughout this period and found good results with iterations, retrospectives, and a dedicated planning meeting. We also found that visualizing the status of the work using a proto-Kanban supported the collaboration that was needed between the three teams.

4.1 Ticket Design and Ownership

When I created the proto-Kanban, I started tracking the start and finish dates on each ticket. Using the start and finish dates supported the development of a general picture of lead time and throughput. The process could not be reliably forecast, however, because of the unbounded (unlimited) areas where completed work accumulated.
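
The sketch below illustrates the kind of calculation this tracking made possible. The tickets and dates are made up, and the code simply derives a per-ticket lead time and a rough throughput from start and finish dates.

    # Sketch: deriving a rough lead time and throughput from per-ticket start
    # and finish dates. The tickets below are made-up examples, not Division data.
    from datetime import date

    tickets = [
        {"id": "STORY-1", "start": date(2015, 3, 2),  "finish": date(2015, 4, 20)},
        {"id": "STORY-2", "start": date(2015, 3, 9),  "finish": date(2015, 5, 4)},
        {"id": "STORY-3", "start": date(2015, 3, 16), "finish": date(2015, 5, 11)},
    ]

    lead_times = [(t["finish"] - t["start"]).days for t in tickets]
    avg_lead_time = sum(lead_times) / len(lead_times)

    window = (max(t["finish"] for t in tickets) - min(t["start"] for t in tickets)).days
    throughput_per_week = len(tickets) / (window / 7)

    print(f"Average lead time: {avg_lead_time:.1f} days")
    print(f"Rough throughput: {throughput_per_week:.2f} tickets per week")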

During a retrospective in the Spring of 2015, the three teams voiced concern that the ticket design lacked ownership and accountability. We all felt a bit frustrated when we were not sure who had tested something or who had analyzed a story for tasks. Around the time of this retrospective, I finished reading Daniel Vacanti’s book “Actionable Agile Metrics for Predictability”. I felt there might be some benefit to identifying work that was waiting for a long period in a completed state. The frustration of the team and the new knowledge from Vacanti’s book compelled me to update the ticket design. I evolved the ticket design to include (see the sketch after this list):

  1. who was responsible for completing an activity such as analysis, development, or testing.
  2. the start and finish date for each activity or column (e.g., analysis, development, etc.)
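
A sketch of the updated layout, expressed as a data structure, is shown below. The names, dates, and activities are illustrative, not an actual Division ticket.

    # Sketch of the updated ticket layout: each activity (column) records who
    # completed it and when it started and finished. Names, dates, and the
    # activity list are illustrative.
    ticket = {
        "id": "STORY-42",
        "title": "Example user story",
        "activities": {
            "analysis":    {"owner": "A. Analyst",   "start": "2015-04-06", "finish": "2015-04-08"},
            "development": {"owner": "B. Developer", "start": "2015-04-09", "finish": "2015-04-17"},
            "testing":     {"owner": "C. Tester",    "start": "2015-04-20", "finish": "2015-04-23"},
        },
    }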

[Figure: Original Ticket Design]

[Figure: Updated Ticket Design]

Once the new ticket layout was deployed, I observed that short “after” meetings were taking place right after the 10:00 a.m. stand-up. The meetings tended to focus on quickly discussing issues with the individuals who had participated in the development and testing of a story. What surprised me was the amount of excitement that this new design generated among the three teams. Each team member could point to an individual ticket they had participated in completing and talk about their achievement. This simple change to the layout made a noticeable and positive difference to morale. The excitement around seeing work being completed (e.g., tickets deemed release ready) and around the achievements made by individuals was palpable during the daily stand-up and retrospectives as the Quarry project, and other projects, made progress.

4.2 Using Batches and Iterations to Boost Focus

During the first few weeks after I arrived at my job, I focused on getting the team’s Scrum implementation to a point where it was meeting both the needs of the team and the Technical Team Manager. I facilitated a backlog grooming meeting, which involved working with the developers and the Technical Team manager to reorganize, prioritize, and refine the list of features and stories associated with the Quarry project.

I worked with the team to discuss how many stories they could complete in three weeks, or fifteen business days, which was their desired timeframe for a Sprint. We scoped that commitment of stories as a batch and pulled each story through the proto-Kanban until the batch was completed. Due to competing priorities, we included stories from one other project in our batch. The batch felt a lot like a Sprint backlog. Most of the time, the team estimated the batch size incorrectly and a portion of the original batch was carried forward to the next iteration.

While estimation was not our focus, we discovered that batching and reducing the number of stories to focus on was beneficial to the team. The three teams could concentrate and deliver stories with a better sense of control than in prior attempts at using Scrum.

4.3 Benefits

Benefits observed during this period included a clear sense of scope and commitment with regards to what stories were committed for development and delivery as a potential release. Using the proto-Kanban supported the team’s need to address a mixed set of priorities within the scope of an iteration. The team also benefited greatly by having specific point-in-time reflections via the retrospective meeting. During the meeting, team members would share their frustrations about the way they were working and discuss what could be done to improve their approach to developing and testing software.

Delivery and related risk data were also shared during this meeting. Primarily, there was a focus on cycle times within the various work areas. The goal of analyzing the cycle times was to determine where the three teams could benefit from some simple changes to our process. After a few iterations and the change in ticket design to support ownership, the team began to develop momentum as they worked collectively to deliver the Quarry project story by story, feature by feature. This process and our changes worked very well for the Quarry project. By June of 2015, the project was released into production.

4.4 Challenges

Challenges I observed during this period included:

Rework from weak requirements analysis. The team, excited to deliver the work mostly from a development and engineering perspective, lacked the necessary experience and discipline to prototype and design concepts while involving the product owner. A number of defects were discovered later during user acceptance testing, which indicated that the team needed to focus more on requirements.

Delays from broken builds. This was due to differences in developer environment configurations and server configurations. The team struggled to keep their environments in sync, mostly due to old hardware. When a build would break, the remaining two developers would assist the other developer with the build issue. When all three developers stopped work to resolve a broken build, it would take about two to three hours to resolve. By the time the build was resolved, most work would be delayed by a day.

Fatigue. While the team initially enjoyed the safety that a Sprint would provide them, they ultimately became fatigued with the batching aspects of the process. Attempting to estimate how much time a story or feature would take became a less than desirable task for the team to perform. Initially, during the first six to nine Sprints, the team was excited and ready to tackle the work ahead. Once the Quarry project was completed and new projects were brought to the team’s attention, there was less motivation to take on new challenges. This may have been due to a decrease in adrenaline once the Quarry project was released.

4.5 Performance – August 2014 through December 2015

By December of 2015, the average cycle time for all three teams as one system (going from initial queuing to ready for delivery) was about 55 days. The team was also beginning to be mindful of their work in progress limits for their respective areas of specialization (e.g., development, testing, etc.). Emphasis was given to the number of issues that would remain completed but not pulled into the next stage or not released into production. The delay associated with not pulling work into the next stage was identified as a risk.

4.6 How well did we address the five questions?

How can you help me to see into the work? By visualizing the work on the kanban board, the Manager of the Technical Team could see all work, in terms of user stories, spanning from the backlog through released (in production). This visualization supported his need to see into the work.

How long will it take for work to complete once the team starts on a story? We were able to generate a rough number of 55 days. It was very likely that this lead time was inaccurate given we did not limit the inactive state for work in progress. This effectively created an unbounded system with no real limits, which negatively impacted any predictability. The unlimited inactive or “complete” columns for work in progress were one reason I considered it a proto-Kanban.

When will a feature or project be done once the team starts? At this point, we were unable to predict when a feature or project would be done because the work in progress was not limited for work that was completed in a given area (e.g., development complete). The unlimited inactive or “complete” column violated Little’s Law and the constraints required to accurately determine when a feature or project would be done.
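
For reference, Little’s Law describes the relationship that an unbounded “complete” column undermines. For a stable system, where work enters at roughly the rate it leaves:

    Average Cycle Time = Average Work in Progress / Average Throughput

As an illustration with made-up numbers (not Division measurements), 20 stories in progress with a throughput of 2 stories per week gives an expected average cycle time of 20 / 2 = 10 weeks. When a “complete” column can grow without limit, work in progress is effectively unbounded and the relationship no longer yields a dependable forecast.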

“How can you help me to express to upper management the demands on the team and their capacity or capability to address the demand?” The visualization of the work in progress combined with the rough lead time data aided in expressing the commitments that were already in place by the Team and Division. The visualization focused the management conversation on what was already in progress. Because of the unbounded (unlimited) areas of work in progress where work was completed, the capability (e.g., how much can be delivered over a given period of time) could not be accurately measured.

“How can you get QA to talk with the developers?” The combination of involving the quality assurance team in the stand-up meetings, conducting retrospectives, and after meetings that would take place after a stand-up meeting all served to get the QA Team to engage with the developers on the Technical Team. Additionally, the change in ticket design that supported sign-off of work further encouraged collaboration, pride, and ownership between the teams.

5. A Motivation to Change

As Thanksgiving 2015 approached, the Manager of the Technical Team directed me to prepare to move the team to use all aspects of the Kanban Method as well as make use of a recent purchase of Atlassian Jira. He, too, had observed the teams’ fatigue and wanted to provide them with a way to work without the heavy planning effort the team had been performing.

The manager chose to use all aspects of the Kanban Method because of the performance gains, initial visualization benefits of using a proto-Kanban, a desire to get a better handle on cycle times, and a wish to forecast delivery for projects and features. Other motivations for moving to the Kanban Method included:

  • The manager continued to have challenges with addressing requests in a timely manner.
  • Previous measurement was reasonable, but he and I both felt it would not support forecasting due to the unbounded proto-Kanban design.
  • We knew we needed more information regarding the nature of the demand, the sources of demand, and frequency. By using all aspects of the Method, we felt this information could be captured.
  • The team would refuse to accept new work while performing a Sprint; even though they were utilizing two lanes, they would not accept new work during the 15-day Sprint.
  • Higher priority requests arriving from other agencies needed to be addressed prior to the completion of a Sprint.

6. Training for flow

I began training the teams on the Kanban Method from early December until mid-December. The goal of the training was to orient the teams to the principles, practices, and values of the Method while also giving them a hands-on experience via the getKanban game. I was asked to provide the training not just to the Technical, Governance and Security, and Quality Assurance teams, but to all of the Division’s management and staff.

6.1 Using getKanban to establish a new context

The getKanban board game is a great way to introduce the major concepts of the Kanban Method to a team. The game supports the Method’s approach to measuring and tracking progress as well as meeting and review cadences (e.g., daily planning meeting, replenishment meeting, delivery planning, and service delivery review).

One aspect of the game is revenue. Five teams composed of six to eight people from the Division, including members from the Technical, Governance and Security, and Quality Assurance teams, competed on revenue. Having government staff focus on revenue was a great experience. I was exposed not only to the teams’ ability to quickly grasp and manage risk but also to their different styles for mitigating risk. The winning team was composed mostly of individuals who supported the Department’s web presence. Their agility and quick decision-making skills regarding how to address risks within the game allowed them to surpass their competitors quickly.

6.2  Benefits

Using the getKanban game was an excellent way to expose the Division staff to concepts and management practices associated with the Kanban Method. For most of the participants in the Division, this was already well understood. However, for some newcomers, it was either a completely different way of thinking or a new way of describing a method of working and managing risk.

6.3 Challenges

One of the challenges the teams encountered when playing the game was understanding how to measure their performance. Some participants were confused either from an accounting standpoint (how much revenue) or from a graphing standpoint (how to accurately draw a cumulative flow diagram). The game also took a considerable amount of time to play (approximately three hours on average) which challenged the teams to stay focused given they were taking time away from their day-to-day work.

7.  Implementing Kanban

The training finished in late December 2015. Christmas was just around the corner and the teams were eager to get the Kanban Method in place before the employees took off for their holiday vacations. Rather than use the Systems Thinking Approach to Implementing Kanban (STATIK) developed by Mike Burrows, I decided to use The Kanban Kick-start Field Guide that was developed by Christophe Achouiantz and Johan Nordin. I chose the Field Guide because it supported the short time frame (less than two weeks) for me to learn, develop, and facilitate the implementation of the Method. I started facilitating the implementation on the Monday before Christmas. We finished on Christmas Eve.

7.1 Practice and Process – January 2016 through September 2016

Shortly after we implemented the Kanban Method and the associated system that supported the three teams, I was charged with configuring a recent purchase of Atlassian Jira to reflect our way of working. I also spent a good deal of time leading the team through the four common cadences: the replenishment meeting, the daily planning meeting, delivery planning, and the service delivery review.

7.2 Tooling to the design, not designing to the tool

Configuring Jira to operate with respect to our system’s design was challenging. We took advantage of Jira’s existing features to get the behavior we needed from the software. This included utilizing sub-tasks to act as blocks or defects, which prevented work from proceeding until those sub-tasks were resolved. We also utilized the transition screen feature to prompt or remind the team what the “done” policies were for a given area of work. We also made use of work in progress limits for work that was active and inactive.

7.3 Utilizing the Kanban Method’s cadences

Part of the Field Guide included using the Method’s cadences for a team. The cadences included the replenishment meeting, daily planning meeting, delivery planning meeting, and service delivery review. We reused existing meetings or meeting times for the cadences. Our daily stand-up became our daily planning meeting. Our focus shifted from who was working on what to risk management and the early prevention of risks. By walking the board right to left and top to bottom, we identified and addressed any pressing risks, such as items that were at risk of missing their due date and blocks or defects that were causing work to be delayed.
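
As an illustration of the kind of check this walk-through amounted to, the sketch below flags items that are blocked or close to their due date. The items, dates, and three-day threshold are assumptions for the example, not the Division’s actual rules or data.

    # Sketch: flagging items worth discussing in the daily planning meeting.
    # The items, dates, and the three-day risk threshold are illustrative.
    from datetime import date

    today = date(2016, 2, 10)
    items = [
        {"id": "STORY-7", "due": date(2016, 2, 12), "blocked": False},
        {"id": "STORY-9", "due": date(2016, 2, 29), "blocked": True},
    ]

    for item in items:
        days_left = (item["due"] - today).days
        if item["blocked"] or days_left <= 3:
            print(f"Discuss {item['id']}: blocked={item['blocked']}, {days_left} days to due date")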

We replaced our retrospective meetings with the service delivery review meeting. While the scope of the service delivery review is intended mostly to cover service performance (e.g., average cycle time, throughput, work in progress) and associated risks (patterns for blocks and defects), we also discussed changes we felt were needed to improve performance and experiments we wanted to attempt during the next two weeks. We would also hold a brief replenishment planning meeting near the end of our service delivery review because it fit well with our larger focus and awareness on performance. We could say with confidence when something would be done relative to the day we selected it. Delivery planning would take place on occasion when enough stories were completed to deliver a feature or when a bug fix was needed. That meeting was attended by the development team and the Department’s release team for the target platform (e.g., Windows or Linux), and we discussed timing, risks, and roll-back strategies.

7.4 Sizing Appropriately

One of the steps within The Kanban Kick-start Field Guide recommended setting visualization policies for work items using a T-shirt time scale. The scale represented work that would take anywhere from less than one day (XS) to greater than three weeks (XL). We often found ourselves debating how big something was before we had spent time understanding what was being requested; what we thought would be simple was actually complex or time consuming. We later implemented an Upstream Kanban, which supported the analysis and definition of options and grew our focus on analysis and requirements.
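
For illustration, such a scale might look like the list below; only the XS and XL bounds come from our policy as described above, and the intermediate break points are assumptions added for the example.

  • XS: less than one day
  • S: one to three days (assumed)
  • M: up to one week (assumed)
  • L: one to three weeks (assumed)
  • XL: more than three weeks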

7.5 Benefits

We were limiting the work in progress not only for work we were performing (e.g., development), but also for work that was completed (e.g., development done). By limiting the amount of inactive or “complete” work for a given area, we were able to establish a basis for forecasting. Using the Kanban Kick-start Field Guide provided a number of immediate benefits for me and the teams: a clear set of preparation activities to ensure we were focused as we built the Kanban system, and a very useful set of “kicks” to implement the Method’s core features.

7.6 Challenges

During our December implementation of the Method, we spent considerable time discussing and agreeing on the policies that would support the analysis, development, testing, and release of work. Many hours passed as we discussed what we thought would be good and useful. Ultimately, we generated 22 policies to be used at various points throughout the analysis, development, testing, and release of work. Weeks after we implemented the system, we noted that we were using about 18 or fewer of those policies. While we had aspirations to achieve a higher level of quality, we recognized that we needed to use process experimentation to bring a new policy into our overall set of policies. Developing the discipline to respect a policy was challenging.

Technology also presented a challenge. Prior to using Jira, we agreed and initially practiced self-selecting work using avatars and an analog (whiteboard) version of our system. Once we started using Jira, there was no obvious way to support this type of behavior.

7.7 How well did we address the five questions?

How can you help me to see into the work? By visualizing the work on the Kanban board in Jira, we were not only able to support the federal manager’s view of the work, but to also see into it via the comments in Jira.

How long will it take for work to complete once the team starts on a story? We were able to generate a reliable number of 11 days. By limiting the amount of work in progress for the entire Kanban system, we were able to reliably predict how long a story would take to complete once the team started.

When will a feature or project be done once the team starts? We were able to forecast the delivery of a project or feature, each composed of a number of stories, based on prior performance after we moved to limiting the amount of work in progress (WIP) for the whole Kanban system (i.e., limited WIP for completed work as well as work actively being performed). By referencing the past performance of the Kanban system’s stories and using Monte Carlo simulations, we could reasonably forecast when a feature or project would be done.
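
The sketch below shows the general shape of such a Monte Carlo forecast. The throughput history, remaining story count, and percentile are made-up assumptions, and this is not the Division’s actual model or tooling.

    # Sketch of a Monte Carlo "when will these stories be done?" forecast.
    # The weekly throughput samples and remaining story count are made up;
    # a real forecast would draw on the Kanban system's recorded history.
    import random

    weekly_throughput_history = [3, 5, 2, 4, 6, 3, 4, 5]  # stories finished per week
    remaining_stories = 20
    trials = 10_000

    weeks_needed = []
    for _ in range(trials):
        done, weeks = 0, 0
        while done < remaining_stories:
            done += random.choice(weekly_throughput_history)
            weeks += 1
        weeks_needed.append(weeks)

    weeks_needed.sort()
    p85 = weeks_needed[int(trials * 0.85)]
    print(f"85th percentile: {p85} weeks to complete {remaining_stories} stories")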

“How can you help me to express to upper management the demands on the team and their capacity or capability to address the demand?” By utilizing the reporting features in Jira as well as the ActionableAgile Analytics tool, we could accurately describe the demand the team was facing and their capability in terms of performance, predictability, and quality.

“How can you get QA to talk with the developers?” By this time, there were discussions of having the QA team write unit tests as part of the Upstream Kanban system. We grew beyond the challenge of collaboration and moved into the opportunity to improve quality early on in the process. We started to improve our capabilities to analyze requirements.

7.8 Performance – January 2016 through September 2016

The team managed to reduce the average cycle time from 55 days to 11 days by August of 2016. The reduction in average cycle time was due to respecting the WIP limits for the whole system. We emphasized collaboration on how to work as a whole team by respecting the work in progress limits not only for work in progress in a given area, but also for work completed in a given area. By respecting this overall system limit on work in progress, the team was able to decrease the amount of work in progress while also reducing the cycle time for the whole system.

8. What I Learned

Leadership matters. While the Kanban Method was a useful approach for establishing and evolving the teams’ way of working, leadership is what powered it. One of the Method’s change management principles, encouraging acts of leadership at all levels, really shone through during this experience. I witnessed a good amount of leadership from all team members, especially from the federal manager, once everyone agreed to respect the work in progress limits. By accepting those limits as real, the Technical Team and the federal manager began leading trade-off discussions relative to the teams’ capabilities and customer needs and goals.

Minimize the size. Early on in my work with this Division, I noticed how the Technical Team was challenged to estimate the effort needed for a given story. They also struggled to accurately estimate the size of work as described in the Kanban Kick-start Field Guide. They chose to avoid story points and the federal manager also wanted something that could reliably forecast capability (e.g., ten stories in eight weeks). Near the end of this experience, I worked with the federal manager to “atomize” the stories and requests in such a way as to express them in their smallest possible form. We performed this work using an Upstream Kanban system. By doing this work, the team had access to stories that were clear in terms of requirements, risk, and priority.

Follow the Kanban Method’s principles and practices. The Method’s principles and general practices provided us with a set of goals on which we could grow, mature, and evolve our way of working. We evolved while balancing our demand against our capability. The Method served us well given our need to support more than one project or system at a time. Lastly, the Method’s principles and practices became my go-to when my coaching work became challenging.

9. Acknowledgements

I would like to thank the following people for their leadership and accessibility in making this paper happen: David Anderson, Chairman, LeanKanban, Inc.; Emily Bell, Ph.D.; Nanette Brown, Senior Member, Software Engineering Institute; Karen Palmer, QA Lead, TriTech Enterprise Systems, Inc.; Michael Pulsifer, Lead IT Specialist, U.S. Department of Labor; Daniel S. Vacanti, CEO ActionableAgile; and Randolph Williams, CEO and President, TriTech Enterprise Systems, Inc.

I could not have done this without your support. Thank you!

REFERENCES

Department of Labor, https://www.dol.gov/opa/aboutdol/mission.htm

Copyright 2017 is held by the author.

About the Author

Joey Spooner is an Accredited Kanban Trainer and Kanban Coaching Professional at TriTech Enterprise Systems, Inc. In a 20-year career spanning the communications, insurance, higher education, non-profit, and government sectors, Joey has been a software developer, IT director, strategic analyst, and technical expert. Joey holds a Bachelor’s degree in Business Administration.