About this Publication
As a first-time manager, I took over the leadership of a team whose performance history was overshadowed by delayed deliveries and whose significant lack of trust with internal stakeholders had led to an overall dysfunctional relationship. These are the processes – additional targeted meetings, moving team members around, and improving the showcase – that worked for our team to start to rebuild that trust, and the lessons I learned along the way.
1. INTRODUCTION
The Content Automation Translation System (CATS) is a nearly decade-long project that is replacing software on legacy mainframe systems with a new C#/SQL software system. The CATS software aggregates oil and gas production data from state sources, loads the data against our historical data, identifies anomalies for analyst review, and pushes the data downstream to other IHS Markit products. Each CATS release is a replacement for one state’s oil, gas, or oil and gas production data program on the mainframe. To date, approximately a dozen states or portions of states have been released.
The project has experienced several successes and failures in its ten-plus-year history – with more perceived failures than successes in the recent past. In the past year and a half, we have focused on fixing the relationship with our stakeholders to enhance trust between the team and our partners. As the team’s manager, I had to determine how to identify low trust, find tools I could use to increase transparency, and learn how to talk to my customers about potential failures before they happen. I often jokingly refer to myself as a professional CATS herder – getting the entire team of developers, QA, business users, product owners, stakeholders, and management on the same page took a lot of patience, time, and the willingness to continually try new things.
2. Delayed Deliveries and Disgruntled Customers
2.1 Delayed Delivery
I came on board the CATS team in the year leading up to the release of Louisiana (LA). This delivery had been delayed several times due to a combination of internal and external factors: the original team had been pulled off and then returned to the project, two new developers and a new Scrum Master were brought on board, and the project required a full re-issue of our client data – including coordination with other teams. This led us into the 80/20 trap: correcting less than 20% of the data took up over 80% of the time. LA was released in November 2015, with point releases in January and May 2016, while the team simultaneously spun up the next state, Texas (TX). This contributed to a sense that the team was spread too thin and set up for failure. While the point releases came quickly on the heels of the main release, the perception remained that the project did not deliver LA on time.
2.2 Unhappy Customers
This failure to deliver “on time” meant that a significant portion of our stakeholders was unhappy with the team and the project as a whole. The CATS team answers to a diverse group of stakeholders, which includes the analysts who work within the program to process the data, the operations leaders who work with our customers to determine their needs, and the product management team that decides the overall roadmap for IHS Markit’s data platforms. Each of these customers feels (in a truly normal way) that their priorities are the highest overall priorities and need to be addressed, and they often do not consider the impact of their requests on the other stakeholders.
Because of the delayed LA delivery, the team had a reputation for not delivering what our customers wanted – despite the fact that our analysts loved the new system compared to the previous mainframe system and were able to complete their work and provide data downstream to our customers on a faster cadence.
Due to competitive pressures, the original ideal release date for TX was October 2016 – an unrealistic and untenable due date given the available development resources and the technical challenges of the project. No customer likes to hear “no”, and ours were no exception. This fed an ongoing lack of trust between the developers and our stakeholders. The team felt underappreciated – that the scope of the work previously delivered and the satisfaction of our end users were not valued by our stakeholders. The delays on LA overshadowed a previous period of high-throughput delivery in which the team completed several smaller states in a very short period of time, while also running an enhancement track for states already in production.
The hectic pace of development also hindered the team’s ability to address technical debt, some of which had to be paid down before the TX release could move forward. This delay in delivery was poorly understood by our stakeholders and could have been communicated better by me and the management team.
2.3 Identifying Low Trust
The hectic pace, slipped deadlines, and acknowledged communication failures led to an obvious lack of trust from our stakeholders, and our interactions with them became hostile and combative on the whole. The team heard directly from our stakeholders that they were concerned about our ability to deliver, that the delivery was already late, and that our competitors (“two guys in a garage”) were way ahead of us. We frequently heard “it is just an <X> problem” where <X> was some small aspect of what our product does. These outbursts came in the team’s end-of-iteration showcases, in responses to my status emails, and from their surrogates – the subject matter experts we worked with and our frontline users.
Our stakeholders questioned everything the team did – fixating on the team’s overall velocity and any fluctuations in it, or fixating on individual team members’ tasks and capacity. They also singled out one or two high-performing team members and commented on how the rest of the team wasn’t measuring up to their standards.
The team wasn’t immune from trust issues either. With a small cadre of strong, experienced technical leaders whom the team looked to as unofficial leaders, the team tended to defer to the loudest voices. I had to win these leaders over individually before I could move forward – one way I accomplished this was by splitting up the most vocal members of the group for a time to lessen their ability to triangulate and influence.
Lastly, the CATS team no longer spoke up in the retrospective, even when the format was changed or they were prodded individually. When I asked why in one-on-one conversations, the response was that they had reached a level of apathy where they did not believe that speaking up would lead to any sort of change or betterment of their situation. I frequently heard “it doesn’t matter, they don’t care what we have to say, so why speak up anymore?” and “I don’t want to be the nail standing up that gets hammered.” It was clear that a change needed to be made to bring the team and the stakeholders back on the same side.
3. Trying Transparency to Foster Higher Trust
To determine what changes we needed to make, we had to honestly evaluate the current status of the team and decide on next steps. We began by stabilizing the team as it was at that point in time and then began to tweak our process in ways that seemed likely to lead to good results – adding regular meetings, improving the value of the showcase, and sharing ownership to improve trust.
3.1 Stabilize the Team
The CATS team had been through a significant amount of upheaval over the course of the TX project:
- Our team’s long-term manager moved to the product owner role and I took on the manager role in his place.
- The team’s next level manager left amicably during a reorganization a few months later and our chain of command shifted.
- As a first time manager, it took me a while to learn to relate to the team in that role, and to my former manager, now product owner, as a peer.
- Our QA lead took on the scrum master role, leading to some adjustments in her team’s workload.
- The division was simultaneously working through a change from Operations serving as product management to having an official product management structure in place.
A majority of the CATS team had been together since the inception of the project and was understandably cynical about the latest change in management and priorities, based on their experiences with four or five prior senior management changes over the previous ten years.
To address this cynicism and lack of trust, development and quality assurance, operations, product management, and my management chain met in groups by focus area, and also in mixed groups, to talk about the state of the project and everyone’s pain points. After this, we met as a whole group to prioritize the pain points and determine what would be addressed immediately, what in the near term, and what could wait for the longer term. This gave the team – and our stakeholders – a safe space to express their concerns while still pushing the project forward and doing our best to meet shifting priorities.
The most significant immediate term change was that everyone from the stakeholders, to our product owner, to the team agreed that we would no longer talk about a potential or expected release date – the development and QA team would own the release date based on the scope of the work, and our product owner would work with the operations and product management to control that scope and set the team up for success. As our product owner had been the team manager for the majority of the delayed LA release, he was in immediate agreement that the team needed to own the date and product management needed to own the scope.
This took the focus off “when will you deploy” and moved it to “what will we deploy.” Additionally, it allowed the team to move away from the sense that we had already failed because we had not met the initial requested deployment date and were getting later all the time. It also gave the team time to determine what unknowns still lay within the scope and to add scope if needed without “slipping” or “missing” a committed date. The key to making this work, though, was a set of changes we implemented to increase transparency.
3.2 Hold Regular Stakeholder Meetings
One of the highest priorities for the management team after the pain point sessions was developing a regular cadence of meetings with our stakeholders. During the LA release, and in the early months of the TX release, the team only met with our stakeholders on a quarterly basis at best. Without ongoing meetings where the team and the stakeholders could communicate about the project, there were a lot of rumors and innuendo about project status flying about. Additionally, the team felt that the stakeholders were not invested in the project due to a lack of feedback.
We implemented the following series of meetings or touchpoints to address these issues and increase transparency.
3.3 Make the Showcase More Valuable
An early push for visibility and transparency was to ensure that all of the work done in an iteration was reflected in the showcase. The team has iterated through a couple of versions of this process. In the earliest version, we split the development team into two or three smaller teams, each with an area of focus, and listed the user stories completed each iteration by team and focus. When the team decided that the functional split was not serving their needs, we shifted to highlighting each completed user story individually.
An early iteration of that process included a slide per user story, created by the developer, which highlighted the work they had done, including any applicable screenshots and code snippets, with a brief description of the business value of the user story provided by me or the product owner. The team eventually decided that creating a slide per user story took more time away from development than they were willing to spend leading up to the showcase. At this point in time, the product owner creates an overall slide or slides highlighting user stories completed, any user stories with particular business value, and a count of defects cleared in the iteration.
3.4 Share Ownership to Improve Trust
The team was working hard, as their reluctance to take time away from development to write showcase slides indicated. However, the customer had the impression that some teammates were more valuable than others, which undermined trust. To mitigate this, we looked for ways to use the showcases to create a sense of shared ownership and reduce the perception that some members of the team are highly competent contributors (rock stars) while others are less invested and accomplish less in any given iteration (slackers). This has been an ongoing issue for the team – because the team is heavily siloed, our end users and business partners have expressed a perception that some members of the team produce at a higher or more desirable rate. One way we have tried to counter this is by decreasing our reliance on the capacity tracking feature of our agile tracking tool – by not providing time estimates for tasks, it is harder to point to any given developer and ask why they are not doing more. Additionally, each completed user story listed in the showcase includes the name of the developer(s) who worked on it, to highlight individual contributions.
Another significant way we have tried to combat this perception is by ensuring that multiple members of the team present each showcase. We try to have a user story from each area of the application (code, SQL, database structure/data loading, etcetera) presented each iteration. Additionally, I track who is presenting at each showcase and work with the product owner to select user stories to showcase that include all developers and ensure that everyone is presenting at least once every third showcase (approximately six weeks).
By including all developers’ contributions as equally as we could, we increased the sense of trust and were better able to showcase the team’s passion for the project and the job that they were doing. Additionally, we have moved from the Scrum Master or development manager writing and presenting the showcase deck to the product owner writing and presenting it. This allows the product owner to drive what he wants to highlight for the business about the project and provides a better experience for our stakeholders, as he can better explain the value of what the team is developing. Having the product owner as the face of the showcase also signals that product management and development are working hand in hand towards shared success.
3.5 Focus on Small, Fast, and Consistent Wins
Not only did the customers need to view the team in a new light, the team needed to think in a new way about the product, our relationship with our stakeholders and customers, and our relationship with delivery. The team needed a fast “win” or endorphin boost that could come from completing a recognizable and usable chunk of software on a deadline agreed upon and committed to by all parties.
The team had been using a “milestone” approach, spending around six iterations working on a chunk of functionality, but what was really being accomplished had become clouded and unclear. This was reframed as “releases” – while the actual product cannot go to production until all functionality is complete, the product was in a good enough state at the beginning of 2017 that testing was occurring, and some parallel testing and user training could begin. Four potential “releases” of thematically paired chunks of functionality were identified, and the first release was planned and committed to by the team.
The goal with our first release was to immediately get the team a win. The release was pared down to a set of functionality that would be useful to the end users, provide the ability to start training, and included few or no items of unclear or unknown complexity. This release overlapped with the second release because the team could reasonably be split in two without jeopardizing on-time completion. The team worked with the product owner to handle unknowns, immediate issues, and potential changes in scope to ensure that the release completed its objectives on time.
Completion of the first release – and, two weeks later, the overlapping second release – gave the team a solid footing of delivering on the cadence they committed to and allowed us to show our stakeholders and customers that we could be trusted to deliver testable software that could be shown to our end users, and that we could start training on, well before the full completion of the project. Additionally, this allowed us to show our subject matter experts working versions of new functionality and remove some of the potential issues that could arise from miscommunication or confusion around their requirements.
The team’s releases are never longer than three months or six iterations and are scope boxed to an anticipated velocity of between twenty-five and thirty points per iteration. This average velocity was established by tracking the team’s velocity over the better part of a year to determine what effect vacations, production support issues, etcetera had and what a realistic velocity was. The team generally exceeds the total number of story points committed to the release – anyone not working on release-specific tasks works on stories from the general backlog that have been groomed for a future release. When it comes to maintaining trust, our ongoing philosophy is that it is better for transparency to promise a reasonable amount and try to over deliver than to overload the team and not deliver on time.
3.6 Reframe Failures and Missed Deadlines
As part of the philosophy of under promise and over deliver, it is critical that any potential issues and uncertainty are raised early and often. With a product of significant size and a feature set that contains many new items that must be added, it is unreasonable to expect that there will not be any potential issues or delays. These issues must be raised before deadlines are missed, or trust will be damaged.
The CATS application is a replacement for an existing legacy mainframe application and requires a seamless transition of data from the existing application to the new application when it goes live. While a process had been in place for other states, it did not work for TX and required a complete rewrite. As this is a required component, the additional scope and time delay had to be accepted. One way we were able to get the stakeholders on board was to point out that this change would move all parts of the process under the team’s control and would allow a more frequent turnaround of parallel test data. By giving the stakeholders good reason for the delay and an idea of what it gained the team, we were granted the flexibility to do what we needed.
Between the delay in the initial load and complex new features and changes to existing features to accommodate the uniqueness of the TX data, there was significant uncertainty around the scope of the project and estimated completion date. To address the complexity of features, the team has worked to get these features in front of our end users through demos, training, and hands-on testing as early and often as possible to ensure that we are in agreement. This has led to some scope change as our end users have realized that what they originally thought was the design that met their needs had to be adjusted for new information and new workflows. This did require a mini-release of a single iteration to have multiple meetings to scope and prioritize these items.
Addressing the uncertainty around performance and scope has been a trickier proposition. Without the initial load, it is difficult to get a full understanding of the potential performance of the application. The team has worked to get a full end-to-end test as early in the project timeline as possible, and will not plan any additional releases until this scope is known.
There have been releases where items have had to be pushed to another release or swapped for a similar-scope item due to changes in information, lack of subject matter experts, or other issues. We have continued to maintain trust with our stakeholders by informing them that an issue exists as soon as we identify a deadline that may be missed, and by presenting, at the same time, our plan to mitigate or address the issue.
4. Keys to Our Success
4.1 Assign the Right People to the Right Roles
It is not enough to have the right data and the right plans and present them to your stakeholders – you must also have the right people presenting that plan. By shifting the right people into the right roles, we have increased trust and transparency.
For example – due to his strong knowledge of the team and their capabilities and his strong domain knowledge of the product, our product owner has been able to clearly elucidate what the team does and how that relates to our end product. Additionally, due to my shorter tenure with the team, I do not have the same weight of accumulated negative interactions and am theoretically better able to separate process from people. These are some areas you may want to consider when evaluating if someone is right for a role.
Figure 1. Factors to Assess for Role Suitability
4.2 Tear Down Your Silos
As part of the attempt to shift the perception of rock stars and slackers, we have encouraged developers to branch out into new areas – both by setting up functional teams with a lead developer doing the training and through sheer necessity. It is as important to select your lead developers and their teams by the criteria laid out above as it is to select managers and product owners. An early iteration of the teaming concept had a team with no clear leader – this resulted in that team not delivering at nearly the same cadence as the other teams and an increased need for oversight by myself and the product manager.
Once we had the right people on the right teams, the entire team was able to move forward despite losing a key lead developer for six months during the last year: other developers rallied around and pitched in, had the support necessary to do so, and ensured the feature was completed on time and to specification. This gave them more confidence in that code area and a chance to shine. More recently, the shift to bringing the initial load functionality in-house meant that the loss of another developer with specialized knowledge for six weeks due to injury could be worked around by implementing new processes.
This work to cross-train the team is ongoing – with different developers having different skill sets, technology histories, and levels of comfort across the back end and front end of the application, undoing the silos takes a larger time commitment than it might on other teams. Significant progress has been made in at least documenting the processes and the decisions behind them, so that the team has this shared knowledge in a single location going forward.
5. Where are we now?
5.1 Nothing is Rebuilt Overnight
While I would love to say that everything is perfect now and there are no remaining issues, that will not be possible until at least the TX release later this year. Our stakeholders, understandably, are still looking for a firm release date, and we have had to manage expectations on when the team might be able to provide that date. The team itself still needs to come to an agreement on what the date is and to commit to it. The level of trust is greater than it was before, and it will grow further as the team continues to deliver on a regular cadence. However, the team is very aware that a missed release deadline could have serious negative consequences for our relationships.
5.2 We Are All in this Together
Trust is a two-way street. In order to have an effective working relationship with your stakeholders, the transparency must flow in both directions. Not only does the team need to keep the stakeholders involved in major decisions, such as releases, but the stakeholders need to provide consistent feedback to the team. The CATS team strongly prefers to hear this feedback directly from the stakeholders – it is not enough for me to say “the stakeholders are happy”; the stakeholders need to be speaking up in the showcases or providing direct feedback via email. If the transparency flows in only one direction, the team will feel scrutinized and still undervalued or untrusted.
Lastly, getting the senior members of all of the involved teams on board has been invaluable. Not only do we work to ensure that the development team members with the most expertise and seniority are brought into the fold early on decisions and encouraged to take those decisions on as their own, we also work to ensure that the right level of decision maker is at every meeting. By including voices from the top of all of the involved hierarchies, we can ensure that there will be no surprises for senior leadership and that we can honestly say that our plans are supported from the top down.
6. Next Steps
We continue to focus on both near-term iterative improvements and long-term considerations as we work to get the TX release out the door this year, to at least maintain, if not improve, our current trust level. We are trying to stay true to the spirit of Agile as we do – trying new things, failing fast where necessary, and focusing more on people than processes. In conjunction with the steps we have outlined here, I have begun to work with my management on a succession plan for the team – how to move forward and bring on new team members, and how to prepare for the inevitable retirement of others. Each new release or new state gives the team an opportunity to evaluate and reflect on how we will knock down our silos and whether we are assigning the right people to the right roles. If we can continue to focus on maintaining our commitments and our personal relationships, the rest will fall into place.
7. Acknowledgements
I want to thank Michael Keeling for shepherding me through this experience report – it made the seemingly insurmountable task of editing myself down and letting everyone into my head much easier. Thanks also to Chris Edwards and Sean Dunn for encouraging me to submit to Agile 2017 in the first place, and Seshadri Veeraraghavan for his insights and beta reading. I also could not have had this experience without my boss Jeremy Leavitt, and his boss Andrew Tuttle who have had my back throughout this entire time.
Copyright 2017 is held by Meg Ward.