So, we started with what felt like puzzles. The team would sit around the projector with basic inputs and domain concepts displayed on the whiteboard and someone would moderate. “OK, if we nom 70,000 dekatherms from Fayetteville Express down to Moulton, and we run the capacity algorithm, based on the operational capacities of our downstream locations, what do we expect to be cut at Shorewood?” These full-team exercises were important for a few reasons. First, they allowed us to find any inconsistencies in our model and narrow down our language. They were a haven for knowledge crunching. With everyone in the room together, product owners included, and Brazil on video, inconsistencies in language and meaning bubbled up quite quickly. It was through these sessions that we refined the meaning of nomination that we talked about earlier.
Early in the project, we were having these sessions daily, sometimes multiple sessions a day. They would generally last from 30 minutes to an hour, or however long it took to solidify the team’s understanding of a new concept or scenario. The product owners really seemed to get a kick out of this, which is the second important point. These sessions helped create the personal and professional relationships between the technology team and the product owners that we believe were an essential part of our success. We were a team of consultants brought in to work with a team of product owners who had never worked this closely with a development team or taken such an active role in building software before. As operators of natural gas pipelines, they were generally far removed from the software development lifecycle in their day-to-day work. We think at the outset they were a bit standoffish due to all the new technical jargon being thrown around. And we, too, were intimidated by the complexity of their work. These whiteboard sessions gave the product owners a chance to flex their professional intellectual muscles and to show us what they knew. It got them engaged in our work and put in place the building blocks upon which we would implement our version of ATDD.
The final significant benefit of these sessions was that they acted as an incognito introduction to BDD, to acceptance tests, to Gherkin, and really to testing in general. We didn’t bring everyone into a room and start showing slides about SpecFlow [SpecFlow]. We didn’t go through training on Gherkin syntax or step definitions or table formatting. We didn’t talk about any of this up front. We simply started writing our scenarios on the whiteboard. Photo A1 (See PDF) is from one of our sessions.
You’ll observe the Given, When, Then steps, the variable inputs, and the table format for executing multiple examples. The technology team was well aware that this would eventually turn into an executable acceptance test, but the product team saw it simply as an intuitive way to run through our what-if pipeline scenarios and to continue driving out our collective knowledge of the domain. This was important, because when the product owners were originally asked to get involved in writing acceptance tests, they pushed back. They were still under the impression that test writing was a purely technical task and that they didn’t have the time or expertise to learn the tools necessary for doing so. It took some convincing on our part that this was something they could do, and that through these tests we could give them a way to really influence the team and define what would be built.
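Photo A1 isn’t reproduced here, but a whiteboard scenario of that shape looked roughly like the sketch below. This is a hypothetical reconstruction: the step wording, the locations (borrowed from the what-if question above), the quantities, and the simple cut arithmetic are illustrative rather than the project’s actual steps or business rules.

    # Hypothetical reconstruction of a whiteboard scenario; wording, numbers,
    # and cut arithmetic are illustrative only.
    Scenario Outline: Cuts at a constrained downstream location
      Given a nomination of <nominated> dekatherms from Fayetteville Express to Moulton
      And the operational capacity at Shorewood is <capacity> dekatherms
      When the capacity algorithm is run
      Then the volume cut at Shorewood should be <cut> dekatherms

      Examples:
        | nominated | capacity | cut   |
        | 70000     | 50000    | 20000 |
        | 70000     | 70000    | 0     |

Nothing about this format required a tool; at that stage it was literally marker on a whiteboard, which is exactly why the later jump to executable SpecFlow scenarios felt so small to the product owners.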
Eventually, they agreed to pair on one of our more complex capacity tests and were surprised at how similar the process was to our whiteboard sessions. We explained that all tests could be implemented this way and that if they needed new steps to define new functionality, they simply needed to create the English statement and we would implement the functionality. The lesson we learned here is that the introduction to a new approach, new idea, even a new tool, doesn’t have to be formal and top-heavy. The team whiteboard sessions provided a ton of value, including ramping product owners up on a tool they didn’t even know they’d be using. And all that was needed was a whiteboard and a dry-erase marker.
In the end, those scenarios became executable tests running with every new change to the system. And this was the moment that, looking back, we started really doing acceptance test DRIVEN development, roughly 3 months from the inception of the project. Now, it wasn’t that we weren’t testing up until then. It simply took that long for the dust to settle, to the point where we were comfortable with our base knowledge of the system and its early implementation was stable enough to really build upon. It was also at this point that we made a concerted effort to get the product owners involved up front, so that BAs and testers could check in failing specs before the development process began.
The process moving forward from this point was fairly straightforward. The business would prioritize new features to be built in each two-week iteration with input from the development team. The team would perform analysis on these new stories, which included input from dev, test, and analysis roles, in conjunction with product owners. Stories would be presented during our iteration planning meetings so that the entire team could help knowledge crunch and drive out any inconsistencies. Once the analysis was complete, a BA or a tester would work with the product owner to drive out SpecFlow tests. SpecFlow, sometimes referred to as “Cucumber for .NET,” is a BDD framework that seeks to bridge the communication gap between domain experts and engineers by binding business-readable tests to their underlying implementations.
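To make “binding” concrete: a step written in plain English is matched, typically by a regular expression, to an ordinary method that the development team implements. The sketch below is a minimal illustration with invented names; PipelineModel, its methods, and the cut-whatever-exceeds-capacity rule are placeholders for our real domain model and scheduling rules, not a description of them.

    using System;
    using System.Collections.Generic;
    using TechTalk.SpecFlow;

    // Sketch only: step wording, type names, and the cut arithmetic are
    // invented for illustration.
    [Binding]
    public class NominationSteps
    {
        private readonly PipelineModel _pipeline = new PipelineModel();

        [Given(@"a nomination of (.*) dekatherms from (.*) to (.*)")]
        public void GivenANomination(decimal quantity, string receiptPoint, string deliveryPoint)
            => _pipeline.AddNomination(quantity, receiptPoint, deliveryPoint);

        [Given(@"the operational capacity at (.*) is (.*) dekatherms")]
        public void GivenOperationalCapacity(string location, decimal capacity)
            => _pipeline.SetOperationalCapacity(location, capacity);

        [When(@"the capacity algorithm is run")]
        public void WhenTheCapacityAlgorithmIsRun()
            => _pipeline.RunCapacityAlgorithm();

        [Then(@"the volume cut at (.*) should be (.*) dekatherms")]
        public void ThenTheVolumeCutAtShouldBe(string location, decimal expectedCut)
        {
            // The real suite would use an NUnit/xUnit assertion here.
            if (_pipeline.CutAt(location) != expectedCut)
                throw new Exception($"Expected a cut of {expectedCut} dekatherms at {location}.");
        }
    }

    // Toy stand-in for the real domain model: it cuts whatever exceeds capacity,
    // a deliberate oversimplification of actual pipeline scheduling.
    public class PipelineModel
    {
        private decimal _totalNominated;
        private readonly Dictionary<string, decimal> _capacities = new Dictionary<string, decimal>();

        public void AddNomination(decimal quantity, string from, string to) => _totalNominated += quantity;
        public void SetOperationalCapacity(string location, decimal capacity) => _capacities[location] = capacity;
        public void RunCapacityAlgorithm() { /* in this sketch the cut is computed on demand in CutAt */ }
        public decimal CutAt(string location)
            => _capacities.TryGetValue(location, out var capacity) ? Math.Max(0m, _totalNominated - capacity) : 0m;
    }

The part the product owners cared about was never this file. New English statements simply meant a new method for the development team to fill in, which is what made “you write the statement, we implement it” a workable division of labor.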
Now, for anyone who doesn’t run the build locally, creating new SpecFlow specs could seem impossible. These tests live in the codebase. We needed our existing step definitions and existing tests to be accessible to everyone who would be writing new ones, and this included product owners and BAs who weren’t running the build on their machines. We got the level of accessibility we needed from a tool called Pickles [Pickles]. Pickles is a .NET tool that can be used in conjunction with SpecFlow to create living documentation in a very accessible HTML format. We added Pickles to our project and had it generate an artifact after each commit build in TeamCity. We then simply showed the business analysts and product owners how to access that TeamCity artifact. The artifact was a pretty, searchable HTML document that showed every running test for that particular commit, so it was always up to date as long as you grabbed the latest one. Example A2 (See PDF) shows some Pickles output that would typically be used to compose new tests.
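Wiring this up amounted to one extra build step: run the Pickles command-line runner over the checked-in feature files and publish the output directory as a TeamCity artifact. The invocation below is approximate; the option names are from memory of the Pickles runner and may differ by version, and the paths are placeholders for wherever the feature files and generated documentation live.

    Pickles.exe --feature-directory=.\Project.Specs --output-directory=.\LivingDocumentation --documentation-format=dhtml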
A business analyst or product owner could access the artifact and click through the navigation links on the left to see what tests were running under each area of functionality. When composing new tests, they’d simply copy and paste combinations of existing steps or new English statements into an Excel file, and sometimes even into a formatted SpecFlow file. These would then either be checked into the codebase on the spot and ignored until development began, or they’d be attached to the story cards to be checked in later. The ramp-up time necessary to get everyone on the same page with this was short. We just gave them the path to the artifact on the continuous integration server. Understanding what steps already existed and when to request new ones wasn’t quite as simple and took more time.
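To make the “checked in and ignored” part concrete, a newly composed scenario might have looked roughly like this illustrative sketch: existing steps reused word for word, one brand-new English statement with no binding yet, and an @ignore tag on the whole scenario so the build stayed green until development picked the story up.

    @ignore
    Scenario: Report the cut applied to an over-nominated path
      # The first three steps already exist and were copied from the Pickles documentation.
      Given a nomination of 70000 dekatherms from Fayetteville Express to Moulton
      And the operational capacity at Shorewood is 50000 dekatherms
      When the capacity algorithm is run
      # New English statement; its step definition gets written when development begins.
      Then the scheduling report should show a cut of 20000 dekatherms at Shorewood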
At this point we were in approximately the 4th month of the project. We had built up a base set of steps to describe the system’s most basic functionality, which the product owners didn’t have as much of a hand in creating. It took a little time for everyone to gain a clear understanding of what steps already existed. However, moving into the 7th and 8th months of the project, as new features were built out, new steps would be delivered as part of analysis sessions between the BA and product owner. This is really what allowed the product owners to drive out new functionality. See PDF for a timeline of the major events discussed so far.
The product owners were energized by knowing that they had a clear role in creating new functionality and that, in the form of these tests, they had a contract by which they could hold us accountable. When a new feature went up for approval, they would first look to see that the tests implemented up front were unignored and passing. This approach also freed up testers to do more exploratory testing for less obvious bugs, because they didn’t need to spend time writing test cases or doing regression on the fundamental requirements for each feature. The tests themselves were the documentation for each feature, and everyone had access to them.