It’s a little strange that there is such a thing as an Agile Testing tribe, since many people who were interested in testing when Agile burst upon the scene were concerned that the agenda was to “get rid of all the testers”.
Yet the signs are fairly strong. On the topic of Agile Testing you can find a book by Lisa Crispin and Janet Gregory, a mailing list which has remained quite active for over ten years (its “golden age” in terms of traffic appears to have been 2005-2009, but it’s far from dormant), an annual conference (since 2009) and a few more specialized events such as the Functional Testing Tools workshop.
To understand Agile testing, it’s important to understand a little bit about testers, and testing as a distinct occupation. I cannot do justice to this topic in a short post, but I intend to come back to it.
There is a just-so story which goes roughly as follows. Once upon a time, software development was without form (and full of void pointers). But the Spirit of Royce moved upon the face of software engineering, and after the requisite number of days we were handed down the sacred quinquinity of phases – analysis, design, development, testing, production. (This is also rendered in some apocrypha as a sextinity or a septinity, but we don’t heed false prophets, of course.)
There our story becomes a “Tower of Babel” mashup; from each of the phases sprang forth one distinct professional tribe – the analysts, the architects, the coders, the testers, and the ops people (dwelling deep in the basements where the servers were). Each tribe spoke its own language and did not get along so well with the others; thus was the power of software limited – until Agile came along and returned us to the golden age of “Whole Team”.
I have spoken out elsewhere about how mythical the various “origin stories” in software engineering turn out to be. Like most origin myths, they are built around a tiny seed of fact, but are dangerously misleading. The truth is both more complicated and more interesting.
Testing as a distinct discipline arguably got its greatest early support from Glenford Myers’ “The Art of Software Testing”. Myers presented a list of 16 “axioms” of software testing (aptly dissected and brought up to date in an article by Erik Petersen).
As Petersen notes, many of the axioms didn’t quite catch on in actual practice; but one did, apparently with pervasive effect on the industry: the fourth axiom, which proclaimed “It is impossible to test your own program” – and thus advocated a strict separation between the activities of testing and of programming.
Cem Kaner puts it as follows:
Many years ago, the software development community formed a model for the software testing effort. As I interacted with it from 1980 onward, the model included several “best practices” and other shared beliefs about the nature of testing. The testing community has developed a culture around these shared beliefs.
Some of these “shared beliefs” reinforced other prejudices in the software engineering community, such as the notion of separate “phases” for the various activities that contribute to software development. Others were reinforced by economic trends: separate testing groups presented an opportunity for “deskilling” part of the job that had previously been the responsibility of software developers, making testers a cheaper commodity. (Never mind that this flew in the face of the first of Myers’ axioms, which was “Assign your most creative programmers to testing”!)
This conjunction of factors leading to the creation of testing as a distinct specialty is a much more interesting story than the just-so story we started with, “there are testers because there is a testing phase”. It is nicely compatible with how some modern sociologists have analyzed the development of professional disciplines; I’m thinking mostly of Andrew Abbott, author of “The System of Professions”, an absolutely fascinating book that helped me make sense of many things within the software development industry. In a nutshell, Abbott explains the emergence of professions (and specialties within a profession, to some extent) as “turf wars”, battles of jurisdiction over particular kinds of work. As with other battles, what’s of interest is the ever-shifting pattern of alliances and oppositions. (I’m not aware of any detailed work on the sociology of the software professions specifically, but some research has been done for instance on that of project management, or on the “sciences of design” more broadly.)
Seen from one angle, the Agile movement should have alienated testers, rather than drawn them in. The major parties sitting at the Agile table back in 2001 were Extreme Programming and Scrum. Advocates of the former vigorously recommended that all testing should be automated to the extent possible. (Lisa Crispin and Tip House’s 2002 book on Testing Extreme Programming dismissed “manual” testing with a contemptuous, ultra-short chapter – about four sentences, three of them reading only “No manual tests”.) Descriptions of the latter made no reference at all to testers as a distinct group, subsuming all technical skills within “the development team”. Even the Agile Manifesto didn’t mention testing at all. According to Erik Petersen’s summation of Agile Testing history, Kent Beck himself told an audience of testers that they would be out of jobs in a few years.
Yet the twain did meet, Agile and testing. The major reasons for this unlikely outcome are, I suspect, as follows. First, Extreme Programming’s insistence on automated testing had the side-effect of refuting Myers’ fourth axiom; this made “developer testing” acceptable again. As a result, a number of Agile developers got interested in testing; where it had previously been marginalized in developers’ discourse owing to the (roughly) twenty-year-old separation of roles, testing became a prominent topic at Agile conferences, on mailing lists, and so on.
A second reason was Agile’s rejection of the notion of phases, and implicitly of the separation of roles. This tended to restore testers to a skilled, creative role as full-fledged participants on the technical team, involved and consulted from the start and throughout the project – fulfilling the aspirations of many software testers frustrated by the prevailing “shared beliefs” of their tribe, as some war stories attest.
This “re-skilling” appealed to a specific sub-tribe of testers, providing the third major reason that comes to mind: several talented people with a major or even primary interest in testing became involved in Agile right from the beginning, immediately sparking a wave of cross-fertilization of ideas. (Some names that come to mind are Brian Marick, a tester and Manifesto author whose writings foreshadowed Agile testing well before the term was coined; Cem Kaner, Brett Pettichord and to some extent James Bach, coauthors of the popular book “Lessons Learned in Software Testing” and founders of the “Context-Driven School” of software testing, which among other things stood for re-skilling; Elisabeth Hendrickson, Erik Petersen, Matt Heusser, Michael Bolton and many others.)
Testing remains a hotly contested jurisdiction. You should by no means take the foregoing to imply that the Context-Driven Testing tribe somehow “merged” with Agile. James Bach responded to a first version of this article with surprise at my glossing over “acrimony” between Agile testing and the Context-Driven school, which has existed at least since the 2004 Agile Fusion workshop that brought these communities together in a spirit of exploration. Rather, out of the many sub-tribes in testing, we find two that show substantial overlap in membership, and an interest in discussing both their common ground and their differences. James also reminded me that people generally don’t have equal standing in all the tribes they affiliate with. Someone can be highly regarded in the Agile testing community, but considered only marginally competent (if that) within the Context-Driven or exploratory testing schools.
The ideas that now characterize the Agile Testing tribe emerged from this working out of common ground in a context of controversy; one presentation from Elisabeth Hendrickson summarizes the major principles.
Wikipedia describes “agile testing” as “testing practice that follows the principles of the Agile Manifesto”. Put like this, it seems almost boringly obvious – Agile on one side, testing on the other, let’s combine these ideas and work out the consequences. But such a glib descriptor conceals the decidedly non-obvious nature of this merger; in much the same way that “software engineering”, the engineering of software, was originally a provocative term which sparked a great deal of fruitful thinking, until it became accepted and even taken for granted.
The “taken for granted” phase is when innovation stops and ossification sets in. I’m happy to report that “Agile testing” remains a fertile ground for healthy disagreements and new ideas.
Crispin, L., Gregory, J.: Agile Testing: A Practical Guide for Testers and Agile Teams, Addison-Wesley (2008)
Myers, G. J.: The Art of Software Testing, John Wiley & Sons (1979)
Petersen, E.: Back to the Beginning: Testing Axioms Revisited, Testing Spot blog (2002)
Kaner, C.: How Many Lightbulbs Does It Take To Change A Tester?, Pacific Northwest Software Quality Conference (2002)
Abbott, A.: The System of Professions: An Essay on the Division of Expert Labor, University of Chicago Press (1988)
This is an Agile Alliance community blog post. Opinions represented are personal and belong solely to the author. They do not represent opinion or policy of Agile Alliance.