Taking small steps in a big organization: An experience report on implementing Exploratory Testing into a large organization

About this Publication

How can a single individual change how testing is done in a large, distributed organization? This is my story of how I implemented Exploratory Testing in a big company one step at a time. Working with small teams, I provided opportunities for them to experience success and then capitalized on their passion to help promote the approach. By focusing on small steps and showing by doing, they were able to bring about great change.

This white paper is my experience report, sharing the successes and how I achieved them, the problems I encountered and how I addressed them, and my plans for the next steps. The aim is to show how you, as an individual in your large or small organization, can change the way things are done by taking small steps.


I came to my current company over six years ago and noticed that there was a large number of scripted tests held in an HP Quality Center repository, and that it ALWAYS took the team six weeks to test a release regardless of what changes or features it contained. The reason given for this was the need to run regression tests against everything in the release; there was no automation in place. There also appeared to be trust issues between the testers and developers. This manifested itself in a variety of ways: the testers did not communicate effectively with the developers, and the developers did not fully believe in the quality of the testing being undertaken. The testers, in turn, did not trust what the developers said they had changed, since they had no access to the code repository, nor had they asked for it. It should be noted that the project was very complex and not a UI system.

I already had experience of working in a semi-exploratory way before joining the company (ad-hoc/free testing), and I decided to use my own time, after work hours, to learn more about it. This was after somebody at the EuroSTAR testing conference [1] in 2008 suggested it would be worthwhile for me to look into the approach and to get in touch if I needed any help. That somebody was Michael Bolton [2], and this was my first contact with him.


Before I delve deeper into my experiences of implementing Exploratory Testing, I should take a moment to explain what I mean by Exploratory Testing. I will start with the definition as described by James Bach.

“Simultaneous learning, test design and test execution”. Exploratory Testing Explained – James Bach [4]

The first time the term was used in the context of software testing was by Cem Kaner who defined it as follows:

“a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project.” A tutorial in Exploratory Testing – Cem Kaner [6]

What does this mean to anyone reading this article?

The key aspect is that Exploratory Testing is an approach and a mindset. It is not a process or a methodology, and the way in which it is implemented depends on the project and the people; in other words, the context. Exploratory Testing came from the context-driven testing [7] world.

What it means in practice is that there is no large up-front planning of test scripts with multiple steps; you test as soon as you can access the system, and your exploratory session is structured in that you have a goal to achieve (a charter or mission). The focus of Exploratory Testing is not on what you know but on what you do not know about the system. If you know exactly what the system should do, there is a case for automating that and freeing up time for Exploratory Testing.
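To make the idea of a charter concrete, the sketch below models a time-boxed session in the spirit of session-based test management. The field names, the example mission and the 90-minute default are my own illustrative choices, not a prescribed format.

```python
from dataclasses import dataclass, field

# A minimal sketch of a session charter in the SBTM style: a mission,
# the areas in scope, and a time box. Field names are illustrative.
@dataclass
class Charter:
    mission: str                                 # the goal of the session
    areas: list = field(default_factory=list)    # parts of the system in scope
    duration_minutes: int = 90                   # a typical time-boxed length

    def summary(self) -> str:
        # One-line statement of intent, readable by the whole team.
        return (f"Explore {', '.join(self.areas)} "
                f"to {self.mission} ({self.duration_minutes} min)")

charter = Charter(
    mission="discover how the billing API handles malformed requests",
    areas=["billing API", "error handling"],
)
print(charter.summary())
```

The point of keeping the charter this small is that it gives the session structure and a goal without prescribing steps: what the tester actually does within the time box is driven by what they learn as they go.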

The best way to describe what you know and what you do not know about the system is shown in Figure 1 below, which is based on an idea and concept of James Lyndsay [8].

Figure 1  – Why we need to explore

Figure 1 shows that, before you get the software, you have a set of expectations that you hope will be met once you have the deliverable. Once the software has been delivered, you can ‘check’ [9] that it meets those expectations. Where the deliverable and the expectations overlap, the software works as expected; where they do not, the differences may indicate defects, misunderstandings or missing functionality. Since this is what you already know and expect, it may be prudent to try to automate it. On the other side, there is a great deal that you do not know or expect about the deliverable. Rumsfeld [10] described this as the ‘unknown unknowns’. This is where you have to explore: since you did not expect it, you cannot script it or plan for it in advance. This is where Exploratory Testing comes in, using techniques, approaches and structures that enable testers to engage with the deliverable and uncover information that could prove useful to the customer.
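The check/explore split above can be illustrated with a toy example. The `apply_discount` function and its expected values below are entirely hypothetical; the contrast is the point: a scripted check encodes an expectation we already hold, while exploring means trying inputs we never planned for.

```python
# Hypothetical function under test, purely for illustration.
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

# A 'check' automates what we already know and expect (the overlap in
# Figure 1): a known input, a known expected output.
assert apply_discount(100.0, 20) == 80.0

# Exploring probes what we did not expect: nothing told us to try a
# discount over 100%, yet doing so reveals a negative price.
print(apply_discount(100.0, 120))
```

The check can run unattended forever; the negative-price surprise is exactly the kind of information that only turns up when a tester wanders outside the expectations.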

One aspect of Exploratory Testing that is sometimes forgotten is the need to work closely with others on the project.  Collaboration is vital for Exploratory Testing to be successful; testers need to work with developers, project managers and customers. Working with others on the team and doing some paired Exploratory Testing can prove to be a great way to show the importance of Exploratory Testing and the information it can uncover.

If you want to know more about Exploratory Testing, there are many resources available online for free; the Ministry of Testing [11] has gathered some of these together in one place.


At first I realized that my understanding of Exploratory Testing was very shallow, so I decided, in my own time during the evenings, to do some research into Exploratory Testing. The outcome of that research formed the basis of section 2 of this article. As I learnt more about Exploratory Testing, I found out about context-driven testing and session-based test management [12], along with a wide range of supplementary material. Following this line of research, I came across the book “Lessons Learned in Software Testing” [13]. This book provided me with many tips and techniques that I could start to adopt and apply to my own testing, such as:

“You will also find important problems sooner if you know more about the product, the software and hardware it must interact with, and the people who will use it” 


“To test something well, you have to work with it. You must get into it. This is an exploratory process, even if you have a perfect description of the product.” 

This led to the discovery of a vast amount of free training material originally put together by Cem Kaner for the Association for Software Testing [14]. This material was the basis of the Black Box Software Testing (BBST) course [15]; I read and re-read it to improve my understanding of Exploratory Testing.

I took what I had learnt and looked at ways in which I could apply it in practice on the projects I was involved with. The first steps I took were not really about Exploratory Testing but were based more on common sense and looking at how things could be improved. This was done by implementing the following ideas:

  • Talking to development and asking what code had been changed
  • Talking to the team and highlighting where everyone thought the high risk parts were
  • Engaging and collaborating with development teams and building up a relationship of mutual trust in helping to deliver a quality product
  • Asking the project teams what the testing team could do to support the development teams
  • Getting earlier involvement with the design/analysis side

This helped to build up the confidence of the testing teams so that everyone on the project teams could see what skills and support testers could bring to the project.

4.      TODDLING

Simultaneously, I started to implement a semi-structured Exploratory Testing approach based on what I had learnt. On one of the projects, the testing team managed to reduce the testing for the release from six weeks to two weeks. We did this by using a mixture of scripted regression tests, selected based on the risk of what had been implemented or changed in the project, with some time set aside for Exploratory Testing. To help uncover the risks, the testing team worked closely with the development team to discover which areas of code had been changed and which had been difficult to implement. One side effect of this was a closer working relationship within the project team and the breaking down of barriers that had caused trust issues among team members. The success of this project, reducing the testing time while still delivering a high-quality product, was presented to management as evidence of the benefits of adopting a more risk-based strategy that incorporated Exploratory Testing to uncover useful, previously unknown information about the product.

This uncovering of new information was vital to success; we discovered issues and problems in the product earlier since our efforts were concentrated on the risky areas and not just on covering what we already understood about the behavior of the product.

The Exploratory Testing effort, even though it was relatively small in comparison to the scripted regression effort, was finding far more defects in the product. This became even more visible once the product was released to the customers, with a reduction in the number of defects reported in the production environment. I used this information to help justify expanding the Exploratory Testing approach to other projects and teams.


The next step was to communicate the success of this project and find opportunities to expand the approach to more projects. It was difficult to get buy-in, even with the weight of evidence that Exploratory Testing was more effective at uncovering potential problems; I struggled to explain clearly why this was the right approach to take. I did learn that effective communication is crucial if you want to change how things are done. You need to prove what you are saying by doing it and then use that as evidence. To some this may seem like common sense, but at the time my argument amounted to ‘well, it works for me, so why not do it’, which was not the right tactic to use.

I was asked to prove the approach on other projects and provide more evidence of its benefit.

This was done by gathering feedback from the customer on how well the product was received. During their user acceptance testing, the customer found a much lower number of defects in the product and managed to reduce their acceptance testing from two weeks to two days: a significant reduction, considering the product was a complex system with minimal UIs and many API interfaces, both internal and to third-party systems.

6.      LEARNING

I realized very quickly that I was out of my depth with my knowledge and skills in applying Exploratory Testing. I would need to learn a lot more to adapt the approach for use in a large organization, so at this point I reached out to the testing community and asked for support and help. I was very fortunate that Michael Bolton responded; he gave up his own time to chat with me via Skype over quite a few evenings and weekends. I would like to state publicly how thankful I am for his generous offer to mentor and train me in aspects of the Rapid Software Testing course [16].

From these sessions with Michael Bolton, I put together some training material that was relevant to my organization and could be used internally. It was crucial that the material was suitable for the organization so that people felt it was worth investing their time and effort in attending the workshops. The key element was to make the material flexible enough to meet the needs of the various teams and cultures across the organization. This material became the initial reference material for training other testers on projects and teams. If you are looking to implement this approach in your organization, adapting the material to the needs of those involved is vital. You need to understand how each and every team differs and make your material adaptable to those differences. One size does not fit all, so avoid terms such as ‘best practice’. Instead, focus your efforts on allowing people to adapt what you are recommending to fit their needs. Offer good practices and flexibility in how teams implement the approach; encourage innovation rather than dictating how it must be done.

The initial effort to put the material together can be high: the material for the first workshop took approximately two weeks of my own time in the evenings, spread over a three-month period. Once this initial material was created, it was demoed to a small team for feedback and adjusted. It is important that the material is flexible enough to be adapted based on the audience. The current workshop material is very different from the initial offering, and keeping your material current and dynamic is crucial to its success. One important lesson is to try your material out with different audiences once you have done enough preparation. What counts as ‘enough’ depends on your learning goals for the audience you are working with; it is difficult to measure, but by being open to feedback and admitting where something is not working, you can adjust your material to be more suitable.


I was asked by management if this approach could be scaled to bigger projects and teams. I joined a highly visible project within the organization consisting of four distributed teams, loosely formed into Scrum teams, each of about nine people, located around the world in different time zones.

My approach was to first try the material out on the testers on the team in the same location as myself. They provided me with critical feedback, and the material was altered where it did not fit our needs. We decided that all our manual testing would be exploratory in nature (there was some automation). The approach was eagerly adopted within the local team, but among the distributed teams there were some initial teething issues and resistance which were not easy to solve satisfactorily. I believe these teething issues were due to a lack of buy-in among some of the teams and a resistance to changing what had always been done. Some teams did not see the benefit of changing what they were doing, or what was in it for them and their daily work. There was also conflict between what their team leaders were requesting and what I was asking them to adopt. One key issue was the move away from passing and failing tests towards a more general dashboard style of reporting, in which the testers reported what they had found along with what had and had not been tested. This change caused difficulty for some, because it was no longer as easy to gauge the progress of testing as with the traditional percentage of test cases run and not run. Later on, I worked with some of the teams to resolve this and the other issues and to find better ways to report the progress of testing.
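The dashboard style of reporting mentioned above can be sketched as follows. The area names, coverage levels and assessment wording are illustrative inventions, not the exact format we used; the point is that each area gets a coverage level, an assessment and free-text findings instead of a pass percentage.

```python
# Each row reports an area of the product rather than a test case count:
# how deeply it was covered, how it looks, and what was actually found.
areas = [
    {"area": "login",   "coverage": "deep",    "assessment": "looks good", "findings": []},
    {"area": "billing", "coverage": "shallow", "assessment": "problems",   "findings": ["timeout on refund"]},
    {"area": "reports", "coverage": "none",    "assessment": "not tested", "findings": []},
]

def render(dashboard):
    """Render the dashboard as aligned plain-text rows."""
    lines = []
    for row in dashboard:
        note = "; ".join(row["findings"]) or "-"
        lines.append(f"{row['area']:<8} {row['coverage']:<8} "
                     f"{row['assessment']:<11} {note}")
    return "\n".join(lines)

print(render(areas))
```

A report like this answers “what do we know about the product right now?” rather than “what fraction of the scripts have we run?”, which is exactly the shift that some of the distributed teams found hard at first.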

Despite these issues, the product was delivered to the customer and is now live. Some of the feedback we received was that very few customer-specific issues were raised against the release in comparison to previous projects delivered, which I think is a reasonable measure of success for the approach. Following this, I was contacted by another team within our company, based in Israel, to see if it was possible to do some Exploratory Testing training with their teams to help overcome similar resistance and teething issues.


I went to one of our global locations and delivered the workshops to about fifty people, using this as a trial to see if the approach could be adapted to work with different projects outside my direct influence.

Once the workshops had been delivered, I returned to my base office to work on other projects. I later found that the approach had not been adopted well in the location I had visited, and that some teams had given up trying to use it and reverted to scripted tests. It puzzled me why this was the case, and it was only after some investigation that I discovered I had made some mistakes in delivering the workshops. In summary, these mistakes were:

  • No follow up with teams to ensure questions could be answered
  • No application in the workshop to the real work that was being done by the teams (no connection between what was being said and how they do their job)
  • The workshop was more a lecture rather than a learn by doing set of exercises (not experiential)

From this feedback, I started to redesign the workshops to be more hands-on, practical and led by group work rather than by the trainer. To do this, I had to go back and learn how to make the material more experiential, so I researched various approaches to encouraging learning by doing. One of the first resources I came across, and was recommended, was the book “Training from the Back of the Room” [17] by Sharon L. Bowman. There were some valuable lessons in this book on how to approach training, the key ones being:

  • The 4 Cs – Connections, Concepts, Concrete Practice and Conclusions
  • The importance of “What is in it for you”
  • Talk less and do more

There is far more within this book, but as a starting point for understanding how people learn I found it extremely useful. It led me into the psychology of learning and the discovery of experiential learning. Researching this, I uncovered information about Kolb [18] and his cycle of experiential learning, which can be seen in Figure 2.

Figure 2 – Kolb experiential learning cycle

This is similar to the 4 Cs discussed in Sharon Bowman’s book, in which students need to be given something to learn, try out what has been learnt, adapt it to real experience, and reflect on what they have learnt. Other resources I used included a set of books by Gerald Weinberg on experiential learning [19].

All this was vital for me in moving forward and I learnt a very important lesson on how people learn and apply what they have learnt.


Another issue arose after I delivered training to some local teams: I found a less than positive response to the approach and could not work out why. I asked myself if this was a case of “not in your own backyard”. Regardless of the reason, I needed to find a way to resolve it. Michael Bolton came to the rescue, and together we organized an internal three-day Rapid Software Testing course for the teams, run by Michael himself. This proved to be a great success, and people within the teams started to discuss ways in which they could use the approach on their own projects. We had started to gather some momentum for adopting the Exploratory Testing approach.

Reflecting on this later, it was still difficult to understand why I had failed to sell the usefulness of the Exploratory Testing approach. Was it a question of culture, or that I was not seen as an expert in the field? A case of “who are you to tell us this?” This is something to be aware of when you are attempting to change the culture and approaches used within your own organization. Another factor may have been that the workshops I ran were only one day long, with a mixture of people from different teams attending, whereas the outside expert ran a three-day intensive course attended by the majority of each team. If only a few people from a team attended my sessions, those who did not attend could have a natural reluctance to join in with the discussions; they may have felt excluded and not empowered to support the changes.

You will find some resistance, and overcoming it can be challenging. You may need to involve respected experts from outside. Alternatively, you can adopt a top-down approach, first getting buy-in from influential people within your organization to help support your credentials as an expert.


Using the momentum and enthusiasm generated by Michael Bolton’s workshop, I started to work closely with our internal training division and reached out to other regions that had expressed an interest in learning more about the approach. I delivered the ‘new’ workshop on Exploratory Testing in five different global locations.

In these workshops I looked for people who showed interest in and passion for the subject, and approached them to see if they would like to become ‘experts’: the ‘go to’ people in their region when others ask about Exploratory Testing. I would act as a mentor and support person, and they in turn would act as mentors and trainers in their region. We now have over twenty experts across the regions. I found some very passionate testers in each region, and their passion and enthusiasm have helped to drive the approach into more and more teams, with some of the experts now training others themselves. Some successes include the following:

  • India – Over 200 testers now regularly using the approach
  • France – Over 50 people trained in using the approach
  • UK – More and more teams becoming aware and 20-30 testers using the approach
  • China – Still early days
  • Israel – Started to revisit and retrain

The difficulty with all of this adoption is in measuring how successful the approach actually is. One way to measure success is by how many issues are raised by the customer once the product has been released: what is the users’ perception of the quality of the product?

During the training sessions in one of the global locations, we applied the Exploratory Testing approach to fourteen different products. These products were set-top boxes (STBs), and we spent one hour per product. During these exploratory sessions we uncovered over three hundred issues, some of which could be classed as showstoppers. This was one way I demonstrated the effectiveness of the Exploratory Testing approach. As we have moved forward with adopting it, teams have found problems earlier and therefore provided quicker feedback to the development teams. One side effect is that the testing teams have come to be seen as more valuable and useful, acting as a service to the project and providing vital information to those who can make use of it.


Our company was acquired in 2012 and we are now part of a much bigger company (70,000+ people). This has given me more scope to expand the approach into other teams. My future plans include:

  • More workshops done by internal experts
  • Expand use of outside agencies to deliver rapid software testing workshops
  • Look at the BBST course and see if we can get our staff trained as instructors to deliver this internally
  • Using internal communications system to spread knowledge and expertise of the approach
  • Internships
  • Apprenticeship schemes

One point that should be made is that the Exploratory Testing approach can be adopted under any development methodology, be it agile, waterfall or iterative. It is methodology-independent, and that is one of its great advantages: no matter which methodology your company follows, this approach can fit into it.

Since the acquisition, some of the future plans listed above have been extended. I am currently working with testing teams in the USA to implement Exploratory Testing within Cisco. I have already delivered workshops in Atlanta and, by the time this is published, will have delivered workshops at our San Jose campus. Due to the success within the USA teams, I am working closely with the senior leadership team in Atlanta to get the approach adopted across the whole division.


The key lessons I learnt from this experience are:

  • A single individual can make a huge difference within any organization big or small
  • Have a passion and find others who share your passion
  • Embrace mistakes (you will make them)
  • Be prepared to learn
  • Do not tell people what to do but show them


  1. Eurostar Testing Conference – retrieved 7th May 2014
  2. Michael Bolton website – retrieved 7th May 2014
  3. James Bach website – retrieved 7th May 2014
  4. Exploratory Testing explained, James Bach – retrieved 7th May 2014
  5. Cem Kaner website – retrieved 7th May 2014
  6. A Tutorial in Exploratory Testing, Cem Kaner – retrieved 7th May 2014
  7. Context Driven Testing website – retrieved 7th May 2014
  8. James Lyndsay website – retrieved 7th May 2014
  9. Checking vs Testing refined, James Bach – retrieved 7th May 2014
  10. There are known knowns, Rumsfeld, Wikipedia – retrieved 7th May 2014
  11. Exploratory Testing resources, Ministry of Testing – retrieved 7th May 2014
  12. Session-Based Test Management, James Bach – retrieved 7th May 2014
  13. Lessons Learned in Software Testing, Cem Kaner, James Bach, Bret Pettichord
  14. Association for Software Testing – retrieved 7th May 2014
  15. Black Box Software Testing Course Material – retrieved 7th May 2014
  16. Rapid Software Testing Course – retrieved 7th May 2014
  17. Training from the Back of the Room, Sharon L. Bowman – retrieved 7th May 2014
  18. David A Kolb and Experiential Learning – retrieved 7th May 2014
  19. Experiential Learning, Gerald Weinberg – retrieved 7th May 2014
  20. Explore it, Elisabeth Hendrickson – retrieved 7th May 2014
  21. John Stevenson’s blog – retrieved 7th May 2014
  22. John Stevenson, The Psychology of Software #Testing – retrieved 7th May 2014

About the Author

Having been involved in testing for over 20 years and in the IT industry for more than 24 years, I am still surprised by how exciting I find it and how much I continue to learn. I have a passion for learning and love to learn about new things. I have an interest in many areas, such as social science, psychology, photography and gardening. I keep involved in the testing community, write a testing blog [21] and can be found regularly tweeting (@steveo1967). I am keen to see what can be of benefit to software testing from outside the traditional channels, and as such I like to explore different domains and see if anything can be linked back to testing. I care about the testing community, like to be involved and like to be social. I have a wide variety of experience within testing, and I am currently mentoring and training others in Exploratory Testing and SBTM while looking for opportunities to introduce approaches from other crafts such as anthropology, ethnographic research, design thinking and cognitive science.