I had the great pleasure of interviewing Natalia Lehmann, an Argentine software developer who has worked at Grupo Esfera for more than a decade. Natalia is passionate about the technical side of Agility, especially the automated tests that lead to higher-quality software. Fittingly, she chose to share her practical experience of doing development guided by acceptance tests.
This interview is in Spanish. However, you can read a summary in English below.
Natalia began by talking about the tension between automating acceptance tests and the technical difficulty of implementing them at the graphical-interface level while keeping their run time reasonable. She also commented that she and her co-workers are firm advocates of automating acceptance tests, having seen their benefit first-hand: these tests are fundamental to guaranteeing that progress can be made without breaking existing functionality, and they improve the overall quality of a system.
Natalia mentioned that manual testing is a repetitive process that sooner or later invites mistakes. Automated acceptance tests, by contrast, help establish agreements with users because they are written in a readable, plain-text language that both developers and stakeholders understand. Acceptance tests in this format are executable and become part of the living documentation of a system.
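As an illustration of what such a readable specification can look like, here is a hypothetical Gherkin-style scenario (the cash-withdrawal example is invented for this summary, not taken from Natalia's projects):

    Feature: Cash withdrawal
      Scenario: Withdraw within the available balance
        Given an account with a balance of 1000 pesos
        When the customer withdraws 300 pesos
        Then the remaining balance is 700 pesos

Stakeholders can read and discuss this text directly, while a tool such as Cucumber (which Natalia recommends at the end of the interview) binds each step to code and makes the scenario executable.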
Practicing ATDD (Acceptance Test Driven Development), she said, helps the team focus on what is really important and identify the main flow of each feature. Conversely, putting off the writing of automated tests leaves the system with areas that cannot be regression-tested automatically, which reintroduces the need for manual testing.
Natalia showed the Testing Pyramid popularized by authors such as Martin Fowler and Mike Cohn. In short, the pyramid states that we should have many unit tests, fewer service or integration tests, and only a few graphical-user-interface tests. Climbing the pyramid from the bottom up, each layer verifies that the pieces proven below actually integrate. Another detail is that unit tests run very fast compared to tests on the graphical interface. Unit tests exercise isolated components, pinpoint faults precisely, and are repeatable; they are therefore very valuable, but they do not detect integration problems.
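To make the contrast concrete, here is a minimal sketch of a fast, isolated unit test, assuming JUnit 5 and a hypothetical Account class (neither comes from the interview):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class AccountTest {
        @Test
        void withdrawalReducesTheBalance() {
            // Exercises one component in isolation: no database,
            // no network, no user interface involved.
            Account account = new Account(1000); // hypothetical domain class
            account.withdraw(300);
            assertEquals(700, account.balance());
        }
    }

A test like this runs in milliseconds and fails in one obvious place, but by design it says nothing about whether Account is wired correctly into the rest of the system.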
Tests on the graphical interface, on the other hand, are very slow and not very reliable, since they sometimes give false positives (failures that do not correspond to real defects). Another problem they bring is the effort that must be devoted to maintaining them. A contributing factor is that these tests drive the interface much faster than a real user would, which can trigger timing-related failures. Their diagnostic precision is also questionable: since each test covers a complete workflow, it is often difficult to locate where an error originates.
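A sketch of where that fragility comes from, assuming a hypothetical login page driven with Selenium WebDriver (the URL and element ids are invented for illustration):

    import java.time.Duration;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    class LoginFlowTest {
        void userCanLogIn() {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://example.test/login"); // hypothetical URL
                driver.findElement(By.id("user")).sendKeys("natalia");
                driver.findElement(By.id("password")).sendKeys("secret");
                driver.findElement(By.id("submit")).click();
                // The script types and clicks far faster than a human,
                // so it must wait explicitly for the page to catch up;
                // a missing wait like this one is a classic source of flaky failures.
                new WebDriverWait(driver, Duration.ofSeconds(10))
                        .until(ExpectedConditions.visibilityOfElementLocated(By.id("welcome")));
            } finally {
                driver.quit();
            }
        }
    }

Even when it passes, a test like this takes seconds rather than milliseconds, and when it fails anywhere along the flow, the report rarely says which layer of the system is at fault.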
Natalia went over alternatives her team has used to speed up graphical-interface tests, such as replacing slow dependencies with mock objects. Getting repeatable results was the other challenge they faced; testing directly against APIs (Application Programming Interfaces) or against the underlying models were alternatives they also tried.
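A minimal sketch of the mock-object idea, assuming Mockito and two hypothetical classes, ExchangeRateService and PriceCalculator (invented for illustration):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;
    import org.junit.jupiter.api.Test;

    class PriceCalculatorTest {
        @Test
        void quotesAreFastAndRepeatable() {
            // Replace the real, slow remote service with a mock whose
            // answer is fixed, making the test quick and deterministic.
            ExchangeRateService rates = mock(ExchangeRateService.class);
            when(rates.rateFor("USD", "ARS")).thenReturn(900.0);

            PriceCalculator calculator = new PriceCalculator(rates);
            assertEquals(9000.0, calculator.priceInPesos(10.0, "USD"));
        }
    }

The same substitution helps with repeatability: a real exchange-rate service would return a different value on every run, while the mock always answers the same.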
Natalia said that the teams she has worked with are composed entirely of developers, without professionals dedicated to manual testing; this profile has contributed greatly to automating the vast majority of the tests for the systems they build.
To close, Natalia suggested looking at tools such as Cucumber that help with acceptance testing. On a final note, she said that despite the technical problems it may bring, test-driven development is a path worth walking.
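Tying this back to the scenario shown earlier, a sketch of Cucumber step definitions in Java could look like the following (Cucumber-JVM annotations; Account is the same hypothetical class as before):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import io.cucumber.java.en.Given;
    import io.cucumber.java.en.Then;
    import io.cucumber.java.en.When;

    public class WithdrawalSteps {
        private Account account;

        @Given("an account with a balance of {int} pesos")
        public void anAccountWithBalance(int balance) {
            account = new Account(balance);
        }

        @When("the customer withdraws {int} pesos")
        public void theCustomerWithdraws(int amount) {
            account.withdraw(amount);
        }

        @Then("the remaining balance is {int} pesos")
        public void theRemainingBalanceIs(int expected) {
            assertEquals(expected, account.balance());
        }
    }

Each annotated method binds one line of the plain-text scenario to executable code, which is what turns an agreed-upon specification into the living documentation Natalia describes.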