“Whether you practise Outside-in Development, Acceptance Test Driven Design, or maintain a collection of automated tests that you run regularly, how much feedback are you getting from them? As teams are asked to react quickly to changes in products and user demands, we rely on automated tests to guide our development and testing. But what if we are placing too much trust in them, or what if they are misleading us? How can we ensure that our automated tests give valuable feedback?
It’s common and easy to blame the tools, but how we design our automated tests can affect the outcome more than the tools themselves. One such design skill that we often ignore is how we codify our expectations into our automated tests. What rules should we put in place to determine whether a check should pass or fail?
In Solving the Codified Oracle Problem, I will briefly introduce:
- What Codified Oracles are and how they are a fundamental part of an automated test
- How we are at risk of substitution and automation bias when we rely upon test automation feedback
- Why we need to question what our Codified Oracles are actually telling us and how we can improve them to give better feedback”
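To make the idea concrete: a codified oracle is simply the rule a test uses to decide pass or fail. A minimal sketch in Python follows; the `create_user` function and its response shape are hypothetical stand-ins, not taken from the talk, and are here only to contrast a weak oracle with a richer one.

```python
# Hypothetical system under test; a stand-in so the example runs.
def create_user(name: str) -> dict:
    return {"status": 201, "body": {"name": name, "id": 1}}

# A weak codified oracle: only the status code is checked, so a
# response with a wrong or empty body would still pass.
def test_create_user_weak():
    response = create_user("ada")
    assert response["status"] == 201

# A richer codified oracle: more of our expectations are encoded,
# so more kinds of failure are actually detected and reported.
def test_create_user_rich():
    response = create_user("ada")
    assert response["status"] == 201
    assert response["body"]["name"] == "ada"
    assert isinstance(response["body"]["id"], int)

test_create_user_weak()
test_create_user_rich()
```

Both tests pass against the stand-in, but only the second would catch a regression that returns the right status code with the wrong body, which is the kind of misleading feedback the abstract warns about.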