Part of that transformation, Klain continues, means developing a "culture of professional testing" that drives how Barclays first recruits and then develops testers. This culture shapes the company's training, coaching and mentoring programs, which in turn home in on testing skills such as heuristic test strategies, visual test models, exploratory testing and qualitative reporting.
Heuristic test strategy, for example, lets teams devise appropriate test approaches and compress test cycles while finding important bugs earlier; models and reporting improve communication with senior management. But where did it come from?
How Context-Driven Emerged From the Schools of Software Testing
Bret Pettichord is a tester from Austin, Texas, a former consultant for Thoughtworks and an early contributor to both the Watir and Selenium projects. It was 2003 when Pettichord first gave his presentation, Schools of Software Testing, which identified distinct ways of thinking about the testing problem. Pettichord identified the previously mentioned factory method, or school, which believes in making testing a repeatable process.
Bret Pettichord defined the schools of software testing in "Schools of Software Testing." More than a decade later, he's still in the business as a quality assurance manager at Blackbaud.
In addition to the Factory School, Pettichord also named an Analytic School, which uses academic models to create test cases; the Quality School, which focuses on prevention; and the Context-Driven School, which applies different tools to different problems.
A context-driven tester might, for example, use a great deal of automation for a batch program that would be maintained for years but might not use any for a video game to be deployed to the iTunes store just once. Pettichord listed exploratory testing as an exemplar for this school; 10 years later, it's a core part of Barclays' training curriculum.
Nearly all software testing begins with some amount of exploration. A human checks the work by running it, learning it and adapting the test approach over time based on feedback from the software itself. While this might be perfectly sufficient for a single person writing a video game for the iPhone, or a computer science student checking work before turning it in, it is widely derided in larger IT service organizations as unrepeatable, ad hoc or unable to scale.
It's certainly true that exploratory testing is rarely repeated. The question is what repeatability is actually worth. Exploratory testing proponents would ask: if the number of possible input combinations is infinite, wouldn't testing with different values, and different paths through the software, actually increase coverage over time? For that matter, if the software has different features with every build, along with different known risks, why test it the same way?
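The proponents' argument can be sketched with a toy example. Everything here is hypothetical, not Barclays' or Pettichord's actual practice: a made-up `classify_triangle` system under test, a three-case scripted suite, and a randomized "exploratory" pass. Rerunning the fixed suite exercises the same inputs every run, while varying the inputs accumulates coverage of the input space over time.

```python
import random

def classify_triangle(a, b, c):
    """Hypothetical system under test: classify a triangle by side lengths."""
    if a + b <= c or b + c <= a or a + c <= b:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

def fixed_suite():
    """A repeatable scripted suite: the same three inputs every run."""
    return {(3, 4, 5), (2, 2, 2), (2, 2, 3)}

def exploratory_run(seed, n=100):
    """One varied pass: a different batch of random inputs each run."""
    rng = random.Random(seed)
    return {(rng.randint(1, 10), rng.randint(1, 10), rng.randint(1, 10))
            for _ in range(n)}

# Accumulate the distinct inputs each approach touches over five runs.
fixed_coverage, varied_coverage = set(), set()
for run in range(5):
    fixed_coverage |= fixed_suite()
    varied_coverage |= exploratory_run(seed=run)
    # Exercise the system under test with every input seen so far.
    for case in fixed_coverage | varied_coverage:
        classify_triangle(*case)

assert len(fixed_coverage) == 3                    # never grows
assert len(varied_coverage) > len(fixed_coverage)  # grows run over run
```

The point of the sketch is not that random inputs replace judgment; it is that identical reruns add no new information, while varied runs keep probing fresh regions of the input space, which is the coverage argument the proponents make.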