
the minimum number of tests

Elisabeth Hendrickson wrote about a case of complacency and lack of imagination in a tester she interviewed.

I agree with the gist of her argument, that good testers have a "knack" for considering myriad ways to approach any particular situation.

But I would also point out that just as a "tester who can’t think beyond the most mundane tests" is dangerous, a tester who assigns equal importance to every possible test is equally dangerous, if not more so.

The set of all possible tests is always immense, and the amount of time to do testing is always limited. I like to encourage a knack for choosing the most critical tests. Sometimes this knack looks like intuition or "error guessing", but in those cases it is almost always, as Brian Marick put it in his review of Kaner's "Testing Computer Software", "past experience and an understanding of the frailties of human developers" applied well.

Intuition and error guessing take you only so far, though; there are other techniques for limiting the number of tests to those most likely to find errors. The diagram here represents work I did some time ago. I was given a comprehensive set of tests for a particular set of functions, and running every test took a long time. To arrive at the smallest possible set of tests that still exercised every aspect of those functions, I arranged the tests on a mind map: a test whose criteria merely overlapped those of another test went far from the center, while a test whose criteria encompassed the criteria of several other tests went close to the center. In this way I arrived at the smallest possible set of tests that covers at least some aspect of every function under test.

It is not a comprehensive set of tests. Rather, it is the set of tests that, given the smallest amount of testing time, has the greatest chance of uncovering an error in the system.
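Expressed programmatically, that selection resembles a greedy set-cover pass: repeatedly pick whichever test exercises the most criteria not yet covered, until every criterion is covered at least once. The sketch below is only an illustration of that idea, not the mind-map exercise itself; the test names and criteria are hypothetical.

# Greedy approximation of the smallest set of tests that still touches
# every criterion at least once. Test names and criteria are hypothetical.
tests = {
    "test_login_basic":        {"auth", "session"},
    "test_login_bad_password": {"auth", "error_handling"},
    "test_checkout_full_flow": {"auth", "session", "cart", "payment"},
    "test_cart_add_remove":    {"cart"},
    "test_payment_declined":   {"payment", "error_handling"},
}

def minimal_covering_set(tests):
    """Pick tests greedily until every criterion is covered."""
    uncovered = set().union(*tests.values())
    chosen = []
    while uncovered:
        # Choose the test that covers the most still-uncovered criteria.
        best = max(tests, key=lambda t: len(tests[t] & uncovered))
        chosen.append(best)
        uncovered -= tests[best]
    return chosen

print(minimal_covering_set(tests))
# e.g. ['test_checkout_full_flow', 'test_login_bad_password']

A real suite would weigh risk and cost as well as raw coverage, which is exactly the judgment the mind map was meant to make visible.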




There are certainly echoes here of model-based testing, although I think this approach plays a little fast and loose with the rules there.

Whether the tester arrives at a minimum number of tests through intuition or through analysis, the ability to do so is a good skill to have.

And if you do encounter a tester who creates a smaller number of tests than you think is appropriate, you might want to check that tester's reasoning to be certain that a mistake is really being made.
