The other day Jeff Fry sent me an IM out of nowhere and in the course of the conversation asked me what I was doing these days.
I blithely answered "not so much coding, more test design". Which was really too short an answer, but there's IM for ya.
In a nutshell, Socialtext has a really nice Selenium-based automation framework, and recently I've been attacking the backlog of creating automation for fixed bugs. When I'm doing manual (exploratory) testing on these code changes, I'm willing to spend time investigating odd combinations, repeated actions, and all sorts of unusual experiments to find exactly where the boundaries of the functionality lie.
But automated tests are expensive: they are expensive to create, and they are expensive to run. So when writing the automated regression tests, I'm trying to minimize the number of test steps while maximizing the coverage of the code in question.
Here's an example:
| type_ok | email_page_subject | Fw: Fwd: Re: Email subject %%start_time%% |
We needed to test that email forwarded or replied to, on the same subject, would appear on a single wiki page. Manually, I tested every combination of "Fwd:", "Fw:", and "Re:", after investigating all the permutations that various email clients might employ.
But when I wrote the automated test, I chose the most pathological case that I knew would succeed, in order to minimize run time and maximize the chance that the test would catch a regression problem.
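To make the behavior under test concrete: the feature has to strip any pile-up of reply/forward prefixes from an incoming subject so that all the related messages land on one wiki page. Here's a minimal sketch of that kind of normalization in Python; Socialtext's real implementation isn't shown here, and the function name and regex are my own illustration, not the product's code.

```python
import re

# Strip any run of leading "Re:", "Fw:", or "Fwd:" prefixes (case-insensitive,
# with optional whitespace around the colon) so that replies and forwards on
# the same thread normalize to a single canonical subject.
PREFIX_RE = re.compile(r'^(?:(?:re|fw|fwd)\s*:\s*)+', re.IGNORECASE)

def normalize_subject(subject: str) -> str:
    """Return the canonical subject used to pick the target wiki page."""
    return PREFIX_RE.sub('', subject).strip()
```

The pathological case from the test step above exercises all three prefixes stacked together in one subject line, so a single automated check covers the same ground as many manual permutations.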
Automated GUI testing is too expensive to cover all the cases that a human being can conceive. There is a craft and a discipline involved in creating automated tests that are both effective and economical.
I am very much enjoying exercising that craft and discipline.