
what is "automated testing"?

Apropos of a recent discussion on the software-testing mailing list, I was reminded of reading James Bach's Agile Test Automation presentation for the first time. It contains one great page that defines test automation as:

any use of tools to support testing.

Until that time I had held the idea that test automation was closely tied to manual tests. I was familiar with data-driven and keyword-driven test frameworks, and with tests that relied on coded parts to run, but I still assumed a necessary connection between manual tests and automated tests. I had this idea that proper automated testing was simply the machine taking over and emulating human actions that were expensive or boring.

That way, of course, lies madness. And expense.

It was reading this particular presentation that really lit the light bulb and got me thinking about all the other things that testers do, and all the other ways to approach test automation. Here are some things I've done as a test automator since reading Bach's presentation, none of which bears any resemblance to a test case or a test plan (in no particular order):

A second-party test server was mostly down, so I wrote a sockets-based server to impersonate it.
Script to collect and organize output for gigantic number of reconciliation reports.
Script to parse compiler logs for errors and warnings.
Script to deploy machine images over the network for test environments.
Linux-based file storage-retrieval-display system in all-Windows shop.
Script to parse entire code base and report all strings shown to users. (So humans could find typos.)
Script to reach into requirements-management-tool database and present requirements in sane form.
Various network traffic-sniffing scripts to get a close look at test data in the wild.
Script to compare file structure on different directories.
Script to compare table structure in different databases.
Script to emulate large numbers of clicks with interesting HTTP headers on the GET.
Scripts to install the nightly build.
Monkey-test scripts to click randomly on windows.
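The first item in that list, impersonating a down server, is the kind of thing that sounds exotic but takes only a few dozen lines. Here's a minimal sketch in Python; the protocol, the request format, and the canned reply are all invented for illustration, since the original server's wire format isn't described here:

```python
import socket
import threading

# Hypothetical canned reply standing in for whatever the real
# second-party server would send back.
CANNED_REPLY = b"OK|balance=100.00\n"

def handle_one(srv):
    """Accept a single connection and answer every request the same way."""
    conn, _ = srv.accept()
    with conn:
        conn.recv(1024)            # read the request; this stub ignores its content
        conn.sendall(CANNED_REPLY)

# Bind and listen before starting the handler thread, so the system
# under test can't try to connect before the stub is ready.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))         # port 0: let the OS pick a free port
port = srv.getsockname()[1]
srv.listen(1)
t = threading.Thread(target=handle_one, args=(srv,))
t.start()

# The system under test (played here by a plain client) talks to the
# stub exactly as it would talk to the real server.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"GET_BALANCE acct=42\n")
    reply = client.recv(1024)
t.join()
srv.close()
print(reply.decode().strip())
```

The point is not the code itself but that a tester comfortable with sockets can unblock an entire team the same afternoon the real server goes down.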

And here's more testing-like work I did as a test automator that was still mostly not about validation:

Emulated GUI layer to talk to and test underlying server code.
Gold-master big-bang test automation framework.
SOAP tests framework.
Watir scripts for really boring regression tests. (But I didn't emulate the user, instead I made the code easy to maintain and the output easy to read.)
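For readers unfamiliar with the gold-master (or "approval testing") approach mentioned above, the core mechanism is tiny: record the system's output once, then fail any later run whose output drifts. This is a minimal sketch with invented file names and outputs, not the actual framework described in the post:

```python
import tempfile
from pathlib import Path

def check_against_gold_master(name, actual, store):
    """Compare test output against a saved 'gold master' copy.

    On the first run the output is recorded as the master; on later
    runs the check passes only if the output matches byte for byte.
    """
    store.mkdir(exist_ok=True)
    master = store / f"{name}.txt"
    if not master.exists():
        master.write_text(actual)      # first run: record, don't judge
        return "recorded", None
    expected = master.read_text()
    if expected == actual:
        return "pass", None
    return "fail", (expected, actual)  # hand the diff pair to a human

# A throwaway directory stands in for a checked-in gold-master store.
store = Path(tempfile.mkdtemp())

status1, _ = check_against_gold_master("report", "total: 42\n", store)     # records
status2, _ = check_against_gold_master("report", "total: 42\n", store)     # matches
status3, diff = check_against_gold_master("report", "total: 43\n", store)  # drifted
print(status1, status2, status3)
```

The "big-bang" part is simply doing this for large swaths of output at once, which trades precise assertions for very cheap coverage.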

And lots of other odd bits of reporting, manipulation, chores, and interesting ideas.

I have a couple of conclusions from a few years of working like this. Or maybe they're just opinions.

1) Excellent testers should be able to address the filesystem, the network, and the database, as well as the UI.
2) Testing is a species of software development. Testers and devs are closer than they think.
3) Testing is a species of system administration, too.
4) Testing is a species of customer support, also.


Michael said…
Thanks for this, Chris. I think it's an important contribution to understanding, and your stature in the community will help to take the message far.

For another contribution, you (and your readers) might enjoy


---Michael B.
