
Kevin Lawrence is still right

Back in March 2005, Kevin Lawrence wrote this great article for Better Software called "Grow Your Test Harness Naturally".

He's got a link to the PDF on his blog, luckily.

It's one of those articles that keeps getting cited by people one runs into here and there. I read it over and over until I Got It.

The basic premise is: don't write the framework first. Write some tests, then extract the common code from those tests, write more tests, abstract common code, write more tests, refactor, repeat. The refactoring generates the framework.
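Purely for illustration (this is my sketch, not an example from Kevin's article), here is what the starting point of that loop tends to look like in Watir/Ruby terms: a pile of near-identical scripts. The URL, element names, and values below are all made up; only the shape matters.

```ruby
require 'watir'

# path_one.rb -- one stand-alone path script
# (app URL, element names, and values are all hypothetical)
browser = Watir::Browser.new
browser.goto 'http://example.com/app'
browser.text_field(:name => 'limit').set('10')   # set up a rule
browser.text_field(:name => 'amount').set('5')   # set some data
browser.button(:value => 'Submit').click         # the final click
browser.close

# path_two.rb -- the same steps with different values; this
# duplication is exactly what the refactoring loop extracts
browser = Watir::Browser.new
browser.goto 'http://example.com/app'
browser.text_field(:name => 'limit').set('100')
browser.text_field(:name => 'amount').set('50')
browser.button(:value => 'Submit').click
browser.close
```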

The thing is, I had never had enough code, complex enough, running against an app stable enough, to really and truly build a framework this way.

Until now.

I have a large number of things to do with Watir.

First I automated each path through the code individually. I did about 10 of these.
Each of my paths had to set up a certain set of rules, so I abstracted a method, "make_rules".
Each of my paths had to set some data before doing a final click, so I abstracted a method for that too.
At this point my 10 individual scripts had each shrunk to about 5 lines of code, so I put them all in one file.
But I still had to handle variations in the path through the code, so I wrote a controller.rb file that orchestrates all of the making of rules, data handling, and final clicking. (A sketch of how these pieces fit together follows below.)
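Here is a sketch of roughly where that refactoring lands. To be clear, this is not my actual framework: the locators, rule names, and field names are hypothetical. But the shape, extracted helpers plus a small controller driven by a table of paths, is the one described above.

```ruby
require 'watir'

# Helpers extracted from the first ten scripts
# (hypothetical locators and field names).
def make_rules(browser, rules)
  rules.each { |name, value| browser.text_field(:name => name).set(value) }
  browser.button(:value => 'Save Rules').click
end

def set_data_and_click(browser, data)
  data.each { |name, value| browser.text_field(:name => name).set(value) }
  browser.button(:value => 'Submit').click   # the final click
end

# controller.rb -- each former stand-alone script is now one entry of data
PATHS = [
  { :rules => { 'limit' => '10'  }, :data => { 'amount' => '5'  } },
  { :rules => { 'limit' => '100' }, :data => { 'amount' => '50' } },
  # ...more paths, each subtly different from the others
]

PATHS.each do |path|
  browser = Watir::Browser.new
  browser.goto 'http://example.com/app'   # hypothetical app
  make_rules(browser, path[:rules])
  set_data_and_click(browser, path[:data])
  browser.close
end
```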

Now I'm adding new paths through the GUI. Adding each new path means rejiggering my existing libraries to accommodate it. But since my libraries were generated as a consequence of actual code, they're naturally in tune with what I want to do. Updating them to handle the first new path through the code takes some thought, but the next 10 times I need to go down that path all of the methods are already in place, and the repetition becomes trivial.
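In terms of the sketch above: the first new kind of path is the one that takes thought, because it may force a new method into the library (say, a hypothetical make_exceptions). Every later path of that kind is then just one more row of data for the controller.

```ruby
# Hypothetical: the first new path needed an extra setup step,
# so the library grew one method...
def make_exceptions(browser, exceptions)
  exceptions.each { |name, value| browser.text_field(:name => name).set(value) }
  browser.button(:value => 'Save Exceptions').click
end

# ...the controller calls it only when a path supplies exceptions:
#   make_exceptions(browser, path[:exceptions]) if path[:exceptions]
# ...and the next ten variations are one-line additions to the table:
PATHS << { :rules      => { 'limit' => '10' },
           :exceptions => { 'override' => 'weekends' },
           :data       => { 'amount' => '5' } }
```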

The framework is reliably going through the GUI about 80 times now, every time subtly different than every other time.

The thing is, I don't think I could have *designed* this framework. For one thing, I didn't have enough information when I started writing to anticipate all of the paths I would need. For another, even though my framework isn't very complex, it's still large enough that I doubt that I could have held the whole architecture in my head in order to write it in advance.

I can't believe how a) robust and b) readable this framework is turning out to be. I still have some mistakes to clean up, and I have a couple more paths through the code to add, but I'm so pleased to finally find out for myself that Kevin Lawrence is still right.

Comments

Ramdeo said…
I partially agree with you on this. But it still makes sense in most cases to start with a framework and then enhance it. For example, helper routines like making DB connections, getting data, comparing files, and reporting can ideally already be in your framework. This can be done even before you start test case automation as such.
The framework can then be enhanced with navigation, setup/teardown, etc. as you make progress with your automation.
There is value in starting with a framework by solving problems common to all the test cases, and then enhancing the framework as you go.
