
UI test framework design using Watir

I wrote a little toy test automation framework recently.

It's a keyword-driven framework, and the test data is kept in a CSV file. The test data looks like this:

goto,http://www.google.com,,
text_field,name,q,pickaxe
button,name,btnG,click
match,Programming Ruby,text

This is the classic Google Search example from the Watir documentation.

I used CSV because it's the simplest possible implementation. This is, after all, a toy. In the real world this might be HTML tables, wiki tables, or a spreadsheet. What we want is for anyone to be able to write a test quickly and easily after a little training with Firebug or the IE Developer Toolbar. (Test design is a different issue. Let's not talk about that now.)
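For parsing the rows, Ruby's standard CSV library handles quoting and trailing empty fields more gracefully than a bare String#split would. A minimal sketch, using the same test data as above embedded in a heredoc:

```ruby
require 'csv'

# The same four rows as the watir_keywords.csv example above.
data = <<~ROWS
  goto,http://www.google.com,,
  text_field,name,q,pickaxe
  button,name,btnG,click
  match,Programming Ruby,text
ROWS

# CSV.parse returns an array of arrays; trailing empty fields come back
# as nil rather than stray empty strings or newlines.
rows = CSV.parse(data)
rows.each do |row|
  keyword = row.first
  puts "#{keyword} -> #{row[1..].inspect}"
end
```

In the real framework you would call CSV.read('watir_keywords.csv') instead of parsing a string.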

The framework grabs the test data and interprets each row in order. A simple dispatcher invokes the correct execution method based on the first element in each row. This has two advantages.

For one thing, there are a finite number of types of page elements to deal with, so our methods to manipulate them will be limited in number.

For another thing, we can write custom test fixtures using our own special keywords, and make our own little DSL, like I did with the final row above that starts with "match".
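The dispatch idea can be sketched without a browser at all. This is a minimal sketch, not the framework's actual code: the keyword methods here only record what they would do, and the `pause` keyword is an invented example of a custom DSL fixture. An allowlist keeps `send` from reaching arbitrary methods:

```ruby
# Sketch of a keyword dispatcher. `pause` is a hypothetical custom
# keyword; in the real framework, methods like goto would drive @ie.
class Dispatcher
  KEYWORDS = %w[goto pause].freeze

  attr_reader :log

  def initialize
    @log = []
  end

  # Look up the keyword in the allowlist, then dispatch dynamically.
  def dispatch(args)
    keyword = args.first
    raise ArgumentError, "unknown keyword: #{keyword}" unless KEYWORDS.include?(keyword)
    send(keyword, args)
  end

  # In the real framework this would call @ie.goto; here it just records.
  def goto(args)
    @log << "goto #{args[1]}"
  end

  # A custom DSL keyword, invented for this sketch.
  def pause(args)
    @log << "pause for #{args[1]}s"
  end
end
```

Adding a new keyword is then just adding a method and an allowlist entry, with no change to the dispatch loop.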

Reporting pass/fail status is left as an exercise for the reader.
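One way to start on that exercise is a simple tally object, assuming each keyword method can report whether its step succeeded. This is a sketch of one possible design, not part of the original framework:

```ruby
# Hypothetical pass/fail tally: record each step's outcome, then
# summarize at the end of the run.
class Report
  def initialize
    @passed = 0
    @failed = 0
    @failures = []
  end

  def record(step, ok)
    if ok
      @passed += 1
    else
      @failed += 1
      @failures << step
    end
  end

  def summary
    "#{@passed} passed, #{@failed} failed" +
      (@failures.empty? ? "" : " (#{@failures.join('; ')})")
  end
end
```

The dispatcher could call `record` after each keyword and print the summary when the CSV file is exhausted.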

This simple framework could easily be the start of a real test automation project. It's robust, it scales well, and it can easily be customized, improved, and refactored over time.

[UPDATE: the comment below about leaking memory is interesting. I didn't say it explicitly in the original post, but I think test data (CSV files) should represent fewer than about 200 individual steps. My vision is that this script is invoked for a series of CSV files, each testing a different path through the application, and each CSV file is handled by a new process. Paths through the application with more than about 200 steps make it much harder to analyze failures. The fewer the number of test steps, the better.]
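The one-process-per-CSV-file idea described above might be driven like this. A minimal sketch, assuming a hypothetical `run_keywords.rb` script name; the runner lambda is injectable so the sketch can be exercised without actually spawning processes:

```ruby
# Run each CSV file in a fresh interpreter (and therefore a fresh
# browser), so memory from one run never accumulates into the next.
# system() returns true/false based on the child's exit status.
def run_suite(csv_files, runner: ->(f) { system('ruby', 'run_keywords.rb', f) })
  csv_files.map { |file| [file, runner.call(file)] }
end
```

A driver would then do something like `run_suite(Dir.glob('tests/*.csv').sort)` and print a pass/fail line per file.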



require 'test/unit'
require 'watir'

class ExampleTest < Test::Unit::TestCase

  def goto(args)
    @ie.goto(args[1])
  end

  def text_field(args)
    @ie.text_field(:"#{args[1]}", args[2]).set(args[3])
  end

  def button(args)
    @ie.button(:"#{args[1]}", args[2]).click
  end

  def match(args)
    assert_match(args[1], @ie.text, "FAIL!")
  end

  def test_google
    @ie = Watir::IE.new

    @command_array = []

    File.open('watir_keywords.csv') do |file|
      while line = file.gets
        @command_array << line.chomp
      end
    end

    @command_array.each do |comm|
      args = comm.split(',')

      # arguably 'case' would be nicer here than 'if', but this gets the job done.
      if args[0] == "goto"
        goto(args)
      elsif args[0] == "text_field"
        text_field(args)
      elsif args[0] == "button"
        button(args)
      elsif args[0] == "match"
        match(args)
      end
    end
  end

end

Comments

Bret said…
You could use this instead of your final set of if statements:

send args[0], args
saravanan said…
This type of implementation will cause a large number of function calls. From a coding point of view it may be simple, but during execution every tiny step triggers a function call. As a result the number of function calls increases, and this may lead to memory leak problems.
