Tuesday, February 14, 2006

Unorthodox testing tools in a hostile environment

Some time ago, I built an "Enterprise" disk imaging system, the equivalent of Symantec's Ghost product, from Open Source components, hosted on FreeBSD. In the process I discovered the FreeBSD ports collection, an astoundingly huge set of applications managed as part of FreeBSD itself. I used a whole raft of stuff from the ports collection as test tools in an all-Windows environment, everything from collaboration (Twiki) to network analysis (ntop).

I was doing a lot of install/smoke testing, and the Frisbee disk imaging system was an important part of that work. I published an interview with Mike Hibler, Frisbee's lead developer, in the March 2005 issue of Better Software magazine.

I discussed the wider issues of hosting test tools on FreeBSD in a paper for the 2005 Pacific Northwest Software Quality Conference, and now that paper has been adapted as a FreeBSD White Paper.

Dru Lavigne, a noted FreeBSD evangelist and Open Source advocate, not only helped me through the process of editing the White Paper, but was also kind enough to publish an interview with me about the paper, about software testing in general, and about why FreeBSD is an excellent platform for network testing tools.

I encourage you to read the White Paper and the interview, especially if you're involved in testing networked devices of any kind, or if you're frustrated by the tools available on Windows for doing this sort of work.

Wednesday, February 08, 2006

Scripting fun

I have a new-ish laptop, and I haven't bothered to install unixtools or Cygwin on it. (I had a bad experience with Cygwin some time ago.)

I needed to grep around in the C# source code directory for stuff. Windows Search didn't do the Right Thing, and TextPad's file search gave me memory errors. So I wrote a little script in Ruby to root around in the source code. Maybe it'll be useful for someone:

require "find"

Find.find("../interesting/directory") do |f|
  # Find.prune if f =~ /Design/
  # Find.prune if f =~ /UnitTest/

  next unless f =~ /\.cs$/  # only C# source files
  File.open(f) do |x|
    while line = x.gets
      if line =~ /stuffImLookingFor/i
        puts f
        puts line
      end
    end
  end
end

Friday, February 03, 2006

Testing Web Services

There was a recent thread on the agile-testing mailing list about testing Web Services. I ended up having an interesting private conversation with the original poster, during which I said:

I would immediately ask you "how are these services implemented?" Assume that your web services are implemented as pure XML-over-HTTP. I'm building a system-test framework in Ruby right now that understands how to talk to and how to listen to these XML-over-HTTP interfaces. I could just as well be using SOAP or REST, but in this case I don't have to.
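As a minimal sketch of what such a framework does at its core, here's how a test might build a pure XML-over-HTTP request with Ruby's standard library. The endpoint URL and the XML vocabulary are hypothetical, not from any real service:

```ruby
require "net/http"
require "rexml/document"

# Hypothetical service endpoint -- an assumption for illustration.
uri = URI.parse("http://service.example.com/orders")

# Build the XML payload the service expects (element names assumed).
doc = REXML::Document.new
order = doc.add_element("order")
order.add_attribute("id", "12345")
order.add_element("item").text = "widget"

# Wrap it in a plain HTTP POST -- no SOAP or WSDL required.
req = Net::HTTP::Post.new(uri.path)
req["Content-Type"] = "text/xml"
req.body = doc.to_s

# A real test would send the request and validate the response XML:
#   res = Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
#   reply = REXML::Document.new(res.body)
```

The point is that the test talks to the service exactly the way any other HTTP client would, with no knowledge of the service's internals.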


> > Any "Gotchas" I need to be aware of? Special strategies / approaches /

I think it boils down to two questions: the choice of tools, and the choice of approach. Almost any set of tools can work with either approach. Do you know Brian Marick's distinction between "business-facing" tests and "technology-facing" tests?

The choice of approach I identified is to test with unit-like tests that validate whether the service is performing properly internally, or to take a more black-box approach that validates whether the information the service processes is correct on the way in and on the way out.
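As a sketch of that black-box approach, this is the kind of check such a test makes: parse the XML the service emits and compare the data fields against what went in. The element names and the canned response are made up for illustration:

```ruby
require "rexml/document"

# What we sent in (or what the source database says should come out).
expected = { "id" => "42", "name" => "Acme Corp" }

# A canned response standing in for what the service returns over HTTP.
response_xml = <<XML
<customer>
  <id>42</id>
  <name>Acme Corp</name>
</customer>
XML

# Pull each child element into a hash and compare against expectations.
actual = {}
REXML::Document.new(response_xml).root.elements.each do |e|
  actual[e.name] = e.text
end

raise "data mismatch" unless actual == expected
```

Nothing here cares how the service computed the answer; the test only asks whether the data survived the trip.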

The tools choice is whether to use unit-test-style tools tightly coupled to the code itself, like JUnit or NUnit, or to use a more foreign framework that decouples the tests from the implementation of the services.

I've chosen to implement a foreign test framework in Ruby that concentrates on the correctness of information moving in and out of the service. I did this for two reasons: I'm working with great developers who can be relied upon to have implemented (and tested!) the service correctly; and I'm testing against an idiosyncratic 30-year-old legacy database, so unusual data conditions are more likely to cause failures than incorrect services are. It's a matter of adjusting to risk.


The bit above assumes that we all agree on the definition of a "Web Service", which is in fact probably unlikely. I'm stipulating that a well-known interface implemented as an XML payload in an HTTP POST operation is in fact a web service, even in the absence of WSDLs, SOAP, REST, or any of those other goodies. Even if it's false, it makes a good argument.

I also assume that we all agree on the purpose of our web services, which is slightly more likely: is the Web Service on the Internet, or is it only for applications behind a firewall?

I'm thinking of these services in terms of an Enterprise Application Integration system which could implement any number of interfaces in addition to my (broadly-defined) Web Services interfaces. This is another reason why a robust general-purpose scripting language like Ruby is an attractive choice for building the tests: besides HTTP, my tests will also have to speak ODBC, FTP, and talk to the filesystem, among other EAI interfaces.
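For example, the filesystem leg of such a test can be sketched like this; the payload, file name, and the FTP host in the commented-out portion are all hypothetical:

```ruby
require "tmpdir"
require "net/ftp"

payload = "<feed/>"
round_trip = nil

# Filesystem channel: write a file where the integration system would
# pick it up, then read it back the way the system would.
Dir.mktmpdir do |drop_dir|
  path = File.join(drop_dir, "feed.xml")
  File.write(path, payload)
  round_trip = File.read(path)
end

raise "filesystem channel failed" unless round_trip == payload

# FTP channel (not executed here -- server name and credentials assumed):
#   Net::FTP.open("ftp.example.com") do |ftp|
#     ftp.login("testuser", "secret")
#     ftp.puttextfile("feed.xml", "/inbound/feed.xml")
#   end
```

One scripting language covering every channel keeps the tests in a single framework instead of one tool per interface.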

There'll probably be more on this subject later, but for those interested in designing system tests for such services and architectures, these are required reading.