Sunday, December 10, 2006

thank you Grace Hopper

You can look at wikipedia, but there are all sorts of other references.

I am a professional software tester, almost certainly because of Grace Hopper:

If she didn't invent it, she at least made popular the term "bug"
She is largely responsible for COBOL. I began my career reading COBOL. I think if I had begun with any other language, I would probably have failed.
She was one of the first to advocate adherence to standards, without which testers would not exist.
The phrase "It's easier to ask forgiveness than it is to get permission." is attributed to her, and has been a hallmark of my software testing career.
The most successful periods of my career have been with female managers. Women in IT are remarkably good managers, and without Grace Hopper's influence, there would be far fewer of them than there are today.

She would have been 100 years old this month.

I am proud to note her influence.

Friday, December 08, 2006

at least look at the satellite view

Had an interesting confluence of influences this week. I sent another Better Software article to the copyeditor, and in it I cite (again) Harry Robinson's work. I am by no means an expert model-based tester, but understanding the basics of Harry's work is a fine tool to have in the toolbox. Harry is using Watir for testing Google Maps, as you can see if you grab the PDF and the Ruby files off the site.

And I read that James Kim and his family may have gotten into trouble because they trusted an online map service.

I live in a pretty remote part of Colorado. A little over a year ago, my wife and I used Google Maps to navigate to another remote part of Colorado, where we were going to visit a friend. Google Maps routed us via the Mosca Pass Road.

As I mentioned to Harry, "The problem is that while Mosca Pass Road exists, it hasn't had any traffic other than hikers and horses for the last 100 years or so." Here's where Google Maps routes you today. I don't know if the bug got fixed because of Harry, but the Mosca Pass Road runs over the north shoulder of that big snowy mountain in the satellite view. You don't want to take your car there unless you've got high-clearance 4-wheel drive and a winch for the hard parts. And we were driving at night.

Furthermore, I once knew someone from Georgia who looked at a Rand McNally atlas and figured they'd just find a dirt road running from Ouray, CO to Telluride, CO. They didn't realize Telluride is in a box canyon surrounded by 13,000-foot peaks. I persuaded this person not to do that. This was before Google Maps existed. You can't get there from here-- at least not the way you think you can.

When you live in a city, and especially when you deal with technology for a living, so much of your world is online that you forget how much wild country still exists in the US. There are still vast stretches of the US where you can get into big, big trouble with hardly any effort at all.

So please, be skeptical of what the mapping software reports. And look at the satellite view.

Wednesday, December 06, 2006

Austin Workshop on Test Automation invites start

A few days ago Bret Pettichord announced the call for participation in the Austin Workshop on Test Automation. The theme this time is "Open Source Testing Frameworks".

I attended the last one of these two years ago, and it was very fine. I'm helping organize it this year.

One of the things about this conference that I think is really important is that it actively seeks apprentice/journeyman practitioners, and not just experts and beginners. This means that everyone there is actively looking for answers and questioning the state of the practice. If you think it's a good idea but you're doing something other than Ruby/Watir/Selenium, please consider attending. Diversity is good.

We've already received some really great proposals and issued some invitations to some fine people. If you're on the fence, please consider that the conference may run out of room, so the earlier you send your letter of introduction, the nicer it is for all concerned.

If you have any questions, please drop an email or leave a comment.

Thursday, November 16, 2006

stuff I left out of the Fort Lewis presentation

Even before I ever heard of Edward Tufte, I had always disliked scripted presentations. My approach to speaking in public is very much like my approach to performing music used to be, back when I made my living that way: know your material thoroughly, and be prepared to interact with your audience as much as possible. It's a strangely rare approach among those who speak about software, but one or two people get it.

Martin Fowler points out one danger of this kind of presentation (being quoted), but what I find is that there are always one or two points that I really, really planned to make, but the conversation with the audience didn't go that way.

Luckily, here in the 21st century, we have blogs, so we can go back and talk about the bits we missed the first time around.

I discussed the very basic aspects of copyright and license and how they relate to open-source and proprietary software. I left out a discussion of plagiarism, which is critically important in an academic setting, and ought to be critically important everywhere else.

Just because software is free and unencumbered, does not give users of that software license to claim it as their own work. Whenever you use software created by other people, please please please make a concerted effort to provide the correct citations for that work, even if the license does not require it. You have far more to gain by honest reference to others' work than you do by plagiarism.

The other bit that I never mentioned (sorry Jonathan) was the ideas in Fred Brooks's "No Silver Bullet". We talked about all sorts of great stuff: open source, agile development, non-hierarchical management strategies, scripting languages. The point that I should have made explicitly but did not is that all this stuff is cool, but a) it's not magic and b) in another ten years, it will all be quaint if not hilarious.

Tuesday, November 14, 2006

linked from the soap4r wiki

One of the points I'm making to the students in my lecture tomorrow is that the most important things to contribute to open source projects are questions and answers. Documentation is also really really important to healthy open source projects. To my mind, code is the last thing most open source projects need.

Good timing, then, that yesterday someone added a link to my blog entry from back in March about "SOAP + Basic Authentication in Ruby" to the main soap4r wiki under "Howtos". The link is called "Basic SOAP Authentication".

Thursday, November 09, 2006

another look at soapui: load test

In our SOAP API, we have a function that I believe is intended to take care of small jobs, special cases, cleanup areas, things like that. Using this function bypasses a lot of the niftier features of our application. I have some functional tests for it; it works just fine.

Not long ago one of our customer support people asked me what kind of performance we could expect running almost a million transactions with this function in at least 30 threads. I pointed out that that was not at all what the function was designed to do. He made the excellent point that that was what the customer wanted to do, and we should be able to provide some information for them.

Well, crap. I am simply not set up to run that kind of a test. Neither Ruby nor Windows is very good at handling threads, and I don't have a *nix box I can appropriate immediately. (I'm working on that as we speak.) I did suggest a test that the customer could run on a test server with their own code, but I was disappointed not to be able to say something useful immediately.

But I had been working on this blog, and I re-read my post about soapui, and remembered that it had a load-test feature. Just like the functional-test interface, the load-test was really nicely designed, and I could create a 30-thread, 60-second load test for the function in question in just a few minutes, and get some very nice graphical output. My colleague was pleased, and I hope the customer was too.

I still believe that soapui is not scriptable enough to be my primary regression-test tool, but once again, for exploring-- this time for exploring performance-- you just can't beat it. Very very nicely done.

Again.

Monday, November 06, 2006

lecture Nov. 15: "How to be a Software Expert"

The CS department at Fort Lewis College in Durango has a tradition of featuring a speaker from the community for the last lecture before the Thanksgiving break. I did the lecture two years ago on "The History of Software Testing", and I had a blast.

I was asked to do it again this year after starting the Four Corners Open Source mail list. (I hope to have a number of students (and even professors) join after the lecture.) I have a few things on my mind this time around.

For one thing, the job that I moved to Durango to take got outsourced to Canada. Or possibly Denver. Or maybe Israel. I left before they figured out the details. But it's definitely gone.

For another thing, I spent a year working for Thoughtworks. TW itself is not gigantic (although they are spread-out) but I worked on some enormous agile projects with some really smart people from all over the world. The work was amazing, but the travel was brutal.

For another thing, I've published some pretty heavy articles and given some big time presentations since then.

So I'm not really an expert except maybe at some niche software-test tool stuff, but I do get to talk to some real experts now and then, and I've seen a lot more of the software world since the last time I gave this lecture.

However, because of the bit of expertise that I do have, I am very concerned that the world these students think they're about to enter does not really exist. I'd like to be able to give them pointers about how to jump in and survive in a world where IT paradigms are upset over and over and over again every few years or even months, where the future is always uncertain and the only way to keep up is to keep learning, keep teaching, keep asking questions, and stay flexible. Funny that as I got my material together, Kathy Sierra published this nice bit, "Why does engineering/math/science education in the US suck?", which points out among other things that "Where we used to prepare students for a 'job for life', now we must prepare students to be jobless."

More than that, though, I would like to point them toward tools and attitudes that will allow them to live and work in Durango, which may or may not be a "faux town" today, but certainly will be soon without more highly-skilled, high-paying jobs. Like in IT.

Here's the blurb; the outline will follow:

How to be a Software Expert

Are you so talented that companies will seek you out to make you a job offer? Or is your next job being outsourced already?

Routine information technology jobs of coding, support, and implementation are leaving the US for places like India and China. But at the same time, tech companies report that they face a desperate shortage of talented employees. ("The Battle for Brainpower", The Economist, Oct 5 2006).

You can be the talent; or you can be outsourced.

In this presentation, Chris McMahon explains how you can use a wealth of free resources to become an expert in information technology, and a valuable (outsource-resistant) IT asset. From local interest groups to world-wide open source software projects, Chris explains and demonstrates the tools and resources that talented experts must know in order to succeed in a global market for IT employment.

Chris is an expert in software testing and agile software development. He has published many articles about software testing, presented tutorials at software testing conferences, runs a weblog dedicated to software testing, and he contributes to a number of open-source projects. He is currently telecommuting from Durango for a software company based near San Francisco.

So here's what I'm going to talk about; feel free to leave a comment or drop me an email:

BASICS
Copyright and licenses. You have copyright to everything you produce. Learn how to use it, and how to give it away.

Overview of GPL and BSD licenses, along with real-life stories about why they're useful. Possible contrast to, e.g., Windows.

Money. A few people get paid because they sell a great idea, like a singer/songwriter. Think Paul McCartney. Most people get paid because they sell a service, like a studio musician or member of Paul McCartney's touring band. So we know how Paul McCartney gets paid. His drummer got the gig because he's an expert drummer.

NON-TRADITIONAL AND NON-HIERARCHICAL BUSINESS STRUCTURES
If you expect to make a living by performing perfectly clear assignments under reasonable deadlines from your immediate supervisor, your job is being outsourced right now. Start thinking now about how to use your talent and expertise to create value and reduce cost wherever and however you work.
Agile teams. Point to Thoughtblogs
Distributed teams. Mention Magpie
Startups. Mention Skywerx
Telecommuting

STRATEGY FOR EXPERTISE
Keep a blog
Write articles
Get a portfolio. Mention Entropy Media. (Check out their About page: "...Every update is important to us. Every client is important to us. Every request is at the top of our list, not waiting to be filled." Go Entropy!)
Contribute to mail lists and technical communities like user groups or SIGs.
Present at conferences if you can
Contribute to open-source projects

CONTRIBUTING TO OPEN SOURCE
Contributing questions. Mention perlmonks, soap4r, Watir
Contributing documentation. Mention the FreeBSD paper noted above and its origins at PNSQC
Contributing code is often the least important thing you can do.

DIGRESSIONS
LAMP is NOT "the acronym given to the mainstream open source movement"! It's an application development stack, and it doesn't even stand for what it used to stand for. (What were you people thinking?)
eBay, Amazon, Google, Yahoo: who uses Windows?
Google Tech Talks

CONCLUSION
Ask and answer questions in public to increase your skill.
Work in public because it gives others confidence in your abilities.
Share your work, and others will share theirs with you.
Work with others because doing it all yourself is impossible.
Quote Jerry Weinberg: "Give your best work away for free."

Wednesday, November 01, 2006

mind maps for testing and other things


The Better Software magazine feature article for November is about using mind maps for test cases.

I learned about mind maps at Thoughtworks, where they had been in use for test case management (and lots of other things) long before I arrived. The key advantage of mind maps, as Robert Sabourin points out in the article, is that "...misunderstandings and ambiguities are exposed and resolved in real time as you build the mind map".


That sounds too good to be true, but it really does work like that. They are also readable, easy to construct, portable, and attractive.


Here are two other ways I used mind maps that were very, very effective:


Accounting and coverage:
The project involved a re-write of an existing system. All of the (very complex) features of the existing system had to be accounted for in the new system, under the (very complex) new architecture. I used a mind map to relate existing features on the left side to new features on the right side. It was SO MUCH BETTER THAN A SPREADSHEET.


Architecture:
My former colleague Dragos Manolescu uses mind maps when designing systems. He's produced some spectacular work. One I remember had to be printed on a plotter, and was about 6 feet high and 3 feet wide-- but quite readable. It was SO MUCH BETTER THAN A DESIGN DOCUMENT.


Although mind-mapping works just fine with colored pencils and paper, I really recommend FreeMind: it's a very easy-to-use, very popular, open-source mind-mapping tool that will save the diagrams in a number of formats, including JPEG.


Monday, October 23, 2006

"agile" "backlash"

Jonathan Kohl has a wonderful essay about the zeitgeist as it relates to A/a/gile software development.

Maybe it's because I only started making a living with software in my 30s (after being a bass player, librarian, and bartender); maybe it's because when I started, I was working on venerable systems (US 911 data-validation systems written in COBOL and C) created and maintained by smart, engaged people, but things like this don't make any sense to me:

"...companies being torn apart from within as factions rage against each other, grappling for methodology dominance..."

"...berated for not knowing the Agile lingo, and being belittled as "waterfall", and "backward" when they would ask innocent questions."

Jonathan assures me that this sort of thing goes on. Craziness.

I have to leave this business aside, though, because, while it's interesting, it doesn't affect the price of eggs in China. The meat of Jonathan's essay (for me) is here:

"...any skilled team can make a process work for them and do good work, and any team lacking in skill is going to fail no matter what process they are using."

Skill of course is critically important. (The Economist had an article a couple of weeks ago pointing out that even while companies are outsourcing function to India and China, they are desperate for talent, and will hire talented employees anywhere in the world they find them.)
Other *human* attributes are also critically important. Here are some favorites:

The ability to change your mind:
What happens if someone comes to you and says "you should be doing your work *this* way". What if they're right? I was on a project at ThoughtWorks where we'd been going along feeling a little bogged down. A new developer joined the project and said exactly that: and everyone agreed, from the architect with the Ph.D. to the testers to the customers. If you can't change your mind, you are an evolutionary dead end.

Rhetoric:
Can you change someone else's mind? I have a performance-test project where I sure hope I can. The framework I inherited is hard to use. I'm pretty sure I have a better alternative. I've coded about 70% of my alternative. Now I have to make my case. Wish me luck.

Listening:
What are the people on the project saying? What do they need? How is it going? What's working, what's not working? Do you have any idea? If you don't know what's happening with the people, then you don't know what's happening with the project.

Big Vision:
One system I worked on was developed before the advent of relational databases. I worked on a project to convert a gigantic, life-critical database from flat files to SQL. We could not have done it without the vision of a Vice President who *knew* that we needed to have this information in a modern database. His vision was absolutely critical to the project, and he oversaw the project from beginning to end.

We've been saying it for 20 years, but it bears repeating:

THERE IS NO SILVER BULLET.

There is skill; there is talent; there is communication; there is vision; there is engagement; there is interest; and then there are methods and methodologies and processes from which to choose, or not. Those processes that emphasize

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

seem like they are more likely to succeed in most cases.

But there is also history. Here is a post-agilist confession: when things go seriously pear-shaped for me, I always reach for one of two ideas to pull things back on track: either the V-model of software development, or the IEEE 829 spec. I hear the agile vigilantes coming for me as I type this. Luckily I live in the middle of nowhere.

First of all, my favorite formulation of the V-model of software development resembles Brian Marick's deconstruction more than the classic formula. You just can't go too far wrong by

saying what you're going to build, and validating that it is correct;
designing what you've said, and validating the design;
building what you've designed, and validating the building;
and testing what you've built.

Whether that cycle happens in a week or a month or a year is a matter of choice for the people doing the work.

I think it's stupid of the IEEE to use their copyright on the 829 spec to generate income. Frankly, the last time I needed the 829 spec, I bootlegged it off of a Russian site. No, there are no copies of the 829 spec on any drive on any computer to which I have access, so please don't sue me.

There is this idea in English literature of the 5-paragraph essay, and it translates nicely to software development. This idea is embodied in the V-model and in the 829 spec.

Tell the people what you are going to do.
Do what you said you would do, in an orderly manner.
Tell the people what you just did.

From this rises most of literature, most of culture, most of collaboration...

...and the best of software.

Thursday, September 21, 2006

FOCOS: FOur Corners Open Source

http://tech.groups.yahoo.com/group/four-corners-open-source-focos/
FOCOS is Four COrners Open Source

This is a group of software users and technology professionals based in and around the Four Corners area of the USA, where Arizona, Colorado, New Mexico, and Utah come together.

We welcome software professionals, amateurs, people from business, arts, government, academia and education, and anyone else interested in open source software and in the Four Corners area.

We encourage a wide range of discussion, but we would generally like it to revolve around these areas:

Improving our skills and broadening our experience with open source software;
Using open source software in business, academia, the arts, and government;
Improving our communities, both local and virtual, with our skill and knowledge of open source software.

#####################################################

I've been thinking about doing this for some time. Probably the original inspiration was when I got stranded in Denver in the dead of winter and met Dave Wegner at the hotel where we were both stuck in the snow. We talked about whether Durango would become another Aspen.

Besides the people I worked with for the first year and a half I was here, I kept meeting other interesting tech folks around Durango: the CS faculty at Fort Lewis College, where I gave the Thanksgiving lecture a couple of years ago; smart CS students there; a two-man .NET consulting firm; an XP shop doing telecom work; and a couple of executive-management folks who emailed me after reading one of my Better Software articles.

And all of the non-IT small business people I've met here have software/data issues of one sort or another. See the Better Software article on the site above for a good story.

So I've been to the big city, eh, and been to Chirb meetings and a SQuaD meeting once and a couple of RMTUG meetings, not to mention presented at the STAR Conferences and PNSQC. I'm on a bunch of mail lists so I'm convinced of the value of these things.

If the list can attract enough members, I think it'll be a success. If not: it was a good idea worth trying.

Mesa Verde, Arches and Canyonlands, Great Sand Dunes, Monument Valley
Moab, Durango, Santa Fe
Mountains, Rivers, Desert

and now software. If you have any interest in the area; in rural tech; in Wild West hackers; you are invited to join.

Monday, August 14, 2006

SOAP testing the Enterprisey way

Today I found an article about SOA testing at Wells Fargo.

What a mess.

It's hard to read, but make your way to the end of the article where it describes Wells Fargo's SOA testing tools setup. Here's my un-Enterprisey answer:

Instead of SOAPScope, use soapui. Unlike proprietary SOAPScope, soapui is free and open-source, and as Mike Kelly pointed out, soapui works where SOAPScope fails. soapui is fantastic for exploring WSDLs, but I wouldn't necessarily use it as a test driver.

I use Ruby's soap4r and http-access2 libraries to drive tests. And instead of Mercury TestDirector ($$$$$$$$$$$), I find that a combination of Ruby's test/unit features and a nice source-code repository gives me all of the features that Wells Fargo gets from TestDirector. Super reliable, super simple, super understandable, low overhead and zero cost.
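Here's roughly the shape of that setup. This is only a sketch: the WSDL URL is a placeholder, and createUser stands in for whatever operations your service actually exposes.

require 'test/unit'
require 'soap/wsdlDriver'

class CreateUserTest < Test::Unit::TestCase
  WSDL_URL = "http://testserver/service.wsdl"   # placeholder

  def setup
    @soap = SOAP::WSDLDriverFactory.new(WSDL_URL).create_rpc_driver
  end

  def test_create_user
    result = @soap.createUser({"UserNumber" => 1}, {"UserName" => "Chris"})
    assert_not_nil(result)
  end
end

Check files like this into the same repository as the rest of the source, and test management and test history come along for free.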

Then they mention "Integra Enterprise from Solstice Software", which seems to parse log files and handle HTTPS (no big deal)-- but it also talks to MQ (no mention of whether this was IBM MQSeries, Microsoft MSMQ, ActiveMQ, or some other message-queue spec; I guess we can assume MQSeries...).

Now this is worrisome. There is no obvious MQSeries library for Ruby (although Python has one, and the MQSeries interface seems to be pretty well documented), but I've worked with both MQSeries and MSMQ in a number of systems, and the fact that the test environment demands an MQ driver at all sets off all sorts of warning bells in my mind.

If you have a SOAP interface, you should be sending text (XML) over HTTP. If your test tool has to talk MQ, then you are testing at the wrong place. If I had a system with MQ involved, I'd stipulate that the UI/acceptance/system/performance tests are adequate proof that MQ is really working. Having your integration/SOAP tool talk MQ is a massive waste of resources.

Finally, they have "Data Management". They built some sort of db-based repository from which the test driver extracts test data at runtime. The specs are a little odd, but they do mention ODBC. There seems to be some sort of data-writer UI and a data-grabber from the test driver.

My favorite source of test data is to simply generate it within the script. Here's a made-up brain-dead example to create 1000 users using a SOAP API function "createUser" itself:

1.upto(1000) do |num|
  @soap.createUser({"UserNumber" => num},
                   {"UserName"   => "Chris"})
end

(For a much more interesting discussion of generating test data from within the script, see my article "Old School Meets New Wave" in the June issue of Better Software magazine.)

Dragging test data out of a db with ODBC has got to be an expensive operation. If I absolutely positively have to keep test data outside the test script itself, I'd use a delimited file or possibly a spreadsheet. ODBC is overkill.
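If I did have to go outside the script, it would look something like this (the file name and its contents are made up, reusing the createUser example above):

# users.txt holds one "number,name" pair per line -- a made-up file
File.foreach("users.txt") do |line|
  num, name = line.chomp.split(",")
  @soap.createUser({"UserNumber" => num.to_i},
                   {"UserName"   => name})
end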

Here is the conclusion of the paper:


No one tool can satisfy all of the application, environment, and testing requirements. The only option is to implement a tightly coupled suite of tools to address the requirements.


So we all know that tight coupling is bad, right? I'd suggest that serious testers be in a position to alter and augment their test tools at the drop of a hat. That's why I like Ruby for so much of my SOAP testing. Accommodating change with a full-featured scripting language is not too hard.

And if Ruby were to somehow fail, I could (plan A) use my existing scripts to drive soapui without too much difficulty. Or I could (plan B) port them to Java or C# without too much fuss. They would (plan C) certainly work from Perl with some improvements to the SOAP::Lite module.

In short:
soapui for exploring WSDLs;
Ruby for test driving, test management, and test reporting;
sane and rational insertion points for test interfaces on the system;
and delimited files (if you need 'em) for test data--

will take you a long, long, long way down the SOA testing road.

Thursday, July 27, 2006

Scripting Web Tests at Agile2006: great fun was had.

About 40 or so smart, engaged people attended the Scripting Web Tests (with Watir and Ruby) class. It was very interesting: this was the first time I've taught the class to students the majority of whom have a programming background. Luckily, very few had Ruby experience, so the class was still able to emphasize the exploring and experimenting that is such an important part of the presentation.

And since this was the Agile conference, the majority of the attendees were experienced pair programmers, and were eager to collaborate and share laptops and code around their tables. By the middle of exercise 2, the noise level was amazing, and the students were crazy productive. Usually we take five or six hours for this material, but this class blasted through just about everything in less than 3 hours. So many people had jumped ahead on the material that after completing exercise 3, I simply encouraged folks to try anything they wanted to try out of exercises 4, 5, and 6.

And in what seemed to be a coordinated attack of guerilla kindness, Elisabeth Hendrickson and Michael Bolton both showed up within minutes of each other and jumped in, troubleshooting, pair programming, contributing tricks and hacks. Thanks Elisabeth and Michael, you put the whole experience over the top.

Finally, I am still encouraging other people to teach this class. It is unlikely that I will teach it myself again anytime soon, so if you'd like to discuss teaching the Scripting Web Tests class yourself at a conference or user group or anything, leave a comment or drop me a line.

Monday, July 17, 2006

Another fine SOAP test tool

In a fit of coincidence and serendipity, I had a user report that the input string
è or à  
caused his Perl SOAP client to send crazy stuff to the server. My Ruby client also freaked out on this string, although not as badly:

XSD::ValueSpaceError: {http://www.w3.org/2001/XMLSchema}string: cannot accept 'è or à'.

But a project called "soapUI" announced a 1.6beta1 release today. I installed it, and it is not only a joy to use, but it handled the Unicode input flawlessly. It's a Java app, and considering what an utter chore it is to get Java doing SOAP properly, the 1-click install was fantastic.

I'm not ready to turn in Ruby for my regression tests, but having an excellent Java SOAP client at hand is a great relief as these SOAP functions (and their users) become more sophisticated.

An interesting aspect of the soapUI project is that it seems to do load testing out of the box. I've cobbled together some nice load-profiling scripts in Ruby, but these functions in soapUI definitely bear investigation.

Friday, June 30, 2006

no wonder "agile" gets a bad rap

A colleague pointed me to this blog entry from an "agile" consultant who got fired early in the project. The really telling phrase is

"The process of discovery can indeed feel open-ended as it is the very nature of discovery to explore the domain of the business opportunity in an open way. The purpose of this "openness" is to find the appropriate scope, workflow, practical boundaries, and hidden benefits in an exploratory and visual manner."

That's ridiculous. The process of discovery is to gather the minimum amount of information necessary to begin delivering working software. Nothing else.

These people wasted their client's time and money and delivered nothing. I would have fired them too.

Wednesday, June 21, 2006

Ruby SOAP is amazing

Back in March I grumbled about the fact that the SOAP libraries included in the standard Ruby distro on Windows are fairly broken, and to get SOAP working with Basic Authentication, one has to rummage around the internet collecting and installing a couple of libraries written by Hiroshi Nakamura.

I take it all back. No more grumbling.

I've recently been working to port about 40 API functions from one SOAP server to a different, better SOAP server. We have client code written in 6 languages besides Ruby: Java, C#, Perl, TCL, VBScript, and PHP.

Ruby was the only language that continued to function 100% with the new SOAP server and all of the tweaks we made to the new server. (PHP did pretty well, too, but I can't test in PHP, and maintenance is tricky.)

The basic issue is that there is more than one way to generate an accurate WSDL; and there is more than one way to interpret any valid WSDL. Perl's SOAP::Lite module for example will not parse our new WSDL without major surgery on my part.

We eventually got Java and C# both working with the new SOAP server, but it took some tweaking on both the client code and the server code.

Throughout the entire project, not only were our tests (specifications) written in Ruby with Nakamura-san's libraries; those tests also became our most reliable reference implementations.

Very, very impressive work.

Tuesday, May 23, 2006

Kevin Lawrence is still right

Back in March 2005 Kevin Lawrence wrote this great article for Better Software called "Grow Your Test Harness Naturally".

He's got a link to the PDF on his blog, luckily.

It's one of those articles that keeps being cited by various people one runs into here and there. I read it over and over until I Got It.

The basic premise is: don't write the framework first. Write some tests, then extract the common code from those tests, write more tests, abstract common code, write more tests, refactor, repeat. The refactoring generates the framework.

The thing is, I had never had enough code complex enough, running against an app stable enough, to really and truly build a framework this way.

Until now.

I have a large number of things to do with Watir.

First I automated each path through the code individually. I did about 10 of these.
Each of my paths had to set up a certain set of rules, so I abstracted a method "make_rules".
Each of my paths had to set some data before doing a final click, so I abstracted a method for that.
At this point my 10 individual scripts had shrunk to about 5 lines of code anyway, so I put them all in one file.
But I still have to handle variations in the path through the code, so I wrote a controller.rb file that orchestrates all of the rule making, the data handling, and the final clicking. (A rough sketch of the shape of all this follows below.)
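Here is a very rough sketch of the shape this takes. It is not my actual framework: the URL, field names, and button labels are invented for illustration, and the real rule-making and data-handling methods are considerably bigger.

require 'watir'

# set up whatever rules this path needs (page details are invented)
def make_rules(browser, rules)
  rules.each do |rule|
    browser.text_field(:name, "rule_name").set(rule)
    browser.button(:value, "Add Rule").click
  end
end

# fill in the data for this path and do the final click
def set_data_and_finish(browser, data)
  data.each { |field, value| browser.text_field(:name, field).set(value) }
  browser.button(:value, "Finish").click
end

# controller.rb: each entry describes one path through the GUI
paths = [
  { :rules => ["daily", "weekly"], :data => { "owner" => "chris" } },
  { :rules => ["monthly"],         :data => { "owner" => "admin" } }
]

paths.each do |path|
  browser = Watir::IE.start("http://testserver/app")
  make_rules(browser, path[:rules])
  set_data_and_finish(browser, path[:data])
  browser.close
end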

Now I'm adding new paths through the GUI. Adding each new path means rejiggering my existing libraries to accommodate the new path. But since my libraries were generated as a consequence of actual code, they're naturally in tune with what I want to do. Updating them to handle the first new path through the code takes some thought, but the next 10 times I need to go down that path all of the methods are already in place, and the repetition becomes trivial.

The framework is reliably going through the GUI about 80 times now, every time subtly different than every other time.

The thing is, I don't think I could have *designed* this framework. For one thing, I didn't have enough information when I started writing to anticipate all of the paths I would need. For another, even though my framework isn't very complex, it's still large enough that I doubt that I could have held the whole architecture in my head in order to write it in advance.

I can't believe how a) robust and b) readable this framework is turning out to be. I still have some mistakes to clean up, and I have a couple more paths through the code to add, but I'm so pleased to finally find out for myself that Kevin Lawrence is still right.

Wednesday, May 03, 2006

Internet Explorer Basic Authorization vs iFrame is just wrong and stupid

UPDATE October 2009: it seems Angrez has added support for Basic Auth to a fork and it should hit the main Watir distro RSN. News will be here. Good things come...

So I wanted to bypass the 401 Authorization Required screen that IE kicks up without resorting to actually having to manipulate the popup. So I hacked Watir's goto() method, which works great:

def goto(url)
  # @ie.navigate(url)
  @ie.navigate(url, nil, nil, nil, "Authorization: Basic AbdXyz46bG1ubw==\n")
  wait()
  sleep 0.2
  return @down_load_time
end

I know I can get away with this because the user/password on the test systems are never going to change, and even if they do, it's an easy script change.
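For the record, that hard-coded token is nothing more than the Base64 of "user:password", so regenerating it when the credentials do change really is a one-liner (the credentials here are made up):

require 'base64'

# Basic Auth is just base64("user:password") -- made-up credentials
token = Base64.encode64("testuser:secret").chomp
auth_header = "Authorization: Basic #{token}\n"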

The trouble is that the main page has an iFrame, and for some reason, when called from a script, stupid IE won't use Basic Authorization credentials to request the iFrame.

So now I'm getting the authorization popup when IE tries to load the iFrame contents. The main page is loading fine.

Deeply frustrating.

And stupid.


Tuesday, May 02, 2006

Two things about performance testing

A couple of things to stick in your back pocket:

1) Think geometrically, not arithmetically. Think about orders of magnitude.

10 is the same as 20 is the same as 50.
2000 is the same as 5000 is the same as 7000.
100,000 is the same as 200,000 is the same as 300,000.

The interesting information comes between the orders of magnitude: is 10,000 faster or slower than 10? is 1000 faster or slower than 100,000?

2) Think about precision and accuracy.

It is precise to say "10 records per second on this OS on this hardware on this network".
It is accurate to say "a rate of between 1 and 10 records per second on any OS on any reasonable hardware".
It is accurate to say "10 records per second running locally; 1 record per second over the network; 10 seconds per record over VPN".

Accuracy is critical in the general case; precision is critical in the specific case.

Monday, April 17, 2006

Sentence from a fictional software article

Blame Jonathan Kohl. Someone should write the rest of the article, so as not to waste this:

"On project sweetie, Axelrod and Hyacinth were perplexed. Their EAI implementation wasn't SOA, it was SOL."

Monday, March 27, 2006

SOAP + Basic Authentication in Ruby

Do you need to do SOAP in Ruby?

First of all, install the latest version of Ruby you can stand. (I'm using this one. What the heck. I'm giving the laptop back in a few hours.)

That's a fine thing, but you can't do Basic Authentication yet. You need to install a net/http that understands Basic Authentication.

Ready to go? Not quite. The version of SOAP4R you have doesn't handle arguments to web services correctly. You need to install the latest SOAP4R to get your arguments handled right.

Now you're ready. You might get an extraneous error message "ignored attr: {}mixed" but it probably won't affect your ability to do the job.
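For reference, once all of those pieces are installed, the client setup ends up looking roughly like this. The URL, user, and password are placeholders, and the basic_auth option comes from the http-access2 transport underneath SOAP4R:

require 'soap/wsdlDriver'

WSDL_URL = "https://server.example.com/service.wsdl"   # placeholder

soap = SOAP::WSDLDriverFactory.new(WSDL_URL).create_rpc_driver
# basic auth is keyed by the URL it applies to; credentials are placeholders
soap.options["protocol.http.basic_auth"] << [WSDL_URL, "user", "password"]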

Many many thanks to Emil Marceta for walking me through the process.

Two things:
1) Perl's SOAP::Lite does this out of the box.
2) The Pickaxe dismisses SOAP pretty perfunctorily, but the beauty of SOAP is that it is (should be) very very easy for the client, using existing libraries.

What would it take to put this in the regular Windows Ruby distro?

Saturday, March 25, 2006

No basic authentication in Ruby SOAP library?

The closest I've seen is this post but it's still not quite the right thing.

What I really want is for

soap = SOAP::WSDLDriverFactory.new(WSDL_URL).createDriver

to accept a user/password and do basic authentication for me, but I don't see a way to make it happen.

Leave a comment if you have an idea...
-Chris

Tuesday, February 14, 2006

Unorthodox testing tools in a hostile environment

Some time ago, I built an "Enterprise" disk imaging system, the equivalent of Symantec's Ghost product, from Open Source components, hosted on FreeBSD. In the process I discovered the FreeBSD ports collection, an astoundingly huge set of applications managed as part of FreeBSD itself. I used a whole raft of stuff from the ports collection as test tools in an all-Windows environment, everything from collaboration (Twiki) to network analysis (ntop).

I was doing a lot of install/smoke testing, and the Frisbee disk imaging system was an important part of that work. I published an interview with Mike Hibler, Frisbee's lead developer, in the March 2005 issue of Better Software magazine.

I discussed the wider issues of hosting test tools on FreeBSD in a paper for the 2005 Pacific Northwest Software Quality Conference, and now that paper has been adapted as a FreeBSD White Paper.

Dru Lavigne, a noted FreeBSD evangelist and Open Source advocate, not only helped me through the process of editing the White Paper, but was also kind enough to publish an interview with me about the paper, about software testing in general, and about why FreeBSD is an excellent platform for network testing tools.

I encourage you to read the White Paper and the interview, especially if you're involved in testing networked devices of any kind, and especially if you're frustrated by the tools available on Windows for doing this sort of work.

Wednesday, February 08, 2006

Scripting fun

I have a new-ish laptop and I haven't bothered to install unixtools or cygwin on it. (I had a bad experience with Cygwin some time ago).

I needed to grep around in the C# source code directory for stuff. Windows Search didn't do the Right Thing, and TextPad file search gave me memory errors. So I wrote a little script in Ruby to root around in the source code for stuff. Maybe it'll be useful for someone:

require "find"
Find.find("../interesting/directory") do |f|
# Find.prune if f =~ /Design/
# Find.prune if f =~ /UnitTest/

if f =~ /cs/
x = File.open(f)
while line = x.gets
if line =~ /stuffImLookingFor/i
puts f
puts line
end #if
end #while
end #if

end #do

Friday, February 03, 2006

Testing Web Services

There was a recent thread on the agile-testing mail list about testing Web Services. I ended up having an interesting private conversation with the original poster, during which I said:

I would immediately ask you "how are these services implemented?" Assume that your web services are implemented as pure XML-over-HTTP. I'm building a system-test framework in Ruby right now that understands how to talk to and how to listen to these XML-over-HTTP interfaces. I could just as well be using SOAP or REST, but in this case I don't have to.


> > Any "Gotchas" I need to be aware of ? Special strategies / approaches /

I think it boils down to two questions: the choice of tools, and the choice of approach. Almost any set of tools can work with either approach. Do you know Brian Marick's distinction between "business-facing" tests and "technology-facing" tests?

The choice of approach I identified is to test with unit-like tests that validate whether the service is performing properly internally, or to take a more black-box approach that validates whether the information that the service processes is correct on the way in and on the way out.

The tools choice is whether to use unit-test sorts of tools highly coupled to the code itself, like JUnit or NUnit, or to use a more foreign framework to uncouple the tests from the implementation of the services.

I've chosen to implement a foreign test framework in Ruby that concentrates on correctness of information moving in and out of the service. I did this for two reasons: I'm working with great developers who can be relied upon to have implemented (and tested!) the service correctly; and I'm testing against an idiosyncratic 30-year-old legacy database, so unusual data conditions are more likely to cause failures than are incorrect services. It's a matter of adjusting to risk.


The bit above assumes that we all agree on the definition of a "Web Service", which is in fact probably unlikely. I'm stipulating that a well-known interface implemented as an XML payload in an HTTP POST operation is in fact a web service, even in the absence of WSDLs, SOAP, REST, or any of those other goodies. Even if it's false, it makes a good argument.

I also assume that we all agree on the purpose of our web services, which is slightly more likely-- is the Web Service on the Internet, or is it for applications behind a firewall?

I'm thinking of these services in terms of an Enterprise Application Integration system which could implement any number of interfaces in addition to my (broadly-defined) Web Services interfaces. This is another reason why a robust general-purpose scripting language like Ruby is an attractive choice for building the tests: besides HTTP, my tests will also have to speak ODBC, FTP, and talk to the filesystem, among other EAI interfaces.
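To make that concrete, the skeleton of such a test is just an HTTP POST with an XML payload and some checking of whatever comes back. The host, path, and payload here are invented:

require 'net/http'
require 'uri'

# invented endpoint and payload, standing in for the real EAI interface
uri = URI.parse("http://integration-server/people/search")
request_xml = <<END_OF_XML
<requestroot xmlns="http://foo">
<lastname xmlns="">MCMAHON</lastname>
<firstname xmlns="">CHRIS</firstname>
</requestroot>
END_OF_XML

response = Net::HTTP.start(uri.host, uri.port) do |http|
  http.post(uri.path, request_xml, "Content-Type" => "text/xml")
end

puts response.code   # expect "200"
puts response.body   # the XML to parse and check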

There'll probably be more on this subject later, but for those interested in designing system tests for such services and architectures, these are required reading.

Monday, January 23, 2006

Teaching the Scripting For Testers class

SQE (the stickyminds people) have announced the schedule for the STAREast conference, and I'll be teaching the Scripting for Testers tutorial on May 16, almost certainly with my colleague Dave Hoover.

I'm really looking forward to working with Dave. Not only is he a better programmer than me, but we share an interest in teaching, and in working with (and in being) beginners on the way to being experts.

Brian Marick and Bret Pettichord started the class in 2003, and this tutorial was a major contributor to the existence of the Watir test framework, to which I've contributed a bit of work. This tutorial will be the first since 2004 not taught by Bret (and the first ever not taught by one of the two of them, as far as any of us know).

It's a good class, and some great people have been a part of teaching it in the past. I'm hoping that the class will take on a life of its own from now on. Enough people have taught it, and enough people have taken it, that I think the course has enough momentum by itself to survive without needing sponsorship and evangelism. In fact, I'm hoping that it will evolve into a "scripting jam session" under certain circumstances-- watch this space...

All of the course materials are open source, so anyone is free to teach this class anywhere they like. Unfortunately, the materials are not exactly self-explanatory, so it helps to have been associated with a class once if you intend to teach it later.

So if you're coming to STAREast, consider signing up for the Scripting for Testers tutorial. We intend to provide thrills, chills, and spills for the discerning beginner.

Friday, January 13, 2006

Static tests and dynamic tests

A typical test expects a particular set of data to exist, right?

Say I have some software that accepts a chunk of XML and does a search for people in a database by last name and first name. It might accept

<requestroot xmlns="http://foo">
<lastname xmlns="" > MCMAHON < /lastname>
<firstname xmlns="">CHRIS</firstname>
</requestroot>


and it might return

<responseroot xmlns="http://foo">
<user xmlns="">CHRIS MCMAHON</user>
</responseroot>

so I could write a test that does something like


SEND

<requestroot xmlns="http://foo">
<lastname xmlns="">MCMAHON</lastname>
<firstname xmlns="">CHRIS</firstname>
</requestroot>

RECEIVE AND PARSE

<responseroot xmlns="http://foo">
<user xmlns="">CHRIS MCMAHON</user>
</responseroot>

That is a static test, and that's how unit tests work. More than likely, the returned value CHRIS MCMAHON isn't in a database at all; it's returned by a mock database object.
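In code, a static unit-style test looks something like this sketch, where the "database" is a hard-coded fake and all of the names are invented:

require 'test/unit'

# stand-in for the real search code, with the database stubbed out
class FakeNameSearch
  def search(lastname, firstname)
    '<responseroot xmlns="http://foo"><user xmlns="">CHRIS MCMAHON</user></responseroot>'
  end
end

class StaticSearchTest < Test::Unit::TestCase
  def test_known_user_is_found
    response = FakeNameSearch.new.search("MCMAHON", "CHRIS")
    assert_match(/CHRIS MCMAHON/, response)
  end
end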

#######################################################

But if I'm doing system testing, I'd rather talk to a real database. And I don't want to rely on the fact that CHRIS MCMAHON will always be in that database. Furthermore, I want my tests to do some of the work for me, and go exploring on their own, to be dynamic. So let's make system tests in Ruby that do system-test things instead of unit-test things. This test will generate its own expected results, and we can loop as many times as we want, generating new expected results as long as the computer keeps running...

First I want to be able to generate some search criteria:

def random_text(length)
  output = ""
  length.times do
    select_array = ('A'..'Z').to_a
    select_array << '%'
    select_array << '%'
    # append a random character (or SQL wildcard) from the array
    output << select_array[rand(select_array.length)]
  end
  output
end

And generate some query data for the test:

fstnm = random_text(1)
lstnm = random_text(2)

Then I want to hit the database directly and get some results:

tq = @c.run("SELECT * FROM namestable WHERE fstnm LIKE '#{fstnm}%' and lstnm LIKE '#{lstnm}%'")

result = tq.fetch_all

I'll probably manipulate "result" to figure out exactly what I have. Then I want to take the same query and run it through the code I'm testing:

str = <<END_OF_XML
<requestroot xmlns="http://foo">
<lastname xmlns="">#{lstnm}</lastname>
<firstname xmlns="">#{fstnm}</firstname>
</requestroot>
END_OF_XML

SEND #{str}

So a test might generate search criteria

lstnm = "JO%"
fstnm = "Q%"

"result" (properly parsed) might then contain

QUENTIN JONES
QXBRT JOHNS
QUIGLEY JOHNSON
QUIMBY X JOVANIVICH

I think it's quite reasonable that my SUT might assume a Q to be followed by a U, and might assume no middle names in the database. Therefore the software might return a set of data for parsing:


<responseroot xmlns="http://foo">
<user xmlns="">QUENTIN JONES</user>
<user xmlns="">QUIGLEY JOHNSON</user>
</responseroot>

Which of course would be a bug, since neither QXBRT nor QUIMBY appears in the response. The test would assert the four users from the database, and fail because the software returned only two.
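The comparison at the end of the test is just set arithmetic: names straight from the database on one side, names parsed out of the response on the other. This is a sketch; the column order and the response_xml variable are assumptions:

require 'rexml/document'

# "result" is the rows fetched directly from the database (assumed column order)
expected = result.map { |row| "#{row[1]} #{row[0]}" }.sort

# "response_xml" is whatever the software under test sent back
doc = REXML::Document.new(response_xml)
actual = REXML::XPath.match(doc, "//user").map { |e| e.text }.sort

raise "users missing from response: #{(expected - actual).inspect}" unless expected == actual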

Seriously, could anyone invent "QXBRT"?

#####################################################################

Thanks to Harry Robinson of Google, who's traveled this path before . When I was starting this sort of work, he was gracious enough to lend his opinion. During the course of the conversation, he dropped this little gem which I've been saving:

To go with a military analogy, static tests are like
battlements: they are fairly cheap to build and maintain,
and they can help keep you from losing ground you have
already won. Generated tests are like tanks or ground
troops: they need more thought in their design, and you need
them if you want to win new territory, but they work best
when they are on the move.

Thursday, January 05, 2006

A software tester should always have the means and opportunity to read the source code for the software being tested. I've had a number of jobs as a tester, and on some of them, getting to read the source code was quite an ordeal.

I'm currently working on an integration project. We had a new Business Analyst join the team, and on his first week, one of the developers gave him a guided tour of the most relevant parts of the source code for one of the applications we are integrating. I tagged along. I had surfed some of this code, but I am not a Java expert by any means, and having the design and implementation explained by one of the developers that wrote the code was a great help in understanding how and why to test the integration of that application.

Imagine that. A BA and a tester learning about the code from a happy developer. I like my job.