Tuesday, November 13, 2007

Boulder Ruby Group: telecommute panel Nov 14

I'll be part of a panel on distributed agile working/telecommuting/remote working tomorrow evening, Nov 14.

If you can get to Boulder, stop in. If you're interested in following along, there'll be an IRC channel running (probably freenode #socialtext, unless we change our minds).

Also, I have a wiki dedicated to the topic. For tomorrow evening the wiki will be totally public. I intend for the wiki to stick around as a repository of information about working in distributed teams. Leave a comment or send an email if you'd like access to that wiki.

Friday, November 02, 2007

test design economy

The other day Jeff Fry sent me an IM out of nowhere and in the course of the conversation asked me what I was doing these days.

I blithely answered "not so much coding, more test design". Which was really too short an answer, but there's IM for ya.

In a nutshell, Socialtext has a really nice Selenium-based automation framework, and recently I've been attacking the backlog of creating automation for fixed bugs. When I'm doing manual (Exploratory) testing on these code changes, I'm willing to spend time investigating odd combinations of things, multiple recurrences of things, all sorts of unusual experiments to find where exactly the boundaries of the functions lie.

But automated tests are expensive: they are expensive to create, and they are expensive to run. So when writing the automated regression tests, I'm trying to minimize the number of test steps while maximizing the coverage of the code in question.

Here's an example:
| type_ok | email_page_subject | Fw: Fwd: Re: Email subject %%start_time%% |

We needed to test that email forwarded or replied to, on the same subject, would appear on a single wiki page. Manually, I tested every combination of "Fwd:", "Fw:", "Re:", after investigating all the permutations that various email clients might employ.

But when I wrote the automated test, I chose the most pathological case that I knew would succeed, in order to minimize run time and maximize the chance that the test would catch a regression problem.
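The behavior under test can be sketched as a subject normalizer. This is a hypothetical stand-in, not Socialtext's actual code, but it shows why one pathological row covers the same code path as a pile of manual variations:

```ruby
# Hypothetical sketch of the behavior under test: collapse any run of
# reply/forward prefixes so every variant maps to the same wiki page.
def normalize_subject(subject)
  subject.sub(/\A(?:(?:re|fw|fwd):\s*)+/i, '').strip
end

["Email subject",
 "Re: Email subject",
 "Fw: Fwd: Re: Email subject"].each do |s|
  puts normalize_subject(s)   # each prints "Email subject"
end
```

If the worst-case chain "Fw: Fwd: Re:" normalizes correctly, the single-prefix cases almost certainly do too, so one automated step stands in for many.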

Automated GUI testing is too expensive to cover all the cases that a human being can conceive. There is a craft and a discipline involved in creating automated tests that are both effective and economical.

I am very much enjoying exercising that craft and discipline.

Thursday, September 27, 2007

actual code: create vs. sort

Grig and Simon are both still suspicious, so here is an example from the test itself:

This is Selenium RC in a keyword-driven framework where test data is in the wiki. Take a look at the "code" below and I hope you agree that an explicit test for creating a record is not worthwhile.

Any of these could fail in the setup, even though there is no explicit assertion about success:

| clickAndWait | link=Accounts | |
| type | name | 0sort1account%%start_time%% |
| st-submit | | |
| clickAndWait | link=Accounts | |
| type | name | 0sort2account%%start_time%% |
| st-submit | | |
| clickAndWait | link=Accounts | |
| type | name | 0sort3account%%start_time%% |
| st-submit | | |

and even if those didn't trigger a failure, these other assertions about sorting would fail in a really obvious way if any of the three records were missing:

| open | /nlw/control/account | |
| text_like | qr/sort1account.+sort2account.+sort3account/ | |
| clickAndWait | link=Name | |
| text_like | qr/sort3account.+sort2account.+sort1account/ | |
| clickAndWait | link=Number of Workspaces | |
| text_like | qr/sort1account.+sort2account.+sort3account/ | |
| clickAndWait | link=Number of Workspaces | |
| text_like | qr/sort3account.+sort2account.+sort1account/ | |
| clickAndWait | link=Number of Users | |
| text_like | qr/sort3account.+sort2account.+sort1account/ | |
| clickAndWait | link=Number of Users | |
| text_like | qr/sort1account.+sort2account.+sort3account/ | |

Wednesday, September 26, 2007

don't test for blocking conditions: an example

Matt Heusser quoted me on this at the Google Test Automation Conference. I've had this idea for some time: many tests, both manual and automated, waste time explicitly testing for conditions that don't need tests.

I ran into one of those today. Or rather, my boss did. (He's not really a boss so much as he is an experienced colleague who has more conversations with management than I do. Socialtext has very little hierarchy.)

We have a lot of automated tests that we created very quickly. Many of them have TODOs left over. I've been cleaning up the tests and implementing the TODOs.

One of the tests had a whole list of TODOs that had already been done, so I deleted the TODO list. My boss saw this and asked me why. In particular, one of the TODOs had been to create a record of a particular type, and my boss didn't see a test for this.

I pointed out that I had written an explicit test for sorting records of exactly that type, and three records of that type were created in the setup for the sorting test.

Had the record-creation failed, the tests for sorting would have failed. It's a nice economy of effort.

Friday, September 21, 2007

sysadmin week in QA

Go on, click the link, read the blog post, then read the source. Socialtext's REST API is pretty great.

Wednesday, September 19, 2007

Luke Closs/Distributed Agile/Agile Vancouver

If you're near Vancouver, you should be there.

I'm interested in what Luke's going to say, and I work with him.

Wednesday, September 12, 2007

Joe Zawinul

Joe Zawinul died today. He was 75. He wrote "In a Silent Way", and one of my favorites, "Mercy Mercy Mercy" (hard to find an original version on teh internets), a hit for Cannonball Adderley.

He played in Miles' Bitches Brew band. He founded Weather Report and dealt with Jaco.

Bring on the artists.

Sunday, September 09, 2007

Become a better software artist

There's been an interesting discussion on the software-testing mail list about the failures of Six Sigma and other process-quality religions to map effectively to software development. I think you might get better software the same way that you get better music.


Almost every musician starts by playing along with recordings. After that, playing along with the radio builds improvisation skills. After that, playing along with other people broadens and deepens the communication spectrum. Throughout all this, work on the basics like scales and chords and tempo doesn't stop. The conscientious musician practices as many styles of music as possible. The professional musician may not enjoy blues, or orchestral works, or jazz, but there are certain forms that all professionals master simply in order to be professional.


Music is constructed for listeners. Rehearsal is where the details of the performance are designed and implemented. Rehearsal is not where musicians improve their skills; it is where musicians, together, plan to present their work to an important audience.


Real artists perform. They create music for real listeners, who pay for the privilege of listening.

Trying to apply Six Sigma or Lean or CMM in order to get better musical performance doesn't seem very smart. But humans have been performing music for our entire time on the planet. We know pretty well how to go about getting good performances. The same path works for software:


Work the exercises in the books. Then make up your own exercises, or look for interesting questions. (My first serious program was to throw the I Ching in Perl, using the instructions from the back of the Wilhelm/Baynes translation as an algorithm. The script also calculated the changing lines, and printed both hexagrams to the console. I learned a lot.) Learn the basics, and keep expanding: conditionals, loops, objects, design patterns. Learn other languages, even if you don't like them much. (I've started Head First Java three times; I'm slack.)


Share your programs, work with others to improve theirs. Publish your code as a demonstration or a work in progress. Get feedback, and read and comment on others' work. Join an open source project. Or a couple of them. Write documentation-- if you can't write it down clearly, then you don't understand it well enough.


Real artists ship. -Steve Jobs
Make stuff for users. As often as you can. All kinds of users, all kinds of products.

Thursday, August 30, 2007

apropos of not much: started a photo blog

I've been thinking about doing this for some time, and now it's happened. I got a nifty little bullet-proof camera and started a photo blog of the things I see from day to day. Considering where I live, some of them are pretty amazing.

Friday, August 17, 2007

so where's the testing

The Socialtext job is on craigslist now and I've been reviewing resumes. We have several great candidates, but most of the people who've submitted resumes don't seem to be able to help out much.

In particular, I'm seeing a lot of resumes that talk about how their owners do almost everything *except* actually test software. There seems to be lots of reporting and managing and reviewing and meetings and stuff, but actual testing experience seems to be pretty thin on the ground.

Or maybe that's just how people write resumes.

Monday, August 13, 2007

Socialtext is hiring a tester/QA person

The job announcement should show up here any minute. In the meantime, you can read the job description.

This is a great job, and we're looking for one outstanding person. You can google for Socialtext yourself (I recommend doing some research before you apply), and read the official job description. Here I'm going to talk about why this is such a great job, and about the things you'll need to get it that aren't in the job description.

Socialtext has about fifty employees. A few are based in Palo Alto, but the development, QA, and support staff are spread across the US and Canada, along with a few overseas employees. If hired, you would be expected to telecommute. If you happen to live in the Bay Area, you'd be welcome to visit the office.

We're a small-a agile shop. We have standups, retrospectives, and iterations. We release small features frequently, and we install them on our internal wiki before we release to production.

Besides the wiki, we use irc, email, VOIP, VNC, and more, to communicate and share information. We're in constant contact with each other through the day.

Since so much of our communication is with text, excellent writing skills are critical. Since there are so few employees, abilities beyond testing are very important. If you know about business, or development, or marketing, or Open Source, that's important to us. You can have a real influence on how the company works.

We have some interesting company policies. For instance, the official vacation policy consists of five words: "Be An Adult About It". We have three salaries. Everyone who joins gets paid one of the three salaries. Everyone in the company (usually) has a quarterly travel budget. When you join, you get an allowance to buy yourself hardware. Most people have Mac laptops, but we have a couple of Windows users and a couple of Linux users.

A lot of our development work is done in public. Surf http://www.socialtext.net/open to see what we're up to. The coolest thing I have over there is three small tutorials for how to use our REST API with a Ruby client.

We have a lot of very smart people who hold strong opinions. We use strong language, because we don't have the bandwidth to play politics. We criticize ideas, not people, we work on fixing code, not fixing blame. Discussions get heated, but aren't personal. If you're offended by four-letter words in the service of making a point, this is not the place for you to work.

Besides unit tests, we have an automation framework based on Selenium. We've quadrupled the number of test steps since April, from about 1000 to about 4000. I'm using Watir and FireWatir to do basic analysis of page load times. If you know test automation, Selenium, or performance testing, you'd be very welcome. Performance in particular is just coming under close scrutiny. In the next six months we are planning to make significant improvements to all of our test architecture. It promises to be a really cool project.

We have about 10 developers. When we add another QA person, we'll have a pretty good dev/test ratio.

This is a great job with great people and a very interesting market.

If you'd like to apply, send your resume as text in the body of an email addressed to jobs@socialtext.com. You can send it to me, too.

Tuesday, August 07, 2007

just enough design

So in recent weeks the number of test steps in our Selenium-based framework has quadrupled, from about 1000 to about 4000. The time it takes to run these tests is starting to cause significant pain.

So I set out to spike a little harness that would run sets of tests concurrently. (Note: it is *so* nice to have fork() available, after years of working in Windows environments.)

Trying to achieve the Simplest Test Harness That Could Possibly Work, I started out running all of the tests in the same test environment. Unfortunately, some of the tests change the state of the test environment, which causes other tests running concurrently to fail.

I started separating the state-changing tests (which may not be run concurrently) from the other tests (which may be run safely).

My colleague objected to this. Strongly. And he's right. He framed his argument in terms of old-school "software engineering" principles, but the argument is sound in terms of agile mantras like "make it work; make it good; make it fast". Segregating magic tests from non-magic tests will eventually cause failure. Or madness. Or both.

My next spike, I'm going to clone the test environment and run sets of tests in their own test environments. I think that's the *next*-simplest thing that could work, and it will get us a long way down the road-- until we have to design some more.
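That first spike can be sketched in a few lines of Ruby. The set names here are made up, and each child just simulates a run; a real harness would exec the Selenium suite for its set (each against its own cloned environment) and report pass/fail through the exit status:

```ruby
# Fork one child per test set; collect exit statuses with Process.wait2.
test_sets = %w[widgets accounts search]   # hypothetical set names

pids = test_sets.map do |set|
  fork do
    # A real harness would run the suite for this set here;
    # this child just simulates a passing run.
    exit!(0)
  end
end

statuses = pids.map { |pid| Process.wait2(pid).last }
puts(statuses.all?(&:success?) ? "all sets passed" : "some sets failed")
```

The parent blocks on each child in turn, so total wall-clock time approaches that of the slowest set rather than the sum of all of them.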

Thursday, July 26, 2007

new open source license: Common Public Attribution License

The Register has probably the best coverage so far.

This license is particularly valuable where the market is saturated with competitors, and for companies offering "software as a service".

OSI did the right thing, I think.

Wednesday, July 18, 2007

data-driven test framework with Socialtext wiki as repository

Socialtext has a tradition of having "Wiki Wednesday" once a month, where people hack little wiki-related projects.

I had a little Wiki Wednesday myself today. Socialtext wikis have a very nice (and not very well documented) REST API. I've been figuring out how to use this API in Ruby. I got to the point today where I could read and write pages to the wiki, so I hacked up a little framework using Watir that reads test data from one page and updates another page with the results of the tests.

Wiki markup makes parsing the data really easy, and the REST API makes updating pages really easy.

@test_data_page looks like
| Google Test | Google | Google Search | I'm Feeling Lucky |
This displays as a table in the wiki.

It's in dire need of some refactoring. Reading pages with more than one line of test data is left as an exercise for the reader. :-)

If you want to muck around with it yourself, you could get a free account from socialtext, or try the open source version or the VMWare image.

require 'net/http'
require 'test/unit'
require 'watir'
include Watir

class TC_goog < Test::Unit::TestCase

  def setup
    @user = 'user@socialtext.com'
    @pass = 'password'
    @host = 'mineral.socialtext.net'
    @port = '33333'
    @test_data_page = "/data/workspaces/admin/pages/testdata"
    @test_result_page = "/data/workspaces/admin/pages/testresults"
    @test_results = ""
  end

  # fetch a page's wikitext via the REST API
  def get_page(loc)
    request = Net::HTTP::Get.new(loc, {'Accept' => 'text/x.socialtext-wiki'})
    request.basic_auth @user, @pass
    response = Net::HTTP.new(@host, @port).start { |http| http.request(request) }
    response.body
  end

  # write the accumulated results back to the wiki
  def put_page
    req = Net::HTTP::Put.new(@test_result_page, {'Content-Type' => 'text/x.socialtext-wiki'})
    req.basic_auth @user, @pass
    req.body = @test_results
    response = Net::HTTP.new(@host, @port).start { |http| http.request(req) }
    puts "Response #{response.code} #{response.message}"
  end

  # record PASS or FAIL for one assertion without aborting the test
  def check(expected, actual)
    assert_equal(expected, actual)
    @test_results << "|PASS"
  rescue Test::Unit::AssertionFailedError
    @test_results << "|FAIL"
  end

  def test_goog
    @page = get_page(@test_data_page)
    puts @page

    test_data = @page.split("|")
    test_name    = test_data[1].strip
    page_title   = test_data[2].strip
    button_name1 = test_data[3].strip
    button_name2 = test_data[4].strip

    ie = IE.new
    ie.goto('http://www.google.com')

    @test_results << @page
    @test_results << "|" + test_name

    check(page_title, ie.title)   # title check assumed for the first result column
    check(button_name1, ie.button(:name, "btnG").value)
    check(button_name2, ie.button(:name, "btnI").value)

    @test_results << "|"
    puts @test_results
    put_page
    ie.close
  end
end


Wednesday, May 30, 2007

venture capital in a town of 15,000 people

This is cool.

I've written letters about this in local (not available online) and international (October 2006) publications; it's great to see some serious action from people who might make something happen.

The quality of life here is superb. There is a small but enthusiastic pool of talent. If you have a spectacular business idea; don't need too much funding; and the VC environment in Silicon Valley, NYC, Austin, Atlanta, or Chicago is too much of a freak show: come here.

Thursday, May 24, 2007

"let your employees decide when and where to work"

Time magazine has a refreshingly hype-free short article about 37signals. Half of the company works remotely.
First, kill all your meetings; they waste employees' time. "Interruption is the biggest enemy of productivity," he says. "We stay away from each other as much as we can to get more stuff done." Use asynchronous communication and software instead to exchange information, ideas and solutions. Next, dump half your projects to focus on the core of your business. Too much time and effort are wasted on second-tier objectives. Third, let your employees decide when and where to work so they can be both efficient and happy. As long as their fingers are near a keyboard, they could as easily be in Caldwell, Idaho, as in Chicago.
At Socialtext where I work, most of the company works remotely. Socialtext sells wikis, and internally, the entire company lives, breathes, and evolves on wikis. The part about asynchronous (PUBLIC) communication is right on. Not only is it effective communication, it is also a living knowledge base and repository for company culture.

I have had a lot of jobs. It's ironic that the company culture at Socialtext is so rich when the people in the company see each other so rarely.

Friday, May 11, 2007

here come the software artists

Apropos of my software-as-music analogy, Jonathan Kohl has another nice non-manufacturing metaphor, software development as theater. It makes a lot of sense, especially since Jonathan's point is actually about the value of improvisation in the course of developing and testing software.

In software, improvisation saves time and creates new opportunities. On the factory floor or in the supply chain, improvisation is a disruption and causes chaos.

Tuesday, May 08, 2007

starting a work blog

I'm starting a blog for the work I'm beginning to do at Socialtext. It'll mostly be concerned with wikis, Selenium, FireWatir, Perl, and maybe a little Ruby. I'll probably be posting to this one a little less often, but you're welcome to read the other one if you'd like. A lot of the Socialtext development staff are keeping open work blogs, so mine fits right in.

Sunday, May 06, 2007

oh heck yeah


Saturday, May 05, 2007

an example of an analogy: monks vs music

Brian Marick gives a very well-reasoned and well-researched talk on the state of agile development. In it he compares agile teams to monks. I've never liked this analogy (see the comments) and on several occasions I've suggested that agile teams are much more like jazz bands. The most important differences for me are that unlike agile teams, monks a) do the same thing in the same way over and over and b) don't have audiences.

And now I've found the perfect musical agile counterexample.

This is Ella Fitzgerald and Count Basie doing the song One O'Clock Jump. The song is a twelve-bar blues, which is the jazz equivalent of a database app with a UI. By which I mean: just as every programmer has built a database app with a UI, every American musician has played twelve-bar blues. It is a framework on which many many many songs are hung, from Count Basie to Jimi Hendrix to the Ramones.

This particular video is a great example of agile practice. Listen to how the voice and piano influence each other. This is a lot like pair programming, and it's a lot like TDD: voice does something; piano responds; piano does something; voice responds. And notice the eye contact. These people are intensely aware of what's going on instant-to-instant. They have no sheet music (BDUF). They are involved in an activity that takes intense concentration and skill, just like good software development. They are also clearly aware that there is an audience, just as good software development should be aware of the needs of the people paying the bills.

I particularly love it when Ella sings "I don't know where I'm going from here". She recovers immediately, of course. It doesn't get any more agile than that. They are clearly inventing the music as they go along, using skill and experience.

I like Cleveland Eaton, the bass player, too. I think of him as a tester. He is subtly influencing the structure of the song, changing the dynamics, tying themes together, providing little pushes to the main players. Also, his instrument is much less sophisticated than a grand piano or a human voice, much as testers' tools are less sophisticated than developers' tools. For a bonus, here's a video of the full Basie orchestra featuring a bass solo from Mr. Eaton on another twelve-bar blues. Again, the interaction between bass and piano is wonderfully agile.

Apropos of not much, I had the privilege of speaking with Count Basie and Cleveland Eaton twice. The Basie band played at my high school on two occasions more than twenty years ago, and those conversations, especially with Mr. Eaton, were an enormous influence for me.

Friday, May 04, 2007

Charley Baker has a blog!

Watir Project Manager Charley Baker is blogging. First post is an example of driving scriptaculous AJAX with Watir. I know that Charley is doing really big, nifty things with Watir and Ruby. I'm looking forward to reading about them.

Monday, April 23, 2007

but, but, but... that's black box testing!

Brian Marick has been thinking about test frameworks in Ruby.

It's nicely done, but the killer idea is in the next-to-last paragraph that says:

In this particular case, the app communicates with the outside world via XML over the network, FTP files, and a database, so there’s no need to link the fixtures into the code. (emphasis mine) If it worked through a GUI, some remote procedure call interface would have to be added, but I persist in thinking that’s not too much to ask.

This has been my approach to test automation for some time, but it's hard to get across the idea until you've done it and succeeded once or twice.

I'll go even further, and stipulate that there is always a way to slice the application such that a test automator can write fixtures without reference to the underlying source code, and still provide effective, efficient system-level testing.

Thursday, April 19, 2007

fine tools and beautiful machines

My new laptop for work arrived yesterday, and I've been spending the day getting it set up:

I've used Windows machines at work for my whole career, but I've always had Macs at home. It's going to be a real pleasure to use a Mac for work.

I had a birthday recently, and my wife completely surprised me with one of these, an Ibanez AG86:

It was on the wall at the local music store, and I would pull it down and play it every time I went in. A really well-made guitar, a pleasure to play, and a good value. Guitar is not my main instrument, but I've been motivated to try to cop a few Wes Montgomery tricks.

This computer and this guitar are both fine tools and are both beautiful machines. There is a time in every student's career when they have mastered enough skills that an excellent set of tools is required in order to advance. This seems to be that time.

Thursday, April 12, 2007

i've always wanted one of these

and now I have one of my very own.

RadView WebLOAD now open source

This is great news, and should really shake up the tools market. Not only is it open-source, it's released under GPL, which is a pretty radical move.

Open source load test tools have been kinda sorta OK for some time, but performance testing is the only case where commercial alternatives are pretty much universally acknowledged to be superior to open source tools. Building good performance test tools is hard. What RadView has done has instantly changed the whole scene.

I am no kind of performance test expert, but I've worked a little bit with the standard open source tools for doing that work. The next time I need one, I have no doubt that I'll reach for WebLOAD first to see how it works.

Thursday, April 05, 2007

make a change to Ruby

OK, I can't resist Pat Eyler's "Bloggin Contest"

I started programming in Ruby after having programmed in Perl for a while. There is not very much I miss about Perl, but there is one thing: in Perl, you can locate a subroutine anywhere in the script; in Ruby, a top-level call to a method has to come after the definition of that method has been executed. (Yes, I know there are reasons that Ruby's interpreter does it that way, but still...)

I write a lot of scripts that are run by non-technical non-programming users, like QA people and business analysts. The nice way to set up such scripts is to put all of the editable bits right up at the very top of the page, so that the users can ignore all of the programming guts below the important variables and such. This is easy in Perl. In Ruby, I am forced to either put all of the editable parts at the very bottom of the page, so the non-technical user has to scroll through a lot of intimidating code to get to the important stuff; or I have to distribute a separate library file and use 'require'.

Here are two examples:

sub i_am_at_the_top {
    print "I'm at the top\n";
}

i_am_at_the_top();

i_am_at_the_bottom();

sub i_am_at_the_bottom {
    print "I'm at the bottom\n";
}

>perl -w subs.pl
I'm at the top
I'm at the bottom

def i_am_at_the_top
  puts "I'm at the top"
end

i_am_at_the_top

i_am_at_the_bottom

def i_am_at_the_bottom
  puts "I'm at the bottom"
end

>ruby defs.rb
I'm at the top
defs.rb:7: undefined local variable or method `i_am_at_the_bottom' for main:Object (NameError)
>Exit code: 1

If Ruby's interpreter would make all methods available for use at runtime, it would make communication with non-technical script users a lot more appealing.

Tuesday, April 03, 2007

it wouldn't hurt you to use the compiler, either

I surf the O'Reilly Network blogs pretty often, and this one from Chris Josephes caught my eye, especially the part where he says of system administrators
"I don’t expect them to be a full fledged C programmer, but I think it’s important to know how to build software. The candidates that have demonstrated these skills have also been more proficient in debugging, tracing system calls, and identifying performance problems ahead of time."
I think the same is true of testers.

If you are a tester who has never used a compiler (or even if it's been a while), I have an exercise for you to do the next time you need inspiration for a new test.

Find your local build person, or a sympathetic developer, and have them send you the compiler log(s) from building your particular application. Do text string searches in the log files for the terms "warning" and "error". Unless you are working in an extremely mature organization, I'll bet you find some warnings and errors from the compiler.

Those errors and warnings point to places where bugs are very likely to live.
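That search is a few lines of Ruby. The log text below is invented for illustration; a real run would read the actual build log file, whose name will vary by project:

```ruby
# Count compiler warning/error lines; the log contents here are invented.
log = <<LOG
main.c:42: warning: unused variable 'tmp'
main.c:77: error: 'foo' undeclared (first use in this function)
linking... done
LOG

counts = Hash.new(0)
log.each_line do |line|
  counts[:warning] += 1 if line =~ /\bwarning\b/i
  counts[:error]   += 1 if line =~ /\berror\b/i
end
puts "warnings=#{counts[:warning]} errors=#{counts[:error]}"
```

Each line the script flags gives you a file and line number where a bug is likely to live, which is exactly where the next test should go.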

Monday, March 26, 2007

tests to prevent bugs: coral snakes and king snakes

I was presented with a PNG screen capture of an application some time ago and asked to brainstorm test ideas based on the static representation of a limited section of the application UI.

I think I acquitted myself reasonably well at the time, but I've been thinking about the situation since then, because there is an interesting aspect to this particular application: if this software has defects, the repercussions of such defects could have a drastic effect on the physical health of users.

Assume that this application exists to tell snakes apart, so as to warn people whether the one they are dealing with is venomous. This is a very far-fetched analogy, but it serves to make my point. Right now, development is in its early stages, and the software can tell the difference between rattlesnakes, cobras, and coral snakes. Tests for those are easy: rattlesnakes have rattles, cobras have hoods, coral snakes are red, yellow, and black.

But someday we're going to want to add more snakes, like a scarlet king snake. Before we go on, take a look.

I don't have any requirement right now to add a scarlet king snake to the system, nor will I know any time soon if there ever will be such a requirement. However, the risk of confusing the two snakes is so incredibly high that I feel compelled to mitigate that risk with a test.

The appropriate test, of course, is to add a record for the scarlet king snake anyway, and to make sure regardless of the state of development of the app that the app never, ever confuses the coral snake record with the scarlet king snake record.

But the snakes are incredibly similar. How to design the test? We have to do enough research to find the 100% distinguishing feature:
Red next to Black - is safe for Jack;
Red next to Yellow - will kill any fellow.
I put a test into my regression suite saying that regardless of any other criteria (which could change anytime: my app is under development) if red-is-next-to-yellow: always choose "coral snake"; if red-is-next-to-black: always choose "scarlet king snake".
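The bellwether could be sketched like this. classify_by_bands and the band pairs are hypothetical stand-ins for the real record matcher; the rule itself is the point:

```ruby
# The rule that must never regress, regardless of how the app evolves:
# red-on-yellow is always "coral snake"; red-on-black is always
# "scarlet king snake".
def classify_by_bands(pair)
  case pair
  when [:red, :yellow], [:yellow, :red] then "coral snake"
  when [:red, :black],  [:black, :red]  then "scarlet king snake"
  else "unknown"
  end
end

ok = classify_by_bands([:red, :yellow]) == "coral snake" &&
     classify_by_bands([:red, :black])  == "scarlet king snake"
puts(ok ? "bellwether tests pass" : "USER-SAFETY REGRESSION")
```

Note that the rule deliberately ignores every other attribute of the record; whatever else changes during development, these two answers are pinned.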

I'd probably want to add a couple of other bellwether tests as well, for instance, to distinguish between cottonmouth water moccasins and other water snakes.

If these tests ever break, then my users' health is at risk, and my company is at risk for serious litigation.

Friday, March 23, 2007


Welcome back Mr. Caputo...

A number of people have been commenting on Martin Fowler's "Transactionless" essay. I didn't realize how unusual that approach seems.

I've said a number of times in public that if I had not begun my career by reading COBOL instead of, say, Java, I probably would not have gotten so far so fast.

Likewise, if I had not begun my career dealing with ISAM files instead of relational databases, I would probably have a much shallower understanding of what databases are and how they work.

In fact, of all the projects I've ever been a part of, one of the most impressive, most critical, and most educational was converting several enormous databases full of life-critical 911 location information from what was essentially a fancy ISAM arrangement to a true SQL database. Even at the time, though, our target SQL db did not have full support for referential integrity, so I became quite familiar with code that handles both referential-integrity checks and also that handles commits and rollbacks based on those checks.

I once wrote an article called "Old School Meets New Wave". It's funny how certain old techniques return to use over and over.

Monday, March 12, 2007

agile telecommuters?

I would be very interested in hearing if anyone other than Kent Beck has stories about telecommuting for software development projects, particularly agile-ish software projects, with high-bandwidth communication requirements and short iterations and such.

My original assumption was that it was a widespread practice, but now I think it's much more rare than I had assumed. Which is strange, considering the savings to the business on office infrastructure, etc. and the availability of communication tools like IM, Skype, VPNs etc. Not to mention cheap airfare when you have to make that site visit.

I am also interested in promoting the practice-- if you are a telecommuting wannabe, you should say so as well.

I want to live in paradise and no one else seems to want to (although, if you all did, it wouldn't be paradise any more)
-Kent Beck (on the extremeprogramming mail list)

Thursday, March 08, 2007

one good thing about the term "QA"

Most serious software testers of my acquaintance object to the term "Quality Assurance". There are any number of specific objections to the term, but the general objection is that competent software testing goes far beyond the definition of Quality Assurance.

However, searching for jobs online is a heck of a lot easier using the string "QA".

Sunday, March 04, 2007

a comprehensive guide to Google SketchUp Help

In case you ever want to navigate your way through all of the help available in Google SketchUp, here's a quick guide:

* The Help Center, with really basic information about downloading and installing, links to major areas, high-level FAQs, and such.

* The User's Guide (which interestingly enough looks fine in Firefox but gives me an "Error on page" in IE), with the real nuts and bolts of what each control does and how to use it. This is deep information about all aspects of the UI.

* The Help Groups, a link to all of the Google Groups that have to do with SketchUp.

* Video tutorials, which are fun to watch but might be a little slow for some tastes. But there are 26 of them, so if something is perplexing, the answer might be here.

* Examples (actually model structures) of both realistic and wireframe buildings.

* The Quick Reference Card, a one-page PDF describing in shorthand how to use all of the UI widgets.

* SketchUp Community, with a link to the public Google Groups, but also another link to the "Pro" groups.

* Ruby Help, the introduction to SketchUp's Ruby API.

* Self-Paced Tutorials: very nice, basic, simple tutorials that run within SketchUp itself. Only 6 of them, and they don't cover the super-complex topics, but they'll take you a long way.

* Send a Suggestion, just in case this doesn't cover what you want. People are listening :)

Monday, February 26, 2007

at liberty

For the first time in many years, I find myself without a job.

Ultimately, I would like to find a test automation partial-telecommute job based anywhere I can reach easily from Denver (with a preference for Denver, Salt Lake City, or Phoenix).

In the short term, if you or your organization anywhere in the world would like:

* customized, personal training in Watir, based on Bret Pettichord's "Scripting for Testers" class that I taught at STAREAST, STARWEST, and Agile2006,

* training in fundamental principles of functional test automation in Ruby and/or Perl, based on a curriculum I have had in development for some time,

* a start on a quick, effective test automation suite for Web Services, either SOAP or REST,

* basic training on ethical hacking tools like Wireshark, NTOP, Nessus, and ettercap,

* or any other short-term project involving test automation or basic training in Ruby,

please send me an email at christopher dot mcmahon at gmail dot com, or leave a comment on this blog.

Saturday, February 24, 2007

The Bay Area TD-DT Summit happened

Our theme had to do with "Writing Better Functional Test Code". What struck me more than any other aspect at the gathering was how many different ways programmers, scripters, testers, and managers are attacking the issues involved in writing (and maintaining!) functional tests.

We had demonstrations of imperative approaches, deterministic approaches, random approaches, frameworks using mocks and stubs and Watir, FITNesse and Selenium, frameworks using math and using pixels, we had Ruby and sed and C++ and Java and C# and SOAP and databases and files, we had BDD and TDD and performance approaches.

And we had a ukulele. But only for two minutes. That was my fault.

Elisabeth Hendrickson wants a large-scale consolidated approach to doing functional testing. But given what we saw today, I start to think that even if someone were to give Elisabeth her pony, other functional testers would still find a dire need for aardvarks and platypuses.

Monday, January 29, 2007

resources for figuring out ODBC index() function

(edited: man what a day)
(bottom of the page, thanks to Paul Rogers, I'm not sure I ever would have found it on my own.)

For Oracle, you probably want to avoid the indexes() function, and do queries straight to the system tables:
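Here's a sketch of the kind of query I mean, against the standard USER_INDEXES and USER_IND_COLUMNS data-dictionary views. 'MY_TABLE' is a placeholder; feed the SQL to whatever database layer you already use.

```ruby
# The query is built as a string here; the data-dictionary view and
# column names are Oracle's standard ones, the table name is a
# placeholder.
index_sql = <<SQL
SELECT i.index_name, i.uniqueness, c.column_name, c.column_position
FROM   user_indexes i
JOIN   user_ind_columns c ON c.index_name = i.index_name
WHERE  i.table_name = 'MY_TABLE'
ORDER BY i.index_name, c.column_position
SQL
puts index_sql
```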

Tuesday, January 23, 2007

Announcing the Bay Area Developer-Tester/Tester-Developer Summit

If you're interested, leave a comment on the blog or send me an email or an IM or smoke signals or something. Here is the flyer Elisabeth Hendrickson and I have been sending:

What: A peer-driven gathering of developer-testers and tester-developers to share knowledge and code.

When: Saturday, February 24, 8:30AM – 5:00PM

This is a small, peer-driven, non-commercial, invitation-only conference in the tradition of LAWST, AWTA, and the like. The content comes from the participants, and we expect all participants to take an active role. There is no cost to participate.

We’re seeking participants who are testers who code, or developers who test.

Our emphasis will be on good coding practices for testing, and good testing practices for automation. That might include topics like: test code and patterns; refactoring test code; creating abstract layers; programmatically analyzing/verifying large amounts of data; achieving repeatability with random tests; OO model-based tests; creating domain specific languages; writing test fixtures or harnesses; and/or automatically generating large amounts of test data.

These are just possible topics we might explore. The actual topics will depend on who comes and what experience reports/code they’re willing to share.

For more information on the inspiration behind this meeting, see two of my recent blog entries:



This is an open call for participation. Please feel free to forward it to other Developer-Testers and Tester-Developers who you think are interested enough to want to spend a whole Saturday discussing the topic and sharing code.

Proposed Agenda:

* Timeboxed group discussions: “Essential attributes of a tester-developer and developer-tester (differences and similarities)” and “What tester-developers want to learn from developers; what developer-testers want to learn from testers.”

* Code Examples/Experience Reports (we figure we have time for 3 of these)

* End of day discussion: Raising visibility for the role of a DT/TD, building community among practitioners

If you’re interested in participating, please send me an email answering these questions:

* Which are you: a tester who codes or a developer who tests?

* How did you come to have that role?

* What languages do you usually program tests in?

* What do you hope to contribute to the Bay Area DT/TD summit? Do you have any code or examples that you’d like to share? (Please note that you should not share anything covered by a non-disclosure agreement.)

* What do you hope to get out of the Bay Area DT/TD summit?


Your hosts:

Elisabeth Hendrickson
Chris McMahon

Monday, January 22, 2007

Mountain West Ruby Conf March 16-17 Salt Lake City

This looks like a fine outing.

I've been lurking as these folks assembled the Mountain West Ruby Conference. It looks like a blast. Chad Fowler is giving the keynote, and at fifty simoleons, the price is definitely right.

I'm not much for skiing, but it'd be great fun if I could make it to the shindig.

Friday, January 19, 2007

that's so crazy it's not even wrong

I'd Like a Pony, Please

The Watir mail list regularly gets questions like "How do I do X in Watir? The commercial tool that I use gives me X, but I can't figure out how to do it with Watir." (Watir, for those who don't know it, is a set of Ruby libraries for manipulating Internet Explorer. That's all it does.)

(OK, Watir has accumulated a few bells and whistles over time, but mostly it just reaches down IE's throat to move IE's bones around.)

X is logging/database connection/encryption/special string handling/test case management/distributed test environment/data-driven frameworks/etc., etc. etc.

The answer, of course, is that Watir doesn't do that. Nor will it ever do that. Watir is not intended to be a replacement for an entire test department, it is merely a set of libraries to manipulate Internet Explorer. Ruby itself has a lot of goodness to do this kind of thing, but Watir never will.

The people that ask these questions are often quite expert. The problem is that they do not think of themselves as programmers.

That's So Crazy It's Not Even Wrong PART ONE

I once worked with a man who was an expert with the Mercury tools. We were using Ruby to test an application that used a lot of XML. At one point he asked me "Does Ruby give me any way to ignore parts of strings?" I told him that I could think of a couple of ways to do that, and I asked him to show me what he was trying to do.

He wanted to extract the data elements from XML documents by using what boils down to Ruby's gsub("foo","") to strip the XML wrapper parts.

Understand this: he was setting out to build an XML parser. From scratch. Using gsub("foo","") as his only tool. He did not lack boldness.

This of course is madness. I introduced him to REXML and regular expressions. I presume he is still using them.
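For the record, here's the REXML alternative, with an invented document: parse the XML and ask for the elements, rather than stripping tags with gsub.

```ruby
require 'rexml/document'

# Invented XML standing in for the application's documents.
xml = '<order><item sku="A1">widget</item><item sku="B2">gadget</item></order>'

# Parse once, then pull out exactly the data elements you want.
doc   = REXML::Document.new(xml)
names = REXML::XPath.match(doc, '//item').map(&:text)
puts names.inspect
```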

The reason he set out on such an insane project was that apparently in the world of Mercury, the only string manipulation tool available to you is something a lot like gsub("string",""). My colleague was simply unaware of the wider world of programming, and of what was possible.

He did not think of himself as either a developer-tester, or a tester-developer. He thought of himself as a tester who is a consumer of things provided by others.

That's So Crazy It's Not Even Wrong PART TWO

Mercury is not the only culprit in this kind of brainwashing. I have encountered testers (and others) whose entire world revolves around Excel.

For instance, I know of someone who started out his test project by putting test data into Excel, then writing VBA macros to generate Ruby code on the fly and launch it from Excel. He had a difficult time seeing Excel as simply a repository for data, instead of the center of the entire testing universe.

Where's My Pony?

Programmers' main tools are programming languages. Many testers' main tools are expensive whiz-bang-multi-aspect thingies that, if they have programming languages attached at all, are primitive or crippled.

I would like to see more testers learn about programming. Not just how to do programming, but how to think about testing problems in such a way that the problems can be solved with programming. Programming remains the best way to manipulate computers. If you work closely with computers, you owe it to yourself and your colleagues to learn some programming.

If you've never seen a pony, how will you know what to look for when you go to the barn?

P.S. many thanks to Jon Eaves for the title of his blog, which I've stolen and then mangled.

Thursday, January 18, 2007

significance of developer-tester/tester-developer

This idea of programmers-who-test and testers-who-program is starting to inform my daily work.

Reading articles about fantastic projects that sail to the end is all well and good; those great stories are something we strive for but accomplish only occasionally. From day to day, our designs could be better, our test coverage could be better, our test plans could be better, our test cases could be more powerful.

I wrote the same bug in a couple of places this week. Luckily, my Customer was a tester. One of the developers I work with had a merge/commit problem a couple of times this week. Luckily, I was his tester, and I've struggled with merges and commits myself. We're working on figuring out how to prevent such errors in the future.

As a toolsmith, my Customers are testers. I go out of my way to write readable code so that they can inspect what I am creating. I encourage collaboration, and my Customers are coming to the realization that I need their help to recognize the bugs and get to the end.

As a tester, I am a Customer for the developers. I am working hard to understand how they go about writing and committing code. I am sympathetic when something goes haywire. I treat that as an opportunity to learn more about design.

As a tester-developer, I am working hard to understand the design and programming constraints of the system. I'm building automated tests and exploratory tests that expose the weakest parts.

As a developer-tester, I am working to include all the (programming and non-programming) consumers of my test scripts I can find, because I recognize that I work best when I collaborate. Disappearing into a closet until I personally resolve every issue simply takes too long and is just too boring to consider.

Tuesday, January 16, 2007

You! Luke! Don't write that down! *

Robert Anton Wilson died last Thursday after a long struggle. If you haven't read The Illuminatus! Trilogy, go do that now. It's a lot like Pynchon's The Crying of Lot 49, except 20 times longer and a hell of a lot funnier. Get back to me when you're done. I'll wait.

A couple of years ago I was privileged to see Maybe Logic, the movie made about him in his later years, at the old Durango Film Festival, which renewed my admiration.

Robert Anton Wilson refused to be made ridiculous, either by his critics or by his diseases. Both his sense of humor and his sense of the sublime were extraordinary, even in his last years. He refused to fade away, he raised a ruckus until the end.

*If you're intrigued by the title of this post, and you're too feeble to read Illuminatus, email me and I'll explain it.

developer-testers and tester-developers

After hearing about a number of system-functional-testing adventures at AWTA, Carl Erickson proposed that there are "developer-testers" and that there are also "tester-developers". In context, the difference was clear, and it's worth talking about the two approaches. (Assuming that I did in fact understand what Carl meant.)

Developer-testers are those developers who have become test-infected. They discovered the power of TDD and BDD, and extended that enthusiasm into the wider arena of system or functional tests. There is a lot of this sort of work going on in the Ruby world and in the agile world. For examples, look at some of Carl's publications on his site at atomicobject.com, or talk to anyone in the Chicago Ruby Brigade about Marcel Molina's presentation last winter on functional tests in Rails and 37signals. (Follow the link, the number of examples is gigantic!) The unit-test mentality of rigor and repeatability and close control survives in these developer-tester system tests.

This is a good thing.

Tester-developers are those testers who have discovered the power of programming to extend and augment their ability to test systems. Sometimes they have been production programmers, but even those who have not been production-code developers have almost always at least read some production code, and have a good grasp of how software is designed.

Tester-developers frequently use interpreted languages instead of compiled languages, for the sake of power. They come up with crazy ideas like finding, creating and using random data. They look at complex or "impossible" simulation problems and create the simulation in 100 lines of Ruby or Perl or Python. They turn problems inside-out and upside-down.

Tester-developers frequently make mistakes, but they attempt so many things that mostly you don't notice.

Tester-developers are often called either "test automators" or "toolsmiths". Some of us have started to suspect that there are situations where these terms are inadequate or misleading.

Tester-developers frequently (as in my own case) excel at craft but lack discipline.

In particular, there comes a time in the toolsmith's career when he needs an interface for testability. Or he needs looser coupling of the code to make his own work easier. Or he needs information about a private API he would like to use. Sometimes, if the toolsmith wants these things, he is the only one in a position to make them happen.

At that time, the tester-developer must be in a position to negotiate not in his own language of crazy ideas and attempts that may fail, but in the developer-tester's language of well-designed, carefully-executed implementation.

Likewise, the developer-tester will have come to realize that regardless of how disciplined his code is, there are probably situations in the world that will cause Bad Things to happen in his system. Often, he suspects that such things are possible, but doesn't have the perspective to recognize the exact problem.

At some point a developer-tester will probably find himself looking for someone to run his code and suggest reasonable ways in which it might fail. Of course, this is what all good testers do, but tester-developers have particular skill at this level, because they understand, at least in a general sense, how the code was written. Here are some reasonable criticisms that a tester-developer might make to a developer-tester:

That input should be a Struct, because if it's an ArrayOfString, there is a strong chance that the user will get the arguments in the wrong order.

This stored procedure contains too many functions. Either move some of the business logic back into the code, or break the procedure into smaller parts, because maintaining this in the future will become difficult. Also, I would like to invoke parts of this function for test purposes without having to exercise the entire set.

Consider adding logging to this aspect of the system, because critical information is being handled here, and if something goes wrong, having that information will be important.
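The first of those criticisms can be sketched in a few lines of Ruby (the Contact fields are invented for illustration): a Struct names its members, so callers pass and read fields by name instead of by position.

```ruby
# With an ArrayOfString, ['Ada', 'Denver', '555-0100'] and
# ['Denver', 'Ada', '555-0100'] look equally plausible. A Struct with
# named members removes the guesswork.
Contact = Struct.new(:name, :city, :phone, keyword_init: true)

c = Contact.new(name: 'Ada', city: 'Denver', phone: '555-0100')
puts c.phone
```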

"Developer-testers" and "tester-developers": The more I look at it, the more I like it.

Monday, January 15, 2007

craft and discipline, larks' tongues in aspic

One of my lightning talks at AWTA had to do with finding better languages than engineering and manufacturing with which to describe software.

With all due respect to the Leaners, if software work were like factory work, few of us would do it.

I suggest we look to art, literature, and especially music for language to talk about software.

I mentioned first a couple of academic examples taken from New Criticism and Structuralism, like Monroe Beardsley's idea that value is based on manifest criteria like unity, variety, and intensity, or the Structuralists' idea that value is based on the degree to which the work reflects the folklore and culture of the milieu where it was produced.

If they'd done software, Beardsley would have been analyzing coding standards and function points, while the Structuralists would have been all strict CMM or strict Scrum/XP. If you're an objective-measurements, CMM, or Agile person, I encourage you to find corollary principles in the language of art and literature.

But the language that really got a lot of interest at AWTA was my super-fast overview of Robert Fripp's ideas of guitar craft. (If you don't know Robert Fripp from his work with the band King Crimson, you might know him as the composer of all of the new sounds in Windows Vista.)

I wanted to find and cite some short examples of his work here, hoping the discussion would continue around Fripp's language based in musical practice.

There are any number of fine software practitioners discussing software craft, but it's often a linear discussion. I have yet to run across much discussion of how to acquire, reinforce, and advance software craft. People agree it should be done, but when we discuss the details, there is much waving of hands.

Fripp, on the other hand, has at hand a rich, intellectually rigorous, logically consistent language, with a 20-year history of exercise and teaching, describing the practice of guitar craft that is fully prepared to inform our discussion of software craft.

The key is discipline.

Here is the beginning of "A Preface to Guitar Craft". As you read it, substitute "programmer" for "musician" and "software" for "music":

The musician acquires craft in order to operate in the world. It is the patterning of information, function and feeling which brings together the world of music and sound, and enables the musician to perform to an audience. These patterns can be expressed in a series of instructions, manuals, techniques and principles of working.

This is the visible side of craft, and prepares the musician for performance. It is generally referred to as technique. The greater the technique, the less it is apparent.

When the invisible side of music craft presents itself, the apprentice sees directly for themself what is actually involved within the act of music, and their concern for technique per se is placed in perspective.

The invisible side of craftsmanship is how we conduct ourselves within the process of acquiring craft, and how we act on behalf of the creative impulse expressing itself through music. In time, this becomes a personal discipline. With discipline, we are able to make choices and act upon them. That is, we become effectual in music.

or this:

These are ten important principles for the practice of craft:

Act from principle.
Begin where you are.
Define your aim simply, clearly, briefly.
Establish the possible and move gradually towards the impossible.
Exercise commitment, and all the rules change.
Honor necessity.
Honor sufficiency.
Offer no violence.
Suffer cheerfully.
Take our work seriously, but not solemnly.

Or this from an interview on emusician.com:

Guitar craft is a discipline; the discipline is the way of craft.

This is the part that is missing from the discussion of software craft: the discipline. It's something I'll be working on.

Monday, January 08, 2007

what is "automated testing"?

Apropos of a recent discussion on the software-testing mail list, I was reminded of reading James Bach's Agile Test Automation presentation for the first time. It contains this one great page that says that test automation is:

any use of tools to support testing.

Until that time I had held the idea that test automation was closely related to manual tests. I was familiar with data-driven and keyword-driven test frameworks, and I was familiar with tests that relied on coded parts to run, but I still had this idea that there was a necessary connection between manual tests and automated tests. I had this idea that proper automated testing was simply where the machine took over and emulated human actions that were expensive or boring.

That way, of course, lies madness. And expense.

It was reading this particular presentation that really lit the light bulb and got me thinking about all the other things that testers do, and all kinds of other ways to approach test automation. Here are some things I've done as a test automator since I read Bach's presentation that bear no resemblance to a test case or a test plan (in no particular order):

2nd party test server was mostly down, so I wrote a sockets-based server to impersonate it.
Script to collect and organize output for gigantic number of reconciliation reports.
Script to parse compiler logs for errors and warnings.
Script to deploy machine images over the network for test environments.
Linux-based file storage-retrieval-display system in all-Windows shop.
Script to parse entire code base and report all strings shown to users. (So humans could find typos.)
Script to reach into requirements-management-tool database and present requirements in sane form.
Various network traffic-sniffing scripts to get a close look at test data in the wild.
Script to compare file structure on different directories.
Script to compare table structure in different databases.
Script to emulate large numbers of clicks with interesting HTTP headers on the GET.
Scripts to install the nightly build.
Monkey-test scripts to click randomly on windows.
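As one concrete example from that list, the compiler-log script boils down to a few lines; the log format here is invented for illustration.

```ruby
# Invented compiler output standing in for the real log files.
log = <<LOG
main.c:10: warning: unused variable 'x'
main.c:42: error: expected ';' before 'return'
util.c:7: warning: implicit declaration of function 'frob'
LOG

# Tally errors and warnings so a human can see at a glance whether
# the build is worth testing.
counts = Hash.new(0)
log.each_line { |line| counts[$1] += 1 if line =~ /\b(error|warning)\b/ }
puts "errors=#{counts['error']} warnings=#{counts['warning']}"
```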

And here's more testing-like work I did as a test automator that was still mostly not about validation:

Emulated GUI layer to talk to and test underlying server code.
Gold-master big-bang test automation framework.
SOAP tests framework.
Watir scripts for really boring regression tests. (But I didn't emulate the user, instead I made the code easy to maintain and the output easy to read.)

And lots of other odd bits of reporting, manipulation, chores and interesting ideas.

I have a couple of conclusions from a few years of working like this. Or maybe they're just opinions.

1) Excellent testers should be able to address the filesystem, the network, and the database, as well as the UI.
2) Testing is a species of software development. Testers and devs are closer than they think.
3) Testing is a species of system administration, too.
4) Testing is a species of customer support, also.

Thursday, January 04, 2007

they're not really one typo apart

They're one typographic error apart in the IDE, not in the code.

Reasonable tests should make any problems stick out like a sore thumb.

(and thanks for the history)

Wednesday, January 03, 2007

the minimum number of tests

Elisabeth Hendrickson wrote about a case of complacency and lack of imagination in a tester she interviewed.

I agree with the gist of her argument, that good testers have a "knack" for considering myriad ways to approach any particular situation.

But I would also like to point out that just as a "tester who can’t think beyond the most mundane tests" is dangerous, so a tester who assigns equal importance to every possible test is just as dangerous, if not more so.

The set of all possible tests is always immense, and the amount of time to do testing is always limited. I like to encourage a knack for choosing the most critical tests. Sometimes this knack looks like intuition or "error guessing", but in those cases it is almost always, as Brian Marick put it in his review of Kaner's "Testing Computer Software", "past experience and an understanding of the frailties of human developers" applied well.

Intuition and error guessing take you so far, but there are other techniques to limit the number of tests to those most likely to find errors. The diagram here represents work I did some time ago. I was given a comprehensive set of tests for a particular set of functions. Running every test took a long time. In order to arrive at the smallest possible set of tests that exercised every aspect of the set of functions, I arranged a mind map such that each test whose test criteria overlapped the test criteria of another test was put farthest away from the center. Each test whose test criteria included multiple sets of test criteria was put closer to the center. In this way, I arrived at the smallest possible set of tests that covers at least some aspect of all of the set of functions under test.

It is not a comprehensive set of tests. Rather, it is the set of tests that, given the smallest amount of testing time, has the greatest chance of uncovering an error in the system.
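That selection exercise is essentially greedy set cover: keep choosing the test that covers the most still-uncovered criteria. A sketch, with invented tests and criteria:

```ruby
# Invented tests mapped to the criteria each one exercises.
tests = {
  't1' => [:login, :logout],
  't2' => [:login, :search, :export],
  't3' => [:export],
  't4' => [:search, :logout],
}

uncovered = tests.values.flatten.uniq
remaining = tests.dup
chosen    = []

# Greedily pick the test covering the most uncovered criteria until
# every criterion is exercised by at least one chosen test.
until uncovered.empty?
  name, criteria = remaining.max_by { |_, crit| (crit & uncovered).size }
  chosen << name
  uncovered -= criteria
  remaining.delete(name)
end

puts "minimal set: #{chosen.sort.join(', ')}"
```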

There are certainly echoes here of model-based testing, although I think this approach plays a little fast and loose with the rules there.

Regardless of whether the tester chooses a minimum number of tests through intuition or through analysis, it's a good skill to have.

And if you do encounter a tester who creates a smaller number of tests than you think is appropriate, you might want to check his reasoning to be certain that that tester is really making a mistake.