
where ideas come from

Today StickyMinds published my article with expanded descriptions of the "10 Frontiers for Software Testing" that I suggested as starting points for those interested in attending the second Writing About Testing conference.

Since I announced the CFP for the first WAT conference in October 2009, I have published several dozen articles on software and software testing.  (I actually lost count: it is well over thirty but fewer than fifty individual pieces.)

My friend Charley Baker asked me recently where I get the ideas for so many articles.   It is an interesting question, and worth answering:

The most important source of ideas is simply everyday work.  As I go about doing my job, it happens fairly often that a situation crops up that I think would be of general interest to the community of software testers and developers.  So I write it down and I make it public.  Articles about bugs, bug reports, test design, architecture, workflow, telecommuting, frameworks, and war stories all come from noticing the details of everyday work.

Here is the story of the very first software article I ever published:  I have been following Brian Marick's work for a long time now.  Brian used to be the editor of Better Software magazine, and he would occasionally solicit articles for the magazine on his blog.  In March 2004 Brian asked for submissions for a piece along the lines of "add(ing) a scripting language to her manual testing portfolio."  In particular, I recall that Brian wanted an article suitable for beginners, with an example of a testing problem that could only be solved by scripting a little program.

I had written book reviews before, but I had never published a piece about software.  I had just encountered a situation at work that was a perfect example of what Brian wanted.  I was working for a company that was switching from shipping whole custom-built servers to shipping installation CDs for COTS hardware.  The installation CDs contained more than 4000 files.  The switch was a little bumpy, and at one point we very nearly shipped an installation CD missing 4 critical files of the 4000.  I had been teaching myself Perl (so I was a beginner myself), and I wrote a little Perl script to recursively compare the contents of large directories, so that it would be easy to see if a few files had gone missing.  I described what I had done, Brian published it in Better Software, and in one of the highlights of my career as a writer, that article (with me as a character!) became the basis of the first example in Brian's book Everyday Scripting with Ruby.  (Get the book: it will make you a better coder, no matter your level of skill.)  The article was titled "Is Your Haystack Missing a Needle?"
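(For the curious: the heart of that script was nothing more than walking two directory trees and comparing the lists of files.  The original was in Perl and this is not it, but a rough sketch of the same idea in Ruby, the language of Brian's book, looks something like the following.  It takes a master directory and a candidate directory as arguments and reports whatever has gone missing.)

    require 'find'

    # List every regular file under root, as paths relative to root.
    def relative_files(root)
      files = []
      Find.find(root) do |path|
        files << path.sub(/\A#{Regexp.escape(root)}\/?/, '') if File.file?(path)
      end
      files
    end

    master, candidate = ARGV
    missing = relative_files(master) - relative_files(candidate)

    if missing.empty?
      puts "No files missing."
    else
      puts "Missing from #{candidate}:"
      missing.each { |name| puts "  #{name}" }
    end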

Another source of ideas for software articles comes from having some Very Large Idea that evolves over a long time.  At Bret Pettichord's Austin Workshop on Test Automation in 2007, in a moment of inspiration, I gave a five-minute lightning talk demonstrating an example of using the artistic language of critical theory (in particular, New Criticism) to evaluate the quality of a piece of software.  The talk got an enthusiastic reaction from the people in the room, mixed with some skepticism as I recall.  It struck me at the time as being an odd idea, but the more I considered it, the more it made sense.  I wrote a long paper on the subject and submitted the paper to the CAST 2008 conference, but it was rejected.  I published it on my blog, and I still refer to it now and then.  My thinking on the subject has matured and expanded since then, so if you'd like to see the latest example, look at PragPub magazine for November of this year.  In 2008 I was a lonely voice on the subject.  Today I have colleagues, and it is nice to see others considering critical theory applied to software as well.

Finally, every once in a while, I manage to do something really unusual, something that will actually change people's minds about how they go about their work.  In 2006 I was working for Thoughtworks on an EAI project.  Our code base had great unit test coverage and integration test coverage, and as the QA guy, I was not finding defects in what we were creating.  But we had to interact with a legacy database, and we were often surprised by unusual or corrupt historical data.  I made it my business to expose as much of that bad data as I could.  I wrote a little Ruby script that, running in an infinite loop, would do quasi-random queries against the database, request the same data from the API we were building, and compare the results.  I found a significant number of issues this way, cases where the API we were building failed to handle data we never expected to find in the database.  To my knowledge, no one had ever published anything describing a situation like this.
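(The shape of that loop was dead simple.  This is not the original script, but a rough sketch of the idea looks something like the following, with the legacy database query, the API call, and the choice of a quasi-random record stubbed out as placeholders.)

    # Stand-ins for the real thing: a quasi-random choice of record id, a
    # direct query against the legacy database, and the same record fetched
    # through the API under test.
    def random_record_id
      rand(1..100_000)
    end

    def legacy_lookup(id)
      { id: id, status: 'unknown' }   # really: raw SQL against the legacy database
    end

    def api_lookup(id)
      { id: id, status: 'unknown' }   # really: the same record via the new API
    end

    # Run forever, comparing what the database holds with what the API returns.
    loop do
      id = random_record_id
      from_db  = legacy_lookup(id)
      from_api = api_lookup(id)

      unless from_db == from_api
        puts "MISMATCH for record #{id}:"
        puts "  legacy: #{from_db.inspect}"
        puts "  api:    #{from_api.inspect}"
      end
    end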

So I wrote a draft of an article on the subject and submitted it to Brian at Better Software.  Nearly all of my articles have been published with only minor editorial changes, but this draft was a hot mess.  Any reasonable editor would have rejected it outright.  What Brian did instead was to dissect the piece, pull out the essential concepts, and make diagrams showing what I had failed to describe well.  He sent me some diagrams, I made some corrections, he sent me some more diagrams.  Once the diagrams were correct, I re-wrote the piece from scratch as a description of Brian's diagrams.  I've always thought he should have had co-author credit for that piece. It was called "Old School Meets New Wave" and it had some really goofy artwork, a photo of a skinny punk kid with a pink mohawk overlaid on a black-and-white fifties dude with a fedora.

It ended up being one of the best articles of my career.  Some time later a tester named Paul Carvalho told me that he had created, and gotten funding for, a testing effort at his company based on the concepts in that article.  Sometimes writing really can change the world.  It has happened to me a couple of times since then, but that article was the first time I knew I had made a difference to someone else by writing about software.  (Paul, if you read this, I hope I didn't garble your story; it was a long time ago that we had that conversation.)

From about 1998 until the middle of the following decade, the field of software testing and software development experienced any number of radical shifts: the increased value placed on the tester's role because of Y2K testing, the rise of open source, the rise of the agile movement, the rise of dynamic programming languages, and more.  But by late 2009 my own sense was that the public discourse on software testing in particular had become stale and outdated.  I started the writing-about-testing mail list and the WAT conference in an attempt to encourage new voices and new ideas in the public discourse on software testing.  A little over a year later, I think we have had some influence.  Since the first WAT conference, Alan Page, Matt Heusser, and others have begun calling for some examination of what the future of software testing holds.

New ideas in our field come from three places.  They come from beginners who stumble upon some beautifully simple idea and are moved to tell the world about what they have done.   They come from people who think about the work on a really grand scale over a long period of time and build a body of work to support that grand idea.  And they come from people who truly make a breakthrough of some sort and are moved to explain that breakthrough to everyone.

So Charley, that is where my ideas come from.

(UPDATED: fixed garbled links)