Monday, December 20, 2010

where ideas come from

Today stickyminds published my article with expanded descriptions of the "10 Frontiers for Software Testing" that I suggested as starting points for those interested in attending the second Writing About Testing conference.

Since I announced the CFP for the first WAT conference in October 2009, I have published several dozen articles on software and software testing.  (I actually lost count: it is well over thirty but fewer than fifty individual pieces.)

My friend Charley Baker asked me recently where I get the ideas for so many articles.   It is an interesting question, and worth answering:

The most important source of ideas is simply everyday work.  As I go about doing my job, it happens fairly often that a situation crops up that I think would be of general interest to the community of software testers and developers.  So I write it down and I make it public.  Articles about bugs, bug reports, test design, architecture, workflow, telecommuting, frameworks, war stories all come from noticing the details of the everyday work.

Here is the story of the very first software article I ever published:  I have been following Brian Marick's work for a long time now.  Brian used to be the editor of Better Software magazine, and he would occasionally solicit articles for the magazine on his blog.  In March 2004 Brian asked for submissions for a piece along the lines of "add(ing) a scripting language to her manual testing portfolio."  In particular, I recall that Brian wanted an article suitable for beginners with an example of a testing problem that could only be solved by scripting a little program.

I had written book reviews before, but I had never published a piece about software.  I had just encountered a situation at work that was a perfect example of what Brian wanted.  I was working for a company that was switching from shipping whole custom-built servers to shipping installation CDs for COTS hardware.  The installation CDs contained more than 4000 files.  The switch was a little bumpy, and at one point we very nearly shipped an installation CD missing 4 critical files of the 4000.   I had been teaching myself Perl (so I was a beginner myself), and I wrote a little Perl script to recursively compare the contents of large directories, so that it would be easy to see whether a few files had gone missing.  I described what I had done, Brian published it in Better Software, and in one of the highlights of my career as a writer, that article (with me as a character!) became the basis of the first example in Brian's book Everyday Scripting with Ruby.  (Get the book:  it will make you a better coder, no matter your level of skill.)  The article was titled "Is Your Haystack Missing a Needle".
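The heart of that script is small.  My Perl original is long gone, but the same idea sketched in Ruby looks something like this (an illustration, not the script from the article):

    # List every file under a directory as a path relative to that
    # directory, then subtract one listing from the other.
    def relative_files(root)
      Dir.glob("**/*", base: root).reject { |p| File.directory?(File.join(root, p)) }
    end

    gold, candidate = ARGV
    missing = relative_files(gold) - relative_files(candidate)
    puts missing.empty? ? "nothing missing" : missing

Run it with the known-good tree and the candidate CD image as arguments, and any needle missing from the haystack falls right out.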

Another source of ideas for software articles comes from having some Very Large Idea that evolves over a long time.  At Bret Pettichord's Austin Workshop on Test Automation in 2007, in a moment of inspiration, I gave a five-minute lightning talk demonstrating an example of using the artistic language of critical theory (in particular, New Criticism) to evaluate the quality of a piece of software.  The talk got an enthusiastic reaction from the people in the room, mixed with some skepticism as I recall.   It struck me at the time as an odd idea, but the more I considered it, the more it made sense.  I wrote a long paper on the subject and submitted the paper to the CAST 2008 conference, but it was rejected.  I published it on my blog, and I still refer to it now and then.  My thinking on the subject has matured and expanded since then, so if you'd like to see the latest example, look at PragPub magazine for November of this year.  In 2008 I was a lonely voice on the subject.  Today I have colleagues; it is nice to see others considering critical theory applied to software as well.

Finally, every once in a while, I manage to do something really unusual, something that will actually change people's minds about how they go about their work.  In 2006 I was working for Thoughtworks on an EAI project.  Our code base had great unit test coverage and integration test coverage, and as the QA guy, I was not finding defects in what we were creating.  But we had to interact with a legacy database, and we were often surprised by unusual or corrupt historical data.  I made it my business to expose as much of that bad data as I could.  I wrote a little Ruby script that would do quasi-random queries in the database, request the same data from the API we were building, and compare the results, running in an infinite loop.  I found a significant number of issues in this way, where the API we were building failed to handle data we never expected to find in the database.  To my knowledge, no one had ever published anything describing a situation like this.
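In outline, the harness looked something like the following.  LegacyDb and NewApi are hypothetical stand-ins for the real database access code and API client, which I can't reproduce:

    # Quasi-random comparison loop, in outline.
    require 'logger'

    log = Logger.new('discrepancies.log')

    loop do
      id       = LegacyDb.random_record_id   # quasi-random probe into history
      expected = LegacyDb.fetch(id)          # what the legacy database holds
      actual   = NewApi.fetch(id)            # what our new API returns for it
      next if actual == expected
      log.warn "mismatch for #{id}: db=#{expected.inspect} api=#{actual.inspect}"
    end

Left running overnight, a loop like this samples corners of the historical data no one would think to specify in a test plan.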

So I wrote a draft of an article on the subject and submitted it to Brian at Better Software.  Nearly all of my articles have been published with only minor editorial changes, but this draft was a hot mess.  Any reasonable editor would have rejected it outright.  What Brian did instead was to dissect the piece, pull out the essential concepts, and make diagrams showing what I had failed to describe well.  He sent me some diagrams, I made some corrections, he sent me some more diagrams.  Once the diagrams were correct, I re-wrote the piece from scratch as a description of Brian's diagrams.  I've always thought he should have had co-author credit for that piece. It was called "Old School Meets New Wave" and it had some really goofy artwork, a photo of a skinny punk kid with a pink mohawk overlaid on a black-and-white fifties dude with a fedora.

It ended up being one of the best articles of my career.  Some time later a tester named Paul Carvalho told me that he had created and gotten funded a testing effort at his company based on the concepts in that article.  Sometimes writing really can change the world.  It has happened to me a couple of times since then, but that article was the first time I knew I had made a difference to someone else by writing about software.  (Paul, if you read this, I hope I didn't garble your story; it was a long time ago that we had that conversation.)

From about 1998 until the middle of the 2000s, the fields of software testing and software development experienced any number of radical shifts: the increased value placed on the tester's role because of Y2K work, the rise of open source, the rise of the agile movement, the rise of dynamic programming languages, and more.  But by late 2009 my own sense was that the public discourse on software testing in particular had become stale and outdated.  I started the writing-about-testing mail list and the WAT conference in an attempt to encourage new voices and new ideas in the public discourse on software testing.  A little over a year later, I think we have had some influence.  Since the first WAT conference, Alan Page, Matt Heusser, and others have begun calling for some examination of what the future of software testing holds.

New ideas in our field come from three places.  They come from beginners who stumble upon some beautifully simple idea and are moved to tell the world about what they have done.   They come from people who think about the work on a really grand scale over a long period of time and build a body of work to support that grand idea.  And they come from people who truly make a breakthrough of some sort and are moved to explain that breakthrough to everyone.

So Charley, that is where my ideas come from.

(UPDATED: fixed garbled links)

Friday, December 10, 2010

Writing About Testing participants

I took a poll of those interested so far in attending the second Writing About Testing peer conference on May 13 and 14, and found that nine people are very seriously considering attending.  This is what they are thinking of discussing:

Lisa Crispin (CO)  new and emerging approaches since the publication of Agile Testing
Alan Page (WA) a new approach to test design, also personas for tester career roles
Marlena Compton  (Australia) ongoing research in visualization of software project data
Dawn Cannan  (NC) executable specifications within larger testing projects
Sylvia Killinen (CO) practicing software craftsmanship
Elena Yatzeck (IL) implementing DSLs for use by non-programmers
Shmuel Gershon (Israel) diverse approaches to writing about testing using personas
Charley Baker (CO) large-scale, Enterprise automation systems, open source
Marisa Burt (CO) EAI in Enterprise systems

UPDATED:
Zeger Van Hese (Belgium) critical theory, etc.
Frank Cohen (CA) handling AJAX and Flex
Markus Gärtner (Germany) ATDD success stories, test estimation
Joey McAllister (CO) stickyminds.com
There is one spot open.  The (fairly arbitrary) deadline for submissions is January 1. 

Anyone who would like to join the writing-about-testing mail list may submit a request.

Tuesday, November 30, 2010

UI test smells: if() and for() and files

I read with interest Matt Archer's blog post entitled "How test automation with Selenium or Watir can fail".

He shows a couple of examples that, while perfectly valid, are poor sorts of tests to execute at the UI level.  Conveniently, Mr. Archer's tests are so well documented that it is possible to point exactly to where the smells come in.

The test in the first example describes a UI element that has only two possible states: either "two-man lift" or "one-man lift", depending on the weight of an item.  In a well-designed test for a well-designed UI, it should be possible for the test to validate that only those two states are available to the user of the UI, and then stop.  

But Mr. Archer's test actually reaches out to the system under test in order to open a file whose contents may be variable or even arbitrary, iterates over the contents of the file, and attempts to check that the proper value is displayed in the UI based on the contents of the file.  Mr. Archer himself notes a number of problems with this approach, but he fails to note the biggest problem:  this test is expensive.  The contents of the file could be large, the contents of the file could be corrupt, and since each entry generates only one of two states in the UI, all but two checks made by this test will be unnecessary. 

Mr. Archer goes on to note a number of ways that this test is fragile.  I suggest that the cases involving bad data in the source file are excellent charters for exploratory testing, but a poor idea for an automated test.  An automated UI test that simply checks that the two states are available without reference to any outside test data is perfectly adequate for this situation. 
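In Watir, such a test could be as small as the following sketch.  The element ids and weight values here are hypothetical; the point is that no file and no loop is involved:

    require 'watir'

    browser = Watir::Browser.new
    browser.goto 'http://example.test/item'

    browser.text_field(id: 'weight').set '100'   # a heavy item
    raise 'expected two-man lift' unless browser.div(id: 'lift-advice').text == 'two-man lift'

    browser.text_field(id: 'weight').set '1'     # a light item
    raise 'expected one-man lift' unless browser.div(id: 'lift-advice').text == 'one-man lift'

    browser.close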

In his second example, Mr. Archer once again iterates over a large number of source records in order to achieve very little.  Again, exercising the same UI elements 100 times using different source data is wasteful, since all the UI test should be doing is checking that the UI functions correctly.  However, there is an interesting twist in Mr. Archer's second example that he fails to notice.   If Mr. Archer were to randomly pull a single source record from his list of 100 records for each run of the test, he would get the coverage that he seems to desire for his 100 records over the course of many hundreds of individual runs of the test.  I took a similar approach in a test harness I once built for an API test framework, and I described that work in the June 2006 issue of Better Software magazine, in a piece called "Old School Meets New Wave". 
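The change is tiny.  Instead of iterating over the whole file, pull one record at random each run (the file name and the driver helper here are hypothetical):

    require 'csv'

    # One record per run, chosen at random. Over many runs this samples
    # the whole list without any single run looping 100 times.
    record = CSV.read('source_records.csv', headers: true).map(&:to_h).sample
    run_ui_test_with(record)   # hypothetical helper that drives the UI once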

Both of Mr. Archer's examples violate two well-known UI test design principles.  The first principle is called the "Testing Pyramid for Web Apps".   As far as I can tell, this pyramid was invented simultaneously and independently by Jason Huggins (inventor of Selenium) and Mike Cohn.  (Jason's pyramid image is around the web, but I nabbed it from here.)

Any test that reaches out to the file system or to a database belongs to the middle tier of business-logic functional tests.  And even then, most practitioners would probably use a mock rather than an actual file, depending on context.   While it is not always possible for UI tests to avoid the business-logic tier completely, UI tests should be focused on testing *only* the UI.  Loops and conditionals in UI tests are an indication that something is being tested that is not just part of the UI.  Business-logic tests should, to the greatest extent possible, be executed "behind the GUI".  From years of experience I can say with authority that UI tests that exercise business logic become both expensive to run and expensive to maintain, if they are maintainable at all.

The other principle violated by these examples is that highest-level tests should never have loops or conditionals.  The well-known test harness Fitnesse does not allow loops or conditionals in its own UI.  Whatever loops or conditionals may be required by the tests represented in the Fitnesse UI must be coded as part of the fixture for each test.  For a detailed discussion of this design pattern, see this post by Gojko Adzic: "How to implement loops in Fitnesse test fixtures".
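The shape of that pattern, sketched here in Ruby for brevity (the class and data are hypothetical): the test table asserts on a single answer, and whatever iteration is needed lives inside the fixture.

    class PriceListFixture
      def initialize(prices)
        @prices = prices
      end

      def all_prices_positive?
        @prices.all? { |price| price > 0 }   # the loop the test table never sees
      end
    end

    PriceListFixture.new([3, 7, 12]).all_prices_positive?   # => true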

Sunday, November 21, 2010

close reading/critical thinking

The last Weekend Testers (Australia/New Zealand) session was brilliant. Let me urge you to read Marlena Compton's report and the transcript of the session.

This sort of practical implementation of critical theory is long overdue in the testing community, and the WTANZ crew did a great job of using a well-known theoretical tool to analyze and dissect some real problems in some real work.

Compare what WTANZ did with Zeger Van Hese's recent demonstration of deconstruction.

This sort of work, bringing reputable and sophisticated critical theory to bear on actual testing and QA activity, is a field wide open, barely explored, and long overdue. 

May we see much more of it soon.

Tuesday, November 16, 2010

more on certs, more numbers

I noticed (thanks, Twitterverse) that there was an interview with Rex Black over on the uTest blog.  In that interview he reveals a very interesting number:

"...the ISTQB has issued over 160,000 certifications in the last ten years."

Using the numbers from my previous post:  if we assume that there are about 3,000,000 software testers in the world right now, then 160,000 certifications work out to about 5 certifications for every 100 software testers.

I would be willing to bet that there were about the same number of testers ten years ago:  Y2K was just over and the value of dedicated testers had been shown.   But as Alan Page and others have noted, there is a lot of turnover, a lot of churn, among those practicing software testing. 

So my numbers start to get a little sketchy here; I don't have anything to back them, so consider this a thought experiment.  As noted above, let's say that there were about 3 million testers a decade ago and there are still 3 million testers today.  Let's say half of today's testers have started since 2000.   That gives us a pool of 4.5 million testers who could have acquired a certification in the last decade: the 3 million working a decade ago plus 1.5 million newcomers.  That works out to roughly 3.5 certifications for every 100 testers who could have earned one.

I think it is an excellent bet that a significant fraction of those 160,000 certifications were issued in the UK, Australia, and New Zealand.   Just to make it even, call it about 1/3, put 60,000 certs in those regions, leaving 100,000 for the rest of the world.  That brings us down to about 2 certs per 100 testers.  
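For anyone who wants to check the napkin math, here it is in Ruby; the 1.5 million turnover figure is the assumption from the thought experiment above:

    certs   = 160_000
    testers = 3_000_000                      # rough worldwide headcount today
    pool    = testers + 1_500_000            # add a decade of turnover
    puts certs.fdiv(testers) * 100           # ~5.3 certs per 100 current testers
    puts certs.fdiv(pool) * 100              # ~3.6 per 100 in the decade's pool
    puts (certs - 60_000).fdiv(pool) * 100   # ~2.2 after setting aside UK/ANZ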

But that still seems high to me.  I might have missed something.  Regardless, it still looks like a pretty small market, and I'd bet the market has been shrinking a lot with the rise of agile adoption and the economic downturn.  

Thursday, November 11, 2010

an object of interest

I bought this recently at a guerilla art show:



Here it is hanging in my office:


The poster caught my eye because I've loved the Alice books all my life and I re-read them often. I am especially fond of the Tenniel illustrations, and the one for Jabberwocky is a favorite.

The poster also caught my eye because of the odd and interesting typeface. The story behind that typeface is fascinating. I asked the artist to send me that story in email so that I could have it written down:

The story is this:
Just south of where I grew up (near Green Bay, WI) is the Hamilton Wood Type Museum. A while back, I visited armed with a few sheets of printmaking paper with the goal of printing some or all of the Jabberwocky poem from some original wood type. Over the course of the 19th and 20th centuries Hamilton made wood type for advertisements and headlines and circuses, and had gone on to accumulate wood type from other manufacturers who had given in to market pressures or the eventual obsolescence of the letterpress industry. What remained when I visited were cases and cases of cabinets full of uncategorized type... roughly 1000 different typefaces and sizes. I spent the better part of a day just finding a typeface I liked, using a rather capricious method to determine "the one": the style of the lower case 'g'. I estimate the type to have been produced in the early 20th century, probably for about 20 years, if that. It is an obscure, unnamed typeface. I set the type but realized that by choosing according to the lower case 'g', I had picked a case that only let me set 3 lines of text. This was all that was left of this type in existence. So, I printed the top three lines first on a large flat bed cylinder press called a Vandercook 325G (incidentally I have the exact same model press in my shop here), disassembled the text, and composed the 4th line. When I returned to Colorado, I replicated the illustration from Through the Looking Glass and then added that to the print.
That's the story.
Enjoy
Dan


Zeger Van Hese is a Belgian software tester who, like me, is interested in critical theory and what application critical theory might have to the work of creating software. The other day he mentioned in passing a seminal work by Walter Benjamin, The Work of Art in the Age of Mechanical Reproduction, which I had not read in many years.

In the light of Benjamin's work, my poster is a strange object indeed. While it was created in a process of mechanical reproduction, it was created only once. The means to create it are lost in an anonymous bin in an obscure warehouse somewhere in Wisconsin. And even if someone were dedicated enough to find that one particular bin, not enough of this particular wood type remains to print even the four lines of Jabberwocky on the poster.

My poster would have been a strange item even for 1936, when Benjamin wrote about mechanical reproduction. But to have such a thing on my wall in 2010 is, for me, astonishing.

Saturday, November 06, 2010

XP, Scrum, Lean, and QA

Before I do this, two things up front: for one thing, I am a crappy programmer. I read code far better than I write it, and I read non-OO code better than I read OO code. Also, I am writing as someone who knows a lot about Quality Assurance and testing, and very little about the hands-on day-to-day work of modern programming. So here goes:

As a QA guy, I know this: long before Scrum and XP and the Agile Manifesto, people working in Computer Science and software development knew three things about software quality: having an institutional process for code review always improves software quality; having automated unit tests (for some value of "unit") with code coverage improves software quality; and having frequent, fast builds in a useful feedback loop improves software quality. Sorry, I don't have references handy, I read most of this stuff about a decade ago. Maybe some heavy CS people could leave references in the comments.

The XP practices simply institutionalize those three well-known practices and, for their time, dial them up to 11. Pair programming is just a very efficient way to accomplish code review. TDD is just a very efficient way to accomplish unit test coverage. CI is just a very efficient way to get a fast feedback loop for the build.

There is nothing magical about these practices, and I have worked on agile teams that don't do pair programming but do awesome code review. I have worked on agile teams whose microtest suite runs heavily to integration tests instead of traditional unit tests. I have worked on agile teams with a dedicated build guy. I started my career working in an incredibly well-designed COBOL code base. No objects in sight. Had I known then what I know now about test automation, I could have written an awesome unit/integration test framework for that system. The XP practices themselves are not sacred. The principles behind those practices are.

But the XP practices themselves are just a small piece of having a successful agile team. In musical terms, they are the equivalent of knowing scales and chords: basic technical practices for getting along in the business. And of course they are not strictly necessary: the Ramones and Tom Petty have only a basic grasp of the technical aspects of music, but they cranked out some monster hits. Put any of those guys in a jazz jam session or a symphony orchestra and they would be completely lost. Likewise, there is some nasty software out in the world that makes a lot of money.

I like Scrum, for a number of reasons. For one thing, it has an aesthetic appeal. The cycle of developing, then releasing, then holding a retrospective speaks to me strongly, not least because it maps closely to the performing-arts cycle of practice, perform, rehearse.

I also like Scrum because of its emphasis on human interaction rather than institutional process. Scrum is lightweight by design, and leaves much room for people to act as people with other people. Scrum favors mature, intelligent adults.

Finally, I like Scrum because it is a process derived directly from the actual practice of creating software. It is described in plain English and it relies on no special concepts. It was crafted out of whole cloth by good developers in a tough spot.

I dislike Lean/kanban by those same criteria. As a mature adult, I resent having any of my activities identified as "waste". I resent not having the end of an iteration to celebrate. I resent being treated as a station in a production chain.

Unlike Scrum, the Lean principles were not derived from the actual work of software development. They came from automobile manufacturing, and were overlaid on software development in what I consider to be a poor fit. Putting on my QA hat again, there are two other popular software development methodologies that came directly from manufacturing, and the state of those methodologies is instructive. One of them is ISO9000. The fatal flaw of ISO9000 is that once a process is in place, it becomes very difficult and expensive to change that process. This is fine in manufacturing, but it is death to a reasonable software development process. The other methodology from manufacturing is Six Sigma. Six Sigma is very expensive, and while it might yield information valuable to managers, it provides no benefit to those actually doing the day-to-day work of software development. I am not aware of any manufacturing processes shown conclusively to improve the hands-on work of software development.

XP and Scrum are not nearly enough to guarantee a successful software project. For a comparable situation, just because a band has a rehearsal schedule and some gigs does not guarantee that they will be international superstars. Brian Marick at one point talked a lot about four principles that also increase the chance of a successful software project: skill and discipline, ease and joy. I won't explain those here, interested readers can find that work themselves.

But beyond even skill, discipline, ease and joy, a successful software project requires that we as creators of the software reach out and interact with the world in a way that changes the lives of those who encounter our software. It is an awesome power. In some cases, we can make our users' lives a living hell. But it's a lot more fun to make everyone happy.

Friday, October 29, 2010

ignoring certification; with numbers

All of the questions about tester certification were answered many years ago. They exist and they cannot be made to unexist. The only remaining question on the subject is: how many tester certifications can be sold? And the answer to that question doesn't matter to anybody except the people selling the certifications.

A while ago on the writing-about-testing mail list we did a little exercise to come up with some back-of-the-napkin estimates about the number of software testers in the world. We used US Department of Labor data and also some other public information about software employment worldwide. We also had access to some privileged information about magazine subscriptions. In addition, a number of us have done serious work in social networking, and we have some analytical tools from that work to help estimate. Using all of that, we came up with a pretty consistent estimate that there are probably around 300,000 software testers in the US, and maybe 3 million in the whole world.

That is a pretty small market in which to sell tester certifications.

Elisabeth Hendrickson recently did a fascinating analysis of QA/testing job ads. According to her data, it is a good bet that 80% of the people doing modern software testing work in the US have programming skills of one sort or another.

Jason Huggins of Sauce Labs has been tracking job ads that mention browser automation tools. Jason notes a remarkable recent increase in the demand for Selenium skill. You can see the trend for some popular automation tools at indeed.com. The QTP vs. Selenium trend in job ads is fascinating, but looking closely, this graph indicates a general across-the-board increase in demand for technical skills in traditional UI-based software testing.

Finally, sorry I don't have a link handy, but I have seen a number of reports of a radical increase in the rate of adoption of agile practices among US companies of every size and description. And the agile whole-team approach to software development makes dedicated, siloed traditional V&V test departments irrelevant.

The existing tester certifications simply do not apply to this sort of work. Certification is becoming more and more useless to US testers, and to their employers as well.

I feel like I am pretty plugged in to the world-wide tester community and the world-wide agile community, and anecdotal evidence suggests that indeed, the market for tester certification in the US is very small. Again, this is anecdotal evidence, but the hot spots for certification seem to be the UK and Australia/New Zealand, possibly areas of Southeast Asia, possibly areas of Eastern Europe. Once more with the anecdotal evidence, but I would suggest that in political climates that favor a high degree of regulation of business practices, certification will be more popular.

So if we eliminate from our worldwide tester population of 3 million the majority of US testers and a significant fraction of the rest of the world as potential buyers of a tester certification, that leaves a pretty tiny market for tester certification.

I think we can say with some confidence that professional tester certification can safely be ignored by the vast majority of software testers. That said, if you are required to get a certification, or if you just want to get a certification, go ahead and do it. It won't hurt you, and at the very least, you'll learn how software was tested in 1996.

I think we can also safely say that any supposed controversy surrounding tester certification is overblown and can also be ignored.

Which suggests one more interesting question: if the supposed controversy over certification really is as trivial as these statistics indicate, then why does so much of the testosphere spend so much time agonizing over it?

I have a cynical answer to that, but I'll keep it to myself.


Update: made the links nice

Monday, October 25, 2010

Call for Participation: Second Writing About Testing peer conference

The Second Writing About Testing Conference: Frontiers for Software Testing

I am pleased to announce the call for papers for the Second Writing About Testing Conference, to be held May 13 and 14, 2011, in Durango, Colorado.

For more information about the original conference and the Writing About Testing mail list please see:
http://chrismcmahonsblog.blogspot.com/2010/02/writing-about-testing-listconf-update.html
http://chrismcmahonsblog.blogspot.com/2009/10/cfp-peer-conference-writing-about.html

Writing About Testing is a peer conference for those interested in influencing the public discourse on software testing and software development by writing and speaking on those subjects. The discussion revolves around blogging, giving presentations at conferences and user groups, and writing for established media outlets, both for pay and for other reasons. There will very likely be representatives from established media outlets attending. Having software writers and publishers talking to each other face to face is a unique aspect of WAT.

For the first WAT conference we asked only that participants be interested and that they had already written about software. The second WAT will be a little different.

For the second WAT we ask that applicants propose a talk (informal is fine, no slides required) of 30-45 minutes about some subject critical to their work as a tester, developer, designer, or business analyst, but which is not generally recognized as being part of such work. A list of suggestions for such talks is below. The purpose of this is to expand the practice of software beyond the current artificial boundaries of recognized software activity.

To that end, presentations on these subjects are not welcome, unless the presentation has obviously unusual aspects:

  • test heuristics/mnemonics
  • exploratory testing
  • classic test automation (record-and-play, automation pyramid, etc. Information on unusual approaches to test automation is welcome.)
  • Scrum/Lean/Context-Driven/whatever. In general, anything involving capital letters is not welcome.
  • certification (for or against)

In a nutshell: don't bore the other attendees with stuff that has been discussed to death for years. This conference is to discuss frontiers.

Given that the presentations will be on unusual subjects, there will be a minimal requirement for having published previous work. Any publicly available source of writing, for instance a blog, would qualify an attendee. Applicants with no publicly available writing at all will not be considered. Attendees are encouraged to write with ambition and daring after the conference ends.

Attendance will be limited to about fifteen people. There will be a nominal fee of $50.00 per person to help cover room rental, and lunch will be provided. A discounted rate at a convenient bed and breakfast hotel is available.

Durango has a lot to offer visitors. Conference attendees may wish to arrive early or stay late to take advantage of the nearby attractions: beautiful mountains to the north and east, desert sandstone canyons to the west and south. The steam train from Durango to Silverton and back is a fantastic experience, as is soaking in the hot springs nearby. Within a short drive are Mesa Verde National Park, Canyons of the Ancients National Monument, Monument Valley and the Navajo Reservation, Great Sand Dunes National Park, etc. Local opportunities for hiking, biking, and boating abound.

There are direct flights to Durango from Denver and from Phoenix. Many attendees will likely come from the Denver area, so carpooling from there may be possible.

To submit a proposal, either send a message to me at christopher dot mcmahon on gmail, or join the writing-about-testing mail list at http://groups.google.com/group/writing-about-testing and submit your proposal there.

The deadline for submissions is Jan 1.
Invitations to the conference to be sent Feb 1.
The conference itself will be May 13/14.

Here are some possible frontier subjects for presentations:

System Administration

There is a surge of interest in recent times in a concept called "DevOps". DevOps proposes an alliance between software developers and system administrators in order to create the best possible experience in production environments. Testers need to be a part of that conversation.

Data Visualization

Not only our applications, but the whole world around us generates incomprehensible amounts of data, and the only way to make sense of it all is to render that data in a visual or tactile fashion. Testers need to understand these technologies in the service of their work.

Frameworks/Mashups

Good test automation today happens at every level. A single framework may exercise the user interface, call REST or SOAP APIs, and reach into a database, all in the course of a single test suite. Myriad tools for such testing exist, and knowing how to get such tools to talk to each other for a particular purpose is becoming a critical skill for testers.

User Experience

Great strides have happened in user experience work in the last few years, and there are exciting advances on the horizon. Testers have largely ignored the conversation happening among user experience experts.

Web Design

New JavaScript libraries like jQuery are making things possible in browsers that were unimaginable just a few years ago. Flex, Flash, and Adobe AIR are making huge inroads into application UI design. HTML5 looms on the horizon. Testers need to know what is happening in this arena, and largely do not.

Web Services (REST/SOAP)

Twitter, Facebook, and the bleeding edge of web applications are no longer about the UI. Today it is all about the APIs, and the third party applications that use those APIs to bring killer experiences to users. Testers need to know how web services and APIs work.

Environments/VMs/Cloud Computing

Managing test environments has always been challenging. New cloud computing services in some ways make such work more challenging, but the reward is a vastly simplified process for provisioning test environments. This work needs public exposure.

Agile Methods

Agile methods work, but even today, no one knows exactly why. The explanations we have are frequently facile and often abused. Testers could be the ones to provide the well-considered explanations for the effectiveness of agile methods.

Process Work/Quality Assurance

QA has a bad reputation in the testing community that it does not deserve. I have said before on stickyminds.com and in Beautiful Testing that QA is not evil, that it is work that still needs doing, and that often testers are in a good position to provide quality assurance. Bring back real discussion about Quality Assurance.

Aesthetics/Artistic Performance

There is a wealth of knowledge available from disciplines within the Liberal Arts that apply directly to software development. Testers can help bring that knowledge over to the world of software development.

Sunday, October 03, 2010

One Year Writing for SearchSoftwareQuality.com

I generally do not post links to pay-wall or registration-wall sites, but today I am sincerely happy and proud to publish a link to SearchSoftwareQuality.com.

For each of the last twelve months, I have written at least two 1000-word articles for SSQ. In the last year, SSQ has published nearly forty individual pieces of mine, a book-size body of work.

I am sure some of those articles are better than others, but I wrote every one to the best of my ability with all sincerity, and I truly believe that every one of those articles contains at least one interesting idea intended to help people working in software testing and software development.

I would particularly like to thank my editors at SSQ, at first Jan Stafford, later Yvette Francino. The SSQ editorial staff is professional and efficient. Both Jan and Yvette have given me an enormous amount of freedom and encouragement over the last year, and it has been a real pleasure working with them both. I especially appreciate their tolerance on the few occasions when I pushed that freedom to the limit.

I would also like to thank the Writing About Testing mail list. A great number of people on that list have been immensely helpful, freely giving comments and constructive criticism, providing new ideas, and just being generally smart and encouraging human beings. In the very near future we will be announcing the CFP for the second Writing About Testing conference in the spring, which should bring even more new ideas and new voices into the public discussion of software development and testing.

Finally, I want to thank Matt Heusser specifically. Matt introduced me to SSQ a year ago. I would never have had this opportunity if it were not for his generosity.

Tuesday, August 31, 2010

not about testing: a bit of writing

I've been neglecting my blog, mostly because I have been doing a whole lot of professional freelance writing on the subject of software dev and test, and really enjoying it a lot.

A few months ago I also submitted a piece to the Mountain Gazette, one of my favorite magazines, available for free around the West. They always publish really good writing.

Mountain Gazette was soliciting pieces on the subject "My Favorite Mountain". To their surprise, they got more than 200 submissions, of which they could only publish 11. I submitted a piece, it was rejected, but I don't mind, I've been reading the issue, and there are some really great essays.

So since it isn't going to appear anywhere else, I figured I would publish it here:

----

I don't have very far to go to get to my favorite mountain. I go out
my front door and take a right, and I walk about a mile through my
neighborhood of mostly middle-class houses, some Victorian, some like
mine vintage 1930s-40s, a few more modern. I say "hello" to my
neighbors as I work my way a little uphill to the trailhead near the
electrical transformer station.

The way up the west side of the loop trail is kind of a slog, moving
from about 6500 feet of altitude to about 8200 feet through pine
forest and the occasional meadow, but every once in a while a mountain
biker comes barreling down from the top and catches some air on a nice
jump. That's fun to watch. I used to be pretty good on a mountain
bike, but the lure of adrenalin doesn't call so strong now. Now I get
a thrill holding one of the horned toads that seem so common on this
mountain but that I never find anywhere else.

Nearing the top I get peekaboo glimpses of the La Plata range to the
northwest, so I know I'm getting close. The forest opens up, and a
final climb takes me to the overlook where I can see the La Platas to
the northwest, the 14ers of the Weminuche Wilderness off in the
distance to the northeast, and all of the river valley laid out below.
There are always birds moving across the sheer cliff below my feet,
often crows, sometimes buzzards, once or twice eagles. Sometimes a
man-made glider works the thermals over my head.

My mountain isn't one of those tall craggy ones like on the Coors beer
label, it's an uplifted slab of shale and sandstone, and coming down
the broken edge of the slab is the best part of the walk. The view
gradually shifts from the north to the east, hiding the tall peaks but
revealing broken sandstone ridges marching off into the distance. The
college sits below one ridge, and my town is laid out along the banks
of the river moving downstream to the south, where the sandstone
canyons start.

When I finish the loop around my mountain, if I don't feel like going
straight home, just a couple blocks out of my way is the tap room for
one of the local breweries, a fine place to finish off a four-hour
walk on a nice afternoon on my favorite mountain.

Sunday, May 30, 2010

Writing About Testing wrapup

On May 20 and 21 some of the brightest people in the field of software testing met in Durango, Colorado for the first ever Writing About Testing conference. We participated in a diverse set of activities: formal presentations, ad-hoc demonstrations, collaborative exercises, lightning talks, and informal discussions of topics of interest that ranged from the role of media to finding the time to write.

I started my software testing career in the bad old days of the mid-1990s. Both Open Source software tools and agile methods were highly controversial at the time. And while many of us were doing amazing and innovative work, the entrenched culture of software development was highly skeptical that what we were accomplishing was valid, or even sane. I think there is a real danger of a return to those days, and I wanted to create a community where people working out on the edges of software creation could hone their ideas in a supportive community, and from what I saw at w-a-t, that community now exists.

Open Source and Agile both succeeded for three reasons: they fostered a laser focus on the technical aspects of software tools; created general support for communities of dedicated practitioners; and provided philosophical/theoretical frameworks within which to accomplish the work. And the information coming out of the Open Source and Agile communities was so valuable that the institutional trade media was forced first to pay attention, and then to participate actively in the promotion of those cultures.

At the Writing About Testing conference we discussed REST architectures and wiki-based test frameworks like Selenesse. (All three principals of the open source wiki-based test framework Selenesse were in the same room.) We discussed data visualization and the challenge of managing enormous datasets.

We discussed new ways of working being discovered and propagated from places like Agilistry and from within particular companies like Socialtext, 42lines, and others.

We discussed new ways to consider software users and consumers, and the implications of the increasingly common phenomenon of near-real-time interaction with those who enjoy and depend on our software.

We discussed what it means to actually do the work of software testing today, in the real world.

We discussed a lot of other stuff, too.

The most important thing I learned is that as software becomes more ubiquitous in the world, the work of software development is becoming radically diverse, as are software business models, as are the skills necessary to be successful in creating software. This has particular implications for software testing. Both the practice of software testing itself and the hiring of software testers are undergoing significant changes, with no end to the evolution in sight.

The software tester of the future will no longer do one thing in one way. The software tester of the future will be expert in some aspects of software creation. Testers will seek out teams that need someone with their particular set of skills and expertise, and teams will seek out people with particular sets of skills and experience to maximize the benefit to the users of their software. Some of the areas of expertise represented in the room at w-a-t:

  • deep database knowledge, framework programming, and exploratory testing
  • API and architecture expertise, user experience testing, and process knowledge
  • system administrator skill, scripting/development ability, and multibyte character processing knowledge
  • management experience, programming and architecture expertise
  • software security and software performance
  • data wrangling, visualization knowledge and deep experience in online communities
  • business expertise and business communication skills
  • Quality Assurance. As I've noted before in a number of places, QA is not evil.

Software testers of the future will invest in a range of skills and experience, and the teams that hire them will audition software testers based on their ability to use those skills and that experience to further the goals of those teams. Software testers who do only one thing in only one way will be relegated to the sidelines, doing an increasingly limited sort of work for a diminishing number of jobs.

It would not surprise me to see the term "software tester" itself gradually disappear over time. Instead, those of us who call ourselves "testers" will more and more say instead "I am an expert in X, Y, and Z, and I have a deep interest in A, B, and C. If that mix of skill and experience is what your team needs, then you need me to work on your team."

Those of us writing about testing face some interesting challenges. In the 90s the major communication channels were the trade publications and the research consultancies. Those organizations still swing a very big stick, but two trends seem very clear: for one thing, the cutting edge has moved away from the big institutional publications, out onto blogs, social networks, and loosely-organized communities of practice; at the same time, the major media have become more conservative, and are generally less likely to publish controversial or cutting-edge work. But that means that major media are caught in a bind, because as it becomes more attractive to publish highly original work outside the major media channels, the major media channels find themselves hungry for content. The entire situation is very fluid right now, and that provides remarkable opportunities for new voices in software to be recognized quickly.

It is an interesting question whether or not there will be a second Writing About Testing conference. Right now enthusiasm is high, but I wonder if a second conference would have as much impact as the first one did. For now I am postponing a decision on whether to pursue a second conference next year. I have not abandoned the project; over the next six months or so I will be talking with the original participants and with potential future participants to see if a second conference next year would be valuable to those of us working in software testing in the public arena. In hindsight, there are a few things I would do differently the second time around, and I suspect that I will get a lot of ideas from others as well.

On a personal note, I am immensely pleased and proud that the Writing About Testing conference and the community that sprang up around it have been so successful. I have invested a lot of energy in w-a-t over the last six months. After the conference ended, I went on a 5-day backpack trip in the remote canyon country of SE Utah to clear my head and reflect on it. I am fascinated by the ruins and artwork left by the Anasazi in the remote canyons of this part of the world, the most recent of which is about 800 years old, and the oldest of which I just saw is about 7000 years old. There is a mental phenomenon, known to people who make such trips, sometimes called "re-entry". After spending significant time in a very remote desert region contemplating the remains of a culture that thrived from 5000 BCE to 1300 CE, adjusting again to a world of streets and lights and computers can be jarring.

In the light of re-entry, Writing About Testing was a very good thing.

Wednesday, May 05, 2010

watch your language

For a number of years I've been writing about treating great software development as a very specialized subspecies of the performing arts.

Some time ago I reviewed a piece of writing from a software person inspired by the concept of artistic software, but who had no background in the arts at all. It showed: the most egregious error was that instead of using the term "performing arts", this person used the term "performance art". The rest of the piece was earnest but the author's lack of expertise (in art, not in software) was painfully obvious.

The performing arts are music, theater, and dance. Performance art, on the other hand, can be dangerous stuff.

But artistic software development is only a minor representative of a number of new concepts in the field bubbling madly just behind the zeitgeist. For example, methods of harnessing immense amounts of data in order to make them comprehensible to human beings are about to change all of our lives, both in software development and in the world at large. Professionals working in the field refer to this as "data visualization", or as "visualization" in the broader sense, which encompasses a wide variety of technical endeavor.

A diagram is not visualization, just as performance art is not the performing arts. To misuse such terms not only spreads ignorance and misconception, but is also a grave disservice to those experts actually working in such fields.

Consider a few terms that once pointed to specific concepts and practices, but which today are laughably devoid of meaning: "Enterprise"; "Web 2.0"; "Service Oriented Architecture"; and "Agile" is coming up fast.

If you plan to use a technical term, please be familiar with the concepts that underlie the term. If you (mis)use a technical term because you heard it somewhere and you think it sounds cool, you do a grave disservice to your colleagues actually working in those trenches.

Saturday, March 27, 2010

bad agile estimation

Depending on how you define the term, I have been on at least five and as many as seven agile software teams. Two were brilliant; two were poisonous; the rest were just flaky. A big part of the poison stems from not understanding how to do agile estimation.

This is part of a message that showed up on a mail list I lurk on:

I'm working with a team that does great work. They are skilled and
work well together.

They also average about 50% or less in meeting their sprint
commitments. And don't seem to mind.

"There's a lot to do we just didn't get to it all."
"We'll do that in the next sprint."
"Yeah, that's not working yet."

These are the kinds of statements during the sprint or in the retrospectives.

How do I help this team look at the problem to solve it, instead of
just living with it?


Since the list name has the word "Scrum" in it, I will assume this person is a Scrum Master.

The first thing to understand is that all estimates are wrong. But when we do estimation in an agile way, we find over time that, as we come to share a consistent idea of what a "point" means, our estimates are consistently wrong in the same way. This allows us to plan with a high degree of accuracy.

So if you have an agile team that consistently estimates 20 points per sprint and consistently achieves 10 points per sprint, then the capacity of the team is 10 points, and 10 points is the figure you need to use for planning purposes.
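This is the planning style sometimes called "yesterday's weather": plan from what the team actually delivered, not what it estimated. A sketch, with illustrative numbers:

    delivered = [9, 11, 10, 10, 12]   # points actually finished per sprint
    velocity  = delivered.sum / delivered.size
    puts "plan for about #{velocity} points next sprint"   # => 10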

The term "meeting their sprint commitments" bothers me a lot. For one thing, insisting that a team complete more stories than the capacity of the team can support is a well-known recipe for technical debt and poor quality. For another thing, it's not "their" sprint commitments, it is our sprint commitments. Finally, I object to describing this situation as "a problem" for the team to look at.

Remember what the job of the Scrum Master is? The Scrum Master has only one job on the team: to remove impediments. If there is a business reason to achieve a consistent velocity of 15 points instead of 10 points, then the Scrum Master should examine the situation for impediments to remove, not try to force the team to meet some sort of imaginary capacity in order to satisfy the requirements of what is essentially an old-style Gantt chart.

Thursday, March 25, 2010

Artful Making

I never read business books, I mean I NEVER read business books. But after Marlena Compton read my chapter in Beautiful Testing, she recommended that I read Artful Making by Rob Austin and Lee Devin, subtitled "What Managers Need to Know About How Artists Work".

I've been writing about creating artistic software for some time now, but with a copyright of 2003, this book pre-dates my endeavors and I was surprised not to have heard of it.

Austin and Devin are professors at Harvard, Austin of business and software, Devin of theater. Early in the book they recount how they began the conversation that led to writing the book:

We were surprised to discover common patterns and structures in our separate domains... Some recent ideas and methods in software development, especially in the so-called "agile" community, seemed almost identical to theater methods. As this became more obvious, an idea dawned on business professor Rob: These artists are much better at this than we are. (emphasis NOT mine)


They cite four qualities of artful making: Release, Collaboration, Ensemble, and Play. They address all of these qualities well except for Ensemble, of which more in a moment...

They devote a significant portion of the book to distinguishing knowledge work from industrial work, which is very much worth reading. Then they assuage managers' concerns about security, uncertainty, and fiscal responsibility.

I have three criticisms of the book. One is based purely on my own personal biases. Another is a gaping hole in their argument, one that does not get the attention it deserves. The third is more or less wishful thinking on my part.

First, this is not a book for practitioners. This is a book for executives who wander by the agile team room and wonder why it's so noisy and the floor is covered with index cards and there is a box of Lucky Charms on the table and the scrum master is wearing a viking hat. It does not tell you how to do artful making, it only tells you what artful making looks like.

Second, the book glosses over two very important points related to actual practice. The first is explicit: in the conclusion, discussing the quality of Ensemble, they say "An ensemble at work on a project is a group that exhibits the quality of Ensemble". They admit the tautology, but they have very little to say about how to foster or even recognize the quality of Ensemble in a group. The second point, which gets even less attention than Ensemble, is talent. It is impossible to succeed at artistic performance without a talented group of performers. I assume it is difficult for managers to acknowledge that their workers lack talent, and it is very difficult to define what makes one person talented and another not; but I found the lack of discussion of the talent of those doing the work somewhat disturbing.

Finally, and ultimately, the book does not go far enough. On page 40 there is a graphic with the title "Characteristics of Artful Making in Agile Software Development and Play Making" with a lot of common practices. What I would much rather see is a graphic entitled "Artistic Performance" listing common practices of Theater, Dance, Music, and Software Development. I'm going to keep working on that graphic.

Thursday, February 04, 2010

take responsibility for UX

I am really starting to dislike the term "User Experience", but I'll get back to that.

In the mid-90s I was a bass player in the acoustic music scene in the South, living in Atlanta. If you happen to know Atlanta, to give you some perspective, my band opened New Years Eve at Eddie's Attic in 1994, and headlined New Years Eve in 1995 and 1996.

Eddie's Attic was and still is one of the most important and influential clubs on the acoustic music circuit in the South. Also on that circuit is a club in Nashville called The Bluebird Cafe.

The Bluebird is interesting because it enforces a strict no-talking policy. If you talk to your companions at all during a performance at the Bluebird, you are asked to leave the room.

At one time back in the 90s there was an intense discussion among people on the acoustic music scene as to whether Eddie's should implement a no-talking policy like the Bluebird's. As far as I could tell, the musicians who advocated most strongly for such a policy were the mediocre performers (you really couldn't get onstage at Eddie's if you were outright bad). The less able performers were to command a stage and hold the attention of an audience, the more likely they were to support a no-talking policy.

At the time there was a low-circulation newspaper devoted to the Atlanta acoustic music scene that interviewed me on the subject. In that interview I said three things: first, that if you intend to get on stage in the first place, it is your job to command that stage and compel the audience to listen solely by means of your own talent; second, that if you consistently have talkative audiences that don't pay attention, then you should either work on improving your performance, or else stop performing at all; third, that a no-talking policy robs performers of valuable feedback during the course of the performance.

I dislike the term "User". I think the word "user" has bad connotations and associations. I think it is too easy to turn "users" into "lusers" in our own minds. I far prefer the term "audience" to describe those who consume our software. It is only inanimate objects that have users. Performers have an audience.

Given that preference, I think the use of personas as proxies for types of users may not be a very good practice. It seems far too easy to exclude valuable segments of the potential audience, and to miss valuable feedback, by limiting one's view to the particular personas under consideration.

I think a far more interesting and valid approach to UX work is to instrument our servers so that we can track in real time the activity of the largest audience we can muster, and to have that activity influence our development and delivery work as nearly instantaneously as possible. Doing that also lets us know when the audience is not paying attention. This is exactly the feedback loop that applause provides performers on stage.
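To make that concrete, here is a minimal sketch of the idea, assuming a Rack-based Ruby web application. The ActivityMonitor class and its log output are my own illustration, not any particular product:

    require 'rack'

    # A minimal sketch of server instrumentation: count requests per
    # path so that real audience activity is visible in near real time.
    # In a real system these counts would feed a dashboard or metrics
    # service rather than the error log.
    class ActivityMonitor
      def initialize(app)
        @app = app
        @hits = Hash.new(0)
        @mutex = Mutex.new
      end

      def call(env)
        path = env['PATH_INFO']
        count = @mutex.synchronize { @hits[path] += 1 }
        env['rack.errors'].puts("activity: #{path} seen #{count} times")
        @app.call(env)
      end
    end

    # Hypothetical usage in a config.ru:
    #   use ActivityMonitor
    #   run MyApplication.new

Even a crude counter like this gives the team a live view of which parts of the application the audience actually visits, and which parts they ignore.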

Monday, February 01, 2010

Writing About Testing list/conf update

The writing-about-testing mail list began in September 2009, and has already played a part in a remarkable number of achievements:

The following writers appeared in print for the first time or contracted to be published in print for the first time:

Abby Fichtner
Dawn Cannan
Catherine Powell
Lanette Creamer
Parimala Shankaraiah

Lanette Creamer and Matt Heusser collaborated on an article about test automation in waterfall and agile projects.

Alan Page, Matt Heusser, and Marlena Compton published the "Code Coverage Cage Match" collaboration. http://www.stpcollaborative.com/knowledge/538-heusser-v-page-code-coverage-cage-match

Matt Heusser started a book project "Testers at Work".

Adam Goucher signed a contract for a book on Selenium.

Fiona Charles edited the Women of Influence issue of Software Test and Performance, with contributions from many list members. http://www.stpcollaborative.com/magazine/year/2010

Yvette Francino landed an editorial position at SearchSoftwareQuality.com.

Dawn Cannan and Lisa Crispin collaborated on a piece for Agile Record #1. http://www.agilerecord.com

Dawn Cannan's first conference presentation ever was in Second Life. http://www.passionatetester.com/2010/01/welcome-to-virtual-agile-world.html

If you would like to join the list, contact me or any other member for information. If we don't know you or your writing, please include links to some public examples such as blog posts, public documentation, conference papers, or similar work when you do so.

The deadline to apply for the Writing About Testing conference is now past. The conference is completely full with 15 attendees. Any future applications will go onto a waiting list. The attendees will be:

Elisabeth Hendrickson (http://testobsessed.com/)
Lisa Crispin (http://lisacrispin.com/wordpress/)
Geordie Keitt (http://tester.geordiekeitt.com/)
Marlena Compton (http://marlenacompton.com/)
Rich Hand (http://www.stpcollaborative.com/)
Joey McAllister (http://www.stickyminds.com)
Marisa Seal (http://thetestingblog.com/)
Jon Hagar (http://www.swtesting.com/)
Ben Simo (http://www.questioningsoftware.com/)
Marisa Burt (http://www.burtconsultinginc.com)
Lanette Creamer (http://blog.testyredhead.com/)
Yvette Francino (http://www.yvettefrancino.com)
Rick Scott (http://rickscott.posterous.com/)
Dawn Cannan (http://www.passionatetester.com/)
Chris McMahon (http://chrismcmahonsblog.blogspot.com/)

Friday, January 01, 2010

looking back, looking forward

Looking Back

The world of software testing today is radically different than it was on this day a decade ago. On New Year's Day 2000 I had been a dedicated software tester for about three years. I was a leader on a team testing an application that provided location information to the dispatchers who handle 911 calls. I was intensely interested in the most progressive thinking about software testing available, because when we released a bug to production, someone could die because of it.

I remained interested in the most progressive thinking about software testing throughout the decade. Looking closely, we owe a vast debt to three people: James Bach, Bret Pettichord, and Brian Marick. If they didn't supply every breakthrough idea in software testing in the last decade, one of them was nearby when it happened.

There was a shot across the bow in 1996 when Bach published "Test Automation Snake Oil". This would be the opening salvo of a relentless assault on proprietary test tool vendors and on the intellectual bankruptcy of the approach to testing that such vendors were selling.

In 2001 Pettichord published the incendiary "Hey Vendors, Give Us Real Scripting Languages". These articles and others that followed were the beginnings of the big push not only for open source test tools, but also for encouraging testers to increase their skills to make the best use of such tools. The tools themselves were still some years away, but the conversation started here.

Meanwhile, in 2000, Bach and his brother Jonathan had published "Session Based Test Management" in a magazine called Software Testing and Quality Engineering, known today as Better Software. The ideas fueling Exploratory Testing had been around for some time at that point, but skepticism was widespread that ET was a worthwhile test approach. SBTM was the first treatment of a large-scale, measurable ET process. To my mind, this single article turned the tide in making ET a valid and acceptable practice.

Automation and ET are the touchpoints for how we work today, but at that time we had no guidance on when and how to use them. Then 2002 saw the publication of Lessons Learned in Software Testing by Cem Kaner, Bach, and Pettichord. A lot of us had read Dr. Kaner's earlier work, and Beizer, and quite a number of other books on software testing, but we found that those practices mapped poorly to our own experience. While Lessons Learned has gotten a lot of valid criticism over the years, that hodgepodge of techniques and approaches and theoretical frameworks was the first accurate description of what it was like to be a tester on the ground early in the 21st century.

One of the tenets of Lessons Learned is the idea of "no best practices". Shortly after the publication of Lessons Learned, Pettichord began defining the Four Schools of Software Testing, of which the Context-Driven school is fueled by ET and reinforced by Lessons Learned. Context-Driven testing is alive and well around the world today.

At this point we understood ET fairly well and the Context-Driven school of software testing was far and away the best match for our actual day-to-day working experience. But the lack of open-source test tools was galling. Dedicated testers were falling farther and farther behind developers in terms of the power of their tools.

In 2003 Pettichord and Brian Marick began teaching their 'Scripting for Testers' course. Marick had written a small time-clock application, and they used a library called "iec" created by Chris Morris to drive Internet Explorer and automate some tests of the application.

I downloaded SfT a number of times but could never get it to work properly. I begged several managers to let me attend SfT but I never got to go in those early days. Later I would teach SfT myself on three different occasions, at the STARWest conference, the STAREast conference, and at Agile2006.

In 2004 a programmer named Paul Rogers took the SfT code base and rewrote it from scratch. Pettichord and Rogers released the rewritten SfT as Web Application Testing in Ruby aka Watir. I was the first user of Watir. Watir was the first open source web-testing tool robust enough for actual production work. I was responsible for many of the early enhancement requests, particularly for frames and iframes. To my knowledge, Watir was the first tool in history with robust support for frames.
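For flavor, here is a minimal sketch of what driving a page through a frame looked like in early Watir; the URL, frame name, and element names here are hypothetical:

    require 'watir'

    # early Watir drove Internet Explorer directly
    browser = Watir::IE.new
    browser.goto('http://example.com/app')

    # address elements inside a named frame directly, rather than
    # scripting around the frame boundary by hand
    browser.frame(:name, 'main').text_field(:name, 'q').set('some search term')
    browser.frame(:name, 'main').button(:value, 'Search').click

Being able to treat an element inside a frame as a first-class target, instead of fighting the frame boundary, was exactly the kind of robustness that made Watir usable for production work.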

In the meantime, Marick had been one of the authors of the Agile Manifesto, a document that would eventually change our entire approach to software development.

Also, for all of the decade so far, Marick had been the technical editor at STQE/Better Software magazine. We would devour every issue, because Marick was publishing more of the most innovative, inspiring, and mind-bending material about software development than any other source. Marick would often solicit material for the magazine via his blog, and he published my first article in Better Software in 2004 when I answered one of those solicitations.

Pettichord had been hosting a small peer conference called the Austin Workshop on Test Automation for some time, and the 2005 edition was the first to feature Watir prominently. I attended that AWTA, where I met a number of people with whom I would later work on various other projects. I would attend AWTA again in 2007, one of the formative experiences of my last several years.

Pettichord went to work for Thoughtworks at about this time, and I would follow him there around 2006. There we met Jason Huggins, who was developing an in-house time-and-attendance application that had an innovative test harness. Pettichord would be instrumental in getting Huggins' test harness released by Thoughtworks as open source. The project was called Selenium.

Marick left the editorship of Better Software, Pettichord now works in private industry, and Bach has turned a lot of his effort to his memoir in recent times. But without the work of these three in this last decade, the world of software testing would be a very different place.

Looking Forward

I can't imagine what the next decade will bring for software testing. But I'll make a prediction: women will be the biggest innovators in the field.

Just as Bach's "Snake Oil" article was an indicator of things to come, I think Lisa Crispin's and Janet Gregory's Agile Testing book points the way to our future. Now in multiple printings and multiple languages, the book is a comprehensive statement about the state of the practice today, and has some hints about where we'll be going.

In recent months I've been part of a community of software testers who write publicly on the subject of software testing. A remarkable number of them are women, and a remarkable number are doing really original, ground-breaking work. Here are some of the women who will shape the next decade of software testing:

Marlena Compton is working with the visualization of large sets of data and with social networks.

Catherine Powell should publish her first piece, on software architecture, very soon.

Lanette Creamer has created a number of innovative approaches to software testing. She'll be making the rounds of conferences this year.

Elisabeth Hendrickson is an international superstar. She has just opened her own 'agile practice space' called Agilistry.

Marisa Seal and I have taken over Gojko Adzic's WebTest project that provides a portable, platform-independent bridge between Selenium and Fitnesse. We intend to rename the project Selenesse. Marisa recently became a committer on the Fitnesse project.