Sunday, May 30, 2010

Writing About Testing wrapup

On May 20 and 21, some of the brightest people in the field of software testing met in Durango, Colorado for the first-ever Writing About Testing conference. We participated in a diverse set of activities: formal presentations, ad-hoc demonstrations, collaborative exercises, lightning talks, and informal discussions of topics that ranged from the role of media to finding the time to write.

I started my software testing career in the bad old days of the mid-1990s. Both Open Source software tools and agile methods were highly controversial at the time. And while many of us were doing amazing and innovative work, the entrenched culture of software development was highly skeptical that what we were accomplishing was valid, or even sane. I think there is a real danger of a return to those days, and I wanted to create a place where people working out on the edges of software creation could hone their ideas in a supportive community. From what I saw at w-a-t, that community now exists.

Open Source and Agile both succeeded for three reasons: they fostered a laser focus on the technical aspects of software tools; created general support for communities of dedicated practitioners; and provided philosophical/theoretical frameworks within which to accomplish the work. And the information coming out of the Open Source and Agile communities was so valuable that the institutional trade media was forced first to pay attention, and then to participate actively in the promotion of those cultures.

At the Writing About Testing conference we discussed REST architectures and wiki-based test frameworks like Selenesse. (All three principals of the open source wiki-based test framework Selenesse were in the same room.) We discussed data visualization and the challenge of managing enormous datasets.

We discussed new ways of working being discovered and propagated from places like Agilistry and from within particular companies like Socialtext, 42lines, and others.

We discussed new ways to consider software users and consumers, and the implications of the increasingly common phenomenon of near-real-time interaction with those who enjoy and depend on our software.

We discussed what it means to actually do the work of software testing today, in the real world.

We discussed a lot of other stuff, too.

The most important thing I learned is that as software becomes more ubiquitous in the world, the work of software development is becoming radically diverse, as are software business models, as are the skills necessary to be successful in creating software. This has particular implications for software testing. Both the practice of software testing itself and the hiring of software testers are undergoing significant changes, with no end to the evolution in sight.

The software tester of the future will no longer do one thing in one way. The software tester of the future will be expert in some aspects of software creation. Testers will seek out teams that need someone with their particular set of skills and expertise, and teams will seek out people with particular sets of skills and experience to maximize the benefit to the users of their software. Some of the areas of expertise represented in the room at w-a-t:

  • deep database knowledge, framework programming, and exploratory testing
  • API and architecture expertise, user experience testing, and process knowledge
  • system administration skills, scripting/development ability, and multibyte character processing knowledge
  • management experience, programming and architecture expertise
  • software security and software performance
  • data wrangling, visualization knowledge and deep experience in online communities
  • business expertise and business communication skills
  • Quality Assurance. As I've noted before in a number of places, QA is not evil.

Software testers of the future will invest in a range of skills and experience, and the teams that hire them will audition software testers based on their ability to use those skills and that experience to further the goals of those teams. Software testers who do only one thing in only one way will be relegated to the sidelines, doing an increasingly limited sort of work for a diminishing number of jobs.

It would not surprise me to see the term "software tester" itself gradually disappear over time. Instead, those of us who call ourselves "testers" will more and more say instead "I am an expert in X, Y, and Z, and I have a deep interest in A, B, and C. If that mix of skill and experience is what your team needs, then you need me to work on your team."

Those of us writing about testing face some interesting challenges. In the 90s the major communication channels were the trade publications and the research consultancies. Those organizations still swing a very big stick, but two trends seem very clear: for one thing, the cutting edge has moved away from the big institutional publications, out onto blogs, social networks, and loosely-organized communities of practice; at the same time, the major media have become more conservative, and are generally less likely to publish controversial or cutting-edge work. But that means that major media are caught in a bind, because as it becomes more attractive to publish highly original work outside the major media channels, the major media channels find themselves hungry for content. The entire situation is very fluid right now, and that provides remarkable opportunities for new voices in software to be recognized quickly.

It is an interesting question whether or not there will be a second Writing About Testing conference. Right now enthusiasm is high, but I wonder if a second conference would have as much impact as the first one did. For now I am postponing a decision on whether to pursue a second conference next year. I have not abandoned the project; over the next six months or so I will be talking with the original participants and with potential future participants to see if a second conference next year would be valuable to those of us working in software testing in the public arena. In hindsight, there are a few things I would do differently the second time around, and I suspect that I will get a lot of ideas from others as well.

On a personal note, I am immensely pleased and proud that the Writing About Testing conference and the community that sprang up around it have been so successful. I have invested a lot of energy in w-a-t over the last six months. After the conference ended, I went on a five-day backpacking trip in the remote canyon country of SE Utah to clear my head and reflect on it. I am fascinated by the ruins and artwork left by the Anasazi in the remote canyons of this part of the world, the most recent of which are about 800 years old; the oldest I saw on this trip is about 7000 years old. There is a mental phenomenon, sometimes called "re-entry", known to people who make such trips. After spending significant time in a very remote desert region contemplating the remains of a culture that thrived from 5000 BCE to 1300 CE, adjusting again to a world of streets and lights and computers can be jarring.

In the light of re-entry, Writing About Testing was a very good thing.

Wednesday, May 05, 2010

watch your language

For a number of years I've been writing about treating great software development as a very specialized subspecies of the performing arts.

Some time ago I reviewed a piece of writing from a software person inspired by the concept of artistic software, but who had no background in the arts at all. It showed: the most egregious error was that instead of using the term "performing arts", this person used the term "performance art". The rest of the piece was earnest but the author's lack of expertise (in art, not in software) was painfully obvious.

The performing arts are music, theater, and dance. Performance art, on the other hand, can be dangerous stuff.

But artistic software development is only a minor representative of a number of new concepts in the field bubbling madly just behind the zeitgeist. For example, methods of harnessing immense amounts of data in order to make them comprehensible to human beings are about to change all of our lives, both in software development and in the world at large. Professionals working in the field refer to this as "data visualization", or as "visualization" in the broader sense, which encompasses a wide variety of technical endeavors.

A diagram is not visualization, just as performance art is not the performing arts. To misuse such terms not only spreads ignorance and misconception, but is also a grave disservice to those experts actually working in such fields.

Consider a few terms that once pointed to specific concepts and practices but which today are laughably devoid of meaning: "Enterprise"; "Web 2.0"; "Service Oriented Architecture". And "Agile" is coming up fast.

If you plan to use a technical term, please be familiar with the concepts that underlie the term. If you (mis)use a technical term because you heard it somewhere and you think it sounds cool, you do a grave disservice to your colleagues actually working in those trenches.