Monday, December 28, 2009

Agilistry, practice space for Agile software development

Elisabeth Hendrickson and her colleagues with Quality Tree Software have announced the opening of Agilistry, their 'immersive training space' in Pleasanton, California.

I've been following Elisabeth's announcements along the way to making this happen, and from what I can tell, the Agilistry facility is set up as a real agile working space... except conceived and designed from the ground up by some of the most intelligent, experienced agile practitioners in the world today.

When I think of "training space" for software, I think of a trainer behind a podium with a whiteboard and a projector and some handouts, with trainees sitting at tables facing the podium.

This isn't that. Agilistry isn't training space at all; Agilistry is *practice* space. It even says so on the web site: "Agilistry Studio; a practice space for Agile software development".

And I'll bet that not many people know how to use a practice space.

For beginners, practice space is expensive but very possibly worth it. Say for instance that management informs your development organization that next month you will be "going agile". They're going to dismantle your cubicles, bring in some tables and whiteboards and index cards and sticky notes, and they expect your productivity to soar.

As a beginner, your first responsibility is to do your research, improve your skills, set some reasonable goals, that sort of thing. But if you are going to be thrown into the soup anyway, it might be a good investment to get as good a start as you can. A beginner team at Agilistry won't be able to take advantage of (or even recognize) the more sophisticated aspects of what Agilistry is offering, but getting off on the right foot could very well mean the difference between success and failure. Having the Quality Tree staff available in such a well-designed space would be a huge boost to a beginner team. If nothing else, Agilistry can prevent beginners from having to rearrange the furniture every day until they get a good configuration.

Practice space is critical for intermediate groups. Intermediate groups have enough experience to have gotten some taste of success, but still need to adjust, and practice, and analyze, and adjust, and practice... Because intermediate groups have had some success, they are often under intense scrutiny. A practice space is a place where the group can go by themselves to work slowly, do a lot of analysis, make a lot of mistakes, and correct them, or at least protect themselves from making the same mistakes again and again. This is the biggest stumbling block I see for intermediate groups: so many of them retreat to a practice space only to make the same mistakes in the same way, sometimes without even recognizing that they are making mistakes. I see the guidance of the people from Quality Tree as the best value Agilistry offers intermediate groups.

Advanced groups have the skill and experience to learn new concepts and implement new practices instantaneously in the course of their work. For advanced groups, the value of practice space comes when one project ends and another begins. As long as the team is stable and the goals are well-known, an advanced group can implement new ideas on the fly, without really needing practice space so much. But when the goals are met and the project ends, and it comes time to start a new project or to gather a new team, having a separate practice space to repair to for planning and practice is invaluable.

Finally, a practice space provides a community. Groups move in and out, groups meet each other, groups exchange information about what they work on while they practice. Over time, a practice space can generate an entire "scene" of groups that know and respect each others' work. Knowing Elisabeth and Quality Tree, I have a feeling that will happen.

We musicians know all about practice space. Beginners have to rearrange the furniture in the garage. Intermediate musicians scrimp to rent a smelly soundproof room in a warehouse somewhere. A touring band can rent or borrow an empty club stage. Sting rents a fully-supplied medieval castle in France for the whole group, but only for a few weeks until the band goes on tour. I've rented every kind of practice space you can imagine, from dingy rooms carved out of warehouses to fully-equipped stages. Having a "practice space for Agile software development" fits right into my notions about how proper software development should be supported: the more the agile development environment resembles the ecosystem that supports the performing arts, the better performers that environment will produce.

Congratulations Elisabeth and Agilistry!

Saturday, December 12, 2009

Selenesse, née WebTest

Marisa Seal and I have officially taken over maintenance of Gojko Adzic's 'WebTest' project. If you want to know why, I wrote about it here.

Marisa is doing the coding (so far; I've never written any Java and would like to learn) and I'm being something like a Customer. I've used similar tools in the past, and I know what I want this tool to do.

Eventually we're going to need documentation for our first release (0.9?), so it seems like a good idea to write down what we're doing, and this seems like a fine place to do it.

The first thing is that we want to change the name of the project to avoid confusion with the other tool of the same name, Canoo WebTest. My favorite so far is 'Selenesse'. Not only does it give a sense of the bridging nature of the tool, it is also very close to 'Selenese', the name of the high-level Selenium language; and since Selenium borrowed a lot of structure from FitNesse early on, it's kind of poetic to bring it back around like that.

Marisa wants to host the project on GitHub rather than SourceForge. I've used them both, so that switch will probably happen. This has the added benefit of leaving the old project on SourceForge if anyone ever has a use for it.

We're going to change the namespace so that it no longer uses "neuri", the name of Mr. Adzic's own company.

The project will support SLIM and scenario tables from the beginning. I'm really liking scenario tables.

The project will have better support for the convenience methods that Selenium IDE records. For instance, we've implemented support for clickAndWait, with more such methods coming.

The project will have a new approach to methods with two arguments. Because of how SLIM is built, two-argument methods such as type() or select(), which are normally spelled like this (here "locator" and "value" stand in for the real arguments)

| type | locator | value |
| select | locator | value |

are implemented as

| type | locator | in | value |
| select | locator | in | value |
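
As an aside, that spelling works nicely with the scenario tables I mentioned above. Here is a sketch of what a scenario table might look like in Selenesse (the locators and the scenario name are invented for this example):

| scenario | search for | term |
| type | id=q | in | @term |
| clickAndWait | id=btnG |

Any script table can then call | search for | pickaxe | as a single step.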

In the course of working on Selenesse, Marisa has also become a committer on the FitNesse project, so eventually we can support both styles of 2-argument commands.

One big change we're making is to have all Se commands return boolean values to FitNesse. (This is what makes FitNesse turn a line red or green; otherwise, if a command fails, FitNesse reports an exception rather than a test failure.) At the moment, WebTest returns boolean values only for explicit assertions such as verifyFoo or waitForFoo. I want to emulate SWAT and WikiTest and have every test step return a boolean so that FitNesse records each test step as a pass or a fail. I have a long justification for this that I've written about elsewhere.

But we hit a snag with this. We found that we need the Selenium command check(). Unfortunately, "check" is a reserved word in FitNesse/SLIM. My suggestion here is to emulate WikiTest and Se-RC in Perl, which provide two versions of each method: a method that does not return a boolean, such as click() or check(), and a method with the same name with 'Ok' appended that *does* return a boolean, e.g. checkOk(), clickOk(). If I'm not mistaken, this scheme is available in Se-RC in Perl, so having both versions of each method available would add a certain amount of compatibility between Selenesse and (at least) Se-RC in Perl.
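
To illustrate the scheme (Selenesse itself is Java; this is just a sketch in Ruby with invented names, not the actual implementation):

# Two versions of each Selenium action: a plain version, and an 'Ok'
# version that reports success or failure as a boolean. In Java the
# names would be click() and clickOk(); checkOk() also sidesteps the
# reserved word "check".
class SeleniumBridge
  def initialize(selenium)
    @selenium = selenium
  end

  # plain version: perform the action; a failure raises an exception
  def click(locator)
    @selenium.click(locator)
  end

  # 'Ok' version: same action, but return true or false so the
  # framework can mark the step green or red
  def click_ok(locator)
    @selenium.click(locator)
    true
  rescue StandardError
    false
  end
end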

At the moment all our work has been in Java, but already we have interest from a .NET user who may join the project also. The Java work is going to take priority, because we need that for our real job, but .NET support should come along eventually.

Finally, I have to acknowledge my own deep respect and gratitude for the people at Socialtext and at Ultimate Software who built the WikiTest framework and the SWAT tool. I've been an expert user of both, and Selenesse will be borrowing the best features of both projects.

Selenesse, though, will be far more portable, with far fewer dependencies than either SWAT or WikiTest.

Tuesday, December 01, 2009

telecommuting policy

Many of the members of the writing-about-testing group are also experienced telecommuters. It turns out that a number of us have had bad experiences when telecommuting for organizations that lack a sane and rational telecommuting policy.


So we wrote one. The following document has been through more than 80 revisions, contains text contributed by 5 separate authors, and reflects the comments of even more participants. We hope you find this generic telecommuting policy useful.

Telecommuting Policy

Some organizations want to offer telecommuting as an amenity for work/life balance. Others do not want to be limited to local talent when building exceptional teams. Still others want to use telecommuting to reduce the cost of infrastructure such as office space.

Regardless of the reason, any organization wishing to succeed with telecommuting must have a policy for handling remote working. An institutional standard must be in place such that everyone telecommuting or interacting with remote workers has had a similar experience using similar tools to accomplish similar work. This is such a policy.

Any team member who wishes will be allowed to telecommute if the whole team agrees to monitor and be responsive on all of the communication channels required by the organization for telecommuting teams. Team members doing solo or pairing work must keep the rest of the team apprised of their status, and should not be incommunicado for long periods of time.

Telecommuters who are not responsive on all of the required communication channels will be questioned and may be subject to having their telecommute privileges revoked or other disciplinary action.

Here we describe how to implement this simple policy.

For Telecommuting Beginners

As stated above, an effective telecommuting culture makes constant use of a set of communication channels. As a first step to implement this mode of working, an organization with little experience with telecommuting can have an expert supply compelling, collaborative training sessions, presented using teleworking technologies. These high-value training sessions provide a focus as the organization implements these policies and procedures for the first time, or consolidates such procedures already in place, and supplies incentive for everyone attending to participate and contribute. Although everyone will attend the training via the communication channels, interacting with the trainer, the material, and one another electronically as though they were telecommuting, training may take place locally and some or all attendees may be co-located. This enables smooth technical support and a controlled experience, affording natural opportunities for feedback on the policy.

Prepare the Team and Prepare the Environment

We strongly recommend that the organization put an IRC (Internet Relay Chat) server in place and have everyone in the organization know how to use it. The authors do not know of another messaging service that provides even a fraction of the features available in IRC. We recommend an institutional channel like "#social" for casual messages like "how was your weekend", as well as institutional channels for groups that collaborate closely, for instance "#dev" or "#my_team".

Before the remote training session begins, the following equipment must be in place for every member of the team:

  • A working headset
  • A VNC server and client for desktop sharing (1)
  • A working instance of Skype, or other VOIP service that provides conference-calling ability (2)

Agree to Participate

Every member of the team must agree to participate in all of the communication channels required by remote workers. During business hours, every member of the team must be monitoring at least the following channels and be prepared to answer on them quickly when appropriate:

  • email
  • instant messaging such as Yahoo IM, AIM, or similar service
  • VOIP/Skype
  • Yahoo IM video (3)
  • wiki (4)
  • IRC, in both #social and the team's channel (for instance #dev or #my_team)

We consider these channels to be a minimum standard for telecommuting. Telecommuters require high bandwidth communication channels to be effective. If they are disconnected from the ongoing work of others on the team, the lack of critical information will make them unable to contribute; the risk of wasted work is far higher than for those on co-located teams. Having a required set of standard institutional communication channels mitigates the risk of wasting the work of telecommuters.

Other channels, like a microblogging service or IRC interfaces to Twitter or Facebook, may also be appropriate. The authors are aware of at least one organization experimenting with virtual worlds like Second Life for distributed teams.

Ongoing Remote Working

Telecommuting Policy

The organization must have a telecommuting policy to refer to once the training sessions are finished. Since we have built an institutional standard, where all of the telecommuters who participated in the training have had a similar experience, using similar tools to accomplish similar work, we can now put such a policy in place.

Upon finishing any required training, any team member who wishes will be allowed to telecommute at will under the same requirements that were agreed to when working in the training session: the whole team must agree to monitor and be responsive on all of the communication channels required by the organization. Team members doing solo or pairing work must keep the rest of the team apprised of their status, and should not be incommunicado for long periods of time.

It should be understood that telecommuters who are not responsive on all of the channels mentioned above will be questioned and may be subject to having their telecommute privileges revoked or other disciplinary action.

During the training experience, we assume that some workers were on-site, interacting with the trainer (and with each other) using the channels described above. When such workers are remote themselves, there may be a need to have on-site workers available on their office phone extensions. The open source PBX project "Asterisk" supplies VOIP connections to office phone extensions, as do a number of commercial phone systems such as Avaya.

Meetings

Meetings should be structured such that remote employees can participate on an equal basis with those on-site. On-site meeting rooms must be equipped with adequate speakerphones. A multi-user IRC or IM 'back channel' where both on-site and remote participants may comment on the ongoing discussion in the meeting increases meeting bandwidth and makes it much easier for telecommuters to raise interruptions.

Travel

Travel requires a careful balancing act. Face-to-face time allows coworkers to connect on a human level and can be incredibly productive and beneficial. However, full-time telecommuters are telecommuting for a reason, and mandatory travel that inflicts hardship upon them harms the company in equal measure. Have a clear travel policy for telecommuting workers.

Appendix: Tools

(1) Desktop sharing is required, both for pair-programming and for demonstrating issues with software. Employees should be able to screen-share spontaneously, without the need to schedule a slot in advance. Tools such as Microsoft LiveMeeting or Webex or YuuGuu may substitute for the open source VNC. When working in a *nix environment, screen(1) is more productive than VNC for collaborating in a text-only environment; examples of when to use screen(1) include editing with Vim or doing system administration from the command line.

(2) The open source 'Asterisk' project provides a professional-level PBX with multiple conference-call and office numbers available. Proprietary tools such as Avaya or other systems may provide the same sorts of features. What is required is the ability to have a group of people speaking together without echo, distortion, or other interference. Employees should be able to hold impromptu conference calls without the need to book a conference extension in advance.

(3) Some people find video feeds of individuals unnecessary if those individuals are available on other channels. For remote workers interacting with an on-site team, having a camera in the team room is valuable so the remote workers can see who is at their desk, who is engaged in conversation, and who is otherwise occupied.

(4) Teams need to manage significant amounts of project documentation and other text. Storing documents on network drives and sharing documents by email is far too low-bandwidth for remote workers to negotiate effectively. As well, these media do not allow for effective multi-user collaboration, nor do they provide an easily accessible revision history for documents, nor do they provide a single authoritative place to access a given document. A full-featured wiki like Confluence, Socialtext, MediaWiki, or TWiki is required for the access to constantly changing information that remote workers need. In our experience, Microsoft SharePoint has proven subpar for this purpose.

Wednesday, November 18, 2009

ui test tool search

I now work for 42lines. I was hired along with Marisa Seal to start the QA/testing practice for a very small, totally distributed software shop that implements agile processes where they make sense. I like this place.

One of the things that 42lines wants to do is to begin UI-level test automation. I have a lot of experience doing this, but I've never done it from a standing start, so this was a great opportunity to get a good look at the state of the practice for UI-level test automation.

For the last 3 years or so I've been using keyword-driven test frameworks that use a wiki to manage test data. I like these wiki-based, table-based, keyword-driven frameworks a lot. I'm a little suspicious of the BDD-style frameworks like Cucumber and others based on rspec-like text interpretation. Anecdotal evidence suggests that analyzing the causes of failing tests within BDD-style frameworks is an onerous task; also, I suspect that since BDD-style frameworks map closely to story coverage, there ends up being a lot of duplication of test steps. With wiki/table/keyword frameworks, I know from experience that it is possible to design tests with great domain coverage and little duplication, and to scale to suites whose number of test steps runs to five figures.

I was really hoping to find something great in Ruby. I did find that Fit has been ported to a number of languages, including Ruby, but it's still a pretty primitive ecosystem out there. I didn't find any significant bridges between Fit and Watir or Fit and Selenium-RC in Ruby.

(For the record, it would have been great to be a contributing member of the Watir community again. As far as anyone on the Watir team can tell, I was the very first Watir user, and I still think it is a brilliant project.)

So the best-of-breed wiki-based test management system seems without a doubt to be FitNesse. I've used it and I like it, and Marisa is something of an expert in FitNesse and its accoutrements. FitNesse comes in two flavors: Java and C#. 42lines is a Java shop. Selenium Core is in Java. I got no time to fuss with JRuby or whatever the latest coolest implementation of Ruby in the JVM is. It makes every kind of sense to use Java/FitNesse and Java/Selenium for this project.

So FitNesse and Selenium in Java are beyond dispute the best-of-breed in their respective fields. I thought they went together like peanut butter and jelly, pizza and beer, and SURELY there would be a canonical reference implementation of an integration between these two awesome tools.

Sadly, I was mistaken.

There are to my knowledge three reasonable FitNesse+Selenium integration projects. The first one, fitnium, I dismissed mostly on the basis of rumors of bugs and rumors of lack of support. And in the official documentation, I found that the project was taking on features (Flex and Flash support) for which I had no need. Also, I object to abstracting/renaming Selenium methods for purely arbitrary reasons, as it makes the (very nice) Selenium documentation unusable in the tool.

That left us with a choice of StoryTestIQ and WebTest. We spiked them both.

StoryTestIQ has a beautiful UI, but it has too much magic. The killer flaw for us was that STIQ must run on the same host as the SUT. Even though it is possible to invoke *chrome/FF and *iehta/IE modes from the framework, the framework itself will not support cross-site operations. That's a non-starter for us.

And that left us with WebTest. The brilliant Gojko Adzic began the project but abandoned it about a year and a half ago. It has the feel of something partially-done. And yet in our spike with WebTest we found that we could not only invoke *chrome and *iehta modes, but we could also swap out the very latest FitNesse version and the very latest Selenium version for the antique versions shipped with WebTest, and the WebTest integration code still worked. To my mind, this speaks of good design.

Marisa and Gojko gave a joint presentation on DbFit at Agile2008. Gojko has been known to answer my emails. WebTest is a really obvious choice under the circumstances.

So as of today Marisa and I are administrators for the WebTest project on Sourceforge. I hesitate to say "owners", because neither of us knows much about Java (although Marisa certainly knows more than me) but the code for WebTest is so simple and readable that I am completely comfortable describing what it does and does not do.

Today I made some really nice demonstration tests in FitNesse driving Selenium via WebTest. We're going to make this work.

Thursday, November 05, 2009

backchannel

Kent Beck mentioned recently that he can't think of a situation for which Google Wave is indispensable. Jason Huggins left this amazing comment on Beck's blog post. Wave could very well be indispensable for implementing a "backchannel".

A backchannel is a multi-user space for commenting on some action being shared by everyone on the channel, whether it be listening to a speaker at a conference or attending a meeting during which people take turns speaking. A backchannel is perhaps most commonly implemented on IRC, but any reasonably robust multi-user messaging tool will serve.

But it's not only conference attendees that need a backchannel. Coming up I have a couple of articles recommending required communications channels for distributed teams and very large teams, wherein I strongly recommend having a backchannel during team meetings. It really is a critical core practice for distributed teams.

But even given an IRC backchannel, a lot of information gets lost during the interaction. The chat goes by very quickly, and it is impossible to type all the ideas one has into the backchannel (along with everyone else) fast enough or efficiently enough to capture everything that everyone thinks. Running a backchannel on a wiki is far too inefficient to be effective.

But Wave changes that. Not only can multiple users all contribute to the backchannel at once in real time, but after the meeting or the presentation, Wave provides a persistent, shareable, editable artifact that captures at least the basic premises of everyone's thoughts at the time. Any valuable aspects of this artifact may be edited and enhanced and otherwise manipulated after the fact.

I really want to try this out soon.

Thursday, October 15, 2009

when to use analogy

Analogy is a useful device when used to describe the things that we work on; but it is actively dangerous when used to describe our work itself.

When we're building software, it is useful to be able to say for example "it's like a library, where someone can make use of a feature and then stop using it so that someone else can use that feature" or "it's like a train, where there are just a few places where anyone can get on or where anyone can get off"

But when we use analogy to talk about our work itself, we invite misconception and misinterpretation. To say "writing software is like making a cake" invites exactly that kind of confusion. Does writing software involve flour and sugar? Is there frosting involved?

Of course someone who says such a thing *probably* means that to write software one must do certain steps in a certain order-- but it is far better to describe the actual steps ("red, green, refactor" is not an analogy) than to invoke analogy, because analogy always misleads to some extent, and we have known for many years that the cost of failed communication on software projects is extremely high.

Thursday, October 01, 2009

CFP: Peer conference "Writing About Testing" May 21/22 Durango CO

I am organizing a peer conference for people working in software testing and software development who write about their work in public. The conference will be organized LAWST-style, much like the recurring Austin Workshop on Test Automation or the Tester-Developer/Developer-Tester conference I helped organize in 2007.

The original proposal is on my blog here: http://chrismcmahonsblog.blogspot.com/2009/08/proposed-peer-conference-writing-about.html

There is significant demand for public information about software testing and software development and software process. This conference is for people who want to influence the public discourse on these subjects through public writing.

If you are interested in the subject, but not necessarily interested in the peer conference, there is a private mail list for writing about testing whose members include both writers and publishers. I (or any other member of the list) will add you to the mail list if you send us your email address. I am at christopher.mcmahon on gmail. Your email will be used for no other purpose than to join the list.

New voices are particularly encouraged to apply for the conference and to join the mail list. If we don't know you or your writing, please point out links to some public examples like blog posts, public documentation, conference papers, or similar work when you do so.

A number of prominent writers, and also new voices in the software testing community, have expressed interest in attending the conference. In an effort to break down some of the wall between writers and publishers, at least two publishers of software testing material have also expressed interest in attending. The conference will have a definite agile slant, but I don't want to exclude great writers working in other situations.

This peer conference is run like many others: send a position statement saying why you want to attend, and what relevant material you would be prepared to contribute to the conference. Examples of relevant material might be a presentation, interesting experience, or even a really good set of questions. People will be invited based on those position statements. As of now, I will be the one evaluating these, but I will be sharing them freely with others on the mail list whose judgment I trust. Even better, join the mail list and publish your position statement there.

I may or may not cap attendance at 15. If we get a significant number of great position statements, I might increase that number.

There will be no fee to attend, but you will be responsible for your own transportation and lodging. A discounted rate at a convenient bed and breakfast hotel within walking distance of the conference will be available for a limited number of rooms.

Here are the relevant dates:

Dec 1 2009
Deadline for position statements to be emailed to me or to be posted on the writing-about-testing mail list.

Feb 1 2010
Invitations to the conference sent. Waiting list established if necessary.

May 21 and 22 2010
Conference.

Durango has a lot to offer visitors. Attendees may want to arrive early and stay late. The Taste of Durango street festival is Sunday May 23, and we also plan to have an outdoor excursion that day. There will likely be a number of attendees driving from the Denver area Thursday May 20, so carpooling from Denver or Colorado Springs could be possible for the 6-8 hour drive. Direct flights to Durango are available from Denver, Phoenix, and Salt Lake City, or the Albuquerque airport is a 4 hour drive in a rental car. Let me know if you need more detailed information about getting to Durango or about the conference itself.

UPDATE: I have just been informed that there is no longer a direct flight to Durango from Salt Lake City. United, Frontier, and USAirways fly to Durango from Denver and Phoenix.

Tuesday, September 22, 2009

against kanban part 3

At the agile2009 conference, David Carlton and Brian Marick presented something they called "Idea Factory", an overview of three sociological systems by which the scientific community comes to regard a certain thing as fact.

Carlton's presentation was particularly intriguing. He cited work pointing out that disparate communities come together in what are called "trading spaces" in order to pool aspects of their expertise to create new work.

One of the signs of a "trading space" is a creole or pidgin language created by the participants in the trading space and used by them to negotiate among themselves. Such language may seem weird or impenetrable to outsiders, but participants wield it in order to accomplish things among themselves.

In contrast, Six Sigma, ISO9000, and now Lean/kanban, are imposed upon the agile development trading space. They are concepts and processes and ideas forged from managing factories, fitted (roughly and badly) into a culture that, when healthy, has no need of them. Just a quick glance at the hundred or so two-column translation tables in the book "Lean-Agile Pocket Guide for Scrum Teams" gives a clear indication that it takes serious effort to map Lean processes onto an indigenous Agile framework.

To my mind, the agile/XP/Scrum community conceived three brilliant "trading space" type of ideas around which our work revolves:

First, the idea that at a certain time, we will release a predictable and well-known set of running, tested features to production, over and over again, reliably, puts the emphasis on actually shipping things for people to use. This emphasis on succeeding within a time box consistently (and the subsequent celebration by both the agile team and by the customers!) is utterly critical to the success of agile teams, and Lean/kanban throws it out the window.

I had a number of Lean advocates tell me that "you can still have iterations with kanban", but the radical de-emphasis of the time box and of the ceremony of releasing features to the customer at the end of every scheduled development cycle is a huge step backward. In other writing I have suggested that software development without a regular, public, scheduled, celebrated release of features to production cannot be agile, regardless of what other processes are involved in such software development.

Second is the fact that the language to describe this work (iterations; big visible charts; information radiators; test-driven development; continuous integration; pair programming; automated testing; etc. etc. etc.) comes directly from those who practice the work and is indigenous to that work. These terms have no meaning outside the realm of agile software development. This insistence on our own creole or pidgin language gives the agile software development community a high degree of self-reliance, and a degree of resistance to corruption.

The language used by Lean/kanban software folk is strewn with corrupted Japanese words and phrases ripped from a generation of Japanese manufacturing. In my experience such words (kaizen; poka-yoke) are used more to intimidate and confuse the uninitiated than to actually further the work at hand. There are perfectly good words in the agile "trading space" to describe such concepts without resorting to arbitrarily adding terms from an entirely different culture, the culture of Japanese automobile manufacturing.

Third is that every Scrum process for every individual team contains the seeds of its own destruction. Every agile retrospective contains an opportunity to break the rules for a good cause. Want to scale your team up to three times the recommended size? Go for it. Want to replace pair programming with institutional code review, or replace CI and TDD with a hair-trigger build system plus dedicated testers expert in the debugger? Go for it.

There is no room for creativity on the factory floor. Any significant change to a manufacturing process involves significant retooling expense. In software we have no such expense. We can change our development processes right now, and then change them again an hour from now, and again an hour after that. The agile frameworks and tools we use give us fast feedback that tells us what works and what does not.

The Lean/kanban term that probably concerns me the most is "waste". On a factory floor, this term has a reasonably clear meaning. In a software development team this word has dangerous implications.

I am a dedicated telecommuter, and the best telecommuter situations I have worked in have provided proxies for the "water cooler": wikis where we can post photos of our gardens and of our vacations, messaging systems where we can talk about our music or even our religion. On the face of it, this is waste; in practice, it binds a team tightly together and increases productivity.

This healthy social interaction in the course of the work is a symptom of a healthy "trading space". Besides the pidgin/creole language created universally by devs + BAs + testers, there is a creole created within individual teams to describe their work and how it affects their lives.

The foreign terminology Lean/kanban brings with it presents a clear risk of corrupting the natural discourse of agile software development, and concepts like "waste" and "value stream mapping" pose a threat to the social interactions that make working in a healthy agile team such a productive, easy, and even joyous experience.

Monday, September 14, 2009

against kanban part 2

After conversations with various well-meaning folk at the Agile 2009 conference about some details of Lean and kanban, I remain more opposed to a general implementation of them than ever.

But I'm willing to grant that a single kanban call might have value. Liz Keogh had a good example: a group of agile coaching consultants, each with multiple clients, each client moving at a different rate, with more clients waiting for attention. The consultants track their work with cards on a board. (Note: tracking work with cards on a board is NOT kanban...)

When one consultant finishes with a client, that consultant adds a card to the board saying GIVE ME WORK.

As I understand it, this is the essence of kanban:  as Eric Willike told me, the essential image of kanban is a colored card in an empty basket, sent up the line to have the basket filled again.

But what a frighteningly low-bandwidth transaction!  This means of communication is for infants: GIMME followed by either YES or NO.  Put a bunch of infants in a room and you will certainly observe social behavior and even happy cooperation; but those social transactions cannot be very complex.  A social system based only on GIMME: OK/NO is a poor substitute for real adult communication.

But it is a perfectly adequate system for interacting with inanimate objects, such as on a factory floor.

I was asked to supply my favorite metaphor so we could apply lean/kanban to it, and of course I chose a musical performance:  a group of experts negotiate a set of things to perform at a certain time at a certain place for a paying audience who will be delighted by the performance.  At the arranged time and place, the group of performers demonstrate their expertise for the delighted audience. Then they do it again two weeks later.

My interlocutor then asked me to imagine how to deal with the transportation and the instruments and the catering and the sound and lights and the hotel, etc. etc. etc., implying that lean/kanban is an appropriate system with which to negotiate the larger ecosystem in which a software development team works.

Thinking it over later, though, I still think that not only is kanban a poor fit, it's a dangerous approach when applied to any transaction more complex than filling a basket with parts.

The essential problem is that arranging a performing arts tour or releasing successful iterations to production requires complex negotiations among people at every stage. Kanban defines only a single, infantile transaction: GIMME: OK/NO. If this single transaction is the only model you have for dealing with people, your project, whatever its nature, is doomed to fail.

On the other hand, if your communication model is complex, for example as they say in the Scrum world, if your story is a placeholder for a conversation rather than an empty basket to be filled, you are doing something-- but whatever it is, it is not kanban. If your cards on a board represent more than just Work In Progress but instead represent complex agreements among groups of people, you have stepped outside the lines of Lean and kanban and you are doing something altogether different.

The risk I perceive here is that organizations will not only attempt to reduce all of their software development practices to infantile GIMME:OK/NO transactions, but that they will pursue such oversimplifications to great expense in the name of "value stream mapping". This is an approach that blends the brain-dead process descriptions of ISO9000 with the outrageous process design expense of Six Sigma.

As I've noted many times, processes designed to manipulate physical objects map very poorly to software development.

Sunday, August 16, 2009

proposed: peer conference "Writing About Testing"

It seems like there is more demand than ever among the technical publications for information about software testing. Experience reports, theoretical pieces, tool documentation, all seem to be in great demand right now.

At the same time, I think the overall quality of what I've been reading about testing is declining, as people rush to meet that demand without adequate preparation or knowledge.

I'm interested enough to do the legwork to host such a peer conference, assuming the potential participants are willing to come to Durango. If not, I'd be willing to travel to attend such a conference, and to help facilitate, promote, or do whatever is needed to bring it off.

I envision soliciting participation from two groups: established writers with a history of publication with outlets like SQE, STP, InformIT, O'Reilly, etc. etc.; and also new voices looking to start writing for these sorts of publishers (I very much hope that there really are new voices writing about testing very soon). New voices should have some sort of verifiable writing ability, like a public weblog or conference papers, or some other reasonable means to evaluate their work.

As for content, I'd like an Open Space style event, but I envision themes like:

Ethics of public discourse: the difference between reporting and opinion; proper citation; avoiding false pretences and misunderstandings; criticising ethical lapses on the part of others.

Voice and style; tone and timbre. Academic/scientific writing versus colloquial and general writing. Grammar, structure, how to engage the reader.

Working the business: queries and pitching, invoicing, negotiating with publishers, working with editors and art departments.

Professional development: improving technical skills, and being able to write about it.

Impact of blogging, Twitter, social media that shows extensive writing, like Facebook and LinkedIn and such.

As for a date, I envision some time within the next year, but not before January 2010. Assuming it's in Durango, there would be a lot of amenities available. In winter, there is good skiing available at a couple of places nearby. Spring is a shoulder season, but it's a great time to go to the canyon country. Summer has river sports, biking, and mountain trips, all in town.

Durango has a new library with meeting facilities that should be adequate for a fairly small group. For a larger group, the local college makes its facilities available for minimal cost. This conference, like most such, would be a break-even proposition.

There are a number of alternative locations to Durango. SQE is based in FL, as is the company I work for. STP is mostly in Nashville, not far away. The Bay Area or Chicago or Portland or Austin all might be possible locations, as I believe there are facilities for such gatherings easily available. I'd be particularly interested in Denver, Phoenix, SLC or Albuquerque also, as they are only a short plane ride or a day's drive away.

If this idea interests you, please leave a comment here, send me email, or drop me a line on Twitter.

Saturday, July 18, 2009

a year of STPedia

This month marks one year of the column Matt Heusser and I co-write every month for Software Test and Performance magazine, "STPedia". It was great to mark our anniversary in a double-length column on "Agile Testing in Practice" in July's newly redesigned print version of ST&P and to help out with the launch of ST&P's redesigned portal, stpcollaborative. ST&P are changing things around for the better.

The format of the column is that we have a paragraph or two introducing the topic for the month, then we define dictionary-style a set of terms related to the topic, then usually we draw some sort of conclusion from how the terms are related. ST&P has a 6-month editorial calendar, so at this point we've treated many subjects twice. And really, how much is there to say about source control? From here on out we're going to range a little more widely in our choice of topic.

A little Inside Baseball, for those who are interested in how the writing gets done: we have no set process. Some of the time one of us will write the whole draft for the other to edit. For instance, September's column is mostly Matt, and October (at this point at least) is mostly me. Other times one of us will start the column, run out of steam, and the other will finish it. I remember one column in particular where I just could not come up with a conclusion that made sense. Matt rewrote the last half so that it all made sense. Sometimes one of us will write some of the definitions, and the other will handle what's left over. Matt has a wider range of expertise than me, so he likely wrote the really technical stuff. If you read the column month to month, you probably get about 50% of each of us over time.

The fact that we've been at this a year with no end in sight speaks to our mutual respect. Matt is a bright, honest, motivated, dedicated guy, and fun to work with. I think we still have a lot to say, and I hope you'll consider reading our stuff.

Saturday, July 11, 2009

against kanban

I've been somewhat alarmed by the enthusiasm with which the agile community has embraced Lean manufacturing principles, especially kanban. While I'm sure that kanban is effective in some circumstances, I am also convinced that it exposes software teams to real risk of impaired function.

Like many of my qa/tester colleagues who were working in the field in the 90s, I used to be a real process-head. I was part of projects that enthusiastically implemented Six Sigma, CMM, ISO9000, as well as other approaches like RUP, spiral development, etc. etc. We thought then that if only we could get the process right, we could crank out great software just like factories crank out cars and engineers crank out bridges.

Six Sigma and ISO9000 were lifted straight from manufacturing. CMM was the product of a lot of analysts who looked at some properties of a selected set of software projects.

Then the Agile movement came along, and with it new processes, the most significant of which turned out to be XP and Scrum together. These were revolutionary, because they were software development processes actually created from scratch by people who were very very good at creating software. Obviously, because these processes were created by smart people creating software, these processes map very well to how we do excellent software development.

We can argue about the details: is it better to have stable velocity or increasing velocity? should we estimate using points or using ideal hours? Regardless of these details, if your team measures velocity and uses velocity to plan iterations, and if your team always does pair programming, test-driven development, and continuous integration, then you will either release excellent software every iteration; or you will quickly expose and face the reasons why not.

Kanban doesn't make it necessary to face your inadequacies. I have two examples:

At one point my dev team had a chance to interview a self-appointed "kanban expert" on the subject of kanban and Lean. At this point the team had successfully shipped something like 25 of the last 27 iterations, so releasing iterations to production was routine, we had the whole process pretty well dialed in. And we had some killer questions for the kanban expert.

Under this questioning, it turned out that the kanban expert's team had been using a Scrum/XP process, but they blew iteration after iteration after iteration. From his answers to our questions, it seemed that his team lacked the skill and discipline to be consistently successful iteration after iteration. So one iteration, when they knew they'd blown it one more time, they simply stopped doing iterations or planning to release features. This is apparently when this person attached "kanban expert" to his resume.

Before we started releasing every iteration, my team had been using a kanban-like process. We would identify a set of features, then branch our trunk code for each feature under development. When a feature was finished, we would merge the feature branch back into the trunk code. This way we always had features being developed, and we could release trunk safely at any time.

We had a lot of very bright developers on the team. Some of them were maybe a little too bright. With no timebox pressure, we had people doing a lot of gold-plating, making architecture changes that had no relation to the feature being developed, making (cool!) experiments in the underlying code, and generally making merging not very fun.

Furthermore, it was an enormous headache for the marketing people. Without timeboxes or estimates, it was difficult to know when a particular feature would be available, which made it almost impossible to do effective marketing for the product. When we switched to planning iterations based on our velocity and our backlog, we made the marketing people very, very happy.

The fundamental difference between software development and manufacturing or engineering is that in software development we do not manipulate physical objects. We manipulate concepts, and algorithms, and process. Processes designed to optimize the handling of physical objects simply do not map to the work we do when we create software. I'm sympathetic to those who see estimation and iteration planning as waste, but I think the risk inherent in not having a mechanism to expose our inadequacies is too much risk to take, even for the best of teams. We have no warehouses to fill up or assembly lines to halt, we have only the contents of our heads and of our code repository. For software development, the only truly useful metric is: can you deliver running tested features on a consistent schedule? Iteration timeboxing exposes that metric in the most obvious way possible.

What we find in both literature and anecdote about the application of processes taken from manufacturing and engineering, puzzlingly, is that for every team for which the process is successful and helpful, there are other teams who implement the same process only to meet with failure.

What I suspect is that ANY process or methodology (that is not actively destructive), applied to a skilled, disciplined, high-functioning, motivated team, will succeed, regardless of the process. Likewise, any process applied to a low-functioning team will likely fail.

With an XP/Scrum sort of iterative process, we will know very soon exactly how and why we are failing. This seems to be not true of kanban or of any other software development process taken from the manufacturing arena.

Tuesday, July 07, 2009

validate order with regular expressions

Today I wrote some UI tests to validate that certain text appears on pages in a particular order. Since I've done this now in two different jobs using two different tools, I thought it might be of interest to others.

There are a couple of reasons you might want to do this. For example, a page might have a list of records sorted by certain aspects of the record. Here's a crude example:


|name |date|description|
|chris|july|tester |
|tracy|june|gardener |


One set of critical tests validates that name/date/description appear in the right order; that chris/july/tester appears in the right order; and that tracy/june/gardener appears in the right order.

Another critical test is that chris appears above tracy in the display.

The first challenge for the tester is to identify and be able to address the smallest part of the page that contains the text under consideration. Every tool does this in a slightly different way, but Firebug is your friend here.

The next challenge is to make sure that your particular framework knows how to interpret regular expressions. I have seen this in place in Selenium-RC and SWAT, and I have rolled my own in Watir. (But for all I know Watir might have this kind of thing automatically by now. It's been some time since I've used Watir on a real project.)

Now, to check the horizontal order, you construct a test that looks something like


|assert_text|id=text_container|name.+date.+description|
|assert_text|id=text_container|chris.+july.+tester |
|assert_text|id=text_container|tracy.+june.+gardener |


To test that chris appears before tracy on the page:


|assert_text|id=text_container|chris[\W\w]+tracy|


Regular expressions are for matching text. The first set of tests is fairly straightforward. The "." operator says "match any character". The "+" operator says "match 1 or more instances of whatever". This means that text like "namedate" would cause the test to fail (which is exactly what we want) but text like "name foo date" would pass (which might not be what we want but it's still a decent test, we could write a fancier regex if that happened to be critical).

The second test is a little trickier. The trouble is that by default "." does not match any newline characters. And since chris has to appear above tracy in the page, there will always be a newline involved. Most if not all languages have a way to tell the "." operator to match a newline, but some of them are really awkward. (I'm looking at you, .NET.)

So instead we make a little workaround: the \W means "match anything that's not a word" and "\w" means "match anything that is a word". "Word" is defined as any letters plus any numbers plus "_", but it doesn't matter. [\d\D] (for digits) would have worked as well, since the point is that one of those expressions will always match anything we encounter between one interesting string ("chris") and the other interesting string ("tracy").
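
A quick demonstration in Ruby, using the example data from above (most regex engines behave the same way about '.' and newlines):

# the page text we are matching against, with chris above tracy
page_text = "name date description\nchris july tester\ntracy june gardener"

p page_text =~ /chris.+july.+tester/ # => 22: matches; everything is on one line
p page_text =~ /chris.+tracy/        # => nil: '.' will not cross the newline
p page_text =~ /chris[\W\w]+tracy/   # => 22: [\W\w] happily eats the newline
p page_text =~ /chris.+tracy/m       # => 22: Ruby's /m flag lets '.' match newlines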

Friday, April 24, 2009

UI test framework design using Watir

I wrote a little toy test automation framework recently.

It's a keyword-driven framework, and the test data is kept in a CSV file. The test data look like this:

goto,http://www.google.com,,
text_field,name,q,pickaxe
button,name,btnG,click
match,Programming Ruby,text

This is the classic Google Search example from the Watir documentation.

I used CSV because it's the simplest implementation. This is, after all, a toy. In the real world this might be HTML tables or wiki tables or a spreadsheet. What we want is for anyone to be able to write a test quickly and easily after a little bit of training on FireBug or the IE Developer Toolbar. (Test design is a different issue. Let's not talk about that now.)

The framework grabs the test data and interprets each row in order. A simple dispatcher invokes the correct execution method depending on what the first element in the row of test data is. This has two advantages.

For one thing, there are a finite number of types of page elements to deal with, so our methods to manipulate them will be limited in number.

For another thing, we can write custom test fixtures using our own special keywords, and make our own little DSL, like I did with the final row above that starts with "match".

Reporting pass/fail status is left as an exercise for the reader.

This simple framework could easily be the start of a real test automation project. It's robust, it scales well, and it can easily be customized, improved, and refactored over time.

[UPDATE: the comment below about leaking memory is interesting. I didn't say it explicitly in the original post, but I think test data (CSV files) should represent fewer than about 200 individual steps. My vision is that this script is invoked for a series of CSV files all testing different paths through the application, and each CSV file is handled by a new process. Paths through the application with more than about 200 steps make it much harder to analyze failures. The fewer the number of test steps, the better.]
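
A minimal sketch of that kind of driver, assuming the framework script below is saved as example_test.rb and the CSV suites live in a suites directory (both names are invented for this example):

require 'fileutils'

# run each CSV suite in its own fresh ruby process
Dir.glob('suites/*.csv').sort.each do |csv|
  # the toy framework reads a fixed file name, so stage each suite there
  FileUtils.cp(csv, 'watir_keywords.csv')
  passed = system('ruby', 'example_test.rb') # exit status reflects pass/fail
  puts "#{csv}: #{passed ? 'passed' : 'FAILED'}"
end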



require 'test/unit'
require 'watir'

class ExampleTest < Test::Unit::TestCase

  # One method per keyword in the CSV test data.

  def goto(args)
    @ie.goto(args[1])
  end

  def text_field(args)
    # args: keyword, attribute to search by, attribute value, text to type
    @ie.text_field(:"#{args[1]}", args[2]).set(args[3])
  end

  def button(args)
    @ie.button(:"#{args[1]}", args[2]).click
  end

  def match(args)
    # escape the expected text so it is matched literally
    assert_match(Regexp.new(Regexp.escape(args[1])), @ie.text, "FAIL!")
  end

  def test_google
    @ie = Watir::IE.new

    # read the whole test-data file into an array of rows
    @command_array = []
    File.open('watir_keywords.csv') do |file|
      while line = file.gets
        @command_array << line
      end
    end

    # interpret each row in order, dispatching on the first element;
    # strip the trailing newline so the last argument comes through clean
    @command_array.each do |comm|
      args = comm.strip.split(',')

      # arguably 'case' would be nicer here than 'if', but this gets the job done
      if args[0] == 'goto'
        goto(args)
      elsif args[0] == 'text_field'
        text_field(args)
      elsif args[0] == 'button'
        button(args)
      elsif args[0] == 'match'
        match(args)
      end
    end
  end
end

Sunday, April 19, 2009

a really good bug report

I had occasion to use FireWatir recently and found a strange bit of behavior trying to manipulate a select_list element. I posted an executable bug report to the Watir-general mail list. (Please read the whole (short) discussion.)

I have not been working closely with Watir for some time, so I missed some obvious diagnostic paths. I got a reasonable explanation for the observed behavior, but even so I pushed it farther, wondering why I had not received a NoMethodError trying to do what I did.

Please note: at no time did I editorialize, criticize, or invent solutions. I merely discussed observed behavior with interested parties in disinterested tones. And upon examination, I had turned up not one but two different bugs.

I am pleased once again to have contributed to the Watir project. Furthermore, I think the discussion linked above is pretty close to a Platonic ideal of bug reports.

Sunday, April 05, 2009

at liberty

blogging is so old-school. Thanks again Jeff!

I have been amazed by and touched by and thankful for and proud of all the people who have sent nice messages, said nice things, and sent me leads since I was laid off from the best job I've had in a decade.

I am seeking a telecommuting software testing/QA job or related work. I have expert-level skills and experience, a shockingly good resume, and awesome references from some of the best software people in the world.

agile COBOL is no oxymoron

Dr. Brian Hanks of Fort Lewis College said on Twitter just now:

"Agile Cobol - the new oxymoron"

I have to disagree. In the late 90s I worked testing a life-critical 911 data-validation system written in COBOL and C. I was the tester when we migrated from key-sequenced data files (basically VSAM, or very close) to an SQL database (albeit one with no referential integrity-- we had to write that into the code).

When I joined the company, system releases were chaos. By the time I left, system releases were orderly, done on a regular basis, with great confidence. We evolved what was essentially an agile process to regularly ship running, tested features that our customers wanted.

But we broke every rule in the book (of the time) to do it. We had customers talking to developers all the time, we had sysadmins reviewing features before release, we had testers reading code and running debuggers, we (quietly) ignored test plans in 5-inch binders.

The Agile Manifesto validated this way of working. It was the first public acknowledgment that we were not alone in thinking that it took people working together all the time to release good software. Suddenly I had somewhere to point to say "See? Someone else thinks that working this way is a good idea!" As the Agile Manifesto became more widely known, any number of old-school mainframe people came out of the woodwork to talk about how they had been doing similar work for a long time. The Manifesto did not come into being from nowhere.

I want to talk about a few test techniques I used in a couple of COBOL code bases: creative analogs of modern techniques, built with old tools.

Human Unit Testing

My mainframe career was spent all on Tandem systems. Tandem had a GREAT debugger, and a GREAT proprietary scripting language. As a tester, the first thing I did whenever new code got checked in was to crank up the debugger, set a breakpoint at the new code, and step back and forth, modifying the values of variables, trying to cause a failure. When I did cause a failure, I would step back up to the system level and use the UI or the batch processes to try to generate data that would cause the same failure. Most of the time this was possible. Then I filed a bug report.

It is worth noting, now that the statute of limitations has probably run out: I was completely confident that our system would survive Y2K. Not because of the system test plan in the 5-inch binder (a worthless and stupid exercise required by someone in another part of the building whom we never talked to and that I ignored outright) but because I had personally examined the use of every instance of every date in every place in the entire code base by hand, in the debugger. That took significant time, but it was worth it.

Debugger + macros + scripting = automated unit testing

In hindsight I should have done a lot more of this.

Tandem's scripting language allowed you to start a program, call the debugger, and load something called a 'macro'. The macro would drive the debugger, allowing you to set breakpoints, modify and capture values during the course of the run.

I used to use this technique a lot to accomplish what Jonathan Kohl calls "Computer Assisted Testing". Use automation to accomplish the tedious steps of bringing the system to a state where something interesting is just about to happen, then let human beings take over.
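
If I were doing Computer Assisted Testing today in Ruby with Watir, a minimal sketch might look like this. The URL, field names, and credentials are all made up; the point is the shape of the thing: automation does the tedious setup, then stops and hands the browser to a human.

require 'watir'

# automation handles the tedious setup steps...
ie = Watir::IE.new
ie.goto('http://example.com/login')            # hypothetical application
ie.text_field(:name, 'username').set('tester')
ie.text_field(:name, 'password').set('secret')
ie.button(:name, 'login').click
ie.goto('http://example.com/orders/pending')   # the interesting state

# ...then the human takes over for the interesting part
puts "Browser is parked at the pending-orders page. Explore away; press Enter when done."
$stdin.gets
ie.close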

If I knew then what I know now, I could have built some awesome standalone regression test suites using this technique.

Golden Snapshot

I worked in another COBOL code base that was a disaster. Actually, the whole company was a mess. To give a sense of how bad it was: the ball of mud was far too big for the compiler. Whenever the size of the main procedure had to be cut, the devs didn't peel off a module or a functional area; they just cut lines 5000-6000 or whatever and called it a day. Utter chaos.

And improving the code base would have threatened the jobs of the most senior devs. The best developers in the company were given impossible assignments, sabotaged along the way, blamed for the failure, and then demoted. Not nice.

The code was so bad it literally could not be automated at a unit or integration level. So I did two things.

First, we designed a "smoke test" set of manual test cases. We had the best tester in the shop run each test very carefully by hand. At the end of each reference test run we took a snapshot of all the system files and stored them in directories named for each test case.

For subsequent manual runs, the tester would execute the steps, then (figuratively) press a button that caused a diff of the saved files vs. the current system files. Any discrepancy between the two sets of files was cause to suspect a bug.
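
In a modern scripting language, that "button press" can be very small. Here is a sketch in Ruby (the directory names are hypothetical): compare every golden file for a test case against the current system files and report anything that differs.

# compare the current system files against the golden snapshot for one test case;
# any difference is cause to suspect a bug (or a legitimate change to review)
def diff_against_snapshot(test_case, golden_root, system_root)
  suspects = []
  Dir.glob(File.join(golden_root, test_case, '*')).each do |golden|
    current = File.join(system_root, File.basename(golden))
    if !File.exist?(current)
      suspects << "#{current}: missing"
    elsif File.read(golden) != File.read(current)
      suspects << "#{current}: differs from snapshot"
    end
  end
  suspects
end

puts diff_against_snapshot('smoke_test_01', 'golden_snapshots', '/app/data')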

In step 2 of this system, I used a tool called VersaTest from a company called Ascert to begin automating the stuff that the manual testers did. This tool stood in for the UI, allowing me to drive the system guts directly. To this day, I have never had a better customer support experience than I did with Ascert. I was calling them constantly, and one of their customer support people became a real friend I still speak with now and then.

This is pretty much a last-resort automation option. For one thing, it is extremely expensive up front. For another thing, debugging failures is depressingly hard.

Stubs

I was teaching myself Perl at this time also, which came in really handy on several occasions.

At one point I was assigned to a small side project. One of the better devs had run afoul of his peers again and been sentenced to work on this not-sexy side project. This dev is responsible for one of my favorite quotes of all time. After we'd been working together for a while and the project was starting to flesh out a bit, I noticed that his COBOL was quite a lot better than the code in the main code base and the code I'd seen from other devs. He said "Just because it's COBOL doesn't mean it HAS to be ugly".

This project was a transaction system. Our code would send a transaction, and an outside party would send back one of 4 messages: Success; Failure; Queued for Later Processing; or Later Processing Complete. But their test interface was really flaky and was down most of the time. I wrote a little Perl script that would reply to our system with one of the 4 messages depending on whether the last digit of the transaction number in the record was 1, 2, 3, or 4. It was rather clever and simple, and that little script saw use for the whole life of the project. This little side project was also neat because the team was me, the dev, and a business guy, and we were iterating fast, all together, all using the code, all analyzing the work at the same time.
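
That script is long gone, but the idea is easy to reconstruct. Here is a sketch of the same kind of stub, in the Ruby this blog usually uses rather than Perl; the port, the wire format (one transaction per line), and the reply strings are all invented for illustration.

require 'socket'

# stand-in for the flaky third-party test interface: reply deterministically
# based on the last digit of the transaction number in each incoming record
REPLIES = { '1' => 'SUCCESS',
            '2' => 'FAILURE',
            '3' => 'QUEUED_FOR_LATER_PROCESSING',
            '4' => 'LATER_PROCESSING_COMPLETE' }

server = TCPServer.new(9090)       # hypothetical port
loop do
  client = server.accept
  txn = client.gets.to_s.chomp     # assume one transaction per line
  client.puts REPLIES.fetch(txn[-1, 1], 'FAILURE')
  client.close
end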

Meanwhile, in the big sexy part of the company, there was a highly visible project to do transactions with a very important partner, but our dev schedule was ahead of theirs, so interaction testing was some way out. When I was informed that the code was "done", I read the spec and implemented a little Perl network server that could read and write messages over TCP/IP. I pointed our code at it, and our code immediately fell over dead. I started reporting bugs on the code. The dev actually complained to my boss about this; he could not understand how someone could report bugs when the other side of the transaction was not complete. One day, though, he turned all the way around and started being nice to me. I suspect someone had reviewed his "done" code and chewed him out.

No Oxymoron

So it is possible to have automated unit testing, integration testing, and system testing in a COBOL/mainframe environment. Furthermore, it is not only possible to develop COBOL according to the Agile Manifesto; I think the points made by the Manifesto came from the cultures of successful mainframe projects.

Agile software development has been around for a long, long time. It just didn't have a name until those guys got together at Snowbird and cranked up the Agile movement.

Saturday, January 03, 2009

Ruby net/http+SSL+Basic Auth

Since one post from this blog is (or at least used to be) a little chunk of official documentation for Basic Auth in Ruby, I decided I should write this down also.

Socialtext is switching some of the SaaS servers to be https all the time. So I had to rejigger my selenium test-runner script, because it reads (test data) from and writes (test results) to the wiki.

I found this very elegant example.

My take on it looks like

require 'net/https'   # in Ruby 1.8, plain net/http cannot do SSL without this

def put_test_results_to_wiki
  # port 443 plus use_ssl gives us the https connection
  http = Net::HTTP.new(@test_host, 443)
  http.use_ssl = true

  # Basic Auth credentials ride along on the request itself
  req = Net::HTTP::Put.new(@put_loc,
                           'Content-Type' => 'text/x.socialtext-wiki')
  req.basic_auth @test_user, @test_pass
  req.body = ".pre\n" + @content + "\n.pre"

  response = http.request(req)
  puts response
end


What is a little peculiar about this is that Basic Auth is only one option for authenticated SSL connections. Normally the client would do a GET, the server would return a cookie, and the client would use that cookie in subsequent transactions. I found a nice example of that here, but as of today the server seems to be down.
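
For completeness, that cookie-based flow looks roughly like this; the host, paths, and credentials are placeholders:

require 'net/https'

http = Net::HTTP.new('www.example.com', 443)   # placeholder host
http.use_ssl = true

# authenticate once with a GET and capture the session cookie...
get = Net::HTTP::Get.new('/data/workspaces')   # placeholder path
get.basic_auth 'user', 'password'
cookie = http.request(get)['set-cookie']

# ...then present the cookie on subsequent requests instead of credentials
get2 = Net::HTTP::Get.new('/data/workspaces/foo')
get2['Cookie'] = cookie
puts http.request(get2).body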