

Showing posts from 2012

Open browser test automation at WMF

Almost a year ago I started working as QA Lead for the Wikimedia Foundation.  Among other things, I had a mandate to create an automated browser testing practice at WMF. From the start I wanted this project to be a world-class, completely open, reference implementation of such a project, using the best and most modern tools and practices I could find.  I wanted this to be a project that anyone could read, anyone could run, and to which anyone could contribute.  I wanted this to be an industry-standard implementation of a well-designed, well-implemented, working browser test automation project. Around 2006 my career had veered off into a test automation approach that, while valid and useful in certain circumstances, would be inappropriate for the WMF project.  And in the years since 2006, the tools and practices that were immature at the time had grown into mature, stable, powerful projects of their own.  I set out to educate myself about the details of the cutting edge of brow…

Wikimedia Foundation hiring QA staff

The Wikimedia Foundation currently has two open positions for QA staff.  One is for a QA Engineer and the other for a Volunteer QA Coordinator.  I want to point out what a unique opportunity this is. Before I was hired at WMF four months ago as "QA Lead", there had never been anyone working on the Wikipedias and related projects whose only focus was testing and quality.  There was no UI test automation, there was no program of community testing.  Actually, there still isn't.  That's where these new staff come in.  No exaggeration: this is an opportunity to create from scratch the quality and testing practices for one of the great achievements in human history.  But WMF only has about 100 staff, and only about half of those are technical.  At the moment, there are about 50 regular contributors to WMF software (and millions and millions of users!), so this QA staff will be outnumbered and outgunned.  So we need help, both from automation and from a community of peop…

Join the Wikipedia test event 9 June

Wikipedia allows users to leave feedback on each article.  Experienced Wikipedians analyze this feedback in myriad ways to improve the Wikipedia user experience and to improve the encyclopedia itself.  The Wikimedia Foundation has been creating a new Article Feedback system, and on Saturday 9 June from 10 AM to noon Pacific time, WMF invites testers and anyone else interested to participate in a "shakedown cruise" to test a near-final version of the new Article Feedback system before the system is rolled out to all of Wikipedia. Following on May's successful test event with Weekend Testing Americas, WMF is teaming up with Open Source fans for this event.  I am hoping that having experienced exploratory testers plus people interested in improving Wikipedia articles will be a killer combination of expertise and interest to shake out any final issues in this critical aspect of the Wikipedia experience. Note: anyone who shows up on 9 June wi…

Testing (automation or whatever) 101: ask a good question.

* I tried to do A, and I really don't understand the response I got, X.  Does this make sense?
* I know it should be possible to do A, but I tried it and X happened.  What sort of conditions would cause that?
* I tried to do A, and X, Y, and Z happened.  X makes sense, but I don't understand Y; what's going on here?

It doesn't really matter whether you're asking about automation or any other kind of testing.  The tricky part is that before asking the question, you had better be pretty familiar with A, and you had better be able to report X, Y, and Z in a reasonable way.  I have a corollary, and I have a (counter)example. I have seen any number of people in the software world complain about testers who submit bad bug reports.  I'm sure it's true, I've seen the evidence, and it boggles my mind.  A good bug report will explain A and explain X, and a great bug report will phrase the issue in terms of a question.  Not long ago I got an email from someone asking ab…

Testing Summit at Telerik

I attended the Test Summit peer conference this weekend at the invitation of Jim Holmes of Telerik. It was outstanding, as such peer conferences tend to be, and I and others will be posting a lot of information as a result of what went on there.  But I want to talk about the conference itself.  Software testing has a long history of peer conferences, starting (to the best of my knowledge) with the Los Altos Workshop on Software Testing (LAWST).  Bret Pettichord borrowed the LAWST format for his Austin Workshop on Test Automation (AWTA) in the mid-2000s, and I borrowed from AWTA for my Writing About Testing (WAT) conferences in 2010 and 2011.  I think other examples exist.  The format has gotten looser over the years.  LOTS looser, as we find that motivated participants are pretty good at self-organization. As far as I am aware, the Telerik Test Summit (TTS?) is the first such software testing peer conference created and sponsored by a commercial company.  I think this is i…

Weekend Testing for Wikipedia May 5

On Saturday May 5 at 10:00 AM Pacific Time, Weekend Testing Americas will be investigating the new release of MediaWiki on Wikimedia Foundation sites before WMF rolls out the new version to all of Wikipedia. I am really excited about this project; I do hope you will consider joining in. Details of how to join are on the official Weekend Testing site.

1. Add “weekendtestersamericas” to your Skype contacts if you haven’t already.
2. Fifteen minutes prior to the start of the session, please message “weekendtestingamericas” and ask to be added to the chat session. Once we see you, we will add you to the session.

The test plan is here. See you on Saturday!

conf report: Test Automation Bazaar

I went to the Test Automation Bazaar because one of the (many) things I want to do at Wikimedia Foundation is to start a browser test automation project, open to the greater software testing community, in support of Wikipedia and related projects.  I have been out of the loop on this front (forgive the mixed metaphor) for some time, and I particularly wanted to learn about two things:

* what an attractive, modern, well-designed browser test automation framework looks like
* page objects and their use

I got what I came for.  Particular thanks to Brahma Ghosh and Jeff Morgan/Cheezy for the great discussion.  I am putting on the white belt for this project, but at the same time, I have years of valuable UI test design experience to bring to bear. But beyond that, TAB was full of friends, many of whom I had never met.  Kudos to Bret Pettichord for making it happen, I wish we'd had more time to chat, I've known Bret longer than anyone else there, through some weird times back in t…

déjà vu: code, culture, and QA

Some years ago I had the privilege of making some suggestions for Brian Marick's book Everyday Scripting based on the first article I ever wrote for Better Software magazine.  That article appeared in 2004, and I just recently ran into a similar situation at work.  Wikipedia is localized for well over 100 languages.  I had only been working at Wikimedia Foundation a couple of weeks when I heard that discrepancies between the localized message files from version to version could cause problems when upgrading.  I didn't know what kind of problems, but since we're upgrading all the Wikipedia wikis to version 1.19, that sounded like sort of a big deal, so I followed up. It turns out that changes to the localization files are essentially undocumented, no tools exist to monitor such changes, and we simply did not know anything about discrepancies in those files.  So I decided it would be useful to look into that. You can find the Wikipedia localization files for version…

Who I Am and Where I Am, early 2012

I've been pretty quiet in recent times, but that's going to change somewhat in 2012, so I thought I'd write this(*) to catch up. As of last week, I am the QA Lead for the Wikimedia Foundation. My job there will be to create, codify, and execute the software testing and quality assurance regimes for the software that powers Wikipedia and its associated properties. I've worked some other interesting places, among them Thoughtworks and Socialtext. I like open source and wikis. I have been a dedicated telecommuter/remote worker since 2006. Depending on when you read this, I'm in either Santa Fe NM or Durango CO, or somewhere else. I have written about software a lot. Most of my writing in recent times has been for   (warning: registration wall), but I've also written a lot for   and a couple of articles for PragPub.  I wrote a chapter for Beautiful Testing. I created the Writing About Testing peer confere…