Thursday, June 23, 2016

Reviewing "Context Driven Approach to Automation in Testing"



I recently had occasion to read the "Context Driven Approach to Automation in Testing". As a professional software tester with extensive experience in test automation at both the UI and API levels over the last decade or more, for organizations such as Thoughtworks, Wikipedia, Salesforce, and others, I found it a mixture of FUD (Fear, Uncertainty, Doubt), propaganda, ignorance, and obfuscation.

It was weirdly nostalgic for me: take away the obfuscatory modern propaganda terminology and it could be an artifact directly out of the test automation landscape circa 1998, when vendors, in the absence of any competition, foisted broken tools like WinRunner and SilkTest on gullible customers, when Open Source was exotic, when the World Wide Web was novel. Times have changed since 1998, but the CDT approach to test automation has not changed with them. I'd like to point out the deficiencies in this document as a warning to people who might be tempted to take it seriously.

The opening paragraph is simply FUD. If we take out the opinionated language

poorly applied
terrible waste
confusion
pain
hard
shallow, narrow, and ritualistic
pandemic, rarely examined, and absolutely false

what's left is "Tool use in testing must therefore be mediated by people who understand the complexities of tools and of tests". This is of course trivially true, if not an outright tautology. The authors then proceed to demonstrate how little they know about such complexities.

The sections that follow, down to the bits about "Invest in...", are mostly propaganda, with some FUD and straw-man arguments about test automation strewn throughout. ("The only reason people consider it interesting to automate testing is that they honestly believe testing requires no skill or judgment." Please, spare me.) If you've worked in test automation for some time (and if you can parse the idiosyncratic language), there is nothing new to read here; this was all answered long ago. Again, for me much of these ten or so pages brought strong echoes of the state of test automation in the late 1990s. If you are new to test automation, consider thinking of this part of the document as an obsolete, historical look into the past. There are better sources for understanding the current state of test automation.

The sections entitled (as of June 2016) "Invest in tools that give you more freedom in more situations" and "Invest in testability" are actually all good basic advice; I can find no fault in any of it. Unfortunately, the example shown in the sections that follow ignores every single piece of that advice.

Not only does the example that fills the final part of the paper ignore every bit of advice the authors give, it is as if the authors have chosen a project doomed to fail, from the odd nature of the system they chose to automate to the wildly inappropriate tools they chose to automate it with.

Their application to be tested is a lightweight text editor they've obtained as a native Windows executable. Cursory research shows it is an open source project written in C++ and Qt, and the repo on GitHub has no test/ or spec/ directory, so it is likely to be some sort of cowboy code under there. I assume that is why they chose this instead of, say, Microsoft Word or some better-engineered application.

Case #1 and Case #2 describe some primitive mucking around with grep, regular expressions, and configuration. It would have been easier just to read the source on GitHub. If this sort of thing is new to you, you probably haven't been doing this sort of work long, and I would suggest you look elsewhere for lessons.

Case #3 is where things get bizarre. First they try automating the editor with something called "AutoHotKey", which seems to be some sort of ad-hoc collection of Windows API client calls and which, according to the AutoHotKey project history, was wildly buggy as of late 2013 and has had maintenance only off and on since then. I would not depend on this tool in a production environment.

That fails, so then they try some Ruby libraries. Support for Windows on Ruby is notoriously bad; it has been a sticking point in the Ruby community for years, and any serious Ruby programmer would know that. Ruby is likely the worst possible language choice for a native Windows automation project. If all you have is a hammer...

Then they resort to some proprietary tool from HP. You can guess the result.

Again, assuming someone would want to automate a third-party Windows/Qt app at all, anyone serious about automating a native Windows app would use a native Windows language, C# or VisualBasic.NET, instead of some hack like AutoHotKey. C# and VisualBasic.NET are really the only reasonable choices for such a project.

It is as if this project has been deliberately or naively sabotaged. If this was done deliberately, then it is highly misleading; if naively, then it is simply sad.

Finally, I have to point out (relevant to the article section "Invest in testability", and again strong shades of 1998) that this paper completely ignores the undeniable fact that the vast majority of modern software development takes place on the web, with the UI appearing in a web browser and APIs offered from servers over a network. This article makes no mention that Selenium/WebDriver is a UI automation standard adopted by the World Wide Web Consortium (W3C), that the WebDriver automation interface is supported by every major browser vendor (Google Chrome, Mozilla Firefox, Microsoft Internet Explorer, Opera, and most recently Apple Safari), or that the Selenium API is fully supported in five programming languages (C#, Java, Ruby, Python, and JavaScript) and partially supported in many more.
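
For anyone who hasn't seen it, here is roughly what driving a browser through that API looks like in Ruby. This is a minimal sketch of the Selenium WebDriver Ruby bindings; the page and the element locator are placeholder examples, not taken from any real test suite.

require 'selenium-webdriver'

# The same API works for :chrome, :ie, :safari, and others.
driver = Selenium::WebDriver.for :firefox
driver.get 'https://en.wikipedia.org/wiki/Main_Page'

# Placeholder locator: assumes the search box is named 'search'.
search = driver.find_element(name: 'search')
search.send_keys 'Software testing'
search.submit

puts driver.title
driver.quit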

Ultimately, this article is mostly FUD, propaganda, and obfuscation. The parts that are not actually wrong or misleading are naive and trivial. Put it like this: if I were considering hiring someone for a testing position, and they submitted this exercise as part of their application, I would not hire them, even for a junior position. I would feel sorry for them.



Tuesday, June 21, 2016

Who I am and where I am June 2016



From time to time I find it helpful to mention where I am and how I got here. I have been pretty quiet since 2010 but I used to say a lot of stuff in public.

For the past year I have worked for Salesforce.org, formerly the Salesforce Foundation, the independent entity that administers the philanthropic programs of Salesforce.com. My team creates free open source software for the benefit of non-profit organizations.  I create and maintain automated browser tests in Ruby, using Jeff "Cheezy" Morgan's page_object gem.  I'm a big fan.
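
To give a flavor of what those tests look like, here is a minimal sketch in the page-object gem's style; the page class, URL, and element locators are hypothetical examples, not our actual code.

require 'page-object'

# A hypothetical page, modeled as a class.
class LoginPage
  include PageObject

  page_url 'https://example.com/login'

  text_field(:username, id: 'username')
  text_field(:password, id: 'password')
  button(:log_in,       id: 'login')
end

# In a step definition, with a Watir browser in @browser:
#   login = LoginPage.new(@browser, true)   # true means visit page_url
#   login.username = 'some_user'
#   login.password = 'a password'
#   login.log_in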

My job title is "Senior Member of the Technical Staff, Quality Assurance".  I have no objection to the term "Quality Assurance"; that term accurately describes the work I do. I am known for having said "QA Is Not Evil".

Before Salesforce.org I spent three years with the Wikimedia Foundation, working mostly with Željko Filipin on a similar browser test automation project, but a much larger one.

I worked for Socialtext, well known in some circles for excellent software testing. I worked for the well-known agile consultancy Thoughtworks for a year, just when the first version of Selenium was being released. I started my career testing life-critical software in the US 911 telecom systems, both wired/landline and wireless/mobile.

I have been 100% remote/telecommuting since 2007. Currently I live in Arizona, USA.

I used to give talks at conferences, including Agile2006, Agile2009, and Agile2013. I've been part of the agile movement since before the Manifesto existed. I attended most of the Google Test Automation Conferences held in the US. I have no plans to present at any open conferences in the future.

I wrote a lot about software testing and development, mostly around 2006-2010. You can read most of it at StickyMinds and TechTarget, and a bit at PragProg.

I hosted two peer conferences in 2009 and 2010 in Durango, Colorado, called "Writing About Testing". They had some influence on the practice of software testing at the time, and still resonate from time to time today.

I create UI test automation that finds bugs. Before Selenium existed I was user #1 for WATIR, Web Application Testing In Ruby. I am quoted in both volumes of Crispin/Gregory Agile Testing, and I am a character in Marick's Everyday Scripting.

Sunday, March 27, 2016

Remarks on Wikimedia Foundation recent events


If you pay attention to Wikipedia culture and the WMF, you may know that the Executive Director of the WMF, Lila Tretikov, has resigned amid some controversy.

It is an extraordinary story, especially since, given the nature of Wikipedia culture, so much information about events is publicly available. I'll point you to Molly White's "Wikimedia timeline of recent events" as an excellent synopsis of Ms. Tretikov's tenure as ED. The thing that strikes me most about that timeline is the number of people who left, and the long tenure of each person who departed. Terry Chay's note published on Quora addresses the same subject.

My own tenure at WMF was just over three years, from 2012 to 2015. In that time Željko Filipin and I built an exceptionally good browser test automation framework, which at the time I left WMF was in use in about twenty different WMF code repositories. My time at WMF was roughly evenly split between Ms. Tretikov's tenure as ED and that of the previous ED, Sue Gardner.

There are two things about Wikipedia and WMF that I think are key to understanding the failures of communication and culture under Ms. Tretikov's leadership.

As background, understand that everyone in the Wikimedia movement, without exception, and sometimes to a degree approaching zealotry, is committed to the vision: "Imagine a world in which every single human being can freely share in the sum of all knowledge."  I still am committed to this myself. My time at WMF absolutely shaped how I see the world.

Given that, what is important to understand is that Wikipedia is essentially a conservative culture. The status quo is supremely important, and attempting to change the status quo is *always* met with resistance. Always. There is good reason for that: Wikipedians see themselves as protecting the world's knowledge, and changes to the current status are naturally perceived as a threat to the quality or even the existence of that knowledge.

The other thing important to understand is that many of the staff at WMF come from the Wikipedia movement or the FOSS movement. Many (not all) of the technical staff began working with Wikimedia/FOSS software in college or even in high school, and ended up employed by WMF without ever experiencing how software is made and managed elsewhere. Likewise many (not all) of the management staff were (and are) important figures in the Wikipedia movement, without much experience in other milieux.

In practice, when attempting to make a change to Wikimedia software or Wikipedia culture, the default answer is always "no". No, you can't use that programming language, that library, that design approach, that framework. No, you can't introduce that feature or that methodology. 

So a big part of the work for those working in this culture is persuasion. One is constantly justifying one's ideas and actions to one's peers, to management, and to the community, in the face of constant skepticism. Wikipedians talk about "consensus culture", but in practice consensus is actually more along the lines of "grudging acceptance". Sue Gardner's most recent blog post explains this better than I ever could.

And because so many Wikipedians have such a dearth of experience of other tech culture, NIH (Not Invented Here) is rampant. It was difficult to introduce proven, reliable, well-known tools simply because they were *too* well-known; they aren't *Wikipedia* tools, they don't have *Wikipedia* support, there is limited knowledge of them within the culture.

The result of these forces is that significant feature releases tend to be fiascos, but each fiasco is of a somewhat different character. When WMF released the Visual Editor, the software was not fit for use, everyone involved knew it was not fit for use (or should have known; they were certainly told), and the community rejected it for good reason. On the other hand, the Media Viewer *was* fit for use when it was released, but it was such a new paradigm that the community rejected it even more decisively than they had the Visual Editor. We could even speculate that had Media Viewer been as unusable upon release as the Visual Editor was, it might have received a kinder reception from the Wikipedia community.

One notable exception to the fiasco release pattern was the Mobile Web work; the Mobile Web team did a great job and demonstrably made Wikipedia better, even if occasionally over the objections of their peers on the technical staff. The rollout of HHVM also went well, as did the introduction of ElasticSearch, but none of these projects faced the Wikipedia old guard directly.

It is also notable that it took Željko and me three years to get our work accepted widely across all of WMF. Today I am building essentially the same system for Salesforce.org (the philanthropic entity attached to Salesforce.com) as Željko and I did for WMF. I expect to have my Salesforce.org project in the same position as the WMF project in one year, because I don't face the constant hurdle of having to persuade and persuade and persuade.

Again, this is not necessarily a Bad Thing: the institutional skepticism and constant jockeying for acceptance of ideas, tools, and practices at WMF is a mechanism that protects the core mission of Wikipedia, even if it often makes the culture psychologically trying if not outright poisonous. You could argue that having to justify beforehand and evangelize afterward every step we took made the system that Željko and I built better than it would otherwise have been. If I seem to have a low opinion of the WMF, understand that in my time at WMF I did some of the best work I've ever done, and I consider my time there to be the pinnacle of my career so far.

So it is perfectly understandable that Ms. Tretikov as Executive Director would want to launch an ambitious skunkworks project in secret. This is something CEOs do. CEOs have discretion over the budget, and they are responsible to shareholders for profits. But the Executive Director of the WMF cannot expect to hide a quarter-million-dollar project engaging entities beyond Wikipedia without dire consequences, which is exactly what happened. Or rather, it was at least the final act in a long series of poorly executed maneuvers that alienated staff and community to the point of near-paralysis, and that caused a monumental loss of faith from the community as well as a huge loss of institutional knowledge as so many experienced staff abandoned the Foundation, or were abandoned by the Foundation.

I imagine that WMF and the Wikipedia movement will toddle on much as they always have. The Wikipedia vision of free knowledge for every human being remains compelling. And I hope that this troublesome period in the history of WMF can serve as a lesson not only to the Wikipedia community, but to the rest of us concerned with how best to make software work for our world.



Friday, September 27, 2013

Magic: strings and regular expressions in ruby Cucumber tables


I stumbled on an interesting trick that is worth documenting.

Say you have a Cucumber Scenario Outline like so:

 Scenario Outline: Foo and bar
    When I click <foo>
    Then <bar> should appear in the page
  Examples:
    | foo        | bar                |
    | X          | =String or regex   |
    | Y          | ===String or regex |
    | Z          |  String or regex   |

and we use it like so:

Then(/^(.+) should appear in the page$/) do |bar|
  # build a regex from the table cell and match it against the page text
  on(FooPage).page_contents.should match Regexp.new(bar)
end

but this won't work, because the case with "=" will pass by accident if page_contents erroneously contains "===" instead.  And case Z would always pass no matter how many '=' characters are in the page.  (That's clear, right?  Say something in the comments if that doesn't make sense to you.)
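
To see the problem concretely, here is a quick irb-style check with some hypothetical page text (not from any real page), showing how the unanchored patterns match by accident:

# Hypothetical page text containing the "wrong" variant:
page_contents = 'intro text ===String or regex closing text'

# Case X: we wanted exactly "=String or regex", but the unanchored
# pattern also matches inside "===String or regex".
Regexp.new('=String or regex').match(page_contents)   #=> #<MatchData "=String or regex">

# Case Z: "String or regex" alone is a substring of every variant,
# so it matches no matter how many '=' characters precede it.
Regexp.new('String or regex').match(page_contents)    #=> #<MatchData "String or regex">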

Also, I really need to check for a leading space for case Z, so what I think I need is not a string but a regular expression. Those anchor carets below should work, right?

  Examples:
    | foo        | bar                 |
    | X          | ^=String or regex   |
    | Y          | ^===String or regex |
    | Z          | ^ String or regex   |

That seems reasonable.  I've made sure that the values for 'bar' are interpreted as regexes, so this should Just Work...

What I discovered is that the values for 'bar' are magically turned into *escaped* regexes, so what ends up being compared to page_contents is actually:

/\^\=String or regex/
/\^\=\=\=String or regex/
/\^ String or regex/


and that is no help at all.  Even though they are regexes, the anchor characters are being escaped within the regex so the arguments continue to function as strings.  That is, what is being sent to RSpec as the argument to match is equivalent to the literal strings

"^=String or regex"
"^===String or regex"
"^ String or regex"

and of course that isn't what we want at all.
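
To see concretely what that escaping costs us, here is another quick irb-style check (the text is hypothetical, just to show an escaped caret versus a real anchor):

# With the caret escaped it is a literal '^' character, not an anchor:
/\^=String or regex/.match('=String or regex here')    #=> nil (no literal '^' in the text)
/\^=String or regex/.match('x ^=String or regex')      #=> matches the literal caret

# Unescaped, the caret anchors the match to the start of a line:
/^=String or regex/.match("junk\n=String or regex")    #=> matches at the start of the second line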

Here is what I finally figured out:

First we put the regexes that we want to use inside a pair of some character, in this case single quotes.  (And it works for leading spaces in strings, too!)

  Examples:
    | foo        | bar                   |
    | X          | '^=String or regex'   |
    | Y          | '^===String or regex' |
    | Z          | '^ String or regex'   |

Here's the tricky part.  Before we do the match in RSpec we strip the single quotes, and hey presto, the result is an unescaped regex, *not* a string:

Then(/^(.+) should appear in the page$/) do |bar|
  # strip the surrounding quote characters so the anchors survive as real anchors
  bar = bar.gsub(/'/, '')
  on(FooPage).page_contents.should match Regexp.new(bar)
end

That is, what gets sent to RSpec as the argument to 'match' ends up being

/^=String or regex/
/^===String or regex/
/^ String or regex/

and is correctly interpreted by RSpec as a regex: not an escaped regex, and not a string either.
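
A quick sanity check of the quoting-and-stripping trick, again with hypothetical text, shows the anchors doing their job:

# Hypothetical table cell value, as it arrives in the step definition:
bar = "'^=String or regex'"
bar = bar.gsub(/'/, '')                        #=> "^=String or regex"
regex = Regexp.new(bar)                        #=> /^=String or regex/

regex.match('=String or regex and more')       #=> matches: anchored, exactly one '='
regex.match('===String or regex and more')     #=> nil: the anchor rules out the extra '=' characters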

Sometimes you can have a little too much magic.

Sunday, July 28, 2013

giving a talk at Agile2013



So I'm giving a talk next week at the Agile2013 Conference about "radically open software testing".  It's about my experience over the last eighteen months or so founding and maintaining the QA/testing practice for the Wikimedia Foundation, the good folks who keep the lights on at Wikipedia.

I've done some little peer conferences, but I haven't presented at or even attended a big conference like Agile2013 since I talked about browser test design at Agile2009 in Chicago. That worked out pretty well; at least Dave Haeffner liked it. I also know a little more about the subject now than I did when I gave that presentation.

I think my presentation might be unusual.  I have nothing to sell.  I have no particular agenda to advance, except to encourage people to contribute to Wikipedia.  I intend to talk about some notable failures too.  What I'll be discussing isn't even particularly "agile", for whatever value the word "agile" still has today.

I don't even have any slides.  I started making some, but they worked a lot better as high points or an outline for a long conversation than as actual nuggets of useful information.  What if I met Tufte some day?  I'd rather just have a conversation and do demos.  Examples still come in handy.

So if you're in Nashville and interested in QA, testing, open software, free knowledge, ukulele, whatever, stop in for the conversation.


Monday, June 03, 2013

open testing Wikipedia mail list (and miagi-do)


For anyone interested in helping to test the software that runs Wikipedia, we have a new mail list dedicated to exactly that topic. The mail list is not even a month old, so the archives are pretty readable. If you're interested, you can sign up here.

Let me expand on what I mean by "open testing" and by "interested".  At any given time, there are dozens of software development projects (at least) in progress in support of Wikipedia.  Some of these projects are for particular niches in the Wikipedia universe, but others are expensive, high-profile, and in some cases world-changing.

Every one of these projects needs more testing than it gets.

So if you want to spend an hour looking for bugs in a Wikipedia feature under development, that's great.  We at the Wikimedia Foundation have worked with Weekend Testing Americas and other organizations on several occasions to do exactly that.

But would you like to help design every aspect of the ongoing test effort for major software development projects for one of the biggest web properties ever, and the biggest encyclopedia in the history of the world?  That's also possible.

And so is everything in between.  From the code to the documentation to the licensing to the communication, every aspect of this software development is open to scrutiny by anyone at any time.  This environment allows thoughtful contributions by an interested community, and I hope would be particularly attractive to thoughtful software testers.

That's what we're talking about on the Wikimedia Foundation QA mail list.  Feel free to join.

So about miagi-do...

A disclaimer up front:  I have been aware of the existence of miagi-do for some time, and many of its members are friends and acquaintances.  But I am not a member, nor have I ever had a conversation with anyone about any aspect of miagi-do.

So I was intrigued when the miagi-do folks started a blog, and I was particularly intrigued by the first two posts. Or maybe four.

It's quite possible to certify yourself as a software tester by simply working openly, as the miagi-do people have.  And the software that powers Wikipedia is radically open, for testing, for everything.  And while thoughtless contributions are eliminated ruthlessly, no approval is required to contribute.  I think the miagi-do folks would agree with me:  If you meet the Buddha on the road, kill him.



Tuesday, April 02, 2013

Weekend Testing - Wikipedia Test Event Apr 6

Wikipedia is improving the new user experience.

Wikipedia is developing a new way to create accounts on Wikipedia and a new experience for new users when logging in.

Weekend Testing Americas has an online test event on the first Saturday of every month 10AM-1PM Pacific time (17:00-20:00 UTC).

On April 6 WTA will be testing account creation, login, and the new user experience for Wikipedia. Justin Rohrman will facilitate.

 If you'd like to join this Saturday:


  • Send a message on Skype to weekendtestersamericas 
  • Join #wikimedia-dev on freenode on IRC 
  • Read over the Test Plan 
  • Test! 


 (If you can't attend the Weekend Testing session but you are interested in QA for Wikipedia, we have public QA events ranging from bug triage to Cucumber development every week.)