Thursday, June 30, 2016

Test is a Ghetto


If you read software testing news aimed at the general public, you might be of the opinion that software testing is done by, and *properly* done by, people given minimal training: autistic people, Aboriginals, Malaysians, and other groups recruited into testing programs on that basis.
The key of course is "minimal training". There is a class of software testers who have minimal programming skills, or system administration skills, or database skills, or any technical computer skills at all. These testers do honorable work and can be valuable members of a software development team. They have been my colleagues; I have helped hire them; and I have trained them in test automation. And I still do that sort of work myself sometimes, although others are better at it than I am.

However, their lack of technical skills means that they tend to have lower status, lower income, and are often considered fungible, easily swapped out as the economy dictates, or replaceable by automation. At least at the start of their careers, these testers can be caught in a vicious circle of not having the technical skills or critical abilities to advance their careers, while also lacking enough understanding of modern software development to see a way to improve their skills. Many stay in this circle indefinitely, where a whole mythology of the value of codified ignorance evolves. This is the software testing ghetto.

Ghettos exclude their occupants from the general discourse, but it is also true that ghettos are exploited by agents of the greater culture. Software testers, particularly junior-level testers who because of their circumstances are ignorant of modern software development discourse, are especially prone to exploitation. If they are represented by an agent, then someone is paid to recruit them and train them. Someone is paid to negotiate their contracts. Someone sells their employers the tools they have been trained to use.

I have not found out anything about the training that autistic people, Aboriginals, or Malaysians receive; the original draft of this essay was to have provided a detailed analysis of the training provided by Doran Jones, Per Scholas, and Keith Klain in New York City. There is a wealth of material available, and I urge you to search online yourself.

However, in the course of doing that research, I discovered that Doran Jones has sued Keith Klain and Per Scholas for fraud over that software testing training program. The entire document (PDF download link) is a fascinating look at the inner workings of those agencies that sell software testing services. The part of the lawsuit relevant to the software testing training begins at paragraph 89, for those who wish to read it. Also of interest, given the stated list of Per Scholas partners, is paragraph 37.

Under the circumstances, and given the nature of the claims against Klain and Per Scholas, I think it is not appropriate for me to publish my comments, but I would like to point out one fact not contained in the lawsuit documentation: the Association for Software Testing committed its resources to the Per Scholas training program in NYC while Keith Klain was a member of the AST Board of Directors. I expect that the current and recent officers of the AST are extremely interested in the outcome of this lawsuit.

Klain says this about software testers: "Software testing is a strange business. It’s commoditized (sic), devalued, misunderstood, and goes through cycles of being chopped, changed, and lives at the front lines of imminent takeover by our robot overlords. Why anyone would want to be a professional software tester is even harder to understand." Read the whole thing.

Interestingly, Klain and the people involved in this Per Scholas project are also the most vocal opponents of software testing certification, sometimes with questionable approaches to gutting certification efforts.

It makes sense that these agents of minimally trained software testers would oppose certification. A global, generally accepted, inexpensive certification in software testing would allow entry-level software testers with limited knowledge of modern software development culture to more easily be their own agents in that culture. The market for this sort of exploitation might shrink considerably. In hindsight, I wish I had said this explicitly when I tackled the topic in 2010. As your career matures, your CV becomes more important than your certifications, but getting certified early on is a perfectly reasonable career move.

As Marlena Compton said in her 2015 essay "A Tableflip Guide: Transitioning from Tester to Developer": "If you go to a testing conference you’ll find people talking about how you can stay in testing forever and how it is a great career path. I’ve noticed that, often, the testers who shout the loudest about staying in testing forever have carved out their own place in the power structure of the software testing industry." I urge you to read the whole essay.

I'll suggest further that those testers shouting the loudest may also depend on the minimally skilled testing ghetto for their livelihood, and may not have your best interests in mind.

If you as a software tester

are happy with your career path and prospects for growth
are happy with the skills you have and the prospects to develop them further
are respected by everyone on your development team and are treated as a peer
represent your own interests to your employer with good faith on both sides
have technical training available to you
understand technical aspects of software development other than testing

then this essay probably does not describe you. If these things are not true for you, you may be in the software testing ghetto.


Thursday, June 23, 2016

Reviewing "Context Driven Approach to Automation in Testing"



I recently had occasion to read the "Context Driven Approach to Automation in Testing". As a professional software tester with extensive experience in test automation at both the UI and the API level over the last decade or more, for organizations such as Thoughtworks, Wikipedia, Salesforce, and others, I found it a mixture of FUD (Fear, Uncertainty, Doubt), propaganda, ignorance, and obfuscation.

It was weirdly nostalgic for me: take away the obfuscatory modern propaganda terminology and it could be an artifact directly out of the test automation landscape circa 1998, when vendors, in the absence of any competition, foisted broken tools like WinRunner and SilkTest on gullible customers, when Open Source was exotic, when the World Wide Web was novel. Times have changed since 1998, but the CDT approach to test automation has not changed with them. I'd like to point out the deficiencies in this document as a warning to people who might be tempted to take it seriously.

The opening paragraph is simply FUD. If we take out the opinionated language

poorly applied
terrible waste
confusion
pain
hard
shallow, narrow, and ritualistic
pandemic, rarely examined, and absolutely false

what's left is "Tool use in testing must therefore be mediated by people who understand the complexities of tools and of tests". This is of course trivially true, if not an outright tautology. The authors then proceed to demonstrate how little they know about such complexities.

The sections that follow, down to the bits about "Invest in...", are mostly propaganda, with some FUD and straw-man arguments about test automation strewn throughout. ("The only reason people consider it interesting to automate testing is that they honestly believe testing requires no skill or judgment." Please, spare me.) If you've worked in test automation for some time (and if you can parse the idiosyncratic language), there is nothing new to read here; this was all answered long ago. Again, these ten or so pages brought back for me strong echoes of the state of test automation in the late 1990s. If you are new to test automation, think of this part of the document as an obsolete, historical look into the past. There are better sources for understanding the current state of test automation.

The sections entitled (as of June 2016) "Invest in tools that give you more freedom in more situations" and "Invest in testability" offer good basic advice; I can find no fault in any of it. Unfortunately, the example shown in the sections that follow ignores every single piece of that advice.

Not only does the example that fills the final part of the paper ignore every bit of advice the authors give, it is as if they chose a project doomed to fail, from the odd nature of the system they automate to the wildly inappropriate tools they use to automate it.

Their application to be tested is a lightweight text editor they've obtained as a native Windows executable. Cursory research shows it is an open source project written in C++ and Qt, and the repo on GitHub has no test/ or spec/ directory, so it is likely to be some sort of cowboy code under there. I assume that is why they chose this instead of, say, Microsoft Word or some better-engineered application.

Case #1 and Case #2 describe some primitive mucking around with grep, regular expressions, and configuration. It would have been easier just to read the source on GitHub. If this sort of thing is new to you, you probably haven't been doing this kind of work long, and I would suggest you look elsewhere for lessons.

Case #3 is where things get bizarre. First they try automating the editor with something called "AutoHotKey", which seems to be an ad-hoc collection of Windows API client calls; according to the AutoHotKey project history it was wildly buggy as of late 2013 and has had only intermittent maintenance since then. I would not depend on this tool in a production environment.

That fails, so then they try some Ruby libraries. Support for Windows in Ruby is notoriously bad; it has been a sticking point in the Ruby community for years, and any serious Ruby programmer would know that. Ruby is likely the worst possible language choice for a native Windows automation project. If all you have is a hammer...

Then they resort to some proprietary tool from HP. You can guess the result.

Again, assuming someone would want to automate a third-party Windows/Qt app at all, anyone serious about automating a native Windows app would use a native Windows language, C# or VisualBasic.NET, instead of some hack like AutoHotKey. C# and VisualBasic.NET are really the only reasonable choices for such a project.

It is as if this project has been deliberately or naively sabotaged. If this was done deliberately, then it is highly misleading; if naively, then it is simply sad.

Finally, I have to point out (relevant to the article section "Invest in testability", and again strong shades of 1998) that this paper completely ignores the undeniable fact that the vast majority of modern software development takes place on the web, with the UI appearing in a web browser and APIs offered from servers over a network. The article makes no mention that Selenium/WebDriver is a UI automation standard adopted by the World Wide Web Consortium (W3C), that the WebDriver automation interface is supported by every major browser vendor (Google Chrome, Mozilla Firefox, Microsoft Internet Explorer, Opera, and most recently Apple Safari), or that the Selenium API is fully supported in five programming languages (C#, Java, Ruby, Python, and JavaScript) and partially supported in many more.
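For readers who have never seen it, here is a minimal sketch of what browser automation through the Selenium API looks like in Ruby. The URL and the CSS selector are placeholders, and it assumes the selenium-webdriver gem with a local Firefox installation and its driver available; it is an illustration of the style, not a recipe for any particular project.

```ruby
require 'selenium-webdriver'

# Start a browser session; :firefox is a placeholder, any supported browser works.
driver = Selenium::WebDriver.for :firefox

begin
  driver.navigate.to 'https://example.org/'   # placeholder URL
  heading = driver.find_element(css: 'h1')    # placeholder selector
  puts "Page title: #{driver.title}"
  puts "First heading: #{heading.text}"
ensure
  driver.quit                                 # always close the browser session
end
```

The same calls exist, with minor syntactic differences, in the other supported language bindings.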

Ultimately, this article is mostly FUD, propaganda, and obfuscation. The parts that are not actually wrong or misleading are naive and trivial. Put it like this: if I were considering hiring someone for a testing position, and they submitted this exercise as part of their application, I would not hire them, even for a junior position. I would feel sorry for them.



Tuesday, June 21, 2016

Who I am and where I am June 2016



From time to time I find it helpful to mention where I am and how I got here. I have been pretty quiet since 2010 but I used to say a lot of stuff in public.

For the past year I have worked for Salesforce.org, formerly the Salesforce Foundation, the independent entity that administers the philanthropic programs of Salesforce.com. My team creates free open source software for the benefit of non-profit organizations.  I create and maintain automated browser tests in Ruby, using Jeff "Cheezy" Morgan's page_object gem.  I'm a big fan.
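To give a flavor of what that work looks like, here is a minimal sketch of the page-object style. The page class, URL, element ids, and credentials are all hypothetical, and it assumes the page-object gem running on top of watir-webdriver (the gem also works with selenium-webdriver); real suites are larger and keep page classes separate from test code.

```ruby
require 'watir-webdriver'   # assumed driver layer
require 'page-object'

# A page object: each element declaration generates helper methods on the class.
class LoginPage
  include PageObject

  page_url 'https://example.org/login'       # hypothetical URL

  text_field(:username, id: 'username')      # generates #username and #username=
  text_field(:password, id: 'password')      # hypothetical element ids
  button(:log_in, id: 'login-button')        # generates #log_in, which clicks the button

  def log_in_as(user, password)
    self.username = user
    self.password = password
    log_in
  end
end

browser = Watir::Browser.new :firefox
page = LoginPage.new(browser, true)          # true => navigate to page_url on creation
page.log_in_as('some_user', 'some_password')
browser.close
```

The point of the pattern is that when the UI changes, only the page class changes; the tests that use it stay the same.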

My job title is "Senior Member of the Technical Staff, Quality Assurance". I have no objection to the term "Quality Assurance"; it accurately describes the work I do. I am known for having said "QA Is Not Evil".

Before Salesforce.org I spent three years with the Wikimedia Foundation, working mostly with Željko Filipin on a similar but much larger browser test automation project.

I worked for Socialtext, well known in some circles for excellent software testing. I worked for the well known agile consultancy Thoughtworks for a year, just when the first version of Selenium was being released. I started my career testing life-critical software in the US 911 telecom systems, both wired/landline and wireless/mobile.

I have been 100% remote/telecommuting since 2007. Currently I live in Arizona, USA.

I used to give talks at conferences, including Agile2006, Agile2009, and Agile2013. I've been part of the agile movement since before the Manifesto existed. I attended most of the Google Test Automation Conferences held in the US. I have no plans to present at any open conferences in the future.

I wrote a lot about software test and dev, mostly around 2006-2010. You can read most of it at StickyMinds and TechTarget, and a bit at PragProg.

I hosted two peer conferences called "Writing About Testing" in Durango, Colorado, in 2009 and 2010. They had some influence on the practice of software testing at the time, and still resonate from time to time today.

I create UI test automation that finds bugs. Before Selenium existed I was user #1 for WATIR, Web Application Testing In Ruby. I am quoted in both volumes of Crispin and Gregory's Agile Testing, and I am a character in Marick's Everyday Scripting.