Selenium Implementation Patterns

Recently on Twitter, Sarah Mei and Marlena Compton started a conversation about "...projects that still use selenium..." to which Marlena replied, "My job writing selenium tests convinced me to do whatever it took to avoid it but still write tests..." A number of people in the Selenium community commented favorably on the thread, but I liked Simon Stewart's reply: "The traditional testing pyramid has a sliver of space for e2e tests – where #selenium shines. Most of your tests should use something else." I'd like to talk about the nature of that "sliver of space".

In particular, I want to address two situations: the point in a project's life where investing in browser tests begins to make sense; and projects whose architecture makes browser tests the most economical way to achieve a desired level of test coverage.

What Is A Browser Test Project? 


Also recently, Josh Grant wrote a blog post, "How big is your UI automation project?", in which he defines Selenium projects in terms of size. In Josh's terms, I am discussing "medium" projects that expect to become "enterprise" size (or bigger!).

Given projects of this scale, we can't talk about using Selenium alone. Using Selenium by itself at this size would be madness, destined for failure. Regardless of what language you use with Selenium, you will also need:


  • an abstraction layer or convenience methods that wrap Selenium itself. The most well-known instance of such a wrapper is the Watir project in Ruby, but others exist. (I've often said that if you don't use Watir already you will eventually write Watir yourself.)
  • an assertion framework for checking test results
  • a logging mechanism for reporting and collating test results
  • a test runner. Cucumber is probably the best-known test runner today, but this could be a keyword-driven table framework or some other test management system
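
To make that layering concrete, here is a minimal sketch in Ruby. The driver is a stub standing in for real Selenium or Watir, and every class and method name here is hypothetical; the point is only how a wrapper, an assertion mechanism, and a result log sit on top of raw driver calls:

```ruby
# Stand-in for a real Selenium/Watir driver so this sketch runs anywhere.
# A real project would delegate these calls to selenium-webdriver.
class StubDriver
  attr_reader :visited, :fields

  def initialize
    @visited = []
    @fields  = {}
  end

  def goto(url)
    @visited << url
  end

  def set(id, value)
    @fields[id] = value
  end
end

# The abstraction layer: a page object that hides raw driver calls
# behind methods named for what the user is actually doing.
class LoginPage
  URL = "https://example.com/login".freeze

  def initialize(driver)
    @driver = driver
  end

  def open
    @driver.goto(URL)
    self
  end

  def log_in_as(user, password)
    @driver.set(:username, user)
    @driver.set(:password, password)
    self
  end
end

# The assertion/logging layer: trivial here; in practice RSpec or minitest.
def check(condition, label)
  puts(condition ? "PASS: #{label}" : "FAIL: #{label}")
  condition
end

driver = StubDriver.new
LoginPage.new(driver).open.log_in_as("alice", "secret")

check(driver.visited.include?(LoginPage::URL), "login page opened")
check(driver.fields[:username] == "alice", "username entered")
```

A test runner such as Cucumber or minitest would then own the lifecycle: creating the driver, running scenarios written against the page objects, and collating the PASS/FAIL log into a report.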


This is why I rarely talk about "testing with Selenium" and instead talk about "browser testing". A successful browser test project needs a lot more structure than just Selenium.

Selenium As a Logical Evolutionary Step


Over the last decade or so I have been part of four projects that followed a similar path to adopting a browser testing practice. I rarely hear others describe this path.

These are projects that start with just a few developers and a very high level of quality from the beginning. Unit test coverage is high, code review or pair programming is essential practice, and technical and project standards start high and are kept high. In three of the four cases in my history, the code is open source and subject to public scrutiny. As the number of users grows, the budget for software development grows too.

As the project gets larger, the team finds that there are risks that unit tests alone do not cover, so they institute integration tests that exercise the data store and the interactions between system states and parts of the system that could combine in undesirable ways. The project continues to grow and to be successful.

Then the team realizes that there is a class of potential problems that can only be identified at the user interface and that the only way to address this particular class of risk is to have a browser test practice. This is where I make a living.

In projects that evolve this way, by the time a browser test practice is called for, the practice must meet a high standard of quality and sophistication. There is little leeway or tolerance for waste, or worse, failure. The browser test practice has to be expert from the beginning in order to be accepted in the existing culture of quality, and the scale of the project has to be significant in order to provide the value that the team expects.

Note that in three of the four instances of this evolutionary pattern that I have experienced personally, it is only after implementing a browser test practice that the team identifies a final level of risk and goes about creating (or reifying) an exploratory testing practice. There is in fact a market for excellent exploratory testers on projects of very high quality. What I think many people find unusual is that the role of exploratory tester is the last role added to the whole practice.

From what I gather, what I describe here is not the experience of most people in the testing community. I hope my description here opens the discourse.

Selenium As a Logical Strategic Choice


Some applications are difficult to test thoroughly at levels below the user interface, so the most efficient test approach is to test at the user interface. In other cases, a browser testing practice is indicated because the nature of the application dictates that the user interface has to be complicated, and simplifying the user interface would be detrimental to the project.

For the first case, where the underlying architecture of the project makes browser testing a logical choice for test coverage, my favorite reference is David Heinemeier Hansson's essay from 2014, "TDD is dead. Long live testing." In it he argues that unit testing is a good first step that will eventually evolve into a robust set of "system tests". (Titus Fortner, who maintains the Selenium Ruby bindings as well as the Watir project, likes to refer to this approach as "DOM-to-database".) Selenium is the agent by which DHH's system tests happen.

DHH, of course, is the creator of Ruby on Rails. I am not an expert on Rails, but I have been told that unit testing in Rails is difficult. If you take a close look at how Rails actually works, a Rails app is in fact nothing but a collection of DOM-to-database operations, so it makes sense to test Rails apps in this way.

Another example comes from Wikipedia. The core of Wikipedia is written in PHP. Somewhat like Rails, good unit testing in PHP is more difficult than in many other languages. When good unit testing is impractical, it makes sense to adopt a testing practice at a higher level, which is exactly what Željko Filipin and I did at the Wikimedia Foundation starting in 2012. Our original project was in Ruby. As I understand it, they are porting the project to JavaScript now that Selenium bindings are fully supported in that language, but the design goals remain the same. I wish them well.

Finally, it may be that the nature of the application itself demands a complex and complicated UI, and testing that complexity must happen in the UI. One application I worked on was a coursework application for art students, where the controls for manipulating visual materials were the entire reason the application existed, so all of its function was in the UI. Another application I work on today pushes the edge of the features available in a UI framework provided by a third party; it is complex and sophisticated, and we don't have access to the framework internals.

The Selenium Sliver


With the WebDriver standard being backed by Mozilla, Google, Microsoft, Salesforce, etc., I don't think Selenium will become obsolete any time soon. Selenium solves a real problem that software projects of the highest quality continue to encounter.

But it may be that the scope of that problem is more narrow than most people think.
