
Posts

Showing posts from 2017

Test Heuristic: Managing Test Data

Managing test data is one of the most difficult parts of good testing practice, yet I think it receives surprisingly little attention from the testing community. Here are a few approaches that I think are useful to consider for both automated testing and exploratory testing.

Antipattern: Do Not Create Test Data In The UI

But first I want to point out what I think is a mistake many testers make. When you are ready to test a new feature, run a regression test, or demonstrate some aspect of the system, the data you need to work with should already be in place for you to use. Having to create test data from scratch before meaningful testing begins is an antipattern (and a waste of time!), whether for automated testing or for exploratory testing.

Create Test Data With An API

This is usually my favorite approach to managing test data. Considering this scenario in the context of an automated test, I usually will have…
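Here is a minimal sketch of the API approach in Ruby. The endpoint, payload, and helper name are hypothetical stand-ins, not any particular system's API:

    require 'net/http'
    require 'json'
    require 'uri'

    # Create a record over the system's API before the UI test starts,
    # so the data is already in place when the browser arrives.
    def create_test_user(base_url, last_name:)
      uri = URI.join(base_url, '/api/users')  # hypothetical endpoint
      response = Net::HTTP.post(uri,
                                { last_name: last_name }.to_json,
                                'Content-Type' => 'application/json')
      raise "Test data setup failed: #{response.code}" unless response.is_a?(Net::HTTPSuccess)
      JSON.parse(response.body)
    end

    # Called from test setup, never through the UI under test.
    user = create_test_user('https://test.example.com', last_name: 'Garcia')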

Automated UI Test Heuristic: Sleep and Wait

Wait

In modern web applications, elements on the page may come into existence or disappear as a result of actions the user takes. To test modern web applications in the browser, tests almost always have to wait for some condition or another to become true or false before they can proceed properly. In general, there are two kinds of waiting in automated browser tests. In the language of Watir, these are "wait_until" and "wait_while". If you are not using Watir, you have probably already implemented similar methods in your own framework. My experience is that wait_until type methods are more prevalent than wait_while type methods. Your test needs to wait until a field is available to fill in, then it needs to wait until the "Save" button is enabled after filling in the field. Your test needs to wait until a modal dialog appears in order to dismiss it, the…
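A minimal sketch of both styles in Watir; the page, element locators, and IDs are hypothetical:

    require 'watir'

    browser = Watir::Browser.new :chrome
    browser.goto 'https://test.example.com/form'  # hypothetical page

    # wait_until: block until the field is present, then fill it in.
    browser.text_field(id: 'last-name').wait_until(&:present?).set 'Garcia'

    # wait_until: the Save button becomes enabled only after the field is set.
    browser.button(id: 'save').wait_until(&:enabled?).click

    # wait_while: dismiss the confirmation modal, then wait for it to vanish.
    modal = browser.div(class: 'modal')
    modal.wait_until(&:present?)
    modal.button(text: 'OK').click
    modal.wait_while(&:present?)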

Test Automation Heuristic: No Conditionals

A conditional statement is the use of if() (or its relatives) in code. Not using if() is a fairly well-known test design principle. I will explain why that is, but I am also going to point out some less well-known places where it is tempting to use if(), but where other flow constructs are better choices. My code looks like Ruby for convenience, but these principles are true in any language.

No Conditionals in Test Code

This is the most basic misuse of if() in test code:

    if (x is true)
      test x
    elsif (y is true)
      test y
    else
      do whatever comes next

It is a perfectly legal thing to do, but consider: if this test passes, you have no way of knowing which branch of the statement your test followed. If "x" or "y" had some kind of problem, you would never know it, because your conditional statement allows your test to bypass those situations. Far better is to test x and y separately:

    def test_x
      (bring about condition x)
      (do an x thing)
      (do another x thing)…
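To make the contrast concrete, here is a runnable sketch in Ruby with Minitest. The calculator, its threshold, and the amounts are invented for illustration:

    require 'minitest/autorun'

    # Hypothetical system under test: a tiny tax calculator.
    def tax_for(amount)
      amount < 100 ? 0 : amount * 0.1
    end

    class TaxTest < Minitest::Test
      # Antipattern: which branch ran? If the untaken branch is broken,
      # this test will never tell you.
      def test_tax_with_conditional
        amount = some_amount_from_existing_state
        if amount < 100
          assert_equal 0, tax_for(amount)
        else
          assert_in_delta amount * 0.1, tax_for(amount)
        end
      end

      # Better: each condition gets its own test with its own setup,
      # so a failure points at exactly one branch.
      def test_tax_below_threshold
        assert_equal 0, tax_for(50)
      end

      def test_tax_at_or_above_threshold
        assert_in_delta 15.0, tax_for(150)
      end

      private

      def some_amount_from_existing_state
        50 # stands in for state the test did not control
      end
    end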

UI Test Heuristic: Don't Repeat Your Paths

There is a principle in software development, "DRY", for "Don't Repeat Yourself", meaning that duplicate bits of code should be abstracted into methods so you only do one thing one way in one place. I have a similar guideline for automated UI tests, DRYP, for "Don't Repeat Your Paths". I discuss DRYP in the context of UI test automation, but it applies to UI exploratory testing as well. Following this guideline in the course of exploratory testing helps avoid a particular instance of the "minefield problem".

Antipattern: Repeating Paths to Test Business Logic

I took this example from the Cucumber documentation:

    Scenario Outline: feeding a suckler cow
      Given the cow weighs "<weight>" kg
      When we calculate the feeding requirements
      Then the energy should be "<energy>" MJ
      And the protein should be "<protein>" kg

      Examples:
        | weight | energy | protein |
        | 450    | 26500  | 215     |
        | 50…
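Each Examples row drives the same UI path again for nothing but a different calculation. That table belongs below the UI, as a plain unit test against the business logic. A minimal runnable sketch; the calculator module, its formula, and the numbers are invented stand-ins, not the real Cucumber values:

    require 'minitest/autorun'

    # Hypothetical stand-in for the application's business logic.
    module FeedingCalculator
      def self.requirements(weight_kg:)
        { energy_mj: weight_kg * 60, protein_kg: weight_kg / 2 }
      end
    end

    class FeedingRequirementsTest < Minitest::Test
      # Each row of the scenario outline becomes one case here,
      # with no browser path repeated per row.
      CASES = [
        # weight_kg, energy_mj, protein_kg
        [450, 27_000, 225],
        [500, 30_000, 250],
      ].freeze

      def test_feeding_requirements
        CASES.each do |weight, energy, protein|
          result = FeedingCalculator.requirements(weight_kg: weight)
          assert_equal energy,  result[:energy_mj],  "energy for #{weight} kg"
          assert_equal protein, result[:protein_kg], "protein for #{weight} kg"
        end
      end
    end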

Test Automation Heuristic: Minimum Data

When designing automated UI tests, one thing I've learned to do over the years is to start by creating the most minimal valid records in the system. Doing this illuminates assumptions in various parts of the system that are likely not being caught in unit tests. As I have written elsewhere, I make every effort to set up test data for UI tests by way of the system API. (Even if the "API" is raw SQL, it is still good practice.) That way when your browser hits the system to run the test, all the data it needs are right in place.

For example (and this is a mostly true example!) say that you have a record in your system for User, and the only required field on the User record is "Last Name". If you start designing your tests with a record having only "Last Name" on it, you will quickly uncover places in the system that may assume the existence of a field "First Name", or "Address", or "Email", or "Phone". For som…
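A minimal sketch of the idea, taking the "raw SQL" variant above; the schema, database, and column names are hypothetical (uses the sqlite3 gem):

    require 'sqlite3'

    db = SQLite3::Database.new 'test.db'  # hypothetical test database

    db.execute <<~SQL
      CREATE TABLE IF NOT EXISTS users (
        id         INTEGER PRIMARY KEY,
        last_name  TEXT NOT NULL,  -- the only required field
        first_name TEXT,
        email      TEXT,
        phone      TEXT
      )
    SQL

    # The most minimal valid User: only the required field is set.
    # Every other column stays NULL, which is exactly what flushes out
    # code that silently assumes first_name, email, etc. always exist.
    db.execute('INSERT INTO users (last_name) VALUES (?)', ['Garcia'])
    puts "created minimal user #{db.last_insert_row_id}"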

Selenium Implementation Patterns

Recently on Twitter Sarah Mei and Marlena Compton started a conversation about "...projects that still use selenium..." to which Marlena replied "My job writing selenium tests convinced me to do whatever it took to avoid it but still write tests..." A number of people in the Selenium community commented (favorably) on the thread, but I liked Simon Stewart's reply where he said "The traditional testing pyramid has a sliver of space for e2e tests – where #selenium shines. Most of your tests should use something else." I'd like to talk about the nature of that "sliver of space". In particular, I want to address two areas: where software projects evolve to a point where investing in browser tests makes sense in the life of the project; and where the architecture of particular software projects makes browser tests the most economical choice to achieve a desired level of test coverage.

What Is A Browser Test Project? Also rec…

Watir is What You Use Instead When Local Conditions Make Automated Browser Testing Otherwise Difficult.

I spent last weekend in Toronto talking to Titus Fortner, Jeff "Cheezy" Morgan, Bret Pettichord, and a number of other experts involved with the Watir project. There are a few things you should know: the primary audience and target user group for Watir is people who use programming languages other than Ruby, and also people who do little or no programming at all. Let's say that again: the most important audience for Watir is not Ruby programmers.

Let's talk about "local conditions": it may be that the language in which you work does not support Selenium. I have been involved with Watir since the very beginning, but I started using modern Watir with the Wikimedia Foundation to test Wikipedia software. The main language of Wikipedia is PHP, in which Selenium is not fully supported, and in which automated testing in general is difficult. Watir/Ruby was a great choice to do browser testing. At the time we started the project, there were no Selenium…
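For a sense of why Watir suits people who do little or no programming, here is the kind of short, readable script it makes possible; the site and locators are illustrative:

    require 'watir'

    browser = Watir::Browser.new :firefox
    browser.goto 'https://en.wikipedia.org'

    # Locators below assume Wikipedia's search form; adjust for your page.
    browser.text_field(name: 'search').set 'Software testing'
    browser.button(type: 'submit').click

    puts browser.title
    browser.close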

Use "Golden Image" to test Big Ball Of Mud software systems

So I had a brief conversation on Twitter with Noah Sussman about testing a software system designed as a "Big Ball Of Mud" (BBOM). We could talk about the technical definition of BBOM, but in practical terms a BBOM is a system where we understand and expect that changing one part of the system is likely to cause unknown and unexpected results in other, unrelated parts of the system. Such systems are notoriously difficult to test, but I tested them long ago in my career, and I was surprised that Noah hadn't encountered this approach of using a "Golden Image" to accomplish that.

Let's assume that we're creating an automated system here. Every part of the procedure I describe can be automated. First you need some tests. And you'll need a test environment. BBOM systems come in many different flavors, so I won't specify a test environment too closely. It might be a clone of the production system, or a version of prod with less data. It…
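A minimal sketch of the golden-image comparison step, assuming the system's observable output for each test can be captured to files; the directory names and capture command are hypothetical:

    require 'fileutils'

    GOLDEN  = 'golden_image'  # outputs captured from a known-good run
    CURRENT = 'current_run'   # outputs captured from the build under test

    # The capture step is system-specific; this runner is a placeholder.
    FileUtils.mkdir_p CURRENT
    system('run_all_tests', '--output-dir', CURRENT)

    # Diff every artifact against the golden image. Any mismatch means
    # some part of the Big Ball Of Mud changed behavior, expectedly or not.
    failures = Dir.children(GOLDEN).reject do |name|
      golden  = File.join(GOLDEN, name)
      current = File.join(CURRENT, name)
      File.exist?(current) && File.read(golden) == File.read(current)
    end

    if failures.empty?
      puts 'Output matches the golden image.'
    else
      puts "Differs from golden image: #{failures.join(', ')}"
    end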

Open Letter about Agile Testing Days cancelling US conference

I sent the following via the email contact pages of Senator John McCain, Senator Jeff Flake, and Representative Martha McSally of Arizona in regard to Agile Testing Days cancelling their US conference on 13 February. Agile Testing Days is a top-tier European tech conference about software testing and Quality Assurance. They had planned their first conference in the USA to be held in Boston MA, with a speaker lineup from around the world. They cancelled the entire conference on 13 February because of the "current political situation" in the USA. Here is their statement: https://agiletestingdays.us/

Although I was not scheduled to attend or to speak at this particular conference, it is at conferences such as Agile Testing Days that the best ideas in my field are presented, it is from conferences such as Agile Testing Days that many of my peers get those ideas, and I rely on conversations with those who do speak and attend in order to stay current in my field. As a resident…