
Posts

Showing posts from November, 2017

Test Automation Heuristic: No Conditionals

A conditional statement is the use of if() (or its relatives) in code. Not using if() is a fairly well-known test design principle. I will explain why that is, but I am also going to point out some less well-known places where it is tempting to use if(), but where other flow constructs are better choices. My code looks like Ruby for convenience, but these principles are true in any language.

No Conditionals in Test Code

This is the most basic misuse of if() in test code:

    if (x is true)
      test x
    elsif (y is true)
      test y
    else
      do whatever comes next

It is a perfectly legal thing to do, but consider: if this test passes, you have no way of knowing which branch of the statement your test followed. If "x" or "y" had some kind of problem, you would never know it, because your conditional statement allows your test to bypass those situations. Far better is to test x and y separately:

    def test_x
      (bring about condition x)
      (do an x thing)
      (do another x t…
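A rough, runnable sketch of that separation using Ruby's Minitest; the price_for method and its discount numbers are hypothetical stand-ins for whatever "x" and "y" mean in a real system:

```ruby
require "minitest/autorun"

# Stand-in for the system under test; the discount rule is invented purely
# so each branch has something concrete to assert against.
def price_for(member)
  member ? 90 : 100
end

class NoConditionalsTest < Minitest::Test
  # One test per condition: if either branch breaks, its own test fails,
  # instead of being silently skipped by an if/elsif inside a single test.
  def test_member_price
    assert_equal 90, price_for(true)
  end

  def test_non_member_price
    assert_equal 100, price_for(false)
  end
end
```

Each branch now has its own pass/fail signal, so a failure tells you exactly which condition went wrong.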

UI Test Heuristic: Don't Repeat Your Paths

There is a principle in software development, "DRY", for "Don't Repeat Yourself", meaning that duplicate bits of code should be abstracted into methods so you only do one thing one way in one place. I have a similar guideline for automated UI tests, DRYP, for "Don't Repeat Your Paths". I discuss DRYP in the context of UI test automation, but it applies to UI exploratory testing as well. Following this guideline in the course of exploratory testing helps avoid a particular instance of the "mine field problem".

Antipattern: Repeating Paths to Test Business Logic

I took this example from the Cucumber documentation:

    Scenario Outline: feeding a suckler cow
      Given the cow weighs "weight" kg
      When we calculate the feeding requirements
      Then the energy should be "energy" MJ
      And the protein should be "protein" kg

      Examples:
        | weight | energy | protein |
        | 450    | 26500  | 215     |
        | 50…
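One way to keep those example rows from repeating the browser path is to assert the calculation directly, below the UI. A sketch in Ruby; FeedCalculator and its formula are invented for illustration and are not the Cucumber documentation's real feeding rules:

```ruby
require "minitest/autorun"

# Hypothetical stand-in for the feeding-requirements calculation;
# the formula here is made up purely so the tests have something to check.
class FeedCalculator
  def self.energy_for(weight_kg)
    weight_kg * 60
  end
end

class FeedCalculatorTest < Minitest::Test
  # Each table row becomes a plain unit assertion; the UI path that
  # displays the result only needs to be driven through the browser once.
  def test_energy_for_light_cow
    assert_equal 3_000, FeedCalculator.energy_for(50)
  end

  def test_energy_for_heavy_cow
    assert_equal 27_000, FeedCalculator.energy_for(450)
  end
end
```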

Test Automation Heuristic: Minimum Data

When designing automated UI tests, one thing I've learned to do over the years is to start by creating the most minimal valid records in the system. Doing this illuminates assumptions in various parts of the system that are likely not being caught in unit tests. As I have written elsewhere, I make every effort to set up test data for UI tests by way of the system API. (Even if the "API" is raw SQL, it is still good practice.) That way, when your browser hits the system to run the test, all the data it needs are right in place.

For example (and this is a mostly true example!), say that you have a record in your system for User, and the only required field on the User record is "Last Name". If you start designing your tests with a record having only "Last Name" on it, you will quickly uncover places in the system that may assume the existence of a field "First Name", or "Address", or "Email", or "Phone". For som…
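A sketch of setting up that minimal record through an API before the browser test runs; the endpoint, payload shape, and field names are assumptions for illustration, not a real system's API:

```ruby
require "json"
require "net/http"
require "uri"

# Create the most minimal valid User the system will accept: only the one
# required field. The URL, route, and response shape are hypothetical.
def create_minimal_user(base_url)
  uri = URI.join(base_url, "/api/users")
  body = { last_name: "Minimal" }.to_json   # no first name, address, email, phone
  response = Net::HTTP.post(uri, body, "Content-Type" => "application/json")
  JSON.parse(response.body)                 # e.g. the created record with its id
end
```

The UI test then points the browser at the screens that display this user, surfacing any page that quietly assumes the optional fields exist.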

Selenium Implementation Patterns

Recently on Twitter Sarah Mei and Marlena Compton started a conversation about "...projects that still use selenium..." to which Marlena replied "My job writing selenium tests convinced me to do whatever it took to avoid it but still write tests..." A number of people in the Selenium community commented (favorably) on the thread, but I liked Simon Stewart's reply where he said "The traditional testing pyramid has a sliver of space for e2e tests – where #selenium shines. Most of your tests should use something else." I'd like to talk about the nature of that "sliver of space". In particular, I want to address two areas: where software projects evolve to a point where investing in browser tests makes sense in the life of the project; and also where the architecture of particular software projects makes browser tests the most economical choice to achieve a desired level of test coverage.

What Is A Browser Test Project?

Also rec…
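For reference, a minimal sketch of the kind of end-to-end check that lives in that "sliver of space": one short journey through the real UI, with a single assertion about the outcome. The URL and element locators are placeholders, not a real application:

```ruby
require "selenium-webdriver"

# Drive one user journey through the browser; business-logic details are
# covered elsewhere, lower in the pyramid.
driver = Selenium::WebDriver.for :chrome
begin
  driver.navigate.to "https://example.com/login"
  driver.find_element(id: "username").send_keys("minimal.user")
  driver.find_element(id: "password").send_keys("not-a-real-password")
  driver.find_element(css: "button[type='submit']").click
  # One assertion about where the journey ends up, nothing more
  raise "login did not land on the dashboard" unless driver.current_url.include?("/dashboard")
ensure
  driver.quit
end
```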