Showing posts from 2011

Just Fix It

On the writing-about-testing mailing list recently there was a discussion of defect tracking.  Given a good enough code base and a mature dev team, I think defect tracking is mostly unnecessary, and it's worth talking about why that is.  Some time ago there was a popular meme in the agile testing community called "Just Fix It", but I haven't heard it mentioned in some time, and I think it's worth reviving the discussion.  The idea behind Just Fix It is to bypass the overhead of creating a defect report, having that report go through some sort of triage process, and only then addressing the problem the report represents.  You save a lot of time and overhead if you Just Fix It.  For some time now I have specialized in testing at the UI, a combination of functional testing and UX work.  In my experience, in a good code base, important defects found at the UI level are almost always what I think of as "last mile" issues, where…

a selenesse trick

Selenesse is a mashup of FitNesse and Selenium that I helped work on some time ago.  I keep encountering this pattern in my selenesse tests, so I thought I'd share it.  Every once in a while a test needs some sort of persistent, quasi-random bit of data.  In the example below I'm adding a new unique "location" and then checking that my new location appears selected in a selectbox.  This is also a neat trick for testing search, or anywhere else you need to add a bit of data to the system and then retrieve that exact bit of data from somewhere else in the system.

| note | eval javascript inline FTW! |
| type; | location_name | javascript{RN='TestLocation' + Math.floor(Math.random()*9999999999); while (String(RN).length < 14) { RN = RN + '0'; }} |
| note | instantiate a variable and assign the new value to it in a single line FTW! |
| $LOCATION= | getValue | location_name |
| note | click a thingie to add the new data to the system |
| click | …
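
For anyone not using FitNesse, here's a rough sketch of the same pattern in plain Selenium WebDriver (Python).  Only the location_name field comes from the table above; the add_location button id, the locations select id, and the URL are made-up stand-ins for whatever your app actually uses.

import random

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select

driver = webdriver.Firefox()
driver.get("http://localhost:8080/locations")  # hypothetical app URL

# Same trick as the inline javascript{} above: a quasi-random name,
# zero-padded on the right to a fixed minimum length.
name = ("TestLocation%d" % random.randint(0, 9999999999)).ljust(14, "0")

driver.find_element(By.ID, "location_name").send_keys(name)
driver.find_element(By.ID, "add_location").click()  # hypothetical button id

# Retrieve that exact bit of data from somewhere else in the system:
# the new location should come back selected in the selectbox.
selected = Select(driver.find_element(By.ID, "locations"))  # hypothetical select id
assert selected.first_selected_option.text == name

driver.quit()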

more UI test design (once more from Alan Page)

Before it gets lost in history, I want to riff off Alan Page once again, who made some excellent points.  As someone who has been designing and creating GUI (browser) tests for a long time, though, I'd like to address some of those points and also point out some bits of ROI that Alan missed.  Making UI tests not fragile is design work.  It requires an understanding of the architecture of the application.  In the case of a web app, the UI test designer really has to understand things like how AJAX works, standards for identifying elements on pages, how the state of any particular page may or may not change, how user-visible messages are managed in the app, and so on.  Without this sort of deep understanding of software architecture, UI tests are bound to be fragile.  I've said before that UI tests should be a shallow layer that tests only presentation, and that rarely tests logic, math, or any underlying manipulation of data.  If tests are designed in this way, then they will be…
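
To make that concrete, here's a small sketch of the kind of design decision I mean, in Selenium WebDriver (Python).  The data-test-id attribute is an assumed team convention and the element names are invented; the point is the pattern of stable locators plus explicit waits for AJAX, not these particular details.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Firefox()
driver.get("http://localhost:8080/app")  # hypothetical app URL

# A team-wide standard for identifying elements (here an assumed
# data-test-id attribute) survives cosmetic markup changes that
# break brittle position-based XPath locators.
save = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.CSS_SELECTOR, "[data-test-id='save']"))
)
save.click()

# Because the save happens over AJAX, wait on the user-visible
# message the app manages rather than sleeping a fixed time.
WebDriverWait(driver, 10).until(
    EC.text_to_be_present_in_element(
        (By.CSS_SELECTOR, "[data-test-id='status']"), "Saved"
    )
)

driver.quit()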

Automated Test Design (riffing/ripping off Alan Page)

Alan just posted this: http://angryweasel.com/blog/?p=325

This is the nicest test example I've seen in a long time, and I think it bears a little more analysis.  If I were in charge of the development of this app, I would make automated testing happen on three levels.  First, there would have to be some sort of test for generating a single random number.  That seems easy enough, but at this level you really have to understand exactly what a call to rand() (or whatever) on your particular OS is going to do.  I once wrote a script in Perl (praise Allah just a toy, not production code) that returned perfectly random numbers on OS X but returned exactly the same values every time on Windows.  You'd better test that your call to rand() really returns random numbers, regardless of the OS it is running on.  At a higher level, you'd want to do exactly what Alan talks about in looping 100,000 times (although 100K seems like overkill to me).  This is what I think of as an "i…
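
A rough sketch of what those first two levels might look like in Python (the bin count and tolerance here are arbitrary choices of mine, not anything from Alan's post):

import random

def test_rand_is_not_constant():
    # The cheapest sanity check: a broken or badly seeded rand()
    # tends to return the same value on every call (my Perl script
    # on Windows, above).
    values = {random.random() for _ in range(1000)}
    assert len(values) > 1

def test_rand_is_roughly_uniform(samples=100_000, bins=10):
    # The higher-level loop Alan describes: bucket many samples and
    # check that no bin is wildly over- or under-represented.
    counts = [0] * bins
    for _ in range(samples):
        counts[int(random.random() * bins)] += 1
    expected = samples / bins
    assert all(abs(c - expected) < expected * 0.1 for c in counts)

test_rand_is_not_constant()
test_rand_is_roughly_uniform()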