Monday, January 29, 2007

resources for figuring out the ODBC indexes() function

Microsoft:
(edited: man what a day)
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/odbc/htm/odbcsqlstatistics.asp
(bottom of the page, thanks to Paul Rogers, I'm not sure I ever would have found it on my own.)

For Oracle, you probably want to avoid the indexes() function and query the system tables directly (a sketch follows the links):
http://www.ss64.com/orad/ALL_IND_COLUMNS.html
http://www.ss64.com/orad/DBA_INDEXES.html
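
Here is a minimal sketch of the direct-query approach, assuming the ruby-odbc gem; the DSN, credentials, and table name are all made up:

    require 'odbc'

    # 'oracle_dsn', 'scott'/'tiger', and 'MYTABLE' are hypothetical
    dbc = ODBC.connect('oracle_dsn', 'scott', 'tiger')
    sql = "SELECT index_name, column_name, column_position
             FROM all_ind_columns
            WHERE table_name = ?
            ORDER BY index_name, column_position"
    stmt = dbc.run(sql, 'MYTABLE')   # run() binds the ? parameter
    while row = stmt.fetch
      puts row.inspect
    end
    stmt.drop
    dbc.disconnect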

Tuesday, January 23, 2007

Announcing the Bay Area Developer-Tester/Tester-Developer Summit

If you're interested, leave a comment on the blog or send me an email or an IM or smoke signals or something. Here is the flyer Elisabeth Hendrickson and I have been sending:

What: A peer-driven gathering of developer-testers and tester-developers to share knowledge and code.

When: Saturday, February 24, 8:30AM – 5:00PM

This is a small, peer-driven, non-commercial, invitation-only conference in the tradition of LAWST, AWTA, and the like. The content comes from the participants, and we expect all participants to take an active role. There is no cost to participate.

We’re seeking participants: testers who code, or developers who test.

Our emphasis will be on good coding practices for testing, and good testing practices for automation. That might include topics like: test code and patterns; refactoring test code; creating abstraction layers; programmatically analyzing/verifying large amounts of data; achieving repeatability with random tests; OO model-based tests; creating domain-specific languages; writing test fixtures or harnesses; and/or automatically generating large amounts of test data.

These are just possible topics we might explore. The actual topics will depend on who comes and what experience reports/code they’re willing to share.

For more information on the inspiration behind this meeting, see two of my recent blog entries:

http://www.testobsessed.com/2007/01/17/tester-developers-developer-testers/

http://www.testobsessed.com/2007/01/19/where-are-the-developer-testerstester-developers/

This is an open call for participation. Please feel free to forward it to other Developer-Testers and Tester-Developers who you think are interested enough to want to spend a whole Saturday discussing the topic and sharing code.

Proposed Agenda:

* Timeboxed group discussions: “Essential attributes of a tester-developer and developer-tester (differences and similarities)” and “What tester-developers want to learn from developers; what developer-testers want to learn from testers.”

* Code Examples/Experience Reports (we figure we have time for 3 of these)

* End of day discussion: Raising visibility for the role of a DT/TD, building community among practitioners

If you’re interested in participating, please send me an email answering these questions:

* Which are you: a tester who codes or a developer who tests?

* How did you come to have that role?

* What languages do you usually program tests in?

* What do you hope to contribute to the Bay Area DT/TD summit? Do you have any code or examples that you’d like to share? (Please note that you should not share anything covered by a non-disclosure agreement.)

* What do you hope to get out of the Bay Area DT/TD summit?

Thanks!

Your hosts:

Elisabeth Hendrickson
Chris McMahon

Monday, January 22, 2007

Mountain West Ruby Conf March 16-17 Salt Lake City

This looks like a fine outing.

I've been lurking as these folks assembled the Mountain West Ruby Conference. It looks like a blast. Chad Fowler is giving the keynote, and at fifty simoleons, the price is definitely right.

I'm not much for skiing, but I'd love to make it to the shindig.

Friday, January 19, 2007

that's so crazy it's not even wrong

I'd Like a Pony, Please

The Watir mailing list regularly gets questions like "How do I do X in Watir? The commercial tool that I use gives me X, but I can't figure out how to do it with Watir." (Watir, for those who don't know it, is a set of Ruby libraries for manipulating Internet Explorer. That's all it does.)

(OK, Watir has accumulated a few bells and whistles over time, but mostly it just reaches down IE's throat to move IE's bones around.)

X is logging/database connection/encryption/special string handling/test case management/distributed test environment/data-driven frameworks/etc., etc. etc.

The answer, of course, is that Watir doesn't do that. Nor will it ever. Watir is not intended to be a replacement for an entire test department; it is merely a set of libraries for manipulating Internet Explorer. Ruby itself has a lot of goodness for this kind of thing, but Watir never will.

The people who ask these questions are often quite expert. The problem is that they do not think of themselves as programmers.

That's So Crazy It's Not Even Wrong PART ONE

I once worked with a man who was an expert with the Mercury tools. We were using Ruby to test an application that used a lot of XML. At one point he asked me "Does Ruby give me any way to ignore parts of strings?" I told him that I could think of a couple of ways to do that, and I asked him to show me what he was trying to do.

He wanted to extract the data elements from XML documents by using what boils down to Ruby's gsub("foo","") to strip the XML wrapper parts.

Understand this: he was setting out to build an XML parser. From scratch. Using gsub("foo","") as his only tool. He did not lack boldness.

This of course is madness. I introduced him to REXML and regular expressions. I presume he is still using them.
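
To make the contrast concrete, here is a minimal sketch; the document and the element names are invented:

    require 'rexml/document'

    xml = '<order><item>widget</item><qty>3</qty></order>'

    # the gsub approach: strip every known wrapper string by hand -- brittle,
    # and it falls apart the moment the document changes shape
    stripped = xml.gsub('<order>', '').gsub('</order>', '')
    stripped = stripped.gsub('<item>', '').gsub('</item>', '')
    stripped = stripped.gsub('<qty>', '').gsub('</qty>', '')

    # the parser approach: let REXML understand the structure
    doc = REXML::Document.new(xml)
    puts doc.elements['order/item'].text   # => "widget"
    puts doc.elements['order/qty'].text    # => "3"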

The reason he set out on such an insane project was that apparently in the world of Mercury, the only string manipulation tool available to you is something a lot like gsub("string",""). My colleague was simply unaware of the wider world of programming, and of what was possible.

He did not think of himself as either a developer-tester, or a tester-developer. He thought of himself as a tester who is a consumer of things provided by others.

That's So Crazy It's Not Even Wrong PART TWO

Mercury is not the only culprit in this kind of brainwashing. I have encountered testers (and others) whose entire world revolves around Excel.

For instance, I know of someone who started out his test project by putting test data into Excel, then writing VBA macros to generate Ruby code on the fly and launch it from Excel. He had a difficult time seeing Excel as simply a repository for data, instead of the center of the entire testing universe.

Where's My Pony?

Programmers' main tools are programming languages. Many testers' main tools are expensive whiz-bang-multi-aspect thingies that, if they have programming languages attached at all, are primitive or crippled.

I would like to see more testers learn about programming. Not just how to do programming, but how to think about testing problems in such a way that the problems can be solved with programming. Programming remains the best way to manipulate computers. If you work closely with computers, you owe it to yourself and your colleagues to learn some programming.

If you've never seen a pony, how will you know what to look for when you go to the barn?

P.S. many thanks to Jon Eaves for the title of his blog, which I've stolen and then mangled.

Thursday, January 18, 2007

significance of developer-tester/tester-developer

This idea of programmers-who-test and testers-who-program is starting to inform my daily work.

Reading articles about fantastic projects that sail to the end is all well and good; those great stories are something we strive for but accomplish only occasionally. From day to day, our designs could be better, our test coverage could be better, our test plans could be better, and our test cases could be more powerful.

I wrote the same bug in a couple of places this week. Luckily, my Customer was a tester. One of the developers I work with had a merge/commit problem a couple of times this week. Luckily, I was his tester, and I've struggled with merges and commits myself. We're working on figuring out how to prevent such errors in the future.

As a toolsmith, my Customers are testers. I go out of my way to write readable code so that they can inspect what I am creating. I encourage collaboration, and my Customers are coming to the realization that I need their help to recognize the bugs and get to the end.

As a tester, I am a Customer for the developers. I am working hard to understand how they go about writing and committing code. I am sympathetic when something goes haywire. I treat that as an opportunity to learn more about design.

As a tester-developer, I am working hard to understand the design and programming constraints of the system. I'm building automated tests and exploratory tests that expose the weakest parts.

As a developer-tester, I am working to include all the (programming and non-programming) consumers of my test scripts I can find, because I recognize that I work best when I collaborate. Disappearing into a closet until I personally resolve every issue simply takes too long and is just too boring to consider.

Tuesday, January 16, 2007

You! Luke! Don't write that down! *

Robert Anton Wilson died last Thursday after a long struggle. If you haven't read The Illuminatus Trilogy, go do that now. It's a lot like Pynchon's The Crying of Lot 49, except 20 times longer and a hell of a lot funnier. Get back to me when you're done. I'll wait.

A couple of years ago I was privileged to see Maybe Logic, the movie made about him in his later years, at the old Durango Film Festival, and it renewed my admiration.

Robert Anton Wilson refused to be made ridiculous, either by his critics or by his diseases. Both his sense of humor and his sense of the sublime were extraordinary, even in his last years. He refused to fade away; he raised a ruckus until the end.

*If you're intrigued by the title of this post, and you're too feeble to read Illuminatus, email me and I'll explain it.

developer-testers and tester-developers

After hearing about a number of system-functional-testing adventures at AWTA, Carl Erickson proposed that there are "developer-testers" and that there are also "tester-developers". In context, the difference was clear, and it's worth talking about the two approaches. (Assuming that I did in fact understand what Carl meant.)

Developer-testers are those developers who have become test-infected. They discovered the power of TDD and BDD, and extended that enthusiasm into the wider arena of system or functional tests. There is a lot of this sort of work going on in the Ruby world and in the agile world. For examples, look at some of Carl's publications on his site at atomicobject.com, or talk to anyone in the Chicago Ruby Brigade about Marcel Molina's presentation last winter on functional tests in Rails and 37signals. (Follow the link; the number of examples is gigantic!) The unit-test mentality of rigor and repeatability and close control survives in these developer-tester system tests.

This is a good thing.
However...

Tester-developers are those testers who have discovered the power of programming to extend and augment their ability to test systems. Sometimes they have been production programmers, but even those who have not been production-code developers have almost always at least read some production code, and have a good grasp of how software is designed.

Tester-developers frequently use interpreted languages instead of compiled languages, for the sake of power. They come up with crazy ideas like finding, creating and using random data. They look at complex or "impossible" simulation problems and create the simulation in 100 lines of Ruby or Perl or Python. They turn problems inside-out and upside-down.

Tester-developers frequently make mistakes, but they attempt so many things that mostly you don't notice.

Tester-developers are often called either "test automators" or "toolsmiths". Some of us have started to suspect that there are situations where these terms are inadequate or misleading.

Tester-developers frequently (as in my own case) excel at craft but lack discipline.

In particular, there comes a time in the toolsmith's career when he needs an interface for testability. Or he needs looser coupling of the code to make his own work easier. Or he needs information about a private API he would like to use. Sometimes, if the toolsmith wants these things, he is the only one in a position to make them happen.

At that time, the tester-developer must be in a position to negotiate not in his own language of crazy ideas and attempts that may fail, but in the developer-tester's language of well-designed, carefully-executed implementation.

Likewise, the developer-tester will have come to realize that regardless of how disciplined his code is, there are probably situations in the world that will cause Bad Things to happen in his system. Often, he suspects that such things are possible, but doesn't have the perspective to recognize the exact problem.

At some point a developer-tester will probably find himself looking for someone to run his code and suggest reasonable ways in which it might fail. Of course, this is what all good testers do, but tester-developers have particular skill at this level, because they understand, at least in a general sense, how the code was written. Here are some reasonable criticisms that a tester-developer might make to a developer-tester:

That input should be a Struct, because if it's an ArrayOfString, there is a strong chance that the user will get the arguments in the wrong order.
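
In Ruby, that criticism might look like this; the field names are made up:

    # with a bare array of strings, the caller has to remember the positions:
    args = ['McMahon', 'Chris', 'tester']   # oops -- last name first, and no error

    # with a Struct, every value has a name, so a mixup is easy to spot:
    UserInput = Struct.new(:first_name, :last_name, :role)
    input = UserInput.new('Chris', 'McMahon', 'tester')
    puts input.first_name   # => "Chris"
    puts input.role         # => "tester"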

This stored procedure does too many things. Either move some of the business logic back into the code, or break the procedure into smaller parts, because maintaining it in the future will become difficult. Also, I would like to invoke parts of this procedure for test purposes without having to exercise the entire thing.

Consider adding logging to this aspect of the system, because critical information is being handled here, and if something goes wrong, having that information will be important.
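
A sketch of what that logging might look like, using Ruby's standard Logger; the method and the messages are invented for illustration:

    require 'logger'

    log = Logger.new(STDOUT)   # in real life, a file the testers can read

    def handle_payment(amount, log)
      log.info("handling payment of #{amount}")
      # ... the critical work the criticism refers to happens here ...
    rescue StandardError => e
      log.error("payment failed: #{e.message}")
      raise
    end

    handle_payment(100, log)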

"Developer-testers" and "tester-developers": The more I look at it, the more I like it.

Monday, January 15, 2007

craft and discipline, larks' tongues in aspic

One of my lightning talks at AWTA had to do with finding better languages than engineering and manufacturing with which to describe software.

With all due respect to the Leaners, if software work were like factory work, few of us would do it.

I suggest we look to art, literature, and especially music for language to talk about software.

I mentioned first a couple of academic examples taken from New Criticism and Structuralism, like Monroe Beardsley's idea that value is based on manifest criteria like unity, variety, and intensity, or the Structuralists' idea that value is based on the degree to which the work reflects the folklore and culture of the milieu where it was produced.

If they'd done software, Beardsley would have been analyzing coding standards and function points, while the Structuralists would have been all strict CMM or strict Scrum/XP. If you're an objective-measurements, CMM, or Agile person, I encourage you to find corollary principles in the language of art and literature.

But the language that really got a lot of interest at AWTA was my super-fast overview of Robert Fripp's ideas of guitar craft. (If you don't know Robert Fripp from his work with the band King Crimson, you might know him as the composer of all of the new sounds in Windows Vista.)

I wanted to find and cite some short examples of his work here, hoping the discussion would continue around Fripp's language based in musical practice.

There are any number of fine software practitioners discussing software craft, but it's often a linear discussion. I have yet to run across much discussion of how to acquire, reinforce, and advance software craft. People agree it should be done, but when we discuss the details, there is much waving of hands.

Fripp, on the other hand, has at hand a rich, intellectually rigorous, logically consistent language describing the practice of guitar craft, with a twenty-year history of exercise and teaching, that is fully prepared to inform our discussion of software craft.

The key is discipline.

Here is the beginning of "A Preface to Guitar Craft". As you read it, substitute "programmer" for "musician" and "software" for "music":

The musician acquires craft in order to operate in the world. It is the patterning of information, function and feeling which brings together the world of music and sound, and enables the musician to perform to an audience. These patterns can be expressed in a series of instructions, manuals, techniques and principles of working.

This is the visible side of craft, and prepares the musician for performance. It is generally referred to as technique. The greater the technique, the less it is apparent.

When the invisible side of music craft presents itself, the apprentice sees directly for themself what is actually involved within the act of music, and their concern for technique per se is placed in perspective.

The invisible side of craftsmanship is how we conduct ourselves within the process of acquiring craft, and how we act on behalf of the creative impulse expressing itself through music. In time, this becomes a personal discipline. With discipline, we are able to make choices and act upon them. That is, we become effectual in music.

or this:

These are ten important principles for the practice of craft:

Act from principle.
Begin where you are.
Define your aim simply, clearly, briefly.
Establish the possible and move gradually towards the impossible.
Exercise commitment, and all the rules change.
Honor necessity.
Honor sufficiency.
Offer no violence.
Suffer cheerfully.
Take our work seriously, but not solemnly.

Or this from an interview on emusician.com:

Guitar craft is a discipline; the discipline is the way of craft.

This is the part that is missing from the discussion of software craft: the discipline. It's something I'll be working on.

Monday, January 08, 2007

what is "automated testing"?

Apropos of a recent discussion on the software-testing mailing list, I was reminded of reading James Bach's Agile Test Automation presentation for the first time. It contains one great page that says that test automation is:

any use of tools to support testing.

Until that time, I had held the idea that test automation was closely related to manual tests. I was familiar with data-driven and keyword-driven test frameworks, and I was familiar with tests that relied on coded parts to run, but I still had the idea that there was a necessary connection between manual tests and automated tests: that proper automated testing was simply the machine taking over human actions that were expensive or boring.

That way, of course, lies madness. And expense.

It was reading this particular presentation that really lit the light bulb and got me thinking about all the other things that testers do, and all kinds of other ways to approach test automation. Here are some things I've done as a test automator since I read Bach's presentation that bear no resemblance to a test case or a test plan (in no particular order):

2nd-party test server was mostly down, so I wrote a sockets-based server to impersonate it. (A sketch of this one follows the list.)
Script to collect and organize output for gigantic number of reconciliation reports.
Script to parse compiler logs for errors and warnings.
Script to deploy machine images over the network for test environments.
Linux-based file storage-retrieval-display system in an all-Windows shop.
Script to parse entire code base and report all strings shown to users. (So humans could find typos.)
Script to reach into requirements-management-tool database and present requirements in sane form.
Various network traffic-sniffing scripts to get a close look at test data in the wild.
Script to compare file structure on different directories.
Script to compare table structure in different databases.
Script to emulate large numbers of clicks with interesting HTTP headers on the GET.
Scripts to install the nightly build.
Monkey-test scripts to click randomly on windows.
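
Here is a minimal sketch of the impersonation-server idea, using Ruby's TCPServer; the port and the canned reply are made up, since a real fake server answers with whatever the application under test expects:

    require 'socket'

    server = TCPServer.new(4000)   # hypothetical port
    loop do
      client = server.accept
      request = client.gets
      puts "got: #{request.inspect}"
      client.puts 'OK 0 records'   # canned reply standing in for the real server
      client.close
    end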

And here's more testing-like work I did as a test automator that was still mostly not about validation:

Emulated GUI layer to talk to and test underlying server code.
Gold-master big-bang test automation framework.
SOAP test framework.
Watir scripts for really boring regression tests. (But I didn't emulate the user; instead I made the code easy to maintain and the output easy to read. A sketch follows.)
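
For instance, something like this, using Watir's 2007-era IE API; the URL, field names, and pass/fail text are all invented:

    require 'watir'

    browser = Watir::IE.start('http://example.com/login')
    browser.text_field(:name, 'username').set('testuser')
    browser.text_field(:name, 'password').set('secret')
    browser.button(:value, 'Log In').click
    puts(browser.contains_text('Welcome') ? 'PASS' : 'FAIL')
    browser.close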

And lots of other odd bits of reporting, manipulation, chores and interesting ideas.

I have a couple of conclusions from a few years of working like this. Or maybe they're just opinions.

1) Excellent testers should be able to address the filesystem, the network, and the database, as well as the UI.
2) Testing is a species of software development. Testers and devs are closer than they think.
3) Testing is a species of system administration, too.
4) Testing is a species of customer support, also.

Thursday, January 04, 2007

they're not really one typo apart

They're one typographic error apart in the IDE, not in the code.

Reasonable tests should make any problems stick out like a sore thumb.

(and thanks for the history)

Wednesday, January 03, 2007

the minimum number of tests

Elisabeth Hendrickson wrote about a case of complacency and lack of imagination in a tester she interviewed.

I agree with the gist of her argument, that good testers have a "knack" for considering myriad ways to approach any particular situation.

But I would also like to point out that just as a "tester who can’t think beyond the most mundane tests" is dangerous, so a tester who assigns equal importance to every possible test is just as dangerous, if not more so.

The set of all possible tests is always immense, and the amount of time to do testing is always limited. I like to encourage a knack for choosing the most critical tests. Sometimes this knack looks like intuition or "error guessing", but in those cases it is almost always, as Brian Marick put it in his review of Kaner's "Testing Computer Software", "past experience and an understanding of the frailties of human developers" applied well.

Intuition and error guessing take you only so far, but there are other techniques to limit the number of tests to those most likely to find errors. The diagram here represents work I did some time ago. I was given a comprehensive set of tests for a particular set of functions, and running every test took a long time. To arrive at the smallest possible set of tests that exercised every aspect of the set of functions, I arranged a mind map such that each test whose criteria merely overlapped the criteria of another test was placed farthest from the center, and each test whose criteria subsumed multiple sets of criteria was placed closer to the center. In this way, I arrived at the smallest possible set of tests that covers at least some aspect of everything under test.

It is not a comprehensive set of tests. Rather, it is the set of tests that, given the smallest amount of testing time, has the greatest chance of uncovering an error in the system.
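
One way to mechanize the same idea is greedy set cover. This is my own sketch, not the original mind-map exercise, and the test names and aspects are invented:

    # map each test to the set of aspects it exercises
    tests = {
      'test_a' => [:parse, :validate],
      'test_b' => [:validate],
      'test_c' => [:parse, :store, :report],
      'test_d' => [:store],
    }

    uncovered = tests.values.flatten.uniq
    chosen = []
    until uncovered.empty?
      # pick the test that covers the most still-uncovered aspects
      name, aspects = tests.max_by { |_, a| (a & uncovered).size }
      chosen << name
      uncovered -= aspects
      tests.delete(name)
    end
    puts chosen.inspect   # => ["test_c", "test_a"]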

There are certainly echoes here of model-based testing, although I think this approach plays a little fast and loose with the rules there.

Regardless of whether the tester chooses a minimum number of tests through intuition or through analysis, it's a good skill to have.

And if you do encounter a tester who creates a smaller number of tests than you think is appropriate, you might want to check his reasoning to be certain that he is really making a mistake.