

Showing posts from 2008

artistic testing

I like to point out that historical, well-vetted aesthetic frameworks are useful for evaluating software. But what if similar aesthetic frameworks could not only validate software itself but also *predict* software test techniques? For example, consider the overarching schools of painting: first came realism, where painters attempted to represent as faithfully as possible what appears to the human eye. We can test realistic scenarios, and we do this all the time; yet I would suggest that most bugs are not discovered under what we understand to be realistic circumstances. Then came expressionism, where painters attempted to express things not seen. Likewise, test techniques moved on to persona testing (imagining the use of software by peculiar users) and soap-opera testing. Such techniques expose significant bugs. Then came abstract expressionism. Think of Jackson Pollock. Are there abstract expressionist test techniques? Of course there are. I would really like to hear of any other

two agile antipatterns

Both of these have been crossing my path with increasing frequency in recent months (not at work, but in conversations with others who, one would think, would know better): 1) Stories are scheduled work for features. Stories should not include: testing, refactoring, documentation, undefined customer support, or calling your mom. The problem with writing stories for anything other than scheduled work on features is that you completely undermine the concept of velocity. Velocity is the number of story points accomplished per iteration. If you assign story points to activities other than scheduled work on features, you cheat the business of accurate information about release schedules. (Although it might make you feel better to be accomplishing so many points each iteration, if you are not delivering features, you fail.) 2) Doing it "by the book". Part of agile culture is holding retrospectives at the end of each iteration. Each retrospective is an opportunity to change your process
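The velocity arithmetic is simple enough to sketch. Here is a minimal illustration, with entirely invented story names and point values, of how counting non-feature activities as stories inflates velocity and distorts a release forecast:

```python
# Hypothetical iteration data: (story, points, is_feature_work).
# All names and numbers are invented for illustration.
stories = [
    ("search results page", 5, True),
    ("refactor data layer", 3, False),   # should not be a story
    ("write release notes", 2, False),   # should not be a story
    ("saved-search emails", 8, True),
]

claimed_velocity = sum(pts for _, pts, _ in stories)
feature_velocity = sum(pts for _, pts, feat in stories if feat)

remaining_feature_points = 130  # invented backlog size

# The business plans releases from velocity; padding it with
# non-feature points makes the forecast optimistic by iterations.
print("claimed velocity:", claimed_velocity)   # 18
print("feature velocity:", feature_velocity)   # 13
print("iterations left (claimed):",
      round(remaining_feature_points / claimed_velocity, 1))  # 7.2
print("iterations left (actual):",
      round(remaining_feature_points / feature_velocity, 1))  # 10.0
```

With these made-up numbers, the padded velocity predicts shipping almost three iterations earlier than the feature-only velocity does, which is exactly the inaccurate release information the post warns about.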

testers as navigators

Not long ago Audrey Tang started working for Socialtext. It has been a remarkable experience, as Audrey is mostly fixing bugs based on bug reports submitted by our testers. Audrey is in Taiwan, and another of our testers is working from Bangladesh (although he is usually in Vancouver, BC). Today they needed to hash out some details of a bug report, so they shared a VNC session on a machine hosted in Palo Alto, CA. It took only a couple of minutes to arrive at an agreement that the bug was fixed. Audrey's comment in IRC really struck us all: "having an excellent qa team is like driving with a first-grade GPS navigation system :)" We have worked very hard to create consistent, readable, executable bug reports for real, visible bugs. Having a developer as good as Audrey validate that work means a lot.

a test technique

Very recently I was part of a bug hunt: an effort to find as many defects as possible in a UI in a short period of time. I am not an outstanding UI tester. I have worked with testers whose eye for detail, line, color, consistency, work flow, and so on makes them truly outstanding testers to have examining the front end. But I've spent most of my career on green-screen apps, test automation, environment maintenance, and the like. I lack the expert artist's eye that great UI testers have. That said, I was on a roll on this particular bug hunt, finding a lot of nice, sophisticated bugs. It was classic ET: decide on a test approach; determine your next test based on the results of your last test; and when you stop finding bugs, change your test approach. I'd followed all the logical paths I could think of; I'd tried really big inputs and really small inputs; I'd tried permissions errors, navigation errors, and checking the error messages themselves. I tried

technical investment

My colleague Matt Heusser is doing this workshop on technical debt soon. I posted a little meditation on techdebt as impedance mismatch, but that seems to me to be trivially true, facile. Since I made The Software Artists public, though, I've been fielding a lot of questions on the software-test mail list that have me rethinking my position on technical debt. So here is what I *really* think: Technical debt doesn't really exist. At least, it doesn't exist for high-performing software teams. It's a useful concept for those encumbered by poor programming and poor testing, but the idea ceases to be useful about the time the code base becomes manageable. I have a graph and I have a metaphor that both explain my position. I'll just describe the graph, because I'm too lazy to hunt down some free software to draw it. This graph describes a couple of different projects I've been involved in, for a couple of different employers. The critical line on the gra

The Software Artists: Citations for Part Three

Citations for the section “Practicing, Rehearsing, and Performing Software”
Programmer, poet, and guitarist Richard Gabriel's quote is from his proposal for a Master of Fine Arts in Software:
Apple CEO Steve Jobs' quote is widely noted, but is documented in context by Andy Hertzfeld here:
The practice/rehearse/perform rubric was first presented on the author's blog in September 2007:
The title of this paper comes from the author's blog post of May 2007, in response to Jonathan Kohl's recommendation of theater and musical experience for software testing:
A performance by Ella Fitzgerald and Count Basie available on YouTube was a strong influence on this paper, cited in response to another of B

The Software Artists: Practicing, Rehearsing, and Performing Software

Software development is a performance exhibiting skills developed by an individual—often in groups of teams... -Richard P. Gabriel
Real artists ship. -Steve Jobs
Since the language of art criticism provides useful tools for evaluating software, and the practice of teaching music provides useful approaches to teaching software, it is reasonable to think that the practice of artistic performance should provide useful ways to approach creating software. Only one way to build software has ever been acknowledged: software is designed, coded, and tested. Sometimes the design/code/test cycle is done a single time; sometimes design/code/test is done iteratively or incrementally; sometimes, as in Test Driven Development, the steps are reordered, so that the cycle goes test/design/code. But this is not how human beings create a performance. Instead of design/code/test, artists have a cycle of practice/rehearse/perform. Since human beings have been creating performances since p

The Software Artists: Citations for Part Two

Citations for the section “Pedagogy and Practice in Software and in Guitar”
Some of Joel Spolsky's writing about working with young software creators is at these links:
Thoughtworks University had been called “Thoughtworks Boot Camp” until the author had to explain the term upon crossing an international border in 2005. A description of Thoughtworks University is here:
Some representative citations from IEEE publications about poor software education are:
The citation from CrossTalk is here:

The Software Artists: Pedagogy and Practice in Software and in Guitar

Very few people graduating with a college degree in Computer Science or Information Technology are prepared to write production code or to test production software. Companies that hire recent college graduates often have special training for such employees before they may work on actual projects. Joel Spolsky has written extensively about how his company Fog Creek trains young software workers. Publications of the IEEE mention the issue frequently. The well-regarded software consultancy Thoughtworks has “Thoughtworks University”, a six-week “boot camp” style training event for new hires hosted in Bangalore. Even CrossTalk: The Journal of Defense Software Engineering says: “It is our view that Computer Science (CS) education is neglecting basic skills, in particular in the areas of programming and formal methods.”
The Craft of Software
Besides the mentorship approach Spolsky takes and the “boot camp” approach Thoughtworks takes, Robert C. (Uncle Bob) Martin, Pete McBr

The Software Artists: Citations for Part One

Many of the ideas in this paper were first presented at the Austin Workshop on Test Automation in early 2007. The substance of the talks appeared on the author's blog shortly afterward: discipline-larks-tongues-in.html
Most of the New Criticism and Structuralist citations are from Wikipedia, except:
Child Jr., William C. 2000. “Monroe Beardsley's Three Criteria for Aesthetic Value: A Neglected Resource in the Evaluation of Recent Music.” Journal of Aesthetic Education, Vol. 34, No. 2 (Summer 2000), pp. 49-63. doi:10.2307/3333576
The author did not know that unity/variety/intensity had first been presented in music criticism until reading this article.
The author also referred to Adams, Hazard and Searle, Leroy (eds.) 1986. Critical Theory Since 1965. University Presses of Florida/Florida State University Press, for background information.
Bruce Schneier on “security theater”:

The Software Artists: The Value of Software

Philosophy of Art and the Value of Software
Manufactured goods generally have a clear relationship between cost, price, and value. In software, as in art, the value of the work is more often than not completely unrelated to cost and price. Operating systems provide a great example: the Windows audience for the most part must use Windows regardless of cost or price. The Mac OS X audience generally chooses to use OS X regardless of price, and often explicitly because of the aesthetic experience of using OS X. Linux has no cost at all and a price that varies wildly, yet it too has a dedicated audience. The value of software, like the value of an artistic performance, lies in the ability of the software to attract and keep an audience. The software industry would benefit immensely by turning the tools of artistic criticism on software. The 20th century in particular saw a great proliferation of critical the

The Software Artists: Introduction

The people who create software are not factory workers. Nor are they engineers, in the sense that engineering is the “practical application of the knowledge of pure sciences, as physics or chemistry”. But the software industry continues to treat software workers as if they were factory workers or construction workers. The software industry also attempts to value software as if it were a manufactured product. But making software is a fundamentally creative process, more similar to performance than to manufacturing. Like art and music, software has an audience that is involved in a personal way with the software. And the people who create software are much more like performers than they are like construction workers. If it is true that software is much more like art than it is like manufacturing, then the tools of artistic criticism should be useful for evaluating software. It should also be possible to apply successful approaches to art or music pedagogy to software pedagogy. Furt

The Software Artists: Explanation

I wrote a paper some time ago to submit to the Conference of the Association for Software Testing, but the paper was not accepted for the program. I'm on the waiting list if another presenter drops out, which seems unlikely at this point. I think my paper was not accepted at CAST because it is somewhat similar to a presentation from Jonathan Kohl and Michael Bolton, two of the best testers in the universe. I intend to publish my paper here on my blog in the hope that Jonathan and Michael and other Software Artists will have access to as much relevant information as possible to support their position. I think that one enormous reason few people take software-as-artistic-practice seriously is a perceived lack of practical application: manufacturing and engineering provide venerable examples of measurement tools, education curricula, and market strategy-- assuming that you believe that software is an engineered and manufactured product.

technical debt as impedance mismatch

My colleague Matt Heusser will be hosting a peer conference on "technical debt" later in the summer. I've been thinking about the subject and realized that technical debt could be interpreted as a kind of impedance mismatch. Impedance mismatch happens in acoustics, electric current, and many other places. Here are a couple of examples that everyone will understand: trying to fill a swimming pool with an eyedropper; trying to get a drop of water from a firehose. I can think of two software examples from my own career, one involving tools, the other involving skill. An application I was testing the other day has two JavaScript prompts in a row. Selenium correctly recognized and dismissed the first prompt, but it was not able to see or manipulate the second one. As a result, I have an unfinished automated test, and technical debt in the form of Selenium hacking, manual testing, and future test maintenance. This is the eyedropper-swimming-pool example: no matter how m
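The eyedropper example can be made concrete with back-of-the-envelope arithmetic (all figures here are invented round numbers, not measurements): the mismatch is not that the tool does nothing, it's that its rate is so many orders of magnitude off that it is effectively useless for the job.

```python
# Back-of-the-envelope illustration of impedance mismatch.
# All quantities are invented round numbers for illustration.

POOL_LITERS = 50_000        # a modest swimming pool
DROP_LITERS = 0.00005       # roughly one eyedropper drop (0.05 mL)
DROPS_PER_SECOND = 1

seconds = POOL_LITERS / DROP_LITERS / DROPS_PER_SECOND
years = seconds / (60 * 60 * 24 * 365)

# The eyedropper does move water -- just about 1e9 drops too slowly.
print(f"about {years:,.0f} years to fill the pool one drop at a time")
```

The point of the sketch is the exponent, not the exact figure: when tool throughput and problem size are mismatched by nine orders of magnitude, working harder with the same tool cannot close the gap, which is the shape of the Selenium situation above.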

advice on a selenium problem?

I thought I'd mention this here. If anyone has a theory, please leave a comment or find me somewhere... I'm using Selenium to drive a Javascript WYSIWYG editor. I type some text into a largish textarea; save the page; open the page again in the editor. Upon doing this, IE7 can properly read and parse the contents of the textarea, but Firefox thinks the textarea is empty. I can set up an environment to reproduce this for an interested party.