technical investment

My colleague Matt Heusser is doing this workshop on technical debt soon.

I posted a little meditation on technical debt as impedance mismatch, but that now seems to me trivially true, even facile.

Since I made The Software Artists public, though, I've been fielding a lot of questions on the software-test mail list that have me rethinking my position on technical debt. So here is what I *really* think:

Technical debt doesn't really exist. At least, it doesn't exist for high-performing software teams. It's a useful concept for those encumbered by poor programming and poor testing, but the idea ceases to be useful about the time the code base becomes manageable.

I have a graph and a metaphor that both explain my position.

I'll just describe the graph because I'm too lazy to hunt down some free software to draw it. This graph describes a couple of different projects I've been involved in, for a couple of different employers.

The critical line on the graph is "regression bugs released to production". Or you could title it "predictable bugs released to production" if you wanted to be more inclusive, but I like the regression idea best.

Another line on the graph is "features released to production". This is where we make the product better.

A third line on the graph is "test coverage". I don't really care about the distinction between unit tests, integration tests, acceptance tests, or whatever. Just stipulate that it is possible to achieve some significant automated test coverage of the product.

Over time, the graph of a productive, high-performance team creating software will likely show:

Early on, the number of regression bugs released to production will be a small number, then will go to zero. At Socialtext, we have a suite of Selenium-based regression tests. About a year ago, we rapidly increased the number of test steps from around 1000 to around 4000. The number of regression bugs released to production went from 'a few' at 1000 test steps to zero at 4000 test steps. That line on the graph has remained at zero, and the test suite now has about 7000 test steps.
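To make "test steps" concrete: each step is a single browser action or assertion. Socialtext's suite used Selenium's own test-step format; the sketch below is a hypothetical rendering in Python with the modern Selenium bindings (the URL and element IDs are invented), just to show the grain at which steps get counted.

```python
# Hypothetical regression check. Each action or assertion below
# corresponds to one "test step" in the sense used above; the URL
# and element IDs are made up for illustration.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("https://wiki.example.com/login")                 # step: open page
    driver.find_element(By.ID, "username").send_keys("qa")       # step: type
    driver.find_element(By.ID, "password").send_keys("secret")   # step: type
    driver.find_element(By.ID, "login-button").click()           # step: click
    assert "Dashboard" in driver.title                           # step: verify
finally:
    driver.quit()
```

Five steps there; a suite of 7000 steps is roughly 1400 such checks standing between a code change and production.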

The number of features added to the software will rise significantly. Good developers working well, releasing to production every two weeks: you get a lot of features.

Test coverage (and by implication, refactoring): this is the interesting part. Test coverage rises at a much smaller rate than the rate at which features are added. Picture the widening gap between those two lines: as the number of features rises dramatically and test coverage rises less dramatically, the conventional view says the amount of technical debt is increasing.

Yet the number of regression bugs released to production remains zero.

This works because that area of the graph between test coverage and features released doesn't really represent debt. We know, of course, that the number of *possible* tests for any reasonable feature far exceeds the number of *necessary* tests for that feature. The only line that matters to us is the 'defects released' line. We choose our tests to minimize defects, not to maximize coverage. Rather than go for feature-for-feature coverage, we weave a web of tests that is likely to catch any regression error.
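One way to picture that web: instead of one narrow test per feature, a single workflow test can cross several features, so a regression anywhere along the path breaks it. A minimal sketch, with a toy in-memory wiki standing in for the real application:

```python
# Hypothetical "web of tests" sketch: one workflow test crosses several
# features, so a regression in any of them fails the test. The tiny
# in-memory Wiki below is a stand-in for the real application.

class Wiki:
    def __init__(self):
        self.pages, self.tags = {}, {}

    def create_page(self, title, body=""):
        self.pages[title] = body
        return title

    def tag_page(self, title, tag):
        self.tags.setdefault(tag, set()).add(title)

    def search(self, term):
        return [t for t in self.pages if term in t]

    def render(self, title):
        return f"<h1>{title}</h1><p>{self.pages[title]}</p>"

def test_page_lifecycle():
    app = Wiki()
    page = app.create_page("Release Notes")      # feature: page creation
    app.tag_page(page, "release")                # feature: tagging
    assert page in app.search("Release")         # feature: search
    assert page in app.tags["release"]           # a break anywhere above fails here
    assert "Release Notes" in app.render(page)   # feature: rendering

test_page_lifecycle()
print("page lifecycle web-test passed")
```

The point is the shape of the test, not the toy code: each assertion stands guard over more than one feature, which is how a modest suite can hold the regression line at zero.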

We INVEST in automated tests (and refactoring, etc.) to MINIMIZE RISK. I have to emphasize again that this can really only be done with a skilled and disciplined team in a reasonably well-designed code base. In a less-expert environment, the idea of technical debt might be more useful.

So now to the metaphor:

I have a friend who is a successful artist and gallery owner here in our small town in the West. She started in sculpture more than 25 years ago, moved to oils, and recently has fallen for acrylics and is doing some of the best work of her career. She tends toward landscapes both real and imagined, although she sometimes takes for a subject flowers or saxophones.

She's a Western female painter of landscapes and flowers: why is she not as successful as Georgia O'Keeffe?

If we had been talking about software, the answer would probably involve 'technical debt'. But we can see on the face of it that such an answer is silly. There is no such thing as technical debt in a practice or in a performance.

However, the concept of technical *investment* does clearly apply in both cases. Georgia O'Keeffe was an expert and shaped her career in a certain way. And she had a lot of luck. My friend Karyn Gabaldon is also an expert and shaped her career in a certain way. And she had her own share of luck, too.

The careers of both of these successful women show a lot of investment, but different kinds of investment in different areas, different people, different times, with different results.

The important thing is not that Ms. Gabaldon is less famous than Ms. O'Keeffe: the important thing is that both succeeded in their work.

So where would the idea of technical debt actually be useful? Consider the team encumbered by poor programming and poor testing. If the number of predictable defects released to production on your project is high, consider studying the literature of software development and software testing to bring the team's skills up to a level where technical investment makes more sense than technical debt.