Sunday, May 18, 2008

The Software Artists: The Value of Software


Previous: The Software Artists: Explanation
Previous: The Software Artists: Introduction


Philosophy of Art and the Value of Software


Manufactured goods generally have a clear relationship between
cost, price, and value. In software, as in art, the value of the work
is more often completely unrelated to cost and price. Operating
systems provide a great example: the Windows audience for the
most part must use Windows regardless of cost or price. The
Mac OSX audience generally chooses to use OSX regardless of
price, and often explicitly because of the aesthetic experience of
using OSX. Linux has no cost at all, and a price that varies
wildly, and it also has a dedicated audience.

The value of software, like the value of an artistic performance,
lies in the ability of the software to attract and keep an audience.
The software industry would benefit immensely by turning the
tools of artistic criticism on software.

The 20th century in particular saw a great proliferation of
critical theory about artistic work. Among the most important
schools were the New Criticism and Structuralism. Both provide
fine tools for evaluating software.


The New Criticism: The Intentional Fallacy and Aesthetic Value

The New Criticism was the most important school of literary
criticism in the middle of the 20th century. An essay of this length
can touch only lightly on some key ideas of New Criticism, but
those ideas turn out to be quite valuable.

The New Critics believe that once authors finish their works, the
authors disappear from the milieu in which the work exists. New
Critics do not involve themselves in what the author intended to
create; they concern themselves only with the work as it exists.
They call the appeal to authorial intention the “Intentional
Fallacy”: what the author intended has no part in the value of the
author's work.

Creating software can be seen in a similar light. Once the
software is released, the team that created it cannot control how
the software is used, nor who uses it, nor what it is used for. The
value of the software must lie in the software itself and in how it
is used, not in how it was created or in what it was intended to do.

The most important tool of the New Critics is a technique called
“close reading”. From Wikipedia: “close reading describes the
careful, sustained interpretation of a brief passage of text. Such a
reading places great emphasis on the particular over the general,
paying close attention to individual words, syntax, and the order
in which sentences and ideas unfold as they are read.”

Creating software has its own kinds of close reading: code
review, unit tests, and feature tests are all ways in which software
creators emphasize the particular over the general. But one of the
most important results of close reading, and one of the most
important aspects of value to New Critics, is to identify
ambiguities in the work. These ambiguities are then examined for
their value: some kinds of ambiguity add to the value of the work;
other kinds detract from it.
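
To make the analogy concrete, here is a small sketch of my own, not
drawn from the original essay: a unit test read as a kind of close
reading, fixing attention on one particular behavior and recording
an ambiguity it surfaces. The function format_wiki_link and its
behavior are hypothetical, written in Python's standard unittest
style.

import unittest


def format_wiki_link(text):
    """Turn CamelCase words into wiki links; leave other words alone."""
    words = []
    for word in text.split():
        is_wiki_word = word[:1].isupper() and any(c.isupper() for c in word[1:])
        if is_wiki_word:
            words.append("<a href='/%s'>%s</a>" % (word, word))
        else:
            words.append(word)
    return " ".join(words)


class TestWikiLinkFormatting(unittest.TestCase):
    def test_camelcase_word_becomes_link(self):
        # The particular over the general: one word, one behavior.
        self.assertEqual(format_wiki_link("FrontPage"),
                         "<a href='/FrontPage'>FrontPage</a>")

    def test_all_caps_word_is_ambiguous(self):
        # Close reading surfaces an ambiguity: is "HTML" a wiki word
        # or ordinary prose? The test records the current choice
        # instead of leaving it unexamined.
        self.assertEqual(format_mp3 if False else format_wiki_link("HTML"),
                         "<a href='/HTML'>HTML</a>")


if __name__ == "__main__":
    unittest.main()

The point is not the wiki formatter itself, but that each test
isolates one small ambiguity and asks whether it adds to or
detracts from the experience, which is the same question the New
Critics ask of a text.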

Software whose ambiguous features allow the user to do more
than the developers intended will be more valuable-- think of a
wiki. Software whose ambiguous features stop the user in his
tracks has much less value-- think of the Windows Vista security
regimen.

Aesthetic Value: Unity, Complexity, Intensity

Monroe Beardsley, who wrote “The Intentional Fallacy” along
with William K. Wimsatt, also proposed a way to measure the
value of a particular work. Beardsley states that the value of the
work resides in three criteria: unity, complexity, and intensity.
Applying these criteria to software is an interesting exercise.
Software may be unified if all of its features support a central
activity or theme. Software may be complex if its features cover
a wide range of activity. Software may be intense if using the
software is a compelling experience.

Periodically examining a software project throughout the
development process using these criteria is a fascinating exercise.

Structuralism: Signs and Myths

Structuralism became popular later in the 20th century. Where
New Criticism seeks to examine particular works disconnected
from intent or affect, Structuralism is rooted in linguistics and
anthropology, and seeks to examine works in the context of their
language and of their culture.

Two aspects of Structuralism are of particular interest: the idea
that linguistic signs can be decomposed into the signifier and the
signified; and that works can have deep structures that reflect
cultural values that can be represented as myths.

Signs: Signifier and Signified

A linguistic sign is a speech act: a word, a sentence, a poem, a
book. It is not at all unreasonable to treat the use of a software
feature or a software user interface as a sign.

The signifier is the “sound pattern” of the sign. It is what
happens in the physical act of speaking, or in the personal act of
reciting to ourselves. Furthermore, signifiers are the tokens by
which people communicate. Signifiers are what people send each
other as they participate in culture.

The signified is the “concept or meaning” of the signifier. The
signified is the true act or true feeling or
physical/emotional/apprehended manifestation of the signifier.
With the ability to separate signifier and signified, the
Structuralists look for common deep elements and mythological
underpinnings of artistic and cultural works.
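
As a purely illustrative sketch (the example and all names are
mine, not the essay's), a user-interface element can be modeled as
a sign whose signifier is the visible label and whose signified is
the act it stands for; the Structuralist question is whether the
two line up.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Sign:
    # what the user perceives, e.g. a button label or a warning dialog
    signifier: str
    # the concept or act the signifier stands for
    signified: Callable[[], str]


def play_mp3():
    return "audio starts playing"


allow_button = Sign(signifier="Click here to allow this add-on...",
                    signified=play_mp3)

# The Structuralist question: does the signifier match the signified?
print(allow_button.signifier, "->", allow_button.signified())

The Internet Explorer warnings examined below can be read as a
case where signifier and signified do not line up.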

The value of any particular work, then, can be evaluated in
several ways. A work might have value to a Structuralist if:
  • Signs are rich and easily available
  • Signifiers in the work are rich and pleasant
  • Signifieds are deep and clearly delineated by their signifiers
  • The work reflects a rich understanding of the cultural mythology and milieu

Exercise: New Critical and Structuralist Evaluation of Browser Behavior

New Critics contribute the idea of close reading in order to
evaluate the unity, complexity, and intensity of a work, looking
for ambiguity in the service of a rich aesthetic experience.

Structuralists contribute the ability to separate signifier and
signified within a work, in order to evaluate the richness of both
the experience of the work itself and the extent to which the work
reflects cultural mythology, either that of the work itself or that of
the consumer of the work.

As a thought experiment, use these tools to measure the value of
this behavior:

When Internet Explorer opens a web page that contains MP3 files
(for instance
http://www.drummingstyles.com/Genres/Latin/Bossa-Nova/index.html),
by default it prevents the user from loading the page and pops up
a warning message that reads

(IE6 and IE7) Do you want to allow software such as
ActiveX controls and plugins to run?

(IE6) A script is accessing some software (an ActiveX
control) on this page which has been marked safe for scripting.
Do you want to allow this?

(IE7) This website wants to run the following add-on:
'Windows Media Player' from 'Microsoft Corporation'. If you
trust the website and the add-on and want to allow it to run,
click here...

A New Critic can use close reading on this behavior to see if it
exposes any ambiguities. If those ambiguities contribute to the
unity, complexity, and intensity of the experience, then the
behavior is valuable.

Since the only active content on the page is a set of mp3 files,
the false reference to ActiveX sends the reader down a false path.
Since there is no explanation for why an mp3 file is considered to
be an ActiveX control, this falseness only detracts from both the
unity and the intensity of the experience.

Furthermore, since the software requires validation of behavior
that the reader requested anyway (the false “ActiveX control” is
in fact “safe for scripting”), this behavior can only detract from
the whole point of the work, which is to play the mp3 files while
reading the text of the page.

So a close reading of the behavior in question in a New Critical
sense shows that the software reports false information. Although
the behavior could be construed as adding complexity, it does so
for no good reason, and prevents the experience of the work itself,
which is to play the mp3 files while reading the page.

A Structuralist would have a different interpretation. The
references to ActiveX are a signifier, but it is unclear exactly what
is being signified, since the obvious signified (a real ActiveX
control) does not actually exist.

The Structuralist finds a clue in IE7, which, under some
circumstances when loading a page with mp3 files, warns: “This
website wants to run the following add-on: 'Windows Media
Player' from 'Microsoft Corporation'. If you trust the website and
the add-on and want to allow it to run, click here...”

Microsoft itself is concerned about security, or at least about the
appearance of security. Perhaps the false warnings are intended
to warn about the possible insecurity of the Microsoft Windows
Media Player itself.

Looking at the wider culture, everywhere from airports to banks
to offices, there is a demonstrable trend toward what Bruce
Schneier calls “security theater”. Security theater is
“countermeasures that provide the feeling of security while doing
little or nothing to actually improve security”.

A Structuralist will see that the culture that produced this work
values “security theater”, the appearance of security for its own
sake, and so the Structuralist will concede a certain value to this
Internet Explorer behavior that a New Critic would not.

Citations for part one


4 comments:

Michael said...

Nice work, Chris.

"The Windows audience for the most part must use Windows regardless of cost or price. The Mac OSX audience generally chooses to use OSX regardless of price, and often explicitly because of the aesthetic experience of using OSX. Linux has no cost at all, and a price that varies wildly, and it also has a dedicated audience."

In addition to your research on artistic criticism, you might want to throw in some economics. You're correct to separate cost and price, but all of these operating systems have costs. Linux is not exempt. The costs are not necessarily expressed directly in monetary terms, but they're there. For example, the costs of adopting Linux, for me, would include giving up Excel, learning configuration and maintenance of a new operating system, purchase of additional hardware (because I'm not giving up my current machines), opportunity cost associated with learning Linux stuff that prevents me from doing Windows or Mac stuff at the same time, and so on. Linux, for me, wouldn't provide sufficient value to compensate for those costs.

Next, it's not that Mac users are oblivious to cost or price; they're willing to pay the price and take on the costs of using an operating system that 90% of the population doesn't use, because they get sufficient value out of it.

It may help to think of Jerry Weinberg's definition of value: "what someone will do (or pay) to have their requirements met".

James Marcus Bach said...

I appreciate a philosophical approach to rethinking testing. I think we need to cast a very wide net to find useful ideas.

But, I'm scratching my head about this particular set of ideas. You seem to like the application of critical theory to testing, for instance, but you give no argument for why anyone *ought* to like it.

The New Critics have declared themselves to be important, perhaps, but that's not a persuasive argument.

You declare that there is such a thing as the "intentional fallacy." You say intention doesn't matter. I'm familiar with this line of reasoning, at least, via Derrida, and I'm not convinced. It seems to me that theories of intent matter a great deal in literary interpretation and software, both, because both literary and technological artifacts are social phenomena. Actual intent plays a role in theories of intent, and that in turn plays a role in how people react to the artifacts they encounter. If I believe the vendor has no intent to protect my computer from harm, I will probably not deal with him.

I see your New Critics and raise you Symbolic Interactionism, Situated Action Theory, and Ethnomethodology (especially the latter, wherein theories of intent play a critical role).

The main problem I have, though, is this: As I read what you've written here, I don't yet see that you are offering a clear, specific, different solution to the problem of software testing. What is "unity, variety, complexity" really? Are they quality factors? We already have that concept in ordinary requirements analysis and testing.

Are you just suggesting that studying these subjects may be useful, or are you really saying something like "stop doing that, start doing this?" In which case, exactly what are those thats and thises?

Cem Kaner said...

Hi, Chris:

I think this work is interesting and I think there's a lot of potential for application to testing. But I think we might disagree in the details, and maybe those disagreements are more fundamental. I can't tell yet.

I'm saddened by the reference to the "New Critics." Over the past 20 years, in the American courts, we have seen successful attacks on the use of documented legislative intent (e.g. Congressional Committee reports and floor debate) when interpreting statutes. The meaning, in this view, is to be found only in the plain wording of the statute and the (unfettered-by-Congressional-bias) bias of the judge or judicial panel. At the Supreme Court, this view has been especially successfully espoused by Scalia, and I think that part of the dysfunction that we see in Congress has evolved out of this dramatic shift in the balance of power between Congress and the Courts, and in the despair that comes from losing the power to guide the interpretation of something complex via publication of a rational explanation or discussion. Intent matters. Our decision to ignore intent matters too.

I don't know much about The Arts, but I suspect that at least some artists attempt to communicate through their art. I wonder how much it changes what they are willing to do or try when they are told that as a matter of principle, their intent will be ignored.

Let me bring this back to software. I see software as a communication among several people, written in a syntax that can be executed by a machine.

One of the differences between the typical painting and the program is that the painting is static. Once it's done, it's done. In contrast, the program is often subject to future change. Like a book that is published in multiple editions, an authoring group can influence each other, influential stakeholders (like, the publisher) can influence the authors, and feedback from the public (or some members of the public) can influence the authors. One of the arguments that is often effective is to point out that the software operates inconsistently with the intent of the authors.

I think that in software, assessment that -- as a matter of principle -- ignores the intent of the authors can be counterproductive if our goal is to influence the evolution of the software.

Chris McMahon said...

Dr. Kaner makes a good and valid point. I used the New Critics because they provided a position from which to criticize the value of unquantifiable work that was internally consistent, intellectually rich, widely applicable, and fairly easy to explain in the context of this paper.

No one today is a serious New Critic. Their heyday was in the 1960s, and their ideas, while influential, were ultimately overtaken by more mature and nuanced work.

Here I was hoping only to provide a pointer to that work, because much of what followed the New Critics, while more intellectually powerful, is more difficult to explain without also explaining aspects of psychology, economics, folklore, and other aspects of criticism not immediately germane to software.

Using the New Critics as an example spared me a lot of digression.