The concept known as "worse is better" holds that in software making (and perhaps in other arenas as well) it is better to start with a minimal creation and grow it as needed. Christopher Alexander might call this "piecemeal growth." This is the story of the evolution of that concept.
The crux of the essay is "The Rise of Worse is Better."
From 1984 until 1994 I had a Lisp company called "Lucid, Inc." In 1989 it was clear that the Lisp business was not going well, partly because the AI companies were floundering and partly because those AI companies were starting to blame Lisp and its implementations for the failures of AI. One day in Spring 1989, I was sitting out on the Lucid porch with some of the hackers, and someone asked me why I thought people believed C and Unix were better than Lisp. I jokingly answered, "because, well, worse is better." We laughed over it for a while as I tried to make up an argument for why something clearly lousy could be good.
A few months later, in Summer 1989, a small Lisp conference called EuroPAL (the European Conference on the Practical Applications of Lisp) invited me to give a keynote, probably because Lucid was the premier Lisp company. I agreed, and while casting about for what to talk about, I gravitated toward a detailed explanation of the worse-is-better ideas we had joked about, as applied to Lisp. At Lucid we knew a lot about how we would do Lisp over to survive the business realities as we saw them, and so the result was called "Lisp: Good News, Bad News, How to Win Big." [html] (slightly abridged version) [pdf] (has more details about the Treeshaker and the delivery of Lisp applications).
I gave the talk in March 1990 at Cambridge University. I had never been to Cambridge (nor to Oxford), and I was quite nervous about speaking at Newton’s school. There were 500 to 800 people in the auditorium, and before my talk they played the Notting Hillbillies over the sound system - I had never heard the group before, and indeed, the album was not yet released in the US. The music seemed appropriate because I had decided to use a very colloquial American style of writing in the talk, and the Notting Hillbillies, though a British band, played a style of music heavily influenced by traditional American music. I gave my talk with some fear since the room was standing room only, and at the end there was a long silence. The first person to speak up was Gerry Sussman, who largely ridiculed the talk, followed by Carl Hewitt, who was similarly none too kind. I spent 30 minutes trying to justify my talk to a crowd in no way inclined to hear such criticism - perhaps they were hoping for a cheerleader-type speech.
I survived, of course, and made my way home to California. Back then, the Internet was just starting up, so it was reasonable to expect not too many people would hear about the talk and its disastrous reception. However, the press was at the talk and wrote about it extensively in the UK. Headlines in computer rags proclaimed "Lisp Dead, Gabriel States." In one, there was a picture of Bruce Springsteen with the caption, "New Jersey Style," referring to the humorous name I gave to the worse-is-better approach to design. Nevertheless, I hid the talk away and soon was convinced nothing would come of it.
About a year later we hired a young kid from Pittsburgh named Jamie Zawinski. He was not much more than 20 years old and came highly recommended by Scott Fahlman. We called him "The Kid." He was a lot of fun to have around: not a bad hacker and definitely in a demographic we didn’t have much of at Lucid. He wanted to find out about the people at the company, particularly me since I had been the one to take a risk on him, including moving him to the West Coast. His way of finding out was to look through my computer directories - none of them were protected. He found the EuroPAL paper, and found the part about worse is better. He connected these ideas to those of Richard Stallman, whom I knew fairly well since I had been a spokesman for the League for Programming Freedom for a number of years. JWZ excerpted the worse-is-better sections and sent them to his friends at CMU, who sent them to their friends at Bell Labs, who sent them to their friends everywhere.
Soon I was receiving 10 or so e-mails a day requesting the paper. Departments from several large companies requested permission to use the piece as part of their thought processes for their software strategies for the 1990s. The companies I remember were DEC, HP, and IBM. In June 1991, AI Expert magazine republished the piece to gain a larger readership in the US.
However, despite the apparent enthusiasm from the rest of the world, I was uneasy about the concept of worse is better, and especially about my association with it. In the early 1990s, I was writing a lot of essays and columns for magazines and journals, so much so that I was using a pseudonym for some of that work: Nickieben Bourbaki. The original idea for the name was that my staff at Lucid would help with the writing, and the single pseudonym would represent the collective, much as the French mathematicians of the 1930s used "Nicolas Bourbaki" as their collective name while rewriting the foundations of mathematics in their image. However, no one but I wrote anything under that name.
In the Winter of 1991-1992 I wrote an essay called "Worse Is Better Is Worse" under the name "Nickieben Bourbaki." This piece attacked worse is better. In it I created the fiction that Nickieben was a childhood friend and colleague of Richard P. Gabriel, and that, as a friend and for Richard’s own good, Nickieben was correcting Richard’s beliefs.
In the Autumn of 1992, the Journal of Object-Oriented Programming (JOOP) published an editorial I wrote called "Is Worse Really Better?" as a "rebuttal" to "Worse Is Better Is Worse." The folks at Lucid were starting to get a little worried: I would bring them review drafts of papers arguing (as me) for worse is better, and later I would bring them rebuttals (as Nickieben) against myself. One fellow was seriously nervous that I might have a mental disease.
In the middle of the 1990s I was working as a management consultant (more or less), and I became interested in why worse is better really could work, so I was reading books on economics and biology to understand how evolution happened in economic systems. Most of what I learned was captured in a presentation I would give back then, typically as a keynote, called "Models of Software Acceptance: How Winners Win," and in a chapter called "Money Through Innovation Reconsidered," in my book of essays, "Patterns of Software: Tales from the Software Community."
You might think that by the year 2000 I would have settled what I think of worse is better - after over a decade of thinking and speaking about it, through periods of clarity and periods of muck, and through periods of multi-mindedness on the issues. But, at OOPSLA 2000, I was scheduled to be on a panel entitled "Back to the Future: Is Worse (Still) Better?" And in preparation for this panel, the organizer, Martine Devos, asked me to write a position paper, which I did, called "Back to the Future: Is Worse (Still) Better?" In this short paper, I came out against worse is better. But a month or so later, I wrote a second one, called "Back to the Future: Worse (Still) is Better!" which was in favor of it. I still can’t decide. Martine combined the two papers into the single position paper for the panel, and during the panel itself, run as a fishbowl, participants routinely shifted from the pro-worse-is-better side of the table to the anti-side. I sat in the audience, having lost my voice giving my Mob Software talk that morning, during which I said, "risk-taking and a willingness to open one’s eyes to new possibilities and a rejection of worse-is-better make an environment where excellence is possible. Xenia invites the duende, which is battled daily because there is the possibility of failure in an aesthetic rather than merely a technical sense."
Decide for yourselves.
"The classic essay on 'worse is better' is either misunderstood or wrong." (Jim Waldo)