Almost any argument about managing the software development process inevitably deteriorates into anecdote-ping-pong. "We did wawa and everyone quit."
"Oh yeah? Then how do you explain Company X? They wawa regularly and their stock is up 20%!"
If you have even the slightest bit of common sense, you should ask: Where's the data? If I'm going to switch to Intense Programming I want to see proof that the extra money spent on dog kennels and bird cages is going to pay for itself in increased programmer self-esteem. Show me hard data!
And, of course, we have none.
One set of people will tell you you gotta have private offices with walls and a door that closes. Another set of extremos will tell you everyone has to be in a room together, shoulder-to-shoulder. Neither of them has any hard data whatsoever, where by hard data I mean data that wouldn't be laughed out of a sixth-grade science classroom. The truth is, you can't honestly compare the productivity of two software teams unless they are trying to build exactly the same thing under exactly the same circumstances with the exact same human individuals, who have been somehow cloned so they don't learn anything the first time through the experiment.
Tom DeMarco was so frustrated at the inherent impossibility of providing any kind of hard data that he went so far as to write a novel in which he fantasizes about a bizarre land in which programmers are so cheap you actually can do experiments where, say, half the people have offices and half the people have cubicles.
But we don't have the data. We don't have any data. You can give us anecdotes left and right about how methodology X worked or didn't work, but you can't prove that when it worked it wasn't just because of one really, really good programmer on the team, and you can't prove that when it failed it wasn't just because the company was in the process of going bankrupt and everybody was too demoralized to do anything at all, Aeron chairs notwithstanding.
But don't give up hope. We do have the collective wisdom of fifty years of building software to draw from. Or at least, it's out there somewhere. Your typical startup with three pals from college may not exactly have the collective wisdom, so they're going to reinvent things from scratch that IBM figured out in 1961, or go bankrupt failing to reinvent them. Too bad, because they could have read Facts and Fallacies of Software Engineering, by Robert L. Glass, the best summary of what the software profession should have agreed upon by now. Here are just a few examples from the 55 facts and 10 fallacies in the book:
- The most important factor in software work is not the tools and techniques used by the programmers, but rather the quality of the programmers themselves.
- Adding people to a late project makes it later.
- Reuse-in-the-small (libraries of subroutines) began nearly 50 years ago and is a well-solved problem.
- Reuse-in-the-large (components) remains a mostly unsolved problem, even though everyone agrees it is important and desirable.
You can read the others in the table of contents on Amazon. One of the best things about the book is that it has sources for each fact and fallacy, so you can go back and figure out why we collectively believe that, say, code inspection is valuable but cannot and should not replace testing. This is bound to be particularly helpful when you need ammunition for your arguments with people in suits making absurd demands (Can we make a baby in 1 month if we hire 9 mothers?). [Joel on Software]