Does Robert Glass’ Formula for Estimation Success Actually Work? by Bruce Nielson
In a previous post I mentioned Robert Glass’ “fact” that estimates are made at the beginning of a project, before the problem is even defined, and thus the estimate is invalid from the get-go.
While I don’t disagree with Glass, I do believe he is underestimating (pun intended!) why we humans prefer estimates – even known bad ones – over no estimate at all. I referred to this phenomenon as “water breathing”: if you are about to drown anyhow, you might as well see if you can breathe underwater.
Up-front estimates are much the same: if you don’t make one at all, you’ve failed already. If you make one and it’s bad, you might (and often will) have time to correct it later. That’s the real reason we regularly make bad up-front software estimates. It’s a good reason once properly understood.
I know this probably sounds defeatist, and I don’t mean it to be. In later posts I’ll discuss alternatives to making bad up-front estimates, even when you have “no choice.” But before I explore alternatives, I think we could benefit from looking at all of Glass’ “facts” about estimation. (See Facts and Fallacies of Software Engineering, pp. 27–42)
Fact 8: One of the most common causes of runaway projects is poor estimation.
Fact 9: Most software estimates are performed at the beginning of the life cycle. This makes sense until we realize that estimates are obtained before the requirements are defined and thus before the problem is understood. Estimation, therefore, usually occurs at the wrong time.
Fact 10: Most software estimates are made either by upper management or by marketing, not by the people who will build the software or their managers. Estimation is, therefore, done by the wrong people.
Fact 11: Software estimates are rarely adjusted as the project proceeds. Thus those estimates done at the wrong time by the wrong people are usually not corrected.
Fact 12: Since estimates are so faulty, there is little reason to be concerned when software projects do not meet estimate targets. But everyone is concerned anyway.
Fact 13: There is a disconnect between management and their programmers. In one research study of a project that failed to meet its estimates and was seen by its management as a failure, the technical participants saw it as the most successful project they had ever worked on.
Look over that list carefully. I can see everyone nodding their heads approvingly. “Yup, it’s all so true! Those stupid pointy-haired managers are the real cause of the bad estimates!”
While these facts all seem generally correct to me, I feel like something is missing. Glass seems to be advocating that if we do the inverse, we’ll be fine. Likewise, as you shake your head knowingly and wag your finger at management, you are advocating the same. But is this really true?
Let’s try taking the inverse of all his facts and see if we really believe the software estimation problem is solved. Can you honestly say you agree with the following statement as factually correct?
Your software estimates will all be wildly successful if you just:
- Make good estimates instead of poor ones (inverse of fact 8) by…
- Making the estimates after the problem is defined – so after requirements gathering is completed (inverse of fact 9)
- Having the developers make the estimate, not marketing or management (inverse of fact 10)
- And adjusting your estimates when there are clear changes to the scope of the project (inverse of fact 11)
Since all of the above are true for your project, we can then conclude that it’s perfectly acceptable for management to get very concerned if you miss your estimates, since they “did it all the right way” (inverse of fact 12), and we know there will never be another disconnect between management and programmers again (inverse of fact 13).
Do you believe it?
I don’t.
Personally, I have never let management or marketing make estimates for the programming team (except to add a fudge factor on top), I’ve always performed the estimate after requirements gathering, when we’d supposedly “defined the problem,” and I always adjust my estimate for clear changes of scope. But even when I “do it all right,” my experience is that programmers still can’t generally keep to their estimates.
So my conclusion is that while Glass has hit on a series of true problems with software estimation – and I’d agree that if you don’t at least do the above, you have no hope at all of making your estimates – I feel he is failing to address the true underlying problems with software estimation.
So then what are the real reasons we all seem to suck at software estimation? To find that answer, we need to learn a lot more about how software and human psychology collide before we have any real hope of finding a solution to the problem.