December 2009 - Posts

Tell me What to Do, Not How to Do It! by Bruce Nielson

Never tell people how to do things. Tell them what to do and they will surprise you with their ingenuity. – George S. Patton

This sounds like good advice, doesn’t it? It is, actually, but we need to understand it correctly.

The first thing we need to notice is that there is no objective difference between ‘what’ and ‘how.’ The difference is purely relative.

If I’m a marketing guy, I’m going to give you a ‘software spec’ that might be a series of bullet points like these:

  • Make the Whiz-Bang 50% faster to improve the user experience
  • Add the Gog Widget everyone keeps asking about
  • Enhance the Dribblium feature to automatically look up user information

If I’m the marketing guy, “what” I’m doing is increasing sales. “How” I’m doing it is through the feature improvements listed above.

But to the team manager, this looks a lot more like a list of “what” needs to be done with no details on “how” to do it.

So the team manager takes this list of bullet points and creates what we (mistakenly) call “detailed requirements.” He now carefully lays out the details of “what” needs to be done but leaves it up to his tech lead to figure out “how” to accomplish it.

The tech lead then creates what is (mistakenly) called a “detailed design,” where he produces a pile of UML diagrams laying out exactly “what” he wants his programmers to do, but still leaves the specifics of “how” to implement that design up to them.

Interestingly, the programmer then writes the code, which, from the compiler’s point of view (do compilers have points of view?), is really a specification of “what” the software needs to do, leaving it up to the compiler to decide “how” to do it.

So based on this example, we must accept that “what” and “how” are purely relative to a point of view. They do not objectively exist.

Perhaps this is why we don’t usually make a separation between “what” and “how.” They tend to mingle freely in our minds. Have you ever called your spouse and said “Hey, what time were you planning to be home?” when what you really wanted to know was “How much longer can I stay out with the guys/girls?” I do this all the time.

This is actually useful to know, because it means you never have to take an instruction as “how” you must do your job; what people actually care about is “what” they wanted to accomplish.

I remember a customer who insisted that a certain field, I think a zip code, had to have exactly 6 characters in it. This was a hard-and-fast requirement.

It turned out that they were used to a DOS-based program that moved to the next screen as soon as the last field on the current screen was filled in. The zip code was the last field, so they wanted to be able to enter it without moving to the next screen before having a chance to review it. (For those of you under 30, DOS is a quaint little program that lets you type stuff and then it deletes all your files.)

Of course in a Windows-based program (for those of you under 20, Windows is that thing you see just before you click on the browser) this was a non-issue. The correct ‘requirement’ was actually “allow me to review the screen before I move to the next one,” which is built into Windows anyhow.

But that’s my point. You should never assume that a customer who told you “how” to do something wasn’t actually trying to tell you “what” to do and just didn’t know how to ask for it.

So don’t just blindly follow requirements. Instead, make a real effort to understand ‘what’ the customer really wants to accomplish and come up with your own suggestions of ‘how’ to accomplish it. Think of requirements as the starting point of the discussion, not the ending point of ‘what’ you’ll finally deliver.

Software Schedules as Budgets by Bruce Nielson

Do you make and keep a budget with your home finances?

Maybe I should ask “should you make and keep a budget with your home finances?”

If you are the average American, it’s likely that you answered “no” to the first and “yes” to the second. Why do we not do things we know we should do? I suspect the answer is quite simple: because it’s hard work. As Scott Adams (creator of Dilbert) so aptly pointed out, hard work is both “hard” and “work.”

The other day I was talking to my boss about a project overrun. I explained that I had checked in with the developer one day while the project was still under budget, told him how much more time to spend on it, and asked him to then stop work and save some of our remaining budget for a final site visit. He had forgotten, kept working, and used up the remaining budget.

When I told my boss about the overrun, he asked me why developers did this so often: “Why do they overrun their budgets and not tell anyone, when all they had to do was speak up before the overrun so that the appropriate arrangements could be made?” he asked.

He is right to ask this question. This really is a case of cutting off your nose to spite your face. If you put yourself in the customer’s shoes, there is a considerable psychological difference between being told in advance that you have some hours left and being given choices on how to spend them, versus being told you are already over budget and have no choices at all.

So why does it happen so often that the developer informs everyone about the budget overrun only after the budget is gone?

I think it goes back to my personal finance budget question: budgets are hard work. I suspect that programmers really don't like to track their time and tasks.

Indeed, I have been routinely told by developers that “Tracking hours is the Project Manager’s job!”

I don’t buy it.

Now, typically, detailed status updates are done once a week, maybe less. So it’s not hard to see that if a programmer has a 20-hour task and the next status update is 40 hours away, a refusal to track his own hours can lead to up to a 20-hour overrun before the Project Manager is even capable of intervening.

So of course, if the Project Manager can't trust the programmer to report the pending overrun, he'll probably switch to twice-weekly status reporting to avoid this problem in the future.

But what if it's a 1-hour task with the (now) twice-weekly status reporting? This is a losing proposition, isn’t it?
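To put rough numbers on it, here's a minimal sketch of the worst case (in Python, just as an illustration), assuming the developer starts the task right after a status meeting, works on nothing else, and reports hours only at the next meeting:

    # Worst-case overrun that can pile up before anyone can intervene,
    # assuming hours are reported only at status meetings and the task
    # starts right after one.
    def max_undetected_overrun(estimate_hours: float, hours_to_next_report: float) -> float:
        # The developer can keep working until the next report, so the
        # invisible overrun is everything worked past the estimate.
        return max(0.0, hours_to_next_report - estimate_hours)

    print(max_undetected_overrun(20, 40))  # 20-hour task, weekly reports: 20 hours over
    print(max_undetected_overrun(1, 20))   # 1-hour task, twice-weekly reports: 19 hours over

A 19-hour overrun on a 1-hour task is a 1,900% miss, and no reporting schedule short of hourly check-ins would catch it.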

Unless we decide to have status reports every hour, there really is no alternative to programmers tracking their own tasks and schedules.

The Blame Game by Bruce Nielson

I want to discuss the need for blame on software projects. “Ugh! The blame game! I hate that!” I hear you groan.

But it's an all-too-familiar game for all of us. We all know that software projects (or all human endeavors, actually) end with what we call “the blame game,” where we all point fingers at each other and claim the failure was someone else's fault. We say we're angry when this happens, yet we participate in said game with uncanny gusto.

I think it's a mistake to overlook the human value of “the blame game.” Frustrating though it is, the end result of the blame game is usually that blame gets redistributed to the point where it's impossible to blame anyone, because everyone (or no one) was at fault. And I'd argue that this end result is generally the accurate one – there really was no one person at fault any more than just about everyone else.


In fact, most of the time, the “fault” is the “process,” not a person. (There is that dirty word “process” again.) Yes, there are many exceptions, but my experience is that most people (granted, not all) really do care about their projects. They want to be successful and they want to make their company successful. If there was a failure, it's generally because there was no recourse but the one taken.

An Example – Just Cut Testing! 

Think about this familiar scenario: a software developer is faced with an unreasonable due date mandated from the highest echelons of a large company.

Do they:

a) Have a nice chat with everyone up the food chain (possibly including the chairman of the board, the stockholders, or the competing company that forced the situation) and change the date to be more realistic, or

b) Quit their job now, since they’ve already failed, or

c) Make qualified statements about how “they can make it if…” knowing full well “if” will never happen.

For those who insist that they’d never choose option c, but would instead negotiate a better schedule: please, for the sake of argument, assume you tried that a thousand times and failed. So you now have to decide to either quit or go forward. You always have the option to quit later, of course.

I don’t know about you, but I’m not an idiot. I’d choose c over b. In fact, I’d dare say there is no choice here in the first place. C was the only correct answer.

In other words, my best bet is to take a stab at it.

I’d quickly come up with some written “requirements” (heck, that way I’d at least be able to explain which requirements changed during the course of the project), but I’d have no qualms about reducing safety margins on other tasks, like, say, testing. (This is a painfully familiar story, right?)

Given this hypothetical circumstance, I’d have to be stupid to not use this approach.

Who’s to Blame for All These Defects?

When this inevitably leads to defects – perhaps even critical ones – can we truly say it's the developer's fault? Or is it the tester's fault for not having caught the defect? Or the project manager's for not planning enough time?

Or, here's a thought, maybe it's the Chairman of the Board’s fault for creating a culture where a developer finds himself in this position in the first place?

No wait! Maybe it’s the stockholders’ fault for demanding more than was reasonable.

You tell me, because it seems to me that you can't really pin the blame on anyone. Granted, the Chairman of the Board (who probably didn't even know about the project) probably has more blame to shoulder than the developer who caused the problem. But can we really say it’s his or her fault?

So when things get ugly and the blame starts to fly, just repeat to yourself this little rule:

The blame game is a healthy way of redistributing blame until no one is found to be at fault.

Then, without guilt, play the blame game. Since you know it's a necessary ritual for moving on, you can choose not to be nasty about it. Defend yourself, but make it clear it's not “all your fault” either.

For example, let's say you are a developer. Go ahead and point out that you weren't given much test time, so it made sense to code and test quickly to make the schedule. Go ahead and point out that even the tester missed the problem. Go ahead and point out that if the company really wanted defect-free software, they'd take more time to get it “just right” before a release. The end result is that you just blamed everyone else (including yourself) and helped appropriately redistribute the blame so that no one was really at fault.

And that's how it (generally) should be.

How Do You Estimate for That?! by Bruce Nielson

Trying to estimate software projects is difficult, to say the least. But sometimes, it’s just impossible.

I just finished resolving a problem on my project that I spent the last two days working on. The task was to issue a Purchase Order (PO) to a vendor using the SAP interface available to me. 

It all started when the Purchasing group called to tell me they couldn’t release the PO because they got an error in SAP saying it would put the project over budget. They tried everything and couldn’t get it to work. They had no idea how to fix it, so they threw it back to me to resolve. I then started calling people all over the place. I talked to accounting, to finance, to IT, and I even called the help desk, which resulted in a conversation with someone in some other country whom I couldn’t understand.

Finally, IT sent me over to Taxes, who told me that the reason it wasn’t working is that when a PO is created, the system automatically adds an amount to it for tax, even if the purchase is for services and not taxable. So any project amount you enter must have taxes added to it, even if there are no taxes to be charged.

Armed with this knowledge, I then had the project amount raised to account for these ghost taxes and the problem disappeared.

Do you really believe it’s possible to estimate something like this in a project?

Does Robert Glass’ Formula for Estimation Success Actually Work? by Bruce Nielson

In a previous post I mentioned Robert Glass’ “fact” that estimates are made at the beginning of the project, before the problem is even defined, and thus the estimate is invalid from the get-go.

While I don’t disagree with Glass, I do believe he is underestimating (pun intended!) why we humans prefer estimates – even known bad ones – over no estimate at all. I referred to this phenomenon as “water breathing,” because if you are about to drown anyhow, you might as well see if you can breathe underwater.

Upfront estimates are much the same – if you don’t make them at all, you’ve failed. If you make them and they are bad, you might (and often will) have time to correct them later. That’s the real reason we make bad upfront software estimates regularly. It’s a good reason once properly understood.

I know this probably sounds defeatist, and I don’t mean it to be. In later posts I’ll discuss alternatives to making bad upfront estimates, even when you have “no choice.” But before I explore alternatives, I think we could benefit from looking at all of Glass’ “facts” about estimation. (See Facts and Fallacies of Software Engineering, pp. 27–42.)

Fact 8: One of the most common causes of runaway projects is poor estimation.

Fact 9: Most software estimates are performed at the beginning of the life cycle. This makes sense until we realize that estimates are obtained before the requirements are defined and thus before the problem is understood. Estimation, therefore, usually occurs at the wrong time.

Fact 10: Most software estimates are made either by upper management or by marketing, not by the people who will build the software or their managers. Estimation is, therefore, done by the wrong people.

Fact 11: Software estimates are rarely adjusted as the project proceeds. Thus those estimates done at the wrong time by the wrong people are usually not corrected.

Fact 12: Since estimates are so faulty, there is little reason to be concerned when software projects do not meet estimated targets. But everyone is concerned anyway.

Fact 13: There is a disconnect between management and their programmers. In one research study of a project that failed to meet its estimates and was seen by its management as a failure, the technical participants saw it as the most successful project they had ever worked on.

Look over that list carefully. I can see everyone nodding their head approvingly. “Yup, it’s all so true! Those stupid pointy haired managers are the real cause of the bad estimates!”

While these facts all seem generally correct to me, I feel like something is missing. Glass seems to be advocating that if we do the inverse, we’ll be fine. Likewise, as you shake your head knowingly and wag your finger at management, you are advocating the same. But is this really true?

Let’s try taking the inverse of all his facts and see if we really believe the software estimation problem is solved. Can you honestly say you’ll agree with the following statement as being factually correct?

Your software estimates will all be wildly successful if you just:

  • Make good estimates instead of poor ones (inverse of fact 8) by…
  • Making the estimates after the problem is defined – so after requirements gathering is completed (inverse of fact 9)
  • Have the developers make the estimate, not marketing or management (inverse of fact 10)
  • And adjust your estimates when there are clear changes to the scope of the project (inverse of fact 11)

Since all of the above are now true for your project, we can then conclude that it’s perfectly acceptable for management to get very concerned if you miss your estimates, since they “did it all the right way” (inverse of fact 12), and we know there will never be another disconnect between management and programmers again (inverse of fact 13).

Do you believe it?

I don’t.

Personally, I have never let management or marketing make estimates for the programming team (except to add a fudge factor on top), I’ve always performed the estimate after requirements gathering, when we’d supposedly “defined the problem,” and I always adjust my estimate for clear changes of scope. But even when I “do it all right,” my experience is that programmers still can’t generally keep to their estimates.

So my conclusion is that while Glass has hit on a series of true problems with software estimation – and I’d agree that if you don’t at least do the above, you have no hope at all of making your estimates – I feel he fails to address the true underlying problems with software estimation.

So then what are the real reasons we all seem to suck at software estimation? To find that answer, we need to learn a lot more about how software and human psychology collide before we have any real hope of finding a solution to the problem.

Sure It’s the People – But Which People? by Bruce Nielson

So if we admit that software is really about human intelligence, not tools, then we know that human factors matter the most.

Tom DeMarco and Timothy Lister, in their famous (infamous?) book Peopleware, were the first to popularize the idea that it is people that matter the most.

But I can't shake the sneaking suspicion that they missed something. You see, I've taken top-notch programmers and had them succeed on one project and fail miserably on another.

I remember one client that, whenever he called, made me cringe. I knew – knew – that no matter who I assigned to the project, it was already a failure from the moment he called. But when his counterpart at the same company called, I knew – knew – the project was going to succeed no matter what.

Somewhere along the line the thought occurred to me: could it be that the customer was the primary success or failure factor on my projects?


But how could that be? I knew tools weren’t the main factor, but how could programmers not be? How could it be the customer -- who never wrote a line of code!?

Thus began my study of the broader software project success factors.

The Chaos Report

Perhaps you are familiar with the original Chaos study from the Standish Group and its project failure and project impairment factors. This was a study that started out trying to determine what effect tools have on the success or failure of a project. As they started interviewing people, people would volunteer things like “well, the tools don't really matter, but let me tell you what does.”

Eventually, due to the overwhelming evidence, they had to rethink the study based on the documented factors that correlated with success.

First, they found that a lot of software projects – nearly a third! – fail entirely. As in, you spend a lot of money and get nothing back for it. (See this link for more information.) This is sort of scary, and I hope word doesn't get out about this little fact about software production.

Think about that for a moment. If one third of all software projects fail (or did back when the study was done, anyhow) then that means that every software project you do already costs you one third more than you thought, even if you come in on time and on budget. Okay, I admit that's bad math. In reality, it's the big projects that fail, so it's probably closer to half or more if you're a big company with big projects and a lot less if you're a smaller company with smaller projects. But you get the point.
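For what it's worth, here's the cleaned-up, back-of-the-envelope version of that math, under the (unrealistic) simplifying assumption that every project costs the same:

    # If a fraction p of all projects fail outright, the money spent on
    # failures is effectively carried by the projects that get delivered.
    failure_rate = 1 / 3  # roughly the Chaos-era figure cited above

    # Expected spend per *delivered* project, relative to its nominal cost,
    # assuming (unrealistically) that every project costs the same.
    cost_multiplier = 1 / (1 - failure_rate)

    print(f"Effective cost per delivered project: {cost_multiplier:.2f}x")  # ~1.50x

So under that toy assumption the surcharge is actually closer to 50% than a third – though, as I said, reality skews by project size.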

The Chaos report further showed that over half of projects will cost 189% or more of their original estimate. Imagine if that happened to your house when you built it.

“Hey, buddy, sorry about that. I told you your house would be $200,000 – great starter home, by the way – but instead I'm going to have to charge you $378,000 for it. But don't worry, you can mortgage it and pay it back over 30 years. That there's what we in this industry call capitalization!”

What Were the Real Success / Failure Factors?

More interesting was the fact that the success and failure factors that we normally think about, such as having a good project team, didn't turn out to be that important to the project after all. Here is the list they found in the study:

Project Success Factors (% of Responses)

  1. User Involvement 15.9%
  2. Executive Management Support 13.9%
  3. Clear Statement of Requirements 13.0%
  4. Proper Planning 9.6%
  5. Realistic Expectations 8.2%
  6. Smaller Project Milestones 7.7%
  7. Competent Staff 7.2%
  8. Ownership 5.3%
  9. Clear Vision & Objectives 2.9%
  10. Hard-Working, Focused Staff 2.4%

Other 13.9%

Apparently a competent staff only mattered 7.2% of the time. You get another 2.4% if they are hard-working and focused. That's a whopping 9.6% of the time that the development team ended up causing the success of the project.

What really amazes me is which members of the extended project team really mattered: End Users (15.9%) and Executive Management (13.9%).

But it gets better. It turns out that the process you used (Run for the hills! He's talking about process!) mattered more than the development team!

If your “process” included a “clear statement of requirements,” you score an extra 13.0%. If it included that little thing called “planning,” you get another 9.6% (that's equivalent to the entire development staff under best circumstances!). If you have “smaller project milestones,” score another 7.7%.
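If you want to see that tallied up, here's a trivial sketch that just sums the percentages from the list above:

    # Grouping the Chaos-report success factors listed above to compare
    # "staff" factors against "process" factors -- nothing fancy, just sums.
    staff = {
        "Competent Staff": 7.2,
        "Hard-Working, Focused Staff": 2.4,
    }
    process = {
        "Clear Statement of Requirements": 13.0,
        "Proper Planning": 9.6,
        "Smaller Project Milestones": 7.7,
    }

    print(f"Staff factors:   {sum(staff.values()):.1f}%")    # 9.6%
    print(f"Process factors: {sum(process.values()):.1f}%")  # 30.3%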

So apparently “process” does matter, maybe even more than the development team. Counterintuitive? You betcha!

It’s Not the Development Team That Matters!?

So, as it turns out, people are what matter the most, but it's not primarily the “developer-people” that cause projects to succeed or fail.

How can this be? Since when is a project's fate determined by the people not doing the work on it? Welcome to the topsy-turvy roller coaster ride that is software projects. Buckle up and grab the barf bag. You're going to need it.

What are Water Breathers? by Bruce Nielson

In Robert Glass' excellent book, Facts and Fallacies of Software Engineering, one of his “facts” (i.e., qualified opinions) is that “Most software estimates are performed at the beginning of the life cycle... before the requirements are defined and thus before the problem is understood.”

He goes on to say that, unlike many of his “facts” in the book, there seems to be no controversy on this one; there seems to be general intellectual agreement that we lock in on estimates too soon. This being the case, he says, “Someone should be crying out to change things. But no one is.”

He then adds:

At the very least, you need to know what problem you are to solve. But the first phase of the life cycle, the very beginning of the project, is about requirements determination. ... Put more succinctly, requirements determination is about figuring out what problem is to be solved. How can you possibly estimate solution time and cost if you don't yet know what problem you are going to be solving?

Why No Outcry?

While I agree with the basic sentiment behind what Glass is saying, namely that we do lock in on estimates far too soon, I can't help but feel that Glass has missed several important points. In future posts, I'll contrast the approach Glass is advocating with my own approach to software estimation and demonstrate that Glass is actually misunderstanding certain critical points about software development and estimation. He's also missing the subjective nature of the so-called “problem space.”

But at the moment, I want to look at a different aspect of the paradox that Glass is uncovering: if everyone agrees software estimates are locked in too soon, why isn't there a general outcry for change?

Glass holds this lack of outcry up as something strange to behold. Why would we all, time and again, slit our own throats like this? Surely we're smart enough to change blatantly stupid behavior like this!

Human Psychology and Software

In my opinion, Glass has fallen into the trap of overlooking the overriding importance of human psychology as it pertains to software. The problem of “premature software estimates” can't be fixed by simply pointing out that estimates made at that point will be bad, while failing to address the real problem.

Glass has failed to realize that upfront software estimates are what I call a “water breather.”

Imagine that you angered the mafia. The Godfather has you caught, puts you in cement boots and drops you into the Brooklyn river. There you are, holding your breath at the bottom of the river. You are starting to pass out. This is it, you're going to die.

Why not go ahead and see if maybe you can breathe water?

Sure, normally trying to breathe water is a bad idea and I don't recommend it. But if you are in the situation above, you really might as well. You're dead anyway, and there has got to be at least some very (very, very) slight chance that you happen to have a genetic adaptation you didn't know about until this point whereby you can breathe underwater. Since drowning is a foregone conclusion either way, it really is time to try to breathe water. Go for it.

Of course you're probably not genetically adapted for water breathing and you're probably going to die. But you were definitely going to die if you didn't try, so what the heck.

As silly as this example seems, it's actually quite logical. And it explains quite accurately the real reason we lock in on premature software estimates over and over again despite overwhelming evidence that they don't work.

Business Plans Are Part of Reality

What I am trying to say is that Glass is ignoring altogether the very real need – the overriding need – to make business plans. Once we factor in this overriding need, upfront software estimates make perfect sense. Sure, they don't statistically work. But if your only other alternative is to never do the project in the first place – and I contend that is almost always the case – of course you're going to make an upfront estimate and go for it. Why? Because it's the single most sensible thing to do under the circumstances.

And actually, this is better than water breathing. In water breathing, testing your genetic adaptation to breathe water only buys you another fraction of a second at most. But when it comes to upfront software estimates, making a bad estimate buys you months or even years to come up with good reasons why the estimate was “right, except things changed” and to convince everyone (often including yourself) that you need a “minor” change of plans or maybe a “follow-on project.”

In short, “premature software estimates” are here to stay because they make rational sense compared to the next best alternative. People innately “get it” and do what is in their and often their company’s best interest: they go ahead and make the premature estimate and roll the dice and then start working on their best excuses. I just can't argue with this logic and I certainly wouldn't want to see it changed or “fixed” until we really do have a better alternative available.