Human Factors In Action (Based On A True Story)

In this story there are three “groups”: the vendor (who is the development team), the end users, and the users’ IT department. IT is the sponsor – i.e. they hold the money for the project.

The vendor does a pretty good job of documenting their requests for information. The end users don’t always do such a good job of responding in a timely manner. This isn’t really that shocking, because supporting the software development effort is not their primary job. The vendor has been very good about allowing schedule slippages to accommodate the users. (How they do this without charging more money is beyond me. Presumably they make their profit via selling their software licenses.)

Unfortunately, there is this one request for information that has been out there for weeks and the vendor is still waiting for a response. (All the others have been completed. The end users really are pretty much on top of things.)

Finally, getting nervous, the vendor calls a meeting to discuss the status of the request. During the meeting it becomes clear that the end users still don’t know what they want. The vendor points out that if we can’t resolve this soon, the result will be that they will have to break these requirements out into a separate release and therefore ask for some additional money – but probably not that much.

Now having often been the vendor myself, I know this is an entirely reasonable request. But the end users reacted very badly to this. They immediately started trying to figure out who to blame for this perceived failure.

Now, of course, if there is any ‘fault’ in a situation like this, the only people who can really be blamed are the end users themselves. But this seems unfair to me. They are dang busy, they have returned every single response except for this one, and there were extenuating circumstances with this one. No one is even considering the need to blame them. When it comes right down to it, IT should be ready to pay a bit more to the vendor to accommodate the users – and they are prepared to do so.

But the end users feel nervous that they are going to be ‘blamed’ and basically launch a pre-emptive strike to figure out ‘who is at fault’ only with the caveat that it most certainly is not themselves.

But how can the end users blame someone else? Isn’t this just a simple failure to get answers for the vendor? But of course, it’s always possible to find someone else to share blame with: “Who made the decision that we have to have these requirements now?” one end user asks.

Another user agreed, “Yeah, when this all started we asked to make this a separate project, not part of the current project. So who the heck decided to make it part of the current project anyhow?”

Their ‘blame tactic’ seems to be that actually this is all IT’s fault because we just didn’t budget enough and plan well enough. If we had just been smart enough to follow their advice to not include this in the original scope, then we’d not be in this situation today.

Now mind you, absolutely no one but the end users is currently blaming anyone. And considering what a minor bump this is to the scope, I’m really not at all worried about the worst case scenario – breaking these requirements into a separate project.

And for that matter, how much sense would it make to start off with this as a separate release? That would just mean it was more expensive to begin with, with no chance of getting it into the current project and thereby saving time and money. So the case that it would have made more sense to initially plan for the worst-case scenario is pretty thin.

Yet this whole blame game seems to be a necessary part of moving on. The end users really need to go through this ritual to assure themselves that they are not at fault.

Unfortunately, when you play the blame game you have to come up with a reason why it’s not your fault, and thereby you end up blaming someone else. In this case, it’s IT’s failure to follow their initial advice. Now IT, who was never even blaming the end users to begin with, feels they have to defend themselves.

And so what does IT do? Why, they whip out the original proposal – which the end users all signed off on – stating that the cost of these requirements is contingent on getting them into the current release. So apparently the end users’ claim that they originally told us to put the requirements into a future release – if true at all – was forgotten even by themselves when they signed off on the proposal.

Now you can see the danger here. The end users, not wanting to take the blame, started trying to exonerate themselves upfront. This turned into blaming IT. IT then showed the end users their own sign-offs, which proved them wrong. Now what? The last thing you ever want to do is alienate your users. Bad. Very bad. And without a doubt, the single fastest way to alienate your users is to be able to prove something really is their own fault – which is what IT just did.

Luckily we managed to defuse the escalating blame game. We claimed that clearly there was a miscommunication between what we requested of the vendor and what went into the proposal, and unfortunately no one caught it until after it was signed. So now it’s, um, the vendor’s fault, I guess. Lame, I know, but it worked. And I don’t really blame the vendor. They were probably the most on top of this of anyone.

There are a lot of points to learn from here:

1. The very real human need to find blame away from yourself.

2. Anything said any time in the past (or misremembered as such) tends to be remembered as a requirement by the end users and they will try to hold you to it if necessary.

3. Having physical proof (a sign off) that the end users were misremembering actually had the potential to make the situation worse. Luckily it didn’t this time. 

4. On the other hand, I am very grateful that we did have the sign off. If this ever did become a more serious issue and was sent up the food chain, I could always just wave the sign off around.

5. In fact, having that sign off virtually guarantees that it will never go up the food chain. The end users are too much at risk of looking bad if they decide to raise this as an issue.

There is another thought here worth mentioning – what if that proposal had not explicitly stated that the cost would change if we didn’t respond by a certain date? Of course, every proposal is implicitly based on the assumption that you give timely responses. But had it not been stated explicitly, most likely the political pressure on the customer’s side would have been to try to force the vendor not to raise their costs. This seems rather unfair to the vendor. They are really just lucky that they foresaw this possibility and happened to document it. But it’s impossible to foresee every such problem.

Posted by BruceNielson with no comments

An Improved Agile Manifesto?

Nate sent me the following link. Take a peek because it’s hilarious.

Was this created to mock enterprise companies or to mock Agile? I can’t even tell.

I hate to admit it, but I think this ‘joke’ is sort of correct. If I had one complaint about Agile it is that it seems too naive at times. The idea seems to be that if you use an Agile methodology you’ll change your customer from being a “hostile-customer” to a “customer-ally” and thereby make your project more successful. My experience suggests this is true most of the time.

But if you fail in that regard, you’ll end up with a “still-hostile-customer” and not even your normal mound of paperwork and documentation needed to keep yourself out of court. In short, Agile usually succeeds but if it fails, it tends to make the failure even worse.


Physics and the Law of Lossy Requirements

This post over at The Eternal Universe is a physicist complaining about how he’s not seeking a computer science degree, yet he has to keep learning computer languages just to publish physics papers.

He should have read my previous post about the Law of Lossy Requirements. The cheapest way to capture all of the details of an algorithm is in code. English language specs never capture all the necessary details. So of course physics papers are often written in code.

Why Good Project Management Can Destroy Your Project

Okay, the title for this post is misleading, I admit. But I wanted to catch your attention.

One irony of software development is that the consulting companies I’ve worked for always want me, as project manager, to have documentation to prove I did every reasonable thing possible to make the project succeed. Is that so bad? Actually, sometimes, yes.

Now the thinking is obvious. If things start to go wrong on a project and the customer starts to blame us, at least we can prove it’s not our fault. Now, of course, documenting everything is impossible to begin with due to the Law of Lossy Requirements. Besides, you can’t really know in advance which minor detail will turn out, in retrospect, to be perceived as obvious negligence. But those problems aren’t the ones I’m writing about today. For the sake of argument, let’s pretend it was actually possible to document every possible issue that might later come back to bite you.

The problem is that proving something ‘isn’t your fault’ is probably going to be perceived as equivalent to ‘blaming the customer’ instead. As it turns out, proving your customer is at fault isn’t always what’s best for your project – even if it’s the truth.

Over the years, I’ve gotten pretty good at guessing in advance where problems will crop up. I’ve also gotten pretty good at documenting in advance that I really did try to resolve the issue and/or obtain help from the customer to do so. I’m nowhere near perfect at this, but I’m good enough at it that the obvious problems don’t catch me off guard anymore.

But my experience is that this makes the consulting companies I work for overconfident. They then wish to waltz into a meeting with the customer, prove via my documentation that the customer is at fault, and expect the customer to just accept this. I’ve actually seen textbook ‘good project management’ like this ultimately undermine the relationship with the customer, and thereby sink the project, because the realities of human psychology were overlooked.

Now how would you feel if your vendor did this to you? I know what I’d do. I’d fire them.

Why? Because there is no way I’m going to keep a vendor around that has proof I’m incompetent and that the project’s failures are my fault. 

So even if I do have documentation to ‘protect myself’ with, I rarely want to actually use it to correctly place the blame for the source of the problem. It’s generally better, if necessary, to pretend I was partially to blame than to prove to the customer it was entirely their fault. (And of course, I am often partially to blame.)

So once again we see that software is primarily a human endeavor and that you can only ignore the psychology side of things at your own peril.

…And then a Miracle Occurs by Bruce Nielson

There is an old Far Side comic where a professor is working complex math on a whiteboard. At one point he writes “and then a miracle occurs,” and then successfully finishes his difficult problem.

I’ve talked to a lot of programmers who feel software development actually tries to do just that. They complain that no matter how many problems we’ve had in the past, we just don’t learn our lessons. We always assume that even though our approach failed last time, this time it will be different.

This sentiment has even led to another joke I often hear: what’s the definition of insanity? Doing the same thing you did last time but expecting a different result.

But Miracles are Real!

Both of these jokes underscore to me that we’re missing something. And, of course, it’s the psychology of software projects we’re leaving out. If we are continually doing the same ‘bad’ things over and over, there is probably a reason for it. If it really was all bad and we just kept repeating the same bad behavior merely expecting a different result, then we’d really be collectively insane.


So I’m going to start with a different assumption. I’m going to suggest that maybe we are not insane. The logical corollary to this is that the ‘bad’ results we hate are in fact better than the perceived alternatives.

I once had a business analyst friend come running up to me in a huff, concerned that, once again, the company he was consulting with was going to try to do the impossible. He listed for me the very long list of all the projects and their due dates. Then he said: “They’re doing it again! We failed the last time we tried to do this, why are they trying to force us to make impossible due dates this time? Do they think a miracle is going to occur this time?”

I laughed and he looked like he was going to smack me. Then I said, “Relax, Jeremy. A miracle will occur and you’ll be okay.”

Now he really looked like he was going to smack me. So I went on to explain:

Why We Accept Due Dates We Know We Can’t Make

Your managers undoubtedly have good reasons for promising dates that everyone knows we can’t make. They, in turn, get you and the team – at a minimum – not to publicly admit you can’t make it. In fact, there is generally good reason to believe you could make it if ‘this time we communicate well, work hard, and there are no mistakes.’ The real problem is that you will have miscommunications, someone will have something happen that messes up the work, and there will be mistakes. Making these dates is not impossible, just highly improbable.

But tell me honestly, Jeremy, what is going to happen when the dates are missed? Will everyone be fired? Of course not. Instead, everyone will start to play the blame game. This will cause the blame to redistribute around so that the managers can’t possibly find anyone to blame. The managers don’t want to admit that they don’t know what caused the failure, so they’ll find the best reasons for why things changed since they made the estimates and will argue that the slip of the date is for good reasons. The executives up the chain will mostly have forgotten by then why they had to have it by that date, and they’ll calmly accept the date slide based on the reason given.

If I’m wrong about this, and instead the executives decide to come down on everyone – and that does sometimes happen, though rarely – then a witch hunt will begin. Eventually, because specific blame is basically impossible to assign, someone will become the token person that failed. That person will be fired and everyone else will be forgiven since now we’ve decided that it’s primarily that person’s fault. Eventually, we’ll all come to believe that was actually the case.

Now we all know that most likely the schedule will slip and nothing will happen. But if we’re wrong, then the odds that we’ll be the token person this time are very slim. So it really does make sense to not do any major objecting over the unreasonable due dates and just wait and see what happens. Because odds are things will change and it won’t be a big deal.

Oh, and isn’t that what happened last time? And the time before? And the time before that? In other words, the reason we keep repeating the same pattern of accepting unreasonable due dates is because we perceive that miracles normally occur and we’re betting they will this time too. The miracle won’t be that we make the due date. The miracle will be that something changes so that we don’t have to or that someone who isn’t us gets in trouble for missing it.


Jeremy’s face lit up. “You’re right! I feel so much better!” he exclaimed. And off he went to go ahead and do his best and not worry so much about the unreasonable due date.

Now, I’d argue that there is a downside to this reality. For one, we are giving up on the fact that software can be more predictable than this. And also, we are giving up our chance to do real prioritization to make sure we are doing the high value items first. So I am not advocating for complacency on this issue. But it does help put it into perspective.

We are not being stupid when we repeat our “errors.” We do it because it makes sense to do so because the results are optimized (compared to the perceived or real alternatives) to maximize our chance of personal success (i.e. not lose our reputation or source of income.)

But there is one more important point here. Now that we understand the psychology of accepting (or at least not strongly objecting to) unreasonable due dates, we can understand why grass-roots efforts to fix the problem are doomed to failure. If this isn’t how you wish to see your company run, it’s the managers and executives that have to decide to fix the problem top-down.

Software is Not Coding, It’s A Cooperative Game of Communication by Bruce Nielson

In my “code is really design” post, I mentioned that I would further address the paradox that software is created in “thought units” but the most important people to the success or failure of a project are the sponsor and customer. In this post, I’ll explore that paradox further by discussing Alistair Cockburn’s idea of software as a cooperative game.

In Agile Software Development, Cockburn says:

Software development is therefore a cooperative game of invention and communication. There is nothing in the game but people’s ideas and the communication of those ideas to their colleagues and to the computer.

In Crystal Clear: A Human-Powered Methodology for Small Teams, he writes:

[Successful teams] view developing software as an evolving conversation – an economic-cooperative game, actually. In any one move they go forward and create reminders for each other.

But in what sense is software a game and how does this help us understand software development better?

I’ve never been fully comfortable with Cockburn’s calling it a “game.” I’m not sure images of Monopoly and Halo (which are what come to my mind, anyhow) are what he intended.

It’s helpful to refer to other, non-software examples that Cockburn uses. One of my favorites among his examples is making laws in a legislature.

Can You Estimate How Long it Will Take to Pass a Law?

Imagine if the republicans asked the democrats, “How long will it take you guys to pass that new health care law of yours?”

Now in theory, the democrats should be able to estimate this. It’s just a bunch of words on a piece of paper, right? Someone has to sit down to write it, they have to talk to experts, they have to run a few numbers and come up with a budget, then they have to pass it through a committee, then bring it to the floor, and finally get a vote. So the question the republicans are asking seems reasonable on the surface.

But in reality, part of what the democrats have to estimate is how long it will take them to convince enough republicans to pass a version of the bill. If this step were unnecessary, making an estimate would probably be easy. But if the democrats need to actually convince some key republicans to vote for the bill to get it passed, then part of their estimate will have to be an estimate for how long it takes to keep revising the bill until the republicans accept it – if ever.

Once we realize that is the case, the republicans’ question seems less legitimate. The republicans have a significant say in determining when the bill is “done,” and the democrats don’t actually control that part of the process. The democrats cannot, even in principle, make an estimate for how long it will take to pass the law.

I see software much like this. We have customers and sponsors on the one hand and the development team (programmers, testers, designers, project managers) on the other. But instead of cooperating to capture ideas into specific laws to be interpreted by other humans, they are cooperating to capture business rules into automated programs that will be interpreted by computers.

There is nothing else going on when we make software. There is no analogy here to building a house or a wall, or anything else physical. When my customers ask me “how long will it take to program the fitz-bart module?” I am literally incapable of answering them, because part of my estimate would have to be how long they will choose to iterate before they accept the software as matching their needs. There is literally no limit to how many times they might choose to keep tweaking the requirements, and I’ve seen cases where they never became satisfied with it.

If I didn’t have to worry about customers having to accept the software, estimates would be a lot easier. But with customer acceptance required, it’s impossible to estimate anything beyond the first iteration of the creation of the module.

An Alternative – Cooperation

Now consider an alternative example. Let’s say that the republicans and democrats are both anxious to pass a health care bill and have more or less aligned goals. Now is it possible to estimate how long it will take to pass a health care law?


Yes. Why? Because now both sides are working together to meet a target date. This means both sides are motivated to make the date and, more to the point, both will be willing to accept some compromises to meet that date.

Software works the same way. If the customers/sponsors and the development team both want to make a certain release date or a specific budget target, there is almost always a way to come up with a workable (though usually manual, and thus less than ideal) solution to any software problem such that you can meet those goals. It just requires both sides to agree that they will make the target date or the target budget and to be willing to negotiate on scope until they match those goals.

The Law of Lossy Requirements by Bruce Nielson

Lossy Compression

In computer science, compression is an indispensable tool. Anyone familiar with .zip files knows what I mean. Interestingly, there are two kinds of compression, lossless and lossy. Lossless compression is like .zip compression: you put in a file of, say, 100kb and the end result is a file of, say, 50kb. But when you reverse the process, you get back your original 100kb file.

Lossless compression relies on the fact that real-life information contains patterns. Just imagine taking this post and finding the most commonly used words in it. Then replace the most common word with the tag “<1>” and the next most common with “<2>”, etc. The end result would generally be a smaller file, yet by replacing each “<x>” with the original word you could recover the original file in its entirety.
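This word-substitution scheme can be sketched in a few lines of Python. It is only a toy (it tags every distinct word, so it saves little actual space, and it assumes whitespace-delimited text; real codecs like DEFLATE are far more sophisticated), but it shows the defining property of lossless compression: the round trip recovers the input exactly.

```python
from collections import Counter

def compress(text):
    """Replace words with numeric tags; the most common word becomes <1>."""
    words = text.split()
    ranked = [w for w, _ in Counter(words).most_common()]
    table = {w: f"<{i + 1}>" for i, w in enumerate(ranked)}
    return " ".join(table[w] for w in words), table

def decompress(tagged, table):
    """Invert the substitution table to recover the original text."""
    reverse = {tag: w for w, tag in table.items()}
    return " ".join(reverse[t] for t in tagged.split())

original = "the cat sat on the mat and the dog sat too"
packed, table = compress(original)
assert decompress(packed, table) == original  # lossless: round-trips exactly
```

A real compressor would also have to store the table compactly and only tag words whose tags are shorter than the words themselves, but the round-trip guarantee is the point here.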

I used to wonder what possible use lossy compression could ever be. Why in the world would I ever want to save off a compressed version of my files if I couldn’t get them back to their original state ever again?

Of course, for data files lossy compression is useless, but for computer graphics it’s quite useful. Because the human eye is not capable of picking up all the details in an image, it’s possible to reduce the details in the image without the human eye being able to tell the difference.

What’s even more interesting is that you can continue to increase that lossy compression on an image until the human eye can tell the difference, yet the brain will still be able to tell what the image is. It just looks a bit more blurry.

In fact, you can continue to compress and compress using lossy compression to any size. At some point you end up with a single solid color, of course, which isn’t useful, but even if you take a very large image – let’s say 1024 x 1024 pixels – and shrink it via lossy compression down to the equivalent of a 16 x 16 image, the human brain fills in the details and still makes sense of the image.
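The shrinking described above can be sketched as simple block averaging over a grayscale pixel grid. This is a stand-in for real lossy codecs like JPEG, which are far more sophisticated, but it makes the “lossy” part tangible: once the blocks are averaged, the original detail is gone for good.

```python
def downscale(pixels, factor):
    """Lossy 'compression': average each factor-by-factor block to one pixel."""
    size = len(pixels)
    small = []
    for r in range(0, size, factor):
        row = []
        for c in range(0, size, factor):
            block = [pixels[r + i][c + j]
                     for i in range(factor) for j in range(factor)]
            row.append(sum(block) // len(block))  # integer average discards detail
        small.append(row)
    return small

# A 4x4 grayscale "image" reduced to 2x2.
image = [[10, 12, 200, 202],
         [11, 13, 201, 203],
         [90, 92, 50, 52],
         [91, 93, 51, 53]]
print(downscale(image, 2))  # [[11, 201], [91, 51]]
```

Repeatedly applying this with larger factors is exactly the progression described above: the image gets blurrier and blurrier until, at the limit, it is a single averaged color.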

Requirements Are Lossy Compression

If code is design, then one of the most underappreciated points of software development is that requirements are the equivalent of lossy compression of graphics, only in reverse. (See also Tell Me What To Do, Not How to Do It for an example of this.)

What we call “requirements documents” (or even “design documents”) are really a blurry picture of what is to be built. In other words, requirements and design documents by definition do not contain all the details.

Why do we do this? There are actually many reasons, but here are two key reasons:

1. Until the software is actually built, it doesn’t exist. So there isn’t really a choice but to imagine it with broad brush strokes and then refine it into reality.

2. As was discussed in a previous post, human beings more easily understand abstractions than precision. If we didn’t have abstract and blurry views of the design, no one but the programmer would ever understand what is to be done. Actually, even the programmer would only understand it in chunks and not holistically.

The Law of Lossy Requirements

If you followed my argument so far, then The Law of Lossy Requirements will now make sense to you. This law states:

All requirements documents are a lossy compression of the business rules. If you made it non-lossy, the cheapest way to document it would be in code.

Therefore there are two corollaries to this law:

Corollary 1: There will never be a point where you know all details of what software is to be created until the software is completed.

Corollary 2: There is no such thing as documenting everything in the software because to do so would require synchronizing essentially a second copy of the code.
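To see the law in action, compare a one-sentence requirement with the code that implements it. Take “give a 10% discount on orders over $100”: the sentence is a lossy rendering, and only the code pins down the boundary case and the rounding rule. (The function and the specific policy choices below are illustrative, not from any real system.)

```python
from decimal import Decimal, ROUND_HALF_UP

def discounted_total(total: Decimal) -> Decimal:
    """'10% discount on orders over $100' -- details the prose never stated:
    - Is exactly $100.00 'over'? (Here: no, strictly greater than.)
    - How are fractional cents rounded? (Here: half-up to 2 places.)
    """
    if total > Decimal("100.00"):
        total = total * Decimal("0.90")
    return total.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

print(discounted_total(Decimal("100.00")))  # 100.00 -- boundary decision made in code
print(discounted_total(Decimal("100.10")))  # 90.09
```

A requirements document that nailed down every such decision for every rule would, per Corollary 2, just be a second copy of the code in a more verbose notation.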

Comprehensibility vs. Precision by Bruce Nielson

Abstraction vs. Precision in Requirements

I used to be an instructor for Rational Software’s RequisitePro software, which included a class called “Requirements College.” This useful class helped teach people how to elicit requirements from their customers.

Three things really stuck with me from the class. The first was the idea that one does not “gather requirements” per se, but elicits them. If taken literally, “gathering requirements” implies that requirements are readily available and you just have to pick them up and take them. This flies in the face of reality; requirements have to be created from nothing.

The second was the idea that How and What are relative to a point of view. I’ve written about this in the past.

The third was that, according to the course, abstraction and precision form a spectrum that affects comprehensibility.  The idea was that if you are too abstract, your writing won’t be comprehensible at all. But if you are too precise, it’s also not comprehensible. The following graph illustrates the idea:



Legalese is a good example of this. Legalese is very precise wording for legal purposes, but because it’s so precise, it’s difficult for anyone but a lawyer to understand – and that’s not a fair comparison, because the lawyer at least gets paid to slog through it.

Therefore, according to the course, requirements should seek the sweet spot of maximum comprehensibility, where it’s neither too precise nor too abstract.

Useful, but Misleading

Avoiding too much precision to increase comprehensibility is a useful idea. But I couldn’t help feeling it is also somewhat misleading.

I kept thinking to myself: but what if that level of precision is what you need to come to an agreement on?

Imagine dumbing down legalese so that it’s more “comprehensible.” Would that be a good thing? It’s not like legalese was created to keep lawyers employed (though sometimes it might seem that way.) On the contrary, legalese was created because a more comprehensible abstraction would also have multiple possible meanings. In a court of law, that’s precisely what we’re trying to avoid. Ergo, the need for legalese.

Are Software Requirements Like Legalese?

Now if the point of requirements documents is to get everyone going in the same general fuzzy direction, I can see that a comprehensible abstraction would be the best way to handle it.

But if the point of the requirements is to come to agreement on how to specifically implement something, there is no substitute for precision in details. I believe “requirements” (and also “design”) must therefore be considered as something closer to legalese in such a circumstance.

In an ideal world, we’d probably want both. We’d want to start at the highest level of abstraction and work our way down, keeping every stakeholder informed and involved as we take the fuzzy abstraction and turn it into something specific and detailed.

But in reality, the Marketing People drop out after the first details start to be filled in, Management’s eyes begin to glaze over once the “requirements document” is complete, and even the Technical Lead might nod off once we get into the really nitty-gritty details.


I see an essential tension between “comprehensibility” and the need to come to agreement on an actual implementation, which requires comprehending the actual details. I think this is one of the hardest problems in software development and I do not expect this to be a problem that can ever be overcome because it’s the essential nature of software itself.

Proper Use of Overtime by Bruce Nielson

It seems to me that “overtime” is a much talked about subject, both in literature and just around the water cooler, but that people tend to take one of two extreme views on it.

The first view is that overtime is immoral unless the development team screwed up. This point of view says, “if I made a mistake, I’ll make good, but otherwise, I expect to never have to work overtime. So long as I’m giving it my best, overtime – particularly unpaid overtime – is immoral to ask me for.”

The second point of view is that “it’s part of the job.” From this point of view overtime is just part of doing your job and no developer should think otherwise. Those that do think otherwise should seek other employment.

Now it seems to me that people don’t usually stick with one point of view on this. We tend to swing about, based on our current situation, as to whether or not overtime is “part of the job” or “an immoral practice.” Particularly, we tend to think of it as “part of the job” when it’s someone else and “an immoral practice” when too much of it is asked of us personally.

I propose that neither point of view is true.

I think overtime is a legitimate tool in the toolbox to get a job done. But if it’s your only tool – or worse yet, your only remaining tool – you’ve probably already lost the battle.

In a past post I pointed out that software estimates tend to be pretty bad. And elsewhere, I pointed out that the need for upfront estimates is a ‘water breather’ i.e. the need for them is so significant that lack of ability to make good estimates isn’t reason enough to not make them.

I think the rest is obvious. In business, we rely on estimates that we often can’t accurately make. Even if we “do it right” when making an estimate, it’s just not always enough for the estimate to be accurate. So when estimates fail, we must face hard choices. These include:

1. Cutting scope

2. Working overtime

3. Delaying the due date

Now it’s easy to say “Well, just cut scope or delay the due date, because it’s wrong to ask me to work overtime for something that isn’t my fault. I had no way of knowing that X was going to happen and delay the project.”

But I think this is an unrealistic view. If a project estimate goes south, typically it goes far south, so there is often a need to do all three of the above to rectify the situation.

But of course the inverse point of view must also be seen as valid. If your team is working overtime to make up for a bad schedule, but you aren’t also cutting scope and/or delaying the due date as far as possible, then you might as well admit that you aren’t serious about your schedule and you’re really just punishing the team.

Oh, and expect your team to be aware of this.

So what’s the proper use of overtime? I’d say it’s one tool in the box to deal with bad estimates, that it shouldn’t be used without using other means as well (such as cutting back on scope), and that you should be careful to not do it more often than is necessary.

What Constitutes a Change of Scope? by Bruce Nielson

In a previous post I used Robert Glass’ advice from his excellent book, Facts and Fallacies of Software Engineering, to come up with what I see as the industry’s standard advice on how to do good software estimates:

To summarize, the standard advice is to:

  1. Not make estimates until requirements have been defined
  2. Always let the programming team make the estimate, not management or marketing
  3. Use change control on the estimate as requirements change

While I believe the above advice is good advice, it’s nowhere near sufficient to address the very real problems of software estimation. However, if you aren’t doing at least the above, your estimates are doomed. So Glass’ points are at least a good starting point.
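One way to picture point #3 is an estimate that can only move when a named requirement change moves it. This is a minimal sketch of my own design (the structure and names are mine, not Glass’s): every revision must cite a change request, so the revision log doubles as the audit trail.

```python
from dataclasses import dataclass, field

@dataclass
class Estimate:
    baseline_hours: float
    # Each entry pairs a requirement change with its hour impact.
    revisions: list = field(default_factory=list)

    def revise(self, delta_hours, requirement_change):
        # Change control: no requirement change, no estimate change.
        if not requirement_change:
            raise ValueError("an estimate revision must cite a requirement change")
        self.revisions.append((requirement_change, delta_hours))

    @property
    def current_hours(self):
        return self.baseline_hours + sum(d for _, d in self.revisions)

est = Estimate(baseline_hours=200)
est.revise(40, "CR-12: add export to PDF")  # hypothetical change request
print(est.current_hours)  # 240
```

Note what this deliberately forbids: revising the estimate simply because the work “is taking longer,” which is the abuse of #3 discussed next.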

Now one objection I anticipate from the audience is that #3, if understood in a certain way, would make it impossible to ever miss an estimate. If you can always change the estimate to match reality, how can it ever be missed?

I actually think this is a good point that starts us down the road to understanding the real problems of software estimation.

First of all, if we take that point of view to its logical conclusion, this is really just the same as saying that the estimate isn’t worth the paper it’s printed on. Imagine going into a car shop (or when buying a house, or whatever you want to imagine) and being told “well, it will probably cost $1,000, but we’ll let you know if it’s going to cost more.”

Then a couple of hours later you receive a phone call. “Yeah, it’s like I thought. The problem is bigger than anticipated. I’m projecting $2,000 now.”

Then the next day you receive another call. “Well, it’s taking longer than expected, so I’d say probably $3,000 now.”

By this point, you’d a) break into a cold sweat every time the phone rang, wondering how much more you owe now, and b) probably have pulled the car out of the shop and taken it to a “reputable” place.

So the first thing we need to understand is that if you take #3 above to mean “I get to correct the estimate any time I realize it’s going to take more,” you’ve fundamentally misunderstood both human nature and the purpose of an estimate.

The Meaning of “Scope”

Now I anticipate another objection might be: “Well, you’re giving an example of where nothing changed. It’s still the same scope at that car shop, they are just charging more because it took longer. That’s unfair and immoral. I’d never advocate that. I only advocate changes of scope that are obvious and clear.”

Ah, very good. So there is an emotional difference between asking for more money when there is a clear change of scope vs. when the change of scope is vague.

But what exactly is a “clear change of scope”? Are all legitimate changes of scope “obvious” and thus “clear?”

For example, suppose you brought your car into the shop because there is a knock in it, and the shop looks it over and says, “Well, it’s probably the carburetor, so I know it will cost $1,000 to fix.” Then they start work on the car and don’t find a problem with the carburetor. Is it really the same scope still?

There are two ways to think of this, both fair in my opinion. The first is the customer’s point of view:

The scope of work that was estimated was to remove the knock. They are the experts, so they should have diagnosed it correctly. If it wasn’t really the carburetor that was causing the problem, that’s their problem, not mine. The scope of work has not changed so the price should not change!

The second is the car shop’s point of view:

I did my honest best to diagnose the problem. Every time in the past I’ve heard a knock like this, it’s been the carburetor. Any competent mechanic would have diagnosed it the same. When I made my ‘bid’ it was to fix the carburetor, not to remove a vague knock. If I later find out there is a much more extensive problem, that constitutes a change of scope, and thus I have the right to ask for more money!

So who is right? I think most of us impulsively believe the customer is right in this circumstance, mainly because it would make us really mad to be treated the way this hypothetical car shop is treating us. So we “go with our gut” on it and declare the car shop both wrong and immoral.

But think about this problem from the point of view of a programmer. Really, give it some thought. I have never known a programmer that didn’t take the “car shop’s view” when the shoe was on the other foot.

“Scope” is Inherently Vague in Software

I’d submit that actually both are right. Or more to the point, there is no actual “right or wrong” here. We’re making a moral issue where no moral issue exists.

In reality, whether the car shop should take the first or second point of view is entirely based on what is right for their business long term. If they want to ask for more money, this is completely fair and moral, but it’s very likely to end the relationship with the customer and that is as it should be. So they get to decide for themselves if they want to take a loss on this one to keep the customer or if they want to exert their right to declare this a change of scope and ask for more money before proceeding.

But the key point I want to make is that most of the time, it’s not clear what a “clear change of requirements” is. For it to be clear, the “requirements” would have had to be very detailed, in this case specifying that the estimate was based on their plan to inspect the carburetor.

But in the case of software, how detailed would you have to be with your ‘requirements’ to avoid debate over whether something was a change of scope or not? Likely it would require you to write your ‘requirements’ in pseudo code. (More on this in later posts.)

If the requirements were more abstract and general than pseudo code, then I submit that there will usually be some wiggle room for both sides, and for both points of view, to declare a cost overrun as either “in” or “out” of scope for the requirements, just like our car shop example.

So my point is simply this: ‘Clear changes of scope’ are rare indeed. You’ll mostly have to live with muddy ones. Turning it into a moral issue is a mistake. (“Those crummy customers always changing the requirements on the project! What immoral cretins!”) It’s just a business decision about what will be in your best interest long term.

The Prime Mistake by Bruce Nielson

In a previous post I talked about the blame game. I suggested that “the blame game” is a necessary part of software failures, so it shouldn’t be treated with as much fear and loathing as it usually receives. By understanding the human need to pin down blame (and humans’ general inability to do so), we avoid ignoring this very human success-or-failure factor.

Related to “the Blame Game” is what I call “the Prime Mistake.” And unfortunately, the Prime Mistake is very bad for software developers, so it’s best we understand the concept.

The perfect example of “the Prime Mistake” happened to me years ago back in high school, in my Geometry class. My teacher had a bad habit of making every problem dependent on the answer to a previous problem. Worse yet, he didn’t give partial credit for doing a follow-on problem the right way but with the wrong starting value.

On one test I took, I missed the very first problem due to a stupid error. The rest of the problems used that value as their starting point, so I basically missed everything.

That first problem that I missed is an example of “the Prime Mistake” because it’s the original mistake that causes all the follow-on mistakes.

Part of the Blame Game is the attempt to track down the Prime Mistake so that we can pin everyone else’s mistakes on one person.

Developers and The Prime Mistake

When it comes to developers, the Prime Mistake seems to be particularly dangerous. As it turns out, absolutely all problems in software can be traced (in some way) back to the programmer, usually with a pretty obvious cause and effect: a mistake on the programmer’s part. Woe is the software programmer!

Due to the Prime Mistake, the programmer may end up with basically everything laid at their feet. After all, every single defect in the code is, in a very real sense, their fault.

The problem with this line of thought is that removing all defects is beyond human capacity.

Worse yet, we tend to be far more forgiving of “mistakes” in principle than in practice. Ask any user or manager upfront if they accept that programmers, no matter how hard they try, will make mistakes. “Sure,” they say. “It’s inevitable.”

But then watch those same managers and users after an actual bug leads to a problem that causes a real dollar loss! Heaven help the programmer then.

The problem is that when a real defect is uncovered, it doesn’t come in a “theoretical package”; it comes with real damage, such as loss of dollars, confidence, or prestige. This is why we react differently to real defects than to theoretical ones.

To make things worse, real defects are shortly followed by “a fix,” which is really just a fancy way of saying “all the information about how it could have been avoided if the programmer had just been a bit more careful or smarter.”

So the temptation to assign the Prime Mistake to the developer may become overwhelming for “real defects.”

Spreading the Blame Means Spreading the Responsibility

In our more theoretical moments, we know that programmers can’t possibly produce defect-free code. It is this realization that has led to the industry standard of having “testers” that are separate from programmers. Indeed, the invention of “testers” is a very good step in the right direction. We effectively “spread the blame” out a bit. If a defect gets past both the developer and the tester, we might feel a bit more forgiving toward the developer for having “screwed it up.”

(Another point is that a tester hopefully helps find the defect before there is a loss attached to it. But this is a tale for another time.)

But we shouldn’t stop here. Software is so complex that even a good developer and a good tester together can’t realistically “not make errors.” Everyone involved in the project must take responsibility for avoiding defects. The users must test regularly and give clear requirements, the programmer must use automated unit tests, and the testers must have test plans so that they aren’t relying on memory. Removing defects in software is never one person’s (or one group’s) responsibility.

And when all of that together fails, at least everyone knows we all did the best we could.

Code is Design by Bruce Nielson

The Story So Far

Let’s return to the primary concern with software. We’ve talked about how most software projects either fail altogether or run significantly over budget. Along with that thought, we considered the statistics showing that the development team itself plays little role in the success or failure of the project compared to users and sponsors. Contrasting that, however, we’ve looked at how software is really developed in “thought units” more so than actual code.

These two facts seem on the surface to be at odds with each other. If software is really developed in “thought units,” then why do users and sponsors have more control over the success and failure of the project than the developers?

Coding As Design

I will now explore this question further in a series of posts. But first, we need to look at the concepts of “software design” vs. “software coding.”

Rumor has it that software is an engineering discipline, and because of that, our industry adopted much of the vocabulary and process used by real engineers. This led to the creation of the both famous and infamous “waterfall methodology,” where we pass through watertight stages that start with Requirements Gathering, move on to a documented Design, and then finally Construct what was designed, finishing off with Testing and then Release.

It’s all a bunch of bosh, of course.

Actually, despite its recent (and partially deserved) bad reputation, the waterfall methodology has a lot to recommend it. To this day, I’ve never come across any methodology, Agile or otherwise, that doesn’t owe a great debt to the Waterfall methodology. So let’s not throw out the baby with the bath water.

However, what we software developers do is not really engineering in the true sense and probably never will be. This is why a strict waterfall approach, which serves so well in other engineering fields, has failed us miserably.

Jack W. Reeves’ classic article, “What is Software Design?” addresses this directly. (See also this link, for further information.) What Reeves correctly points out is that what we call “Software Design” is not a complete design at all, but is rather what he calls a “high level structural design.” Yet we treat it as if it’s some sort of complete detailed design (even often calling it “The Detailed Design”).

Now maybe you’re thinking, “So it’s a high level design instead of a detailed design. So what, who cares?”

Yet, as I’ll show in future posts, this seemingly small distinction might just be the entire difference between understanding the real problems of software development and turning a blind eye to them.

But let’s let Reeves speak for himself. He says:

For almost 10 years I have felt that the software industry collectively misses a subtle point about the difference between developing a software design and what a software design really is. … This lesson is that programming is not about building software; programming is about designing software.

…the software industry has created some false parallels with hardware engineering while missing some perfectly valid parallels. In essence, I concluded that we are not software engineers because we do not realize what a software design really is

A program listing is a document that represents a software design. Compilers and linkers actually build [manufacture or construct] software designs.

…it is cheaper and simpler to just build the design and test it than to do anything else. We do not care how many builds [constructions or manufactures] we do—they cost next to nothing in terms of time.

[The] designs will be coded in some programming language and then validated and refined via the build/test cycle.

Is Programming a Form of Engineering at All?

Reeves questions whether programmers should be called engineers if they don’t even realize that programming isn’t “manufacturing” or “construction” at all, but is really “detailed design.” But Reeves never quite questions whether programming might not be engineering at all.

But this leaves one uncomfortable. If our industry is full of programmers who do detailed design and think they are doing “manufacturing” (i.e. “construction or building of a design”), how could they possibly misunderstand their own engineering discipline so severely?

Agile maven Alistair Cockburn suggests how, by questioning the very notion of “software engineering”:

Software development is not “naturally” a branch of engineering. It was proposed as being a branch of engineering in 1968 as a deliberate provocation intended to stir people to new action [Naur-Randell]. As a provocation, it succeeded. As a means for providing sound advice to busy practitioners, however, it cannot be considered a success. After 35 years of using this model, our field lacks a notable project success rate [Standish], we do not find a correlation between project success and use of tidy “engineering” development processes [Cockburn 2003a], and we do not find practitioners able to derive practical advice to pressing problems on live projects. (See link)

Elsewhere, Cockburn points out that the whole notion of “software engineering” was actually just an attempt to provoke less-than-obvious parallels between programming and true engineering fields. “The term ‘software engineering’ was coined in 1968, in the NATO Conference on Software Engineering,” says Cockburn. “It is fascinating to read the verbatim reports of that conference and compare what those attendees were saying with what we have created since then.”

He then quotes from that conference’s notes on the background of the conference:

The Study Group concentrated on possible actions which would merit an international, rather than a national effort. In particular it focused its attentions on the problems of software. In late 1967 the Study Group recommended the holding of a working conference on Software Engineering. The phrase ‘software engineering’ was deliberately chosen as being provocative, in implying the need for software manufacture to be based on the types of theoretical foundations and practical disciplines, that are traditional in the established branches of engineering.

It may well be time for us to abandon the very notion of “Software Engineering” and to base our development techniques on some more natural match. Reeves agrees:

In many ways, software bears more resemblance to complex social or organic systems than to hardware.

Ramifications of Software As Design

Rethink, for a moment, how accepting “Coding” as really being “Design” changes the way we think of software development. The ramifications are substantial.

First of all, it means software development, being a design activity, will always change after the requirements gathering and high level design are “completed.” We would no more expect software to merely follow the “original design” (which we now know is only a high level design without all the needed details) than we’d expect Intel to design their next generation processor only at a high level and then lock it in permanently. Any attempt to do so would be met with the same sort of frustration that plagues the software industry.

Another ramification is that it often makes sense to “get to coding” (i.e. detailed design) as quickly as reasonable. Contrary to popular belief, not all rushes to code are “hacking.”

Indeed, any methodology that encourages “design” (i.e. high level design) to be simultaneous with “coding” (i.e. detailed design) will be superior to one that doesn’t. This explains the popularity of methodologies such as “spiral,” “rapid prototyping,” and now “agile.”

Furthermore, testing software is actually a design activity as well. It’s how we validate the design details. Thus any methodology (a la Agile) that encourages testing during “coding” will be more successful than one that doesn’t.

Conclusion: It’s All Design

Wait! Did you read that right? Is it really true that design, construction, and testing phases are all really just design?

As Reeves so aptly put it:

The overwhelming problem with software development is that everything is part of the design process. Coding is design, testing and debugging are part of design, and what we typically call software design is still part of design. Software may be cheap to build, but it is incredibly expensive to design.

Tell me What to Do, Not How to Do It! by Bruce Nielson

Never tell people how to do things. Tell them what to do and they will surprise you with their ingenuity. – George Patton

This sounds like good advice doesn’t it? It is, actually, but we need to understand it correctly.

The first thing we need to notice is that there is no objective difference between ‘what’ and ‘how.’ The difference is purely relative.

If I’m a marketing guy, I’m going to give you a ‘software spec’ that might be a series of bullet points that says:

  • Make the Whiz-Bang 50% faster to improve the user experience
  • Add the Gog Widget everyone keeps asking about
  • Enhance the Dribblium feature to automatically look up user information

If I’m the marketing guy “what” I’m doing is increasing sales. “How” I’m doing it is by the feature improvements listed above.

But to the team manager, this looks a lot more like a list of “what” needs to be done with no details on “how” to do it.

So the team manager takes this list of bullet points and creates what we (mistakenly) call “detailed requirements.” He now carefully lays out the details of “what” needs to be done but decides to leave it up to his tech lead on “how” to accomplish it.

The tech lead then creates what is (mistakenly) called a “detailed design” where he makes a bunch of UML and designs out exactly “what” he wants his programmers to do but still leaves it up to them on the specifics of “how” to implement that design.

Interestingly, the programmer then writes the code which, from the compiler’s point of view (do compilers have points of view?), is really a specification of “what” the software needs to do, but leaves it up to the compiler “how” to do it.

So based on this example, we must accept that “what” and “how” are purely relative to a point of view. They do not objectively exist.
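A small illustration of my own (not from the post) of how “what” and “how” shift with point of view: calling a built-in sort states what we want, while writing the loop states how, yet even the hand-written loop is itself just a “what” from the interpreter’s perspective.

```python
names = ["carol", "alice", "bob"]

# "What": request a sorted copy; the runtime decides the algorithm.
print(sorted(names))

# "How": an explicit insertion sort, one level of detail down.
# To the interpreter, though, this too is just a "what."
result = []
for n in names:
    i = 0
    while i < len(result) and result[i] < n:
        i += 1
    result.insert(i, n)
print(result)
```

Both print the same list; the only difference is which party, the programmer or the runtime, supplies the “how.”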

Perhaps this is why we don’t usually make a separation between “what” and “how.” They tend to mingle freely in our minds. Have you ever called your spouse and said “Hey, what time were you planning to be home?” when what you really wanted to know was “How much longer can I stay out with the guys/girls?” I do this all the time.

This is actually useful to know, because it means you never have to take an instruction as a mandate on “how” to do your job; what people actually care about is “what” they wanted to accomplish.

I remember a customer who insisted that a certain field, I think a zip code, had to have exactly 6 characters in it. This was a hard-and-fast requirement.

It turned out that they were used to a DOS-based program that moved to the next screen upon finishing typing in the last field on the previous screen. The zip code was the last field, so they wanted to be able to enter it without moving to the next screen before having a chance to review it. (For those of you under 30, DOS is a quaint little program that lets you type stuff and then it deletes all your files.)

Of course in a Windows-based program (for those of you under 20, Windows is that thing you see just before you click on the browser) this was a non-issue. The correct ‘requirement’ was actually “allow me to review the screen before I move to the next,” which is built into Windows anyhow.

But that’s my point. You should never just assume that because your customer told you “how” to do something, they weren’t actually trying to tell you “what” to do and just didn’t know how to ask it right.

So don’t just blindly follow requirements. Instead, make a real effort to understand ‘what’ the customer really wants to accomplish and come up with your own suggestions of ‘how’ to accomplish it. Think of requirements as the starting point of the discussion, not the ending point of ‘what’ you’ll finally deliver.

Software Schedules as Budgets by Bruce Nielson

Do you make and keep a budget with your home finances?

Maybe I should ask “should you make and keep a budget with your home finances?”

If you are the average American, it's likely that you answered “no” to the first and “yes” to the second. Why do we not do things we know we should do? I suspect the answer is quite simple: because it's hard work. As Scott Adams (creator of Dilbert) so aptly pointed out, hard work is both “hard” and “work.”

The other day I was talking to my boss about a project overrun. I explained that I had talked to the developer one day, while the project was still under budget, about how much more time to spend before stopping work, so we could save some of our remaining budget for a final site visit. He had forgotten, kept working, and used up the remaining budget.

When I told my boss about the overrun, he asked me why developers do this so often: “Why do they overrun their budget without telling anyone, when all they had to do was speak up before overrunning it so that the appropriate arrangements could be made?”

He is right to ask this question. This really is a case of cutting off your nose to spite your face. If you put yourself in the customer’s shoes, there is a considerable psychological difference between being told in advance that you have some hours left and being given choices on how to spend them, versus being told you are already over budget and have no choices at all.

So why does it happen so often that the developer only informs us all about the budget overrun after the budget is gone?

I think it goes back to my personal finance budget question: budgets are hard work. I suspect that programmers really don't like to track their time and tasks.

Indeed, I have been routinely told by developers that “Tracking hours is the Project Manager’s job!”

I don’t buy it.

Now typically status updates are done once a week in detail, maybe less. So it's not hard to see that, if a programmer has a 20-hour task and the next status update is 40 hours away, a refusal to track his own hours can lead to up to a 20-hour overrun before the Project Manager is even capable of intervening.

So of course, if the Project Manager can't trust the programmer to report the pending overrun, he'll probably switch to twice-weekly status reporting to avoid this problem in the future.

But what if it's a 1-hour task with (now) twice-weekly status reporting? This is a losing proposition, isn’t it?

Unless we decide to have status reports every hour, there really is no alternative to a programmer tracking their own tasks and schedule.
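The arithmetic above can be captured in one line. This is my own sketch of the worst case described in the text: if the programmer never self-reports and a task starts right after a status report, the overrun stays invisible until the next report.

```python
def worst_case_silent_overrun(task_estimate_hours, report_interval_hours):
    """Hours a task can run past its estimate before the next status
    report forces the overrun into the open (worst case: the task
    starts immediately after a report and nobody self-reports)."""
    return max(0, report_interval_hours - task_estimate_hours)

# The 20-hour task with weekly (40-hour) reporting from the text:
print(worst_case_silent_overrun(20, 40))  # 20

# The 1-hour task with twice-weekly (20-hour) reporting:
print(worst_case_silent_overrun(1, 20))   # 19
```

Note how shortening the reporting interval helps big tasks but barely dents the exposure on small ones, which is exactly why self-tracking has no substitute.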

The Blame Game by Bruce Nielson

I want to discuss the need for blame on software projects. “Ug! The blame game! I hate that!” I hear you groan.

But it's an all-too-familiar game for all of us. We all know that software projects (or all human endeavors, actually) end with what we call “the blame game,” where we all point fingers at each other and claim the failure was someone else's fault. We say we are mad when this happens, yet we participate in said game with uncanny gusto.

I think it's a mistake to overlook the human value of “the blame game.” Frustrating though it is, the end result of the blame game is usually that blame is redistributed around to the point where it's impossible to blame anyone, because everyone (or no one) was at fault. And I'd argue that this end result is generally the accurate one – there really was no one person at fault more so than just about everyone else.


In fact, most of the time, the “fault” is the “process,” not a person. (There is that dirty word “process” again.) Yes, there are many exceptions, but my experience is that most people (granted, not all) really do care about their projects. They want to be successful and they want to make their company successful. If there was a failure, it's generally because there was no recourse but the one taken.

An Example – Just Cut Testing! 

Think about this familiar scenario: A software developer is faced with an unreasonable mandated due date from the highest echelons of a large company.

Do they:

a) Have a nice chat with everyone up the food chain (possibly including the chairman of the board, the stockholders, or the competing company that forced the situation) and change the date to be more realistic, or

b) Quit their job now, since they’ve already failed, or

c) Make qualified statements about how “they can make it if…” knowing full well “if” will never happen.

For those that insist that they’d never choose option c, but would instead negotiate a better schedule, please, for the sake of argument, assume you tried that a thousand times and failed. So you now have to decide to either quit or go forward. You always have the option to quit later, of course.

I don’t know about you, but I’m not an idiot. I’d choose c over b. In fact, I’d dare say there is no choice here in the first place. C was the only correct answer.

In other words, my best bet is to take a stab at it.

I’d quickly come up with some written “requirements” (Heck, I’ll at least then be able to explain what requirements changed during the course of the project that way) but I’d have no qualms about reducing safety margins on other tasks, like say testing. (This is a painfully familiar story, right?)

Given this hypothetical circumstance, I’d have to be stupid to not use this approach.

Who’s to Blame for All These Defects?

When inevitably this leads to defects – perhaps even critical ones – can we truly say it's the developer's fault? Or is it the tester's fault for not having caught the defect? Or is it the project manager's for not planning enough time?

Or, here's a thought, maybe it's the Chairman of the Board’s fault for creating a culture where a developer finds himself in this position in the first place?

No wait! Maybe it’s the stockholder’s fault for demanding more than was reasonable.

You tell me, because it seems to me that you can't really pin the blame on anyone really. Granted, the Chairman of the Board (who probably didn't even know about the project) probably has more blame to shoulder than the developer who caused the problem. But can we really say it’s his/her fault?

So when things get ugly and the blame starts to fly, just repeat to yourself this little rule:

The blame game is a healthy way of redistributing blame until no one is found to be at fault.

Then, without guilt, play the blame game. Since you know it's a necessary ritual to moving on, you can choose to not be nasty about it. Defend yourself, but make it clear it's not “all your fault” either.

For example, let's say you are a developer. Go ahead and point out that you weren't given much test time, so it made sense to code and test quickly to make the schedule. Go ahead and point out that even the tester missed the problem. Go ahead and point out that if the company really wanted to see defect-less software, they'd take more time to get the software “just right” before a release. The end result is that you just blamed everyone else (including yourself) and helped appropriately redistribute the blame so that no one was really at fault.

And that's how it (generally) should be.
