A Flaw in Business Cases

September 19, 2009

A software business case compares the total cost of a piece of software with the benefits to be gained from implementing it. If the internal rate of return (IRR) of the investment is adequate, relative to the company’s policies on capital investment, and if the simple-minded powers that be have a good gut feel about the case itself, it is approved.
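To make the arithmetic concrete, here’s a minimal sketch of the IRR calculation at the heart of a business case. The cash flows are invented for illustration: a purchase-plus-implementation outlay in year zero, followed by five years of projected net benefits.

```python
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the year-0 outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """The discount rate at which NPV crosses zero, found by bisection."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid  # NPV still positive, so the IRR is higher
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical case: buy and implement for $500K, net $150K/year for 5 years.
case = [-500_000] + [150_000] * 5
print(f"IRR: {irr(case):.1%}")  # about 15.2%
```

If 15.2% clears the company’s hurdle rate, the case gets approved.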

Clearly, a business case isn’t perfect. Implementing software is an uncertain business. The costs are complex and hard to pin down precisely; implementation projects have a way of running over; maintenance costs can vary widely; and the benefits are not always realizable.

That’s where the gut feel comes in. If an executive thinks the benefits might not be there, that the implementation team might have a steeper learning curve than estimated, or that user acceptance might be problematic, he or she will blow the project off, even if the IRR sounds good.

Business cases of this kind are intended to reduce the risk of a software purchase, but I think they’ve actually been responsible for a lot of failures, because they fail to characterize the risk in an appropriate way.

There’s an assumption built into business cases that turns out to be wrong, and because it is wrong, there’s a flaw in the business case. If you ignore this flaw (which everybody does), you take on a lot of risk. A lot.

The flaw is this. Business cases assume that benefits are roughly linear. So, the assumption runs, if you do a little better than expected on the implementation and maintenance, you’ll get a little more benefit, and if you do a little worse, you’ll get a little less.

Unfortunately, that’s just not the case. Benefits from software systems aren’t linear. They are step functions. So if you do a little worse on an implementation, you won’t get somewhat less benefit; you’ll get a lot less benefit or even zero benefit.
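A toy model, with invented numbers, shows how far apart the two assumptions are. Suppose “quality” measures how well the implementation meets its prerequisites, with 1.0 meaning everything went as planned:

```python
PLANNED_BENEFIT = 1_000_000  # hypothetical annual benefit if all goes well

def linear_benefit(quality):
    """The business-case assumption: a little worse, a little less."""
    return PLANNED_BENEFIT * quality

def step_benefit(quality, threshold=0.9):
    """The brittle-system reality: miss the prerequisites and the
    benefit mostly disappears."""
    return PLANNED_BENEFIT if quality >= threshold else 0.1 * PLANNED_BENEFIT

for q in (1.0, 0.95, 0.85):
    print(f"quality {q:.2f}: linear ${linear_benefit(q):,.0f}, "
          f"step ${step_benefit(q):,.0f}")
```

At quality 0.85, the linear model predicts a 15% haircut; the step model predicts losing 90% of the benefit. That gap is the flaw.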

The reason for this is that large software systems tend to be “brittle” systems. (See the recent post on “Brittle Applications.”) With brittle systems, there are a lot of prerequisites that must be met, and if you don’t meet them, the systems work very, very poorly, yielding benefit at a rate far, far below what was expected of them.

This problem is probably easier to understand if we look at how business cases work in an analogous situation. Imagine, for instance, the business case for an apartment building. The expected IRR is based on the rents available at a reasonable occupancy rate. There is, of course, uncertainty, revolving around occupancy rates, rents, maintenance costs, quality of management, etc. But all of this uncertainty is roughly linear: if occupancy goes up, return goes up; if maintenance costs go up, return goes down; and so on. The business case deals with those kinds of risks very effectively, by identifying them and ensuring that adequate cushions are built in.

But what if there were other risks, which the business case ignored? These risks would be associated with things that were absolutely required if the building was to get any return at all. Would a traditional business case work?

Imagine, for instance, that virtually every component of the building you were going to construct–the foundation, the wiring, the roof, the elevators, the permits, the ventilation, etc.–was highly engineered and relatively unreliable, required highly skilled people who were not readily available to install, fit so precisely with every other part of the system that anything out of tolerance caused the component to shut down, etc., etc. The building would be a brittle system. (There are buildings like this, and they have in fact proven to be enormously challenging.)

In such a case, it is not only misguided to use a traditional business case, it is very risky. If one of these risky systems doesn’t work–if there’s no roof or no electricity or no elevator or no permit–you don’t just generate somewhat less revenue. You generate none. Cushions and gut feel and figuring that an overrun or two might happen simply lead you to a totally false sense of security. With this kind of risk, it’s entirely possible that no matter how much you spend, you won’t get any benefit.

Now, businessmen are resourceful, and it is possible to develop a business case that correctly assesses the operational risk (the risk that the whole thing won’t work AT ALL). I’ve just never seen one in enterprise applications. (Comments welcome at this point.)

The business cases and business case methodologies that I’ve seen tend to come from the software vendors themselves or from the large consulting companies. Neither of these is going to want to bring the risk of failure front and center. But even those developed by the buying companies themselves (I’ve seen a couple from GE) run into a similar problem: executives don’t want to acknowledge that there might be failure, either.

But the fact that the risk makes people uncomfortable doesn’t mean that it’s a risk that should be ignored. That’s like ignoring the risk that a piton will come out when you’re mountain climbing.

Is the risk real? Next post.


I used to work at QAD, a small manufacturing software vendor. I subscribe to a QAD chat group, and occasionally people ask questions like the one in the title.

It sounds as if the person asking is peddling something–who knows–but it’s an interesting question nonetheless. What kinds of knowledge are necessary (key) for an ERP implementation? If you run a manufacturing company, is APICS (that is, supply chain) knowledge particularly important?

Certainly, QAD used to think so. When I was an employee, you got a bonus for becoming APICS certified. (APICS is the American Production and Inventory Control Society; to get certified, you had to learn how MRP worked and how inventory should be managed.) And certainly, when the product was designed, the focus was on matching supply and demand. The product was built originally for Karl Lopker’s sandal manufacturing business, and the idea was always to have a simple, usable product that managed inventory well.

So you would think that the answer, at least for QAD users, is, “Of course APICS knowledge is key. Duh.” But I don’t think so.

You see, while I was at QAD, and then for some years afterward, I looked at a fair number of installations. And what I saw was disheartening, at least if you believed in good supply chain practices. The companies weren’t really following good supply chain practices, at least as APICS defined them.

Let me give you an example, which APICS-trained people will understand immediately. One of the ideas behind these systems is to reduce the amount of inventory you have on hand at any one time. To do this inside the system, there are two parameters that you have to set: lead time (the amount of time it takes for an order to be fulfilled) and safety stock (the amount you want to have on hand at all times). The longer the lead time or the higher the safety stock, the greater your inventory expense.
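For readers who aren’t APICS-trained, here’s a minimal sketch of that arithmetic, using the textbook safety-stock formula; all the numbers are hypothetical:

```python
from math import sqrt

def safety_stock(z, daily_demand_std, lead_time_days):
    """Stock held against demand variability over the lead time."""
    return z * daily_demand_std * sqrt(lead_time_days)

def annual_carrying_cost(order_qty, ss, unit_cost, holding_rate=0.25):
    """Average on-hand inventory: roughly half an order plus safety stock."""
    return (order_qty / 2 + ss) * unit_cost * holding_rate

for lead_time_days in (7, 28):
    ss = safety_stock(z=1.65, daily_demand_std=20, lead_time_days=lead_time_days)
    cost = annual_carrying_cost(order_qty=500, ss=ss, unit_cost=40)
    print(f"lead time {lead_time_days:2d} days: safety stock {ss:.0f} units, "
          f"carrying cost ${cost:,.0f}/year")
```

Quadruple the lead time and the safety stock doubles, and the carrying cost rises with it. That’s why these parameters repay ongoing attention.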

So what would you say if you discovered that in not one or even two installations, but many, the safety stock and lead time numbers for most of the inventory were set once, en masse, and then never set again? Well, I’ll tell you what to think. These figures, which are key to making the system work, are not being used.
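You could detect this pattern mechanically. Here’s a hedged sketch in pandas, against a hypothetical item-master extract (these are not QAD’s actual table or column names):

```python
import pandas as pd

# Hypothetical item-master extract: planning parameters plus the date
# each record was last changed.
items = pd.DataFrame({
    "item":         ["A100", "B200", "C300"],
    "safety_stock": [100, 250, 75],
    "lead_time":    [14, 14, 14],
    "last_updated": pd.to_datetime(["2004-03-01"] * 3),
})

go_live = pd.Timestamp("2004-03-01")
stale = items[items["last_updated"] <= go_live]
print(f"{len(stale) / len(items):.0%} of planning parameters untouched since go-live")
```

In the installations I saw, that percentage would have been embarrassingly close to 100.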

Now this was not just true of QAD Software; it was equally true wherever I went, no matter what software was installed.

So doesn’t this say that supply chain knowledge is key, after all? If they had supply chain knowledge, wouldn’t they have paid more attention? At first, I thought so. But then after a while, I realized that more supply chain knowledge would have made very little difference.

You see, that’s not why they were using the software. All these companies, it turns out, didn’t really care about getting the supply chain stuff right. They managed the supply chain fairly sloppily–tolerated a lot of inaccuracy and suboptimal behavior–and they got along (in their minds) just fine doing that. They didn’t want to put in the kind of care and rigor that is the sine qua non for getting these systems to do what they were designed to do.

What were they using the software for? Well, mostly to manage the paperwork virtually. Please don’t cringe, Pam, if you happen to be reading this. This is not a hack on you. The plain fact is that the companies needed to keep track of their commitments (orders), their inventory, and their money, and that’s what they used the system for. They needed a piece of paper that told people what inventory to move that day and where to move it to. And the system gave it to them.

To do this, though, you didn’t need much APICS knowledge or, if you didn’t believe in APICS’s recipes for inventory management, other supply chain knowledge. All you really needed was to be able to count, which most of the users could do without being APICS-certified.

So is supply chain knowledge key for an ERP implementation? Not at all. You can have perfectly happy users who have got exactly the nice simple implementation they need without much supply chain knowledge at all.

This answer, of course, raises lots of questions. What is key? Why do these companies tolerate sloppy supply chain practices? Wouldn’t they be better off if they cleaned up their act? Herewith, brief answers.

What is key? At a rudimentary level, the financials. You have to get the basics right here, or you’ll never close your books. In a system studied recently by a grad student at Harvard Business School, 65% of the inventory records were inaccurate. Can you imagine the upset if 65% of your account balances were incorrect?

Why do they tolerate sloppy supply chain practices? I think it’s largely because more finely tuned systems are much more brittle. They take a lot of care and feeding, and their ability to absorb hard, rude, unexpected shocks is limited.

And wouldn’t they do much better using the systems? In many cases, no. You see, at most of the companies I’ve run into, the MRP/APICS model that QAD (and every other software vendor) provided is not actually all that accurate. To make a really significant difference, you need more sophisticated tools that are better suited to the specifics of your supply chain.

Comments welcome.

This is one of a series of posts on the high cost of buying enterprise applications and the high cost of selling the products. This high cost, I’ve been arguing, is just plain bad for both sides and almost certainly unsustainable. So the question for the analyst is, “What can be done about it?”

In the background, I’ve had a lot of discussion with industry pundits and the Twitterati: Vinnie Mirchandani, of course, and Jason Busch and @dahowlett and Brian Sommer and Dennis Moore and many others who don’t regularly comment over the airwaves. What, I’ve been asking, accounts for the high cost? Two answers seem to emerge.

1. Too many cooks.

2. Too little trust.

If you’ve ever been involved in buying applications, you’ve seen the first one happen over and over again. Too many people get involved with the decision, each with their own agendas, each more or less connected to one of the contenders, and getting all these parties to agree requires too much head-banging. It’s Congress trying to pass health care, writ small, and it’s just as pathetic.

And the lack of trust? Well, I get why people don’t trust vendors, and I get why they erect structures that are supposed to control them. But in my experience, it’s always been a losing game. Buyers are nowhere near as devious, oops, I mean sophisticated, as sellers–how could they be, when the sellers do it for a living? So if you go into a deal, it’s natural to feel skittish, but giving in to the feeling doesn’t really protect you, and it does slow you down.

So what’s to be done? In the long run, you can’t do much about either problem without cooperation from the other side. If you’re going to have lots of cooks, the cost of sales for the vendor is just plain going to be astronomical, and the cost of buying will rise accordingly. If the vendor behaves in what has become the normal fashion, alas, defensive (and expensive) buying will be only too appropriate.

Granting that, you have to start somewhere, and here are some suggestions for the buyer, suggestions that should save money. All of these will really help, even though the second sounds completely ridiculous.

1. Limit the scope of what you’re buying as much as possible. Look only at what you clearly need; prefer things that you’ll use immediately; go for immediate results; and don’t buy anything else. You can always buy the rest later. (This reduces the number of cooks.)

2. Use a tape recorder and a camcorder. Keep track of what the vendor says, in full, precisely. Look the transcripts over. If you have questions or don’t understand something, follow up. In the long run, this saves a lot of time. (It can also help build trust: trust, but verify.)

3. Prototype, prototype, prototype. Get a small team to set it up and try it out, rather than assembling a large team to review demos that were set up by the vendor. If you have to pay the vendor to help you with the prototype, do it. You’ll save in the long run. (This, too, reduces the number of cooks.)

Enough from me. Suggestions from the readership are welcome.

It stands to reason, doesn’t it? The more thorough and rational the buying process for enterprise applications, the better the outcome. For sure. Right?

Well, the other day, Dennis Moore, aka @dbmoore, a well-known figure in the industry, posted a query on Twitter asking for data that would show this is true. The more thorough the buying process, the more effective the implementation. That has to be true, doesn’t it? But no, the guy wants data. “Not anecdotes,” he said later, “but data.”

He isn’t going to get any. There are three reasons for this. First, there isn’t any reliable data of any kind on whether implementations were successful, at least none that I’ve seen in a career of nearly two decades. Second, most effort expended on pre-purchase analysis of software is misdirected, adding little to the quality or accuracy of the decision. And third, if you work backwards from failed implementations and identify the causes of the failures, it is very rare that the cause is the kind of thing that could have or should have been caught by a more thorough analysis.

The fact that there is little reliable data on whether an enterprise application product works is, of course, a scandal, but it remains a fact, and it will remain one just so long as enterprise application companies want everybody to believe that the odds of success are high and customers are embarrassed to admit failure.

I have been involved in at least two attempts by large, reputable companies to get a good analysis of what value, if any, has been gained after an enterprise app was implemented. The first interviewed only project managers and determined that the project managers found many, many soft benefits from the implementation. The second, a far larger effort, was eventually abandoned.

But let’s say that we had some rudimentary measure, like number of seats being actively used versus seats planned to be used two years after the initial projected go-live date. Would it show that thorough investigation really helps?

I don’t think so, and here’s why. The question of which software application to buy, and indeed whether one should buy at all, is usually a very simple question, one with a relatively clear right answer, at least to an objective observer. But it is rarely, if ever, treated as a simple question. People worry about a lot of irrelevant things; they are (usually) distracted by the salespeople, who naturally want the purchasing decision to be based on criteria most favorable to them; and because there’s a lot of risk (A LOT), people create lengthy, rigorous, formal processes for getting to a decision, processes that do very, very little to improve the accuracy of the final decision.

Honestly, I can usually tell in an hour’s phone conversation what a company ought to do, and I often check back later — sorry Dennis, more anecdotal evidence — and I’d say I’m right at least 2/3 of the time, maybe more. And because of the way my business model works, I don’t even charge for these conversations.

What do you need to look at? Well, it’s a complex question, don’t get me wrong. But because the number of providers is limited, the capabilities are limited, and the likelihood of failure pretty high, there are usually only a few things that actually matter. And when there are only a few things, it shouldn’t take you that much time to figure them out.

Give me a call, any time, if you want to test this out.

Those of us who have been in the business for a while feel it’s in the doldrums: there’s not much innovation, and every year, the apps get older.

Where will innovation come from? Well, the thing about innovation is that it changes something we had thought was fundamental and immutable: software has to ship on a CD-ROM, text has to be typed on a phone keypad, etc. So you can’t look for innovation in the same old stuff.

That said, here are four areas that I think are wide open, wide open, wide, wide, wide open. An enterprise application (or an enterprise application company) can get an edge if it can do one of the following things:

Reduce the cost of sales by a factor of 10.

Reduce the risk of ownership by a factor of 2.

Increase the effectiveness of the application by a factor of 5. (What’s a measure of effectiveness? Well, let’s say it’s the number of users who use the application seamlessly and easily.)

Increase the speed of the application (usually by using a special-purpose database) by a factor of 50.

At least two of these aren’t impossible, because we’ve seen them happen.

Salesforce reduced the cost of sales for SMB customers by selling an enterprise-level application that you could test for free and buy with a credit card.

Workday and QlikView both transformed applications in their areas using special-purpose databases, and Hasso wants SAP to do the same.

And at least one large company, SAP, is trying for innovation in the second. Fumbling and clumsy as its effort to improve maintenance has been, the idea is to reduce the risk of owning SAP by making maintenance practices considerably more effective.

As for reduced risk of ownership or increased effectiveness (actually, maybe you’ve seen QlikView do the latter), we’re still waiting.

Ideas on who’s done this or how it can be done?