Blame the Customer?
July 14, 2010
Who is to blame for IT project failures? My colleague, Michael Krigsman, argues that when IT projects wander into the “IT Devil’s Triangle,” all three participants–the vendor, the integrator, and the customer–are to blame. Michael is very insistent about this; in a recent post on Marin County v. Deloitte, he says, “In my view, it is highly unusual for a project to fail without some culpability on the part of the customer.”
Michael is the guy who almost singlehandedly took IT project failure out of the closet and into the open, and much of his professional life is spent analyzing these failures. For that reason, one should not criticize what he says without good reason. But I’ve always been uncomfortable with his line on things and felt that we would all be better off if we had better analytic tools for this kind of situation. I want to know when the “culpability” is minor and merely contributory and when it is really important or even primary.
One’s normal intuition about such things is that there are broad spheres of responsibility. If I get an artificial hip implanted, it’s up to the manufacturer to provide my doctor with a hip that works, up to the doctor to put it in correctly, and up to me not to abuse it. This is the intuition that underlies a lot of contract law and liability law, and it even applies to very large, complex engineering projects. When the engineer discovered a flaw in the Citicorp building in New York City, a flaw that could have caused it to collapse, the engineer and the contractor took responsibility for it and had it fixed. They didn’t blame the customer, and they didn’t ask the customer to adjust his use of the building.
If you apply this intuition to technology, it works pretty well. In the case of IT projects, the software company is responsible for making sure that the design and construction of the product are adequate to the demands that will be placed on it, the integrator is responsible for deploying the product in a way that meets what one might call the normal expectations of the customer, and the customer is responsible for using it in ways that are consistent with the design. If any of them fail to meet their responsibilities in their sphere, then they are culpable. And the others are not culpable (except perhaps in a minor way) merely because they decline to compensate for that failure.
Now, I think Michael would agree with this, but he would say that, as a practical matter, what happens in these large project failures is that all three parties fail to deliver, so all are culpable. (Michael, please correct me if I’m wrong.)
But it seems to me that doesn’t go far enough. I think you need to be able to figure out when one participant is clearly the most culpable. Perhaps, for instance, one ought to apply a notion of priority, so that if the vendor fails to deliver, the vendor is primarily culpable by definition, and the most that the other two parties can do is make some minor contribution. (This tends to be the approach in products liability, with some major caveats.) Or perhaps there are other tools that need to be applied, tools that would allow one to extend (or perhaps even change) the “pox on all their houses” line that Michael takes.
Whatever you do, though, it isn’t easy.
Let me try to illustrate what I mean with an example that has made a lot of news recently: the iPhone 4. Here you had three participants: Apple, AT&T, and the customer. You had a product that “did not work properly.” And you had a dispute about levels of culpability.
The problem, in case you’ve been in Tibet for the last month, is that the iPhone’s reception isn’t very good, for reasons that still aren’t clear. (My wife has one. It’s true; it’s not good.) The customers have basically been told by Apple, “The problem is in the way you hold the phone. Either hold the phone differently, or fork over $25 for a case.” The customers have said, if I may paraphrase, “It’s a phone. You should be able to hold the phone any way you want. It’s up to Apple to give me a phone that works no matter how I hold it.”
Now, the IT Devil’s Triangle analysis says, basically, that the customers are wrong. According to this argument, Apple made a good phone that involved some design tradeoffs. (Basically, the antenna was put on the rim of the phone.) If one of those tradeoffs means customers have to hold the phone in a special way, so what? They should either buy themselves a case or hold the phone correctly. If they don’t want to do that or can’t, they are contributing to the problem and are at least as culpable as Apple.
So is AT&T off the hook here? Under the Devil’s Triangle argument, not at all. The phone service they provide is weak and erratic, and that makes it very difficult to troubleshoot the phone. (I have experienced this in spades.) Their customer “service” is designed to handle normal problems expeditiously, but it is not designed to track and resolve complex problems, and as a result, it turns what could be an irksome experience into something verging on horrible. (I have also experienced this, believe me.) And that means that customers are not as tolerant of the iPhone 4’s design choices as they should be.
It’s something of a conundrum, isn’t it? One’s normal intuition is that Apple should design a phone that people can hold any way they want. But one can certainly make a cogent argument that it’s really the customer, as much as anyone else, who is to blame, because they get mad at Apple, rather than being willing to hold the phone in the correct way.
In the end, the solution to this riddle is a matter of values. If you think (as Consumer Reports does) that it’s up to the vendor to get things right, at least when it comes to simple things like how one holds the phone, then the Devil’s Triangle argument is simply wrong. If you believe (as Steve Jobs and many of my friends who work for vendors do) that the customer is to blame if they can’t deal with the “flaws” they find in a complex and highly engineered technology, then you’re going to agree with Michael and assume that there are very few situations where the customer doesn’t have some culpability.
Let me just tell you where I come down on this and hope for some illuminating comments. I think the benefit of the doubt should be given to the user. If the vendor is clear about the design tradeoffs and limitations and the deliverer (integrator) provides service that takes those limitations adequately into account and is clear about that, then I think, “Yes, it’s up to the consumer to deal with the limitations.” If the vendor and deliverer fail to do this–if the vendor isn’t clear about what the design tradeoffs are or if the vendor and deliverer allow you to get the idea that the product will do things (like make phone calls in normal situations when being held normally) that it won’t in fact do–then the onus is on them.
So, for instance, if the building architect makes a mistake in the way the building is designed, or the general contractor deviates from the design in a way that turns out to be dangerous, it’s up to them to fix it; the building owners are not culpable just because they don’t want to limit the number of people allowed on each floor to a number far below what they’d been led to expect. And if the hip designer uses a brittle material and the doctor chips it when he or she installs it, it’s not up to the patient to walk less.
Just my view.
So why is this important? It’s important because this is not how the software industry works. It’s normal for vendors to be very closemouthed about the design of their technology, for vendor and integrator salespeople to make unrealistic claims about what the technology does or about the benefits to be received, for the people delivering the product to view limitations in the delivery as upsell opportunities, and so on. So if my view were to prevail, and it were up to the vendor and deliverer to make sure that the product works as you would normally expect and that the installation is what it should be, the industry would have to change.
Is that likely? Maybe, maybe not. But it gets more and more likely every time a user, asked to adapt to some software’s vagaries, asks himself or herself, “Isn’t this like being asked to hold an iPhone with tweezers?”
The Typewriter, the Word Processor, and the PC
April 9, 2010
In yesterday’s post, I argued that ROI would not be an adequate measure of the benefits conferred by new-gen (or pseudo-new-gen) applications like Workday, Business ByDesign, or Fusion Applications. The previous-gen applications were all about automation. The new-gen suites confer real benefits (I think), but not necessarily benefits that fall through to the bottom line.
What benefits are those? Well, they have to do with working more effectively: making fewer errors, putting more time into work and less into busywork, making more accurate decisions, faster. Is there benefit from this kind of thing? Sure. But how do you measure it?
In the post, I suggested a hazy term, “operational effectiveness,” for the benefits one should expect. What is “operational effectiveness”? Let me admit freely that I don’t know for sure. In this post, let me propose an analogy, which should help you understand what I’m getting at.
The analogy comes out of a historical situation that always posed a problem for ROI analysis: the transition in business from the typewriters that sat on secretaries’ desks to the PCs that sat on executives’ desks. This transition occurred in two phases. First, the typewriters on secretaries’ desks were replaced with big, clunky word processors that sat next to the desk. These word processors automated the secretaries’ document production work. Then the secretarial position itself was eliminated, and typing became something that executives did themselves on the PC.
The transition to word processors could easily be justified in ROI terms. We could get more work out of the secretary or else hire fewer secretaries. Whether the justification was real is an open question. But it’s certain that that’s how people thought of it.
The next transition was much more problematic for ROI analysis. Expensive executive time was now being put into jobs that had been performed more efficiently by much cheaper labor.
At the time, people didn’t put a lot of thought into figuring out why they were funding this transition. Executives saw the PCs, knew that everyone else was using them, needed them for some functions (e-mail, spreadsheets), and just decided, “We’re doing it this way.” At least in my recollection, that’s what happened.
So were they just loony or lazy or wasting shareholder money on executive perks? I don’t think so. I think what they were plumping for was the same “operational effectiveness” that I’m talking about 25 years later.
True, they spent more time typing. But they also had more control over the final product; they could change it more easily; and they could distribute it without much overhead. And, at the same time, they were changing the form of what they were doing. They weren’t just producing typed memos; they were producing documents with fancy fonts and illustrations, and they were creating PowerPoint decks. True, many an executive was spending ridiculous amounts of time fiddling with type sizes to get things on one page, but even acknowledging that, they thought the new way was better.
Indeed, by the time the transition was finished, justification wasn’t even a question, because the new tools had changed the nature of work, and now you couldn’t get along without them. When executives were doing the typing, they stopped creating long reports. More and more of the time, a corporation’s decision-making wasn’t wrapped around a full document (minutes, memos, or formal reports); it was wrapped around PowerPoint decks.
So by the end of the transition, ROI analysis had become entirely moot. How could you get a tangible measure of benefits when you were comparing apples and oranges?
Could we be seeing a similar transition now? It’s certainly possible. The analog to the word processors is the first generation of enterprise applications, which were justified by ROI analysis and funded on the strength of the automation benefits they conferred. The analog to the PC is the second generation of enterprise applications.
(One caveat. As I’ve said before, I don’t think that Fusion Applications or the versions of Business ByDesign that I’ve seen are in fact second-generation applications. They’re more like Version 1.3. But they’re close enough to next-gen to raise the problem I’m talking about.)
If the analogy holds and if second-gen apps work as the developers hope, the benefits that businesses are going to experience will be equally hard to get your arms around, partly because the benefits are so subtle and disparate and partly because you’ll see a shift in the way work is done.
Does that mean that we won’t be able to talk about the benefits and we’ll just bull ahead with them? Well, that’s why I’m introducing the notion of operational effectiveness. It does seem to me that we can get clearer about what the benefits are.
So come on guys. Make comments. What is operational effectiveness? And how can we tell whether we are getting it?
Operational Effectiveness
April 8, 2010
This blog post is more an open question than a pronouncement. So please feel free to comment on this or take the idea in a different direction.
I’ve been thinking about the next generation of Enterprise Applications, the value that they (might) bring, and about how people might justify replacing the enterprise applications they have with the new generation.
Generally speaking, you justify an investment in infrastructure using ROI: you invest this much, you get this much return. For the first generation of enterprise applications (most everything designed from 1990 through 2003 or even later), this made sense, because they were basically automation apps. They automated work done by people. The ROI showed up because you didn’t have to pay people to do that work any more.
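To make that arithmetic concrete, here is a toy version of the automation-era calculation. Every figure in it is invented for illustration; the point is only that the benefit side was a straightforward labor number.

```python
# A toy sketch of a first-generation "automation" business case.
# All figures are invented for illustration.

def simple_roi(total_cost: float, annual_labor_savings: float, years: int = 5) -> float:
    """ROI over the horizon, expressed as a fraction of the investment."""
    total_benefit = annual_labor_savings * years
    return (total_benefit - total_cost) / total_cost

# A $2M project that eliminates work formerly done by 8 clerks at $60K/year.
print(f"5-year ROI: {simple_roi(2_000_000, 8 * 60_000):.0%}")  # 5-year ROI: 20%
```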
Now these new applications simply don’t do that; that is, they don’t automate appreciably better than the old applications do. And this means that ROI is a pretty crummy tool for evaluating whether an investment is worthwhile. Yes, there will be ROI if the enterprise application works the way it’s supposed to. But the return will be highly indirect. You won’t be able to fire people and thereby pay for the application.
Brian Sommer has been talking about this problem for years–essentially, he points out that automating something that’s already automated doesn’t justify an investment on the same scale. But he has never really explored whether there are other forms of justification.
Nenshad argues in his book and his blog that good performance management enabled by modern tools will get you to a place you want to be, and he tells you a lot about how to do it. And while it is true that these new applications help you manage performance better and that’s one of the reasons you want them, he doesn’t offer a way of thinking about justifying the move to what he recommends. (Nenshad, if I missed this in your book, I’m sorry.)
So what does one use? Well, let me offer a concept and sketch it out in a paragraph or two, and you, my small but apparently very loyal readership, can then take me to task.
I’ll argue that what these new applications really do is improve “operational effectiveness.” What’s that? Well, to start out with, let’s just say that they let each employee put more effort into moving the company forward and less into overcoming friction.
Well, that’s suitably hazy. So here are some things that you could measure that would, I think, be indicators that employee effort is more coherent and focused. You could, for instance, take a page from the black belts and measure operational errors or even just exceptions as part of operational effectiveness. Or, you could look at corporate processes that are nominally automated and see whether they are managed by exception. (Truly best, automatable practices should require almost no routine manual actions.)
People sometimes try to look at operational effectiveness by measuring what percentage of revenue is spent on things that feel like pure expense, like IT. On this view, a company that spends 3% of its revenues on IT is less effective than one that spends 1%. People also try to get at it by looking at which activities are “value-added” and trying to get people to do more of them. Both ideas are silly, of course, in themselves. (The 3% company may be spending on stuff that makes it effective, while the 1% company isn’t.) But I think there might be better indicators of operational effectiveness. Wouldn’t operationally effective companies spend less time in meetings, send fewer junk e-mails, work fewer hours per employee (!), resolve more customer complaints and deflect fewer, etc., etc.?
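For concreteness, here is a speculative sketch of how a few such indicators might be computed. Every field name, number, and metric here is something I made up for illustration, not a standard.

```python
# Hypothetical "operational effectiveness" indicators; all names and data invented.

from dataclasses import dataclass

@dataclass
class QuarterStats:
    employees: int
    meeting_hours: float      # total hours employees spent in meetings
    junk_emails: int          # internal messages recipients flagged as noise
    complaints_received: int
    complaints_resolved: int  # actually resolved, not deflected

def indicators(q: QuarterStats) -> dict:
    """Per-employee friction measures; fewer meetings/junk and more resolutions are better."""
    return {
        "meeting_hours_per_employee": q.meeting_hours / q.employees,
        "junk_emails_per_employee": q.junk_emails / q.employees,
        "complaint_resolution_rate": q.complaints_resolved / q.complaints_received,
    }

print(indicators(QuarterStats(200, 5200.0, 41_000, 900, 740)))
```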
You get the idea, I think. So why is this a good measure for the new generation of applications? Because, bottom line, I think that’s what they’ll do. They’ll help organizations and people avoid wheel-spinning, error correction, and pointless processes or rules by getting to what matters, faster.
Of course, people have always accused me of being a ridiculous, blue-sky, naive optimist. But that’s how it seems to me.
What do you think?
Selling Brittle Applications
October 7, 2009
Has it ever occurred to you that software salespeople get a bad rap? In the popular imagination, they have blood dripping from their fangs. But in my experience, that’s not what they’re like at all. The ones I know are pretty decent folk, live in the suburbs, maybe even coach soccer. Not one of them would actually throw their mothers under a bus in order to get the deal. At least, I don’t think they would.
So what accounts for this reputation? Well, in the course of writing this series of blogs on brittle apps, I think I’ve come up with an answer of sorts.
Start with a fact that you should know, but maybe haven’t paid much attention to. Good software salespeople don’t sell software. They sell benefits. They don’t try to confuse you with features or functions or architecture; they try to make you understand what those things will do for you.
This is what they should be doing. After all, the buyers of software don’t really know about or understand what the features do, and they don’t have the patience to figure out exactly how the features produce the benefits. If you’re a CEO or CFO, you want to get to the point. What is the value that these products deliver?
Software salespeople have developed this reputation, I think, because people have discovered or heard or found that the benefits aren’t available. And they think the salesperson knew this all along and, like some snake-oil salesman, was promising a cure for cancer in order to wrest the last dollar out of some old lady’s handbag.
But in my experience, that’s not what’s going on at all. The salespeople actually have a fairly rational, if optimistic view of their product. They know that the products are designed to achieve certain benefits; they can see themselves how the design could work; and they probably know of customers who have achieved the benefits they should achieve. At worst, they’re like a Ferrari or a Jaguar salesman, who focuses on the riding experience and doesn’t feel it is incumbent upon him to bring up the repair records.
People like Dennis Moore would go even further in the software salesman’s defense, and perhaps they’re right. They would say that the salesperson is more like a good physical trainer, who can genuinely promise to get you in better shape if you’ll work out. If after you buy, you don’t want to put in what it takes to get the benefits, that’s your problem not theirs.
So is Dennis right (assuming he would agree with the words I’m putting in his mouth)? When somebody buys software and underutilizes it or puts it on the shelf, is it entirely their problem? Well, no. I think it would be if they realized how brittle these applications really are. But in my experience, they rarely do. They buy this Ferrari, go out on a Sunday afternoon, and only when they find themselves riding back next to the tow truck driver do they find out what they’ve bought.
So whose problem is it? Well, I’ve already gone on too long. You’ll have to wait until the next post.
Why Develop Brittle (High-Risk) Apps?
September 25, 2009
“Brittle” design isn’t limited to enterprise application software. You can find brittle design in cars, bridges, buildings, TVs, or even the vegetable bin. (What are those 1/2-pint boxes of $5.00 raspberries, 3/4 of which are moldy, but examples of brittle design?)
What do brittle designs have in common? The designer chose to accentuate high performance at the expense of other reasonable design parameters, like cost, reliability, usability, etc. A Ferrari goes very, very fast, and it feels good when it goes fast, and that’s a design choice. And it’s part of the design choice that the car requires a technician or two to keep it going fast for longer than an afternoon.
So why are most enterprise applications brittle? You can see this coming a mile away. It was a design choice. The enterprise applications in question were designed to be the Ferraris of their particular class of application. They were designed to do the most, have the most functionality, be the most strategic, appeal to the most advanced early adopters, be the most highly differentiated, etc.
To get Ferrari-like performance, they had to make the same design choices Ferrari did. They had to assume the application was perfectly tuned every time the key was turned, and they had to assume that the technicians were there to perform the tuning.
Enterprise applications, you see, were intended to run on the best, the highest-end machines (for their class). They were intended to be set up by experts. They were intended to be maintained by people who had the resources to do what was necessary. They were intended to satisfy the demands of good early-adopter customers who put a lot of pressure on them, with complex pricing schemes or intricate accounting, even if later on, it made setting the thing up complex, increased the chance that there were bugs, and made later upgrades expensive.
This wasn’t bad design; it was good design, especially from the marketing point of view. The applications that put the most pressure on every other design parameter got the highest ratings, attracted the earliest early adopters, recruited the most capable (and highest-cost) implementers, etc., etc. So they won in the marketplace and beat out other applications in their class that made different design choices.
I lived through this when I worked at QAD. At QAD, the founder (Pam Lopker) made different design choices. She built a simple app, one that was pretty easy to understand and pretty easy to set up and did the basics. And for about two years, shortly after I got there, she had the leading application in the marketplace. And then SAP and JDE and PeopleSoft came in and cleaned our clock with applications that promised to do more.
Now, none of the people who actually bought SAP instead of QAD back then or chose (later) to try to replace QAD with SAP did this because they really wanted high performance, per se. They wanted “value” and “flexibility” and “return on investment” and “marketable skills.” They literally didn’t realize that the value and flexibility came at a cost, that the cost was that the application was brittle and that therefore, the value or flexibility or whatever was only achievable if you did everything right.
If they had realized this, would they have made different choices?
I don’t know. I remember a company that made kilns in Pittsburgh that had been using QAD for many years. The company had been taken over by a European company that used SAP, and the CIO had been sent over from Germany to replace the QAD system with the one that was (admittedly) more powerful. He called me in (years after I had worked at QAD) to help him justify the project.
I looked at it pretty carefully, and I shook my head. Admittedly, the QAD product didn’t do what he wanted. But I didn’t like the fit with SAP. I was worried that the product designed for German kiln production just wasn’t going to work. I didn’t want to be right, and I was disappointed to find out that two years later, despite very disciplined and careful efforts, he was back in Germany and QAD was still running the company.
I’m glossing over a lot, of course. There are secondary effects. Very often, the first user of an application dominates its development, so the app will be tuned to that user’s strengths and weaknesses. It will turn out to be brittle for other users because they don’t have the same strengths. Stuff that was easy for the first user then turns out to be hard for others.
Two final points need to be made. First, when a brittle application works, it’s GREAT. It can make a huge difference to the user. Brian Sommer frequently points out that the first users of an application adopt it for the strategic benefit, but later users don’t. He thinks it’s because the benefit gets commoditized. But I think it’s at least partially because the first users are often the best equipped to get the strategic benefit, whereas later users are not. I think you see something of the same issue, too, in many of Vinnie Mirchandani’s comments about the value that vendors deliver (or don’t deliver).
Second, as to the cause of failure. Michael Krigsman often correctly says that projects are a three-legged stool and that the vendors are often blamed for errors that could just as easily be blamed on the customers. Dennis Moore often voices similar thoughts. With brittle systems, of course, they’re quite right; the failure point can come anywhere. But when they say this, I think they may be overlooking how much the underlying design has contributed.
It may be the technician’s fault that he dropped the beaker of nitroglycerin. But whose brilliant idea was it to move nitroglycerin around in a beaker?
Brittle (High-Risk) Enterprise Applications
September 24, 2009
In the last few posts, I’ve been talking about “brittle” applications, applications that just don’t work unless everything goes right. We all know lots of analogues for these apps in other areas: the Ferrari that coughs and chokes if it’s not tuned once a week, or the soufflé that falls if you just look at it funny. But it doesn’t occur to most people that many, if not most, enterprise applications fall into the same category.
They don’t realize, that is, that ordinary, run-of-the-mill, plain-vanilla enterprise apps are kind of like Ferraris: it takes a lot to get them to do what they’re supposed to do.
Here are some examples.
Consider, for instance, the homely CRM application. Many, if not most executive users go to the trouble of buying and putting one in because they want the system to give them an actionable view of the pipeline. Valuable stuff, if you can get it. When the pipeline is down, you can lay people off or try new marketing campaigns. When it’s up, you can redeploy resources.
Unfortunately, though, it takes a lot to get a CRM system to do this. If, for instance, the pipeline data is inaccurate, it won’t be actionable. Say a mere 10% of the salesforce is so good or so recalcitrant that their pipeline data just can’t be trusted. You just won’t be able to use your system for that purpose.
It’s kind of funny, really. Giving you actionable pipeline data is a huge selling point for these CRM systems. But when was the last time you personally encountered one that was 90% accurate?
Another example I ran into recently was workforce scheduling in retail. A lot of retailers buy these things. But it’s clear from looking at them that the scheduling can be very easily blown.
Finally, take the example I talked about before: MRP. When you run that calculation, you have to get all the data right, or the MRP calculation can’t help you be responsive to demand. If even 30% of the lead times are off, you won’t be able to trust much of the run. And when was the last time more than 70% of the lead times were right?
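For readers who have never run one, here is a minimal sketch of the lead-time logic at the heart of an MRP run (one item, one level, no lot sizing, all data invented). Notice that a wrong lead time silently shifts every planned order the run generates.

```python
# Minimal single-item MRP netting sketch (toy data; no lot sizing or scheduled receipts).

def plan_orders(gross_reqs: dict[int, int], on_hand: int, lead_time: int) -> dict[int, int]:
    """Return planned order releases as {release_week: quantity}."""
    releases: dict[int, int] = {}
    projected = on_hand
    for week in sorted(gross_reqs):
        projected -= gross_reqs[week]
        if projected < 0:
            # Order the shortfall, released lead_time weeks before it is needed.
            releases[week - lead_time] = -projected
            projected = 0
    return releases

reqs = {3: 40, 5: 60, 8: 50}                       # gross requirements by week
print(plan_orders(reqs, on_hand=50, lead_time=2))  # {3: 50, 6: 50} with the believed lead time
print(plan_orders(reqs, on_hand=50, lead_time=4))  # {1: 50, 4: 50}: every release moves
```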
Notice that if you underutilize a brittle system, it can be somewhat serviceable. If you use the CRM system to follow the performance of the salespeople who use it and don’t try to use the totals, the inaccuracy doesn’t matter. If you just use MRP to generate purchase requisitions, the lead times don’t matter. But, as I said in the earlier post, if you do underutilize the system, the amount of the benefit available falls precipitously. And then the whole business case that justified this purchase and all this effort just crumples.
What makes me think that a lot of enterprise applications are brittle? Well, I have a lot of practical, personal experience. But setting that to one side, it’s always seemed to me that the notion of brittle application explains a lot of data that is otherwise very puzzling.
Take for instance all the stuff that Michael Krigsman tells us about project failures. He is constantly reminding us that the cause of failure is a three-legged stool, that it could be the vendor or the consultant or the company, and he is surely right. What ought to be puzzling about this, though, is why there is no redundancy, why the efforts of one group can’t be redoubled to make up for failings in another. If the apps themselves are brittle, however, it takes all three groups working at full capacity to make the project work.
Or, more generally, look at the huge number of project failures (what’s the number, 40%, are abandoned?). Given how embarrassing and awful it is to throw away the kind of money that enterprise app projects take, you’d think that most people would declare some level of victory and go home, unless the benefit they actually got was so far from what they were hoping for that the whole effort became pointless.
How can you tell whether the enterprise app you’re looking at is brittle? Well, look at the failure rate. If companies in your industry historically report a lot of trouble getting these things to work, or if the “reference” installations that you look at have a hard time explaining how they’re getting the benefits that the salesman is promising you, it’s probably a brittle app.
Two questions remain. When do you want to bite the bullet and put in a brittle app? And why would vendors create apps that are intrinsically brittle?
Next post.
A Flaw in Business Cases
September 19, 2009
A software business case compares the total cost of software with the benefits to be gained from implementing the software. If the IRR of the investment is adequate, relative to the company’s policies on capital investment, and if the simple-minded powers that be have a good gut feel about the case itself, it is approved.
Clearly, a business case isn’t perfect. Implementing software is an uncertain business. The costs are complex and hard to fix precisely. Implementation projects do have a way of going over; maintenance costs can vary widely; the benefits are not necessarily always realizable.
That’s where the gut feel comes in. If an executive thinks the benefits might not be there, that the implementation team might have a steeper learning curve than estimated, or that user acceptance might be problematic, he or she will blow the project off, even if the IRR sounds good.
Business cases of this kind are intended to reduce the risk of a software purchase, but I think they’ve actually been responsible for a lot of failures, because they fail to characterize the risk in an appropriate way.
There’s an assumption that is built into business cases, which turns out to be wrong. The fact that this assumption is wrong means that there’s a flaw in the business case. If you ignore this flaw (which everybody does), you take on a lot of risk. A lot.
The flaw is this. Business cases assume that benefits are roughly linear. So, the assumption runs, if you do a little better than expected on the implementation and maintenance, you’ll get a little more benefit, and if you do a little worse, you’ll get a little less.
Unfortunately, that’s just not the case. Benefits from software systems aren’t linear. They are step functions. So if you do a little worse on an implementation, you won’t get somewhat less benefit; you’ll get a lot less benefit or even zero benefit.
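Here is a toy illustration of the difference. The $5M benefit, the 90% threshold, and the 10% residual are all invented; the shape of the two curves is the point.

```python
# Linear benefits (the business-case assumption) vs. step-function benefits
# (the brittle-system reality). All numbers invented.

def linear_benefit(max_benefit: float, quality: float) -> float:
    """The assumption: do 10% worse on the implementation, get about 10% less benefit."""
    return max_benefit * quality

def step_benefit(max_benefit: float, quality: float, threshold: float = 0.9) -> float:
    """The reality: miss the prerequisites and the benefit mostly collapses."""
    return max_benefit if quality >= threshold else 0.1 * max_benefit

for q in (1.0, 0.85, 0.7):
    print(f"quality {q:.0%}: linear ${linear_benefit(5e6, q):,.0f}, "
          f"step ${step_benefit(5e6, q):,.0f}")
# quality 100%: linear $5,000,000, step $5,000,000
# quality 85%:  linear $4,250,000, step $500,000
# quality 70%:  linear $3,500,000, step $500,000
```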
The reason for this is that large software systems tend to be “brittle” systems. (See the recent post on “Brittle Applications.”) With brittle systems, there are a lot of prerequisites that must be met, and if you don’t meet them, the systems work very, very poorly, yielding benefit at a rate far, far below what was expected of them.
This problem is probably easier to understand if we look at how business cases work in an analogous situation. Imagine, for instance, the business case for an apartment building. The expected IRR is based on the rents available at a reasonable occupancy rate. There is, of course, uncertainty, revolving around occupancy rates, rents, maintenance costs, quality of management, etc. But all of this uncertainty is roughly linear. If occupancy goes up, return goes up; if maintenance costs go up, return goes down; and so on. The business case deals with those kinds of risks very effectively, by identifying them and ensuring that adequate cushions are built in.
But what if there were other risks, which the business case ignored? These risks would be associated with things that were absolutely required if the building was to get any return at all. Would a traditional business case work?
Imagine, for instance, that virtually every component of the building you were going to construct–the foundation, the wiring, the roof, the elevators, the permits, the ventilation, and so on–was highly engineered and relatively unreliable, required highly skilled people (not readily available) to install, and fit so precisely with every other part of the system that anything out of tolerance caused the component to shut down. The building would be a brittle system. (There are buildings like this, and they have in fact proven to be enormously challenging.)
In such a case, using a traditional business case is not only misguided; it is very risky. If one of these risky systems doesn’t work–if there’s no roof or no electricity or no elevator or no permit–you don’t just generate somewhat less revenue. You generate none. Cushions and gut feel and figuring that an overrun or two might happen simply lead you to a totally false sense of security. With this kind of risk, it’s entirely possible that no matter how much you spend, you won’t get any benefit.
Now, businessmen are resourceful, and it is possible to develop a business case that correctly assesses the operational risk (the risk that the whole thing won’t work AT ALL). I’ve just never seen one in enterprise applications. (Comments welcome at this point.)
The business cases and business case methodologies that I’ve seen tend to derive from the software vendors themselves or from the large consulting companies. Neither of these is going to want to bring the risk of failure front and center. But even the ones developed by the companies themselves (I’ve seen a couple from GE) run into a similar problem: executives don’t want to acknowledge that there might be failure, either.
But the fact that the risk makes people uncomfortable doesn’t mean that it’s a risk that should be ignored. That’s like ignoring the risk that a piton will come out when you’re mountain climbing.
Is the risk real? Next post.
Brittle Applications
August 31, 2009
In a previous post, I said that MRP was a “brittle” application, and a commenter questioned me. What is a “brittle” application? Is this a technical term? What makes MRP brittle? All good questions.
A brittle application is one that doesn’t work at all unless a lot of disparate conditions are met. MRP, for instance, doesn’t work unless all the data is right, people know how to use the program, the demand for the products is stable, purchasing is also committed to minimizing inventory levels, etc., etc.
The notion applies to a lot of other programs besides MRP, though I’ve rarely heard the term used. But notice that brittleness isn’t so much a feature of the program as of the purpose to which the program is put.
Let’s take a simple example: a word processing program. For normal purposes, a word processing program in this day and age is not brittle. A rank novice can use it to type and print. But even today, if you want to use, say, Microsoft Word to put out a 16-page brochure, complete with illustrations, well, good luck is all I can say. You try to get an illustration to float, change its size, and put in a table, and–well, just try it; it’s a nightmare. So, for putting a 16-page brochure together, Microsoft Word is brittle; for printing out a letter, it’s not.
The point about the MRP program that QAD wrote, which follows the APICS standard religiously, is that it’s brittle relative to the purposes for which it was intended. Pam and Karl and Evan (the founders of QAD) really believed that QAD’s product could do supply chain management for manufacturing facilities very effectively. My point was that the program is too brittle. To get things right, you have to get all the data right and keep it right, etc., etc. And if you don’t, what you have is an overwrought and overcomplicated Kanban system, without Kanban’s virtues.
Are there other enterprise application systems that are brittle? Lots and lots of them, I think. Almost all the old, Siebel-style CRM systems were simply too brittle; they depended too much on the goodwill of the salespeople, the accuracy of the sales model embedded in the system, the reliability of the sales cycle, etc., etc. You wouldn’t think that financial systems are brittle–after all, they have to work–but they often had components that were overly brittle: cash management systems, for instance, and fixed asset systems and budgeting systems.
What do most companies do when they have an overly brittle system? They use the system for lesser purposes. And they feel REALLY bad about it. So, the Microsoft Word user makes a brochure that is far less fancy, but more manageable, and the QAD MRP user uses the product for tracking inventory. And both of them keep on saying, “Well, one of these days, I’ll get around to really making this product sing.”
They shouldn’t. Brittle applications are brittle for a reason. A lot of the time, it’s because they’re really a special-purpose product, and yours is not that purpose. Some of the time, they’re brittle because they’re badly designed. Some of the time, the model they’re using (MRP is a good example) just doesn’t fit the situation you’re in. In any of these cases, the fact remains that they really can sing for the right user, but that doesn’t mean it’s your fault if they don’t sing for you.
What do you do if you have a brittle app that isn’t singing? Give up on it. It won’t work for you. Get another app, one that works. Or change the process. Or just accept the fact that it will never work the way you thought it would.
In any case, good luck.
Is Supply Chain Knowledge Key for ERP Implementations?
August 22, 2009
I used to work at QAD, a small manufacturing software vendor. I subscribe to a QAD chat group, and occasionally people ask questions like the one in the title.
It sounds as if the person asking is peddling something–who knows–but it’s an interesting question nonetheless. What kinds of knowledge are necessary (key) for an ERP implementation? If you run a manufacturing company, is APICS (that is, supply chain) knowledge particularly important?
Certainly, QAD used to think so. When I was an employee, you got a bonus for becoming APICS certified. (APICS is the American Production and Inventory Control Society; to get certified, you had to learn how MRP worked and how inventory should be managed.) And certainly, when the product was designed, the focus was on matching supply and demand. The product was built originally for Karl Lopker’s sandal manufacturing business, and the idea was always to have simple, usable product that managed inventory well.
So you would think that the answer, at least for QAD users, is, “Of course APICS knowledge is key. Duh.” But I don’t think so.
You see, while I was at QAD and then for some years afterward, I looked at a fair number of installations. And what I saw was disheartening, at least if you believed in good supply chain practices. The systems weren’t really using good supply chain practices, at least as APICS defined them.
Let me give you an example, which APICS-trained people will understand immediately. One of the ideas of these systems is to reduce the amount of inventory you have on hand at any one time. To do this inside the system, there are two parameters that you have to set: lead time (the amount of time it takes for an order to be fulfilled) and safety stock (the amount you want to have on hand at all times). The longer the lead time or the higher the safety stock, the greater your inventory expense.
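A back-of-the-envelope sketch shows why these two parameters matter so much. The formulas are the standard textbook ones; the numbers are invented.

```python
# How lead time and safety stock drive inventory expense (toy numbers).

def reorder_point(daily_demand: float, lead_time_days: float, safety_stock: float) -> float:
    """Inventory level at which you must reorder to avoid a stockout."""
    return daily_demand * lead_time_days + safety_stock

def annual_holding_cost(order_qty: float, safety_stock: float,
                        unit_cost: float, holding_rate: float = 0.25) -> float:
    """Average on-hand stock (cycle stock plus safety stock) times carrying cost."""
    return (order_qty / 2 + safety_stock) * unit_cost * holding_rate

# Tripling the lead time more than doubles the reorder point...
print(reorder_point(daily_demand=100, lead_time_days=5, safety_stock=200))   # 700
print(reorder_point(daily_demand=100, lead_time_days=15, safety_stock=200))  # 1700
# ...and all that padding sits on the books as carrying cost.
print(annual_holding_cost(order_qty=1000, safety_stock=200, unit_cost=40))   # 7000.0
```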
So what would you say if you discovered that in not one or even two installations, but many, the safety stock and lead time numbers for most of the inventory were set once, en masse, and then never set again? Well, I’ll tell you what to think. These figures, which are key to making the system work, are not being used.
Now this was not just true of QAD Software; it was equally true wherever I went, no matter what software was installed.
So doesn’t this say that supply chain knowledge is key, after all? If they had supply chain knowledge, wouldn’t they have paid more attention? At first, I thought so. But then after a while, I realized that more supply chain knowledge would have made very little difference.
You see, that’s not why they were using the software. All these companies, it turns out, didn’t really care about getting supply chain stuff right. They managed the supply chain fairly sloppily–tolerated a lot of inaccuracy and suboptimal behavior–and they got along (in their minds) just fine doing that. They didn’t want to put in the kind of care and rigor that is the sine qua non for doing with these systems what they were designed to do.
What were they using the software for? Well, mostly to manage the paperwork virtually. Please don’t cringe, Pam, if you happen to be reading this. This is not a hack on you. The plain fact is that the companies needed to keep track of their commitments (orders), their inventory, and their money, and that’s what they used the system for. They needed a piece of paper that told people what inventory to move that day and where to move it to. And the system gave it to them.
To do this, though, you didn’t need much APICS knowledge or, if you didn’t believe in APICS’s recipes for inventory management, other supply chain knowledge. All you really needed was to be able to count, which most of the users could do without being APICS-certified.
So is supply chain knowledge key for an ERP implementation? Not at all. You can have perfectly happy users who have got exactly the nice simple implementation they need without much supply chain knowledge at all.
This answer, of course, raises lots of questions. What is key? Why do these companies tolerate sloppy supply chain practices? Wouldn’t they be better off if they cleaned up their act? Herewith, brief answers.
What is key? At a rudimentary level, the financials. You have to get the basics right, here, or you’ll never close your books. In a system studied recently by a grad student at Harvard Business School, 65% of the inventory records were inaccurate. Can you imagine the upset if 65% of your account balances were incorrect?
Why do they tolerate sloppy supply chain practices? I think it’s largely because more finely tuned systems are much more brittle. They take a large amount of care and feeding and their ability to take hard, rude, unexpected shocks is limited.
And wouldn’t they do much better using the systems? In many cases, no. You see, at most of the companies I’ve run into, the MRP/APICS model that QAD (and every other software vendor) provided is not actually all that accurate. To make a really significant difference, you need more sophisticated tools that are better suited to the specifics of your supply chain.
Comments welcome.
Delivering Sourcing Optimization
August 6, 2009
It’s a plain fact that there are certain kinds of math problems that even bright people don’t solve very well and computers solve better than we do. Some of these look surprisingly simple. If you’re a retailer, when should you mark down product, and by how much? Or if you’ve got to make six deliveries, should you use one truck and make six stops, or two trucks with three stops each, and so on?
Entrepreneurs in the world of enterprise software have recognized this fact for a long, long time and have seen an opportunity there. They develop so-called optimization software, which solves an essentially mathematical problem, and then provide it to the market in various forms. A few have provided only the engine itself (most of these ended up at ILOG), and many more have provided optimization solutions that address specific problems, like supply chain optimization, markdown optimization, or the subject of this post, sourcing optimization.
Now if there’s one thing that’s completely clear about optimization software, it is that the track record has been disappointing. I’m not just talking about the overpromising and underdelivering that afflicted the big-name supply chain optimization vendors. (No names, but you know who you are.) I’m talking about really wonderful products, like Joe Shamir’s ToolsGroup, which has struggled to put its tools in the hands of the people who need them.
Sourcing is no exception to the general rule. In sourcing, optimization came to the fore when companies like FreeMarkets or Digital Freight wanted to set up sourcing “events,” where possibly hundreds of bidders could bid on thousands of line items, offering discounts for bundles of selected lines. What if A bids X for lines 1, 2, and 3, and B bids Y for lines 2, 3, and 4, etc.? This is one of those math problems that linear programming solutions solve better than human beings do.
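To see why humans struggle with this, here is a tiny toy version of that winner-determination problem (all bids invented). Real events use integer-programming solvers; at this size, brute force is enough to show the idea.

```python
# Pick the cheapest set of bundle bids that covers every line item (toy data).

from itertools import combinations

# Each bid: (bidder, line items covered, price for the whole bundle).
bids = [
    ("A", frozenset({1, 2, 3}), 100),
    ("B", frozenset({2, 3, 4}), 90),
    ("C", frozenset({1}), 35),
    ("D", frozenset({4}), 30),
    ("E", frozenset({1, 4}), 70),
]
lines_needed = {1, 2, 3, 4}

def cheapest_cover(bids, lines_needed):
    best = None
    for r in range(1, len(bids) + 1):
        for combo in combinations(bids, r):
            covered = set().union(*(lines for _, lines, _ in combo))
            if covered >= lines_needed:
                cost = sum(price for _, _, price in combo)
                if best is None or cost < best[0]:
                    best = (cost, [bidder for bidder, _, _ in combo])
    return best

print(cheapest_cover(bids, lines_needed))  # (125, ['B', 'C']): not obvious at a glance
```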
A number of vendors still in the marketplace (you know who you are) proceeded to develop solutions for setting up and managing these events, solutions which of course did a lot more than just figure out which bids to accept. And to some extent, they worked; they did change the way many companies do business, and sourcing events are now fairly normal, with several large companies being big proponents.
But in my (admittedly somewhat limited) view, the optimization piece has been relatively unimportant. Certainly some companies have developed some good optimization algorithms, and they are used, but in the events I’ve seen or heard about, it’s not clear to me that the solutions they offered were actually all that optimal, and the reaction of the buyers I know to those solutions has been tepid, shall we say.
I’m not surprised, as I say, because I’ve seen similar reactions from users of other optimization software.
Now I have a degree in math, and I’ve taken a course in linear programming. I know that the solutions they give you are better than what people can come up with by themselves. So are the people who don’t like these solutions just grousing? That’s certainly what most vendors think.
Well, the other day, I got a glimmer of the answer from some terrific research done by Vishal Gaur of Cornell University. (See my earlier post, “An Automated Replenishment System.”) He found in his case study that a solution produced by optimization software had a highly suboptimal operational effect because it was optimizing around the wrong things. The math was done perfectly, but the problem had been set up incorrectly.
This is always a problem in principle when you set optimization software chugging away. The software draws a box around the problem and finds the point inside the box with the best available solution. If you draw a different box (usually a larger one), the solution will be different, and will take longer to find.
In Professor Gaur’s case, the software developer had drawn the wrong box around the problem. Now what’s interesting here is that the software developer never knew he’d done anything wrong. He delivered the software, and the IT people who were using it expressed satisfaction. And you might think, “Well, so what? Even if it drew slightly the wrong box, it got roughly the right answer.” But in fact, when you change the box, you change the answer completely. So in this case, the users had to replace (manually) all the solutions the software offered with better and completely different solutions.
If you have optimization software, in other words, you have to draw the right box, or the answers it gives you are not better than what you could come up with yourself; they are worse, often much, much worse.
Now look at how this applies to sourcing optimization. Let’s say that you set up the problem in one way. (You only allow bids from certain vendors, for instance, and you require that they offer discounts of a certain percentage.) If you draw the box differently, you will come up with quite different answers.
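Here is a minimal sketch of that effect, with invented data: the same objective, maximized inside two different “boxes” (constraint sets), returns completely different answers.

```python
# The "box" effect: same objective, different constraint sets, different optimum.
# All vendors, numbers, and rules are invented.

def best_plan(candidates, constraints):
    """Return the feasible candidate with the highest savings, or None."""
    feasible = [c for c in candidates if all(rule(c) for rule in constraints)]
    return max(feasible, key=lambda c: c["savings"], default=None)

candidates = [
    {"vendor": "X", "savings": 120, "incumbent": True},
    {"vendor": "Y", "savings": 200, "incumbent": False},
    {"vendor": "Z", "savings": 260, "incumbent": False},
]

narrow_box = [lambda c: c["incumbent"]]    # only incumbent vendors allowed
wider_box = [lambda c: c["savings"] > 0]   # any vendor with positive savings

print(best_plan(candidates, narrow_box))   # vendor X: "optimal" inside the narrow box
print(best_plan(candidates, wider_box))    # vendor Z: different box, different answer
```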
I was discussing this problem with Garry Mansell, president of TradeExtensions, a small sourcing optimization vendor. He argues that the only way of solving this problem is to provide a lot of services, along with the software. The experts that he provides (it just so happens, of course, that he’s in the business of providing this pairing) can keep the sourcing on track and make sure the right box is drawn.
Now I react to most honeyed words from vendors the way I react to offers of honey for my tea. (Hint: “No, thank you.”) But this struck me as roughly right. The problems that I’ve seen with implementations of myriad kinds of optimization software wouldn’t be solved, then, by fixing the software. It would be solved by helping people to make sure that they’re drawing the right box around the problem.
So who knows. Maybe Mansell is right. (Or maybe he’s just selling software. Or both.) But it seems to me a useful thing to keep in mind. If you’re looking at sourcing optimization software (or, more generally, any optimization software), don’t buy it and don’t use it unless you have some serious experts who can help you make sure that you draw the right box.