The SUGEN Numbers

January 20, 2010

I argued in the previous post that SAP’s new, “two-tier” pricing system for maintenance offers customers less choice than meets the eye, and commentators like Dennis Howlett agree.

So why did they bother? If one offering is “good support for a fair and reliable price” and the other offering is “less good support for roughly the same price (only no one will really know for six years)” why would anyone pick the latter? And why would SAP risk a public relations nightmare when the people who pick the apparently lower-cost alternative find that they’ve been snookered?

Is it just that SAP needs to offer the enterprise application version of “small coffee,” the coffee size that nobody ever orders, but you need on the menu, so people will order medium?

The question is particularly salient because SAP has data that, one could at least argue, shows that Enterprise Support really is better.

This data comes out of a program embarked on last summer, sponsored by SUGEN (SAP User Group Executive Network) and SAP. In this program, companies were put on Enterprise Support, and the benefits thereof were measured against 11 benchmarks, with the sum of those benefits rolled up into something called the SUGEN KPI Benchmark Index. SAP had vowed not to raise the cost of Enterprise Support until this program showed a gain in the Index that justified the increased cost of Enterprise Support.

SAP reported the results at the Influencer Summit last December, which I attended. According to the numbers they showed me, the SUGEN KPI benchmarks had indeed been achieved.

These numbers were disclosed to me on the condition that I not repeat them until the full results were published, and while I’ve been given informal permission to speak about them, I will try to respect this request.

I think, though, that I can convey a fairly accurate idea of what is going on without actually citing the numbers.

Before I can get to this, though, I need to explain something about the program and the expectations that people had for it. When SAP first announced a new, improved class of support at an increased price, which all customers were required to use, many customers thought that this was just price-gouging. They didn’t believe that SAP’s across-the-board price increase for maintenance would be accompanied by any benefits. When SAP started hearing from these customers, they were clearly taken aback, since the executives in charge of this new support program clearly did (and do) believe in its benefits.

So SAP (and SUGEN, the customers’ self-appointed representatives) agreed to put the question to an empirical test.

Now anybody reading the press release about this program or anybody attending the journalist session at last May’s Sapphire (as I did) would believe that this test would be done along traditional social science lines. A representative sample of the SAP customer base would be given the opportunity to take advantage of Enterprise Support, and the benefits would be measured.

After attending that session, I told my client base (people who are professionally interested in tracking what SAP does) that it was impossible for this program to show so much benefit that it would justify the across-the-board increase. The reason was simple. To get the available benefit from Enterprise Support, a customer must get and install a software product called the Solution Manager and must then do a lot of process documentation and process modification. A representative sample of SAP customers simply wouldn’t include very many customers who had done all this installation, documentation, and change, because the total amount of work was considerable, and most customers weren’t going to do it, at least not any time soon.

Isn’t it sort of squirrelly, expecting Enterprise Support customers to get software, install it, and then do a fair amount of work before they get the benefit that SAP promised them? Well, yes, but it isn’t quite as squirrelly as it sounds.

At the Influencer Summit, Uwe Hommel, the person behind this idea, expressed it roughly this way. A lot of customers don’t really run support as well as they could. The Solution Manager provides them with a framework for the practices that they should be using, plus it enables SAP support personnel to give better, more accurate, and faster help, because the Solution Manager gives them better information about what is going on at a customer site. It would be nice if SAP could wave a magic wand and improve support without any effort from the customers. But that just can’t happen. All SAP can do is provide a framework.

As far as Hommel is concerned, what SAP is saying is roughly what the trainer at the gym offers. “We’ll make you better, bigger, stronger, and leaner, but of course you have to do your share.”

Fair enough, of course. But that’s not actually what SAP said. SAP actually said something more like, “You need to do support better, and to do it better, you’ll need a trainer and you’ll have to put in some effort, but oh, by the way, you have to pay for the trainer whether or not you actually get around to going to the gym.”

Perhaps the oddest thing about the test that SUGEN and SAP ran is that both parties pretended that SAP had said the first thing and not the second.

You see this, for instance, in the way they [SUGEN according to Myers, below] chose the subjects for the test. Rather than choosing the representative sample of the customer base that I was told they would choose, they asked for volunteers to apply for the program. 140 customers did apply; of those, only 56 were chosen for testing. This, of course, simply guaranteed that the test would not prove what SAP wanted it to prove, that the price increase was justified. At best, it would prove that those customers who decided to go to this metaphorical trainer would get some benefit from it.

So, did they get some benefit?

Well, um, uh, sort of.

As I said above, SAP and SUGEN agreed before the test that there were 11 areas where benefit might be provided. The areas ranged from the obvious things–fewer outages, faster problem resolution, and fewer problems–to the less obvious, but still important things, like more efficient CPU utilization and better use of disk storage.

In the actual test, the benefits of Enterprise Support were measured in only 6 of these 11 areas. The areas chosen had to do with total cost of operations (use of CPU and storage), the cost-effectiveness of managing patches, and the extent to which customers used SAP’s current software effectively. Clearly, this made things harder for SAP: it was trying to prove benefit, but the benefit was actually measured in fewer than half the areas where benefit might be available.

Nevertheless, SAP thinks that it succeeded, and technically it did. They measured benefit by giving the SUGEN Index an arbitrary starting value of 100. As I understood it at the session, the aim was to show that the increased benefits at least offset the roughly 7.6% price increase in 2009. [According to Myers, below, the actual aim was 4%].

Both aims (what I thought was the aim and what Myers said was the aim) were actually achieved. The benchmark index dropped by 6.89 percentage points. Even though only 6 benchmark areas were measured, the benefit achieved did offset the 7.6% increase (at least within 1 percentage point).

There is, however, a little, tiny, “but.”

All the benefit was achieved by massive improvements in only two of the six areas: storage utilization and number of failed changes. (A “failed change” is an attempt to install a patch that fails.) In all the other areas that were measured, the average improvement was very small.
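To see how a couple of outsized gains can carry an averaged index, here is a toy calculation. It assumes the index is a simple average of the percentage improvements in the six measured areas, which is only my guess at the mechanics, and the individual figures are invented for illustration, not SAP’s actual benchmark data:

```python
# Toy illustration with invented numbers: an index that starts at 100
# and averages the percentage improvement across six benchmark areas.
# Two areas improve dramatically; the other four barely move.
improvements = {
    "storage utilization":  0.18,   # hypothetical large one-time gain
    "failed changes":       0.20,   # hypothetical large one-time gain
    "CPU utilization":      0.01,
    "patch management":     0.01,
    "outages":              0.005,
    "currency of software": 0.005,
}

average = sum(improvements.values()) / len(improvements)
print(f"Index: 100 -> {100 * (1 - average):.2f}")  # about a 7-point drop
```

Two big numbers and four small ones are enough to move the whole index by about 7 points, which is the general shape of what was reported.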

Both of these measures appear to me to be one-time-only improvements. Take, for instance, storage utilization. If you have one of those awful Windows machines and your disk is sluggish, you can run a utility that compacts your disk and frees up disk space. You’ll show massive improvement in storage utilization. But running this utility once is not the sort of thing that justifies a permanent yearly increase in maintenance costs. Yes, you can do it next year, but it won’t show the same level of improvement, because you gained most of the benefit the first time you did it. The same thing goes for reducing failed changes. Changes in process (and use of the Solution Manager, or Sol Man) can reduce this number a lot. But once you’ve made the changes, further reductions aren’t really available.

I certainly hope that somebody from SAP is reading this; if you are, you’re probably upset, because you’re saying to yourself, “Well, the benefit is permanent; for the rest of time, people will have fewer failed changes and use less disk space.” [Myers does in fact argue this. See below.] You are, of course, right. But you’ll have overlooked the larger question: does helping people to a one-time improvement justify a permanent, yearly price increase? That is hard for me to see. If Enterprise Support promised to bring in these kinds of improvements regularly, then it would be OK to pay more for it regularly. But this test doesn’t show that these regular improvements will be forthcoming.

In any case, it’s all moot now. SAP has scrapped the SUGEN benchmark process. In a way, it’s a shame. This is one of the few times that any enterprise application company has ever tried to run a systematic test of whether its software and services work as advertised. And the results of this test are very interesting. In some areas, the software and services don’t seem to work; the benefits are minimal. In other areas, though, they work very well indeed; the benefits are startling. Who woulda thunk it?

At the very least, shouldn’t SAP keep going with this, so it can go back and fine-tune its software, figure out why some benefits aren’t forthcoming and do something about it?

Apparently not.

SAP announced yesterday that it was creating a two-tier support system, effectively reinstating its old Standard Support offering at a slightly increased price. (The new price is 18% of net versus the old 17% of net.)

This has been hailed as a U-Turn by press and analysts, all of which proves something to me: most writers can’t do math.

SAP begins its press release as follows (emphasis mine):

In a demonstration of its commitment to customer satisfaction, SAP AG (NYSE: SAP) today announced a new, comprehensive tiered support model that is being offered to customers worldwide. This support offering includes SAP Enterprise Support services and the SAP® Standard Support option and will enable all customers to choose the option that best meets their requirements.

So let’s look at the choice that’s being offered to customers; after you look at it, you can judge how much satisfaction it’s going to generate.

The cost of Enterprise Support this year is 18.36% of a base number, a number that usually stems from (but may not be identical to) the net amount paid for the SAP licenses that are being supported. So, this year, assuming that the base for a company was $100,000, the total cost of Enterprise Support is $18,360 and the total cost of Standard Support is $18,000.

Next year, the cost of Enterprise Support goes up to 18.9%, increasing to 22% by 2016. That means that in 2011, it is $18,900, and in 2016, it is $22,000. To those of you who are writers: I apologize for all these numbers; I know they get confusing.

Now to Standard Support. With Standard Support, the percentage is fixed, but the base is not: it is subject to cost-of-living increases. We don’t know what COLA (cost-of-living adjustment) SAP will impose, but let’s say for the sake of argument that it is 3%/year. In 2011, the cost will be $18,540, and in 2016, it will be $21,493.

All this is in a spreadsheet which you are welcome to look at and play with. (In the spreadsheet, I rounded 18.36% down, so the Enterprise Support costs are slightly low.) Assuming you’re not a writer and you want to play with the numbers, here’s what you’ll see. If the COLA is 3%/year, the costs of either kind of support will be very close for a long time to come. If the COLA is 1%, Standard Support will be quite a bit cheaper. And if it is 5%, Standard Support will be quite a bit more expensive.

It’s confusing, I know, but it’s true. If the COLA is 5%, then “18%” support will cost more than “22%” support. If the COLA is 3%, then “18%” support will cost about 2% less than “22%” support. And if the COLA is 1%, then the lower tier of support will cost about 10% less than the upper tier.
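For those who would rather check the arithmetic in code than in the spreadsheet, here is a minimal sketch. It uses the figures quoted above (a $100,000 base, Enterprise Support rising to 22% of that base by 2016, Standard Support fixed at 18% of a COLA-adjusted base) and treats the COLA as a constant yearly rate, which is of course an assumption:

```python
# Sketch of the Enterprise vs. Standard Support comparison, using the
# figures quoted above. A constant yearly COLA is an assumption.
BASE = 100_000  # net license base, in dollars

def enterprise_support(pct):
    """Enterprise Support: fixed base, rising percentage."""
    return BASE * pct

def standard_support(cola, years):
    """Standard Support: fixed 18%, base compounded by COLA."""
    return BASE * 0.18 * (1 + cola) ** years

for cola in (0.01, 0.03, 0.05):
    std = standard_support(cola, 6)   # six increases, 2010 through 2016
    ent = enterprise_support(0.22)    # 22% in 2016
    print(f"COLA {cola:.0%}: Standard 2016 = ${std:,.0f}, "
          f"Enterprise 2016 = ${ent:,.0f}")
```

Run it and you get the three cases just described: cheaper at 1%, nearly a wash at 3%, more expensive at 5%.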

So what will it be? 5%? 3%? 1%? 0%? At the press conference, SAP didn’t say. There is no commitment to impose these increases and there is no commitment not to impose them. SAP, according to Léo Apotheker, “[has] the liberty of linking Standard Support [] to the cost of living index.” (Thanks, Information Week.) Whatever their decision, the imposition of COLA will not be uniform. The cost of living index is the index for the country whose currency is the master currency for the contract, and the actual linkage to this actual index depends on the contract language, which varies.

So what are we to think? Whatever SAP is doing, it is not a U-Turn, and it is not a rescission of the price increase. It is offering customers a new choice, which I’ll characterize as follows:

1. Return to Standard Support and get less support than you got with Enterprise Support (though how much less is unclear) and price increases in the form of COLA (though how much increase is unclear).

2. Stay with/sign up for Enterprise Support and get more support (how much more is unclear, but I’ve been posting on this and will post more) and definite, clear price increases in the form of increases in the maintenance percentage.

It’s a choice. But is it really the sort of choice customers want?

And is offering this choice really an example of a commitment to customer satisfaction?

Over the past few years, a cadre of “independent” analysts has set up shop and started to speak frankly about the enterprise application vendors, in their blogs and tweets. You know who I’m talking about: Vinnie, of course, and Dennis, and the Enterprise Irregulars, and Brian, and many, many others. These people were really good analysts to begin with–I’ve known them for years–and they have found their more-or-less independent status freeing, so they write the best stuff that is out there.

So if they’ve got a better mousetrap, why is it that the big guys, Forrester and Gartner, just seem to roll on and on, happily enough? Why haven’t they folded, the way the portable CD player did when the iPod came out? In the free market, after all, shouldn’t consumers pick the best quality at the lowest prices?

I got an interesting answer, yesterday, when I attended a talk at Harvard by Marc Flandreau, who is at the Graduate Institute of Development and International Studies, Geneva. Marc is an expert on bad-mouthing, or as we like to say in English, “blackmail.” And he has a fascinating historical explanation of how pay-to-play can emerge in information markets.

Marc’s focus is the wild and woolly bond market in Paris pre-World War I, a market that was deeply affected by the emergence of a free (or at least libel-free) press in France, post 1880. At the time, it was so cheap and easy to start and print a newspaper that a new kind of blackmail emerged. It was, essentially, “Pay us, or we’ll say bad things about you.” The very relaxed libel laws of the time made this a genuine threat, and people (Marc shows) really did make money doing it.

In the financial markets, the threat took the form, “Mr. Russian Government, pay us, or we’ll publish an article saying that you’re losing the then-active Russo-Japanese war.” And, as it turns out, the Russian Government paid up. The records, which were published in the 1930s, show that the government’s expenditure on publicité went up by a factor of two or more during that period, over what would “normally” be expected.

The interesting thing, though, is where the money went. Essentially, it went to a set of what we would now call unscrupulous PR men (possibly a redundancy, I admit), who took the blackmail money and distributed it among the press.

Now, here is the rub. Most of the money apparently went to the most reputable, most stable, and most expensive financial journals, not to the blackmailers. What these PR men tried to do with the bribe money was make blackmail expensive, by “supporting” an alternate, established, reputable forum that people would look to for authoritative information. The existence of this forum brought the threat of blackmail from the cheap-sheet vendors down to acceptable levels.

Flandreau demonstrates fairly convincingly that while some money did go to throw-away (sometimes one-issue) newspapers, most of the money went to those journals and was a significant source of income for them.

“So if I may paraphrase,” a Harvard professor said, after hearing this, “The National Enquirer is one of the things that keeps The New York Times alive.” Marc replied in the affirmative.

Marc’s broad conclusion is that a pay-to-play industry will emerge whenever there is a significant threat from “badmouthing.” (He cites Moody’s as a modern-day example of the same phenomenon.) In all these cases (I think movie stars of the 1920s are another example), the best strategy for coping with badmouthing is to support cooperative, but reputable mouthpieces who will then be a permanent counter to whatever bad things are said by the smaller, less reputable people. In his analysis, the accuracy of what these smaller, less reputable people say is irrelevant; it could be true, it could be false. What matters is that you can exert some control over the best people in the industry.

Anybody who has ever taken a PR class already knows this, of course. But what Flandreau contributes are two simple but odd facts: the premiums are in fact very large, and MOST of the money goes to the larger, more reputable firms.

So what does this mean for Dennis and Vinnie and Brian and Michael Krigsman and Helmuth Gümbel? Well, pretty much it means that their efforts are enriching Gartner and Forrester far more than they are enriching themselves.

Dennis says in a recent tweet, “Pay to play doesn’t cut it.” Sorry Dennis, in this case you’re just wrong. If Marc is right (and I have no reason to think he isn’t), what you’re really doing is supporting the pay-to-play industry.

If you were one of SAP’s biggest customers and you found out that SAP was giving big discounts to another big customer, pretty much because they asked for it, what would you do?

Assuming you have at least a room-temperature IQ, that is.

Wait a minute. Let’s be democratic. If you were one of Oracle’s biggest customers and you found out that SAP was discounting maintenance for the asking, what would you do? I mean, you’re an Oracle customer, you definitely have a room-temperature IQ.

Still not sure what to do? Here’s a hint. The phone number at SAP headquarters is 49/6227/7-47474. At Oracle, it’s +1.650.506.7000.

“Wait a minute, wait a minute, wait a minute,” I hear you saying. “SAP didn’t start handing out discounts, did they? They raised maintenance prices; they didn’t lower them?”

Perhaps. But let’s try to apply that IQ of yours.

As I’m sure you know, it’s been bruited about in the media that Siemens was seriously considering the possibility of dropping its maintenance contract with SAP, starting January 1 of this year. Their plan was to have a third party provide maintenance, possibly either IBM or Rimini Street. (For a representative summary of the situation, as reported in the press, see this Market Watch report.)

So what happened? As all of you big SAP customers realize, Siemens had to make a decision by September 30. Well, here’s what we know. About a week ago, SAP issued a press release, saying that Siemens had in fact re-upped its maintenance contract for three years.

Case closed, right? SAP doesn’t ordinarily announce maintenance renewals, but the underlying tone was, “Well, we’ve read the stuff in the press, too, so let’s deal with those scurrilous rumors, and issue a press release. After all, Siemens didn’t just come back. They bought more.” End of story?

Maybe. But you’ll notice that the press release doesn’t actually say anything about how much they paid for the maintenance. Indeed, there’s a funny little line about, “based on SAP’s maintenance standards for large customers,” which seems to demand some explanation.

So, let’s pursue it a little further. Is there any further information anywhere about what Siemens actually paid? About the same time as the press release, a post appeared on the Sapience blog. The post said that Siemens had been paying 30 million euros pre-deal and was now paying 18 million euros, plus getting some other concessions. If you value the concessions at zero, that is a roughly 40% discount.

Sapience is written by Helmuth Gümbel, an industry analyst who has been following SAP for longer than I’ve been in the business. Helmuth is not a disinterested party here; he offers consulting on how to pay less in maintenance. But he’s also a well-respected figure, a person who doesn’t just say whatever he feels like saying, true or not.

[Full disclosure: Helmuth is also a person I regard as my friend, someone whom I see socially on the rare occasions when he’s in town.]

So what is one supposed to believe? On the one hand, you can say, “Why believe an isolated blogger, especially when he has an axe to grind?” Then, you assume that the press release is giving you basically the right idea about what happened. On the other hand, you can say, “Where there’s smoke, there’s fire,” and assume that Helmuth (and the Enterprise Advocates, a group that discussed the Siemens situation in its recent webcast) have to have roughly the right version of the truth.

Helmuth isn’t the only source of smoke, here. Kash Rangan, an investment analyst at Bank of America/Merrill, estimated recently that 20-25% of customers get discounts on their maintenance payments. (The relevant figures are reproduced on the dealarchitect blog.)

No full disclosure required here. I have only a nodding acquaintance with Kash.

Of course, all this can be pretty muddy. American accounting rules tend to make it difficult for companies to give direct discounts on maintenance; basically, if a maintenance agreement is part of the initial license contract and the stated price of maintenance isn’t supported by objective evidence, companies are supposed to recognize the license revenue ratably, not all at once. If you give discounts, then your ability to demonstrate that the stated price is supported by objective evidence is called into question. So, contra Kash and Helmuth, you could argue that SAP can’t be giving out discounts, because doing so would screw up its reporting.
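To make the accounting point concrete, here is a toy illustration with invented numbers. It sketches the general shape of the rule (the sort of thing discussed under the heading of vendor-specific objective evidence), not the actual text of any accounting standard:

```python
# Invented numbers: why discounting maintenance threatens license
# accounting. If the stated maintenance price isn't supported by
# objective evidence, the license fee can't be recognized up front;
# it has to be spread ratably over the contract term instead.
license_fee = 1_000_000
contract_years = 5

print(f"Evidence holds: ${license_fee:,.0f} of license revenue in year 1")
print(f"Evidence fails: ${license_fee / contract_years:,.0f} per year "
      f"for {contract_years} years")
```

Same cash, very different income statement, which is why vendors are so careful about how any concession is structured.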

But of course there are ways of discounting maintenance without actually charging less than the stated price. There is, for instance, a long, long history in the software business of handing out free seats, instead of cash, when customers are unhappy. (Both Helmuth and a cynical reading of the press release suggest that something of the sort may be going on here.) If pressed, vendors have also been known to reduce the basis for the maintenance charge and also to fiddle around with start and stop dates. I’m not saying that’s going on here–I don’t know–but I’ve been told by reliable sources that it has been done, at least by some companies.

If that were the case here, then Helmuth’s way of characterizing it is really the only sensible way to figure out what’s going on. You look at your outflow before. Then you look at your outflow afterward. The difference gives you a gauge of what the discount is.

So did Siemens get a discount? The plain fact is that we don’t know for sure, even with all that IQ, and won’t know unless SAP and Siemens agree to tell us, and even then we won’t know, because the one thing we can be sure of is that SAP will present the case in a way most favorable to them.

So, if we don’t know for sure, and yet it seems possible that in fact SAP is giving discounts, what should you do? Well, I have a suggestion. It’s 49/6227/7-47474. See what they say.

But don’t hide it. Post what they say right here. If SAP really is holding the line on discounts, well and good. But if they’re giving them out, don’t you think it’s time for you to get in line?

With Siemens asking for and apparently getting a huge break on SAP maintenance costs, it is time once again to take a look at the whole issue. Understandably, there is a lot of emotion and name-calling and confusion around this; it’s not quite the health-care debate here in the United States, but the real issues have been buried under rhetoric in a strikingly similar way.

SAP Charges for Improved Support

Let me first state SAP’s position as sympathetically as possible. SAP believes (quite correctly) that the customer’s TCO (total cost of ownership) ought to go down, as software and hardware gets better and cheaper. It also believes that if it does something that would help TCO go down, it should get a share of the benefits.

Who would disagree with either point?

Roughly two years ago, therefore, it introduced a series of software and support improvements that it believed would indeed reduce TCO. These improvements largely revolved around a newly improved version of the Solution Manager, a piece of software that is supposed to do what its name implies.

The Solution Manager (or Sol Man, as it is called familiarly) was actually introduced roughly ten years ago. In its original version, it was a separate piece of software (one that ran on its own Windows box) that one used to communicate with SAP support (filing bug reports, etc.) and to monitor the performance of your SAP installation.

In the version introduced two years ago, the Sol Man EE (or enterprise edition), it did considerably more: it allowed you to do more extensive monitoring, test and manage upgrades, and even document your business processes. This new edition also beefed up the connectivity with SAP Support, so that support could use it to troubleshoot your installation more rapidly and effectively.

When SAP introduced its new, higher-priced Enterprise Support package, it placed the Solution Manager EE front and center. In every speech and every press release, it wasn’t, “We’re raising prices because Oracle got away with it.” It was, “We’ve developed new tools and support services based on those tools, and we’re increasing the cost of maintenance, because the maintenance has improved.”

The tools it was referring to were the various components of the Sol Man EE, and the improved support services were made possible by and delivered through the Sol Man.

To sum up, SAP’s position is that it is improving enterprise support by providing customers with new and better support tools. It is in the software business. So it is only reasonable for it to charge for those tools. That it charges via a maintenance price increase rather than by charging for the product itself is reasonable, presumably, because many of the benefits involve improved services, which are provided through the maintenance contract.

The Customer Reaction

It’s just a plain fact, of course, that nobody paid much attention to this. It took me, for instance, almost a year (until John Krakowski’s excellent presentation at ASUG last May) to figure out what SAP was getting at when it talked about new tools. (Before that, I wrongly, but honestly believed that the talk about tools was pure hand-waving.)

Other commentators on this, like Vinnie Mirchandani or Ray Wang or Dennis Howlett, may have gotten to a proper understanding of the argument faster than I did, but for the most part, they didn’t try to address its merits.

Today, for instance, the Enterprise Advocates gave a webinar on Reducing SAP Maintenance Costs. (The Enterprise Advocates include the aforementioned three, plus Frank Scavo and Oliver Marks.) Not once during the main body of the talk did they even mention SAP’s recommendation for reducing maintenance support costs, which is to implement the Solution Manager and use it.

I don’t blame the Advocates for this; it’s not their job. But I do blame SAP. If the Sol Man is what justifies the price increase, then SAP needs to explain this in clear language.

When SAP fails to do this, the Advocates and the SAP customer base are entitled to do what you and I would do when somebody offers an unclear explanation for something that seems to require one: dismiss the explanation that’s offered.

At some point, though, it does seem that someone should give SAP the benefit of the doubt and ask the question that SAP wants you to ask, namely, “Can the Sol Man EE deliver so much benefit that it justifies the maintenance price increase?”

If the answer is, “Yes,” that would of course be the best thing all around. Customers would have a clear path to reducing maintenance costs. SAP would have a product that keeps its customers paying maintenance. Total cost of ownership would go down.

And the Answer Is…?

Over the past four months, I’ve spent a fair amount of time finding out what I could about the Sol Man. I don’t have access to the documentation (all 1000 pages of it), but I do have the more public documents that SAP has issued, and I have talked to a number of Sol Man users and consultants.

What I found out is complicated, and this post is already too long, so I’ll save the full report for another post. But here’s the answer in a nutshell:

1. To get the benefits of the Sol Man absolutely requires significant investment on the part of the customer.

2. It does not appear that the Sol Man was designed with SAP’s current goal for it top of mind. It appears to be a product that was designed to be one thing and is now being turned to a different purpose.

3. The areas of benefit that the Sol Man promises are indeed important, and it is at least possible that customers can get significant benefit from it, if they put in the work.

4. At the end of the day, though, it appears to this humble observer that SAP needs to put more skin in the game.

Hope all this whets your appetite for the next post on the subject.

“Brittle” design isn’t limited to enterprise application software. You can find brittle design in cars, bridges, buildings, TVs, or even the vegetable bin. (What are those 1/2-pint boxes of $5.00 raspberries, 3/4 of which are moldy, but examples of brittle design?)

What do brittle designs have in common? The designer chose to accentuate high performance at the expense of other reasonable design parameters, like cost, reliability, usability, etc. A Ferrari goes very, very fast, and it feels good when it goes fast, and that’s a design choice. And it’s part of the design choice that the car requires a technician or two to keep it going fast for longer than an afternoon.

So why are most enterprise applications brittle? You can see this coming a mile away. It was a design choice. The enterprise applications in question were designed to be the Ferraris of their particular class of application. They were designed to do the most, have the most functionality, be the most strategic, appeal to the most advanced early adopters, be the most highly differentiated, etc.

To get Ferrari-like performance, they had to make the same design choices Ferrari did. They had to assume the application was perfectly tuned every time the key was turned, and they had to assume that the technicians were there to perform the tuning.

Enterprise applications, you see, were intended to run on the best, the highest-end machines (for their class). They were intended to be set up by experts. They were intended to be maintained by people who had the resources to do what was necessary. They were intended to satisfy the demands of good early-adopter customers who put a lot of pressure on them, with complex pricing schemes or intricate accounting, even if later on, it made setting the thing up complex, increased the chance that there were bugs, and made later upgrades expensive.

This wasn’t bad design; it was good design, especially from the marketing point of view. The applications that put the most pressure on every other design parameter got the highest ratings, attracted the earliest early adopters, recruited the most capable (and highest-cost) implementers, etc., etc. So they won in the marketplace and beat out other applications in their class that made different design choices.

I lived through this when I worked at QAD. At QAD, the founder (Pam Lopker) made different design choices. She built a simple app, one that was pretty easy to understand and pretty easy to set up and did the basics. And for about two years, shortly after I got there, she had the leading application in the marketplace. And then SAP and JDE and PeopleSoft came in and cleaned our clock with applications that promised to do more.

Now, none of the people who actually bought SAP instead of QAD back then or chose (later) to try to replace QAD with SAP did this because they really wanted high performance, per se. They wanted “value” and “flexibility” and “return on investment” and “marketable skills.” They literally didn’t realize that the value and flexibility came at a cost: the application was brittle, and therefore the value or flexibility or whatever was only achievable if you did everything right.

If they had realized this, would they have made different choices?

I don’t know. I remember a company that made kilns in Pittsburgh that had been using QAD for many years. The company had been taken over by a European company that used SAP, and the CIO had been sent over from Germany to replace the QAD system with the one that was (admittedly) more powerful. He called me in (years after I had worked at QAD) to help him justify the project.

I looked at it pretty carefully, and I shook my head. Admittedly, the QAD product didn’t do what he wanted. But I didn’t like the fit with SAP. I was worried that the product designed for German kiln production just wasn’t going to work. I didn’t want to be right, so I was disappointed to find out that two years later, despite very disciplined and careful efforts, he was back in Germany and QAD was still running the company.

I’m glossing over a lot, of course. There are secondary effects. Very often, the first user of an application dominates its development, so the app will be tuned to that user’s strengths and weaknesses. It will turn out to be brittle for other users, who don’t have the same strengths. Stuff that was easy for the first user then turns out to be hard for others.

Two final points need to be made. First, when a brittle application works, it’s GREAT. It can make a huge difference to the user. Brian Sommer frequently points out that the first users of an application adopt it for the strategic benefit, but later users don’t. He thinks it’s because the benefit gets commoditized. But I think it’s at least partially because the first users are often the best equipped to get the strategic benefit, whereas later users are not. I think you see something of the same issue, too, in many of Vinnie Mirchandani’s comments about the value that vendors deliver (or don’t deliver).

Second, as to the cause of failure. Michael Krigsman often correctly says that projects are a three-legged stool and that the vendors are often blamed for errors that could just as easily be blamed on the customers. Dennis Moore often voices similar thoughts. With brittle systems, of course, they’re quite right; the failure point can come anywhere. But when they say this, I think they may be overlooking how much the underlying design has contributed.

It may be the technician’s fault that he dropped the beaker of nitroglycerin. But whose brilliant idea was it to move nitroglycerin around in a beaker?

In the last few posts, I’ve been talking about “brittle” applications, applications that just don’t work unless everything goes right. We all know lots of analogues for these apps in other areas: the Ferrari that coughs and chokes if it’s not tuned once a week or the soufflé that falls if you just look at it funny. But it doesn’t occur to most people that many, if not most, enterprise applications fall into the same category.

They don’t realize, that is, that ordinary, run of the mill, plain-vanilla enterprise apps are kind of like Ferraris: it takes a lot to get them to do what they’re supposed to do.

Here are some examples.

Consider, for instance, the homely CRM application. Many, if not most executive users go to the trouble of buying and putting one in because they want the system to give them an actionable view of the pipeline. Valuable stuff, if you can get it. When the pipeline is down, you can lay people off or try new marketing campaigns. When it’s up, you can redeploy resources.

Unfortunately, though, it takes a lot to get a CRM system to do this. If, for instance, the pipeline data is inaccurate, it won’t be actionable. Say a mere 10% of the salesforce is so good or so recalcitrant that their pipeline data just can’t be trusted. You just won’t be able to use your system for that purpose.

It’s kind of funny, really. Giving you actionable pipeline data is a huge selling point for these CRM systems. But when was the last time you personally encountered one that was 90% accurate?

Another example I ran into recently was workforce scheduling in retail. A lot of retailers buy these things. But it’s clear from looking at them that the scheduling can be very easily blown.

Finally, take the example I talked about before, MRP. When you run that calculation, you have to get all the data right, or the MRP calculation can’t help you be responsive to demand. If even 30% of the lead times are off, you won’t be able to trust much of the run. And when was the last time more than 70% of the lead times were right?

Notice that if you underutilize a brittle system, it can be somewhat serviceable. If you use the CRM system to follow the performance of the salespeople who use it and don’t try to use the totals, the inaccuracy doesn’t matter. If you just use MRP to generate purchase requisitions, the lead times don’t matter. But, as I said in the earlier post, if you do underutilize the system, the amount of the benefit available falls precipitously. And then the whole business case that justified this purchase and all this effort just crumples.

What makes me think that a lot of enterprise applications are brittle? Well, I have a lot of practical, personal experience. But setting that to one side, it’s always seemed to me that the notion of a brittle application explains a lot of data that is otherwise very puzzling.

Take for instance all the stuff that Michael Krigsman tells us about project failures. He is constantly reminding us that the cause of failure is a three-legged stool, that it could be the vendor or the consultant or the company, and he is surely right. What ought to be puzzling about this, though, is why there is no redundancy, why the efforts of one group can’t be redoubled to make up for failings in another. If the apps themselves are brittle, however, it takes all three groups working at full capacity to make the project work.

Or, more generally, look at the huge number of project failures (what’s the number, 40% are abandoned?). Given how embarrassing and awful it is to throw away the kind of money that enterprise app projects take, you’d think that most people would declare some level of victory and go home, unless the benefit they actually got was so far from what they were hoping for that the whole effort became pointless.

How can you tell whether the enterprise app you’re looking at is brittle? Well, look at the failure rate. If companies in your industry historically report a lot of trouble getting these things to work, or if the “reference” installations that you look at have a hard time explaining how they’re getting the benefits that the salesman is promising you, it’s probably a brittle app.

Two questions remain. When do you want to bite the bullet and put in a brittle app? And why would vendors create apps that are intrinsically brittle?

Next post.

A Flaw in Business Cases

September 19, 2009

A software business case compares the total cost of software with the benefits to be gained from implementing the software. If the IRR (internal rate of return) of the investment is adequate, relative to the company’s policies on capital investment, and if the simple-minded powers that be have a good gut feel about the case itself, it is approved.

Clearly, a business case isn’t perfect. Implementing software is an uncertain business. The costs are complex and hard to fix precisely. Implementation projects do have a way of going over; maintenance costs can vary widely; the benefits are not necessarily always realizable.

That’s where the gut feel comes in. If an executive thinks the benefits might not be there, that the implementation team might have a steeper learning curve than estimated, or that user acceptance might be problematic, he or she will blow the project off, even if the IRR sounds good.

Business cases of this kind are intended to reduce the risk of a software purchase, but I think they’ve actually been responsible for a lot of failures, because they fail to characterize the risk in an appropriate way.

There’s an assumption that is built into business cases, which turns out to be wrong. The fact that this assumption is wrong means that there’s a flaw in the business case. If you ignore this flaw (which everybody does), you take on a lot of risk. A lot.

The flaw is this. Business cases assume that benefits are roughly linear. So, the assumption runs, if you do a little better than expected on the implementation and maintenance, you’ll get a little more benefit, and if you do a little worse, you’ll get a little less.

Unfortunately, that’s just not the case. Benefits from software systems aren’t linear. They are step functions. So if you do a little worse on an implementation, you won’t get somewhat less benefit; you’ll get a lot less benefit or even zero benefit.

The reason for this is that large software systems tend to be “brittle” systems. (See the recent post on “Brittle Applications.”) With brittle systems, there are a lot of prerequisites that must be met, and if you don’t meet them, the systems work very, very poorly, yielding benefit at a rate far, far below what was expected of them.
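Here is a minimal sketch of the difference, with invented numbers: the linear model that business cases assume pays off in proportion to how well the implementation goes, while the step function of a brittle system pays off only once the prerequisites are met.

```python
# Invented numbers: expected benefit as a function of implementation
# "quality" (0 = botched, 1 = perfect), under the linear assumption
# built into business cases vs. the step function of a brittle system.

MAX_BENEFIT = 1_000_000  # hypothetical yearly benefit at full success

def linear_benefit(quality):
    # What the business case assumes: do a little worse, get a little less.
    return quality * MAX_BENEFIT

def step_benefit(quality, threshold=0.9):
    # Brittle system: until the prerequisites are substantially all met,
    # the benefit collapses to nearly nothing.
    return MAX_BENEFIT if quality >= threshold else 0.05 * MAX_BENEFIT

for q in (0.95, 0.85, 0.75):
    print(f"quality {q:.2f}: linear ${linear_benefit(q):>9,.0f}, "
          f"step ${step_benefit(q):>9,.0f}")
```

Under the linear assumption, slipping from 0.95 to 0.85 costs you about a tenth of the benefit; under the step function, it costs you nearly all of it.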

This problem is probably easier to understand if we look at how business cases work in an analogous situation. Imagine, for instance, the business case for an apartment building. The expected IRR is based on the rents available at a reasonable occupancy rate. There is, of course, uncertainty, revolving around occupancy rates, rentals, maintenance costs, quality of management, etc. But all of this uncertainty is roughly linear. If occupancy goes up, return goes up; if maintenance goes up, return goes down; and so on. The business case deals with those kinds of risks very effectively, by identifying them and ensuring that adequate cushions are built in.

But what if there were other risks, which the business case ignored? These risks would be associated with things that were absolutely required if the building was to get any return at all. Would a traditional business case work?

Imagine, for instance, that virtually every component of the building that you were going to construct–the foundation, the wiring, the roof, the elevators, the permits, the ventilation, etc.,–was highly engineered and relatively unreliable, required highly skilled people who were not readily available to install, fit so precisely with every other part of the system that anything out of tolerance caused the component to shut down, etc., etc. The building would be a brittle system. (There are buildings like this, and they have in fact proven to be enormously challenging.)

In such a case, it is not only misguided to use a traditional business case, it is very risky. If one of these risky systems doesn’t work–if there’s no roof or no electricity or no elevator or no permit–you don’t just generate somewhat less revenue. You generate none. Cushions and gut feel and figuring that an overrun or two might happen simply lead you to a totally false sense of security. With this kind of risk, it’s entirely possible that no matter how much you spend, you won’t get any benefit.

Now, businessmen are resourceful, and it is possible to develop a business case that correctly assesses the operational risk (the risk that the whole thing won’t work AT ALL). I’ve just never seen one in enterprise applications. (Comments welcome at this point.)

The business cases and business case methodologies that I’ve seen tend to derive from the software vendors themselves or from the large consulting companies. Neither of these are going to want to bring the risk of failure to the front and center. But even those that were developed by the companies themselves (I’ve seen a couple from GE) run into a similar problem: executives don’t want to acknowledge that there might be failure, either.

But the fact that the risk makes people uncomfortable doesn’t mean that it’s a risk that should be ignored. That’s like ignoring the risk that a piton will come out when you’re mountain climbing.

Is the risk real? Next post.

Brittle Applications

August 31, 2009

In a previous post, I said that MRP was a “brittle” application, and a commenter questioned me. What is a “brittle” application? Is this a technical term? What makes MRP brittle? All good questions.

A brittle application is one that doesn’t work at all unless a lot of disparate conditions are met. MRP, for instance, doesn’t work unless all the data is right, people know how to use the program, the demand for the products is stable, purchasing is also committed to minimizing inventory levels, etc., etc.

The notion applies to a lot of other programs besides MRP, though I’ve rarely heard the term used. But notice that brittleness isn’t so much a feature of the program itself as of the purpose to which the program is put.

Let’s take a simple example: a word processing program. For normal purposes, a word processing program in this day and age is not brittle. A rank novice can use it to type and print. But even today, if you want to use, say, Microsoft Word to put out a 16-page brochure, complete with illustrations, well, good luck, is all I can say. You try to get an illustration to float, and change its size, and put in a table, and–well, just try it, it’s a nightmare. So, for putting together a 16-page brochure, Microsoft Word is brittle; for printing out a letter, it’s not.

The point about the MRP program that QAD wrote, which follows the APICS standard religiously, is that it’s brittle relative to the purposes for which it was intended. Pam and Karl and Evan (the founders of QAD) really believed that QAD’s product could do supply chain management for manufacturing facilities very effectively. My point was that the program is too brittle. To get things right, you have to get all the data right and keep it right, etc., etc. And if you don’t, what you have is an overwrought and overcomplicated Kanban system, without Kanban’s virtues.

Are there other enterprise application systems that are brittle? Lots and lots of them, I think. Almost all the old, Siebel-style CRM systems were simply too brittle; they depended too much on the goodwill of the salespeople, the accuracy of the sales model embedded in the system, the reliability of the sales cycle, etc., etc. You wouldn’t think that financial systems are brittle–after all, they have to work–but they often had components that were overly brittle: cash management systems, for instance, and fixed asset systems and budgeting systems.

What do most companies do when they have an overly brittle system? They use the system for lesser purposes. And they feel REALLY bad about it. So, the Microsoft Word user makes a brochure that is far less fancy, but more manageable, and the QAD MRP user uses the product for tracking inventory. And both of them keep on saying, “Well, one of these days, I’ll get around to really making this product sing.”

They shouldn’t. Brittle applications are brittle for a reason. A lot of the time, it’s because they’re really a special-purpose product, and yours is not that purpose. Some of the time, they’re brittle because they’re badly designed. Some of the time, the model they’re using (MRP is a good example) just doesn’t fit the situation you’re in. In any of these cases, the fact is that they really can sing for the right user, but that doesn’t mean it’s your fault if they don’t sing for you.

What do you do if you have a brittle app that isn’t singing? Give up on it. It won’t work for you. Get another app, one that works. Or change the process. Or just accept the fact that it will never work the way you thought it would.

In any case, good luck.

I used to work at QAD, a small manufacturing software vendor. I subscribe to a QAD chat group, and occasionally people ask questions like the one in the title.

It sounds as if the person asking is peddling something–who knows–but it’s an interesting question nonetheless. What kinds of knowledge are necessary (key) for an ERP implementation? If you run a manufacturing company, is APICS (that is, supply chain) knowledge particularly important?

Certainly, QAD used to think so. When I was an employee, you got a bonus for becoming APICS certified. (APICS is the American Production and Inventory Control Society; to get certified, you had to learn how MRP worked and how inventory should be managed.) And certainly, when the product was designed, the focus was on matching supply and demand. The product was built originally for Karl Lopker’s sandal manufacturing business, and the idea was always to have simple, usable product that managed inventory well.

So you would think that the answer, at least for QAD users, is, “Of course APICS knowledge is key. Duh.” But I don’t think so.

You see, while I was at QAD and then for some years afterward, I looked at a fair number of installations. And what I saw was disheartening, at least if you believed in good supply chain practices. The companies weren’t really using good supply chain practices, at least as APICS defined them.

Let me give you an example, which APICS-trained people will understand immediately. One of the ideas of these systems is to reduce the amount of inventory you have on hand at any one time. To do this inside the system, there are two parameters that you have to set, lead time (which is the amount of time it takes for an order to be fulfilled) and safety stock (the amount you want to have on hand at all times). The longer the lead time or the higher the amount of the safety stock, the greater your inventory expense.
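For readers without the APICS background, here is the textbook arithmetic in miniature, with invented numbers. The point is simply that these two parameters directly drive how much inventory, and therefore how much money, sits on the shelf:

```python
# Textbook reorder-point arithmetic with invented numbers. The two
# parameters the systems ask for, lead time and safety stock, set the
# level at which you reorder and thus the inventory you carry.
daily_demand   = 40    # units per day (hypothetical)
lead_time_days = 10    # parameter set in the system
safety_stock   = 200   # parameter set in the system

reorder_point = daily_demand * lead_time_days + safety_stock
print(f"Reorder when on-hand stock falls to {reorder_point} units")
# Longer lead times or higher safety stock raise this number,
# and with it the average inventory you pay to hold.
```

Tune the parameters regularly and the carrying cost falls; set them once, en masse, and never touch them again, and the system is just printing paperwork.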

So what would you say if you discovered that in not one or even two installations, but many, the safety stock and lead time numbers for most of the inventory were set once, en masse, and then never set again? Well, I’ll tell you what to think: these figures, which are key to making the system work, are not really being used.

Now this was not just true of QAD Software; it was equally true wherever I went, no matter what software was installed.

So doesn’t this say that supply chain knowledge is key, after all? If they had supply chain knowledge, wouldn’t they have paid more attention? At first, I thought so. But then after a while, I realized that more supply chain knowledge would have made very little difference.

You see, that’s not why they were using the software. All these companies, it turns out, didn’t really care about getting supply chain stuff right. They managed the supply chain fairly sloppily–tolerated a lot of inaccuracy and suboptimal behavior–and they got along (in their minds) just fine doing that. They didn’t want to put in the kind of care and rigor that is the sine qua non for doing with these systems what they were designed to do.

What were they using the software for? Well, mostly to manage the paperwork virtually. Please don’t cringe, Pam, if you happen to be reading this. This is not a hack on you. The plain fact is that the companies needed to keep track of their commitments (orders), their inventory, and their money, and that’s what they used the system for. They needed a piece of paper that told people what inventory to move that day and where to move it to. And the system gave it to them.

To do this, though, you didn’t need much APICS knowledge or, if you didn’t believe in APICS’s recipes for inventory management, other supply chain knowledge. All you really needed was to be able to count, which most of the users could do without being APICS-certified.

So is supply chain knowledge key for an ERP implementation? Not at all. You can have perfectly happy users who have got exactly the nice simple implementation they need without much supply chain knowledge at all.

This answer, of course, raises lots of questions. What is key? Why do these companies tolerate sloppy supply chain practices? Wouldn’t they be better off if they cleaned up their act? Herewith, brief answers.

What is key? At a rudimentary level, the financials. You have to get the basics right, here, or you’ll never close your books. In a system studied recently by a grad student at Harvard Business School, 65% of the inventory records were inaccurate. Can you imagine the upset if 65% of your account balances were incorrect?

Why do they tolerate sloppy supply chain practices? I think it’s largely because more finely tuned systems are much more brittle. They take a large amount of care and feeding and their ability to take hard, rude, unexpected shocks is limited.

And wouldn’t they do much better using the systems? In many cases, no. You see, at most of the companies I’ve run into, the MRP/APICS model that QAD (and every other software vendor) provided is not actually all that accurate. To make a really significant difference, you need more sophisticated tools that are better suited to the specifics of your supply chain.

Comments welcome.