Blame the Customer?
July 14, 2010
Who is to blame for IT project failures? My colleague, Michael Krigsman, argues that when IT projects wander into the “IT Devils Triangle,” all three participants–the vendor, the integrator, and the customer–are to blame. Michael is very insistent about this; in a recent post on Marin County v. Deloitte, he says, “In my view, it is highly unusual for a project to fail without some culpability on the part of the customer.”
Michael is the guy who almost singlehandedly took IT project failure out of the closet and into the open, and much of his professional life is spent analyzing these failures. For that reason, one should not criticize what he says without good reason. But I’ve always been uncomfortable with his line on things and felt that we would all be better off if we had better analytic tools for this kind of situation. I want to know when the “culpability” is minor and merely contributory and when it is really important or even primary.
One’s normal intuition about such things is that there are broad spheres of responsibility. If I get an artificial hip implanted, it’s up to the manufacturer to provide my doctor with a hip that works, up to the doctor to put it in correctly, and up to me not to abuse it. This is the intuition that underlies a lot of contract law and liability law, and it even applies to very large, complex engineering projects. When the architect found a flaw in the Citicorp building in New York City, a flaw that could have caused it to collapse, the architect and the contractor took responsibility for it and had it fixed. They didn’t blame the customer, and they didn’t ask the customer to adjust his use of the building.
If you apply this intuition to technology, it works pretty well. In the case of IT projects, the software company is responsible for making sure that the design and building of the product is adequate to the demands that will be placed on it, the integrator is responsible for deploying the product in a way that meets what one might call the normal expectations of the customer, and the customer is responsible for using it in ways that are consistent with the design. If any of them fail to meet their responsibilities in their sphere, then they are culpable. And the others are not culpable (except perhaps in a minor way) merely because they decline to compensate for that failure.
Now, I think Michael would agree with this, but he would say that, as a practical matter, what happens in these large project failures is that all three parties fail to deliver, so all are culpable. (Michael, please correct me if I’m wrong.)
But it seems to me that doesn’t go far enough. I think you need to be able to figure out when one participant is clearly most culpable. Perhaps, for instance, one ought to apply a notion of priority, so that if the vendor fails to deliver, the vendor is primarily culpable by definition, and the most that the other two parties can do is provide some minor contribution. (This tends to be the approach in products liability, with some major caveats.) Or perhaps there are some other tools that need to be applied, tools that would allow one to extend (or perhaps even change) the “pox on all their houses” line that Michael takes.
Whatever you do, though, it isn’t easy.
Let me try to illustrate what I mean with an example that has made a lot of news recently: the iPhone 4. Here you had three participants: Apple, AT&T, and the customer. You had a product that “did not work properly.” And you had a dispute about levels of culpability.
The problem, in case you’ve been in Tibet for the last month, is that the iPhone’s reception isn’t very good for reasons that still aren’t clear. (My wife has one. It’s true; it’s not good.) The customers have basically been told by Apple, “The problem is in the way you hold the phone. Either hold the phone differently, or fork over $25 for a case.” The customers have said, if I may paraphrase, “It’s a phone. You should be able to hold the phone any way you want. It’s up to Apple to give me a phone that works no matter how I hold it.”
Now, the IT Devil’s Triangle analysis says, basically, that the customers are wrong. According to this argument, Apple made a good phone that made some design tradeoffs. (Basically, the antenna was put on the rim of the phone.) If one of those tradeoffs means customers have to hold the phone in a special way, so what? They should either buy themselves a case or hold the phone correctly. If they don’t want to do that or can’t, they are contributing to the problem and are at least as culpable as Apple.
So is AT&T off the hook here? Under the Devil’s Triangle argument, not at all. The phone service that they provide is weak and erratic, and that makes it very difficult to troubleshoot the phone. (I have experienced this in spades.) Their customer “service” is designed to handle normal problems expeditiously, but is not designed to track and resolve complex problems, and as a result, it makes what could be an irksome experience into something verging on horrible. (I have also experienced this, believe me.) And that means that customers are not as tolerant of the iPhone 4’s design choices as they should be.
It’s something of a conundrum, isn’t it? One’s normal intuition is that Apple should design a phone that people can hold any way they want. But one can certainly make a cogent argument that it’s really the customer, as much as anyone else, who is to blame, because they get mad at Apple, rather than being willing to hold the phone in the correct way.
In the end, the solution to this riddle is a matter of values. If you think (as Consumer Reports does) that it’s up to the vendor to get things right, when it comes to simple things like how one holds the phone, then the Devil’s Triangle argument is simply wrong. If you think (as Steve Jobs and many of my friends who work for vendors think) that the customer is to blame if they can’t deal with the “flaws” that they find in a complex and highly engineered technology, then you’re going to agree with Michael and assume that there are very few situations where the customer doesn’t have some culpability.
Let me just tell you where I come down on this and hope for some illuminating comments. I think the benefit of the doubt should be given to the user. If the vendor is clear about the design tradeoffs and limitations and the deliverer (integrator) provides service that takes those limitations adequately into account and is clear about that, then I think, “Yes, it’s up to the consumer to deal with the limitations.” If the vendor and deliverer fail to do this–if the vendor isn’t clear about what the design tradeoffs are or if the vendor and deliverer allow you to get the idea that the product will do things (like make phone calls in normal situations when being held normally) that it won’t in fact do–then the onus is on them.
So, for instance, if the building architect makes a mistake in the way the building is designed or the general contractor deviates from the design in a way that turns out to be dangerous, it’s up to them to fix it; the building owners are not culpable just because they don’t want to limit the number of people allowed on each floor to a number far below what they’d been led to expect. And if the hip designer uses a brittle material and the doctor chips it when he or she installs it, it’s not up to the patient to walk less.
Just my view.
So why is this important? It’s important because this is not how the software industry works. It’s normal for vendors to be very close about the design of their technology, for vendor and integrator salespeople to make unrealistic claims about what the technology does or about the benefits to be received, for the people delivering the product to view limitations in delivery as upsell opportunities, etc., etc. So if my view were to prevail, and it were up to the vendor and deliverer to make sure that the product works as you would normally expect and the installation is what it should be, the industry would have to change.
Is that likely? Maybe, maybe not. But it gets more and more likely every time a user who is asked to adapt to some software’s vagaries asks himself or herself, “Isn’t this like being asked to hold an iPhone with tweezers?”
And While You Wait? What?
March 31, 2010
A few days ago, I argued that a company that wants to replace its old, limping systems with a brand-spanking-new Oracle or SAP application should probably wait for the next generation of apps.
The post got a lot of approving comments, which surprised me. I think there are pretty good arguments for just hauling off and buying. Consider what one of these hypothetical companies might say in defense of a decision to buy now rather than later.
1. We’ve got the money now and we should spend it.
2. The product we’ll get will be much more reliable.
3. The cost of services surrounding the product will be lower.
4. We don’t know when the new products will be out (with the possible exception of Workday 10).
5. We don’t know what will be in the new product.
Imagine what idiots we’d look like, the buyer might say, if we waited for five years for a product that was no better than what we could buy today and far more incomplete and buggy.
I see the force of this, which is why I thought it was a close call. Ultimately, I did decide it’s better to wait for markedly better products that are coming, despite the risks and delays. But I saw why people would disagree.
So are my commenters intemperate or am I dithering? One test for this is to look at the case of somebody who is not considering a replacement, but is considering a fairly big investment in the existing platform. Maybe they’re considering an upgrade, or maybe they’re considering some extension products, or maybe they want to push their existing installation out to other geographies.
This is a very realistic case. Several companies I know fairly well are considering one or more of these options. One is upgrading to Oracle 11i; another wants to upgrade to Infor LN. Still another wants to extend its QAD implementation to another geography. Still another wants to buy a CRM system from its existing vendor.
For each of these companies, the details matter a lot, but on reflection, I think the same argument applies. It will be hard to get the return on this big new investment in the old platform, because the useful life of what is to you a new product is not long enough to justify the expense.
This raises two questions. First, how does one calculate “useful life”? Most of the companies I’m thinking of believe that the useful life is defined only by how long they’re willing to use it, and for some companies, that’s pretty long. (There are, after all, big SAP clients who are still using R/2.) I think this is wrong, but I’m going to have to hold off explaining why till a later post.
The second question is, “Assuming that the commenters and I are right, what kinds of investments should companies who have decided to wait actually make?” I have no determinate answers to this, but I think there are some guidelines.
Start with Gavin’s comment on the previous post. Gavin points out that some investments in the Oracle infrastructure and in new Fusion-based products will actually take you toward the next Oracle generation. He is quite right; Oracle designed things this way, partly because of the problem that I’m describing: they quite deliberately created some ways of investing in products from Oracle without locking yourself into an older technology.
There are many, many caveats, of course. Among other things, if you have an Oracle system now, you might not want Oracle in the future. The three main products I mentioned in the earlier post (Workday, Business by Design, and Fusion Applications) are all highly differentiated, each with its own flaws and virtues. A rational person would do well to look at all of them before deciding to stick with the vendor they have now. (This applies to SAP customers, too.)
Even if you are bent on Oracle, you still may want to take some time thinking through your infrastructure stack before investing in pieces of it. Even the examples Gavin gives, like OpenID, which are likely to be pretty good, may not be right in the long run, and if they aren’t, a lot of trouble and expense will have gone to waste.
You should note that Gavin’s argument almost certainly will apply to SAP, as well. SAP is also working on “hybrid” intermediate solutions, and I’ll bet you dollars to donuts that they’re trying to figure out every way they can to ease the migration to the new system by asking you to make steady, rational investments in products that extend your current capabilities.
But what about customers for whom an infrastructure or extension investment isn’t right? Here, I think there are some interesting arguments, akin to Gavin’s, for small, light cloud-based apps, point solutions designed to solve highly specific problems. I’m not just talking about a CRM app or a call-center app or a recruiting app; I’m talking about things that are very, very specific to your industry, but really powerful, things like Tradestone in the apparel industry.
Another possibility is to spend some time and effort cleaning up your existing installation. This will improve its current effectiveness, extend its useful life, and very possibly lower the cost of the new system significantly. (A lot of the cost of any implementation is cleaning up after the mess left by a system that everybody has given up on.)
There’s a very smart analyst in Europe, Helmuth Gümbel, who has spent a lot of time thinking about this problem. He has a blog, and he also has a conference, Sapience, which goes into these questions at length. If you’re thinking about extending your current ERP to other geographies, a reasonable alternative might be to find lower-cost ERP systems to serve those geographies.
The basic theme running through these latter approaches is that while you’re waiting, you can focus on saving some money and preparing your current installation, thereby making the later transition to a newer technology faster and more affordable.
The SUGEN Numbers
January 20, 2010
I argued in the previous post that SAP’s new, “two-tier” pricing system for maintenance offers customers less choice than meets the eye, and commentators like Dennis Howlett agree.
So why did they bother? If one offering is “good support for a fair and reliable price” and the other offering is “less good support for roughly the same price (only no one will really know for six years)” why would anyone pick the latter? And why would SAP risk a public relations nightmare when the people who pick the apparently lower-cost alternative find that they’ve been snookered?
Is it just that SAP needs to offer the enterprise application version of “small coffee,” the coffee size that nobody ever orders, but you need on the menu, so people will order medium?
The question is particularly salient because SAP has data that, one could at least argue, shows that Enterprise Support really is better.
This data comes out of a program embarked on last summer, sponsored by SUGEN (SAP User Group Executive Network) and SAP. In this program, companies were put on Enterprise Support, and the benefits thereof were measured against 11 benchmarks, with the sum of those benefits rolled into something called the SUGEN KPI Benchmark Index. SAP had vowed not to raise the cost of Enterprise Support until this program showed a gain in the Index that justified the increased cost of Enterprise Support.
SAP reported the results at the Influencer Summit last December, which I attended. According to the numbers they showed me, the SUGEN KPI benchmarks had indeed been achieved.
These numbers were disclosed to me on the condition that I not repeat them until the full results were published, and while I’ve been given informal permission to speak about them, I will try to respect this request.
I think, though, that I can convey a fairly accurate idea of what is going on without actually citing the numbers.
Before I can get to this, though, I need to explain something about the program and the expectations that people had for it. When SAP first announced a new, improved class of support at an increased price, which all customers were required to use, many customers thought that this was just price-gouging. They didn’t believe that SAP’s across-the-board price increase for maintenance would be accompanied by any benefits. When SAP started hearing from these customers, they were clearly taken aback, since the executives in charge of this new support program clearly did (and do) believe in its benefits.
So SAP (and SUGEN, the customers’ self-appointed representatives) agreed to put the question to an empirical test.
Now anybody reading the press release about this program or anybody attending the journalist session at last May’s Sapphire (as I did) would believe that this test would be done along traditional social science lines. A representative sample of the SAP customer base would be given the opportunity to take advantage of Enterprise Support, and the benefits would be measured.
After attending that session, I told my client base (people who are professionally interested in tracking what SAP does) that it was impossible for this program to show so much benefit that it would justify the across-the-board increase. The reason was simple. To get the available benefit from Enterprise Support, a customer must get and install a software product called the Solution Manager and must then do a lot of process documentation and process modification. A representative sample of SAP customers simply wouldn’t include very many customers who had done all this installation, documentation, and change, because the total amount of work was considerable, and most customers weren’t going to do it, at least not any time soon.
Isn’t it sort of squirrelly, expecting Enterprise Support customers to get software, install it, and then do a fair amount of work before they get the benefit that SAP promised them? Well, yes, but it isn’t quite as squirrelly as it sounds.
At the Influencer Summit, Uwe Hommel, the person behind this idea, expressed it roughly this way. A lot of customers don’t really run support as well as they could. The Solution Manager provides them with a framework for the practices that they should be using, plus it enables SAP support personnel to give better, more accurate, and faster help, because the Solution Manager gives them better information about what was going on at a customer site. It would be nice if SAP could wave a magic wand and improve support without any effort from the customers. But that just can’t happen. All we can do is provide a framework.
As far as Hommel is concerned, what SAP is saying is roughly what the trainer at the gym offers. “We’ll make you better, bigger, stronger, and leaner, but of course you have to do your share.”
Fair enough, of course. But that’s not actually what SAP said. SAP actually said something more like, “You need to do support better, and to do it better, you’ll need a trainer and you’ll have to put in some effort, but oh, by the way, you have to pay for the trainer whether or not you actually get around to going to the gym.”
Perhaps the oddest thing about the test that SUGEN and SAP ran is that both parties pretended that SAP had said the first thing and not the second.
You see this, for instance, in the way they [SUGEN according to Myers, below] chose the subjects for the test. Rather than choosing the representative sample of the customer base that I was told they would choose, they asked for volunteers to apply for the program. 140 customers did apply; of those, only 56 were chosen for testing. This, of course, simply guaranteed that the test would not prove what SAP wanted it to prove, that the price increase was justified. At best, it would prove that those customers who decided to go to this metaphorical trainer would get some benefit from it.
So, did they get some benefit?
Well, um, uh, sort of.
As I said above, SAP and SUGEN agreed before the test that there were 11 areas where benefit might be provided. The areas ranged from the obvious things–fewer outages, faster problem resolution, and fewer problems–to the less obvious, but still important things, like more efficient CPU utilization and better use of disk storage.
In the actual test, the benefits of Enterprise Support were measured in only 6 of these 11 areas. The areas chosen had to do with total cost of operations (use of CPU and storage), the cost/effectiveness of managing patches, and the extent to which customers used SAP’s current software effectively. Clearly, this made things harder for SAP, since they were trying to prove benefit, but the benefit was actually measured in fewer than half the areas where benefit might be available.
Nevertheless, SAP thinks that it succeeded, and technically they did. They measured benefit by giving the SUGEN Index an arbitrary value of 100. The way I understood it at the session, the aim was to show that the increased benefits at least offset the roughly 7.6% price increase in 2009. [According to Myers, below, the actual aim was 4%].
Both aims (what I thought was the aim and what Myers said was the aim) were actually achieved. The benchmark index dropped by 6.89 percentage points. Even though only 6 benchmark areas were measured, the benefit achieved did offset the 7.6% increase (at least within 1 percentage point).
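If you want to check the arithmetic yourself, here is a minimal sketch in Python; it reflects my reading of the reported figures, not SAP’s official calculation.

```python
# My reading of the reported figures, not SAP's official calculation.
baseline = 100.0        # SUGEN KPI Benchmark Index at the start
drop = 6.89             # reported improvement, in index points
price_increase = 7.6    # 2009 Enterprise Support price increase, in %

print(f"index: {baseline:.2f} -> {baseline - drop:.2f}")
print(f"benefit {drop}% vs. increase {price_increase}%: "
      f"shortfall {price_increase - drop:.2f} points")
# index: 100.00 -> 93.11
# benefit 6.89% vs. increase 7.6%: shortfall 0.71 points
```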
There is, however, a little, tiny, “but.”
All the benefit was achieved by massive improvements in only two out of the six areas: storage utilization and number of failed changes. (A “failed change” is an attempt to install a patch that fails.) In all the other areas that were measured, the average improvement was very small.
Both of these measures appear to me to be one-time-only improvements. Take, for instance, storage utilization. If you have one of those awful Windows machines, and your disk is sluggish, you can run a utility that compacts your disk and frees up disk space. You’ll show massive improvement in storage utilization. But running this utility once is not the sort of thing that justifies a permanent yearly increase in maintenance costs. Yes, you can do it next year, but it won’t show the same level of improvement, because you gained most of the benefit the first time you did it. The same thing goes for reducing failed changes. Changes in process (and use of the Solution Manager, or Sol Man) can reduce this number a lot. But once you’ve made the changes, further reductions aren’t really available.
I certainly hope that somebody from SAP is reading this; if you are, you’re probably upset, because you’re saying to yourself, “Well, the benefit is permanent; for the rest of time, people will have fewer failed changes and use less disk space.” [Myers does in fact argue this. See below.] You are, of course, right. But you’ll have overlooked the larger question: does helping people to a one-time improvement justify a permanent, yearly price increase? That is hard for me to see. If Enterprise Support promised to bring these kinds of improvements in regularly, then it would be OK to pay more for it regularly. But this test doesn’t show that these regular improvements will be forthcoming.
In any case, it’s all moot now. SAP has scrapped the SUGEN benchmark process. In a way, it’s a shame. This is one of the few times that any enterprise application company has ever tried to run a systematic test of whether its software and services work as advertised. And the results of this test are very interesting. In some areas, the software and services don’t seem to work; the benefits are minimal. In other areas, though, they work very well indeed; the benefits are startling. Who woulda thunk it?
At the very least, shouldn’t SAP keep going with this, so it can go back and fine-tune its software, figure out why some benefits aren’t forthcoming and do something about it?
Apparently not.
SAP’s “U-Turn” on Maintenance
January 15, 2010
SAP announced yesterday that it was creating a two-tier support system, effectively reinstating its old Standard Support offering at a slightly increased price. (The new price is 18% of net versus the old 17% of net.)
This has been hailed as a U-Turn by press and analysts, all of which proves something to me: most writers can’t do math.
SAP begins its press release as follows (emphasis mine):
In a demonstration of its commitment to customer satisfaction, SAP AG (NYSE: SAP) today announced a new, comprehensive tiered support model that is being offered to customers worldwide. This support offering includes SAP Enterprise Support services and the SAP® Standard Support option and will enable all customers to choose the option that best meets their requirements.
So let’s look at the choice that’s being offered to customers; after you look at it, you can judge how much satisfaction it’s going to generate.
The cost of Enterprise Support this year is 18.36% of a base number, a number that usually stems from (but may not be identical to) the net amount paid for the SAP licenses that are being supported. So, this year, assuming that the base for a company was $100,000, the total cost of Enterprise Support is $18,360 and the total cost of Standard Support is $18,000.
Next year, the cost of Enterprise Support goes up to 18.9%, increasing to 22% by 2016. That means that in 2011, it is $18,900, and in 2016, it is $22,000. Those of you who are writers, I apologize for all these numbers; I know they do get confusing.
Now to Standard Support. With Standard Support, the percentage is fixed. But the base is not. It is subject to cost of living increases. We don’t know what COLA (cost of living adjustment) SAP will impose. But let’s just say for the sake of argument that it is 3.00%/year. In 2011, the cost will be $18,540.00, and in 2016, it will be $21,493.00.
All this is in a spreadsheet which you are welcome to look at and play with. (In the spreadsheet, I rounded 18.36% down, so the Enterprise Support costs are slightly low.) Assuming you’re not a writer and you want to play with the numbers, here’s what you’ll see. If the COLA is 3%/year, the costs of either kind of support will be very close for a long time to come. If the COLA is 1%, Standard Support will be quite a bit cheaper. And if it is 5%, Standard Support will be quite a bit more expensive.
It’s confusing, I know, but it’s true. If the COLA is 5%, then “18%” support will cost more than “22%” support. If the COLA is 3%, then “18%” support will cost about 2% less than “22%” support. And if the COLA is 1%, then the lower tier of support will cost about 10% less than the upper tier.
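If you’d rather not open the spreadsheet, here’s the same calculation as a small Python sketch. The $100,000 base and the constant COLA rates are assumptions for illustration, and I’m only comparing the 2016 endpoints; the spreadsheet compares every year.

```python
# A sketch of the 2016 endpoint comparison. The $100,000 base and the
# constant COLA rates are assumptions for illustration.
BASE = 100_000

def enterprise_2016():
    return BASE * 0.22              # percentage rises to 22% by 2016

def standard_2016(cola):
    # fixed 18% percentage, but the base compounds with COLA for six years
    return BASE * 0.18 * (1 + cola) ** 6

for cola in (0.01, 0.03, 0.05):
    ss, es = standard_2016(cola), enterprise_2016()
    print(f"COLA {cola:.0%}: Standard ${ss:,.0f} vs. Enterprise ${es:,.0f}")
# COLA 1%: Standard $19,107 vs. Enterprise $22,000
# COLA 3%: Standard $21,493 vs. Enterprise $22,000
# COLA 5%: Standard $24,122 vs. Enterprise $22,000
```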
So what will it be: 5%? 3%? 1%? 0%? At the press conference, SAP didn’t say. There is no commitment to impose these increases and there is no commitment not to impose them. SAP, according to Léo Apotheker, “[has] the liberty of linking Standard Support [] to the cost of living index.” (Thanks, Information Week.) Whatever their decision, the imposition of COLA will not be uniform. The cost of living index is the index for the country whose currency is the master currency for the contract, and the actual linkage to this actual index depends on the contract language, which varies.
So what are we to think? Whatever SAP is doing, it is not a U-Turn, and it is not a rescission of the price increase. It is offering customers a new choice, which I’ll characterize as follows:
1. Return to Standard Support and get less support than you got with Enterprise Support (though how much less is unclear) and price increases in the form of COLA (though how much increase is unclear).
2. Stay with/sign up for Enterprise Support and get more support (how much more is unclear, but I’ve been posting on this and will post more) and definite, clear price increases in the form of increases in the maintenance percentage.
It’s a choice. But is it really the sort of choice customers want?
And is offering this choice really an example of a commitment to customer satisfaction?
Two Cheers for Léo
November 2, 2009
The German Financial Times today drew a bead on Léo Apotheker, SAP’s CEO, saying that on his watch, SAP had lost touch with its roots [verlorene Wurzeln]. No longer, as in the days of Hasso Plattner and Dietmar Hopp, is SAP customer-focused, the article says, and as a consequence, customers no longer think the software is worth the money. (The article cites the current customer unhappiness about increased maintenance prices as evidence for this.)
Helmuth Gümbel, no stranger to this blog, is cited frequently in the article; clearly, he was persuasive about the current state of affairs between SAP and its customers. Clearly, too, one disagrees with Helmuth at one’s peril.
Still, I wonder whether it’s really fair to hang all these problems on Léo. Take the infinitely hashed-over introduction and semi-withdrawal of Business By Design. Léo tends to get the blame for this because he was there on the podium claiming that SAP would get $1 billion in revenue from this product by, what is it, next year? But he had nothing to do with the original product. He was working in sales when Peter Zencke was put in charge of Project Vienna, and he was still in sales when Nimish Mehta’s team was developing T-Rex (a great product, no question), and he was still in sales when Hasso was insisting that T-Rex be incorporated into Business by Design, and so on.
Certainly, it was injudicious for him to promise that his organization could sell the heck out of a product that wasn’t ready for prime time. But why does this make him responsible for SAP’s lost roots? If anybody lost their roots, it was the development organization, which somehow or other couldn’t build the product it thought it would be able to build.
Don’t blame Léo for problems that were not of his making.
Now, I have my own issues with Léo, as my readers know. But frankly, when he came in, I think he was right about SAP. Too much time and money was being spent on stuff that wasn’t what the customer needed (or wouldn’t sell), and this had to stop. Hence the acquisition of Business Objects, the downsizing, the restructuring in the development organization. All of these things deserve at least one cheer, and I hereby give it. Hip, hip, hoorah.
Where I criticize Léo–and I’ve told him this to his face–is in his view of SAP’s role vis-à-vis the customer. He thinks what SAP has always thunk: that it’s up to SAP to make the best possible tools and it’s up to the customer (aided by the SI community) to figure out what to do with them. I think this view is wrong; SAP has to take more responsibility for making sure that the stuff works.
Ten years ago, when the heroes of the FT Deutschland article were fully in charge of the SAP business, I agree, SAP didn’t need to do that. SAP knew about businesses and about software, and they could figure out what they should do next without fretting about the problems customers were then having. (Believe me, there were a lot of them.) Today, though, with so much development time and development effort squandered, they can no longer believe that they can just build the right tools and count on the customers to get the benefit. Instead, they need to find out what went wrong with the tools they’ve been building and what needs to be done in the future. And the only way they can do that is to figure out EXACTLY what is preventing customers from getting the value they think they ought to get.
You can see why this problem is so important if you look at the SAP Solution Manager. This product ought to be what justifies the maintenance price increase. But as Dennis Howlett and Helmuth himself have both said, the product itself does not yet do what SAP needs it to do. If SAP really wants to justify this price increase, it needs to figure out why customers aren’t getting the value that SAP needs them to get. And they need to do it fast.
This is not easy; indeed, when I said this to Hasso late one night at an analyst party, he said, roughly, “We don’t know how to do that.” But let me just say, of all the executives I know at SAP, the one who is most likely to figure it out is Léo.
If he can figure it out, he will be able to get his customer base back again; indeed, I think they’ll be cheering, and when we’re convinced, so will I and so will Dennis and maybe even Helmuth. So, just in case this actually happens, let me give my cheer now, before it is actually deserved, as a way of saying, “I think you can do the right thing.”
Hip, hip, hoorah.
SAP, Siemens, and Maintenance Discounts
October 27, 2009
If you were one of SAP’s biggest customers and you found out that SAP was giving big discounts to another big customer, pretty much because they asked for it, what would you do?
Assuming you have at least a room-temperature IQ, that is.
Wait a minute. Let’s be democratic. If you were one of Oracle’s biggest customers and you found out that SAP was discounting maintenance for the asking, what would you do? I mean, you’re an Oracle customer; you definitely have a room-temperature IQ.
Still not sure what to do? Here’s a hint. The phone number at SAP headquarters is 49/6227/7-47474. At Oracle, it’s +1.650.506.7000.
“Wait a minute, wait a minute, wait a minute,” I hear you saying. “SAP didn’t start handing out discounts, did they? They raised maintenance prices; they didn’t lower them?”
Perhaps. But let’s try to apply that IQ of yours.
As I’m sure you know, it’s been bruited about in the media that Siemens was seriously considering the possibility of dropping its maintenance contract with SAP, starting January 1 of this year. Their plan was to have a third party provide maintenance, possibly either IBM or Rimini Street. (For a representative summary of the situation, as reported in the press, see this Market Watch report.)
So what happened? As all of you big SAP customers realize, Siemens had to make a decision September 30. Well, here’s what we know. About a week ago, SAP issued a press release, saying that Siemens had in fact re-upped its maintenance contract for three years.
Case closed, right? SAP doesn’t ordinarily announce maintenance renewals, but the underlying tone was, “Well, we’ve read the stuff in the press, too, so let’s deal with those scurrilous rumors, and issue a press release. After all, Siemens didn’t just come back. They bought more.” End of story?
Maybe. But you’ll notice that the press release doesn’t actually say anything about how much they paid for the maintenance. Indeed, there’s a funny little line–“based on SAP’s maintenance standards for large customers”–which seems to demand some explanation.
So, let’s pursue it a little further. Is there any further information anywhere about what Siemens actually paid? About the same time as the press release, a post appeared on the Sapience blog. The post said that Siemens had been paying 30 million euros pre-deal and was now paying 18 million euros, plus getting some other concessions. If you value the concessions at zero, this is a roughly 40% discount.
Sapience is written by Helmuth Gümbel, an industry analyst who has been following SAP for longer than I’ve been in the business. Helmuth is not an uninterested party here; he offers consulting on how to pay less in maintenance. But he’s also a well-respected figure, a person who doesn’t just say whatever he feels like saying, true or not.
[Full disclosure: Helmuth is also a person I regard as my friend, someone whom I see socially on the rare occasions when he’s in town.]
So what is one supposed to believe? On the one hand, you can say, “Why believe an isolated blogger, especially when he has an axe to grind?” Then, you assume that the press release is giving you basically the right idea about what happened. On the other hand, you can say, “Where there’s smoke, there’s fire,” and assume that Helmuth (and the Enterprise Advocates, a group that discussed the Siemens situation in its recent webcast) have to have roughly the right version of the truth.
Helmuth isn’t the only source of smoke here. Kash Rangan, an investment analyst at Bank of America/Merrill, estimated recently that 20-25% of customers get discounts on their maintenance payments. (The relevant figures are reproduced in the dealarchitect blog.)
No full disclosure required here. I have only a nodding acquaintance with Kash.
Of course, all this can be pretty muddy. American accounting rules tend to make it difficult for companies to give direct discounts on maintenance; basically, if a maintenance agreement is part of the initial license contract and the stated price of maintenance isn’t supported by objective evidence, companies are supposed to recognize the license revenue ratably, not all at once. If you give discounts, then your ability to demonstrate that the stated price is supported by objective evidence is called into question. So, contra Kash and Helmuth, you could argue that SAP can’t be giving out discounts, because it would screw up their reporting.
But of course there are ways of discounting maintenance without actually charging less than the stated price. There is, for instance, a long, long history in the software business of handing out free seats, instead of cash, when customers are unhappy. (Both Helmuth and a cynical reading of the press release suggest that something of the sort may be going on here.) If pressed, vendors have also been known to reduce the basis for the maintenance charge and also to fiddle around with start and stop dates. I’m not saying that’s going on here–I don’t know–but I’ve been told by reliable sources that it has been done, at least by some companies.
If that were the case here, then Helmuth’s way of characterizing it is really the only sensible way to figure out what’s going on. You look at your outflow before. Then you look at your outflow afterward. The difference gives you a gauge of what the discount is.
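In code, the gauge is a one-liner. The figures below are Helmuth’s, with the concessions valued at zero; nothing here is confirmed by SAP or Siemens.

```python
# Helmuth's figures, concessions valued at zero; not confirmed by
# SAP or Siemens.
before, after = 30_000_000, 18_000_000   # euros per year
print(f"effective discount: {(before - after) / before:.0%}")  # -> 40%
```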
So did Siemens get a discount? The plain fact is that we don’t know for sure, even with all that IQ, and won’t know unless SAP and Siemens agree to tell us, and even then we won’t know, because the one thing we can be sure of is that SAP will present the case in a way most favorable to them.
So, if we don’t know for sure, and yet it seems possible that in fact SAP is giving discounts, what should you do? Well, I have a suggestion. It’s 49/6227/7-47474. See what they say.
But don’t hide it. Post what they say right here. If SAP really is holding the line on discounts, well and good. But if they’re giving them out, don’t you think it’s time for you to get in line?
Why Develop Brittle (High-Risk) Apps?
September 25, 2009
“Brittle” design isn’t limited to enterprise application software. You can find brittle design in cars, bridges, buildings, TVs, or even the vegetable bin. (What are those 1/2-pint boxes of $5.00 raspberries, 3/4 of which are moldy, but examples of brittle design?)
What do brittle designs have in common? The designer chose to accentuate high performance at the expense of other reasonable design parameters, like cost, reliability, usability, etc. A Ferrari goes very, very fast, and it feels good when it goes fast, and that’s a design choice. And it’s part of the design choice that the car requires a technician or two to keep it going fast for longer than an afternoon.
So why are most enterprise applications brittle? You can see this coming a mile away. It was a design choice. The enterprise applications in question were designed to be the Ferraris of their particular class of application. They were designed to do the most, have the most functionality, be the most strategic, appeal to the most advanced early adopters, be the most highly differentiated, etc.
To get Ferrari-like performance, they had to make the same design choices Ferrari did. They had to assume the application was perfectly tuned every time the key was turned, and they had to assume that the technicians were there to perform the tuning.
Enterprise applications, you see, were intended to run on the best, the highest-end machines (for their class). They were intended to be set up by experts. They were intended to be maintained by people who had the resources to do what was necessary. They were intended to satisfy the demands of good early-adopter customers who put a lot of pressure on them, with complex pricing schemes or intricate accounting, even if later on, it made setting the thing up complex, increased the chance that there were bugs, and made later upgrades expensive.
This wasn’t bad design; it was good design, especially from the marketing point of view. The applications that put the most pressure on every other design parameter got the highest ratings, attracted the earliest early adopters, recruited the most capable (and highest-cost) implementers, etc., etc. So they won in the marketplace and beat out other applications in their class that made different design choices.
I lived through this when I worked at QAD. At QAD, the founder (Pam Lopker) made different design choices. She built a simple app, one that was pretty easy to understand and pretty easy to set up and did the basics. And for about two years, shortly after I got there, she had the leading application in the marketplace. And then SAP and JDE and PeopleSoft came in and cleaned our clock with applications that promised to do more.
Now, none of the people who actually bought SAP instead of QAD back then or chose (later) to try to replace QAD with SAP did this because they really wanted high performance, per se. They wanted “value” and “flexibility” and “return on investment” and “marketable skills.” They literally didn’t realize that the value and flexibility came at a cost, that the cost was that the application was brittle and that therefore, the value or flexibility or whatever was only achievable if you did everything right.
If they had realized this, would they have made different choices?
I don’t know. I remember a company that made kilns in Pittsburgh that had been using QAD for many years. The company had been taken over by a European company that used SAP, and the CIO had been sent over from Germany to replace the QAD system with the one that was (admittedly) more powerful. He called me in (years after I had worked at QAD) to help him justify the project.
I looked at it pretty carefully, and I shook my head. Admittedly, the QAD product didn’t do what he wanted. But I didn’t like the fit with SAP. I was worried that the product designed for German kiln production just wasn’t going to work. I didn’t want to be right, and I was disappointed to find out that two years later, despite very disciplined and careful efforts, he was back in Germany and QAD was still running the company.
I’m glossing over a lot, of course. There are secondary effects. Very often, the first user of an application dominates its development, so the app will be tuned to that user’s strengths and weaknesses. It will turn out to be brittle for other users because they don’t have the same strengths. Stuff that was easy for the first user then turns out to be hard for others.
Two final points need to be made. First, when a brittle application works, it’s GREAT. It can make a huge difference to the user. Brian Sommer frequently points out that the first users of an application adopt it for the strategic benefit, but later users don’t. He thinks it’s because the benefit gets commoditized. But I think it’s at least partially because the first users are often the best equipped to get the strategic benefit, whereas later users are not. I think you see something of the same issue, too, in many of Vinnie Mirchandani’s comments about the value that vendors deliver (or don’t deliver).
Second, as to the cause of failure. Michael Krigsman often correctly says that projects are a three-legged stool and that the vendors are often blamed for errors that could just as easily be blamed on the customers. Dennis Moore often voices similar thoughts. With brittle systems, of course, they’re quite right; the failure point can come anywhere. But when they say this, I think they may be overlooking how much the underlying design has contributed.
It may be the technician’s fault that he dropped the beaker of nitroglycerin. But whose brilliant idea was it to move nitroglycerin around in a beaker?
Brittle (High-Risk) Enterprise Applications
September 24, 2009
In the last few posts, I’ve been talking about “brittle” applications, applications that just don’t work unless everything goes right. We all know lots of analogues for these apps in other areas: the Ferrari that coughs and chokes if it’s not tuned once a week or the soufflé that falls if you just look at it funny. But it doesn’t occur to most people that many, if not most, enterprise applications fall into the same category.
They don’t realize, that is, that ordinary, run of the mill, plain-vanilla enterprise apps are kind of like Ferraris: it takes a lot to get them to do what they’re supposed to do.
Here are some examples.
Consider, for instance, the homely CRM application. Many, if not most, executive users go to the trouble of buying and putting one in because they want the system to give them an actionable view of the pipeline. Valuable stuff, if you can get it. When the pipeline is down, you can lay people off or try new marketing campaigns. When it’s up, you can redeploy resources.
Unfortunately, though, it takes a lot to get a CRM system to do this. If, for instance, the pipeline data is inaccurate, it won’t be actionable. Say a mere 10% of the salesforce is so good or so recalcitrant that their pipeline data just can’t be trusted. You just won’t be able to use your system for that purpose.
It’s kind of funny, really. Giving you actionable pipeline data is a huge selling point for these CRM systems. But when was the last time you personally encountered one that was 90% accurate?
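To put rough numbers on the problem, here’s a toy calculation; the headcount and deal sizes are invented for illustration.

```python
# A toy calculation; the headcount and per-rep pipeline are invented.
reps = 50
avg_pipeline = 200_000     # hypothetical average pipeline per rep
untrusted = reps // 10     # the 10% whose numbers can't be trusted

total = reps * avg_pipeline
noise = untrusted * avg_pipeline   # their figures could be off by ~100%
print(f"reported pipeline ${total:,}, noise band +/- ${noise:,} "
      f"({noise / total:.0%})")
# reported pipeline $10,000,000, noise band +/- $1,000,000 (10%)
```

A band of plus or minus 10% doesn’t sound like much until you remember that the quarter-to-quarter swing you’re trying to act on is often smaller than that, and that the untrustworthy reps are frequently the ones carrying the biggest numbers.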
Another example I ran into recently was workforce scheduling in retail. A lot of retailers buy these things. But it’s clear from looking at them that the scheduling can be very easily blown.
Finally, take the example I talked about before, MRP. When you run that calculation, you have to get all the data right, or the MRP calculation can’t help you be responsive to demand. If even 30% of the lead times are off, you won’t be able to trust much of the run. And when was the last time more than 70% of the lead times were right?
Notice that if you underutilize a brittle system, it can be somewhat serviceable. If you use the CRM system to follow the performance of the salespeople who use it and don’t try to use the totals, the inaccuracy doesn’t matter. If you just use MRP to generate purchase requisitions, the lead times don’t matter. But, as I said in the earlier post, if you do underutilize the system, the amount of the benefit available falls precipitously. And then the whole business case that justified this purchase and all this effort just crumples.
What makes me think that a lot of enterprise applications are brittle? Well, I have a lot of practical, personal experience. But setting that to one side, it’s always seemed to me that the notion of brittle application explains a lot of data that is otherwise very puzzling.
Take for instance all the stuff that Michael Krigsman tells us about project failures. He is constantly reminding us that the cause of failure is a three-legged stool, that it could be the vendor or the consultant or the company, and he is surely right. What ought to be puzzling about this, though, is why there is no redundancy, why the efforts of one group can’t be redoubled to make up for failings in another. If the apps themselves are brittle, however, it takes all three groups working at full capacity to make the project work.
Or, more generally, look at the huge number of project failures (what’s the number, 40%, are abandoned?). Given how embarrassing and awful it is to throw away the kind of money that enterprise app projects take, you’d think that most people would declare some level of victory and go home, unless the benefit they actually got was so far from what they were hoping for that the whole effort became pointless.
How can you tell whether the enterprise app you’re looking at is brittle? Well, look at the failure rate. If companies in your industry historically report a lot of trouble getting these things to work, or if the “reference” installations that you look at have a hard time explaining how they’re getting the benefits that the salesman is promising you, it’s probably a brittle app.
Two questions remain. When do you want to bite the bullet and put in a brittle app? And why would vendors create apps that are intrinsically brittle?
Next post.
Brittle Applications
August 31, 2009
In a previous post, I said that MRP was a “brittle” application, and a commenter questioned me. What is a “brittle” application? Is this a technical term? What makes MRP brittle? All good questions.
A brittle application is one that doesn’t work at all unless a lot of disparate conditions are met. MRP, for instance, doesn’t work unless all the data is right, people know how to use the program, the demand for the products is stable, purchasing is also committed to minimizing inventory levels, etc., etc.
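One way to see why this matters is to multiply the odds. Here is a back-of-the-envelope model (mine, not a technical definition): if an application only delivers when every one of k independent conditions holds, and each condition is met with probability p, the app delivers with probability p to the kth power.

```python
# Back-of-the-envelope: success requires k independent conditions,
# each met with probability p, so the app delivers with probability p**k.
for p in (0.99, 0.95, 0.90):
    row = ", ".join(f"k={k}: {p ** k:.0%}" for k in (5, 10, 20))
    print(f"p={p:.0%} -> {row}")
# p=99% -> k=5: 95%, k=10: 90%, k=20: 82%
# p=95% -> k=5: 77%, k=10: 60%, k=20: 36%
# p=90% -> k=5: 59%, k=10: 35%, k=20: 12%
```

Twenty conditions at 95% apiece (accurate data, trained users, stable demand, committed purchasing, and so on) and the app delivers barely a third of the time. That, in a nutshell, is MRP.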
The notion applies to a lot of other programs besides MRP, though I’ve rarely heard the term used. But notice that brittleness isn’t so much a feature of the program as a function of the purpose to which the program is put.
Let’s take a simple example: a word processing program. For normal purposes, a word processing program in this day and age is not brittle. A rank novice can use it to type and print. But even today, if you want to use, say, Microsoft Word to put out a 16-page brochure, complete with illustrations, well, good luck, is all I can say. You try to get an illustration and have it float and change the size and put in a table, and–well, just try it, it’s a nightmare. So, to put a 16-page brochure together, Microsoft Word is brittle, but to print out a letter, it’s not.
The point about the MRP program that QAD wrote, which follows the APICS standard religiously, is that it’s brittle relative to the purposes for which it was intended. Pam and Karl and Evan (the founders of QAD) really believed that QAD’s product could do supply chain management for manufacturing facilities very effectively. My point was that the program is too brittle. To get things right, you have to get all the data right and keep it right, etc., etc. And if you don’t, what you have is an overwrought and overcomplicated Kanban system, without Kanban’s virtues.
Are there other enterprise application systems that are brittle? Lots and lots of them, I think. Almost all the old, Siebel-style CRM systems were simply too brittle; they depended too much on the good-will of the salespeople, the accuracy of the sales model embedded in the system, the reliability of the sales cycle, etc., etc. You wouldn’t think that financial systems are brittle–after all, they have to work–but they often had components that were overly brittle: cash management systems, for instance, and fixed asset systems and budgeting systems.
What do most companies do when they have an overly brittle system? They use the system for lesser purposes. And they feel REALLY bad about it. So, the Microsoft Word user makes a brochure that is far less fancy, but more manageable, and the QAD MRP user uses the product for tracking inventory. And both of them keep on saying, “Well, one of these days, I’ll get around to really making this product sing.”
They shouldn’t. Brittle applications are brittle for a reason. A lot of the time, it’s because they’re really a special-purpose product, but yours is not that purpose. Some of the time, they’re brittle because they’re badly designed. Some of the time, the model they’re using (MRP is a good example) just doesn’t fit the situation you’re in. In any of the cases, the fact is that they really can sing for the right user, but that doesn’t mean it’s your fault if they don’t sing for you.
What do you do if you have a brittle app that isn’t singing? Give up on it. It won’t work for you. Get another app, one that works. Or change the process. Or just accept the fact that it will never work the way you thought it would.
In any case, good luck.
Effective Buying Strategies for Enterprise Applications
July 19, 2009
It stands to reason, doesn’t it? The more thorough and rational the buying process for enterprise applications, the better the outcome. For sure. Right?
Well, the other day, Dennis Moore, aka @dbmoore, a well-known figure in the industry, posted a query on Twitter asking for data that would show this is true. The more thorough the buying process, the more effective the implementation; it has to be true, doesn’t it? But no, the guy wants data. “Not anecdotes,” he said later, “but data.”
He isn’t going to get any. There are three reasons for this. First, there isn’t any reliable data of any kind on whether implementations were successful, at least none that I’ve seen in a career of nearly two decades. Second, most effort expended on pre-purchase analysis of software is misdirected, adding little to the quality or accuracy of the decision. And third, if you work backwards from failed implementations and identify the causes of the failures, it is very rare that the cause is the kind of thing that could have or should have been caught by a more thorough analysis.
The fact that there is little reliable data on whether an enterprise application product works is, of course, a scandal, but the fact remains and will continue to remain just so long as enterprise application companies want everybody to believe that the odds of success are high and customers are embarrassed to admit failure.
I have been involved in at least two attempts by large, reputable companies to get a good analysis of what value, if any, has been gained after an enterprise app was implemented. The first interviewed only project managers and determined that the project managers found many, many soft benefits from the implementation. The second, a far larger effort, was eventually abandoned.
But let’s say that we had some rudimentary measure, like number of seats being actively used versus seats planned to be used two years after the initial projected go-live date. Would it show that thorough investigation really helps?
I don’t think so, and here’s why. The question of which software application to buy and/or whether one should buy one at all is usually a very simple question, one with a relatively clear right answer, at least to an objective observer. But it is rarely, if ever, treated as a simple question. People wrongly worry about a lot of irrelevant things; they are (usually) distracted by the salespeople, who naturally want the purchasing decision to be based on criteria most favorable to them. And because there’s a lot of risk (A LOT), people tend to create lengthy, rigorous, formal processes for getting to a decision, processes that do very, very little to improve the accuracy of the final decision.
Honestly, I can usually tell in an hour’s phone conversation what a company ought to do, and I often check back later — sorry Dennis, more anecdotal evidence — and I’d say I’m right at least 2/3 of the time, maybe more. And because of the way my business model works, I don’t even charge for these conversations.
What do you need to look at? Well, it’s a complex question, don’t get me wrong. But because the number of providers is limited, the capabilities are limited, and the likelihood of failure pretty high, there are usually only a few things that actually matter. And when there are only a few things, it shouldn’t take you that much time to figure them out.
Give me a call, any time, if you want to test this out.