Blame the Customer?
July 14, 2010
Who is to blame for IT project failures? My colleague, Michael Krigsman, argues that when IT projects wander into the “IT Devil’s Triangle,” all three participants–the vendor, the integrator, and the customer–are to blame. Michael is very insistent about this; in a recent post on Marin County v. Deloitte, he says, “In my view, it is highly unusual for a project to fail without some culpability on the part of the customer.”
Michael is the guy who almost singlehandedly took IT project failure out of the closet and into the open, and much of his professional life is spent analyzing these failures. For that reason, one should not criticize what he says without good reason. But I’ve always been uncomfortable with his line on things and felt that we would all be better off if we had better analytic tools for this kind of situation. I want to know when the “culpability” is minor and merely contributory and when it is really important or even primary.
One’s normal intuition about such things is that there are broad spheres of responsibility. If I get an artificial hip implanted, it’s up to the manufacturer to provide my doctor with a hip that works, up to the doctor to put it in correctly, and up to me not to abuse it. This is the intuition that underlies a lot of contract law and liability law, and it even applies to very large, complex engineering projects. When the architect found a flaw in the Citicorp building in New York City, a flaw that could have caused it to collapse, the architect and the contractor took responsibility for it and had it fixed. They didn’t blame the customer, and they didn’t ask the customer to adjust his use of the building.
If you apply this intuition to technology, it works pretty well. In the case of IT projects, the software company is responsible for making sure that the design and construction of the product are adequate to the demands that will be placed on it, the integrator is responsible for deploying the product in a way that meets what one might call the normal expectations of the customer, and the customer is responsible for using it in ways that are consistent with the design. If any of them fails to meet the responsibilities in its sphere, then it is culpable. And the others are not culpable (except perhaps in a minor way) if they don’t wish to compensate for that failure.
Now, I think Michael would agree with this, but he would say that, as a practical matter, what happens in these large project failures is that all three parties fail to deliver, so all are culpable. (Michael, please correct me if I’m wrong.)
But it seems to me that doesn’t go far enough. I think you need to be able to figure out when one participant is clearly the most culpable. Perhaps, for instance, one ought to apply a notion of priority, so that if the vendor fails to deliver, the vendor is primarily culpable by definition, and the most that the other two parties can do is provide some minor contribution. (This tends to be the approach in products liability, with some major caveats.) Or perhaps there are some other tools that need to be applied, tools that would allow one to extend (or perhaps even change) the “pox on all their houses” line that Michael takes.
Whatever you do, though, it isn’t easy.
Let me try to illustrate what I mean with an example that has made a lot of news recently: the iPhone 4. Here you had three participants: Apple, AT&T, and the customer. You had a product that “did not work properly.” And you have a dispute about levels of culpability.
The problem, in case you’ve been in Tibet for the last month, is that the iPhone’s reception isn’t very good, for reasons that still aren’t clear. (My wife has one. It’s true; it’s not good.) The customers have basically been told by Apple, “The problem is in the way you hold the phone. Either hold the phone differently, or fork over $25 for a case.” The customers have said, if I may paraphrase, “It’s a phone. You should be able to hold the phone any way you want. It’s up to Apple to give me a phone that works no matter how I hold it.”
Now, the IT Devil’s Triangle analysis says, basically, that the customers are wrong. According to this argument, Apple made a good phone that made some design tradeoffs. (Basically, the antenna was put on the rim of the phone.) If one of those tradeoffs means customers have to hold the phone in a special way, so what? They should either buy themselves a case or hold the phone correctly. If they don’t want to do that or can’t, they are contributing to the problem and are at least as culpable as Apple.
So is AT&T off the hook, here? Under the Devil’s Triangle argument, not at all. The phone service that they provide is weak and erratic, and that makes it very difficult to troubleshoot the phone. (I have experienced this in spades.) Their customer “service” is designed to handle normal problems expeditiously, but is not designed to track and resolve complex problems, and as a result, it makes what could be an irksome experience into something verging on horrible. (I have also experienced this, believe me.) And that means that customers are not as tolerant of the iPhone 4’s design choices as they should be.
It’s something of a conundrum, isn’t it? One’s normal intuition is that Apple should design a phone that people can hold any way they want. But one can certainly make a cogent argument that it’s really the customer, as much as anyone else, who is to blame, because they get mad at Apple, rather than being willing to hold the phone in the correct way.
In the end, the solution to this riddle is a matter of values. If you think (as Consumer Reports does) that it’s up to the vendor to get things right, when it comes to simple things like how one holds the phone, then the Devil’s Triangle argument is simply wrong. If you think (as Steve Jobs and many of my friends who work for vendors think) that the customer is to blame if they can’t deal with the “flaws” that they find in a complex and highly-engineered technology, then you’re going to agree with Michael and assume that there are very few situations where the customer doesn’t have some culpability.
Let me just tell you where I come down on this and hope for some illuminating comments. I think the benefit of the doubt should be given to the user. If the vendor is clear about the design tradeoffs and limitations and the deliverer (integrator) provides service that takes those limitations adequately into account and is clear about that, then I think, “Yes, it’s up to the consumer to deal with the limitations.” If the vendor and deliverer fail to do this–if the vendor isn’t clear about what the design tradeoffs are or if the vendor and deliverer allow you to get the idea that the product will do things (like make phone calls in normal situations when being held normally) that it won’t in fact do–then the onus is on them.
So, for instance, if the building architect makes a mistake in the way the building is designed or the general contractor deviates from the design in a way that turns out to be dangerous, it’s up to them to fix it; the building owners don’t become culpable just because they’re unwilling to limit the number of people allowed on each floor to a number far below what they’d been led to expect. And if the hip designer uses a brittle material and the doctor chips the hip when he or she installs it, it’s not up to the patient to walk less.
Just my view.
So why is this important? It’s important because this is not how the software industry works. It’s normal for vendors to be very close about the design of their technology, for vendor and integrator salespeople to make unrealistic claims about what the technology does or about the benefits to be received, for the people delivering the product to view limitations in delivery as upsell opportunities, etc., etc. So if my view were to prevail, and it were up to the vendor and deliverer to make sure that the product works as you would normally expect and the installation is what it should be, the industry would have to change.
Is that likely? Maybe, maybe not. But it gets more and more likely every time a user asked to adapt to some software’s vagaries asks himself or herself, “Isn’t this like being asked to hold an iPhone with tweezers?”
The Typewriter, the Word Processor, and the PC
April 9, 2010
In yesterday’s post, I argued that ROI would not be an adequate measure of the benefits conferred by new-gen (or pseudo-new-gen) applications like Workday, Business By Design, or Fusion Application Suite. The previous-gen applications were all about automation. The new-gen suites confer real benefits (I think), but not necessarily benefits that fall through to the bottom line.
What benefits are they? Well, they have to do with working more effectively: making fewer errors, putting more time into work and less into busy work, making more accurate decisions, faster. Is there benefit from this kind of thing? Sure. But how do you measure it?
In the post, I suggested a hazy term, “operational effectiveness,” for the benefits one should expect. What is “operational effectiveness”? Let me admit freely that I don’t know for sure. In this post, let me propose an analogy, which should help you to understand what I’m getting at.
The analogy comes out of a historical situation that always posed a problem for ROI analysis: the transition in business from the typewriters that sat on secretaries’ desks to the PCs that sat on executives’ desks. This transition occurred in two phases. First, the typewriters on the secretary’s desk were replaced with big, clunky word processors that sat next to the desk. These word processors automated the secretary’s document production work. Then the secretarial position itself was eliminated, and the typing function became something that executives did themselves on that PC.
The transition to word processors could easily be justified in ROI terms. We could get more work out of the secretary or else hire fewer secretaries. Whether the justification was real is an open question. But it’s certain that that’s how people thought of it.
The next transition was much more problematic for ROI analysis. Expensive executive time was now being put into jobs that had been performed more efficiently by much cheaper labor.
At the time, people didn’t put a lot of thought into figuring out why they were funding this transition. Executives saw the PCs, knew that everyone else was using them, needed them for some functions (e-mail, spreadsheets), and just decided, “We’re doing it this way.” At least in my recollection, that’s what happened.
So were they just loony or lazy or wasting shareholder money on executive perks? I don’t think so. I think what they were plumping for was the same “operational effectiveness” that I’m talking about 25 years later.
True, they spent more time typing. But they also had more control over the final product; they could change the product more easily; and they could distribute it without much overhead. And, at the same time, they were changing the form of what they were doing. They weren’t just producing typed memos; they were producing documents with fancy fonts and illustrations, and they were creating Power Point decks. True, many an executive was spending ridiculous amounts of time fiddling with type sizes to get things on one page, but even acknowledging that, they thought the new way was better.
Indeed, by the time the transition was finished, justification wasn’t even a question, because the new tools had changed the nature of work, and now you couldn’t get along without them. When executives were doing the typing, they stopped creating long reports. More and more of the time, a corporation’s decision-making wasn’t wrapped around a full document (minutes, memos, or formal reports); it was wrapped around Power Point decks.
So by the end of the transition, ROI analysis had become entirely moot. How could you get a tangible measure of benefits when you were comparing apples and oranges?
Could we be seeing a similar transition now? It’s certainly possible. The analog to the word processors is that first generation of enterprise applications, which were justified by the automation benefits they conferred and by ROI analysis. The analog to the PC is the second generation of enterprise applications.
(One caveat. As I’ve said before, I don’t think that Fusion Applications or the versions of Business by Design that I’ve seen are in fact second-generation applications. They’re more like Version 1.3. But they’re close enough to next-gen to raise the problem I’m talking about.)
If the analogy holds and if second-gen apps work as the developers hope, the benefits that businesses are going to experience will be equally hard to get your arms around, partly because the benefits are so subtle and disparate and partly because you’ll see a shift in the way work is done.
Does that mean that we won’t be able to talk about the benefits and we’ll just bull ahead with them? Well, that’s why I’m introducing the notion of operational effectiveness. It does seem to me that we can get clearer about what the benefits are.
So come on guys. Make comments. What is operational effectiveness? And how can we tell whether we are getting it?
Operational Effectiveness
April 8, 2010
This blog post is more an open question than a pronouncement. So please feel free to comment on it or take the idea in a different direction.
I’ve been thinking about the next generation of Enterprise Applications, the value that they (might) bring, and about how people might justify replacing the enterprise applications they have with the new generation.
Generally speaking, you justify an investment in infrastructure using ROI. You invest this much, get this much return. For the first generation of enterprise applications (most everything designed between 1990 and 2003 or later), this made sense, because they were basically automation apps. They automated work done by people. The ROI showed up because you didn’t have to pay people to do it any more.
Now these new applications simply don’t do that, that is, they don’t automate appreciably better than the old applications do. And this means that ROI is a pretty crummy tool for evaluating whether an investment is worthwhile. Yes, there will be ROI if the enterprise application works the way it’s supposed to. But the return will be highly indirect. You won’t be able to fire people and pay for the application.
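To make that concrete, here is a minimal sketch, in Python, of the automation-era ROI arithmetic and of why it breaks down for the new generation. Every number in it (project cost, headcount, loaded cost per head, time horizon) is made up for illustration; the only point is structural: when the return is eliminated labor, the formula works, and when nobody gets fired, it has nothing to chew on.

# A minimal sketch of first-generation ROI math. Every number below is a
# made-up assumption, not data from any real project.

implementation_cost = 2_000_000   # license + integration
heads_eliminated = 10             # clerks whose work the system automates
loaded_cost_per_head = 80_000     # salary + benefits per year
years = 5                         # horizon for the business case

savings = heads_eliminated * loaded_cost_per_head * years
roi = (savings - implementation_cost) / implementation_cost
print(f"First-gen automation ROI over {years} years: {roi:.0%}")   # 100%

# Second-gen apps don't eliminate appreciably more headcount, so the same
# formula has no direct return to count, even if the indirect benefits
# (fewer errors, less busy work, faster decisions) are real.
heads_eliminated = 0
savings = heads_eliminated * loaded_cost_per_head * years
roi = (savings - implementation_cost) / implementation_cost
print(f"Same formula with no headcount savings: {roi:.0%}")        # -100%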
Brian Sommer has been talking about this problem for years–essentially, he points out that automating something that’s already automated doesn’t justify an investment on the same scale. But he has never really explored whether there are other forms of justification.
Nenshad argues in his book and his blog that good performance management enabled by modern tools will get you to a place you want to be, and he tells you a lot about how to do it. And while it is true that these new applications help you manage performance better and that’s one of the reasons you want them, he doesn’t offer a way of thinking about justifying the move to what he recommends. (Nenshad, if I missed this in your book, I’m sorry.)
So what does one use? Well, let me offer a concept and sketch it out in a paragraph or two, and you, my small but apparently very loyal readership, can then take me to task.
I’ll argue that what these new applications really do is improve “operational effectiveness.” What’s that? Well, to start out with, let’s just say that they let each employee put more effort into moving the company forward and less effort into overcoming friction.
Well, that’s suitably hazy. So here are some things that you could measure that would, I think, be indicators that employee effort is more coherent and focused. You could, for instance, take a page from the black belts and measure operational errors or even just exceptions as part of operational effectiveness. Or, you could look at corporate processes that are nominally automated and see whether they are managed by exception. (Truly best, automatable practices should require almost no routine manual actions.)
People sometimes try to look at operational effectiveness by measuring what percentage of revenue is spent on things that feel like pure expense, like IT. So, a company that spends 3% of its revenues on IT is less effective than one that spends 1%. People also try to get at it by trying to identify which activities are “value-added” and trying to get people to do more of them. Both ideas are silly, of course, in themselves. (The 3% company may be spending on stuff that makes it effective, while the 1% company isn’t.) But I think there might be indicators of operational effectiveness that are better. Wouldn’t operationally effective companies spend less time in meetings, send fewer junk e-mails, work fewer hours per employee (!), resolve more customer complaints and deflect fewer, etc., etc., etc.?
You get the idea, I think. So why is this a good measure for the new generation of applications? Because, bottom line, I think that’s what they’ll do. They’ll help organizations and people avoid wheel-spinning, error correction, and pointless processes or rules by getting to what matters, faster.
Of course, people have always accused me of being a ridiculous, blue-sky, naive optimist. But that’s how it seems to me.
What do you think?
And While You Wait? What?
March 31, 2010
A few days ago, I argued that a company that wants to replace its old, limping systems with a brand-spanking-new Oracle or SAP application should probably wait for the next generation of apps.
The post got a lot of approving comments, which surprised me. I think there are pretty good arguments for just hauling off and buying. Consider what one of these hypothetical companies might say in defense of a decision to buy now rather than later.
1. We’ve got the money now and we should spend it.
2. The product we’ll get will be much more reliable.
3. The cost of services surrounding the product will be lower.
4. We don’t know when the new products will be out (with the possible exception of Workday 10).
5. We don’t know what will be in the new product.
Imagine what idiots we’d look like, the buyer might say, if we waited for five years for a product that was no better than what we could buy today and far more incomplete and buggy.
I see the force of this, which is why I thought it was a close call. Ultimately, I did decide it’s better to wait for markedly better products that are coming, despite the risks and delays. But I saw why people would disagree.
So are my commenters intemperate or am I dithering? One test for this is to look at the case of somebody who is not considering a replacement, but is considering a fairly big investment in the existing platform. Maybe they’re considering an upgrade, or maybe they’re considering some extension products, or maybe they want to push their existing installation out to other geographies.
This is a very realistic case. Several companies I know fairly well are considering one or more of these options. One is upgrading to Oracle 11i; another wants to upgrade to Infor LN. Still another wants to extend its QAD implementation to another geography. Still another wants to buy a CRM system from its existing vendor.
For each of these companies, the details matter a lot, but on reflection, I think the same argument applies. It will be hard to get the return on this big new investment in the old platform, because the useful life of what is to you a new product is not long enough to justify the expense.
This raises two questions. First, how does one calculate “useful life”? Most of the companies I’m thinking of believe that the useful life is defined only by how long they’re willing to use the system, and for some companies, that’s pretty long. (There are, after all, big SAP clients who are still using R/2.) I think this is wrong, but I’m going to have to hold off explaining why till a later post.
The second question is, “Assuming that the commenters and I are right, what kinds of investments should companies who have decided to wait actually make?” I have no determinate answers to this, but I think there are some guidelines.
Start with Gavin’s comment on the previous post. Gavin points out that some investments in the Oracle infrastructure and in new Fusion-based products will actually take you toward the next Oracle generation. He is quite right; Oracle designed things this way, and partly because of the problem that I’m describing, they quite deliberately created some ways of investing in products from Oracle without locking yourself into an older technology.
There are many, many caveats, of course. Among other things, if you have an Oracle system now, you might not want Oracle in the future. The three main products I mentioned in the earlier post (Workday, Business by Design, and Fusion Applications) are all highly differentiated, each with its own flaws and virtues. A rational person would do well to look at all of them before deciding to stick with the vendor they have now. (This applies to SAP customers, too.)
Even if you are bent on Oracle, you still may want to take some time thinking through your infrastructure stack before investing in pieces of it. Even the examples Gavin gives, like OpenID, which are likely to be pretty good, may not be right in the long run, and if they aren’t, that will be a lot of trouble and expense gone to waste.
You should note that Gavin’s argument almost certainly will apply to SAP as well. SAP is also working on “hybrid” intermediate solutions, and I’ll bet you dollars to donuts that they’re trying to figure out every way they can to ease the migration to the new system by asking you to make steady, rational investments in products that extend your current capabilities.
But what about customers for whom an infrastructure or extension investment isn’t right? Here, I think there are some interesting arguments, akin to Gavin’s, for small, light cloud-based apps, point solutions designed to solve highly specific problems. I’m not just talking about a CRM app or a call-center app or a recruiting app; I’m talking about things that are very, very specific to your industry, but really powerful, things like Tradestone in the apparel industry.
Another possibility is to spend some time and effort cleaning up your existing installation. This will improve its current effectiveness, extend its useful life, and very possibly lower the cost of the new system significantly. (A lot of the cost of any implementation is cleaning up after the mess left by a system that everybody has given up on.)
There’s a very smart analyst in Europe, Helmuth Gümbel, who has spent a lot of time thinking about this problem. He has a blog, and he also has a conference, Sapience, which goes into these questions at length. If you’re thinking about extending your current ERP to other geographies, a reasonable alternative might be to find lower-cost ERP systems to serve those geographies.
The basic theme running through these latter approaches is that while you’re waiting, you can focus on saving some money and preparing your current installation, thereby making the later transition to a newer technology faster and more affordable.
Wait for the Next Gen? Yes!
March 26, 2010
Is it time to wait? If it isn’t now, then when?
Wait, that is, for the next gen of applications–Workday HR and Financials, SAP Business by Design, or Oracle Fusion Application Suite–rather than go with what’s out there now: PeopleSoft 9, Oracle EBS 11, or SAP Business Suite–all quite good products, but limited in many ways.
My gut says, “Wait.”
Of course, unless you happen to be my gastroenterologist, you shouldn’t care much about what my gut says. So here’s the reasoning behind it, which I think you can adapt to your own purposes.
PeopleSoft, EBS, and BS were all designed in the early ’90s and are now mature. (There will be no fundamental improvements made to any of them.) So they’re roughly 20 years old. Let’s assume that this takes them halfway through their useful life.
Now let’s do some algebra. Assume that the new products have a similar useful life and offer a 30% improvement in overall effectiveness.
Say the net benefit of buying a this-gen system is 1. In that case, the net benefit of a system that’s 30% better and lasts twice as long is 2.6. Now assume that the net cost of not replacing your old system is -.1/yr, which makes it very, very expensive to keep your old system. Even if you have to wait four years for the next-gen system, you’re more than twice as well off (2.2 vs. 1) waiting. Even if the next-gen system costs significantly more than the old one (fairly likely, depending on the vendor), it’s still a big win.
If you e-mail me, I can give you a spreadsheet, and you can run the numbers yourself.
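If you’d rather not e-mail, here is a minimal sketch of that spreadsheet’s arithmetic in Python. The 30% improvement, the doubled useful life, the -.1/yr cost of limping along, and the four-year wait are the assumptions from the paragraphs above; change any of them and see how the answer moves.

# The wait-vs-buy arithmetic from the post, with the benefit of buying a
# this-gen system today normalized to 1. All inputs are assumptions.

def net_benefit_of_waiting(improvement=0.30,      # next-gen is 30% better
                           life_multiple=2.0,     # and lasts twice as long
                           cost_of_waiting=0.10,  # per year on the old system
                           years_to_wait=4):
    buy_now = 1.0
    next_gen = buy_now * (1 + improvement) * life_multiple   # 2.6
    return next_gen - cost_of_waiting * years_to_wait, buy_now

wait, buy = net_benefit_of_waiting()
print(f"wait: {wait:.1f}  vs  buy now: {buy:.1f}")   # wait: 2.2  vs  buy now: 1.0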
You don’t need the spreadsheet, though, to see that the argument is a function of four factors: the relative benefit of adopting next-gen apps (over the life of both apps), the cost of implementing them, the risk of implementing them, and the cost of waiting.
A friend who reviewed this argument offered the following analogy. Let’s say you live in an older house whose roof is leaking, whose pipes are rusty, and whose electrical system is way out of date. Sure, it’s time to move. But what if there were a big tax break coming fairly soon that would allow you to buy a much better house? As long as the break was big enough, my friend says, the best bet for most people is to wait, because it’s a house, houses last a long time, and being in the better house makes a big difference for a long time.
Even if things are pretty bad in the old house, he goes on to say, your best bet is just to fix the immediate problems: repair the roof, add some new wiring, etc.
Clearly, the biggest and most important factor is how much better that house will be. For a conservative company, this may seem to be a big unknown. But really, it’s not. If you look at any of the new-gen apps, the improvements they’re offering are fairly clear. None of them are killer or transformational; they won’t let you fly when you had been walking. They’re just the sort of things that anybody would add now that they have 20 years of perspective on the old designs.
What are those things? Well, better and faster access to data, what the other pundits call “embedded analytics.” The ability to do some level of search, without having to print out reports and trek down hierarchical menus to get to a record. The ability to bring other people into a discussion of a record, by e-mailing it or asking them to approve it or whatever. All of these things can be done in the old system. But it takes longer, is often a pain in the you-know, and is often not done. Systems that will have all these things built in will be systems where each of your employees wastes somewhat less time each day wrestling with a system that was never designed to have the flexibility and accessibility that the web era has taught us to expect from any application we deal with.
None of these is earth-shattering; indeed, I usually call the next-gen apps Version 1.3 because they’re really not that big an advance over the 20-year-old ERP applications that are Version 1.0. (Is a 2.0 coming? I think so.)
But taken in aggregate, I think they’ll make a material difference in your operational efficiency. Enough of a difference to be worth waiting for.
Does this really apply to your situation? What about that risk? What are the chances that you will get the gains that would justify waiting? What about the fact that your company is ready to move now and for you, such a move comes at the right time in your career? All good questions. And in some cases, it may be right to jump. But for most people, the best thing to do is to take steps to reduce the risk and time to benefit.
SAP’s “U-Turn” on Maintenance
January 15, 2010
SAP announced yesterday that it was creating a two-tier support system, effectively reinstating its old Standard Support offering at a slightly increased price. (The new price is 18% of net versus the old 17% of net.)
This has been hailed as a U-Turn by press and analysts, all of which proves something to me: most writers can’t do math.
SAP begins its press release as follows (emphasis mine):
In a demonstration of its commitment to customer satisfaction, SAP AG (NYSE: SAP) today announced a new, comprehensive tiered support model that is being offered to customers worldwide. This support offering includes SAP Enterprise Support services and the SAP® Standard Support option and will enable all customers to choose the option that best meets their requirements.
So let’s look at the choice that’s being offered to customers; after you look at it, you can judge how much satisfaction it’s going to generate.
The cost of Enterprise Support this year is 18.36% of a base number, a number that usually stems from (but may not be identical to) the net amount paid for the SAP licenses that are being supported. So, this year, assuming that the base for a company was $100,000, the total cost of Enterprise Support is $18,360 and the total cost of Standard Support is $18,000.
Next year, the cost of Enterprise Support goes up to 18.9%, increasing to 22% by 2016. That means that in 2011, it is $18,900, and in 2016, it is $22,000. Those of you who are writers, I apologize for all these numbers, I know they do get confusing.
Now to Standard Support. With Standard Support, the percentage is fixed. But the base is not. It is subject to cost of living increases. We don’t know what COLA (cost of living adjustment) SAP will impose. But let’s just say for the sake of argument that it is 3.00%/year. In 2011, the cost will be $18,540.00, and in 2016, it will be $21,493.00.
All this is in a spreadsheet which you are welcome to look at and play with. (In the spreadsheet, I rounded 18.36% down, so the Enterprise Support costs are slightly low.) Assuming you’re not a writer and you want to play with the numbers, here’s what you’ll see. If the COLA is 3%/year, the costs of either kind of support will be very close for a long time to come. If the COLA is 1%, Standard Support will be quite a bit cheaper. And if it is 5%, Standard Support will be quite a bit more expensive.
It’s confusing, I know, but it’s true. If the COLA is 5%, then “18%” support will cost more than “22%” support. If the COLA is 3%, then “18%” support will cost about 2% less than “22%” support. And if the COLA is 1%, then the lower tier of support will cost about 10% less than the upper tier.
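For anyone who wants to check the arithmetic without the spreadsheet, here is a minimal sketch in Python. The $100,000 base and the Enterprise Support percentages for 2010, 2011, and 2016 are the figures quoted above; the steps for the intervening years aren’t spelled out here, so only those three years are compared, and the COLA rates are assumptions.

# Enterprise Support vs. Standard Support on a $100,000 base.
# Enterprise percentages are the ones quoted in the post; the COLA applied
# to Standard Support is the unknown, so we try three assumed rates.

BASE = 100_000
ENTERPRISE_PCT = {2010: 0.1836, 2011: 0.189, 2016: 0.22}

def standard_support(year, cola, start_year=2010, pct=0.18):
    # Fixed 18%, but the base compounds with the cost-of-living adjustment.
    return BASE * pct * (1 + cola) ** (year - start_year)

for cola in (0.01, 0.03, 0.05):
    print(f"COLA = {cola:.0%}")
    for year, pct in ENTERPRISE_PCT.items():
        print(f"  {year}: Enterprise ${BASE * pct:,.0f}  "
              f"vs  Standard ${standard_support(year, cola):,.0f}")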
So what will it be? 5%? 3%? 1%? 0%? At the press conference, SAP didn’t say. There is no commitment to impose these increases, and there is no commitment not to impose them. SAP, according to Léo Apotheker, “[has] the liberty of linking Standard Support [] to the cost of living index.” (Thanks, Information Week.) Whatever their decision, the imposition of COLA will not be uniform. The cost of living index is the index for the country whose currency is the master currency for the contract, and the actual linkage to that index depends on the contract language, which varies.
So what are we to think? Whatever SAP is doing, it is not a U-Turn, and it is not a rescission of the price increase. It is offering customers a new choice, which I’ll characterize as follows:
1. Return to Standard Support and get less support than you got with Enterprise Support (though how much less is unclear) and price increases in the form of COLA (though how much increase is unclear).
2. Stay with/sign up for Enterprise Support and get more support (how much more is unclear, but I’ve been posting on this and will post more) and definite, clear price increases in the form of increases in the maintenance percentage.
It’s a choice. But is it really the sort of choice customers want?
And is offering this choice really an example of a commitment to customer satisfaction?
SAP, Siemens, and Maintenance Discounts
October 27, 2009
If you were one of SAP’s biggest customers and you found out that SAP was giving big discounts to another big customer, pretty much because they asked for it, what would you do?
Assuming you have at least a room-temperature IQ, that is.
Wait a minute. Let’s be democratic. If you were one of Oracle’s biggest customers and you found out that SAP was discounting maintenance for the asking, what would you do? I mean, you’re an Oracle customer; you definitely have a room-temperature IQ.
Still not sure what to do? Here’s a hint. The phone number at SAP headquarters is 49/6227/7-47474. At Oracle, it’s +1.650.506.7000.
“Wait a minute, wait a minute, wait a minute,” I hear you saying. “SAP didn’t start handing out discounts, did they? They raised maintenance prices; they didn’t lower them?”
Perhaps. But let’s try to apply that IQ of yours.
As I’m sure you know, it’s been bruited about in the media that Siemens was seriously considering dropping its maintenance contract with SAP, starting January 1 of this year. Their plan was to have a third party provide maintenance, possibly either IBM or Rimini Street. (For a representative summary of the situation, as reported in the press, see this MarketWatch report.)
So what happened? As all of you big SAP customers realize, Siemens had to make a decision by September 30. Well, here’s what we know. About a week ago, SAP issued a press release, saying that Siemens had in fact re-upped its maintenance contract for three years.
Case closed, right? SAP doesn’t ordinarily announce maintenance renewals, but the underlying tone was, “Well, we’ve read the stuff in the press, too, so let’s deal with those scurrilous rumors, and issue a press release. After all, Siemens didn’t just come back. They bought more.” End of story?
Maybe. But you’ll notice that the press release doesn’t actually say anything about how much they paid for the maintenance. Indeed, there’s a funny little line about, “based on SAP’s maintenance standards for large customers,” which seems to demand some explanation.
So, let’s pursue it a little further. Is there any further information anywhere about what Siemens actually paid? About the same time as the press release, a post appeared on the Sapience blog. The post said that Siemens had been paying 30 million euros pre-deal and was now paying 18 million euros, plus some other concessions. If you value the concessions at zero, this is a roughly 40% discount.
Sapience is written by Helmuth Gümbel, an industry analyst who has been following SAP for longer than I’ve been in the business. Helmuth is not a disinterested party here; he offers consulting on how to pay less in maintenance. But he’s also a well-respected figure, a person who doesn’t just say whatever he feels like saying, true or not.
[Full disclosure: Helmuth is also a person I regard as my friend, someone whom I see socially on the rare occasions when he’s in town.]
So what is one supposed to believe? On the one hand, you can say, “Why believe an isolated blogger, especially when he has an axe to grind?” Then you assume that the press release is giving you basically the right idea about what happened. On the other hand, you can say, “Where there’s smoke, there’s fire,” and assume that Helmuth (and the Enterprise Advocates, a group that discussed the Siemens situation in its recent webcast) have to have roughly the right version of the truth.
Helmuth isn’t the only source of smoke here. Kash Rangan, an investment analyst at Bank of America/Merrill, estimated recently that 20-25% of customers get discounts on their maintenance payments. (The relevant figures are reproduced on the deal architect blog.)
No full disclosure required here. I have only a nodding acquaintance with Kash.
Of course, all this can be pretty muddy. American accounting rules tend to make it difficult for companies to give direct discounts on maintenance; basically, if a maintenance agreement is part of the initial license contract and the stated price of maintenance isn’t supported by objective evidence, companies are supposed to recognize the license revenue ratably, not all at once. If you give discounts, then your ability to demonstrate that the stated price is supported by objective evidence is called into question. So, contra Kash and Helmuth, you could argue that SAP can’t be giving out discounts, because doing so would screw up their reporting.
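Here is a toy illustration of that accounting point, with entirely made-up numbers; it isn’t an accounting opinion, just the shape of the problem: without objective evidence for the maintenance price, the license revenue gets spread over the contract term instead of being booked up front.

# Toy numbers only. The point: losing "objective evidence" for the stated
# maintenance price forces ratable recognition of the license fee.

license_fee = 1_000_000
maintenance_term_years = 5

year_one_with_evidence = license_fee                              # all up front
year_one_without_evidence = license_fee / maintenance_term_years  # ratable

print(f"Year-one license revenue, price supported:   ${year_one_with_evidence:,.0f}")
print(f"Year-one license revenue, evidence in doubt: ${year_one_without_evidence:,.0f}")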
But of course there are ways of discounting maintenance without actually charging less than the stated price. There is, for instance, a long, long history in the software business of handing out free seats, instead of cash, when customers are unhappy. (Both Helmuth and a cynical reading of the press release suggest that something of the sort may be going on here.) If pressed, vendors have also been known to reduce the basis for the maintenance charge and also to fiddle around with start and stop dates. I’m not saying that’s going on here–I don’t know–but I’ve been told by reliable sources that it has been done, at least by some companies.
If that were the case here, then Helmuth’s way of characterizing it is really the only sensible way to figure out what’s going on. You look at your outflow before. Then you look at your outflow afterward. The difference gives you a gauge of what the discount is.
So did Siemens get a discount? The plain fact is that we don’t know for sure, even with all that IQ, and won’t know unless SAP and Siemens agree to tell us, and even then we won’t know, because the one thing we can be sure of is that SAP will present the case in a way most favorable to them.
So, if we don’t know for sure, and yet it seems possible that in fact SAP is giving discounts, what should you do? Well, I have a suggestion. It’s 49/6227/7-47474. See what they say.
But don’t hide it. Post what they say right here. If SAP really is holding the line on discounts, well and good. But if they’re giving them out, don’t you think it’s time for you to get in line?
SAP Maintenance and the Sol Man (I)
October 22, 2009
With Siemens asking for and apparently getting a huge break on SAP maintenance costs, it is time to take a look once again at the whole issue. Understandably, there is a lot of emotion and name-calling and confusion around this; it’s not quite the health-care debate here in the United States, but the real issues have been buried under rhetoric in a strikingly similar way.
SAP Charges for Improved Support
Let me first state SAP’s position as sympathetically as possible. SAP believes (quite correctly) that the customer’s TCO (total cost of ownership) ought to go down, as software and hardware gets better and cheaper. It also believes that if it does something that would help TCO go down, it should get a share of the benefits.
Who would disagree with either point?
Roughly two years ago, therefore, it introduced a series of software and support improvements that it believed would indeed reduce TCO. These improvements largely revolved around a newly improved version of the Solution Manager, a piece of software that is supposed to do what its name implies.
The Solution Manager (or Sol Man, as it is called familiarly) was actually introduced roughly ten years ago. In its original version, it was a separate piece of software (one that ran on its own Windows box) that one used to communicate with SAP support (filing bug reports, etc.) and to monitor the performance of your SAP installation.
The version introduced two years ago, the Sol Man EE (or enterprise edition), did considerably more: it allowed you to do more extensive monitoring, test and manage upgrades, and even document your business processes. This new edition also beefed up the connectivity with SAP Support, so that support could use it to troubleshoot your installation more rapidly and effectively.
When SAP introduced its new, higher-priced Enterprise Edition support package, it placed the Solution Manager EE front and center. In every speech and every press release, it wasn’t, “We’re raising prices because Oracle got away with it.” It was, “We’ve developed new tools and support services based on those tools, and we’re increasing the cost of maintenance, because the maintenance has improved.”
The tools it was referring to were the various components of the Sol Man EE, and the improved support services were made possible by and delivered through the Sol Man.
To sum up, SAP’s position is that it is improving enterprise support by providing customers with new and better support tools. It is in the software business. So it is only reasonable for it to charge for those tools. That it charges via a maintenance price increase rather than by charging for the product itself is reasonable, presumably, because many of the benefits involve improved services, which are provided through the maintenance contract.
The Customer Reaction
It’s just a plain fact, of course, that nobody paid much attention to this. It took me, for instance, almost a year (until John Krakowski’s excellent presentation at ASUG last May) to figure out what SAP was getting at when it talked about new tools. (Before that, I wrongly but honestly believed that the talk about tools was pure hand-waving.)
Other commentators on this, like Vinnie Mirchandani or Ray Wang or Dennis Howlett, may have gotten to a proper understanding of the argument faster than I did, but for the most part, they didn’t try to address its merits.
Today, for instance, the Enterprise Advocates gave a webinar on Reducing SAP Maintenance Costs. (The Enterprise Advocates include the aforementioned three, plus Frank Scavo and Oliver Marks.) Not once during the main body of the talk did they even mention SAP’s recommendation for reducing maintenance support costs, which is to implement the Solution Manager and use it.
I don’t blame the Advocates for this; it’s not their job. But I do blame SAP. If the Sol Man is what justifies the price increase, then SAP needs to explain this in clear language.
Once SAP fails to do this, the Advocates and the SAP customer base are entitled to do what you and I would do when somebody offers an unclear explanation for something that seems to require one: dismiss the explanation that’s offered.
At some point, though, it does seem that someone should give SAP the benefit of the doubt and ask the question that SAP wants you to ask, namely, “Can the Sol Man EE deliver so much benefit that it justifies the maintenance price increase?”
If the answer is, “Yes,” that would of course be the best thing all around. Customers would have a clear path to reducing maintenance costs. SAP would have a product that keeps its customers paying maintenance. Total cost of ownership would go down.
And the Answer Is…?
Over the past four months, I’ve spent a fair amount of time finding out what I could about the Sol Man. I don’t have access to the documentation (all 1000 pages of it), but I do have the more public documents that SAP has issued, and I have talked to a number of Sol Man users and consultants.
What I found out is complicated, and this post is already too long. So I’ll delay a full report to another post. But here’s the answer in a nutshell:
1. To get the benefits of the Sol Man absolutely requires significant investment on the part of the customer.
2. It does not appear that the Sol Man was designed with the goal that SAP now has for it top of mind. It appears to be a product that was designed to be one thing and is now being turned to a different purpose.
3. The areas of benefit that the Sol Man promises are indeed important, and it is at least possible that customers can get significant benefit from it, if they put in the work.
4. At the end of the day, though, it appears to this humble observer that SAP needs to put more skin in the game.
Hope all this whets your appetite for the next post on the subject.
Selling Brittle Applications
October 7, 2009
Has it ever occurred to you that software salespeople get a bad rap? In the popular imagination, they have blood dripping from their fangs. But in my experience, that’s not what they’re like at all. The ones I know are pretty decent folk; they live in the suburbs, and maybe they even coach soccer. Not one of them would actually throw their mothers under a bus in order to get the deal. At least, I don’t think they would.
So what accounts for this reputation? Well, in the course of writing this series of blogs on brittle apps, I think I’ve come up with an answer of sorts.
Start with a fact that you should know, but maybe haven’t paid much attention to. Good software salespeople don’t sell software. They sell benefits. They don’t try to confuse you with features or functions or architecture; they try to make you understand what those things will do for you.
This is what they should be doing. After all, the buyers of software don’t really know about or understand what the features do, and they don’t have the patience to figure out exactly how the features produce the benefits. You’re a CEO or CFO, you want to get to the point. What is the value that these products deliver?
Software salespeople have developed this reputation, I think, because people have discovered or heard or found that the benefits aren’t available. And they think the salesperson knew this all along and, like some snake-oil salesman, was promising a cure for cancer in order to wrest the last dollar out of some old lady’s handbag.
But in my experience, that’s not what’s going on at all. The salespeople actually have a fairly rational, if optimistic view of their product. They know that the products are designed to achieve certain benefits; they can see themselves how the design could work; and they probably know of customers who have achieved the benefits they should achieve. At worst, they’re like a Ferrari or a Jaguar salesman, who focuses on the riding experience and doesn’t feel it is incumbent upon him to bring up the repair records.
People like Dennis Moore would go even further in the software salesman’s defense, and perhaps they’re right. They would say that the salesperson is more like a good physical trainer, who can genuinely promise to get you in better shape if you’ll work out. If after you buy, you don’t want to put in what it takes to get the benefits, that’s your problem not theirs.
So is Dennis right (assuming he would agree with the words I’m putting in his mouth)? When somebody buys software and underutilizes it or puts it on the shelf, is it entirely their problem? Well, no. I think it would be if they realized how brittle these applications really are. But in my experience, they rarely do. They buy this Ferrari, go out on a Sunday afternoon, and only when they find themselves riding back next to the tow truck driver do they find out what they’ve bought.
So whose problem is it? Well, I’ve already gone on too long. You’ll have to wait until the next post.
Why Develop Brittle (High-Risk) Apps?
September 25, 2009
“Brittle” design isn’t limited to enterprise application software. You can find brittle design in cars, bridges, buildings, TVs, or even the vegetable bin. (What are those 1/2-pint boxes of $5.00 raspberries, 3/4 of which are moldy, but examples of brittle design?)
What do brittle designs have in common? The designer chose to accentuate high performance at the expense of other reasonable design parameters, like cost, reliability, usability, etc. A Ferrari goes very, very fast, and it feels good when it goes fast, and that’s a design choice. And it’s part of the design choice that the car requires a technician or two to keep it going fast for longer than an afternoon.
So why are most enterprise applications brittle? You can see this coming a mile away. It was a design choice. The enterprise applications in question were designed to be the Ferraris of their particular class of application. They were designed to do the most, have the most functionality, be the most strategic, appeal to the most advanced early adopters, be the most highly differentiated, etc.
To get Ferrari-like performance, they had to make the same design choices Ferrari did. They had to assume the application was perfectly tuned every time the key was turned, and they had to assume that the technicians were there to perform the tuning.
Enterprise applications, you see, were intended to run on the best, the highest-end machines (for their class). They were intended to be set up by experts. They were intended to be maintained by people who had the resources to do what was necessary. They were intended to satisfy the demands of good early-adopter customers who put a lot of pressure on them, with complex pricing schemes or intricate accounting, even if later on, it made setting the thing up complex, increased the chance that there were bugs, and made later upgrades expensive.
This wasn’t bad design; it was good design, especially from the marketing point of view. The applications that put the most pressure on every other design parameter got the highest ratings, attracted the earliest early adopters, recruited the most capable (and highest-cost) implementers, etc., etc. So they won in the marketplace and beat out other applications in their class that made different design choices.
I lived through this when I worked at QAD. At QAD, the founder (Pam Lopker) made different design choices. She built a simple app, one that was pretty easy to understand and pretty easy to set up and did the basics. And for about two years, shortly after I got there, she had the leading application in the marketplace. And then SAP and JDE and PeopleSoft came in and cleaned our clock with applications that promised to do more.
Now, none of the people who actually bought SAP instead of QAD back then or chose (later) to try to replace QAD with SAP did this because they really wanted high performance, per se. They wanted “value” and “flexibility” and “return on investment” and “marketable skills.” They literally didn’t realize that the value and flexibility came at a cost, that the cost was that the application was brittle and that therefore, the value or flexibility or whatever was only achievable if you did everything right.
If they had realized this, would they have made different choices?
I don’t know. I remember a company that made kilns in Pittsburgh that had been using QAD for many years. The company had been taken over by a European company that used SAP, and the CIO had been sent over from Germany to replace the QAD system with the one that was (admittedly) more powerful. He called me in (years after I had worked at QAD) to help him justify the project.
I looked at it pretty carefully, and I shook my head. Admittedly, the QAD product didn’t do what he wanted. But I didn’t like the fit with SAP. I was worried that the product designed for German kiln production just wasn’t going to work. I didn’t want to be right, and I was disappointed to find out that two years later, despite very disciplined and careful efforts, he was back in Germany and QAD was still running the company.
I’m glossing over a lot, of course. There are secondary effects. Very often, the first user of an application dominates its development, so the app will be tuned to that user’s strengths and weaknesses. It will turn out to be brittle for other users because they don’t have the same strengths. Stuff that was easy for the first user then turns out to be hard for others.
Two final points need to be made. First, when a brittle application works, it’s GREAT. It can make a huge difference to the user. Brian Sommer frequently points out that the first users of an application adopt it for the strategic benefit, but later users don’t. He thinks it’s because the benefit gets commoditized. But I think it’s at least partially because the first users are often the best equipped to get the strategic benefit, whereas later users are not. I think you see something of the same issue, too, in many of Vinnie Mirchandani’s comments about the value that vendors deliver (or don’t deliver).
Second, as to the cause of failure. Michael Krigsman often correctly says that projects are a three-legged stool and that the vendors are often blamed for errors that could just as easily be blamed on the customers. Dennis Moore often voices similar thoughts. With brittle systems, of course, they’re quite right; the failure point can come anywhere. But when they say this, I think they may be overlooking how much the underlying design has contributed.
It may be the technician’s fault that he dropped the beaker of nitroglycerin. But whose brilliant idea was it to move nitroglycerin around in a beaker?