September 17, 2010
Over the past 20 years, the American Airlines nonstop between BOS and SFO has been a fixture of the software industry. Almost any time you took it, you’d see friends, and even when you didn’t, you’d be sure you hadn’t boarded the wrong plane, just because of the number of Oracle bags.
This flight is no more. A little more than a month from now, American Airlines will end nonstop service between SFO and BOS.
Why? The commuters left American for JetBlue and Virgin America, for better seats, better entertainment, fewer niggling charges, and maybe even better treatment. I saw this myself when I actually got an upgrade on American a few weeks ago, and I confirmed it by talking to a frequent commuter who had gone over to JetBlue even though he had more than a million and a half miles on American.
I wonder. Is this how other empires crumble? In my world, you have SAP and Oracle, seemingly as unassailable as the Great Wall of China, but both offering an experience that has been compromised by excess investment in an aging infrastructure, a labor force that has been asked to shoulder most of the burden imposed by this infrastructure, and a management that grew up in the good old days and still (my guess is) secretly longs for them.
For a long time, people like myself stayed with American. And while that was happening, the management could fool itself. Leather seats? No fees? These don’t matter to our loyal customer base, so we don’t need to invest the boodle that we don’t have in matching the competition. Until, of course, the day it did matter, when there was nothing they could do.
What are the moral equivalents of leather seats and fee-less flying in the enterprise applications market? Oh, it’s any number of things. It’s the fact that search, if it’s there at all, is just about as cumbersome as search was on AltaVista. It’s the fact that doing anything, anything unfamiliar requires the equivalent of a Type III license and just about as much training. It’s the endless, endless wait for even the most trivial or needed changes in the system. It’s the fact that you can’t use them on an iPhone or an iPad or even a Mac. (It’s amazing how many enterprise application companies are IE only.)
Please feel free to add to the list. It’s kind of fun.
Oh, people can and will put up with it for a really long time. But eventually, if the competition is smooth and persistent, they can march into a market and take it from the behemoths. I think you’re seeing this today where Workday is replacing PeopleSoft 7.5, and you saw it a few years ago when Salesforce was replacing Siebel. It does happen.
Now if only I could remember my JetBlue frequent flyer number.
June 4, 2010
Once again, we have a flurry of posts/announcements about the apparently mysterious Solution Manager. Panaya starts things off with a survey of Sol Man customers, who apparently don’t get much (?) out of it and don’t understand it. My much-opinionated colleague, Dennis Howlett, publishes a post [I am mentioned, thank you, Dennis] accurately going back over the history and suggesting that SAP needs to do better. Tom Wailgum publishes a post asking why SAP’s account of Sol Man benefits and Panaya’s account are so different.
It’s all a little confusing, partly because Panaya’s survey isn’t well-structured and partly because SAP itself does as much as it can to throw oil into the waters. (This used to be a quite different metaphor.)
So let me try to clear things up. Here’s what is known to any analyst or user who does his homework. The Solution Manager is a big product, which tries to do lots of different things. Some it does OK or pretty well, which means SAP is right. Some it does poorly. Like most software products. Here is a rough, off-the-top-of-my-head list of things the Sol Man does.
1. The Sol Man downloads patches that come from SAP. It does this pretty well, and it had better, because it is supposedly the only way to get patches from SAP.
2. The Sol Man gives you a fairly straightforward way of installing those patches and some templates/test cases that are supposed to help you test them. This, too, works fairly well.
3. The Sol Man does some basic SAP system cleanup and monitoring. It finds code that’s not being used, frees up disk space, identifies code you are using that’s been superseded (possibly), etc., etc. This works; some people use it, some don’t.
4. The Sol Man does some SAP performance monitoring. (Jim Spath, an SAP mentor, is the expert on this aspect of the product.) Spath has not spent a lot of time praising the product, and from what I’ve seen of it, there’s a good reason for this. [Addendum: Jim has also recently posted on the Sol Man's abilities to help with cleanup.]
[Note on this last point. Performance monitoring is a very complex area, because SAP performance and other system performance are deeply interlinked. There are actually many products out there that do some kind of performance monitoring, usually not just confined to SAP, and many SAP customers already use these products. So no matter how good or bad the product, SAP's entry into this area is going to be problematic in a way that patch installation is not going to be.]
5. The Sol Man provides some tools that should allow SAP to deliver better support because the SAP organization can use these tools to get a better understanding of what your whole system does. To get any benefit at all from this, however, you need to spend some time documenting your system for SAP. If a lot of customers didn’t do this or didn’t know about it, I wouldn’t be surprised. For more on the benefits that might accrue from this, talk to my favorite Australian SAP Mentor, Tony de Thomassis.
6. The Sol Man can help you improve your business processes, with help from SAP, as long as you first document your business processes. This help could take various forms. The Sol Man could help eliminate redundant customizations, find process bottlenecks that SAP upgrades can fix, do root cause analysis of performance issues, etc., etc. Could this benefit you? If you ask Tony, the answer is “Yes,” but bear in mind that this is a lot of work.
Now how do I know all this, especially what works and what doesn’t? Well, I didn’t do anything special. To find out what the Sol Man does, I went to a session at Sapphire 2009 (SapphireThen?) and then went to a really interesting session at the Influencer Summit. To find out about the benefit, I mostly relied on studies that SAP and SUGEN did last year, whose results were described at the summit. I’ve published this in a couple of blog posts, and both Jim and Tony have done much more than I to get the word out.
I don’t fault Panaya or Dennis or Tom for not having done this research. Why should they? But I do think that SAP could have done one thing very easily that would have helped everyone involved. Publish that study they told us about at the Influencer Summit. In it, the benefits are laid out fairly clearly and accurately. And then Dennis and Tom and Panaya and I could start arguing about how much those benefits amount to and what could/should be done to improve them.
April 9, 2010
In yesterday’s post, I argued that ROI would not be an adequate measure of the benefits conferred by new-gen (or pseudo-new-gen) applications like Workday, Business By Design, or Fusion Application Suite. The previous-gen applications were all about automation. The new-gen suites confer real benefits (I think), but not necessarily benefits that fall through to the bottom line.
What benefits are they? Well, they have to do with working more effectively: making fewer errors, putting more time into work and less into busy work, making more accurate decisions, faster. Is there benefit from this kind of thing? Sure. But how do you measure it?
In the post, I suggested a hazy term, “operational effectiveness,” for the benefits one should expect. What is “operational effectiveness?” Let me admit freely that I don’t know for sure. In this post, let me propose an analogy, which should help you to understand what I’m getting at.
The analogy comes out of a historical situation that always posed a problem for ROI analysis: the transition in business from the typewriters that sat on secretaries’ desks to the PCs that sat on executives’ desks. This transition occurred in two phases. First, the typewriters on the secretary’s desk were replaced with big, clunky word-processors that sat next to the desk. These word-processors automated the secretary’s document production work. Then, the secretarial position itself was eliminated, and the typing function became something that executives did themselves on that PC.
The transition to word processors could easily be justified in ROI terms. We could get more work out of the secretary or else hire fewer secretaries. Whether the justification was real is an open question. But it’s certain that that’s how people thought of it.
The next transition was much more problematic for ROI analysis. Expensive executive time was now being put into jobs that had been performed more efficiently by much cheaper labor.
At the time, people didn’t put a lot of thought into figuring out why they were funding this transition. Executives saw the PCs, knew that everyone else was using them, needed them for some functions (e-mail, spreadsheets), and just decided: “We’re doing it this way.” At least in my recollection, that’s what happened.
So were they just loony or lazy or wasting shareholder money on executive perks? I don’t think so. I think what they were plumping for was the same “operational effectiveness” that I’m talking about 25 years later.
True, they spent more time typing. But they also had more control over the final product; they could change the product more easily; and they could distribute it without much overhead. And, at the same time, they were changing the form of what they were doing. They weren’t just producing typed memos; they were producing documents with fancy fonts and illustrations, and they were creating PowerPoint decks. True, many an executive spent ridiculous amounts of time fiddling with type sizes to get things on one page, but even acknowledging that, they thought the new way was better.
Indeed, by the time the transition was finished, justification wasn’t even a question, because the new tools had changed the nature of work, and now you couldn’t get along without them. When executives were doing the typing, they stopped creating long reports. More and more of the time, a corporation’s decision-making wasn’t wrapped around a full document (minutes, memos, or formal reports) at all; it was wrapped around PowerPoint decks.
So by the end of the transition, ROI analysis had become entirely moot. How could you get a tangible measure of benefits when you were comparing apples and oranges?
Could we be seeing a similar transition now? It’s certainly possible. The analog to the word processors is that first generation of enterprise applications, which were funded by the automation benefits they conferred and justified by ROI analysis. The analog to the PC is the second generation of enterprise applications.
(One caveat. As I’ve said before, I don’t think that Fusion Applications or the versions of Business by Design that I’ve seen are in fact second-generation applications. They’re more like Version 1.3. But they’re close enough to next-gen to raise the problem I’m talking about.)
If the analogy holds and if second-gen apps work as the developers hope, the benefits that businesses are going to experience will be equally hard to get your arms around, partly because the benefits are so subtle and disparate and partly because you’ll see a shift in the way work is done.
Does that mean that we won’t be able to talk about the benefits and we’ll just bull ahead with them? Well, that’s why I’m introducing the notion of operational effectiveness. It does seem to me that we can get clearer about what the benefits are.
So come on guys. Make comments. What is operational effectiveness? And how can we tell whether we are getting it?
January 20, 2010
I argued in the previous post that SAP’s new, “two-tier” pricing system for maintenance offers customers less choice than meets the eye, and commentators like Dennis Howlett agree.
So why did they bother? If one offering is “good support for a fair and reliable price” and the other is “less good support for roughly the same price (only no one will really know for six years),” why would anyone pick the latter? And why would SAP risk a public-relations nightmare when the people who pick the apparently lower-cost alternative find that they’ve been snookered?
Is it just that SAP needs to offer the enterprise application version of “small coffee,” the coffee size that nobody ever orders, but you need on the menu, so people will order medium?
The question is particularly salient because SAP has data that, one could at least argue, shows that Enterprise Support really is better.
This data comes out of a program embarked on last summer, sponsored by SUGEN (the SAP User Group Executive Network) and SAP. In this program, companies were put on Enterprise Support, the benefits thereof were measured against 11 benchmarks, and the sum of those benefits was rolled into something called the SUGEN KPI Benchmark Index. SAP had vowed not to raise the cost of Enterprise Support until this program showed a gain in the Index that justified the increased cost of Enterprise Support.
SAP reported the results at the Influencer Summit last December, which I attended. According to the numbers they showed me, the SUGEN KPI benchmarks had indeed been achieved.
These numbers were disclosed to me on the condition that I not repeat them until the full results were published, and while I’ve been given informal permission to speak about them, I will try to respect this request.
I think, though, that I can convey a fairly accurate idea of what is going on without actually citing the numbers.
Before I can get to this, though, I need to explain something about the program and the expectations that people had for it. When SAP first announced a new, improved class of support at an increased price, which all customers were required to use, many customers thought that this was just price-gouging. They didn’t believe that SAP’s across-the-board price increase for maintenance would be accompanied by any benefits. When SAP started hearing from these customers, they were clearly taken aback, since the executives in charge of this new support program clearly did (and do) believe in its benefits.
So SAP (and SUGEN, the customers’ self-appointed representatives) agreed to put the question to an empirical test.
Now anybody reading the press release about this program or anybody attending the journalist session at last May’s Sapphire (as I did) would believe that this test would be done along traditional social science lines. A representative sample of the SAP customer base would be given the opportunity to take advantage of Enterprise Support, and the benefits would be measured.
After attending that session, I told my client base (people who are professionally interested in tracking what SAP does) that it was impossible for this program to show so much benefit that it would justify the across-the-board increase. The reason was simple. To get the available benefit from Enterprise Support, a customer must get and install a software product called the Solution Manager and must then do a lot of process documentation and process modification. A representative sample of SAP customers simply wouldn’t include very many customers who had done all this installation, documentation, and change, because the total amount of work was considerable, and most customers weren’t going to do it, at least not any time soon.
Isn’t it sort of squirrelly, expecting Enterprise Support customers to get software, install it, and then do a fair amount of work before they get the benefit that SAP promised them? Well, yes, but it isn’t quite as squirrelly as it sounds.
At the Influencer Summit, Uwe Hommel, the person behind this idea, expressed it roughly this way. A lot of customers don’t really run support as well as they could. The Solution Manager provides them with a framework for the practices that they should be using, plus it enables SAP support personnel to give better, more accurate, and faster help, because the Solution Manager gives them better information about what was going on at a customer site. It would be nice if SAP could wave a magic wand and improve support without any effort from the customers. But that just can’t happen. All we can do is provide a framework.
As far as Hommel is concerned, what SAP is saying is roughly what the trainer at the gym offers. “We’ll make you better, bigger, stronger, and leaner, but of course you have to do your share.”
Fair enough, of course. But that’s not actually what SAP said. SAP actually said something more like, “You need to do support better, and to do it better, you’ll need a trainer and you’ll have to put in some effort, but oh, by the way, you have to pay for the trainer whether or not you actually get around to going to the gym.”
Perhaps the oddest thing about the test that SUGEN and SAP ran is that both parties pretended that SAP had said the first thing and not the second.
You see this, for instance, in the way they [SUGEN according to Myers, below] chose the subjects for the test. Rather than choosing the representative sample of the customer base that I was told they would choose, they asked for volunteers to apply for the program. 140 customers did apply; of those, only 56 were chosen for testing. This, of course, simply guaranteed that the test would not prove what SAP wanted it to prove, that the price increase was justified. At best, it would prove that those customers who decided to go to this metaphorical trainer would get some benefit from it.
So, did they get some benefit?
Well, um, uh, sort of.
As I said above, SAP and SUGEN agreed before the test that there were 11 areas where benefit might be provided. The areas ranged from the obvious (fewer outages, faster problem resolution, fewer problems) to the less obvious but still important, like more efficient CPU utilization and better use of disk storage.
In the actual test, the benefits of Enterprise Support were measured in only 6 of these 11 areas. The areas chosen had to do with total cost of operations (use of CPU and storage), the cost and effectiveness of managing patches, and the extent to which customers used SAP’s current software effectively. Clearly, this made things harder for SAP: they were trying to prove benefit, but benefit was actually measured in less than half the areas where it might be available.
Nevertheless, SAP thinks that it succeeded, and technically they did. They measured benefit by giving the SUGEN Index an arbitrary value of 100. The way I understood it at the session, the aim was to show that the increased benefits at least offset the roughly 7.6% price increase in 2009. [According to Myers, below, the actual aim was 4%].
Both aims (what I thought was the aim and what Myers said was the aim) were actually achieved. The benchmark index dropped by 6.89 percentage points. Even though only 6 benchmark areas were measured, the benefit achieved did offset the 7.6% increase (at least within 1 percentage point).
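The offset arithmetic here is simple enough to sketch. In the snippet below, the starting index value of 100 is the arbitrary baseline described above; the 6.89-point drop, the 7.6% increase, and the 1-point tolerance are the figures from this post, and everything else is an illustrative assumption rather than SAP's published methodology.

```python
# Sketch of the SUGEN-style offset check described above.
# Only the 6.89-point drop, the 7.6% increase, and the 1-point
# tolerance come from the post; the rest is illustrative.

baseline_index = 100.0                   # arbitrary starting value of the Index
measured_index = baseline_index - 6.89   # index after the test program

improvement_points = baseline_index - measured_index  # 6.89 points
price_increase_pct = 7.6                 # 2009 Enterprise Support price increase
tolerance = 1.0                          # "at least within 1 percentage point"

# The test counts as passed if the measured improvement offsets the
# price increase, at least within the stated tolerance.
passed = improvement_points >= price_increase_pct - tolerance
print(passed)  # True, since 6.89 >= 6.6
```

Note how much work the tolerance is doing: without that 1-point allowance, a 6.89-point gain would not cover a 7.6% increase at all.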
There is, however, a little, tiny, “but.”
All the benefit was achieved by massive improvements in only two of the six areas: storage utilization and number of failed changes. (A “failed change” is an attempt to install a patch that fails.) In all the other areas that were measured, the average improvement was very small.
Both of these measures appear to me to be one-time-only improvements. Take, for instance, storage utilization. If you have one of those awful Windows machines, and your disk is sluggish, you can run a utility that compacts your disk and frees up disk space. You’ll show massive improvement in storage utilization. But running this utility once is not the sort of thing that justifies a permanent yearly increase in maintenance costs. Yes, you can do it next year, but it won’t show the same level of improvement, because you gained most of the benefit the first time you did it. The same thing goes for reducing failed changes. Changes in process (and use of the Solution Manager, or Sol Man) can reduce this number a lot. But once you’ve made the changes, further reductions aren’t really available.
I certainly hope that somebody from SAP is reading this; if you are, you’re probably upset, because you’re saying to yourself, “Well, the benefit is permanent; for the rest of time, people will have fewer failed changes and use less disk space.” [Myers does in fact argue this. See below.] You are, of course, right. But you’ll have overlooked the larger question: does helping people to a one-time improvement justify a permanent, yearly price increase? That is hard for me to see. If Enterprise Support promised to bring these kinds of improvements in regularly, then it would be OK to pay more for it regularly. But this test doesn’t show that these regular improvements will be forthcoming.
In any case, it’s all moot now. SAP has scrapped the SUGEN benchmark process. In a way, it’s a shame. This is one of the few times that any enterprise application company has ever tried to run a systematic test of whether its software and services work as advertised. And the results of this test are very interesting. In some areas, the software and services don’t seem to work; the benefits are minimal. In other areas, though, they work very well indeed; the benefits are startling. Who woulda thunk it?
At the very least, shouldn’t SAP keep going with this, so it can go back and fine-tune its software, figure out why some benefits aren’t forthcoming and do something about it?
November 2, 2009
The German Financial Times today took a bead on Léo Apotheker, SAP’s CEO, saying that on his watch, SAP had lost touch with its roots [verlorene Wurzeln]. No longer, as in the days of Hasso Plattner and Dietmar Hopp, is SAP customer-focused, the article says, and as a consequence, customers no longer think the software is worth the money. (The article cites the current customer unhappiness about increased maintenance prices as evidence for this.)
Helmuth Gümbel, no stranger to this blog, is cited frequently in the article; clearly, he was persuasive about the current state of affairs between SAP and its customers. Clearly, too, one disagrees with Helmuth at one’s peril.
Still, I wonder whether it’s really fair to hang all these problems on Léo. Take the infinitely hashed-over introduction and semi-withdrawal of Business By Design. Léo tends to get the blame for this because he was there on the podium claiming that SAP would get $1 billion in revenue from this product by, what is it, next year? But he had nothing to do with the original product. He was working in sales when Peter Zencke was put in charge of Project Vienna, and he was still in sales when Nimish Mehta’s team was developing T-Rex (a great product, no question), and he was still in sales when Hasso was insisting that T-Rex be incorporated into Business by Design, and so on.
Certainly, it was injudicious for him to promise that his organization could sell the heck out of a product that wasn’t ready for prime time. But why does this make him responsible for SAP’s lost roots? If anybody lost their roots, it was the development organization, which somehow or other couldn’t build the product it thought it would be able to build.
Don’t blame Léo for problems that were not of his making.
Now, I have my own issues with Léo, as my readers know. But frankly, when he came in, I think he was right about SAP. Too much time and money was being spent on stuff that wasn’t what the customer needed (or wouldn’t sell), and this had to stop. Hence the acquisition of Business Objects, the downsizing, the restructuring in the development organization. All of these things deserve at least one cheer, and I hereby give it. Hip, hip, hoorah.
Where I criticize Léo–and I’ve told him this to his face–is in his view of SAP’s role vis-à-vis the customer. He thinks what SAP has always thunk, that it’s up to SAP to make the best possible tools and it’s up to the customer (aided by the SI community) to figure out what to do with them. I think this view is wrong; SAP has to take more responsibility for making sure that the stuff works.
Ten years ago, when the heroes of the FT Deutschland article were fully in charge of the SAP business, I agree, SAP didn’t need to do that. SAP knew about businesses and about software, and they could figure out what they should do next without fretting about the problems customers were then having. (Believe me, there were a lot of them.) Today, though, with so much development time and development effort squandered, they can no longer believe that they can just build the right tools and count on the customers to get the benefit. Instead, they need to find out what went wrong with the tools they’ve been building and what needs to be done in the future. And the only way they can do that is to figure out EXACTLY what is preventing customers from getting the value they think they ought to get.
You can see why this problem is so important if you look at the SAP Solution Manager. This product ought to be what justifies the maintenance price increase. But as Dennis Howlett and Helmuth himself have both said, the product itself does not yet do what SAP needs it to do. If SAP really wants to justify this price increase, it needs to figure out why customers aren’t getting the value that SAP needs them to get. And they need to do it fast.
This is not easy; indeed, when I said this to Hasso late one night at an analyst party, he said, roughly, “We don’t know how to do that.” But let me just say, of all the executives I know at SAP, the one who is most likely to figure it out is Léo.
If he can figure it out, he will be able to get his customer base back again; indeed, I think they’ll be cheering, and when we’re convinced, so will I and so will Dennis and maybe even Helmuth. So, just in case this actually happens, let me give my cheer now, before it is actually deserved, as a way of saying, “I think you can do the right thing.”
Hip, hip, hoorah.