May 31, 2009
How will SAP develop? Where will it lead its customers?
We expect a CEO keynote at a software conference to answer questions like this; in what was his first full-CEO keynote, Leo did not disappoint. The answer? Clarity. Clarity is exactly what is needed to combat a global downturn and, guess what, clarity is exactly what you get from SAP.
I was there: the heads were nodding around the room and the Twitterati were passing the approval on as fast as their thumbs could go. “Good message,” one reputable analyst told me, and given the general dearth of good messages from SAP in recent years, you could see why he was pleased.
Good message? No. Illogical message. It makes no more sense than saying, “Language enables clarity,” and for roughly the same reason. Language doesn’t in itself do anything about clarity one way or the other. Used well and in the proper way by people who intend to communicate, language can bring clarity. Used for other purposes, it can equally well confuse or deceive.
Now, in the history of the world, there have been many people who have claimed that a new language or a better language would indeed improve clarity and bring new intelligence and insight to people. But they’ve all been mistaken. They’ve seen a desirable effect (clarity), and they’ve known that you don’t achieve this effect unless a certain tool (language) is used correctly. So they think that the effect can be achieved by constructing the tool in a way that makes it impossible to misuse. But that’s just a mistake (what philosophers call a category mistake). The effect is not in fact caused by the tool, but by the way it’s used, and the tool itself has only a limited amount of ability to dictate what the final effect will be.
Similarly, SAP is merely a tool for storing business data. Good data, right problem, excellent analysis: voila, insight! Bad data, wrong problem, silly analysis: voila, confusion.
Now, there is an actual argument that could be made in favor of what Leo is so blithely promising. You could say that the structure of SAP, the way it’s put together, is designed, through the use of best practices, to provide clarity, much as those mythical clear languages were going to be incapable of stating things that weren’t true.
I’m sure that Leo believes this, but since I don’t work there, I find it a bit of a stretch. It certainly can’t be something one would want to take on faith. It’s at best an empirical question. Whether it’s true or not would depend enormously on the quality of the implementations, the quality of the data actually being stored, the ability of the data to be used in multiple different circumstances, so that the data could be helpful in addressing real problems. If best practices genuinely produced good and useful data, flexibility and agility, etc., etc., that would be good to know, but I, for one, would find it surprising and at odds with what I know.
I might be more inclined to believe that SAP really does produce clarity in this way if I hadn’t asked Henning years ago what his biggest problems were in running SAP, and he pretty much said that it was hard for him to get insight into what was going on. I would also be more willing to believe it if SAP spent a lot of time analyzing whether the customers are using the software effectively. If they had lots of evidence that showed that people aren’t having problems with data quality, with fitting best practices to their own processes, or with flexibility and adaptability–that is, if they spent a lot of time figuring out whether their software was working in this way and could persuasively show that it is–one might feel that they really have happened on that magical clear language.
I actually had a long talk with Leo on a subject closely related to this. He’s a very smart guy, and it was a very enjoyable talk. In the talk, he insisted over and over again that it’s not his company’s job to see to it that the software is used effectively. I believe him, and I believe that’s a sensible statement of the company’s mission.
Unfortunately, though, if he doesn’t want to start reaching into companies and making sure that they’re using the software well, he can’t know one way or the other whether the best practices, etc., of SAP are producing the clarity that he’s claiming for it.
It may be enough for him to proclaim as an article of faith that SAP is enabling clarity. But it is not enough for me.
May 28, 2009
A client asked me the title question recently, and my answer was just a shrug of the shoulders. The answer is, “They probably shouldn’t.”
Yes, it’s a little more complicated than that. Maintenance on enterprise applications may be worth it to some customers, just as an extended warranty plan might well be worth it to some buyers of appliances, automobiles, or electronics, because they get some comfort in knowing that they can do something if it breaks. It’s also worth it–absolutely 100% worth it–if your installation is in its early stages or incomplete, that is, if you’re planning on buying more software from the company or you’re certain that you will need to maintain an aggressive updating policy.
But for most customers of most enterprise applications, it seems to me that paying a maintenance bill is in some ways akin to putting down your money in a three-card monte game. The money is gone. And it is more than a little unlikely that you’ll get any return from it.
Have I ever done this? Well, I’ve avoided three-card monte. But there was a period in my life when I paid for a health club membership that I wasn’t using. I kept on thinking, “Oh, I’ll go there next week,” or “I don’t want to have to pay the initiation fee,” or “My doctor told me to get more exercise,” and so I paid.
But really, it was just that I couldn’t face the facts. I hated that health club; I hated the pool and the locker room and the crowding. I hated the way they were gouging me. And I hated having to admit all this to myself. So I paid.
I didn’t use my membership, but I paid.
So is that why most people pay maintenance? I think so. At the moment of payment, it’s easier to write a check than it is to face facts, so they write a check.
Now there is an obvious counterargument to this. It goes like this. Paying maintenance isn’t like paying for a health club membership; it’s like paying for insurance. True, with insurance, you pay and you pay and there’s little visible benefit. But that’s the idea. Frankly, you ought to be happier to pay and not get much visible benefit than to pick up the phone and call the insurance agent.
This is certainly the argument that the CFO hears when he calls up a CIO and says, “Can’t we do something about the 2 million a year we pay to $AP (that is, any of the middle-aged apps vendors)?” The CIO says, “There’s nothing we can do. It’s an insurance policy. We have to pay. What happens if our system goes down?”
Most CFOs seem to find this compelling. But they shouldn’t. Come on. They’ve been to business school. There is a point, obviously, when it’s not worthwhile to pay insurance. To see what that point is, imagine that you lived in a normal, middle-class neighborhood where there hadn’t been a serious house fire in twenty years. Would you carry insurance if the insurance cost was 1/5 the cost of rebuilding the house? You shouldn’t. What about 1/10? Nah. What about 1/50? Well, maybe, but probably not. 1/100? Nope. Normal insurance for this kind of situation shouldn’t be much more than 1/200, and that should cover a lot of other things besides fire replacement.
So the CIO is right only if the odds of failure are high or the cost of maintenance is low. As for the odds of failure being high, I don’t think any CIO should be basing the argument on this. If the odds of catastrophic failure are high, even after operating the system for years, maybe you need a new CIO. As for the cost of maintenance being low? Well, it isn’t. It’s a little more than 1/5 (of the software costs). So you could just stop paying maintenance, and after five years, you’d have enough money to buy a brand new package, with money left over to pay the integrators. Yes, the integration and hardware costs would add on a fair amount. But to pay those, you could just coast for a few more years. And remember, when you’re finished, you’re getting a new system, one that is now ten years more advanced than the system you bought.
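That "stop paying and save" arithmetic is worth spelling out. Assuming a $10M license and a 22% maintenance rate ("a little more than 1/5"; both figures are my assumptions, not any vendor's actual terms):

```python
# Sketch of the stop-paying-maintenance arithmetic. The license cost and
# maintenance rate are assumed for illustration.
license_cost = 10_000_000
annual_maintenance = license_cost * 22 // 100   # 22% per year, i.e. $2.2M

# How many skipped maintenance payments buy a brand new license?
saved = 0
years = 0
while saved < license_cost:
    saved += annual_maintenance
    years += 1

print(f"After {years} years you have ${saved:,.0f} banked, "
      f"versus a ${license_cost:,.0f} replacement license.")
```

Five skipped payments cover the new license with $1M to spare; a couple more years of coasting covers integration and hardware.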
So which should you do? Not pay, and save for a new system that will work better? Or pay for insurance that you don’t or shouldn’t need?
So what’s the answer? Don’t pay. It’s really very simple.
May 20, 2009
A few weeks ago, I made some people at Lawson unhappy, because I criticized their inscrutable demo of a new interface and search tool. (See the post, below.) I have to admit that my irritation level with what they were doing was pretty high, since they had not invited me to their conference, and their virtual sessions for analysts were abysmal.
The basic point of the piece, though, had nothing to do with my irritation. Basically, Lawson had demoed something that purported to make a big difference to customers. But you couldn’t actually tell whether it would make a difference or not. I argued that these days, for middle-aged apps, the burden of proof is on the vendor. They can’t just show us something we can’t even make out and then expect us to stand up and cheer. They have to make what they’re doing persuasive.
Now this point applies to every vendor of middle-aged apps, not just Lawson. Whether you are Oracle, SAP, Infor, Epicor, Dynamics, or Lawson, you can’t just wave your hands, announce that you’ve created something awesome, and leave the audience befuddled about whether what you’ve got is worth anything at all. If you have something great, prove it.
Isn’t this just common sense? When there really was a lot of new stuff you could do with these apps, just showing it off really did garner some deserved oohs and ahhs. But those days are over. These are commodity products; for the most part, their technology is well behind the curve. To get an ooh and an ahh, you have to show that something about what you’ve been doing for ten or fifteen years has really changed.
I happened to pick Lawson to make this point, but it could just as easily have been SAP. It’s just that their user conference came later.
So now, turnabout’s fair play. Let me show you how this argument applies to SAP.
First, some context. For the past three months, SAP has been making a BIG deal over the release of Business Suite 7, its “new” software package, which for the first time gave all the latest software at SAP the same version number. (Until the announcement, the main part of the suite was called ECC 6.0, and the version that was relabeled 7.0 was ECC 6.0 EhP 4, which stood for enhancement pack 4.)
Now I admit, I’m not exactly being fair. That wasn’t quite all there was to Business Suite 7. Business Suite 7, SAP told us, changed the model for delivering software, because in Business Suite 7, you would be able to add SAP software to your current installation at a click of a button. No more sending for disks or downloads or whatever. If you needed any piece of software, you could look at the menu in your administration panel, select what you needed, and there it would be.
In an earlier age, that would have been “something amazing,” as Auden put it. These days, however, the idea is pretty familiar. The SaaS vendors do this routinely, more or less, and so do most consumer apps. So you kind of want to see just what this big, big, big advance really amounts to.
At the original announcement, to which I was also not invited, you couldn’t tell anything about what they were providing. (The broadcast was even worse than Lawson’s.) At Sapphire, though, we got a chance to see it. The inimitable Ian Kimbell, a man I greatly admire (though I must confess I can well imagine him playing Uriah Heep in some amateur theatricals in the Midlands, without a change of costume), gave us a demo.
The context was the Jim Hagemann Snabe/Bill McDermott keynote, which was devoted to product news. In the demo, which you can view yourself here
May 8, 2009
There’s an assumption, these days, that systems whose basic design principles and strategies have been around for twenty years are, by this time, working pretty well. If you think that, consider the following.
A person I know well was asked to evaluate the effectiveness of some automated replenishment software at a retailer of his acquaintance. The automated replenishment system had been built to optimize inventory levels, so it automatically asked for replenishment when inventory was low.
What he found was that the store managers were systematically ignoring the replenishment requests and overriding them. Hah. We all know about this, right? Stupid, willful store managers ignoring the mathematically sophisticated system that was calculating real supply and demand. The real trouble with business is people, right?
Not at all. It turns out that the best way to optimize inventory levels is to delay orders. (First day, Supply Chain 101.) The peak days for the stores were Thursday, Friday, and Saturday. Since the stores got daily deliveries, that meant that the peak replenishment days were…Thursday, Friday, and Saturday. Unfortunately, the store managers didn’t want deliveries on those days. They didn’t have the staff for it. They wanted deliveries on slow days, when the staff would be available for it. So every week, they were going into the system and moving the replenishment requests forward to Monday and Tuesday. Every single week. For thousands of SKUs.
Now there’s a good use for managers’ time.
My friend is a skeptical sort, though very helpful. So before throwing up his hands, he tried to decide whether they were doing the right thing. He wrote an optimization algorithm that optimized deliveries based on inventory levels AND labor costs. Guess what. The store managers were doing exactly the right thing. His algorithm slightly outperformed the managers, probably because they had to do all that changing by hand. But the outcomes (the actual outcomes by the store managers and his theoretically optimal outcomes calculated by his algorithm) were very close.
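For the curious, here is a toy version of that comparison. This is emphatically not my friend's actual algorithm; every cost and demand figure below is invented, and the model is deliberately crude (fixed receiving labor per delivery day, per-unit-per-day holding cost). It just shows why a schedule that looks worse on inventory alone wins once labor enters the objective:

```python
# Toy comparison: optimize deliveries on inventory holding cost alone,
# versus holding cost PLUS the day-dependent labor cost of receiving them.
# All numbers are made up for illustration.
HOLDING = 0.1    # assumed cost per unit held per day
LABOR = {"Mon": 50, "Tue": 50, "Wed": 80, "Thu": 200, "Fri": 200, "Sat": 200}
DEMAND = {"Mon": 20, "Tue": 20, "Wed": 40, "Thu": 100, "Fri": 120, "Sat": 110}
DAYS = list(LABOR)
IDX = {d: i for i, d in enumerate(DAYS)}

def plan_cost(assignment):
    """assignment maps each demand day to the day its stock is delivered."""
    # Fixed receiving-labor cost on each day that has a delivery at all.
    labor = sum(LABOR[d] for d in set(assignment.values()))
    # Units delivered early sit in inventory until their demand day.
    holding = sum(DEMAND[d] * HOLDING * (IDX[d] - IDX[a])
                  for d, a in assignment.items())
    return labor + holding

# Inventory-only optimum: deliver exactly when needed (zero holding).
just_in_time = {d: d for d in DAYS}
# Labor-aware plan: shift peak-day replenishment to the slow days,
# exactly what the store managers were doing by hand.
shifted = {"Mon": "Mon", "Tue": "Tue", "Wed": "Wed",
           "Thu": "Mon", "Fri": "Tue", "Sat": "Tue"}

print("just-in-time cost:", plan_cost(just_in_time))
print("labor-aware cost :", plan_cost(shifted))
```

With these (assumed) numbers, the just-in-time plan pays for expensive peak-day receiving crews six days a week, while the shifted plan trades a little holding cost for cheap Monday and Tuesday labor and comes out well ahead.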
And yes, he did leave the algorithm behind him, so now the managers don’t have to do it by hand any more.
So here’s what we know. Basically, the people who wrote the automated replenishment system had done a bad job. They had written an “advanced,” “automated,” replenishment system that optimized decision-making, but they had optimized on the wrong thing. So instead of providing a system that did optimal, automated ordering, they created a system that generated thousands of orders, each of which had to be corrected by the highest paid person at the store.
And until my friend came along, nobody seemed to know about it.
Now, I’m not saying who is responsible for this, whether it is a commercial replenishment system or a custom-built replenishment system. But what I can say is that all the commercial replenishment systems that I know about do it in exactly the same way; they optimize around inventory levels.
Why do they do that? Well, for one thing, the system is self-contained. They have the inventory information available in the same system, so adding in an optimization doesn’t require any integration.
For another, the people who are building systems like this do not think like retailers (even though their customers are retailers). If they did, they would realize that the orders need to be staggered.
And for another, neither they nor their bosses nor the people who bought the system ever followed up on what they did. If they had followed up, they would have realized that the system they built was not doing the job that it should.
What it should have done is generate the right orders automatically. But all it was actually doing was generating template orders, all of which had to be inspected and corrected by hand. It wasn’t a complete wash. It did generate orders with roughly the right amounts (over a week). And, because of the way the system worked, some of the corrections were relatively rapid. (Sometimes, if you moved two orders forward, subsequent orders would automatically self-correct, because the system would reduce subsequent orders by the amount that had already been ordered.)
So is this some isolated instance? I don’t think so. As you probably know, the basic operation and design of these systems has been around for twenty years. All the commercial systems, so far as I know, optimize around inventory, without taking labor availability into account. If this particular system works very poorly because of a design error, there’s every reason to believe that others don’t work either.
If that’s the case, it’s a little scary. There have been a lot of automated replenishment systems put in over the past twenty years, and almost all of them have been justified by the claim that they kept inventory levels down to optimum levels. If that claim is wrong (because keeping inventory low necessarily entails excess labor costs), then there are a lot of systems out there that were sold and installed under false pretences and are now being used incorrectly.
The ironic thing about this, of course, is that only a very little attention to how these systems are being used would not only show you the problem, but also show you the solution. To correct the system, you don’t need to integrate with a labor availability system (that would be hard and isn’t what my friend did). All you’d need to do is write a small program that corrected the timing, using the corrections that the humans were doing already as a guide.
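Such a correction program really can be tiny. The sketch below is hypothetical (the day mapping and record format are my inventions, not anything from the actual retailer's system): it simply hard-codes the shift the managers were applying by hand, inferred from their weekly overrides:

```python
# Minimal sketch of the fix described above: rather than integrating with
# a labor system, hard-code the timing shift the store managers applied
# every week. The mapping is a hypothetical rule inferred from their
# observed overrides: peak-day requests move to the slow days.
OVERRIDE = {"Thu": "Mon", "Fri": "Tue", "Sat": "Tue"}

def correct_timing(requests):
    """Move each replenishment request off the peak delivery days,
    leaving requests already on slow days untouched."""
    return [{**r, "day": OVERRIDE.get(r["day"], r["day"])} for r in requests]

# Example: one peak-day request gets moved, one slow-day request doesn't.
orders = [{"sku": "A1", "day": "Thu", "qty": 40},
          {"sku": "B2", "day": "Mon", "qty": 10}]
print(correct_timing(orders))
```

A post-processing step like this leaves the replenishment engine's quantities alone and only fixes the one thing it got wrong: the timing.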