I have been watching the earnest attempts with which Ray Wang is trying to “define” the flavors of cloud computing and Phil Wainewright is trying to describe its “unspoken benefits” and James Governor’s simplifications of cloud concepts, and all those efforts make me want to use words that you shouldn’t use on the Internet.

Why? Well, they’re a bit too high-minded and value-free.

Let me explain the problem with an example. Imagine you were a high-minded dictionary writer who was trying to stand above the fray, so you defined “good” as “a set of behaviors and attributes approved of by some” and “bad” as “a set of behaviors and attributes not approved of by people who approve of the good.” There may be some sad, sorry truth to these definitions. But far from being even-handed or fair-minded, they’re simply serving the interests of the bad guys, who get to turn bad and good into a popularity contest.

Similarly, if you define hosted or SaaS or whatever as if all of these were legitimate choices which you, the high-minded definer, are not going to evaluate, you serve… well, all the guys who want to foist a bad choice on you while pretending that it has all the virtues of a good choice.

You can see this most clearly if you look at the history. Ever since people (Marc, you know who you are) started using the term SaaS, applications that are shared and leased have been what the academics call “a site for contested definition.” As soon as the term SaaS was bruited about by some people, other people would say, “Oh, well that’s just a silly new term for ‘hosted,’ and ‘hosted’ is a great thing and it’s what we’ve always done. If you’d like to lease our software, go ahead.” (Larry, you know who you are.) The same things have happened to “on-demand” and “cloud” and all the other terms that the eminent and high-minded analysts are trying to explain.

This contest is exactly why there is the confusion in the market, a confusion that Ray rightly decries. The terms are confused and confusing because people are trying to appropriate them.

They’ve tried to appropriate them for a perfectly obvious, simple reason. They’d like to get the credit and money associated with doing something difficult, but very good, without all the trouble and bother of doing that difficult and good thing. But as a definer (or as an entrant in the marketplace), you will never, repeat never figure out what they’re trying to do if you don’t look squarely at why the difficult design choice really is difficult and what the payoff for that design choice really is.

If you try to define “good” without deploying some notion of worth, that is, you’ll end up with no practical difference between good and bad and a whole lot of bad people trying to use your definition to show that they’re really good.

Maybe in one of those hoity-toity academic environments like the one across the street from me, it’s OK to publish even-handed appraisals and fine, nuanced distinctions that clearly lay out all the issues for those five other people who are interested in publishing themselves on the subject. But in the real world, where some people who haven’t done their homework are trying to appropriate the hard work of people who have done their homework, it’s not OK.

That said, let me tell you the real scoop on “cloud,” “multi-tenant,” etc., etc., etc. True SaaS, true multi-tenant, true cloud applications are shiny like silver. Hosted, on-demand, and private cloud are shiny like tinsel. For any purposes that aren’t temporary and don’t have pretense built in, silver is better. Multi-tenant applications have more functionality. They’re more adaptable. They are easier to operate for the vendor. They are cheaper. They are easier to manage. They are more competitive and more likely to last. Unless you’re throwing a party and are planning to take down the decorations the next day while suffering from a hangover, they are better.

Why is true multi-tenant better? It’s very simple: multi-tenant applications are more efficient. More of any dollar spent on a multi-tenant application goes into value delivered to the customer. A programming dollar spent on a multi-tenant application is immediately delivered to all the users of the product, and it’s put there to make the product more useful. To get the same effect in an on-premise application takes a lot more money: the same dollar for the programming (actually a little more), another dollar (say) for the testing, packaging, and delivery, another dollar (say) to test it at the other end, and another dollar (say) to get users to make it effective. Let me be generous and include the cost of delay in the delivery of value in those last two dollars. I think the cost is really more. But of course, there are no reliable published figures. Analysts, where are you?

Aghast? Don’t believe me? Well, let me use an example where there are published figures. A couple of years ago, a senior executive at a major application company gave some lectures at local universities in which he claimed that single-tenant (hosted or on-demand) applications cost 10 times more to run than multi-tenant ones. Let’s say he’s right. The benchmark in this area is the (highish) per-user cost of a major SaaS provider, which is about $3.50/month (not published, but widely known). His company, at the time, offered a single-tenant application that it leased out for a hundred and change each month. Let’s assume that the major SaaS provider was a direct competitor and was charging the same. Are these companies really offering the market good alternatives, both of which need to be respected? Remember, one is putting $31.50/month/user (roughly 1/4 of your subscription) into paying for extra servers, extra electricity, extra licenses for virtual machines, extra complexity in its provisioning systems, extra management of load balancing, etc., etc., etc., all of which (from your point of view) add exactly zero value.
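For the numerate (and the skeptical), here’s that arithmetic in runnable form. This is a minimal sketch: the $3.50/user/month figure, the 10x multiplier, and the roughly $125/month subscription (“a hundred and change”) are the assumptions from the story above, not published data.

```python
# Back-of-the-envelope check of the single-tenant vs. multi-tenant story.
# All inputs are assumptions from the anecdote above, not published figures.

multi_tenant_cost = 3.50                      # $/user/month to operate (the "widely known" benchmark)
single_tenant_cost = 10 * multi_tenant_cost   # the claimed 10x multiplier -> $35.00
subscription = 125.00                         # $/user/month, "a hundred and change" (assumed)

waste = single_tenant_cost - multi_tenant_cost   # extra operating cost -> $31.50
share = waste / subscription                     # the slice of your fee that buys you nothing

print(f"Extra operating cost: ${waste:.2f}/user/month")
print(f"Share of subscription adding zero value: {share:.0%}")  # ~25%
```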

Imagine you’re buying one of these two choices. One vendor has made a bad, inefficient, and expensive design choice in the way it delivers the software. The other vendor has made a good, efficient, and inexpensive design choice. Which should you buy? Wouldn’t you feel really annoyed at any analyst who said, even-handedly, that there’s much to be said, both good and bad, about both choices?

“But…but…but…” I hear you saying, “I just read Ray, and Ray says that you can customize single-instance products, whereas you can’t customize multi-tenant apps, and you can have more confidence that your installation is secure if it’s not sharing resources with some other application.”

Ray is perfectly right in his facts and perfectly wrong in his emphasis. What he isn’t saying is that in this context, customization and security are impossibly expensive, ridiculous luxuries, the moral equivalent of the rear-view mirrors in the Rolls-Royce that used hand-blown glass. Imagine the following conversation with your CFO to see what I mean. “To get customization and security, we have to pay ten times as much for the software, delivered by a company whose gross margins are 1/10 of industry best practice? Excuse me? We’re paying for a limousine when everybody else is paying for a bicycle messenger? Hello. Do we really need customization? Is the security really all that much better?” To which the only rational answers are “No” and “No.”

The real question in this era of contested definitions is not what the definitions are, but how we can tell when we’re getting the real goods. This is a very, very difficult question, not at all easily answerable through the use of a single label. “Multi-tenancy” (essentially, sharing computing resources among users of applications in a way that valorizes efficiency and accessibility) is a difficult engineering problem involving numerous, complex tradeoffs. There is good multi-tenancy and bad multi-tenancy, and bad multi-tenancy can be so bad that it is virtually indistinguishable from single-tenancy. (That’s why, Naomi, I sometimes give only two cheers to your own really laudable insistence on multi-tenancy from HR vendors; it isn’t just multi-tenancy, but how you do multi-tenancy.)

Over the years, we’ve seen many attempts to isolate the truly distinguishing feature of multi-tenancy: that it runs on a single database or a single machine, that everyone gets upgraded at the same time, etc., etc. It’s just as silly and fruitless to do this as to find the single, isolating characteristic of “chair.” (Four legs? No, there can be chairs with any number of legs or none. People sit on it? People sit on lots of things besides chairs.) The only real way you can tell is to look at the engineering.

Not surprisingly, many vendors, even ones claiming to be “true multi-tenant,” don’t pass muster. A month or so ago, my old colleague Brian Sommer spent a good deal of time calling around to companies that claimed to be multi-tenant. “You would be astonished, ASTONISHED,” he told me in his inimitable way, “at how few there actually were.”

If we hadn’t wasted time wrangling about definitions and propounding definitions that simply missed the real point, maybe a healthy discussion of the engineering issues surrounding multi-tenancy would have grown up by now. The plain fact is that the way Salesforce.com does it is really different from the way NetSuite does it, each company making quite different tradeoffs and pushing quite different levers. But today, whenever you get one of these obligatory “due diligence” site visits from companies fearfully dipping their toes into the cloud waters, the only “engineering” questions anybody asks have to do with SAS 70 Type II.

In yesterday’s post, I argued that ROI would not be an adequate measure of the benefits conferred by new-gen (or pseudo-new-gen) applications like Workday, Business by Design, or Fusion Application Suite. The previous-gen applications were all about automation. The new-gen suites confer real benefits (I think), but not necessarily benefits that fall through to the bottom line.

What are those benefits? Well, they have to do with working more effectively: making fewer errors, putting more time into work and less into busywork, making more accurate decisions, faster. Is there benefit from this kind of thing? Sure. But how do you measure it?

In the post, I suggested a hazy term, “operational effectiveness,” for the benefits one should expect. What is “operational effectiveness”? Let me admit freely that I don’t know for sure. In this post, let me propose an analogy, which should help you understand what I’m getting at.

The analogy comes out of a historical situation that always posed a problem for ROI analysis: the transition in business from the typewriters that sat on secretaries’ desks to the PCs that sat on executives’ desks. This transition occurred in two phases. First, the typewriters on secretaries’ desks were replaced with big, clunky word processors that sat next to the desk. These word processors automated the secretary’s document-production work. Then the secretarial position itself was eliminated, and the typing function became something that executives did themselves on that PC.

The transition to word processors could easily be justified in ROI terms. We could get more work out of the secretary or else hire fewer secretaries. Whether the justification was real is an open question. But it’s certain that that’s how people thought of it.

The next transition was much more problematic for ROI analysis. Expensive executive time was now being put into jobs that had been performed more efficiently by much cheaper labor.

At the time, people didn’t put a lot of thought into figuring out why they were funding this transition. Executives saw the PCs, knew that everyone else was using them, needed them for some functions (e-mail, spreadsheets), and just decided: “We’re doing it this way.” At least in my recollection, that’s what happened.

So were they just loony or lazy or wasting shareholder money on executive perks? I don’t think so. I think what they were plumping for was the same “operational effectiveness” that I’m talking about 25 years later.

True, they spent more time typing. But they also had more control over the final product; they could change the product more easily; and they could distribute it without much overhead. And at the same time, they were changing the form of what they were doing. They weren’t just producing typed memos; they were producing documents with fancy fonts and illustrations, and they were creating PowerPoint decks. True, many an executive was spending ridiculous amounts of time fiddling with type sizes to get things on one page, but even acknowledging that, they thought the new way was better.

Indeed, by the time the transition was finished, justification wasn’t even a question, because the new tools had changed the nature of work, and now you couldn’t get along without them. Once executives were doing the typing, they stopped creating long reports. Less and less of the time was a corporation’s decision-making wrapped around a full document (minutes, memos, or formal reports); more and more, it was wrapped around PowerPoint decks.

So by the end of the transition, ROI analysis had become entirely moot. How could you get a tangible measure of benefits when you were comparing apples and oranges?

Could we be seeing a similar transition now? It’s certainly possible. The analog to the word processors is that first generation of enterprise applications, which were funded by the automation benefits they conferred and justified by ROI analysis. The analog to the PC is the second generation of enterprise applications.

(One caveat. As I’ve said before, I don’t think that Fusion Applications or the versions of Business by Design that I’ve seen are in fact second-generation applications. They’re more like Version 1.3. But they’re close enough to next-gen to raise the problem I’m talking about.)

If the analogy holds and if second-gen apps work as the developers hope, the benefits that businesses are going to experience will be equally hard to get your arms around, partly because the benefits are so subtle and disparate and partly because you’ll see a shift in the way work is done.

Does that mean that we won’t be able to talk about the benefits and we’ll just bull ahead with them? Well, that’s why I’m introducing the notion of operational effectiveness. It does seem to me that we can get clearer about what the benefits are.

So come on, guys. Make comments. What is operational effectiveness? And how can we tell whether we are getting it?

This blog post is more an open question than a pronouncement. So please feel free to comment on this or take the idea in a different direction.

I’ve been thinking about the next generation of enterprise applications, the value that they (might) bring, and how people might justify replacing the enterprise applications they have with the new generation.

Generally speaking, you justify an investment in infrastructure using ROI. You invest this much, you get this much return. For the first generation of enterprise applications (almost everything designed between 1990 and 2003 or later), this made sense, because they were basically automation apps. They automated work done by people. The ROI showed up because you didn’t have to pay people to do it any more.

Now these new applications simply don’t do that; that is, they don’t automate appreciably better than the old applications do. And this means that ROI is a pretty crummy tool for evaluating whether an investment is worthwhile. Yes, there will be ROI if the enterprise application works the way it’s supposed to. But the return will be highly indirect. You won’t be able to fire people and pay for the application.

Brian Sommer has been talking about this problem for years–essentially, he points out that automating something that’s already automated doesn’t justify an investment on the same scale. But he has never really explored whether there are other forms of justification.

Nenshad argues in his book and his blog that good performance management enabled by modern tools will get you to a place you want to be, and he tells you a lot about how to do it. And while it is true that these new applications help you manage performance better and that’s one of the reasons you want them, he doesn’t offer a way of thinking about justifying the move to what he recommends. (Nenshad, if I missed this in your book, I’m sorry.)

So what does one use? Well, let me offer a concept and sketch it out in a paragraph or two, and you my small but apparently very loyal readership can then take me to task.

I’ll argue that what these new applications really do is improve “operational effectiveness.” What’s that? Well, to start out with, let’s just say that they let each employee put more effort into moving the company forward and less effort into overcoming friction.

Well, that’s suitably hazy. So here are some things you could measure that would, I think, indicate that employee effort is more coherent and focused. You could, for instance, take a page from the black belts and measure operational errors, or even just exceptions, as part of operational effectiveness. Or you could look at corporate processes that are nominally automated and see whether they are managed by exception. (Truly best, automatable practices should require almost no routine manual actions.)

People sometimes try to look at operational effectiveness by measuring what percentage of revenue is spent on things that feel like pure expense, like IT. So a company that spends 3% of its revenues on IT is deemed less effective than one that spends 1%. People also try to get at it by looking at which activities are “value-added” and trying to get people to do more of them. Both ideas are silly, of course, in themselves. (The 3% company may be spending on stuff that makes it effective, while the 1% company isn’t.) But I think there might be better indicators of operational effectiveness. Wouldn’t operationally effective companies spend less time in meetings, send fewer junk e-mails, work fewer hours/employee (!), resolve more customer complaints and deflect fewer, etc., etc., etc.?

You get the idea, I think. So why is this a good measure for the new generation of applications? Because, bottom line, I think that’s what they’ll do. They’ll help organizations and people avoid wheel-spinning, error correction, and pointless processes or rules by getting to what matters, faster.

Of course, people have always accused me of being a ridiculous, blue-sky, naive optimist. But that’s how it seems to me.

What do you think?

A few days ago, I argued that a company that wants to replace its old, limping systems with a brand-spanking-new Oracle or SAP application should probably wait for the next generation of apps.

The post got a lot of approving comments, which surprised me. I think there are pretty good arguments for just hauling off and buying. Consider what one of these hypothetical companies might say in defense of a decision to buy now rather than later.

1. We’ve got the money now and we should spend it.

2. The product we’ll get will be much more reliable.

3. The cost of services surrounding the product will be lower.

4. We don’t know when the new products will be out (with the possible exception of Workday 10).

5. We don’t know what will be in the new product.

Imagine what idiots we’d look like, the buyer might say, if we waited for five years for a product that was no better than what we could buy today and far more incomplete and buggy.

I see the force of this, which is why I thought it was a close call. Ultimately, I did decide it’s better to wait for markedly better products that are coming, despite the risks and delays. But I saw why people would disagree.

So are my commenters intemperate or am I dithering? One test for this is to look at the case of somebody who is not considering a replacement, but is considering a fairly big investment in the existing platform. Maybe they’re considering an upgrade, or maybe they’re considering some extension products, or maybe they want to push their existing installation out to other geographies.

This is a very realistic case. Several companies I know fairly well are considering one or more of these options. One is upgrading to Oracle 11i; another wants to upgrade to Infor LN. Still another wants to extend its QAD implementation to another geography. Still another wants to buy a CRM system from its existing vendor.

For each of these companies, the details matter a lot, but on reflection, I think the same argument applies. It will be hard to get the return on this big new investment in the old platform, because the useful life of what is to you a new product is not long enough to justify the expense.

This raises two questions. First, how does one calculate “useful life”? Most of the companies I’m thinking of believe that the useful life is defined only by how long they’re willing to use the system, and for some companies, that’s pretty long. (There are, after all, big SAP clients who are still using R/2.) I think this is wrong, but I’m going to have to hold off explaining why till a later blog post.

The second question is, “Assuming that the commenters and I are right, what kinds of investments should companies who have decided to wait actually make?” I have no determinate answers to this, but I think there are some guidelines.

Start with Gavin’s comment on the previous post. Gavin points out that some investments in the Oracle infrastructure and in new Fusion-based products will actually take you toward the next Oracle generation. He is quite right; Oracle designed things this way, and partly because of the problem that I’m describing, it quite deliberately created some ways of investing in products from Oracle without locking you into an older technology.

There are many, many caveats, of course. Among other things, if you have an Oracle system now, you might not want Oracle in the future. The three main products I mentioned in the earlier post (Workday, Business by Design, and Fusion Applications) are all highly differentiated, each with its own flaws and virtues. A rational person would do well to look at all of them before deciding to stick with the vendor they have now. (This applies to SAP customers, too.)

Even if you are bent on Oracle, you still may want to take some time thinking through your infrastructure stack before investing in pieces of it. Even the examples Gavin gives, like OpenID, which are likely to be pretty good, may not be right in the long run, and if they aren’t, that will be a lot of trouble and expense gone to waste.

You should note that Gavin’s argument almost certainly will apply to SAP as well. SAP is also working on “hybrid” intermediate solutions, and I’ll bet you dollars to donuts that it’s trying to figure out every way it can to ease the migration to the new system by asking you to make steady, rational investments in products that extend your current capabilities.

But what about customers for whom an infrastructure or extension investment isn’t right? Here, I think there are some interesting arguments, akin to Gavin’s, for small, light cloud-based apps, point solutions designed to solve highly specific problems. I’m not just talking about a CRM app or a call-center app or a recruiting app; I’m talking about things that are very, very specific to your industry, but really powerful, things like Tradestone in the apparel industry.

Another possibility is to spend some time and effort cleaning up your existing installation. This will improve its current effectiveness, extend its useful life, and very possibly lower the cost of the new system significantly. (A lot of the cost of any implementation is cleaning up after the mess left by a system that everybody has given up on.)

There’s a very smart analyst in Europe, Helmuth Gümbel, who has spent a lot of time thinking about this problem. He has a blog, and he also has a conference, Sapience, which goes into these questions at length. If you’re thinking about extending your current ERP to other geographies, a reasonable alternative might be to find lower-cost ERP systems to serve those geographies.

The basic theme running through these latter approaches is that while you’re waiting, you can focus on saving some money and preparing your current installation, thereby making the later transition to a newer technology faster and more affordable.

Is it time to wait? If it isn’t now, then when?

Wait, that is, for the next gen of applications–Workday HR and Financials, SAP Business by Design, or Oracle Fusion Application Suite–rather than go with what’s out there now: PeopleSoft 9, Oracle EBS 11, or SAP Business Suite–all quite good products, but limited in many ways.

My gut says, “Wait.”

Of course, unless you happen to be my gastroenterologist, you shouldn’t care much about what my gut says. So here’s the reasoning behind it, which I think you can adapt to your own purposes.

PeopleSoft, EBS, and BS were all designed in the early ’90s and are now mature. (There will be no fundamental improvements made to any of them.) So they’re roughly 20 years old. Let’s assume that this takes them halfway through their useful life.

Now let’s do some algebra. Assume that the new products have a similar useful life and offer a 30% improvement in overall effectiveness.
Say the net benefit of buying a this-gen system is 1. In that case, the net benefit of a system that’s 30% better and lasts twice as long is 2.6. Now assume that the net cost of not replacing your old system is -0.1/yr, which makes it very, very expensive to keep your old system. Even if you have to wait four years for the next-gen system, you’re still more than twice as well off (2.2 vs. 1) waiting. Even if the next-gen system costs significantly more than the old one (fairly likely, depending on the vendor), it’s still a big win.
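Since I keep talking about a spreadsheet, here’s the same algebra as a few lines of Python you can fiddle with. It’s a minimal sketch of the model above; every number in it is one of the assumptions just stated.

```python
# The wait-or-buy algebra, in units where "net benefit of buying this-gen now" = 1.

benefit_this_gen = 1.0     # buy the current generation today
improvement = 0.30         # next-gen is assumed 30% better...
life_multiple = 2.0        # ...and assumed to last twice as long

benefit_next_gen = benefit_this_gen * (1 + improvement) * life_multiple  # = 2.6

cost_of_waiting_per_year = 0.1   # assumed net cost per year of limping along on the old system
years_waiting = 4                # assumed wait for the next-gen product

net_if_you_wait = benefit_next_gen - cost_of_waiting_per_year * years_waiting  # = 2.2

print(f"Buy now: {benefit_this_gen:.1f}   Wait, then buy: {net_if_you_wait:.1f}")
```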

If you e-mail me, I can give you a spreadsheet, and you can run the numbers yourself.

You don’t need the spreadsheet, though, to see that the argument is a function of four factors: the relative benefit of adopting next-gen apps (over the life of both apps), the cost of implementing them, the risk of implementing them, and the cost of waiting.

A friend who reviewed this argument offered the following analogy. Let’s say you live in an older house whose roof is leaking, whose pipes are rusty, and whose electrical system is way out of date. Sure, it’s time to move. But what if there were a big tax break coming fairly soon that would allow you to buy a much better house? As long as the break was big enough, my friend says, the best bet for most people is to wait, because it’s a house, houses last a long time, and being in the better house makes a big difference for a long time.

Even if things are pretty bad in the old house, he goes on to say, your best bet is just to fix the immediate problems: repair the roof, add some new wiring, etc.

Clearly, the biggest and most important factor is how much better that house will be. For a conservative company, this may seem to be a big unknown. But really, it’s not. If you look at any of the new-gen apps, the improvements they’re offering are fairly clear. None of them are killer or transformational; they won’t let you fly when you had been walking. They’re just the sort of things that anybody would add now that they have 20 years of perspective on the old designs.

What are those things? Well, better and faster access to data, what the other pundits call “embedded analytics.” The ability to do some level of search without having to print out reports and trek down hierarchical menus to get to a record. The ability to bring other people into a discussion of a record by e-mailing it or asking them to approve it or whatever. All of these things can be done in the old system. But it takes longer, is often a pain in the you-know, and is often not done. Systems that have all these things built in will be systems where each of your employees wastes somewhat less time each day wrestling with software that was never designed to have the flexibility and accessibility that the web era has taught us to expect from any application we deal with.

None of these is earth-shattering; indeed, I usually call the next-gen apps Version 1.3 because they’re really not that big an advance over the 20-year-old ERP applications that are Version 1.0. (Is a 2.0 coming? I think so.)

But taken in aggregate, I think they’ll make a material difference in your operational effectiveness. Enough of a difference to be worth waiting for.

Does this really apply to your situation? What about that risk? What are the chances that you will get the gains that would justify waiting? What about the fact that your company is ready to move now and for you, such a move comes at the right time in your career? All good questions. And in some cases, it may be right to jump. But for most people, the best thing to do is to take steps to reduce the risk and time to benefit.

Who’s to blame when IT projects fail? Michael Krigsman argues that blame should generally be shared among the three major culprits (software company, consulting company, customer), and a recent torrid discussion among the Enterprise Irregulars largely supported him. If anything, several pundits have told me in private, blame the customers. “If a customer’s going to be stupid, we can’t stop him,” said one.

I’m not comfortable with this argument, because I don’t think it’s sufficiently rigorous. In any unfortunate chain of events, many people have a hand. But it’s simply wrong to blame them all. Some, because of their role or their power or their ability to influence events, are principally responsible, while others are merely incidentally involved or are even victims. To get the blame right, you have to dig down and figure out where the power lies.

To see what I mean, let me use an example used by philosophers in this area. You all probably know that World War I began after Gavrilo Princip assassinated the Archduke Ferdinand in Sarajevo. The problem is, who do you blame? Princip (who, as it happened, was so indecisive and incompetent that he was walking away from the spot he’d picked), the Archduke, who went to Sarajevo only because he wanted to have an outing with his mistress, or the political situation in Europe? Hard to say. But the one thing you don’t want to do is blame the coachman, who, as it turns out, turned down the wrong street and thereby brought the Archduke right to Princip.

In my view, the closer you are to setting the parameters for a project, the more likely it is that you’re setting the course of future events. So if I want a culprit, I look at the person who’s holding the gun, because it’s his decision whether to fire or not. For that reason, I tend to think that the software vendors are presumptively responsible. They’re the ones who decide what can and can’t be done with the tool they sell, and if they create a situation where dominoes like misuse, misunderstanding, careerism, stupidity, etc., fall into each other one after the other, like the great countries of Europe, they are responsible, because they could have, and often should have, anticipated this and done what they could to prevent it.

This isn’t just because they set the conditions and are responsible if they design conditions that don’t work. It’s also because they give people the idea that that’s what they’re doing. We tend to give these mandarins the benefit of the doubt, believing that their products are fully tested, that they accord with best practices, that the most experienced people are working on them, etc., etc., etc. And once they realize that this is the expectation, that people do expect them to be good engineers, etc., etc., they take on extra responsibility. It may be foolish, yes, and if the customers believe that tommy-rot, perhaps their foolishness is contributing to the problem. But it still seems to me that the people who accept the power given them by this false belief need to take the responsibility that goes with it.

And that’s why they are presumptively responsible.

I was reminded, forcibly, of this by a recent event at the local library, which just re-opened in a brand new facility with brand new automatic checkout machines. I’m a good citizen, so when I went in the other day with my seven-year-old red-headed daughter, I renewed a book she hadn’t finished. Thirty minutes later, we checked out a couple of new items that she had found.

In the meantime, somebody had checked out seven videos on my account. I still don’t exactly know how, but let’s just assume for the sake of argument that when I finished renewing, I walked away from the terminal, leaving my session “live,” and somebody later walked up to the terminals and just added a few videos to what “I” was doing.

The sessions do time out, and the first thing you’re supposed to do is scan your card, so there’s at least some possibility that this was done with malice aforethought. Somebody had realized that you can steal the Cambridge Public Library blind if you just hang around the terminals and act quickly after somebody walks away from them. It’s kind of public-spirited stealing, actually, because the person who gets stuck with the bill is not the taxpayer but the person who used the terminal. The books appear on that person’s account, and as the supervisor told me later, it’s library policy to make people take responsibility for the books on their account.

“If you use plastic,” he said, “There are tradeoffs, and you just have to accept that fact, my friend.”

You see, as soon as I found these spurious borrowings on my account, I had immediately gone and found said supervisor and told him. My feeling was, “This is a new system, and it’s flawed. If one person has figured out how to do this, others will, too, and pretty soon, it will be open season on the pitifully few books in the aforementioned library. Somebody had better get cracking and fix this.”

Well, that was my feeling, until I talked to the supervisor, who corrected me. “What you say is impossible. We have never had a case like this. You are responsible for the books. Case closed.” I didn’t take this quietly, so eventually, he took down my name and the list of spuriously borrowed videos, and after I left, he shoved the list in his drawer.

The only reason I know even this much is that three days later, I came in to borrow another book, and the videos were still on my account. “I called the assistant director, but he/she is out of town,” he said. I think if he honestly believed that one could steal lots of books with impunity from the library, he would not have reacted this way. But he didn’t. He just thought I was a liar.

Here is where the aforementioned presumption comes in. This guy simply couldn’t believe that the computer system that he had been given could have a flaw. Rather than believe that, he preferred to believe that I, standing there with my seven-year-old daughter, had checked out the videos (seven of them), secreted them somewhere in the library, gone up to him and tried to get them off my account. After I succeeded, I guess he was thinking, I would then go back to the hidden cache, put the seven videos under my arm and walk out right in front of him.

Now I ask you, which of these scenarios is more implausible? One: the designers of the checkout system did their work imperfectly, leaving a security hole, which somebody found, possibly inadvertently. Two: this white-haired father of a seven-year-old was trying to steal from the Cambridge Public Library by claiming that he had not checked out videos that the system said he had just checked out (even though the videos were nowhere about his person).

Implausible as it is, this well-educated person who by his choice of profession shows that he has dedicated himself to the life of the mind simply found it impossible to believe that there was anything wrong with the system. He believed this so firmly and so thoroughly that he couldn’t even be bothered to notify anybody about the problem, even though, if there were a problem, it would be a good idea to fix it as quickly as possible, before the library shelves were emptied. So far as I can tell, he still believes it.

The thing is, I almost fell into the trap, too. I’d had my probity questioned, so you know who I was blaming? That supervisor. I really had to sit down and think before I realized who was really at fault. That’s right, it was the vendor. To see why, think about what happens in an analogous situation, at the bank machine. When you take money out of these machines, you simply can’t leave the machine open for the next guy to use. You have to get your card back, and if you try to leave without doing so, you are alerted, loudly. Clearly, best practice in the area is to make it really hard for somebody else to pirate a session. And the vendor didn’t follow this best practice.

I should have known this from the beginning, because I should have remembered that the vendor is presumptively responsible. But as long as I don’t remember it or the supervisor doesn’t know it, the vendor is off the hook. Instead of blaming Princip (or maybe the Archduke or the political situation in Europe), that supervisor is doing the natural thing and blaming the coachman, the guy who made the last and most visible mistake. And until he can be taught that this is a mistake, books will continue to be stolen from the Cambridge Public Library.

The Crocodile in the Room

February 12, 2010

Several years ago, the top, top people at SAP were offered a gigantic bonus if they could “double the stock price by 2010.” Not long after, a friend of mine who was in line for the bonus left the company for a start-up. “Why?” I asked, incredulous.

The answer was pretty simple. “It’s not possible,” my friend said.

Not possible? One might have wondered at the time. Then why were all these intelligent people pretending that it was possible? Didn’t they see the crocodile in the room?

Somewhat later, I talked to another friend, who had been offered a REALLY good deal if he signed an SAP contract now, before the end of what looked to be a tight quarter. “But all they’re doing is pulling revenue forward,” I said. “That’s right,” said my friend. “They’re taking a huge hit, and for no particular reason.” So, I wondered, why were all these intelligent people trying so hard to hit that quarter? Didn’t they see the crocodile in the room?

Still later, I talked to an investor who was pretty pleased about SAP’s new margin targets. Henning, you’ll remember, had told the investor community that they’d finished a long, expensive project to upgrade the product, and now they could afford to put the money back into margin. “But the investment didn’t produce the results they wanted,” I said. “So now, if they increase margin by reducing R&D, they’ll be taking away from investment they have to make.” Didn’t the people at SAP see that? Didn’t they see the crocodile in the room?

It seems to me that with this week’s changes at SAP, it’s possible that somebody (Hasso?) actually sees the crocodile. You can be the most forceful, sensible, and astute manager in the world, but if you try to make a technology company more valuable without generating value, eventually, that crocodile will get hungry. You can’t do it by giving out bonuses and hoping. You can’t do it by pulling revenue forward. You can’t do it by saving when you need to invest.

The only way you can do it is to bite the bullet and start using the maintenance money that people pay you to make modern products that work well, products that will actually justify the maintenance customers spend AND appeal to the many other people who don’t yet have relationships with SAP. And if that takes a while, it takes a while. You can’t just say “2010 or else.” You have to figure out how to do it first. And sometimes it takes a while.

Now I’m not saying that the new management can find crocodiles any better than the old management. We’ll see. But if they do see the same crocodiles that I do, and they want a recommendation from little old me, here it is. Come clean. Right now, most investors are disappointed, because they think that SAP is backing off the old CEO’s promises, and they don’t know why. If you tell them about the crocodile, admittedly, many of them will run away. But others will realize that it’s a crocodile that can be tamed, and since you trusted them, they will trust you to do the right thing.

PS. If this post makes it sound as if I’m blaming Léo for lack of realism, I apologize, to you and to him. Léo was using all the levers at his disposal to do what he was being asked to do, and he was using them in an insightful way. SAP did need to reduce its workforce, improve its support, and improve its operations, and under Léo, all these things were addressed. My only point is that what Léo was doing could not address the underlying problem. But that’s not what he was charged to do.

The SUGEN Numbers

January 20, 2010

I argued in the previous post that SAP’s new, “two-tier” pricing system for maintenance offers customers less choice than meets the eye, and commentators like Dennis Howlett agree.

So why did they bother? If one offering is “good support for a fair and reliable price” and the other is “less good support for roughly the same price (only no one will really know for six years),” why would anyone pick the latter? And why would SAP risk a public relations nightmare when the people who pick the apparently lower-cost alternative find that they’ve been snookered?

Is it just that SAP needs to offer the enterprise application version of “small coffee,” the coffee size that nobody ever orders, but you need on the menu, so people will order medium?

The question is particularly salient because SAP has data that, one could at least argue, shows that Enterprise Support really is better.

This data comes out of a program embarked on last summer, sponsored by SUGEN (SAP User Group Executive Network) and SAP. In this program, companies were put on Enterprise Support, and the benefits thereof were measured against 11 benchmarks, with the sum of those benefits rolled into something called the SUGEN KPI Benchmark Index. SAP had vowed not to raise the cost of Enterprise Support until this program showed a gain in the Index that justified the increased cost of Enterprise Support.

SAP reported the results at the Influencer Summit last December, which I attended. According to the numbers they showed me, the SUGEN KPI benchmarks had indeed been achieved.

These numbers were disclosed to me on the condition that I not repeat them until the full results were published, and while I’ve been given informal permission to speak about them, I will try to respect this request.

I think, though, that I can convey a fairly accurate idea of what is going on without actually citing the numbers.

Before I can get to this, though, I need to explain something about the program and the expectations that people had for it. When SAP first announced a new, improved class of support at an increased price, which all customers were required to use, many customers thought that this was just price-gouging. They didn’t believe that SAP’s across-the-board price increase for maintenance would be accompanied by any benefits. When SAP started hearing from these customers, they were clearly taken aback, since the executives in charge of this new support program clearly did (and do) believe in its benefits.

So SAP (and SUGEN, the customers’ self-appointed representatives) agreed to put the question to an empirical test.

Now anybody reading the press release about this program or anybody attending the journalist session at last May’s Sapphire (as I did) would believe that this test would be done along traditional social science lines. A representative sample of the SAP customer base would be given the opportunity to take advantage of Enterprise Support, and the benefits would be measured.

After attending that session, I told my client base (people who are professionally interested in tracking what SAP does) that it was impossible for this program to show so much benefit that it would justify the across-the-board increase. The reason was simple. To get the available benefit from Enterprise Support, a customer must get and install a software product called the Solution Manager and must then do a lot of process documentation and process modification. A representative sample of SAP customers simply wouldn’t include very many customers who had done all this installation, documentation, and change, because the total amount of work was considerable, and most customers weren’t going to do it, at least not any time soon.

Isn’t it sort of squirrelly, expecting Enterprise Support customers to get software, install it, and then do a fair amount of work before they get the benefit that SAP promised them? Well, yes, but it isn’t quite as squirrelly as it sounds.

At the Influencer Summit, Uwe Hommel, the person behind this idea, expressed it roughly this way. A lot of customers don’t really run support as well as they could. The Solution Manager provides them with a framework for the practices that they should be using, plus it enables SAP support personnel to give better, more accurate, and faster help, because the Solution Manager gives them better information about what was going on at a customer site. It would be nice if SAP could wave a magic wand and improve support without any effort from the customers. But that just can’t happen. All we can do is provide a framework.

As far as Hommel is concerned, what SAP is saying is roughly what the trainer at the gym offers. “We’ll make you better, bigger, stronger, and leaner, but of course you have to do your share.”

Fair enough, of course. But that’s not actually what SAP said. SAP actually said something more like, “You need to do support better, and to do it better, you’ll need a trainer and you’ll have to put in some effort, but oh, by the way, you have to pay for the trainer whether or not you actually get around to going to the gym.”

Perhaps the oddest thing about the test that SUGEN and SAP ran is that both parties pretended that SAP had said the first thing and not the second.

You see this, for instance, in the way they [SUGEN, according to Myers, below] chose the subjects for the test. Rather than choosing the representative sample of the customer base that I was told they would choose, they asked for volunteers to apply for the program. 140 customers did apply; of those, only 56 were chosen for testing. This, of course, simply guaranteed that the test would not prove what SAP wanted it to prove: that the price increase was justified. At best, it would prove that those customers who decided to go to this metaphorical trainer would get some benefit from it.

So, did they get some benefit?

Well, um, uh, sort of.

As I said above, SAP and SUGEN agreed before the test that there were 11 areas where benefit might be provided. The areas ranged from the obvious things–fewer outages, faster problem resolution, and fewer problems–to the less obvious, but still important things, like more efficient CPU utilization and better use of disk storage.

In the actual test, the benefits of Enterprise Support were measured in only 6 of these 11 areas. The areas chosen had to do with total cost of operations (use of CPU and storage), the cost/effectiveness of managing patches, and the extent to which customers used SAP’s current software effectively. Clearly, this made things harder for SAP, since it was trying to prove benefit, but the benefit was actually measured in less than half the areas where benefit might be available.

Nevertheless, SAP thinks that it succeeded, and technically they did. They measured benefit by giving the SUGEN Index an arbitrary value of 100. The way I understood it at the session, the aim was to show that the increased benefits at least offset the roughly 7.6% price increase in 2009. [According to Myers, below, the actual aim was 4%].

Both aims (what I thought was the aim and what Myers said was the aim) were actually achieved. The benchmark index dropped by 6.89 percentage points. Even though only 6 benchmark areas were measured, the benefit achieved did offset the 7.6% increase (at least to within 1 percentage point).

There is, however, a little, tiny, “but.”

All the benefit was achieved by massive improvements in only two of the six areas: storage utilization and number of failed changes. (A “failed change” is an attempt to install a patch that fails.) In all the other areas that were measured, the average improvement was very small.

Both of these measures appear to me to be one-time-only improvements. Take, for instance, storage utilization. If you have one of those awful Windows machines and your disk is sluggish, you can run a utility that compacts your disk and frees up disk space. You’ll show massive improvement in storage utilization. But running this utility once is not the sort of thing that justifies a permanent yearly increase in maintenance costs. Yes, you can run it again next year, but it won’t show the same level of improvement, because you gained most of the benefit the first time. The same goes for reducing failed changes. Changes in process (and use of the Solution Manager, or Sol Man) can reduce this number a lot. But once you’ve made the changes, further reductions aren’t really available.

I certainly hope that somebody from SAP is reading this; if you are, you’re probably upset, because you’re saying to yourself, “Well, the benefit is permanent; for the rest of time, people will have fewer failed changes and use less disk space.” [Myers does in fact argue this. See below.] You are, of course, right. But you’ll have overlooked the larger question: does helping people to a one-time improvement justify a permanent, yearly price increase? That is hard for me to see. If Enterprise Support promised to bring in these kinds of improvements regularly, then it would be OK to pay more for it regularly. But this test doesn’t show that these regular improvements will be forthcoming.

In any case, it’s all moot now. SAP has scrapped the SUGEN benchmark process. In a way, it’s a shame. This is one of the few times that any enterprise application company has ever tried to run a systematic test of whether its software and services work as advertised. And the results of this test are very interesting. In some areas, the software and services don’t seem to work; the benefits are minimal. In other areas, though, they work very well indeed; the benefits are startling. Who woulda thunk it?

At the very least, shouldn’t SAP keep going with this, so it can go back and fine-tune its software, figure out why some benefits aren’t forthcoming and do something about it?

Apparently not.

SAP announced yesterday that it was creating a two-tier support system, effectively reinstating its old Standard Support offering at a slightly increased price. (The new price is 18% of net versus the old 17% of net.)

This has been hailed as a U-Turn by press and analysts, all of which proves something to me: most writers can’t do math.

SAP begins its press release as follows (emphasis mine):

In a demonstration of its commitment to customer satisfaction, SAP AG (NYSE: SAP) today announced a new, comprehensive tiered support model that is being offered to customers worldwide. This support offering includes SAP Enterprise Support services and the SAP® Standard Support option and will enable all customers to choose the option that best meets their requirements.

So let’s look at the choice that’s being offered to customers; after you look at it, you can judge how much satisfaction it’s going to generate.

The cost of Enterprise Support this year is 18.36% of a base number, a number that usually stems from (but may not be identical to) the net amount paid for the SAP licenses that are being supported. So, this year, assuming that the base for a company was $100,000, the total cost of Enterprise Support is $18,360 and the total cost of Standard Support is $18,000.

Next year, the cost of Enterprise Support goes up to 18.9%, increasing to 22% by 2016. That means that in 2011, it is $18,900, and in 2016, it is $22,000. Those of you who are writers: I apologize for all these numbers; I know they do get confusing.

Now to Standard Support. With Standard Support, the percentage is fixed. But the base is not. It is subject to cost of living increases. We don’t know what COLA (cost of living adjustment) SAP will impose. But let’s just say for the sake of argument that it is 3.00%/year. In 2011, the cost will be $18,540.00, and in 2016, it will be $21,493.00.

All this is in a spreadsheet which you are welcome to look at and play with. (In the spreadsheet, I rounded 18.36% down, so the Enterprise Support costs are slightly low.) Assuming you’re not a writer and you want to play with the numbers, here’s what you’ll see. If the COLA is 3%/year, the costs of either kind of support will be very close for a long time to come. If the COLA is 1%, Standard Support will be quite a bit cheaper. And if it is 5%, Standard Support will be quite a bit more expensive.

It’s confusing, I know, but it’s true. If the COLA is 5%, then “18%” support will cost more than “22%” support. If the COLA is 3%, then “18%” support will cost about 2% less than “22%” support. And if the COLA is 1%, then the lower tier of support will cost about 10% less than the upper tier.
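For the spreadsheet-averse, here’s the comparison in a few lines of Python. The Enterprise Support percentages (18.36% now, 18.9% in 2011, 22% by 2016) are the announced ones; the $100,000 base and the candidate COLA rates are my assumptions, as above.

```python
# Enterprise Support: a rising percentage of a fixed base.
# Standard Support: a fixed 18% of a base that grows with the COLA.

base = 100_000.00   # assumed net license base

enterprise_pct = {2010: 0.1836, 2011: 0.189, 2016: 0.22}  # announced schedule (steps up in between)

def standard_support(year, cola, start_year=2010, pct=0.18):
    """Standard Support cost: 18% of a base that compounds at the COLA each year."""
    return pct * base * (1 + cola) ** (year - start_year)

for cola in (0.01, 0.03, 0.05):
    std = standard_support(2016, cola)
    ent = enterprise_pct[2016] * base
    print(f"COLA {cola:.0%}: Standard 2016 = ${std:,.0f} vs. Enterprise 2016 = ${ent:,.0f}")

# A 3% COLA reproduces the numbers in the text: $18,540 in 2011 and $21,493 in 2016.
```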

So what will it be? 5%? 3%? 1%? 0%? At the press conference, SAP didn’t say. There is no commitment to impose these increases, and there is no commitment not to impose them. SAP, according to Léo Apotheker, “[has] the liberty of linking Standard Support [] to the cost of living index.” (Thanks, Information Week.) Whatever their decision, the imposition of COLA will not be uniform. The cost of living index is the index for the country whose currency is the master currency for the contract, and the actual linkage to this actual index depends on the contract language, which varies.

So what are we to think? Whatever SAP is doing, it is not a U-Turn, and it is not a rescission of the price increase. It is offering customers a new choice, which I’ll characterize as follows:

1. Return to Standard Support and get less support than you got with Enterprise Support (though how much less is unclear) and price increases in the form of COLA (though how much increase is unclear).

2. Stay with/sign up for Enterprise Support and get more support (how much more is unclear, but I’ve been posting on this and will post more) and definite, clear price increases in the form of increases in the maintenance percentage.

It’s a choice. But is it really the sort of choice customers want?

And is offering this choice really an example of a commitment to customer satisfaction?

Over the past few years, a cadre of “independent” analysts have set up shop and started to speak frankly about the enterprise application vendors in their blogs and tweets. You know who I’m talking about: Vinnie, of course, and Dennis, and the Enterprise Irregulars, and Brian, and many, many others. These people were really good analysts to begin with (I’ve known them for years), and they have found their more-or-less independent status freeing, so they write the best stuff that is out there.

So if they’ve got a better mousetrap, why is it that the big guys, Forrester and Gartner, just seem to roll on and on, happily enough? Why haven’t they folded, the way the portable CD player did when the iPod came out? In the free market, after all, shouldn’t consumers pick the best quality at the lowest prices?

I got an interesting answer yesterday, when I attended a talk at Harvard by Marc Flandreau, who is at the Graduate Institute of International and Development Studies in Geneva. Marc is an expert on bad-mouthing, or as we like to say in English, “blackmail.” And he has a fascinating historical explanation of how pay-to-play can emerge in information markets.

Marc’s focus is the wild and woolly bond market in Paris pre-World War I, a market that was deeply affected by the emergence of a free (or at least libel-free) press in France post-1880. At the time, it was so easy to start and print a newspaper cheaply that a new kind of blackmail emerged. It was, essentially, “Pay us, or we’ll say bad things about you.” The very relaxed libel laws of the time made this a genuine threat, and people (Marc shows) really did make money doing it.

In the financial markets, the threat took the form, “Mr. Russian Government, pay us, or we’ll publish an article saying that you’re losing the then-active Russo-Japanese war.” And, as it turns out, the Russian Government paid up. The records, which were published in the 1930s, show that the government’s expenditure on publicité went up by a factor of two or more during that period, over what would “normally” be expected.

The interesting thing, though, is where the money went. Essentially, it went to a set of what we would now call unscrupulous PR men (possibly a redundancy, I admit), who took the blackmail money and distributed it among the press.

Now, here is the rub. Most of the money apparently went to the most reputable, most stable, and most expensive financial journals, not to the blackmailers. What these PR men tried to do with the bribe money was to make blackmail expensive, by “supporting” an alternate, established, reputable forum that people would look to for authoritative information; the existence of this forum brought the threat of blackmail from the cheap-sheet vendors down to acceptable levels.

Flandreau demonstrates fairly convincingly that while some money did go to throw-away (sometimes one-issue) newspapers, most of the money went to those journals and was a significant source of income for them.

“So if I may paraphrase,” a Harvard professor said, after hearing this, “The National Enquirer is one of the things that keeps The New York Times alive.” Marc replied in the affirmative.

Marc’s broad conclusion is that a pay-to-play industry will emerge whenever there is a significant threat from “badmouthing.” (He cites Moody’s as a modern-day example of the same phenomenon.) In all these cases (I think movie stars of the 1920s are another example), the best strategy for coping with badmouthing is to support cooperative, but reputable mouthpieces who will then be a permanent counter to whatever bad things are said by the smaller, less reputable people. In his analysis, the accuracy of what these smaller, less reputable people say is irrelevant; it could be true, it could be false. What matters is that you can exert some control over the best people in the industry.

Anybody who has ever taken a PR class already knows this, of course. But what Flandreau contributes are two simple but odd facts: the premiums are in fact very large, and MOST of the money goes to the larger, more reputable firms.

So what does this mean for Dennis and Vinnie and Brian and Michael Krigsman and Helmuth Gümbel? Well, pretty much it means that their efforts enrich Gartner and Forrester far more than they enrich the analysts themselves.

Dennis says in a recent tweet, “Pay to play doesn’t cut it.” Sorry, Dennis, in this case you’re just wrong. If Marc is right (and I have no reason to think he isn’t), what you’re really doing is supporting the pay-to-play industry.