This blog post is more an open question than a pronouncement. So please feel free to comment on this or take the idea in a different direction.

I’ve been thinking about the next generation of Enterprise Applications, the value that they (might) bring, and about how people might justify replacing the enterprise applications they have with the new generation.

Generally speaking, you justify an investment in infrastructure using ROI. You invest this much, get this much return. For the first generation of enterprise applications (most everything designed between 1990 and 2003, or even later), this made sense, because they were basically automation apps. They automated work done by people. The ROI showed up because you didn’t have to pay people to do it any more.

Now these new applications simply don’t do that; that is, they don’t automate appreciably better than the old applications do. And this means that ROI is a pretty crummy tool for evaluating whether an investment is worthwhile. Yes, there will be ROI if the enterprise application works the way it’s supposed to. But the return will be highly indirect. You won’t be able to fire people and use the savings to pay for the application.

Brian Sommer has been talking about this problem for years–essentially, he points out that automating something that’s already automated doesn’t justify an investment on the same scale. But he has never really explored whether there are other forms of justification.

Nenshad argues in his book and his blog that good performance management enabled by modern tools will get you to a place you want to be, and he tells you a lot about how to do it. And while it is true that these new applications help you manage performance better and that’s one of the reasons you want them, he doesn’t offer a way of thinking about justifying the move to what he recommends. (Nenshad, if I missed this in your book, I’m sorry.)

So what does one use? Well, let me offer a concept and sketch it out in a paragraph or two, and you, my small but apparently very loyal readership, can then take me to task.

I’ll argue that what these new applications really do is improve “operational effectiveness.” What’s that? Well, to start out with, let’s just say that they let each employee put more effort into moving the company forward and less effort into overcoming friction.

Well, that’s suitably hazy. So here are some things that you could measure that would, I think, be indicators that employee effort is more coherent and focused. You could, for instance, take a page from the Six Sigma black belts and measure operational errors or even just exceptions as part of operational effectiveness. Or you could look at corporate processes that are nominally automated and see whether they are managed by exception. (Truly best, automatable practices should require almost no routine manual actions.)

People sometimes try to get at operational effectiveness by measuring what percentage of revenue is spent on things that feel like pure expense, like IT. So a company that spends 3% of its revenues on IT is less effective than one that spends 1%. People also try to get at it by identifying which activities are “value-added” and pushing people to do more of them. Both ideas are silly, of course, in themselves. (The 3% company may be spending on stuff that makes it effective, while the 1% company isn’t.) But I think there might be better indicators of operational effectiveness. Wouldn’t operationally effective companies spend less time in meetings, send fewer junk e-mails, work fewer hours per employee (!), resolve more customer complaints and deflect fewer, etc., etc., etc.?

You get the idea, I think. So why is this a good measure for the new generation of applications? Because, bottom line, I think that’s what they’ll do. They’ll help organizations and people avoid wheel-spinning, error correction, and pointless processes or rules by getting to what matters, faster.

Of course, people have always accused me of being a ridiculous, blue-sky, naive optimist. But that’s how it seems to me.

What do you think?

It stands to reason, doesn’t it? The more thorough and rational the buying process for enterprise applications, the better the outcome. For sure. Right?

Well, the other day, Dennis Moore, aka @dbmoore, a well-known figure in the industry, posted a query on Twitter asking for data that would show this is true. The more thorough the buying process, the more effective the implementation. It has to be true, doesn’t it? But no, the guy wants data. “Not anecdotes,” he said later, “but data.”

He isn’t going to get any. There are three reasons for this. First, there isn’t any reliable data of any kind on whether implementations were successful, at least none that I’ve seen in a career of nearly two decades. Second, most effort expended on pre-purchase analysis of software is misdirected, adding little to the quality or accuracy of the decision. And third, if you work backwards from failed implementations and identify the causes of the failures, it is very rare that the cause is the kind of thing that could have or should have been caught by a more thorough analysis.

The fact that there is little reliable data on whether an enterprise application product works is, of course, a scandal, but the fact remains, and will continue to remain, just so long as enterprise application companies want everybody to believe that the odds of success are high and customers are embarrassed to admit failure.

I have been involved in at least two attempts by large, reputable companies to get a good analysis of what value, if any, has been gained after an enterprise app was implemented. The first interviewed only project managers and determined that the project managers found many, many soft benefits from the implementation. The second, a far larger effort, was eventually abandoned.

But let’s say that we had some rudimentary measure, like number of seats being actively used versus seats planned to be used two years after the initial projected go-live date. Would it show that thorough investigation really helps?

I don’t think so, and here’s why. The question of which software application to buy, and indeed whether one should buy one at all, is usually a very simple question, one with a relatively clear right answer, at least to an objective observer. But it is rarely, if ever, treated as a simple question. People worry about a lot of irrelevant things; they are (usually) distracted by the salespeople, who naturally want the purchasing decision to be based on the criteria most favorable to them; and because there’s a lot of risk (A LOT), people tend to create lengthy, rigorous, formal processes for getting to a decision, processes that do very, very little to improve the accuracy of the final decision.

Honestly, I can usually tell in an hour’s phone conversation what a company ought to do, and I often check back later — sorry Dennis, more anecdotal evidence — and I’d say I’m right at least 2/3 of the time, maybe more. And because of the way my business model works, I don’t even charge for these conversations.

What do you need to look at? Well, it’s a complex question, don’t get me wrong. But because the number of providers is limited, the capabilities are limited, and the likelihood of failure is pretty high, there are usually only a few things that actually matter. And when there are only a few things, it shouldn’t take you that much time to figure them out.

Give me a call, any time, if you want to test this out.

The Oracle Quarter

June 23, 2009

I was following some Oracle deals, as their quarter closed last month, and what struck me most about some of these deals is that Oracle competitors are trying to out-Oracle Oracle–and failing.

Let me give you an example. One of the most common, best-known plays in the ERP salesperson’s playbook–the equivalent of post-right in football–is called elevating the deal.

When you elevate the deal, you try to take it out of the hands of the “team” that has been assembled to evaluate your product and put it in the hands of busy executives, preferably several levels up, who are in a position to override the team. When you’re put in front of these executives, you pitch business benefits, instead of functional capabilities, and fill their soft little heads with visions of flexibility and agility, competitive differentiators, and money dropping like rain to the bottom line. At this level, of course, you just assume that your software actually has the functionality that will allow you to gain those benefits, which is of course what that low-level team was taking all this time trying to assess.

Both Oracle and SAP are past masters of this particular play; when they run it, it’s Tom Brady to Randy Moss, man.

So this quarter, I heard about two situations where one ERP vendor tried to elevate the deal, at the express instructions of the coach, er, head of sales. In both cases, was it Oracle? No, it was the other guy. And did it work? Not a bit. The only thing the competitor succeeded in doing was ticking off the selection team.

It’s a pretty small sample size, of course. But it made me wonder. I know that the applications themselves are showing their age. Is it possible that the tactics that go along with those apps are beginning to fray, too? I have been saying for some time now that they’ve outlived their usefulness to the customer (if there ever was any). See my blog post, “Lawson’s New Office,” for instance. But now I’m wondering: have the old sales tactics stopped working for the vendors, too?

I’d be curious to get comments on this; as I said, my sample size is too small. But assuming I’m right, there’s one other question to ask. Why has Oracle figured this out and other companies haven’t?
