In yesterday’s post, I argued that ROI would not be an adequate measure of the benefits conferred by new-gen (or pseudo-new-gen) applications like Workday, Business By Design, or Fusion Application Suite. The previous-gen applications were all about automation. The new-gen suites confer real benefits (I think), but not necessarily benefits that fall through to the bottom line.

What benefits are they? Well, they have to do with working more effectively: making fewer errors, putting more time into work and less into busy work, making more accurate decisions, faster. Is there benefit from this kind of thing? Sure. But how do you measure it?

In the post, I suggested a hazy term, “operational effectiveness,” for the benefits one should expect. What is “operational effectiveness?” Let me admit freely that I don’t know for sure. In this post, let me propose an analogy, which should help you to understand what I’m getting at.

The analogy comes out of a historical situation that always posed a problem for ROI analysis: the transition in business from the typewriters that sat on secretaries’ desks to the PCs that sat on executives’ desks. This transition occurred in two different phases. First, the typewriters on secretaries’ desks were replaced with big, clunky word processors that sat next to the desk. These word processors automated the secretary’s document production work. Then, the secretarial position itself was eliminated, and the typing function became something that executives did themselves on that PC.

The transition to word processors could easily be justified in ROI terms. We could get more work out of the secretary or else hire fewer secretaries. Whether the justification was real is an open question. But it’s certain that that’s how people thought of it.

The next transition was much more problematic for ROI analysis. Expensive executive time was now being put into jobs that had been performed more efficiently by much cheaper labor.

At the time, people didn’t put a lot of thought into figuring out why they were funding this transition. Executives saw the PCs, knew that everyone else was using them, needed them for some functions (e-mail, spreadsheets), and just decided, “We’re doing it this way.” At least in my recollection, that’s what happened.

So were they just loony or lazy or wasting shareholder money on executive perks? I don’t think so. I think what they were plumping for was the same “operational effectiveness” that I’m talking about 25 years later.

True, they spent more time typing. But they also had more control over the final product; they could change the product more easily; and they could distribute it without much overhead. And, at the same time, they were changing the form of what they were doing. They weren’t just producing typed memos; they were producing documents with fancy fonts and illustrations, and they were creating PowerPoint decks. True, many an executive spent ridiculous amounts of time fiddling with type sizes to get everything on one page, but even acknowledging that, they thought the new way was better.

Indeed, by the time the transition was finished, justification wasn’t even a question, because the new tools had changed the nature of work, and now you couldn’t get along without them. When executives were doing the typing, they stopped creating long reports. More and more of the time, a corporation’s decision-making wasn’t wrapped around a full document (minutes, memos, or formal reports); it was wrapped around PowerPoint decks.

So by the end of the transition, ROI analysis had become entirely moot. How could you get a tangible measure of benefits when you were comparing apples and oranges?

Could we be seeing a similar transition now? It’s certainly possible. The analog to the word processors is that first generation of enterprise applications, which were funded by ROI analysis of the automation benefits they conferred. The analog to the PC is the second generation of enterprise applications.

(One caveat. As I’ve said before, I don’t think that Fusion Applications or the versions of Business by Design that I’ve seen are in fact second-generation applications. They’re more like Version 1.3. But they’re close enough to next-gen to raise the problem I’m talking about.)

If the analogy holds and if second-gen apps work as the developers hope, the benefits that businesses are going to experience will be equally hard to get your arms around, partly because the benefits are so subtle and disparate and partly because you’ll see a shift in the way work is done.

Does that mean that we won’t be able to talk about the benefits and we’ll just bull ahead with them? Well, that’s why I’m introducing the notion of operational effectiveness. It does seem to me that we can get clearer about what the benefits are.

So come on guys. Make comments. What is operational effectiveness? And how can we tell whether we are getting it?

This blog post is more an open question than a pronouncement, so please feel free to comment on it or take the idea in a different direction.

I’ve been thinking about the next generation of Enterprise Applications, the value that they (might) bring, and about how people might justify replacing the enterprise applications they have with the new generation.

Generally speaking, you justify an investment in infrastructure using ROI. You invest this much, get this much return. For the first generation of enterprise applications (most everything designed between 1990 and 2003, or even later), this made sense, because they were basically automation apps. They automated work done by people. The ROI showed up because you didn’t have to pay people to do it any more.

Now these new applications simply don’t do that, that is, they don’t automate appreciably better than the old applications do. And this means that ROI is a pretty crummy tool for evaluating whether an investment is worthwhile. Yes, there will be ROI if the enterprise application works the way it’s supposed to. But the return will be highly indirect. You won’t be able to fire people and pay for the application.

Brian Sommer has been talking about this problem for years–essentially, he points out that automating something that’s already automated doesn’t justify an investment on the same scale. But he has never really explored whether there are other forms of justification.

Nenshad argues in his book and his blog that good performance management enabled by modern tools will get you to a place you want to be, and he tells you a lot about how to do it. And while it is true that these new applications help you manage performance better and that’s one of the reasons you want them, he doesn’t offer a way of thinking about justifying the move to what he recommends. (Nenshad, if I missed this in your book, I’m sorry.)

So what does one use? Well, let me offer a concept and sketch it out in a paragraph or two, and you, my small but apparently very loyal readership, can then take me to task.

I’ll argue that what these new applications really do is improve “operational effectiveness.” What’s that? Well, to start out with, let’s just say that they let each employee put more effort into moving the company forward and less effort into overcoming friction.

Well, that’s suitably hazy. So here are some things that you could measure that would, I think, be indicators that employee effort is more coherent and focused. You could, for instance, take a page from the black belts and measure operational errors or even just exceptions as part of operational effectiveness. Or, you could look at corporate processes that are nominally automated and see whether they are managed by exception. (Truly best, automatable practices should require almost no routine manual actions.)

People sometimes try to look at operational effectiveness by measuring what percentage of revenue is spent on things that feel like pure expense, like IT. So, a company that spends 3% of its revenues on IT is less effective than one that spends 1%. People also try to get at it by looking at which activities are “value-added” and trying to get people to do more of them. Both ideas are silly, of course, in themselves. (The 3% company may be spending on stuff that makes it effective, while the 1% company isn’t.) But I think there might be better indicators of operational effectiveness. Wouldn’t operationally effective companies spend less time in meetings, send fewer junk e-mails, work fewer hours/employee (!), resolve more customer complaints and deflect fewer, etc., etc., etc.?
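
To make the idea of indicators a bit more concrete, here is a minimal sketch, in Python, of what a crude operational-effectiveness scorecard might look like. Every metric name and figure in it is a hypothetical illustration of the indicators above, not a real methodology:

```python
# Hypothetical sketch: track a few of the indicators mentioned above and
# see which ones move in the "more effective" direction between two periods.
# All metric names and figures are made up for illustration.
from dataclasses import dataclass


@dataclass
class QuarterlyIndicators:
    meeting_hours_per_employee: float  # lower is better
    junk_emails_per_employee: float    # lower is better
    exception_rate: float              # share of "automated" transactions needing manual fixes
    complaints_resolved_share: float   # resolved (not deflected) complaints / total; higher is better


def improvements(before: QuarterlyIndicators, after: QuarterlyIndicators) -> dict:
    """Return which indicators moved in the direction of greater effectiveness."""
    return {
        "meetings": after.meeting_hours_per_employee < before.meeting_hours_per_employee,
        "junk_email": after.junk_emails_per_employee < before.junk_emails_per_employee,
        "exceptions": after.exception_rate < before.exception_rate,
        "complaints": after.complaints_resolved_share > before.complaints_resolved_share,
    }


# Example: compare the quarter before a new system goes live with the quarter after.
before = QuarterlyIndicators(6.0, 40.0, 0.12, 0.70)
after = QuarterlyIndicators(5.0, 35.0, 0.08, 0.78)
print(improvements(before, after))
```

No single one of these numbers proves anything by itself, but a basket of them moving in the same direction is the kind of signal I have in mind.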

You get the idea, I think. So why is this a good measure for the new generation of applications? Because, bottom line, I think that’s what they’ll do. They’ll help organizations and people avoid wheel-spinning, error correction, and pointless processes or rules by getting to what matters, faster.

Of course, people have always accused me of being a ridiculous, blue-sky, naive optimist. But that’s how it seems to me.

What do you think?

A few days ago, I argued that a company that wants to replace its old, limping systems with a brand-spanking-new Oracle or SAP application should probably wait for the next generation of apps.

The post got a lot of approving comments, which surprised me. I think there are pretty good arguments for just hauling off and buying. Consider what one of these hypothetical companies might say in defense of a decision to buy now rather than later.

1. We’ve got the money now and we should spend it.

2. The product we’ll get will be much more reliable.

3. The cost of services surrounding the product will be lower.

4. We don’t know when the new products will be out (with the possible exception of Workday 10).

5. We don’t know what will be in the new product.

Imagine what idiots we’d look like, the buyer might say, if we waited for five years for a product that was no better than what we could buy today and far more incomplete and buggy.

I see the force of this, which is why I thought it was a close call. Ultimately, I did decide it’s better to wait for markedly better products that are coming, despite the risks and delays. But I saw why people would disagree.

So are my commenters intemperate or am I dithering? One test for this is to look at the case of somebody who is not considering a replacement, but is considering a fairly big investment in the existing platform. Maybe they’re considering an upgrade, or maybe they’re considering some extension products, or maybe they want to push their existing installation out to other geographies.

This is a very realistic case. Several companies I know fairly well are considering one or more of these options. One is upgrading to Oracle 11i; another wants to upgrade to Infor LN. Still another wants to extend its QAD implementation to another geography. Still another wants to buy a CRM system from its existing vendor.

For each of these companies, the details matter a lot, but on reflection, I think the same argument applies. It will be hard to get the return on this big new investment in the old platform, because the useful life of what is to you a new product is not long enough to justify the expense.

This raises two questions. First, how does one calculate “useful life”? Most of the companies I’m thinking of believe that the useful life is defined only by how long they’re willing to use it, and for some companies, that’s pretty long. (There are, after all, big SAP clients who are still using R/2.) I think this is wrong, but I’m going to have to hold off explaining why till a later post.

The second question is, “Assuming that the commenters and I are right, what kinds of investments should companies who have decided to wait actually make?” I have no determinate answers to this, but I think there are some guidelines.

Start with Gavin’s comment on the previous post. Gavin points out that some investments in the Oracle infrastructure and in new Fusion-based products will actually take you toward the next Oracle generation. He is quite right; Oracle designed things this way, and partly because of the problem that I’m describing, they quite deliberately created some ways of investing in products from Oracle without locking yourself into an older technology.

There are many, many caveats, of course. Among other things, if you have an Oracle system now, you might not want Oracle in the future. The three main products I mentioned in the earlier post (Workday, Business by Design, and Fusion Applications) are all highly differentiated, each with its own flaws and virtues. A rational person would do well to look at all of them before deciding to stick with the vendor they have now. (This applies to SAP customers, too.)

Even if you are bent on Oracle, you still may want to take some time thinking through your infrastructure stack before investing in pieces of it. Even the examples Gavin gives like OpenID, which are likely to be pretty good, may not be right in the long run, and if they aren’t, that will be a lot of trouble and expense gone to waste.

You should note that Gavin’s argument almost certainly will apply to SAP as well. SAP is also working on “hybrid” intermediate solutions, and I’ll bet you dollars to donuts that they’re trying to figure out every way they can to ease the migration to the new system by asking you to make steady, rational investments in products that extend your current capabilities.

But what about customers for whom an infrastructure or extension investment isn’t right? Here, I think there are some interesting arguments, akin to Gavin’s, for small, light cloud-based apps, point solutions designed to solve highly specific problems. I’m not just talking about a CRM app or a call-center app or a recruiting app; I’m talking about things that are very, very specific to your industry, but really powerful, things like Tradestone in the apparel industry.

Another possibility is to spend some time and effort cleaning up your existing installation. This will improve its current effectiveness, extend its useful life, and very possibly lower the cost of the new system significantly. (A lot of the cost of any implementation is cleaning up after the mess left by a system that everybody has given up on.)

There’s a very smart analyst in Europe, Helmuth Gümbel, who has spent a lot of time thinking about this problem. He has a blog, and he also has a conference, Sapience, which goes into these questions at length. If you’re thinking about extending your current ERP to other geographies, a reasonable alternative might be to find lower-cost ERP systems to serve those geographies.

The basic theme running through these latter approaches is that while you’re waiting, you can focus on saving some money and preparing your current installation, thereby making the later transition to a newer technology faster and more affordable.

Is it time to wait? If it isn’t now, then when?

Wait, that is, for the next gen of applications–Workday HR and Financials, SAP Business by Design, or Oracle Fusion Application Suite–rather than go with what’s out there now: PeopleSoft 9, Oracle EBS 11, or SAP Business Suite–all quite good products, but limited in many ways.

My gut says, “Wait.”

Of course, unless you happen to be my gastroenterologist, you shouldn’t care much about what my gut says. So here’s the reasoning behind it, which I think you can adapt to your own purposes.

PeopleSoft, EBS, and the Business Suite were all designed in the early ’90s and are now mature. (There will be no fundamental improvements made to any of them.) So they’re roughly 20 years old. Let’s assume that this takes them halfway through their useful life.

Now let’s do some algebra. Assume that the new products have a similar useful life and offer a 30% improvement in overall effectiveness. Say the net benefit of buying a this-gen system today is 1. Since the this-gen products are already halfway through their life, a next-gen system that’s 30% better and that you get to use for twice as long has a net benefit of 1 × 1.3 × 2 = 2.6. Now assume that the net cost of not replacing your old system is -0.1/yr, which makes it very, very expensive to keep your old system. Even if you have to wait four years for the next-gen system, you’re still more than twice as well off waiting (2.6 - 0.4 = 2.2, vs. 1). Even if the next-gen system costs significantly more than the old one (fairly likely, depending on the vendor), it’s still a big win.

If you e-mail me, I can give you a spreadsheet, and you can run the numbers yourself.
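
If you’d rather not wait for the spreadsheet, here’s a minimal sketch of the same arithmetic in Python. The figures are just the illustrative assumptions from above, not data; swap in your own:

```python
# Minimal sketch of the wait-vs-buy arithmetic above.
# All figures are the illustrative assumptions from the post, not data.

def net_benefit_of_waiting(
    benefit_now=1.0,     # net benefit of buying a this-gen system today
    improvement=0.30,    # assumed effectiveness gain of the next-gen system
    life_multiple=2.0,   # assumed: you get to use the next-gen system twice as long
    cost_per_year=0.1,   # assumed yearly cost of limping along on the old system
    years_to_wait=4,     # assumed wait until the next-gen product is ready
):
    benefit_later = benefit_now * (1 + improvement) * life_multiple  # 2.6
    return benefit_later - cost_per_year * years_to_wait             # 2.2

print(net_benefit_of_waiting())  # 2.2, vs. 1.0 for buying a this-gen system now
```

The conclusion is quite sensitive to the assumed improvement and to how long you actually have to wait, which is exactly what the factors below capture.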

You don’t need the spreadsheet, though, to see that the argument is a function of four factors: the relative benefit of adopting next-gen apps (over the life of both apps), the cost of implementing them, the risk of implementing them, and the cost of waiting.

A friend who reviewed this argument offered the following analogy. Let’s say you live in an older house whose roof is leaking, whose pipes are rusty, and whose electrical wiring is way out of date. Sure, it’s time to move. But what if there were a big tax break coming fairly soon that would allow you to buy a much better house? As long as the break was big enough, my friend says, the best bet for most people is to wait, because it’s a house, houses last a long time, and being in the better house makes a big difference for a long time.

Even if things are pretty bad in the old house, he goes on to say, your best bet is just to fix the immediate problems: repair the roof, add some new wiring, etc.

Clearly, the biggest and most important factor is how much better that house will be. For a conservative company, this may seem to be a big unknown. But really, it’s not. If you look at any of the new-gen apps, the improvements they’re offering are fairly clear. None of them are killer or transformational; they won’t let you fly when you had been walking. They’re just the sort of things that anybody would add now that they have 20 years of perspective on the old designs.

What are those things? Well, better and faster access to data, what the other pundits call “embedded analytics.” The ability to do some level of search, without having to print out reports and trek down hierarchical menus to get to a record. The ability to bring other people into a discussion of a record, by e-mailing it or asking them to approve it or whatever. All of these things can be done in the old system. But it takes longer, is often a pain in the you-know, and is often not done. Systems that will have all these things built in will be systems where each of your employees wastes somewhat less time each day wrestling with a system that was never designed to have the flexibility and accessibility that the web era has taught us to expect from any application we deal with.

None of these is earth-shattering; indeed, I usually call the next-gen apps Version 1.3 because they’re really not that big an advance over the 20-year-old ERP applications that are Version 1.0. (Is a 2.0 coming? I think so.)

But taken in aggregate, I think they’ll make a material difference in your operational efficiency. Enough of a difference to be worth waiting for.

Does this really apply to your situation? What about that risk? What are the chances that you will get the gains that would justify waiting? What about the fact that your company is ready to move now and for you, such a move comes at the right time in your career? All good questions. And in some cases, it may be right to jump. But for most people, the best thing to do is to take steps to reduce the risk and time to benefit.

It stands to reason, doesn’t it? The more thorough and rational the buying process for enterprise applications, the better the outcome. For sure. Right?

Well, the other day, Dennis Moore, aka @dbmoore, a well-known figure in the industry, posted a query on Twitter asking for data that would show this is true. The more thorough the buying process, the more effective the implementation; that has to be true, doesn’t it? But no, the guy wants data. “Not anecdotes,” he said later, “but data.”

He isn’t going to get any. There are three reasons for this. First, there isn’t any reliable data of any kind on whether implementations were successful, at least none that I’ve seen in a career of nearly two decades. Second, most effort expended on pre-purchase analysis of software is misdirected, adding little to the quality or accuracy of the decision. And third, if you work backwards from failed implementations and identify the causes of the failures, it is very rare that the cause is the kind of thing that could have or should have been caught by a more thorough analysis.

The fact that there is little reliable data on whether an enterprise application product works is, of course, a scandal, but the fact remains and will continue to remain just so long as enterprise application companies want everybody to believe that the odds of success are high and customers are embarrassed to admit failure.

I have been involved in at least two attempts by large, reputable companies to get a good analysis of what value, if any, has been gained after an enterprise app was implemented. The first interviewed only project managers and determined that the project managers found many, many soft benefits from the implementation. The second, a far larger effort, was eventually abandoned.

But let’s say that we had some rudimentary measure, like number of seats being actively used versus seats planned to be used two years after the initial projected go-live date. Would it show that thorough investigation really helps?

I don’t think so, and here’s why. The question of which software application to buy, and/or whether one should buy one at all, is usually a very simple question, one with a relatively clear right answer, at least to an objective observer. But it is rarely, if ever, treated as a simple question. People wrongly worry about a lot of irrelevant things; they are (usually) distracted by the salespeople, who naturally want the purchasing decision to be based on criteria most favorable to them; and because there’s a lot of risk (A LOT), people tend to create lengthy, rigorous, formal processes for getting to a decision, processes that do very, very little to improve the accuracy of the final decision.

Honestly, I can usually tell in an hour’s phone conversation what a company ought to do, and I often check back later — sorry Dennis, more anecdotal evidence — and I’d say I’m right at least 2/3 of the time, maybe more. And because of the way my business model works, I don’t even charge for these conversations.

What do you need to look at? Well, it’s a complex question, don’t get me wrong. But because the number of providers is limited, the capabilities are limited, and the likelihood of failure pretty high, there are usually only a few things that actually matter. And when there are only a few things, it shouldn’t take you that much time to figure them out.

Give me a call, any time, if you want to test this out.

Those of us who have been in the business for a while feel it’s in the doldrums: there’s not much innovation, and every year, the apps get older.

Where will innovation come from? Well, the thing about innovation is that it has to change something that we think is fundamental and immutable: it has to be on a CD-ROM, it has to be typed from a phone keypad, etc. So you can’t look for innovation in the same old stuff.

That said, here are four areas that I think are wide open, wide open, wide, wide, wide open. An enterprise application (or an enterprise application company) can get an edge if it can do one of the following things:

Reduce the cost of sales by a factor of 10.

Reduce the risk of ownership by a factor of 2.

Increase the effectiveness of the application by a factor of 5. What’s a measure of effectiveness? Well, let’s say it’s the number of users who use the application seamlessly and easily.

Increase the speed of the application (usually by using a special-purpose database) by a factor of 50.

At least two of these aren’t impossible, because we’ve seen them happen.

Salesforce reduced the cost of sales for SMB customers by selling an enterprise-level application that you could test for free and buy with a credit card.

Workday and Qlikview both transformed applications in their area using special-purpose databases, and Hasso wants SAP to do the same.

And at least one large company, SAP, is trying for innovation in the second. Fumbling and clumsy as their effort to improve maintenance has been, the idea is to reduce the risk of owning SAP by making maintenance practices considerably more effective.

As for reduced risk of ownership or increased effectiveness (actually, maybe you’ve seen Qlikview do this), we’re still waiting.

Ideas on who’s done this or how it can be done?