Groupon at $6 billion, for what, coupons? Twitter at who knows how much more, for what, stray pulses of thought? Cornerstone OnDemand at 10 times revenues? How much of that is for the words “OnDemand” cleverly tacked onto the name?

Are we in the middle of a cloud boom? And if so, can we learn anything from the last big boom, the dot-bomb?

Well, I remember the last boom pretty vividly–I was young and foolish back then–and I would like to offer, if not sage advice, a way of thinking about these companies that at least helps you tell them apart.

I’m going to use as Exhibit A, a rather smurmfy story that was circulating a few weeks ago in the financial community. Marc Benioff calls up Dave Duffield at Workday and says, “How much?” Dave says, “$2 billion.” The person telling the story intends for you to be shocked, shocked, but just in case you missed the point, he closes the story with a snide comment: “$100 million in revenue, and they want $2 billion? Time to sell cloud stocks.”

Now, I don’t believe that this story actually happened, for two obvious reasons. Reason #1. Both people have the good sense to keep a story like that to themselves. Reason #2. Well, wait until you get to the end.

But first, let me tell you about Workday.

Workday is a SaaS (cloud) company that does full-suite (HR, Payroll, and now Talent), global HR on an object-oriented in-memory database. It is run by two of the smartest people in the business, and since they cared about HR, they recruited many of the best people from PeopleSoft, which at one time had the best HR and Financials package out there. I’m not a super HR geeko dweeb genius–you know who you are, Naomi–but I’ve seen the package many times, and I’ve talked to them often about what they’re trying to do, and as far as I’m concerned, they’ve got a better mousetrap. They do HR (for their target market) better than anybody else in their space.

Now, when Workday was being started, they decided early on that they would build a product for big, global companies. They had done this once before at PeopleSoft, as I said, so they understood the challenges. Privacy laws, getting the addresses right worldwide, languages, expats, payroll, you name it. Hard stuff (and, I might add, not done entirely successfully the first time around). This time, though, they figured that their cloud-based, object-oriented, in-memory technology would be way better able to do all this hard stuff than what they had the first time around, plus, as it turns out, it would be highly buzzword compliant.

Have they gotten there? Not yet. But so far, so good. They’ve recruited one global client, Flextronics, and installed the package worldwide. There are issues, there are complications. But they’ve got a pretty good proof point, and now, they’re trying to sell it to and install it in other large, global companies who happen to want better mousetraps in HR.

So what do you think? Is this one-third as good as Groupon? One-fifth? As good? Or is it, as the smurmfer seems to suggest, 1/20th of what Groupon is worth, max?

Well, let me suggest a way of looking at this question. And please, the usual disclaimers. I don’t own any Workday stock, don’t have any economic relationship with them, and paid my own way to the last meeting I had with them. I’m enthusiastic about them, but only because I can really see the point of in-memory, object-oriented cloud computing and because I like the way they’ve applied the technology to the problem.

So what’s the value of a better mousetrap?

When I think about any of these companies, I start by trying to figure out what kind of value they could deliver. Back in the dot-bomb days, this was a really good way of separating the wheat from the chaff, because many of the vendors didn’t deliver any value at all and wouldn’t unless they changed what they were doing radically. Commerce One, I remember, was my paradigm for this. Their software didn’t do anything and wouldn’t do anything, so there was very little value.

I then figure that the value of the company is some function of the value they deliver multiplied by the time that they can deliver it for. This last part is important. One worry I have about Groupon, for instance, is that it might turn out to be a fad. People will get tired of getting coupons (unlikely, I admit), or merchants will get tired of giving stuff away, and after a while they’ll fade to black. With Google, by contrast, I don’t have that worry. It will deliver value for a very long time to come, unless people stop using the web.

So where does Workday stack up on this measure? Obviously, it’s not bad. If it really is a better mousetrap and it gets into more of the large companies in the world, there’ll clearly be a lot of value delivered. And given how long companies use these products (20 years is pretty normal), the total value delivered over time is significant. As much as coupons? Sure. Maybe more.

But then why haven’t they already zoomed up into the stratosphere, like Facebook or Twitter? The answer, I think, points up a really important difference between the public cloud applications and the ones, like Workday, which are meant for organizations. The public consumer applications are things that an individual picks up and uses. But enterprise applications are really a piece of infrastructure, like buildings or electrical systems, which don’t deliver much value until they work for a lot of people.

When you build products for individuals, you can bootstrap. But when you build infrastructure, revenue doesn’t come in right away. There’s always a large up-front investment, partly in physical plant, partly in getting the infrastructure to the user. True of electrical systems or telephone systems. True of roads or harbors. True of enterprise software systems.

So Workday today isn’t zooming because it’s still building out. In a way, it’s a little bit like Hoover Dam, say 75 years ago, when the dam was done and the pipeline was being built. A monster amount of money had already been spent on the dam, and during that time, there was no revenue at all. Then, while the pipeline was being built, some revenue came trickling in (hee, hee). The people in El Centro or Thermal or Brawley were getting good, fresh water, and they were paying for it. But it didn’t add up to a lot. No offense meant to Flextronics, but this is the situation that Workday is in now, with that $100 million that some people find so sneer-worthy, if $100 million it is.

So why are they sneering? Well, sure, if the pipe stopped in Brawley, it wouldn’t be so great. But you have to ask, why does anyone think the pipe’s stopping there? Presumably, the pipe will go farther, hit a city or two, and then the investment looks a lot better.

Sure, there is some doubt, as long as the value isn’t actually being delivered. An earthquake might break the pipe or the dam. The water might be no good. Somebody might invent a fresh-water-from-salt process that works really well. At Workday, people might not like the product as much as I do. It might be difficult or expensive for companies to dump their old systems. New competition might crop up.

But that’s not the point. The question is, how do you value Hoover Dam when the pipe has only gotten to Brawley? Well, you don’t start out by sneering at the revenue they’re getting so far.

So, two points to remember, if you’re afraid of boom valuations.

1) The real value is a function of the total value delivered over the life of the product.

2) For infrastructure products, especially cloud infrastructure products, you see a huge up-front investment, and while things are being prepared, money comes in slowly.
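To make those two points concrete, here is a toy calculation in Python. Every number in it is hypothetical (none of them are Workday’s); the point is only that the discounted value of a product that delivers for 20 years dwarfs any single-year revenue snapshot taken during the build-out.

```python
# Toy model of the two points above. All numbers are hypothetical,
# chosen only to show the shape of the argument.

def discounted_value(annual_value, years, rate=0.10):
    """Total value delivered over the product's life, discounted to today."""
    return sum(annual_value / (1 + rate) ** t for t in range(1, years + 1))

build_out_revenue = 100e6   # a single-year snapshot during the build-out
mature_annual_value = 1e9   # value delivered once the "pipe" reaches the cities

total = discounted_value(mature_annual_value, years=20)
print(f"Snapshot during build-out: ${build_out_revenue / 1e6:.0f}M")
print(f"Discounted 20-year value:  ${total / 1e9:.1f}B")   # about $8.5B
```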

So, under these criteria, how does Workday look? Well, it’s pretty good. Take a look at the value that Flextronics is planning to get. Assume that other large, global companies will also want that value. And assume that once it’s in place, the product can go on for the 20 years that PeopleSoft seems to be lasting inside companies. As good as Groupon, ya think? I think.

Which brings me to the second reason I don’t believe that story. $2 billion sounds like a number that a value investor made up because he’s trying to show how ridiculous valuations have gotten. But when you look at what’s actually going on there, $2 billion seems off. It appears to me that it’s rather too low.

Ozymandias Crumbles

September 17, 2010

Over the past 20 years, the American Airlines nonstop between BOS and SFO has been a fixture of the software industry. Almost any time you took it, you’d see friends, and even when you didn’t, you’d be sure you hadn’t boarded the wrong plane, just because of the number of Oracle bags.

This flight is no more. A little more than a month from now, American Airlines will end nonstop service between SFO and BOS.

Why? The commuters left American for JetBlue and Virgin America, for better seats, better entertainment, fewer niggling charges, and maybe even better treatment. I saw this myself when I actually got an upgrade on American a few weeks ago, and I confirmed it by talking to a frequent commuter who had gone over to JetBlue even though he had more than a million and a half miles on American.

I wonder. Is this how other empires crumble? In my world, you have SAP and Oracle, seemingly as unassailable as the Great Wall of China, but both offering an experience that has been compromised by excess investment in an aging infrastructure, a labor force that has been asked to shoulder most of the burden imposed by this infrastructure, and a management that grew up in the good old days and still (my guess is) secretly longs for them.

For a long time, people like myself stayed with American. And while that was happening, the management could fool itself. Leather seats? No fees? These don’t matter to our loyal customer base, so we don’t need to invest the boodle that we don’t have in matching the competition. Until, of course, the day it did matter, when there was nothing they could do.

What are the moral equivalents of leather seats and fee-less flying in the enterprise applications market? Oh, it’s any number of things. It’s the fact that search, if it’s there at all, is just about as cumbersome as search was on AltaVista. It’s the fact that doing anything, anything unfamiliar, requires the equivalent of a Type III license and just about as much training. It’s the endless, endless wait for even the most trivial or needed changes in the system. It’s the fact that you can’t use them on an iPhone or an iPad or even a Mac. (It’s amazing how many enterprise application companies are IE only.)

Please feel free to add to the list. It’s kind of fun.

Oh, people can and will put up with it for a really long time. But eventually, if the competition is smooth and persistent, it can march into a market and take it from the behemoths. I think you’re seeing this today where Workday is replacing PeopleSoft 7.5, and you saw it a few years ago when Salesforce was replacing Siebel. It does happen.

Now if only I could remember my JetBlue frequent flyer number.

The Supreme Court threw a big stone into a small pond last week when they decided a case popularly known as Bilski and for the third time in 15 years changed the rules for the protection of software. I won’t weary you with all the ins and outs. But I do want to say, quickly, what’s changed in this area of the software industry and speculate a little about how this might affect things.

(By the by, I can’t give enough credit for all this to a long conversation I had earlier this week with Robert Tosti of the local IP law specialists, Brown Rudnick.)

Change 1: Many of the patents granted in the last 15 years that essentially try to patent a business process done in software are now called into question. They are NOT invalidated, per se. But the Supreme Court has also said that at least some business processes (like the eponymous Bilski’s) are not patentable, because they are “abstract ideas.” So the more like an abstract idea the patented process is, the less likely it can be enforced.

Change 2: You can still patent software, but in the absence of any guidelines as to what is an “abstract idea” (not patentable), it is not clear that software patents will take the form they used to take. Any attempt to patent something in this area is now fraught, but a sequence of screens is clearly less risky than (though also more limited than) the business process that the sequence is intended to support.

Change 3: Innovation in software is no longer as easily protected as it was when Amazon patented “one-click shopping.” (Tosti’s example of a patent that may well still be enforceable, but might not be.) To get protection for an innovation, software companies will probably need to design and coordinate efforts in four separate areas of the law: patent law, copyright law, protection of trade secrets, and contract law. This will be expensive to do, and it will make enforcement complex (though it also gives enforcement efforts lots of scope).

How will this affect software companies? Clearly, Bilski should give patent trolls some pause. (Not that companies like NTP are dissuaded. NTP just filed suit against Apple and Google, among others, claiming infringement of their e-mail-on-mobile-devices patent.) Software companies like Oracle and Salesforce, now being sued by (depending on your point of view) greedy patent trolls or by inventors who have had their rights trampled on by greedy software companies, should feel that they have more ability to defend themselves than they did.

The one thing we won’t see is more justice. While people are trying to figure out what a patentable business process is, you’re going to see all the overreaching, ambition, greed, arrogance, foolhardiness, self-delusion, and self-righteousness that you might have seen had you been alive in the days of the Gold Rush.

I have been watching the earnest attempts of Ray Wang to “define” the flavors of cloud computing, of Phil Wainewright to describe its “unspoken benefits,” and of James Governor to simplify cloud concepts, and all those efforts make me want to use words that you shouldn’t use on the Internet.

Why? Well, they’re a bit too high-minded and value-free.

Let me explain the problem with the use of an example. Imagine you were a high-minded dictionary writer who was trying to stand above the fray, so you defined “good” as “a set of behaviors and attributes approved of by some,” and “bad” as “a set of behaviors and attributes not approved of by people who approve of the good.” There may be some sad, sorry truth to these definitions. But far from being even-handed or fair-minded, they’re simply serving the interests of the bad guys, who get to turn bad and good into a popularity contest.

Similarly, if you define hosted or SaaS or whatever as if all of these were legitimate choices which you, the high-minded definer, are not going to evaluate, you serve…well, all the guys who want to foist a bad choice on you while pretending that it has all the virtues of a good choice.

You can see this most clearly if you look at the history. Ever since people (Marc, you know who you are) started using the term SaaS, applications that are shared and leased have been what the academics call “a site for contested definition.” As soon as the term SaaS was bruited about by some people, other people would say, “Oh, well that’s just a silly new term for ‘hosted,’ and ‘hosted’ is a great thing and it’s what we’ve always done. If you’d like to lease our software, go ahead.” (Larry, you know who you are.) The same things have happened to “on-demand” and “cloud” and all the other terms that the eminent and high-minded analysts are trying to explain.

This contest is exactly why there is the confusion in the market, a confusion that Ray rightly decries. The terms are confused and confusing because people are trying to appropriate them.

They’ve tried to appropriate them for a perfectly obvious, simple reason. They’d like to get the credit and money associated with doing something difficult, but very good, without all the trouble and bother of doing that difficult and good thing. But as a definer (or as an entrant in the marketplace), you will never, repeat never figure out what they’re trying to do if you don’t look squarely at why the difficult design choice really is difficult and what the payoff for that design choice really is.

If you try to define “good,” that is, without deploying some notion of worth, you’ll end up with no practical difference between the two and a whole lot of bad people trying to use your definition to show that they’re really good.

Maybe in one of those hoity-toity academic environments like the one across the street from me, it’s OK to publish even-handed appraisals and fine, nuanced distinctions that clearly lay out all the issues for those five other people who are interested in publishing themselves on the subject. But in the real world, where some people who haven’t done their homework are trying to appropriate the hard work of people who have done their homework, it’s not OK.

That said, let me tell you the real scoop on “cloud,” “multi-tenant,” etc., etc., etc. True SaaS, true multi-tenant, true cloud applications are shiny like silver. Hosted, on-demand, and private cloud are shiny like tinsel. For any purposes that aren’t temporary and don’t have pretense built in, silver is better. Multi-tenant applications have more functionality. They’re more adaptable. They are easier to operate for the vendor. They are cheaper. They are easier to manage. They are more competitive and more likely to last. Unless you’re throwing a party and are planning to take down the decorations the next day while suffering from a hangover, they are better.

Why are true multi-tenant applications better? It’s very simple. They’re more efficient. More of any dollar spent on a multi-tenant application goes into value delivered to the customer. A programming dollar spent on a multi-tenant application is immediately delivered out to all the users of the product, and it’s put there to make the product more useful. To get the same effect in an on-premise application, it takes a lot more money. The same dollar for the programming (actually a little more), another dollar (say) for the testing, packaging, and delivery, another dollar (say) to test it at the other end, and another dollar (say) to get users to make it effective. Let me be generous and include the cost of delay in the delivery of value in those last two dollars. I think the cost is really more. But of course, there are no reliable published figures. Analysts, where are you?
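If you want that arithmetic spelled out, here’s a minimal sketch. The one-dollar increments are the “(say)” dollars from the paragraph above, illustrative rather than measured.

```python
# The paragraph's arithmetic, spelled out. Each figure is one of the
# "(say)" dollars above -- illustrative, not a published measurement.

cost_to_deliver_multi_tenant = 1.00   # programming, deployed to everyone at once

cost_to_deliver_on_premise = (
    1.00 +   # programming ("actually a little more")
    1.00 +   # testing, packaging, and delivery
    1.00 +   # testing at the customer's end
    1.00     # getting users to make it effective (includes cost of delay)
)

ratio = cost_to_deliver_on_premise / cost_to_deliver_multi_tenant
print(f"Dollars spent per dollar of value delivered: {ratio:.0f}x")  # 4x
```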

Aghast? Don’t believe me? Well, let me use an example where there are published figures. A couple of years ago, a senior executive at a major application company gave some lectures at local universities where he claimed that single-tenant (hosted or on-demand) applications cost 10 times more to run than multi-tenant. Let’s say he’s right. The benchmark in this area is the (highish) per-user cost of a major SaaS provider, which is about $3.50/month (not published, but widely known). His company, at the time, offered a single-tenant application that it leased out for a hundred and change each month. Let’s assume that the major SaaS provider was a direct competitor and was charging the same. Are these companies really offering the market good alternatives, both of which need to be respected? Remember, one of them is putting $31.50/month/user (roughly 1/4 of your subscription) into paying for extra servers, extra electricity, extra licenses for virtual machines, extra complexity in their provisioning systems, extra management of load balancing, etc., etc., etc., all of which (from your point of view) adds exactly zero value.
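Here is that back-of-the-envelope calculation in code, using the figures cited above; the $110 subscription price is my stand-in for “a hundred and change.”

```python
# Back-of-the-envelope version of the figures cited above. The $110
# subscription is a stand-in for "a hundred and change".

multi_tenant_cost = 3.50                     # $/user/month, the cited benchmark
single_tenant_cost = 10 * multi_tenant_cost  # the executive's 10x claim: $35.00
overhead = single_tenant_cost - multi_tenant_cost   # $31.50/user/month

subscription = 110.00
share = overhead / subscription

print(f"Pure infrastructure overhead: ${overhead:.2f}/user/month")
print(f"Share of your subscription adding zero value: {share:.0%}")  # ~29%
```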

Imagine you’re buying one of these two choices. One vendor has made a bad, inefficient, and expensive design choice in the way it delivers the software. The other vendor has made a good, efficient, and inexpensive design choice. Which should you buy? Wouldn’t you feel really annoyed at any analyst who said, evenhandedly, that there’s much both good and bad to be said about both choices?

“But…but…but…” I hear you saying, “I just read Ray, and Ray says that you can customize single-instance products, whereas you can’t customize multi-tenant apps, and you can have more confidence that your installation is secure if it’s not sharing resources with some other application.”

Ray is perfectly right in his facts and perfectly wrong in his emphasis. What he isn’t saying is that in this context, customization and security are impossibly expensive, ridiculous luxuries, the moral equivalent of the rear-view mirrors in the Rolls-Royce that used hand-blown glass. Imagine the following conversation with your CFO to see what I mean. “To get customization and security, we have to pay ten times as much for the software to be delivered by a company whose gross margins are 1/10 of industry best practice? Excuse me? We’re paying for a limousine when everybody else is paying for a bicycle messenger? Hello. Do we really need customization? Is the security really all that much better?” To which, the only rational answers are “No” and “No.”

The real question in this era of contested definitions is not what the definitions are, but how we can tell when we’re getting the real goods. This is a very, very difficult question, not at all easily answerable through the use of a single label. “Multi-tenancy”–essentially sharing computing resources among users of applications in a way that valorizes efficiency and accessibility–is a difficult and complex engineering problem involving numerous, complex tradeoffs. There is good multi-tenancy and bad multi-tenancy, and bad multi-tenancy can be so bad that it is virtually indistinguishable from single-tenancy. (That’s why, Naomi, I sometimes only give two cheers to your own really laudable insistence on multi-tenancy from HR vendors; it isn’t just multi-tenancy, but how you do multi-tenancy.)

Over the years, we’ve seen many attempts to isolate the truly distinguishing feature of multi-tenancy: that it runs on a single database or a single machine, that everyone gets upgraded at the same time, etc., etc. It’s just as silly and fruitless to do this as to find the single, isolating characteristic of “chair.” (Four legs? No, there can be chairs with any number of legs or none. People sit on it? People sit on lots of things besides chairs.) The only real way you can tell is to look at the engineering.
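To show why the label settles so little, here’s a deliberately simplified sketch, entirely mine and not any vendor’s architecture, of two designs that could both wear the “multi-tenant” badge while making very different engineering tradeoffs.

```python
# Two toy "multi-tenant" designs (mine, not any vendor's architecture).
# Both share a database; the engineering tradeoffs are quite different.
import sqlite3

conn = sqlite3.connect(":memory:")

# Design A: one shared table, with isolation enforced by a tenant_id
# column. Efficient -- one schema, one upgrade for everybody -- but every
# single query must remember the tenant filter. Forget it once and one
# tenant sees another's data: bad multi-tenancy behind a good label.
conn.execute("CREATE TABLE employees (tenant_id TEXT, name TEXT)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [("acme", "Alice"), ("globex", "Bob")])

def employees_for(tenant):
    return conn.execute("SELECT name FROM employees WHERE tenant_id = ?",
                        (tenant,)).fetchall()

# Design B: one table per tenant. Stronger isolation, but now every
# schema change runs once per tenant -- at the limit, this is
# single-tenancy wearing a multi-tenant badge.
for tenant in ("acme", "globex"):
    conn.execute(f"CREATE TABLE {tenant}_employees (name TEXT)")

print(employees_for("acme"))   # [('Alice',)]
```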

Not surprisingly, many vendors, even ones claiming to be “true multi-tenant,” don’t pass muster. A month or so ago, my old colleague Brian Sommer spent a good deal of time calling around to companies that claimed to be multi-tenant. “You would be astonished, ASTONISHED,” he told me in his inimitable way, “at how few there actually were.”

If we hadn’t wasted time wrangling about definitions and propounding definitions that simply missed the real point, maybe a healthy discussion of the engineering issues surrounding multi-tenancy would have grown up by now. The plain fact is that the way Salesforce.com does it is really different from the way NetSuite does it, each company making quite different tradeoffs and pushing quite different levers. But today, whenever you get one of these obligatory “due diligence” site visits from companies fearfully dipping their toe into the cloud waters, the only “engineering” questions anybody asks have to do with SAS-70 Type II.

In yesterday’s post, I argued that ROI would not be an adequate measure of the benefits conferred by new-gen (or pseudo-new-gen) applications like Workday, Business ByDesign, or the Fusion Application Suite. The previous-gen applications were all about automation. The new-gen suites confer real benefits (I think), but not necessarily benefits that fall through to the bottom line.

What benefits are they? Well, they have to do with working more effectively: making fewer errors, putting more time into work and less into busy work, making more accurate decisions, faster. Is there benefit from this kind of thing? Sure. But how do you measure it?

In the post, I suggested a hazy term, “operational effectiveness,” for the benefits one should expect. What is “operational effectiveness”? Let me admit freely that I don’t know for sure. In this post, let me propose an analogy, which should help you to understand what I’m getting at.

The analogy comes out of a historical situation that always posed a problem for ROI analysis: the transition in business from the typewriters that sat on secretaries’ desks to the PCs that sat on executives’ desks. This transition occurred in two different phases. First, the typewriters on the secretary’s desk were replaced with big, clunky word processors that sat next to the desk. These word processors automated the secretary’s document production work. Then, the secretarial position itself was eliminated, and the typing function became something that executives did themselves on that PC.

The transition to word processors could easily be justified in ROI terms. We could get more work out of the secretary or else hire fewer secretaries. Whether the justification was real is an open question. But it’s certain that that’s how people thought of it.

The next transition was much more problematic for ROI analysis. Expensive executive time was now being put into jobs that had been performed more efficiently by much cheaper labor.

At the time, people didn’t put a lot of thought into figuring out why they were funding this transition. Executives saw the PCs, knew that everyone else was using them, needed them for some functions (e-mail, spreadsheets), and just decided: “We’re doing it this way.” At least in my recollection, that’s what happened.

So were they just loony or lazy or wasting shareholder money on executive perks? I don’t think so. I think what they were plumping for was the same “operational effectiveness” that I’m talking about 25 years later.

True, they spent more time typing. But they also had more control over the final product; they could change the product more easily; and they could distribute it without much overhead. And, at the same time, they were changing the form of what they were doing. They weren’t just producing typed memos; they were producing documents with fancy fonts and illustrations, and they were creating PowerPoint presentations. True, many an executive was spending ridiculous amounts of time fiddling with type sizes so that they could get things on one page, but even acknowledging that, they thought the new way was better.

Indeed, by the time the transition was finished, justification wasn’t even a question, because the new tools changed the nature of work, and now you couldn’t get along without the tools. When executives were doing the typing, they stopped creating long reports. More and more of the time, a corporation’s decision-making wasn’t wrapped around a full document (minutes, memos, or formal reports); it was wrapped around PowerPoint decks.

So by the end of the transition, ROI analysis had become entirely moot. How could you get a tangible measure of benefits when you were comparing apples and oranges?

Could we be seeing a similar transition now? It’s certainly possible. The analog to the word processors is that first generation of enterprise applications, which were funded by the automation benefits they conferred and justified by ROI analysis. The analog to the PC is the second generation of enterprise applications.

(One caveat. As I’ve said before, I don’t think that Fusion Applications or the versions of Business ByDesign that I’ve seen are in fact second-generation applications. They’re more like Version 1.3. But they’re close enough to next-gen to raise the problem I’m talking about.)

If the analogy holds and if second-gen apps work as the developers hope, the benefits that businesses are going to experience will be equally hard to get your arms around, partly because the benefits are so subtle and disparate and partly because you’ll see a shift in the way work is done.

Does that mean that we won’t be able to talk about the benefits and we’ll just bull ahead with them? Well, that’s why I’m introducing the notion of operational effectiveness. It does seem to me that we can get clearer about what the benefits are.

So come on, guys. Make comments. What is operational effectiveness? And how can we tell whether we are getting it?

This blog post is more an open question than a pronouncement. So please feel free to comment on this or take the idea in a different direction.

I’ve been thinking about the next generation of Enterprise Applications, the value that they (might) bring, and about how people might justify replacing the enterprise applications they have with the new generation.

Generally speaking, you justify an investment in infrastructure using ROI. You invest this much, get this much return. For the first generation of enterprise applications (most everything designed between 1990 and 2003 or later), this made sense, because they were basically automation apps. They automated work done by people. The ROI showed up because you didn’t have to pay people to do it any more.
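The arithmetic behind that first-generation justification really was that simple. Here is a toy version (all numbers invented).

```python
# Toy first-generation ROI case. All numbers are invented; the point is
# that the return is headcount you no longer pay, so the math is easy.

investment = 5_000_000          # license plus implementation
clerks_replaced = 40
fully_loaded_salary = 60_000    # per clerk, per year

annual_saving = clerks_replaced * fully_loaded_salary  # $2,400,000/year
payback_years = investment / annual_saving             # ~2.1 years

print(f"Annual saving: ${annual_saving:,}")
print(f"Payback period: {payback_years:.1f} years")
```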

Now, these new applications simply don’t do that; that is, they don’t automate appreciably better than the old applications do. And this means that ROI is a pretty crummy tool for evaluating whether an investment is worthwhile. Yes, there will be ROI if the enterprise application works the way it’s supposed to. But the return will be highly indirect. You won’t be able to fire people and pay for the application.

Brian Sommer has been talking about this problem for years–essentially, he points out that automating something that’s already automated doesn’t justify an investment on the same scale. But he has never really explored whether there are other forms of justification.

Nenshad argues in his book and his blog that good performance management enabled by modern tools will get you to a place you want to be, and he tells you a lot about how to do it. And while it is true that these new applications help you manage performance better and that’s one of the reasons you want them, he doesn’t offer a way of thinking about justifying the move to what he recommends. (Nenshad, if I missed this in your book, I’m sorry.)

So what does one use? Well, let me offer a concept and sketch it out in a paragraph or two, and you, my small but apparently very loyal readership, can then take me to task.

I’ll argue that what these new applications really do is improve “operational effectiveness.” What’s that? Well, to start out with, let’s just say that they let each employee put more effort into moving the company forward and less effort into overcoming friction.

Well, that’s suitably hazy. So here are some things that you could measure that would, I think, be indicators that employee effort is more coherent and focused. You could, for instance, take a page from the black belts and measure operational errors or even just exceptions as part of operational effectiveness. Or, you could look at corporate processes that are nominally automated and see whether they are managed by exception. (Truly best, automatable practices should require almost no routine manual actions.)

People sometimes try to look at operational effectiveness by measuring what percentage of revenue is spent on things that feel like pure expense, like IT. So, a company that spends 3% of its revenues on IT is less effective than one that spends 1%. People also try to get at it by trying to look at which activities are “value-added” and trying to get people to do more of them. Both ideas are silly, of course, in themselves. (The 3% company may be spending on stuff that makes it effective, while the 1% company isn’t.) But I think there might be indicators of operational effectiveness that are better. Wouldn’t operationally effective companies spend less time in meetings, send fewer junk e-mails, work fewer hours per employee (!), resolve more customer complaints and deflect fewer, etc., etc., etc.?
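To make that less hazy, here’s one way you might instrument it: a sketch with invented indicators, numbers, and directions, offered as a conversation starter rather than a methodology.

```python
# Sketch of friction-based indicators (all names and numbers invented).
# Unlike "IT as % of revenue", these measure wasted effort directly.

indicators = {
    # name: (last_year, this_year, lower_is_better)
    "meeting hours per employee-week":       (8.0, 6.5, True),
    "junk e-mails per employee-day":         (40, 25, True),
    "exception rate in automated processes": (0.09, 0.04, True),
    "complaints resolved on first contact":  (0.45, 0.55, False),
}

for name, (then, now, lower_is_better) in indicators.items():
    improved = (now < then) if lower_is_better else (now > then)
    print(f"{name}: {then} -> {now} ({'better' if improved else 'worse'})")
```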

You get the idea, I think. So why is this a good measure for the new generation of applications? Because, bottom line, I think that’s what they’ll do. They’ll help organizations and people avoid wheel-spinning, error correction, and pointless processes or rules by getting to what matters, faster.

Of course, people have always accused me of being a ridiculous, blue-sky, naive optimist. But that’s how it seems to me.

What do you think?

A few days ago, I argued that a company that wants to replace its old, limping systems with a brand-spanking-new Oracle or SAP application should probably wait for the next generation of apps.

The post got a lot of approving comments, which surprised me. I think there are pretty good arguments for just hauling off and buying. Consider what one of these hypothetical companies might say in defense of a decision to buy now rather than later.

1. We’ve got the money now and we should spend it.

2. The product we’ll get will be much more reliable.

3. The cost of services surrounding the product will be lower.

4. We don’t know when the new products will be out (with the possible exception of Workday 10).

5. We don’t know what will be in the new product.

Imagine what idiots we’d look like, the buyer might say, if we waited for five years for a product that was no better than what we could buy today and far more incomplete and buggy.

I see the force of this, which is why I thought it was a close call. Ultimately, I did decide it’s better to wait for markedly better products that are coming, despite the risks and delays. But I saw why people would disagree.

So are my commenters intemperate or am I dithering? One test for this is to look at the case of somebody who is not considering a replacement, but is considering a fairly big investment in the existing platform. Maybe they’re considering an upgrade, or maybe they’re considering some extension products, or maybe they want to push their existing installation out to other geographies.

This is a very realistic case. Several companies I know fairly well are considering one or more of these options. One is upgrading to Oracle 11i; another wants to upgrade to Infor LN. Still another wants to extend its QAD implementation to another geography. Still another wants to buy a CRM system from its existing vendor.

For each of these companies, the details matter a lot, but on reflection, I think the same argument applies. It will be hard to get the return on this big new investment in the old platform, because the useful life of what is to you a new product is not long enough to justify the expense.
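A toy amortization (numbers invented) shows why the remaining useful life dominates the calculation.

```python
# Toy amortization, with invented numbers: the same upgrade looks very
# different if the old platform has 4 useful years left instead of 15.

upgrade_cost = 2_000_000
annual_benefit = 300_000    # what the upgrade is worth each year

for useful_life_years in (15, 4):
    annual_cost = upgrade_cost / useful_life_years
    net = annual_benefit - annual_cost
    print(f"{useful_life_years} years left: cost ${annual_cost:,.0f}/yr, "
          f"net ${net:,.0f}/yr")
```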

This raises two questions. First, how does one calculate “useful life”? Most of the companies I’m thinking of believe that the useful life is defined only by how long they’re willing to use it, and for some companies, that’s pretty long. (There are, after all, big SAP clients who are still using R/2.) I think this is wrong, but I’m going to have to hold off explaining why till a later blog.

The second question is, “Assuming that the commenters and I are right, what kinds of investments should companies who have decided to wait actually make?” I have no determinate answers to this, but I think there are some guidelines.

Start with Gavin’s comment on the previous post. Gavin points out that some investments in the Oracle infrastructure and in new Fusion-based products will actually take you toward the next Oracle generation. He is quite right; Oracle designed things this way, and partly because of the problem that I’m describing, they quite deliberately created some ways of investing in products from Oracle without locking yourself into an older technology.

There are many, many caveats, of course. Among other things, if you have an Oracle system now, you might not want Oracle in the future. The three main products I mentioned in the earlier post (Workday, Business ByDesign, and Fusion Applications) are all highly differentiated, each with its own flaws and virtues. A rational person would do well to look at all of them before deciding to stick with the vendor they have now. (This applies to SAP customers, too.)

Even if you are bent on Oracle, you still may want to take some time thinking through your infrastructure stack before investing in pieces of it. Even the examples Gavin gives, like OpenID, which are likely to be pretty good, may not be right in the long run, and if they aren’t, that will be a lot of trouble and expense gone to waste.

You should note that Gavin’s argument almost certainly will apply to SAP as well. SAP is also working on “hybrid” intermediate solutions, and I’ll bet you dollars to donuts that they’re trying to figure out every way they can to ease the migration to the new system by asking you to make steady, rational investments in products that extend your current capabilities.

But what about customers for whom an infrastructure or extension investment isn’t right? Here, I think there are some interesting arguments, akin to Gavin’s, for small, light cloud-based apps, point solutions designed to solve highly specific problems. I’m not just talking about a CRM app or a call-center app or a recruiting app; I’m talking about things that are very, very specific to your industry, but really powerful, things like Tradestone in the apparel industry.

Another possibility is to spend some time and effort cleaning up your existing installation. This will improve its current effectiveness, extend its useful life, and very possibly lower the cost of the new system significantly. (A lot of the cost of any implementation is cleaning up after the mess left by a system that everybody has given up on.)

There’s a very smart analyst in Europe, Helmuth Gümbel, who has spent a lot of time thinking about this problem. He has a blog, and he also has a conference, Sapience, which goes into these questions at length. If you’re thinking about extending your current ERP to other geographies, a reasonable alternative might be to find lower-cost ERP systems to serve those geographies.

The basic theme running through these latter approaches is that while you’re waiting, you can focus on saving some money and preparing your current installation, thereby making the later transition to a newer technology faster and more affordable.
