Smash and Grab Semantics: Cloud vs. Hosted
September 27, 2010
What is “cloud” computing, and how does it differ from “hosted?” The question emerged once again recently among the Enterprise Irregulars, as one Irregular used the terms interchangeably and another objected. Neither, however, wanted to get into semantics (that is, what the words “cloud” and “hosted” actually mean), so they agreed that the problem is knotty and went on with their lives.
These are both sensible, intelligent people who make sane decisions. But I want to point out that something has happened to both of them. Both of them, I think, have become victims of what I’ll call “smash and grab semantics,” a practice where companies will take a term that they find attractive and use it to describe something that they do. In some cases, this is pretty legitimate–I think both Salesforce and Amazon can use the word, “cloud,” without arguing about it too much–but in the case of “smash and grab semantics,” there’s a real distortion; something of value has been taken when they do it.
In other contexts, we regard smash and grab semantics as pretty reprehensible, because something quite real is at stake. I was talking last night to the editor of a newspaper in a country that calls itself “democratic,” but isn’t. He goes to court regularly and regularly ends up in jail, though not usually for too long. The last time he was hauled into court, the judge began the proceedings by asking, “Do you stand behind the lies that you just published?” As far as he’s concerned, when a regime practices smash and grab semantics on the word, “democratic,” it helps this regime get away with putting him in jail. And he should know.
So, is there a distortion in the use of the term, “cloud” vs. the term, “hosted?” And does this distortion give the people who’ve grabbed the term something they shouldn’t have? I think so. The plain fact is that cloud is a lot cheaper than hosted, a lot cheaper, because cloud applications or services have been engineered from the ground up to share resources efficiently, something that hosted applications or services can’t do, because they haven’t been rewritten that way.
So when people offer something hosted and make their customers think it’s cloud, they’re giving people the idea that they’ve done the homework and are providing the advantages of efficient resource sharing when in fact they haven’t.
The details matter, of course, which is why my two very smart colleagues didn’t want to get into a complicated argument. With hosted applications there is always some resource sharing, by definition. But the plain fact is that whatever the details, something that’s engineered to be cloud is roughly ten times cheaper than hosted. So whatever the details, people who call their hosted offerings “cloud” are doing a bit of smash and grab.
So how do you combat smash and grab semantics? Fortunately, the answer to this question has been known for a hundred years. You don’t let them get away with it. Despite the fact that they want you to use the term they’re using, you don’t go along. George Orwell put it better than anyone in “Politics and the English Language.” I am paraphrasing. If you want “language [to be] an instrument for expressing, and not just concealing or preventing thought,” you must “choose…the phrases that best cover the meaning.”
Don’t get me wrong. If somebody says their offering is “cloud” or “SaaS” or “on-demand” when it is actually hosted, this is smash and grab, but only on a small scale. It is not the same as using the word “democracy” to describe a tyranny. One is good, aggressive marketing; the other is morally confused. But since the technique used–smash and grab semantics–is the same, you combat both in the same way: you use the right word.
Ozymandias Crumbles
September 17, 2010
Over the past 20 years, the American Airlines nonstop between BOS and SFO has been a fixture of the software industry. Almost any time you took it, you’d see friends, and even when you didn’t, you’d be sure you hadn’t boarded the wrong plane, just because of the number of Oracle bags.
This flight is no more. A little more than a month from now, American Airlines will end non-stop service between SFO and BOS.
Why? The commuters left American for JetBlue and Virgin America, for better seats, better entertainment, fewer niggling charges, and maybe even better treatment. I saw this myself when I actually got **an upgrade** on American a few weeks ago, and I confirmed it by talking to a frequent commuter who had gone over to JetBlue even though he had more than a million and a half miles on American.
I wonder. Is this how other empires crumble? In my world, you have SAP and Oracle, seemingly as unassailable as the Great Wall of China, but both offering an experience that has been compromised by excess investment in an aging infrastructure, a labor force that has been asked to shoulder most of the burden imposed by this infrastructure, and a management that grew up in the good old days and still (my guess is) secretly longs for them.
For a long time, people like myself stayed with American. And while that was happening, the management could fool itself. Leather seats? No fees? These don’t matter to our loyal customer base, so we don’t need to invest the boodle that we don’t have in matching the competition. Until, of course, the day it did matter, when there was nothing they could do.
What are the moral equivalents of leather seats and fee-less flying in the enterprise applications market? Oh, it’s any number of things. It’s the fact that search, if it’s there at all, is just about as cumbersome as search was on AltaVista. It’s the fact that doing anything, anything unfamiliar requires the equivalent of a Type III license and just about as much training. It’s the endless, endless wait for even the most trivial or needed changes in the system. It’s the fact that you can’t use them on an iPhone or an iPad or even a Mac. (It’s amazing how many enterprise application companies are IE only.)
Please feel free to add to the list. It’s kind of fun.
Oh, people can and will put up with it for a really long time. But eventually, if the competition is smooth and persistent, they can march into a market and take it from the behemoths. I think you’re seeing this today where Workday is replacing PeopleSoft 7.5, and you saw it a few years ago when Salesforce was replacing Siebel. It does happen.
Now if only I could remember my JetBlue frequent flyer number.
Blame the Customer?
July 14, 2010
Who is to blame for IT project failures? My colleague, Michael Krigsman, argues that when IT projects wander into the “IT Devil’s Triangle,” all three participants–the vendor, the integrator, and the customer–are to blame. Michael is very insistent about this; in a recent post on Marin County v. Deloitte, he says, “In my view, it is highly unusual for a project to fail without some culpability on the part of the customer.”
Michael is the guy who almost singlehandedly took IT project failure out of the closet and into the open, and much of his professional life is spent analyzing these failures. For that reason, one should not criticize what he says without good reason. But I’ve always been uncomfortable with his line on things and felt that we would all be better off if we had better analytic tools for this kind of situation. I want to know when the “culpability” is minor and merely contributory and when it is really important or even primary.
One’s normal intuition about such things is that there are broad spheres of responsibility. If I get an artificial hip implanted, it’s up to the manufacturer to provide my doctor with a hip that works, up to the doctor to put it in correctly, and up to me not to abuse it. This is the intuition that underlies a lot of contract law and liability law, and it even applies to very large, complex engineering projects. When the architect found a flaw in the Citicorp building in New York City, one that could have caused it to collapse, the architect and the contractor took responsibility for it and had it fixed. They didn’t blame the customer, and they didn’t ask the customer to adjust his use of the building.
If you apply this intuition to technology, it works pretty well. In the case of IT projects, the software company is responsible for making sure that the design and building of the product is adequate to the demands that will be placed on it, the integrator is responsible for deploying the product in a way that meets what one might call the normal expectations of the customer, and the customer is responsible for using it in ways that are consistent with the design. If any of them fail to meet their responsibilities in their sphere, then they are culpable. And the others are not culpable (except perhaps in a minor way) if they don’t wish to compensate for that failure.
Now, I think Michael would agree with this, but he would say that, as a practical matter, what happens in these large project failures is that all three parties fail to deliver, so all are culpable. (Michael, please correct me if I’m wrong.)
But it seems to me that doesn’t go far enough. I think you need to be able to figure out when one participant is clearly most culpable. Perhaps, for instance, one ought to apply a notion of priority, so that if the vendor fails to deliver, the vendor is primarily culpable by definition, and the most that the other two parties can do is provide some minor contribution. (This tends to be the approach in products liability, with some major caveats.) Or perhaps there are some other tools that need to be applied, tools that would allow one to extend (or perhaps even change) the “pox on all their houses” line that Michael takes.
Whatever you do, though, it isn’t easy.
Let me try to illustrate what I mean with an example that has made a lot of news recently: the iPhone 4. Here you had three participants: Apple, AT&T, and the customer. You had a product that “did not work properly.” And you have a dispute about levels of culpability.
The problem, in case you’ve been in Tibet for the last month, is that the iPhone’s reception isn’t very good, for reasons that still aren’t clear. (My wife has one. It’s true; it’s not good.) The customers have basically been told by Apple, “The problem is in the way you hold the phone. Either hold the phone differently, or fork over $25 for a case.” The customers have said, if I may paraphrase, “It’s a phone. You should be able to hold the phone any way you want. It’s up to Apple to give me a phone that works no matter how I hold it.”
Now, the IT Devil’s Triangle analysis says, basically, that the customers are wrong. According to this argument, Apple made a good phone that made some design tradeoffs. (Basically, the antenna was put on the rim of the phone.) If one of those tradeoffs means customers have to hold the phone in a special way, so what? They should either buy themselves a case or hold the phone correctly. If they don’t want to do that or can’t, they are contributing to the problem and are at least as culpable as Apple.
So is AT&T off the hook here? Under the Devil’s Triangle argument, not at all. The phone service that they provide is weak and erratic, and that makes it very difficult to troubleshoot the phone. (I have experienced this in spades.) Their customer “service” is designed to handle normal problems expeditiously, but is not designed to track and resolve complex problems, and as a result, it turns what could be an irksome experience into something verging on horrible. (I have also experienced this, believe me.) And that means that customers are not as tolerant of the iPhone 4’s design choices as they should be.
It’s something of a conundrum, isn’t it? One’s normal intuition is that Apple should design a phone that people can hold any way they want. But one can certainly make a cogent argument that it’s really the customer, as much as anyone else, who is to blame, because they get mad at Apple, rather than being willing to hold the phone in the correct way.
In the end, the solution to this riddle is a matter of values. If you think (as Consumer Reports does) that it’s up to the vendor to get things right, when it comes to simple things like how one holds the phone, then the Devil’s Triangle argument is simply wrong. If you think (as Steve Jobs and many of my friends who work for vendors think) that the customer is to blame if they can’t deal with the “flaws” that they find in a complex and highly-engineered technology, then you’re going to agree with Michael and assume that there are very few situations where the customer doesn’t have some culpability.
Let me just tell you where I come down on this and hope for some illuminating comments. I think the benefit of the doubt should be given to the user. If the vendor is clear about the design tradeoffs and limitations and the deliverer (integrator) provides service that takes those limitations adequately into account and is clear about that, then I think, “Yes, it’s up to the consumer to deal with the limitations.” If the vendor and deliverer fail to do this–if the vendor isn’t clear about what the design tradeoffs are or if the vendor and deliverer allow you to get the idea that the product will do things (like make phone calls in normal situations when being held normally) that it won’t in fact do–then the onus is on them.
So, for instance, if the building architect makes a mistake in the way the building was designed or the general contractor deviates from the design in a way that turns out to be dangerous, it’s up to them to fix it; the building owners are not culpable because they don’t want to limit the number of people allowed on each floor to a number far below what they’d been led to expect. And if the hip designer uses a brittle material and the doctor chips it when he or she installs it, it’s not up to the patient to walk less.
Just my view.
So why is this important? It’s important because this is not how the software industry works. It’s normal for vendors to be very close about the design of their technology, for vendor and integrator salespeople to make unrealistic claims about what the technology does or about the benefits to be received, for the people delivering the product to view limitations in delivery as upsell opportunities, etc., etc. So if my view were to prevail, and it were up to the vendor and deliverer to make sure that the product works as you would normally expect and the installation is what it should be, the industry would have to change.
Is that likely? Maybe, maybe not. But it gets more and more likely every time a user asked to adapt to some software’s vagaries asks himself or herself, “Isn’t this like being asked to hold an iPhone with tweezers?”
The Value of Cloud Computing
June 7, 2010
A client asked me whether I had anything in the files that laid out “the value proposition of cloud computing.” I don’t, and I don’t know of another good treatment. (Please, commenters, refer me.) But I thought I’d provide an answer here. It’s provisional, but I’m hoping that the comments will help improve it.
Before I begin, a few notes. First, as I noted in “Silver and Tinsel,” the term “cloud computing” is heavily contested. If you are confused by all the silly claims, Ray Wang does a good job disentangling the terms. In what follows, I am ONLY talking about true multi-tenant cloud applications, not hosted or single-tenant applications.
Second, in what follows, I’m only talking about applications (SaaS). The value proposition for infrastructure (IaaS) or platforms (PaaS) is somewhat different. Third and last, I am assuming that multi-tenant application vendors are exploiting the advantages I’ll describe. In making this assumption, I’m clearly oversimplifying; many vendors simply don’t exploit the technological opportunities as effectively as they could.
That said, here are the advantages of cloud computing:
I. Cloud computing is much cheaper.
The cost per unit of application functionality delivered by a SaaS company to the user is, in principle, much less than the cost of the same functionality delivered from your own premises, because the SaaS company shares the cost of expensive resources among all its customers and because the costs of distribution and remote support drop to zero.
What resources are shared? Database. Application server(s). Disk storage. RAM. Backup. Load management. Bandwidth. Provisioning. Patching/patch testing. Security. Utilities. Personnel. All of these provide major economies of scale. What costs disappear? The cost of testing on multiple machines/OSes/DBs. The cost of preparing and sending out disks or downloads. The cost of getting access to and understanding the configuration of a remote site.
Now, within the world of cloud applications, the problem of sharing all these resources efficiently is not an easy one to solve. There are several different “standard” approaches, each involving tradeoffs. Nevertheless, it’s safe to say that the more resources shared, the cheaper the cloud computing application. So if you want to figure out whether the cloud computing application you’re thinking about buying is any good, ask the vendor for details about how it shares those resources.
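To make that concrete, here is a minimal sketch of the most common approach, usually called “shared schema” multi-tenancy, in which every row in a shared database carries a tenant identifier and every query is scoped to it. The table, column, and function names are my own invented illustration, not any particular vendor’s design.

```python
import sqlite3

# A minimal sketch of "shared schema" multi-tenancy: every tenant's data lives
# in the same database and the same tables, tagged with a tenant_id. Other
# approaches (database-per-tenant, schema-per-tenant) trade away some of this
# sharing in exchange for more isolation.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE invoices (
        tenant_id  INTEGER NOT NULL,   -- which customer owns this row
        invoice_id INTEGER NOT NULL,
        amount     REAL    NOT NULL,
        PRIMARY KEY (tenant_id, invoice_id)
    )
""")
conn.executemany(
    "INSERT INTO invoices VALUES (?, ?, ?)",
    [(1, 1001, 250.0), (2, 1001, 75.0)],   # two tenants, one shared table
)

def invoices_for(tenant_id):
    # Every query is scoped by tenant_id; the application code, not a separate
    # server per customer, keeps one tenant from seeing another's data.
    return conn.execute(
        "SELECT invoice_id, amount FROM invoices WHERE tenant_id = ?",
        (tenant_id,),
    ).fetchall()

print(invoices_for(1))  # only tenant 1's rows, though the hardware is shared
```

The engineering questions the vendor should be able to answer are exactly the ones this sketch glosses over: how load, storage, backup, and patching are shared across tenants, and what happens when one tenant’s usage spikes.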
Now, very few of us really have a good intuitive feel for how much saving is involved. Your knee-jerk reaction might be that your data center costs roughly the same amount as a SaaS company’s data center. And if you know that your own data center is a little expensive, maybe you think that some partial solution, like hosting the application somewhere else, is roughly as good.
So, when Oracle tells you that their cloud application is hosted in their datacenter, so that you share data center management costs with other customers, that sounds good to most of us. And if Lawson tells you that they’re running on Amazon, so you’re sharing data center and app server and disk costs with other Amazon customers, that sounds good, too. And if some smaller vendor tells you that you’re running on a virtual machine in a rented data center like Rackspace, that sounds good. It turns out, though, that none of these common, single-tenant “cloud” solutions comes anywhere close to achieving the cost savings that are available. The only way you can get those cost savings is if the application itself is “multi-tenant,” which means that the application has been written in a way that allows it to manage the resource sharing.
How much of a difference does multi-tenant make? It’s astonishing, but the answer appears to be that multi-tenant is ten times cheaper than a single-tenant solution in a shared data center. (See “Silver and Tinsel” for more on this.)
And what does that say about taking delivery of a solution? Obviously, it depends. But the experience so far says you should think of it this way. Choice 1: a multi-tenant solution taking advantage of shared data center resources, shared computing resources, lower distribution costs, and lower management costs. Choice 2: you take on all the costs of the data center, computing, distribution (to you), and management, sharing those costs among all the applications at your company. You know that hosted or single-tenant solutions cost 10 times more to run than multi-tenant ones. Do you really think that Choice 2 is comparable in cost to Choice 1?
II. Cloud Applications Are Easy to Consume
Cloud applications are generally much easier to set up, try out, get access to, buy, and use than on-premise applications are, for a very simple reason. Cloud applications are designed to be delivered with as little intermediation as possible to the end user. On-premise applications are designed to be intermediated by IT. They are built and sold under the assumption that IT will approve the purchase, take delivery, find all the computing resources, install the software, set up the users, arrange for training, fix problems (if any), ask for improvements, etc., etc.
To some extent, this is an artifact of the first value. The SaaS company is taking over the job of delivery to the user from your IT department, and they’re getting a lot of leverage because they’re delivering it to lots of users, not just the users at your company. Not unnaturally, they can do automatically what your IT department has to do manually. With that automation come a lot of savings in cost for them and a lot of improvement in the user experience for you.
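To illustrate what “as little intermediation as possible” looks like in practice, here is a hedged sketch of a self-service sign-up path. The function names and steps are hypothetical; the point is that each step an on-premise rollout would route through IT becomes a line of code the vendor runs automatically for every customer.

```python
from dataclasses import dataclass, field
from typing import Dict

# A hypothetical sketch of self-service provisioning in a multi-tenant SaaS
# application. Each step below is something an on-premise deployment would
# route through IT: approving the purchase, finding hardware, installing the
# software, creating users.

@dataclass
class Tenant:
    name: str
    plan: str
    users: Dict[str, str] = field(default_factory=dict)  # email -> role

TENANTS: Dict[str, Tenant] = {}  # lives in the vendor's shared application

def sign_up(company: str, admin_email: str, plan: str = "trial") -> Tenant:
    """Create a tenant and its first user in one automated step."""
    tenant = Tenant(name=company, plan=plan)
    tenant.users[admin_email] = "admin"
    TENANTS[company] = tenant      # nothing to rack, nothing to install
    return tenant

def add_user(company: str, email: str, role: str = "member") -> None:
    """End users are added by the customer, not by a help-desk ticket."""
    TENANTS[company].users[email] = role

acme = sign_up("Acme Corp", "ops@acme.example")
add_user("Acme Corp", "cfo@acme.example")
print(acme.users)
```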
III. Cloud Applications Are Up-To-Date
According to the many really knowledgeable people who have tried it, the real edge that the cloud application vendors have lies in the speed and effectiveness of their development.
A cloud application vendor can develop more efficiently than a vendor that delivers on-premise, because they’re only developing for one installation. They can develop faster, because they don’t have to build up a massive “release.” They can deliver much, much, much faster. They can fix problems much more easily, because they’re basically fixing once and they can do analysis on the same system that they’re delivering. They can respond to customer requests more easily, partly because of the aforementioned, but also because they can actually look at how customers are using the application.
Cloud application vendors often tell customers, “No more upgrades,” and wonder that this statement fails to thrill. (It is thrilling, but only to the poor gents in IT who have lost too much of their lives in 72-hour stints over some Labor Day or Christmas.) They should be saying, “Look, we’re keeping up. New devices? We’re on it. New interface techniques? Done. New requirements? Months, not years.”
In my view, even the most modern of modern on-premise applications is roughly 5 years behind what a modern consumer experiences every day, and most applications used today are 10-15 years behind. With a cloud application, you have a chance of staying within reach of the latest innovations. With an on-premise application, you simply don’t.
IV. Cloud Applications Can’t Be Customized
Wait a minute. Wasn’t that supposed to be a disadvantage of cloud applications? Sorry, it’s an advantage. A big one.
The problem with customization is simple. It costs too much. Unfortunately, the way modern corporations and IT organizations are structured, most of the costs are hidden. IT organizations are service organizations, and they’re rewarded for providing service, so they simply defer the astronomical costs of managing, maintaining, and upgrading around the customizations, because those costs look bad. They’re in the position of a butler at a billionaire’s house who is asked for fresh raspberries in February and, rather than disappoint, simply charters a jet from Chile and accepts the complaints when the billionaire finds they’re not fresh enough.
So when a SaaS company says, “No,” to its customers, they’re providing the same kind of benefit that I am when I tell my daughter, “No sugar.” They’re helping the customers to do something that they shouldn’t do anyway.
This doesn’t mean and shouldn’t mean that the SaaS company doesn’t allow any changes whatsoever to the way the product is used or doesn’t allow customers to do things with the data that they hadn’t envisioned. But typically, anything that the customer should want can be provided either by changing configuration settings or by moving the work (far more cheaply) to some external system.
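A quick sketch of what that looks like in practice, with setting names I have made up for illustration: each tenant gets its own configuration values, but every tenant runs the same code.

```python
# A hedged sketch of "configuration, not customization." Behavior varies per
# tenant, but only within bounds the vendor designed for; there is no
# tenant-specific fork of the code to maintain or upgrade.

DEFAULTS = {
    "fiscal_year_start_month": 1,
    "approval_threshold": 10_000,
    "invoice_label": "Invoice",
}

TENANT_SETTINGS = {
    "acme":   {"fiscal_year_start_month": 4, "invoice_label": "Bill"},
    "globex": {"approval_threshold": 50_000},
}

def setting(tenant: str, key: str):
    """Look up a tenant's value, falling back to the shared default."""
    return TENANT_SETTINGS.get(tenant, {}).get(key, DEFAULTS[key])

def needs_approval(tenant: str, amount: float) -> bool:
    return amount >= setting(tenant, "approval_threshold")

print(needs_approval("acme", 12_000))    # True: acme uses the default threshold
print(needs_approval("globex", 12_000))  # False: globex raised its threshold
```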
If you find a SaaS system where you MUST have a customization (or an integration, the same logic applies) or it will be literally impossible for you to use the system, don’t buy it. Period. The fit is wrong for you, and it will never be right. This, too, is a huge value, both for the SaaS company and the customer. They have fewer customers who should never have bought in the first place, customers that are excessively expensive to support and often likely to sue.
Choice and Control
What is the cost of all this value that’s suddenly being delivered? To SaaS naysayers, the cost is choice and control. “We let the customer choose between on-premise and on-demand.” “We let the customer run on his own machine, which he controls.” “We let the customer run the application the way he wants.” (I use the word “he” advisedly.) Those silly SaaS companies force you to do things their way.
Well, if you ever, ever hear the CEO of an on-demand company say, “What we offer is choice,” run, do not walk, in the other direction. Because what he is not telling you is the cost of that choice. Modern enterprise applications are highly tuned, finely designed, very complex machines–think Ferrari, if you want–and for such machines, the kind of choice offered to customers is pretty much akin to offering Ferrari customers a choice of rear axles or of steering systems. Giving you that choice costs you a lot, costs them a lot, and is completely pointless.
The same thing applies when an IT person says, “We need to have the control” or “We really can’t afford the security problems.” Unless you ask what the cost of that control is or the cost of that [non-]security, and get very accurate answers, you have no idea what a bill of goods you’re being sold.
About three years ago, I was a judge in a software contest, and all the entrants were SaaS vendors. They had figured out already what my client was asking me last week: the value proposition of a SaaS (multi-tenant) application is so overwhelming that there’s simply no point in building an application for on-premise delivery.
Does the Sol Man Give Benefit?
June 4, 2010
Once again, we have a flurry of posts/announcements about the apparently mysterious Solution Manager. Panaya starts things off with a survey of Sol Man customers, who apparently don’t get much (?) out of it and don’t understand it. My much-opinionated colleague, Dennis Howlett, publishes a post [I am mentioned, thank you, Dennis] accurately going back over the history and suggesting that SAP needs to do better. Tom Wailgum publishes a post asking why SAP’s account of Sol Man benefits and Panaya’s account are so different.
It’s all a little confusing, partly because Panaya’s survey isn’t well-structured and partly because SAP itself does as much as it can to throw oil into the waters. (This used to be a quite different metaphor.)
So let me try to clear things up. Here’s what is known to any analyst or user who does his homework. The Solution Manager is a big product, which tries to do lots of different things. Some it does OK or pretty well, which means SAP is right. Some it does poorly. Like most software products. Here is a rough, off-the-top-of-my-head list of things the Sol Man does.
1. The Sol Man downloads patches that come from SAP. It does this pretty well, and it had better, because it is supposedly the only way to get patches from SAP.
2. The Sol Man gives you a fairly straightforward way of installing those patches and some templates/test cases that are supposed to help you test them. This, too, works fairly well.
3. The Sol Man does some basic SAP system cleanup and monitoring. It finds code that’s not being used, frees up disk space, identifies code you are using that’s been superseded (possibly), etc., etc. This works; some people use it, some don’t.
4. The Sol Man does some SAP performance monitoring. (Jim Spath, an SAP Mentor, is the expert on this aspect of the product.) Spath has not spent a lot of time praising the product, and from what I’ve seen of the product, there’s a good reason for this. [Addendum: Jim has also recently posted on the Sol Man’s abilities to help with cleanup.]
[Note on this last point. Performance monitoring is a very complex area, because SAP performance and other system performance are deeply interlinked. There are actually many products out there that do some kind of performance monitoring, usually not just confined to SAP, and many SAP customers already use these products. So no matter how good or bad the product, SAP’s entry into this area is going to be problematic in a way that patch installation is not going to be.]
5. The Sol Man provides some tools that should allow SAP to deliver better support because the SAP organization can use these tools to get a better understanding of what your whole system does. To get any benefit at all from this, however, you need to spend some time documenting your system for SAP. If a lot of customers didn’t do this or didn’t know about it, I wouldn’t be surprised. For more on the benefits that might accrue from this, talk to my favorite Australian SAP Mentor, Tony de Thomassis.
6. The Sol Man can help you improve your business processes, with help from SAP, as long as you first document your business processes. This help could take various forms. The Sol Man could help eliminate redundant customizations, find process bottlenecks that SAP upgrades can fix, do root cause analysis of performance issues, etc., etc. Could this benefit you? If you ask Tony, the answer is “Yes,” but bear in mind that this is a lot of work.
Now how do I know all this, especially what works and what doesn’t? Well, I didn’t do anything special. To find out what the Sol Man does, I went to a session at Sapphire 2009 (SapphireThen?) and then went to a really interesting session at the Influencer Summit. To find out about the benefit, I mostly relied on studies that SAP and SUGEN did last year, whose results were described at the summit. I’ve published this in a couple of blog posts, and both Jim and Tony have done much more than I to get the word out.
I don’t fault Panaya or Dennis or Tom for not having done this research. Why should they? But I do think that SAP could have done one thing very easily that would have helped everyone involved. Publish that study they told us about at the Influencer Summit. In it, the benefits are laid out fairly clearly and accurately. And then Dennis and Tom and Panaya and I could start arguing about how much those benefits amount to and what could/should be done to improve them.
Multi-Tenant, Silver: On-Demand/Hosted, Tinsel
May 7, 2010
I have been watching Ray Wang’s earnest attempts to “define” the flavors of cloud computing, Phil Wainewright’s attempts to describe its “unspoken benefits,” and James Governor’s simplifications of cloud concepts, and all those efforts make me want to use words that you shouldn’t use on the Internet.
Why? Well, they’re a bit too high-minded and value-free.
Let me explain the problem with an example. Imagine you were a high-minded dictionary writer who was trying to stand above the fray, so you defined “good” as “a set of behaviors and attributes approved of by some,” and “bad” as “a set of behaviors and attributes not approved of by people who approve of the good.” There may be some sad, sorry truth to these definitions. But far from being even-handed or fair-minded, they’re simply serving the interests of the bad guys, who get to turn bad and good into a popularity contest.
Similarly, if you define hosted or SaaS or whatever as if all of these were legitimate choices which you, the high-minded definer, are not going to evaluate, you serve…well, all the guys who want to foist a bad choice on you while pretending that it has all the virtues of a good choice.
You can see this most clearly if you look at the history. Ever since people (Marc, you know who you are) started using the term SaaS, applications that are shared and leased have been what the academics call “a site for contested definition.” As soon as the term SaaS was bruited about by some people, other people would say, “Oh, well that’s just a silly new term for ‘hosted,’ and ‘hosted’ is a great thing and it’s what we’ve always done. If you’d like to lease our software, go ahead.” (Larry, you know who you are.) The same things have happened to “on-demand” and “cloud” and all the other terms that the eminent and high-minded analysts are trying to explain.
This contest is exactly why there is the confusion in the market, a confusion that Ray rightly decries. The terms are confused and confusing because people are trying to appropriate them.
They’ve tried to appropriate them for a perfectly obvious, simple reason. They’d like to get the credit and money associated with doing something difficult, but very good, without all the trouble and bother of doing that difficult and good thing. But as a definer (or as an entrant in the marketplace), you will never, repeat never figure out what they’re trying to do if you don’t look squarely at why the difficult design choice really is difficult and what the payoff for that design choice really is.
If you try to define “good,” that is, without deploying some notion of worth, you’ll end up with no practical difference between the two and a whole lot of bad people trying to use your definition to show that they’re really good.
Maybe in one of those hoity-toity academic environments like the one across the street from me, it’s OK to publish even-handed appraisals and fine, nuanced distinctions that clearly lay out all the issues for those five other people who are interested in publishing themselves on the subject. But in the real world, where some people who haven’t done their homework are trying to appropriate the hard work of people who have done their homework, it’s not OK.
That said, let me tell you the real scoop on “cloud,” “multi-tenant,” etc., etc., etc. True SaaS, true multi-tenant, true cloud applications are shiny like silver. Hosted, on-demand, and private cloud are shiny like tinsel. For any purposes that aren’t temporary and don’t have pretense built in, silver is better. Multi-tenant applications have more functionality. They’re more adaptable. They are easier to operate for the vendor. They are cheaper. They are easier to manage. They are more competitive and more likely to last. Unless you’re throwing a party and are planning to take down the decorations the next day while suffering from a hangover, they are better.
Why is true multi-tenant better? It’s very simple: multi-tenant applications are more efficient. More of any dollar spent on a multi-tenant application goes into value delivered to the customer. A programming dollar spent on a multi-tenant application is immediately delivered out to all the users of the product, and it’s put there to make the product more useful. To get the same effect in an on-premise application, it takes a lot more money: the same dollar for the programming (actually a little more), another dollar (say) for the testing, packaging, and delivery, another dollar (say) to test it at the other end, and another dollar (say) to get users to make it effective. Let me be generous and include the cost of delay in the delivery of value in those last two dollars. I think the cost is really more. But of course, there are no reliable published figures. Analysts, where are you?
Aghast? Don’t believe me? Well, let me use an example where there are published figures. A couple of years ago, a senior executive at a major application company gave some lectures at local universities where he claimed that single-tenant (hosted or on-demand) applications cost 10 times more to run than multi-tenant. Let’s say he’s right. The benchmark in this area is the (highish) per-user cost of a major SaaS provider, which is about $3.50/month (not published, but widely known). His company, at the time, offered a single-tenant application that it leased out for a hundred and change each month. Let’s assume that the major SaaS provider was a direct competitor and was charging the same. Are these companies really offering the market good alternatives, both of which need to be respected? Remember, one of them is putting $31.50/month/user (roughly 1/4 of your subscription) into paying for extra servers, extra electricity, extra licenses for virtual machines, extra complexity in their provisioning systems, extra management of load balancing, etc., etc., etc., all of which (from your point of view) add exactly zero value.
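For what it’s worth, here is that arithmetic spelled out. The $3.50 figure and the 10x multiple are the ones cited above; I take “a hundred and change” to be about $126/month, which is the assumption that makes $31.50 come out to roughly a quarter of the subscription.

```python
# The arithmetic behind the comparison above, using the post's illustrative
# figures (not audited numbers).

multi_tenant_cost = 3.50        # $/user/month to run a true multi-tenant app
single_tenant_multiplier = 10   # the claimed cost ratio for single-tenant
subscription_price = 126.00     # "a hundred and change" per user/month (assumed)

single_tenant_cost = multi_tenant_cost * single_tenant_multiplier  # $35.00
wasted = single_tenant_cost - multi_tenant_cost                    # $31.50

print(f"Extra operating cost: ${wasted:.2f}/user/month")
print(f"Share of the subscription: {wasted / subscription_price:.0%}")
# Roughly a quarter of what you pay goes to extra servers, licenses, and
# management that add no functionality from the customer's point of view.
```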
Imagine you’re buying one of these two choices. One vendor has made a bad, inefficient, and expensive design choice in the way it delivers the software. The other vendor has made a good, efficient, and inexpensive design choice. Which should you buy? Wouldn’t you feel really annoyed at any analyst who said, evenhandedly, that there’s much both good and bad to be said about both choices?
“But…but…but…” I hear you saying, “I just read Ray, and Ray says that you can customize single-instance products, whereas you can’t customize multi-tenant apps, and you can have more confidence that your installation is secure if it’s not sharing resources with some other application.”
Ray is perfectly right in his facts and perfectly wrong in his emphasis. What he isn’t saying is that in this context, customization and security are impossibly expensive, ridiculous luxuries, the moral equivalent of the rear-view mirrors in the Rolls-Royce that used hand-blown glass. Imagine the following conversation with your CFO to see what I mean. “To get customization and security, we have to pay ten times as much for the software to be delivered by a company whose gross margins are 1/10 of industry best practice? Excuse me? We’re paying for a limousine when everybody else is paying for a bicycle messenger? Hello. Do we really need customization? Is the security really all that much better?” To which the only rational answers are “No” and “No.”
The real question in this era of contested definitions is not what the definitions are, but how we can tell when we’re getting the real goods. This is a very, very difficult question, not at all easily answerable through the use of a single label. “Multi-tenancy”–essentially sharing computing resources among users of applications in a way that valorizes efficiency and accessibility–is a difficult and complex engineering problem involving numerous, complex tradeoffs. There is good multi-tenancy and bad multi-tenancy, and bad multi-tenancy can be so bad that it is virtually indistinguishable from single-tenancy. (That’s why, Naomi, I sometimes only give two cheers to your own really laudable insistence on multi-tenancy from HR vendors; it isn’t just multi-tenancy, but how you do multi-tenancy.)
Over the years, we’ve seen many attempts to isolate the truly distinguishing feature of multi-tenancy: that it runs on a single database or a single machine, that everyone gets upgraded at the same time, etc., etc. It’s just as silly and fruitless to do this as to find the single, isolating characteristic of “chair.” (Four legs? No, there can be chairs with any number of legs or none. People sit on it? People sit on lots of things besides chairs.) The only real way you can tell is to look at the engineering.
Not surprisingly, many vendors, even ones claiming to be “true multi-tenant,” don’t pass muster. A month or so ago, my old colleague Brian Sommer spent a good deal of time calling around to companies that claimed to be multi-tenant. “You would be astonished, ASTONISHED,” he told me in his inimitable way, “at how few there actually were.”
If we hadn’t wasted time wrangling about definitions and propounding definitions that simply missed the real point, maybe a healthy discussion of the engineering issues surrounding multi-tenancy would have grown up by now. The plain fact is that the way Salesforce.com does it is really different from the way NetSuite does it, each company making quite different tradeoffs and pushing quite different levers. But today, whenever you get one of these obligatory “due diligence” site visits from companies fearfully dipping their toe into the cloud waters, the only “engineering” questions anybody asks have to do with SAS-70 Type II.
The Typewriter, the Word Processor, and the PC
April 9, 2010
In yesterday’s post, I argued that ROI would not be an adequate measure of the benefits conferred by new-gen (or pseudo-new-gen) applications like Workday, Business By Design, or Fusion Application Suite. The previous-gen applications were all about automation. The new-gen suites confer real benefits (I think), but not necessarily benefits that fall through to the bottom line.
What benefits are these? Well, they have to do with working more effectively: making fewer errors, putting more time into work and less into busywork, making more accurate decisions, faster. Is there benefit from this kind of thing? Sure. But how do you measure it?
In the post, I suggested a hazy term, “operational effectiveness,” for the benefits one should expect. What is “operational effectiveness?” Let me admit freely that I don’t know for sure. In this post, let me propose an analogy, which should help you to understand what I’m getting at.
The analogy comes out of a historical situation that always posed a problem for ROI analysis: the transition in business from the typewriters that sat on secretaries’ desks to the PCs that sat on executives’ desks. This transition occurred in two phases. First, the typewriters on the secretary’s desk were replaced with big, clunky word processors that sat next to the desk. These word processors automated the secretary’s document production work. Then, the secretarial position itself was eliminated, and the typing function became something that executives did themselves on that PC.
The transition to word processors could easily be justified in ROI terms. We could get more work out of the secretary or else hire fewer secretaries. Whether the justification was real is an open question. But it’s certain that that’s how people thought of it.
The next transition was much more problematic for ROI analysis. Expensive executive time was now being put into jobs that had been performed more efficiently by much cheaper labor.
At the time, people didn’t put a lot of thought into figuring out why they were funding this transition. Executives saw the PCs, knew that everyone else was using them, needed them for some functions (e-mail, spreadsheets), and just decided. “We’re doing it this way.” At least in my recollection, that’s what happened.
So were they just loony or lazy or wasting shareholder money on executive perks? I don’t think so. I think what they were plumping for was the same “operational effectiveness” that I’m talking about 25 years later.
True, they spent more time typing. But they also had more control over the final product; they could change the product more easily; and they could distribute it without much overhead. And, at the same time, they were changing the form of what they were doing. They weren’t just producing typed memos; they were producing documents with fancy fonts and illustrations, and they were creating PowerPoints. True, many an executive was spending ridiculous amounts of time fiddling with type sizes so that they could get things on one page, but even acknowledging that, they thought the new way was better.
Indeed, by the time the transition was finished, justification wasn’t even a question, because the new tools changed the nature of work, and now you couldn’t get along without the tools. When executives were doing the typing, they stopped creating long reports. More and more of the time, a corporation’s decision-making wasn’t even wrapped around a full document (minutes, memos, or formal reports); it was wrapped around PowerPoint decks.
So by the end of the transition, ROI analysis had become entirely moot. How could you get a tangible measure of benefits when you were comparing apples and oranges?
Could we be seeing a similar transition now? It’s certainly possible. The analog to the word processors is that first generation of enterprise applications, which were funded by the automation benefits they confer and by ROI analysis. The analog to the PC is the second generation of enterprise applications.
(One caveat. As I’ve said before, I don’t think that Fusion Applications or the versions of Business by Design that I’ve seen are in fact second-generation applications. They’re more like Version 1.3. But they’re close enough to next-gen to raise the problem I’m talking about.)
If the analogy holds and if second-gen apps work as the developers hope, the benefits that businesses are going to experience will be equally hard to get your arms around, partly because the benefits are so subtle and disparate and partly because you’ll see a shift in the way work is done.
Does that mean that we won’t be able to talk about the benefits and we’ll just bull ahead with them? Well, that’s why I’m introducing the notion of operational effectiveness. It does seem to me that we can get clearer about what the benefits are.
So come on guys. Make comments. What is operational effectiveness? And how can we tell whether we are getting it?