January 29, 2013
This is the first of a series of occasional blogs whose main purpose is to make other people very rich. I mean, heck, I’ve got enough, or at least I would have enough if my family couldn’t read the word “Sale” in department stores.
So how can YOU get rich from my idea? Build an app that does it. I explain how the app works, and you go ahead and do it.
This first idea is pretty simple, but the app might be hard to build. That’s OK. You don’t usually get rich without talent.
The idea is to take advantage of something we’ve all observed: that management slows projects down. Not deliberately, mind you; usually, they want to speed things up. But all those gates and calendars and (with top, top management) staff that surround management make it a big production just to get hold of these guys. (And it’s usually “guys,” I’m sorry to say.)
Now this makes no sense at all, if you think about it from the enterprise’s point of view. A team of busy, talented people who are being productive should never, ever wait around twiddling their thumbs for weeks until it’s time for their half-hour with the boss, who will usually make a decision in a split-second.
Bosses don’t like it either.
So what I propose is an app–or really, a series of apps–which I’ll call “unscheduling” apps. They’re designed to short-circuit, circumvent, and get rid of as much of the ceremony and delay associated with decision-making as possible.
I believe that small, simple apps that make incremental improvements are always better, so let me give you a few examples of what I mean, all of which are indeed very small improvements.
** Da De-Delayer: Your Table’s Ready, Sir. You know the problem. That really great executive just happens to be running late. Everybody else’s schedule gets thrown off. So do for the people waiting on busy executives what many restaurants already do for their patrons: send them an alert when their meeting is actually about to start. In the meantime, all those people can do something productive.
** Da Agendifier. Make it possible for the people who are asking for the meeting to put two lines or so right in the calendar that say what question is being asked or what decision is being asked for. This has to be readable at a glance. As with Twitter, giving people only 140 characters makes for a needed concision.
** Da Snooperintendent. Back in the days when people were co-located, managers would drop in from time to time and a lot of stuff would get settled. Nowadays, that isn’t possible. Your manager is in Bangalore or someplace and ain’t dropping in any time soon. So give the manager a virtual place where they can drop in and say hello and sniff around. The group that’s working on something would keep a live description of what’s going on now: what people are working on, questions that are coming up, issues. Next to each notation would be a presence indicator for each of the people involved. The manager would be able to drop in at any time, then send messages or, if the people involved were present, videoconference with them.
** Da De-Synchronizer. Meetings, which require synchronous communication, are much more expensive than a chat or e-mail exchange, which is asynchronous. So give people who want to talk to a busy executive a choice of two schedules, the synchronous (we’ll see each other face to face, eye to eye, mano a mano) and the asynchronous (I’ll spend xx minutes reviewing this and get back to you by XX). The executive then spends part of his (or her) time on each schedule.
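To make the first of these concrete: the De-Delayer’s core rule is tiny. Here’s a toy sketch in Python–every name below is invented for illustration, and a real app would pull these times from a calendar API rather than take them as arguments:

```python
from datetime import datetime

def should_alert(scheduled_start, exec_ready_at, now):
    """Toy De-Delayer rule: ping the people waiting only when the
    meeting can actually begin, i.e., at the later of the scheduled
    start and the moment the running-late executive frees up."""
    actual_start = max(scheduled_start, exec_ready_at)
    return now >= actual_start

# The 10:00 meeting slips because the boss is tied up until 10:25.
meeting = datetime(2013, 1, 29, 10, 0)
boss_free = datetime(2013, 1, 29, 10, 25)
print(should_alert(meeting, boss_free, datetime(2013, 1, 29, 10, 0)))   # False: keep working
print(should_alert(meeting, boss_free, datetime(2013, 1, 29, 10, 25)))  # True: your table's ready
```

Everything interesting in a real version–the calendar integration, the notification plumbing–sits around this rule, not inside it.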
I could go on and on, but there’s no need, because you can see what I’m trying to do. Whenever possible, give people more control over their (mutual) schedules, get right to the point, and try not to waste a lot of time when something simpler would do.
Comments welcome, of course. And of course if there are apps that really do any of these things already, feel free to boast.
January 21, 2013
Just finished a year and a half of unblogging. Had a gig, and oddly enough, what they wanted was exclusive access to my opinions about technology. So it wouldn’t have been right to broadcast them out to the net, too.
Before ramping back up, though I’m going to take a break. Sit back. Travel some. Read some books. Extend my range a little bit.
In the process, I thought, why not do the same with the blog?
What follows, then, has nothing to do with technology. But it contains much better advice than I usually give.
Read these books. All of them are terrific. A lot of fun. Worth your time. I first compiled the list for some people in my family who asked for it. Thought I’d share it with you.
In no particular order:
1. To Each His Own. Leonardo Sciascia. Sciascia was an Italian journalist who wrote both fact and fiction about the Mafia. His three works of fiction are among the creepiest murder mysteries ever written. If the thought of the Mafia scares you now, any of these very short books will have you looking in your closet before you go to sleep. In this one, a Sicilian priest begins to suspect that there’s more to the murder of a postman than meets the eye. And there is.
2. The Long Ships. Frans G. Bengtsson. An adventure novel written by a Swedish writer during the Second World War, set (more or less safely) in 975 AD, when the Vikings would row (literally) from Scandinavia to Spain and back, if they thought they could pick up some treasure on the way. The hero, a Viking captain, does indeed go to Spain, is captured, rescues himself from the galleys, becomes a favorite of a powerful Andalusian lord, revenges himself on his captor, escapes with a fair amount of booty, cleverly fends off the King of Norway who thinks that boodle belongs to him by rights, deals with mutiny, shipwreck, and treachery, and that’s just the first 100 pages. Then it gets better.
3. Odds Against. Dick Francis. Horses. Self-possessed loners who deal with whatever comes up. Burly, ruthless villains. It’s Dick Francis. What can you say? The best of the lot, probably, and if you like this hero (Sid Halley), there are two more with him.
4. The Spies of Warsaw. Alan Furst. Clever Nazi villains, but a cleverer hero, who just barely escapes. Wealthy, but available countesses who have a past; dangerous border crossings; bags of money. What could be more fun?
5. True Grit. Charles Portis. There’s a reason why it’s been made into a movie, not once, but twice. It’s a heckuva yarn. What the movies lose, however, is the best thing in the book, the voice of the heroine–resolute, prudish, penny-pinching Mattie, the one who’s willing to hire John Wayne and make him do what she wants, but sure as shootin’ ain’t gonna pay full freight, not if she can help it, even if it is John Wayne. I mean, he drinks. The (first) movie does have one thing superior to the book. It offs Glen Campbell. Can’t complain about that.
6. Gone with the Wind. Margaret Mitchell. Another book that’s better than the movie, but takes a lot more time. I don’t need to tell you anything about it, except maybe that the title comes from a poem by the Victorian poet Ernest Dowson, who also came up with “days of wine and roses” and “I have been faithful to thee…in my fashion.” No one reads Dowson any more. Does anybody read Margaret?
7. The Count of Monte Cristo. Alexandre Dumas. At a little over 1000 pages, it isn’t an easy commitment, and even Dumas got tired about page 850, but listen: even if you just do the first 350 pages, where he’s barely escaped from the Chateau d’If, you can call it a day and still have had about 20 books’ worth of great reading.
March 30, 2011
Groupon at $6 billion, for what, coupons? Twitter at who knows how much more, for what, stray pulses of thought? Cornerstone OnDemand at 10 times revenues? How much of that is for the word “OnDemand” cleverly tacked onto the name?
Are we in the middle of a cloud boom? And if so, can we learn anything from the last big boom, the dot-bomb?
Well, I remember the last boom pretty vividly–I was young and foolish back then–and I would like to offer, if not sage advice, a way of thinking about these companies that at least helps you separate them.
I’m going to use as Exhibit A, a rather smurmfy story that was circulating a few weeks ago in the financial community. Marc Benioff calls up Dave Duffield at Workday and says, “How much?” Dave says, “$2 billion.” The person telling the story intends for you to be shocked, shocked, but just in case you missed the point, he closes the story with a snide comment: “$100 million in revenue, and they want $2 billion? Time to sell cloud stocks.”
Now, I don’t believe that this story actually happened, for two obvious reasons. Reason #1. Both people have the good sense to keep the story to themselves. Reason #2. Well, wait until you get to the end.
But first, let me tell you about Workday.
Workday is a SaaS (cloud) company that does full-suite (HR, Payroll, and now Talent), global HR on an object-oriented in-memory database. It is run by two of the smartest people in the business, and since they cared about HR, they recruited many of the best people from PeopleSoft, which at one time had the best HR and Financials package out there. I’m not a super HR geeko dweeb genius–you know who you are, Naomi–but I’ve seen the package many times, and I’ve talked to them often about what they’re trying to do, and as far as I’m concerned, they’ve got a better mousetrap. They do HR (for their target market) better than anybody else in their space.
Now, when Workday was being started, they decided early on that they would build a product for big, global companies. They had done this once before at PeopleSoft, as I said, so they understood the challenges. Privacy laws, getting the addresses right worldwide, languages, expats, payroll, you name it. Hard stuff (and, I might add, not done entirely successfully the first time around). This time, though, they figured that their cloud-based, object-oriented, in-memory technology would be way better able to do all this hard stuff than what they had the first time around, plus, as it turns out, it would be highly buzzword compliant.
Have they gotten there? Not yet. But so far, so good. They’ve recruited one global client, Flextronics, and installed the package worldwide. There are issues, there are complications. But they’ve got a pretty good proof point, and now, they’re trying to sell it to and install it in other large, global companies who happen to want better mousetraps in HR.
So what do you think? Is this one-third as good as Groupon? One-fifth? As good? Or, as the smurmfer seems to suggest, 1/20th of what Groupon is worth, max?
Well, let me suggest a way of looking at this question. And please, the usual disclaimers. I don’t own any Workday stock, don’t have any economic relationship with them, and paid my own way to the last meeting I had with them. I’m enthusiastic about them, but because I can really see the point of in-memory, object-oriented cloud computing and I like the way they applied the technology to the problem.
So what’s the value of a better mousetrap?
When I think about any of these companies, I start by trying to figure out what kind of value they could deliver. Back in the dot-bomb days, this was a really good way of separating out the wheat from the chaff, because many of the vendors didn’t deliver any value at all and wouldn’t unless they changed what they were doing radically. CommerceOne, I remember, was my paradigm for this. Their software didn’t do anything and wouldn’t do anything, so there was very little value.
I then figure that the value of the company is some function of the value they deliver multiplied by the time that they can deliver it for. This last part is important. One worry I have about Groupon, for instance, is that it might turn out to be a fad. People will get tired of getting coupons (unlikely, I admit), or merchants will get tired of giving stuff away, and after a while they’ll fade to black. With Google, by contrast, I don’t have that worry. It will deliver value for a very long time to come, unless people stop using the web.
So where does Workday stack up on this measure? Obviously, it’s not bad. If it really is a better mousetrap and it gets into more of the large companies in the world, there’ll clearly be a lot of value delivered. And given how long companies use these products (20 years is pretty normal), the total value delivered over time is significant. As much as coupons? Sure. Maybe more.
But, then why haven’t they already zoomed up into the stratosphere, like Facebook or Twitter? The answer, I think, points up a really important difference between the public cloud applications and the ones, like Workday, which are meant for organizations. The public consumer applications are things that an individual picks up and uses. But enterprise applications are really a piece of infrastructure, like buildings or electrical systems, which don’t deliver much value until they work for a lot of people.
When you build products for individuals, you can bootstrap. But when you build infrastructure, revenue doesn’t come in right away. There’s always a large up-front investment, partly in physical plant, partly in getting the infrastructure to the user. True of electrical systems or telephone systems. True of roads or harbors. True of enterprise software systems.
So Workday today isn’t zooming because it’s still building out. In a way, it’s a little bit like Hoover Dam, say 75 years ago, when the dam was done and the pipeline was being built. A monster amount of money had already been spent on the dam, and during that time, there was no revenue at all. Then, while the pipeline was being built, some revenue came trickling in (hee, hee). The people in El Centro or Thermal or Brawley were getting good, fresh water, and they were paying for it. But it didn’t add up to a lot. No offense meant to Flextronics, but this is the situation that Workday is in now, with that $100 million that some people find so sneer-worthy, if $100 million it is.
So why are they sneering? Well, sure, if the pipe stopped in Brawley, it wouldn’t be so great. But you have to ask, why does anyone think the pipe’s stopping there? Presumably, the pipe will go farther, hit a city or two, and then the investment looks a lot better.
Sure, there is some doubt, as long as the value isn’t actually being delivered. An earthquake might break the pipe or the dam. The water might be no good. Somebody might invent a fresh-water-from-salt process that works really well. At Workday, people might not like the product as much as I do. It might be difficult or expensive for companies to dump their old systems. New competition might crop up.
But that’s not the point. The question is, how do you value Hoover Dam when the pipe has only gotten to Brawley? Well, you don’t start out by sneering at the revenue they’re getting so far.
So, two points to remember, if you’re afraid of boom valuations.
1) The real value is a function of the total value delivered over the life of the product.
2) For infrastructure products, especially cloud infrastructure products, you see a huge up-front investment, and while things are being prepared, money comes in slowly.
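The arithmetic behind these two points can be sketched in a few lines of Python. Every number below is invented purely for illustration–the point is the shape of the calculation, not the figures:

```python
def rough_value(annual_value, years_of_life, ramp_years=0):
    """Toy valuation: total value delivered over the product's life.
    For infrastructure products, the early build-out years deliver
    little, so they're excluded. A way of thinking, not a model."""
    productive_years = max(years_of_life - ramp_years, 0)
    return annual_value * productive_years

# A faddish consumer app: splashy annual value, but a short life.
fad = rough_value(annual_value=500, years_of_life=4)

# An enterprise suite: steadier annual value, a 20-year life
# (typical of installed HR systems), minus a long build-out
# during which revenue only trickles in.
infra = rough_value(annual_value=200, years_of_life=20, ramp_years=5)

print(fad, infra)  # 2000 3000
```

On this toy arithmetic, the slow starter ends up worth more over its life, which is the whole argument of the Hoover Dam analogy.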
So, under these criteria, how does Workday look? Well, it’s pretty good. Take a look at the value that Flextronics is planning to get. Assume that other large, global companies will also want that value. And assume that once it’s in place, the product can go on for the 20 years that PeopleSoft seems to be lasting inside companies. As good as Groupon, ya think? I think.
Which brings me to the second reason I don’t believe that story. $2 billion sounds like a number that a value investor made up because he’s trying to show how ridiculous valuations have gotten. But when you look at what’s actually going on there, $2 billion seems off. It appears to me that it’s rather too low.
October 20, 2010
The other day, my 2nd grader told me how to get to the moon. Take a ladder and lean it up against the house. Climb up the ladder with a second ladder and put the two together. Then iterate.
I was reminded of this the other day when I read yet another press release from a software company, which promised improved profitability and massive ROI from one of its apps. I happened to be familiar with this species of app, and it seemed to me that it was just like my daughter’s ladder. You could start on your way to the moon with it, but in fact, the only way it was really going to help you get there was if you took the ladder and leaned it up against the airlock of a rocket ship.
This app is what I want to call a “ladder” app, an app that can indeed take you in the direction that you want to go, but that is all. It can’t actually get you there, and any time you spend on it is going to be wasted.
I’ve written about ladder apps before. There was the replenishment application used by a Belgian retailer that was reported on by Vishal Gaur of Cornell. The app calculated the optimum replenishment time and amount for every item in the store, but it optimized on the wrong thing, so every one of its recommendations was wrong. Every manager had to spend 10-15% of the work week correcting each individual recommendation or else run short.
Or there was SAP’s famous Solution Manager, which would supposedly make your use of the SAP product and SAP support more effective, but only if you documented every process, custom application, and integration where SAP was or might be involved.
How can you tell whether something is a ladder app? It can be very difficult, believe me. Those ladder ideas always sound plausible; it’s only when you look into them that you find that there’s some insuperable barrier.
One rule of thumb. If it sounds hard, it probably is. If some guy knocks on your door and says, “We can get you to the moon,” look around very, very carefully. He’s probably got a ladder on the truck, so he can show you just how high he can get you and how fast.
September 27, 2010
What is “cloud” computing, and how does it differ from “hosted?” The question emerged once again recently among the Enterprise Irregulars, as one Irregular used the terms interchangeably and another objected. Neither, however, wanted to get into semantics (that is, what the words “cloud” and “hosted” actually mean), so they agreed that the problem is knotty and went on with their lives.
These are both sensible, intelligent people who make sane decisions. But I want to point out that something has happened to both of them. Both of them, I think, have become victims of what I’ll call “smash and grab semantics,” a practice where companies will take a term that they find attractive and use it to describe something that they do. In some cases, this is pretty legitimate–I think both Salesforce and Amazon can use the word, “cloud,” without arguing about it too much–but in the case of “smash and grab semantics,” there’s a real distortion; something of value has been taken when they do it.
In other contexts, we regard smash and grab semantics as pretty reprehensible, because something quite real is at stake. I was talking last night to the editor of a newspaper in a country that calls itself “democratic,” but isn’t. He goes to court regularly and regularly ends up in jail, though not usually for too long. The last time he was hauled into court, the judge began the proceedings by asking, “Do you stand behind the lies that you just published?” As far as he’s concerned, when a regime practices smash and grab semantics on the word, “democratic,” it helps this regime get away with putting him in jail. And he should know.
So, is there a distortion in the use of the term, “cloud” vs. the term, “hosted?” And does this distortion give the people who’ve grabbed the term something they shouldn’t have? I think so. The plain fact is that cloud is a lot cheaper than hosted, a lot cheaper, because cloud applications or services have been engineered to share resources efficiently, something that hosted applications or services can’t do, because they would have to be rewritten from the ground up.
So when people offer something hosted and make their customers think it’s cloud, they’re giving people the idea that they’ve done the homework and are providing the advantages of efficient resource sharing when in fact they aren’t.
The details matter, of course, which is why my two very smart colleagues didn’t want to get into a complicated argument. With hosted applications there is always some resource sharing, by definition. But the plain fact is that whatever the details, something that’s engineered to be cloud is roughly ten times cheaper than hosted. So whatever the details, people who are hosted but call themselves “cloud” are doing a bit of smash and grab.
So how do you combat smash and grab semantics? Fortunately, the answer to this question has been known for a hundred years. You don’t let them get away with it. Despite the fact that they want you to use the term they’re using, you don’t go along. George Orwell put it better than anyone in “Politics and the English Language.” I am paraphrasing. If you want “language [to be] an instrument for expressing, and not just concealing or preventing thought,” you must “choose…the phrases that best cover the meaning.”
Don’t get me wrong. If somebody says their offering is “cloud” or “SaaS” or “on-demand” when it is actually hosted, this is smash and grab, but only on a small scale. It is not the same as using the word “democracy” to describe a tyranny. One is good, aggressive marketing; the other is morally confused. But since the technique used–smash and grab semantics–is the same, you combat both in the same way. You use the right word.
September 17, 2010
Over the past 20 years, the American Airlines nonstop between BOS and SFO has been a fixture of the software industry. Almost any time you took it, you’d see friends, and even when you didn’t, you’d be sure you hadn’t boarded the wrong plane, just because of the number of Oracle bags.
This flight is no more. A little more than a month from now, American Airlines will end non-stop service between SFO and BOS.
Why? The commuters left American for JetBlue and Virgin America, for better seats, better entertainment, fewer niggling charges, and maybe even better treatment. I saw this myself when I actually got ** an upgrade ** on American a few weeks ago, and I confirmed it by talking to a frequent commuter who had gone over to JetBlue even though he had more than a million and a half miles on American.
I wonder. Is this how other empires crumble? In my world, you have SAP and Oracle, seemingly as unassailable as the Great Wall of China, but both offering an experience that has been compromised by excess investment in an aging infrastructure, a labor force that has been asked to shoulder most of the burden imposed by this infrastructure, and a management that grew up in the good old days and still (my guess is) secretly longs for them.
For a long time, people like myself stayed with American. And while that was happening, the management could fool itself. Leather seats? No fees? These don’t matter to our loyal customer base, so we don’t need to invest the boodle that we don’t have in matching the competition. Until, of course, the day it did matter, when there was nothing they could do.
What are the moral equivalents of leather seats and fee-less flying in the enterprise applications market? Oh, it’s any number of things. It’s the fact that search, if it’s there at all, is just about as cumbersome as search was on AltaVista. It’s the fact that doing anything, anything unfamiliar requires the equivalent of a Type III license and just about as much training. It’s the endless, endless wait for even the most trivial or needed changes in the system. It’s the fact that you can’t use them on an iPhone or an iPad or even a Mac. (It’s amazing how many enterprise application companies are IE only.)
Please feel free to add to the list. It’s kind of fun.
Oh, people can and will put up with it for a really long time. But eventually, if the competition is smooth and persistent, they can march into a market and take it from the behemoths. I think you’re seeing this today where Workday is replacing PeopleSoft 7.5, and you saw it a few years ago when Salesforce was replacing Siebel. It does happen.
Now if only I could remember my JetBlue frequent flyer number.
July 14, 2010
Who is to blame for IT project failures? My colleague, Michael Krigsman, argues that when IT projects wander into the “IT Devil’s Triangle,” all three participants–the vendor, the integrator, and the customer–are to blame. Michael is very insistent about this; in a recent post on Marin County v. Deloitte, he says, “In my view, it is highly unusual for a project to fail without some culpability on the part of the customer.”
Michael is the guy who almost singlehandedly took IT project failure out of the closet and into the open, and much of his professional life is spent analyzing these failures. For that reason, one should not criticize what he says without good reason. But I’ve always been uncomfortable with his line on things and felt that we would all be better off if we had better analytic tools for this kind of situation. I want to know when the “culpability” is minor and merely contributory and when it is really important or even primary.
One’s normal intuition about such things is that there are broad spheres of responsibility. If I get an artificial hip implanted, it’s up to the manufacturer to provide my doctor with a hip that works, up to the doctor to put it in correctly, and up to me not to abuse it. This is the intuition that underlies a lot of contract law and liability law, and it even applies to very large, complex engineering projects. When the structural engineer found a flaw in the Citicorp building in New York City, a flaw that could have caused it to collapse, the engineer and the contractor took responsibility for it and had it fixed. They didn’t blame the customer, and they didn’t ask the customer to adjust his use of the building.
If you apply this intuition to technology, it works pretty well. In the case of IT projects, the software company is responsible for making sure that the design and building of the product is adequate to the demands that will be placed on it, the integrator is responsible for deploying the product in a way that meets what one might call the normal expectations of the customer, and the customer is responsible for using it in ways that are consistent with the design. If any of them fail to meet their responsibilities in their sphere, then they are culpable. And the others are not culpable (except perhaps in a minor way) if they don’t wish to compensate for that failure.
Now, I think Michael would agree with this, but he would say that, as a practical matter, what happens in these large project failures is that all three parties fail to deliver, so all are culpable. (Michael, please correct me if I’m wrong.)
But it seems to me that doesn’t go far enough. I think you need to be able to figure out when one participant is clearly most culpable. Perhaps, for instance, one ought to apply a notion of priority, so that if the vendor fails to deliver, the vendor is primarily culpable by definition, and the most that the other two parties can do is provide some minor contribution. (This tends to be the approach in products liability, with some major caveats.) Or perhaps there are some other tools that need to be applied, tools that would allow one to extend (or perhaps even change) the “pox on all their houses” line that Michael takes.
Whatever you do, though, it isn’t easy.
Let me try to illustrate what I mean with an example that has made a lot of news recently: the iPhone 4. Here you had three participants: Apple, AT&T, and the customer. You had a product that “did not work properly.” And you have a dispute about levels of culpability.
The problem, in case you’ve been in Tibet for the last month, is that the iPhone’s reception isn’t very good, for reasons that still aren’t clear. (My wife has one. It’s true; it’s not good.) The customers have basically been told by Apple, “The problem is in the way you hold the phone. Either hold the phone differently, or fork over $25 for a case.” The customers have said, if I may paraphrase, “It’s a phone. You should be able to hold the phone any way you want. It’s up to Apple to give me a phone that works no matter how I hold it.”
Now, the IT Devil’s Triangle analysis says, basically, that the customers are wrong. According to this argument, Apple made a good phone that made some design tradeoffs. (Basically, the antenna was put on the rim of the phone.) If one of those tradeoffs means customers have to hold the phone in a special way, so what? They should either buy themselves a case or hold the phone correctly. If they don’t want to do that or can’t, they are contributing to the problem and are at least as culpable as Apple.
So is AT&T off the hook, here? Under the Devil’s Triangle argument, not at all. The phone service that they provide is weak and erratic, and that makes it very difficult to troubleshoot the phone. (I have experienced this in spades.) Their customer “service” is designed to handle normal problems expeditiously, but is not designed to track and resolve complex problems, and as a result, it makes what could be an irksome experience into something verging on horrible. (I have also experienced this, believe me.) And that means that customers are not as tolerant of the iPhone 4’s design choices as they should be.
It’s something of a conundrum, isn’t it? One’s normal intuition is that Apple should design a phone that people can hold any way they want. But one can certainly make a cogent argument that it’s really the customer, as much as anyone else, who is to blame, because they get mad at Apple, rather than being willing to hold the phone in the correct way.
In the end, the solution to this riddle is a matter of values. If you think (as Consumer Reports does) that it’s up to the vendor to get things right, when it comes to simple things like how one holds the phone, then the Devil’s Triangle argument is simply wrong. If you think (as Steve Jobs and many of my friends who work for vendors think) that the customer is to blame if they can’t deal with the “flaws” that they find in a complex and highly-engineered technology, then you’re going to agree with Michael and assume that there are very few situations where the customer doesn’t have some culpability.
Let me just tell you where I come down on this and hope for some illuminating comments. I think the benefit of the doubt should be given to the user. If the vendor is clear about the design tradeoffs and limitations and the deliverer (integrator) provides service that takes those limitations adequately into account and is clear about that, then I think, “Yes, it’s up to the consumer to deal with the limitations.” If the vendor and deliverer fail to do this–if the vendor isn’t clear about what the design tradeoffs are or if the vendor and deliverer allow you to get the idea that the product will do things (like make phone calls in normal situations when being held normally) that it won’t in fact do–then the onus is on them.
So, for instance, if the building architect makes a mistake in the way the building was designed or the general contractor deviates from the design in a way that turns out to be dangerous, it’s up to them to fix it; the building owners are not culpable because they don’t want to limit the number of people allowed on each floor to a number far below what they’d been led to expect. And if the hip designer uses a brittle material and the doctor chips it when he or she installs it, it’s not up to the patient to walk less.
Just my view.
So why is this important? It’s important because this is not how the software industry works. It’s normal for vendors to be very close about the design of their technology, for vendor and integrator salespeople to make unrealistic claims about what the technology does or about the benefits to be received, for the people delivering the product to view limitations in delivery as upsell opportunities, etc., etc. So if my view were to prevail, and it were up to the vendor and deliverer to make sure that the product works as you would normally expect and the installation is what it should be, the industry would have to change.
Is that likely? Maybe, maybe not. But it gets more and more likely every time a user asked to adapt to some software’s vagaries asks himself or herself, “Isn’t this like being asked to hold an iPhone with tweezers?”