Talk to the typical SaaS executive about the complexities of customer service in Cloudland, and you get a blank stare. What’s the problem? We provide our service. We guarantee it. We do a better job than…

Well, yes. But that’s not all. When you’re getting cloud services, the services often come from multiple providers. Each provider may believe it’s doing a good job, or the right job. But even so, delivery can fail. And when that happens, it’s a nightmare for the customer. A nightmare.

Here’s a simple example. I have a BlackBerry Pearl. My web site and domain name are hosted by XO. Two SaaS providers. I get my BlackBerry e-mail through XO, which forwards it to the BlackBerry domain (tmo.xxx.com).

All of a sudden, this weekend, I started getting lots and lots of new spam: you know, the usual sex stuff, plus a lot of spam with a Facebook return address, and so on, all of it stuff that had previously been filtered out. The new spam appears both on my BlackBerry and in the e-mail delivered directly to the POP3 account on my Mac. But much more of it appears on my BlackBerry.

Obviously, something happened to one or more of the spam filters provided to me. I know of at least two of these filters: one at XO and one on my Mac. Is there another one at RIM? Who knows.

I’d like it to stop, but I don’t have the time to troubleshoot this problem. Did XO change its forwarding policy? Did RIM install a new filter and screw up? Who knows.

If I had just one provider, I’d have some chance of solving it. But with two, I just have to sit and wait. The worst of it is that neither provider may ever realize there’s a problem.

I wonder if the iPhone has a spam filter on it.

The point? Well, cloud service providers need to start thinking about this, and taking responsibility for the actual delivery of services, not just for making those services available.


Acres of Servers

April 13, 2009

Brian Sommer offers this astonishing story about a bad implementation. The story comes from early in his career.

“I worked an entire year getting the first mainframe General Ledger (and other vendor apps) into production at a Texas client. I have never, before or since, worked so many hours in one year.

After that project, I was off to another installation of the same vendor’s software. When I arrived, I found a team that had been diligently preparing for a conversion to take place in approximately one year.

The users wanted to push this software product to the absolute extremes of its functionality. The software was going to be configured with virtually every conceivable option.

Having just lived through that Bataan-death-march of an implementation, I knew there were some practical limits to this software. Within 24 hours of being on-site, I shared with the client my disk-space concerns for one of the validation tables: a reverse code-block lookup table. By my calculation, this table alone would require 14 acres of IBM 3350 disk drives. That’s correct: 14 acres.

The client didn’t believe me.

Two weeks later, the client’s executive director invited me to listen in on a call he was having with the vendor’s CEO. Everything went well until we got to the subject of the lookup table. The CEO confirmed my math and told the client that his software was ahead of its time. He said that hardware hadn’t caught up with his software and that the client should “either scale back the functionality or do a hostile takeover of IBM.” Needless to say, that didn’t go over well with the client.

Now the client believed me.

They also believed me when I told them that the product wasn’t ready to run in IMS DB/DC. It took another 12-18 months before the technical platform was ready.”
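As an aside, the arithmetic behind a number like 14 acres is easy to sketch. The figures below are my own illustrative assumptions, not Brian’s: the IBM 3350 stored roughly 317.5 MB per spindle, a unit typically housed two spindles, and I’m guessing a footprint of about 10 square feet per unit once you include service clearance. A few lines of Python show what 14 acres of that hardware implies:

    # Back-of-envelope: what would 14 acres of IBM 3350 storage hold?
    # All figures here are illustrative assumptions, not from Brian's story.
    MB_PER_SPINDLE = 317.5      # approximate IBM 3350 capacity per spindle
    SPINDLES_PER_UNIT = 2       # a 3350 unit typically housed two drives
    SQFT_PER_UNIT = 10.0        # guessed footprint, incl. service clearance
    SQFT_PER_ACRE = 43560

    acres = 14
    units = acres * SQFT_PER_ACRE / SQFT_PER_UNIT            # ~61,000 units
    megabytes = units * SPINDLES_PER_UNIT * MB_PER_SPINDLE   # total capacity

    print(f"{units:,.0f} units, roughly {megabytes / 1e6:.1f} TB")

Under those assumptions, the answer comes out around 61,000 drive units holding roughly 39 terabytes, an absurd amount of disk for the era, which is exactly the point of the story.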

Thanks, Brian. But one question. Who was at fault? The client? The vendor?

“Fault? It was the project team. They went along with everything the client wanted. Sometimes, in the long run, you need to show clients a little tough love.”

Oh, and one other question: what is an IMS DB/DC?