Future of Cloud Computing: Data Centers, Outsourcing, and the Power of Cultures

6/3/09

Over the last couple of years, we have been witnessing a resurgence of the “Internet as the new computing platform” idea. I say resurgence because that was the premise of the late ’90s “Internet Bubble.” Given that history, it would be a mistake to use the same term, of course. Instead, we’ve coined a new one— “cloud computing.” New term aside, the core premise of providing applications & services over the Internet remains at the heart of this resurgence. The Internet Bubble has taught our industry to be more circumspect. Furthermore, with 10+ years of multiple successful Internet services under its belt, the industry is that much more mature. Amazon, Google, Hotmail, Yahoo, and SalesForce.com are all great proof points of this maturity.

Yet, as I observe this resurgence, I see the hype rebuilding and worry we might be getting carried away again. History is a great place to look for patterns that help predict the future. Here are some I see.

Fallacy of paradigm shifts

Mainframes to PCs, and now PCs to the Cloud. I’ve noticed that when there is a paradigm shift, a set of people take on the task of re-implementing existing applications in the new paradigm. Somehow, a mindset develops that a “paradigm shift” is an opportunity to replace incumbent solutions by re-implementing them in the new paradigm, and that the world will suddenly move over. I believe it is a waste of time & energy to re-implement applications that already work well in the existing paradigm. The ROI for making the shift rarely exists for the majority of the market. It is important to internalize that computing is a tool for most organizations and individuals, not a way of life (as it is for some of us), so they won’t make the change unless there is a very good reason to do so.

Historically, when computing shifted from mainframes to PCs, some believed that mainframes would go away and even tried to rewrite key mainframe applications on PCs. Mainframes remain, and they continue to be a very healthy business. Most batch/transaction-processing applications that worked extremely well on these systems continue to do so. What popularized the PC was killer applications such as spreadsheets and word processing, which made computing tools much more accessible to businesses and individuals at large and significantly increased efficiency and productivity compared to paper and typewriters.

The same is going to be true as we make the shift from PCs to the cloud. PCs aren’t going away—in fact, they are at the heart of popularizing the Internet and fueling the adoption of this new computing paradigm. PC applications such as Microsoft Office and Adobe Photoshop, which harness local computing capacity, local storage, and huge existing user bases, aren’t going to be easily replaced, if at all. In fact, I believe that such efforts will see very limited success. I encourage entrepreneurs and innovators to focus their energies on new applications and services that weren’t possible before the Internet (the cloud). Applications such as e-mail and search are great examples. I’m sure there are many more!

Outsourcing IT to the cloud. One “application” enabled by ubiquitous Internet access is the ability for organizations to move some of their on-premise IT infrastructure to an off-premise location. Doing so has clear ROI motivators, such as eliminating significant capital expenditure when deploying infrastructure for a new application or expanding/replacing existing infrastructure. In other cases, it may be motivated by reducing the ongoing operational expenses of running the IT infrastructure by letting a specialist organization do it more economically. This is the value proposition of many “cloud” (a.k.a. data center) players such as Amazon’s AWS, Rackspace, The Planet, and others. Note that this is analogous to IBM Global Services replacing internal IT staff in organizations with trained IGS consultants to improve efficiency through IGS’s best-practice processes. However, outsourced IT in the cloud is the type of scenario that tends to exaggerate the “replacement” hype about the new paradigm. By extrapolation, it seems to imply that all applications will eventually be hosted in the cloud—i.e., all IT will be hosted in a set of Internet Data Centers.
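
To make the capex-versus-opex trade-off concrete, here is a minimal back-of-the-envelope sketch in Python. All of the figures (server price, operating cost, hourly rental rate, utilization) are illustrative assumptions, not quotes from any actual provider.

```python
# Back-of-the-envelope comparison of buying and running servers on-premise
# versus renting capacity from a cloud provider. All numbers are illustrative
# assumptions, not quotes from any real vendor.

def on_premise_cost(servers, server_price, monthly_ops_per_server, months):
    """Up-front hardware purchase (capex) plus ongoing operations (opex)."""
    capex = servers * server_price
    opex = servers * monthly_ops_per_server * months
    return capex + opex

def cloud_cost(servers, hourly_rate, utilization, months):
    """Pay-as-you-go rental: you only pay for the hours you actually run."""
    hours_per_month = 730  # roughly the number of hours in a month
    hours = months * hours_per_month * utilization
    return servers * hourly_rate * hours

if __name__ == "__main__":
    months = 36  # compare over a three-year horizon
    on_prem = on_premise_cost(servers=20, server_price=5_000,
                              monthly_ops_per_server=150, months=months)
    rented = cloud_cost(servers=20, hourly_rate=0.40,
                        utilization=0.3, months=months)
    print(f"On-premise over {months} months: ${on_prem:,.0f}")
    print(f"Cloud at 30% utilization over {months} months: ${rented:,.0f}")
```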

Something about this extrapolation bothers me. I’m unable to fathom how computing that is now highly decentralized and distributed all over the globe can be made to converge to a few data centers. Ironically, this doesn’t seem to compute in my little brain. Not all applications require Internet Data Centers. Such data centers are prohibitively expensive to build and operate—even more so than the mainframes! And such data centers have inherent physical limits in terms of scalability when you consider real estate, building, power, mechanical and bandwidth needs of such a facility. Why would it ever be a good idea to have a word processing application run in a data center and have millions of users all access it over the Internet when it is much cheaper and faster to run it right on the user’s personal device?

I believe that certain classes of applications lend themselves to needing data centers. Internet search is one of the best examples; many mega e-commerce applications fit the bill as well. Applications that need to process large volumes of data, slicing and dicing it at high speed to find patterns and interrelationships, need the concentrated capacity of data centers. However, data centers are very expensive beasts, so the applications that run in them had better justify the ROI. I am so happy that Google figured out how to monetize Internet search through advertising! Unfortunately, not everything is monetizable through advertising. A large number of applications are much more efficient and cost-effective to run on the edge of the Internet, rather than in the (data) center. The center may provide facilities such as security, control, orchestration, rendezvousing, or meta-data that make these edge applications more powerful, efficient, and effective. The footprint of such coordination facilities is usually small, and in many cases it may even be better to have many small centers around the globe than one large one. I do not believe it makes sense to centralize these applications unless a strong revenue model can be demonstrated to support the underlying data center costs.
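
As a rough sketch of this “small center, heavy edge” shape, here is a hypothetical rendezvous service: the center only tracks which edge nodes hold which data, while the storage and bandwidth stay out on the edge. The class and method names are invented for illustration and do not correspond to any real product’s API.

```python
# A minimal sketch of the 'small center, heavy edge' pattern: the center keeps
# only lightweight metadata (which edge nodes hold which keys), while the data
# itself stays on, and moves directly between, edge nodes. The names here are
# invented for illustration.

class RendezvousService:
    """Tiny coordination core: maps content keys to the edge nodes holding them."""

    def __init__(self):
        self._locations = {}  # key -> set of node addresses

    def announce(self, key, node_address):
        """An edge node registers that it holds a copy of `key`."""
        self._locations.setdefault(key, set()).add(node_address)

    def lookup(self, key):
        """Return the edge nodes a client can fetch `key` from directly."""
        return sorted(self._locations.get(key, set()))

if __name__ == "__main__":
    center = RendezvousService()
    center.announce("report.doc", "198.51.100.7:9000")
    center.announce("report.doc", "203.0.113.12:9000")
    # The client asks the small center where the data lives, then transfers it
    # peer-to-peer; the bulk storage and bandwidth stay out on the edge.
    print(center.lookup("report.doc"))
```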

Power of Decentralization

So, I believe there are at least two classes of cloud applications: centralized and decentralized. Today, most of the energy and investment is being spent on building centralized cloud applications, and surprisingly little is being spent on enabling decentralized ones. Most applications we see on the Internet today are hosted centrally in some data center and take very little advantage of the capabilities on the edge. Heck, electronic mail, which was designed to be a decentralized application, has been turned central by mega e-mail providers such as Hotmail, Yahoo, and Google! Business e-mail is still fairly decentralized, though we do see a definite trend towards centralization there as well. Skype is one of the few decentralized cloud applications in widespread use. I believe there is a lot of business opportunity to uncover in this space.

Let’s take a concrete example: customer acquisition costs. For centralized cloud applications, these come on top of the already prohibitive data center costs. Using the manufacturing industry as an analogy, the cost of goods is usually only a fraction of the price the customer ultimately pays; the rest of the money goes into getting the product from the factory floor to the point where the target customer actually pays for it. Then there is the important issue of how much something is worth to a customer—it determines the maximum price you can extract from the target customer. If the cost of goods + channel costs + profit margins don’t fit within the price the customer is willing to pay, there is no market for the product. Now, imagine if you could take a large chunk of the money you would have spent on a data center to scale your cloud application centrally, go for a more decentralized design instead, and thereby lower your “cost of goods.” This creates money on the table for the channel to get your application to its target customer. That could be all the difference between no market and a viable market for an application or a service.
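
To spell the arithmetic out, here is a toy viability check in Python. The function and every number in it are hypothetical; the point is only to show how lowering the “cost of goods” with a decentralized design can leave room for channel costs and margin under the same willingness to pay.

```python
# A toy viability check for the pricing argument above. Every figure is
# hypothetical; the point is that a product only has a market when cost of
# goods + channel costs + desired margin fit under the customer's price.

def is_viable(cost_of_goods, channel_cost, desired_margin, willingness_to_pay):
    """True if the price the customer will pay covers everyone in the chain."""
    return cost_of_goods + channel_cost + desired_margin <= willingness_to_pay

# Centralized design: data center costs push the cost of goods up.
print(is_viable(cost_of_goods=12.0, channel_cost=5.0,
                desired_margin=4.0, willingness_to_pay=15.0))  # False: no market

# Decentralized design: a lower cost of goods leaves money on the table for the
# channel, so the same willingness to pay supports a viable market.
print(is_viable(cost_of_goods=4.0, channel_cost=5.0,
                desired_margin=4.0, willingness_to_pay=15.0))  # True
```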

I’m a big believer in decentralized systems in general. Decentralization is by definition more chaotic, and most of us struggle with chaos, so there is a natural drive towards centralization. Centralization brings order and control. However, decentralization is the only thing that ultimately scales. The Internet’s power comes from the fact that it is decentralized: no one person or organization controls it, and it spreads organically through natural economic drivers. It is, however, a chaotic place. The open source movement tells the same story. It might come as a surprise to some that Microsoft succeeded in establishing a huge Windows franchise through a highly decentralized strategy across OEMs, ISVs, and VAR/channel partners! Yet again, it is a bit of a chaotic world when you compare it to, say, a closed system like the Apple Mac. If you don’t believe me, go ask anyone on the Windows team. From the outside, it might seem like Microsoft controls the ecosystem, but in reality the Windows ecosystem today controls what Microsoft does more than the other way around.

It is worth observing that all decentralized systems do have a central core of some sort. Standard specifications like TCP/IP, HTTP, and HTML are at the core of the Internet. Linux is at the core of the open source movement. Windows is at the core of Microsoft’s ecosystem. Initially, each of these systems came into existence as part of solving specific problems that its implementers were focused on. However, what ultimately made these systems successful was the focus on attracting more and more people to use the core in different ways to solve their own problems. Solving more problems created more solutions, and solutions generated revenues. A virtuous economic cycle ensued.

Not surprisingly, at Symform we are creating a decentralized cloud system. It is not just decentralized technology; it is also being deployed and monetized through a decentralized business model—a partner network with revenue opportunity. We know it will be chaotic, but we love it because we believe it will enable a large number of people to generate highly lucrative economics for themselves. And we hope that, in doing so, we will ignite another commercially viable ecosystem.

Power of cultures

In The Innovator’s Dilemma, Clayton Christensen explains why organizations become specialized and contextualized and find it hard to break the mold. The power of cultures is strong. It is often said that most organizations develop a DNA early on that is hard, if not impossible, to change. We are going to see some of the effects of this in the new paradigm of cloud computing as well. Amazon has the culture of an e-retailer. It deeply understands how to sustain and grow a business with razor-thin margins by tightly managing costs and by mastering demand/supply and inventory management. Furthermore, it has years of know-how under its belt on how to run highly reliable and efficient Internet services. This DNA makes Amazon one of the strongest players in my book for offering efficient, cost-effective, and reliable data center services.

Contrast this with Microsoft. As noted earlier, Microsoft’s DNA is a decentralized platform ecosystem, productivity applications, and developer tools. It knows high-margin software economics. And as evidenced by the lukewarm success it has had with Internet services, it does not have the DNA for running a high-capex/opex services business with razor-thin margins. Given this, it is hard for me to bet on Windows Azure (the name isn’t helping either). As an ex-Microsoftie, I hope they surprise me.

Google is more difficult to analyze from a culture perspective. On one hand, like Amazon, Google really knows how to run some of the most efficient data centers. Furthermore, it is creating and exposing its platform exactly how I believe platforms should be created—by first building killer applications and then taking the common layers of those applications and exposing them as utilities for other applications to take advantage of. On the flip side, though, Google seems to be too secretive as a culture, and creating a platform ecosystem requires a much more open and embracing culture to gain trust and attract others to your party. Insularity can be a huge barrier to adoption. Additionally, Google has a lot to learn about mobilizing a commercially viable ecosystem. Today, practically all of its revenues come from search advertising. An application platform business will require demonstrating to developers how they can make money by adopting Google’s platform.

Praerit Garg is the co-founder and president of Symform, a cloud data storage startup based in Seattle.

  • http://www.TheBlackBookOfOutsourcing.com Alan A. Perkins

    The Black Book of Outsourcing just named Cloud Computing as one of the six biggest paradigm shifts in outsourcing for the next few years. I recommend you go to their downloads page and get the full free study entitled 2009 STATE OF THE OUTSOURCING INDUSTRY REPORT. Very comprehensive and free.

  • http://permabit.com Tom Cook

    There is a lot of spin about Cloud in the market. Fundamentally, we view the cloud as a distribution channel. It simply aggregates demand for efficiencies.

    We supply massively scalable deduped, value tier storage to enterprises. Among our customers are both SSPs for public clouds and Fortune 2000 enterprises with private cloud deployments. Both models are built upon one provider aggregating users to achieve better service levels and cost efficiencies than the operating units can attain on their own.

    What technologies are required to address the private/public cloud storage markets?
    1. Massive Scalability – the solution must scale to PBs to address today’s needs and those of the future.
    2. Always Available – the storage must be fault tolerant and online.
    3. Secure – the target must be multi-tenant capable, compliant and encrypted.
    4. Cost Effective – the solution must provide high ROI.

    The attributes are simple but only a select few solutions make the cut. Like other distribution plays, winners will assemble best of breed products to supply a total solution.