After Virtualization: VMware’s Valiant Plan to Co-opt the Cloud
traditional virtualization software. Using vSphere, VMware’s core virtualization engine, organizations can create up to 10,000 virtual machines within a single cluster of computers. But big computing jobs also require dedicated storage and network bandwidth—and lining them up manually can still be a slow, complex process.
When you rent resources at an Amazon or a Rackspace, somebody else handles all of that, which is why developers like public clouds so much. The ideal alternative would be a private cloud—a set of computing, storage, and networking resources that’s entirely on-premises but still scales up and down on demand, automatically. But that wasn’t exactly what VMware was offering with vSphere.
Even before Gelsinger’s arrival, the cloud question was convulsing the company, says Jacques. “The big questions that came from our customers and channel partners were, ‘You guys have great technology, but how are you different? Is having a private cloud the same as virtualization, or is it completely different? How are you going to win?’ That encouraged tremendous debate and discussion within VMware.”
The winning argument in that debate came from two executives: Raghu Raghuram, VMware’s executive vice president of cloud infrastructure and management, and Steve Herrod, its former chief technology officer. (Herrod has since left VMware to become a managing director at venture firm General Catalyst Partners.) The problem, as they saw it, was clear-cut: corporate IT departments are still too yoked to their hardware. Even the simplest change requires fiddling with servers and provisioning networks. And that was a challenge the company thought it should be able to address, given its expertise in manipulating physical resources through software.
Raghuram and Herrod described their solution as the “software-defined data center.” The idea was to virtualize everything that hadn’t already been virtualized, including networking, storage, load balancers, and firewalls. In keynote talks at several big computing conferences in 2012, Herrod talked about the company’s belief that every data center service would eventually be abstracted and pooled on standard hardware, allowing a new degree of automation.
But VMware didn’t own all the pieces it needed to realize this vision. That’s one of the reasons it went after companies like DynamicOps and Nicira. A bidding war over five-year-old Nicira, perceived as the leader in software-defined networking, drove the price up to a stunning $1.2 billion. It was, perhaps, a fair price, given that “you need to do for the entire data center—computing, networking, and storage—what VMware originally did for the computing part,” in the words of Bruce Davie, a former Cisco and Nicira employee who is now principal engineer in VMware’s networking and security business unit.
When Gelsinger took over in September 2012, he knew he wanted to narrow VMware's focus areas down to a handful; in the end, it would be just three. Maritz, Raghuram, and Herrod all counseled him to make the software-defined data center one of them.
The other two: the hybrid cloud, and end user computing, which encompasses a variety of products in the areas of virtual desktop infrastructure, PC virtualization, and workspace management on mobile devices.
The pursuit of those three projects will define VMware for years to come—and determine whether it can find a solid foothold in a market rife with cloud-computing options. “Pat has given us three priorities, and if it doesn’t fall into one of those, it is not a big deal to Pat, and thus it’s not a big deal for everyone else here,” says Adams.
Blending Private and Public Clouds
So, how does a software-defined data center actually work? Adams says the vision—and the company isn’t quite there yet—is to be able to treat each incoming computing job as if it were stamped with a barcode. Scanning this barcode would automatically configure the needed processors, storage, networking, and security, at the required service level (basically, a measure of priority and responsiveness).
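The barcode metaphor can be sketched in a few lines of code: a job arrives tagged with a service level, and a scheduler expands that tag into a full resource plan. This is a hypothetical illustration of the concept Adams describes, not VMware's actual API; all names and figures here are invented.

```python
# Hypothetical sketch of the "barcode" idea: a job descriptor that a
# private-cloud scheduler translates into concrete resource allocations.
# None of these names or numbers correspond to VMware's real products.

SERVICE_LEVELS = {
    # service level -> (vCPUs, RAM GB, storage GB, network Mbps)
    "gold":   (8, 32, 500, 1000),
    "silver": (4, 16, 250, 500),
    "bronze": (2, 8, 100, 100),
}

def provision(job_id: str, service_level: str) -> dict:
    """Expand a job's 'barcode' (its service level) into a resource plan,
    covering compute, storage, networking, and security in one step."""
    vcpus, ram_gb, storage_gb, net_mbps = SERVICE_LEVELS[service_level]
    return {
        "job": job_id,
        "vcpus": vcpus,
        "ram_gb": ram_gb,
        "storage_gb": storage_gb,
        "network_mbps": net_mbps,
        "firewall": True,  # security is configured alongside the compute
    }

plan = provision("batch-42", "silver")
print(plan["vcpus"], plan["storage_gb"])  # 4 250
```

The point of the sketch is that no human touches a server or a switch: every physical concern has been abstracted behind the service-level tag.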
VMware’s vCloud Suite, introduced in late 2012, is a combination of new and existing software products that let companies build their own private clouds. It starts with vSphere, for automatically virtualizing individual servers. Then there’s vCloud Director, which virtualizes an entire data center, and vCloud Networking and Security, which sets up a software-defined networking environment, complete with firewalls. There’s the vCenter Site Recovery Manager, for automating offsite backup and disaster recovery, and vCenter Operations Management, which handles storage, performance monitoring and optimization, analytics, metering and chargebacks, and the like.
And if that isn’t a long enough list of products for you, above everything else sits vCloud Automation Center (from the DynamicOps acquisition), the self-service portal where administrators and developers go to initiate computing jobs, and vFabric Application Director, which remembers which applications need which customized combinations of resources.
The vCloud Suite comes in three editions with increasing amounts of firepower—standard, advanced, and enterprise—and the company charges according to the number of CPUs in a customer's data center. (The standard suite license will set you back about $4,995 per CPU, plus a $1,050 yearly support subscription; the enterprise version costs $11,495 per CPU, with a $2,414 support subscription.)
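The per-CPU pricing lends itself to a quick back-of-the-envelope calculation. The figures below come from the article; the 16-CPU data-center size is an arbitrary example for illustration.

```python
# First-year cost (license + one year of support) for the vCloud Suite,
# using the per-CPU prices quoted in the article. The 16-CPU example
# is arbitrary; advanced-edition pricing isn't given, so it's omitted.
PRICING = {
    "standard":   {"license": 4995,  "support": 1050},
    "enterprise": {"license": 11495, "support": 2414},
}

def first_year_cost(edition: str, cpus: int) -> int:
    """Total first-year cost: per-CPU license plus per-CPU annual support."""
    p = PRICING[edition]
    return cpus * (p["license"] + p["support"])

# A modest 16-CPU data center:
print(first_year_cost("standard", 16))    # 16 * $6,045  = $96,720
print(first_year_cost("enterprise", 16))  # 16 * $13,909 = $222,544
```

Even at the standard tier, the per-CPU model means costs scale directly with the size of the data center being virtualized.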
“This is the first go at this,” Adams says. “We will be modifying these products over time to meet the exact definition of the software-defined data center and complete the picture. We have all the pieces; it’s more around aligning some of them, and doing more around storage and the network.”
The second of Gelsinger’s three priorities for VMware is the hybrid cloud—and in a press conference in May, Gelsinger himself unveiled the company’s offering in that area. It’s called vCloud Hybrid Service, and it’s a true public cloud—just one that’s been optimized to …