How Big Data Is Changing Everything

2/28/12

(Page 2 of 2)

fundamental architecture of storage to enable flexibility and massive scale. Storage 2.0 still runs on the same old protocols, and that means the same old bottlenecks. Think of it this way: Storage 2.0 is a used car—one with a more powerful, more efficient engine, but still a used car. And like a lot of used cars, it just can’t keep up with the traffic.

Big data demands a radically different approach. Today’s data-intensive enterprises need massively scalable, high-performance storage arrays with extremely low per-petabyte costs. Those arrays have to be connected by easy-to-manage networks running at speeds of 10 gigabits per second (Gb/s) and above. And performance bottlenecks must be eliminated through the optimal combination of flash and disk technologies. The good news is that this different approach is being created right now. It’s Storage 3.0.

Storage 3.0 is being driven by a new crop of venture-capital-backed innovators that leverage technology disruptors like solid-state flash storage, 10-gigabit Ethernet and new data storage algorithms. Flash storage greatly enhances big-data analytics, but it’s too expensive to scale in petabyte volumes. Fusion-io pioneered the use of flash to accelerate the performance of datasets that can reside in a single server. Avere, another Menlo Ventures portfolio company, takes flash a step further, putting it in the network instead of on the server. The private storage providers Violin Memory and Coraid, where Kevin is CEO, are delivering flash-powered storage arrays that can replace the biggest boxes from companies like EMC and Hitachi Data Systems.

But the Storage 3.0 story doesn’t end with new storage media and algorithms. It’s rapidly expanding to encompass “macro” innovations in the design of the entire networking and storage layers of the data center. The Ethernet storage-area network (SAN) model, for example, leverages the remarkable price-performance gains of 10-gigabit Ethernet, replacing old, excessively complex SAN designs with a fluid, massively parallel cluster of low-cost commodity hardware.

Storage 3.0 represents the intersection of enterprise storage capabilities with modern cloud architectures. This disruption in storage is a necessity if it is to keep up with big-data growth and rapid changes in the compute and networking layers. Billions of dollars in market share will be reallocated, and new tech leaders will be created. No one’s sure how it will play out, but it’s clear that the enterprise and cloud storage of the future will run on high-performance media, commodity hardware economics, scale-out architectures, virtualization and self-service automation. And it’s clear that it will run on Ethernet.

And that’s all good news. Whenever you take a fundamental IT component like storage and enable it to handle 10 or 100 times more capacity, at higher speeds and much lower costs, great things happen. Storage 3.0’s big-data innovations can deliver better medicines, smarter power grids, new quantum physics discoveries and more intelligent online services—and they can do it right now. So let’s hurry it up!

John W. Jarve is a Managing Director of Menlo Ventures. Kevin Brown is CEO of Coraid, a Menlo portfolio company.

