Under Seattle’s Cloud, a Big Data Cluster Grows

12/4/12


Ignition's big data portfolio already includes Splunk, Cloudera, Continuuity, and Couchbase. "And, we will be investing in more big data companies in the future," founding partner Cameron Myhrvold says.

Myhrvold says Splunk, which helps companies capture and make sense of machine-generated data in the datacenter, has about 20 people in its Seattle office, which was opened to tap engineering expertise, particularly in developer tools and application programming interfaces (APIs).

There’s a “critical cluster” forming in Seattle, says Ruben Ortega, who spent nine years at Amazon before moving to Google where he’s an engineering director. Companies small and large benefit from a pool of talented people who are “familiar with the nouns and the verbs” of big data. This growing pool of math and data-driven people realizes “that a petabyte is a relatively small amount of information when you’re computing against the exabytes,” Ortega says.

But the talent pool is only so large. Tableau has more than doubled its headcount in 2012 to 720, and as it continues hiring, has confronted “a massive, worldwide shortage of STEM talent,” Chabot says.

Chabot and Tableau’s other co-founders are Stanford grads. They saw no drawback to moving their company to the Pacific Northwest from Silicon Valley. While it was a personal decision at first, Chabot says basing Tableau in Seattle was “one of the best things for the business.”

Now, however, Tableau must look beyond the mountains, coffee, and salmon. "We have been forced, like any company before us that goes through a high-growth phase, to open offices in other locations if for no other reason than to tap into other pools of talent," Chabot says. Tableau has opened offices in Menlo Park, CA; Austin, TX; London; and Singapore.

UW helps refresh the Seattle pool.

About three quarters of the 30-person Decide team have a UW background, says Decide chief executive Mike Fridgen.

Lazowska notes that UW alumni have had a hand in two foundational big data systems: Hadoop, the open-source system for managing distributed computation across thousands of machines, and MapReduce, the Google system on which Hadoop is based.
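For readers unfamiliar with the model the article references: MapReduce, which Hadoop implements, expresses a computation as a map step that emits key-value pairs and a reduce step that combines all values sharing a key. A minimal single-machine sketch of the idea, using the classic word-count example (the function names here are illustrative, not Hadoop's actual API):

```python
from collections import defaultdict
from itertools import chain

def map_phase(line):
    # Map step: emit a (word, 1) pair for every word in the input line.
    return [(word.lower(), 1) for word in line.split()]

def shuffle(pairs):
    # Shuffle step: group all emitted values by their key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Reduce step: combine the grouped values -- here, sum the counts.
    return key, sum(values)

lines = ["big data in seattle", "big data cluster"]
pairs = chain.from_iterable(map_phase(line) for line in lines)
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts)  # {'big': 2, 'data': 2, 'in': 1, 'seattle': 1, 'cluster': 1}
```

In a real Hadoop deployment the map and reduce steps run in parallel across thousands of machines, with the framework handling the shuffle, fault tolerance, and data distribution.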

There is wide acknowledgement that Hadoop needs improvement.

Google’s Ortega says that although it’s still “the workhorse within the company,” “even we find that difficult to use.”

Chabot took Hadoop to task for being “highly specialized,” and “understood by a small priesthood of people.” “As long as the big data revolution looks like that, I can tell you it’s not going to go very far,” he says.

Myhrvold says the difficulty of Hadoop creates an opening for big data startups to help companies use it. Ignition invested in Cloudera, one of several attacking this problem. (The company was co-founded by UW computer science alum and Gig Harbor, WA, product Christophe Bisciglia.)

Myhrvold says he sees the big data story unfolding in much the same way as previous technology trends. The movement is starting with core infrastructure companies, and is being followed by big data applications providers. “There will be a phase after that that will focus on monitoring and performance management, systems management,” he says.

And, Myhrvold adds, the work is really just beginning.

“I think big data is going to give us at least a 15-year investment horizon,” Myhrvold says. “We’re probably in year three today.”

Photo by JD Hancock via Flickr.

[Note: This story was updated Dec. 4 at 8:39 p.m. PT to correct the spelling of Eric Horvitz's surname.]

Benjamin Romano is editor of Xconomy Seattle. Email him at bromano [at] xconomy.com.

