Facebook’s Parikh: Mum on Google+, Lots to Say About Infrastructure

1/11/12

First things first: Yes, Facebook engineering director Jay Parikh has some thoughts about Google’s move to boost Google Plus above other social sources in its search stream. And no, he can’t say anything about it.

Twitter has been out front in criticizing Google’s newest social-signals revamp—named Search Plus Your World—and Facebook Seattle adviser Hadi Partovi said on his personal Twitter feed that he “used to love new @google search improvements with joy and even a bit of awe. This new social-results rollout is the opposite.”

But Facebook itself, which has a separate search arrangement with investor Microsoft’s Bing search engine, hasn’t made any sort of public pronouncement about the Google changes. When politely reminded about that by a public relations guy, I prodded Parikh a bit: Surely you must have some thoughts about it?


“Possibly,” he said, with a bit of a smile. “He wiped my brain. I know nothing.”

That particular brain-wipe would probably be a big job. Parikh, who joined Facebook in 2009 from Ning, heads up the social network’s infrastructure team, which spans everything from software that keeps more than 800 million users tied together to the nuts-and-bolts datacenter designs of the Open Compute Project.

Parikh was visiting Facebook’s Seattle engineering office to give a technical talk on the company’s projects—the kind of appearance that serves as a recruiting tool for engineers that Facebook might want to woo away from Microsoft, Amazon, or Google’s own sizable Seattle-area offices.

We’ve previously heard about some of the more user-facing projects that Seattle engineers have worked on, including messenger features, the iPad app, and the integration of Skype. But Parikh notes that there’s a good contingent of infrastructure engineers in Seattle too.

“We do have a growing presence up here for our infrastructure engineering team,” Parikh says, with about four groups in Seattle that range from “a couple to a lot of engineers.”

That continued growth is the driver behind the Facebook office’s impending move to new, more spacious Seattle digs, double the size of the office near Pike Place Market, which held about 60 people the last time we heard. “We’ve been aggressively trying to build the set of projects and thus the set of engineers up here working on those projects,” Parikh says.

And perhaps counter to the Silicon Valley cliché of twentysomethings working all hours and passing out at their desks, Facebook is looking for a range of experience in its hires, Parikh says. That’s partly a necessity, since there aren’t enough new computer science graduates to fill all the technical jobs available, and an enormous number of digital startups are increasing the competition for engineers of all kinds.

“Finding folks that have years of experience—whether it be two years of experience or 20 years of experience—that is also really important for us. Because we do want to keep a good mixed culture of experienced versus fresh out of school,” Parikh says. “I think that is what yields the best result in terms of producing great quality and high-caliber systems.”

When founder and CEO Mark Zuckerberg and engineering head Mike Schroepfer made a similar trip to Seattle last year, they mentioned that one byproduct of growing to huge scale while making quick product changes is that engineers outside the mothership aren’t relegated to second-tier tasks.

“Really, it’s like, ‘Which one of these about-to-fall-over projects would you like to work on?’” Schroepfer said at the time.

Parikh pointed to the new Timeline layout, which reorganizes a user’s Facebook content into a chronological view of life events, as an example of the “insane speed” at which the company still operates, even as it has grown to thousands of employees.

Timeline was put together in about six months, a project that would have taken “two to three times longer” under normal development procedures, Parikh says. To get it done quickly, Facebook had to perform the work in overlapping layers rather than getting pieces done and handing them off to someone else.

“They had to basically put up enough of an infrastructure that the product teams could then go iterate and think about the vision, and what the user experience is going to look like. And as they needed new access to data or new APIs, the back-end teams had to sort of shim those in and come back later, migrate data, and optimize them,” he says.
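Parikh doesn’t spell out what those shimmed-in interfaces lookked like, but the pattern he describes—put a stable API in front of product teams first, then migrate data and optimize behind it later—can be sketched roughly as below. Every name here (get_life_events, LEGACY_POSTS, MIGRATED_TIMELINE, migrate_user) is a hypothetical stand-in for illustration, not a Facebook internal.

```python
# Minimal sketch of the "shim now, migrate later" pattern Parikh describes.
# All stores and functions are hypothetical, not Facebook's actual systems.

LEGACY_POSTS = {42: [{"year": 2011, "text": "moved"}, {"year": 2007, "text": "joined"}]}
MIGRATED_TIMELINE = {}  # filled in later by an offline migration pass


def get_life_events(user_id):
    """Shim API: serve from the new store if the user has been migrated,
    otherwise compute the answer on the fly from the legacy store."""
    if user_id in MIGRATED_TIMELINE:
        return MIGRATED_TIMELINE[user_id]
    # Fallback path: slower, but lets product teams build against this
    # API immediately while the back end catches up.
    return sorted(LEGACY_POSTS.get(user_id, []), key=lambda e: e["year"])


def migrate_user(user_id):
    """Later pass: move one user's data into the optimized store."""
    MIGRATED_TIMELINE[user_id] = sorted(
        LEGACY_POSTS.get(user_id, []), key=lambda e: e["year"]
    )


if __name__ == "__main__":
    print(get_life_events(42))  # served via the on-the-fly fallback
    migrate_user(42)
    print(get_life_events(42))  # now served from the migrated store
```

The point of the design is that the product-facing call signature never changes; only the path behind it gets swapped out as the migration and optimization work lands.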

Serving up all of that content as a timeline, a stream that can stretch back to a user’s birth or any other point in history, also poses a bigger engineering challenge.

“If you have years and years of content and hundreds and hundreds of objects that you have to fetch and sort through, that very easily could be a service or system that doesn’t even have enough time to render. People won’t even spend enough time waiting for the page to load, because you’re doing all of these sequential fetches of data. It just takes too long,” he says. “So we had to parallelize the data fetching, we had to use very aggressive caching mechanisms to make sure that pre-computed sections of content could be cached and read quickly and sort of spliced all together to compose the full view that you see.”
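The two techniques Parikh names—parallelizing the data fetches and caching pre-computed sections that get spliced into the final page—can be illustrated with a toy sketch like the one below. The names (fetch_section, SECTION_CACHE, render_timeline) are hypothetical, and the in-process dictionary and thread pool merely stand in for whatever distributed cache and fan-out machinery Facebook actually runs.

```python
# Toy illustration of parallel section fetching plus a section cache;
# assumed names only, not Facebook's infrastructure.
from concurrent.futures import ThreadPoolExecutor
import time

SECTION_CACHE = {}  # pre-computed timeline sections, keyed by (user, year)


def fetch_section(user_id, year):
    """Fetch one pre-computed slice of a user's timeline, caching the result."""
    key = (user_id, year)
    if key in SECTION_CACHE:
        return SECTION_CACHE[key]
    time.sleep(0.1)  # simulate a slow storage round-trip
    section = f"events for user {user_id} in {year}"
    SECTION_CACHE[key] = section
    return section


def render_timeline(user_id, years):
    """Fetch all sections in parallel, then splice them into one view."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        sections = list(pool.map(lambda y: fetch_section(user_id, y), years))
    return "\n".join(sections)


if __name__ == "__main__":
    years = range(2004, 2012)
    start = time.time()
    render_timeline(42, years)  # cold: the slow fetches overlap instead of queuing
    print(f"cold render: {time.time() - start:.2f}s")
    start = time.time()
    render_timeline(42, years)  # warm: everything comes from the section cache
    print(f"warm render: {time.time() - start:.2f}s")
```

Run it and the cold render takes roughly one round-trip’s worth of time rather than eight sequential ones, while the warm render returns almost instantly—the same effect Parikh is describing at vastly larger scale.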

Curt Woodward is a senior editor for Xconomy based in Boston. Email: cwoodward@xconomy.com
