SiCortex: High Performance Computing Without the High Electric Bills

9/23/08

The Assabet River these days rushes through Maynard, MA, without lending any of its liquid muscle to local industry. But for more than a century, the river supplied power to the Assabet Woolen Mill, a vast brick complex that, in its heyday, was the largest source of wool for U.S. military uniforms. I went to the mill two weeks ago to visit computer maker SiCortex, which is just one of numerous high-tech startups, including Monster.com and 38 Studios, that have taken over the complex, now known as Clock Tower Place. And when I saw how swiftly the Assabet flows past the old mill buildings, I was reminded that for some companies—including, increasingly, computing companies—rivers are still a prime source of power. Google, for example, spends so much money on electricity that the search giant decided to build its newest data centers near hydroelectric dams in Washington state, where electricity is cheaper.

As it turns out, SiCortex’s whole mission is to help organizations do lots of computing without having to worry so much about energy costs. The company makes massively parallel computers that contain thousands of individual processors, wired together in a way that lets them exchange data very quickly—so quickly that the processors themselves don’t have to be very fast in order for the machine as a whole to carry out trillions of operations per second. And because the processors in SiCortex’s machines run at a relatively pokey 700 megahertz, they don’t consume nearly as much power (or give off as much waste heat) as the multi-gigahertz processors hawked by the Intels of the world.
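
Here is the rough arithmetic behind that claim (my own back-of-the-envelope sketch; the operations-per-cycle figure is an assumption, not a published SiCortex spec):

```python
# Rough peak-throughput arithmetic (illustrative; ops-per-cycle is assumed).
cores = 5832          # processors in SiCortex's largest machine (see below)
clock_hz = 700e6      # each core runs at a pokey 700 MHz
for ops_per_cycle in (1, 2):
    peak = cores * clock_hz * ops_per_cycle
    print(f"{ops_per_cycle} op/cycle -> {peak / 1e12:.1f} trillion ops/sec")
# 1 op/cycle -> 4.1 trillion ops/sec
# 2 op/cycle -> 8.2 trillion ops/sec
```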

If you take power and cooling expenses into account, according to SiCortex, its machines are only one-third as costly to own and operate as equally fast Intel-based clusters. In fact, a SiCortex machine uses so little electricity that it can be powered by a small team of cyclists. The company organized just such a stunt at MIT last December, when 10 members of the MIT cyclocross team hooked stationary bikes up to generators and pumped out enough juice to run a fusion simulation. Of course, “That’s not a great way to power your computer system,” admits Matt Reilly, SiCortex’s co-founder and chief engineer. “The first thing we found out was that you have to cool the people pedaling the bikes. A really good bicyclist can sustain something like 300 watts, but normally they’re moving through the air while they do that. These guys were sweating like pigs.”

Matt Reilly, Co-Founder and Chief Engineer, SiCortex

Reilly and co-founders Jud Leonard (now CTO) and John Mucci (a board member and the longtime CEO) came up with the basic idea for SiCortex’s fast but energy-efficient hardware back in 2002. The time needed to finish a computation, Reilly explained to me, is usually determined by three factors: the time required to do arithmetic in the CPU, the time required to move data around in memory, and the time required for input/output operations (that is, getting data into and out of the CPU). For parallel computers—which most of today’s high-performance computers are—there’s also a fourth factor: the communications time, or the time needed to move data between processors.
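
As a toy illustration of that breakdown (my simplification, not a SiCortex performance model), you can think of a job’s runtime as the sum of those four pieces; shrinking any one of them only helps as much as that piece contributes:

```python
# Toy runtime model (my simplification, not SiCortex's): treat a job as the
# sum of four components. Real workloads overlap these phases, but the sum
# shows why communication time matters once CPUs and memory are fast.
def run_time(cpu, memory, io, comm):
    """Time spent on arithmetic, memory traffic, I/O, and
    processor-to-processor communication, respectively."""
    return cpu + memory + io + comm

# Hypothetical numbers: fast CPUs bottlenecked on communication versus
# slower CPUs attached to a much faster interconnect.
print(run_time(cpu=10, memory=5, io=3, comm=22))  # 40 -- fast CPUs, slow fabric
print(run_time(cpu=25, memory=5, io=3, comm=2))   # 35 -- slow CPUs, fast fabric
```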

Semiconductor manufacturers have done an amazing job of speeding up both CPUs and memory chips over the last three decades (but at a high energy cost, as already mentioned). I/O operations are still a bottleneck, though a variety of tricks exist for speeding them up. But Reilly, Leonard, and Mucci—all veterans of the famed Boston minicomputer company Digital Equipment Corporation—noted that nobody was really working on the fourth problem: reducing the travel time between processors in parallel machines. “That created an opportunity for a very small company to do very large things,” says Reilly.

In a machine with thousands of processors, you can’t simply string an Ethernet cable from each processor to every neighbor that it might need to communicate with. (Imagine how many phone lines would be coming out of your house if you needed a dedicated line to connect with every home or office you might want to dial.) To keep the number of wires manageable, a parallel machine’s “backplane” or communications mesh has to take the form of a network, where messages hop from node to node on the way to their final destination. So Reilly, Leonard, and Mucci started looking for a network topology—a scheme for wiring up the backplane—that would let them connect a large number of nodes without requiring messages to make too many hops to get through the network.
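
To put numbers on that, here is a quick back-of-the-envelope sketch (my own illustration, not a SiCortex figure) comparing full point-to-point wiring against a network where each node gets only a few links:

```python
# Back-of-the-envelope link counts (illustrative only; not SiCortex figures).
n = 5832                        # processors in SiCortex's largest machine
full_mesh = n * (n - 1) // 2    # a dedicated link between every pair of nodes
three_links = n * 3             # each node wired to just three neighbors
print(f"full mesh:  {full_mesh:,} links")    # 17,003,196 -- impractical to wire
print(f"three/node: {three_links:,} links")  # 17,496 -- messages hop node to node
```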

That, says Reilly, was when Leonard learned about a topology called a Kautz graph. First described in 1968, Kautz graphs have exactly the properties the SiCortex founders were looking for: they can have lots of nodes, but at a fairly low cost in terms of connections and hops. In fact, if you triple the number of nodes in such a graph (keeping three links per node), the number of hops required to traverse it grows by just one. In a 324-node Kautz graph where every node has just three outgoing and three incoming links, for instance, data can get from one node to any other node in five hops or fewer. Increase the number of nodes to 972, and it’s still only six hops—a lot fewer than the number required by other parallel-computing topologies the industry has tried, such as 3-D meshes and “hypercubes.”
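
For the curious, those hop counts are easy to check by brute force. The sketch below (my own illustration of the math, not SiCortex’s software) builds degree-three Kautz graphs as strings over a four-symbol alphabet with no two adjacent symbols alike, then uses breadth-first search to find the worst-case number of hops:

```python
# Sanity check (illustrative, not SiCortex code) that a degree-3 Kautz graph
# behaves as described: 324 nodes reachable in at most 5 hops, 972 in at most 6.
from collections import deque
from itertools import product

def kautz_nodes(degree, length):
    """Vertices: strings of `length` symbols over an alphabet of degree+1
    symbols, with no symbol repeated consecutively."""
    alphabet = range(degree + 1)
    return [s for s in product(alphabet, repeat=length)
            if all(a != b for a, b in zip(s, s[1:]))]

def neighbors(node, degree):
    """Out-neighbors: drop the first symbol and append any symbol that
    differs from the new last symbol (exactly `degree` outgoing links)."""
    return [node[1:] + (x,) for x in range(degree + 1) if x != node[-1]]

def diameter(degree, length):
    """Worst-case shortest path over all node pairs, by BFS from every node."""
    nodes = kautz_nodes(degree, length)
    worst = 0
    for start in nodes:
        dist = {start: 0}
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in neighbors(u, degree):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        worst = max(worst, max(dist.values()))
    return len(nodes), worst

print(diameter(3, 5))   # (324, 5)  -- 324 nodes, at most 5 hops
print(diameter(3, 6))   # (972, 6)  -- 972 nodes, at most 6 hops
```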

SiCortex SC5832 Computer

The 972-node architecture was the one that Reilly, Leonard, and Mucci decided to build. The company lined up venture backing from the likes of Polaris Venture Partners, Prism Venture Partners, Flagship Ventures, JK&B Capital, and Chevron Technology Ventures. And six years and $42 million later, the result is SiCortex’s top-of-the-line machine, the SC5832. About the size of two refrigerators side by side, the computer contains 972 custom chips, each configured with a switch, six processors, and a communications engine that addresses messages going across the backplane. (6 x 972 = 5,832 processors in all, hence the name.)

Reilly says the machine has found adherents so far among two types of organizations—universities and defense intelligence agencies, both of which suffer from power and space constraints. “We have a federal customer who confesses that he is out of power,” says Reilly. “Every time he buys a new computer, an old one has to leave the room. For them, it’s all about solving more problems within the power budget they’ve got. We are talking with other people where the imperative is to fit the system into a given physical volume with limited cooling capacity. In an airplane, for example, you have more than enough electricity, because Pratt & Whitney provides this wonderful generator hanging off of each wing, but you can’t put a Winnebago-sized air conditioner on top of the plane to add more cooling.”

Ironically, making its machines run cooler and more efficiently wasn’t SiCortex’s original goal; speed was. Reilly says the founders knew from the beginning that they’d have to use relatively slow processors, since the machine was going to contain thousands of them, and the machine as a whole could only draw so much power from the wall socket. But the speedy backplane meant that the processors could spend more of their time computing and less waiting around for data. Other tricks also helped with efficiency: for example, the engineers tweaked the machine’s compilers and its Linux-based operating system to dispense with “speculative execution,” an often wasteful process in which a processor executes instructions and fetches data ahead of time based on predictions that they will be needed.

“It’s the absolute antithesis of everything I spent the previous 20 years of my career working on, but the simplicity is what led to the power efficiency,” says Reilly.

Novell veteran Christopher Stone was brought in as CEO this summer to ramp up SiCortex’s sales and marketing effort. He argues that the number of technical-computing customers running up against power limitations is growing. “More and more of our educational customers have power consumption as an issue—they’re being told they are going to be capped,” Stone says. And in an announcement last week, SiCortex said that recent tweaks have made its newest computer models twice as efficient as previous ones—at least when the expense of power, cooling, staffing, and space is taken into account. “You have to include those in your total-cost-of-ownership measurement,” says Stone. “Once you do that, you start to see phenomenal returns.”

Google probably won’t come calling any time soon: SiCortex’s massively parallel systems aren’t tailored for data-center operations like searching, indexing, and serving Web pages. But with SiCortex’s help, many a customer may be able to keep its computer facility going—even without a river running through it.

Wade Roush is a contributing editor at Xconomy.


  • dave pierson

    Past hi tech occupants of what is now Clock Tower Place had the waterpower on line….