MIT Plans to Win DARPA Robot Car Challenge
Driving in urban traffic is a stupendously tricky task demanding a constant stream of split-second, almost subconscious decisions. In fact, if you give it too much thought—Am I driving inside the lane markers? How much space should I give the car ahead of me? Who got to this intersection first? Is that old lady going to wait for the walk signal?—you’ll probably steer yourself right into an accident. Yet creating an autonomous vehicle that can handle such decisions in real time is the whole point of the DARPA Urban Challenge, the third major robot-car competition mounted by the U.S. Defense Advanced Research Projects Agency. And to meet that challenge, a team at MIT has built what amounts to a supercomputer on wheels.
On August 9, DARPA named the MIT team as one of 36 semi-finalists for the Urban Challenge, to be held October 26 through November 3 at the urban military training facility at the former George Air Force Base in Victorville, CA. The institute’s team will be the only one representing New England in the DARPA challenge, and believe it or not, it’s the first time MIT has won a berth in a DARPA robot-car event, which this time around carries a $2 million first prize.
That creates just a “little bit” of stress for Team MIT, acknowledges aeronautics and astronautics professor Jonathan How, one of four principal investigators (PIs) leading the school’s effort. “The MIT name is on the side of the car,” How pointed out last week as we peered into the team’s extensively pimped-out Land Rover LR3. “Is it possible to add more pressure? I don’t know.”
How’s specialty is developing path-planning algorithms of the type used by unmanned aerial vehicles and deep space probes to chart safe trajectories through unknown territory. And planning will be one of the big tasks preoccupying the 40 CPUs, or “cores,” bolted into the way-back of the Land Rover, which currently occupies the Aero/Astro hangar in MIT’s Building 33. With all that computing power on board—more than any other team’s vehicle will be carrying, as far as How knows—the MIT car will, in theory, be able to think its way out of dilemmas that may stymie other vehicles navigating the DARPA course (the details of which are kept secret until the day before the competition). “We have algorithms in place that are using 15 to 18 of our cores,” says How. “If you didn’t have 40 computers you couldn’t do it that way—so we have a design freedom that others may not have.”
Ostensibly, How and co-PI Seth Teller, a professor in MIT’s Department of Electrical Engineering and Computer Science (EECS), invited me over to the hangar to show off the Land Rover. (See a brief slide show of my visit here.) But we spent most of our hour together talking about its sensors and planning algorithms—which is probably a sign of how the field of autonomous-vehicle research has progressed.
Teller and How explained that thanks in part to technology developed and tested during DARPA’s previous competitions, it’s now fairly easy to equip a car for autonomous operation. Attach electric servo-motors to the gas pedal, brakes, and steering column, throw in a few dozen off-the-shelf automotive radars, laser range finders, and video cameras, and you’re done—well, if you have the expertise of several MIT departments and a team of undergraduates at Olin College of Engineering* to call on.
“Of course, that doesn’t mean it’s going to drive so well in an urban setting,” says How. That’s because the real challenge in the DARPA competition lies in building the software that will give the vehicle the ability to picture its surroundings and respond to encroaching hazards, all the while moving toward the finish line.
Vehicles participating in the DARPA challenge will need every ounce of awareness their builders can provide, given that the race itself has become much more complex since its earlier incarnations. In the 2004 and 2005 DARPA robot challenges, cars zoomed through the open Mojave desert between Los Angeles and Las Vegas. (Or rather, in 2005 they did; five vehicles crossed the finish line that year, with Stanford taking first prize, whereas the best team in the disastrous 2004 competition progressed no further than about 7.5 miles.) This time, cars will have six hours to navigate along a series of checkpoints, such as intersections, that will be handed to teams 24 hours prior to the competition.
Each robot car must plan its own route between the checkpoints and make the journey completely autonomously, using only sensor data and GPS systems to navigate. The cars must also obey all California traffic laws, such as yielding the right-of-way at four-way intersections to vehicles that arrived first. Judges will deduct points if cars stray across lane markers or fail to avoid obstacles such as curbs and traffic barrels.
“There are at least three levels of planning happening in the car,” Teller explains. “There is the long-range planning of ‘What intersections do I visit?’ Then there’s ‘What is the next few hundred meters of trajectory? What road segment should I choose to advance the mission?’ And then there’s, ‘What’s coming up in the next five or 10 meters, or the next few hundred milliseconds?’ [And] how should the gas and steering and brakes be moved so that the car meets the higher-level trajectory goals.” (See video of the MIT vehicle in action here.)
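The hierarchy Teller describes—mission-level checkpoint ordering, road-segment selection, and split-second motion commands—can be illustrated with a toy sketch. This is not the MIT team’s code; the road network, distances, and speed rule here are invented for illustration, with the mission and segment levels collapsed into a single shortest-path search.

```python
import heapq

# Hypothetical road network: segment name -> list of (neighbor, length in meters).
ROAD_GRAPH = {
    "A": [("B", 300.0), ("C", 500.0)],
    "B": [("D", 200.0)],
    "C": [("D", 150.0)],
    "D": [],
}

def plan_route(start, goal):
    """Upper levels: pick the sequence of road segments that advances
    the mission, here via a plain Dijkstra shortest-path search."""
    frontier = [(0.0, start, [start])]  # (cost so far, node, path)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        if node in visited:
            continue
        visited.add(node)
        for neighbor, length in ROAD_GRAPH[node]:
            heapq.heappush(frontier, (cost + length, neighbor, path + [neighbor]))
    return None, float("inf")

def motion_command(distance_to_obstacle_m, cruise_mps=10.0):
    """Bottom level: a crude speed choice for the next few hundred
    milliseconds -- slow as an obstacle nears, stop when it is close."""
    if distance_to_obstacle_m < 5.0:
        return 0.0
    return min(cruise_mps, distance_to_obstacle_m / 2.0)

route, meters = plan_route("A", "D")
print(route, meters)   # ['A', 'B', 'D'] 500.0
print(motion_command(8.0))  # 4.0 m/s
```

In the real vehicle, each level runs on its own timescale—the route rarely, the trajectory every few hundred meters, the actuator commands many times per second—which is one reason the workload spreads naturally across many cores.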
“It’s a lot more dynamic than previous challenges,” sums up How. It’s also, arguably, more socially relevant. As Teller notes, “If you could have safe autonomous cars, you might be able to avoid many of the 40,000 highway deaths we have every year in the United States. You could probably save a lot of fuel by having cars drive in a more smoothly coordinated fashion. And you could improve productivity by letting people read or work in the back seat while their cars drive.”
That relevance is part of the reason MIT threw its weight behind the project this year. But students from MIT’s Aero/Astro, Mechanical Engineering, and EECS departments also had a fair bit to do with it, say Teller and How. MIT didn’t send a team to the challenge at all in 2004. A student-led group raised $100,000 to build a vehicle in 2005, but the institute provided minimal additional support, and the team failed to qualify. A self-critical article in the November/December 2006 MIT Faculty Newsletter argued that “MIT can do much better” in supporting student projects. The new effort reflects that philosophy, and has also won the sponsorship of the Charles Stark Draper Laboratory, the Ford-MIT Alliance, and a number of other sponsors.
“We have had a very motivated group of graduate students who very early, in June 2006, basically banged on the table and said, ‘We really want to do this,’” Teller says. “We were game to do it, but to have them come and basically insist that we do it pushed us over the edge.”
Stanford roboticist Sebastian Thrun, who led his team to victory in the 2005 competition and will again head the Stanford entry in Victorville, says he welcomes the new competition. “I was sad to see MIT absent in past Grand Challenges,” Thrun says. “I am a big fan of the MIT team, since it has recruited world-class robotic researchers…I predict the level of technical innovation will be remarkable for this team in the 2007 race.”
Indeed, when MIT faculty enter a competition, they enter to win. At least, that’s the impression I got from inspecting the team’s Land Rover, which is bristling with sensors on the outside and stuffed with processing power on the inside. To gather information about its position, the car carries an inertial sensor to track short-term changes in direction, precise odometers to measure the amount of ground the car has covered, and a high-end GPS receiver. To see what’s around it, the car is also equipped with mid-range laser scanners or “LIDAR” (for light detection and ranging) that project a skirt of laser beams around the body, detecting objects within about 60 meters. This is supplemented by about 15 longer-range automotive radars of the same type used on luxury passenger cars for adaptive cruise control—and, to provide an extra level of awareness, a fleet of video cameras, which are mainly used to detect road edges.
All in all, the car’s sensors sweep a disk-shaped area about 150 meters in radius. “Within that disk, we have pretty good situational awareness of what we have to avoid,” says Teller. From there on, it’s a matter of finding the way between checkpoints and adapting to last-minute changes in the course, doubling back if necessary—which is all the job of How’s planning algorithms.
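A first step in building that situational awareness is simply deciding which detections matter. The following sketch—my own illustration, using the approximate radii from the article rather than anything from the team’s software—filters raw detections to the 150-meter awareness disk and flags those inside the roughly 60-meter LIDAR skirt, where range data is densest.

```python
import math

SENSOR_RADIUS_M = 150.0  # approximate outer limit of situational awareness
LIDAR_RADIUS_M = 60.0    # approximate reach of the laser "skirt"

def classify_detections(detections):
    """detections: list of (x, y) offsets in meters from the car.
    Returns tracked objects inside the sensor disk, noting whether
    each also falls within LIDAR range."""
    tracked = []
    for x, y in detections:
        r = math.hypot(x, y)
        if r > SENSOR_RADIUS_M:
            continue  # beyond the awareness disk; ignore
        tracked.append({
            "pos": (x, y),
            "range_m": r,
            "in_lidar_skirt": r <= LIDAR_RADIUS_M,
        })
    return tracked

objects = classify_detections([(30.0, 40.0), (100.0, 90.0), (200.0, 10.0)])
# First point (range 50 m) is within LIDAR reach; second (~134.5 m) is
# radar/camera-only; third (~200 m) is dropped entirely.
```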
On the final day of the competition—November 3, regardless of rain or fog—the most important task will be “establishing what you know and what you don’t know, and planning effectively through that in an uncertain world,” says How. “I think what we have here is a car that, both from a sensing perspective and a computing perspective, will achieve sufficient understanding so that we can proceed. That’s the only thing we’re thinking about right now. That, plus winning the race, of course.”