flesh out and optimize promising ideas that are still nascent today, rather than having to spend so much time rewriting their applications for new platforms and devices.
I’ve got three big areas in mind: artificial intelligence, cloud computing, and interface design.
As companies like IBM and Google are demonstrating, making computers smarter isn’t necessarily about making them faster. IBM’s Watson supercomputer was far from the world’s fastest (it ranked 94th at the time it won on Jeopardy!), and Google is famous for filling its data centers with custom Linux servers that have been optimized for low cost and low power consumption rather than speed. What makes these systems smart—allowing Google to fill in your search query even before you finish typing it, for example—is that they have access to huge amounts of data.
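The autocomplete example hints at how little "intelligence" the trick requires once you have the data: the system mostly needs to know which queries people have actually typed. Here's a minimal sketch of that idea in Python, using an invented query log; real systems use far larger logs and fancier ranking, but the principle is the same.

```python
# Sketch: data-driven query autocompletion. Rank candidate completions
# by how often past users typed them. The query log is invented here
# purely for illustration.
from collections import Counter

query_log = [
    "moore's law", "moore's law dead", "moore's law definition",
    "moore's law dead", "mortgage rates", "moore's law dead",
]

counts = Counter(query_log)  # query -> how many times it was seen

def complete(prefix, k=3):
    """Return the k most frequent logged queries starting with prefix."""
    matches = [(q, n) for q, n in counts.items() if q.startswith(prefix)]
    matches.sort(key=lambda qn: -qn[1])  # most frequent first
    return [q for q, _ in matches[:k]]

print(complete("moo"))  # most common "moo..." queries, frequency-ranked
```

No clever algorithm, no fast chip: just counting. The quality of the suggestions scales with the size of the log, which is exactly the point about data mattering more than speed.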
Google Now, a feature of Google’s mobile search app for Android phones and iPhones, plumbs both your personal data (e-mail, calendar, location, etc.) and Google’s Knowledge Graph database to proactively offer you information about local weather, traffic, public transit, airline flights, sports scores, and the like. Apple’s Siri assistant is just as versatile, but usually waits to be asked. Google and Apple will vastly improve these systems over the coming years as they collect more data about users’ travels, affiliations, habits, and speech patterns. (As Xconomy’s Curt Woodward reports today, Apple has opened a new office near MIT to work on exactly that.)
At the same time, virtual personal assistants will turn up in many other walks of life, from banking to customer service to games and education. These systems don’t need HAL-like intelligence to be useful—it turns out that having some contextual data about our internal lives, plus an encyclopedic knowledge of the external world, plus a bit of smarts about how we interact with that world, can take them pretty far.
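That "contextual data plus world knowledge plus a bit of smarts" recipe can be made concrete with a toy sketch. Everything below is invented for illustration (the appointment record, the traffic lookup, the 15-minute heads-up rule), but it shows the shape of a Google Now-style proactive card: cross-reference one piece of personal context with one piece of external data and apply a simple rule.

```python
# Toy sketch of a proactive-assistant card: personal context (a calendar
# appointment) + external knowledge (assumed travel times under current
# traffic) + a simple rule about when to alert the user.
from datetime import datetime, timedelta

# Personal context: the user's next calendar appointment (invented).
appointment = {
    "title": "Dentist",
    "place": "downtown",
    "start": datetime(2013, 7, 1, 14, 0),
}

# External knowledge: travel time in minutes under current traffic (invented).
travel_minutes = {"downtown": 35}

def proactive_card(now):
    """Suggest a departure time if the appointment is coming up soon."""
    drive = timedelta(minutes=travel_minutes[appointment["place"]])
    leave_by = appointment["start"] - drive
    # Rule: surface the card within 15 minutes of the departure time.
    if now >= leave_by - timedelta(minutes=15):
        return f"Leave by {leave_by:%H:%M} to reach {appointment['title']} on time."
    return None

print(proactive_card(datetime(2013, 7, 1, 13, 20)))
```

None of this is HAL-like reasoning; it's a join between two databases and an if-statement. The hard parts in practice are collecting the context and the world knowledge at scale, which is where the data-center investments come in.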
Another big area of consumer technology that will still be ripe for innovation even in the post-Moore’s Law era is cloud computing. There’s obviously some overlap in my categories, as both Google Now and Siri are cloud-based services requiring a wireless connection to the companies’ data centers. But my point is that everything is gradually moving to the cloud.
Let’s suppose that today’s smartphones and laptops have as much computing horsepower as they’re ever going to have. Things will still be okay, since so many applications—word processing, spreadsheets, even video editing—are now available as cloud services, and it’s easy to just keep adding servers to data centers. In a cloud-centric world, the limiting factor will be network bandwidth (especially wireless bandwidth), not chip speeds.
Third, there’s room for major strides in interface design. We’ve already graduated from the mouse-and-keyboard era into the touchscreen era. With their motion-sensitive devices, companies like Leap Motion and PrimeSense are bringing gesture control into the mix. Now it’s time to go back to basics and rethink how information is presented, and how we move our focus within and between computing tasks.
It looks like Apple plans to advance the ball here again with iOS 7, the new mobile operating system coming this fall. But I’m also waiting for big improvements in speech recognition—which, fortunately, is another data-driven problem—as well as 3D displays and wearable displays. (I’ll be a Google Glass skeptic until Google figures out how to make the display much more crisp and much less obtrusive.)
Is Moore’s Law really over, or is it just taking a breather? In the end, it doesn’t really matter. We should know by now that innovation proceeds in fits and starts, and that it’s always intertwined with political and economic developments.
Look at rocket technology: from the Nazis’ V-2 attacks on London to the U.S. landings on the Moon, a mere 25 years went by, with the key advances in propulsion and guidance driven by a world war, then a cold war. If progress in rocketry and space exploration slowed after Apollo, it’s largely because we stopped needing better ICBMs. (Now, of course, companies like SpaceX are trying to put orbital flight on a more rational economic footing.)
Or look at commercial aviation. Planes haven’t changed much since Boeing brought out the 747 in 1970 (another Cold War spinoff, by the way: the plane’s double-decker design was patterned after the C-5 Galaxy cargo transport). But access to jet travel has increased enormously, thanks to deregulation and greater competition. (In inflation-adjusted terms, domestic airfares have fallen by half since 1978.)
Given the right economic or political incentives, computer researchers will eventually perfect a new medium that will get the party going again, supporting another several decades of exponential advances in processor speed. Maybe it will be proteins or other molecules; maybe it will be qubits (quantum bits). But whatever it is, it won’t be ready before Moore’s Law peters out on silicon-based devices. Engineers should use the interregnum to work on better consumer-facing software, which isn’t nearly as cool as it could be, or should be.