Technologies for the Blind and Deaf Could Have Much Broader Impact, Says UW’s Richard Ladner

Think about the technological tools you use most often. For many of us, cell phones and computers rank high on that list. But these devices are designed with the hearing and sighted in mind, and they are constantly evolving, so there are numerous hurdles to clear to make a phone or a computer usable by the blind or deaf.

The University of Washington’s Richard Ladner, along with students in the electrical engineering and computer science departments, is using engineering and computational tools to work on several of these hurdles—and the commercial applications could have far-ranging impact.

“When you think about a person with a disability, such as a blind person, most people think that’s a medical problem,” he said in a recent interview. “Just restoring the human function may be a solvable problem, but probably not for a long time. But maybe there’s another way to get the same thing done, to allow a person to read a book or talk to their family. So thinking non-medically, as an engineer, there are other ways to solve these problems.”

Ladner, who was born to two deaf parents, also believes that technologies developed for the blind and deaf may eventually lead to broader technological advancements—not such a far-fetched idea, as it’s happened before. Mobile GPS was originally developed as an aid for the blind, Ladner said, as was optical character recognition, a technology developed in the 1960s to turn an image of text (such as a photo of a book page) into digital text, which could then be read out loud by a speech synthesizer. Today the same technology is ubiquitous; Google uses it to digitize books.
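The pipeline Ladner describes has two stages: recognize the text in an image, then synthesize speech from it. A minimal sketch of that pipeline, with the two stages passed in as functions since the article names no particular OCR engine or speech synthesizer (in practice these might be something like Tesseract and a platform text-to-speech engine):

```python
def read_aloud(image, recognize, speak):
    """Turn an image of text into digital text, then speak it.

    `recognize` is any OCR function (image -> string); `speak` is any
    speech-synthesis function (string -> audio). Both are assumptions
    for illustration, not components named in the article.
    """
    text = recognize(image)   # stage 1: OCR, picture of a page -> digital text
    speak(text)               # stage 2: speech synthesis, text -> spoken audio
    return text
```

For a blind reader the `speak` stage is the point of the exercise; for Google’s book scanning, only the `recognize` stage is needed, which is how the same underlying technology serves both uses.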

Ladner worked on computational theory before shifting to accessible technology in 2002. He and colleague Eve Riskin, professor of electrical engineering, are now trying to take their long-running accessibility project for the deaf, MobileASL, to market. The project uses video compression technology to enable signing over video cell phones on low-bandwidth wireless networks (such as those in the U.S.). Currently, deaf people can’t reliably use video cell phones to communicate in sign language, because the video is too choppy to be intelligible. Ladner and his colleagues are working with UW TechTransfer on commercializing MobileASL.
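The choppiness problem comes down to arithmetic: on a fixed-bandwidth link, the frame rate a video call can sustain is the bit budget divided by the compressed size of each frame, so better compression buys more frames per second. A back-of-the-envelope sketch, with illustrative numbers that are assumptions rather than figures from the article:

```python
def sustainable_fps(bandwidth_bps, bits_per_frame):
    """Frames per second a link can carry at a given compressed frame size."""
    return bandwidth_bps / bits_per_frame

# Illustration: on a 30,000 bit/s cellular link, frames that compress to
# 6,000 bits each allow only 5 fps -- too choppy to follow signing --
# while squeezing each frame to 3,000 bits doubles that to 10 fps.
choppy = sustainable_fps(30_000, 6_000)   # 5.0 fps
usable = sustainable_fps(30_000, 3_000)   # 10.0 fps
```

This is why compression tuned to sign language matters: halving the bits spent per frame directly doubles the frame rate available on the same network.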

“We’re trying to get it out and get it in actual use,” he said. “It’s in high demand. I get hundreds of e-mails about it.”

Although designed with the deaf in mind, MobileASL could be used by anyone who wants better quality video phone calls, Ladner said. Bringing it to market is slightly complicated by the fact that …


Rachel Tompa is a freelance journalist based in Seattle.
