
A virtual “guide dog” for navigation

Low-power chip processes 3-D camera data, could enable wearable device to guide the visually impaired.
Image caption: A new chip could enable devices that help visually impaired users navigate their environments.

MIT researchers have developed a low-power chip for processing 3-D camera data that could help visually impaired people navigate their environments. The chip consumes only one-thousandth as much power as a conventional computer processor executing the same algorithms.

Using their chip, the researchers also built a prototype of a complete navigation system for the visually impaired. About the size of a binoculars case and similarly worn around the neck, the system uses an experimental 3-D camera from Texas Instruments. The user carries a mechanical Braille interface developed at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), which conveys information about the distance to the nearest obstacle in the direction the user is moving.

The researchers reported the new chip and the prototype navigation system in a paper presented earlier this week at the International Solid-State Circuits Conference in San Francisco.

“There was some prior work on this type of system, but the problem was that the systems were too bulky, because they require tons of different processing,” says Dongsuk Jeon, who was a postdoc at MIT’s Microsystems Technology Laboratories (MTL) when the work was done and who joined the faculty of Seoul National University in South Korea this year. “We wanted to miniaturize this system and realized that it is critical to make a very tiny chip that saves power but still provides enough computational power.”

Jeon is the first author on the new paper, and he’s joined by Anantha Chandrakasan, the Vannevar Bush Professor of Electrical Engineering and Computer Science; Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science; Priyanka Raina, a graduate student in electrical engineering and computer science; Nathan Ickes, a former research scientist at MTL who is now at Apple; and Hsueh-Cheng Wang, a postdoc at CSAIL when the work was done, who joins National Chiao Tung University in Taiwan as an assistant professor this month.

In work sponsored by the Andrea Bocelli Foundation, which was founded by the blind singer Andrea Bocelli, Rus’ group had developed an algorithm for converting 3-D camera data into useful navigation aids. The output of any 3-D camera can be interpreted as a 3-D representation called a “point cloud,” which depicts the spatial locations of individual points on the surfaces of objects. The Rus group’s algorithm clustered points together to identify flat surfaces in the scene, then measured the unobstructed walking distance in multiple directions.
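To make that concrete: a point cloud is simply an N × 3 array of (x, y, z) coordinates, and the walking-distance step reduces to finding the nearest off-floor point inside a corridor along the direction of travel. The following is a minimal Python sketch of that idea only; the function name, the floor and height thresholds, and the corridor width are illustrative assumptions, not details from the paper.

```python
import numpy as np

def free_walking_distance(points, heading,
                          floor_height=0.0, user_height=1.7,
                          corridor_width=0.6):
    """Nearest-obstacle distance along a 2-D unit heading.

    points: (N, 3) point-cloud coordinates in meters, z up.
    Points near the floor plane are ignored; anything between the
    floor and head height that falls inside a corridor of the given
    width, ahead of the user, counts as an obstacle. All thresholds
    are illustrative, not values from the paper.
    """
    heading = np.asarray(heading, dtype=float)
    heading = heading / np.linalg.norm(heading)

    # Keep only points that could actually obstruct the user.
    mask = (points[:, 2] > floor_height + 0.05) & (points[:, 2] < user_height)
    obstacles = points[mask]
    if obstacles.shape[0] == 0:
        return np.inf

    # Distance of each obstacle along the heading, and its sideways offset.
    along = obstacles[:, :2] @ heading
    lateral = np.abs(obstacles[:, :2] @ np.array([-heading[1], heading[0]]))

    ahead = (along > 0.0) & (lateral < corridor_width / 2)
    return float(along[ahead].min()) if ahead.any() else np.inf
```

Calling such a function once per heading (straight ahead, slightly left, slightly right) would yield the kind of multi-directional distances the Braille interface reports.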

For the new paper, the researchers modified this algorithm, with power conservation in mind. The standard way to identify planes in point clouds, for instance, is to pick a point at random, then look at its immediate neighbors, and determine whether any of them lie in the same plane. If one of them does, the algorithm looks at its neighbors, determining whether any of them lie in the same plane, and so on, gradually expanding the surface.
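The paper isn’t described at this level of detail, but the standard region-growing approach looks roughly like the following Python sketch, for a point cloud organized as an (H, W, 3) grid of points as a depth camera produces them. The seed neighborhood, the SVD plane fit, and the tolerance are illustrative assumptions.

```python
from collections import deque
import numpy as np

def grow_plane(grid, seed, tol=0.01):
    """Region-grow one planar patch in an organized point cloud.

    grid: (H, W, 3) array of 3-D points, one per camera pixel.
    seed: (row, col) starting pixel. Neighbors join the region while
    they lie within `tol` meters of the plane fitted around the seed.
    """
    h, w, _ = grid.shape
    r0, c0 = seed

    # Fit an initial plane (centroid + normal) to the seed's 3x3 patch.
    patch = grid[max(r0 - 1, 0):r0 + 2, max(c0 - 1, 0):c0 + 2].reshape(-1, 3)
    centroid = patch.mean(axis=0)
    normal = np.linalg.svd(patch - centroid)[2][-1]  # smallest singular vector

    visited = np.zeros((h, w), dtype=bool)
    visited[r0, c0] = True
    queue = deque([seed])
    region = []
    while queue:  # breadth-first expansion in whatever direction fits
        r, c = queue.popleft()
        region.append((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not visited[nr, nc]:
                visited[nr, nc] = True
                # Point-to-plane distance test for coplanarity.
                if abs((grid[nr, nc] - centroid) @ normal) < tol:
                    queue.append((nr, nc))
    return region
```

The queue can carry the region anywhere in the grid, which is the root of the memory problem described next.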

This is computationally efficient, but it requires frequent requests to a chip’s main memory bank. Because the algorithm doesn’t know in advance which direction it will move through the point cloud, it can’t reliably preload the data it will need into its small working-memory bank.

Fetching data from main memory, however, is the biggest energy drain in today’s chips, so the MIT researchers modified the standard algorithm. Their algorithm always begins in the upper left-hand corner of the point cloud and scans along the top row, comparing each point only to the neighbor on its left. Then it starts at the leftmost point in the next row down, comparing each point only to the neighbor on its left and to the one directly above it, and repeats this process until it has examined all the points. This enables the chip to load as many rows as will fit into its working memory, without having to go back to main memory.
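In sketch form, this raster order is essentially single-pass connected-component labeling with a coplanarity test, as in the hedged Python example below. The coplanar predicate and the union-find label merging are illustrative assumptions about one way to realize the scan, not the chip’s actual logic.

```python
import numpy as np

def raster_scan_planes(grid, coplanar):
    """Label planar regions in one top-left-to-bottom-right pass.

    grid: (H, W, 3) organized point cloud. coplanar(p, q) is an assumed
    predicate returning True when two adjacent points appear to lie on
    the same plane. Each point is compared only to its left and upper
    neighbors, so whole rows stream through a small working memory with
    no revisits to main memory.
    """
    h, w, _ = grid.shape
    labels = np.zeros((h, w), dtype=int)
    parent = [0]  # union-find forest; index 0 means "no label"

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    next_label = 1
    for r in range(h):
        for c in range(w):
            left = labels[r, c - 1] if c > 0 and coplanar(grid[r, c], grid[r, c - 1]) else 0
            up = labels[r - 1, c] if r > 0 and coplanar(grid[r, c], grid[r - 1, c]) else 0
            if left and up:
                labels[r, c] = find(left)
                parent[find(up)] = find(left)  # the two labels are one plane
            elif left or up:
                labels[r, c] = find(left or up)
            else:
                parent.append(next_label)  # a new plane starts here
                labels[r, c] = next_label
                next_label += 1
    # A second pass (not shown) would flatten each label to its root.
    return labels
```

The algorithmic content is ordinary; the point is the access pattern, which never looks back more than one row.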

This and similar tricks drastically reduced the chip’s power consumption. But the data-processing chip isn’t the component of the navigation system that consumes the most energy; the 3-D camera is. So the chip also includes a circuit that quickly and coarsely compares each new frame of data captured by the camera with the one that immediately preceded it. If little changes over successive frames, that’s a good indication that the user is still; the chip sends a signal to the camera, which can lower its frame rate, saving power.
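A software analogue of that comparison circuit might coarsely block-average each depth frame and threshold the change, as in this illustrative Python sketch; the block size and threshold are assumptions, since the article does not specify the circuit at this level.

```python
import numpy as np

def should_throttle(prev_frame, new_frame, block=16, threshold=0.02):
    """Coarse frame-to-frame change detector for camera throttling.

    Both arguments are 2-D depth frames. Each is reduced to a grid of
    block averages, and the mean absolute difference between the grids
    is compared against a threshold. True suggests the scene (and so
    the user) is essentially still, so the camera's frame rate could
    be lowered. Block size and threshold are illustrative assumptions.
    """
    def block_mean(f):
        h, w = f.shape
        f = f[:h - h % block, :w - w % block]  # crop to block multiples
        return f.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

    change = np.mean(np.abs(block_mean(new_frame) - block_mean(prev_frame)))
    return change < threshold
```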

Although the prototype navigation system is less obtrusive than its predecessors, it should be possible to miniaturize it even further. Currently, one of its biggest components is a heat dissipation device atop a second chip that converts the camera’s output into a point cloud. Adding the conversion algorithm to the data-processing chip should have a negligible effect on its power consumption but would significantly reduce the size of the system’s electronics.

"There are many problems that have been solved for visually impaired people, but a real one that is still open is navigating and walking alone in a safe way, going around in a big city,” said Bocelli, at an event in Milan during which the new system was demonstrated. “I've seen today a big step toward the solution to the problem. Together we will soon cross the great ocean. We will soon have a small device in our pocket to help us in walking in a city, going everywhere alone.”

In addition to the Andrea Bocelli Foundation, the work was cosponsored by Texas Instruments, and the prototype chips were manufactured through the Taiwan Semiconductor Manufacturing Company’s University Shuttle Program.

Press Mentions

Boston.com

Boston.com reporter Hilary Sargent writes that MIT researchers have developed a new device that could help guide the visually impaired. Sargent explains that a prototype system the researchers developed “is about the size of a binoculars case and is designed to be worn around someone’s neck.”
