
Perfect communication with imperfect chips

Error-correcting codes discovered at MIT can still guarantee reliable communication, even in cellphones with failure-prone low-power chips.
Graphic: Christine Daniloff

One of the triumphs of the information age is the idea of error-correcting codes, which ensure that data carried by electromagnetic signals — traveling through the air, or through cables or optical fibers — can be reconstructed flawlessly at the receiving end, even when they’ve been corrupted by electrical interference or other sources of what engineers call “noise.”

For more than 60 years, the analysis of error-correcting codes has assumed that, however corrupted a signal may be, the circuits that decode it are error-free. In the next 10 years, however, that assumption may have to change. In order to extend the battery life of portable computing devices, manufacturers may soon turn to low-power signal-processing circuits that are themselves susceptible to noise, meaning that errors sometimes creep into their computations.

Fortunately, in the July issue of IEEE Transactions on Information Theory, Lav Varshney PhD ’10, a research affiliate at MIT’s Research Laboratory of Electronics, demonstrates that some of the most commonly used codes in telecommunications can still ensure faithful transmission of information, even when the decoders themselves are noisy. The same analysis, which is adapted from his MIT thesis, also shows that memory chips, which present the same trade-off between energy efficiency and reliability that signal-processing chips do, can preserve data indefinitely even when their circuits sometimes fail.

According to the semiconductor industry’s 15-year projections, both memory and computational circuits “will become smaller and lower-power,” Varshney explains. “As you make circuits smaller and lower-power, they’re subject to noise. So these effects are starting to come into play.”

Playing the odds

The theory of error-correcting codes was established by Claude Shannon — who taught at MIT for 22 years — in a groundbreaking 1948 paper. Shannon envisioned a message sent through a communications medium as a sequence of bits — 0s and 1s. Noise in the channel might cause some of the bits to flip or become indeterminate. An error-correcting code would consist of additional bits tacked on to the message bits and containing information about them. If message bits became corrupted, the extra bits would help describe what their values were supposed to be.
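To make Shannon’s idea concrete, consider the simplest possible scheme, a threefold repetition code. The short Python sketch below is purely illustrative and is not one of the codes discussed in this article: every message bit is transmitted three times, and the receiver takes a majority vote to undo occasional flips.

    import random

    def encode(bits):
        # Repeat every message bit three times.
        return [b for bit in bits for b in (bit, bit, bit)]

    def noisy_channel(bits, flip_prob=0.1):
        # Flip each transmitted bit independently with probability flip_prob.
        return [b ^ (random.random() < flip_prob) for b in bits]

    def decode(bits):
        # Majority vote over each group of three received bits.
        return [int(sum(bits[i:i+3]) >= 2) for i in range(0, len(bits), 3)]

    message = [1, 0, 1, 1, 0, 0, 1, 0]
    received = noisy_channel(encode(message))
    print(decode(received) == message)  # usually True: a single flip per group is corrected

The price of that protection is that three bits cross the channel for every message bit, which is exactly the efficiency question the next paragraph takes up.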

The longer the error-correcting code, the less efficient the transmission of information, since more total bits are required for a given number of message bits. To date, the most efficient codes known are those discovered in 1960 by MIT professor emeritus Robert Gallager, which are called low-density parity-check codes — or sometimes, more succinctly, Gallager codes. Those are the codes that Varshney analyzed.
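The name “low-density parity-check” refers to the structure of the code’s parity-check matrix: each check involves only a handful of bits, so the matrix is sparse. The toy Python example below uses a made-up eight-bit code, not one of Gallager’s actual constructions, simply to show how the number of checks sets the code rate, the fraction of transmitted bits that carry message rather than redundancy.

    import numpy as np

    # Hypothetical parity-check matrix H for a toy length-8 code with 4 checks.
    # Each row is one parity constraint touching only a few bits (sparse, hence "low-density").
    H = np.array([
        [1, 1, 1, 0, 1, 0, 0, 0],
        [1, 1, 0, 1, 0, 1, 0, 0],
        [1, 0, 1, 1, 0, 0, 1, 0],
        [0, 1, 1, 1, 0, 0, 0, 1],
    ])

    n = H.shape[1]          # total bits sent over the channel
    k = n - H.shape[0]      # message bits (H has full rank here)
    print(f"code rate = {k}/{n} = {k / n:.2f}")   # 0.50: half the transmitted bits are redundancy

    def satisfies_checks(word):
        # A valid codeword makes every parity check sum to zero (mod 2).
        return not np.any((H @ word) % 2)

    print(satisfies_checks(np.zeros(n, dtype=int)))  # True: the all-zero word is always a codeword

Adding checks (more rows) drives the rate down: better protection but less efficient transmission, which is the trade-off described above.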

The key to his new analysis, Varshney explains, was not to attempt to quantify the performance of particular codes and decoders but rather to look at the statistical properties of whole classes of them. Once he was able to show that, on average, a set of noisy decoders could guarantee faithful reconstruction of corrupted data, he was then able to identify a single decoder within that set that met the average-performance standard.
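The logic resembles a classic probabilistic argument: if the average error rate across an ensemble is small, at least one member of the ensemble must perform at least that well. The Monte Carlo toy below is my own construction, not Varshney’s analysis; it makes the point with an ensemble of majority-vote decoders whose output gates misfire with random, decoder-specific probabilities.

    import random

    CHANNEL_FLIP = 0.1      # probability that the channel flips a transmitted bit
    TRIALS = 20000

    def noisy_majority_decoder(gate_error):
        # Majority-vote decoder whose own output gate errs with probability gate_error.
        def decode(r1, r2, r3):
            vote = int(r1 + r2 + r3 >= 2)
            return vote ^ (random.random() < gate_error)   # internal circuit noise
        return decode

    # An ensemble of decoders, each with its own (hypothetical) level of internal noise.
    ensemble = [noisy_majority_decoder(random.uniform(0.0, 0.02)) for _ in range(50)]

    def error_rate(decode):
        errors = 0
        for _ in range(TRIALS):
            sent = random.randint(0, 1)
            received = [sent ^ (random.random() < CHANNEL_FLIP) for _ in range(3)]
            errors += decode(*received) != sent
        return errors / TRIALS

    rates = [error_rate(d) for d in ensemble]
    print(f"ensemble average error rate: {sum(rates) / len(rates):.4f}")
    print(f"best single decoder:         {min(rates):.4f}")   # never worse than the average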

The noisy brain

Today, updated and optimized versions of Gallager’s 1960 codes are used for error correction by many cellphone carriers. Those codes would have to be slightly modified to guarantee optimal performance with noisy circuits, but “you use essentially the same decoding methodologies that Gallager did,” Varshney says. And since the codes have to correct for errors in both transmission and decoding, they would also yield lower transmission rates (or require higher-power transmitters).
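A back-of-the-envelope way to see why is to pretend, as a rough simplification rather than the model in the paper, that the decoder’s internal errors act like extra bit flips added on top of the channel’s. Shannon’s formula for the capacity of a binary symmetric channel, C = 1 - H(p), then shows how the achievable rate shrinks as the effective flip probability grows; the numbers below are purely illustrative.

    from math import log2

    def binary_entropy(p):
        # H(p) in bits; defined as 0 at the endpoints.
        return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

    def bsc_capacity(p):
        # Capacity (bits per channel use) of a binary symmetric channel with flip probability p.
        return 1 - binary_entropy(p)

    channel_noise = 0.05
    decoder_noise = 0.01   # hypothetical extra flips contributed by noisy circuits

    # Two independent sources of flips combine to p1 + p2 - 2*p1*p2.
    combined = channel_noise + decoder_noise - 2 * channel_noise * decoder_noise

    print(f"noiseless decoder: rate up to {bsc_capacity(channel_noise):.3f} bits per use")
    print(f"noisy decoder:     rate up to {bsc_capacity(combined):.3f} bits per use")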

Shashi Chilappagari, an engineer at Marvell Semiconductor, which designs signal-processing chips, says that, like Gallager’s codes, the question of whether noisy circuits can correct transmission errors is decades old; it attracted the attention of computing pioneer John von Neumann in the 1950s. “This is kind of a surprising result,” Chilappagari says. “It’s not very intuitive to say that this kind of scheme can work.” Chilappagari points out, however, that like most analyses of error-correcting codes, Varshney’s draws its conclusions by considering what happens as the length of the encoded messages approaches infinity. Chipmakers would be reluctant to adopt the coding scheme Varshney proposes, Chilappagari says, without “time to test it and see how it works on a given-length code.”

While researching his thesis, Varshney noticed that the decoder for Gallager’s codes — which in fact passes data back and forth between several different decoders, gradually refining its reconstruction of the original message — has a similar structure to ensembles of neurons in the brain’s cortex. In ongoing work, he’s trying to determine whether his analysis of error-correcting codes can be adapted to characterize information processing in the brain. “It’s pretty well established that neural things are noisy — neurons fire randomly … and there’s other forms of noise as well,” Varshney says.
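That back-and-forth can be sketched with a simple bit-flipping variant of message passing, again a toy rather than the exact decoder Varshney analyzes: check nodes report which parity constraints fail, variable nodes flip the bit implicated in the most failures, and a small gate-error probability corrupts those reports to mimic a failure-prone circuit.

    import numpy as np

    # Hypothetical sparse parity-check matrix for a toy length-8 code (same one used above).
    H = np.array([
        [1, 1, 1, 0, 1, 0, 0, 0],
        [1, 1, 0, 1, 0, 1, 0, 0],
        [1, 0, 1, 1, 0, 0, 1, 0],
        [0, 1, 1, 1, 0, 0, 0, 1],
    ])

    def noisy_bit_flip_decode(received, gate_error=0.01, max_iters=20):
        word = received.copy()
        for _ in range(max_iters):
            # Check nodes: evaluate the parity checks; each result may itself be
            # corrupted with probability gate_error, modeling circuit noise.
            syndrome = (H @ word) % 2
            syndrome = syndrome ^ (np.random.random(len(syndrome)) < gate_error).astype(int)
            if not syndrome.any():
                break
            # Variable nodes: flip the bit that appears in the most failing checks.
            complaints = H.T @ syndrome
            word[np.argmax(complaints)] ^= 1
        return word

    # Example: transmit the all-zero codeword and flip one bit in the channel.
    received = np.zeros(8, dtype=int)
    received[2] ^= 1
    print(noisy_bit_flip_decode(received))  # usually recovers the all-zero codeword despite gate noise

Run it a few times: an occasional corrupted check result rarely derails the vote, which hints at why noisy components need not be fatal.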
