MIT News - Error correction
https://news.mit.edu/topic/miterror-correction-rss.xml
MIT news feed about: Error correction
Fri, 10 Feb 2012 05:00:01 -0500

The blind codemaker
https://news.mit.edu/2012/error-correcting-codes-0210
New error-correcting codes guarantee the fastest possible rate of data transmission, even over fluctuating wireless links.
Fri, 10 Feb 2012 05:00:01 -0500
Larry Hardesty, MIT News Office

<a href="/newsoffice/2010/explained-shannon-0115.html" target="_self">Error-correcting codes</a> are one of the triumphs of the digital age. They’re a way of encoding information so that it can be transmitted across a communication channel — such as an optical fiber or a wireless connection — with perfect fidelity, even in the presence of the corrupting influences known as “noise.”<br /><br />An encoded message is called a codeword; the noisier the channel, the longer the codeword has to be to ensure perfect communication. But the longer the codeword, the longer it takes to transmit the message. So the ideal of maximally efficient, perfectly faithful communication requires precisely matching codeword length to the level of noise in the channel.<br /><br />Wireless devices, such as cellphones or Wi-Fi transmitters, regularly send out test messages to gauge noise levels, so they can adjust their codes accordingly. But as anyone who’s used a cellphone knows, reception quality can vary at locations just a few feet apart — or even at a single location. Noise measurements can rapidly become outdated, and wireless devices routinely end up using codewords that are too long, squandering bandwidth, or too short, making accurate decoding impossible.<br /><br />In the next issue of the journal <i>IEEE Transactions on Information Theory</i>, Gregory Wornell, a professor in the Department of Electrical Engineering and Computer Science at MIT, Uri Erez at Tel Aviv University in Israel and Mitchell Trott at Google describe a new coding scheme that guarantees the fastest possible delivery of data over fluctuating wireless connections without requiring prior knowledge of noise levels.
The researchers also received <a href="http://www.google.com/patents/US8023570" target="_blank">a U.S. patent</a> for the technique in September.<br /><br /><strong>Say ‘when’</strong><br /><br />The scheme works by creating one long codeword for each message, but successively longer chunks of the codeword are themselves good codewords. “The transmission strategy is that we send the first part of the codeword,” Wornell explains. “If it doesn’t succeed, we send the second part, and so on. We don’t repeat transmissions: We always send the next part rather than resending the same part again. Because when you marry the first part, which was too noisy to decode, with the second and any subsequent parts, they together constitute a new, good encoding of the message for a higher level of noise.”<br /><br />Say, for instance, that the long codeword — call it the master codeword — consists of 30,000 symbols. The first 10,000 symbols might be the ideal encoding if there’s a minimum level of noise in the channel. But if there’s more noise, the receiver might need the next 5,000 symbols as well, or the next 7,374. If there’s a lot of noise, the receiver might require almost all of the 30,000 symbols. But once it has received enough symbols to decode the underlying message, it signals the sender to stop. In the paper, the researchers prove mathematically that at that point, the length of the received codeword is the shortest possible length given the channel’s noise properties — even if they’ve been fluctuating.<br /><br />To produce their master codeword, the researchers first split the message to be sent into several — for example, three — fragments of equal length. They encode each of those fragments using existing error-correcting codes, such as <a href="/newsoffice/2010/gallager-codes-0121.html" target="_self">Gallager codes</a>, a very efficient class of codes common in wireless communication. 
Then they multiply each of the resulting codewords by a different number and add the results together. That produces the first chunk of the master codeword. Then they multiply the codewords by a different set of numbers and add those results, producing the second chunk of the master codeword, and so on.<br /><br /><strong>Tailor-made</strong><br /><br />In order to decode a message, the receiver needs to know the numbers by which the codewords were multiplied. Those numbers — along with the number of fragments into which the initial message is divided and the size of the chunks of the master codeword — depend on the expected variability of the communications channel. Wornell surmises, however, that a few standard configurations will suffice for most wireless applications.<br /><br />The only chunk of the master codeword that must be transmitted in its entirety is the first. Thereafter, the receiver could complete the decoding with only partial chunks. So the size of the initial chunk is calibrated to the highest possible channel quality that can be expected for a particular application.<br /><br />Finally, the complexity of the decoding process depends on the number of fragments into which the initial message is divided. If that number is three, which Wornell considers a good bet for most wireless links, the decoder has to decode three messages instead of one for every chunk it receives, so it will perform three times as many computations as it would with a conventional code. “In the world of digital communication, however,” Wornell says, “a fixed factor of three is not a big deal, given Moore’s Law on the growth of computation power.”<br /><br />H. Vincent Poor, the Michael Henry Strater University Professor of Electrical Engineering and dean of the School of Engineering and Applied Science at Princeton University, sees few obstacles to the commercial deployment of a coding scheme such as the one developed by Wornell and his colleagues. 
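The construction described above — splitting the message into fragments and combining them with a different coefficient set per chunk — can be sketched in miniature. The sketch below is a simplified model, not the authors’ construction: it ignores channel noise and the inner Gallager codes, treats fragments as symbol vectors over a small prime field, and uses made-up Vandermonde-style coefficient sets. In this noiseless toy the receiver needs as many chunks as there are fragments before it can solve for the message; in the real scheme each chunk is itself redundantly coded, so fewer chunks suffice when noise is low.

```python
# Toy model of the chunked master-codeword construction (illustration only).
P = 257  # small prime field for the arithmetic

def make_chunk(fragments, coeffs):
    """One chunk = coefficient-weighted sum of the encoded fragments (mod P)."""
    length = len(fragments[0])
    return [sum(c * f[i] for c, f in zip(coeffs, fragments)) % P
            for i in range(length)]

def solve_mod_p(matrix, chunks):
    """Gauss-Jordan elimination mod P: recover the fragments from the chunks."""
    k = len(matrix)
    # Augmented system: coefficient rows | received chunk values.
    a = [row[:] + chunk[:] for row, chunk in zip(matrix, chunks)]
    for col in range(k):
        pivot = next(r for r in range(col, k) if a[r][col])
        a[col], a[pivot] = a[pivot], a[col]
        inv = pow(a[col][col], P - 2, P)          # modular inverse (Fermat)
        a[col] = [(x * inv) % P for x in a[col]]
        for r in range(k):
            if r != col and a[r][col]:
                factor = a[r][col]
                a[r] = [(x - factor * y) % P for x, y in zip(a[r], a[col])]
    return [row[k:] for row in a]

# Sender: split a message into 3 fragments ("encoded" trivially here).
fragments = [[72, 105, 33], [104, 101, 121], [33, 64, 90]]
coeff_sets = [[1, 1, 1], [1, 2, 3], [1, 4, 9]]  # one hypothetical set per chunk

# Receiver keeps requesting the next chunk until it can decode,
# then signals the sender to stop.
received, used = [], []
for coeffs in coeff_sets:
    received.append(make_chunk(fragments, coeffs))
    used.append(coeffs)
    if len(received) == len(fragments):  # noiseless toy: 3 chunks suffice
        break
assert solve_mod_p(used, received) == fragments
```

Because every coefficient set is independent of the others, any sufficiently large collection of chunks forms a solvable system, which is the sense in which successively longer prefixes of the master codeword remain good codewords.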
“The codes are inherently practical,” Poor says. “In fact, the paper not only develops the theory and analysis of such codes but also provides specific examples of practical constructions.”<br /><br />Because the codes “enable efficient communication over unpredictable channels,” he adds, “they have an important role to play in future wireless-communication applications and standards for connecting mobile devices.”<br />
Graphic: Christine Daniloff
Topics: Electrical engineering and electronics, Error correction, Information theory, Wireless, Rateless codes

Perfect communication with imperfect chips
https://news.mit.edu/2011/imperfect-circuits-0804
Error-correcting codes discovered at MIT can still guarantee reliable communication, even in cellphones with failure-prone low-power chips.
Thu, 04 Aug 2011 04:00:01 -0400
Larry Hardesty, MIT News Office

One of the triumphs of the information age is the idea of error-correcting codes, which ensure that data carried by electromagnetic signals — traveling through the air, or through cables or optical fibers — can be reconstructed flawlessly at the receiving end, even when they’ve been corrupted by electrical interference or other sources of what engineers call “noise.”<br /><br />For more than 60 years, the analysis of error-correcting codes has assumed that, however corrupted a signal may be, the circuits that decode it are error-free. In the next 10 years, however, that assumption may have to change. In order to extend the battery life of portable computing devices, manufacturers may soon turn to low-power signal-processing circuits that are themselves susceptible to noise, meaning that errors sometimes creep into their computations.<br /><br />Fortunately, <a href="http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5895097" target="_blank">in the July issue</a> of <i>IEEE Transactions on Information Theory</i>, Lav Varshney PhD ’10, a research affiliate at MIT’s Research Laboratory of Electronics, demonstrates that some of the most commonly used codes in telecommunications can still ensure faithful transmission of information, even when the decoders themselves are noisy. The same analysis, which is adapted from his MIT thesis, also demonstrates that memory chips, which present the same trade-off between energy efficiency and reliability that signal-processing chips do, can preserve data indefinitely even when their circuits sometimes fail.<br /><br />According to the semiconductor industry’s 15-year projections, both memory and computational circuits “will become smaller and lower-power,” Varshney explains.
“As you make circuits smaller and lower-power, they’re subject to noise. So these effects are starting to come into play.”<br /><br /><strong>Playing the odds</strong><br /><br />The theory of error-correcting codes <a href="/newsoffice/2010/explained-shannon-0115.html" target="_self">was established</a> by Claude Shannon — who taught at MIT for 22 years — in a groundbreaking 1948 paper. Shannon envisioned a message sent through a communications medium as a sequence of bits — 0s and 1s. Noise in the channel might cause some of the bits to flip or become indeterminate. An error-correcting code would consist of additional bits tacked on to the message bits and containing information about them. If message bits became corrupted, the extra bits would help describe what their values were supposed to be.<br /><br />The longer the error-correcting code, the less efficient the transmission of information, since more total bits are required for a given number of message bits. To date, the most efficient codes known are those <a href="/newsoffice/2010/gallager-codes-0121.html" target="_self">discovered in 1960</a> by MIT professor emeritus Robert Gallager, which are called low-density parity-check codes — or sometimes, more succinctly, Gallager codes. Those are the codes that Varshney analyzed.<br /><br />The key to his new analysis, Varshney explains, was not to attempt to quantify the performance of particular codes and decoders but rather to look at the statistical properties of whole classes of them. Once he was able to show that, on average, a set of noisy decoders could guarantee faithful reconstruction of corrupted data, he was then able to identify a single decoder within that set that met the average-performance standard.<br /><br /><strong>The noisy brain</strong><br /><br />Today, updated and optimized versions of Gallager’s 1960 codes are used for error correction by many cellphone carriers. 
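Gallager’s original decoding idea, in its simplest “bit-flipping” form, repeatedly flips whichever bit participates in the most violated parity checks. The sketch below runs it on a tiny made-up parity-check code (6 bits, 4 checks), vastly smaller than the codes cellphone carriers deploy, and it omits the probabilistic message-passing refinements used in practice:

```python
# A minimal Gallager-style bit-flipping decoder on a toy parity-check code.
# The matrix H is a hypothetical example: each bit appears in exactly two
# checks, and any two checks share at most one bit.
H = [
    [1, 1, 1, 0, 0, 0],  # check 0 covers bits 0, 1, 2
    [1, 0, 0, 1, 1, 0],  # check 1 covers bits 0, 3, 4
    [0, 1, 0, 1, 0, 1],  # check 2 covers bits 1, 3, 5
    [0, 0, 1, 0, 1, 1],  # check 3 covers bits 2, 4, 5
]

def bit_flip_decode(word, H, max_iters=10):
    word = word[:]
    for _ in range(max_iters):
        # Which parity checks does the current word violate?
        unsatisfied = [i for i, row in enumerate(H)
                       if sum(r * w for r, w in zip(row, word)) % 2]
        if not unsatisfied:
            return word                      # all checks pass: done
        # Count, per bit, how many violated checks it participates in,
        # and flip the bit involved in the most of them.
        counts = [sum(H[i][j] for i in unsatisfied) for j in range(len(word))]
        word[counts.index(max(counts))] ^= 1
    return word

codeword = [1, 1, 0, 1, 0, 0]        # satisfies all four parity checks
received = codeword[:]
received[3] ^= 1                     # channel noise flips one bit
assert bit_flip_decode(received, H) == codeword
```

In a noisy-circuit setting of the kind Varshney analyzes, even the parity-check computations inside the loop would occasionally give wrong answers, which is why his average-over-ensembles analysis, rather than a per-decoder guarantee, is needed.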
Those codes would have to be slightly modified to guarantee optimal performance with noisy circuits, but “you use essentially the same decoding methodologies that Gallager did,” Varshney says. And since the codes have to correct for errors in both transmission and decoding, they would also yield lower transmission rates (or require higher-power transmitters).<br /><br />Shashi Chilappagari, an engineer at Marvell Semiconductor, which designs signal-processing chips, says that, like Gallager’s codes, the question of whether noisy circuits can correct transmission errors dates back to the 1960s, when it attracted the attention of computing pioneer John von Neumann. “This is kind of a surprising result,” Chilappagari says. “It’s not very intuitive to say that this kind of scheme can work.” Chilappagari points out, however, that like most analyses of error-correcting codes, Varshney’s draws conclusions by considering what happens as the length of the encoded messages approaches infinity. Chipmakers would be reluctant to adopt the coding scheme Varshney proposes, Chilappagari says, without “time to test it and see how it works on a given-length code.”<br /><br />While researching his thesis, Varshney noticed that the decoder for Gallager’s codes — which in fact passes data back and forth between several different decoders, gradually refining its reconstruction of the original message — has a similar structure to ensembles of neurons in the brain’s cortex. In ongoing work, he’s trying to determine whether his analysis of error-correcting codes can be adapted to characterize information processing in the brain. “It’s pretty well established that neural things are noisy — neurons fire randomly … and there’s other forms of noise as well,” Varshney says.<br /><br />
Graphic: Christine Daniloff
Topics: Information theory, Low-density parity check codes, Research Laboratory of Electronics, Coding theory, Error correction, Fault-tolerant computing