MIT News - Fourier transforms
http://news.mit.edu/topic/mitfourier-transforms-rss.xml
MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

Wed, 11 Dec 2013 05:00:00 -0500

Leaner Fourier transforms
http://news.mit.edu/2013/leaner-fourier-transforms-1211
New algorithm can separate signals into their individual frequencies using a minimal number of samples.

Wed, 11 Dec 2013 05:00:00 -0500
Helen Knight, MIT News correspondent
http://news.mit.edu/2013/leaner-fourier-transforms-1211

The fast Fourier transform, one of the most important algorithms of the 20th century, revolutionized signal processing. The algorithm allowed computers to quickly perform Fourier transforms — fundamental operations that separate signals into their individual frequencies — leading to developments in audio and video engineering and digital data compression.<br /><br />But ever since its development in the 1960s, computer scientists have been searching for an algorithm to better it.<br /><br />Last year MIT researchers Piotr Indyk and Dina Katabi did just that, <a href="/newsoffice/2012/faster-fourier-transforms-0118.html" target="_self">unveiling an algorithm</a> that in some circumstances can perform Fourier transforms hundreds of times more quickly than the fast Fourier transform (FFT). <br /><br />Now Indyk, a professor of computer science and engineering and a member of the Theory of Computation Group within the Computer Science and Artificial Intelligence Laboratory (CSAIL), and his team have gone a step further, significantly reducing the number of samples that must be taken from a given signal in order to perform a Fourier transform operation. <br /><br /><strong>Close to theoretical minimum</strong><br /><br />In a paper to be presented at the ACM-SIAM Symposium on Discrete Algorithms in January, Indyk, postdoc Michael Kapralov, and former student Eric Price will reveal an algorithm that can perform Fourier transforms using close to the theoretical minimum number of samples.
They have gone further still, developing an algorithm that uses the minimum possible number of signal samples.<br /><br />This could significantly reduce the time it takes medical devices such as magnetic resonance imaging (MRI) and nuclear magnetic resonance (NMR) machines to scan patients, or allow astronomers to take more detailed images of the universe, Indyk says.<br /><br />The Fourier transform is a fundamental mathematical notion that allows signals to be broken down into their component parts. When you listen to someone speak, for example, you can hear a dominant tone, which is the principal frequency in their voice. “But there are many other underlying frequencies, which is why the human voice is not a single tone, it’s much richer than that,” Indyk says. “So in order to understand what the spectrum looks like, we need to decompose the sounds into their basic frequencies, and that is exactly what the Fourier transform does.”<br /><br />The development of the FFT automated this process for the first time, allowing computers to rapidly manipulate and compress digital signals into a more manageable form. This is possible because not all of the frequencies within a digital signal are equal. Indeed, in nature many signals contain just a few dominant frequencies and a number of far less important ones, which can be safely disregarded. These are known as sparse signals. <br /><br />“In real life, often when you look at a signal, there are only a small number of frequencies that dominate the spectrum,” Indyk says. “So we can compress [the signal] by keeping only the top 10 percent of these.”<br /><br />Indyk and Katabi’s previous work focused on the length of time their algorithm needed to perform a sparse Fourier transform operation.
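The "keep only the top 10 percent" idea can be sketched in a few lines of NumPy. This is only an illustration of frequency-domain compression using the ordinary full FFT; the researchers' sparse Fourier transform algorithms avoid computing all of the coefficients in the first place, and the tones below are made up:

```python
import numpy as np

# A signal dominated by a handful of frequencies, plus a little noise,
# like the sparse signals described above. The two tones are invented.
n = 1024
t = np.arange(n) / n
signal = (2.0 * np.sin(2 * np.pi * 50 * t)
          + 1.0 * np.sin(2 * np.pi * 120 * t)
          + 0.01 * np.random.default_rng(1).standard_normal(n))

spectrum = np.fft.fft(signal)

# "Keep only the top 10 percent": zero every coefficient whose
# magnitude falls below the 10th-percentile-from-the-top, then invert.
k = n // 10
threshold = np.sort(np.abs(spectrum))[-k]
compressed = np.where(np.abs(spectrum) >= threshold, spectrum, 0)
recovered = np.fft.ifft(compressed).real

# Because the spectrum is sparse, the relative error is tiny.
error = np.linalg.norm(recovered - signal) / np.linalg.norm(signal)
```

Discarding 90 percent of the coefficients barely changes the reconstructed signal here, which is exactly the property that makes sparse signals compressible.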
However, in many applications, the number of samples the algorithm must take of the signal can be as important as its running time.<br /><br /><strong>Applications in medical imaging, astronomy</strong><br /><br />One such example is in MRI scanning, Indyk says. “The device acquires Fourier samples, basically snapshots of the body lying inside the machine, which it uses to recover the inner structure of the body,” he says. “In this situation, the number of samples taken is directly proportionate to the amount of time that the patient has to spend in the machine.”<br /><br />So an algorithm that allows the MRI scanner to produce an image of the body using a fraction of the samples needed by existing devices could significantly reduce the time patients must spend lying still inside the narrow, noisy machines.<br /><br />The team is also investigating the idea of using the new sparse Fourier transform algorithm in astronomy. They are working with researchers at the MIT Haystack Observatory, who specialize in radio astronomy, to use the system in interferometry, in which signals from an array of telescopes are combined to produce a single, high-resolution image of space. Applying the sparse Fourier transform algorithm to the telescope signals would reduce the number of observations needed to produce an image of the same quality, Indyk says.
<br /><br />“That’s important,” he says, “because these are really massive data sets, and to make matters worse, much of this data is distributed because there are several different, separated telescopes, and each of them acquires some of the information, and then it all has to be sent to the same place to be processed.” <br /><br />What’s more, radio telescopes are extremely expensive to build, so an algorithm that allows astronomers to use fewer of them, or to obtain better quality images from the same number of sensors, could be extremely important, he says.<br /><br />Martin Strauss, a professor of mathematics, electrical engineering, and computer science at the University of Michigan, who develops fundamental algorithms for applications such as signal processing and massive data sets, says work by Indyk and others makes sparse Fourier transform algorithms advantageous over the celebrated FFT on a larger class of problems than before. “The current paper squeezes out nearly all [of the performance] that is possible with these methods,” he says.

Image: Christine Daniloff/MIT

Algorithms, Computer Science and Artificial Intelligence Laboratory (CSAIL), Electrical Engineering & Computer Science (eecs), Faculty, Fourier transforms, Research

Explained: Matrices
http://news.mit.edu/2013/explained-matrices-1206
Concepts familiar from grade-school algebra have broad ramifications in computer science.

Fri, 06 Dec 2013 05:00:00 -0500
Larry Hardesty, MIT News Office
http://news.mit.edu/2013/explained-matrices-1206

Among the most common tools in electrical engineering and computer science are rectangular grids of numbers known as matrices. The numbers in a matrix can represent data, and they can also represent mathematical equations. In many time-sensitive engineering applications, multiplying matrices can give quick but good approximations of much more complicated calculations.<br /><br />Matrices arose originally as a way to describe systems of linear equations, a type of problem familiar to anyone who took grade-school algebra. “<a href="/newsoffice/2010/explained-linear-0226.html" target="_self">Linear</a>” just means that the variables in the equations don’t have any exponents, so their graphs will always be straight lines.<br /><br />The equation x - 2y = 0, for instance, has an infinite number of solutions for both x and y, which can be depicted as a straight line that passes through the points (0,0), (2,1), (4,2), and so on. But if you combine it with the equation x - y = 1, then there’s only one solution: x = 2 and y = 1. The point (2,1) is also where the graphs of the two equations intersect.<br /><br />The matrix that depicts those two equations would be a two-by-two grid of numbers: The top row would be [1 -2], and the bottom row would be [1 -1], to correspond to the coefficients of the variables in the two equations.<br /><br />In a range of applications from image processing to genetic analysis, computers are often called upon to solve systems of linear equations — usually with many more than two variables. Even more frequently, they’re called upon to multiply matrices.<br /><br />Matrix multiplication can be thought of as solving linear equations for particular variables.
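The two-equation system above is exactly what a linear solver computes. A quick NumPy check, using the coefficient rows from the text:

```python
import numpy as np

# The coefficient matrix from the two equations above: row [1, -2]
# for x - 2y = 0 and row [1, -1] for x - y = 1.
A = np.array([[1.0, -2.0],
              [1.0, -1.0]])
b = np.array([0.0, 1.0])

# The unique solution is the point where the two lines intersect.
solution = np.linalg.solve(A, b)
```

The solver returns x = 2, y = 1, the point (2,1) named in the text.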
Suppose, for instance, that the expressions t + 2p + 3h; 4t + 5p + 6h; and 7t + 8p + 9h describe three different mathematical operations involving temperature, pressure, and humidity measurements. They could be represented as a matrix with three rows: [1 2 3], [4 5 6], and [7 8 9].<br /><br />Now suppose that, at two different times, you take temperature, pressure, and humidity readings outside your home. Those readings could be represented as a matrix as well, with the first set of readings in one column and the second in the other. Multiplying these matrices together means matching up rows from the first matrix — the one describing the equations — and columns from the second — the one representing the measurements — multiplying the corresponding terms, adding them all up, and entering the results in a new matrix. The numbers in the final matrix might, for instance, predict the trajectory of a low-pressure system.<br /><br />Of course, reducing the complex dynamics of weather-system models to a system of linear equations is itself a difficult task. But that points to one of the reasons that matrices are so common in computer science: They allow computers to, in effect, do a lot of the computational heavy lifting in advance. Creating a matrix that yields useful computational results may be difficult, but performing matrix multiplication generally isn’t.<br /><br />One of the areas of computer science in which matrix multiplication is particularly useful is graphics, since a digital image is basically a matrix to begin with: The rows and columns of the matrix correspond to rows and columns of pixels, and the numerical entries correspond to the pixels’ color values. 
Decoding digital video, for instance, requires matrix multiplication; earlier this year, MIT researchers were able to build one of the <a href="/newsoffice/2013/mit-researchers-build-quad-hd-tv-chip-0220.html" target="_self">first chips </a>to implement the new high-efficiency video-coding standard for ultrahigh-definition TVs, in part because of patterns they discerned in the matrices it employs. <br /><br />In the same way that matrix multiplication can help process digital video, it can help process digital sound. A digital audio signal is basically a sequence of numbers, representing the variation over time of the air pressure of an acoustic audio signal. Many techniques for filtering or compressing digital audio signals, such as the <a href="/newsoffice/2012/faster-fourier-transforms-0118.html" target="_self">Fourier transform</a>, rely on matrix multiplication.<br /><br />Another reason that matrices are so useful in computer science is that <a href="/newsoffice/2012/explained-graphs-computer-science-1217.html" target="_self">graphs</a> are. In this context, a graph is a mathematical construct consisting of nodes, usually depicted as circles, and edges, usually depicted as lines between them. Network diagrams and family trees are familiar examples of graphs, but in computer science they’re used to represent everything from <a href="/newsoffice/2012/making-web-applications-more-efficient-0831.html" target="_self">operations performed</a> during the execution of a computer program to the relationships characteristic of <a href="/newsoffice/2013/algorithm-extends-artificial-intelligence-technique-1114.html" target="_self">logistics problems</a>.<br /><br />Every graph can be represented as a matrix, however, where each column and each row represents a node, and the value at their intersection represents the strength of the connection between them (which might frequently be zero). 
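The graph-as-matrix correspondence described above is easy to make concrete. A small, made-up weighted graph as an adjacency matrix:

```python
import numpy as np

# Rows and columns are nodes; entry (i, j) is the strength of the
# connection between them, zero where there is no edge. The graph
# itself is invented for illustration.
nodes = ["A", "B", "C", "D"]
adj = np.array([[0, 1, 0, 0],
                [1, 0, 2, 1],
                [0, 2, 0, 0],
                [0, 1, 0, 0]], dtype=float)

# One payoff of the matrix view: matrix multiplication answers graph
# questions. Entry (i, j) of adj @ adj sums the weighted walks of
# length two from node i to node j.
two_step = adj @ adj
```

Row 0 of `two_step` gives the two-step connection strengths starting from node A, computed with nothing more than the matrix multiplication described earlier.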
Often, the most efficient way to analyze graphs is to convert them to matrices first, and the solutions to problems involving graphs are frequently solutions to systems of linear equations.

Image: A matrix multiplication diagram.

Linear algebra, Matrices, Fourier transforms

Toward practical compressed sensing
http://news.mit.edu/2013/toward-practical-compressed-sensing-0201
Researchers show how the vagaries of real-world circuitry affect the performance of a promising new technique in signal processing and imaging.

Fri, 01 Feb 2013 05:00:03 -0500
Larry Hardesty, MIT News Office
http://news.mit.edu/2013/toward-practical-compressed-sensing-0201

The last 10 years have seen a flurry of research on an emerging technology called compressed sensing. Compressed sensing does something that seems miraculous: It extracts more information from a signal than the signal would appear to contain. One of the most celebrated demonstrations of the technology came in 2006, when Rice University researchers produced images with a resolution of tens of thousands of pixels using a camera whose sensor had only one pixel.<br /><br />Compressed sensing promises dramatic reductions in the cost and power consumption of a wide range of imaging and signal-processing applications. But it’s been slow to catch on commercially, in part because of a general skepticism that sophisticated math ever works as well in practice as it does in theory. Researchers at MIT’s Research Laboratory of Electronics (RLE) hope to change that, with a new mathematical framework for evaluating compressed-sensing schemes that factors in the real-world performance of hardware components.<br /><br />“The people who are working on the theory side make some assumptions that circuits are ideal, when in reality, they are not,” says Omid Abari, a doctoral student in the Department of Electrical Engineering and Computer Science (EECS) who led the new work. “On the other hand, it’s very costly to build a circuit, in terms of time and also money. So this work is a bridge between these two worlds. Theory people could improve algorithms by considering circuit nonidealities, and the people who are building a chip could use this framework and methodology to evaluate the performance of those algorithms or systems.
And if they see their potential, they can build a circuit.”<br /><br /><strong>Mixed reviews</strong><br /><br />In a series of recent papers, four members of associate professor Vladimir Stojanovic’s Integrated Systems Group at RLE — Abari, Stojanovic, postdoc Fabian Lim and recent graduate Fred Chen — applied their methodology to two applications where compressed sensing appeared to promise significant power savings. The first was spectrum sensing, in which wireless devices would scan the airwaves to detect unused frequencies that they could use to increase their data rates. The second was the transmission of data from wireless sensors — such as electrocardiogram (EKG) leads — to wired base stations.<br /><br />At last year’s International Conference on Acoustics, Speech, and Signal Processing, the researchers showed that, alas, in spectrum detection, compressed sensing can provide only a relatively coarse-grained picture of spectrum allocation; even then, the power savings are fairly meager.<br /><br />But in other work, they argue that encoding data from wireless sensors may be a more natural application of the technique. In a forthcoming paper in the journal <i>IEEE Transactions on Circuits and Systems</i>, they show that, indeed, in the case of EKG monitoring, it can provide a 90 percent reduction in the power consumed by battery-powered wireless leads.<br /><br />The reason the Rice camera could get away with a single-pixel sensor is that, before striking the sensor, incoming light — the optical signal — bounced off an array of micromirrors, some of which were tilted to reflect the signal and some of which weren’t. The pattern of “on” and “off” mirrors was random and changed hundreds or even thousands of times, and the sensor measured the corresponding changes in total light intensity. 
Software could then use information about the sequence of patterns to reconstruct the original signal.<br /><br /><strong>Ups and downs</strong><br /><br />The applications the RLE researchers investigated do something similar, but rather than using mirrors to modify a signal, they use another signal, one that alternates between two values — high and low — in a random pattern. In the case of spectrum sensing, the frequency of the input signal is so high that mixing it with the second signal eats up much of the power savings that compressed sensing affords. <br /><br />Moreover, the time intervals during which the second signal is high or low should be of precisely equal duration, and the transition from high to low, or vice versa, should be instantaneous. In practice, neither is true, and the result is the steady accumulation of tiny errors that, in aggregate, diminish the precision with which occupied frequencies can be identified.<br /><br />An EKG signal, however, is mostly silence, punctuated by spikes every second or so, when the heart contracts. As a consequence, the circuitry that mixes it with the second signal can operate at a much lower frequency, so it consumes less power.<br /><br />Abari, however, says he hasn’t given up on applying compressed sensing to spectrum sensing. A new algorithm called the sparse Fast Fourier Transform, <a href="/newsoffice/2012/faster-fourier-transforms-0118.html" target="_self">developed at MIT</a>, would modify the signal in the spectrum-sensing application in a way that offsets both the loss of resolution and the increase in power consumption. 
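The reconstruction step that compressed sensing relies on can be sketched generically. This is not the RLE group's method or the sparse FFT; it is a textbook illustration using a random high/low measurement pattern, like the one described above, and orthogonal matching pursuit, one standard recovery algorithm. All sizes and values are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 100, 4          # signal length, measurements, sparsity

# A k-sparse signal: mostly zero with a few spikes, loosely like the
# EKG trace described above.
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

# Random +/-1 measurement pattern: each of the m measurements mixes
# the whole signal with a random high/low sequence.
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
y = Phi @ x                    # m measurements, far fewer than n samples

# Orthogonal matching pursuit: greedily pick the column most
# correlated with the residual, then re-fit on the chosen support.
residual, chosen = y.copy(), []
for _ in range(k):
    chosen.append(int(np.argmax(np.abs(Phi.T @ residual))))
    coef, *_ = np.linalg.lstsq(Phi[:, chosen], y, rcond=None)
    residual = y - Phi[:, chosen] @ coef

x_hat = np.zeros(n)
x_hat[chosen] = coef
```

With far fewer measurements than samples, the sparse signal is still recovered exactly, which is the "miraculous" property the article describes; real hardware nonidealities, the subject of the RLE work, degrade this in ways the idealized sketch ignores.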
Abari is currently working with EECS professor Dina Katabi, one of the new algorithm’s inventors, to build a chip that implements that algorithm and could be integrated into future compressed-sensing systems.

Compressed sensing, Computer science and technology, Research, Research Laboratory of Electronics, Graduate, postdoctoral, Electrical Engineering & Computer Science (eecs), Electrical engineering and electronics, Fourier transforms, Electrocardiogram monitoring, Spectrum sensing, Algorithms

Faster Fourier transform named one of world’s most important emerging technologies
http://news.mit.edu/2012/faster-fourier-transform-named-one-of-worlds-most-important-emerging-technologies
Mon, 07 May 2012 17:01:20 -0400
CSAIL
http://news.mit.edu/2012/faster-fourier-transform-named-one-of-worlds-most-important-emerging-technologies

Earlier this year, Professors and CSAIL Principal Investigators Piotr Indyk and Dina Katabi, along with CSAIL graduate students Haitham Hassanieh and Eric Price, announced that they had improved upon the fast Fourier transform, the standard algorithm for processing streams of data. Their new algorithm, called the sparse Fourier transform (SFT), has been named to <i>MIT Technology Review</i>’s 2012 list of the world’s 10 most important emerging technologies.<br /><br />With the SFT algorithm, streams of data can be processed 10 to 100 times faster than was possible before, allowing for a speedier and more efficient digital world.<br /><br />“We selected the sparse Fourier transform developed by Dina Katabi, Haitham Hassanieh, Piotr Indyk and Eric Price as one of the 10 most important technology milestones of the past year because we expect it to have a significant impact,” said Brian Bergstein, <i>Technology Review</i>’s deputy editor. “By decreasing the amount of computation required to process information, this algorithm should make our devices and networks more powerful.”<br /><br />Each year, the editors of <i>MIT Technology Review</i> select the 10 emerging technologies with the greatest potential to transform our world. These innovations promise fundamental shifts in areas including energy, health care, computing and communications. The SFT is featured in the May/June edition of <i>Technology Review</i> and is posted on the web at <a href="http://www.technologyreview.com/tr10/" target="_blank">http://www.technologyreview.com/tr10/</a>.<br />

Awards, honors and fellowships, Computer Science and Artificial Intelligence Laboratory (CSAIL), Computer science and technology, Fourier transforms, Research

The faster-than-fast Fourier transform
http://news.mit.edu/2012/faster-fourier-transforms-0118
For a large range of practically useful cases, MIT researchers find a way to increase the speed of one of the most important algorithms in the information sciences.

Wed, 18 Jan 2012 05:00:00 -0500
Larry Hardesty, MIT News Office
http://news.mit.edu/2012/faster-fourier-transforms-0118

The <a href="/newsoffice/2009/explained-fourier.html" target="_self">Fourier transform</a> is one of the most fundamental concepts in the information sciences. It’s a method for representing an irregular signal — such as the voltage fluctuations in the wire that connects an MP3 player to a loudspeaker — as a combination of pure frequencies. It’s universal in signal processing, but it can also be used to compress image and audio files, solve differential equations and price stock options, among other things.<br /><br />The reason the Fourier transform is so prevalent is an algorithm called the fast Fourier transform (FFT), devised in the mid-1960s, which made it practical to calculate Fourier transforms on the fly. Ever since the FFT was proposed, however, people have wondered whether an even faster algorithm could be found.<br /><br />
At the Symposium on Discrete Algorithms (SODA) this week, a group of MIT researchers will present a new algorithm that, in a large range of practically important cases, improves on the fast Fourier transform. Under some circumstances, the improvement can be dramatic — a tenfold increase in speed. The new algorithm could be particularly useful for image compression, enabling, say, smartphones to wirelessly transmit large video files without draining their batteries or consuming their monthly bandwidth allotments.<br /><br />Like the FFT, the new algorithm works on digital signals. A digital signal is just a series of numbers — discrete samples of an analog signal, such as the sound of a musical instrument. The FFT takes a digital signal containing a certain number of samples and expresses it as the weighted sum of an equivalent number of frequencies.<br /><br />“Weighted” means that some of those frequencies count more toward the total than others. Indeed, many of the frequencies may have such low weights that they can be safely disregarded. That’s why the Fourier transform is useful for compression. An eight-by-eight block of pixels can be thought of as a 64-sample signal, and thus as the sum of 64 different frequencies. But as the researchers point out in their new paper, empirical studies show that on average, 57 of those frequencies can be discarded with minimal loss of image quality.<br /><br /><strong>Heavyweight division</strong><br /><br />Signals whose Fourier transforms include a relatively small number of heavily weighted frequencies are called “sparse.” The new algorithm determines the weights of a signal’s most heavily weighted frequencies; the sparser the signal, the greater the speedup the algorithm provides. 
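The eight-by-eight block example can be tried directly. This sketch uses the ordinary full 2-D FFT on a synthetic block (the pixel values are invented; real image codecs use a related transform, the DCT), keeping 7 of the 64 frequencies and discarding the other 57 as the empirical studies suggest:

```python
import numpy as np

rng = np.random.default_rng(2)
i, j = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")

# A synthetic 8-by-8 pixel block: a flat level plus two gentle cosine
# ripples and a little noise, so it is sparse in the frequency domain.
block = (100 + 8 * np.cos(2 * np.pi * i / 8)
         + 4 * np.cos(2 * np.pi * j / 8)
         + 0.1 * rng.standard_normal((8, 8)))

coeffs = np.fft.fft2(block).ravel()

# Keep only the 7 heaviest of the 64 frequencies; discard the other 57.
kept = np.zeros_like(coeffs)
top = np.argsort(np.abs(coeffs))[-7:]
kept[top] = coeffs[top]
approx = np.fft.ifft2(kept.reshape(8, 8)).real

error = np.linalg.norm(approx - block) / np.linalg.norm(block)
```

For a block this sparse the reconstruction error is a fraction of a percent, which is why discarding most of the frequencies costs so little image quality.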
Indeed, if the signal is sparse enough, the algorithm can simply sample it randomly rather than reading it in its entirety.<br /><br />“In nature, most of the normal signals are sparse,” says Dina Katabi, one of the developers of the new algorithm. Consider, for instance, a recording of a piece of chamber music: The composite signal consists of only a few instruments each playing only one note at a time. A recording, on the other hand, of all possible instruments each playing all possible notes at once wouldn’t be sparse — but neither would it be a signal that anyone cares about.<br /><br />The new algorithm — which associate professor Katabi and professor Piotr Indyk, both of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), developed together with their students Eric Price and Haitham Hassanieh — relies on two key ideas. The first is to divide a signal into narrower slices of bandwidth, sized so that a slice will generally contain only one frequency with a heavy weight. <br /><br />In signal processing, the basic tool for isolating particular frequencies is a filter. But filters tend to have blurry boundaries: One range of frequencies will pass through the filter more or less intact; frequencies just outside that range will be somewhat attenuated; frequencies outside that range will be attenuated still more; and so on, until you reach the frequencies that are filtered out almost perfectly.<br /><br />If it so happens that the one frequency with a heavy weight is at the edge of the filter, however, it could end up so attenuated that it can’t be identified. 
So the researchers’ first contribution was to find a computationally efficient way to combine filters so that they overlap, ensuring that no frequencies inside the target range will be unduly attenuated, but that the boundaries between slices of spectrum are still fairly sharp.<br /><br /><strong>Zeroing in</strong><br /><br />Once they’ve isolated a slice of spectrum, however, the researchers still have to identify the most heavily weighted frequency in that slice. In the SODA paper, they do this by repeatedly cutting the slice of spectrum into smaller pieces and keeping only those in which most of the signal power is concentrated. But in an <a href="http://arxiv.org/abs/1201.2501v1" target="_blank">as-yet-unpublished paper</a>, they describe a much more efficient technique, which borrows a signal-processing strategy from 4G cellular networks. Frequencies are generally represented as up-and-down squiggles, but they can also be thought of as oscillations; by sampling the same slice of bandwidth at different times, the researchers can determine where the dominant frequency is in its oscillatory cycle.<br /><br />Two University of Michigan researchers — Anna Gilbert, a professor of mathematics, and Martin Strauss, an associate professor of mathematics and of electrical engineering and computer science — had previously proposed an algorithm that improved on the FFT for very sparse signals. “Some of the previous work, including my own with Anna Gilbert and so on, would improve upon the fast Fourier transform algorithm, but only if the sparsity k” — the number of heavily weighted frequencies — “was considerably smaller than the input size n,” Strauss says. The MIT researchers’ algorithm, however, “greatly expands the number of circumstances where one can beat the traditional FFT,” Strauss says.
“Even if that number k is starting to get close to n — to all of them being important — this algorithm still gives some improvement over FFT.”<br />

Graphic: Christine Daniloff

Compression, Computer Science and Artificial Intelligence Laboratory (CSAIL), Computer science and technology, Electrical engineering and electronics, Fourier transforms, Signal processing

Unraveling the Matrix
http://news.mit.edu/2010/faster-fourier-0729
A new way of analyzing grids of numbers known as matrices could improve signal-processing applications and data-compression schemes.

Thu, 29 Jul 2010 04:00:00 -0400
Larry Hardesty, MIT News Office
http://news.mit.edu/2010/faster-fourier-0729

Among the most common tools in electrical engineering and computer science are rectangular grids of numbers known as matrices. The numbers in a matrix can represent data: The rows, for instance, could represent temperature, air pressure and humidity, and the columns could represent different locations where those three measurements were taken. But matrices can also represent mathematical equations. If the expressions t + 2p + 3h and 4t + 5p + 6h described two different mathematical operations involving temperature, pressure and humidity measurements, they could be represented as a matrix with two rows, [1 2 3] and [4 5 6]. Multiplying the two matrices together means performing both mathematical operations on every column of the data matrix and entering the results in a new matrix. In many time-sensitive engineering applications, multiplying matrices can give quick but good approximations of much more complicated calculations. <br /><br />In a paper <a href="http://www.pnas.org/content/107/28/12413.abstract" target="_blank">published in the July 13 issue</a> of <em>Proceedings of the National Academy of Sciences</em>, MIT math professor Gilbert Strang describes a new way to split certain types of matrices into simpler matrices. The result could have implications for software that processes video or audio data, for compression software that squeezes down digital files so that they take up less space, or even for systems that control mechanical devices.<br /><br />Strang’s analysis applies to so-called banded matrices. Most of the numbers in a banded matrix are zeroes; the only exceptions fall along diagonal bands, at or near the central diagonal of the matrix.
This may sound like an esoteric property, but it often has practical implications. Some applications that process video or audio signals, for instance, use banded matrices in which each band represents a different time slice of the signal. By analyzing local properties of the signal, the application could, for instance, sharpen frames of video, or look for redundant information that can be removed to save memory or bandwidth.<br /><br /><strong>Working backwards</strong><br /><br />Since most of the entries in a banded matrix — maybe 99 percent, Strang says — are zero, multiplying it by another matrix is a very efficient procedure: You can ignore all the zero entries. After a signal has been processed, however, it has to be converted back into its original form. That requires multiplying it by the “inverse” of the processing matrix: If multiplying matrix A by matrix B yields matrix C, multiplying C by the inverse of B yields A. <br /><br />But the fact that a matrix is banded doesn’t mean that its inverse is. In fact, Strang says, the inverse of a banded matrix is almost always “full,” meaning that almost all of its entries are nonzero. In a signal-processing application, all the speed advantages offered by banded matrices would be lost if restoring the signal required multiplying it by a full matrix. So engineers are interested in banded matrices with banded inverses, but which matrices those are is by no means obvious. <br /><br />In his <em>PNAS</em> paper, Strang describes a new technique for breaking a banded matrix up into simpler matrices — matrices with fewer bands. It’s easy to tell whether these simpler matrices have banded inverses, and if they do, their combination will, too. 
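The "banded but full inverse" phenomenon is easy to demonstrate. A small sketch with a tridiagonal matrix (the particular matrix is a standard second-difference example, chosen here for illustration):

```python
import numpy as np

# A tridiagonal (banded) matrix: nonzero entries only on the main
# diagonal and its two immediate neighbors.
n = 6
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

A_inv = np.linalg.inv(A)

# Multiplying by A touches at most three entries per row; its
# inverse, by contrast, has no zero entries at all.
banded_nonzeros = np.count_nonzero(np.abs(A) > 1e-9)
inverse_nonzeros = np.count_nonzero(np.abs(A_inv) > 1e-9)
```

Here `A` has 16 nonzero entries out of 36, while every one of the 36 entries of its inverse is nonzero, which is exactly why engineers want banded matrices whose inverses are also banded.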
Strang’s technique thus allows engineers to determine whether some promising new signal-processing techniques will, in fact, be practical.<br /><br /><strong>Faster than Fourier?</strong><br /><br />One of the most common digital-signal-processing techniques is the <a href="/newsoffice/2009/explained-fourier.html">discrete Fourier transform (DFT)</a>, which breaks a signal into its component frequencies and can be represented as a matrix. Although the matrix for the Fourier transform is full, Strang says, “the great fact about the Fourier transform is that it happens to be possible, even though it’s full, to multiply fast and to invert it fast. That’s part of what makes Fourier wonderful.” Nonetheless, for some signal-processing applications, banded matrices could prove more efficient than the Fourier transform. If only parts of the signal are interesting, the bands provide a way to home in on them and ignore the rest. “Fourier transform looks at the whole signal at once,” Strang says. “And that’s not always great, because often the signal is boring for 99 percent of the time.” <br /><br />Richard Brualdi, the emeritus UWF Beckwith Bascom Professor of Mathematics at the University of Wisconsin-Madison, points out that a mathematical conjecture that Strang presents in the paper has already been proven by three other groups of researchers. “It’s a very interesting theorem,” says Brualdi. “It’s already generated a couple of papers, and it’ll probably generate some more.” Brualdi points out that large data sets, such as those generated by gene sequencing, medical imaging, or weather monitoring, often yield matrices with regular structures. Bandedness is one type of structure, but there are others, and Brualdi expects other mathematicians to apply techniques like Strang’s to other types of structured matrices. “Whether or not those things will work, I really don’t know,” Brualdi says. 
“But Gil’s already said that he’s going to look at a different structure in a future paper.”<br /><br />In a banded matrix, all the nonzero entries cluster around the diagonal. Graphic: Christine Daniloff<br /><br />Digital signal processing, Fourier transforms, Linear algebra, Mathematics, Matrices, Video, Wavelets

Explained: The Discrete Fourier Transform
http://news.mit.edu/2009/explained-fourier
The theories of an early-19th-century French mathematician have emerged from obscurity to become part of the basic language of engineering.
Wed, 25 Nov 2009 05:00:00 -0500
Larry Hardesty, MIT News Office
http://news.mit.edu/2009/explained-fourier
<i>Science and technology journalists pride themselves on the ability to explain complicated ideas in accessible ways, but there are some technical principles that we encounter so often in our reporting that paraphrasing them or writing around them begins to feel like missing a big part of the story. So in a new series of articles called "Explained," MIT News Office staff will explain some of the core ideas in the areas they cover, as reference points for future reporting on MIT research.</i><br /><br />In 1811, Joseph Fourier, the 43-year-old prefect of the French district of Isère, entered a competition in heat research sponsored by the French Academy of Sciences. The paper he submitted described a novel analytical technique that we today call the Fourier transform, and it won the competition; but the prize jury declined to publish it, criticizing the sloppiness of Fourier’s reasoning. According to Jean-Pierre Kahane, a French mathematician and current member of the academy, as late as the early 1970s, Fourier’s name still didn’t turn up in the major French encyclopedia the Encyclopædia Universalis.<br /><br />Now, however, his name is everywhere. The Fourier transform is a way to decompose a signal into its constituent frequencies, and versions of it are used to generate and filter cell-phone and Wi-Fi transmissions, to compress audio, image, and video files so that they take up less bandwidth, and to solve differential equations, among other things. It’s so ubiquitous that “you don’t really study the Fourier transform for what it is,” says Laurent Demanet, an assistant professor of applied mathematics at MIT. “You take a class in signal processing, and there it is. 
You don’t have any choice.”<br /><br />The Fourier transform comes in three varieties: the plain old Fourier transform, the Fourier series, and the discrete Fourier transform. But it’s the discrete Fourier transform, or DFT, that accounts for the Fourier revival. In 1965, the computer scientists James Cooley and John Tukey described an algorithm called the fast Fourier transform, which made it much easier to calculate DFTs on a computer. All of a sudden, the DFT became a practical way to process digital signals.<br /><br />To get a sense of what the DFT does, consider an MP3 player plugged into a loudspeaker. The MP3 player sends the speaker audio information as fluctuations in the voltage of an electrical signal. Those fluctuations cause the speaker drum to vibrate, which in turn causes air particles to move, producing sound.<br /><br />An audio signal’s fluctuations over time can be depicted as a graph: the x-axis is time, and the y-axis is the voltage of the electrical signal, or perhaps the movement of the speaker drum or air particles. Either way, the signal ends up looking like an erratic wavelike squiggle. But when you listen to the sound produced from that squiggle, you can clearly distinguish all the instruments in a symphony orchestra, playing discrete notes at the same time.<br /><br />That’s because the erratic squiggle is, effectively, the sum of a number of much more regular squiggles, which represent different frequencies of sound. “Frequency” just means the rate at which air molecules go back and forth, or a voltage fluctuates, and it can be represented as the rate at which a regular squiggle goes up and down. 
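The sum-of-squiggles picture takes only a few lines to reproduce. A minimal sketch in Python with NumPy (the frequencies, amplitudes, and sample count are chosen arbitrarily for illustration): two regular pure tones, added sample by sample, yield one erratic composite waveform.

```python
import numpy as np

# Two "regular squiggles": pure tones at 3 Hz and 7 Hz, sampled
# 1,000 times over one second.
t = np.linspace(0, 1, 1000, endpoint=False)
tone_a = np.sin(2 * np.pi * 3 * t)
tone_b = 0.5 * np.sin(2 * np.pi * 7 * t)

# The erratic composite squiggle is just their pointwise sum: it
# rises where both tones rise, falls where both fall, and lands
# somewhere in between where they disagree.
mix = tone_a + tone_b
```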
When you add two frequencies together, the resulting squiggle goes up where both the component frequencies go up, goes down where they both go down, and does something in between where they’re going in different directions.<br /><br />The DFT does mathematically what the human ear does physically: decompose a signal into its component frequencies. Unlike the analog signal from, say, a record player, the digital signal from an MP3 player is just a series of numbers, each representing a point on a squiggle. Collect enough such points, and you produce a reasonable facsimile of a continuous signal: CD-quality digital audio recording, for instance, collects 44,100 samples a second. If you extract some number of consecutive values from a digital signal — 8, or 128, or 1,000 — the DFT represents them as the weighted sum of an equivalent number of frequencies. (“Weighted” just means that some of the frequencies count more than others toward the total.)<br /><br />The application of the DFT to wireless technologies is fairly straightforward: the ability to break a signal into its constituent frequencies lets cell-phone towers, for instance, disentangle transmissions from different users, allowing more of them to share the air.<br /><br />The application to data compression is less intuitive. But if you extract an eight-by-eight block of pixels from an image, each row or column is simply a sequence of eight numbers — like a digital signal with eight samples. The whole block can thus be represented as the weighted sum of 64 frequencies. If there’s little variation in color across the block, the weights of most of those frequencies will be zero or near zero. Throwing out the frequencies with low weights allows the block to be represented with fewer bits but little loss of fidelity.<br /><br />Demanet points out that the DFT has plenty of other applications, in areas like spectroscopy, magnetic resonance imaging, and quantum computing. 
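Both ideas, decomposition into weighted frequencies and compression by discarding small weights, fit in a short sketch. The example below uses NumPy's FFT as the DFT; the signal (64 samples mixing two tones) and the weight threshold are chosen for illustration.

```python
import numpy as np

# A digital "signal": 64 samples of a mix of two tones. The DFT
# rewrites those 64 numbers as a weighted sum of 64 frequencies.
n = 64
t = np.arange(n) / n
signal = np.sin(2 * np.pi * 3 * t) + 0.25 * np.sin(2 * np.pi * 10 * t)

weights = np.fft.fft(signal)

# Only the bins for the two component tones carry real weight
# (each tone shows up twice: once plus a mirror-image bin).
big = np.nonzero(np.abs(weights) > 1.0)[0]
print(big.tolist())  # prints [3, 10, 54, 61]

# Throwing away the near-zero weights and inverting loses almost
# nothing: the compression idea in miniature.
compressed = np.where(np.abs(weights) > 1.0, weights, 0)
restored = np.fft.ifft(compressed).real
assert np.allclose(restored, signal, atol=1e-8)
```

Real image codecs such as JPEG work with a close cousin of the DFT, the discrete cosine transform, and quantize the small weights rather than zeroing them outright, but the principle is the same.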
But ultimately, he says, “It’s hard to explain what sort of impact Fourier’s had,” because the Fourier transform is such a fundamental concept that by now, “it’s part of the language.”<br /><br />Fourier transforms, Compression, Computer science and technology, Electrical engineering and electronics, Explained, Signal processing