Slate
Writing for Slate, Elliot Hannon reports on new technology developed by MIT researchers that extracts audio from video by processing the tiny vibrations sound waves produce as they pass through objects.
Hal Hodson of New Scientist reports on the new algorithm developed by MIT researchers that can turn visual images into sound. "We were able to recover intelligible speech from maybe 15 feet away, from a bag of chips behind soundproof glass," explains Abe Davis, a graduate student at MIT.
Michael Morisy writes for BetaBoston about an algorithm developed by MIT researchers that can recreate speech by analyzing material vibrations. “The sound re-creation technique typically required cameras shooting at thousands of frames per second,” writes Morisy.