Topic: Research

Displaying 5326 - 5340 of 5435 news clips related to this topic.

USA Today

In a piece for USA Today, Mark Olalde reports on a new study of municipal politics co-authored by MIT Professor Chris Warshaw. The study, which examined the political preferences of U.S. residents in specific cities, found that Mesa, Arizona, was the most conservative city and San Francisco, California, the most liberal. 

PBS NewsHour

Colleen Shalby reports for the PBS NewsHour on the “visual microphone” developed by MIT researchers, which can detect and reconstruct audio by analyzing the sound waves traveling through objects. 

Bloomberg Businessweek

Bloomberg Businessweek reporter Drake Bennett writes about how MIT researchers have developed a technique for extracting audio by analyzing the sound vibrations traveling through objects. Bennett reports that the researchers found sound waves could be detected even with cell phone camera sensors. 

ABC News

Alyssa Newcomb of ABC News reports on how MIT researchers have developed a new method that can recover audio by videotaping everyday objects and translating the sound vibrations back into intelligible sound. 

NPR

NPR’s Melissa Block examines the new MIT algorithm that can translate visual information into sound. Abe Davis explains that by analyzing sound waves traveling through an object, “you can start to filter out some of that noise and you can actually recover the sound that produced that motion.” 

Time

Time reporter Nolan Feeney writes about how researchers from MIT have developed a new technique to extract intelligible audio of speech by “videotaping and analyzing the tiny vibrations of objects.”

Wired

“Researchers have developed an algorithm that can use visual signals from videos to reconstruct sound and have used it to recover intelligible speech from a video,” writes Katie Collins for Wired about an algorithm developed by a team of MIT researchers that can derive speech from material vibrations.

USA Today

Lisa Kiplinger writes for USA Today about research from Dr. Joseph Coughlin, director of the MIT AgeLab, indicating that 85 percent of Millennials are open to having conversations about finances with their grandparents, but only 8 percent of grandparents are likely to initiate the conversation.

WBUR

WBUR’s Bruce Gellerman reports on MIT.nano, the nanotechnology research facility that, when completed, will provide cutting-edge laboratory space for thousands of researchers. “The world is built on nanoscale and the 21st century will be defined by it,” says Prof. Vladimir Bulovic. 

Scientific American

Cynthia Graber of Scientific American reports on the new MIT technique that uses solar energy to generate steam. Graber reports that the new system reaches “85 percent efficiency in converting the solar energy into steam.” 

The Washington Post

Rachel Feltman of The Washington Post examines the new MIT algorithm that can reconstruct sound by examining the visible vibrations caused by sound waves. “This is a new dimension to how you can image objects,” explains graduate student Abe Davis. 

Popular Science

In a piece for Popular Science, Douglas Main writes about the new technique developed by MIT researchers that can reconstruct speech from visual information. The researchers showed that “an impressive amount of information about the audio (although not its content) could also be recorded with a regular DSLR that films at 60 frames per second.”

Slate

Writing for Slate, Elliot Hannon reports on the new technology developed by MIT researchers that allows audio to be extracted from visual information by processing the vibrations of sound waves as they move through objects.

New Scientist

Hal Hodson of New Scientist reports on the new algorithm developed by MIT researchers that can turn visual images into sound. “We were able to recover intelligible speech from maybe 15 feet away, from a bag of chips behind soundproof glass,” explains Abe Davis, a graduate student at MIT. 

BetaBoston

Michael Morisy writes for BetaBoston about an algorithm developed by MIT researchers that can recreate speech by analyzing material vibrations. “The sound re-creation technique typically required cameras shooting at thousands of frames per second,” writes Morisy.