
Topic

Electrical engineering and computer science (EECS)


Displaying 1081 - 1095 of 1111 news clips related to this topic.

ABC News

Alyssa Newcomb of ABC News reports on how MIT researchers have developed a new method that can recover intelligible audio by videotaping everyday objects and translating their tiny vibrations back into sound.

NPR

NPR’s Melissa Block examines the new MIT algorithm that can translate visual information into sound. Abe Davis explains that by analyzing sound waves traveling through an object, “you can start to filter out some of that noise and you can actually recover the sound that produced that motion.” 

Time

Time reporter Nolan Feeney writes about how researchers from MIT have developed a new technique to extract intelligible speech by “videotaping and analyzing the tiny vibrations of objects.”

Wired

“Researchers have developed an algorithm that can use visual signals from videos to reconstruct sound and have used it to recover intelligible speech from a video,” writes Katie Collins for Wired about an algorithm developed by a team of MIT researchers that can derive speech from material vibrations.

The Washington Post

Rachel Feltman of The Washington Post examines the new MIT algorithm that can reconstruct sound from video of the tiny vibrations that sound waves produce in objects. “This is a new dimension to how you can image objects,” explains graduate student Abe Davis.

Popular Science

In a piece for Popular Science, Douglas Main writes about the new technique developed by MIT researchers that can reconstruct speech from visual information. The researchers showed that “an impressive amount of information about the audio (although not its content) could also be recorded with a regular DSLR that films at 60 frames per second.”

New Scientist

Hal Hodson of New Scientist reports on the new algorithm developed by MIT researchers that can turn visual images into sound. "We were able to recover intelligible speech from maybe 15 feet away, from a bag of chips behind soundproof glass," explains Abe Davis, a graduate student at MIT. 

BetaBoston

Michael Morisy writes for BetaBoston about an algorithm developed by MIT researchers that can recreate speech by analyzing material vibrations. “The sound re-creation technique typically required cameras shooting at thousands of frames per second,” writes Morisy.

Fortune - CNN

Jane Porter writes for Fortune about WiTricity, an MIT spinout focused on the development of wireless power-transfer technology. By coupling devices tuned to the same resonant frequency, the technology can transfer electricity over distances of up to four feet.

Boston Magazine

Boston Magazine reporter Steve Annear writes about a new robot, designed by MIT undergraduate Patrick McCabe, that can play the game Connect Four. “It’s kind of a magical thing with computer science and technology, being able to leverage that to actually make something smarter than you are,” said McCabe of the device, which can beat its creator.

Wired

Wired reporter Margaret Rhodes writes about a new system developed by MIT researchers that uses drones as lighting assistants for photographs. The system operates by examining “how much light is hitting the subject, and where the drone needs to move to adjust that light.”

Gizmag

Ben Coxworth of Gizmag writes about the new system developed by MIT researchers that allows photographers to achieve rim lighting during photo shoots. “Their system not only does away with light stands, but the light-equipped aircraft automatically moves to compensate for movements of the model or photographer,” writes Coxworth.

Fortune - CNN

In a piece for Fortune, Benjamin Snyder writes about how MIT researchers have developed a new system to help achieve the perfect lighting for photo shoots. Flying robots are programmed to produce rim lighting, which illuminates the edge of the subject in a photograph. 

Wired

Katie Collins writes for Wired that MIT researchers have developed a system that allows people to choose exactly what information they share online. “The primary benefit of this is that you as an individual would not be able to be identified from an anonymised dataset,” writes Collins.

Boston Globe

“They've created an app which recasts mediocre headshots in the styles of famous portrait photographers like Richard Avedon and Diane Arbus, and in the process reveals how subtle shifts in lighting can completely change the way we perceive a face,” writes Boston Globe reporter Kevin Hartnett.