
Topic

Computer vision


Displaying 166 - 179 of 179 news clips related to this topic.

Time

Time reporter Nolan Feeney writes about how researchers from MIT have developed a new technique to extract intelligible audio of speech by “videotaping and analyzing the tiny vibrations of objects.”

Wired

“Researchers have developed an algorithm that can use visual signals from videos to reconstruct sound and have used it to recover intelligible speech from a video,” writes Katie Collins for Wired about an algorithm developed by a team of MIT researchers that can derive speech from material vibrations.

The Washington Post

Rachel Feltman of The Washington Post examines the new MIT algorithm that can reconstruct sound by examining the visual vibrations of sound waves. “This is a new dimension to how you can image objects,” explains graduate student Abe Davis. 

Popular Science

In a piece for Popular Science, Douglas Main writes on the new technique developed by MIT researchers that can reconstruct speech from visual information. The researchers showed that "an impressive amount of information about the audio (although not its content) could also be recorded with a regular DSLR that films at 60 frames per second."

Slate

Writing for Slate, Elliot Hannon reports on the new technology developed by MIT researchers that allows audio to be extracted from visual information by processing the vibrations of sound waves as they move through objects.

New Scientist

Hal Hodson of New Scientist reports on the new algorithm developed by MIT researchers that can turn visual images into sound. "We were able to recover intelligible speech from maybe 15 feet away, from a bag of chips behind soundproof glass," explains Abe Davis, a graduate student at MIT. 

BetaBoston

Michael Morisy writes for BetaBoston about an algorithm developed by MIT researchers that can recreate speech by analyzing material vibrations. “The sound re-creation technique typically required cameras shooting at thousands of frames per second,” writes Morisy.
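The clips above all describe the same underlying idea: a high-frame-rate camera records the minute vibrations that sound induces in an object, and those per-frame vibrations are converted back into an audio waveform. As a heavily simplified, hypothetical sketch (the actual MIT method analyzes sub-pixel phase variations in a complex steerable pyramid, not mean brightness), the pipeline can be illustrated by treating one motion proxy per frame as an audio sample:

```python
import numpy as np

def recover_audio_sketch(frames, fps):
    """Toy stand-in for the 'visual microphone' idea: extract one
    vibration value per video frame and treat the sequence as audio
    sampled at the camera's frame rate. Here the vibration proxy is
    just the mean brightness of each frame."""
    sig = np.array([f.mean() for f in frames], dtype=float)
    sig -= sig.mean()               # remove the DC (static scene) component
    if sig.std() > 0:
        sig /= np.abs(sig).max()    # normalize to [-1, 1]
    return sig, fps                 # audio samples at the frame rate

# Synthetic demo: a patch whose brightness vibrates at 440 Hz,
# "filmed" at 2400 frames per second for one second.
fps = 2400
t = np.arange(fps) / fps
frames = [np.full((8, 8), 128 + 10 * np.sin(2 * np.pi * 440 * ti)) for ti in t]
audio, rate = recover_audio_sketch(frames, fps)

# The dominant frequency of the recovered signal matches the 440 Hz source.
peak = np.abs(np.fft.rfft(audio)).argmax() * rate / len(audio)
```

This also makes Morisy's point concrete: by the Nyquist limit, a camera sampling at `fps` frames per second can only capture frequencies up to `fps / 2`, which is why the technique typically needs thousands of frames per second to recover intelligible speech.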

Boston Globe

“They've created an app which recasts mediocre headshots in the styles of famous portrait photographers like Richard Avedon and Diane Arbus, and in the process reveals how subtle shifts in lighting can completely change the way we perceive a face,” writes Boston Globe reporter Kevin Hartnett.

Wired

Writing for Wired, Olivia Solon describes a new algorithm that can identify human action in video. “The activity-recognising algorithm is faster than previous versions and is able to make good guesses at partially completed actions, meaning it can handle streaming video,” Solon writes. 

Engadget

MIT researchers have helped to produce an algorithm that applies professional photograph editing to self-portraits, writes Billy Steele for Engadget. The software uses existing works to make a match with the captured image, explains grad student YiChang Shih. 

BBC News

In a video for BBC News, Spencer Kelly reports on how “A researcher at Massachusetts Institute of Technology (MIT) has developed an algorithm which he says can predict how popular a photograph will be when it is posted online.”

Network World

Jon Gold reports on how MIT researchers have developed an algorithm that can identify human activity from video input. “The researchers drew on natural language processing techniques,” Gold writes, “to create a 'grammar' for each action they wanted the system to recognize.”

Time

“A team of scientists from MIT’s Computer Science and Artificial Intelligence Lab, eBay Research Labs, and DigitalGlobe—led by MIT doctoral candidate Aditya Khosla—wrote an algorithm that’s intended to predict just how popular a photo you post will be,” writes TIME’s Bijan Stephen of a new algorithm that predicts the popularity of photographs.

HuffPost

Huffington Post reporter Bianca Bosker writes about a new algorithm developed by MIT graduate student Aditya Khosla that can predict how popular a photograph will be.