
Topic

Computer vision


Displaying 106 - 120 of 173 news clips related to this topic.

The Wall Street Journal

Prof. Ramesh Raskar has been awarded the Lemelson-MIT Prize for his “trailblazing work which includes the co-invention of an ultra-fast imaging camera that can see around corners, low-cost eye-care solutions and a camera that enables users to read the first few pages of a book without opening the cover,” writes Krishna Pokharel for The Wall Street Journal.

Popular Science

MIT researchers have developed a new algorithm to create videos from still images, writes G. Clay Whittaker for Popular Science. “The system ‘learns’ types of videos (beach, baby, golf swing...) and, starting from still images, replicates the movements that are most commonly seen in those videos,” Whittaker explains.

The Guardian

MIT researchers have developed a system that allows users to interact with video simulations, writes Joanna Goodman for The Guardian. The system “uses video to virtualize physical content so that it can interact with virtual content, so that when you see – on your smartphone – a Pokémon interact with a flexible object, you also see that object react.”

Scientific American

A new imaging technique developed by MIT researchers creates video simulations that people can interact with, writes Charles Choi for Scientific American. “In addition to fueling game development, these advances could help simulate how real bridges and buildings might respond to potentially disastrous situations,” Choi explains. 

Fox News

CSAIL researchers have created an algorithm that makes videos interactive, writes Andrew Freedman for Fox News. Freedman explains how this technology could transform games like Pokémon Go: “With interactive dynamic video, the Spearow could interact with the leaves rather than simply sit on top of them.”

BBC News

BBC News reports that CSAIL researchers have created an algorithm that can manipulate still objects in photographs and videos. The technique doesn’t require any special cameras, which makes it well suited to improving the realism of augmented reality games like Pokémon Go.

NBC News

Alyssa Newcomb writes for NBC News that MIT researchers have developed a system that allows users to interact with virtual objects. Newcomb explains that the “technology could be used to make movies or even by engineers wanting to find out how an old bridge may respond to inclement weather.”

Popular Science

CSAIL researchers have created a tool that allows people to interact with videos, writes Mary Beth Griggs for Popular Science. The technique could “make augmented reality animations integrate even more with the 'reality' part of augmented reality, help engineers model how structures will react when different forces are applied, or as a less expensive way to create special effects.”

Boston Globe

CSAIL researchers recently presented an algorithm that teaches computers to predict sounds, writes Kevin Hartnett for The Boston Globe. The ability to predict sounds will help robots successfully navigate the world and “make sense of what’s in front of them and figure out how to proceed,” writes Hartnett.

Forbes

CSAIL researchers used videos of popular TV shows to train an algorithm to predict how two people will greet one another. “[T]he algorithm got it right more than 43 percent of the time, as compared to the shoddier 36 percent accuracy achieved by algorithms without the TV training,” notes Janet Burns in Forbes.

Popular Science

Mary Beth Griggs writes for Popular Science that CSAIL researchers have created an algorithm that can predict human interaction. Griggs explains that the algorithm could “lead to artificial intelligence that is better able to react to humans or even security cameras that could alert authorities when people are in need of help.”

CBC News

Dan Misener writes for CBC News that CSAIL researchers have developed an algorithm that can predict interactions between two people. PhD student Carl Vondrick explains that the algorithm is “learning, for example, that when someone’s hand is outstretched, that means a handshake is going to come.”

CNN

CSAIL researchers have trained a deep-learning program to predict interactions between two people, writes Hope King for CNN. “Ultimately, MIT's research could help develop robots for emergency response, helping the robot assess a person's actions to determine if they are injured or in danger,” King explains. 

Wired

In an article for Wired, Tim Moynihan writes that a team of CSAIL researchers has created a machine-learning system that can produce sound effects for silent videos. The researchers hope that the system could be used to “help robots identify the materials and physical properties of an object by analyzing the sounds it makes.”

The Washington Post

Washington Post reporter Matt McFarland writes that MIT researchers have created an algorithm that can produce realistic sounds. “The findings are an example of the power of deep learning,” explains McFarland. “With deep learning, a computer system learns to recognize patterns in huge piles of data and applies what it learns in useful ways.”