
Topic

Computer vision


Displaying 106–120 of 179 news clips related to this topic.

Scientific American

A new system developed by MIT researchers can predict how a scene will unfold, similar to how humans can visually imagine the future, reports Ed Gent for Scientific American. Graduate student Carl Vondrick explains that the system is “an encouraging development in suggesting that computer scientists can imbue machines with much more advanced situational understanding.”

NBC News

Steven Melendez of NBC News writes that a new system developed by CSAIL researchers can predict the future by examining a photograph. Grad student Carl Vondrick explains that the system’s ability to forecast normal behavior could allow it to be used for applications like self-driving cars.

New Scientist

New Scientist reporter Victoria Turk writes that MIT researchers have developed a system that can predict the future based on a still image. Turk writes that the system could enable “an AI assistant to recognize when someone is about to fall, or help a self-driving car foresee an accident.”

HuffPost

Writing for The Huffington Post, Adi Gaskell highlights how CSAIL researchers have developed a system to help robots work together successfully. Gaskell explains that the system allows three robots to “work successfully together to ensure items are delivered accurately in an unpredictable environment.”

Boston Globe

Researchers from MIT and IBM are joining forces to develop systems that enable machines to recognize images and sounds as people do, reports Hiawatha Bray for The Boston Globe. James DiCarlo, head of the Department of Brain and Cognitive Sciences, notes that as researchers build systems that can interpret events, “we learn ways our own brains might be doing that.”

New York Times

Writing for The New York Times, Steve Lohr features Prof. Tomaso Poggio’s work “building computational models of the visual cortex of the brain, seeking to digitally emulate its structure, even how it works and learns from experience.” Lohr notes that efforts like Poggio’s could lead to breakthroughs in computer vision and machine learning. 

The Wall Street Journal

Prof. Ramesh Raskar has been awarded the Lemelson-MIT Prize for his “trailblazing work which includes the co-invention of an ultra-fast imaging camera that can see around corners, low-cost eye-care solutions and a camera that enables users to read the first few pages of a book without opening the cover,” writes Krishna Pokharel for The Wall Street Journal.

Popular Science

MIT researchers have developed a new algorithm to create videos from still images, writes G. Clay Whittaker for Popular Science. “The system ‘learns’ types of videos (beach, baby, golf swing...) and, starting from still images, replicates the movements that are most commonly seen in those videos,” Whittaker explains.

The Guardian

MIT researchers have developed a system that allows users to interact with video simulations, writes Joanna Goodman for The Guardian. The system “uses video to virtualize physical content so that it can interact with virtual content, so that when you see – on your smartphone – a Pokémon interact with a flexible object, you also see that object react.”

Scientific American

A new imaging technique developed by MIT researchers creates video simulations that people can interact with, writes Charles Choi for Scientific American. “In addition to fueling game development, these advances could help simulate how real bridges and buildings might respond to potentially disastrous situations,” Choi explains. 

Fox News

CSAIL researchers have created an algorithm that makes videos interactive, writes Andrew Freedman for Fox News. Freedman explains how this technology could transform games like Pokémon Go: “With interactive dynamic video, the Spearow could interact with the leaves rather than simply sit on top of them.”

BBC News

BBC News reports that CSAIL researchers have created an algorithm that can manipulate still objects in photographs and videos. The technique doesn’t require any special cameras, which makes it great for improving the realism in augmented reality games like Pokémon Go.

NBC News

Alyssa Newcomb writes for NBC News that MIT researchers have developed a system that allows users to interact with virtual objects. Newcomb explains that the “technology could be used to make movies or even by engineers wanting to find out how an old bridge may respond to inclement weather.”

Popular Science

CSAIL researchers have created a tool that allows people to interact with videos, writes Mary Beth Griggs for Popular Science. The technique could “make augmented reality animations integrate even more with the 'reality' part of augmented reality, help engineers model how structures will react when different forces are applied, or as a less expensive way to create special effects.”

Boston Globe

CSAIL researchers recently presented an algorithm that teaches computers to predict sounds, writes Kevin Hartnett for The Boston Globe. The ability to predict sounds will help robots successfully navigate the world and “make sense of what’s in front of them and figure out how to proceed,” writes Hartnett.