Topic

Computer vision

Displaying 91 - 105 of 171 news clips related to this topic.

NPR

CSAIL researchers have developed an artificial neural network that generates recipes from pictures of food, reports Laurel Dalrymple for NPR. The researchers input recipes into an AI system, which learned the “connections between the ingredients in the recipes and the photos of food,” explains Dalrymple.

Wired

A team of researchers from MIT and Princeton participating in the Amazon Robotics Challenge are using GelSight technology to give robots a sense of touch, reports Tom Simonite for Wired. Simonite explains that the “rubbery membranes on the robot’s fingers are tracked from the inside by tiny cameras as they are deformed by objects it touches.”

USA Today

In this video for USA Today, Sean Dowling highlights Pic2Recipe, the artificial intelligence system developed by CSAIL researchers that can predict recipes based on images of food. The researchers hope the app could one day be used to help “people track daily nutrition by seeing what’s in their food.”

BBC News

Researchers at MIT have developed an algorithm that can identify recipes based on a photo, writes BBC News reporter Zoe Kleinman. The algorithm, which was trained using a database of over one million photos, could be developed to show “how a food is prepared and could also be adapted to provide nutritional information,” writes Kleinman.

New Scientist

MIT researchers have developed a new machine learning algorithm that can look at photos of food and suggest a recipe to create the pictured dish, reports Matt Reynolds for New Scientist. Reynolds explains that “eventually people could use an improved version of the algorithm to help them track their diet throughout the day.”

Wired

CSAIL researchers have trained an AI system to look at images of food, predict the ingredients used, and even suggest recipes, writes Matt Burgess for Wired. The system could also analyze meals to determine their nutritional value or “manipulate an existing recipe to be healthier or to conform to certain dietary restrictions,” explains graduate student Nick Hynes.

Forbes

Kevin Murnane of Forbes spotlights five innovations developed by CSAIL researchers in 2016. Murnane highlights an ingestible origami robot, a 3-D printed robot with solid and liquid components, a robot that can assist with scheduling decisions, an artificial neural network that can explain its decisions, and an algorithm that can predict human interactions. 

Scientific American

A new system developed by MIT researchers can predict how a scene will unfold, similar to how humans can visually imagine the future, reports Ed Gent for Scientific American. Graduate student Carl Vondrick explains that the system is “an encouraging development in suggesting that computer scientists can imbue machines with much more advanced situational understanding.”

NBC News

Steven Melendez of NBC News writes that a new system developed by CSAIL researchers can predict the future by examining a photograph. Grad student Carl Vondrick explains that the system’s ability to forecast normal behavior could allow it to be used for applications like self-driving cars.

New Scientist

New Scientist reporter Victoria Turk writes that MIT researchers have developed a system that can predict the future based on a still image. Turk writes that the system could enable “an AI assistant to recognize when someone is about to fall, or help a self-driving car foresee an accident.”

HuffPost

Writing for The Huffington Post, Adi Gaskell highlights how CSAIL researchers have developed a system to help robots work together successfully. Gaskell explains that the system allows three robots to “work successfully together to ensure items are delivered accurately in an unpredictable environment.”

Boston Globe

Researchers from MIT and IBM are joining forces to develop systems that enable machines to recognize images and sounds as people do, reports Hiawatha Bray for The Boston Globe. James DiCarlo, head of the Department of Brain and Cognitive Sciences, notes that as researchers build systems that can interpret events, “we learn ways our own brains might be doing that.”

New York Times

Writing for The New York Times, Steve Lohr features Prof. Tomaso Poggio’s work “building computational models of the visual cortex of the brain, seeking to digitally emulate its structure, even how it works and learns from experience.” Lohr notes that efforts like Poggio’s could lead to breakthroughs in computer vision and machine learning. 

The Wall Street Journal

Prof. Ramesh Raskar has been awarded the Lemelson-MIT prize for his “trailblazing work which includes the co-invention of an ultra-fast imaging camera that can see around corners, low-cost eye-care solutions and a camera that enables users to read the first few pages of a book without opening the cover,” writes Krishna Pokharel for The Wall Street Journal.

Popular Science

MIT researchers have developed a new algorithm to create videos from still images, writes G. Clay Whittaker for Popular Science. “The system ‘learns’ types of videos (beach, baby, golf swing...) and, starting from still images, replicates the movements that are most commonly seen in those videos,” Whittaker explains.