
Topic: Computer vision



BBC News

Graduate student Achuta Kadambi speaks with the BBC’s Gareth Mitchell about the new depth sensors he and his colleagues developed that could eventually be used in self-driving cars. “This new approach is able to obtain very high-quality positioning of objects that surround a robot,” Kadambi explains. 

Fortune - CNN

Fortune reporter David Morris writes that MIT researchers have tricked an artificial intelligence system into thinking that a photo of a machine gun was a helicopter. Morris explains that "the research points towards potential vulnerabilities in the systems behind technology like self-driving cars, automated security screening systems, or facial-recognition tools."

New Scientist

Abigail Beall of New Scientist writes that MIT researchers have developed an algorithm that can trick an AI system, highlighting potential weaknesses in new image-recognition technologies used in everything from self-driving cars to facial recognition systems. "If a driverless car failed to spot a pedestrian or a security camera misidentified a gun, the consequences could be incredibly serious," writes Beall.

Wired

CSAIL researchers have tricked a machine-learning algorithm into misidentifying an object, reports Louise Matsakis for Wired. The research "demonstrates that attackers could potentially create adversarial examples that can trip up commercial AI systems," explains Matsakis.

Boston Globe

Using video to process shadows, MIT researchers have developed an algorithm that can see around corners, writes Alyssa Meyers for The Boston Globe. "When you first think about this, you might think it’s crazy or impossible, but we’ve shown that it’s not if you can understand the physics of how light propagates," says lead author and MIT graduate Katie Bouman.

Newsweek

CSAIL researchers have developed a system that detects objects and people hidden around blind corners, writes Anthony Cuthbertson for Newsweek. “We show that walls and other obstructions with edges can be exploited as naturally occurring ‘cameras’ that reveal the hidden scenes beyond them,” says lead author and MIT graduate Katherine Bouman.

New Scientist

MIT researchers have developed a new system that can spot moving objects hidden from view by corners, reports Douglas Heaven for New Scientist. “A lot of our work involves finding hidden signals you wouldn’t think would be there,” explains lead author and MIT graduate Katie Bouman. 

Wired

Wired reporter Matt Simon writes that MIT researchers have developed a new system that analyzes the light at the edges of walls to see around corners. Simon notes that the technology could be used to improve self-driving cars, autonomous wheelchairs, health care robots and more.  

NPR

CSAIL researchers have developed an artificial neural network that generates recipes from pictures of food, reports Laurel Dalrymple for NPR. The researchers input recipes into an AI system, which learned patterns and "connections between the ingredients in the recipes and the photos of food," explains Dalrymple.

Wired

A team of researchers from MIT and Princeton participating in the Amazon Robotics Challenge are using GelSight technology to give robots a sense of touch, reports Tom Simonite for Wired. Simonite explains that the "rubbery membranes on the robot’s fingers are tracked from the inside by tiny cameras as they are deformed by objects it touches."

USA Today

In this video for USA Today, Sean Dowling highlights Pic2Recipe, the artificial intelligence system developed by CSAIL researchers that can predict recipes based on images of food. The researchers hope the app could one day be used to help "people track daily nutrition by seeing what’s in their food."

BBC News

Researchers at MIT have developed an algorithm that can identify recipes based on a photo, writes BBC News reporter Zoe Kleinman. The algorithm, which was trained using a database of over one million photos, could be developed to show “how a food is prepared and could also be adapted to provide nutritional information,” writes Kleinman.

New Scientist

MIT researchers have developed a new machine learning algorithm that can look at photos of food and suggest a recipe to create the pictured dish, reports Matt Reynolds for New Scientist. Reynolds explains that "eventually people could use an improved version of the algorithm to help them track their diet throughout the day."

Wired

CSAIL researchers have trained an AI system to look at images of food, predict the ingredients used, and even suggest recipes, writes Matt Burgess for Wired. The system could also analyze meals to determine their nutritional value or "manipulate an existing recipe to be healthier or to conform to certain dietary restrictions," explains graduate student Nick Hynes.

Forbes

Kevin Murnane of Forbes spotlights five innovations developed by CSAIL researchers in 2016. Murnane highlights an ingestible origami robot, a 3-D printed robot with solid and liquid components, a robot that can assist with scheduling decisions, an artificial neural network that can explain its decisions, and an algorithm that can predict human interactions.