
Topic

Image Processing


4 news clips related to this topic.

Fortune - CNN

Fortune reporter David Morris writes that MIT researchers have tricked an artificial intelligence system into thinking that a photo of a machine gun was a helicopter. Morris explains that "the research points towards potential vulnerabilities in the systems behind technology like self-driving cars, automated security screening systems, or facial-recognition tools."

New Scientist

Abigail Beall of New Scientist writes that MIT researchers have developed an algorithm that can trick an AI system, highlighting potential weaknesses in new image-recognition technologies used in everything from self-driving cars to facial recognition systems. As Beall notes, "If a driverless car failed to spot a pedestrian or a security camera misidentified a gun the consequences could be incredibly serious."

Wired

CSAIL researchers have tricked a machine-learning algorithm into misidentifying an object, reports Louise Matsakis for Wired. The research "demonstrates that attackers could potentially create adversarial examples that can trip up commercial AI systems," explains Matsakis.

Fortune - CNN

Fortune reporter Jonathan Vanian writes that researchers from MIT's Computer Science and Artificial Intelligence Laboratory have developed a new method to restore old, malfunctioning code. The system, called Helium, "discovers the most crucial lines of code that the original programmers developed to make it function, and then builds a revised version of the program."