Science
MIT researchers have identified a way to make AI image-recognition systems more resistant to adversarial attacks, reports Matthew Hutson for Science. When the researchers “trained an algorithm on images without the subtle features, their image recognition software was fooled by adversarial attacks only 50% of the time,” Hutson explains. “That compares with a 95% rate of vulnerability when the AI was trained on images with both obvious and subtle patterns.”
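The intuition behind the finding can be sketched with a toy example. This is not the MIT researchers' code; it is a minimal, hypothetical linear "image classifier" over pixel values in [0, 1], showing why subtle, human-imperceptible features leave a model open to attack: tiny per-pixel nudges, each invisible on its own, accumulate across many pixels into a large change in the model's score.

```python
import numpy as np

# Hypothetical toy setup (illustration only): a 784-pixel grayscale
# "image" classified by a fixed linear model, score = w . x.
d = 784
w = np.ones(d)
w[1::2] = -1                 # toy weights: half +1, half -1

def predict(x):
    """Class 1 if the weighted pixel sum is positive, else class 0."""
    return 1 if w @ x > 0 else 0

x = np.full(d, 0.5)          # a flat gray image...
x[0] = 0.6                   # ...with one slightly brighter pixel
assert predict(x) == 1       # classified on a margin of only ~0.1

# Adversarial perturbation in the style of the fast gradient sign
# method: move every pixel by just 1% of the pixel range in the
# direction that lowers the score (for a linear model, -sign(w)).
# No single pixel changes visibly, but 784 nudges of 0.01 shift the
# score by ~7.8, overwhelming the ~0.1 margin and flipping the label.
eps = 0.01
x_adv = np.clip(x - eps * np.sign(w), 0.0, 1.0)

print(predict(x_adv))        # -> 0: the prediction flips
```

In this toy model the "subtle features" are the tiny per-pixel contributions the classifier sums over; the article's result corresponds to training models that do not rely on such features, which removes most of the attack surface.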