
Topic

Artificial intelligence


Displaying 1081–1095 of 1317 news clips related to this topic.

Inside Higher Ed

Chris Bourg, director of the MIT Libraries, speaks with Lindsay McKenzie of Inside Higher Ed about how libraries can help foster interdisciplinary discussions about artificial intelligence. McKenzie writes that Bourg notes MIT’s “long history of interdisciplinary research at its AI labs, the earliest of which was founded in 1959.”

NPR

With virtual personal assistants becoming more commonplace, Research Affiliate Jimena Canales suggests in an NPR article that it may be time to reconsider our views of them. Despite knowing that AI is not real, “the boundary between the simulated and the real is as contested as it ever was,” she writes. 

Co.Design

Neural networks developed by CSAIL researchers that can identify the contents of images, videos, and audio are the basis for a new system that has added background sound to Google Street View, writes Mark Wilson of Co.Design.

Fortune

Fortune reporter David Morris writes that MIT researchers have tricked an artificial intelligence system into thinking that a photo of a machine gun was a helicopter. Morris explains that “the research points towards potential vulnerabilities in the systems behind technology like self-driving cars, automated security screening systems, or facial-recognition tools.”

New Scientist

Abigail Beall of New Scientist writes that MIT researchers have developed an algorithm that can trick an AI system, highlighting potential weaknesses in new image-recognition technologies used in everything from self-driving cars to facial recognition systems. “If a driverless car failed to spot a pedestrian or a security camera misidentified a gun, the consequences could be incredibly serious,” Beall writes.

Wired

CSAIL researchers have tricked a machine-learning algorithm into misidentifying an object, reports Louise Matsakis for Wired. The research “demonstrates that attackers could potentially create adversarial examples that can trip up commercial AI systems,” explains Matsakis.

New Scientist

Media Lab researchers have teamed up with UNICEF on a new website that uses AI to show what cities would look like if they had gone through the war in Syria. As Timothy Revell notes in New Scientist, “such destruction is hard to imagine and can lead to fewer people contributing to fundraising campaigns,” which is something the researchers hope this project will change.  

Financial Times

Prof. Erik Brynjolfsson speaks with the Financial Times about a new report that aims to assess how quickly intelligent machines are progressing. This effort “was prompted by growing concerns about their impact on things such as employment,” writes Richard Waters.

CBC News

CBC News’ Anna Maria Tremonti explores a new study by MIT researchers that examines how children interact with AI toys. The study shows “how children can develop emotional ties with the robots, which was cause for concern for the MIT researcher,” Tremonti explains.

The Boston Globe

Writing for The Boston Globe, President L. Rafael Reif issues a call for allies to help address the changing nature of work in the age of automation. “Automation will transform our work, our lives, our society,” writes Reif. “Whether the outcome is inclusive or exclusive, fair or laissez-faire, is up to us.”

BBC News

Graduate student Anish Athalye speaks with the BBC about his work examining how image recognition systems can be fooled. “More and more real-world systems are starting to incorporate neural networks, and it’s a big concern that these systems may be possible to subvert or attack using adversarial examples,” Athalye explains.

New Scientist

New Scientist reporter Abigail Beall writes that MIT researchers have been able to trick an AI system into thinking an image of a turtle is a rifle. The results, Beall notes, “raise concerns about the accuracy of face recognition systems and the safety of driverless cars, for example.”

Guardian

Guardian reporter Alex Hern writes that in a new paper MIT researchers demonstrated the concept of adversarial images, describing how they tricked an AI system into thinking an image of a turtle was an image of a gun. The researchers explained that their work “demonstrates that adversarial examples are a significantly larger problem in real world systems than previously thought.”

Fortune

Valentina Zarya writes for Fortune that MIT researchers have developed an AI system that can generate horror stories. The system, named Shelley, learned its craft by reading a Reddit forum of stories by amateur horror writers. Shelley also tweets a line for a new story every hour, encouraging Twitter users to continue the story.

CBS Boston

MIT Media Lab researchers have created an AI program that can write horror stories in collaboration with humans via Twitter, reports David Wade for CBS Boston. “Over time, we are expecting her to learn more from the crowd, and to create even more scarier stories,” says postdoctoral associate Pinar Yanardag.