Topic

Artificial intelligence

Displaying 796–810 of 1,027 news clips related to this topic.

Wired

CSAIL researchers have tricked a machine-learning algorithm into misidentifying an object, reports Louise Matsakis for Wired. The research “demonstrates that attackers could potentially create adversarial examples that can trip up commercial AI systems,” explains Matsakis.

New Scientist

Media Lab researchers have teamed up with UNICEF on a new website that uses AI to show what cities would look like if they had gone through the war in Syria. As Timothy Revell notes in New Scientist, “such destruction is hard to imagine and can lead to fewer people contributing to fundraising campaigns,” something the researchers hope this project will change.

Financial Times

Prof. Erik Brynjolfsson speaks with the Financial Times about a new report that aims to assess how quickly intelligent machines are progressing. This effort “was prompted by growing concerns about their impact on things such as employment,” writes Richard Waters.

CBC News

CBC News’ Anna Maria Tremonti explores a new study by MIT researchers that examines how children interact with AI toys. The study shows “how children can develop emotional ties with the robots, which was cause for concern for the MIT researcher,” Tremonti explains.

The Boston Globe

Writing for The Boston Globe, President L. Rafael Reif issues a call for allies to help address the changing nature of work in the age of automation. “Automation will transform our work, our lives, our society,” writes Reif. “Whether the outcome is inclusive or exclusive, fair or laissez-faire, is up to us.”

BBC News

Graduate student Anish Athalye speaks with the BBC about his work examining how image recognition systems can be fooled. "More and more real-world systems are starting to incorporate neural networks, and it's a big concern that these systems may be possible to subvert or attack using adversarial examples,” Athalye explains.

New Scientist

New Scientist reporter Abigail Beale writes that MIT researchers have been able to trick an AI system into thinking an image of a turtle is a rifle. Beale writes that the results “raise concerns about the accuracy of face recognition systems and the safety of driverless cars, for example.”

Guardian

Guardian reporter Alex Hern writes that in a new paper MIT researchers demonstrated the concept of adversarial images, describing how they tricked an AI system into thinking an image of a turtle was an image of a gun. The researchers explained that their work “demonstrates that adversarial examples are a significantly larger problem in real world systems than previously thought.”

Fortune

Valentina Zarya writes for Fortune that MIT researchers have developed an AI system that can generate horror stories. The system, named Shelley, learned its craft by reading a Reddit forum containing stories from amateur horror writers. Shelley also tweets a line for a new story every hour, encouraging Twitter users to continue the story.

CBS Boston

MIT Media Lab researchers have created an AI program that can write horror stories in collaboration with humans via Twitter, reports David Wade for CBS Boston. “Over time, we are expecting her to learn more from the crowd, and to create even more scarier stories,” says postdoctoral associate Pinar Yanardag.

HuffPost

MIT researchers have developed an artificial neural network that can generate horror stories by collaborating with people on Twitter, HuffPost reports. Pinar Yanardag, a postdoc at the Media Lab, explains that the system is “creating really interesting and weird stories that have never really existed in the horror genre.”

Associated Press

Associated Press reporter Matt O’Brien details how Media Lab researchers have developed a new system, dubbed Shelley, that can generate scary stories. O’Brien explains that, “Shelley's artificial neural network is generating its own stories, posting opening lines on Twitter, then taking turns with humans in collaborative storytelling.”

Newsweek

Newsweek reporter Joseph Frankel writes that MIT Media Lab researchers have developed an AI system named Shelley that uses human input to write short horror stories. Frankel explains that Shelley “tweets out one or two sentences as the start of a new horror story, then calls for users to respond with their own lines.”

New Scientist

New Scientist reporter Timothy Revell writes that researchers from the MIT Media Lab have developed a new AI system that can tell scary stories. Revell explains that the system is “powered by deep learning algorithms that have been trained on stories collected from the subreddit /r/nosleep where people share their own original eerie works.”

WGBH

During an appearance on WGBH’s Greater Boston, Prof. Regina Barzilay speaks with Jim Braude about her research and the experience of winning a MacArthur grant. Barzilay explains that the techniques she and her colleagues are developing to apply machine learning to medicine “can be applied to many other areas. In fact, we have started collaborating and expanding.”