
Topic

Machine learning



New Scientist

New Scientist reporter Abigail Beall writes that MIT researchers have been able to trick an AI system into thinking an image of a turtle is a rifle. Beall writes that the results “raise concerns about the accuracy of face recognition systems and the safety of driverless cars, for example.”

Guardian

Guardian reporter Alex Hern writes that, in a new paper, MIT researchers demonstrated the concept of adversarial images, describing how they tricked an AI system into thinking an image of a turtle was an image of a gun. The researchers explained that their work “demonstrates that adversarial examples are a significantly larger problem in real world systems than previously thought.”

Boston Globe

Using video to process shadows, MIT researchers have developed an algorithm that can see around corners, writes Alyssa Meyers for The Boston Globe. “When you first think about this, you might think it’s crazy or impossible, but we’ve shown that it’s not if you can understand the physics of how light propagates,” says lead author and MIT graduate Katie Bouman.

Newsweek

CSAIL researchers have developed a system that detects objects and people hidden around blind corners, writes Anthony Cuthbertson for Newsweek. “We show that walls and other obstructions with edges can be exploited as naturally occurring ‘cameras’ that reveal the hidden scenes beyond them,” says lead author and MIT graduate Katherine Bouman.

New Scientist

MIT researchers have developed a new system that can spot moving objects hidden from view by corners, reports Douglas Heaven for New Scientist. “A lot of our work involves finding hidden signals you wouldn’t think would be there,” explains lead author and MIT graduate Katie Bouman. 

Wired

Wired reporter Matt Simon writes that MIT researchers have developed a new system that analyzes the light at the edges of walls to see around corners. Simon notes that the technology could be used to improve self-driving cars, autonomous wheelchairs, health care robots and more.  

Associated Press

IBM is joining forces with MIT to establish a new lab dedicated to fundamental AI research, reports the AP. The new lab will focus on “advancing the hardware, software and algorithms used for artificial intelligence. It also will tackle some of the economic and ethical implications of intelligent machines and look at its commercial application.”

Bloomberg

IBM has invested $240 million to develop a new AI research lab with MIT, reports Jing Cao for Bloomberg News. “The MIT-IBM Watson AI Lab will fund projects in four broad areas, including creating better hardware to handle complex computations and figuring out applications of AI in specific industries,” Cao explains. 

CNBC

CNBC reporter Jordan Novet writes that MIT and IBM have established a new lab to pursue fundamental AI research. Novet notes that MIT “was home to one of the first AI labs and continues to be well regarded as a place to do work in the sector.”

Boston Globe

Boston Globe reporter Andy Rosen writes that MIT and IBM have established a new AI research lab.  “It’s amazing that we have a company that’s also interested in the fundamental research,” explains Anantha Chandrakasan, dean of the School of Engineering. “That’s very basic research that may not be in a product next year, but provides very important insights.”

Fortune

Writing for Fortune, Barb Darrow highlights how IBM has committed $240 million to establish a new joint AI lab with MIT. Darrow explains that “the resulting MIT–IBM Watson AI Lab will focus on a handful of key AI areas including the development of new 'deep learning' algorithms.”

New Scientist

New Scientist reporter Matt Reynolds writes that MIT researchers have developed a new system that can determine how much pain a patient is experiencing. “By examining tiny facial expressions and calibrating the system to each person, it provides a level of objectivity in an area where that’s normally hard to come by,” explains Reynolds. 

Newsweek

An algorithm developed by Prof. Iyad Rahwan and graduate student Bjarke Felbo has been trained to detect sarcasm in tweets that use emojis, writes Josh Lowe for Newsweek. After reading over 1 billion tweets with emojis, the algorithm predicted “which emoji would be associated with a given tweet based on its emotional tone,” explains Lowe.

Wired

Wired reporter Liz Stinson writes that researchers from MIT and Google have developed a new algorithm that can automatically retouch images on a mobile phone. “The neural network identifies exactly how to make it look better—increase contrast a smidge, tone down brightness, whatever—and apply the changes in under 20 milliseconds,” Stinson explains. 

NPR

CSAIL researchers have developed an artificial neural network that generates recipes from pictures of food, reports Laurel Dalrymple for NPR. The researchers input recipes into an AI system, which learned patterns of “connections between the ingredients in the recipes and the photos of food,” explains Dalrymple.