
Topic

Machine learning



TIME

Graduate student Joy Buolamwini writes for TIME about the need to tackle gender and racial bias in AI systems. “By working to reduce the exclusion overhead and enabling marginalized communities to engage in the development and governance of AI, we can work toward creating systems that embrace full spectrum inclusion,” writes Buolamwini.

BBC News

In this video, graduate student Nima Fazeli speaks with BBC News about his work developing a robot that uses sensors and cameras to learn how to play Jenga. “It’s using these techniques from AI and machine learning to be able to predict the future of its actions and decide what is the next best move,” explains Fazeli.

CBS News

CBS This Morning spotlights how MIT researchers have developed a new robot that can successfully play Jenga. “It is an automated system that has had a learning period first,” explains Prof. Alberto Rodriguez. “It uses the information from the camera and the force sensor to interpret its interactions with the Jenga tower.”

CNN

MIT researchers have developed a robot that can play Jenga. “It ‘learns’ whether to remove a specific block in real time, using visual and tactile feedback, in much the same way as a human player would switch blocks if the tower started to wobble,” reports Jack Guy for CNN.

Popular Science

A new robot developed by MIT researchers uses AI and sensors to play the game of Jenga, reports Rob Verger for Popular Science. “It decides on its own which block to push, [and] which blocks to probe; it decides on its own how to extract them; and it decides on its own when it’s a good idea to keep extracting them, or to move to another one,” says Prof. Alberto Rodriguez.

Wired

Wired reporter Matt Simon writes that MIT researchers have engineered a robot that can teach itself to play the game of Jenga. As Simon explains, the development is a “big step in the daunting quest to get robots to manipulate objects in the real world.”

Associated Press

Associated Press reporter Tali Arbel writes that MIT researchers have found that Amazon’s facial detection technology often misidentifies women, particularly women with darker skin. Arbel writes that the study “warns of the potential of abuse and threats to privacy and civil liberties from facial-detection technology.”

The Washington Post

A new study by Media Lab researchers finds that Amazon’s Rekognition facial recognition system performed more accurately when identifying lighter-skinned faces, reports Drew Harwell for The Washington Post. The system “performed flawlessly in predicting the gender of lighter-skinned men,” writes Harwell, “but misidentified the gender of darker-skinned women in roughly 30 percent of their tests.”

The Verge

Verge reporter James Vincent writes that Media Lab researchers have found that the facial recognition system Rekognition performed worse at identifying an individual’s gender if they were female or dark-skinned. In experiments, the researchers found that the system “mistook women for men 19 percent of the time and mistook darker-skinned women for men 31 percent of the time,” Vincent explains.

The New York Times

MIT researchers have found that the Rekognition facial recognition system has more difficulty identifying the gender of female and darker-skinned faces than similar services, reports Natasha Singer for The New York Times. Graduate student Joy Buolamwini said “the results of her studies raised fundamental questions for society about whether facial technology should not be used in certain situations,” writes Singer.

The New York Times

New York Times reporter Steve Lohr writes about the MIT AI Policy Conference, which examined how society, industry, and governments should manage the policy questions surrounding the evolution of AI technologies. “If you want people to trust this stuff, government has to play a role,” says CSAIL principal research scientist Daniel Weitzner.

The Wall Street Journal

Provost Martin Schmidt and SHASS Dean Melissa Nobles speak with Wall Street Journal reporter Sara Castellanos about MIT’s efforts to advance the study of AI and its ethical and societal implications through the MIT Stephen A. Schwarzman College of Computing. Schmidt says this work “requires a deep partnership between the technologists and the humanists.”

BBC News

Prof. Aleksander Madry and graduate student Anish Athalye speak with BBC News reporter Linda Geddes about how AI systems can be tricked into seeing or hearing things that aren’t actually there. “People are looking at it as a potential security issue as these systems are increasingly being deployed in the real world,” Athalye explains.

Gizmodo

Gizmodo reporter Jennings Brown writes that researchers from the MIT Media Lab are developing a machine learning system that can generate addresses for regions of the planet that don’t have a recognized address system. Brown explains that the researchers “compared their results to an unmapped suburban region and found that their system labeled more than 80 percent of the populated portions.”

TechCrunch

CSAIL researchers have developed a new technique to recreate paintings from a single photograph, reports John Biggs for TechCrunch. “The project uses machine learning to recreate the exact colors of each painting and then prints it using a high-end 3D printer that can output thousands of colors using half-toning,” Biggs explains.