
Topic

Algorithms


Displaying 256 - 270 of 570 news clips related to this topic.

Associated Press

Associated Press reporter Tali Arbel writes that MIT researchers have found that Amazon’s facial detection technology often misidentifies women, particularly women with darker skin. Arbel writes that the study “warns of the potential of abuse and threats to privacy and civil liberties from facial-detection technology.”

The Washington Post

A new study by Media Lab researchers finds that Amazon’s Rekognition facial recognition system performed more accurately when identifying lighter-skinned faces, reports Drew Harwell for The Washington Post. The system “performed flawlessly in predicting the gender of lighter-skinned men,” writes Harwell, “but misidentified the gender of darker-skinned women in roughly 30 percent of their tests.”

The Verge

Verge reporter James Vincent writes that Media Lab researchers have found that the facial recognition system Rekognition performed worse at identifying an individual’s gender if they were female or dark-skinned. In experiments, the researchers found that the system “mistook women for men 19 percent of the time and mistook darker-skinned women for men 31 percent of the time,” Vincent explains.

New York Times

MIT researchers have found that the Rekognition facial recognition system has more difficulty identifying the gender of female and darker-skinned faces than similar services, reports Natasha Singer for The New York Times. Graduate student Joy Buolamwini said “the results of her studies raised fundamental questions for society about whether facial technology should not be used in certain situations,” writes Singer.

TechCrunch

TechCrunch reporter John Biggs writes that MIT researchers have developed a new system that allows users to reverse-engineer complex items by deconstructing objects and turning them into 3-D models. Biggs writes that the system is a “surprisingly cool way to begin hacking hardware in order to understand its shape, volume and stability.”

The Wall Street Journal

In an article for The Wall Street Journal, Benjamin Powers highlights Affectiva and Koko, two MIT startups developing AI systems that respond to human emotions.

The Wall Street Journal

Provost Martin Schmidt and SHASS Dean Melissa Nobles speak with Wall Street Journal reporter Sara Castellanos about MIT’s efforts to advance the study of AI and its ethical and societal implications through the MIT Stephen A. Schwarzman College of Computing. Schmidt says this work “requires a deep partnership between the technologists and the humanists.”

WGBH

Graduate student Irene Chen speaks with WGBH’s Living Lab Radio about her work trying to reduce bias in health care algorithms. “The results that we’ve shown from healthcare algorithms are so powerful that we really do need to see how we could implement those carefully, safely, robustly and fairly,” she explains.

Gizmodo

Gizmodo reporter Jennings Brown writes that researchers from the MIT Media Lab are developing a machine learning system that can generate addresses for regions of the planet that don’t have a recognized address system. Brown explains that the researchers “compared their results to an unmapped suburban region and found that their system labeled more than 80 percent of the populated portions.”

Forbes

Forbes reporter Samar Marwan speaks with Rana el Kaliouby, CEO and cofounder of the MIT startup Affectiva, about her work developing new technology that can read human facial expressions. Marwan explains that el Kaliouby and Prof. Rosalind Picard started developing the technology at MIT “to focus on helping children on the autism spectrum better understand how other people were feeling.”

TechCrunch

MIT researchers have developed a new system to detect contaminated food by scanning a product’s RFID tags, reports Devin Coldewey for TechCrunch. The system can “tell the difference between pure and melamine-contaminated baby formula, and between various adulterations of pure ethyl alcohol,” Coldewey explains.

Boston Globe

Boston Globe reporter Katie Johnston speaks with several MIT researchers about their work developing technology that is aimed at improving collaboration between humans and robots. Prof. Julie Shah notes that offloading easier decisions onto a machine “would allow people to focus on the parts of the job that truly require human judgment and experience.”

Boston Herald

Boston Herald reporter Jordan Graham writes that MIT researchers have developed an autonomous system that allows fleets of drones to navigate without GPS and could be used to help find missing hikers. “What we’re trying to do is automate the search part of the search-and-rescue problem with a fleet of drones,” explains graduate student Yulun Tian.

Xinhuanet

MIT researchers have developed a language translation model that operates without human annotations and guidance, reports Liangyu for Xinhua news agency. The system, which may enable computer-based translations of the thousands of languages spoken worldwide, is “a step toward one of the major goals of machine translation, which is fully unsupervised word alignment,” Liangyu explains.

Fortune

Fortune reporter Aaron Pressman highlights how Prof. Julie Shah is working on making human-robot collaboration on the assembly line more effective through the use of collaborative robots, dubbed cobots. Pressman writes that Shah “is working on software algorithms developed with machine learning that will teach cobots how and when to communicate by reading signals from the humans around them.”