
Topic: Diversity and inclusion


Displaying 151–165 of 210 news clips related to this topic.

Wired

Wired reporter Lily Hay Newman highlights graduate student Joy Buolamwini’s Congressional testimony about the bias of facial recognition systems. “New research is showing bias in the use of facial analysis technology for health care purposes, and facial recognition is being sold to schools,” said Buolamwini. “Our faces may well be the final frontier of privacy.” 

WBUR

WBUR reporter Pamela Reynolds highlights graduate student Joy Buolamwini’s piece, “The Coded Gaze,” which is currently on display as part of the “Avatars//Futures” exhibit at the Nave Gallery. Reynolds writes that Buolamwini’s piece “questions the inherent bias of coding in artificial intelligence, which has resulted in facial recognition technology unable to recognize black faces.”

Forbes

The Sloan School of Management and the Ruderman Family Foundation’s LINK20 have started a new week-long program aimed at equipping social justice and inclusion advocates “with theories and strategies in the areas of digital leadership, networking and entrepreneurship to become high-impact social influencers,” reports Sarah Kim for Forbes.

Time

Graduate student Joy Buolamwini writes for TIME about the need to tackle gender and racial bias in AI systems. “By working to reduce the exclusion overhead and enabling marginalized communities to engage in the development and governance of AI, we can work toward creating systems that embrace full spectrum inclusion,” writes Buolamwini.

Fast Company

In an article for Fast Company about hackathons, Dan Formosa highlights how the Make the Breast Pump Not Suck Hackathon held at MIT was an inclusive event focused on addressing issues of bias, inequality and accessibility, noting how the organizers “went to extremes to assure diversity.”

Wired

Prof. Joi Ito, director of the Media Lab, writes for Wired about how AI systems can help perpetuate longstanding discriminatory practices. “By merely relying on historical data and current definitions of fairness, we will lock in the accumulated unfairnesses of the past,” argues Ito, “and our algorithms and the products they support will always trail the norms.”

Associated Press

Associated Press reporter Tali Arbel writes that MIT researchers have found that Amazon’s facial-detection technology often misidentifies women, particularly those with darker skin. Arbel writes that the study “warns of the potential of abuse and threats to privacy and civil liberties from facial-detection technology.”

The Washington Post

A new study by Media Lab researchers finds that Amazon’s Rekognition facial recognition system performed more accurately when identifying lighter-skinned faces, reports Drew Harwell for The Washington Post. The system “performed flawlessly in predicting the gender of lighter-skinned men,” writes Harwell, “but misidentified the gender of darker-skinned women in roughly 30 percent of their tests.”

The Verge

Verge reporter James Vincent writes that Media Lab researchers have found that the facial recognition system Rekognition performed worse at identifying an individual’s gender if they were female or dark-skinned. In experiments, the researchers found that the system “mistook women for men 19 percent of the time and mistook darker-skinned women for men 31 percent of the time,” Vincent explains.

New York Times

MIT researchers have found that the Rekognition facial recognition system has more difficulty identifying the gender of female and darker-skinned faces than similar services, reports Natasha Singer for The New York Times. Graduate student Joy Buolamwini said “the results of her studies raised fundamental questions for society about whether facial technology should not be used in certain situations,” writes Singer.

WGBH

Graduate student Irene Chen speaks with WGBH’s Living Lab Radio about her work trying to reduce bias in health care algorithms. “The results that we’ve shown from healthcare algorithms are so powerful that we really do need to see how we could implement those carefully, safely, robustly and fairly,” she explains.

Fortune

Fortune reporters Aaron Pressman and Adam Lashinsky highlight graduate student Joy Buolamwini’s work aimed at eliminating bias in AI and machine learning systems. Pressman and Lashinsky note that Buolamwini believes that “who codes matters,” as more diverse teams of programmers could help prevent algorithmic bias. 

Boston Globe

In an effort to promote transparency and knowledge sharing, HUBweek 2018 will feature a semi-permanent glass structure showcasing innovations being developed around the region, reports Cynthia Fernandez for The Boston Globe.

TechCrunch

A study co-authored by MIT researchers finds that robots can develop prejudices against robots outside their own team, writes John Biggs for TechCrunch. The researchers also found that, as with humans, prejudice was reduced when there were “more distinct subpopulations being present within a population.”

Boston Globe

HUBweek, an annual festival focused on ideas for the future, will include a two-day Change Maker Conference this year. J.D. Capelouto writes for The Boston Globe, which co-founded HUBweek along with MIT, that the new event “will address a variety of topics, including enabling technologies, diversity, inclusion and accessibility, and civic thinking.”