
Topic

Diversity and inclusion


Displaying 181 - 195 of 210 news clips related to this topic.

WGBH

A recent study by Media Lab graduate student Joy Buolamwini examines errors in facial recognition software that raise concerns for civil liberties. “If programmers are training artificial intelligence on a set of images primarily made up of white male faces, their systems will reflect that bias,” writes Cristina Quinn for WGBH.

Boston Magazine

Spencer Buell of Boston Magazine speaks with graduate student Joy Buolamwini, whose research shows that many AI programs are unable to recognize non-white faces. “‘We have blind faith in these systems,’ she says. ‘We risk perpetuating inequality in the guise of machine neutrality if we’re not paying attention.’”

NBC Boston

NBC Boston reporter Frank Holland visits MIT to discuss the Institute’s ties to slavery, which is the subject of a new undergraduate research course. “The MIT and Slavery class is pushing us into a national conversation, a conversation that’s well underway in the rest of the country regarding the role of slavery and institutions of higher learning,” said Dean Melissa Nobles.

Boston 25 News

Mel King, who founded the Community Fellows Program in 1996, spoke to Crystal Haynes at Boston 25 News for a feature about his lifelong efforts to promote inclusion and equal access to technology. Haynes notes that King, a senior lecturer emeritus at MIT, “is credited with forming Boston into the city it is today; bringing groups separated by race, gender and sexuality together in a time when it was not only unexpected, but dangerous.”

The Economist

An article in The Economist states that new research by MIT graduate student Joy Buolamwini supports the suspicion that facial recognition software is better at processing white faces than those of other people. The bias probably arises “from the sets of data the firms concerned used to train their software,” the article suggests.

Quartz

Dave Gershgorn writes for Quartz, highlighting Congress’s concerns about the dangers of inaccurate facial recognition programs. He cites Joy Buolamwini’s Media Lab research on facial recognition, which he says “maintains that facial recognition is still significantly worse for people of color.”

New Scientist

Graduate student Joy Buolamwini tested three different face-recognition systems and found that accuracy is highest when the subject is a lighter-skinned man, reports Timothy Revell for New Scientist. With facial recognition software being used by police to identify suspects, “this means inaccuracies could have consequences, such as systematically ingraining biases in police stop and searches,” writes Revell.

Marketplace

Molly Wood at Marketplace speaks with Media Lab graduate student Joy Buolamwini about the findings of her recent research, which examined widespread bias in AI-supported facial recognition programs. “At the end of the day, data reflects our history, and our history has been very biased to date,” Buolamwini said.

Co.Design

Recent research from graduate student Joy Buolamwini shows that facial recognition programs, which are increasingly being used by law enforcement, are failing to identify non-white faces. “When these systems can’t recognize darker faces with as much accuracy as lighter faces, there’s a higher likelihood that innocent people will be targeted by law enforcement,” writes Katharine Schwab for Co.Design.

Gizmodo

Writing for Gizmodo, Sidney Fussell explains that a new Media Lab study finds facial-recognition software is most accurate when identifying men with lighter skin and least accurate for women with darker skin. The software analyzed by graduate student Joy Buolamwini “misidentified the gender of dark-skinned females 35 percent of the time,” explains Fussell.

Quartz

A study co-authored by MIT graduate student Joy Buolamwini finds that facial-recognition software is less accurate when identifying darker skin tones, especially those of women, writes Josh Horwitz of Quartz. According to the study, these errors could cause AI services to “treat individuals differently based on factors such as skin color or gender,” explains Horwitz.

The New York Times

Steve Lohr writes for the New York Times about graduate student Joy Buolamwini’s findings on the biases of artificial intelligence in facial recognition. “You can’t have ethical A.I. that’s not inclusive,” Buolamwini said. “And whoever is creating the technology is setting the standards.”

NPR

Graduate student Joy Buolamwini is featured on NPR’s TED Radio Hour explaining the racial bias of facial recognition software and how these problems can be rectified. “The minimum thing we can do is actually check for the performance of these systems across groups that we already know have historically been disenfranchised,” says Buolamwini.

Smithsonian Magazine

In an article co-written for Smithsonian, Prof. John Van Reenen writes about an analysis he and his colleagues conducted examining how socioeconomic background, race and gender can impact a child’s chances of becoming an inventor. The researchers found that “young people’s exposure to innovators may be an important way to reduce these disparities and increase the number of inventors.”

Boston Globe

Prof. Junot Díaz speaks with Boston Globe reporter James Sullivan about his new children’s book, “Islandborn.” The book was inspired by two of his godchildren, who asked him to write a book featuring kids who looked like them. Díaz related to their request, noting that as a child, he felt “the world I was immersed in wasn’t represented at all.”