
Equity and inclusion



Quartz

Quartz reporter Leah Fessler writes that Facebook COO Sheryl Sandberg’s Commencement address at MIT featured a call for graduates to help create more inclusive technologies and workplaces. “It’s not the technology you build that will define you. It’s the teams you build and what people do with the technology you build,” Sandberg advised.

Yahoo! News

Facebook COO Sheryl Sandberg urged MIT graduates to be “clear-eyed optimists” while speaking at MIT’s 2018 Commencement exercises, reports Ethan Wolff-Mann for Yahoo! Finance. “It’s not enough to be technologists. We have to make sure that technology serves people,” she said.

Associated Press

AP reporter Collin Binkley writes that during her Commencement address at MIT, Facebook COO Sheryl Sandberg called for equality in the technology sector. "Build workplaces where everyone — everyone — is treated with respect," she said. "We need to stop harassment and hold both perpetrators and enablers accountable. And we need to make a personal commitment to stop racism and sexism."

Newsweek

To demonstrate that the data used to train a machine learning algorithm can greatly influence its behavior, MIT researchers trained an AI system on gruesome and violent content, writes Benjamin Fearnow for Newsweek. The result is “Norman,” an AI system in which “empathy logic simply failed to turn on,” explains Fearnow.

HuffPost

HuffPost reporter Thomas Tamblyn writes that MIT researchers developed a new AI system that sees the worst in humanity to illustrate what happens when bias enters the machine learning process. “An AI learns only what it is fed, and if the humans that are feeding it are biased (consciously or not) then the results can be extremely problematic,” explains Tamblyn.

Forbes

Forbes contributor Frederick Daso describes how two female MBA students at the MIT Sloan School of Management, Preeti Sampat and Jaida Yang, started their own venture capital firm in an effort to “bridge the geographical and diversity gaps in the current early-stage investing ecosystem.”

The Atlantic

Writing for The Atlantic, MIT lecturer Amy Carleton describes the focus on public policy, as well as engineering and product design, at this year’s “Make the Breast Pump Not Suck” hackathon. “What emerged [at the inaugural hackathon] was an awareness that the challenges surrounding breastfeeding were not just technical and equipment-based,” explains Carleton.

WGBH

A recent study by Media Lab graduate student Joy Buolamwini documents errors in facial recognition software that raise civil liberties concerns. “If programmers are training artificial intelligence on a set of images primarily made up of white male faces, their systems will reflect that bias,” writes Cristina Quinn for WGBH.

Boston Magazine

Spencer Buell of Boston Magazine speaks with graduate student Joy Buolamwini, whose research shows that many AI programs are unable to recognize non-white faces. “We have blind faith in these systems,” she says. “We risk perpetuating inequality in the guise of machine neutrality if we’re not paying attention.”

NBC Boston

NBC Boston reporter Frank Holland visits MIT to discuss the Institute’s ties to slavery, which is the subject of a new undergraduate research course. “The MIT and Slavery class is pushing us into a national conversation, a conversation that’s well underway in the rest of the country regarding the role of slavery and institutions of higher learning,” said Dean Melissa Nobles.

Boston 25 News

Mel King, who founded the Community Fellows Program in 1996, spoke to Crystal Haynes at Boston 25 News for a feature about his lifelong efforts to promote inclusion and equal access to technology. Haynes notes that King, a senior lecturer emeritus at MIT, “is credited with forming Boston into the city it is today; bringing groups separated by race, gender and sexuality together in a time when it was not only unexpected, but dangerous.”

The Economist

An article in The Economist states that new research by MIT grad student Joy Buolamwini supports the suspicion that facial recognition software is better at processing white faces than those of other people. The bias probably arises “from the sets of data the firms concerned used to train their software,” the article suggests.

Quartz

Dave Gershgorn writes for Quartz, highlighting congressional concerns about the dangers of inaccurate facial recognition programs. He cites Joy Buolamwini’s Media Lab research on facial recognition, which he says “maintains that facial recognition is still significantly worse for people of color.”

New Scientist

Graduate student Joy Buolamwini tested three different face-recognition systems and found that accuracy was best when the subject was a lighter-skinned man, reports Timothy Revell for New Scientist. With facial recognition software being used by police to identify suspects, “this means inaccuracies could have consequences, such as systematically ingraining biases in police stop and searches,” writes Revell.

Marketplace

Molly Wood at Marketplace speaks with Media Lab graduate student Joy Buolamwini about the findings of her recent research, which examined widespread bias in AI-supported facial recognition programs. “At the end of the day, data reflects our history, and our history has been very biased to date,” Buolamwini said.