The Conversation

Writing for The Conversation, postdoc Ziv Epstein SM ’19, PhD ’23, graduate student Robert Mahari and Jessica Fjeld of Harvard Law School explore how the use of generative AI will impact creative work. “The ways in which existing laws are interpreted or reformed – and whether generative AI is appropriately treated as the tool it is – will have real consequences for the future of creative expression,” the authors note.  

Politico

Prof. Cynthia Breazeal discusses her work exploring how artificial intelligence can help students impacted by Covid, including refugees or children with disabilities, reports Ryan Heath for Politico. “We want to be super clear on what the role is of the robot versus the community, of which this robot is a part of. That's part of the ethical design thinking,” says Breazeal. “We don't want to have the robot overstep its responsibilities. All of our data that we collect is protected and encrypted.”

The Washington Post

Writing for The Washington Post, Prof. Sinan Aral explores the information war underway over traditional and social media about the Russian invasion of Ukraine. “While it is hard to pinpoint the extent to which the information war is contributing to the overwhelming international unity against Putin’s aggression,” writes Aral, “one thing is clear: Social media, mainstream media and the narrative framing of the invasion of Ukraine undoubtedly will play an important role in how this conflict ends.”

Los Angeles Times

Assia Boundaoui, a fellow at the MIT Open Documentary Lab, writes for the Los Angeles Times about her experience as a Muslim American filmmaker. “Despite the many ways we have been marginalized within the film industry, Muslim and Middle Eastern filmmakers will continue to tell our stories – stories where our humanity is assumed, not a subject of debate,” writes Boundaoui.

Bloomberg

Prof. David Rand and Prof. Gordon Pennycook of the University of Regina in Canada found that social media users shared more accurate content after first being asked to rate the accuracy of a news headline, reports Faye Flam for Bloomberg. “It’s not necessarily that [users] don’t care about accuracy. But instead, it’s that the social media context just distracts them, and they forget to think about whether it’s accurate or not before they decide to share it,” says Rand.

New York Times

In an opinion piece for The New York Times, Prof. Nicholas Ashford calls for creating systems that could help address the spread of misinformation in broadcast media. “Public trust in the media industry has been declining for years,” writes Ashford. “It can be restored by securing media companies’ commitment to practicing fact-checking and presenting contrasting perspectives on issues important to news consumers.”

Quartz

Quartz reporter Nicolás Rivero highlights a study co-authored by Prof. David Rand that examines the effectiveness of labeling fake news on social media platforms. “I think most people working in this area agree that if you put a warning label on something, that will make people believe and share it less,” says Rand. “But most stuff doesn’t get labeled, so that’s a major practical limitation of this approach.”

The Boston Globe

Writing for The Boston Globe, Prof. D. Fox Harrell, Francesca Panetta and Pakinam Amer of the MIT Center for Advanced Virtuality explore the potential dangers posed by deepfake videos. “Combatting misinformation in the media requires a shared commitment to human rights and dignity — a precondition for addressing many social ills, malevolent deepfakes included,” they write.

Fortune

Researchers at MIT’s Center for Advanced Virtuality have created a deepfake video of President Richard Nixon delivering a contingency speech about a failed moon landing. “[The video is] meant to serve as a warning of the coming wave of impressively realistic deepfake false videos about to hit us that use A.I. to convincingly reproduce the appearance and sound of real people,” write Aaron Pressman and David Z. Morris for Fortune.

Boston 25 News

Boston 25’s Chris Flanagan reports that MIT researchers developed a website aimed at educating the public about deepfake technology and misinformation. “This project is part of an awareness campaign to get people aware of what is possible with both AI technologies like our deepfake, but also really simple video editing technologies,” says Francesca Panetta, XR creative director at MIT’s Center for Advanced Virtuality.

Space.com

MIT researchers created a deepfake video and website to help educate the public about the dangers of deepfakes and misinformation, reports Mike Wall for Space.com. “This alternative history shows how new technologies can obfuscate the truth around us, encouraging our audience to think carefully about the media they encounter daily,” says Francesca Panetta, XR creative director at MIT’s Center for Advanced Virtuality.

Scientific American

Scientific American explores how MIT researchers created a new website aimed at exploring the potential perils and possibilities of deepfakes. “One of the things I most love about this project is that it’s using deepfakes as a medium and the arts to address the issue of misinformation in our society,” says Prof. D. Fox Harrell.

Fast Company

A study co-authored by MIT researchers finds that asking social media users to evaluate the accuracy of news headlines can reduce the spread of Covid-19 misinformation. “Asking users to rate content gets them to think about accuracy and generates useful input for the platforms,” explains Prof. David Rand.

Quartz

In an article for Quartz about the role the media will play in influencing voters in India’s upcoming general election, Sahil Wajid highlights Prof. Emeritus Noam Chomsky’s book, “Manufacturing Consent: The Political Economy of the Mass Media.” The book is a “seminal work on systemic bias afflicting the corporate news industry,” writes Wajid.

Salon

Prof. Thomas Malone writes for Salon about Elon Musk’s recent proposal to create a “media credibility rating site” where the public could rate journalists and media outlets. “Crowdsourcing can work—even when most people in the crowd can’t do the task well—if there is some independent way of recognizing and giving special weight to the crowd members who are good at the task.”