

Information spreading


13 news clips related to this topic.

Bloomberg

Bloomberg Opinion columnist Parmy Olson spotlights a new study by MIT researchers that finds AI chatbots can be highly persuasive when reinforced with facts and could potentially be used to help tackle conspiracy theories. “The scientists invited more than 2,000 people who believed in different conspiracy theories to summarize their positions to a chatbot — powered by OpenAI’s latest publicly available language model — and briefly debate them with the bot,” Olson writes. “On average, participants subsequently described themselves as 20% less confident in the conspiracy theory; their views remained softened even two months later.” 

The Washington Post

Washington Post columnist Dana Milbank spotlights postdoctoral research associate Brian Guay’s research examining why “Republicans share between 200 percent and 500 percent more fake news (fabrications published by sites masquerading as news outlets) than Democrats.” Guay explains that “the issue primarily seems to be a supply issue. There’s just way more fake news on the right than the left.”

Mashable

A new working paper by researchers from MIT and Yale finds that “80 percent of Americans think social media companies should take action to reduce the spread of misinformation,” reports Rachel Kraus for Mashable. “Our data suggests [Musk’s views] are not representative,” says Prof. David Rand. “A lot of people in Silicon Valley have this kind of maybe libertarian, extreme free speech orientation that I don’t think is in line with actually how most Americans and social media platform users think about things.”

The Washington Post

Writing for The Washington Post, Prof. Sinan Aral explores the information war underway over traditional and social media about the Russian invasion of Ukraine. “While it is hard to pinpoint the extent to which the information war is contributing to the overwhelming international unity against Putin’s aggression,” writes Aral, “one thing is clear: Social media, mainstream media and the narrative framing of the invasion of Ukraine undoubtedly will play an important role in how this conflict ends.”

STAT

STAT reporter Faye Flam spotlights research from Prof. David Rand, University of Regina Prof. Gordon Pennycook and their colleagues showing that people “really want to share accurate information but give in to the temptation to share juicy bits of gossip they think will please their friends or that make them look good.”

The Boston Globe

Boston Globe reporter Kevin Lewis spotlights how MIT researchers asked thousands of Democrats and Republicans to rate the reliability of nonpolitical news headlines. “People genuinely believe that opposing partisans are more gullible, even when that stereotype is costly to them,” writes Lewis. “On the other hand, that stereotype can be corrected with evidence.”

Salon

Salon reporter Amanda Marcotte spotlights a study by MIT researchers that finds correcting misinformation on social media platforms often leads to people sharing more misinformation. Research affiliate Mohsen Mosleh explains that after being corrected, Twitter users “retweeted news that was significantly lower in quality and higher in partisan slant, and their retweets contained more toxic language.”

Fast Company

Fast Company reporter Mark Wilson writes that a new study by researchers from MIT and Google finds that simple user experience interventions can help stop people from sharing misinformation about Covid-19. “Researchers introduced several different prompts through a simple popup window, all with a single goal: to get people to think about the accuracy of what they’re about to share,” writes Wilson. “When primed to consider a story’s accuracy, people were up to 20% less likely to share a piece of fake news.”

Motherboard

A new study by MIT researchers finds that correcting Twitter users who spread misinformation led them to retweet and share even more misinformation, reports Matthew Gault for Motherboard. Prof. David Rand explains that the research is aimed at identifying “what kinds of interventions increase versus decrease the quality of news people share. There is no question that social media has changed the way people interact. But understanding how exactly it’s changed things is really difficult.”

The New York Times

In an opinion piece for The New York Times, Prof. Nicholas Ashford calls for creating systems that could help address the spread of misinformation in broadcast media. “Public trust in the media industry has been declining for years,” writes Ashford. “It can be restored by securing media companies’ commitment to practicing fact-checking and presenting contrasting perspectives on issues important to news consumers.”

National Geographic

National Geographic reporters Monique Brouillette and Rebecca Renner spotlight Prof. Sinan Aral’s research exploring why untrue information tends to spread so quickly. “Human attention is drawn to novelty, to things that are new and unexpected,” says Aral. “We gain in status when we share novel information because it looks like we’re in the know, or that we have access to inside information.”

The New York Times

In an article for The New York Times, Prof. David Rand examines what makes people susceptible to believing false or misleading information. Rand and his co-author write that their research “suggests that the solution to politically charged misinformation should involve devoting resources to the spread of accurate information and to training or encouraging people to think more critically.”

The Atlantic

In an article for The Atlantic, Prof. Ethan Zuckerman proposes creating a public social media platform that focuses on “aggregating and curating, pushing unfamiliar perspectives into our feeds and nudging us to diversify away from the ideologically comfortable material we all gravitate towards.”