
Topic: Social media


NECN

Graduate student Nouran Soliman speaks with NBC Boston about the use of “personhood credentials,” a new technique that can be used to verify online users as human beings to help combat issues such as fraud and misinformation. “We are trying to also think about ways of implementing a system that incorporates personal credentials in a decentralized way,” explains Soliman. “It's also important not to have the power in one place because that compromises democracy.” 

The Hill

Researchers from MIT and Oxford University have found that “social media platforms’ suspensions of accounts may not be rooted in political biases, but rather certain political groups’ tendency to share misinformation,” reports Miranda Nazzaro for The Hill. “Thus, even under politically neutral anti-misinformation policies, political asymmetries in enforcement should be expected,” the researchers wrote. “Political imbalance in enforcement need not imply bias on the part of social media companies implementing anti-misinformation policies.”

New York Times

Prof. David Rand speaks with New York Times reporters Tiffany Hsu and Stuart A. Thompson about the challenges of stopping the spread of misinformation. “It seems like an easy enough problem: there’s the true stuff and there’s the false stuff, and if the platforms cared about it, they would just get rid of the false stuff,” says Rand. “Then we started working on it and it was like, ‘Oh God.’ It’s actually way more complicated.”

New Scientist

Postdoc Xuhai Xu and his colleagues have developed an AI program that can deliver pop-up reminders to help limit smartphone screen time, reports Jeremy Hsu for New Scientist. As Hsu explains, “a random notification to stop doomscrolling won’t always tear someone away from their phone. But machine learning can personalize that intervention so it arrives at the moment when it is most likely to work.”

Wired

MIT researchers have found that users of a tool developed to fight misinformation on X were “much more likely to fact-check posts expressing political views that differ from their own,” report Victoria Elliott and David Gilbert for Wired. Prof. David Rand explains that “while around 80 percent of the tweets that users chose to annotate were, in fact, misleading, users overwhelmingly tended to prioritize political content.”

The Atlantic

Writing for The Atlantic, Prof. Deb Roy makes the case that “new kinds of social networks can be designed for constructive communication—for listening, dialogue, deliberation, and mediation—and they can actually work.” Roy adds: “We can and should create social networks designed for public discourse that prioritize inclusion, where underheard voices and perspectives can flourish, and where people take and offer disagreement in good faith.”

CBS Boston

Graduate student Kaylee Cunningham speaks with CBS Boston about her work using social media to help educate and inform the public about nuclear energy. Cunningham, who is known as Ms. Nuclear Energy on TikTok, recalls how as a child she was involved in musical theater, a talent she has now combined with her research interests as an engineer. She also hopes her platform inspires more women to pursue STEM careers. “You don't have to look like the stereotypical engineer,” Cunningham emphasizes.

Scientific American

Prof. Alex Pentland and Alex Lipton, a Connection Science Fellow at MIT, write for Scientific American about how social media can impact financial systems. “Before Twitter and Facebook, a spooked investor or customer would have to call, personally visit or even e-mail and text colleagues to urge them to withdraw funds from a troubled bank,” explain Pentland and Lipton. “Nowadays sophisticated clients can act as soon as they read a Tweet. Social media alerts everyone all at once, and a few clicks on a computer screen can wipe an account clean.”

GBH

Institute Prof. Daron Acemoglu and Prof. Aleksander Mądry join GBH’s Greater Boston to explore how AI can be regulated and safely integrated into our lives. “With much of our society driven by informational spaces — in particular social media and online media in general — AI and, in particular, generative AI accelerates a lot of problems like misinformation, spam, spear phishing and blackmail,” Mądry explains. Acemoglu adds that he feels AI reforms should be approached “more broadly so that AI researchers actually work in using these technologies in human-friendly ways, trying to make humans more empowered and more productive.”

Scientific American

Prof. Alexey Makarin and his colleagues have found that following the arrival of Facebook, depression and anxiety rose and academic performance declined across U.S. colleges, reports Jesse Greenspan for Scientific American. “Makarin says much of the harm they documented came from social comparisons,” explains Greenspan.

National Public Radio (NPR)

Prof. Alexey Makarin speaks with NPR’s Michaeleen Doucleff about his research examining the impact of social media on teen mental health. "The body of literature seems to suggest that indeed, social media has negative effects on mental health, especially on young adults' mental health," says Makarin.

Politico

Prof. Aleksander Mądry’s testimony before a House subcommittee was highlighted by Politico fellow Mohar Chatterjee in a recent newsletter exploring how large tech companies are dominating how generative AI technologies are developed and utilized. During his testimony, Mądry emphasized that “very few players will be able to compete, given the highly specialized skills and enormous capital investments the building of such systems requires.”

Financial Times

Writing for the Financial Times, Prof. David Rand explores how social media platforms could channel partisan motivations to help moderate the spread of misinformation online. “Combating misinformation is a challenge requiring a wide range of approaches,” writes Rand. “Our work suggests that an important route for social media companies to save democracy from misinformation is to democratize the moderation process itself.”

Times Higher Education

Writing for Times Higher Education, Prof. Andres Sevtsuk explores how campus design can boost communication and exchange between researchers. “Low-rise, high-density buildings with interconnected walkways and shared public spaces are more likely to maximize encounters,” writes Sevtsuk. “In colder climates, having indoor walking paths between buildings can help ensure that encounters continue during colder parts of the year.”

Politico

At MIT’s AI Policy Forum Summit, which focused on the challenges facing the implementation of AI technologies across a variety of sectors, SEC Chair Gary Gensler and MIT Schwarzman College of Computing Dean Daniel Huttenlocher discussed the impact of AI on the world of finance. “If someone is relying on OpenAI, that's a concentrated risk and a lot of fintech companies can build on top of it,” Gensler said. “Then you have a node that's every bit as systemically relevant as maybe a stock exchange.”