
Topic

Social media



Motherboard

MIT researchers examined why a third of Wikipedia deliberations go unresolved and developed a new tool that could be used to help resolve more discussions, reports Samantha Cole for Motherboard. Cole explains that “the tool uses the data they found and analyzed in this research, to summarize threads and predict when they’re at risk of going stale.”

The Conversation

Writing for The Conversation, Prof. Tauhid Zaman discusses his research showing how a small number of very active social media bots can have a significant impact on public opinion. Zaman notes that his findings are “a reminder to be careful about what you read – and what you believe – on social media.”

Marketplace

Prof. Dean Eckles speaks with Marketplace reporter Sabri Ben-Achour about the significance of engagement among existing users on social media platforms like Snapchat and Facebook. “Are people sharing things their friends are going to want to see?” says Eckles. “How many users on Snap are actually sending new snaps?”

Boston Globe

Hiawatha Bray of The Boston Globe writes that fake news articles are destined for the same fate as spam emails thanks to research from MIT postdoc Ramy Baly, who is developing software to flag fake news sites. Baly hopes to “create a consumer news app that would direct users to reliable news sources from every point on the political compass.”

Fast Company

Researchers from MIT and the Qatar Computing Research Institute have developed a machine learning tool that can identify fake news, reports Steven Melendez for Fast Company. Melendez writes that the system “uses a machine learning technique known as support vector machines to learn to predict how media organizations will be classified by Media Bias/Fact Check.”

Marketplace

Prof. Sinan Aral speaks with Marketplace reporter Molly Wood about the proliferation of fake news. “If platforms like Facebook are to be responsible for the spread of known falsities, then they could use policies, technologies or algorithms to reduce or dampen the spread of this type of news, which may reduce the incentive to create it in the first place,” Aral explains.

CommonHealth (WBUR)

WBUR’s Carey Goldberg recommends a video with neuroscientists at the McGovern Institute “for a quick, light and smart explanation” of the ‘Yanny vs. Laurel’ debate. “The same acoustic information is hitting everyone’s ears,” says graduate student Kevin Sitek. “But the brain is then going to interpret that differently, based on experience.”

US News & World Report

Coryanne Hicks of U.S. News & World Report highlights research by Prof. Andrew Lo and graduate student Pablo Azar in an article about using Twitter to spot financial trends. The study predicted market shifts based on the emotional context of tweets, finding that "when people start to get nervous, you can detect that very clearly," says Lo.

NBC

Graduate student Jonny Sun speaks with Seth Meyers on Late Night with Seth Meyers about his new book, “Everyone's a Aliebn When Ur a Aliebn Too.” The book, which follows an alien who comes to Earth and learns to celebrate people’s differences, features intentional typos to emphasize “a common theme throughout the story…that it’s ok to be imperfect,” says Sun.

NPR

NPR’s Laurel Wamsley speaks with Ethan Zuckerman, director of MIT’s Center for Civic Media, about how to create a better social network. Zuckerman, reports Wamsley, thinks a decentralized structure is part of the answer and says the algorithm that sits behind the news feed is ripe for reinvention.

The Boston Globe

The Boston Globe highlights some of the notable speakers who will deliver remarks at commencements across New England in the coming weeks, including Sheryl Sandberg, COO of Facebook, who will speak at MIT’s ceremony.

US News & World Report

A study led by research scientist Nick Obradovich found that people’s behavior on social media may be influenced by weather conditions. “Positive posts increased as the temperature rose,” reports Robert Preidt in US News & World Report, but “precipitation, humidity levels of 80 percent or higher, and high amounts of cloud cover were associated with a greater number of negative posts.”

The Verge

Squadbox, developed by graduate student Amy Zhang, allows a user’s “squad” to sift through online messages and scan for contextual harassment language that software might miss. “Squadbox currently only works with email,” Shannon Liao writes for The Verge. “[B]ut the team behind it hopes to eventually expand to other social media platforms.”

co.design

Graduate student Amy Zhang has developed Squadbox, an application that seeks to disarm internet harassers by enlisting the help of a user’s friends, who act as inbox “moderators.” “According to what the harassed person has specified beforehand, the moderator can delete any abusive messages, forward on clean messages, or send along messages with tags,” writes Katharine Schwab for Co.Design.

The Guardian

Researchers from the Media Lab and Sloan found that humans are more likely than bots to be “responsible for the spread of fake news,” writes Paul Chadwick for The Guardian. “More openness by the social media giants and greater collaboration by them with suitably qualified partners in tackling the problem of fake news is essential.”