
Topic: Behavior


Displaying 1 - 15 of 166 news clips related to this topic.

Salon

Researchers from MIT and elsewhere have suggested that “the impact of news that is factually inaccurate — including fake news, misinformation and disinformation — pales in comparison to the impact of news that is factually accurate but misleading,” reports Sandra Matz for Salon. “According to researchers, for example, the impact of slanted news stories encouraging vaccine skepticism during the COVID-19 pandemic was about 46-fold greater than that of content flagged as fake by fact-checkers,” writes Matz.

Knowable Magazine

Knowable Magazine reporter Katherine Ellison spotlights Future You, a new program developed by researchers at MIT that “offers young people a chance to chat with an online, AI-generated simulation of themselves at age 60.” 

VICE

Researchers at MIT and elsewhere have developed “Future You,” a platform that uses generative AI to let users talk with a simulation of their potential future self, reports Sammi Caramela for Vice. The research team hopes “talking to a relatable, virtual version of your future self about your current stressors, future goals, and your beliefs can improve anxiety, quell any obsessive thoughts, and help you make better decisions,” writes Caramela.

The Hill

Researchers from MIT and Oxford University have found that “social media platforms’ suspensions of accounts may not be rooted in political biases, but rather certain political groups’ tendency to share misinformation,” reports Miranda Nazzaro for The Hill. “Thus, even under politically neutral anti-misinformation policies, political asymmetries in enforcement should be expected,” the researchers wrote. “Political imbalance in enforcement need not imply bias on the part of social media companies implementing anti-misinformation policies.”

Financial Times

A new working paper by MIT Prof. Antoinette Schoar and Brandeis Prof. Yang Sun explores how different people react to financial advice, reports Robin Wigglesworth for the Financial Times. “The results indicate that most people do update their beliefs in the direction of the advice they receive, irrespective of their previous views,” writes Wigglesworth.

New Scientist

Researchers at MIT and elsewhere have found that “human memories can be distorted by photos and videos edited by artificial intelligence,” reports Matthew Sparkes for New Scientist. “I think the worst part here, that we need to be aware or concerned about, is when the user isn’t aware of it,” says postdoctoral fellow Samantha Chan. “We definitely have to be aware and work together with these companies, or have a way to mitigate these effects. Maybe have sort of a structure where users can still control and say ‘I want to remember this as it was’, or at least have a tag that says ‘this was a doctored photo, this was a changed photo, this was not a real one’.”

TechCrunch

TechCrunch reporters Kyle Wiggers and Devin Coldewey spotlight a new generative AI model developed by MIT researchers that can help counteract conspiracy theories. The researchers “had people who believed in conspiracy-related statements talk with a chatbot that gently, patiently, and endlessly offered counterevidence to their arguments,” explain Wiggers and Coldewey. “These conversations led the humans involved to state a 20% reduction in the associated belief two months later.”

Newsweek

New research by Prof. David Rand and his colleagues has utilized generative AI to address conspiracy theory beliefs, reports Marie Boran for Newsweek. “The researchers had more than 2,000 Americans interact with ChatGPT about a conspiracy theory they believe in,” explains Boran. “Within three rounds of conversation with the chatbot, participants’ belief in their chosen conspiracy theory was reduced by 20 percent on average.”

Popular Science

A new study by researchers from MIT and elsewhere tested a generative AI chatbot’s ability to debunk conspiracy theories, reports Mack Degeurin for Popular Science. “In the end, conversations with the chatbot reduced the participant’s overall confidence in their professed conspiracy theory by an average of 20%,” writes Degeurin.

Los Angeles Times

A new study by researchers from MIT and elsewhere has found that an AI chatbot is capable of combating conspiracy theories, reports Karen Kaplan for the Los Angeles Times. The researchers found that conversations with the chatbot made people “less generally conspiratorial,” says Prof. David Rand. “It also increased their intentions to do things like ignore or block social media accounts sharing conspiracies, or, you know, argue with people who are espousing those conspiracy theories.”

The New York Times

A new chatbot developed by MIT researchers aimed at persuading individuals to stop believing unfounded conspiracy theories has made “significant and long-lasting progress at changing people’s convictions,” reports Teddy Rosenbluth for The New York Times. The chatbot, dubbed DebunkBot, challenges the “widely held belief that facts and logic cannot combat conspiracy theories.” Professor David Rand explains: “It is the facts and evidence themselves that are really doing the work here.”

Mashable

A new study by Prof. David Rand and his colleagues has found that chatbots, powered by generative AI, can help people abandon conspiracy theories, reports Rebecca Ruiz for Mashable. “Rand and his co-authors imagine a future in which a chatbot might be connected to social media accounts as a way to counter conspiracy theories circulating on a platform,” explains Ruiz. “Or people might find a chatbot when they search online for information about viral rumors or hoaxes thanks to keyword ads tied to certain conspiracy search terms.” 

Forbes

Researchers at MIT have found that knowledge spillovers are more likely to occur when people are within 20 meters of one another, reports Tracy Brower for Forbes. Knowledge spillover “can occur intentionally—when you ask a question or gather around a white board to work through issues,” explains Brower. “Or it can be unintentional—when you’re near your team and you overhear a great idea or get passive exposure to the work going on with others.” 

New York Times

Prof. Simon Johnson and Prof. David Autor speak with New York Times reporter Emma Goldberg about the anticipated impact of AI on the job market. “We should be concerned about eliminating them,” says Johnson of the risks posed by automating middle-class jobs. “This is the hollowing out of the middle class.”

Scientific American

Prof. Sherry Turkle shares the benefits of being polite when interacting with AI technologies, reports Webb Wright for Scientific American, underscoring the risks of becoming habituated to using crass, disrespectful and dictatorial language. “We have to protect ourselves,” says Turkle. “Because we’re the only ones that have to form relationships with real people.”