
Topic

Behavior


VICE

Researchers at MIT and elsewhere have developed “Future You,” a platform that uses generative AI to let users converse with a simulated version of their potential future selves, reports Sammi Caramela for Vice. The research team hopes “talking to a relatable, virtual version of your future self about your current stressors, future goals, and your beliefs can improve anxiety, quell any obsessive thoughts, and help you make better decisions,” writes Caramela.

The Hill

Researchers from MIT and Oxford University have found that “social media platforms’ suspensions of accounts may not be rooted in political biases, but rather certain political groups’ tendency to share misinformation,” reports Miranda Nazzaro for The Hill. “Thus, even under politically neutral anti-misinformation policies, political asymmetries in enforcement should be expected,” the researchers wrote. “Political imbalance in enforcement need not imply bias on the part of social media companies implementing anti-misinformation policies.”

Financial Times

A new working paper by MIT Prof. Antoinette Schoar and Brandeis Prof. Yang Sun explores how different people react to financial advice, reports Robin Wigglesworth for the Financial Times. “The results indicate that most people do update their beliefs in the direction of the advice they receive, irrespective of their previous views,” writes Wigglesworth.

New Scientist

Researchers at MIT and elsewhere have found that “human memories can be distorted by photos and videos edited by artificial intelligence,” reports Matthew Sparkes for New Scientist. “I think the worst part here, that we need to be aware or concerned about, is when the user isn’t aware of it,” says postdoctoral fellow Samantha Chan. “We definitely have to be aware and work together with these companies, or have a way to mitigate these effects. Maybe have sort of a structure where users can still control and say ‘I want to remember this as it was’, or at least have a tag that says ‘this was a doctored photo, this was a changed photo, this was not a real one’.”

TechCrunch

TechCrunch reporters Kyle Wiggers and Devin Coldewey spotlight a new generative AI model developed by MIT researchers that can help counteract conspiracy theories. The researchers “had people who believed in conspiracy-related statements talk with a chatbot that gently, patiently, and endlessly offered counterevidence to their arguments,” explain Wiggers and Coldewey. “These conversations led to the humans involved stating a 20% reduction in the associated belief two months later.”

Newsweek

New research by Prof. David Rand and his colleagues has utilized generative AI to address conspiracy theory beliefs, reports Marie Boran for Newsweek. “The researchers had more than 2,000 Americans interact with ChatGPT about a conspiracy theory they believe in,” explains Boran. “Within three rounds of conversation with the chatbot, participants’ belief in their chosen conspiracy theory was reduced by 20 percent on average.”

Popular Science

A new study by researchers from MIT and elsewhere tested a generative AI chatbot’s ability to debunk conspiracy theories, reports Mack Degeurin for Popular Science. “In the end, conversations with the chatbot reduced the participant’s overall confidence in their professed conspiracy theory by an average of 20%,” writes Degeurin.

Los Angeles Times

A new study by researchers from MIT and elsewhere has found that an AI chatbot is capable of combating conspiracy theories, reports Karen Kaplan for the Los Angeles Times. The researchers found that conversations with the chatbot made people “less generally conspiratorial,” says Prof. David Rand. “It also increased their intentions to do things like ignore or block social media accounts sharing conspiracies, or, you know, argue with people who are espousing those conspiracy theories.”

The New York Times

A new chatbot developed by MIT researchers aimed at persuading individuals to stop believing unfounded conspiracy theories has made “significant and long-lasting progress at changing people’s convictions,” reports Teddy Rosenbluth for The New York Times. The chatbot, dubbed DebunkBot, challenges the “widely held belief that facts and logic cannot combat conspiracy theories.” Professor David Rand explains: “It is the facts and evidence themselves that are really doing the work here.”

Mashable

A new study by Prof. David Rand and his colleagues has found that chatbots, powered by generative AI, can help people abandon conspiracy theories, reports Rebecca Ruiz for Mashable. “Rand and his co-authors imagine a future in which a chatbot might be connected to social media accounts as a way to counter conspiracy theories circulating on a platform,” explains Ruiz. “Or people might find a chatbot when they search online for information about viral rumors or hoaxes thanks to keyword ads tied to certain conspiracy search terms.” 

Forbes

Researchers at MIT have found that knowledge spillovers are more likely to occur when people are within 20 meters of one another, reports Tracy Brower for Forbes. Knowledge spillover “can occur intentionally—when you ask a question or gather around a white board to work through issues,” explains Brower. “Or it can be unintentional—when you’re near your team and you overhear a great idea or get passive exposure to the work going on with others.” 

New York Times

Prof. Simon Johnson and Prof. David Autor speak with New York Times reporter Emma Goldberg about the anticipated impact of AI on the job market. “We should be concerned about eliminating them,” says Johnson of the risks posed by automating jobs. “This is the hollowing out of the middle class.”

Scientific American

Prof. Sherry Turkle shares the benefits of being polite when interacting with AI technologies, reports Webb Wright for Scientific American, underscoring the risks of becoming habituated to using crass, disrespectful and dictatorial language. “We have to protect ourselves,” says Turkle. “Because we’re the only ones that have to form relationships with real people.”

Financial Times

A new working paper by Prof. Anna Stansbury and Research Associate Kyra Rodriguez looks at the “class gap” among US Ph.D.-holders in science, social science, engineering and health, reports Soumaya Keynes for the Financial Times. The paper found “those whose parents did not have a college degree are 13 per cent less likely to end up with tenure at a top university than those with more educated parents. They also tend to end up at lower-ranked institutions,” Keynes explains.

Fortune

Professor of the practice Donald Sull speaks with Fortune reporter Lindsey Leake about common misconceptions about corporate culture. “People often think that high performance is an excuse for abusive behavior—they confuse disrespectful and bullying behavior for maintaining high standards,” says Sull. “But it’s possible to set the bar for performance high without berating or bullying people. And to the extent these toxic managerial behaviors drive high performers out of the organization, the abusive behavior undermines performance.”