Topic

Behavior

Displaying 1 - 15 of 208 news clips related to this topic.
The Atlantic

Writing for The Atlantic, Prof. Deb Roy explores the impact of chatbots on language and learning development. “The ordinary forces that tether speech to consequence—social sanction, legal penalty, reputational loss—presuppose a continuous agent whose future can be made worse by what they say,” writes Roy. “With LLMs, there is no such locus. …When the speaker is an LLM, the human stakes that ordinarily anchor speech have nowhere to attach.” 

Forbes

Ahead of Super Bowl Sunday, Forbes reporter Sandy Carter highlights a study by MIT researchers that identifies the factors needed to make an effective team. “What they found challenged many assumptions,” explains Carter. “First, team intelligence wasn’t about average IQ or brilliance of the smartest person in the room. It depended on three factors: How socially attuned the team members were to one another, whether conversations were shared rather than dominated by a few voices, and the presence of women in the group.”

GBH

Prof. Rebecca Saxe speaks with GBH’s Morning Edition host Mark Herz about the importance of maintaining social commitments. “People who have community and social relationships have better physical and mental health,” explains Saxe. “It actually helps with mortality. You live longer if you have strong social relationships.” 

Smithsonian Magazine

Two new research papers by scientists from MIT and other institutions find that AI chatbots are successful at shifting the political beliefs of voters, and that the “most persuasive chatbots are those that share lots of facts, although the most information-dense bots also dole out the most inaccurate claims,” reports Sarah Kuta for Smithsonian Magazine. “If you need a million facts, you eventually are going to run out of good ones and so, to fill your fact quota, you’re going to have to put in some not-so-good ones,” says Visiting Prof. David Rand. 

New Scientist

A new study by MIT researchers has found that “AI chatbots were surprisingly effective at convincing people to vote for a particular candidate or change their support for a particular issue,” reports Alex Wilkins for New Scientist. “Even for attitudes about presidential candidates, which are thought to be these very hard-to-move and solidified attitudes, the conversations with these models can have much bigger effects than you would expect based on previous work,” says Visiting Prof. David Rand. 

The Washington Post

Researchers at MIT and elsewhere “examined how popular chatbots could change voters’ minds about candidates in the United States, Canada and Poland,” reports Will Oremus for The Washington Post.

The Guardian

Prof. Pat Pataranutaporn speaks with The Guardian reporter Madeleine Aggeler about the impact of AI on human relationships. “If you converse more and more with the AI instead of going to talk to your parents or your friends, the social fabric degrades,” says Pataranutaporn. “You will not develop the skills to go and talk to real humans.” 

Tech Brew

Researchers at MIT have studied how chatbots perceived the political environment leading up to the 2024 election, and how that perception shaped their automatically generated election-related responses, reports Patrick Kulp for Tech Brew. The researchers “fed a dozen leading LLMs 12,000 election-related questions on a nearly daily basis, collecting more than 16 million total responses through the contest in November,” explains Kulp.

Financial Times

Financial Times reporter Melissa Heikkilä spotlights how MIT researchers have uncovered evidence that increased use of AI tools by medical professionals risks “leading to worse health outcomes for women and ethnic minorities.” One study found that numerous AI models “recommended a much lower level of care for female patients,” writes Heikkilä. “A separate study by the MIT team showed that OpenAI’s GPT-4 and other models also displayed answers that had less compassion towards Black and Asian people seeking support for mental health problems.” 

Forbes

Researchers at MIT have found that generative AI “not only repeats the same irrational tendencies of humans during the decision making process but also lacks some of the positive traits that humans do possess,” reports Tamsin Gable for Forbes. “This led the researchers to suggest that AI cannot replace many tasks and that human expertise remains important,” adds Gable. 

Fast Company

Prof. Philip Isola speaks with Fast Company reporter Victor Dey about the impact and use of agentic AI. “In some domains we truly have automatic verification that we can trust, like theorem proving in formal systems. In other domains, human judgment is still crucial,” says Isola. “If we use an AI as the critic for self-improvement, and if the AI is wrong, the system could go off the rails.”

Bloomberg

Bloomberg reporter F. D. Flam spotlights postdoctoral associate Pat Pataranutaporn and his research exploring how AI technologies and chatbots can impact human memories. “This latest research should spur more discussion of the effects of technology on our grasp of reality, which can go beyond merely spreading misinformation,” writes Flam. “Social media algorithms also encourage people to embrace fringe ideas and conspiracy theories by creating the false impression of popularity and influence.”

NPR

Prof. Sherry Turkle speaks with NPR’s Ted Radio Hour host Manoush Zomorodi about her research on the impact of AI usage on people’s relationships with their technology. “You know, we built this tool, and it's making and shaping and changing us,” says Turkle. “There is no such thing as just a tool. And looking back, I think I did capture the new thing that was happening to people's psychologies, really because of my method, which was just to listen to people. And I think that my work was not esoteric in the sense that it spoke directly to those feelings of disorientation. The culture had met something uncanny, and I tried to really speak to that feeling.”

Forbes

A study by MIT researchers has found “our behavior is often more predictable than we think,” reports Diane Hamilton for Forbes. “This research focused on how people pay attention in complex situations,” explains Hamilton. “The AI model learned what people remembered and what they ignored. It identified patterns in memory and focus.” 

Forbes

Forbes reporter Eric Wood spotlights various studies by MIT researchers exploring the impact of ChatGPT use on behavior and the brain. “As stated, the impact of AI assistants is likely dependent on the users, but since AI assistants are becoming normative, it’s time for counseling centers to assess for maladaptive uses of AI, while also promoting the possible benefits,” explains Wood.