
Topic: Behavior


Displaying 1 - 15 of 201 news clips related to this topic.

Tech Brew

Researchers at MIT have studied how chatbots perceived the political environment leading up to the 2024 election and its impact on automatically generated election-related responses, reports Patrick Kulp for Tech Brew. The researchers “fed a dozen leading LLMs 12,000 election-related questions on a nearly daily basis, collecting more than 16 million total responses through the contest in November,” explains Kulp.  

Financial Times

Financial Times reporter Melissa Heikkilä spotlights how MIT researchers have uncovered evidence that increased use of AI tools by medical professionals risks “leading to worse health outcomes for women and ethnic minorities.” One study found that numerous AI models “recommended a much lower level of care for female patients,” writes Heikkilä. “A separate study by the MIT team showed that OpenAI’s GPT-4 and other models also displayed answers that had less compassion towards Black and Asian people seeking support for mental health problems.” 

Forbes

Researchers at MIT have found that generative AI “not only repeats the same irrational tendencies of humans during the decision making process but also lacks some of the positive traits that humans do possess,” reports Tamsin Gable for Forbes. “This led the researchers to suggest that AI cannot replace many tasks and that human expertise remains important,” adds Gable. 

Fast Company

Prof. Philip Isola speaks with Fast Company reporter Victor Dey about the impact and use of agentic AI. “In some domains we truly have automatic verification that we can trust, like theorem proving in formal systems. In other domains, human judgment is still crucial,” says Isola. “If we use an AI as the critic for self-improvement, and if the AI is wrong, the system could go off the rails.”

Bloomberg

Bloomberg reporter F. D. Flam spotlights postdoctoral associate Pat Pataranutaporn and his research exploring how AI technologies and chatbots can impact human memories. “This latest research should spur more discussion of the effects of technology on our grasp of reality, which can go beyond merely spreading misinformation,” writes Flam. “Social media algorithms also encourage people to embrace fringe ideas and conspiracy theories by creating the false impression of popularity and influence.”

NPR

Prof. Sherry Turkle speaks with NPR’s Ted Radio Hour host Manoush Zomorodi about her research on the impact of AI usage on people’s relationships with their technology. “You know, we built this tool, and it's making and shaping and changing us,” says Turkle. “There is no such thing as just a tool. And looking back, I think I did capture the new thing that was happening to people's psychologies, really because of my method, which was just to listen to people. And I think that my work was not esoteric in the sense that it spoke directly to those feelings of disorientation. The culture had met something uncanny, and I tried to really speak to that feeling.”

Forbes

A study by MIT researchers has found “our behavior is often more predictable than we think,” reports Diane Hamilton for Forbes. “This research focused on how people pay attention in complex situations,” explains Hamilton. “The AI model learned what people remembered and what they ignored. It identified patterns in memory and focus.” 

Forbes

Forbes reporter Eric Wood spotlights various studies by MIT researchers exploring the impact of ChatGPT use on behavior and the brain. “As stated, the impact of AI assistants is likely dependent on the users, but since AI assistants are becoming normative, it’s time for counseling centers to assess for maladaptive uses of AI, while also promoting the possible benefits,” explains Wood.

The New York Times

In an Opinion piece for The New York Times, columnist David Brooks highlights a recent MIT study that explores the impact of ChatGPT use on brain function by asking subjects to write essays while using large language models, traditional search engines, or only their own brains. “The subjects who relied only on their own brains showed higher connectivity across a bunch of brain regions,” explains Brooks. “Search engine users experienced less brain connectivity and A.I. users least of all.”

Forbes

A study by MIT researchers monitored and compared the brain activity of participants using large language models, traditional search engines, and only their brains to write an essay on a given topic, reports Hessie Jones for Forbes. The study “found that the brain-only group showed much more active brain waves compared to the search-only and LLM-only groups,” Jones explains. “In the latter two groups, participants relied on external sources for information. The search-only group still needed some topic understanding to look up information, and like using a calculator — you must understand its functions to get the right answer. In contrast, the LLM-only group simply had to remember the prompt used to generate the essay, with little to no actual cognitive processing involved.”  

TechCrunch

Researchers at MIT have found that ChatGPT users “showed minimal brain engagement and consistently fell short in neural, linguistic, and behavioral aspects,” reports Kyle Wiggers for TechCrunch. “To conduct the test, the lab split 54 participants from the Boston area into three groups, each consisting of individuals ages 18 to 39,” explains Wiggers. “The participants were asked to write multiple SAT essays using tools such as OpenAI’s ChatGPT, the Google search engine, or without any tools.”

Newsweek

Researchers from MIT have found that “extended use of LLMs for research and writing could have long-term behavioral effects, such as lower brain engagement and laziness,” reports Theo Burman for Newsweek. “The study found that the AI-assisted writers were engaging their deep memory processes far less than the control groups, and that their information recall skills were worse after producing work with ChatGPT,” explains Burman. 

Bloomberg

Researchers at MIT have found that “AI agents can make the workplace more productive when fine-tuned for different personality types, but human co-workers pay a price in lost socialization,” reports Kaustuv Basu for Bloomberg. The researchers “found that humans using AI raised their productivity by 60%—partly because those workers sent 23% fewer social messages,” writes Basu.

Forbes

MIT researchers have found that ChatGPT use can lead to a decline in cognitive engagement, reports Robert B. Tucker for Forbes. “Brain regions associated with attention, memory, and higher-ordered reasoning were noticeably less active” in study participants, Tucker explains.

Fast Company

Researchers at MIT have found that the use of ChatGPT can “reduce activity in brain regions associated with memory and learning,” reports Eve Upton-Clark for Fast Company. “ChatGPT users felt less ownership over their essays compared to the other groups,” writes Upton-Clark. “They also struggled to recall or quote from their own essays shortly after submitting them—showing how reliance on the LLM bypassed deep memory processes.”