Topic: Behavior


Fast Company

Researchers at MIT have found that the use of ChatGPT can “reduce activity in brain regions associated with memory and learning,” reports Eve Upton-Clark for Fast Company. “ChatGPT users felt less ownership over their essays compared to the other groups,” writes Upton-Clark. “They also struggled to recall or quote from their own essays shortly after submitting them—showing how reliance on the LLM bypassed deep memory processes.” 

Boston.com

Researchers at MIT have found that “people who used ChatGPT to write a series of essays suffered a ‘cognitive cost’ compared to others who used only their brains or a traditional search engine,” reports Ross Cristantiello for Boston.com. “The researchers found that as users relied on ‘external support’ more and more, their brain connectivity gradually scaled down,” explains Cristantiello. “Subjects who began the tests using ChatGPT before being told to use only their brains showed ‘weaker neural connectivity’ and ‘under-engagement’ of certain networks in their brains.”  

USA Today

A study by MIT researchers finds that individuals who relied solely on ChatGPT to write essays had “lower levels of brain activity and presented less original writing,” reports Greta Cross for USA Today. “While these tools offer unprecedented opportunities for enhancing learning and information access, their potential impact on cognitive development, critical thinking and intellectual independence demands a very careful consideration and continued research,” the researchers explain.

The Hill

Researchers at MIT have found that ChatGPT use can “harm an individual’s critical thinking over time,” reports Rachel Scully for The Hill. “They discovered that subjects who used ChatGPT over a few months had the lowest brain engagement and ‘consistently underperformed at neural, linguistic, and behavioral levels,’” explains Scully. 

Salon

A study by researchers at MIT examines how the use of large language models impacts the human brain, reports Elizabeth Hlavinka for Salon. Research scientist Nataliya Kos'myna says the results “suggest large language models could affect our memory, attention and creativity.” 

The Boston Globe

Writing for The Boston Globe, graduate students Manuj Dhariwal SM '17 and Shruti Dhariwal SM '18 highlight new efforts to reframe the language used to describe the ways humans are interacting with AI technologies. “It is a subtle reframing, but one that we urgently need as AI systems become interwoven with our creative, social, and emotional worlds,” they write. “The point is not necessarily to choose one over the other — but to clearly distinguish one from the other.” 

Financial Times

Prof. Pattie Maes speaks with Financial Times reporter Cristina Criddle about recent developments aimed at increasing AI memory retention. “The more a system knows about you, the more it can be used for negative purposes to either make you buy stuff or convince you of particular beliefs,” says Maes. “So you have to start thinking about the underlying incentives of the companies that offer these services.” 

Salon

A study by Prof. Rebecca Saxe and her colleagues has found that the medial prefrontal cortex in infants is active when exposed to faces, reports Elizabeth Hlavinka for Salon. “Maybe it’s not that [at] first babies do visual processing and only later are connected to social meaning,” says Saxe. “Maybe these brain regions are active because babies are responding to the social meaning of people and faces as early on as we can measure their brains.”

Nature

Researchers at MIT have conducted a survey to understand how people interact with AI companions, reports David Adam for Nature. “The researchers found that 12% [of users] were drawn to the apps to help them cope with loneliness and 14% used them to discuss personal issues and mental health,” writes Adam. “Forty-two per cent of users said they logged on a few times a week, with just 15% doing so every day. More than 90% reported that their sessions lasted less than one hour.”

The New York Times

Prof. Sherry Turkle speaks with New York Times reporter Sopan Deb about how humans interact with artificial intelligence, specifically chatbots such as ChatGPT. “If an object is alive enough for us to start having intimate conversations, friendly conversations, treating it as a really important person in our lives, even though it’s not, it’s alive enough for us to show courtesy to,” says Turkle. 

The Boston Globe

Boston Globe reporter Kevin Lewis spotlights a new study by MIT researchers that found “debate training can improve your chances of attaining leadership positions.” The researchers found that employees who received debate training were “more likely to have earned a promotion, even controlling for their pretraining management level, tenure, gender, and where they were born,” writes Lewis. “The training increased participants’ self-reported assertiveness, which appears to explain the effect on promotions.”

Forbes

Forbes reporter Tracey Follows spotlights the MIT Media Lab’s Advancing Humans with AI (AHA) project, a “new research program asking how can we design AI to support human flourishing.” At the launch event, Prof. Sherry Turkle “raised a specific and timely concern: what is the human cost of talking to machines that only pretend to care?”

TechCrunch

Researchers at MIT have concluded that AI does not develop “value systems” over time, reports Kyle Wiggers for TechCrunch. “For me, my biggest takeaway from doing all this research is to now have an understanding of models as not really being systems that have some sort of stable, coherent set of beliefs and preferences,” says graduate student Stephen Casper. “Instead, they are imitators deep down who do all sorts of confabulation and say all sorts of frivolous things.”

Gizmodo

A new study by researchers at MIT explores how AI chatbots can impact people’s feelings and mood, reports Matthew Gault for Gizmodo. “One of the big takeaways is that people who used the chatbots casually and didn’t engage with them emotionally didn’t report feeling lonelier at the end of the study,” explains Gault. “Yet, if a user said they were lonely before they started the study, they felt worse after it was over.”

The Guardian

Researchers at MIT and elsewhere have found that “heavy users of ChatGPT tend to be lonelier, more emotionally dependent on the AI tool and have fewer offline social relationships,” reports Rachel Hall for The Guardian. “The researchers wrote that the users who engaged in the most emotionally expressive personal conversations with the chatbots tended to experience higher loneliness – though it isn’t clear if this is caused by the chatbot or because lonely people are seeking emotional bonds,” explains Hall.