
Topic

Behavior


Displaying 1 - 15 of 182 news clips related to this topic.

The Boston Globe

Writing for The Boston Globe, graduate students Manuj Dhariwal SM '17 and Shruti Dhariwal SM '18 highlight new efforts to reframe the language used to describe the ways humans are interacting with AI technologies. “It is a subtle reframing, but one that we urgently need as AI systems become interwoven with our creative, social, and emotional worlds,” they write. “The point is not necessarily to choose one over the other — but to clearly distinguish one from the other.” 

Financial Times

Prof. Pattie Maes speaks with Financial Times reporter Cristina Criddle about recent developments aimed at increasing AI memory retention. “The more a system knows about you, the more it can be used for negative purposes to either make you buy stuff or convince you of particular beliefs,” says Maes. “So you have to start thinking about the underlying incentives of the companies that offer these services.” 

Salon

A study by Prof. Rebecca Saxe and her colleagues has found that the medial prefrontal cortex in infants is active when exposed to faces, reports Elizabeth Hlavinka for Salon. “Maybe it’s not that [at] first babies do visual processing and only later are connected to social meaning,” says Saxe. “Maybe these brain regions are active because babies are responding to the social meaning of people and faces as early on as we can measure their brains.”

Nature

Researchers at MIT have conducted a survey to understand how people interact with AI companions, reports David Adam for Nature. The researchers found that “12% [of users] were drawn to the apps to help them cope with loneliness and 14% used them to discuss personal issues and mental health,” writes Adam. “Forty-two per cent of users said they logged on a few times a week, with just 15% doing so every day. More than 90% reported that their sessions lasted less than one hour.”

The New York Times

Prof. Sherry Turkle speaks with New York Times reporter Sopan Deb about how humans interact with artificial intelligence, specifically chatbots such as ChatGPT. “If an object is alive enough for us to start having intimate conversations, friendly conversations, treating it as a really important person in our lives, even though it’s not, it’s alive enough for us to show courtesy to,” says Turkle. 

The Boston Globe

Boston Globe reporter Kevin Lewis spotlights a new study by MIT researchers that found “debate training can improve your chances of attaining leadership positions.” The researchers found that employees who received debate training were “more likely to have earned a promotion, even controlling for their pretraining management level, tenure, gender, and where they were born,” writes Lewis. “The training increased participants’ self-reported assertiveness, which appears to explain the effect on promotions.”

Forbes

Forbes reporter Tracey Follows spotlights the MIT Media Lab’s Advancing Humans with AI (AHA) project, a “new research program asking how can we design AI to support human flourishing.” At the launch event, Prof. Sherry Turkle “raised a specific and timely concern: what is the human cost of talking to machines that only pretend to care?”

TechCrunch

Researchers at MIT have concluded that AI does not develop “value systems” over time, reports Kyle Wiggers for TechCrunch. “For me, my biggest takeaway from doing all this research is to now have an understanding of models as not really being systems that have some sort of stable, coherent set of beliefs and preferences,” says graduate student Stephen Casper. “Instead, they are imitators deep down who do all sorts of confabulation and say all sorts of frivolous things.”

Gizmodo

A new study by researchers at MIT explores how AI chatbots can impact people’s feelings and mood, reports Matthew Gault for Gizmodo. “One of the big takeaways is that people who used the chatbots casually and didn’t engage with them emotionally didn’t report feeling lonelier at the end of the study,” explains Gault. “Yet, if a user said they were lonely before they started the study, they felt worse after it was over.”

The Guardian

Researchers at MIT and elsewhere have found that “heavy users of ChatGPT tend to be lonelier, more emotionally dependent on the AI tool and have fewer offline social relationships,” reports Rachel Hall for The Guardian. “The researchers wrote that the users who engaged in the most emotionally expressive personal conversations with the chatbots tended to experience higher loneliness – though it isn’t clear if this is caused by the chatbot or because lonely people are seeking emotional bonds,” explains Hall. 

CBS News

Graduate student Cathy Fang speaks with CBS News reporter Lindsey Reiser about her research studying the effects of AI chatbots on people’s emotional well-being. Fang explains that she and her colleagues found that how the chatbot interacts with the user is important, “but also how the user interacts with the chatbot is equally important. Both influence the user’s emotional and social well-being.” She adds: “Overall, we found that extended use is correlated with more negative outcomes.”

Fortune

Researchers at MIT and elsewhere have found “that frequent chatbot users experience more loneliness and emotional dependence,” reports Beatrice Nolan for Fortune. “The studies set out to investigate the extent to which interactions with ChatGPT impacted users’ emotional health, with a focus on the use of the chatbot’s advanced voice mode,” explains Nolan.

Forbes

Forbes reporter Tanya Arturi highlights research by Prof. Basima Tewfik on the impact of imposter syndrome. Tewfik’s “studies indicate that the behaviors exhibited by individuals experiencing imposter thoughts (such as increased effort in communication and interpersonal interactions) can actually enhance job performance,” explains Arturi. “Instead of resisting their feelings of self-doubt, professionals who lean into these emotions may develop stronger interpersonal skills, outperforming their non-imposter peers in collaboration and teamwork.” 

Business Insider

A new study by Prof. Jackson Lu and graduate student Lu Doris Zhang finds that assertiveness is key to moving up the career ladder, and that debate training could help improve an individual’s chances of moving into a leadership role, reports Julia Pugachevsky for Business Insider. “If someone knows when to voice their opinions in a diplomatic and fruitful way, they will get more attention,” says Lu. 

The Washington Post

A new study co-authored by Prof. David Rand found that there was a “20 percent reduction in belief in conspiracy theories after participants interacted with a powerful, flexible, personalized GPT-4 Turbo conversation partner,” writes Annie Duke for The Washington Post. “Participants demonstrated increased intentions to ignore or unfollow social media accounts promoting the conspiracies, and significantly increased willingness to ignore or argue against other believers in the conspiracy,” writes Duke. “And the results appear to be durable, holding up in evaluations 10 days and two months later.”