
Topic: Behavior


Displaying 1–15 of 177 news clips related to this topic.

The Boston Globe

Boston Globe reporter Kevin Lewis spotlights a new study by MIT researchers that found “debate training can improve your chances of attaining leadership positions.” The researchers found that employees who received debate training were “more likely to have earned a promotion, even controlling for their pretraining management level, tenure, gender, and where they were born,” writes Lewis. “The training increased participants’ self-reported assertiveness, which appears to explain the effect on promotions.”

Forbes

Forbes reporter Tracey Follows spotlights the MIT Media Lab’s Advancing Humans with AI (AHA) project, a “new research program asking how can we design AI to support human flourishing.” At the launch event, Prof. Sherry Turkle “raised a specific and timely concern: what is the human cost of talking to machines that only pretend to care?”

TechCrunch

Researchers at MIT have concluded that AI does not develop “value systems” over time, reports Kyle Wiggers for TechCrunch. “For me, my biggest takeaway from doing all this research is to now have an understanding of models as not really being systems that have some sort of stable, coherent set of beliefs and preferences,” says graduate student Stephen Casper. “Instead, they are imitators deep down who do all sorts of confabulation and say all sorts of frivolous things.”

Gizmodo

A new study by researchers at MIT explores how AI chatbots can impact people’s feelings and mood, reports Matthew Gault for Gizmodo. “One of the big takeaways is that people who used the chatbots casually and didn’t engage with them emotionally didn’t report feeling lonelier at the end of the study,” explains Gault. “Yet, if a user said they were lonely before they started the study, they felt worse after it was over.”

The Guardian

Researchers at MIT and elsewhere have found that “heavy users of ChatGPT tend to be lonelier, more emotionally dependent on the AI tool and have fewer offline social relationships,” reports Rachel Hall for The Guardian. “The researchers wrote that the users who engaged in the most emotionally expressive personal conversations with the chatbots tended to experience higher loneliness – though it isn’t clear if this is caused by the chatbot or because lonely people are seeking emotional bonds,” explains Hall. 

CBS News

Graduate student Cathy Fang speaks with CBS News reporter Lindsey Reiser about her research studying the effects of AI chatbots on people’s emotional well-being. Fang explains that she and her colleagues found that how the chatbot interacts with the user is important, “but also how the user interacts with the chatbot is equally important. Both influence the user’s emotional and social well-being.” She adds: “Overall, we found that extended use is correlated with more negative outcomes.”

Fortune

Researchers at MIT and elsewhere have found "that frequent chatbot users experience more loneliness and emotional dependence," reports Beatrice Nolan for Fortune. "The studies set out to investigate the extent to which interactions with ChatGPT impacted users' emotional health, with a focus on the use of the chatbot's advanced voice mode," explains Nolan.

Forbes

Forbes reporter Tanya Arturi highlights research by Prof. Basima Tewfik on the impact of imposter syndrome. Tewfik’s “studies indicate that the behaviors exhibited by individuals experiencing imposter thoughts (such as increased effort in communication and interpersonal interactions) can actually enhance job performance,” explains Arturi. “Instead of resisting their feelings of self-doubt, professionals who lean into these emotions may develop stronger interpersonal skills, outperforming their non-imposter peers in collaboration and teamwork.” 

Business Insider

A new study by Prof. Jackson Lu and graduate student Lu Doris Zhang finds that assertiveness is key to moving up the career ladder, and that debate training could help improve an individual’s chances of moving into a leadership role, reports Julia Pugachevsky for Business Insider. “If someone knows when to voice their opinions in a diplomatic and fruitful way, they will get more attention,” says Lu. 

The Washington Post

A new study co-authored by Prof. David Rand found that there was a “20 percent reduction in belief in conspiracy theories after participants interacted with a powerful, flexible, personalized GPT-4 Turbo conversation partner,” writes Annie Duke for The Washington Post. “Participants demonstrated increased intentions to ignore or unfollow social media accounts promoting the conspiracies, and significantly increased willingness to ignore or argue against other believers in the conspiracy,” writes Duke. “And the results appear to be durable, holding up in evaluations 10 days and two months later.”

The Wall Street Journal

Postdoctoral Associate Pat Pataranutaporn speaks with Wall Street Journal reporter Heidi Mitchell about his work developing Future You, an online interactive AI platform that “allows users to create a virtual older self—a chatbot that looks like an aged version of the person and is based on an AI text system known as a large language model, then personalized with information that the user puts in.” Pataranutaporn explains: “I want to encourage people to think in the long term, to be less anxious about an unknown future so they can live more authentically today.” 

Salon

Researchers from MIT and elsewhere have suggested that "the impact of news that is factually inaccurate — including fake news, misinformation and disinformation — pales in comparison to the impact of news that is factually accurate but misleading," reports Sandra Matz for Salon. "According to researchers, for example, the impact of slanted news stories encouraging vaccine skepticism during the COVID-19 pandemic was about 46-fold greater than that of content flagged as fake by fact-checkers," writes Matz.

Knowable Magazine

Knowable Magazine reporter Katherine Ellison spotlights Future You, a new program developed by researchers at MIT that “offers young people a chance to chat with an online, AI-generated simulation of themselves at age 60.” 

VICE

Researchers at MIT and elsewhere have developed "Future You" – an AI platform that lets users talk with a generated simulation of a potential future self, reports Sammi Caramela for Vice. The research team hopes "talking to a relatable, virtual version of your future self about your current stressors, future goals, and your beliefs can improve anxiety, quell any obsessive thoughts, and help you make better decisions," writes Caramela.

The Hill

Researchers from MIT and Oxford University have found that "social media platforms' suspensions of accounts may not be rooted in political biases, but rather certain political groups' tendency to share misinformation," reports Miranda Nazzaro for The Hill. "Thus, even under politically neutral anti-misinformation policies, political asymmetries in enforcement should be expected," the researchers wrote. "Political imbalance in enforcement need not imply bias on the part of social media companies implementing anti-misinformation policies."