In the Media

Media Outlet:
Scientific American
MIT researchers have found that user bias can drive interactions with AI chatbots, reports Nick Hilden for Scientific American. “When people think that the AI is caring, they become more positive toward it,” graduate student Pat Pataranutaporn explains. “This creates a positive reinforcement feedback loop where, at the end, the AI becomes much more positive, compared to the control condition. And when people believe that the AI was manipulative, they become more negative toward the AI—and it makes the AI become more negative toward the person as well.”

Related News

Is AI in the eye of the beholder?

Study shows users can be primed to believe certain things about an AI chatbot’s motives, which influences their interactions with the chatbot.