
A manager at artificial intelligence firm OpenAI caused consternation recently by writing that she just had “a quite emotional, personal conversation” with her firm’s viral chatbot ChatGPT. “Never tried therapy before but this is probably it?” Lilian Weng posted on X, formerly Twitter, prompting a torrent of negative commentary accusing her of downplaying mental illness.
However, Weng’s take on her interaction with ChatGPT may be explained by a version of the placebo effect, outlined in research published this week in the journal Nature Machine Intelligence.
A team from Massachusetts Institute of Technology (MIT) and Arizona State University asked more than 300 participants to interact with mental health AI programmes and primed them on what to expect.
Some were told the chatbot was empathetic, others that it was manipulative and a third group that it was neutral.
Those who were told they were talking with a caring chatbot were far more likely than the other groups to see their chatbot therapists as trustworthy.
“From this study, we see that to some extent the AI is the AI of the beholder,” said report co-author Pat Pataranutaporn.