“LLMs have become a growing area of interest for both researchers and individual help-seekers alike, raising questions around their capacity to assist and even replace human therapists,” wrote Zainab Iftikhar of Brown University, and colleagues. “In this work, we present a practitioner-informed framework of 15 ethical risks to demonstrate how LLM counselors violate ethical standards in mental health practice by mapping the model’s behavior to specific ethical violations.”
Iftikhar and colleagues first spoke with peer counselors who had conducted 110 self-counseling sessions with cognitive behavioral therapy (CBT)–prompted LLMs, including various versions of OpenAI’s GPT series, Anthropic’s Claude, and Meta’s Llama. The counselors described to the researchers the challenges they encountered in working with the models.
Next, the researchers simulated 27 therapy sessions with an LLM counselor using publicly available transcripts of sessions with human therapists; three licensed clinical psychologists with CBT experience independently evaluated these simulations to explore how the LLMs might violate ethical standards, such as those from the American Psychological Association.
The researchers reported that they found numerous ethical violations across the simulated sessions.
“We call on future work to create ethical, educational and legal standards for LLM counselors—standards that are reflective of the quality and rigor of care required for human-facilitated psychotherapy,” the researchers wrote.
For related information, see the Psychiatric News Special Report “AI-Induced Psychosis: A New Frontier in Mental Health.” You can also listen to the accompanying “PsychNews Special Report” podcast—featuring AI-generated hosts.