Artificial intelligence (AI) tools powered by large language models (LLMs), such as OpenAI’s ChatGPT and Google’s Bard, have the potential to rapidly change the practice of medicine. An online survey of psychiatrists who attended an APA webinar about AI suggests that many have experimented with ChatGPT to answer clinical questions, but opinions are mixed on whether the benefits of these tools outweigh the risks. A report on the survey results appears in the March issue of Psychiatry Research.
“Knowing how the field of psychiatry currently understands the potential of these tools and seeks to use them in the future can help guide the development of [large language models] and ensure they are implemented safely and in alignment with the field’s needs,” wrote Charlotte Blease, Ph.D., of Uppsala University in Sweden, Abigail Worthen of APA, and John Torous, M.D., M.B.I., of Harvard University.
Blease and colleagues invited more than 800 APA members and affiliates who attended an AI in psychiatry webinar last August to participate in an online survey. The survey asked participants to reflect on their use of ChatGPT and other LLMs in clinical practice, to rate how strongly they agreed or disagreed with a series of statements about the tools’ potential benefits, and to consider the tools’ impact on their patients’ mental health. The survey also included a section for additional comments.
A total of 138 psychiatrists completed the online survey, including 75 men and 58 women (5 respondents preferred not to answer). Respondents spanned a range of ages: those aged 30 to 39 years made up the largest share (28%), while those aged 29 and younger made up the smallest (8%).
Over 40% of survey respondents reported using ChatGPT-3.5 to “assist with answering clinical questions,” and 33% reported using ChatGPT-4. Nearly 70% agreed that these AI tools are already making, or will make, documentation more efficient, and 21% agreed that the tools are improving or will improve diagnostic accuracy. Almost 90% agreed or somewhat agreed that clinicians need more support and training in understanding these tools.
Other survey findings included the following:
- 86% agreed or somewhat agreed with the statement that patients can use AI tools such as ChatGPT and Bard to better understand their medical records.
- 76% agreed or somewhat agreed with the statement that patients who use these tools better understand their health.
- 79% agreed or somewhat agreed that patients using these tools worry more about their privacy.
“With respect to benefits and harms—echoing disparate opinions in the closed-ended questions about whether these tools would improve diagnostic accuracy or decrease disparities in healthcare—respondents offered mixed opinions,” Blease and colleagues wrote. “Some expressed optimism that these tools could strengthen patient safety, access, and the quality of care, while others pointed to the potential for harm, urging that current models fabricate information, embed harmful biases, and risk patient privacy.”
The authors described several limitations of the survey, noting that the convenience sample, the restriction to participants who attended the APA webinar, and the low response rate of 18% “likely influenced results.” “We recommend that future surveys strive for stratified sampling techniques that permit correlative analyses of participants’ experiences and opinions according to gender, age, and workplace environment,” they wrote.
For related information, see the Psychiatric News articles “Harnessing AI for Psychiatric Use Requires More Nuanced Discussion” and “ChatGPT Not Yet Ready for Clinical Practice,” and the APA blog post “The Basics of Augmented Intelligence: Some Factors Psychiatrists Need to Know Now.” A recording of APA’s AI in psychiatry webinar is posted here.