
Errors in AI-Transformed Patient-Centered Mental Health Documentation Written by Psychiatrists: Qualitative Pre-Post Study

Contributing authors from JOANNEUM RESEARCH:
Authors
Ozkara Menekseoglu P, Weibezahl M, Ellingsen M, Sterkenburg J, Kharko A, Hochwarter S, Schwarz J
Abstract:
Background: Patients’ digital access to their personal health data is becoming increasingly common worldwide. However, medical documentation often contains technical language and sensitive information, which can lead to potential misunderstandings and distress among patients. These issues may be particularly impactful in mental health contexts. Large language models (LLMs) offer a promising approach by transforming clinician-generated health notes into language that is more patient-centered, nonmedicalized, and empathetic. However, risks related to accuracy and clinical safety have not been adequately investigated in psychiatry.

Objective: This study aimed to qualitatively analyze the errors introduced by LLMs when transforming notes written by psychiatrists into patient-facing formats. It also highlights the implications for clinical communication and patient safety.

Methods: Clinical notes (n=63) written by 19 psychiatrists in an outpatient treatment setting were collected, anonymized, and translated from German to English by humans. OpenAI GPT-3.5 Turbo was used to develop a preprompt that transformed these notes into a patient-centered, lay-readable form through an iterative process. Three psychiatrists qualitatively analyzed the LLM-revised documentation using Kuckartz content analysis. They compared the preconversion and postconversion notes to systematically identify and categorize LLM-induced errors.

Results: Five categories of clinically relevant errors were identified: (1) clinical misinterpretations, particularly in critical assessments such as suicidality, where nuanced terminology was oversimplified or inaccurately represented; (2) attribution errors, where behaviors or roles within family dynamics or interactions were incorrectly attributed to different individuals; (3) content distortion errors, characterized by speculative additions, emotional exaggerations, and inappropriate contextual assumptions; (4) abbreviation and terminology errors, resulting from inaccurate expansions of medical abbreviations and terms; and (5) structural and syntax errors, which introduced ambiguity, particularly when the original notes were brief or bulleted. These errors occurred despite significant improvements in the readability and overall linguistic fluency of the converted notes.

Conclusions: LLMs have the potential to transform psychiatric notes into patient-friendly formats. However, critical errors remain prevalent and can impair clinical judgment, understanding of patient circumstances, clarity of medication regimens, and interpretation of clinical observations. To safely integrate artificial intelligence–generated documentation into psychiatric care, clinician oversight and targeted model refinement are essential. Future research should explore strategies to mitigate these errors, assess their comprehensive clinical impact, and incorporate patient and provider perspectives to ensure robust implementation.

Publication series

Name: JMIR Mental Health
ISSN: 2368-7959
Year/Month: 2026/4

