A recent study published in JAMA Network Open offers new insight into patient preferences around healthcare messages. Researchers at Duke University surveyed more than 1,400 participants drawn from the health system's patient advisory committee, examining how patients respond to messages depending on whether they were generated by artificial intelligence (AI) or written by humans, and how disclosure of a message's origin shapes that response. Notably, while patients preferred the content of AI-generated messages for its detail and empathy, they reported lower satisfaction when explicitly told the messages had been drafted by AI.
Patients rated AI-generated messages as more informative and empathetic than those written by humans. The AI drafts tended to be longer and more detailed, which contributed to higher satisfaction with the content itself. That preference weakened, however, once patients knew who had written a message: despite the stronger content, learning that a message was not penned by a human lowered overall satisfaction.
Further analysis showed that patients valued the depth and empathy of AI-drafted communications, which tended to address concerns more comprehensively and convey a level of understanding that resonated with recipients. Yet simply knowing that an AI wrote the message introduced skepticism and reduced trust, even when the content itself was appreciated. This tension poses a significant challenge for healthcare providers seeking to use AI in patient communication without compromising patient trust and satisfaction.
The study also examined how transparency shapes patient perception. Patients read messages under one of three conditions: no disclosure of authorship, disclosure that the message was written by a human, or disclosure that it was written by AI. Messages with no disclosure or with attribution to a human author drew higher satisfaction ratings, while disclosure of AI authorship lowered satisfaction even though the content quality was superior.
This finding underscores how much the framing of a message matters, not just its content. AI can produce highly effective and empathetic messages, yet disclosing AI authorship without context appears to undermine patient trust. To bridge this gap, healthcare systems may need to rethink how they introduce and explain AI tools to patients, pairing clear communication about the benefits of AI-generated content with assurances of personalized care. This approach could help maintain high levels of patient satisfaction and trust in the healthcare system.