3Q's for QI | Q&A with Dr. Eden English
Dec 13, 2024

In-basket burden is frequently cited as a contributor to burnout. In a research letter published in JAMA Network Open, IHQSE graduate Dr. Eden English discusses her recent study of the use of large language models (LLMs) for drafting replies to patient messages within a large health system. The tool, PAM Chat (Patient Advice Message ChatGPT), was deployed across nine clinics, where nurses were the most favorable users, noting improvements in efficiency, empathy, and tone. However, different roles within the care team—nurses, medical assistants, and clinicians—had varying levels of satisfaction, highlighting the need for role-specific customization. This study underscores the potential of LLMs to reduce clinician burnout and enhance patient care, while also emphasizing the importance of transparency and continuous refinement of AI tools in healthcare. We spoke with Dr. English about the potential impact this work could have on mitigating the administrative burden on healthcare providers.
1. How did you approach this work?
We started our PAM Chat journey with fake data in our test environment. We simulated patient messages and adjusted our prompt until the generated draft replies were deemed satisfactory by our reviewers. Then we started with three clinics, enabling the functionality for providers only for two weeks before extending it to all staff. After the three clinics had used it for a few months without serious concerns, we enabled it for six more clinics. We kept meeting weekly to review feedback from users and refined our prompts almost weekly. Training needs were minimal, but we always ensured that staff and providers were aware that AI can hallucinate, so any draft must be vetted by the user prior to sending. Our use rates hovered around 10%, which was felt to be useful enough to continue expansion. We are now in over 25 clinics, creating drafts for over 80,000 messages a month, and we plan to continue expanding the technology quarterly.
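The core safeguard described above, that an LLM-generated draft is only a suggestion until a user vets it, can be sketched in a few lines. This is a minimal illustration, not the study's implementation: the prompt text, the function names, and the stubbed LLM call are all assumptions standing in for whatever PAM Chat actually uses.

```python
from dataclasses import dataclass

# Hypothetical system prompt; the study's actual prompt is not public.
SYSTEM_PROMPT = "Draft an empathetic, concise reply to this patient message."

def call_llm(prompt: str, message: str) -> str:
    """Stub standing in for a real LLM API call (e.g., a chat-completion
    endpoint). Returns a canned draft so the sketch is self-contained."""
    return "Thank you for reaching out. Your care team will follow up shortly."

@dataclass
class DraftReply:
    text: str
    reviewed: bool = False  # every draft starts unreviewed

def generate_draft(patient_message: str) -> DraftReply:
    """Produce an AI draft; it remains a suggestion until a human vets it."""
    return DraftReply(text=call_llm(SYSTEM_PROMPT, patient_message))

def send_reply(draft: DraftReply) -> str:
    """Refuse to send any unreviewed draft, since LLMs can hallucinate;
    the user must vet each draft before it goes out."""
    if not draft.reviewed:
        raise ValueError("Draft must be vetted by the user before sending")
    return draft.text
```

The design choice mirrored here is that the send path itself enforces review, rather than relying on convention, so an unvetted draft can never reach a patient.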
2. Why is this work important?
We explored a practical solution to clinician burnout, a growing issue in healthcare. With the increasing volume of patient messages, clinicians often face high levels of stress and administrative burden. By using large language model (LLM)-enabled draft replies, healthcare teams can save time, improve efficiency, and enhance communication quality, ultimately reducing burnout. The study also highlights the need for role-specific customization of AI tools, ensuring that each team member, from nurses to physicians, can benefit from AI in ways that match their responsibilities and challenges. By advancing AI integration in healthcare, this work can help improve both clinician well-being and patient care.
3. How do you think this work might impact healthcare?
This work has the potential to impact healthcare by addressing one of its most pressing issues: clinician burnout. By automating routine tasks like drafting patient message replies, AI tools like PAM Chat can help reduce administrative burdens, allowing clinicians to focus more on patient care. This can lead to improved job satisfaction, reduced stress, and better overall efficiency within healthcare teams. Our study highlights the fact that even though providers wouldn't overwhelmingly recommend the tool to a colleague, 51% did say PAM Chat was fun to use. With continued improvements to the technology, the usability of the tool should continue to improve.