Internal Medicine Resident Engagement with a Laboratory Utilization Dashboard: Mixed Methods Study
The objective of this study was to measure internal medicine resident engagement with an electronic medical record-based dashboard providing feedback on their use of routine laboratory tests relative to service averages. From January 2016 to June 2016, residents were e-mailed a snapshot of their personalized dashboard, a link to the online dashboard, and text summarizing the resident and service utilization averages. We measured resident engagement using e-mail read-receipts and web-based tracking. We also conducted 3 hour-long focus groups with residents. Using a grounded theory approach, we analyzed the transcripts for common themes, focusing on barriers to and facilitators of dashboard use. Among 80 residents, 74% opened the e-mail containing a link to the dashboard and 21% accessed the dashboard itself. We did not observe a statistically significant difference in routine laboratory ordering by dashboard use, although residents who opened the link to the dashboard ordered 0.26 fewer labs per doctor-patient-day than those who did not (95% confidence interval, −0.77 to 0.25; P = .31). Although they raised several concerns, focus group participants had positive attitudes toward receiving individualized feedback delivered in real time.
© 2017 Society of Hospital Medicine
Recent efforts to reduce waste and overuse in healthcare include reforms, such as merit-based physician reimbursement for efficient resource use1 and the inclusion of cost-effective care as a competency for physician trainees.2 Focusing on resource use in physician training and reimbursement presumes that teaching and feedback about utilization can alter physician behavior. Early studies of social comparison feedback observed considerable variation in effectiveness, depending on the behavior targeted and how feedback was provided to physicians.3-5 The widespread adoption of electronic medical record (EMR) software enables feedback interventions that deliver continuous, real-time information via EMR-based practice dashboards. Currently, little is known about physician engagement with practice dashboards and, in particular, about trainee engagement with dashboards aimed at improving cost-effective care.
To inform future efforts in using social comparison feedback to teach cost-effective care in residency, we measured internal medicine resident engagement with an EMR-based utilization dashboard that provides feedback on their use of routine laboratory tests on an inpatient medicine service. Routine labs are often overused in the inpatient setting. In fact, one study reported that 68% of laboratory tests ordered in an academic hospital did not contribute to improving patient outcomes.6 To understand resident perceptions of the dashboards and identify barriers to their use, we conducted a mixed methods study tracking resident utilization of the dashboard over time and collecting qualitative data from 3 focus groups about resident attitudes toward the dashboards.
METHODS
From January 2016 to June 2016, resident-specific rates of routine lab orders (eg, complete blood count, basic metabolic panel, comprehensive metabolic panel, liver function panel, and common coagulation tests) were synthesized continuously in a web-based dashboard. Laboratory orders could be placed either individually on a day-to-day basis or on a recurrent basis (eg, daily morning labs ordered on admission). The dashboard contained an interactive graph, which plotted the average number of labs per patient-day ordered by each resident over the past week, along with an overall graph for all services for comparison (Appendix Figure). Residents could click on an individual day on the graph to review the labs they ordered for each patient. The dashboard also allowed the user to look up each patient’s medical record to obtain more detailed information.
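The dashboard's core metric, average routine labs ordered per patient-day per resident, can be sketched as follows. This is a minimal illustration, not the study's actual implementation; the data shape, field names, and example values are all assumptions.

```python
from collections import defaultdict
from datetime import date

def labs_per_patient_day(orders, patient_days):
    """Compute each resident's routine-lab ordering rate.

    orders: list of (resident_id, order_date) tuples, one per routine lab order.
    patient_days: dict mapping resident_id -> total patient-days attributed
                  to that resident over the reporting window.
    """
    counts = defaultdict(int)
    for resident_id, _order_date in orders:
        counts[resident_id] += 1
    # Rate = total routine labs / total patient-days, per resident.
    return {r: counts[r] / d for r, d in patient_days.items() if d > 0}

# Hypothetical example: resident A orders 3 labs over 2 patient-days,
# resident B orders 1 lab over 1 patient-day.
orders = [
    ("res_a", date(2016, 1, 4)), ("res_a", date(2016, 1, 4)),
    ("res_a", date(2016, 1, 5)), ("res_b", date(2016, 1, 4)),
]
rates = labs_per_patient_day(orders, {"res_a": 2, "res_b": 1})
print(rates)  # {'res_a': 1.5, 'res_b': 1.0}
```

A dashboard like the one described would recompute this rate over a trailing 1-week window and plot it daily, alongside the service-wide average computed the same way.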
All residents received an e-mail describing the study, including the purpose of the intervention, a basic description of the feedback intervention (dashboard and e-mail), potential risks and benefits, duration and scope of data collection, and contact information of the principal investigator. One hundred ninety-eight resident-blocks on 6 general medicine services at the Hospital of the University of Pennsylvania were cluster-randomized with equal probability to 1 of 2 arms: (1) those e-mailed a snapshot of the personalized dashboard, a link to the online dashboard, and text containing resident and service utilization averages, and (2) those who did not receive the feedback intervention. Postgraduate year (PGY) 1 residents were attributed only the orders they placed themselves; PGY2 and PGY3 residents were attributed the orders for all patients assigned to their team.
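The cluster randomization described above, with the resident-block as the unit of randomization and equal probability of assignment to either arm, can be sketched as follows. The block labels, arm names, and seed are illustrative assumptions, not details from the study.

```python
import random

def cluster_randomize(blocks, seed=None):
    """Assign each cluster (resident-block) to an arm with equal probability.

    Every unit within a block inherits the block's assignment, which is
    what makes this a cluster randomization rather than an individual one.
    """
    rng = random.Random(seed)
    return {block: rng.choice(["feedback", "control"]) for block in blocks}

# 198 hypothetical resident-block identifiers.
resident_blocks = [f"block_{i:03d}" for i in range(198)]
arms = cluster_randomize(resident_blocks, seed=0)
print(sum(arm == "feedback" for arm in arms.values()), "blocks assigned to feedback")
```

Note that an independent coin flip per cluster yields only approximately equal arm sizes; designs requiring exact 1:1 balance typically shuffle the cluster list and split it in half instead.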
The initial e-mails were timed to arrive in the middle of each resident’s 2-week service to allow for a baseline and follow-up period. The e-mail contained an attached snapshot of the personalized graphic dashboard (Appendix Figure), a link to the online dashboard, and a few sentences summarizing the resident’s utilization average compared with that of the general medicine service overall for the same time interval. It was followed by a reminder e-mail 24 hours later containing only the link to the dashboard. We measured resident engagement with the utilization dashboard by using e-mail read-receipts and a web-based tracking platform that recorded when the dashboard was opened and who logged on.
Following completion of the intervention, 3 hour-long focus groups were conducted with residents. These focus groups were guided by scripted questions to prompt discussion of the advantages and drawbacks of the study intervention and of dashboard use in general. The sessions were digitally recorded and transcribed. The transcripts were reviewed by 2 authors (KR and GK) and analyzed to identify common themes by using a grounded theory approach.7 First, the transcripts were reviewed independently by each author, who each generated a broad list of themes across 3 domains: dashboard usability, barriers to use, and suggestions for the future. Next, the codebook was refined through an iterative series of discussions and transcript reviews, resulting in a unified codebook. Lastly, all transcripts were reviewed by using the final codebook definitions, yielding a list of exemplary quotes and suggestions.
The study was approved by the University of Pennsylvania Institutional Review Board and registered on clinicaltrials.gov (NCT02330289).