Randell R, Alvarado N, Elshehaly M, et al. Design and evaluation of an interactive quality dashboard for national clinical audit data: a realist evaluation. Southampton (UK): National Institute for Health and Care Research; 2022 May. (Health and Social Care Delivery Research, No. 10.12.)
Background
A key function of national clinical audits (NCAs) is to reduce variation in care quality by stimulating quality improvement (QI). However, variation in provider engagement means that the potential for national audit data to inform QI is not being realised. This study sought to develop and evaluate a quality dashboard (QualDash) to support clinical teams and managers to better understand and make use of national audit data.
Objectives
- To develop a programme theory that explains how and in what contexts use of QualDash will lead to improvements in care quality.
- To use the programme theory to co-design QualDash.
- To use the programme theory to co-design an adoption strategy.
- To understand how and in what contexts QualDash leads to improvements in care quality.
- To assess the feasibility of conducting a cluster randomised controlled trial (RCT).
Methods
The study design drew on realist evaluation and the biography of artefacts approach. In phase 1, we conducted 54 interviews with staff across five NHS trusts. Participants included clinicians, audit support staff, quality and safety committee members, trust board members and those who commission health-care services. Interviews explored use of a range of national audits, but focused on the Myocardial Ischaemia National Audit Project (MINAP) and the Paediatric Intensive Care Audit Network (PICANet). Framework analysis was used to analyse the interview data. We developed a programme theory explaining how and in what contexts NCA data stimulated QI and identified initial dashboard requirements. Requirements were prioritised in a workshop with suppliers of other audits using a variation of the nominal group technique. Twenty-one participants attended, representing 19 NCAs.
In phase 2, QualDash was developed in collaboration with staff from one trust. The first co-design workshop was held with seven people, including clinicians and audit support staff who worked with MINAP and PICANet data and representatives from other trust groups (e.g. information managers). In groups, participants undertook a ‘story generation’ activity, an approach from information visualisation design. Participants then sketched out a dashboard that would provide minimally sufficient information to answer their most pressing questions at a glance. As an additional source of data to inform dashboard design, seven meetings at which audit data were discussed were observed across four trusts. Findings from the workshop and observations were used to develop a QualDash prototype.
In a second co-design workshop, feedback on the prototype was obtained from seven participants, first using a paper-based activity and then using the think-aloud technique and the System Usability Scale (SUS) questionnaire. The think-aloud technique was also used with five staff from another trust, who likewise rated usability using the SUS questionnaire. In addition, dashboard usability was assessed through heuristic evaluation, undertaken by four participants with expertise in human–computer interaction, health informatics, visualisation and clinical audit. Two instruments were used: a heuristic evaluation checklist developed and validated for evaluating health-care dashboards, and a set of heuristics from the visualisation literature that assess the potential utility of a visualisation.
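For reference, SUS scoring follows a fixed formula: each of the 10 items is rated 1–5; odd-numbered (positively worded) items contribute (rating − 1), even-numbered (negatively worded) items contribute (5 − rating), and the summed contributions are multiplied by 2.5 to give a 0–100 score. A minimal Python sketch of this standard procedure follows (the example ratings are illustrative, not study data):

```python
def sus_score(ratings):
    """Compute a System Usability Scale score from 10 item ratings (1-5).

    Odd-numbered items are positively worded: contribution = rating - 1.
    Even-numbered items are negatively worded: contribution = 5 - rating.
    The summed contributions (0-40) are scaled to 0-100.
    """
    if len(ratings) != 10 or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("SUS requires ten ratings, each between 1 and 5")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd-numbered item
        for i, r in enumerate(ratings)
    ]
    return sum(contributions) * 2.5

# Illustrative ratings only (not study data):
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```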
Development of QualDash confirmed what functionality would be available to staff; from this, a programme theory was developed that explained how and in what contexts QualDash might stimulate QI. Theory construction drew on the phase 1 situation analysis, which provided insight into current supports of, and constraints on, the use of NCA data and enabled theorisation about how these existing factors would influence the impact of QualDash.
In phase 3, we developed an adoption strategy through focus groups with 23 participants from the five trusts, including clinicians, audit support staff, information staff and information technology staff. Transcripts were analysed thematically. For each trust, data were indexed and we summarised the discussion of each strategy, including how it should be delivered at each trust and why participants felt that it might work to support QualDash uptake and use. Ideas about the mechanisms through which QualDash would be adopted were added to the QualDash programme theory.
In phase 4, we made QualDash available in the five trusts. QualDash evaluation involved a multisite case study and interrupted time series (ITS) analysis. We collected data across the five trusts using observations, interviews, a questionnaire based on the technology acceptance model (TAM) and log files. We undertook 148.5 hours of observations. At the end of the evaluation, the questionnaire was distributed to 35 participants who were known to have used QualDash or who had seen it demonstrated or used in meetings. Twenty-three questionnaires were completed. Qualitative data collection and data analysis were iterative, enabling ongoing testing and refinement of the QualDash programme theory. We gathered further data in the light of revisions and refined QualDash in response to participants’ feedback. Fieldnotes were analysed thematically. Log files were analysed to determine the number of uses of QualDash per audit per month, broken down by role. We produced summary statistics for each TAM item. An ITS analysis of the effect of QualDash on data quality was undertaken with data from four trusts.
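For illustration, an ITS analysis of this kind is often implemented as a segmented regression that estimates a step change in level and a change in trend at the point the intervention was introduced. The sketch below shows the general approach in Python using statsmodels; the variable names, the synthetic data and the model specification are illustrative assumptions, not the study's actual model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative monthly data-quality series (e.g. % complete records);
# 'intervention' flags months after the dashboard was installed.
rng = np.random.default_rng(0)
n_months = 36
install_month = 18
df = pd.DataFrame({
    "time": np.arange(n_months),  # months since series start
    "intervention": (np.arange(n_months) >= install_month).astype(int),
})
df["time_after"] = np.maximum(0, df["time"] - install_month)  # months since installation
df["quality"] = (80 + 0.1 * df["time"] + 2 * df["intervention"]
                 + 0.2 * df["time_after"] + rng.normal(0, 1.5, n_months))  # synthetic outcome

# Segmented regression: baseline trend, level change and trend change
model = smf.ols("quality ~ time + intervention + time_after", data=df).fit()
print(model.params)  # 'intervention' = step change; 'time_after' = slope change
```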
In phase 5, feasibility of conducting a cluster RCT of QualDash was assessed, using predefined progression criteria. We also considered, in the context of the COVID-19 pandemic, how QualDash would need to be adapted to support different scenarios, specifically daily monitoring of NCA data and, using a different data set, population health monitoring. Seven interviews were conducted and transcripts were analysed using framework analysis.
Findings
Phase 1 interviews revealed that NCA data are largely used by clinical teams, whereas staff at the organisational level (e.g. the board and subcommittees that report to the board, such as quality and safety committees) perceived an imbalance between the benefits of NCA participation and the resources it consumed, leading them to question the legitimacy of the audits. There was considerable variation between trusts in the extent to which clinical teams engaged with NCA data, with data more likely to be used in trusts with greater resources, particularly technology for accessing data and audit support staff with the skills and time to produce data visualisations. In addition, data timeliness and quality mattered, as did features of the audits themselves, such as whether or not participation was mandatory and the perceived importance of the metrics. Nursing staff perceived PICANet to be of little relevance to them because it did not capture what they considered to be important markers of care quality. The majority of tasks undertaken using NCA data involved only two variables, suggesting that QualDash should use simple visualisation techniques that users were already familiar with, such as bar graphs and pie charts. Other key requirements included presentation of all important metrics when first accessing the dashboard, the ability to ‘drill down’ (e.g. selecting to view the data by certain groups), the ability to customise visualisations (e.g. selecting the time period over which data are displayed) and support for creating reports and presentations.
In phase 2, the first co-design workshop revealed several key findings:
- For each metric, there are ‘entry-point tasks’ (i.e. the primary tasks a user will want to undertake in relation to the metric, which involve monitoring a small number of measures over time).
- Investigation of further detail of a metric involves one or more of three subtasks: (1) breaking down measure(s) for patient subcategories, (2) linking with other metric-related measures and (3) expanding in time to include different temporal granularities.
- Metrics have independent task sequences (i.e. what a user will want to explore after the entry-point tasks will vary according to the metric).
The QualDash prototype was designed to address key constraints on the use of NCA data captured in our NCA programme theory, while also incorporating requirements from the phase 1 interviews and the learning about task sequences from the first co-design workshop. To give a more equal opportunity to sites that lacked the resources to produce visualisations, QualDash provides immediate visualisations of key metrics. A visualisation called a QualCard is generated for each key metric, providing a quick view of all such metrics on accessing QualDash. Each QualCard can be expanded, providing three customisable visualisations to support tasks associated with the key metric. QualDash also sought to improve access to timely data, providing users with a means to visualise the data that they collect for the NCAs without having to wait for those data to be returned to them by audit suppliers. To this end, QualDash was located on site servers, giving users control over how often data were uploaded. SUS scores from the two think-aloud participant groups were 74 in the first session and 89.5 in the second (on the scale’s 0–100 range), indicating very good usability.
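To make the mapping from metrics to entry-point tasks and subtasks concrete, the following is a minimal, purely hypothetical sketch of how a QualCard’s configuration might be represented; the field names and the example metric are illustrative assumptions, as the summary does not specify QualDash’s actual configuration format:

```python
# Hypothetical QualCard configuration: one card per key metric, with an
# entry-point view and customisable drill-down views. Field names are
# illustrative assumptions, not QualDash's actual schema.
qualcard_example = {
    "metric": "Median time from admission to angiography",  # illustrative MINAP-style metric
    "entry_point": {                                        # shown at a glance on load
        "chart": "line",
        "measures": ["median_minutes"],
        "granularity": "month",
    },
    "subtasks": [                                           # expanded, customisable views
        {"task": "break_down", "chart": "bar", "by": "patient_subgroup"},
        {"task": "link", "chart": "bar", "with": "related_measure"},
        {"task": "expand_in_time", "chart": "line",
         "granularity": ["week", "month", "quarter"]},
    ],
}
```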
In phase 3, attitudes about what was needed for adoption of QualDash were consistent with suggestions from phase 1 and similar across sites (i.e. the need for a ‘champion’, raising awareness through e-bulletins and demonstrations at meetings, and quick reference tools). Through discussion, details of the strategies evolved and we gathered further ideas from participants regarding why these strategies would work. In particular, it was suggested that, although multiple people may work together as champions, a clinical champion was needed because a clinical champion would have the authority to encourage dashboard use.
In phase 4, locating QualDash on local servers led to challenges in dashboard installation. QualDash was installed in four trusts by the end of July 2019 and in the fifth trust in December 2019. Levels of use varied across sites. In some cases, old computers and difficulties in getting Google Chrome (Google Inc., Mountain View, CA, USA) or RStudio (RStudio, Boston, MA, USA) installed constrained uptake and use. Issues arose as staff explored their site data using QualDash. This revealed that not all measures were configured as users expected, which constrained QualDash use where data reporting routines were already established. It also highlighted the need for additional labelling to make users aware of which measures they were interacting with and how those measures had been configured. That QualDash could easily be customised was important in addressing some of these concerns. QualDash provided the greatest benefit to teams constrained in their ability to use NCA data; in such contexts, QualDash increased data engagement by facilitating access and interaction and reduced the time spent preparing reports. QualDash was used to support improvements in data quality, although the interrupted time series analysis did not provide evidence of improved data quality. The questionnaire revealed positive attitudes to QualDash in terms of ease of use and usefulness, although these results should be treated with caution because of the small and possibly biased sample. Observations in this phase also revealed the labour-intensive work involved in data collection for NCAs, with use of paper data collection forms and time-consuming cross-checking.
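As described in the methods, log files were analysed to count uses of QualDash per audit per month, broken down by role. The pandas sketch below shows one straightforward way to produce such a summary; the log format and column names are illustrative assumptions:

```python
import pandas as pd

# Illustrative log format: one row per dashboard session, with a timestamp,
# the audit viewed and the user's role. Column names are assumptions.
logs = pd.DataFrame({
    "timestamp": pd.to_datetime(["2019-08-03", "2019-08-15", "2019-09-02"]),
    "audit": ["MINAP", "PICANet", "MINAP"],
    "role": ["clinician", "audit support", "clinician"],
})

# Uses per audit per month, broken down by role
uses = (
    logs.assign(month=logs["timestamp"].dt.to_period("M"))
        .groupby(["audit", "month", "role"])
        .size()
        .rename("uses")
        .reset_index()
)
print(uses)
```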
In phase 5, a trial of QualDash was assessed as feasible and a stepped-wedge factorial design was developed. Interviews with individuals associated with Gold Command revealed that they were accustomed to working with data and saw this work as essential to decision-making, drawing on a wide range of data sources and tools. Data timeliness was reported as especially important for population health monitoring. There was a desire to bring together different data sources, with participants wanting a dashboard that would help them to identify priorities to focus on.
Conclusions
Implications for national clinical audits
Our study suggests that the following strategies may be beneficial for NCAs in increasing engagement:
- involving a range of professional groups in the choice of metrics to ensure that the metrics have relevance to all members of the multidisciplinary team, with careful consideration of the amount of data to be collected
- moving from an emphasis on cumulative, retrospective reports to real-time reporting, clearly presenting the ‘headline’ metrics important to organisational-level staff
- wider use of routinely collected clinical data to populate NCA data fields
- further use of technologies, such as dashboards, that help staff to explore and report NCA data in meaningful ways.
Implications for quality dashboard design
Our study suggests that those designing quality dashboards to support engagement with NCA data may find it beneficial to include the following:
- ‘at-a-glance’ visualisation of key metrics that are considered markers of safe and effective care on first logging into the dashboard
- simple visualisations, such as bar graphs and pie charts, configured in line with existing visualisations used by teams, with clear labelling of metrics
- functionality that supports current queries and tasks, including creation of reports and presentations
- ability to explore relationships between variables and drill down to look at specific subgroups of patients
- low requirements in terms of computing resources, including the ability to work on any web browser.
Implications for practice
For health-care organisations seeking to introduce a quality dashboard, our study suggests that the following strategies may be beneficial.
Clinical champion
If a clinical champion promotes the use of the dashboard, highlighting its benefits, staff who trust the champion’s opinion may be more willing to use it.
Avoiding the ‘dodgy brush’
Dashboards should be tested with real data, prior to roll-out, by staff who already use those data and are expert in their interpretation, enabling revision so that metric configurations fit user expectations. This will give champions confidence that metrics are calculated appropriately and, therefore, they will be willing to promote dashboard use.
Routines for using audit data
If data presented by the dashboard are not already used routinely, routines for integrating dashboard use into the work practices of clinical teams should be established.
Involvement of audit support staff
If clinical teams are already using the data that the dashboard displays, supported by audit support staff, adoption activities should focus on engaging and training audit support staff, promoting not just the features of the dashboard but also showing how it allows audit support staff to undertake their work more easily or quickly.
Customisation as design
The process of customising the dashboard to meet local user expectations should be seen as part of the adoption strategy.
Recommendations for research
Future research should include:
- investigation of the extent to which NCA dashboards are used and the strategies NCAs are using to encourage uptake
- a realist review of the impact of computer-based dashboards on quality and safety of care
- a rigorous evaluation of the impact of computer-based quality dashboards on the processes and outcomes of care
- a rigorous evaluation of the effectiveness of different strategies for encouraging use of dashboards.
Study registration
This study is registered as ISRCTN18289782.
Funding
This project was funded by the National Institute for Health and Care Research (NIHR) Health and Social Care Delivery Research programme and will be published in full in Health and Social Care Delivery Research; Vol. 10, No. 12. See the NIHR Journals Library website for further project information.