Horrocks S, Pollard K, Duncan L, et al. Measuring quality in community nursing: a mixed-methods study. Southampton (UK): NIHR Journals Library; 2018 Apr. (Health Services and Delivery Research, No. 6.18.)
Introduction
In this appendix we present the key discussion points from 10 stakeholder engagement events: eight workshops and two facilitated conference sessions held over 4 months, from June to September 2016. The workshops were held in different cities across England: Birmingham, Bridgwater (Somerset), Bristol, Leeds, London, Nottingham, Southampton and York. The two conference sessions were part of the programme for the conference ‘Community Nursing: Innovation and Transformation’, held in Leeds in July 2016, and for the QNI conference, held in London in September 2016. Each event was facilitated by three or four members of the research management group with a range of professional backgrounds, including a member of SURG, together with an independent consultant with expertise in group facilitation and commissioning. After the first workshop, at least one facilitator at each event had been involved in a previous event.
The purpose of the stakeholder engagement events was to:
- test the emerging findings from the case study sites to check our analyses and interpretation
- review and improve draft good practice statements and identify further areas of good practice (see Appendices 3 and 4)
- engage key stakeholders in our research and its outputs to enhance impact.
The workshops were designed to engage a mixed audience of commissioners, service managers, front-line staff, patients and carers, and to involve them in a deliberative dialogue exercise.69 This was achieved in the majority of the events, although, for practical reasons, the composition varied and not all stakeholder groups were represented at every workshop. The eight workshops were attended by a total of 120 delegates, with attendance at a single event ranging from 10 to 27 individuals. All stakeholder groups were represented at three events; it proved impossible to recruit any patients or carers to three events, and commissioners were not present at two others. However, every workshop involved at least two of the three stakeholder groups targeted.
Each workshop lasted for 3 hours and followed the same format. Each started with short presentations to set the scene and context for the research. This typically included a keynote presentation focused on quality and community nursing delivered by a local quality or service lead, followed by a short presentation by a member of the research team on the what, why and how of the research. We emphasised here that the research focus was on the ‘how’, that is ‘how is quality measured?’, rather than the ‘what’, that is, ‘what does a good QI for community nursing look like?’ The next stage involved the use of deliberative dialogue,69 where participants worked in small groups to share, discuss and capture their thoughts, ideas and experiences in relation to four findings and associated good practice statements presented (see Boxes 1–4). Key points from these discussions were then fed back to the wider group for further consideration.
Each workshop was evaluated to seek feedback on format, quality of presentations and facilitation, as well as asking participants to identify an action they could take as a result of attending the workshop. The feedback from the first workshop was used to review and refine the workshop content and format for future events.
The two conferences had a greater focus on reaching community nurses. Twenty-four nurses attended the session at the conference held in Leeds and 122 took part in the session at the QNI conference. The conference sessions used the same presentation as the workshops and lasted for approximately 45 minutes. Owing to these time constraints, the discussions focused on one finding only (see Finding 1: indicators are not always fit for purpose). Owing to the scale of these conferences and the timings of the conference programmes, we were unable to evaluate the individual sessions.
The findings presented and discussed at the stakeholder engagement events emerged from data collected through interviews and focus groups with commissioners, providers, front-line staff, patients and carers across all five case studies. Owing to time constraints, it was not possible for delegates to discuss more than a few of the many findings that emerged from the study. The findings presented were selected because they were consistent across all five sites and suitable for discussion by all stakeholder groups:
- Not all indicators selected are fit for purpose as the selection process does not always involve all the right people.
- Indicators rolled out from hospital settings often do not work well in community nursing settings.
- Staff do not always receive adequate information about indicators either before or after they collect relevant data.
- Quality in community nursing is hard to measure and a focus on collecting numbers does not give a true reflection of the service being delivered.
Notably, these findings all described challenges inherent in current processes. The draft good practice statements developed were based on the evidence collected from the case study sites.
To facilitate discussion in their small groups, participants were asked to consider the following questions:
- Is this finding a key issue for you?
- Do the good practice statements address this issue?
- How could the good practice statements be put into practice?
- How else could this issue be addressed?
Stakeholder discussions
In the following section, key points brought up during the stakeholder discussions at both the workshops and the conference sessions are presented for each of the four findings and their associated good practice statements (see Boxes 1–4). There were common messages and themes that emerged from these discussions which highlighted a degree of interdependency between the four findings.
Finding 1: indicators are not always fit for purpose (Box 1)
Most delegates agreed with this finding, feeling that it reflected their local context and situation. Those who did not fully identify with it had already started to put processes in place to improve the level of involvement. However, it became apparent that some front-line staff were not even aware that a process existed to identify and select QIs or CQUINs.
Reasons given for the importance of engaging and involving all the right stakeholders in the process of selection included the fact that different stakeholders bring different perceptions, expectations, experience and expertise to the process. Gaps in the involvement of some stakeholders, particularly front-line staff and patients and carers, were identified by delegates, reflecting the finding discussed. In addition, suggestions around who should be involved in the process included those working in interdependent organisations (primary, secondary and domiciliary care providers), researchers and experts in quality measurement. The involvement of community nurses in the selection process was proposed by participants as a mechanism to help improve the practical and clinical appropriateness of indicators selected and increase ownership and understanding among front-line staff of the importance of QIs.
A number of challenges and barriers were identified that could prevent active engagement with all stakeholders in indicator selection. These included the short time scales allowed for the selection process; the time and resources needed to engage and involve the relevant stakeholders; the availability of staff; the ability to engage patients who are often housebound, frail and/or lacking capacity; and the multiplicity of providers and commissioners. Ideas for improving the process and overcoming some of these challenges were put forward, some of which had already been put into practice locally. These included using tools, such as stakeholder analysis, to identify relevant stakeholders; starting the process earlier in the year to enable more time for involvement; holding specific workshops for stakeholder engagement or building it into existing fora; using existing data to identify areas for improvement and develop indicators; and providing staff with training about the processes involved in selecting QIs. There was a general feeling that a more bottom-up approach was needed and that this should utilise existing information, learning and feedback to identify areas for improvement.
Delegates agreed that it was important to actively seek feedback from patients and carers about the quality of the services they receive; there was a particular emphasis on consulting carers in some workshops. Associated barriers and challenges, in addition to the patient group characteristics identified above, included practicalities such as postage, time and resources to gather feedback; issues of representation and difficulties of engaging those who are seldom heard; and ethical issues and implications of staff collecting feedback from patients, including bias, fear of impact on their own care and the lack of anonymity.
There were a number of ideas and examples of current mechanisms that could help improve the engagement and involvement of patients and their carers. These included involving the voluntary sector, such as Healthwatch, to represent the patient voice and using volunteers to collect feedback from patients. Utilising existing patient groups, such as patient participation groups in general practices, or establishing new patient reference groups was also suggested, together with specific events for carers such as carer roadshows. It was also thought that 360-degree feedback could be built into existing processes (e.g. staff and student appraisals), and that social media could be an effective tool for engagement. There was a strong message from delegates that patients and their carers should be involved throughout their care, from the co-design of services to the selection of QIs, as well as participating in care planning and goal-setting.
There was agreement that greater clarity is needed around the purpose and goal of the indicators selected. Concerns were raised about the current focus of QIs, with participants questioning whether or not they actually measure quality. There was recognition that indicators tend to be more task focused than outcome focused, which has possible unintended consequences for care and makes many of them feel like a tick-box exercise. It was suggested that indicators currently focus more on money and on organisational or commissioner issues than on those important to patients, and that they tend to be used to judge the quality of care rather than to improve it.
There was a consistent message that indicators need to be more person centred and outcome focused, with a greater emphasis on individual patients’ goals. In particular, the point was made that one size does not fit all. The organisational level at which indicators are set was also raised as an issue, with suggestions that this should be more at the micro, patient, clinical or local level. It was felt that this could be achieved by involving patients in their care planning and by using patient-reported outcome measures and goal-setting tools as indicators of quality.
As in our wider study data, some of the workshop attendees highlighted the lack of a national suite of outcome measures to facilitate benchmarking. The facilitators noted that there was a lack of awareness of existing Pick Lists and menus of QIs in some of the workshops, even among commissioner attendees. This desire for benchmarking raises a potential tension between developing a bottom-up, patient-centred approach and the advantages of having a shared national suite of indicators.
In terms of the good practice statements, the issue of definitions was raised with regard to some of the terminology used. It was suggested that there needed to be more clarity around what is meant by ‘involvement and engagement’ and what is meant by ‘fit for purpose’.
Finding 2: quality indicators that work in acute settings are not necessarily suitable for community (Box 2)
Workshop participants were quick to identify QIs that matched this finding, citing the FFT, NST, staff well-being and falls CQUINs as examples. There were discussions around the origins and drivers for these indicators and it was noted that many of them had been set at a national level for acute settings, provoking the question of how to raise the community service profile so as to influence the national level. An example related to the staff well-being CQUIN target set for 2016/17. This CQUIN obliges provider organisations to ensure that healthy eating options are available to staff at all times, an unrealistic target in community settings.
As found in our interview data, workshop participants felt that contextual differences were an important factor in successful indicator implementation. Examples given of differences in context included environmental factors (a person’s home vs. a hospital), interdependencies with other services (whole systems) and the associated differing power bases, nursing caseloads and IT systems.
A lack of consideration of the contextual differences was often perceived to hinder the successful implementation and usefulness of indicators. Suggestions made for overcoming these barriers included providing flexibility and time to adapt indicators to the community setting, with an idea of working in partnership with the acute sector. This would need to take into account the local environment, patient needs, local service provision, IT and data collection methods/systems (utilising existing data and data collection methods). An example was given of where the development of leaders and champions for the PU QI had improved compliance in data collection and outcomes. Other suggestions included improvements in communication and IT/information sharing.
Participants agreed with the good practice statement that staff need to feel they have ownership of the indicators and that the indicators reflect the service and quality of care being delivered. Some attendees went further, suggesting that information should also be provided to help staff understand what is working well, share best practice and identify areas for improvement. Discussion linked back to the importance of involving front-line staff in the selection and design of QIs to facilitate this, and to the need for data to be relevant, easy to collect and more person centred.
There was some difference of opinion about whether or not using the SMART acronym was appropriate or even feasible. Concerns were raised around how certain elements of care and the human experience could be measured. Participants provided alternative suggestions, from a revision of the SMART acronym to refocusing an indicator on patient-centred outcome measures. Many agreed that indicators need to be measurable, that the data need to be easy to collect and should utilise both quantitative and qualitative information, including feedback from patients. These opinions all reflect the wider study findings.
Most delegates agreed that it was a good idea to limit the number of QIs. Those that expressed some uncertainty about this statement raised concerns about possible tensions between setting a limit and comprehensively addressing quality issues appropriately.
Finding 3: front-line staff do not receive adequate information about quality indicators (Box 3)
Front-line staff felt that finding 3 resonated with them, with many stating that they often do not receive information about QIs before they are implemented, nor do they receive feedback about how the data they have collected are used. It was clear from the workshops that the understanding of QIs, the consequences of not meeting indicator targets and the awareness of the processes involved in indicator selection and application varied among front-line staff attending. It was generally felt that the issue identified in this finding needs to be addressed.
It was also agreed that timely, relevant, meaningful, open and transparent feedback mechanisms should be in place for staff. Delegates at one workshop suggested a cycle of involving staff in indicator selection, preparation for data collection, data collection itself and general feedback. Issues that emerged during discussion included some of the current challenges of data collection (e.g. the time required to collect data and the impact this can have on front-line nurses’ workload). It was also felt that there was often a lack of clarity around the purpose or importance of the data being requested, as well as issues with data quality and IT solutions. These opinions were all consistent with the study’s wider interview and focus group data.
Improving the ownership of indicators was seen as an important step towards improving their implementation and usefulness. It was agreed that nurses need to recognise them as an important mechanism in enabling the delivery of high-quality care. Participants suggested that ensuring indicators have a clear purpose, linking back to the patient, the service specification, quality domains, key strategic objectives and the NHS constitution, would improve ownership and make it easier for community nurses to understand why they are important. Greater clarity was also needed around the importance of the data to be collected and how they would be used. The use of protected learning time and ‘softer’ intelligence, where staff are given the opportunity each quarter to ask ‘how are things going?’, together with more time allowed for implementation, engagement and feedback, was suggested as a good way to improve staff understanding of, and engagement with, indicator processes. One delegate gave an example of using a team meeting to share information about an indicator with staff, with a subsequent improvement in the amount and quality of indicator data collected.
Communication and information sharing was a strong theme across all the discussions. Participants reflected that communication around QIs needs to improve both between and within organisations. Examples were given of organisations that had tried to address information sharing and communication through improved IT, including open access to team folders and organisational dashboards, and through building on existing mechanisms such as community nurse fora, team meetings, clinical updates, newsletters and websites. Other examples included introducing new mechanisms such as quality boards in community offices, developing champions, using supervision time and embedding relevant information into induction processes. IT was seen as both a barrier to and a facilitator of this: solutions such as access to team folders helped, but a lack of information and data sharing, particularly between interdependent agencies, acted as a barrier.
There were some concerns around the wording of these good practice statements, in that delegates felt they represented staff as being slightly ‘done to’ rather than as partners in the process. This links back to the need to ensure that staff feel engaged and involved in the whole process, rather than acting merely as agents for data collection.
Finding 4: not all aspects of nursing care are easy to measure (Box 4)
Although the majority of attendees agreed with finding 4, in two workshops there was some disagreement around whether or not quality is hard to measure. One argument related to the fact that it is possible to measure the improvement in a patient’s health (e.g. if a PU resolves then this is measurable). However, other delegates raised the difficulty of measuring the emotional or human element of care.
There was general agreement that using a mixture of both quantitative and qualitative data (also referred to as ‘hard’ and ‘soft’ or ‘numbers’ and ‘narrative’), gave a better understanding of quality of care. Some front-line staff attendees offered examples of situations where they felt they had been able to provide really good care to patients; however, as this care depended to a considerable extent on the nurses’ skilful use of empathic perception and/or interpersonal skills, it had not been possible for them to record either the care given or the outcome so that this information could feed into the formal quality monitoring process. Delegates at one workshop raised the difficulty in quantifying quality, with those at another noting that quality has a ‘qual’ in it as in ‘qualitative’. In some workshops this finding led back to previous discussions around the focus of QIs. Participants reiterated the need for a greater focus on person-centred goals and outcomes to support quality improvement, feeling that this in turn would influence the type of data collected.
Delegates also considered the challenges and barriers to achieving more comprehensive data on the quality of community nursing care. These included a lack of the skills, resources and time needed to support data collection and analysis; issues with IT and with current data collection and feedback mechanisms; and the long-term nature of some goals.
To help implement good practice, a number of ideas were put forward, primarily focused on the collection of qualitative data and linking back to the ideas proposed for collecting feedback from patients. These ranged from utilising existing data and data collection mechanisms, such as case note audits, peer review and ‘deep dives’, to using policy drivers such as the Social Value Act 2012 as a lever.101 The involvement of the voluntary sector and volunteers in data collection was again raised as a possibility, alongside holding focus groups and collecting patient stories and case studies. The importance of finding ways to capture staff feedback was also noted, with suggestions including the use of reflective practice and revalidation processes. Examples were given of patients being invited to share their stories and experiences at team meetings or as part of a focus group, and these were thought to have been valuable processes.
Attendees re-emphasised the need for data collection to be simple and, where possible, to utilise existing data and systems. In one workshop, however, it was suggested that the focus needed to shift from data collection to good record keeping.
It was suggested that the word ‘consider’ should be removed from the first good practice statement, as it was felt that quality measurement should always involve mixed methods. Although participants agreed in principle with the good practice statement that ‘data collected is of high quality’, discussions raised the issue of the meaning of high-quality data. It was felt that this would need further description or definition within the good practice statement, as ‘quality’ means different things to different people. Some of the descriptions offered by delegates of data that might be seen as ‘high quality’ included their being robust, validated or triangulated.
Cross-cutting themes
A number of cross-cutting themes emerged from discussions that reflected many of the challenges currently affecting community nursing.75 These included issues around recruitment and retention, changing caseloads and increasing workload; the impact of indicator data collection on staff workload and the time available to care for patients; the interdependency of nursing services on other services and the move towards integration of care; the challenges with time scales, IT and documentation; and the lack of communication, engagement and training around QIs with front-line staff. All of these themes reflect our wider study data.
Interestingly, discussion also highlighted the relationship between the four findings presented, with interdependencies noted between all of them. There was a consistent message that the appropriate involvement of the right people, good communication throughout the process and clarity of purpose would address many of the challenges presented within the findings. Another common message was the need to change the focus of indicators to concentrate more closely on patients and their outcomes, together with a focus on improvement rather than judgement.
Workshop evaluation
Eighty-eight evaluation forms were completed across the eight workshops; on 84 (95.45%) of these, the workshop was rated overall as either excellent or good. Sixty participants identified an action they could take forward following the workshop. These included:
Evolve CQUIN differently from now on – with those who will be collecting/involved in the indicator collection.
As a commissioner, ensure that development of outcome measures is carried out collaboratively.
Think about greater use of PROMs [patient-reported outcome measures].
Review the research when available, discuss with colleagues.
The research team felt that holding the workshops was a very valuable exercise, as it enabled some of our study findings to be tested with wider stakeholder groups and enhanced the quality of the good practice guidance developed following data analysis. However, the series of stakeholder engagement events was extremely resource intensive; in particular, the administrative input required was substantial. Funding for such valuable engagement activities should be properly estimated in grant applications submitted to funders.
Summary
Through these events we reached and engaged with more than 260 people: 120 participants attended the eight workshops and 146 people attended the conference sessions.
The discussions at the stakeholder engagement events were very useful. First, they not only revealed that the four study findings presented reflected attendees’ views of their local experiences, but also confirmed some of the wider study findings not presented at these events. This suggests that our analysis and interpretation of our study data are accurate and that our findings transfer to other areas of the country. It was notable that none of the attendees found our findings surprising. In turn, the facilitators of the events reflected that the additional information emerging from the discussions did not reveal anything unexpected.
Second, workshop attendees’ feedback confirmed that the good practice statements were reflective of the findings and addressed the key issues. Useful suggestions for improvements were made and these have been incorporated into the final version of the good practice guidance (see Appendix 4). Focusing discussion on how the good practice statements could be implemented helped to identify some of the challenges and possible solutions, enabling delegates to share good practice within their own organisations, which further informed the good practice statements.
The workshops were very positively evaluated and undoubtedly enhanced the robustness of the study findings as well as the good practice guidance developed as a study output. However, they were extremely resource intensive, particularly in terms of administrative input; the cost of such initiatives needs to be properly estimated in grant applications to funders.