Challenges for implementation science

Abstract
As the field of implementation science comprises a range of different ontological and disciplinary orientations, there is not always consensus on the boundaries of the research agenda or the challenges faced. Explicit discussion about the terminology used in the field would be useful to clarify differences in perspective and conceptual foundations. There is a need to generate an in-depth understanding of intervention design and development by considering the role of theory, the influence of context and meaningful involvement of relevant end-users. Theory-driven, pragmatic evaluation designs are proposed as a solution to produce evidence of intervention effects, and the potential of ‘implementation laboratories’ is also discussed. A balance has to be sought, however, between rigidly controlled studies and adaptive evaluations that allow emergent changes to the intervention or the implementation process through timely feedback. Practical recommendations would support the development of the field in evaluating the implementation of evidence-based practice.
Scientific summary
Implementation science comprises a range of methodologies, frameworks and theories utilising a variety of assumptions and terminology. The challenges inherent within different approaches were debated in the session on ‘Interventions to disseminate and implement evidence-based practice’ and have been summarised here.
Participants at the London meeting in 2015, described in the Introduction to this volume, debated the need for an overarching definition to establish the scope of implementation science as a field of enquiry. One proposed definition framed implementation science as the ‘study of theories, process, models and methods of implementing evidence-based practice’. There was also discussion of a more strategic function of the term as a ‘boundary object’ used to communicate different meanings across disciplines and practice communities. Differences between complicated and complex interventions were particularly highlighted, acknowledging the role of complexity, non-linearity and unpredictability.
Discussions at the meeting also focused on the importance of clearly defining the problem being addressed, in order to design rigorous evaluations that ask the right questions while maximising implementation pragmatism. The relevance of a ‘pragmatic’ approach was deemed necessary in terms of both intervention design and implementation.
The role of theory-based intervention design and development was extensively considered. There was consensus on the significant challenges around how theory becomes applied, when and to what end. It was argued that using conceptually transferable mechanisms to explain interventions and how these might work differently in different contexts and across time could improve transferability, adaptation and scale-up.
Participants at the meeting further debated the need to provide timely feedback and ‘good enough’ evidence on the effect of interventions in practice. On the one hand, the adoption of more iterative, adaptive and incremental designs may enable the incorporation of formative evaluation and feedback; on the other, such designs could limit the power of studies designed to detect statistically measured effects.
The importance of paying attention to context was mentioned throughout the session, along with consideration of the ontological and epistemological assumptions that underlie different notions of how context influences intervention design and implementation. Other topics included the need to tailor user engagement to the problem at hand to ensure relevance, and understanding the role of researchers at the intersection of academia and clinical practice.
Consistent messages were identified at the London meeting about the need for a theory-driven understanding of intervention design, implementation and evaluation. Practical recommendations about how and when to use the range of evaluation options available would further support the development of the field and support health service improvement.
Background
Debates on challenges for the implementation and dissemination of evidence-based practice have been inherently fragmented within different fields and particular areas of expertise. The increasing emphasis on implementation, through both the Medical Research Council framework for complex interventions1 and the more recent guidance on process evaluation,2 has been accompanied by a rapid and broad development of a range of approaches, frameworks and theories utilising a variety of assumptions and terminology. Although there are some helpful reviews of the area,3,4 there are few widely accepted approaches to implementing evidence-based practice and evaluating implementation. This lack of clarity has proven a barrier to considering the bigger picture of intervention design, implementation, development and evaluation in a comprehensive manner. It has also hindered the development of recommendations that recognise the diverse interests of health-care and implementation communities, while being directly relevant to researchers and policy-makers.
This essay presents a summary of the presentations and discussions from the session on implementing and disseminating evidence-based practice that took place at the meeting in London in 2015 described in the Introduction to this volume (referred to below as the London meeting). Key themes include delineating the scope of implementation science as a field of enquiry and conceptualising what constitutes complex interventions in this context; designing rigorous evaluations that ask the right questions while maximising pragmatism of implementation in the real world of practice; using theory-based intervention design and development methodologies; providing timely and appropriate feedback; adequately studying context and engaging in user involvement; and understanding the challenges of scaling up implementation interventions and their evaluation. Participants also reflected on the role of researchers in implementation.
Terminology and definitions
What is implementation science?
A key area of contention at the London meeting concerned the need (or otherwise) for an overarching definition of implementation science to clarify the subject matter of the field. One of the participants argued that a strict or constraining definition might not serve a useful purpose. Instead, it was suggested that a certain level of flexibility in the way ‘implementation science’ is understood could contribute to the term acting as a ‘boundary object’5 between different communities of research and practice. This flexibility would allow an organising vision to be developed which could potentially pull people together in the direction of a shared understanding.
Attempts were made to combine different views, with the proposal that implementation science be defined as the ‘study of theories, process, models and methods of implementing evidence-based practice’. Apart from considering the effectiveness of interventions in terms of clinical outcomes, the aim of implementation science would be to research approaches and methods for closing the gap between what ‘we think we know’ from evidence, and what is routinely practised. It was acknowledged that the subject matter of implementation science is multifaceted, ranging from the introduction, integration, adaptation and sustainability of innovations, to user engagement, sense-making and division of labour. Intended and unintended consequences of adoption and adaptation would need to be considered in a recursive relationship with intervention design (telecare was mentioned as an illustrative example).
The discussion on terminology at the London meeting extended to differences between terms such as implementation science and improvement science. It was argued, however, that an extensive discussion on terminology might serve the esoteric interests of scientists and academics rather than solve key underlying issues, and would be of limited direct relevance to those trying to evaluate how best to close the gap between evidence and practice, or to those implementing improvement projects ‘on the ground’. A counterargument was that clarifying terminology could generate a better understanding of epistemological and ontological underpinnings. It was also deemed necessary to develop a common language with which to engage clinicians. The word ‘science’ was considered useful in establishing credibility and signifying the authority required for the field to be recognised in the medical world. However, it can also be argued that the term implementation ‘research’ might better signify the substance and mission of this area, as it does not imply the creation of a new discipline. The group concluded that it would be useful if people at least explained the terms they are using, especially when coming from different disciplines and ontological positions.
One of the presentations at the meeting further elaborated on the subject matter of implementation science by reflecting on different conceptualisations of the ‘gap’ between evidence and practice. Drawing on Best and Holmes,6 three approaches were discussed: linear (i.e. knowledge transfer), relationship (i.e. partnerships and coproduction) and system models (i.e. implementation as one component of how an organisation develops and changes). Each way of conceptualising the gap builds evaluations of interventions on different underlying epistemologies and assumptions, and on different relationships with stakeholders. This underpinning frame of reference should be taken into account when evaluating interventions to implement and disseminate evidence-based practice.
What do we mean by complex interventions?
The term ‘complex interventions’ was another area of contention at the meeting, primarily around conceptual clarity and consistency of use. There was a range of views about the term, with some favouring more ‘interrogation’ of its meaning, while others provided a more definite picture of what complex interventions comprise. Reference was made to a presentation on randomised controlled trials earlier in the day that defined interventions (1) as complicated to place emphasis on the internal variability of the task (e.g. building a car); and (2) as complex to encapsulate their relationship and variable interactions with context (e.g. raising a child is complex because of the uncertainty and unpredictability of social behaviour). These issues are picked up in a number of other essays emerging from the meeting and appearing in this volume (see Essays 2, 3, 6 and 7).
The role of complexity, non-linearity and unpredictability was particularly highlighted and many acknowledged the need for a more in-depth understanding of change mechanisms, or underlying theories of change, when interventions have fuzzy boundaries and their purpose is not clearly defined. This has implications for the methodologies used to study implementation. One of the participants at the meeting also noted how complexity emerges when spreading change that does not directly fit with existing culture. Different levels of complexity were illustrated by discussing the example of shared care between patients and health professionals as an intervention where multiple feedback loops and power boundaries might lead to variable outcomes. By contrast, clinical interventions, such as changing the boundaries for reporting abnormality in liver function tests, were deemed to be less complex as their rationale would be less challenging for local health-care cultures.
Intervention design and development
Many of the topics discussed referred to the need for a clear definition of the problem being addressed, the role of theory and the relevance of a ‘pragmatic’ approach in terms of both intervention design and implementation.
Asking the right question and defining the ‘problem’
One of the presentations suggested that intervention design has been one of the weakest elements in implementation science over the last 30 years. Intervention design often happens without consulting previous literature and theory, in what has been characterised as ‘an expensive version of trial-and-error’.7,8
Before deciding on the optimal design, participants agreed that it was important to define the implementation challenge fully and with enough specificity, along with understanding the determinants of and barriers to implementing interventions and policies in practice. Specifying the right question should be an interdisciplinary process carried out in conjunction with those implementing the intervention. The right design and appropriate methods would follow from the specification of the question. Discussions also acknowledged a need to find a middle ground between resource-intensive interventions that are difficult to implement in practice and ‘light’ interventions that have little potential for making a difference. This highlights how implementation research and implementation practice are not always viewed as separate, but can be inherently interconnected.
Prospective use of theory was deemed important to identify targets for interventions. However, despite a significant pool of theoretical knowledge available to inform intervention design, how theory becomes applied, when and to what end remains a challenge. The groups at the meeting discussed the importance of establishing what the problem is first (what we need to know) and then looking at which theories might be useful, rather than pre-determining what theory to use. One of the speakers argued for an explicit process of developing interventions, based on an understanding of the determinants of the ‘problem’ and the perceived mechanism of action of the proposed intervention, as well as logistics and practicalities. Although, pragmatically, implementation researchers are not always involved in prospectively embedding theory in intervention design – for example, where the intervention has been developed by a third party such as a health-care organisation – evaluations can still adopt a theoretically informed approach to explain mechanisms of action.
‘Timely and good enough’ feedback
Another challenge highlighted was that of providing timely feedback about the effect of interventions and their influence on practice. In cases where intervention studies take years to complete, findings may have diminished relevance to the changing context of the health service. An increased focus on the adoption of more iterative, adaptive and incremental designs may enable the incorporation of formative evaluation and feedback, although the extent of modification of the intervention as a result of this type of feedback gives rise to additional issues about how to make sense of the modifications and their effects. Two examples of projects currently incorporating elements of formative development were given:
- Translating Research in Elder Care (TREC) uses routine data from a minimum data set to provide feedback to care homes piloting interventions and studying the effects on performance (www.kusp.ualberta.ca/Research/ActiveProjects/TREC.aspx).
- A chronic disease management project from the first round of the Greater Manchester Collaboration for Leadership in Applied Health Research and Care (CLAHRC), using Quality and Outcomes Framework information overlaid with a quality improvement intervention, Plan-Do-Study-Act methodology and facilitation, with data fed back on a monthly basis and benchmarked against other sites (http://clahrc-gm.nihr.ac.uk/our-work-2008-2013/chronic-kidney-disease/).
Along with timeliness, the balance between formative and summative evaluation naturally raised questions about what constitutes ‘good enough’ evidence for service and practice change, what counts as ‘enough’ information, and at what point early findings could be fed back constructively. It was argued that the potential for generating more implementable evidence at the initial trial stage warrants further thought.9 Discussions indicated a need to balance ‘rigorous’ controlled studies with opportunities to design trials in a way that allows findings to emerge throughout the process. Stepped-wedge designs were mentioned as a possible methodology, but changes to the intervention during the study would affect its statistical power unless the trial was carefully designed with each step forming a separate comparison.
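To make the stepped-wedge structure concrete, the sketch below lays out a generic rollout schedule in which each cluster crosses over to the intervention at a different step; the number of clusters and periods, and the Python code itself, are purely illustrative assumptions rather than a design from any study discussed at the meeting.

    # Illustrative sketch only: a generic stepped-wedge rollout schedule.
    # Each row is a cluster, each column a time period; 0 = control, 1 = intervention.
    n_clusters, n_periods = 4, 5  # assumed numbers, chosen for readability

    schedule = []
    for cluster in range(n_clusters):
        crossover = cluster + 1  # cluster c switches to the intervention at period c + 1
        schedule.append([1 if period >= crossover else 0 for period in range(n_periods)])

    for cluster, row in enumerate(schedule, start=1):
        print(f"cluster {cluster}: {row}")
    # cluster 1: [0, 1, 1, 1, 1]
    # cluster 2: [0, 0, 1, 1, 1]
    # cluster 3: [0, 0, 0, 1, 1]
    # cluster 4: [0, 0, 0, 0, 1]

Because each step compares clusters that have just crossed over with those still waiting, an intervention that keeps changing between steps no longer yields a common effect across steps, which is the power and interpretation concern noted above.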
The role of contextual influences was also deemed particularly important, given the need to deliver timely evidence to understand how to embed interventions in the real world. Data accessibility and quality were also considered issues that might prove challenging in implementation research projects, along with the ability to extract data and the use of appropriate analysis tools.
The need to build a cumulative science: the example of audit and feedback
Presentations at the London meeting discussed ways of maximising pragmatism in trials of implementation strategies, referring to audit and feedback (A&F) interventions as an area with a robust evidence base on the likely effectiveness of commonly used strategies. The latest Cochrane systematic review of A&F interventions identified 140 randomised trials showing that, on average, A&F improves care processes (median absolute improvement of +4%), but with substantial variation in the observed effects (interquartile range of +0.5% to +16%) depending on the circumstances and the way the intervention is delivered.10 Metaregression identified features associated with larger effects from A&F, including gaining more information about the context and the nature of the targeted behaviour(s), adding co-interventions and attending to intervention content and delivery strategies.11 Further analysis found that little new information on common effect modifiers is emerging from new trials,12 yet we still need to expand our understanding of how to use A&F optimally.
A pragmatic implementation trial: the ASPIRE example
Policy and practice need to be informed by rigorous evaluations of implementation strategies. As argued by one of the presentations, pragmatic randomised trials generally provide the most valid estimates of ‘real-world’ intervention effects. To judge the level of pragmatism in trials, the revised PRagmatic Explanatory Continuum Indicator Summary tool (PRECIS-2) proposes nine domains: eligibility criteria, recruitment, setting, organisation, flexibility (delivery), flexibility (adherence), follow-up, primary outcome and primary analysis.13 As an example meeting these criteria, ASPIRE (Action to Support Practices Implementing Research Evidence) is a National Institute for Health Research-funded programme aiming to develop and evaluate an adaptable implementation package targeting ‘high-impact’ clinical practice recommendations in primary care (http://medhealth.leeds.ac.uk/info/650/aspire/132/what_is_the_aspire_programme).
Through a process of selecting guideline recommendations, analysing variations in adherence and identifying those with most scope for improvement, the ASPIRE programme involves primary care professionals and patients in developing implementation packages for each of four priorities. The effects of these implementation packages are currently being evaluated in two balanced incomplete block cluster randomised trials. In one trial, 80 practices are randomised to packages to either improve diabetes control or reduce risky prescribing. In the other, 64 practices are randomised to packages to either improve blood pressure control or increase the use of anticoagulation in atrial fibrillation.
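As a purely illustrative aid, and not part of the ASPIRE protocol, the sketch below shows the allocation logic implied by this design for the first trial: every practice receives one implementation package and simultaneously serves as a control for the outcome targeted by the other package. The practice identifiers, random seed and simple shuffle are assumptions made for the sketch; a real cluster trial would typically use stratified or minimised randomisation.

    # Hypothetical sketch of the two-arm, balanced incomplete block allocation
    # described above: each arm doubles as the comparator for the other outcome.
    import random

    random.seed(2016)  # arbitrary seed, for reproducibility of the illustration only
    practices = [f"practice_{i:02d}" for i in range(1, 81)]  # 80 practices, as in the first trial
    random.shuffle(practices)
    arm_a, arm_b = practices[:40], practices[40:]

    allocation = {}
    for p in arm_a:
        allocation[p] = {"receives": "diabetes control package",
                         "acts_as_control_for": "risky prescribing"}
    for p in arm_b:
        allocation[p] = {"receives": "risky prescribing package",
                         "acts_as_control_for": "diabetes control"}

    print(len(arm_a), len(arm_b))  # 40 40: every practice sits in exactly one arm
    print(allocation[arm_a[0]])    # shows the package and the control role for one practice

The same logic applies to the second trial, with 64 practices split between the blood pressure and anticoagulation packages.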
However, maximising pragmatism is necessary but insufficient by itself to build a cumulative science of implementation. Theory-guided approaches to both intervention development and evaluation offer advantages in terms of generalisability from using a common conceptual framework, the potential to understand mechanisms of action and the ability to predict consequences in other settings. The role of theory in the study of implementation is further elaborated in Using theory in implementation science.
Embedding evaluations within health-care systems: the example of ‘implementation laboratories’
‘Implementation laboratories’ were proposed as a way of using existing large-scale service implementation programmes to embed sequential randomised trials that test different ways of delivering implementation interventions in head-to-head comparisons at scale. By being embedded in the health system, implementation laboratories may help to clarify the roles of different stakeholders, create buy-in and allow service providers to develop priorities sustainably on the basis of clinical need. This would create a synergistic relationship between clinicians and researchers: the intervention would be developed jointly by the two groups, as would the interpretation of the results, while clinicians would deliver the intervention and collect data as part of their routine processes. Because the research relies on existing structures, its actual cost would be marginal.
Other benefits for a health system may include its development as a learning organisation, demonstrable improvements in its quality improvement activities and linkages to academic experts. Implementation science may also benefit from the ability to test important (but potentially subtle) variations in the intervention that may be important effect modifiers. The development of multiple implementation laboratories addressing the same intervention would additionally provide the opportunity to establish a metalaboratory (i.e. a cross-laboratory steering group involving health system participants and internationally leading implementation researchers) to maximise learning (including planned replications and prospective meta-analysis). Examples of existing implementation laboratories include the AFFINITIE programme with NHS Blood and Transplant in the UK14 and the Ontario Health Implementation Laboratory in Canada.
The role of theory could inform which variations of the intervention should be tested, for example whether or not embedding action/coping plans leads to more effective A&F interventions (based on control theory), or whether or not A&F delivered by credible sources is more effective (drawing on social influence theory). However, variations such as mode of delivery, perceived credibility of source or presenter, frequency of feedback, comparator or level of aggregation would result in a significant number of head-to-head combinations of trials. Follow-up discussions questioned the feasibility of creating intervention laboratories for complicated or complex interventions and noted how theory could be used to identify the necessary elements for intervention design, rather than just guiding which combinations of trials are needed in the first place.15
Using theory in implementation science
As is strongly argued in other essays in this volume (particularly Essays 6 and 7), theory plays a pivotal role in evaluative methods. Theory-guided approaches to implementation research (rather than purely for intervention design) were considered important for generalisability. It was suggested that using conceptually transferable mechanisms to explain interventions and how these might work differently, in different contexts and across time, can improve transferability, adaptation and scale-up. Theory was deemed to offer a common language to explore and identify influences on practice, enhancing the transparency of intervention development and description.
Which theories?
Phenomena studied by implementation science were deemed multifaceted and dynamic in a way that excludes universal explanations. Combining specific theories to understand different aspects of a problem was suggested as a potentially more productive approach (‘horses for courses’).
There is a range of types of knowledge and theoretical perspectives (e.g. middle range, inductive vs. deductive, theory vs. frameworks), a number of which are used in implementation research; a good summary is provided by Nilsen.4 Examples mentioned included the Theoretical Domains Framework, used prospectively in the Translation Research in a Dental Setting (TRiaDS) programme16 and in ASPIRE (http://medhealth.leeds.ac.uk/info/650/aspire/132/what_is_the_aspire_programme). Some suggested that increased learning could be gained from theories and frameworks such as diffusion of innovations, Normalisation Process Theory,17 the Promoting Action on Research Implementation in Health Services (PARIHS) framework,18 the Knowledge to Action framework19 and the Consolidated Framework for Implementation Research.20
Others distinguished between levels of analysis, proposing health and social psychology theories at a more micro level and health services research theories at an organisational level. The usefulness of realist methods and context–mechanism–outcome configurations was also debated. Participants conceded that research should be both problem-driven and theory-driven, and identified a need to unpack the tensions between the two. They also recognised that theory use can be driven by fads, which vary over time. Discussions concluded that the call to use more theory should be accompanied by using it in better ways, not just following the theory path for its own sake.
The need to develop tools for tracking theoretical fidelity and the integrity of implementation interventions was highlighted. If we accept that interventions change through their enactment with context, then we need to find ways to ‘track implementation interventions’. A number of existing frameworks can be used, including the conceptual framework for implementation fidelity;21–24 ongoing work to expand this framework was described, including a learning loop that captures adaptation processes as they evolve over time.
Engagement with context and research users
Several features of the context of implementation were discussed at the meeting, along with the role of and potential for engagement with research users.
Studying context
Increasing focus on theory-guided interventions in implementation science led to consideration of the role and influence of context in more depth. One of the presentations outlined different ways of paying attention to context, including natural experiments, such as the CLAHRCs and other relevant cases in the literature.25 It was also explained how realist methods can elucidate ‘what works for whom, how and in what context’, as part of implementing interventions which in themselves change as a result of being embedded in particular contexts (capturing generative causation). Examples of realist evaluations have been discussed in the literature,26 along with debates on how to do realist evaluation within an implementation context, how to identify specific mechanisms and how to demonstrate their interaction with context. The concept of realist trials was also introduced, accompanied by a note on the epistemological and ontological contradictions underlying this approach. This points to relevant debates in the literature that discuss whether randomised controlled trials based on positivist ontological and epistemological assumptions can be synergistically reconciled with a realist approach to context and complexity that draws attention to non-linearity, emergence, adaptation, path-dependence and human agency.27–29
One view articulated context as consisting of specific elements that need to be taken into account in each setting, such as environmental characteristics that differ between, for example, an emergency department and an inpatient ward. Identifying the six most common elements of context, or those most likely to have an impact, could improve the ability to scale up interventions and transfer them between contexts.
User engagement
The importance of user engagement was consistently mentioned, but it was also acknowledged as one of the most challenging aspects of intervention design and implementation. While there was consensus on the importance of user involvement in creating appropriately designed interventions, this did not always mean that such interventions would be more ‘implementable’. Participants recognised that there is no set recipe for engagement; it needs to be tailored to the problem at hand to ensure relevance. For example, in some cases the end-user to be engaged might be the patient; in others it might be the hospital Chief Executive Officer. Many conceded that the problem should be owned by the clinical community, with researchers responsible for helping clinicians think through the situation, apply appropriate theoretical lenses and contribute external knowledge. However, the discussion appeared to rest on an unarticulated assumption that ‘researchers’ are not already clinicians themselves. Difficulties identified included involving the right people (e.g. establishing which constituencies individuals represent), especially those likely to encounter the most barriers during implementation, at different stages of the process (design, conduct and evaluation). Participants also recognised the importance of acknowledging the politics of user engagement, in terms of vested interests on the side of practitioners and academics that need to be negotiated and resolved.
Scaling up
The challenge of scaling up interventions was also mentioned, especially regarding how scale-up could be incorporated in evaluation methods and approaches to implementation science in order to improve the chances of successful spread. The role of theory building was again particularly highlighted in supporting implementation across different settings, thus enabling transferability of the intervention. Another aspect of this discussion was the tension between flexibility and standardisation of interventions: how to scale up and at the same time allow adaptation to local structures and increased ownership.
Experiences were described in which a lack of understanding about the prerequisites for implementation and scale-up had meant that interventions had to be deconstructed retrospectively to understand their mechanisms of action. This was found to be a particular limitation when funders decided to spread innovations without a clear understanding of whether or not they were ‘implementable’ before scale-up.
The role of researchers
Having identified challenges around providing timely and good enough evidence, discussion at the London meeting turned to the role and responsibilities of researchers as implementation scientists. Participants debated whether or not implementation scientists could act as mediators or brokers responsible for translating research in different settings, ensuring relevance between existing studies and the clinical world. It was also suggested that pressure from policy-makers to deliver change in ways that may seem unrealistic could be countered by sharing expertise on what has already been found to be ineffective or should be avoided.
Participants recognised, however, that career incentives in many areas of academia do not always overlap with health service priorities, and dissemination in academic journals was deemed to have little impact on policy-making. Some also mentioned that they had encountered difficulties aligning implementation research studies with career paths driven by the performance measures of traditional clinical academic disciplines. Overall, there was consensus that researchers working in the area have to reconcile career incentives and structures that may often seem conflicting (high-impact, high-quality research versus direct benefit to patients in practice).
Implications
This summary indicates the need for appropriate communication of the aims and objectives of implementation science to a wider audience. As the field is constituted through the contribution of different ontological and disciplinary perspectives, consensus on the boundaries of the research agenda for implementation scientists might be useful, with some agreement on terminology and conceptual foundations, such as the meaning of complex interventions.
Consistent messages were identified throughout the session at the London meeting about the need for a clear understanding of intervention design and development based on engaging end users and adequately considering contextual influences. Theory-driven, pragmatic evaluation designs were suggested as suitable for producing evidence of intervention effects, along with consideration of the potential of implementation laboratories. A balance was sought between evaluations that allow emergence through timely feedback and more rigidly controlled studies in which the ‘intervention’ itself does not change (adaptive vs. fixed designs). Persisting limitations in the current application of theory remain a challenge to developing transferable concepts, or to building what was described as a ‘cumulative science’.
Practical recommendations that can be meaningfully adopted across research and implementation communities would support the future development of the field in implementing evidence-based practice to improve patient care and outcomes in health-care settings. Implementation scientists operating at the intersection of academic enquiry and practical application may need to be better equipped and supported to respond to what may often be viewed as conflicting priorities.
Acknowledgements
We would like to thank the session co-chairs for facilitating the breakout discussions (Sue Mawson, Julie Reed and Paul Wilson) and the note-takers for providing us with material for this essay (Elizabeth Gibbons and Simon Turner).
Contributions of authors
Chrysanthi Papoutsi (Health Services Researcher) summarised the presentations and discussions held during the session on ‘Interventions to disseminate and implement evidence-based practice’, and drafted and revised this essay.
Ruth Boaden (Professor, Implementation Science) contributed to the writing of the essay, provided comments and suggested additional references.
Robbie Foy (Professor, Primary Care), Jeremy Grimshaw (Professor, Implementation Science and Health Services Research) and Jo Rycroft-Malone (Professor, Implementation Science and Health Services Research) reviewed the essay and provided comments.
References
1. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ 2008;337:a1655.
2. Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ 2015;350:h1258. doi: 10.1136/bmj.h1258.
3. van Achterberg T. Introduction to Section 4: Implementation of Complex Interventions. In Richards DA, Hallberg IR, editors. Complex Interventions in Health: An Overview of Research Methods. London: Routledge; 2015. pp. 261–4.
4. Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci 2015;10:53. doi: 10.1186/s13012-015-0242-0.
5. Star SL, Griesemer JR. Institutional ecology, ‘translations’ and boundary objects: amateurs and professionals in Berkeley’s Museum of Vertebrate Zoology, 1907–39. Soc Stud Sci 1989;19:387–420. doi: 10.1177/030631289019003001.
6. Best A, Holmes B. Systems thinking, knowledge and action: towards better models and methods. Evid Policy 2010;6:145–59. doi: 10.1332/174426410X502284.
7. French SD, Green SE, O’Connor DA, McKenzie JE, Francis JJ, Michie S, et al. Developing theory-informed behaviour change interventions to implement evidence into practice: a systematic approach using the Theoretical Domains Framework. Implement Sci 2012;7:38. doi: 10.1186/1748-5908-7-38.
8. Eccles M, Grimshaw J, Walker A, Johnston M, Pitts N. Changing the behavior of healthcare professionals: the use of theory in promoting the uptake of research findings. J Clin Epidemiol 2005;58:107–12. doi: 10.1016/j.jclinepi.2004.09.002.
9. Murray E, Treweek S, Pope C, MacFarlane A, Ballini L, Dowrick C, et al. Normalisation process theory: a framework for developing, evaluating and implementing complex interventions. BMC Med 2010;8:63. doi: 10.1186/1741-7015-8-63.
10. Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, French SD, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev 2012;6:CD000259. doi: 10.1002/14651858.cd000259.pub3.
11. Ivers NM, Sales A, Colquhoun H, Michie S, Foy R, Francis JJ, et al. No more ‘business as usual’ with audit and feedback interventions: towards an agenda for a reinvigorated intervention. Implement Sci 2014;9:14. doi: 10.1186/1748-5908-9-14.
12. Ivers NM, Grimshaw JM, Jamtvedt G, Flottorp S, O’Brien MA, French SD, et al. Growing literature, stagnant science? Systematic review, meta-regression and cumulative analysis of audit and feedback interventions in health care. J Gen Int Med 2014;29:1534–41. doi: 10.1007/s11606-014-2913-y.
13. Loudon K, Treweek S, Sullivan F, Donnan P, Thorpe KE, Zwarenstein M. The PRECIS-2 tool: designing trials that are fit for purpose. BMJ 2015;350:h2147. doi: 10.1136/bmj.h2147.
14. Gould NJ, Lorencatto F, Stanworth SJ, Michie S, Prior ME, Glidewell L, et al. Application of theory to enhance audit and feedback interventions to increase the uptake of evidence-based transfusion practice: an intervention development protocol. Implement Sci 2014;9:92. doi: 10.1186/s13012-014-0092-1.
15. ICEBeRG (Improved Clinical Effectiveness through Behavioural Research Group). Designing theoretically-informed implementation interventions. Implement Sci 2006;1:4. doi: 10.1186/1748-5908-1-4.
16. Clarkson JE, Ramsay CR, Eccles MP, Eldridge S, Grimshaw JM, Johnston M, et al. The translation research in a dental setting (TRiaDS) programme protocol. Implement Sci 2010;5:57. doi: 10.1186/1748-5908-5-57.
17. May CR, Mair F, Finch T, MacFarlane A, Dowrick C, Treweek S, et al. Development of a theory of implementation and integration: Normalization Process Theory. Implement Sci 2009;4:1–9. doi: 10.1186/1748-5908-4-29.
18. Rycroft-Malone J. The PARIHS Framework – a framework for guiding the implementation of evidence-based practice. J Nurs Care Qual 2004;19:297–304. doi: 10.1097/00001786-200410000-00002.
19. Wilson KM, Brady TJ, Lesesne C, on behalf of the NCCDPHP Workgroup on Translation. An organizing framework for translation in public health: the knowledge to action framework. Prev Chronic Dis 2011;8:A46.
20. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci 2009;4:50. doi: 10.1186/1748-5908-4-50.
21. Masterson-Algar P, Burton CR, Rycroft-Malone J, Sackley CM, Walker MF. Towards a programme theory for fidelity in the evaluation of complex interventions. J Eval Clin Pract 2014;20:445–52. doi: 10.1111/jep.12174.
22. Carroll C, Patterson M, Wood S, Booth A, Rick J, Balain S. A conceptual framework for implementation fidelity. Implement Sci 2007;2:40. doi: 10.1186/1748-5908-2-40.
23. Hasson H. Systematic evaluation of implementation fidelity of complex interventions in health and social care. Implement Sci 2010;5:67–75. doi: 10.1186/1748-5908-5-67.
24. Hasson H, Blomberg S, Dunér A. Fidelity and moderating factors in complex interventions: a case study of a continuum of care program for frail elderly people in health and social care. Implement Sci 2012;7:1–11. doi: 10.1186/1748-5908-7-23.
25. Scarbrough H, D’Andreta D, Evans S, Marabelli M, Newell S, Powell J, et al. Networked innovation in the health sector: comparative qualitative study of the role of Collaborations for Leadership in Applied Health Research and Care in translating research into practice. Health Serv Del Res 2014;2(13). doi: 10.3310/hsdr02130.
26. Salter KL, Kothari A. Using realist evaluation to open the black box of knowledge translation: a state-of-the-art review. Implement Sci 2014;9:115. doi: 10.1186/s13012-014-0115-y.
27. Bonell C, Fletcher A, Morton M, Lorenc T, Moore L. Realist randomised controlled trials: a new approach to evaluating complex public health interventions. Soc Sci Med 2012;75:2299–306. doi: 10.1016/j.socscimed.2012.08.032.
28. Bonell C, Fletcher A, Morton M, Lorenc T, Moore L. Methods don’t make assumptions, researchers do: a response to Marchal et al. Soc Sci Med 2013;94:81–2. doi: 10.1016/j.socscimed.2013.06.026.
29. Marchal B, Westhorp G, Wong G, Van Belle S, Greenhalgh T, Kegels G, et al. Realist RCTs of complex interventions – an oxymoron. Soc Sci Med 2013;94:124–8. doi: 10.1016/j.socscimed.2013.06.025.
List of abbreviations
A&F: audit and feedback
ASPIRE: Action to Support Practices Implementing Research Evidence
CLAHRC: Collaboration for Leadership in Applied Health Research and Care

Declared competing interests of authors: Ruth Boaden is a member of the Researcher-Led Commissioning Board for the National Institute for Health Research (NIHR). Jeremy Grimshaw reports grants from Canadian Institutes of Health Research and held a Canada Research Chair, during the conduct of the work reported here. Jo Rycroft-Malone is the current Director of the NIHR Health Services and Delivery Research programme.

This essay should be referenced as follows: Papoutsi C, Boaden R, Foy R, Grimshaw J, Rycroft-Malone J. Challenges for implementation science. In Raine R, Fitzpatrick R, Barratt H, Bevan G, Black N, Boaden R, et al. Challenges, solutions and future directions in the evaluation of service innovations in health care and public health. Health Serv Deliv Res 2016;4(16). pp. 121–32.