Challenges, solutions and future directions in the evaluation of service innovations in health care and public health.
Abstract
The scale and complexity of major system change in health care (typically involving multiple change processes, organisations and stakeholders) present particular conceptual and methodological challenges for evaluation by researchers. This essay summarises some current approaches to evaluating major system change from the field of management and organisational research, and discusses conceptual and methodological questions for further developing the field. It argues that multilevel conceptual frameworks and mixed-methods approaches are required to capture the complexity and the heterogeneity of the mechanisms, processes and outcomes of major system change. Future evaluation designs should aim to represent key components of major system change – the context, processes and practices, and outcomes – by looking for ways that quantitative and qualitative methods can enrich one another. Related challenges in ensuring that findings from evaluating major system change are used by decision-makers to inform policy and practice are also discussed.
Scientific summary
The scale and complexity of major system change in health care (typically involving multiple change processes, organisations and stakeholders) present particular conceptual and methodological challenges for its evaluation by researchers. This essay summarises some current approaches to evaluating major system change from the field of management and organisational research, and discusses conceptual and methodological questions for further developing the field.
Major system change can be seen as a complex intervention; measuring effectiveness is challenging, as contextual factors and processes play a key and often dominant role. In evaluation design, the potential impact of an intervention, and unintended effects, should be captured at different levels of change. Furthermore, rather than seeking to assess ‘effectiveness’, we advocate exploration of the broader concept of the ‘value’ of an intervention.
Developing a theoretical framework based on a synthesis of existing theories of organisational change could aid both the design and the evaluation of major system changes. This would help to advance the field through the accumulation of knowledge across projects. However, critical thinking should be employed to ensure that frequently used concepts, including ‘leadership’ and ‘culture’, are unpicked and not just taken for granted.
Multilevel conceptual frameworks and mixed-methods approaches are needed to attempt to capture the complexity and the heterogeneity of the mechanisms, processes and outcomes of major system change.
Future evaluation designs should aim to represent key components of major system change – the context, processes and practices, and outcomes – by looking for ways that quantitative and qualitative methods can enrich one another, for example by combining ‘what works at what cost’ with analysis of ‘how’ and ‘why’ change takes place. To influence decision-making concerning policy and practice, researchers should work closely with decision-makers on evaluation design and tailor their findings to different stakeholders, although more focus is needed on overcoming political and structural challenges to collaboration.
Introduction
The complexity and scale of major system change in health care (typically involving multiple change processes, organisations and stakeholders) present particular conceptual and methodological challenges for its evaluation by researchers. Drawing on a realist review of literature in this field,1 we take major system change in health care to involve ‘interventions aimed at coordinated, system-wide change affecting multiple organisations and care providers, with the goal of significant improvements in the efficiency of healthcare delivery, the quality of patient care, and population-level patient outcomes’. The authors of the review used the term ‘large-system transformation’ rather than major system change,1 but we assume here that the two terms are synonymous. The aim of this essay is to summarise some current approaches to evaluating major system change from the field of management and organisational research, and to discuss conceptual and methodological questions for further developing the field. Related challenges in ensuring that findings from evaluating major system change are used by decision-makers to inform policy and practice are also discussed.
Four talks were given during the plenary on major system change at the London meeting in 2015 described in the Introduction to this volume. Jean-Louis Denis made the case for using process- and practice-based research to evaluate complex interventions in context, in order to maintain the balance with investment in approaches from the clinical sciences that tend to focus on outcome-driven research. Ruth McDonald discussed the use of theory in evaluating major change projects in health systems, suggesting that often taken-for-granted concepts (such as ‘leadership’ and ‘culture’) need to be contested and re-evaluated through ongoing critical theory development and in dialogue with empirical research. Naomi J Fulop and Simon Turner described challenges in evaluating major system change at scale using social science theory and methods. They argued that multilevel approaches and mixed methods are necessary for representing the complexity of major system change, but practical challenges for researchers lie in grappling with the scale, significant time and politics often associated with major change programmes. Brian Mittman gave a number of reasons for viewing major system change as a form of complex intervention and outlined the challenges this presents for evaluation, notably that neither interventions nor settings are fixed and their main effects are often weak relative to contextual factors. All four presentations underlined the importance of seeing major system change as a complex intervention that is situated, involves multiple processes of change and operates at multiple levels. There was consensus that multilevel analytical frameworks and mixed-methods approaches are needed to attempt to capture the complexity and the heterogeneity of mechanisms, processes and outcomes of major system change in future evaluations.
Stimulated by the talks and the roundtable discussions that followed at the London meeting, we identified five key themes relating to the evaluation of major system change that we discuss in this essay: (1) type of change and complexity; (2) defining and measuring effectiveness; (3) the role and use of theory in evaluation; (4) the contribution of mixed methods to evaluation; and (5) the use of knowledge from evaluations of major system change to inform policy and practice. Following discussion of each theme, the conclusion summarises the implications, both practical and methodological, for evaluating major system change.
It is important to note that this essay presents a partial view on evaluating major system change, influenced by the particular emphasis of the talks and discussions at the roundtable event, rather than an inclusive, systematic review of the different methodological approaches to evaluating major system change.
Theme 1: type of change and complexity
This theme focuses on the distinctive characteristics of major system change and describes the conceptual and methodological challenges that its complexity and scale present for evaluation.
Characteristics of major system change
Major system change has a number of characteristics that distinguish it from change at a smaller scale, for example change that involves a single organisation or health-care delivery site. Returning to Best et al.’s1 definition, three key characteristics of major system change can be identified. First, major system change often involves the participation of multiple stakeholders, from both within and outside the health-care service. Second, the changes desired are system-wide, meaning that the aim is to produce a collective impact on outcomes across different, often heterogeneous, health-care organisations within a system. Third, major system change involves co-ordinated change over a larger canvas, with mechanisms needed to engage and align stakeholders during the planning and implementation of change, such as leadership, resources or a political mandate from government. Additionally, from an evaluation perspective, major system change can be understood as a complex intervention. It has multiple (sometimes conflicting) goals; it involves change processes at multiple levels; and it takes place within and across heterogeneous settings and often over a significant period of time.
The scale and complexity of major system change create challenges for its evaluation. It has many of the characteristics of complex interventions. This contrasts with simple interventions, which are more likely to have a single fixed component, a stable process and a distinct goal, and to be applied in relatively homogeneous settings, although this distinction is also contested. Major system change involves interventions that change over time, for example they may be adapted on the basis of formative feedback from evaluation; the settings in which they are introduced are not fixed and can be modified; the main effects of the intervention are often weak and contextual factors often dominate; and all three aspects (interventions, settings and effects) can vary over time and space. As we discuss below, evaluation approaches need to be equipped to track this complex set of interactions over time.
Scale and complexity: conceptual implications for evaluations
The distinctive characteristics of major system change when compared with change at a smaller scale, including its relative complexity, carry a range of conceptual and methodological implications for its evaluation by researchers. The first conceptual implication stems from the understanding of major system change as a multilevel process,2 that is, one involving change processes at the macro level (political, economic and societal context), meso level (organisational) and micro level (sociopsychological behaviour of individuals and groups). When compared with smaller-scale change, major system change is likely to involve significant interaction (and potential tensions) between these different levels. For instance, at the macro level, the external political environment may directly influence change processes, rather than being a mere ‘backdrop’ to change. For example, a key factor in the implementation of the Scottish Patient Safety Programme was the early involvement of politicians and policy-makers who helped to assemble a national infrastructure to support delivery of the programme.3 In relation to the reconfiguration of stroke services in major metropolitan areas of England, the ‘top-down’ implementation of change in London, underpinned by political authority and financial and performance management levers, enabled services to be fully centralised, while a less radical transformation of services took place in Greater Manchester where a more ‘bottom-up’ (network-based) approach was used.4
The second conceptual implication is that, given the need for behaviour to be co-ordinated across a health-care system to achieve major system change, collective organisational structures with political authority, along with the agency of individuals, are likely to be important in the implementation of change. One potential barrier to major system change is the presence of multiple stakeholders’ interests associated with the different types of organisations involved. Health-care systems are ‘pluralistic settings’ in which perceived costs and benefits of change may differ by stakeholder group.5 Patients, their families and the public are also key stakeholders in the development of major system change programmes.1 Some evidence suggests that more ‘bottom-up’ approaches to change may not be appropriate at larger scales of change where multiple stakeholders, with potentially divergent interests, may impede implementation.6 One implication of this is that complex interventions, and consequently complex change processes, should be thought of as a mix of bottom-up activity with top-down guidance.
A third conceptual implication is that, given the scale of change involved in major system change, collective social processes that transcend organisational boundaries and energise change may play an important role in achieving major system change, such as social movements7 and collaborative communities of medical professionals.8 For example, transformation of Denver’s health system was aided by political support, including the ‘symbolic’ role of prominent citizens.9 The nature of these social processes, and the methods needed to identify and evaluate them, will differ from those associated with the study of face-to-face interactions that are often assumed to influence improvement, for example within clinical micro systems.10
Scale and complexity: methodological implications for evaluations
The study of change at a large scale also raises methodological challenges. First, there is the problem of representing change processes across a wide range of settings (e.g. heterogeneous provider and purchasing organisations) and over significant periods of time. Ethnographic case studies, which involve ‘thick description’11 of everyday practices through sustained observation within different sociocultural contexts, appear well suited to generating a detailed understanding of how change processes unfold within a small number of settings over time, but may be more limited in representing the breadth of change processes taking place over a large scale. One way forward is to combine ethnography with methods at other levels, for example wider stakeholder interviews or documentary analysis to capture the macro system context.12 Context and qualitative methods in health services research are further considered in Essay 7 in this volume.
Second, major system change programmes may have multiple, often conflicting, goals and may involve multiple components, some of which may be ill-defined by programme leaders, not visible to those carrying out the evaluation, transient in nature or not applied equally across different sites. One way forward is for practitioners and researchers to work closely together in order to build up an understanding of both the intervention’s goals and components and its appropriate evaluation. In England, collaboration is being enabled through organisational partnerships such as the National Institute for Health Research Collaborations for Leadership in Applied Health Research and Care (CLAHRCs). However, close collaboration via CLAHRCs has thrown up ‘political’ challenges, including tensions at times between ‘service-centred’ and ‘research-centred’ models of knowledge production.13
Theme 2: defining and measuring effectiveness
This theme focuses on the ways in which the effectiveness of major system change might be defined and measured in evaluations. According to the Oxford English Dictionary, effectiveness is the degree to which something is successful in producing a desired result. There was much debate about the feasibility of attributing success to ‘something’ (e.g. an intervention), the measurement of success and identifying outcomes. It was suggested that the characteristics often associated with major system change – its complexity, heterogeneity and instability – render attempts to evaluate its effectiveness challenging.
Influence of context on an intervention’s effectiveness
One of the key challenges in evaluating a complex intervention is the observation that outcomes are often only partially related to the intervention itself; contextual factors/processes play a key and often dominant role. Consequently, it is often impossible to estimate the inherent effectiveness of an intervention deployed as part of a major system change owing to the difficulty of separating the intervention from the context in which it is applied. Even among those who consider that it is possible to decouple the intervention and context, current thinking suggests that it is vital to take into account how contextual factors influence implementation.14 Even a relatively simple intervention, based on a single fixed component, might have a range of effects depending on the analytical level that is being studied and the co-occurring factors and influences on outcomes. For example, a doctor prescribing an oral antibiotic medicine to a patient might be regarded as a ‘simple’ intervention, and yet its effectiveness may vary among different people owing to biological (e.g. absorption rate and amount), psychological (motivation), interpersonal (doctor and patient relationship) and wider sociocultural factors.15 Responses to an intervention or innovation that aims to produce major system change can differ owing to the influence of contextual variables at the system (macro), organisational (meso) and clinical (micro) levels.16 Given this potential for change at multiple levels, greater recognition of, and appropriate methods to capture, the impact of an intervention are required, as well as attention to its unintended consequences, which may be larger and occur at more levels than programme designers anticipated.
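Where suitable quantitative data exist, one way to reflect this multilevel structure is a mixed-effects (multilevel) model that partitions variation in outcomes between patients, organisations and wider systems before attributing any remaining difference to the intervention. The sketch below is illustrative only: the column names, file name and use of Python with statsmodels are assumptions for the purpose of the example, not a description of any of the evaluations cited here.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per patient episode, with
#   outcome  - continuous outcome measure (e.g. a quality-of-care score)
#   exposed  - 1 if the episode occurred after the intervention, else 0
#   site     - provider organisation identifier (meso level)
#   region   - wider health system identifier (macro level)
df = pd.read_csv("episodes.csv")

# Random effects for regions, with sites as a variance component within them,
# attribute part of the variation in outcomes to organisational and system
# context rather than to the intervention alone.
model = smf.mixedlm(
    "outcome ~ exposed",
    data=df,
    groups="region",
    vc_formula={"site": "0 + C(site)"},
)
result = model.fit()
print(result.summary())
```

The size of the estimated variance components relative to the fixed effect of exposure gives a rough indication of how much of the observed variation sits with context rather than with the intervention itself.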
Identification of outcomes
The identification of appropriate outcomes or results from major system change was regarded as a key challenge. It was suggested that single measures of effectiveness (e.g. whether or not an intervention improved a clinical outcome), while often appealing to policy-makers and other stakeholders, were unduly narrow and neglected other potential benefits, and unintended consequences, of a given intervention. One suggestion was that a wider concept of ‘value’ needed to be developed, one that goes beyond measuring effectiveness in binary terms (i.e. whether it ‘works’ or ‘does not work’) to capture a broader array of potential benefits and limitations of an intervention. This might include the impact on patient experience, effects on people within organisations and increased understanding of how to manage change. In judging which outcomes to include, it is important to consider the audiences for whom the evaluation is being produced. For instance, a taxpayer in a publicly funded health service such as the NHS may also be interested in ‘hard’ measures of effectiveness, including cost. In addition to effectiveness and cost, health service managers and researchers are also likely to be interested in what else can be learnt from an intervention programme, for example barriers and enablers to the approach to implementation adopted, and then use this information to adapt the current approach or inform the planning of future programmes.
There was consensus in our discussions that multiple outcomes that meet the needs of different audiences should be included in evaluation design and that further work was needed to articulate and agree how wider ‘value’ beyond effectiveness should be measured. Measuring both effectiveness and value appropriately in different contexts should be informed by greater dialogue between service leaders, researchers and policy-makers. At the service level, researchers can contribute to this dialogue by engaging with how programme designers define ‘effectiveness’ and whether or not they have a programme theory which might lead them to expect certain impacts.17 Information on outcomes can also be valuable in acting as a trigger to induce changes in the processes and practices deployed by actors and organisations. At the wider policy level, researchers can help to broaden the definition of outcomes or impact beyond effectiveness. For example, an analysis of health-care reform in the UK, Canada and the Netherlands highlights changes in expectations among various publics, in the instruments used to regulate or transform the systems and in the relations of power among concerned publics and key stakeholders.18
Measuring outcomes and improving practice
It was suggested that measures of the overall effectiveness of a major change programme should be qualified (e.g. that such outcomes measure results on average) and complemented with finer-grained analyses of experiences of change in a variety of places, among specific stakeholder groups and over different periods of time. Where results deviate from the average, this variation can be used as a source of insight through qualitative research into which contexts and components of an intervention are most effective and why barriers to implementation may emerge in some contexts and not others. However, recognising the interplay between intervention and context raises methodological challenges with measuring the influence of the latter on perceived success, as well as the ways in which the intervention influences the context. For instance, uptake of the Canadian ‘Heart Health Kit’ (a patient-education resource for preventing cardiovascular disease) among physicians in Alberta was influenced by attributes of the innovation itself as well as contextual and situational factors (e.g. local collegial interaction among potential users).19
The embedding of an intervention in a particular context limits the generalisability of the findings to diverse settings. However, if the mechanisms of action associated with an intervention are well understood, this evidence may be transferable to other settings as lessons concerning the mechanisms of action that underpin a given intervention’s effectiveness.20 Here, theories have a role in strengthening the external validity of evaluative research on complex interventions. For instance, understanding the interactions between context and interventions implies developing robust theories to examine the relationship between the two.21 Insights into the effectiveness of an intervention in contexts with particular characteristics could inform future planning: that is, whether efforts should be focused on adapting the intervention itself or the underlying context in which it is situated in order to enable improvement.
From the perspective of health-care professionals, relevant questions to improve practice might include understanding how a programme can be adapted and customised in order to increase effectiveness, and how to modify or manage the organisation or setting in order to increase effectiveness. In relation to these questions, it was suggested that the purpose of research is to contribute to the evaluation of a programme’s effectiveness by explaining ‘how, why, when and where does it work?’ and by addressing the question of ‘how can I make it work better?’. With regard to how the programme operates and why it produces its effects in different contexts, it is important that evaluations take account of potential changes over time in the answers to these ‘how’ and ‘why’ questions. Change can be interpreted as either episodic (i.e. radical or exceptional) or continuous (i.e. as an ongoing process of becoming).22,23 To reflect this, evaluation should seek to analyse how and why an intervention’s effects are produced both initially and continuously over time.
Theme 3: the role and use of theory in the evaluation of major system change
This theme outlines the role and value of using theories to inform the evaluation of major system changes. The explicit application of formal theory is needed for both the design and evaluation of major system changes to understand the conditions of context that affect success, to enhance the transferability of learning from changes introduced in one context to another, and therefore to aid the accumulation of knowledge across projects.17 However, the importance of theory in designing, implementing and evaluating interventions arguably remains under-recognised. People theorise informally in their day-to-day activities, yet many are alienated by the idea of formally applying theory.17
Selecting and applying theories
Different people make sense of phenomena in different ways. A health services researcher will not necessarily view and interpret the mechanisms involved in change and the interaction with the surrounding context in the same way as an anthropologist, geographer, organisational scientist or health policy-maker. Within our occupational silos, we become accustomed to adopting certain familiar theories. We must be aware that adopting a certain theory results in seeing the world through a particular lens: as the adage goes, ‘when all you have is a hammer, everything looks like a nail’. Our understanding of major system change is shaped by the theories that we use to describe change processes, as individual theories may highlight different parts of the process or encourage similar processes to be interpreted in different ways. Thus, the theoretical standpoint adopted influences the design, conduct and analysis of an evaluation. It is therefore important for evaluators to recognise the importance of selecting appropriate theory and the influence that theory selection has on the interpretation of findings. For these reasons, rather than identifying an individual theory to frame a major system change, implementers and evaluators should contemplate adopting multiple relevant theories and ensure that the team includes an appropriate mix of theorists. Having different theoreticians involved increases the number and breadth of questions asked and may help to paint a more diverse and vivid picture of the context in which a major change is delivered.
Deriving insight from theories at different scales
Theories can be broadly categorised as one of three types according to the scale at which they are applied: grand theory, mid-range theory (big theory) and programme theory (small theory). Grand theories are ‘formulated at high levels of abstraction’ and ‘make generalisations that apply across many different domains’.17 For example, having studied health-care system reforms in the USA, Canada and the UK in the 1990s, Tuohy24 argues that different patterns of change resulted from the particular logic of each of the systems and that reform was influenced by the distribution of power between different institutions (governments, markets and medical profession) and mechanisms of social control. Mid-range theories ‘are intermediate between minor working hypotheses and the all-inclusive speculations comprising a master conceptual scheme’.17 Examples of mid-range theories include ‘diffusion of innovations’16 and ‘normalisation process theory’.25 Programme theories specify the way in which an intervention is thought to work. They specify the structures (inputs), processes (actions) and outcomes (results) that are anticipated with the links between these providing the theory of change. The influence of behaviours and contextual factors on these components should also be incorporated.17
Insight can be derived through dialogue between theories at different scales. For instance, a systematic review of factors affecting innovation adoption in health services categorised these factors according to the theoretical level at which they operate: the sociopolitical climate, system readiness and incentives (grand level); social networks, champions and boundary spanners (mid level); and internal communication, feedback and resources (programme level).16 Combining theories at different scales may be particularly important in understanding the multilevel influences on major system change and how factors at different levels influence each other (e.g. by mediating or moderating the implementation of a change). As a consequence, multiple levels of analysis are required to assess the links between theory and primary research findings.
Developing theory
Researchers can be criticised for using theory at a level that is too high and too general, taking concepts ‘off the shelf’ without exploring how applicable these concepts are to the real world. For example, conclusions are often drawn about the importance of ‘leadership’ and ‘culture’ without unpacking what it is about these factors that makes them so influential. It was proposed that we need to continue to build theories about what concepts like ‘leadership’ really mean with regard to major system change. The evaluation of specific programmes, therefore, affords the opportunity to contribute to the refinement or generation of theory at a wider scale.
Using theory in the context of politically charged evaluations
Evaluation of large health system changes places evaluators in a highly political context.26 Policies are usually the product of political decisions and the recommendations of evaluators are intended to inform policy. Both policy-makers and researchers have an agenda whether or not they are consciously aware of this, highlighting the importance of formally applying theory and making this explicit and accessible to stakeholders. An evaluation may assess whether or not the theory adopted by policy-makers was appropriate. For example, an evaluation of the Commissioning for Quality and Innovation Framework, launched by the Department of Health in England in 2009, sought to refine the theory behind the framework by exploring how it was envisaged to work and comparing this with actual practice.27 Policy-makers had based the framework on existing literature and had theorised that financially incentivising clinical teams to set and achieve desired quality targets would be a successful way to encourage behaviour change and thereby drive up quality. In reality, the evaluation team found that the task of selecting quality targets was often undertaken by managers rather than clinical staff, and that staff were concerned that setting quality targets high would put the hospital at financial risk; thus, the focus often shifted from ‘high’ quality targets to ‘achievable’ quality targets.27
Theme 4: the contribution of different methods to evaluation
Major system changes in health care can be evaluated using a wide range of methods. Within this theme we describe some of these methods, distinguishing between quantitative, qualitative and mixed-methods approaches.
Quantitative approaches to evaluating major system change
Quantitative approaches are used to measure the effectiveness of interventions. Methods adopted include randomised controlled trials (RCTs), natural experiments, interrupted time series (ITS) designs, controlled before-and-after studies and uncontrolled before-and-after studies. The merits of alternative approaches are extensively rehearsed in Essays 2 and 3 in this volume.
The appeal of a well-designed and well-conducted RCT is that observed differences in outcomes between intervention and control groups can be attributed to the change introduced in the intervention group. Examples of RCTs carried out to evaluate major system change in health care include ‘The Health Insurance Experiment’, an RCT of the effect of payments for health care on health-care usage and quality of care received, conducted in California between 1974 and 1981;28 ‘The Oregon Health Insurance Experiment’, an ongoing RCT launched in 2008 that is designed to evaluate the impact of the Medicaid insurance system in the USA on health service use, patient outcomes and economic outcomes;29–31 the ‘Whole System Demonstrators’, three large pilots of telehealth launched by the Department of Health in England in 2006 that have been evaluated using a range of methods including an RCT;32 and the ‘Head Start’ programme, a complex intervention designed to improve the ‘school readiness’ of children from low-income backgrounds in the USA that has been evaluated using an RCT.33
However, there are methodological and implementation challenges in conducting RCTs. Ettelt and Mays34 discuss these challenges in relation to health policy-making in England. A particular concern regarding results from RCTs is that estimates of effectiveness may not be directly generalisable to the target population of interest if, for example, subgroups of patients/carers are excluded from the trial.35 The use of observational data is, therefore, advocated to ‘assess and strengthen the generalisability of RCT-based estimates of comparative effectiveness’.35 Furthermore, RCTs are expensive and time-consuming to conduct in comparison with other methods and, in some situations, it is not possible, feasible or ethical to randomise people or organisations to intervention or control conditions.
In 2012 the Medical Research Council published guidance on the use of natural experiments to study population health interventions empirically in situations where randomisation, or indeed any manipulation of exposure to an intervention, is not possible. Natural experiments are therefore defined as ‘events, interventions or policies which are not under the control of researchers, but which are amenable to research which uses the variation in exposure that they generate to analyse their impact’.36
Interrupted time series designs can be adopted in cases where randomisation is not possible but the introduction and rollout of the intervention can be regulated. For example, Yelland et al.37 provide a protocol for an ITS evaluation of a system reform addressing refugee maternal and child health inequalities in Melbourne, Australia. It is preferable for ITS designs to employ concurrent control sites.38
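To illustrate the logic of an ITS analysis, a segmented regression can be fitted to routinely collected data, estimating an immediate change in level and a change in trend after the intervention. The following sketch assumes a single monthly series with hypothetical column names and a hypothetical file name, and it does not handle autocorrelation or concurrent controls, which a full analysis would need to address.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical monthly series: one row per month, with
#   rate  - outcome rate observed that month
#   time  - months elapsed since the start of the series (0, 1, 2, ...)
#   post  - 1 for months after the intervention was introduced, else 0
df = pd.read_csv("monthly_rates.csv")

# Months elapsed since the intervention (0 before it starts).
start = df.loc[df["post"] == 1, "time"].min()
df["time_since"] = (df["time"] - start).clip(lower=0)

# Segmented regression: 'post' estimates the immediate level change and
# 'time_since' estimates the change in slope following the intervention.
its_model = smf.ols("rate ~ time + post + time_since", data=df).fit()
print(its_model.params[["post", "time_since"]])
```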
Although absence of randomisation is justified in certain circumstances, the use of control groups is essential to estimate whether or not the intervention has a statistically significant impact on outcomes compared with an alternative. For example, Benning et al.39 conducted an RCT of a patient safety programme in the UK known as the ‘Safer Patients Initiative’. The trial revealed robust improvements in control sites that were almost as great as the improvements observed in the intervention sites. Had control groups not been in place, it is possible that the improvements in patient safety seen in the intervention group would have been attributed to the intervention and spurious conclusions about effectiveness might have been drawn.
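One common way to use control sites in a before-and-after design is a difference-in-differences analysis, in which the interaction between intervention status and time period estimates the effect over and above the change observed in control sites. The sketch below is a minimal illustration under assumed data: the layout, column names, file name and clustering of standard errors by site are assumptions, not a description of the Benning et al. evaluation.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per site and period, with
#   outcome      - measured outcome for that site in that period
#   intervention - 1 for intervention sites, 0 for control sites
#   post         - 1 for the period after the change, 0 for before
#   site         - site identifier, used to cluster standard errors
df = pd.read_csv("site_periods.csv")

# Difference-in-differences: the interaction term is the estimated effect of
# the intervention relative to the change observed in control sites.
did_model = smf.ols("outcome ~ intervention * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["site"]}
)
print(did_model.params["intervention:post"])
```

Had such an analysis been run without control sites, the 'post' coefficient alone would have absorbed the secular improvement, which is precisely the spurious attribution the paragraph above warns against.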
Pronovost and Jha38 summarise the pitfalls associated with uncontrolled before-and-after studies. Their summary draws on the example of the widely celebrated ‘Partnerships for Patients Program’ in the USA, which was evaluated using a simple before-and-after design without the use of concurrent controls. They contend that the study design adopted, the absence of valid metrics, the lack of peer review and deficient transparency in reporting make it almost impossible to determine whether or not the programme led to better patient care. Furthermore, before-and-after studies assume that there is a clear definition of what constitutes both ‘before’ and ‘after’, although complex interventions are likely to continue to evolve over time and, thus, regular follow-up measurements are recommended.40
The appropriateness of different quantitative methods must be considered on a case-by-case basis. Researchers should strive to use the most robust feasible design, adopting control groups and regular measurement, and therefore avoiding uncontrolled before-and-after studies. Those who are evaluating major system changes should be mindful that adopting quantitative methods in isolation may be insufficient. For example, in their discussions of the methodological considerations needed to evaluate the introduction of electronic health records in the English NHS, Takian et al.40 propose the need for an ‘interpretive approach’ which considers the impact of the national and local context. In addition, Moore et al.41 comment that ‘effect sizes do not provide policy-makers with information on how an intervention might be replicated in their specific context, or whether or not trial outcomes will be reproduced’, thus highlighting the need to consider qualitative and mixed methods.
Qualitative approaches to evaluating major system change
Experimental and quasi-experimental research designs are appropriate for certain types of research questions and objectives, such as assessing effectiveness, but are not well suited to the ‘how’ and ‘why’ questions that modulate the effectiveness of complex interventions in ‘natural’ settings. Qualitative approaches can help to address these questions by highlighting the contextual factors, and mechanisms of action, that contribute to the effectiveness of an intervention programme, including reasons for results varying across different settings. Process evaluations are useful in accessing the ‘black box’ of an intervention, that is, understanding the mechanisms through which it produces its effects, and can be used alone or in tandem with other methods (e.g. RCTs).15 At the more qualitative end of the evaluation spectrum, case studies are a useful approach for analysing processes of major system change. As described by Yin,42 case studies involve ‘in-depth inquiry into a specific and complex phenomenon (the ‘case’), set within its real-world context’. The conduct of case studies involves analysis of the case (‘how things are’ or an intervention), the context (social, political, financial and so forth), and the interaction between the case and the context and the ways in which they influence one another. These methodological issues are further rehearsed in Essay 7.
Rather than describing one method, case studies can be undertaken using a range of methods (including descriptive and theory-driven approaches) and can form a critical component of mixed-methods studies that also aim to explain outcomes.43 The choice of method depends for the most part on the research questions driving the evaluation. For example, a standalone process evaluation, based on descriptive and theory-driven case studies, was used to conduct a retrospective cross-sectional study of nine health-care mergers in London, as the interest was in how and why the organisational context in which mergers took place, including differences in organisational culture among providers, influenced processes of change.44 Suggested reporting standards for organisational case studies related to health care have recently been published.45
Another theory-driven method is realist evaluation, which asks ‘what works for whom and under what circumstances?’ in an attempt to uncover context–intervention–outcome relationships in change programmes.46 Realist evaluation recognises that context is complex and always changing. For example, Greenhalgh et al.47 carried out a realist evaluation of the ‘modernisation’ of stroke, sexual health and kidney services in London. Marchal et al.48 provide a useful overview of studies using realist evaluation in health systems research and propose a need for more methodological guidance on use of the method (e.g. on defining ‘mechanisms’ and ‘context’).
Comparative case study research
Although clinical researchers may more easily replicate their observations by studying the effectiveness of a drug on groups of patients, the evaluation of major system change often involves single case studies, making replication of results and assessments of potential generalisability difficult. It is, however, possible to generate cumulative knowledge about the factors that influence major system change through comparative case studies. This involves adopting a structured and theoretically driven approach to summarise, compare and contrast in-depth information derived from two or more studies of major system change.
Comparative research on different major system change programmes can be carried out using studies that were not originally designed with this purpose in mind. For example, Langley et al.49 retrospectively compared cases of large-scale health-care transformation in Alberta and Quebec, Canada, in order to explore identity struggles associated with the merger of organisations. Data from a wide range of actors involved in and/or experiencing the change are essential to enable comparisons within and across cases and to generate significant theoretical insights. However, if analysis is planned and undertaken retrospectively, in-depth data of this quality may not be readily available and outputs may not be timely enough to inform practice. On the other hand, prospective data collection is resource intensive, requiring considerable time and forward planning between researchers and service leaders, and carries the risk that the investment is wasted if proposed changes are not implemented.
Structured comparisons should be made within and across cases and should examine the personnel, process, context and content features of the interventions. For example, Cloutier et al.50 undertook a comparative case study to generate theories about how organisations in Quebec had implemented health-care reform. The study involved a detailed analysis of practice within and between different cases to examine how health-care managers ‘recreate’ reform through conceptual work and testing ideas. The findings suggest that the way people work simultaneously dilutes and gives shape to reform, and the study offers ‘improved understanding of the importance of managerial agency in enacting reform, and the dynamics that lead to slippage in complex reform contexts’.50
Hypotheses should be used to structure the comparisons between cases. For example, Øvretveit and Klazinga51 conducted a mixed-method systematic comparison of factors affecting implementation success of six large-scale quality improvement programmes in the Netherlands. The researchers assessed whether or not there was evidence from their evaluation to support or refute 17 hypotheses about what might predict successful implementation of improvement programmes. The hypotheses were created by an expert team comprising researchers who had worked on the six improvement programmes following a review of the literature. Systematically describing and comparing the different change programmes and their fit (or lack of) with the proposed hypotheses led to the creation of a list of factors thought to be key to the successful implementation of large-scale change. A comparative case study approach has also been used to test hypotheses regarding factors critical to the success of large-scale quality improvement initiatives in Sweden.52
Challenges of comparative case study research
Øvretveit and Klazinga51 discuss four main challenges of comparative case study research into large-scale change, all of which are key considerations for decision-makers when deliberating whether or not apparently successful interventions are replicable elsewhere. Challenges of description arise when limited information is given to describe a change programme, the surrounding context and developments over time. Challenges of attribution arise when the study designs employed (e.g. process research using case studies) make it difficult to say with certainty to what extent a change programme, rather than some other factor(s), has produced certain outcomes. Challenges of generalisation relate to the extent to which findings from a major system change programme may be generalisable elsewhere. Again, the use of comparative case studies across different programmes can aid assessment of generalisation in different settings. Theories, and research hypotheses that aim to test these, are key to harnessing the cumulative power of doing multiple comparative case studies for the evaluation of complex interventions. Finally, challenges of use concern the utility of research findings for the user. Evaluation designs should strive to be ‘useful’; in some situations this may mean that conducting research which gives less-certain answers about whether or not a complex intervention ‘works’ and more information about the associated processes and context is a better option than conducting a study to answer a single question about effectiveness.
Use of mixed-methods approaches
There is a case for outcomes research, which tends to be more quantitative, and process-based research, which relies more on qualitative methods, to be used in a balanced way, with insights drawn from both approaches and neither dominating. As defined by Langley et al.,53 process studies ‘address questions about how and why things emerge, develop, grow, or terminate over time’, which differs from quantitative studies that tackle ‘variance questions dealing with covariation among dependent and independent variables’. Evaluations involving mixed methods are one way of bringing together quantitative and qualitative analyses of major system change. For example, mixed methods have been used to evaluate reconfiguration of acute stroke services across two large metropolitan areas in England (London and Greater Manchester), combining quantitative analysis of the impact of change on patient outcomes and cost using a controlled before-and-after study with a process evaluation of ‘how’ and ‘why’ different approaches to the planning and implementation of change were adopted in each area.54 Additionally, mixed-methods approaches have been used to evaluate the Advancing Quality pay-for-performance programme in North-West England55 and a large-scale transformational change programme in the North East.56
Theme 5: using knowledge from evaluation to inform policy and practice
This theme focuses on the challenges that arise in ensuring that findings from major system change evaluations are used by decision-makers to inform policy and practice. The creation of knowledge about how major system change programmes can best be delivered and evaluated and the informed assessment of the potential transferability of successful major system change programmes to other settings are central to the impact of management and organisational research in health care.
Accumulation of knowledge: theoretical and methodological considerations
The third theme in this essay demonstrated the importance of employing theory. Theory allows inferences about social and organisational process to be extrapolated beyond a surface description of individual cases, thus playing a vital role in the assessment of potential transferability of findings from a given context to another context and aiding the accumulation of knowledge across studies.
The sections on qualitative and mixed methods within the fourth theme of this essay detail the contribution of process- and practice-based research to evaluate major system change in health care. Case studies are often employed to describe how and why change emerges (and potentially disappears) over time. A particular challenge of case study designs of this type is moving beyond the production of idiosyncratic descriptive accounts of change towards a deeper understanding of the underlying generative mechanisms that interact to create change. It is essential to explore how to gain generalisable insights that have value in terms of transferability beyond the original context in which major system change was deployed in order that others might improve their own systems with increased ease and efficiency. Comparative case study research can be a useful method in this regard, as described in the fourth theme of this essay.
Creating actionable findings
When discussing how to build actionable knowledge in the field of major system change in health care, participants suggested that a potential way forward would be to create a theoretical framework based on a synthesis of existing theories of organisational change. Such a framework would resemble the Theoretical Domains Framework (TDF) which was developed by Michie et al.57–60 to make behaviour change theories accessible for implementation researchers. The TDF contains constructs from 33 different theories that explain behaviour change in individuals. It enables researchers to use theory to target the behaviour change of patients or health-care professionals by providing operational guidance on the development of complex interventions that are designed to reduce the gap between clinical practice and the evidence base.61 The benefits of creating a framework containing constructs from organisational change theories would be twofold: the framework would provide a theoretically informed ‘road map’ for those implementing major system changes and would essentially create a feedback loop, allowing researchers to use findings from one study to inform another.
Communicating findings
The likelihood that findings from major system change evaluations are used by decision-makers to inform policy and practice is linked to the ability of evaluators to communicate their findings. For maximum impact, researchers must tailor the dissemination of their findings to different stakeholders. Furthermore, the dissemination piece must describe and debate issues relating to the implementation of the major system change, rather than solely focusing on headline-grabbing statistics relating to outcomes, in order that decision-makers and other stakeholders can assess the potential for transferability to other settings. However, researchers can, arguably, be poor at communicating, marketing and ‘selling’ their knowledge, and could perhaps learn lessons from the large health-care consultancies in this regard. The way in which academic careers are structured, with a particular focus on peer-reviewed journal articles, can be a disincentive to invest time in dissemination via other channels of communication, including those channels that are likely to be accessed by decision-makers. Further investment in science communication could help to correct this imbalance and therefore drive an increase in evidence-based approaches to major system change.
The role of researchers: collaborators or evaluators?
Decision-makers do not solely rely on research evidence when making decisions about the major changes required to improve health care and how these changes should be implemented. Policy documents, data collected by organisations, journalistic accounts and anecdotal accounts are important sources of information that are likely to influence decisions regarding major system change. Rigorous evaluation that applies research methods is time-consuming and costly relative to data gathering using these alternative forms of evidence. One school of thought suggests that it is, therefore, necessary for researchers to work more dynamically and in closer partnership with other stakeholders in order to inform and strengthen the implementation and evaluation of ongoing change. However, there can be political challenges associated with working in this way, as outlined in this essay. Furthermore, another school of thought advocates that researchers should maintain a critical distance in order to evaluate objectively.
The predominant view expressed at the roundtable event was that researchers should play one of two roles: they should either ‘sit at the table’ with decision-makers and help them to plan the interventions and their implementation (but not conduct the evaluation of this) or independently conduct an evaluation. Thus, it can be proposed that there is a need for separate implementation and evaluation teams in order to maintain transparency of the evaluation methodology. Nevertheless, where introduction of a major system change and evaluation activities take place contemporaneously, researchers should strive to produce evaluation findings in a timely manner in order to guide the ongoing implementation of the change to enhance relevance and impact of the evaluation. Researchers have produced guidance to help decision-makers to decide whether or not and how to introduce change.62 Considerations of implementation are further addressed in Essay 8.
Expectations of researchers versus expectations of decision makers
Researchers are usually commissioned to conduct evaluations of major system changes in health care by decision-makers who are the driving force behind the change. Challenges may arise for researchers when they are asked to evaluate a change that is not readily ‘evaluable’ or when there is no clear consensus between what decision-makers want and what researchers are able to offer. For example, decision-makers may hope that researchers are able to evaluate the effectiveness of a complex major system change at a whole-system level, yet it is not always possible to deliver this. There then begins a negotiation between decision-makers and researchers to determine and agree what is possible and meaningful for both parties.
It is also important to establish a shared understanding of the way in which evaluation findings will be used from the outset. For example, different stakeholders may have different understandings of the term ‘pilot study’.63 In the eyes of the decision-maker, a pilot study may strongly signify the intended direction of travel. The evaluation is intended to shape but not dramatically alter this direction of travel: it is the oar that steers the boat. On the other hand, a researcher may understand the words ‘pilot study’ to mean a test to determine whether or not the intervention should be continued. They may expect an intervention to be terminated if the findings of the pilot study suggest that the change is not effective or is having unintended consequences. In reality, this may not be the case. For example, an urgent care telephone triage service known as ‘NHS 111’ was introduced as a pilot in four geographically defined areas in England in 2010. The aim of the telephone service was to ‘improve access to urgent care, increase efficiency by directing people to the “right place first time” including self-care advice, increase satisfaction with urgent care and the NHS generally, and in the longer term reduce unnecessary calls to the 999 emergency ambulance service and so begin to rectify concerns about the inappropriate use of emergency services’64 [quote reproduced as this is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 3.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/3.0/]. An evaluation of the pilot demonstrated that the introduction of the NHS 111 telephone service actually increased ambulance use in pilot sites in comparison with control sites.64 Despite the negative findings uncovered in the pilot evaluation, the NHS 111 telephone service was subsequently rolled out nationwide.
Ettelt et al.65 provide further reflections on the tensions that can arise as a result of the different perspectives held by researchers and decision-makers.
Conclusion
There was broad consensus from the meeting that major system change is a complex and unstable process, which operates at multiple levels, and is context dependent and thus varies by place and over time. The complexity, heterogeneity and instability of major system change also present challenges for defining and measuring its effectiveness, although a range of approaches is being used, from quantitative measures of outcomes through to broader measures of the ‘value’ of change used in qualitative and process-based studies. The use of different methods and perspectives to study major system change, and their combination in the design of evaluations (e.g. mixed-methods approaches), is important in order to represent different aspects of the complexity of major system change and to evaluate its effectiveness, which should be broadly defined in terms of potential impacts.
Theory allows inferences about social and organisational process to be extrapolated beyond a surface description of individual cases, thus playing a vital role in the assessment of potential transferability of findings from a given context to another context and aiding the accumulation of knowledge across studies. The creation of a theoretical framework based on a synthesis of existing theories of organisational change could aid both the design and the evaluation of major system changes and help to advance the field through accumulation of knowledge across projects. However, there is a tension between, on the one hand, the need to accumulate a stable body of knowledge (e.g. on implementation) that could be used to inform policy and practice and, on the other, the maintenance of critical thinking in relation to existing theory to ensure that concepts that have achieved widespread acceptance are not just taken for granted, but remain provisional and open to challenge.
There are implications for furthering the development of both process-based and outcome-based studies of major system change, as well as for identifying and pursuing novel ways of bringing the two approaches together. In particular, future evaluation designs should aim to represent and capture the key components of the dynamics of major system change – the context, processes and practices, and outcomes – as understanding of these elements is currently held back by the tendency to specialise in either qualitative or quantitative methods of research rather than to look for common ground where they can enrich one another.
Acknowledgements
Contributions of authors
Simon Turner (Senior Research Associate, Health Services Research) and Lucy Goulding (Post-Doctoral Researcher, Health Services Research) wrote the first draft of the essay, under the guidance of Naomi J Fulop (Professor, Health Care Organisation and Management).
Jean-Louis Denis (Professor, Health Systems Research) and Ruth McDonald (Professor, Health Science Research and Policy) commented on the draft and provided additional material.
All authors approved the final version of the essay.
References
- 1.
- Best A, Greenhalgh T, Lewis S, Saul JE, Carroll S, Bitz J. Large-system transformation in health care: a realist review. Milbank Q 2012;90:421–56. 10.1111/j.1468-0009.2012.00670.x. [PMC free article: PMC3479379] [PubMed: 22985277] [CrossRef]
- 2.
- Rousseau DM. Reinforcing the micro/macro bridge: organizational thinking and pluralistic vehicles. J Manage 2011;37:429–42. 10.1177/0149206310372414. [CrossRef]
- 3.
- Haraden C, Leitch J. Scotland’s successful national approach to improving patient safety in acute care. Health Aff (Millwood) 2011;30:755–63. 10.1377/hlthaff.2011.0144. [PubMed: 21471498] [CrossRef]
- 4.
- Turner S, Ramsay A, Perry C, Boaden R, McKevitt C, Morris S, et al. Lessons for major system change: centralization of stroke services in two metropolitan areas of England [published online ahead of print 24 January 2016]. J Health Serv Res Policy 2016. 10.1177/1355819615626189. [PMC free article: PMC4904350] [PubMed: 26811375] [CrossRef]
- 5.
- Langley A, Denis J-L. Beyond evidence: the micropolitics of improvement. BMJ Qual Saf 2011;20:i43–6. 10.1136/bmjqs.2010.046482. [PMC free article: PMC3066842] [PubMed: 21450770] [CrossRef]
- 6.
- Conrad DA, Grembowski D, Hernandez SE, Lau B, Marcus-Smith M. Emerging lessons from regional and state innovation in value based payment reform: balancing collaboration and disruptive innovation. Milbank Q 2014;92:568–623. 10.1111/1468-0009.12078. [PMC free article: PMC4221757] [PubMed: 25199900] [CrossRef]
- 7.
- Waring J. A Movement for Improvement? A Qualitative Study on the Use of Social Movement Strategies in the Implementation of a Quality Improvement Intervention. Presentation at Health Services Research Network Symposium, Nottingham Conference Centre, Nottingham, UK, 1–2 July 2015.
- 8.
- Adler PS, Kwon SW, Heckscher C. Perspective-professional work: the emergence of collaborative community. Organ Sci 2008;19:359–76. 10.1287/orsc.1070.0293. [CrossRef]
- 9.
- Harrison MI, Kimani J. Building capacity for a transformation initiative: system redesign at Denver Health. Health Care Manage Rev 2009;34:42–53. 10.1097/01.HMR.0000342979.91931.d9. [PubMed: 19104263] [CrossRef]
- 10.
- Barach P, Johnson JK. Understanding the complexity of redesigning care around the clinical microsystem. Qual Saf Health Care 2006;15(Suppl. 1):10–16. 10.1136/qshc.2005.015859. [PMC free article: PMC2464878] [PubMed: 17142602] [CrossRef]
- 11.
- Geertz C. Thick Description: Toward an Interpretive Theory of Culture. In Lincoln Y, Denzin N, editors. Turning Points in Qualitative Research: Tying Knots in a Handkerchief. Oxford: AltaMira Press; 2003. pp. 143–68.
- 12.
- Robert GB, Anderson JE, Burnett SJ, Aase K, Andersson-Gare B, Bal R, et al. A longitudinal, multi-level comparative study of quality and safety in European hospitals: the QUASER study protocol. BMC Health Serv Res 2011;11:285. 10.1186/1472-6963-11-285. [PMC free article: PMC3212959] [PubMed: 22029712] [CrossRef]
- 13.
- Currie G, Lockett A, El Enany N. From what we know to what we do: lessons learned from the translational CLAHRC initiative in England. J Health Serv Res Policy 2013;18(Suppl. 3):27–39. 10.1177/1355819613500484. [PubMed: 24127358] [CrossRef]
- 14.
- Eccles MP, Armstrong D, Baker R, Cleary K, Davies H, Davies S, et al. An implementation research agenda. Implement Sci 2009;4:1–7. 10.1186/1748-5908-4-18. [PMC free article: PMC2671479] [PubMed: 19351400] [CrossRef]
- 15.
- Richards DA. The Complex Intervention Framework. In Richards DA, Hallberg IR, editors. Complex Interventions in Health: An Overview of Research Methods. London: Routledge; 2015. pp. 1–15.
- 16.
- Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q 2004;82:581–629. 10.1111/j.0887-378X.2004.00325.x. [PMC free article: PMC2690184] [PubMed: 15595944] [CrossRef]
- 17.
- Davidoff F, Dixon-Woods M, Leviton L, Michie S. Demystifying theory and its use in improvement. BMJ Qual Saf 2015;24:228–38. 10.1136/bmjqs-2014-003627. [PMC free article: PMC4345989] [PubMed: 25616279] [CrossRef]
- 18.
- Tuohy CH. Reform and the politics of hybridization in mature health care states. J Health Polit Policy Law 2012;37:611–32. 10.1215/03616878-1597448. [PubMed: 22466051] [CrossRef]
- 19.
- Scott SD, Plotnikoff RC, Karunamuni N, Bize R, Rodgers W. Factors influencing the adoption of an innovation: an examination of the uptake of the Canadian Heart Health Kit (HHK). Implement Sci 2008;3:41. 10.1186/1748-5908-3-41. [PMC free article: PMC2567341] [PubMed: 18831766] [CrossRef]
- 20.
- Dixon-Woods M, Bosk C, Aveling EL, Goeschel CA, Pronovost PJ. Explaining Michigan: developing an ex post theory of a quality improvement program. Milbank Q 2011;89:167–205. 10.1111/j.1468-0009.2011.00625.x. [PMC free article: PMC3142336] [PubMed: 21676020] [CrossRef]
- 21.
- Fulop N, Robert G. Context for Successful Improvement: Evidence Review. London: The Health Foundation; 2015.
- 22.
- Tsoukas H, Chia R. On organizational becoming: rethinking organizational change. Organ Sci 2002;13:567–82. 10.1287/orsc.13.5.567.7810. [CrossRef]
- 23.
- Langley A, Denis J-L. The neglected dimensions of organizational change [Les dimensions négligées du changement organisationnel]. Télescope 2008;14:13–32.
- 24.
- Tuohy C. Accidental Logics: The Dynamics of Policy Change in the United States, Britain and Canada. Oxford: Oxford University Press; 1999.
- 25.
- May C. Towards a general theory of implementation. Implement Sci 2013;8:18. 10.1186/1748-5908-8-18. [PMC free article: PMC3602092] [PubMed: 23406398] [CrossRef]
- 26.
- Weiss C. Evaluation: Methods for Studying Programs and Policies. 2nd edn. Upper Saddle River, NJ: Prentice Hall; 1998.
- 27.
- McDonald R, Kristensen SR, Zaidi S, Sutton M, Todd S, Konteh F, et al. Evaluation of the Commissioning for Quality and Innovation Framework Final Report. Manchester: University of Manchester; 2013. URL: http://hrep.lshtm.ac.uk/publications/CQUIN_Evaluation_Final_Feb2013–1.pdf (accessed August 2015).
- 28.
- Brook RH, Keeler EB, Lohr KN, Newhouse JP, Ware JE, Rogers WH, et al. The Health Insurance Experiment: A Classic RAND Study Speaks to the Current Health Care Reform Debate. Santa Monica, CA: RAND Corporation; 2006. URL: www.rand.org/pubs/research_briefs/RB9174 (accessed August 2015).
- 29.
- Finkelstein A, Taubman S, Wright B, Bernstein M, Gruber J, Newhouse JP, et al. The Oregon Health Insurance Experiment: evidence from the first year. Q J Econ 2012;127:1057–106. 10.1093/qje/qjs020. [PMC free article: PMC3535298] [PubMed: 23293397] [CrossRef]
- 30.
- Taubman S, Allen H, Wright B, Baicker K, Finkelstein A. Medicaid increases emergency department use: evidence from Oregon’s Health Insurance Experiment. Science 2014;343:263–8. 10.1126/science.1246183. [PMC free article: PMC3955206] [PubMed: 24385603] [CrossRef]
- 31.
- Baicker K, Finkelstein A, Song J, Taubman S. The Impact of Medicaid on Labor Force Activity and Program Participation: Evidence from the Oregon Health Insurance Experiment. NBER working paper 19547. Cambridge, MA: National Bureau of Economic Research; 2013. [PMC free article: PMC4145849] [PubMed: 25177042]
- 32.
- Steventon A, Bardsley M, Billings J, Dixon J, Doll H, Hirani S, et al. Effect of telehealth on use of secondary care and mortality: findings from the Whole System Demonstrator cluster randomised trial. BMJ 2012;344:e3874. 10.1136/bmj.e3874. [PMC free article: PMC3381047] [PubMed: 22723612] [CrossRef]
- 33.
- US Department of Health and Human Services. Head Start Impact Study Final Report. Washington, DC: US Department of Health and Human Services; 2010.
- 34.
- Ettelt S, Mays N. RCTs – how compatible are they with contemporary health policy-making? Br J Health Manag 2015;21:379–82. 10.12968/bjhc.2015.21.8.379. [CrossRef]
- 35.
- Steventon A, Grieve R, Bardsley M. An approach to assess generalizability in comparative effectiveness research: a case study of the whole systems demonstrator cluster randomized trial comparing telehealth with usual care for patients with chronic health conditions. Med Decis Making 2015;35:1023–36. 10.1177/0272989X15585131. [PMC free article: PMC4592957] [PubMed: 25986472] [CrossRef]
- 36.
- Medical Research Council. Using Natural Experiments to Evaluate Population Health Interventions: Guidance for Producers and Users of Evidence. MRC; 2012. URL: www.mrc.ac.uk/naturalexperimentsguidance (accessed August 2015).
- 37.
- Yelland J, Riggs E, Szwarc J, Casey S, Dawson W, Vanpraag D, et al. Bridging the gap: using an interrupted time series design to evaluate systems reform addressing refugee maternal and child health inequalities. Implement Sci 2015;10:62. 10.1186/s13012-015-0251-z. [PMC free article: PMC4425879] [PubMed: 25924721] [CrossRef]
- 38.
- Pronovost P, Jha AK. Did hospital engagement networks actually improve care? N Engl J Med 2014;371:691–3. 10.1056/NEJMp1405800. [PubMed: 25140953] [CrossRef]
- 39.
- Benning A, Dixon-Woods M, Nwulu U, Ghaleb M, Dawson J, Barber N, et al. Multiple component patient safety intervention in English hospitals: controlled evaluation of second phase. BMJ 2011;342:d199. 10.1136/bmj.d199. [PMC free article: PMC3033437] [PubMed: 21292720] [CrossRef]
- 40.
- Takian A, Petrakaki D, Cornford T, Sheikh A, Barber N. Building a house on shifting sand: methodological considerations when evaluating the implementation and adoption of national electronic health record systems. BMC Health Serv Res 2012;12:105. 10.1186/1472-6963-12-105. [PMC free article: PMC3469374] [PubMed: 22545646] [CrossRef]
- 41.
- Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ 2015;350:h1258. 10.1136/bmj.h1258. [PMC free article: PMC4366184] [PubMed: 25791983] [CrossRef]
- 42.
- Yin RK. Validity and generalization in future case study evaluations. Evaluation 2013;19:321–32. 10.1177/1356389013497081. [CrossRef]
- 43.
- Yin RK. Case Study Research: Design and Methods. 4th edn. London: Sage; 2009.
- 44.
- Fulop N, Protopsaltis G, King A, Allen P, Hutchings A, Normand C. Changing organisations: a study of the context and processes of mergers of health care providers in England. Soc Sci Med 2005;60:119–30. 10.1016/j.socscimed.2004.04.017. [PubMed: 15482872] [CrossRef]
- 45.
- Rodgers M, Thomas S, Harden M, Parker G, Street A, Eastwood A. Developing a methodological framework for organisational case studies: a rapid review and consensus development process. Health Serv Deliv Res 2016;4(1). [PubMed: 26740990]
- 46.
- Pawson R, Tilley N. Realistic Evaluation. London: Sage; 1997.
- 47.
- Greenhalgh T, Humphrey C, Hughes J, Macfarlane F, Butler C, Pawson R. How do you modernize a health service? A realist evaluation of whole-scale transformation in London. Milbank Q 2009;87:391–416. 10.1111/j.1468-0009.2009.00562.x. [PMC free article: PMC2881448] [PubMed: 19523123] [CrossRef]
- 48.
- Marchal B, van Belle S, van Olmen J, Hoerée T, Kegels G. Is realist evaluation keeping its promise? A review of published empirical studies in the field of health systems research. Evaluation 2012;18:192–212. 10.1177/1356389012442444. [CrossRef]
- 49.
- Langley A, Golden-Biddle K, Reay T, Denis J-L, Hébert Y, Lamothe L, et al. Identity struggles in merging organizations: renegotiating the sameness–difference dialectic. J Appl Behav Sci 2012;48:135–67. 10.1177/0021886312438857. [CrossRef]
- 50.
- Cloutier C, Denis J-L, Langley A, Lamothe L. Agency at the managerial interface: public sector reform as institutional work [published online ahead of print 1 June 2015]. J Public Adm Res Theory 2015.
- 51.
- Øvretveit J, Klazinga N. Learning from large-scale quality improvement through comparisons. Int J Qual Health Care 2012;24:463–9. 10.1093/intqhc/mzs046. [PubMed: 22879374] [CrossRef]
- 52.
- Øvretveit J, Andreen-Sachs M, Carlsson J, Gustafsson H, Hansson J, Keller C, et al. Implementing organisation and management innovations in Swedish healthcare: lessons from a comparison of 12 cases. J Health Organ Manag 2012;26:237–57. 10.1108/14777261211230790. [PubMed: 22856178] [CrossRef]
- 53.
- Langley A, Smallman C, Tsoukas H, Van de Ven AH. Process studies of change in organization and management: unveiling temporality, activity, and flow. Acad Manage J 2013;56:1–13. 10.5465/amj.2013.4001. [CrossRef]
- 54.
- Fulop N, Boaden R, Hunter R, McKevitt C, Morris S, Pursani N, et al. Innovations in major system reconfiguration in England: a study of the effectiveness, acceptability and processes of implementation of two models of stroke care. Implement Sci 2013;8:19. 10.1186/1748-5908-8-5. [PMC free article: PMC3545851] [PubMed: 23289439] [CrossRef]
- 55.
- McDonald R, Boaden R, Roland M, Kristensen SR, Meacock R, Lau Y-S, et al. A qualitative and quantitative evaluation of the Advancing Quality pay-for-performance programme in the NHS North West. Health Serv Deliv Res 2015;3(23). [PubMed: 25996026]
- 56.
- Hunter DJ, Erskine J, Hicks C, McGovern T, Small A, Lugsden E, et al. A mixed-methods evaluation of transformational change in NHS North East. Health Serv Deliv Res 2014;2(47). [PubMed: 25642553]
- 57.
- Michie S, Johnston M, Abraham C, Lawton R, Parker D, Walker A, on behalf of the ‘Psychological Theory’ Group. Making psychological theory useful for implementing evidence based practice: a consensus approach. Qual Saf Health Care 2005;14:26–33. 10.1136/qshc.2004.011155. [PMC free article: PMC1743963] [PubMed: 15692000] [CrossRef]
- 58.
- Michie S, Johnston M, Francis J, Hardeman W, Eccles M. From theory to intervention: mapping theoretically derived behavioral determinants to behavior change techniques. Applied Psychol 2008;57:660–80. 10.1111/j.1464-0597.2008.00341.x. [CrossRef]
- 59.
- Michie S, Fixsen D, Grimshaw JM, Eccles MP. Specifying and reporting complex behaviour change interventions: the need for a scientific method. Implement Sci 2009;4:40. 10.1186/1748-5908-4-40. [PMC free article: PMC2717906] [PubMed: 19607700] [CrossRef]
- 60.
- Michie S, Atkins L, West R. The Behaviour Change Wheel. A Guide to Developing Interventions. London: Silverback Publishing; 2014.
- 61.
- French SD, Green SE, O’Connor DA, McKenzie JE, Francis JJ, Michie S, et al. Developing theory-informed behaviour change interventions to implement evidence into practice: a systematic approach using the Theoretical Domains Framework. Implement Sci 2012;7:38. 10.1186/1748-5908-7-38. [PMC free article: PMC3443064] [PubMed: 22531013] [CrossRef]
- 62.
- Brach C, Lenfestey N, Roussel A, Amoozegar J, Sorensen A. Will It Work Here? A Decisionmaker’s Guide to Adopting Innovations. Rockville, MD: Agency for Healthcare Research and Quality; 2008.
- 63.
- Ettelt S, Mays N, Allen P. The multiple purposes of policy piloting and their consequences: three examples from national health and social care policy in England. J Soc Policy 2015;44:319–33. 10.1017/S0047279414000865. [CrossRef]
- 64.
- Turner J, O’Cathain A, Knowles E, Nicholl J. Impact of the urgent care telephone service NHS 111 pilot sites: a controlled before and after study. BMJ Open 2013;3:e003451. 10.1136/bmjopen-2013-003451. [PMC free article: PMC3831104] [PubMed: 24231457] [CrossRef]
- 65.
- Ettelt S, Mays N, Allen P. Policy experiments: investigating effectiveness or confirming direction? Evaluation 2015;21:292–307. 10.1177/1356389015590737. [CrossRef]
List of abbreviations
- CLAHRC
Collaboration for Leadership in Applied Health Research and Care
- ITS
interrupted time series
- RCT
randomised controlled trial
- TDF
Theoretical Domains Framework
- *
Co-first authors contributed equally to this work
- Declared competing interests of authors: Simon Turner and Naomi J Fulop were supported by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care (CLAHRC) North Thames at Bart’s Health NHS Trust. Lucy Goulding is employed by King’s Improvement Science. King’s Improvement Science is part of the NIHR CLAHRC South London and comprises a specialist team of improvement scientists and senior researchers based at King’s College London. Its work is funded by King’s Health Partners (Guy’s and St Thomas’ NHS Foundation Trust, King’s College Hospital NHS Foundation Trust, King’s College London and South London and Maudsley NHS Foundation Trust), Guy’s and St Thomas’ Charity, the Maudsley Charity and The Health Foundation. Naomi J Fulop reports grants from NIHR during the conduct of the study, grants from NIHR and grants from The Health Foundation outside the submitted work. Simon Turner reports grants from NIHR outside the submitted work. Lucy Goulding reports grants from King’s Health Partners and NIHR outside the submitted work. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health.
- This essay should be referenced as follows: Turner S, Goulding L, Denis JL, McDonald R, Fulop NJ. Major system change: a management and organisational research perspective. In Raine R, Fitzpatrick R, Barratt H, Bevan G, Black N, Boaden R, et al. Challenges, solutions and future directions in the evaluation of service innovations in health care and public health. Health Serv Deliv Res 2016;4(16). pp. 85–104.