Sheaff R, Charles N, Mahon A, et al. NHS commissioning practice and health system governance: a mixed-methods realistic evaluation. Southampton (UK): NIHR Journals Library; 2015 Mar. (Health Services and Delivery Research, No. 3.10.)

Appendix 2 Supplementary information on methods

This appendix supplements the corresponding sections in Chapter 4, Methods, of the main body of the report.

Leximancer analysis

Leximancer software, which automates quantified content analysis, proceeds as follows unless reconfigured otherwise (a simplified sketch of the procedure follows the list).

  1. The software divides the study text into two-sentence blocks.
  2. It eliminates stop-words (proper names, ‘and’, ‘the’, the interviewer’s name and other terms known a priori to be uninformative).
  3. Frequent words and frequently associated words are selected as ‘seed words’ (‘concepts’).
  4. The software codes the two-sentence blocks according to what concepts are present.
  5. It counts the occurrences of codes.
  6. The most frequently associated concepts are defined as themes (which can be traced back to their textual sources).
  7. By default, Leximancer surfaces the main concepts inductively. The researcher can also rerun Leximancer, selecting those inductively identified concepts that are relevant to her research questions and grouping them into themes. In the present case, we selected the concepts and themes relevant to the power mechanisms discussed in Chapter 2.
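
As a minimal illustration of steps 1–6 (not Leximancer’s actual implementation), the Python sketch below blocks text into two-sentence units, removes stop-words, selects frequent ‘seed words’ and counts their co-occurrence; the stop-word list and frequency threshold are hypothetical.

    import re
    from collections import Counter
    from itertools import combinations

    STOP_WORDS = {'and', 'the', 'of', 'to', 'a', 'in'}   # illustrative only

    def two_sentence_blocks(text):
        """Step 1: divide the text into two-sentence blocks."""
        sentences = re.split(r'(?<=[.!?])\s+', text.strip())
        return [' '.join(sentences[i:i + 2]) for i in range(0, len(sentences), 2)]

    def informative_words(block):
        """Step 2: tokenise and drop stop-words."""
        return [w for w in re.findall(r"[a-z']+", block.lower()) if w not in STOP_WORDS]

    def concepts_and_cooccurrence(text, min_freq=5):
        """Steps 3-6: frequent words become candidate 'concepts'; co-occurrence
        of concepts within the same block is then counted."""
        blocks = [informative_words(b) for b in two_sentence_blocks(text)]
        freq = Counter(w for b in blocks for w in set(b))            # block frequency
        concepts = {w for w, n in freq.items() if n >= min_freq}     # candidate seed words
        cooc = Counter()
        for b in blocks:
            present = sorted(concepts & set(b))
            cooc.update(combinations(present, 2))                    # concept co-occurrence
        return concepts, cooc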

In Leximancer analysis the term ‘concept’ has a narrower meaning than usual, denoting a set of co-occurring words (as opposed to its more usual research sense of a theoretically informed definition).

The study ‘text’ can be a combination of documents (including transcripts, laws), spreadsheets, audio and video material. We therefore ran three analyses:

  1. Equity and Excellence: Liberating the NHS195 and the official support documents branded as explaining and elaborating it, except for the Assignment for Transition and human resources management documents,297,298 which were irrelevant to a CMO analysis
  2. oral material: speeches and interviews
  3. the 2012 Act196 with the official explanatory ‘factsheets’, including one on service quality.209228

First, we used Leximancer’s default setting to find inductively what themes (and component concepts) were present in the sample. We coded the concepts found as relating to the context, mechanism or outcome of NHS commissioning policy or as ‘stop-words’ (e.g. ‘change’, ‘future’, ‘things’, ‘and’, ‘the’, etc.) for being uninformative, ambiguous (e.g. ‘substitute’), trivial or irrelevant. Approbations, however vague (e.g. ‘best’, ‘improve’), were also coded as outcomes, that is policy or service outcomes. We collapsed duplicate concepts (e.g. ‘patient’ + ‘patients’, ‘GP’ + ‘GPs’, ‘better’ + ‘best’ + ‘improve’). We assumed that the conjunction of concepts or themes denoting a mechanism and/or a context and/or an outcome denoted an existing or proposed CMO relationship. A count of these conjunctions showed which CMO relationships received most coverage in the texts. Because some of these conjunctions (textual proximities) may reflect nothing more than drafting accidents, this method may bias towards overestimating the number of CMO relationships stated in the texts, but if any such overcounting is more or less evenly distributed across the texts, as we have assumed, it will not bias the relative frequencies of the different CMO assertions. From the blocks of texts where Leximancer had found these conjunctions, we extracted any descriptions of CMO relationships and classified the mechanisms according to which media of power they used.
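
As a minimal illustration of this conjunction count (not the study’s actual code), the sketch below assumes a hand-coded mapping from Leximancer concepts to context, mechanism or outcome and counts how often differently coded concepts co-occur in the same text block; all concept names and codes are hypothetical.

    from collections import Counter
    from itertools import combinations

    # Researcher-assigned codes for inductively found concepts (illustrative).
    cmo_code = {
        'competition': 'mechanism',
        'choice': 'mechanism',
        'gp': 'context',
        'patients': 'context',
        'improve': 'outcome',
        'quality': 'outcome',
    }

    def cmo_conjunctions(blocks_with_concepts):
        """Count pairs of differently coded concepts appearing in the same block."""
        counts = Counter()
        for concepts in blocks_with_concepts:
            coded = {c: cmo_code[c] for c in concepts if c in cmo_code}
            for a, b in combinations(sorted(coded), 2):
                if coded[a] != coded[b]:          # a conjunction across C/M/O roles
                    counts[(a, b)] += 1
        return counts

    # The most heavily covered candidate CMO relationships:
    blocks = [{'competition', 'quality', 'gp'}, {'choice', 'improve'}, {'gp', 'patients'}]
    print(cmo_conjunctions(blocks).most_common(3))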

Cognitive frame analysis

Although the quantitative (Leximancer) analysis located which CMO relationships the texts most often mentioned, the texts were too broad, ambiguous or brief about how these mechanisms worked or would work. We therefore made a cognitive frame analysis of data from our interviews with parliamentarians and top-level health managers to elaborate and supplement the accounts of CMO relationships found in the policy texts. In doing so, we again sought to relate the accounts and explanations (frames) that our informants used to the categories (CMO; media of power) required for the present study. Mostly the informants’ accounts were consistent; where they differed (in emphasis rather than outright contradiction), we took the more often expressed view as the one more likely to guide commissioning in practice.

When policies are controversial, a simple précis of policy documents and transcripts is likely to oversimplify the programme theory by omitting relevant aims, mechanisms and implicit background assumptions. It might also overemphasise spurious rationalisations and polemics. A more sophisticated discourse analysis is required. We adopted a ‘rhetorical’ variant.293 This is ‘critical’ in the sense of not necessarily taking all managerial and political rhetoric at face value or as entirely coherent, valid and normatively persuasive. We therefore dispute the suggestion that such critiques are impossible;294 realistic evaluation’s realist character is precisely what gives it its critical facet, because a programme theory can be evaluated empirically. Therefore, to the extent that they rely on these empirical assumptions, so can the policies that, through the medium of political discourse, express a programme theory. Nevertheless, taking policy statements at face value is a necessary starting point and our default assumption until we find reasons to suspend it.

We took the following signs as calling into question whether policy documents should be taken only at face value:

  1. silences or obviously ambiguous policy positions about important mechanisms or outcomes
  2. statements contradicting the balance of evidence available when the policy was formulated
  3. apparent contradictions among policy statements that (all supporting the policy) ought to be consistent.

Then a realistic evaluator has to infer and impute the missing assumptions in order to reconstitute the programme theory as completely and explicitly as possible and (to avoid evaluating a ‘straw man’ theory later on) in the most credible form consistent with the explicitly stated elements. We did so by inviting policy-makers themselves to elaborate the missing material at interview.

We collated the descriptions of CMO relationships found by these methods and paraphrased them as statements of the form ‘Doing X in circumstances M will cause agent A to do Y’ (or a logically equivalent statement, e.g. ‘If A does X, B will do Y’), the form required for empirically testing CMO assumptions. In this way we identified the CMO relationships by which policy-makers and top managers assumed NHS commissioning would achieve its intended service outcomes.
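
Purely by way of illustration, a structured record of such a paraphrased CMO assertion might look like the following; the fields and example values are hypothetical, not the study’s actual data.

    from dataclasses import dataclass

    @dataclass
    class CMOAssertion:
        context: str      # the circumstances M
        mechanism: str    # doing X (the medium of power exercised)
        agent: str        # the agent A expected to respond
        outcome: str      # Y, the expected response or service outcome
        source: str       # the document or interview from which it was paraphrased

    example = CMOAssertion(
        context='commissioner holds contestable contracts',
        mechanism='threaten to re-tender the service',
        agent='incumbent provider',
        outcome='provider complies with the quality requirements',
        source='policy interview (illustrative)',
    )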

Cross-sectional analysis of published managerial data

Before running the regression analyses, we checked for multicollinearity by measuring variance inflation factors (VIFs) among the potential independent and control variables, retaining only variables whose VIF was below the conservative threshold of 2.5 (and hence also below the conventional threshold of 5.0).
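
A minimal sketch of such a VIF screen, using statsmodels, might look like the following; the data frame and column names are hypothetical stand-ins for the PCT-level variables.

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    def vif_screen(df, threshold=2.5):
        """Return only the columns of df whose VIF falls below the threshold."""
        X = sm.add_constant(df)
        vifs = pd.Series(
            [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
            index=X.columns,
        ).drop('const')
        return df[vifs[vifs < threshold].index]

    # Usage (column names are illustrative):
    # candidates = pct_data[['herfindahl', 'independent_spend', 'population', 'deprivation']]
    # retained = vif_screen(candidates, threshold=2.5)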

All analyses were at PCT level. To show effects of, say, provider competition, it is necessary (but not sufficient) to find non-trivial correlations, with coefficients of the correct sign, between the commissioner characteristics and the service outcome variables. Since there are a number of such variables, there would be multiple potential correlations for each commissioner characteristic for which we had data. The higher the proportion of such correlations found having the sign that the relevant element of programme theory predicts, and the stronger those correlations, the stronger would be the evidence supporting the assumption that provider competition has an impact on the policy-relevant service outcomes. Such findings would support the inference that, if PCTs could stimulate (continuing the example) provider competition, the PCTs would thereby help to realise those outcomes. Conversely, the absence of any such correlation, or the presence of correlations with the opposite sign from what the programme theories outlined above assume, would be prima facie evidence against those assumptions.
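
A minimal sketch of that sign-and-strength check might look like the following; the outcome variables and the signs the programme theory is assumed to predict are purely illustrative.

    import pandas as pd

    # Signs the programme theory is assumed to predict (illustrative).
    predicted_sign = {
        'emergency_admissions': -1,     # e.g. more competition, fewer admissions
        'patient_satisfaction': +1,
    }

    def sign_check(df, characteristic, predictions=predicted_sign):
        """Correlate one commissioner characteristic with each outcome variable
        and note whether the coefficient carries the predicted sign."""
        rows = []
        for outcome, expected in predictions.items():
            r = df[characteristic].corr(df[outcome])      # Pearson r at PCT level
            rows.append({'outcome': outcome, 'r': r,
                         'sign_as_predicted': (r > 0) == (expected > 0)})
        return pd.DataFrame(rows)

    # Proportion of correlations carrying the predicted sign:
    # sign_check(pct_data, 'provider_competition')['sign_as_predicted'].mean()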

In the event (see Chapter 7, subsection Provider competition) we found few of the correlations that the programme theory assumed, in particular regarding competition. We therefore tested the robustness and sensitivity of our findings by rerunning the analyses for only the PCTs with the highest levels of competition, that is those in the top quartile for:

  1. spend on independent (i.e. for-profit) sector
  2. spend on local government sector and voluntary sector combined (not separated in the published data)
  3. Herfindahl index.

The top quartile was selected because it contained 38 sites; a smaller subsample would have yielded test results of only dubious validity.
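
A minimal sketch of this sensitivity analysis, subsetting to the top quartile on each competition indicator, might look like the following; the column names are hypothetical stand-ins for the published PCT-level data.

    import pandas as pd

    def herfindahl(shares):
        """Herfindahl index: the sum of squared provider market shares (shares sum to 1)."""
        return sum(s * s for s in shares)

    def top_quartile(df, column):
        """PCTs at or above the 75th percentile of the given indicator."""
        return df[df[column] >= df[column].quantile(0.75)]

    # Rerun the analyses on the top quartile of each competition indicator:
    # for indicator in ['independent_spend', 'lg_voluntary_spend', 'herfindahl']:
    #     subset = top_quartile(pct_data, indicator)    # about 38 of the PCTs
    #     ... repeat the correlation analyses on 'subset' ...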

Evidence synthesis

Across the case studies, data were synthesised by framework analysis. Conceptually this was equivalent to constructing, for each research question, a data grid in which each row contained data about a specific aspect of that research question, and each column represented a site, and then populating the cells with the relevant data from the case study collections of ‘pithy sentences’, findings from the cross-sectional analysis, action learning set findings, international comparisons and other published studies. We noted what common or divergent patterns there were across cells and then ‘read off’ the patterns as answers to our research questions. This method also revealed where it was necessary to add new categories or concepts to accommodate unforeseen empirical findings. By combining primary and secondary sources, we were able to compare (indeed check) our own findings against those from other studies.
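
Conceptually, the grid could be represented as follows; the aspects, sites and cell contents are hypothetical illustrations, not study data.

    import pandas as pd

    aspects = ['negotiation style', 'use of contract sanctions', 'data sharing']   # rows
    sites = ['Site A', 'Site B', 'Site C']                                         # columns

    grid = pd.DataFrame(index=aspects, columns=sites, dtype=object)
    grid.loc['use of contract sanctions', 'Site A'] = 'sanctions threatened but never applied'
    grid.loc['use of contract sanctions', 'Site B'] = 'no sanctions clause in the local contract'

    # 'Reading off' the pattern across sites for one aspect:
    print(grid.loc['use of contract sanctions'].dropna())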

As necessary, we derived the equivalent of row headings for each such systematic comparison from the analytical framework in Chapter 2, from the programme theory assumptions found by discourse analysis (see Chapter 5) and by deduction from the research question itself. In this way we nuanced the framework analysis for each research question. Analysing the reconfiguration of commissioning structures (RQ2) required a comparison of longitudinal accounts of the formation and development of commissioning structures in each study site during the study period. Once these histories had been elicited, they too could be systematically compared in the above way. Regarding RQ3(a), the ways in which commissioners changed their commissioning practice in an attempt to influence their providers demonstrated that the commissioners had at least that much freedom of manoeuvre in practice. The limits to this freedom were found by discovering what practical, resource and policy restrictions there were on their freedom to exercise the media of power listed in Chapter 2 over their providers. We identified these limits from our case study materials and from policy and regulatory statements. To examine some of the effects of client-based commissioning [RQ3(b)], we relied more on the cross-sectional than the case study data. Analysing the similarities and differences in commissioning practice for different care groups provided the basis for testing some of the theories discussed in Chapter 2 (RQ4).204

Copyright © Queen’s Printer and Controller of HMSO 2015. This work was produced by Sheaff et al. under the terms of a commissioning contract issued by the Secretary of State for Health. This issue may be freely reproduced for the purposes of private research and study and extracts (or indeed, the full report) may be included in professional journals provided that suitable acknowledgement is made and the reproduction is not associated with any form of advertising. Applications for commercial reproduction should be addressed to: NIHR Journals Library, National Institute for Health Research, Evaluation, Trials and Studies Coordinating Centre, Alpha House, University of Southampton Science Park, Southampton SO16 7NS, UK.

Included under terms of UK Non-commercial Government License.
