Brocklehurst P, Tickle M, Birch S, et al. Impact of changing provider remuneration on NHS general dental practitioner services in Northern Ireland: a mixed-methods study. Southampton (UK): NIHR Journals Library; 2020 Jan. (Health Services and Delivery Research, No. 8.6.)


Chapter 3 Difference in difference

Introduction

The decision by policy-makers in Northern Ireland to pilot an NHS dental contract for adult care based on capitation, rather than the existing largely FFS payment system, offered an opportunity to undertake a mixed-methods study to investigate the impact of a change in the remuneration system in Northern Ireland on productivity and the quality of care provided. As the dental pilots in Northern Ireland were to switch from FFS remuneration to capitation and then back to FFS after 12 months, it provided a unique opportunity to observe and document the scale of effect, the issues around implementation and any unintended consequences of the two changes in remuneration (FFS to capitation, capitation to FFS). [Although the current NHS dental contract in Northern Ireland is described as FFS in this report, remuneration across the province has three elements: (1) a fee per item of service – approximately 60% of remuneration received by GDPs and the majority of care provided to adults is FFS; (2) capitation and continuing care payments – approximately 20% of remuneration received by GDPs (predominantly children); and (3) practice allowances – approximately 20% of remuneration received by GDPs (and payable to designated dentists or PPs).]

The evidence from the literature would suggest that practitioners respond very quickly to changes in remuneration systems to ensure the viability of their practices.57 Changes to the NHS dental contract in England in 2006 saw an immediate drop in specific areas of clinical activity that reduced profit margins for the practice and an increase in clinical activity in areas where profit margins could be improved.6,7

The aim of the research was to evaluate the impact of a change in the system of provider remuneration on the productivity, quality of care and health outcomes of NHS dental services in Northern Ireland. The objectives are highlighted in Chapter 2.

Methods

Permission was granted by the University of Manchester Research Ethics Committee (reference 15236) on 10 June 2015.

Study design

A DiD design was employed to quantitatively measure any change in activity levels across the intervention and control groups in each of the three phases of the study:

  • phase 1 – 12-month baseline period prior to introduction of the new capitation contract in the intervention group of practices (August 2014 to August 2015)
  • phase 2 – 12-month capitation period for the intervention group of practices (August 2015 to August 2016)
  • phase 3 – 12-month period following reversion of the intervention group of practices back to FFS (August 2016 to August 2017).

Figure 1 provides a graphical depiction of the DiD design, which compared the difference in outcomes before and after the change in the contract model in the pilot practices with outcomes in a group of matched controls.59

FIGURE 1. The DiD design used in the study.

The DiD estimator examined the impact of the contractual change in the intervention practices compared with the control practices at the following points of change:

  • baseline FFS to capitation
  • capitation to reversion FFS
  • baseline FFS to reversion FFS.

Analyses were performed at the practice level, with a single outcome measure used in each DiD model. Clustered standard errors (SEs) were used to adjust for correlation over time, because outcomes measured at the patient level (e.g. number of patients, number of treatments delivered to patients) could be influenced over time by a higher-level structure (the practice).
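To make this concrete, the following is a minimal sketch (not the study's actual code) of one practice-level DiD regression with SEs clustered on practice; the long-format layout and column names (outcome, intervention, post, practice_id, phase) are assumptions for illustration only.

```python
# Minimal practice-level DiD sketch with practice-clustered SEs.
# Assumed columns (hypothetical): outcome, intervention (1 = pilot practice,
# 0 = control), post (1 = later period of the pairwise comparison), practice_id.
import pandas as pd
import statsmodels.formula.api as smf

def did_estimate(df: pd.DataFrame):
    """Fit outcome ~ intervention * post; the interaction term is the DiD estimator."""
    result = smf.ols("outcome ~ intervention * post", data=df).fit(
        cov_type="cluster",
        cov_kwds={"groups": df["practice_id"]},  # cluster SEs at the practice level
    )
    return result.params["intervention:post"], result

# Example pairwise comparison: baseline FFS vs. the capitation period.
# did, res = did_estimate(df[df["phase"].isin(["baseline", "capitation"])])
```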

We also used the same DiD approach and outcome measures to analyse activity data at the individual GDP level. The analyses assessed changes in behaviour for the average number of ‘designated GDPs’ in each practice and the average number of ‘other’ GDPs with an NHS contract. A designated GDP is a GDP who is paid practice allowances by the NHS in Northern Ireland. These allowances are to help cover some of the running costs of NHS dental practices. A ‘designated GDP’ is likely to be an equity-owning PP and is labelled as such in the text and results tables. A second analysis was undertaken for all (other) GDPs with an NHS contract who were not ‘designated GDPs’. These were likely to be ADs (non-equity owning) working at the practice and are labelled as such in the text and results tables.

Table 1 describes the outcome measures used in this study. Three broad groups of outcome measures were chosen to assess the impact of the change in remuneration on:

  1. access
  2. treatment (service mix)
  3. finance.

TABLE 1. Outcome measurement definitions

The impact on access is important to assess, as one would expect capitation to increase access, with GDPs wishing to ensure that they hit their capitation targets. The outcome measures used to assess patient access to services were different types of registration, expressed as a proportion of the total number of patients on the practice list (NHS dental registration lasts for 24 months):

  • mean (total) number of registrations
  • mean number of reregistrations (rollover of existing registrations)
  • mean number of new patients
  • proportion of lapsed patients who returned to the practice list
  • mean number of patients lost to the practice.

The service mix measures were chosen to identify any changes in care provision, which would be important to policy and can be broadly categorised as:

  • complex treatment involving extensive clinical time or work involving a dental laboratory [endodontics, indirect restorations (crowns) and dentures]
  • treatment of established disease – direct restorations (fillings) and extractions
  • preventative care, for example examinations, fissure sealants, radiographs, two-visit periodontal treatments
  • other composite measures of activity (number of treatment plans and treatment items).

The third group of outcome measures was financial in nature; these measures are important from a policy perspective and to dental practices as small businesses. Financial measures included total health service income and revenue from patient charges. Data on income from private practice were not available.

The contents of the HS45 form provided the data for access, service activity and financial information; this form is replicated in Appendix 1. This form is the Northern Ireland equivalent of the FP17 form used by the NHS in England. It is a means of claiming payment for any care or treatment detailed in the Statement of Remuneration.60 Most practices submit their claims for treatment completed electronically and all the information included in the paper HS45 form is replicated in the electronic version. A HS45 or electronic equivalent is submitted every time a health service course of treatment (including examination only) is claimed. The GDPs also usually tick the part 3 ‘reregistration’ element in order to roll over the patient’s registration for a further 24 months.

The BSO of Northern Ireland is responsible for collating and paying claims for activity. The BSO extracted data from the submitted HS45 forms for both intervention and control practices for the analyses. Prior to analysis, the BSO cleaned the data set to:

  • prevent the block transfer of patients between GDPs working in the same practice appearing as new patients
  • identify and separate treatment item ‘tail data’ (treatments that took place towards the end of one study period that were claimed in the following study period)
  • identify large fluctuations over time in treatment activity caused by maternity leave, major business change (newly started practices hiring staff or closing practices transferring patients to other practices) or takeover activity.

The outcome measures for access and service mix were calculated relative to the practice list size (per 1000 patient registrations). This was important because analyses were based on a comparison of group averages (e.g. changes between study periods within and between control and intervention practices). Using proportionate measures gave each practice an equal weight in determining the group average. Measures of activity alone (not relative to practice size) would have given greater weight in our analysis to a small number of larger pilot practices.
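As a small illustration of this rescaling (the column names are hypothetical), monthly activity can be converted to a rate per 1000 registered patients before group averages are taken, so that each practice contributes equally:

```python
import pandas as pd

def rate_per_1000(df: pd.DataFrame, activity_col: str) -> pd.Series:
    """Monthly activity expressed per 1000 patients on the practice list."""
    return 1000 * df[activity_col] / df["list_size"]

# e.g. df["restorations_per_1000"] = rate_per_1000(df, "n_restorations")
# Group means of this rate weight every practice equally, unlike raw counts.
```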

Population

The population under study was practices with an NHS contract in Northern Ireland. A selection process overseen by the DHSSPS and NIHSCB was undertaken to select the practices in the intervention group (i.e. those whose process of payment would move across to one based on a capitation-type payment). Prior to the evaluation described here, the DHSSPS and NIHSCB had recruited two practices to a pre-pilot stage that started on 13 November 2014 and had run for 6 months. The purpose of this pre-pilot stage was to determine the financial risk (modelling the drop in PCR). This helped the DHSSPS and NIHSCB ensure that the budget for the full pilot was underwritten.

To select the intervention practices, the DHSSPS and NIHSCB arranged evening meetings in November and December 2014 for GDPs to raise awareness about the pilots. Practices expressing an interest in participating attended information evenings held on 2 and 3 March 2015. Offers were made to practices submitting a formal application to participate on 8 May 2015. The inclusion criteria were determined by the NIHSCB and detailed in an expression of interest document. Practices had to:

  • be a provider in one of the following categories –
    • a health and social care trust
    • a GDP whose name is included in a dental list
    • an NIHSCB employee or pilot scheme employee
    • a qualifying body
    • an individual who is providing PDS
  • have a commitment to the NHS maintained for the duration of the pilot
  • have full registration with the General Dental Council
  • have appropriate indemnity
  • have completed a vocational training/dental foundation training programme (or have an appropriate exemption).

The final selection of the intervention group was undertaken by an internal NIHSCB panel using the criteria above. Additional criteria considered included practice size (classified as small or large), geography (classified as urban or non-urban) and extent of health service commitment (classified as above or below average). A total of 21 practices submitted expressions of interest to be involved in the pilots: two did not meet the essential criteria – one did not meet the minimum 50% commitment to the NHS and one was unable to provide a service for 37.5 hours per week. A total of 12 practices were selected by the panel and financial templates were produced for all of the selected practices. The selected practices had 2 weeks to consider the expression of interest document and to reply by 22 May 2015; nine practices subsequently signed the service-level agreement to take part in the pilot. These nine practices were added to the two first-wave (pre-pilot) practices, making a total of 11 practices that participated in this study. In our original protocol we stated that we would ‘conduct 2 focus groups with dentists involved in the pilot practices to discuss any observed changes in activity and identify areas that dentists feel the simple capitation contract can be improved’. These focus groups did not take place as the DHSSPS/NIHSCB decided that the two pilot practices would be included in the full evaluation. In addition, the DHSSPS/NIHSCB decided to proceed with a simple capitation contract with no changes to that used in the first-wave pilot. These 11 practices and their patients, who were registered during the study period, made up the population that received the intervention. The characteristics of the intervention practices are set out in Tables 3 and 4.

TABLE 3. Characteristics of pilot and control practices at baseline

TABLE 4. Baseline variable comparison of treatment outcomes between groups

The intervention period lasted 12 months, during which time each practice was required to maintain a health service registered patient list consistent in size and profile with the list registered under the practice on 31 December 2014. The contract stipulated that pilot practices would be closely monitored for any variance outside a tolerance of a 5% decrease in registrations; practices would be asked to address any variance outside this tolerance, with the possibility of renegotiation of contract terms and conditions. The level of remuneration under capitation was based on each practitioner’s gross NHS earnings during a reference period, which ran from 1 January 2014 to 31 December 2014. This included appropriate prospective adjustments to reflect any subsequent changes to the Statement of Dental Remuneration made in the period leading up to the start of the pilot. During the pilot, practitioners received their agreed gross income paid in 12 equal monthly instalments. In August 2015, the intervention practices switched to the new capitation contract and returned to FFS 12 months later.

Selection of the control practices

It is not possible in natural experiments to test for the presence of unobserved characteristics that influence the recorded observations (i.e. there is no certainty that the DiD estimates are free from selection bias). To assess how violations of unconfounded treatment assignment (i.e. selection bias) could affect our study conclusions, we tested the sensitivity of the DiD estimates by using a matched control group of practices with characteristics (which might influence participation in the pilot) that overlapped with those of the intervention group as closely as possible.

The control group of practices was selected using a two-stage process. The first stage was by stratified random sampling of all NIHSCB practices in Northern Ireland using the following strata: practice list size, proportion of patients who are children, proportion of adults exempt from NHS fees and geographic location of the practice (region size defined by the Northern Ireland Small Areas Code in the 2011 Census geography61). Out of a total of 45 practices initially identified, 15 could not be used because of data inconsistencies, leaving 30 potential control practices. The data inconsistencies were large gaps in the data in some months followed by spikes in the data recorded in later months as the backlog of data records was processed. This occurred because of a change in a computer system that processed dental data sent by practices to the Health and Social Care Board in Northern Ireland. The second stage of selecting control practices involved matching the 11 intervention practices to control practices using a propensity score (PS) approach, which identified 18 matched control practices.
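The following is a rough sketch of the second (matching) stage under stated assumptions: a PS is estimated from baseline practice characteristics using a probit model and each pilot practice is matched to its nearest-neighbour controls. It is illustrative only; the column names and the number of matches per pilot are not taken from the study.

```python
import pandas as pd
import statsmodels.api as sm

def match_controls(practices: pd.DataFrame, covariates: list[str], n_matches: int = 2) -> pd.DataFrame:
    """Return pilot practices plus the controls closest to them in propensity score."""
    X = sm.add_constant(practices[covariates])
    ps_model = sm.Probit(practices["intervention"], X).fit(disp=0)
    scored = practices.assign(pscore=ps_model.predict(X))

    pilots = scored[scored["intervention"] == 1]
    controls = scored[scored["intervention"] == 0]
    matched_ids: set = set()
    for _, pilot in pilots.iterrows():
        distance = (controls["pscore"] - pilot["pscore"]).abs()
        matched_ids.update(distance.nsmallest(n_matches).index)  # nearest controls per pilot
    return pd.concat([pilots, scored.loc[sorted(matched_ids)]])
```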

In a DiD model, the control group becomes a poor approximation of the unobservable counterfactual outcome used to estimate the causal effect of treatment when unobserved influences affect participation in the pilot, the outcomes of practices in the intervention and control groups, or both. For example, the standard DiD estimator will be biased in our study if, regardless of the stratified sampling method that was used to (initially) select the control group, the practices that were motivated to join the pilot (intervention practices) had a higher or lower probability of responding to remuneration incentives by altering their health-care provision than practices chosen at random, which would not have been particularly motivated to join the pilot.

A probit model was used to determine whether or not there were any practice characteristics that influenced the decision to participate in the pilot: 41 observations (baseline) from 30 control practices and 11 intervention practices were tested (Table 2). No statistically significant differences were found between the two groups.

TABLE 2. Probit model with intervention assignment (control or pilot practice) as its dependent variable

Regardless of these findings, selection bias cannot be ruled out in this study if there were unobserved characteristics that influenced the motivation of practices to participate in the pilot. We examined the plausibility of this possibility by considering whether or not there were fixed differences between the control and intervention practices in the number of different kinds of treatments delivered in the baseline year. The results are presented in Tables 3 and 4.

Fissure sealants was the only outcome measure for which the monthly number of treatments delivered during the baseline period differed statistically significantly between intervention and control practices. This suggests that unobservable characteristics that could influence treatment outcomes were likely to be absent or balanced between the two groups. Taken with the evidence of no statistical differences between the groups in the practice characteristics presented in Tables 2–4, it appears that the assignment of practices to the control group was as good as it could be without randomisation. However, the findings in Tables 2–4 are best understood as exploratory work and it is possible that statistically insignificant differences are explained by the small sample size of practices. Further detail about the selection process for the controls can be found in Appendix 2.

In the analysis, we undertook DiD analyses on the initial 30 control practices (henceforward referred to as ‘unmatched controls’) and the 18 matched control practices identified by propensity scoring (henceforward referred to as ‘matched controls’) and compared the difference in outcomes. To avoid unnecessary clutter in the results section, we present the results of the matched controls. The full analysis, which includes the unmatched controls, is in Appendix 3.

Multiple testing

There was a large number of outcome measures (n = 22); this initially included two additional variables measuring different types of fissure sealants, which are not included in this report. These outcome measures used data provided by the BSO, and each outcome measure was a dependent variable in three types of DiD model [i.e. practice level (main analysis), PPs and ADs] that were estimated with and without the use of a PS matching process. Consequently, a large number (n = 132) of DiD estimates were examined for statistical significance, and the more estimates that were tested, the more likely it was that one would appear statistically significant by chance. Specifically, for the 132 DiD models used, the chance of finding one or more spuriously significant DiD estimates was 1 – 0.95^132, or 99.89%. Šidák and Bonferroni corrections are two methods of counteracting this multiple-comparisons problem. For each model, the Šidák and Bonferroni adjustments were calculated (based on the correlation of the outcome measure with all other outcomes and the number of observations in that model) and reported in the results tables, to identify whether or not either adjustment lowered the critical value of 0.05 to below the p-value of the DiD estimate. If the adjusted critical values remained above the p-value of the estimate, the DiD estimate remained statistically significant: although the risk of incorrectly rejecting a null hypothesis (a type I error) increases when multiple hypotheses are tested, it did not increase to such an extent that the null hypothesis (that the DiD estimate is zero) could no longer reasonably be rejected.
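As a worked illustration of the figures quoted above (a simplified sketch that, unlike the study's per-model adjustments, ignores correlation between outcomes):

```python
m, alpha = 132, 0.05

p_any_false_positive = 1 - (1 - alpha) ** m   # ~0.9989, i.e. 99.89%
bonferroni_critical = alpha / m               # ~0.00038
sidak_critical = 1 - (1 - alpha) ** (1 / m)   # ~0.00039

print(f"P(at least one false positive): {p_any_false_positive:.4f}")
print(f"Bonferroni-adjusted critical value: {bonferroni_critical:.5f}")
print(f"Sidak-adjusted critical value: {sidak_critical:.5f}")
```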

Results of the difference-in-difference analyses

These results for each type of outcome measure (access, service mix and financial) are summarised in the next three sections. All analyses were evaluated using a 0.05 threshold level of statistical significance. A narrative summary of these findings is included below. All figures are expressed per month per 1000 registered patients to adjust for the different size of the practices. Analyses for PPs and ADs are presented in Appendix 4. Findings are presented after adjusting for multiple significance tests undertaken, as appropriate (full details are in Appendix 3).

Results: access outcomes

The difference between intervention and control practices in the number of registered patients significantly increased during the capitation period (p < 0.01) by 1.5 registrations per month per 1000 registered patients (Tables 5 and 6; Figure 2) when compared with baseline and decreased after the capitation period. This was caused by an increase in registrations in the intervention practices. No statistically significant difference was found between FFS at baseline and at reversion.

TABLE 5. Overview of the access outcome data

TABLE 6. Overview of the DiD coefficient on access outcomes

FIGURE 2. Mean number of registered patients.

There was no statistically significant difference in the number of patients who were ‘rolled over’ (i.e. when patients already registered with the practice have a new course of treatment and reregister) (see Tables 5 and 6; Figure 3).

FIGURE 3. Mean number of reregistrations.

The difference between intervention and control practices, in terms of the number of new patients (i.e. the difference in activity between intervention and control practices per month per 1000 registered patients), fell significantly (p < 0.01) between the reversion period and the capitation period, by 6.8 new registrations, and between the reversion period and baseline, by 5.7 new registrations (see Tables 5 and 6; Figure 4). These statistically significant changes were as a result of changes over time in the control group. There was a statistically significant decrease (p < 0.05) in newly registered patients per month in control practices in the reversion period compared with the capitation period, whereas the only statistically significant change in intervention practices was a decrease during the capitation period compared with FFS at baseline (p < 0.05). The non-significant (p > 0.05) difference between intervention and control practices in the number of new patients joining the practice list per month when capitation was compared with the FFS baseline period suggests that the pilot contract did not cause an immediate change in new registrations in intervention practices compared with control practices.

FIGURE 4. Number of new patients.

The difference between intervention and control practices in the number of lapsed patients was significant and was caused by a reduction of 27.1 registrations per month per 1000 registered patients. This was caused by a reduction in the number of lapsed patients in the intervention group. The difference increased further by 7.8 registrations per month per 1000 registered patients in the reversion FFS period (see Tables 5 and 6; Figure 5). This was because of a comparatively large reduction (p < 0.05) in returning patients, of 22.8 registrations per month per 1000 registered patients, in intervention practices in the capitation period, and there was an increase (p < 0.05) in returning patients in control practices of 2.61 registrations per month per 1000 registered patients. The number of returning patients in control practices increased (p < 0.05) after the capitation period by 7.19 registrations per month per 1000 registered patients, and the increase in intervention practices was much smaller, with 0.62 returning patients per month per 1000 registered patients (p < 0.05). There was a large change of 33.7 returning patients per month per 1000 registered patients (p < 0.05) in the difference between intervention and control practices in the number of registered patients on the practice list per month when the FFS reversion period was compared with the FFS baseline period (see Tables 5 and 6). This suggests that the changes that occurred in the capitation period in intervention practices compared with control practices did persist after the pilot had ended.

FIGURE 5. Proportion of lapsed patients who returned to the practice list.

There was no statistically significant change in the difference between intervention and control practices in the number of patients lost to the practice when the capitation period was compared with FFS at baseline, but there was when the capitation period was compared with the FFS reversion period (p = 0.01) and when FFS in the reversion period was compared with FFS during the baseline period (p = 0.02). For an average practice, the difference decreased in the reversion period by 13.4 patients lost to the practice per month per 1000 registered patients when compared with the capitation period, and by 15.6 patients lost to the practice per month per 1000 registered patients when compared with FFS at baseline (see Tables 5 and 6; Figure 6). This was as a result of a statistically significant decrease (p < 0.05) in patients lost to the practice in control practices during the reversion period compared with the capitation period, whereas there was a statistically significant increase in patients lost to the practice in intervention practices (p < 0.05). There was not a statistically significant change (p > 0.05) in the per-month number of patients lost to the practice in the control and intervention practices in the capitation period compared with the baseline period.

FIGURE 6. Number of patients lost to the practice.

Results: treatment outcomes

The difference between intervention and control practices in the mean number of treatments with a gross cost of ≥ £280 per month per 1000 registrations did not significantly change (p > 0.05) between study periods (Tables 7 and 8; Figure 7). There was also no significant difference (p > 0.05) in the number of treatments with a gross cost of ≥ £280 per month per 1000 registrations delivered by PPs in intervention and control practices and no significant difference (p > 0.05) for ADs (see Appendix 4).

TABLE 7. Overview of the treatment outcome data 1

TABLE 8. Overview of the DiD coefficient on treatment outcomes 1

FIGURE 7. Monthly number of treatments with a gross cost of ≥ £280.

The difference between intervention and control practices in the mean number of direct restoration treatments per month per 1000 registrations changed between the study periods. The difference significantly increased (p < 0.05), with 49.0 fewer direct restoration treatments per month per 1000 registrations. This was caused by a reduction in activity in the intervention practices in the capitation period compared with baseline FFS (see Tables 7 and 8). This difference increased (p < 0.05) by 62.0 direct restoration treatments per month per 1000 registrations in the reversion period compared with capitation (see Tables 7 and 8; Figure 8). There was no evidence of a long-term effect from the pilot because the difference between intervention and control practices was not significant (p > 0.05) when the FFS reversion period was compared with FFS at baseline.

FIGURE 8. Number of direct restoration treatments.

The difference between intervention and control practices in the number of indirect restorations per month per 1000 registrations changed significantly between study periods (see Tables 7 and 8; Figure 9). The difference increased (p < 0.05) and was caused by 3.7 fewer indirect restorations per month per 1000 registrations in the intervention practices in the capitation period compared with baseline FFS; it decreased (p < 0.05) with 2.3 fewer indirect restorations per month per 1000 registrations in the reversion period compared with the capitation period (relative to the control practices). There was no evidence of a long-term effect from the pilot because the difference was not significant (p > 0.05) between intervention and control practices in the FFS reversion period compared with FFS at baseline.

FIGURE 9. Number of indirect restoration treatments.

A similar picture was seen for ADs, with the difference increasing (p < 0.05) with 2.8 fewer indirect restorations per month per 1000 registrations being undertaken in the intervention practices in the capitation period compared with baseline FFS, and decreasing (p < 0.05) with 2.8 fewer indirect restorations per month per 1000 registrations in the reversion period compared with the capitation period (see Appendix 4). This was not the case for PPs. There was a statistically significant increase (p < 0.05) of 1.6 in the difference in indirect restorations per month per 1000 registrations delivered by the mean number of PPs in intervention and control practices in the FFS reversion period compared with FFS at baseline.

The DiD analyses identified a significant difference between intervention and control practices in the number of extractions per month per 1000 registrations between study periods. The difference increased (p < 0.05) and was caused by 6.8 fewer extractions per month per 1000 registrations being undertaken. This was caused by a reduction of activity in the intervention practices in the capitation period compared with baseline FFS (see Tables 7 and 8; Figure 10). The difference decreased (p < 0.05) with 5.6 more extractions in the reversion period compared with capitation (see Table 8). Any change in treatment provision did not persist after capitation ceased and there was no significant difference (p > 0.05) between intervention and control practices in the number of extractions per month per 1000 registrations in the FFS reversion period compared with FFS at baseline.

FIGURE 10. Plot of the monthly number of extraction treatments.

A similar picture was seen in extractions, where the difference compared with baseline FFS increased (p < 0.05) and was caused by five fewer extractions for ADs and increased (p < 0.05) by 3.12 for PPs. During the FFS reversion period, compared with capitation, the situation reversed with the difference decreasing (p < 0.05) by 6.3 for ADs but did not significantly change (p > 0.05) for PPs. There seemed to be a persistent change in practice for PPs with a statistically significant increase (p < 0.05) of 2.42 in the number of extractions per month per 1000 registrations delivered in the FFS reversion period compared with FFS at baseline, which was not found for ADs.

In the capitation period compared with baseline FFS, the difference between intervention and control practices in the number of radiographs significantly increased (p < 0.05), caused by a reduction of 40.9 radiographs per month per 1000 registrations. This was caused by a reduction of activity in the intervention practices in the capitation period compared with baseline FFS (see Tables 7 and 8; Figure 11). This decreased (p < 0.05) by 34.1 in the reversion period compared with capitation (see Tables 7 and 8). There was no evidence of a long-term change in provision as the difference was not significant (p > 0.05) between intervention and control practices in the number of radiographs taken in the FFS reversion period compared with FFS at baseline.

FIGURE 11. Number of radiograph treatments.

For ADs, the difference in radiographs taken in the capitation period compared with baseline FFS increased (p < 0.05) by 38.0; it also increased significantly (p < 0.05) by 16.3 for PPs (see Appendix 4). When comparing the FFS reversion period with capitation, the difference decreased significantly (p < 0.05) by 38.1 for ADs but did not significantly change (p > 0.05) for PPs.

The difference between the intervention and control group in fissure sealants provided increased significantly (p < 0.05) by 9.3 fissure sealant treatments per month per 1000 registrations in the capitation period compared with baseline FFS and decreased (p < 0.05) by 10.4 in the reversion period compared with the capitation period (Table 10). This was caused by a reduction of activity in the intervention practices in the capitation period compared with baseline FFS (Tables 9 and 10; Figure 12). There was no significant difference (p > 0.05) between intervention and control practices in the number of fissure sealant treatments provided per month in the FFS reversion period compared with FFS at baseline.

TABLE 10. Overview of the DiD coefficient on treatment outcomes 2

TABLE 9. Overview of the treatment outcome data 2

FIGURE 12. Number of fissure sealant treatments.

Similar changes were seen for both ADs and PPs (see Appendix 4). The difference in direct restoration treatments in the capitation period compared with baseline FFS increased (p < 0.05), with 56.2 fewer restorations for ADs and 14.3 fewer restorations for PPs. In the FFS reversion period compared with the capitation period, the difference decreased (p < 0.05) with 62.1 fewer restorations for ADs and with 16.2 fewer restorations for PPs.

For ADs, the difference in fissure sealants provided during the capitation period compared with baseline FFS increased (p < 0.05) by 11.3 but this did not significantly change for PPs (see Appendix 4). The difference in fissure sealant treatments provided per month in the FFS reversion period compared with the capitation period decreased (p < 0.05) by 11.1 for ADs but did not significantly change (p > 0.05) for PPs.

The difference between intervention and control practices in the number of two or more visits periodontal treatments provided increased (p < 0.05) by 3.8 during the capitation period compared with baseline FFS and decreased (p < 0.05) by 3.5 in the reversion period compared with the capitation period (see Tables 9 and 10; Figure 13). This was caused by a reduction of activity in the intervention practices in the capitation period compared with baseline FFS (see Tables 9 and 10). There was no evidence of a long-term effect in provision as there was no significant difference (p > 0.05) between intervention and control practices when comparing the FFS reversion period to FFS at baseline.

FIGURE 13. Number of two or more periodontal treatments.

There was a difference in the pattern of care between ADs and PPs. For ADs, the difference during the capitation period compared with baseline FFS increased significantly (p < 0.05) by 3.8 but it did not significantly change for PPs (see Appendix 4). During the FFS reversion period, compared with the capitation period, the difference between intervention and control practices significantly decreased (p < 0.05) by 4.1 for ADs but did not significantly change (p > 0.05) for PPs.

The difference in provision of root canal treatments increased (p < 0.05) by 2.7 root canal treatments per month per 1000 registrations during the capitation period compared with baseline FFS and decreased (p < 0.05) by 2.4 in the reversion period compared with the capitation period (see Tables 9 and 10; Figure 14). This was caused by a reduction in activity in the intervention practices in the capitation period compared with baseline FFS (see Table 10). There was no evidence of a long-term effect in provision as a result of the pilot, as the difference was not significant (p > 0.05) between intervention and control practices in the number of root canal treatments provided per month in the FFS reversion period compared with FFS at baseline.

FIGURE 14. Number of root canal treatments.

Significant increases in the difference in provision of root canal treatments between intervention and control practices between FFS baseline and the capitation period were observed [2.4 for ADs and 0.9 (p = 0.05) for PPs (see Appendix 4)]. When comparing the FFS reversion period with the capitation period, provision of root canal treatments decreased (p < 0.05) by 2.4 for ADs but did not significantly change for PPs.

The difference between intervention and control practices in the number of treatment plans increased significantly (p < 0.05) by 34.3 treatment plans per month per 1000 registrations in the capitation period compared with baseline FFS, but decreased significantly (p < 0.05) by 28.7 in the reversion period compared with the capitation period (see Tables 9 and 10). This was caused by a reduction of activity in the intervention practices in the capitation period compared with baseline FFS (see Tables 9 and 10; Figure 15). There was no evidence of a long-term effect as a result of the pilot because the difference was not significant (p > 0.05) between intervention and control practices in the number of treatment plans provided per month in the FFS reversion period compared with the FFS baseline period.

FIGURE 15. Number of treatment plans.

The difference in treatment plans provided per month in the capitation period compared with baseline FFS increased (p < 0.05) by 39.1 for ADs but did not significantly change (p > 0.05) for PPs (see Appendix 4). Likewise, there was a significant (p < 0.05) difference in the number of treatment plans provided per month by ADs in the FFS reversion period compared with the capitation period, with a decrease of 32.3 treatment plans, but no statistically significant change for PPs.

The difference in number of treatment items provided increased significantly (p < 0.05) by 174.8 treatment items during the capitation period compared with baseline FFS and significantly decreased (p < 0.05) by 173.9 in the reversion period compared with the capitation period (see Tables 9 and 10; Figure 16). This was caused by a reduction in activity in the intervention practices in the capitation period compared with baseline FFS (see Table 10). There was no evidence of a long-term effect on the number of treatment items provided from the pilot because there was no significant difference between intervention and control practices in the FFS reversion period compared with FFS at baseline.

FIGURE 16. Number of treatment items.

For ADs, the difference between those working in intervention and control practices in terms of the number of treatment items provided in the capitation period compared with baseline FFS increased (p < 0.05) by 174.5; it also increased (p < 0.05) by 70.4 for PPs (see Appendix 4). Corresponding significant decreases in the difference were seen for both ADs (179.0) and PPs (47.0) in the FFS reversion period compared with the capitation period.

Results: financial outcomes

The difference between intervention and control practices in the mean percentage of patient fee contribution to NHS dental practice income per month did not significantly change (p > 0.05) between study periods (Tables 11 and 12; Figure 17).

TABLE 11. Overview of the financial outcome data

TABLE 12. Overview of the DiD coefficient on financial outcomes

FIGURE 17. Mean percentage of patient fee contribution.

The difference between intervention and control practices in NHS dental practice income per month changed between study periods. The difference in NHS dental practice income per month increased significantly (p < 0.05), by £5920, in the capitation period compared with baseline FFS, and decreased significantly (p < 0.05), by £5248, in the reversion period compared with capitation. This was caused by a reduction of activity in the intervention practices in the capitation period compared with baseline FFS (see Tables 11 and 12; Figure 18). There was no evidence of a long-term effect from the pilot because the difference in NHS dental practice income per month between intervention and control practices was not significantly different (p > 0.05) between the FFS reversion period and the FFS at baseline.

FIGURE 18. Mean NHS dental practice income.

The difference between intervention and control practices in patient fee contributions per month changed between study periods. The difference in patient fee contributions per month increased significantly (p < 0.05), by £2403, in the capitation period compared with baseline FFS, and decreased significantly (p < 0.05), by £2028, in the reversion period compared with capitation. This was caused by a reduction in activity in the intervention practices in the capitation period compared with baseline FFS (see Tables 11 and 12; Figure 19). There was no evidence of a long-term effect from the pilot because the difference between intervention and control practices in patient fee contributions per month was not significantly different (p > 0.05) between the FFS reversion period and the FFS at baseline.

FIGURE 19. Mean patient fee contributions.

For all of the analyses presented above, no difference was seen between the analyses conducted with the matched practices (identified using PSs) and those conducted with the unmatched practices (see Appendix 3).

Triangulation of difference-in-difference analyses using an interrupted time series analysis

Although the DiD approach is widely used to assess the effect of an intervention in observational studies, the associated statistical inference used to obtain and test the resulting effect estimate relies on quite strong assumptions that are not always applicable in practice. Moulton62 shows that where observations are correlated within groups SEs may be seriously underestimated, resulting in confidence intervals (CIs) for the DiD effect that are too narrow, and an inflated risk of type I error. In other words, the precision of the effect estimate is often overestimated, leading to an increased probability of falsely concluding that an intervention effect is present. This problem is exacerbated when the number of groups considered is small, as in the usual DiD case of four groups (two groups compared at each of two time points), and can persist even where ‘robust’ SE estimation is used, as was the case in this study.63

One reason for this downwards bias of SE estimates is that the DiD analysis does not account for the possibility that observations within a group may be correlated for reasons other than the presence or absence of the intervention. In the present study, observations within a practice are likely to be correlated; we would expect observations from the same practice to be more similar than observations from different practices on average, meaning that some practices yield observations that are consistently above the group average whereas others are consistently below it. Although matching was used to balance the control and intervention groups in terms of practice size, rurality and deprivation, remaining imbalances in these and other practice-level factors may be expected to contribute to between-practice variation to some degree. We might also expect some correlation over time; observations that are close together are likely to be more similar than observations more distant in time. Although robust SE estimation was used in this study to adjust for within-group correlation, Donald and Lang63 find that the adjustment procedure may itself be unreliable in cases with few groups, as in the usual DiD approach, leading to a downwards bias in SE estimates and increased risk of type I error.62
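The following toy simulation (illustrative only; it uses no study data) shows the mechanism described above: when treatment is assigned at the group level and outcomes are correlated within groups, naive OLS SEs reject a true null far more often than the nominal 5%.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_groups, n_per_group, n_sims = 4, 60, 500        # few groups, as in a typical DiD setting
false_positives = 0
for _ in range(n_sims):
    group_effect = rng.normal(0, 1, n_groups)              # shared within-group shock
    treat = np.repeat([0, 0, 1, 1], n_per_group)            # treatment assigned by group
    y = np.repeat(group_effect, n_per_group) + rng.normal(0, 1, n_groups * n_per_group)
    X = sm.add_constant(treat)
    p_value = sm.OLS(y, X).fit().pvalues[1]                 # naive (unclustered) SE
    false_positives += p_value < 0.05                       # the true treatment effect is zero
print(f"Type I error rate with naive SEs: {false_positives / n_sims:.2f}")  # well above 0.05
```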

In order to investigate the sensitivity of the DiD analysis to within-group correlation, a secondary analysis was performed on four outcomes (from Appendix 3) using an ITS approach.64,65 This allows for the hierarchical structure of the data (repeated monthly outcome measures nested within practices) by fitting a linear model for each outcome over time, grouped by practice. A random intercept term was included to allow for correlation between repeated measures within each practice. This means that the observations for each practice are assumed to vary about the practice mean, which may be higher or lower than the overall study mean. An autoregressive (AR3) structure was also used to allow for residual correlation between closer measurements compared with those more distant in time. Discontinuities (i.e. ‘jumps’) are allowed at each transition point (i.e. 12 months, indicating the change from FFS to capitation, and again at 24 months for the reversion from capitation to FFS) to represent any sudden change in outcome, and a different slope is permitted for each phase to represent any differences in trend between phases. Furthermore, different jumps at a transition and different slopes in the same phase are allowed for the intervention and control groups. The size of these differences may each be interpreted as a component of the intervention effect, and the simultaneous test that all four coefficients are zero (difference between groups in the intercept at the month 12 transition from FFS to capitation; the slope from months 12 to 24, the capitation period; the intercept at the month 24 reversion from capitation to FFS; and the slope from months 24 to 36, FFS) was used as the test of the null hypothesis of ‘no intervention effect’. However, a broader qualitative interpretation of results may be obtained by examining the effect estimates and CIs, and by comparing the resulting fitted plots for each group. Robust SE estimates were used to compensate for the non-normal distribution of model residuals, due in part to positively skewed outcomes. This was felt to be preferable to transforming outcomes to reduce skew, which would have made interpretation and comparison of results with those found in the DiD approach more difficult.
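A simplified sketch of this segmented model is given below. It assumes a long-format data set with hypothetical columns outcome, month (1–36), group (1 = intervention, 0 = control) and practice_id, and includes the practice-level random intercept but not the AR(3) residual structure, which statsmodels' MixedLM does not fit directly.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_its(df: pd.DataFrame):
    d = df.copy()
    d["jump1"] = (d["month"] > 12).astype(int)       # level change at FFS -> capitation
    d["jump2"] = (d["month"] > 24).astype(int)       # level change at capitation -> FFS
    d["slope1"] = np.maximum(d["month"] - 12, 0)     # extra trend during capitation
    d["slope2"] = np.maximum(d["month"] - 24, 0)     # extra trend after reversion
    formula = "outcome ~ month + group * (jump1 + slope1 + jump2 + slope2)"
    model = smf.mixedlm(formula, d, groups=d["practice_id"])  # random intercept per practice
    result = model.fit()
    # group:jump1, group:slope1, group:jump2 and group:slope2 are the four components
    # of the intervention effect that the text describes testing jointly against zero.
    return result
```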

The four outcomes chosen for the secondary analysis were number of treatments costing ≥ £280 per 1000 registrations, number of direct restorations per 1000 registrations, number of fissure sealants per 1000 registrations and number of items per course of treatment. These were determined a priori to represent a range of interventional and preventative clinical activity and financial domains (Table 13).

TABLE 13. The ITS analyses of the four selected activity domains included in the DiD

Plots are presented for each of the four outcomes considered, showing the observations for the control and intervention practices over time with the fitted model superimposed (Figures 20 and 21). Each fitted line represents one practice; these are parallel within each group (all practices are assumed to follow the same model) but the shape may vary between groups. The spread of the fitted lines within each group indicates the extent of between-practice variation (mean fitted model for each group is shown in red).

FIGURE 20. The ITS analysis of treatment costs of ≥ £280. (a) Control practices; and (b) intervention practices.

FIGURE 21. The ITS analysis of direct restorations. (a) Control practices; and (b) intervention practices.

For treatments costing ≥ £280, the fitted model is very similar for both groups, as can be seen in Figure 22. The 95% CI for each jump and slope effect contains zero, and from the simultaneous test that all four effects are equal to zero we find that there is insufficient evidence to conclude that this outcome differs according to group. This conclusion is consistent with the DiD analysis.

FIGURE 22. The ITS analysis of fissure sealants. (a) Control practices; and (b) intervention practices.

For direct restorations (Figure 23), the fitted models look somewhat different for the control and intervention groups. There is a drop immediately following the introduction of capitation for the intervention group compared with the control group. The estimated difference in jumps here is –25.6 (95% CI –55.41 to 4.29), suggesting that the change in the number of direct restorations performed by practices in the intervention group at the start of capitation was around 26 per 1000 registrations lower on average than the corresponding change in the control group. There was also a positive jump for both groups at the end of year 2, although the increase appears larger in the intervention group. The estimated difference in jumps here is 30.77 (95% CI 0.34 to 61.20), suggesting that the number of direct restorations performed by practices in the intervention group increased by around 31 per 1000 registrations more on average than for the control group when FFS was reintroduced. From the simultaneous test that all four effects are equal to zero, we conclude that there is strong evidence of an intervention effect on direct restorations.

FIGURE 23. The ITS analysis of items per plan. (a) Control practices; and (b) intervention practices.

These results are consistent with those found in the DiD analysis in that the same conclusion is reached regarding evidence of an effect, and in the sense that the estimated DiD effects are in the same direction as those found in the ITS analyses. At the start of year 2, the DiD estimate is –51.00 per 1000 registrations, which lies within the CI found here; the DiD estimate for the end of year 2 is 61.79 per 1000 registrations, lying just beyond the upper limit of the CI found here. The larger estimated effect size at both time points found by the DiD analysis is due in part to the issue described above: that with no recognition of within-practice correlation, all variation in outcomes is attributed to group-level factors, including the DiD effect. In the ITS model, by comparison, over half of the total variation is attributed to between-practice variation via the use of the random intercept term, leading to more modest effect size estimates. However, the direction of effect was identical.

Again, for fissure sealants, differences in the fitted models for control and intervention groups are apparent from examining Figure 24 in Appendix 1. There appears to be a slight increase at the start of year 2 for control practices, compared with a drop for intervention practices. The difference is estimated to be –7.82 (95% CI –17.55 to 1.90), suggesting that the change in the number of fissure sealants provided by practices in the intervention group at the start of year 2 was around 8 per 1000 registrations less, on average, than the corresponding change for the control group. A similar-sized, but opposite, effect was found at the end of year 2: a more marked increase in the intervention group compared with the control group. The difference is estimated to be 7.44 (95% CI –1.41 to 16.29), suggesting that the change in the number of fissure sealants provided by practices in the intervention group at the end of year 2 was around 7 per 1000 registrations more on average than the corresponding change for the control group. From the simultaneous test that all four effects are equal to zero, we conclude that there is evidence of an intervention effect on fissure sealants; the test interpreted in isolation would find a statistically significant intervention effect at the 5% level, but cautious interpretation is warranted owing to the multiple testing of the clinical outcomes.

Again, results are consistent with the DiD analysis in finding a negative effect at the start of year 2 (–7.82 compared with the DiD estimate of –9.96) and a positive effect at the transition from year 2 to year 3 (7.44 compared with the DiD estimate of 10.97), suggesting that intervention practices reduced the number of fissure sealant treatments performed during year 2 compared with control practices, and then increased this service at the start of year 3. Again, and for the same reason, effect estimates are somewhat larger for the DiD analysis although this is less marked than for direct restorations. The same conclusion is reached regarding the test for intervention effect; some evidence of an effect is found if the test is interpreted in isolation but after adjustment for multiple testing we cannot reject the null hypothesis of no effect.

For the final outcome considered here (i.e. the number of items per course of treatment), there again appears to be some difference between the fitted models for each group in Figure 25 in Appendix 1. Again, there seems to be a drop at the start of year 2 for the intervention group compared with the control group, followed by a corresponding increase at the end of year 2/start of year 3. However, neither of these jump effects was found to be significantly different from zero by the ITS analysis. Another apparent difference evident in the plots is in the trend for year 3; a slight downwards slope is seen for the control group whereas the gradient is positive over the same period for the intervention group. The estimated difference in slopes is 0.04 (95% CI 0.02 to 0.06), suggesting that the number of items per course of treatment in the intervention group was increasing at a rate of around 0.04 per month more than the corresponding (negative) rate for the control group. In substantive terms, we may interpret this as showing that although any sudden increase at the end of year 2 was slight for the intervention group, a gradual increase was maintained over year 3. From the simultaneous test that all four effects are equal to zero, we conclude that there is strong evidence of an intervention effect on number of items per course of treatment.

Again, results are consistent with the DiD analysis in finding a negative effect at the start of year 2 (–0.14 compared with a DiD estimate of –0.37) and a positive effect at the end of year 2 (0.19 compared with a DiD estimate of 0.46), suggesting that intervention practices reduced the number of treatment items per course of treatment during year 2 compared with control practices, and then increased this service at the start of year 3. There is also evidence from the ITS analysis that the number continued to increase more quickly during year 3 in the intervention group than in the control group. Again, and for the same reason, effect estimates are somewhat larger for the DiD analysis. Strong evidence of an intervention effect is found by both analyses.

Overall comparison of difference-in-difference and interrupted time series results

A direct comparison of the two analyses is not straightforward because the two approaches differ in several key ways:

  • The ITS model is fitted simultaneously to the data for the entire study duration. This approach includes the capitation–initial FFS and reversion FFS–capitation comparisons considered by the DiD analysis, but does not directly compare the FFS reversion and initial phases.
  • For each outcome, the ITS model assumes a linear relationship over time, with different slopes allowed for the intervention and control groups in each phase of the study; the DiD analysis does not assume a linear relationship (or any particular functional form), but assumes that the trends for the two groups are parallel in each phase.
  • The ITS model uses a random intercept term to allow for each practice’s observations to vary around its own mean, accounting for between-practice variation and within-practice correlation, assumed to be zero by the DiD analysis.
  • The ITS model allows for autocorrelation of observations (that observations closer in time may be more similar than those more distant in time), also assumed to be zero by the DiD analysis.
  • The ITS analysis uses the simultaneous test of the null hypothesis that both jump and both slope coefficients are zero for the study duration as a test of ‘no intervention effect’, whereas the DiD analysis tests separately whether or not the estimated DiD effects are equal to zero at the pairwise comparison of interest.

However, a broad comparison may be made by comparing the ‘jump’ effects estimated at the year 1–2 and year 2–3 transition points (i.e. at 12 and 24 months) with the DiD effect estimates for the corresponding periods. These were found to agree in direction, although the ITS estimates were smaller in magnitude and less precisely estimated. All ITS CIs included zero, except for direct restorations at the second transition. This is likely to be a result of the more realistic treatment of within-practice correlation using random intercepts and autocorrelation, whereas the DiD analysis attributes all variation to group-level factors, as described above.
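
To make this comparison concrete, the sketch below shows how a simple two-period DiD estimate at a single transition reduces to the difference of the group-mean changes. The data frame and column names ('group', 'phase', 'outcome') are hypothetical and the code is illustrative only; it is not a reproduction of the full DiD specification used in this report.

    # Illustrative only: a two-period, two-group DiD estimate around one
    # transition, assuming a long-format pandas DataFrame `df` with
    # hypothetical columns 'group' ('intervention'/'control'),
    # 'phase' ('year1'/'year2') and 'outcome'.
    import pandas as pd

    def did_estimate(df: pd.DataFrame) -> float:
        means = df.groupby(["group", "phase"])["outcome"].mean()
        change_intervention = means.loc[("intervention", "year2")] - means.loc[("intervention", "year1")]
        change_control = means.loc[("control", "year2")] - means.loc[("control", "year1")]
        # The DiD estimate is the change in the intervention group net of the
        # change in the control group over the same transition.
        return change_intervention - change_control

The corresponding ITS ‘jump’ at the same transition is instead read off as a coefficient in the segmented model sketched below, which is one reason why the two sets of estimates need not coincide exactly.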

The between-practice variation, as a proportion of the total variation, was found to be between 45% (fissure sealants) and 69% (items per treatment plan), indicating substantial between-practice variation and supporting the use of random effects. The autoregressive (AR3) correlation structure was selected as the best fit for the data from several possibilities, including independent errors and autoregressive and moving average structures for lags of between 1 and 5 months, with fit assessed using the Akaike and Bayesian information criteria.
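
As an illustration of the model structure described above, a minimal random-intercept segmented regression might be specified as follows. This is a sketch under assumed column names (practice_id, month, group, phase2, phase3, outcome), not the exact specification fitted in this study; in particular, within-practice residuals are treated as independent here rather than AR3, because a correlated residual structure is not supported by statsmodels’ MixedLM.

    # A minimal sketch of a segmented (ITS) mixed model: phase indicators give
    # the 'jump' effects and phase-by-month terms give the slope changes, each
    # interacted with group so that intervention and control practices may differ.
    # All column names are hypothetical.
    import statsmodels.formula.api as smf

    def fit_its_sketch(df):
        formula = (
            "outcome ~ month + group + month:group "   # baseline level and trend by group
            "+ phase2 + phase2:group "                 # jump at the year 1-2 transition
            "+ phase3 + phase3:group "                 # jump at the year 2-3 transition
            "+ phase2:month + phase2:month:group "     # slope change in year 2
            "+ phase3:month + phase3:month:group"      # slope change in year 3
        )
        # In practice, `month` is often re-centred at each transition so that the
        # phase coefficients equal the discontinuity at that point.
        # The random intercept per practice captures between-practice variation.
        model = smf.mixedlm(formula, data=df, groups=df["practice_id"])
        return model.fit(reml=True)

In this parameterisation, the coefficients on phase2:group and phase3:group capture the between-group differences in the jumps at the two transitions, and the phase:month:group terms correspond to the between-group differences in slope within each phase.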

Although the ITS model provides a better fit to the data than the simple linear regression model that underlies the DiD analysis, the statistical power of the ITS analysis to conclude that the estimated jump and slope effects are significantly different from zero is low in this study, which was not designed with this analysis in mind. Using Snijders and Bosker’s methods,66 the power of the ITS analysis to detect a jump effect of the size estimated at the year 1–2 transition point was estimated to be 61% for direct restorations, 38% for fissure sealants and 18% for items per plan, for a sample of similar size and covariance structure to that used here, compared with the 80% power often used as a target when planning a study. This implies a high risk in each case that the 95% CI for a true effect of the size estimated would contain zero. The estimated number of practices required to exclude zero from the resulting CIs ranges from 95 for direct restorations to 150 for items per plan, assuming the same ratio of control to intervention practices, 3 years of monthly observations and a similar distribution of outcomes to that seen in this sample. However, more detailed calculations would be needed to plan the sample size required to estimate, using the ITS approach, the full range of effects considered by the DiD analysis.
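
As a rough illustration of how such power figures arise (not a reimplementation of the Snijders and Bosker calculation cited above), a normal-approximation power for a single jump effect can be computed from the effect size and its standard error; the numbers below are placeholders only.

    # Approximate two-sided power for a single coefficient, using the normal
    # approximation: power ≈ Φ(|effect|/SE − z_{1−α/2}). Placeholder values only;
    # this is not the multilevel calculation used in the report.
    from scipy.stats import norm

    def approx_power(effect, se, alpha=0.05):
        z_crit = norm.ppf(1 - alpha / 2)
        return norm.cdf(abs(effect) / se - z_crit)

    print(round(approx_power(effect=1.0, se=0.4), 2))  # placeholder effect and SE

A larger sample of practices reduces the standard error of the estimated jump, which is what drives the estimated requirement of between 95 and 150 practices quoted above.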

Discussion

The study provides helpful information about the direction of effect seen when a capitation-based contract is introduced. It is likely that a permanent change to capitation would lead to immediate changes of a similar direction and magnitude to those found in the pilot, but that behaviour in terms of access and activity would find an equilibrium somewhere between the FFS and capitation levels recorded in the pilot.

The pilot contract did not appear to cause any reduction in access to care, although it is unclear whether or not this was driven by GDPs’ concern that they would fall below the contract threshold. It also did not dramatically increase access: the mean number of new patients per month per 1000 registered patients for control practices was not statistically different between the baseline and capitation periods. A possible explanation was a lack of incentives for practices to increase their list size, perhaps because an average control practice had been operating long enough to grow to an optimal (i.e. profit-maximising) dental practice size, although this does not explain the increase found in the reversion period, which suggests that there was spare capacity to take on new patients. The results also suggest that any drive to expand the practice register (or front-load treatments) prior to the capitation period in intervention practices could have been achieved by finding entirely new patients to treat in the baseline period (this was also suggested by the findings for the number of lapsed patients returning to the practice list).

The large drop in returning patients in the intervention practices in the capitation phase could be explained by those practices prioritising the recruitment of lapsed patients during the baseline period. This is consistent with the finding of an increase in the overall number of registrations in intervention practices in advance of the change to the capitation period, and with the remuneration incentives in the pilot contract. The findings suggest that, during the baseline period, intervention practices may have been re-registering patients who had not had treatment at the practice for 2 years (and whose registrations had consequently lapsed), thereby freeing up GDP time in the capitation period and avoiding falling below the tolerance level for registrations (of 5%). This ‘freeing up time’ behaviour is also suggested by the findings for all treatment outcomes, as there was a decrease in activity delivered in intervention practices during the capitation study phase.

The analyses showed a statistically significant reduction in clinical activity, including prevention. The only exception was the mean number of treatments with a gross cost of ≥ £280. This suggests an overall reduction in activity across the pilot period with no differential selection (i.e. GDPs working in the pilot practices did less ‘across the board’ and did not favour ‘cherry picking’ the provision of more profitable treatments as a result of the change in payment). The lack of statistical significance between baseline FFS and reversion FFS suggests that GDPs quickly returned to baseline levels of activity (i.e. that the opportunity to practise differently was not sustained). This suggests that financial incentives remain one of the more potent factors behind behaviour change (i.e. the original productivity incentives under an FFS system influence behaviour so rapidly that activity quickly returns to baseline levels). The further analysis in Appendix 4 appears to show a difference in activity levels between ADs and PPs, with the former being more sensitive to changes in remuneration, perhaps because they are more reliant on NHS income than PPs, who have higher private practice incomes. The PPs, as practice owners, also receive approximately 20% of their NHS remuneration in the form of allowances, insulating them further from the predominantly FFS system of remuneration experienced by associates.

Although there was no change in the mean proportion of NHS dental practice income per month accounted for by patient fee contributions, there was a statistically significant reduction in the mean NHS dental practice income per month (£5920) and in the mean patient charge contributions per month (£2403). This would suggest that the relative mix of fee-paying patients did not change, but that there was a reduction in overall income and patient fees commensurate with the reduction in clinical activity.

What is not known is how practices’ private income changed during the capitation period. There were no statistically significant changes in the financial outcomes when comparing the periods immediately before (baseline period) and after (reversion period) the pilot, relative to the control group. This suggests that GDPs returned to their previous level of activity once capitation had ended, which is unsurprising given the rapid return to baseline levels of clinical activity. As highlighted above, these results emphasise how strongly both activity and co-payment income depend on the remuneration system employed.

The fact that no difference was seen in the results between ‘matched’ practices (with PSs) and ‘unmatched’ practices (see Appendix 3) suggests that the findings were not influenced by the choice of matching process. The use of ITS on selected activity variables was important in order to triangulate the results of the DiD as far as possible. This approach accounted for variation across practices (random intercept) and for the fact that two observations made over a short time interval are often similar (autocorrelation structure), attributing the remaining within-practice variation to the presence or absence of the intervention. The analysis revealed identical directions of effect to the DiD, although the 95% CIs were wider in all cases. The results from the power analysis of the ITS highlight some of the limitations of this approach for this sample and potentially justify the use of the DiD for policy-related research, in which the number of observations can often be limited owing to financial or pragmatic structural constraints. Ultimately, this is a policy-driven piece of research and the DiD is the most appropriate model to use, given the limitation on the number of practices that participated in the pilot. However, the ITS analysis provides evidence that a larger sample size is needed to draw a robust conclusion, because it is possible that the difference between its findings and those of the DiD analysis is caused by the DiD-generated CIs being artificially narrow. This is not a limitation unique to this study setting. The assumptions underlying a DiD approach should always be carefully scrutinised, as the smaller SEs can lead to erroneous conclusions if care is not exercised.64,65

Concluding remarks

Overall, the move to a capitation-based payment system appears to suppress clinical activity, including prevention. Equally, GDPs returning to an FFS remuneration system appear to return to the levels of activity seen in the baseline period. It is likely that a permanent change to capitation would lead to immediate changes similar to those found in the pilot, but that behaviour in terms of access and activity would find an equilibrium somewhere between the FFS and capitation levels recorded in the pilot. It is not clear whether or not capitation improved access to services, given that GDPs in the pilot may have been wary of dropping below the 5% threshold stipulated in the pilot contract. If increasing access (to those with greatest need) is a policy goal for future contract reform, consideration needs to be given to how best to incentivise opening lists to new patients, rather than setting capitation payment thresholds that can easily be achieved by renewing lapsed registrations. Although the proportion of co-payments as part of overall costs remained the same, the reduction in activity produced a significant fall in PCR. This would be an important consideration for any policy roll-out.
