Appendix D. Methods and Results of Quantitative Analyses


Rationale for Our Quantitative Approach

The aim of the quantitative approach was to construct a simple mathematical model, perform three types of analyses (decision analysis, cost-effectiveness analysis, and value of information analysis), and quantify the most influential parameters in the model based on those analyses. For the decision and cost-effectiveness analyses we prioritized parameters based on one-way sensitivity analyses. For the value of information analyses we prioritized groups of parameters by calculating the expected value of perfect information for those parameters.

Regarding the decision and cost-effectiveness analyses, we note that we do not use modeling to identify the optimal treatment choice, e.g., the choice that maximizes quality-adjusted life years in the decision analysis or the choice that optimizes the balance of costs and effectiveness in the cost-effectiveness analysis. We use the models as tools to identify, albeit indirectly, where future research is needed most: presumably, we would recommend further research to inform the parameters that are most uncertain in the one-way sensitivity analyses of the decision and cost-effectiveness models.

This is an atypical use of modeling, and it stops short of providing what would normally be considered the most important piece of information, namely insights on treatment choices under different assumptions and circumstances. By their very nature, our models are not developed with the same rigor as some of the well-known and elaborate models used to analyze clinical decisions; they have not been calibrated using external data, and their predictions have not been validated in external data. They are operational models constructed to make use of the summary information obtained from an evidence report, and they can offer only broad insights. However, we believe that even though they have not been vetted enough to analyze treatment choices, they can serve the purpose for which they were developed: to rank the model parameters, which are obtained from the current state of the science (the evidence report), according to the uncertainty they contribute to the quantity of interest.

Our third type of analysis, the value of information analysis, is a theoretically motivated way to perform such “prioritizing” of future research. Again, we use the value of information analyses to rank groups of parameters based on their expected value of perfect information; we do not focus on the actual analysis results. As above, we believe that although ranking discards much of the information the analyses produce, we gain results that are less likely to change if better models are used.

Methods for Decision Modeling, Cost-Effectiveness Analysis, and Value of Information Analysis

During the construction of the model it was our intent to maximize the use of data from the Stanford CER report. When necessary we considered additional information preferably from recently published studies that are applicable to the US setting. This is an operational approach that could be routinely followed as part of the preparation of future research needs products following completion of a CER report.

Briefly, we followed an operational process to develop a simple decision model that compares different treatment options (PCI with BMS, PCI with DES, CABG). We aimed to build a probabilistic decision model. Such models allow for a quantitative expression of the uncertainty around model parameters: appropriate distributions for model parameters are selected using standard methods and Monte Carlo simulation methods are used to sample from those distributions and propagate the uncertainty through the decision model, providing the uncertainty around model estimates of effectiveness and costs.1,2 Probabilistic sensitivity analysis is currently recommended for all decision and cost-effectiveness analyses.2,3
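As a sketch, the sampling-and-propagation step of a probabilistic analysis can be implemented as follows. The distributions, their parameters, and the toy model output below are illustrative assumptions for demonstration, not values or structure from the report:

```python
import math
import random
import statistics

def sample_parameters(rng):
    # Illustrative distributions -- NOT the report's actual parameters:
    # a beta for an annual event probability, a normal for a log relative risk.
    p_event = rng.betavariate(20, 180)        # mean ~0.10
    log_rr = rng.normalvariate(-0.2, 0.1)     # RR ~0.82 on average
    return p_event, log_rr

def model_output(p_event, log_rr):
    # Toy model output: absolute risk reduction under the sampled parameters.
    return p_event * (1.0 - math.exp(log_rr))

rng = random.Random(2010)
draws = sorted(model_output(*sample_parameters(rng)) for _ in range(10_000))
mean_arr = statistics.mean(draws)
ci_95 = (draws[249], draws[9_750])            # empirical 2.5th/97.5th percentiles
```

Repeating the model evaluation over many parameter draws is what converts uncertainty in the inputs into an empirical distribution (and interval) for the model output.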

We began the modeling exercise by attempting to identify decision models evaluating a similar decisional context by searching a curated database of cost-effectiveness analyses maintained at the Tufts Medical Center1 (Cost-effectiveness Analysis Registry, Tufts Medical Center, Boston, MA, last accessed August 25, 2010) for relevant published studies. As expected, several decision models are available related to the choice of coronary artery bypass graft surgery (CABG) vs. percutaneous coronary intervention (PCI). We originally attempted to replicate the model presented by Rao et al.,4 because it was published relatively recently, assumed a societal perspective, conducted probabilistic sensitivity analyses, and reported all data necessary to replicate the original results. We were able to replicate their results regarding treatment effectiveness exactly (to the third decimal, i.e., within the margin of random error) and had moderate success replicating their results regarding costs (we could not resolve a 10% discrepancy). Because Rao et al.4 used estimates applicable to the United Kingdom’s National Health Service (NHS), and we wanted our model to be applicable to a typical USA practice setting, we proceeded by adapting the model to incorporate elements from the analyses reported in Yock et al.5 and Bischof et al.6 This required a simplification of the model, specifically the consideration of different repeat revascularization procedures in aggregate (instead of modeling repeat revascularization with CABG or PCI separately). Analyses of expected utility using the modified model provided results similar to those in Rao et al.4

We considered three treatment alternatives to be of interest: CABG, PCI with bare metal stents (BMS), and PCI with drug-eluting stents (DES), each with three potential outcomes after the initial revascularization procedure: becoming asymptomatic, experiencing a procedural stroke, and experiencing treatment complications resulting in procedural death. Following the initial procedure, for surviving patients, we considered a 6-state Markov model. The states we modeled were (1) asymptomatic (no prior stroke); (2) recent myocardial infarction (no prior stroke); (3) asymptomatic (post stroke); (4) recent myocardial infarction (post stroke); (5) repeat stroke in patients who had a prior stroke; and (6) dead. Appendix Figure D1 presents a schematic of the Markov model structure we used for all analyses described herein.

Appendix Figure D1. Markov model structure for the CABG arm. The same structure (with different transition probabilities) was used to model the PCI strategies as well.

We considered three types of populations, in three different models.

  • “RCT-type participants:” a base-case scenario of a cohort of 65-year-olds with typical age-, sex-, and race-related mortality for the USA population. The decision is between revascularization with DES, BMS, and CABG. Most of our data were drawn from randomized clinical trials (RCTs) and were used to parameterize this model.
  • “Elderly participants (older than 75 years):” a cohort of elderly (75-year-old) patients with nonacute CAD. The decision is between revascularization with DES, BMS, and CABG.
  • “Diabetics:” a cohort of 65-year-old diabetic patients. The decision is between revascularization with DES and CABG.

All analyses were carried out over a 10-year time horizon with 1-year Markov cycles. Costs and effects were discounted at 3% per year. Statistical analyses were conducted using Stata SE version 11.1 (StataCorp, College Station, TX), and decision, cost-effectiveness, and value of information analyses were conducted using specialized software (TreeAge Pro Healthcare, Williamstown, MA). For probabilistic decision and cost-effectiveness analyses, as well as expected value of perfect information (EVPI) analyses, we based our calculations on 10,000 model iterations. For expected value of partial perfect information (EVPPI) we based our calculations on 1,000 iterations of the external sampling loop and 1,000 iterations of the internal loop.7,8
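A minimal cohort simulation illustrating annual Markov cycles and 3% per-year discounting might look like the following. Note that the 3-state structure, transition probabilities, and utility weights here are simplified assumptions for illustration only; they are not the 6-state model or the estimates used in the analyses:

```python
# Illustrative 3-state simplification (well, post-event, dead) of a Markov
# cohort model; all numbers below are assumptions, not report values.
P = {
    "well":  {"well": 0.90, "event": 0.07, "dead": 0.03},
    "event": {"well": 0.00, "event": 0.92, "dead": 0.08},
    "dead":  {"well": 0.00, "event": 0.00, "dead": 1.00},
}
UTILITY = {"well": 0.85, "event": 0.70, "dead": 0.0}   # assumed QALY weights

def run_cohort(horizon_years=10, discount_rate=0.03):
    state = {"well": 1.0, "event": 0.0, "dead": 0.0}
    total_qalys = 0.0
    for year in range(horizon_years):
        # Accumulate discounted utility for the cohort occupying each state.
        disc = 1.0 / (1.0 + discount_rate) ** year
        total_qalys += disc * sum(state[s] * UTILITY[s] for s in state)
        # Advance one annual Markov cycle.
        state = {t: sum(state[s] * P[s][t] for s in state) for t in P["well"]}
    return total_qalys
```

Each annual cycle multiplies the cohort's state-occupancy vector by the transition matrix, and utilities accrued in later years are down-weighted by the discount factor.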

Model Parameters

Baseline probabilities for the PCI BMS arm: We used the recently published COURAGE trial to obtain probabilities of both procedural and long term outcomes in patients undergoing PCI with BMS.9 Cumulative event rates for stroke, myocardial infarction and revascularization (during a follow-up of 4.6 years) were converted to annual event rates from which we estimated transition probabilities using standard methods.1 In addition we calculated the excess mortality due to CAD based on the difference in mortality rates between the average 65 year old USA population and the COURAGE trial.9
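Assuming a constant event rate over follow-up (the standard method referenced above), the conversion from a cumulative event probability to an annual transition probability can be sketched as:

```python
import math

def cumulative_to_annual_prob(cum_prob, followup_years):
    """Convert a cumulative event probability over a follow-up period to an
    annual transition probability, assuming a constant event rate (hazard)."""
    rate = -math.log(1.0 - cum_prob) / followup_years   # annual event rate
    return 1.0 - math.exp(-rate)                        # annual probability

# e.g. a COURAGE-style input: a cumulative probability of 0.20 over 4.6 years
# of follow-up (the 0.20 is an illustrative number, not a trial result).
p_annual = cumulative_to_annual_prob(0.20, 4.6)
```

Applying the annual probability back over the full follow-up period (1 − (1 − p)^t) recovers the original cumulative probability, which is a useful sanity check on the conversion.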

Treatment effects: To fully utilize the data available from the Stanford CER, we extracted data from the report pertaining to the following outcomes: procedural strokes, procedural mortality, myocardial infarctions, and mortality. We extracted data only from studies that used stents (either BMS or DES) in their PCI arms, since we considered studies of balloon angioplasty not to be reflective of current medical practice. Using these data, we performed meta-analyses of all trials using random-effects models, with relative risks (RR) as the metric of choice, for the four outcomes of interest.2 Since the number of events for all outcomes was low and the between-group differences were small, the relative risk is a good approximation of the hazard ratio (HR) and can thus be applied to event rates. To obtain estimates of treatment effects comparing DES and CABG we used data from the recently published SYNTAX trial.10 Finally, to obtain estimates of the treatment effect comparing BMS and DES we used a recently published network meta-analysis of treatments for non-acute CAD.11 To obtain consistent estimates of treatment effects for all three treatments of interest (DES, BMS, CABG) we performed a mixed treatment meta-analysis of the treatment effect estimates obtained from pairwise comparisons of the three strategies.12
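One common random-effects pooling step, the DerSimonian–Laird estimator, can be sketched as follows; this is a standard method rather than necessarily the exact software routine used for the report, and the input data below are fabricated for illustration:

```python
import math

def dersimonian_laird(log_rrs, variances):
    """Random-effects pooled estimate of log relative risks using the
    DerSimonian-Laird between-study variance estimator (a standard sketch)."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * y for wi, y in zip(w, log_rrs)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rrs))
    df = len(log_rrs) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)               # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * y for wi, y in zip(w_star, log_rrs)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2

# Illustrative (assumed) data: three trials' log RRs and their variances.
pooled_log_rr, pooled_se, tau_sq = dersimonian_laird(
    [math.log(0.8), math.log(0.9), math.log(0.7)], [0.04, 0.05, 0.06])
```

Pooling is done on the log scale because log relative risks are approximately normally distributed; the pooled estimate is exponentiated for reporting.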

Utilities: We obtained utility estimates from Rao et al.,4 based on a National Institute for Health and Clinical Excellence (NICE) report on coronary revascularization.13 These estimates were derived from the ARTS clinical trial, using the EQ-5D instrument.14 To check the validity of these estimates, we used the “Utility Weights” function of the CEA Registry (see above) to obtain a comprehensive list of QALY estimates used in cost-effectiveness analyses exploring similar decisional contexts and compared our estimates with those used in published analyses. In all cases the estimates we used were close to the average of the estimates used by others, providing reassurance of the representativeness of our analyses. Further, the uncertainty of the utility weights in Rao et al. corresponded to the scatter of the utility weight distribution in the CEA Registry.

Costs: Costs were obtained from published sources relevant to the USA clinical setting. Specifically, we obtained cost estimates from two recent cost-effectiveness analyses by Yock et al.5 and Bischof et al.6 To estimate the cost of stroke we used the Diagnosis Related Group (DRG) weights from the Centers for Medicare and Medicaid Services (fiscal year 2007; DRG code 559, acute ischemic stroke with use of thrombolytic agent), following the calculation described in Bischof et al.6

Appendix Table D1 summarizes details of our data sources, Appendix Table D2 demonstrates how we obtained cost estimates for repeat revascularization procedures and Appendix Table D3 presents the parameters estimates we used in the three decision scenarios we explored.

Appendix Table D1. Data sources for model parameters.

Appendix Table D2. Calculation of cost of revascularization.

Appendix Table D3. Parameter estimates and distributions fit for probabilistic sensitivity analysis. Distribution parameters are presented as means with standard deviations.

Model Analyses

We performed the following analyses in a stepwise fashion: a decision analysis focusing on expected benefits (where QALYs were the decision-relevant quantity [outcome]), a cost-effectiveness analysis (based on incremental cost-effectiveness ratios, ICERs, for pairs of compared treatments), and a value of information analysis (quantifying the expected value of perfect information, EVPI, for all parameters and the expected value of partial perfect information, EVPPI, for specific groups of model parameters). From each analysis, we present the output that we consider most relevant to making decisions about future research needs.

Decision analysis: Here the interest is in maximizing QALYs regardless of costs. To summarize the influence of parameters (such as the treatment effects) on the decision, we generated tornado graphs from one-way sensitivity analyses. Specifically, for each parameter of interest we calculated the range of expected utilities as the parameter varied from its lower to its upper 95% confidence limit. The graphs are called tornado graphs because parameters are plotted in decreasing order of influence, giving the visual impression of a tornado.
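The one-way sensitivity analysis behind a tornado graph reduces to varying each parameter over its interval while holding the others at base-case values, then ranking by output range. A sketch with a hypothetical two-parameter model (the parameter names, values, and model are invented for illustration):

```python
def tornado_ranges(base_params, bounds, model):
    """One-way sensitivity analysis: vary each parameter across its 95% CI
    while holding the others at base-case values, and rank by output range."""
    ranges = {}
    for name, (lo, hi) in bounds.items():
        outputs = []
        for value in (lo, hi):
            params = dict(base_params, **{name: value})
            outputs.append(model(params))
        ranges[name] = abs(outputs[1] - outputs[0])
    # Most influential parameters first -- the tornado-graph ordering.
    return sorted(ranges.items(), key=lambda kv: kv[1], reverse=True)

# Toy model with hypothetical parameters, for illustration only.
def toy_model(p):
    return p["rr_mi"] * 0.5 + p["utility"] * 2.0

base = {"rr_mi": 0.8, "utility": 0.85}
ci = {"rr_mi": (0.6, 1.1), "utility": (0.80, 0.90)}
ranking = tornado_ranges(base, ci, toy_model)
```

The sorted ranges are exactly the bar lengths of the tornado graph, widest at the top.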

Cost-effectiveness analysis: We carried out our analyses from a societal perspective and assumed that the aim of the decisionmaker was to choose among treatments based on the balance of incremental costs and effects. We generated tornado graphs depicting the range of ICERs in one-way sensitivity analyses of the parameters of interest within their 95% CIs.

Value-of-information analysis (VOI): Decisions based on current knowledge may be proven wrong in the future because of uncertainties in the underlying data, so a VOI analysis assigns a monetary value to reducing uncertainty in a decisional context.1 VOI analysis attempts to answer the question: how much is it worth to have information on all or some parameters of a decision problem (i.e., what is the opportunity cost of uncertainty)? When the value of simultaneously eliminating all model uncertainty is of interest, the overall EVPI can be calculated. If the value of eliminating uncertainty regarding only one specific parameter, or a group of related parameters, is of interest, the EVPPI can be estimated. Because both types of analysis convert effectiveness to monetary units to provide the total potential value of conducting future research, they require the specification of willingness-to-pay thresholds. It is widely appreciated that such thresholds are rather arbitrary, so we performed sensitivity analyses from a lower bound of zero to $200,000 per QALY gained.1 In addition to a willingness-to-pay threshold, VOI analysis typically requires an estimate of the effective population for whom the treatment strategies under study would be applicable. Because our aim was to rank potential research areas within each population of interest, we used the per-person estimates and avoided any comparison across populations. Details of the methods for calculating EVPI and EVPPI have been published elsewhere.1,7,19
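The per-person EVPI can be estimated directly from probabilistic-analysis output as the mean of the per-iteration maximum net benefit minus the maximum of the mean net benefits. A sketch using simulated, purely illustrative net-benefit draws for two strategies:

```python
import random
import statistics

def evpi(nb_draws):
    """Per-person EVPI from probabilistic-analysis output: nb_draws is a list
    of per-iteration net-benefit tuples, one entry per strategy."""
    n_strategies = len(nb_draws[0])
    # Value with current information: pick the strategy best on average.
    mean_nb = [statistics.mean(d[t] for d in nb_draws)
               for t in range(n_strategies)]
    with_current = max(mean_nb)
    # Value with perfect information: pick the best strategy each iteration.
    with_perfect = statistics.mean(max(d) for d in nb_draws)
    return with_perfect - with_current

# Simulated net benefits for two strategies (illustrative numbers only).
rng = random.Random(7)
draws = [(rng.normalvariate(10_000, 3_000), rng.normalvariate(10_500, 3_000))
         for _ in range(10_000)]
value = evpi(draws)
```

Because the per-iteration maximum is never smaller than any single strategy's net benefit, this estimator is always nonnegative; it approaches zero only when one strategy dominates in essentially every iteration.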

All analyses were repeated for the subpopulation specific analyses (elderly and diabetic patients).

Subpopulation Analyses

For all subpopulation analyses, our approach to estimating event rates and probabilities in the PCI arms, and to using relative metrics (hazard ratios) to derive the probabilities in the remaining arms, was the same as the approach we used for the typical RCT population. The tree and Markov process structure was also the same as in the typical RCT population analysis. We briefly discuss here the different sources of data for the two additional subpopulations, the elderly and diabetic patients, particularly regarding treatment effects; further details about the data sources and parameter estimates are presented in Appendix Tables D1 and D3.

Elderly: To perform quantitative analyses in elderly (older than 75 years) patients we modified our base-case model by using the baseline mortality of this older population (based on vital statistics of the USA population) and by modeling the treatment effect for mortality in this subgroup based on the Hlatky et al.17 meta-analysis (we used the HR estimate for mortality from the oldest subgroup of patients presented in that analysis, i.e., >65 years of age). Because we did not have an estimate for the age group of interest regarding other long-term outcomes (i.e., stroke, MI, or revascularization), and because the estimate from the Hlatky et al. meta-analysis pertained to slightly younger patients, we represented the additional uncertainty in these estimates by doubling the standard error of the relative risks for all long-term outcomes.17

Diabetics: For short-term, procedural outcomes we used data from the PCI arm of the COURAGE trial9 and calculated the RR for these outcomes comparing PCI with CABG from our meta-analysis of stent trials included in the Stanford CER.16 We modeled the long-term event rates and treatment effects on myocardial infarction, stroke, and repeat revascularization based on the CARDia trial.15 Briefly, this was a randomized comparison of PCI with stent placement vs. CABG in which the majority (68%) of the stents used were drug eluting. Outcomes at 1 year of followup were reported in 2010. To represent the cost of clopidogrel prophylaxis, we modeled a fixed cost increment for PCI procedures, as we did for the DES arm of the typical RCT population model.18 For the decision, cost-effectiveness, and VOI analyses in diabetic patients we did not model a separate BMS placement strategy, as our source of data presented data only in aggregate for BMS and DES.

Results

The following paragraphs show results per model (subpopulation) and type of analysis (decision, cost-effectiveness, or value of information analysis).

Typical RCT population

Decision Analysis

To identify parameters with the greatest uncertainty regarding effectiveness, we performed one-way sensitivity analyses for each parameter of interest, separately comparing DES and BMS to CABG. In these analyses each parameter was varied over a range equal to its 95% confidence interval, and the difference in effectiveness between the largest and smallest value was recorded. The larger this range, the more influential the uncertainty of that specific parameter is on the relative effectiveness of the two strategies being compared. We then ranked the parameters from the one with the greatest uncertainty to the one with the least. Appendix Figure D2 presents the tornado graph for BMS and Appendix Figure D3 the graph for DES. Parameters relevant to DES do not appear in the BMS tornado graph, but BMS-related parameters appear in both graphs because of the specific parameterization of our model (see the methods above for a detailed explanation of the model structure). In general, within each graph, parameters toward the top are more “influential” in sensitivity analysis along their 95% confidence interval, indicating higher priority for future research.

Appendix Figure D2. Tornado graph of decision analysis, comparing BMS with CABG in the typical RCT population.

Appendix Figure D3. Tornado graph of decision analysis, comparing DES with CABG in the typical RCT population.

Cost-Effectiveness Analyses

We also performed one-way sensitivity analyses across the range of uncertainty (i.e., the 95% confidence interval) of each parameter under a cost-effectiveness framework. Appendix Figures D4 and D5 present tornado graphs for the same parameters evaluated in the decision analysis, but here the outcome is the ICER (not QALYs).

Appendix Figure D4. Tornado graph of cost-effectiveness analysis, comparing BMS with CABG in the typical RCT population.

Appendix Figure D5. Tornado graph of cost-effectiveness analysis, comparing DES with CABG in the typical RCT population.

Value of Information Analysis

Expected value of perfect information (EVPI)

For the base-case scenario, the overall EVPI (i.e., the value of eliminating all uncertainty around the model parameters) was more than $1,250 per patient per year at a willingness-to-pay threshold of $50,000 per QALY, indicating that there is substantial value in conducting further research to reduce the uncertainty regarding the comparative effectiveness of BMS, DES, and CABG. This value places an upper bound on the cost society should be willing to incur to obtain information on this research topic. Appendix Figure D6 presents the estimated EVPI over a wide range of willingness-to-pay thresholds (0 to $200,000/QALY).

Appendix Figure D6. EVPI over willingness to pay for future research comparing DES, BMS and CABG for the typical RCT population.

Expected value of partial perfect information (EVPPI)

Because different study designs can be used to obtain additional information on the different model parameters, analyses of specific sources of uncertainty (i.e., of specific model parameters) are more informative for prioritizing future research. For this reason, we calculated EVPPI for the following groups of model parameters: (1) treatment effects on long-term outcomes (separately for DES vs. CABG and BMS vs. CABG); (2) procedural treatment effects; (3) long-term event rates in BMS arms; (4) the probability of procedural outcomes in BMS arms; (5) the probability of death after stroke and MI; and (6) patient preferences (utilities).1 Because accurate estimates of costs can be obtained with relatively straightforward research designs (mostly survey-type research or analysis of existing databases), we chose not to display the EVPPI of cost estimates in the following figures. Appendix Figure D7 presents the EVPPI for the three strategies (DES, BMS, and CABG) over a wide range of willingness-to-pay thresholds (0 to $200,000/QALY).

Appendix Figure D7. Graph of EVPPI for future research comparing DES, BMS and CABG for different groups of parameters over a wide range of willingness to pay for the base case scenario in the typical RCT population.

Diabetic Patients

Note that the decision tree (and consequently the decisional context) for the diabetic subpopulation analyses is slightly different from that used for the base case and elderly analyses. This difference was due to modeling considerations regarding the sources of data for each of these analyses, as explained in more detail in the supplementary methods. Results are listed in the same order as in the first model.

Appendix Figure D14. Tornado graph of decision analysis, comparing PCI with CABG in the model representing diabetic patients.

Appendix Figure D15. Tornado graph of cost-effectiveness analysis, comparing PCI with CABG in the model representing diabetic patients.

Appendix Figure D16. EVPI over willingness to pay for future research comparing PCI and CABG for the model representing diabetic patients.

Appendix Figure D17. Graph of EVPPI for future research comparing PCI and CABG for different groups of parameters over a wide range of willingness to pay for the model representing diabetic patients.

Appendix D. References

1. Briggs A, Claxton K, Sculpher M. Decision modelling for health economic evaluation. Oxford: Oxford University Press; 2006.
2. Griffin S, Claxton K, Hawkins N, et al. Probabilistic analysis and computationally expensive models: necessary and required? Value Health. 2006 Jul;9(4):244–252.
3. Claxton K, Sculpher M, McCabe C, et al. Probabilistic sensitivity analysis for NICE technology assessment: not an optional extra. Health Econ. 2005 Apr;14(4):339–347.
4. Rao C, Aziz O, Panesar SS, et al. Cost effectiveness analysis of minimally invasive internal thoracic artery bypass versus percutaneous revascularisation for isolated lesions of the left anterior descending artery. BMJ. 2007 Mar 24;334(7594):621.
5. Yock CA, Boothroyd DB, Owens DK, et al. Cost-effectiveness of bypass surgery versus stenting in patients with multivessel coronary artery disease. Am J Med. 2003 Oct 1;115(5):382–389.
6. Bischof M, Briel M, Bucher HC, et al. Cost-effectiveness of drug-eluting stents in a US Medicare setting: a cost-utility analysis with 3-year clinical follow-up data. Value Health. 2009 Mar 11.
7. Brennan A, Kharroubi S, O’Hagan A, et al. Calculating partial expected value of perfect information via Monte Carlo sampling algorithms. Med Decis Making. 2007 Jul;27(4):448–470.
8. Oakley JE, Brennan A, Tappenden P, et al. Simulation sample sizes for Monte Carlo partial EVPI calculations. J Health Econ. 2010 May;29(3):468–477.
9. Boden WE, O’Rourke RA, Teo KK, et al. Optimal medical therapy with or without PCI for stable coronary disease. N Engl J Med. 2007 Apr 12;356(15):1503–1516.
10. Serruys PW, Morice MC, Kappetein AP, et al. Percutaneous coronary intervention versus coronary-artery bypass grafting for severe coronary artery disease. N Engl J Med. 2009 Mar 5;360(10):961–972.
11. Trikalinos TA, Alsheikh-Ali AA, Tatsioni A, et al. Percutaneous coronary interventions for non-acute coronary artery disease: a quantitative 20-year synopsis and a network meta-analysis. Lancet. 2009 Mar 14;373(9667):911–918.
12. Lumley T. Network meta-analysis for indirect treatment comparisons. Stat Med. 2002 Aug 30;21(16):2313–2324.
13. Hill R, Bagust A, Bakhai A, et al. Coronary artery stents: a rapid systematic review and economic evaluation. Health Technol Assess. 2004 Sep;8(35):iii–242.
14. Serruys PW, Unger F, van Hout BA, et al. The ARTS study (Arterial Revascularization Therapies Study). Semin Interv Cardiol. 1999 Dec;4(4):209–219.
15. Kapur A, Hall RJ, Malik IS, et al. Randomized comparison of percutaneous coronary intervention with coronary artery bypass grafting in diabetic patients. 1-year results of the CARDia (Coronary Artery Revascularization in Diabetes) trial. J Am Coll Cardiol. 2010 Feb 2;55(5):432–440.
16. Bravata D, McDonald K, Gienger A, et al. Comparative effectiveness of percutaneous coronary interventions and coronary artery bypass grafting for coronary artery disease. Comparative Effectiveness Review No. 9 (prepared by the Stanford-UCSF EPC). 2007.
17. Hlatky MA, Boothroyd DB, Bravata DM, et al. Coronary artery bypass surgery compared with percutaneous coronary interventions for multivessel disease: a collaborative analysis of individual patient data from ten randomised trials. Lancet. 2009 Apr 4;373(9670):1190–1197.
18. Filion KB, Roy AM, Baboushkin T, et al. Cost-effectiveness of drug-eluting stents including the economic impact of late stent thrombosis. Am J Cardiol. 2009 Feb 1;103(3):338–344.
19. Tappenden P, Chilcott JB, Eggington S, et al. Methods for expected value of information analysis in complex health economic models: developments on the health economics of interferon-beta and glatiramer acetate for multiple sclerosis. Health Technol Assess. 2004 Jun;8(27):iii–78.

Footnotes

1. Cost-effectiveness Analysis Registry, maintained at the Center for the Evaluation of Value and Risk in Health, Institute for Clinical Research and Health Policy Studies, Tufts Medical Center. Available at https://research.tufts-nemc.org/cear/default.aspx.

2. In parameterizing the model we treated the relative risks as hazard ratios. This approximation is generally valid, as evident from the Taylor expansions of the complementary log-log transformation (the link function for hazard ratios) and the log transformation (the link function for relative risks).
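The approximation invoked in footnote 2 can be sketched explicitly. Assuming constant hazards over a common follow-up time t, the first-order Taylor expansion of log(1 − p) shows that the HR reduces to the RR when event probabilities are small:

```latex
% Cumulative probabilities p_0, p_1 relate to constant hazards h_0, h_1 by
% p_i = 1 - e^{-h_i t}, so the hazard ratio can be written as
\mathrm{HR} = \frac{h_1}{h_0} = \frac{-\log(1 - p_1)}{-\log(1 - p_0)}
% For small probabilities, \log(1 - p) = -p - p^2/2 - \cdots \approx -p, giving
\mathrm{HR} \approx \frac{p_1}{p_0} = \mathrm{RR}.
```

The neglected terms are of order p, so the approximation degrades as event probabilities or between-group differences grow, consistent with the conditions stated in the treatment-effects section above.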