
Thomson RG, De Brún A, Flynn D, et al. Factors that influence variation in clinical decision-making about thrombolysis in the treatment of acute ischaemic stroke: results of a discrete choice experiment. Southampton (UK): NIHR Journals Library; 2017 Jan. (Health Services and Delivery Research, No. 5.4.)


Factors that influence variation in clinical decision-making about thrombolysis in the treatment of acute ischaemic stroke: results of a discrete choice experiment.


Appendix 6 Further information on discrete choice experiment design and analysis

The DCE approach follows random utility theory, in which an individual, n, presented with a choice set, C_n, containing alternative scenarios, is assumed to choose the utility-maximising option i, such that:

\[
U_{in} = v_{in} + \varepsilon_{in} = \alpha + \beta X_{in} + \varepsilon_{in},
\tag{1}
\]

where v_in is the systematic component of utility, α is the alternative-specific constant (ASC), β is the vector of coefficients, X_in is the vector of k attributes and ε_in is the random component (unobservable variation). A respondent is assumed to choose scenario j from the set of alternatives J if the utility derived from that alternative is greater than the utility derived from any other alternative in the choice set.
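Under the standard assumption that the error terms ε_in are independently and identically distributed type I extreme value (the conditional logit assumption discussed below), this implies the familiar closed-form choice probability. The expression is a standard result and is not written out explicitly in the report:

\[
P_{n}(i \mid C_{n}) = \frac{\exp(\alpha + \beta X_{in})}{\sum_{j \in C_{n}} \exp(\alpha + \beta X_{jn})}.
\]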

The model estimates the probability of a chosen alternative, j, as a function of the k attributes. In the current study, the utility derived from the chosen option is described by:

\[
\begin{aligned}
U_{\text{Offer of thrombolysis}} = {}& \alpha + \beta_{1}\,\text{Age} + \beta_{2}\,\text{Sex} + \beta_{3}\,\text{Ethnicity} + \beta_{4}\,\text{Symptom onset time} + \beta_{5}\,\text{Frailty} \\
&+ \beta_{6}\,\text{Pre-stroke dependency} + \beta_{7}\,\text{Pre-stroke cognitive status} + \beta_{8}\,\text{Systolic BP} \\
&+ \beta_{9}\,\text{NIHSS score} + \varepsilon.
\end{aligned}
\tag{2}
\]

Further details on discrete choice experiment analysis

The initial analysis employed the benchmark case of a conditional logit (clogit) model, which is based on three assumptions: (1) independence of irrelevant alternatives; (2) error terms that are independent and identically distributed across observations; and (3) no preference heterogeneity (i.e. identical preferences across respondents). Alternative model specifications were also tested, including mixed logit and generalised multinomial logit models. Goodness-of-fit criteria, including the Akaike and Bayesian information criteria, were used to determine the best model for the data.
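As an illustrative sketch only (the study's estimation was carried out in Stata, not Python, and the data layout, variable names and two-alternative design below are assumptions), the benchmark clogit model can be estimated by maximising the log-likelihood over choice sets:

import numpy as np
from scipy.optimize import minimize

def clogit_negll(beta, X, choice, group):
    """Negative log-likelihood of a conditional logit model.
    X: (rows, k) attribute matrix, one row per alternative;
    choice: 1 if that alternative was chosen, else 0;
    group: identifier of the choice set each row belongs to."""
    v = X @ beta                                    # systematic utilities
    ll = 0.0
    for g in np.unique(group):
        vg = v[group == g]
        cg = choice[group == g]
        ll += vg[cg == 1].sum() - np.logaddexp.reduce(vg)   # log P(chosen | set)
    return -ll

# Hypothetical simulated data: 200 choice sets, 2 alternatives, 3 attributes
rng = np.random.default_rng(1)
n_sets, n_alt, k = 200, 2, 3
X = rng.choice([-1.0, 0.0, 1.0], size=(n_sets * n_alt, k))   # effects-coded levels
group = np.repeat(np.arange(n_sets), n_alt)
true_beta = np.array([0.8, -0.5, 0.3])
u = X @ true_beta + rng.gumbel(size=n_sets * n_alt)          # utility with EV1 error
choice = np.zeros(n_sets * n_alt, dtype=int)
for g in range(n_sets):
    rows = np.where(group == g)[0]
    choice[rows[np.argmax(u[rows])]] = 1                     # utility-maximising choice

fit = minimize(clogit_negll, np.zeros(k), args=(X, choice, group), method="BFGS")
print("estimated coefficients:", fit.x)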

Based on the data analysis plan, the objectives of the research, the results of preliminary analyses, and the Akaike and Bayesian goodness-of-fit criteria, mixed-effects logistic (mixed logit) regression was deemed most appropriate. Mixed-logit regression models were optimal as they allow for the examination of unobserved preference heterogeneity: that is, model estimates that vary across individuals. Mixed-logit regression facilitated the examination of heterogeneity among respondents (which was expected) and relaxed the assumption of independence from irrelevant alternatives, which underlies the clogit model. The mixed-logit regression allowed for increased flexibility by specifying certain coefficients to be randomly distributed across individuals. Estimation by maximum simulated likelihood was undertaken using the user-written ‘mixlogit’ Stata programme (Arne Hole, Boston College Department of Economics, Boston, MA, USA). All estimation results reported were generated assuming that the random parameters were normally distributed and using 250 Halton draws to simulate the likelihood functions to be maximised. There is an inherent trade-off between the number of Halton draws and the time taken to compute the models. It is suggested that analysts build up models, working from the default of 50 draws up to 100, 200, 250, 500 and 1000, as appropriate. However, given the number of random effects specified in the current study, it was not feasible to compute models with 500 or 1000 Halton draws, and therefore 250 draws were used for each model.
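To make the simulated-likelihood step concrete, the sketch below illustrates how a mixed-logit choice probability can be simulated with Halton draws and normally distributed coefficients. It is a minimal illustration in Python with hypothetical data and independent random coefficients, not the ‘mixlogit’ Stata routine used in the study:

import numpy as np
from scipy.stats import norm, qmc

def simulated_choice_prob(X, chosen, mean, sd, n_draws=250, seed=0):
    """Simulated mixed-logit probability of the chosen alternative in one
    choice set, with independent normally distributed random coefficients."""
    k = X.shape[1]
    # Halton sequence mapped through the normal inverse CDF gives quasi-random
    # draws beta_r ~ N(mean, sd^2) for each of the k coefficients
    halton = qmc.Halton(d=k, scramble=True, seed=seed).random(n_draws)
    betas = mean + sd * norm.ppf(halton)                       # (n_draws, k)
    v = betas @ X.T                                            # (n_draws, n_alt) utilities
    p = np.exp(v[:, chosen] - np.logaddexp.reduce(v, axis=1))  # logit probability per draw
    return p.mean()                                            # average over the draws

# Hypothetical choice set: two alternatives described by three attributes
X = np.array([[1.0, -1.0, 0.0],
              [0.0,  1.0, 1.0]])
print(simulated_choice_prob(X, chosen=0,
                            mean=np.array([0.8, -0.5, 0.3]),
                            sd=np.array([0.5, 0.5, 0.5])))

The simulated log-likelihood for a full data set is then the sum of the log of these simulated probabilities over respondents and choice sets, maximised over the means and standard deviations of the random coefficients.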

Effects coding was used for the analysis. This refers to a way of using categorical predictor variables in estimation models. It is similar to dummy coding but uses ones, zeros and minus ones to represent information on factor levels. Effects coding facilitates reliable estimates of main effects and interaction effects (if included/required) and allows for estimation of all levels.134
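As a small worked illustration (the attribute and level names are hypothetical, not taken from the experimental design), the difference between dummy and effects coding for a three-level attribute can be shown as follows; under effects coding the reference level is coded minus one in every column, so the estimated level effects sum to zero:

import pandas as pd

# Hypothetical three-level attribute
levels = pd.Series(["level_1", "level_2", "level_3"], name="attribute")

# Dummy coding: the reference level (level_1) is all zeros
dummy = pd.get_dummies(levels, drop_first=True).astype(int)

# Effects coding: same columns, but the reference level is coded -1 throughout,
# so coefficients for all levels sum to zero and every level remains estimable
effects = dummy.copy()
effects.loc[levels == "level_1", :] = -1

print(pd.concat([levels,
                 dummy.add_prefix("dummy_"),
                 effects.add_prefix("effects_")], axis=1))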

Copyright © Queen’s Printer and Controller of HMSO 2017. This work was produced by Thomson et al. under the terms of a commissioning contract issued by the Secretary of State for Health. This issue may be freely reproduced for the purposes of private research and study and extracts (or indeed, the full report) may be included in professional journals provided that suitable acknowledgement is made and the reproduction is not associated with any form of advertising. Applications for commercial reproduction should be addressed to: NIHR Journals Library, National Institute for Health Research, Evaluation, Trials and Studies Coordinating Centre, Alpha House, University of Southampton Science Park, Southampton SO16 7NS, UK.

Included under terms of UK Non-commercial Government License.

Bookshelf ID: NBK410187
