
Chapter 13. Analysis, Interpretation, and Reporting of Registry Data To Evaluate Outcomes

1. Introduction

Registries have the potential to produce data that are an important source of information regarding healthcare patterns, decision making, and delivery, as well as the subsequent association of these factors with patient outcomes. Registries, for example, can provide valuable insight into the safety and/or effectiveness of an intervention or the efficiency, timeliness, quality, and patient centeredness of a healthcare system. Registry data may also be linked to other data sources, such as administrative health insurance claims databases, electronic health records (EHRs), or biorepositories, to investigate hypotheses or questions that are secondary to the original reason for data collection.

The utility and applicability of registry data and linked datasets (collectively, ‘registry data’) rely heavily on an understanding of how the data were derived and why they were recorded, and then on the quality of the data analysis plan and its users’ ability to interpret the results. Analysis and interpretation of these data begin with a series of core questions:

  • Study purpose: Is the intent descriptive or comparative, and does it address natural history, effectiveness, safety, or another characterization?
  • Patient population: Who was studied and how did they come to be included in the registry?
  • Data quality: How were the data collected and reviewed, and was any verification or validation performed?
  • Data completeness: How were missing data handled for the main exposures and outcomes of interest and main confounders?
  • Data analysis: What analyses were performed and how were differences in risk factors accounted for? Were any sensitivity analyses conducted to estimate the impact of bias on the observed results?

While registry data present many opportunities for meaningful analysis, there are inherent challenges to making appropriate inferences. Principal concerns include availability of the key variables of interest as well as data quality, since registries and secondary data sources vary in terms of their data quality assurance procedures and information on data quality or curation is not reported consistently. More importantly, nonrandomized studies of all designs are susceptible to systematic errors arising from mismeasurement of analytic variables,1 unmeasured confounding,2 and poor choice of reference group.3,4 These factors must be considered when making inferences based on analyses of registry data.5

This chapter explains how analysis plans are constructed for registry data, how they differ depending on the purpose of the analysis, and how registry design and conduct and the characteristic(s) of any linked data can affect analysis and interpretation. The analytic techniques generally used for registry data are presented, addressing how conclusions may be drawn from the data and what caveats are appropriate. The chapter also describes how timelines for data analysis can be built in at registry inception and how to determine when the registry data are sufficient to begin analysis. Case Examples 23, 24, and 25 provide examples of registry analyses. Chapters 6 and 11 provide more information on the use of secondary data sources in registries.

2. Research Questions and Registry Purposes

Every registry study should start with a research question or focus. For example, disease registries commonly provide descriptive information, such as the typical clinical features of individuals with a disease, variations in phenotype, and the path to diagnosis and clinical progression of the disease over time (i.e., natural history), but they may also compare the effectiveness and safety of various treatments. These registries play a particularly important role in the study of many conditions, especially rare diseases.

In the case of studies where the aim is to examine the associations between specific exposures and outcomes, there is a vocal school of thought that requires prespecification (and registration announcing such prespecification) of the study hypotheses or research questions as a requirement for assuring credible results.6 While this position is aimed at avoiding publication bias, there are many strong counterarguments.7–10 The key area of agreement centers on the importance of transparency: making study protocols, including analytic plans, available for review, and publishing results with enough detail to allow replication for confirmation or refutation.

Regardless of whether the hypotheses are prespecified, most productive registry-based research begins with a clear statement of objectives. These objectives might be descriptive or may involve a comparison. Some examples of objectives include:

  • Measure the incidence of a disease in a specific population;
  • Characterize the patterns or costs of treatment for a disease in a specific population;
  • Measure the occurrence of outcomes among patients with a disease;
  • Compare the incidence of a particular disease in two or more subgroups defined by common characteristics (e.g., etiologic research);
  • Compare the cost or quality of care for a particular disease in two or more subgroups (e.g., health services research or disparities research); or
  • Compare the rate of outcomes among two or more subgroups of patients (often defined by different types or levels of treatment) with a particular disease (e.g., clinical research).

In all cases, the overarching objective is to obtain an accurate and valid estimate of the frequency of an outcome’s occurrence, or its relative frequency compared across groups.11 An additional objective may be to generalize study results to a broader population. A valid estimate is one that is likely to be true. A precise estimate is one that has little variability. A generalizable estimate is one that provides information pertinent to the target population, the population for which the study’s information provides a basis for potential action, such as a public health or medical intervention. This is discussed further in the following section.

3. Patient Population

The purpose of registry-based research is to provide information about a specific patient population to which all study results are meant to apply. To determine how well the study results apply to the target population, five populations, each of which is a subset of the preceding population, need to be considered, along with how well each population represents the preceding population. These five subpopulations are shown in Figure 13-1.

Figure 13-1. Patient populations. The figure shows the flow of participants into an analysis: the target population, the accessible population, the intended population, the actual population, and, finally, the analytic population.

The target population is defined by the study’s purpose. To assess the appropriateness of the target population, one must ask the question, “Is this really the population that we need to know about?” For example, the target population for a registry of oral contraceptive users would include women of childbearing age who could become pregnant and are seeking to prevent pregnancy. Studies often miss important segments of the population in an effort to make the study population more homogeneous. For example, a study to assess a medical device used to treat patients for cardiac arrhythmias that defines only men as its target population would be less informative than it could be, because the device is designed for use in both men and women. Studies using linked datasets may miss segments of the population because of limitations in the secondary data source; for example, a study that linked registry data to claims data from a private payer would miss patients covered by other payers (e.g., Medicare) as well as the uninsured.

The accessible population is defined using inclusion criteria and exclusion criteria. The inclusion criteria define the population that will be used for the study and generally include geographic (e.g., hospitals or clinics in the New England region), demographic, disease-specific, and temporal (e.g., specification of the included dates of hospital or clinic admission) criteria, as well as others. Conversely, the exclusion criteria seek to eliminate specific patients from the study and may be driven by an effort to assure an adequate-sized population of interest for analysis. The same may be said of inclusion criteria, since it is difficult to separate inclusion from exclusion criteria (e.g., inclusion of adults aged 18 and older vs. exclusion of children younger than 18).

The accessible population may lose representativeness to the extent that convenience plays a part in its determination, because people who are easy to enroll in the registry or capture in secondary data sources may differ in some critical respects from the population at large. Similarly, to the extent that homogeneity plays a part in determining the accessible population, it is less likely to be representative of the entire population because certain population subgroups will be excluded.

Factors to be considered in assessing the accessible population’s representativeness of the target population include all the inclusion and exclusion criteria mentioned above. One method of evaluating representativeness is to describe the demographics and other key descriptors of the study population and to contrast its composition with patients with similar characteristics who are identified from another database, such as might be obtained from health insurers, health maintenance organizations, or the U.S. Surveillance Epidemiology and End Results (SEER) cancer registries.12 For example, the Get With The Guidelines (GWTG)-Stroke registry was linked with Medicare claims data to examine the representativeness of the registry’s population of Medicare beneficiaries admitted for ischemic stroke.13

However, simple numerical/statistical representativeness is not the main issue. Representativeness should be evaluated in the context of the purpose of the study—that is, whether the study results can reasonably be generalized or extrapolated to other populations of interest outside of those included in the accessible population. For example, suppose that the purpose of the study is to assess the effectiveness of a drug in U.S. residents with diabetes. If the accessible population includes no children, then the study results may not apply to children, since children often metabolize drugs very differently from adults.

On the other hand, consider the possibility that the accessible population is generally drawn from a geographically isolated region, whereas the target population may be the entire country or the world. In that case, the accessible population is not geographically representative of the target population, but that circumstance would have little or no impact on the representativeness of the study findings to the target population if the action of the drug (or its delivery) does not vary geographically (which we would generally expect to be the case, unless pertinent racial/genetic or dietary factors were involved or if risk factors for the outcome differ geographically). Therefore, in this example, the lack of geographical representativeness would not affect interpretation of results.14 In common practice, representativeness is interpreted as drawing accessible populations that represent typical patients and practitioners, rather than as representative sampling in the statistical sense of the term.

The reason for using an intended population rather than the whole accessible population for the study is simply a matter of convenience and practicality. The issues to consider in assessing how well the intended population represents the accessible population are similar to those for assessing how well the accessible population represents the target population. The main difference is that the intended population may be specified by a sampling scheme, which often tries to strike a balance among representativeness, convenience, and budget. If the study is designed to estimate a rate of occurrence in a specific population, then a random sample of the accessible population would be considered representative of the accessible population. It is important to note that for many, if not most, registry-based studies, a complete roster of the accessible population does not exist outside of a single health system. More commonly, the intended population is compared with the accessible population in terms of pertinent variables.

To the extent that convenience or other design (e.g., stratified random sample) is used to choose the intended population, one must consider the extent to which the sampling of the accessible population may affect any interpretations from the study. For example, suppose that, for the sake of convenience, only patients who attend clinic on Mondays are included in the study. If patients who attend clinic on Mondays are similar in every relevant respect to other patients, that may not constitute a limitation. But if Monday patients are substantially different from patients who attend clinic on other days of the week (e.g., well-baby clinics are held on Mondays) and if those differences affect the outcome that is being studied (e.g., proportion of baby visits for “well babies”), then that sampling strategy would substantially alter the interpretations from the study and would be considered a meaningful limitation.

The extent to which the actual population is not representative of the intended population (or typical patients and/or practitioners) is generally a matter of real-world issues that prevent recruitment of a more comprehensive population. It is important to consider the likely underlying factors that caused those subjects not to be included in the analysis of study results and how that might affect the interpretations from the registry. For example, consider a study of a newly introduced medication, such as an anti-inflammatory drug that is thought to be as effective as other products and to have fewer side effects but that is more costly. Inclusion in the actual population may be influenced by prescribing practices governed by a health insurer. For example, if a new drug is approved for reimbursement only for patients who have “failed” treatment with other anti-inflammatory products, the resulting actual population will be systematically different from the target population of potential anti-inflammatory drug users. The actual population may be refractory to treatment or may have more comorbidities (e.g., gastrointestinal problems), and may be specifically selected for treatment beyond the intention of the study-specified inclusion criteria. In fact, registries of newly introduced drugs and devices may often include patients who are different from the ultimate target population of broad interest.

A related issue is the bias that could result from recruitment of early adopters,15 in which practitioners who are quick to use a novel healthcare intervention or therapy differ from those who use it only once it is well established. For example, a study of the use of a new surgical technique may initially enroll largely academic physicians and only much later enroll community-based surgeons. If the outcomes of the technique differ between the academic surgeons (early adopters) and community-based surgeons (later adopters), then the initial results of the study may not reflect the true effectiveness of the technique in widespread use. In fact, "operator experience" is an important factor in understanding the effectiveness of various surgical approaches or of devices that require surgical implantation. Patients selected for treatment with a novel therapy may also differ with regard to factors such as severity or duration of disease and prior treatment history, including treatment failures. For example, patients with more severe or late-stage disease who have failed other treatments might be more likely to use a newly approved product that has shown efficacy in treating their condition. Later on, patients with less severe disease may start using the product.

Finally, the analytic population includes all those patients who meet the criteria for analysis. In some cases, it becomes apparent that there are too few cases of a particular type, or too few patients with certain attributes, such that these subgroups do not contribute enough information for meaningful analysis. Analytic populations are also created to meet specific needs. For example, an investigator may request a dataset that will be used to analyze a subset of the registry population, such as those who had a specific treatment or condition.

Patients who are included in the analytic population for a given analysis may also be subject to selection or inclusion criteria (admissibility criteria), and these may affect interpretation of the resulting analyses. For example, if only patients who survive long enough to be admitted to hospital are included in a study, then immortal time bias may distort the results.16 Another example is when patients who remain enrolled in a registry and attend followup visits through 2 years after registry initiation are included in an analysis of adherence to therapy. In this event, it is possible or likely that adherence among those who remain enrolled in the registry will be different from adherence among those who do not. Differential loss to followup, whereby patients who are lost may be more likely to experience adverse outcomes, such as mortality, than those who remain under observation, is a related issue that may lead to biased results. (See Chapter 3.)

Selection of a study population inevitably involves balancing accuracy and generalizability concerns, as well as cost and feasibility considerations. For example, restriction is one of the most effective strategies for control of confounding through study design.17 If one is concerned about confounding by sex, a simple and effective strategy to control that confounding is to restrict the study population to a single sex. However, such restriction reduces the study’s precision by decreasing the sample size, and may also reduce the generalizability of the results (only applicable to a segment of the target population). An alternative would be to include both sexes and to stratify the analysis by sex. While this approach would improve the generalizability of the results and allow for an evaluation of confounding, the precision of the estimated association would be reduced, and perhaps substantially reduced, if the estimate of effect in men was substantially different from the estimate of effect in women. In this circumstance, the study becomes effectively two studies.

4. Data Quality

In addition to a full understanding of study design and methodology, analysis of registry events and outcomes will benefit from an assessment of data quality. When registry data are linked to secondary data sources, this quality assessment must consider both the quality of the registry data as well as the original purpose, inherent limitations, likelihood of differential followup, and quality of any data linked to the registry data; the accuracy of matching the data to specific patients should also be considered. One must examine whether most if not all important covariates were collected, how analytic variables were defined in registry and secondary data sources, data completeness for key variables of interest, how missing data were handled, and data accuracy.

Linked datasets may offer the opportunity to validate some registry data. For example, pharmacy data can be used to confirm that prescriptions were actually filled and may provide more accurate identification of medication use than the data recorded in a registry. The frequency of pharmacy refills is often an indicator of adherence and may be more accurate than patient-reported adherence. Similarly, registries that are derived primarily from patient-reported data may use record linkage for clinical validation of events of special interest or to supplement patient-reported information with clinical or administrative data.

4.1. Collection of Important Covariates

Registries are generally constructed to serve a particular purpose, which drives data collection strategies. However, registry information collected for one purpose (e.g., provider performance feedback) may later be used for another purpose, provided that the terms of data access, including informed consent, allow such additional uses.

For example, suppose the research question addresses the comparative effectiveness of two treatments for a given disease using an existing registry. To be meaningful, the registry should have accurate, well-defined, and complete information, including potential confounding and effect-modifying factors; population characteristics of those with the specified disease; exposures (whether patients received treatment A or B); and patient outcomes of interest. Confounding factors are variables that influence both the exposure (treatment selection) and the outcome in the analyses. These factors can include patient factors (age, gender, race, socioeconomic factors, disease severity, or comorbid illness); provider factors (experience, skills); and system factors (type of care setting, quality of care, or regional effects). It is generally not possible to identify all confounding factors in planning a registry, nor is it possible to collect all confounding factors of interest (e.g., genetic factors, complete family histories, occupational and environmental exposures and socioeconomic factors that may influence disease occurrence or treatment benefits and risks). However, it is desirable to give serious thought to what will be important and how the necessary data can be collected. While effect modification (i.e., when the magnitude of the effect of the primary exposure on an outcome differs depending on the level of a third variable) is not a threat to validity if properly accounted for in the analyses, it is important to consider potential effect modifiers for data collection and analysis to evaluate whether an association varies within specific subgroups.18 Analysis of registry data requires information about such variables so that the confounding covariates can be accounted for, using one of several analytic techniques covered in upcoming sections of this chapter. In addition, as described in Chapter 3 and above, eligibility for entry into the study may be restricted to individuals within a certain range of values for potential confounding factors in order to reduce the effects of these factors. Such restrictions may also affect the generalizability of the study.

4.2. Definition of Analytic Variables

Registries typically capture data using data elements with clear, unambiguous definitions that are determined in advance of data collection (see Chapters 3, 4, and 5). It is essential that such information is documented in an accessible manner and made available in the context of analysis files so that users of registry data understand the data definitions. Individual registry data elements may be transformed into composite endpoints for analytic purposes, and definitions of how these variables were created must be documented and made available to those who conduct registry study analyses and included in any reports. When registry data are linked to another data source for a specific study, it is equally important to define the analytic variables in the secondary data source. To enhance transparency and facilitate data curation and study replication, the protocol for the linked study should provide a clear, unambiguous definition of the exposures and outcome being studied, a description of how they will be measured, and a discussion of strengths and limitations of using these variables.

If the study objective is to compare the rates of outcome occurrence across subgroups, then the protocol should provide a definition of the exposure and comparator(s). It is critical to define clearly both the index condition (i.e., the "exposed" or "treated" group) and the reference group or condition (e.g., those not exposed to the study treatment of interest, those treated with another method, or those treated in any other way). Attention should also be given to identifying and accurately measuring potential confounders and effect modifiers in the primary and secondary data source, as discussed in the prior section.

4.3. Data Completeness and Curation

Assuming that a registry or secondary data source has the necessary data elements, the next step is to characterize data completeness for the key variables needed for the primary objectives. Recognizing that registry-based studies, by definition, are observational in nature, completeness needs to be assessed in that context. For example, patients will present according to the dictates of routine care practices at a facility, tempered by patients' ability to take time off from work or travel to the site. Thus, patients may not present for followup visits according to what is expected. Similarly, the practice of ordering specific diagnostic or laboratory tests will often vary by physician practice. The variable nature of real-world medical care also may influence how often patients complete surveys and patient-reported outcome (PRO) measures, if these questionnaires are completed when a patient presents for care. Even during routine care, patients may miss a visit or decline to undergo a procedure or test, and providers may elect to forego expected tests for a few or a specific subset of their patients. Demographics, test results, and other key information may not be documented in the registry due to lack of availability, refusal to provide, or incorrect documentation (e.g., values that are inconsistent or out of range). These scenarios, among other potential issues, result in missing or inconsistent data in registry databases and secondary data sources, but they are to be expected.

Data curation describes the process of reviewing data for completeness and accuracy. For registry-based studies, it may be possible to query certain data elements that are missing or that fall outside of the standard expected range. Chapter 11 provides more information on queries. In addition, when incorporating secondary data into registries, it is essential to evaluate whether the data import was performed according to specifications and whether any data transformations were made according to expectations. In some cases, external quality control checks may be useful for making these determinations (e.g., is the patient distribution by age, ethnicity, and other characteristics similar to the distribution that would be expected for this population?).
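As an illustration only, the following minimal sketch (in Python, with hypothetical column names and illustrative ranges) shows how such curation checks might be automated: flagging out-of-range values as candidates for queries, and comparing a registry distribution against an external expectation.

```python
# Minimal data-curation sketch. The columns "age", "sbp", and "sex" and the
# expected ranges are hypothetical and purely illustrative.
import pandas as pd

def range_check(df: pd.DataFrame, col: str, lo: float, hi: float) -> pd.DataFrame:
    """Return rows whose values fall outside the expected range (query candidates)."""
    return df[(df[col] < lo) | (df[col] > hi)]

registry = pd.DataFrame({
    "age": [34, 61, 214, 47],          # 214 is an implausible age
    "sbp": [118, 305, 126, 141],       # 305 mmHg is out of range
    "sex": ["F", "M", "M", None],      # one missing value
})

print(range_check(registry, "age", 0, 120))
print(range_check(registry, "sbp", 60, 260))

# External quality-control check: compare the registry's sex distribution
# with an expected population distribution (values here are illustrative).
expected = pd.Series({"F": 0.52, "M": 0.48})
observed = registry["sex"].value_counts(normalize=True)
print(pd.concat([observed.rename("observed"), expected.rename("expected")], axis=1))
```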

Recognizing the types of issues that are inherent in non-interventional research, evaluations of data should focus first on determining whether data are missing largely at random or whether there is a systematic bias in the data.19 For example, when looking at secondary data, a common key question is whether the patient is likely to obtain followup for the conditions of interest in the database. This question also has parallels for primary data collection, since healthcare providers will depend on patient reports for events not treated by that provider, and researchers should consider whether there are likely to be motivations that affect reporting completeness or accuracy (e.g., reporting motor vehicle accidents while impaired).
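One simple first check of the missing-at-random question is to see whether missingness in a key variable tracks an observed characteristic. The sketch below is illustrative only; the column names ("hba1c", "clinic") are hypothetical.

```python
# Sketch: probe whether missingness in a key variable is related to an
# observed characteristic. If the proportion missing differs sharply by
# clinic, the data are unlikely to be missing completely at random, and a
# systematic cause should be sought.
import pandas as pd

registry = pd.DataFrame({
    "clinic": ["A", "A", "A", "B", "B", "B"],
    "hba1c":  [7.1, None, 6.8, None, None, 8.2],
})

registry["hba1c_missing"] = registry["hba1c"].isna()
print(registry.groupby("clinic")["hba1c_missing"].mean())
```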

4.4. Data Accuracy and Validation

While observational registry studies are usually not required to meet U.S. Food and Drug Administration and International Conference on Harmonisation standards of Good Clinical Practice developed for clinical trials, sponsors and contract research organizations that conduct registry studies are responsible for ensuring the accuracy of study data to the extent possible. Plans for data quality assurance, data verification, and site monitoring (if any) should be developed at the beginning of a study and adhered to throughout its lifespan. Chapter 11 discusses in detail approaches to data collection and quality assurance, including data management, site monitoring, and source data verification.

Ensuring the accuracy and validity of data and programming at the analysis stage also requires consideration. The Office of Surveillance and Epidemiology (OSE) of the Food and Drug Administration's Center for Drug Evaluation and Research uses the manual Standards of Data Management and Analytic Process in the Office of Surveillance and Epidemiology for database analyses conducted within OSE; the manual addresses many of these issues and may be consulted for further elaboration on these topics.20 Topics addressed that pertain to ensuring the accuracy of data just before and during analysis include developing a clear understanding of the data at the structural level of the database and of the variable attributes. Creating analytic programs with careful documentation, using an approach to variable creation and naming conventions that is straightforward and, when possible, consistent with the Clinical Data Interchange Standards Consortium (CDISC) initiative, is useful whether primary or secondary data are used. Similarly, some verification of programming and analytic dataset creation by a second analyst is also considered good practice.

4.5. Validation Substudies

Validation substudies may be used to evaluate the accuracy of certain data elements or study assumptions and can inform estimates of the potential impact of bias on study results.2,21 Registry-based research is often amenable to collection of internal validation data, for example by medical record review. In addition, many databases have internal protocols that constantly validate at least some aspects of the data. The validation data generated by these protocols, if accessible, may provide an initial indication of the data quality. To facilitate data collection for study-specific internal validation studies, investigators should consider the important threats to the validity of their research while designing their study, and should allocate project resources accordingly.

For example, in the study of statin use related to ALS and neurodegenerative diseases described above,22 the ICD-10 code used to identify cases (G12.2) corresponded to diagnoses of ALS or other motor neuron syndromes. The investigators therefore selected a random sample of 25 individuals from among all those who satisfied the case definition, and a clinician investigator reviewed their discharge summaries. The proportion of these 25 who did not have ALS (32 percent) was used to inform a bias analysis to model the impact of these false-positive ALS diagnoses. Assuming a valid bias model, the bias analysis results showed that the null association was unlikely to result from the nondifferential misclassification of other diseases as ALS.
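The following sketch illustrates the general logic of such a positive-predictive-value (PPV) correction; the case counts and person-time below are hypothetical and are not the study's actual data.

```python
# Simplified PPV correction, loosely following the ALS example above: a
# record review of 25 sampled cases found 32% were not true ALS (PPV = 0.68).
# All counts and person-years below are hypothetical.
ppv = 1 - 0.32

cases_exposed, py_exposed = 40, 120_000      # observed cases / person-years, statin users
cases_unexposed, py_unexposed = 80, 240_000  # observed cases / person-years, nonusers

def rate_ratio(a, t1, b, t0):
    return (a / t1) / (b / t0)

observed_rr = rate_ratio(cases_exposed, py_exposed, cases_unexposed, py_unexposed)

# Apply the same PPV to both groups (nondifferential misclassification):
corrected_rr = rate_ratio(cases_exposed * ppv, py_exposed,
                          cases_unexposed * ppv, py_unexposed)

# With a nondifferential PPV, the rate ratio is unchanged, illustrating why a
# null finding is unlikely to be explained by such false-positive diagnoses.
print(round(observed_rr, 2), round(corrected_rr, 2))
```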

In this example, there was no effort to validate that non-cases of ALS were truly free of the disease. Non-cases are seldom validated, because false-negative cases, especially of rare diseases, occur very rarely. Furthermore, validating the absence of disease often requires a study-supported medical examination of the non-case patients, an expensive, time-consuming, and invasive procedure. Prevalent diseases with a lengthy preclinical period and relatively simple diagnostic tests, such as diabetes, are more amenable to validation of non-cases. The ALS example also illustrates that an internal validation study requires protocol planning and allocation of study resources to collect the validation data. A protocol should be written that specifies how participants in the validation sample will be selected from the study population. Participation in the validation substudy might require informed consent to allow medical record review, whereas the database data itself might be available without individual informed consent. These aspects should be resolved in the planning stage, and the analytic plan should include a section devoted to bias modeling and analysis.2

4.6. Other Data Quality Issues Relevant to Linked Datasets

4.6.1. Changes in Coding Conventions Over Time

A common problem with secondary data sources is the impact of changes in coding conventions over the lifetime of the database. These changes can take the form of diagnostic drift,23 changes in discharge coding schemes, changes in the definition of grading of disease severity, or even variations in the medications on formulary in one region but not others at different points in time. For example, the Danish National Registry of Patients (DNRP) is a database of patient contacts at Danish hospitals. From 1977 to 1993, discharge diagnoses were coded according to ICD-8, and from 1994 forward discharge diagnoses were coded according to ICD-10. ICD-10 included a specific code for chronic obstructive pulmonary disease (COPD, J44), whereas ICD-8 did not [ICD-8 496 (COPD not otherwise specified) did not appear in the DNRP]. In addition, from 1977 to 1994 the DNRP registered discharge diagnoses for only inpatient admissions, but from 1995 forward discharge diagnoses from outpatient admissions and emergency room contacts were also registered. COPD patients seen in outpatient settings before 1995 were therefore not registered; this excluded patients who likely had less severe COPD on average. The change in ICD coding convention in 1994 and the exclusion of outpatient admissions before 1995 presented a barrier to estimating the time trend for incidence of all admissions for COPD in any period that overlapped these two changes to the DNRP.24

The General Practice Research Database (GPRD) was a medical records database capturing information on approximately 5 percent of patients in the United Kingdom25 (as of March 2012, the GPRD became the Clinical Practice Research Datalink). Information was directly entered into the database by general practitioners trained in standardized data entry. When the GPRD was initiated in 1987, diagnoses were recorded using Oxford Medical Information Systems (OXMIS) codes, which were similar to ICD-9 codes. In 1995, the GPRD adopted the Read coding system, a more detailed and comprehensive system that groups and defines illnesses using a hierarchical system. Without knowledge of this shift in coding and how to align codes for specific conditions across the different coding schemes, studies using multiple years of data could produce spurious findings.

4.6.2. Precision Considerations When Standard Errors Are Small

The large size of the study population that can often be included in a registry-based study is a strength, but it also requires special attention. The sample size allows adjustment for multiple potential confounders with little potential for over-fitting or sparse data bias,26 and allows design features such as comparisons of different treatments for the same indication (comparative effectiveness research) to reduce the potential for confounding by indication.27 Nonetheless, systematic errors remain a possibility, and these systematic errors dominate the uncertainty when estimates of association are measured with high precision as a consequence of a large sample size.28 When confidence intervals are narrow, when systematic errors remain possible, and/or when inference or policy action may result, investigators have been encouraged to employ quantitative bias analysis to more fully characterize the total uncertainty.21 Bias analysis methods have been used to address unmeasured confounding,29 selection bias,30 and information bias29,31 in registry-based research.
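As one concrete example of quantitative bias analysis, the sketch below applies a classic external-adjustment formula for a single unmeasured binary confounder, of the kind described in bias-analysis texts such as Lash, Fox, and Fink;2 all bias parameters here are hypothetical.

```python
# Simple bias analysis for an unmeasured binary confounder: divide the
# observed risk ratio by the bias factor implied by assumed bias parameters.
# All inputs are hypothetical, not registry results.
def confounding_adjusted_rr(rr_observed: float,
                            p_conf_exposed: float,
                            p_conf_unexposed: float,
                            rr_conf_outcome: float) -> float:
    """Externally adjust an observed RR for one unmeasured binary confounder."""
    bias = ((p_conf_exposed * (rr_conf_outcome - 1) + 1) /
            (p_conf_unexposed * (rr_conf_outcome - 1) + 1))
    return rr_observed / bias

# Observed RR of 1.8; confounder prevalence 40% in the exposed vs. 20% in the
# unexposed; confounder-outcome RR of 2.5. Adjusted RR is about 1.46.
print(round(confounding_adjusted_rr(1.8, 0.40, 0.20, 2.5), 2))
```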

5. Data Analysis

Statistical methods commonly used for descriptive purposes include those that summarize information from continuous variables (e.g., mean, median) or from categorical variables (e.g., proportions, rates). Registries may describe a population using incidence (the proportion of the population that develops the condition over a specified time interval) and prevalence (the proportion of the population that has the condition at a specific point in time). Another summary estimate that is often used is an incidence rate. The incidence rate (also known as absolute risk) takes into account both the number of people in a population who develop the outcome of interest and the person-time at risk, or the length of time contributed by all people during the period when they were in the population and the events were counted.
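A minimal worked example of these descriptive measures, using hypothetical counts and person-time:

```python
# Descriptive measures sketch (hypothetical cohort): incidence proportion,
# point prevalence, and an incidence rate using person-time at risk.
n_at_risk = 500            # disease-free patients at the start of followup
new_cases = 40             # new cases over 2 years of followup
person_years = 920.0       # person-time contributed while at risk

incidence_proportion = new_cases / n_at_risk          # 0.08 over 2 years
incidence_rate = new_cases / person_years             # ~0.043 per person-year

n_population = 10_000
existing_cases = 750
prevalence = existing_cases / n_population            # 0.075 at a point in time

print(f"incidence proportion: {incidence_proportion:.3f} over 2 years")
print(f"incidence rate:       {incidence_rate:.3f} per person-year")
print(f"prevalence:           {prevalence:.3f}")
```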

For analytical studies, the association between a risk factor and outcome may be expressed as relative risk, odds ratio, or hazard ratio, depending on the nature of the data collected, the duration of the study, and the frequency of the outcome. The standard textbooks cited here have detailed discussions regarding epidemiologic and statistical methods commonly used for the various analyses supported by registries.18,3235
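For instance, a risk ratio and an odds ratio can be computed directly from a 2x2 table, as in the hypothetical sketch below; a hazard ratio additionally requires followup time and is usually estimated with a survival model.

```python
# Risk ratio and odds ratio from a 2x2 table (hypothetical counts).
#                outcome+   outcome-
# exposed            a=30      b=70
# unexposed          c=15      d=85
a, b, c, d = 30, 70, 15, 85

risk_exposed = a / (a + b)                   # 0.30
risk_unexposed = c / (c + d)                 # 0.15
risk_ratio = risk_exposed / risk_unexposed   # 2.0
odds_ratio = (a * d) / (b * c)               # ~2.43

print(f"RR = {risk_ratio:.2f}, OR = {odds_ratio:.2f}")
# A hazard ratio would instead come from a time-to-event model (e.g., Cox
# regression), which uses each patient's followup time rather than counts alone.
```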

It is always important to consider the role of confounding. Although those planning a study try to collect as much data as possible to address known confounders, there is always the chance that unknown or unmeasured confounders will affect the interpretation of analyses derived from observational studies. It is important to consider the extent to which bias (systematic error stemming from factors that are related to both the decision to treat and the outcomes of interest [confounders]) could have distorted the results. For example, a bias known as confounding by indication36 results from the fact that physicians do not prescribe medicine at random: the reason a patient is put on a particular regimen is often associated with the underlying disease severity and may, in turn, affect treatment outcome. To detect such a bias, the distribution of various prognostic factors at baseline is compared for patients who receive a treatment of interest and those who do not. A related concept is channeling bias, in which drugs with similar therapeutic indications are prescribed to groups of patients who may differ with regard to factors influencing prognosis.37 Other types of bias include detection bias38 (e.g., when comparison groups are assessed at different points in time or by different methods), selective loss to followup, in which patients with the outcomes of most interest (e.g., the sickest) may drop out of one treatment group more often than another, and performance bias (e.g., systematic differences in care other than the intervention under study, such as a public health initiative promoting healthy lifestyles directed at patients who receive a particular class of treatment). In addition to such biases, analyses need to account for effect modification.39 The presence of effect modification may also be identified after the data are collected.

Confounding may be evaluated using stratified analysis, multivariable analysis, sensitivity analyses, and simple or quantitative bias analysis or simply by graphical comparison of characteristics and events between groups.40 The extensive information and large sample sizes available in some registries also support use of more advanced modeling techniques for addressing confounding by indication, such as the use of propensity scores to create matched comparison groups, or for stratification or inclusion in multivariable risk modeling.4144 New methods also include the high-dimensional propensity score (hd-PS) for adjustment using administrative data.45 Examples are too numerous for a few selections to be fully representative, but registries in nearly every therapeutic area, including cancer,46 cardiac devices,47 organ transplantation,48 and rare diseases,49 have published the results of analyses incorporating approaches based on propensity scores. Nonetheless, application of these and other methods may not fully control for unmeasured confounding.50
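The sketch below illustrates the basic propensity-score workflow on simulated data (model treatment as a function of measured covariates, then stratify on the estimated score); it is a minimal illustration, not a recommended analysis pipeline, and the covariates are hypothetical.

```python
# Propensity-score sketch: logistic regression of treatment on measured
# confounders, followed by stratification on PS quintiles.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2_000
df = pd.DataFrame({
    "age": rng.normal(60, 10, n),
    "severity": rng.uniform(0, 1, n),
})
# Treatment assignment depends on both covariates (confounding by indication).
p_treat = 1 / (1 + np.exp(-(-4 + 0.05 * df["age"] + 2 * df["severity"])))
df["treated"] = rng.binomial(1, p_treat)

ps_model = LogisticRegression().fit(df[["age", "severity"]], df["treated"])
df["ps"] = ps_model.predict_proba(df[["age", "severity"]])[:, 1]

# Within propensity-score quintiles, treated and untreated patients should be
# roughly comparable on the measured covariates.
df["ps_quintile"] = pd.qcut(df["ps"], 5, labels=False)
print(df.groupby(["ps_quintile", "treated"])["age"].mean().unstack().round(1))
```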

Groupings within a study population, such as patients seen by a single clinician or practice, residents of a neighborhood, or other “clusters,” may themselves impact or predict health outcomes of interest. Such groupings may be accounted for in analysis through use of analytic methods including analysis of variance (ANOVA), and hierarchical or multilevel modeling.5154
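A minimal illustration of a multilevel model with a random intercept per clinic, using simulated data; the statsmodels mixed-model interface is one common implementation, and all values here are hypothetical.

```python
# Mixed-effects sketch: a random intercept for each clinic accounts for
# clustering of patients within practices.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
clinics = np.repeat(np.arange(20), 50)               # 20 clinics, 50 patients each
clinic_effect = rng.normal(0, 2, 20)[clinics]        # cluster-level variation
x = rng.normal(0, 1, clinics.size)                   # a patient-level predictor
y = 5 + 1.5 * x + clinic_effect + rng.normal(0, 1, clinics.size)

df = pd.DataFrame({"y": y, "x": x, "clinic": clinics})
result = smf.mixedlm("y ~ x", df, groups=df["clinic"]).fit()
print(result.summary())
```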

Heterogeneity of treatment effect is also an important consideration for comparative effectiveness research, as the effect of a treatment may vary across subgroups of a heterogeneous patient population.55 Stratification on the propensity score has been used to identify heterogeneity of treatment effect and may identify clinically meaningful differences between subgroups based on pre-treatment characteristics.

For economic analyses, the analytic approaches often encountered are cost-effectiveness analyses and cost-utility studies. To examine cost-effectiveness, costs are compared with clinical outcomes measured in units such as life expectancy or years of disease avoided.56 Cost-utility analysis, a closely related technique, compares costs with outcomes adjusted for quality of life (utility) using measures known as quality-adjusted life years. Since most new interventions are more effective but also more expensive, another analytic approach examines the incremental cost-effectiveness ratio and contrasts that to the willingness to pay. (Willingness-to-pay analyses are generally conducted on a country-by-country basis, since various factors relating to national health insurance practices and cultural issues affect willingness to pay.) The use of registries for cost-effectiveness evaluations is a fairly recent development, and consequently, the methods are evolving rapidly. More information about economic analyses can be found in standard textbooks.5762
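The incremental cost-effectiveness calculation itself is simple, as the hypothetical sketch below shows; the substantive challenges lie in estimating the cost and outcome inputs.

```python
# Incremental cost-effectiveness ratio (ICER) sketch with hypothetical values.
cost_new, qaly_new = 48_000.0, 6.2        # mean cost and QALYs, new intervention
cost_std, qaly_std = 30_000.0, 5.6        # mean cost and QALYs, standard care

icer = (cost_new - cost_std) / (qaly_new - qaly_std)   # dollars per QALY gained

willingness_to_pay = 50_000.0             # illustrative threshold per QALY
print(f"ICER = ${icer:,.0f} per QALY; "
      f"{'below' if icer < willingness_to_pay else 'above'} the WTP threshold")
```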

A number of biostatistics and epidemiology textbooks cover in depth the issues raised in this section and the appropriate analytic approaches for addressing them—for example, “time-to-event” or survival analyses63 and issues of recurrent outcomes and repeated measures, with or without missing data,64 in longitudinal cohort studies. Other texts address a range of regression and nonregression approaches to analysis of case-control and cohort study designs65 that may be applied to registries. For further information on how to quantify bias, please see Lash, Fox, and Fink.2

5.1. Factors To Consider in the Analysis

Registry results are most interpretable when they are specific to well-defined endpoints or outcomes in a specific patient population with a specific treatment status. Registry analyses may be more meaningful if variations of study results across patient groups, treatment methods, or subgroups of endpoints are reported. In other words, analysis of a registry should explicitly provide the following information:

  • Patient: What are the characteristics of the patient population in terms of demographics, such as age, gender, race/ethnicity, insurance status, and clinical and treatment characteristics (e.g., history of significant medical conditions, disease status at baseline, and prior treatment history)?
  • Exposure (or treatment): Exposure could be therapeutic treatment such as medication or surgery; a diagnostic or screening tool; behavioral factors such as alcohol use, smoking habits, and diet; or other factors such as genetic predisposition or environmental factors. What are the distributions of the exposure in the population? Is the study objective specific to any one form of treatment? Is a new user design being used?66 Do the exposure definition (index and reference groups) and the analysis avoid immortal-time bias?67 Are there repeated measures at predetermined intervals, or is the exposure intermittent?
  • Endpoints (or outcomes): Outcomes of interest may encompass effectiveness or comparative effectiveness, the benefits of a healthcare intervention under real-world circumstances,68 and safety—the risks or harms that may be associated with an intervention. Examples of effectiveness outcomes include survival, disease recurrence, symptom severity, quality of life, and cost-effectiveness. Safety outcomes may include infection, sensitivity reactions, cancer, organ rejection, and mortality. Endpoints must be precisely defined at the data collection and analysis stages. Are the study data on all-cause mortality or cause-specific mortality? Is information available on pathogen-specific infection (e.g., bacterial vs. viral)? Are there competing risks?69
  • Covariates: As with all observational studies, comparative effectiveness research requires careful consideration, collection, and analysis of important confounding and effect modifying variables. For medication exposures, are dose, duration, and calendar time under consideration? Directed acyclic graphs (DAGs) can be useful tools to illustrate how the exposure (or treatment), outcome and covariates are related.70,71
  • Time: For valid analysis of risk or benefit that occurs over a period of time following therapy, detailed accounting for time factors is required. For exposures, dates of starting and stopping a treatment or switching therapies should be recorded. For outcomes, the dates when followup visits occur, and whether or not they lead to a diagnosis of an outcome of interest, are required in order to take into account how long and how frequently patients were followed. Dates of diagnosis of outcomes of interest, or dates when patients complete a screening tool or survey, should be recorded. At the analysis stage, results must also be described in a time-appropriate fashion. For example, is an observed risk consistent over time (in relation to initiation of treatment) in a long-term study? If not, what time-related risk measures should be reported in addition to or instead of cumulative risk? When exposure status changes frequently, what is the method of capturing the population at risk? Many observational studies of intermittent exposures (e.g., use of nonsteroidal anti-inflammatory drugs or pain medications) use time windows of analysis, looking at events following first use of a drug after a prescribed interval (e.g., 2 weeks) without drug use. Different analytic approaches may be required to address issues of patients enrolling in a registry at different times and/or having different lengths of observation during the study period; a minimal person-time sketch follows this list.
  • Potential for bias: Successful analysis of observational studies also depends to a large extent on the ability to measure and analytically address the potential for bias. Refer to Chapter 3 for a description of potential sources of bias. Directed acyclic graphs can also be useful for understanding and identifying the source of bias.70,71 For details on how to quantify potential bias, see the textbook by Lash, Fox, and Fink.2
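As referenced in the Time item above, the following sketch illustrates elementary person-time accounting with staggered entry, together with a simple exposure risk window; all dates and the 14-day window are hypothetical.

```python
# Person-time accounting sketch: staggered entry, varying followup, and a
# simple exposure risk window. Dates and the window length are illustrative.
from datetime import date

patients = [
    # (enrollment date, last followup or event date, event occurred?)
    (date(2023, 1, 10), date(2023, 12, 1), False),
    (date(2023, 3, 5),  date(2023, 6, 20), True),
    (date(2023, 7, 1),  date(2023, 11, 15), False),
]

person_days = sum((end - start).days for start, end, _ in patients)
events = sum(1 for *_, had_event in patients if had_event)
rate_per_1000_py = events / (person_days / 365.25) * 1000
print(f"{events} events / {person_days} person-days "
      f"= {rate_per_1000_py:.1f} per 1,000 person-years")

# Risk-window logic: an event counts as exposed only if it occurs within
# 14 days of the most recent treatment start.
def in_risk_window(treatment_start: date, event_date: date, window_days: int = 14) -> bool:
    return 0 <= (event_date - treatment_start).days <= window_days

print(in_risk_window(date(2023, 6, 10), date(2023, 6, 20)))  # True
```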

The choice of comparators is also a challenging issue. When participants in a cohort are classified into two or more groups according to certain study characteristics (such as treatment status, with the "standard of care" group as the comparator), the registry is said to have an internal or concurrent comparator. The advantage of an internal comparator design is that patients are likely to be more similar to each other, except for their treatment status, than patients in comparisons between registry subjects and external groups of subjects. When defining the comparator group, it is important not to introduce immortal time bias.67 In addition, consistency in measurement of specific variables and in data collection methods makes the comparison more valid. Internal comparators are particularly useful for treatment practices that change over time. Comparative effectiveness studies may often necessitate use of an internal comparator in order to maximize the comparability of patients receiving different treatments within a given study, and to ensure that variables required for multivariable analysis are available and measured in an equivalent manner for all patients to be analyzed.

Unfortunately, it is not always possible to have or sustain a valid internal comparator. For example, there may be significant medical differences between patients who receive a particularly effective therapy and those who do not (e.g., underlying disease severity or contraindications), or it may not be feasible to maintain a long-term cohort of patients who are not treated with such a medication. It is known that external information about treatment practices (such as scientific publications or presentations) can result in physicians changing their practice, such that they no longer prescribe the previously accepted standard of care. There may be a systematic difference between physicians who are early adopters and those who start using the drug or device after its effectiveness has been more widely accepted. Early adopters may also share other practices that differentiate them from their later-adopting colleagues.15

In the absence of a good internal comparator, one may have to leverage external comparators to provide critical context to help interpret data revealed by a registry. An external or historical comparison may involve another study or another database that has disease or treatment characteristics similar to those of registry subjects. Such data may be viewed as a context for anticipating the rate of an event. One widely used comparator is the U.S. SEER cancer registry data, because SEER provides detailed annual incidence rates of cancer stratified by cancer site, age group, gender, and tumor staging at diagnosis. SEER represents 28 percent of the U.S. population.12 A procedure for formalizing comparisons with external data is known as standardized incidence rate or ratio;39 when used appropriately, it can be interpreted as a proxy measure of risk or relative risk.
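A standardized incidence ratio is simply the observed number of events divided by the number expected under external stratum-specific rates, as in this hypothetical sketch (all rates, strata, and counts are illustrative, not SEER values):

```python
# Standardized incidence ratio (SIR) sketch: observed events in the registry
# divided by the number expected from external age-specific rates.
age_groups = ["40-49", "50-59", "60-69"]
person_years = {"40-49": 4_000, "50-59": 6_000, "60-69": 2_500}
external_rate_per_100k = {"40-49": 50.0, "50-59": 120.0, "60-69": 300.0}

expected = sum(person_years[g] * external_rate_per_100k[g] / 100_000
               for g in age_groups)
observed = 22
sir = observed / expected   # SIR > 1 suggests more events than expected
print(f"expected = {expected:.1f}, observed = {observed}, SIR = {sir:.2f}")
```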

Use of an external comparator, however, may present significant challenges. For example, SEER and a given registry population may differ from each other for a number of reasons. The SEER data cover the general population and have no exclusion criteria pertaining to history of smoking or cancer screening, for example. On the other hand, a given registry may consist of patients who have an inherently different risk of cancer than the general population, resulting from the registry’s having excluded smokers and others known to be at high risk of developing a particular cancer. Such a registry would be expected to have a lower overall incidence rate of cancer, which, if SEER incidence rates are used as a comparator, may complicate or confound assessments of the impact of treatment on cancer incidence in the registry.

However, use of external comparators is becoming an important tool for regulators, and external comparators for phase II clinical trials may come from registries.19

Regardless of the choice of comparator, similarity between the groups under comparison should not be assumed without careful examination of the study patients. Different comparator groups may result in very different inferences for safety and effectiveness evaluations; therefore, analysis of registry findings using different comparator groups may be used in sensitivity analyses or bias analyses to determine the robustness of a registry’s findings.

5.2. Developing a Statistical Analysis Plan

5.2.1. Need for a Statistical Analysis Plan

It is good practice to develop a statistical analysis plan (SAP) that describes the analytical principles and statistical techniques to be employed to address the primary and secondary objectives, as specified in the study protocol or plan, before embarking on data analysis. A registry may require a primary "master SAP" as well as subsequent, supplemental SAPs. Supplemental SAPs might be triggered by new research questions emerging after the initial master SAP was developed or might be needed because the registry has evolved over time (e.g., additional data collected, data elements revised).

Although the evolving nature of data collection practices in some registries poses challenges for data analysis and interpretation, it is important to keep in mind that the ability to answer questions emerging during the course of the study is one of the advantages (and challenges) of a registry. In the specific case of long-term rare-disease registries, many of the relevant research questions of interest cannot be defined a priori but arise over time as disease knowledge and treatment experience accrue. Supplemental SAPs can be developed only when enough data become available to analyze a particular research question. At times, the method of statistical analysis may have to be modified to accommodate the amount and quality of data available.

To the extent that the research question and SAP are formulated before the data analyses are conducted and results are used to answer specific questions or hypotheses, such supplemental analysis retains much of the intent of prespecification rather than constituting wide-ranging exploratory analysis (sometimes referred to as a "fishing expedition"). The key to success is to provide sufficient detail in the SAP so that, together with the study protocol and the case report forms, the overall process of data analysis and reporting is well described.

5.2.2. Preliminary Descriptive Analysis To Assist SAP Development

During SAP development, one aspect of a registry that is somewhat different from a randomized controlled trial is the necessity to understand the distribution (sometimes referred to as the “shape”) of the data collected to inform subsequent stratified analyses.39 This may be crucial for a number of reasons.

Given the broad inclusion criteria that most registries tend to propose, there might be a wide distribution of patients, treatment, and/or outcome characteristics. The distribution of age, for example, may help to determine if more detailed analyses should be conducted in the “oldest old” age group (80 years and older) to help understand health outcomes in this subgroup that might be different from those of their younger counterparts.
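A preliminary look at the "shape" of the data can be as simple as tabulating the age distribution, as in this sketch on simulated data (the bin boundaries are illustrative):

```python
# "Shape of the data" sketch: examine the age distribution to decide whether
# the oldest old (80+) warrant separate analysis.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
age = pd.Series(np.clip(rng.normal(68, 12, 5_000), 18, 100).round())

bins = [18, 50, 65, 80, 101]
labels = ["18-49", "50-64", "65-79", "80+"]
print(pd.cut(age, bins=bins, labels=labels, right=False).value_counts().sort_index())
print(age.describe())
```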

Unless a registry is designed to limit data collection to a fixed number of regimens, the study population may experience many treatments, considering the possible combinations of various dose levels, drug names, frequency and timing of medication use (e.g., acute, chronic, intermittent), and sequencing of therapies. The scope and complexity of these variations often constitute one of the most challenging aspects of analyzing a registry. Grouping of treatments into regimens for analysis should be done carefully, guided by clinical experts in that therapeutic area as well as by study purpose. The full picture of treatment patterns may become clear only after a sizable number of patients have been enrolled. Consequently, the treatment definition in an SAP may be refined during the course of a study. Furthermore, there may be occasions where use of a therapeutic regimen of interest is much less frequent than anticipated, so that specific study objectives focusing on this group of patients become infeasible.

5.3. Timing of Analyses During the Study

Unlike typical clinical trials, registries, especially those that take several years to complete, may conduct intermediate analyses before all patients have been enrolled and/or all data collection has been completed. Such midcourse analyses may be undertaken for several reasons. First, many of these registries focus on serious safety outcomes. For such safety studies, it is important for all parties involved to actively monitor the frequency of such events at regular predefined intervals so that further risk assessment or risk management can be considered. The timing of such analyses may be influenced by regulatory requirements. Second, it may be of interest to examine treatment practices or health outcomes during the study to capture any emerging trends. Finally, it may also be important to provide intermediate or periodic analyses to document progress, often as a requirement for continued funding.

While it is useful to conduct such periodic analyses, careful planning should be given to the process and timing, and to whether the need for such analyses is programmatic (e.g., for annual progress reports) or scientific. For scientific purposes, among the first questions generally asked is whether a sufficient number of patients have been enrolled, whether sufficient followup time has elapsed to observe events of interest, and/or whether a sufficient number of events have occurred. For example, some events, such as site reactions to injections, can be observed after a relatively short duration, compared with events like cancers, which may have a long induction or latency. If there are too few patients or insufficient time has elapsed, premature analyses may lead to unreliable conclusions. However, it is inappropriate to delay analysis so long that an opportunity might be missed to observe emerging safety outcomes. Investigators should use sound clinical and epidemiological judgment when planning interim or periodic analyses and be sure that any resultant reports address both the strengths and limitations of such analyses.

5.3.1. Patient Censoring

At the time of a registry analysis, events may not have occurred for all patients. For these patients, the data are said to be censored, indicating that the registry’s observation period ended before the event of interest (e.g., mortality) occurred for them. In these situations, it is unclear when the event will occur, if at all. In addition, a registry may enroll patients until a set stop date, and patients entered into the registry earlier will have a greater probability of having an event than those entered more recently because of their longer followup. An important consideration, and one that needs to be assessed in a registry, is how patient prognosis varies with the time of entrance into the registry. This issue may be particularly problematic in registries that assess innovative (and changing) therapies. Patients and outcomes initially observed in the registry may differ from patients and outcomes observed later in the registry timeframe, either because of true differences in treatment options available at different points in time, or because of the shorter followup for people who entered later. Patients with censored data, however, contribute important information to the registry analysis. When possible, analyses should be planned so as to include all subjects, including those censored before the end of the followup period or the occurrence of an event. One method of analyzing censored data is the Kaplan-Meier method.72 In this method, for each time period, the probability is calculated that those who have not experienced an event before the beginning of the period will still not have experienced it by the end of the period. The probability of remaining event-free through any given time is then calculated as the product of the conditional probabilities of the preceding time intervals.
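To make the product-limit calculation concrete, the following is a minimal sketch of the Kaplan-Meier estimator; the followup times and event indicators are hypothetical values used only for illustration.

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier product-limit estimate of the probability of remaining event-free.

    time  : followup time for each patient (to the event or to censoring)
    event : 1 if the event was observed at that time, 0 if the patient was censored
    """
    time, event = np.asarray(time), np.asarray(event)
    curve, surv = [], 1.0
    for t in np.sort(np.unique(time[event == 1])):
        at_risk = np.sum(time >= t)              # still under observation just before t
        d = np.sum((time == t) & (event == 1))   # events occurring at t
        surv *= 1 - d / at_risk                  # conditional probability of surviving t
        curve.append((t, surv))
    return curve

# Hypothetical registry followup (months); censored patients contribute to the
# risk set up until the time at which they were censored.
times = [2, 3, 3, 5, 7, 8, 8, 10, 12, 12]
events = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
for t, s in kaplan_meier(times, events):
    print(f"month {t}: event-free probability = {s:.3f}")
```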

5.3.2. Sensitivity Analyses

Sensitivity analysis refers to a procedure used to determine how robust the study result is to alterations of various parameters. If a small parameter alteration leads to a relatively large change in the results, the results are said to be sensitive to that parameter. Sensitivity and bias analyses may be used to determine how the final study results might change when taking into account those lost to followup. A hypothetical simple sensitivity analysis is presented in Table 13-1.

Table 13-1. Impact of loss to followup on incidence rates per 1,000 in a hypothetical study of 1,000 patients in a registry, assuming that the incidence rate among patients lost to followup is X times the rate estimated in those who stayed in the registry.

Table 13-1 illustrates the extent of change in the incidence rate of a hypothetical outcome under varying degrees of loss to followup and varying differences in incidence between those for whom there is information and those lost to followup. In the first example, where 10 percent of the patients are lost to followup, the estimated incidence rate of 111/1,000 people is reasonably stable; it changes relatively little as the (unknown) incidence in those lost to followup varies from 0.5 times to 5 times the observed incidence, with the corresponding incidence rate that would have been observed ranging from 106 to 156 per 1,000. On the other hand, when the loss to followup increases to 30 percent, the corresponding incidence rates that would have been observed range from 94 to 242. This procedure could be extended to a study that has more than one cohort of patients, with one exposed and the other nonexposed. In that case, the impact of loss to followup on the relative risk could be estimated by using sensitivity analysis.
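The calculation behind such a table is simple enough to script. The sketch below uses illustrative numbers patterned after Table 13-1 (an observed rate of roughly 111/1,000 among retained patients) and recomputes the incidence that would have been observed under different assumptions about those lost to followup; exact figures depend on rounding.

```python
def corrected_incidence(n_total, fraction_lost, observed_rate, multiplier):
    """Incidence per 1,000 after assuming the rate among those lost to followup
    is `multiplier` times the rate observed among those retained."""
    n_lost = n_total * fraction_lost
    n_retained = n_total - n_lost
    events = n_retained * observed_rate + n_lost * observed_rate * multiplier
    return 1000 * events / n_total

for fraction_lost in (0.10, 0.30):
    for mult in (0.5, 1, 2, 5):
        rate = corrected_incidence(1000, fraction_lost,
                                   observed_rate=0.1111, multiplier=mult)
        print(f"{fraction_lost:.0%} lost, rate among lost = {mult}x observed: "
              f"{rate:.0f}/1,000")
```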

5.4. Missing Dataiii

The intent of any analysis is to make valid inferences from the data. Missing data can threaten this goal both by reducing the information yield of the study and, in many cases, by introducing bias. Understanding the types of and reasons for missing data can help guide the selection of the most appropriate analytical strategy for handling the missing data, or the potential bias that may be introduced by such missing data. Typically, missing data result from item nonresponse, left truncation, and right censoring. The concepts below apply to both registry data and secondary data sources, with the exception that data may be missing from secondary data sources simply because those data elements were never intended to be collected; for example, a patient’s race is typically not reported in administrative claims data.

5.4.1. Reasons for Missing Data

5.4.1.1. Item Nonresponse

Item nonresponse, which occurs when a participant completes a case report form (CRF) or survey without providing a response for one or more of the data elements, may be the most common reason for missing data. As discussed in Chapter 11, CRFs may incorporate checks to ensure that complete, valid data are entered. These checks may prevent CRFs from being marked as complete if data are missing. However, item nonresponse may still occur, either because CRFs are not marked as complete or because some data elements are not considered critical to the study objectives. For example, a recent analysis of the characteristics of missing data in three patient registries found that 71 percent of patients in one registry were missing data for body mass index (BMI), an optional field.73 Item nonresponse also occurs when patients complete PROs using paper forms and leave some fields blank or enter illegible data.

5.4.1.2. Threats From the Left: Truncation

The issue of left truncation, a form of selection bias, arises when events of interest occur prior to a patient’s enrollment and (typically) pre-empt enrollment in the registry. Applebaum et al. define left truncation as occurring “when subjects who otherwise meet entry criteria do not remain observable for a later start of followup.”74 For example, in a study of miscarriage that enrolls pregnant women, some patients will be left truncated because “an unknown proportion of the source population experiences losses prior to enrollment.”75 Thus, left truncation results in data missing from the observed cohort due to non-enrollment, so that the study sample does not accurately reflect the underlying target population, in this example, pregnant women at risk for miscarriage.

A related bias can be introduced due to entry of already-exposed individuals into a registry or other data source. Consider, for example, a registry designed to study disease progression over several years in patients with a rare disease. Ideally, the registry would enroll only patients at the time of diagnosis, with the goal of collecting detailed baseline and diagnostic information for all patients. However, limiting the registry enrollment to only those newly diagnosed patients would reduce the sample size significantly, and, in the case of a rare disease, likely render the registry infeasible. To enroll sufficient patients, the registry may include both existing (prevalent) patients and newly diagnosed (incident) patients. This enrollment strategy, while practical, has the potential to introduce significant bias for numerous reasons, including under-ascertainment of early events. Examples of the latter include venous thromboembolism risk in women taking third-generation oral contraceptives relative to earlier products, falls after initiating benzodiazepines, and peptic ulcers among users of nonsteroidal anti-inflammatory drugs (NSAIDs).66

The concept of “baseline” will be different for patients who are newly diagnosed versus those with an existing diagnosis at the time of enrollment, and comparisons of symptoms, treatment effectiveness, and disease progression would need to account for these differences. In particular, the patients with existing diagnoses may be missing information on symptoms at diagnosis or on other tests or procedures related to their diagnosis that occurred prior to study enrollment.76 Ray gives an overview of this issue in the context of medication effects, suggesting that focusing on new users (or newly exposed people, generally) is a strategy that can minimize bias and should be considered whenever logistically feasible.66
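As a hedged illustration of the new-user restriction Ray describes, the sketch below filters hypothetical dispensing records to patients whose first observed fill is preceded by a year of enrollment with no use; the file name and column names are assumptions made for illustration only.

```python
import pandas as pd

# Hypothetical dispensing records, one row per fill (file and columns assumed)
rx = pd.read_csv("dispensings.csv",
                 parse_dates=["dispense_date", "enrollment_start"])

WASHOUT_DAYS = 365  # require a year of observable history with no prior use

# First observed fill per patient
first_fill = (rx.sort_values("dispense_date")
                .drop_duplicates("patient_id", keep="first"))

# A patient qualifies as a new user if the first observed fill occurs at least
# WASHOUT_DAYS after the start of continuous enrollment, i.e., there is enough
# lookback to be reasonably confident there was no earlier, unobserved use.
lookback = (first_fill["dispense_date"] - first_fill["enrollment_start"]).dt.days
new_users = first_fill[lookback >= WASHOUT_DAYS]
print(f"{len(new_users)} of {len(first_fill)} patients qualify as new users")
```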

5.4.1.3. Threats From the Right: Loss to Followup, Censoring, Competing Risks

Loss to followup and right censoring occur when information is missing during the study period or at the conclusion rather than the inception of the registry. In studies that collect long-term followup data, participants may be lost to followup if they formally withdraw from the registry or simply stop completing surveys or coming for scheduled visits. Attrition of this nature occurs for many reasons, including factors both related to the study objectives (e.g., the participant becomes too ill to complete study visits) and unrelated (e.g., the participant moves or changes his/her email address without notifying study staff). Broadly speaking, if the attrition is associated with the study outcomes, it introduces a form of selection bias into the registry that must be described and accounted for in analyses to the extent possible (known as informative censoring in the context of randomized clinical trials).77 Whether it introduces bias or not, loss to followup can limit the ability of the registry to examine long-term outcomes and can have an impact on statistical power. Registries that aim to collect long-term followup data are encouraged to develop retention targets, actively monitor retention against those targets, and take proactive measures to minimize loss to followup, as needed. Strategies to retain participants and minimize loss to followup are discussed extensively in Chapters 3, 10, and 11.

A concept related to loss to followup is administrative right censoring, which occurs when the registry ends before an outcome of interest has occurred for all subjects (which is typically the case). This is especially common in pregnancy registries, which are designed to assess outcomes of pregnancies during which the mother (or, in some cases, the father) was exposed to medical products. Pregnancy registries typically collect information on congenital defects that are ascertained at birth or shortly after birth (e.g., 30-day followup or, often at most, one year), but are not designed to detect defects or developmental delays that are diagnosed later in life.78 Right censoring occurs in other types of registries as well. For example, a registry designed to study the effectiveness of a cancer treatment may conduct survival analyses after following patients for five years. Some patients will have died during that period, and their survival after treatment will be known. However, for patients who are still alive at the conclusion of the study, survival after treatment will be right censored due to the close of the registry. In general, missing data due to administrative right censoring will not introduce bias in analysis, but bias is possible if there are strong temporal trends in risk of the outcome.

Finally, competing risks must be considered. A competing risk is an event that prevents the outcome or outcomes of interest not merely from being observed, but from happening in the first place. For example, in a study of incidence of heart attack, death from any cause other than heart attack prevents incident heart attack from occurring; in a study of breast cancer, preventive double mastectomy likewise may be considered a competing risk for breast cancer. Competing risks can lead to missing data in certain settings. For example, a study may be interested in the risk of breast cancer in all individuals, including those who, because of beliefs about their personal risk, elected a preventive mastectomy; the breast cancer status that these women would have had, had they not undergone mastectomy, can be regarded as a variety of missing data. In other cases, competing risks do not lead to such clear instances of missing data. See Lau et al. for a more involved discussion of competing risks and missing data, as well as analytic approaches.79

5.4.1.4. Data Gaps in Secondary Data Sources

Item nonresponse, left truncation, and right censoring are specific examples of the more general problem of data gaps in secondary data sources. While registries collect data continuously, secondary data sources may only pertain to a particular subgroup of a larger population, and membership in that subgroup may be dynamic. Examples include individuals covered by Medicaid and members enrolled in managed care plans. In both examples, the databases pertain to participants in a health insurance program, and membership in those programs can change frequently. Data are collected only while the participants are members. If membership is lost and restored again later, there will be a data gap. Importantly, membership in these plans might be related to other characteristics that affect health, such as socioeconomic status or employment.80 Similar problems can arise when there are gaps in residency and the database is based on national healthcare data, or when individuals have health insurance from more than one source.

Data gaps in secondary data sources can also arise when medications are dispensed in the hospital, since many databases do not capture in-hospital medication use, leading to a form of information bias. In drug safety studies examining mortality risk related to the use of a particular medication, missing in-hospital medication use can result in spurious estimates of treatment effects.81 This bias was illustrated in a case-control study examining mortality risk related to inhaled corticosteroid use from the Saskatchewan, Canada, database. Analyses that failed to account for missed corticosteroid use during hospitalization events preceding death or the matched date for controls showed a beneficial effect (RR=0.6; 95% CI, 0.5 to 0.73). The RR estimates changed markedly once the missing in-hospital corticosteroid use was included (RR=0.93; 95% CI, 0.76 to 1.14 and RR=1.35; 95% CI, 1.14 to 1.60).81 This bias has also been observed in studies of injectable medications in dialysis patients where hospitalization events preceding death resulted in spuriously low effect estimates.82

5.4.2. Analytic Implications and Management Strategies for Missing Data

When considering the potential impact of missing data on the study findings, it may be helpful to consider whether the data are missing largely at random or for systematic reasons (e.g., followup is provided outside the health system or by other providers). The type, pattern, and amount of missing data will help guide the selection of appropriate management strategies for an analysis.

5.4.2.1. Complete Case Strategy

The complete case strategy limits the analysis to patients with complete information for all variables. Simple deletion of all incomplete observations, however, is not appropriate or efficient in all circumstances, and it introduces significant bias if the deleted cases differ substantively from the retained, complete cases (i.e., when data are not missing completely at random). For this reason, the complete case strategy is inefficient and not generally used. For example, patients with diabetes who were hospitalized because of inadequate glucose control might not return for a scheduled followup visit at which HbA1c was to be measured. Those missing values for HbA1c would probably differ from the measured values because of the reason for which they were missing. Similarly, the availability of the results of certain tests or measurements may depend on what is covered by patients’ health insurance (a known value), since registries do not typically pay for testing. Patients without a particular measurement may still contribute meaningfully to the analysis. To include patients with missing data, one of several imputation techniques may be used to estimate the missing values.

5.4.2.2. Single and Multiple Imputation

Unlike complete case analysis, imputation methods retain patients with missing data in the analysis. These methods replace missing observations with values predicted in some manner, often from a model.

Single imputation can be useful when dates are partially missing. For example, if the day element of an adverse event (AE) date is missing and is not retrievable, consideration can be given to imputing the missing day as the middle (e.g., 15th) day of the month. The imputed date would need to be constrained by underlying study considerations (for example, the date must fall on or after the study enrollment date and on or before the study discontinuation date). If treatment changes during that month, the relationship with the timing of the treatment change should be considered when choosing the appropriate imputation method. This issue is closely related to that of interval censoring.
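A minimal sketch of this rule, assuming the year and month are known and only the day element is missing; the constraint handling mirrors the considerations above.

```python
from datetime import date

def impute_partial_ae_date(year, month, day, enroll_date, discont_date=None):
    """Impute a missing day element as mid-month (the 15th), then constrain
    the result to the patient's observation window."""
    d = date(year, month, day if day is not None else 15)
    if d < enroll_date:
        d = enroll_date            # an AE cannot precede study enrollment
    if discont_date is not None and d > discont_date:
        d = discont_date           # nor fall after study discontinuation
    return d

# Example: AE reported in March 2020, day unknown; patient enrolled March 20, 2020
print(impute_partial_ae_date(2020, 3, None, enroll_date=date(2020, 3, 20)))
# -> 2020-03-20 (the mid-month default of the 15th would precede enrollment)
```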

In general, however, use of multiple imputation methods is strongly preferred to single imputation. In multiple imputation, multiple datasets are produced with different values imputed for each missing variable per dataset, thus reflecting the uncertainty around the true values of the missing variables. The multiple values may be derived from the posterior probability distributions for the missing values.83 As noted, the result of a multiple imputation process is multiple complete datasets for analysis, from which a single summary finding is estimated. Standard errors are obtained by combining the between-imputation variance and the within-imputation standard errors, using Rubin’s rules for imputation.84 If data are missing at random (MAR), multiple imputation will generally produce unbiased results if the model includes the correct set of covariates, and, unlike single imputation, it will propagate error correctly. In the presence of data that are missing systematically (i.e., missing not at random), multiple imputation in general cannot fully correct any bias due to missing data.
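The pooling step is mechanical. Below is a minimal sketch of Rubin’s rules applied to hypothetical per-imputation estimates; in practice, the estimates would come from fitting the same model to each imputed dataset.

```python
import numpy as np

def pool_rubin(estimates, std_errors):
    """Combine estimates from m imputed datasets using Rubin's rules."""
    est = np.asarray(estimates, dtype=float)
    se = np.asarray(std_errors, dtype=float)
    m = len(est)
    q_bar = est.mean()                 # pooled point estimate
    w = np.mean(se ** 2)               # within-imputation variance
    b = est.var(ddof=1)                # between-imputation variance
    total_var = w + (1 + 1 / m) * b    # total variance per Rubin's rules
    return q_bar, np.sqrt(total_var)

# Hypothetical log hazard ratios and standard errors from m = 5 imputed datasets
pooled, pooled_se = pool_rubin([0.42, 0.45, 0.40, 0.47, 0.43],
                               [0.10, 0.11, 0.10, 0.12, 0.10])
print(f"pooled estimate = {pooled:.3f}, pooled SE = {pooled_se:.3f}")
```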

5.4.2.3. Maximum Likelihood Methods

Maximum likelihood estimation (MLE) is an analytic maximization procedure that provides the values of the model parameters that maximize the sample likelihood, i.e., the values that make the observed data “most probable.” MLE has the advantage of using all available data and does not require data to be sorted by a fixed number of study visits. Under the assumption that data are missing at random, MLE is efficient and provides unbiased estimates. Calculating maximum likelihood estimates and fitting them into regression models for statistical inference often requires specialized software, especially when data are missing for predictor variables. This remains a challenge, although a growing number of statistical packages now include these capabilities. When data are missing for dependent variables only, likelihood-based methods, including the well-known mixed models for repeated measurements, can be used for analyzing data with monotone or non-monotone missingness patterns.85

In observational research, not all studies have meaningful “visits,” or have data collected at a given visit for all subjects. For example, it is quite common for a subject to have unscheduled visits, and a lab test can be “missing” simply because the test was not ordered by the physician. Longitudinal data of this nature can be analyzed with the random intercepts model.86 In this model, each subject is assumed to have a random effect, which follows a normal distribution. The time variable can be modeled as a random effect, a fixed effect, or both.87 The model has the flexibility of allowing linear, quadratic, or other forms of the time effect, and inclusion of interaction effects between other covariates and time as fixed effects. If the repeated measures are for a binary dependent variable or count data, the above-mentioned benefits can be obtained by fitting the generalized linear mixed effects model, which also assumes MAR.
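As one hedged example, a random-intercept model of this kind can be fit with the mixed linear model routine in the statsmodels package; the file and column names below are assumptions made for illustration.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per subject-visit, with irregular
# visit timing (file and column names are assumed for illustration)
df = pd.read_csv("labs_long.csv")  # columns: subject_id, months, treatment, hba1c

# Random intercept per subject; time and its interaction with treatment enter
# as fixed effects, allowing trajectories to differ by treatment
model = smf.mixedlm("hba1c ~ months * treatment",
                    data=df,
                    groups=df["subject_id"],
                    re_formula="~1")  # "~months" would add a random slope for time
result = model.fit()
print(result.summary())
```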

5.4.2.4. Sensitivity Analyses

In addition to the above approaches, “scenario-based” sensitivity analyses should be considered for missing data. Investigators can identify “worst case” scenarios for the missing data: for missing outcomes, one such scenario might be to assume that all exposed patients with missing outcomes had events while all unexposed patients with missing outcomes did not (or vice versa). Such scenario-based approaches can help set boundaries on the effect size in ways that are useful for contextualizing the main results. However, since scenario-based analyses are by their nature specific to the data and situation under study, it is important to consider carefully which scenarios are of most substantive relevance to the study question at hand. Ideally, sensitivity analyses using different analytic approaches for missing data should be pre-specified in the protocol or a separate data analysis plan, rather than conducted post hoc.
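A minimal sketch of such a worst-case calculation for a risk difference, using hypothetical counts: each bound assumes all missing outcomes in one group are events and all in the other are nonevents.

```python
def risk_bounds(events, nonevents, missing):
    """Lowest and highest possible risks given `missing` unknown outcomes."""
    n = events + nonevents + missing
    return events / n, (events + missing) / n

# Hypothetical counts per exposure group
exp_lo, exp_hi = risk_bounds(events=30, nonevents=150, missing=20)
unexp_lo, unexp_hi = risk_bounds(events=20, nonevents=170, missing=10)

# The two extreme scenarios bound the risk difference attainable under any fill-in
print(f"risk difference lies between {exp_lo - unexp_hi:.3f} "
      f"and {exp_hi - unexp_lo:.3f}")
```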

Readers interested in learning more about methods for handling missing data and the potential for bias are directed to other useful resources by Greenland,70 Greenland and Finkle,88 Hernán and Robins,89 Hernán and colleagues,40 Daniel and colleagues,77 Westreich,90 and Lash, Fox, and Fink.2 It is important to keep in mind that the impact of data completeness will differ, depending on the extent of missing data and the intended use of the registry. It may be less problematic with regard to descriptive research than research intended to support decision making. For all registry-based studies, it is important to have a strategy for how to identify and handle missing data as well as how to explicitly report on data completeness to facilitate interpretation of study results. (See Reporting section below.)

5.5. Machine Learning and Natural Language Processing

Machine learning (ML) uses computer algorithms to identify patterns in large datasets with a multitude of variables. It has emerged as a highly effective method for prediction and decision making in many disciplines, including natural language processing (NLP), prediction of patient outcomes, and identification of eligible patients for registry analyses.91–94

NLP describes the computerized approach to analyzing text,95 for example, doctors’ notes about patients. With the increasing adoption of electronic medical record (EMR) systems, more of patients’ text data are becoming electronic and therefore available for computer processing. However, the data in medical narrative documents are unstructured and cannot be directly utilized for analyses. Two basic approaches have been used in attempts to provide structure to the text reports that clinicians create. The first avoids the creation of unstructured data altogether through a form-based user interface in which the clinician marks appropriate concepts and values from a list of possible entries. This approach has been criticized by clinicians because of its frequent inability to capture many of the nuances represented in free-text documentation, as well as the increased time it requires. The other approach is to use NLP to perform information retrieval on narrative medical documents and to convert the data in those documents into a form suitable for various applications. A medical NLP system is one that is applied to processing clinical documents (such as radiology or pathology reports, or discharge summaries) produced during the actual clinical care of a patient.95

ML methods have potential pitfalls, including overfitting96 and training on poor-quality or unrepresentative sample data. Thus, results from ML need to be rigorously validated. A common and important validation strategy is cross-validation, in which a model is fitted using only a portion of the sample data and then applied to another portion of the data to test performance. Ideally, a model will perform equally well on both portions of the data; if it does not, the model has likely been overfit. Another validation approach is manual review of patients’ records, which is commonly used to validate NLP.97
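A brief sketch of k-fold cross-validation using scikit-learn, with simulated data standing in for registry features; the held-out folds provide the performance check described above.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Simulated stand-in for a patient-level feature matrix and binary outcome
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

model = LogisticRegression(max_iter=1000)

# 5-fold cross-validation: fit on four fifths of the data, score the held-out fifth
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"AUC by fold: {np.round(scores, 3)}; mean = {scores.mean():.3f}")
# Markedly worse held-out performance than training performance suggests overfitting
```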

6. Interpretation and Reporting of Registry Data

Proper interpretation of registry data is grounded in a strong understanding of the strengths and limitations of the registry methods, including the analyses. Interpretation should also be tempered, in part, by whether the analyses are attempting to confirm or refute other studies or whether the registry is providing first reports. If the purpose of the registry is explicit, the actual population studied is reasonably representative of the target population, the data have been curated to enhance quality, and the analyses were performed so as to reduce potential biases, then the interpretation of the registry data should allow a realistic picture of the quality of medical care, the natural history of the disease studied, or the safety, effectiveness, or value of a clinical evaluation. Each of these topics needs to be discussed in the interpretation of the registry data, and potential shortcomings should be explored.

In interpreting the findings, the precision of the estimated effect measure from the study should be discussed. Confidence intervals are important tools that provide a range of effect measures consistent with the study findings. Statistical significance alone does not determine the clinical importance of the findings, because some registries include large amounts of healthcare data in which very small effect measures can be statistically significant. Consideration should also be given to the clinical significance of the effect estimates and to potential biases.98

Assumptions or biases that could have influenced the outcomes of the analyses should be highlighted and separated from those that do not affect the interpretation of the registry results. Documenting how data were collected and coded, how complete the data were, and how missing data were addressed is important when reporting registry findings, both to provide transparency and to allow readers to interpret the findings accurately. To this end, three useful guidelines are the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement,99 the Patient-Centered Outcomes Research Institute (PCORI) Methodology Report,100 and the GRACE checklist for observational studies of comparative effectiveness.101

7. Summary

In summary, a meaningful analysis requires careful consideration of study design features and the nature of the data collected. Most standard epidemiologic analytic methods can be applied, and there is no one-size-fits-all approach for registry-based research. Efforts should be made to carefully evaluate the presence of biases and to control for identified potential biases during data analysis. This requires close collaboration among clinicians, epidemiologists, statisticians, study coordinators, and others involved in the design, conduct, and interpretation of the study.

Footnotes

i

Adapted from Lash TL, Sorensen HT, Bradbury BD, et al. ‘Analysis of Linked Registry Datasets.’ In: Gliklich R, Dreyer N, Leavy M, eds. Registries for Evaluating Patient Outcomes: A User’s Guide. Third edition. Two volumes. (Prepared by the Outcome DEcIDE Center [Outcome Sciences, Inc., a Quintiles company] under Contract No. 290 2005 00351 TO7.) AHRQ Publication No. 13(14)-EHC111. Rockville, MD: Agency for Healthcare Research and Quality. April 2014.

ii

Adapted from Lash TL, Sorensen HT, Bradbury BD, et al. ‘Analysis of Linked Registry Datasets.’ In: Gliklich R, Dreyer N, Leavy M, eds. Registries for Evaluating Patient Outcomes: A User’s Guide. Third edition. Two volumes. (Prepared by the Outcome DEcIDE Center [Outcome Sciences, Inc., a Quintiles company] under Contract No. 290 2005 00351 TO7.) AHRQ Publication No. 13(14)-EHC111. Rockville, MD: Agency for Healthcare Research and Quality. April 2014.

iii

Adapted from Mack C, Su Z, Westreich D. Managing Missing Patient Data in Patient Registries. White Paper, addendum to Registries for Evaluating Patient Outcomes: A User’s Guide, Third Edition. (Prepared by L&M Policy Research, LLC, under Contract No. 290-2014-00004-C.) AHRQ Publication No. 17(18)-EHC015-EF. Rockville, MD: Agency for Healthcare Research and Quality; February 2018. [PubMed: 29671990]

References for Chapter 13

1.
Rothman K, Greenland S, Lash TL. Validity in Epidemiologic Studies. In: Rothman K, Greenland S, Lash TL, editors. Modern Epidemiology. 3rd ed. Philadelphia: Lippincott Williams & Wilkins; 2008. p. 128–47.
2.
Lash TL, Fox MP, Fink AK. Applying quantitative bias analysis to epidemiologic data: Springer; 2009.
3.
Rothman K, Greenland S, Poole C, et al. Causation and Causal Inference. In: Rothman K, Greenland S, Lash TL, editors. Modern Epidemiology. 3rd ed. Philadelphia: Lippincott Williams & Wilkins; 2008. p. 5–31.
4.
Rothman K, Greenland S, Lash TL. Case-Control Studies. In: Rothman K, Greenland S, Lash TL, editors. Modern Epidemiology. 3rd ed. Philadelphia: Lippincott Williams & Wilkins; 2008. p. 111–27.
5.
Sedrakyan A, Marinac-Dabic D, Normand SL, et al. A framework for evidence evaluation and methodological issues in implantable device studies. Med Care. 2010;48:(6 Suppl):S121–8. PMID: 20421824. DOI: 10.1097/MLR.0b013e3181d991c4. [PubMed: 20421824] [CrossRef]
6.
Berger ML, Sox H, Willke RJ, et al. Good practices for real-world data studies of treatment and/or comparative effectiveness: Recommendations from the joint ISPOR-ISPE Special Task Force on real-world evidence in health care decision making. Pharmacoepidemiol Drug Saf. 2017;26(9):1033–9. PMID: 28913966. DOI: 10.1002/pds.4297. [PMC free article: PMC5639372] [PubMed: 28913966] [CrossRef]
7.
Editors. The registration of observational studies--when metaphors go bad. Epidemiology. 2010;21(5):607–9. PMID: 20657291. DOI: 10.1097/EDE.0b013e3181eafbcf. [PubMed: 20657291] [CrossRef]
8.
Poole C. A vision of accessible epidemiology. Epidemiology. 2010;21(5):616–8. PMID: 20657293. DOI: 10.1097/EDE.0b013e3181e9be3f. [PubMed: 20657293] [CrossRef]
9.
Lash TL. Preregistration of study protocols is unlikely to improve the yield from our science, but other strategies might. Epidemiology. 2010;21(5):612–3. PMID: 20657295. DOI: 10.1097/EDE.0b013e3181e9bba6. [PubMed: 20657295] [CrossRef]
10.
Lash TL, Collin LJ, Van Dyke ME. The Replication Crisis in Epidemiology: Snowball, Snow Job, or Winter Solstice? Curr Epidemiol Rep. 2018;5(2):175–83. [PMC free article: PMC8075285] [PubMed: 33907664]
11.
Greenland S, Rothman K. Measures of Occurrence. In: Rothman K, Greenland S, Lash TL, editors. Modern Epidemiology. 3rd ed. Philadelphia: Lippincott Williams & Wilkins; 2008. p. 32–50.
12.
National Cancer Institute. Surveillance Epidemiology and End Results. https://seer.cancer.gov/. Accessed June 10, 2019.
13.
Reeves MJ, Fonarow GC, Smith EE, et al. Representativeness of the Get With The Guidelines-Stroke Registry: comparison of patient and hospital characteristics among Medicare beneficiaries hospitalized with ischemic stroke. Stroke. 2012;43(1):44–9. PMID: 21980197. DOI: 10.1161/STROKEAHA.111.626978. [PubMed: 21980197] [CrossRef]
14.
Rothman KJ, Gallacher JE, Hatch EE. Why representativeness should be avoided. Int J Epidemiol. 2013;42(4):1012–4. PMID: 24062287. DOI: 10.1093/ije/dys223. [PMC free article: PMC3888189] [PubMed: 24062287] [CrossRef]
15.
Schneeweiss S, Gagne JJ, Glynn RJ, et al. Assessing the comparative effectiveness of newly marketed medications: methodological challenges and implications for drug development. Clin Pharmacol Ther. 2011;90(6):777–90. PMID: 22048230. DOI: 10.1038/clpt.2011.235. [PubMed: 22048230] [CrossRef]
16.
Suissa S. Effectiveness of inhaled corticosteroids in chronic obstructive pulmonary disease: immortal time bias in observational studies. Am J Respir Crit Care Med. 2003;168(1):49–53. PMID: 12663327. DOI: 10.1164/rccm.200210-1231OC. [PubMed: 12663327] [CrossRef]
17.
Rothman K, Greenland S, Lash TL. Design strategies to improve study accuracy. In: Rothman K, Greenland S, Lash TL, editors. Modern Epidemiology. 3rd ed. Philadelphia: Lippincott Williams & Wilkins; 2008. p. 162–82.
18.
Rothman K, Greenland S. Modern Epidemiology. 2nd ed. Philadelphia: Lippincott Williams & Wilkins; 1998.
19.
Dreyer NA. Advancing a Framework for Regulatory Use of Real-World Evidence: When Real Is Reliable. Ther Innov Regul Sci. 2018;52(3):362–8. PMID: 29714575. DOI: 10.1177/2168479018763591. [PMC free article: PMC5944086] [PubMed: 29714575] [CrossRef]
20.
U.S. Food and Drug Administration. Office of Surveillance and Epidemiology, Center for Drug Evaluation and Research. Standards for Data Management and Analytic Processes in the Office of Surveillance and Epidemiology (OSE). March 3, 2008. https://www.fda.gov/media/72761/download. Accessed June 10, 2019.
21.
Greenland S, Lash TL. Bias Analysis. In: Rothman K, Greenland S, Lash TL, editors. Modern Epidemiology. 3rd ed. Philadelphia: Lippincott Williams & Wilkins; 2008. p. 345–80.
22.
Sorensen HT, Riis AH, Lash TL, et al. Statin use and risk of amyotrophic lateral sclerosis and other motor neuron disorders. Circ Cardiovasc Qual Outcomes. 2010;3(4):413–7. PMID: 20530788. DOI: 10.1161/CIRCOUTCOMES.110.936278 [PubMed: 20530788] [CrossRef]
23.
Anderson IB, Sorensen TI, Prener A. Increase in incidence of disease due to diagnostic drift: primary liver cancer in Denmark, 1943–85. BMJ. 1991;302(6774):437–40. PMID: 2004170. DOI: 10.1136/bmj.302.6774.437. [PMC free article: PMC1669338] [PubMed: 2004170] [CrossRef]
24.
Lash TL, Johansen MB, Christensen S, et al. Hospitalization rates and survival associated with COPD: a nationwide Danish cohort study. Lung. 2011;189(1):27–35. PMID: 21170722. DOI: 10.1007/s00408-010-9274-z. [PubMed: 21170722] [CrossRef]
25.
Rodriguez LA, Perez-Gutthann S, Jick SS. The UK General Practice Research Database. In: Strom BL, editor. Pharmacopepidemiology. 3rd ed. Chichester, UK: John Wiley & Sons, LTD; 2000. p. 375–85.
26.
Greenland S, Schwartzbaum JA, Finkle WD. Problems due to small samples and sparse data in conditional logistic regression analysis. Am J Epidemiol. 2000;151(5):531–9. PMID: 10707923. DOI: 10.1093/oxfordjournals.aje.a010240. [PubMed: 10707923] [CrossRef]
27.
Sturmer T, Jonsson Funk M, Poole C, et al. Nonexperimental comparative effectiveness research using linked healthcare databases. Epidemiology. 2011;22(3):298–301. PMID: 21464649. DOI: 10.1097/EDE.0b013e318212640c. [PMC free article: PMC4012640] [PubMed: 21464649] [CrossRef]
28.
Greenland S. Randomization, statistics, and causal inference. Epidemiology. 1990;1(6):421–9. PMID: 2090279. [PubMed: 2090279]
29.
Lash TL, Schmidt M, Jensen AO, et al. Methods to apply probabilistic bias analysis to summary estimates of association. Pharmacoepidemiol Drug Saf. 2010;19(6):638–44. PMID: 20535760. DOI: 10.1002/pds.1938. [PubMed: 20535760] [CrossRef]
30.
Fink AK, Lash TL. A null association between smoking during pregnancy and breast cancer using Massachusetts registry data (United States). Cancer Causes Control. 2003;14(5):497–503. PMID: 12946045. [PubMed: 12946045]
31.
Lash TL, Fox MP, Thwin SS, et al. Using probabilistic corrections to account for abstractor agreement in medical record reviews. Am J Epidemiol. 2007;165(12):1454–61. PMID: 17406006. DOI: 10.1093/aje/kwm034. [PubMed: 17406006] [CrossRef]
32.
Kleinbaum DG, Kupper LL, Miller KE, et al. Applied regression analysis and other multivariable methods. Belmont, CA: Duxbury Press; 1998.
33.
Hennekens CH, Buring JE, Mayrent SL. Epidemiology in medicine. 1st ed. Boston: Little, Brown and Company; 1987.
34.
Aschengrau A, Seage G. Essentials of epidemiology in public health. 2003.
35.
Rosner B. Fundamentals of biostatistics. 5th ed. Boston: Duxbury Press; 2000.
36.
Salas M, Hofman A, Stricker BH. Confounding by indication: an example of variation in the use of epidemiologic terminology. Am J Epidemiol. 1999;149(11):981–3. PMID: 10355372. DOI: 10.1093/oxfordjournals.aje.a009758. [PubMed: 10355372] [CrossRef]
37.
Petri H, Urquhart J. Channeling bias in the interpretation of drug effects. Stat Med. 1991;10(4):577–81. PMID: 2057656. [PubMed: 2057656]
38.
Higgins J, Green S. The Cochrane Collaboration. The Cochrane handbook for systematic reviews of interventions. 2006. http://www.cochrane.org/sites/default/files/uploads/Handbook4.2.6Sep2006.pdf. Accessed August 15, 2012.
39.
Swihart BJ, Caffo B, James BD, et al. Lasagna plots: a saucy alternative to spaghetti plots. Epidemiology. 2010;21(5):621–5. PMID: 20699681. DOI: 10.1097/EDE.0b013e3181e5b06a. [PMC free article: PMC2937254] [PubMed: 20699681] [CrossRef]
40.
Hernan MA, Hernandez-Diaz S, Werler MM, et al. Causal knowledge as a prerequisite for confounding evaluation: an application to birth defects epidemiology. Am J Epidemiol. 2002;155(2):176–84. PMID: 11790682. DOI: 10.1093/aje/155.2.176. [PubMed: 11790682] [CrossRef]
41.
Mangano DT, Tudor IC, Dietzel C, et al. The risk associated with aprotinin in cardiac surgery. N Engl J Med. 2006;354(4):353–65. PMID: 16436767. DOI: 10.1056/NEJMoa051379. [PubMed: 16436767] [CrossRef]
42.
Cepeda MS, Boston R, Farrar JT, et al. Comparison of logistic regression versus propensity score when the number of events is low and there are multiple confounders. Am J Epidemiol. 2003;158(3):280–7. PMID: 12882951. DOI: 10.1093/aje/kwg115. [PubMed: 12882951] [CrossRef]
43.
Sturmer T, Joshi M, Glynn RJ, et al. A review of the application of propensity score methods yielded increasing use, advantages in specific settings, but not substantially different estimates compared with conventional multivariable methods. J Clin Epidemiol. 2006;59(5):437–47. PMID: 16632131. DOI: 10.1016/j.jclinepi.2005.07.004. [PMC free article: PMC1448214] [PubMed: 16632131] [CrossRef]
44.
Glynn RJ, Schneeweiss S, Sturmer T. Indications for propensity scores and review of their use in pharmacoepidemiology. Basic Clin Pharmacol Toxicol. 2006;98(3):253–9. PMID: 16611199. DOI: 10.1111/j.1742-7843.2006.pto_293.x. [PMC free article: PMC1790968] [PubMed: 16611199] [CrossRef]
45.
Schneeweiss S, Rassen JA, Glynn RJ, et al. High-dimensional propensity score adjustment in studies of treatment effects using health care claims data. Epidemiology. 2009;20(4):512–22. PMID: 19487948. DOI: 10.1097/EDE.0b013e3181a663cc. [PMC free article: PMC3077219] [PubMed: 19487948] [CrossRef]
46.
Reeve BB, Potosky AL, Smith AW, et al. Impact of cancer on health-related quality of life of older Americans. J Natl Cancer Inst. 2009;101(12):860–8. PMID: 19509357. DOI: 10.1093/jnci/djp123. [PMC free article: PMC2720781] [PubMed: 19509357] [CrossRef]
47.
Brodie BR, Stuckey T, Downey W, et al. Outcomes with drug-eluting stents versus bare metal stents in acute ST-elevation myocardial infarction: results from the Strategic Transcatheter Evaluation of New Therapies (STENT) Group. Catheter Cardiovasc Interv. 2008;72(7):893–900. PMID: 19016465. DOI: 10.1002/ccd.21767. [PubMed: 19016465] [CrossRef]
48.
Shuhaiber JH, Kim JB, Hur K, et al. Survival of primary and repeat lung transplantation in the United States. Ann Thorac Surg. 2009;87(1):261–6. PMID: 19101309. DOI: 10.1016/j.athoracsur.2008.10.031. [PubMed: 19101309] [CrossRef]
49.
Grabowski GA, Kacena K, Cole JA, et al. Dose-response relationships for enzyme replacement therapy with imiglucerase/alglucerase in patients with Gaucher disease type 1. Genet Med. 2009;11(2):92–100. PMID: 19265748. DOI: 10.1097/GIM.0b013e31818e2c19. [PMC free article: PMC3793250] [PubMed: 19265748] [CrossRef]
50.
Hernan MA, Robins JM. Instruments for causal inference: an epidemiologist’s dream? Epidemiology. 2006;17(4):360–72. PMID: 16755261. DOI: 10.1097/01.ede.0000222409.00878.37. [PubMed: 16755261] [CrossRef]
51.
Merlo J, Chaix B, Yang M, et al. A brief conceptual tutorial of multilevel analysis in social epidemiology: linking the statistical concept of clustering to the idea of contextual phenomenon. J Epidemiol Community Health. 2005;59(6):443–9. PMID: 15911637. DOI: 10.1136/jech.2004.023473. [PMC free article: PMC1757045] [PubMed: 15911637] [CrossRef]
52.
Holden JE, Kelley K, Agarwal R. Analyzing change: a primer on multilevel models with applications to nephrology. Am J Nephrol. 2008;28(5):792–801. PMID: 18477842. DOI: 10.1159/000131102. [PMC free article: PMC2613435] [PubMed: 18477842] [CrossRef]
53.
Diez-Roux AV. Multilevel analysis in public health research. Annu Rev Public Health. 2000;21:171–92. PMID: 10884951. DOI: 10.1146/annurev.publhealth.21.1.171. [PubMed: 10884951] [CrossRef]
54.
Leyland AH, Goldstein H. Multilevel modeling of health statistics. Chichester, UK: John Wiley & Sons, LTD; 2001.
55.
Varadhan R, Segal JB, Boyd CM, et al. A framework for the analysis of heterogeneity of treatment effect in patient-centered outcomes research. J Clin Epidemiol. 2013;66(8):818–25. PMID: 23651763. DOI: 10.1016/j.jclinepi.2013.02.009. [PMC free article: PMC4450361] [PubMed: 23651763] [CrossRef]
56.
Palmer AJ. Health economics--what the nephrologist should know. Nephrol Dial Transplant. 2005;20(6):1038–41. PMID: 15840678. DOI: 10.1093/ndt/gfh824. [PubMed: 15840678] [CrossRef]
57.
Neumann PJ. Opportunities and barriers. Using cost-effectiveness analysis to improve health care. New York, NY: Oxford University Press; 2004.
58.
Edejer TT-T, Baltussen R, Adam T, et al. Making choices in health: WHO guide to cost-effectiveness analysis. Geneva: World Health Organization; 2004.
59.
Drummond M, Stoddart G, Torrance G. Methods for the economic evaluation of health care programmes. 3rd ed. New York: Oxford University Press; 2005.
60.
Muennig P. Designing and conducting cost-effectiveness analyses in medicine and health care. New York: John Wiley & Sons, LTD; 2002.
61.
Haddix AC, Teutsch SM, Corso PS. Prevention effectiveness: a guide to decision analysis and economic evaluation. New York: Oxford University Press; 2003.
62.
Gold MR, Siegel JE, Russell LB, et al. Cost-effectiveness in health and medicine: the Report of the Panel on Cost-Effectiveness in Health and Medicine. New York: Oxford University Press; 1996.
63.
Kleinbaum DG, Klein M. Survival analysis: a self-learning text. 2nd ed. New York: Springer; 2005.
64.
Twisk JWR. Applied longitudinal data analysis for epidemiology – a practical guide. Cambridge, UK: Cambridge University Press; 2003.
65.
Newman SC. Biostatistical methods in epidemiology. New York: John Wiley & Sons, LTD; 2001.
66.
Ray WA. Evaluating medication effects outside of clinical trials: new-user designs. Am J Epidemiol. 2003;158(9):915–20. PMID: 14585769. DOI: 10.1093/aje/kwg231. [PubMed: 14585769] [CrossRef]
67.
Suissa S. Immortal time bias in observational studies of drug effects. Pharmacoepidemiol Drug Saf. 2007;16(3):241–9. PMID: 17252614. DOI: 10.1002/pds.1357. [PubMed: 17252614] [CrossRef]
68.
Haynes RB, Sackett DL, Guyatt GH, et al. Clinical epidemiology. 3rd ed. New York: Lippincott Williams & Wilkens; 2005.
69.
Andersen PK, Geskus RB, de Witte T, et al. Competing risks in epidemiology: possibilities and pitfalls. Int J Epidemiol. 2012;41(3):861–70. PMID: 22253319. DOI: 10.1093/ije/dyr213. [PMC free article: PMC3396320] [PubMed: 22253319] [CrossRef]
70.
Greenland S, Pearl J, Robins JM. Causal diagrams for epidemiologic research. Epidemiology. 1999;10(1):37–48. PMID: 9888278. [PubMed: 9888278]
71.
Hernan MA, Hernandez-Diaz S, Robins JM. A structural approach to selection bias. Epidemiology. 2004;15(5):615–25. PMID: 15308962. [PubMed: 15308962]
72.
Bland JM, Altman DG. Survival probabilities (the Kaplan-Meier method). BMJ. 1998;317(7172):1572. PMID: 9836663. DOI: 10.1136/bmj.317.7172.1572. [PMC free article: PMC1114388] [PubMed: 9836663] [CrossRef]
73.
Mendelsohn AB, Dreyer NA, Mattox PW, et al. Characterization of Missing Data in Clinical Registry Studies. Therapeutic Innovation & Regulatory Science. 2015;49(1):146–54. PMID: 30222467. DOI: 10.1177/2168479014532259. [PubMed: 30222467] [CrossRef]
74.
Applebaum KM, Malloy EJ, Eisen EA. Left truncation, susceptibility, and bias in occupational cohort studies. Epidemiology. 2011;22(4):599–606. PMID: 21543985. DOI: 10.1097/EDE.0b013e31821d0879. [PMC free article: PMC4153398] [PubMed: 21543985] [CrossRef]
75.
Howards PP, Hertz-Picciotto I, Poole C. Conditions for bias from differential left truncation. Am J Epidemiol. 2007;165(4):444–52. PMID: 17150983. DOI: 10.1093/aje/kwk027. [PubMed: 17150983] [CrossRef]
76.
Cain KC, Harlow SD, Little RJ, et al. Bias due to left truncation and left censoring in longitudinal studies of developmental and disease processes. Am J Epidemiol. 2011;173(9):1078–84. PMID: 21422059. DOI: 10.1093/aje/kwq481. [PMC free article: PMC3121224] [PubMed: 21422059] [CrossRef]
77.
Daniel RM, Kenward MG, Cousens SN, et al. Using causal diagrams to guide analysis in missing data problems. Stat Methods Med Res. 2012;21(3):243–56. PMID: 21389091. DOI: 10.1177/0962280210394469. [PubMed: 21389091] [CrossRef]
78.
Hernandez-Diaz S, Chambers C, Ephross S, et al. “Pregnancy Registries.” In: Gliklich R, Dreyer N, Leavy M, eds. Registries for Evaluating Patient Outcomes: A User’s Guide. Third edition. Two volumes. (Prepared by the Outcome DEcIDE Center [Outcome Sciences, Inc., a Quintiles company] under Contract No. 290 2005 00351 TO7.) AHRQ Publication No. 13(14)-EHC111. Rockville, MD: Agency for Healthcare Research and Quality. April 2014. http://www.effectivehealthcare.ahrq.gov. [PubMed: 24945055]
79.
Lau B, Cole SR, Gange SJ. Competing risk regression models for epidemiologic data. Am J Epidemiol. 2009;170(2):244–56. PMID: 19494242. DOI: 10.1093/aje/kwp107. [PMC free article: PMC2732996] [PubMed: 19494242] [CrossRef]
80.
Riley GF. Administrative and claims records as sources of health care cost data. Med Care. 2009;47(7 Suppl 1):S51–5. PMID: 19536019. DOI: 10.1097/MLR.0b013e31819c95aa. [PubMed: 19536019] [CrossRef]
81.
Suissa S. Immeasurable time bias in observational studies of drug effects on mortality. Am J Epidemiol. 2008;168(3):329–35. PMID: 18515793. DOI: 10.1093/aje/kwn135. [PubMed: 18515793] [CrossRef]
82.
Bradbury BD, Wang O, Critchlow CW, et al. Exploring relative mortality and epoetin alfa dose among hemodialysis patients. Am J Kidney Dis. 2008;51(1):62–70. PMID: 18155534. DOI: 10.1053/j.ajkd.2007.09.015. [PubMed: 18155534] [CrossRef]
83.
Little RJA, Rubin DB. Statistical analysis with missing data. New York: John Wiley & Sons; 1987.
84.
Rubin D. Multiple Imputation for Nonresponse in Surveys. New York: John Wiley & Sons, LTD; 1987.
85.
A Guide to Planning for Missing Data. In: Clinical Trials with Missing Data. p. 71–129.
86.
Allison PD. Handling Missing Data by Maximum Likelihood. Paper 312-2012, SAS Global Forum 2012. http://www.statisticalhorizons.com/wp-content/uploads/MissingDataByML.pdf. Accessed June 10, 2019.
87.
Loughin TM. SAS® for Mixed Models, 2nd edition Edited by Littell, R. C., Milliken, G. A., Stroup, W. W., Wolfinger, R. D., and Schabenberger, O. Biometrics. 2006;62(4):1273–4. DOI: 10.1111/j.1541-0420.2006.00596_6.x. [CrossRef]
88.
Greenland S, Finkle WD. A critical look at methods for handling missing covariates in epidemiologic regression analyses. Am J Epidemiol. 1995;142(12):1255–64. PMID: 7503045. DOI: 10.1093/oxfordjournals.aje.a117592. [PubMed: 7503045] [CrossRef]
89.
Hernán MA, Robins JM. Causal Inference. Boca Raton: Chapman & Hall/CRC; 2019, forthcoming.
90.
Westreich D. Berkson’s bias, selection bias, and missing data. Epidemiology. 2012;23(1):159–64. PMID: 22081062. DOI: 10.1097/EDE.0b013e31823b6296. [PMC free article: PMC3237868] [PubMed: 22081062] [CrossRef]
91.
Motwani M, Dey D, Berman DS, et al. Machine learning for prediction of all-cause mortality in patients with suspected coronary artery disease: a 5-year multicentre prospective registry analysis. European heart journal. 2017;38(7):500–7. PMID: 27252451. DOI: 10.1093/eurheartj/ehw188. [PMC free article: PMC5897836] [PubMed: 27252451] [CrossRef]
92.
Linden A, Yarnold PR. Combining machine learning and matching techniques to improve causal inference in program evaluation. J Eval Clin Pract. 2016;22(6):864–70. PMID: 27353301. DOI: 10.1111/jep.12592. [PubMed: 27353301] [CrossRef]
93.
Oermann EK, Rubinsteyn A, Ding D, et al. Using a Machine Learning Approach to Predict Outcomes after Radiosurgery for Cerebral Arteriovenous Malformations. Sci Rep. 2016;6:21161. PMID: 26856372. DOI: 10.1038/srep21161. [PMC free article: PMC4746661] [PubMed: 26856372] [CrossRef]
94.
Lotsch J, Ultsch A. Machine learning in pain research. Pain. 2018;159(4):623–30. PMID: 29194126. DOI: 10.1097/j.pain.0000000000001118. [PMC free article: PMC5895117] [PubMed: 29194126] [CrossRef]
95.
Al-Haddad MA, Friedlin J, Kesterson J, et al. Natural language processing for the development of a clinical registry: a validation study in intraductal papillary mucinous neoplasms. HPB (Oxford). 2010;12(10):688–95. PMID: 21083794. DOI: 10.1111/j.1477-2574.2010.00235.x. [PMC free article: PMC3003479] [PubMed: 21083794] [CrossRef]
96.
Glowacki J, Reichoff M. Effective Model Validation using Machine Learning. Milliman White Paper. May 2017. http://us.milliman.com/uploadedFiles/insight/2017/effective-model-validation-machine-learning.pdf. Accessed June 10, 2019.
97.
Jones BE, South BR, Shao Y, et al. Development and Validation of a Natural Language Processing Tool to Identify Patients Treated for Pneumonia across VA Emergency Departments. Appl Clin Inform. 2018;9(1):122–8. PMID: 29466818. DOI: 10.1055/s-0038-1626725. [PMC free article: PMC5821510] [PubMed: 29466818] [CrossRef]
98.
U.S. Food and Drug Administration. Best Practices for Conducting and Reporting Pharmacoepidemiologic Safety Studies Using Electronic Healthcare Data. May 2013. https://www.fda.gov/files/drugs/published/Best-Practices-for-Conducting-and-Reporting-Pharmacoepidemiologic-Safety-Studies-Using-Electronic-Healthcare-Data-Sets.pdf. Accessed June 10, 2019.
99.
von Elm E, Altman DG, Egger M, et al. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement: guidelines for reporting observational studies. Int J Surg. 2014;12(12):1495–9. PMID: 25046131. DOI: 10.1016/j.ijsu.2014.07.013. [PubMed: 25046131] [CrossRef]
100.
Patient-Centered Outcomes Research Institute. PCORI Methodology Standards. April 2018. https://www.pcori.org/sites/default/files/PCORI-Methodology-Standards.pdf. Accessed June 4, 2019.
101.
Dreyer NA, Bryant A, Velentgas P. The GRACE Checklist: A Validated Assessment Tool for High Quality Observational Studies of Comparative Effectiveness. J Manag Care Spec Pharm. 2016;22(10):1107–13. PMID: 27668559. DOI: 10.18553/jmcp.2016.22.10.1107. [PMC free article: PMC10398313] [PubMed: 27668559] [CrossRef]

Case Examples for Chapter 13

Case Example 23. Understanding Baseline Characteristics of Combined Datasets Prior to Analysis

Description: The Kaiser Permanente Anterior Cruciate Ligament Reconstruction (KP ACLR) Registry was established to collect standardized data on ACLR procedures, techniques, graft types, and types of fixation and implants. The objectives of the registry are to identify risk factors that lead to degenerative joint disease, graft failure, and meniscal failure; determine outcomes of various graft types and fixation techniques; describe the epidemiology of ACLR patients; determine and compare procedure incidence rates at participating sites; and provide a framework for future studies tracking ACLR outcomes.
Sponsor: Kaiser Permanente
Year Started: 2005
Year Ended: Ongoing
No. of Sites: 42 surgical centers and 240 surgeons
No. of Patients: >40,000

Challenge

The KP ACLR Registry aimed to collaborate with the Norwegian Ligament Reconstruction Registry on a series of studies to proactively identify patient risk factors as well as surgical practices and techniques associated with poor surgical outcomes. Combining data from these two registries would allow for faster identification of certain risk factors and evaluation of low frequency events.

Proposed Solution

The first step was to compare the patient cohorts of the registries and the surgical practices of the two countries. Aggregate data were shared between the registries in tabular form. Analysis was conducted to identify differences that would be important to consider when making inferences about a population other than that covered by the registry. Commonalities were also identified to determine when inferences could be made from each other’s analysis and when data do not need to be adjusted.

Results

The analysis found that the registries generally had similar distributions of age, gender, preoperative patient-reported knee function, and knee-related quality of life. Differences were observed between the two registries in race, sports performed at the time of injury, time to surgery, graft use, and fixation type. While these differences needed to be accounted for in future analyses of combined datasets from both registries, the results indicated that analyses of the combined datasets were likely to produce findings that could be generalized to a wider population of ACLR patients.

Following this comparison, two hypothesis-driven analyses were conducted, investigating questions using the combined registry datasets.

Key Point

Combining or pooling registry data can be a valuable approach to achieving a larger sample size for data analysis. However, it is important to identify cohort and practice differences and similarities between registries before making generalizations of registry findings to other populations or sharing data for collaboration projects.

Case Example 24. Using Registry Data To Evaluate Outcomes by Practice

Description: The Epidemiologic Study of Cystic Fibrosis (ESCF) Registry was a multicenter, encounter-based, observational, postmarketing study designed to monitor product safety, define clinical practice patterns, explore risks for pulmonary function decline, and facilitate quality improvement for cystic fibrosis (CF) patients. The registry collected comprehensive data on pulmonary function, microbiology, growth, pulmonary exacerbations, CF-associated medical conditions, and chronic and acute treatments for children and adult CF patients at each visit to the clinical site.
Sponsor: Genentech, Inc.
Year Started: 1993
Year Ended: Patient enrollment completed in 2005; followup complete.
No. of Sites: 215 sites over the life of the registry
No. of Patients: 32,414 patients and 832,705 encounters recorded

Challenge

Although guidelines for managing cystic fibrosis patients have been widely available for many years, little is known about variations in practice patterns among care sites and their associated outcomes. To determine whether differences in lung health existed between groups of patients attending different CF care sites, and to determine whether these differences were associated with differences in monitoring and intervention, data on a large number of CF patients from a wide variety of CF sites were necessary.

As a large, observational, prospective registry, ESCF collected data on a large number of patients from a range of participating sites. At the time of the outcomes study, the registry was estimated to have data on over 80 percent of CF patients in the United States, and it collected data from more than 90 percent of the sites accredited by the U.S. Cystic Fibrosis Foundation. Because the registry contained a representative population of CF patients, the registry database offered strong potential for analyzing the association between practice patterns and outcomes.

Proposed Solution

In designing the study, the team decided to compare CF sites using lung function (i.e., FEV1 [forced expiratory volume in 1 second] values), a common surrogate outcome for respiratory studies. Data from 18,411 patients followed in 194 care sites were reviewed, and 8,125 patients from 132 sites (minimum of 50 patients per site) were included. Only sites with at least 10 patients in a specified age group (ages 6–12, 13–17, and 18 or older) were included for evaluation of that age group. For each age group, sites were ranked in quartiles based on the median FEV1 value at each site. The frequency of patient monitoring and use of therapeutic interventions were compared between upper and lower quartile sites after stratification for disease severity.
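A hedged sketch of the site-ranking logic described here, using pandas with assumed file and column names; it applies the minimum-size rule and assigns quartiles by site-level median FEV1 within each age group.

```python
import pandas as pd

# Hypothetical patient-level data (file and column names assumed for illustration)
df = pd.read_csv("cf_patients.csv")  # columns: site_id, age_group, fev1_pct

# Keep only sites with at least 10 patients in a given age group
counts = df.groupby(["age_group", "site_id"])["fev1_pct"].transform("size")
eligible = df[counts >= 10]

# Site-level median FEV1 within each age group, then quartile rank across sites
site_medians = (eligible.groupby(["age_group", "site_id"])["fev1_pct"]
                        .median()
                        .reset_index(name="median_fev1"))
site_medians["quartile"] = (site_medians
                            .groupby("age_group")["median_fev1"]
                            .transform(lambda s: pd.qcut(s, 4, labels=[1, 2, 3, 4])))
print(site_medians.sort_values(["age_group", "quartile"]).head())
```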

Results

Substantial differences in lung health across different CF care sites were observed. Within-site rankings tended to be consistent across the three age groups. Patients who were cared for at higher-ranking sites had more frequent monitoring of their clinical status, measurements of lung function, and cultures for respiratory pathogens. These patients also received more interventions, particularly intravenous antibiotics for pulmonary exacerbations. The study concluded that frequent monitoring and increased use of appropriate medications in the management of CF are associated with improved outcomes.

Key Point

Stratifying patients by quartile of lung function, age, and disease severity allowed comparison of practices among sites and revealed practice patterns that were associated with better clinical status. The large numbers of patients and sites made it possible to create meaningful strata, and provided enough information within those strata to reveal genuine differences in site practices.

For More Information

  • Johnson C, Butler SM, Konstan MW, et al. Factors influencing outcomes in cystic fibrosis: a center-based analysis. Chest. 2003;123:20–7. PMID: 12527598. DOI: 10.1378/chest.123.1.20.
  • Padman R, McColley SA, Miller DP, et al. Infant care patterns at Epidemiologic Study of Cystic Fibrosis sites that achieve superior childhood lung function. Pediatrics. 2007;119:e531–7. PMID: 17332172. DOI: 10.1542/peds.2006-1414.

Case Example 25. Using Registry Data To Study Patterns of Use and Outcomes

Description: The Palivizumab Outcomes Registry was designed to characterize the population of infants receiving prophylaxis for respiratory syncytial virus (RSV) disease, to describe the patterns and scope of the use of palivizumab, and to gather data on hospitalization outcomes.
Sponsor: MedImmune, LLC
Year Started: 2000
Year Ended: 2004
No. of Sites: 256
No. of Patients: 19,548 infants

Challenge

RSV is a leading cause of serious lower respiratory tract disease in infants and children, and a leading cause of hospitalization nationwide for infants under 1 year of age. Palivizumab was approved by the U.S. Food and Drug Administration (FDA) in 1998 and is indicated for the prevention of serious lower respiratory tract disease caused by RSV in pediatric patients at high risk of RSV disease. Two large retrospective surveys conducted after FDA approval examined the effectiveness of palivizumab in infants and confirmed that it reduces the rate of RSV hospitalizations. To capture postlicensure demographic and outcome information, the manufacturer wanted to create a prospective study that identified infants receiving palivizumab. The objectives of the study were to better understand the population receiving prophylaxis for RSV disease and to study patterns of use and hospitalization outcomes.

Proposed Solution

A multicenter registry study was created to collect data on infants receiving palivizumab injections; no control group was included. The registry was initiated during the 2000–2001 RSV season. Over 4 consecutive years, 256 sites across the United States enrolled infants under their care who had received palivizumab, provided that the infant’s parent or legally authorized representative gave informed consent for participation in the registry. Data were collected by the primary healthcare provider in the office or clinic setting, and data collection was limited to subjects’ usual medical care. Infants were enrolled at the time of their first injection, and data were obtained on palivizumab injections, demographics, and risk factors, as well as on medical and family history.

Followup forms were used to collect data on subsequent palivizumab injections, including dates and doses, during the RSV season. Compliance with the prescribed injection schedule was determined by comparing the number of injections actually received with the number of doses expected based on the month in which the first injection was administered: infants who received their first injection in November were expected to receive five injections, whereas infants receiving their first injection in February would be expected to receive only two doses through March. Data were also collected for all enrolled infants hospitalized for RSV and were reported directly to an onsite registry coordinator. Testing for RSV was performed locally, at the discretion of the healthcare provider.

Adverse events were not collected and analyzed separately for purposes of this registry. Palivizumab is contraindicated in children who have had a previous significant hypersensitivity reaction to palivizumab. Cases of anaphylaxis and anaphylactic shock, including fatal cases, have been reported following initial exposure or re-exposure to palivizumab, and other acute hypersensitivity reactions, some severe, have also been reported. Adverse reactions occurring in at least 10 percent of patients and at least 1 percent more frequently than with placebo were fever and rash. In postmarketing reports, cases of severe thrombocytopenia (platelet count <50,000/microliter) and injection site reactions were reported.
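The expected-dose rule lends itself to a small worked example. The sketch below is illustrative only: it assumes, as the text describes, monthly dosing ending in March, and it additionally assumes the season can begin as early as September; the function names are hypothetical.

```python
def expected_doses(first_injection_month: int) -> int:
    """Expected number of palivizumab doses, given the calendar month (1 = January)
    of the first injection."""
    # Season months in dosing order: Sep ... Mar (September start is an assumption).
    season = [9, 10, 11, 12, 1, 2, 3]
    if first_injection_month not in season:
        raise ValueError("first injection falls outside the assumed RSV season")
    # One dose per month from the first injection through March.
    return len(season) - season.index(first_injection_month)

def compliance_ratio(doses_received: int, first_injection_month: int) -> float:
    """Doses actually received divided by doses expected."""
    return doses_received / expected_doses(first_injection_month)
```

For example, expected_doses(11) returns 5 and expected_doses(2) returns 2, matching the November and February cases described above.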

Results

From September 2000 through May 2004, the registry collected data on 19,548 infants. The analysis presented injection rates and hospitalization rates for all infants by month of injection and by site of first dose (pediatrician’s office or hospital). The observed number of injections per infant was compared with the expected number of doses based on the month in which the first injection was given. Over 4 years of data collection, 1.3 percent of enrolled infants were hospitalized for RSV. This analysis confirmed a low hospitalization rate for infants receiving palivizumab prophylaxis for RSV in a large nationwide cohort drawn from a geographically diverse group of practices and clinics. The registry data also showed that the use of palivizumab was largely consistent with the 2003 American Academy of Pediatrics guidelines for the use of palivizumab for prevention of RSV infections. Because the registry was conducted prospectively, nearly complete demographic information and approximately 99 percent of followup information were captured for enrolled infants, an improvement over previously completed retrospective studies.

Key Point

A simple stratified analysis was used to describe the characteristics of infants receiving injections to help prevent severe RSV disease. Infants in the registry had a low hospitalization rate, and these data support the effectiveness of this treatment outside of a controlled clinical study. Risk factors for RSV hospitalization were described and quantified by presenting the number of infants hospitalized for RSV as a percentage of all enrolled infants. These data supported an analysis of postlicensure effectiveness of RSV prophylaxis, in addition to describing the patient population and usage patterns.

For More Information

  • Leader S, Kohlhase K. Respiratory syncytial virus-coded pediatric hospitalizations, 1997–1999. Pediatr Infect Dis J. 2002;21(7):629–32. PMID: 12237593. DOI: 10.1097/01.inf.0000019891.59210.1c.
  • Frogel M, Nerwen C, Cohen A, et al. Prevention of hospitalization due to respiratory syncytial virus: results from the Palivizumab Outcomes Registry. J Perinatol. 2008;28:511–7. PMID: 18368063. DOI: 10.1038/jp.2008.28.
  • American Academy of Pediatrics Committee on Infectious Diseases. Red Book 2003 policy statement: revised indications for the use of palivizumab and respiratory syncytial virus immune globulin intravenous for the prevention of respiratory syncytial virus infections. Pediatrics. 2003;112:1442–6. PMID: 14654627.
©2020 United States Government, as represented by the Secretary of the Department of Health and Human Services, by assignment.

All rights reserved. The Agency for Healthcare Research and Quality (AHRQ) permits members of the public to reproduce, redistribute, publicly display, and incorporate this work into other materials provided that it must be reproduced without any changes to the work or portions thereof, except as permitted as fair use under the U.S. Copyright Act. This work contains certain tables and figures noted herein that are subject to copyright by third parties. These tables and figures may not be reproduced, redistributed, or incorporated into other materials independent of this work without permission of the third-party copyright owner(s). This work may not be reproduced, reprinted, or redistributed for a fee, nor may the work be sold for profit or incorporated into a profit-making venture without the express written consent of AHRQ. This work is subject to the restrictions of Section 1140 of the Social Security Act, 42 U.S.C. § 1320b-10. When parts of this work are used or quoted, the following citation should be used:
