Epstein R, Fonnesbeck C, Williamson E, et al. Psychosocial and Pharmacologic Interventions for Disruptive Behavior in Children and Adolescents [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2015 Oct. (Comparative Effectiveness Reviews, No. 154.)
Topic Refinement and Review Protocol
Initially, a panel of Key Informants gave input on the Key Questions (KQs) to be examined; these KQs were posted on the Agency for Healthcare Research and Quality (AHRQ) website for public comment for 4 weeks and revised as needed. We drafted a protocol for the review and recruited technical experts to provide content and methodological expertise during the development of the review.
Searching for the Evidence
Search Strategy
Searches were executed between September 2013 and June 2014. We conducted a search update during peer review of the draft report. We developed search strategies using a combination of subject headings (i.e., controlled vocabulary) and keywords (Appendix A). We included broad terms for psychosocial interventions, as well as interventions by name (e.g., “Parent-Child Interaction Therapy”, “Incredible Years”, and “Positive Parenting Program”). We included terms to describe drug classes and individual agents. We built the search strategies in tandem with the refinement of the KQs and Analytic Framework to ensure that the literature retrieval was representative of the project scope. The preliminary results were vetted by clinical and methodologic subject matter experts. We did not conduct a separate search for longitudinal cohort studies of adverse events, but we did conduct a separate search for existing systematic reviews and requested drug package inserts to obtain information on harms.
Databases
To ensure comprehensive retrieval of relevant studies, we used the following key databases: the MEDLINE medical literature database (via the PubMed interface), EMBASE, and PsycInfo®. We used the Comparative Effectiveness Plus interface for The Iowa Drug Information Service (IDIS) database to identify regulatory information from the following sources: Food and Drug Administration (FDA) approval packages, FDA Advisory Committee Reports, boxed warnings, Priority Clinical Practice Guidelines, AHRQ Evidence Reports and AHRQ Comparative Effectiveness Reviews, Pivotal Studies, and National Institute for Health and Clinical Excellence (NICE) Clinical Guidelines or Technology Appraisal Guidance.
Hand Searching
We hand searched recent systematic reviews and other relevant publications to identify additional studies not captured by the database searches. We also reviewed the reference lists of the included studies.
Gray Literature
We searched the websites of agencies/organizations as well as other sources (e.g., Clinicaltrials.gov, meeting abstracts, FDA) for context and relevant data in the area of treatment for disruptive behavior disorders in children. We retrieved the medical and statistical evaluations for relevant drugs from the FDA (www.fda.gov/Drugs/DevelopmentApprovalProcess/DevelopmentResources/ucm049872.htm). For KQ5, we reviewed and extracted information from package inserts, regulatory sources, and unpublished data for all relevant drug interventions to identify data on harms and side effects.
Scientific Information Packets (SIPs)
We requested Scientific Information Packets (SIPs) and regulatory information from the Scientific Resource Center (SRC) for individual pharmacologic agents. The SRC SIP coordinator requested information from industry stakeholders and managed the information retrieval. We received responses to three of the 20 requests and confirmed that the studies referenced in the information packets were included in our literature searches.
Screening
We conducted two levels of screening using explicit inclusion and exclusion criteria and documented the assessments using an abstract screening form and full text screening form (Appendix B). The abstract screening form contained questions about the primary exclusion and inclusion criteria for initial screening. We used a more detailed form (full-text screening form) to examine the full-text of references that met criteria for inclusion in abstract review.
Initially, we reviewed the titles and abstracts of all references retrieved by the literature and hand searches. References that met the prespecified criteria for inclusion, as determined by one reviewer, were promoted to second-level screening (i.e., full-text review). To be excluded at the abstract screening level, two reviewers had to determine independently that a reference did not meet one or more of the inclusion criteria. Conflicts (i.e., disagreements between reviewers) were promoted to second-level review, as were references with insufficient information to make a decision about eligibility.
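A minimal sketch of the abstract-level decision rule just described, for illustration only (this is not the screening software we used; the function name and decision labels are hypothetical):

```python
def abstract_screening_decision(votes):
    """votes: list of reviewer decisions, each 'include', 'exclude', or 'unclear'.

    One 'include' vote promotes the reference; exclusion requires two independent
    'exclude' votes; conflicts or insufficient information are promoted for
    second-level (full-text) review.
    """
    if 'include' in votes:
        return 'promote to full-text review'
    if votes.count('exclude') >= 2:
        return 'exclude'
    return 'promote to full-text review'
```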
All references promoted to full text review were screened by at least two reviewers against the inclusion/exclusion criteria. Discrepancies were resolved by a senior team member or through team consensus. We retained the citations for all retrievals, and recorded the screening results and complete inclusion and exclusion data.
Inclusion and Exclusion Criteria
The inclusion and exclusion criteria for the review were derived from our understanding of the literature, refinement of the review topic with the Task Order Officer and Key Informants, and feedback on the KQs obtained during the public posting period.
Population
The target population for this review is children under 18 years of age who are being treated for a disruptive behavior (Table 2). Eligible studies had to focus on the treatment of the disruptive behavior and include children exhibiting disruptive behaviors as a primary problem (e.g., conduct disorder, oppositional defiant disorder, and intermittent explosive disorder). We also considered studies that included subjects who were not diagnosed with a disorder but who were being treated for disruptive behaviors that were measured with a validated instrument and found to be above the clinical cutoff.
We included studies of interventions that targeted parents of children with a disruptive behavior if the study explicitly defined the eligible patient population to include a child with a disruptive behavior (as defined above) and the study reported one or more child outcome. We excluded studies of disruptive behavior secondary to other conditions (e.g., treatment of substance abuse, developmental delay, intellectual disability, pediatric bipolar disorder). In the case of ADHD, we excluded studies of ADHD-related disruptive behaviors but included studies of non-ADHD-related disruptive behaviors in populations of children with ADHD if the children were identified as also having another disruptive behavior disorder. Our quantitative analysis further excluded studies that did not report baseline and end of treatment means and standard deviations using one of the three most commonly used outcome measures.
Interventions
We sought studies of psychosocial interventions such as behavior management training, social skills training, cognitive-behavioral therapy, functional behavioral interventions, parent training, dialectical behavior training, psychotherapy, and contingency management methods. Studies of parent- or family-focused interventions were included if the study included children with a DBD (as defined above) and measured and reported at least one child behavior or functional outcome. We included studies that evaluated an intervention targeting the health or wellbeing of the parent or caretaker of a child with DBD only if the study reported child outcomes. For the purposes of this review, we did not include information technology-based and assisted services, media, diet, or exercise.
We did not include studies of prevention in asymptomatic, undiagnosed, or at-risk participants because we wanted to focus our review on children with disruptive behaviors that would be treated if they presented in healthcare settings. We focused our review on studies that included children who scored above the clinical threshold on a validated scale and/or who were formally diagnosed with a DBD. We did not include studies designed exclusively to assess, measure, screen, or diagnose disease or symptoms. We did not include universal interventions such as those implemented in the school setting, studies of systems-level interventions, or studies of interventions targeting organizational delivery of care. Other excluded interventions were: dietary supplements and specialized diets; allied health interventions (e.g., speech/language therapy, occupational, and physical therapy); complementary and alternative medicine interventions (e.g., acupuncture, herbal, and folk remedies); physical activity and recreational programs (e.g., yoga, exercise training); and invasive medical interventions (e.g., surgery, deep brain stimulation).
Eligible pharmacologic interventions included both FDA-approved medications for the treatment of a behavior disorder or management of disruptive behaviors in children and medications used off-label for disruptive behavior. We identified specific pharmacologic agents from the following broad classes of drugs: alpha-agonists, anticonvulsants, second-generation (i.e., atypical) antipsychotics, beta-adrenergic blocking agents (i.e., beta-blockers), central nervous system stimulants, first-generation antipsychotics, selective serotonin reuptake inhibitors, mood stabilizers, and antihistamines.
We considered studies of a combined (i.e., co-administered, co-therapy, conjunctive, or adjunctive) intervention that included one or more of the eligible psychosocial or pharmacologic interventions identified in Key Questions 1-3 or that was a uniquely described combination intervention designed or implemented specifically to treat children with disruptive behavior.
Outcomes
For Key Questions 1-4 and 6, eligible studies had to report at least one behavioral or functional outcome listed in the Analytic Framework (Figure 1). Studies had to report child outcomes to be considered for inclusion. We extracted information on long-term outcomes when they were reported. For Key Question 5, we included studies that reported harms (i.e., adverse effects) for an intervention included in Key Questions 1-4.
Timing
We did not limit eligibility by intervention timing or duration of followup, but we limited the search to studies published in or after 1994. We conducted a preliminary screening of records retrieved from a search with no publication-year limits. We screened approximately 1,500 records published 20 or more years ago and found that the study populations were inadequately described and poorly characterized, rendering a large number of the older studies unusable for this review. To include studies of patients meeting the population criteria for this review, the team agreed to limit the retrieval of primary study data to studies published in or after 1994, as this date cutoff aligns with the availability of the DSM-IV.16
Setting
We focused on interventions in the clinical setting, including medical or psychosocial care delivered to individuals by clinical professionals, as well as individually focused programs to which clinicians refer patients. We excluded studies that were conducted exclusively in hospitalized participants (i.e., in-patients). We also excluded studies of a systems-level intervention (e.g., delivered universally in the school or juvenile detention setting).
Study Characteristics
We sought randomized controlled trials (RCTs) and nonrandomized controlled studies (i.e., prospective and retrospective cohort studies). We did not include case-control studies because they are not an optimal study design for assessing causal inferences or measuring treatment effects. We did not include studies without comparators (e.g., case series) for the same reason.
For Key Questions 1-4, we sought original data from primary study publications. We identified and included data from related publications (i.e., publications reporting relevant outcomes from a study reported in a separate publication) if the primary study publication met inclusion criteria for the review. For Key Question 5, we included adverse events and harms data (for interventions identified in Key Questions 1-4) from studies, systematic reviews, and regulatory reports to augment the harms data collected from the controlled prospective studies meeting the review inclusion criteria.
We did not specify a minimum sample size (i.e., number of participants per arm) for eligible studies. We restricted the review to studies published in English. Our Technical Expert Panel (TEP) confirmed that key discipline-specific publications from non-U.S. countries and international conferences present and publish material in English, minimizing the likelihood of language bias. However, we reviewed abstracts from non-English-language reports to test the robustness of this assumption.
Data Extraction and Data Management
Data Extraction
We created data extraction forms to collect detailed information on the study characteristics, interventions, comparators, outcomes, outcome measures, and study quality and/or risk of bias (see Study Characteristics and Outcomes Data Files in the Systematic Review Data Repository). We enumerated the variables most important to this topic with input from Key Informants and Technical Experts and used the extraction forms to record participant characteristics, intervention characteristics, outcomes, and potential modifiers of treatment effects from each included study. The forms included detailed instructions and labels to reinforce coding reliability and consisted of items with mutually exclusive and exhaustive answer options to promote consistency. A senior level team member reviewed the data extraction against the original articles for quality control. The study and data abstraction forms were used to develop summary tables across selected groups of studies.
We recorded descriptive data for each study that met the full text screening criteria including study design, year, location, setting, randomization, blinding, elements of study quality, and related publications. We flagged related publications and extracted nonduplicate study data. We categorized location by country with the exception of Puerto Rico, which we categorized separately from the U.S. due to cultural differences in the study population. We recorded the source of funding and authors' competing interest disclosures for all studies included in the review.
We recorded intervention characteristics and components in detail, noting data elements not reported or unavailable from the primary or related study publications. We classified interventions according to their treatment components, specifically: 1) interventions including only a child component; 2) interventions including only a parent component; and 3) multicomponent interventions. Multicomponent interventions were defined as those that included two or more of a child component, parent component, or other component (e.g., teacher component, family together component).
We categorized outcomes broadly as behavioral or functional. We extracted information on how the outcome was measured and the outcome measurement time points. We included broad measures of quality of life and social functioning.
To assess the evidence on harms, we first collected adverse outcomes reported in studies included for effectiveness. We also identified the evidence for harms of pharmacologic interventions used to treat disruptive behavior reported in the gray literature, including integrated safety reports from the U.S. Food and Drug Administration's regulatory documents.
We recorded potential modifiers to determine whether specific variables affected treatment response. We anticipated that patient age and certain disorder characteristics (such as disease severity) would be robust predictors of outcomes.
We also extracted information on intervention delivery, intervention setting, and environmental factors (e.g., parental engagement) that may account for variations in observed treatment effects. The potential modifiers represent categories of variables that we anticipated may be linked to treatment effects. We extracted the reported variables from included studies and organized the information into meaningful groups to permit syntheses.
Data Management
We registered the review protocol (Registration #CRD42014007552) with PROSPERO, an international database of prospectively registered systematic reviews in health and social care. We used DistillerSR (Evidence Partners, Ottawa, Canada) for screening references. We tracked the literature search retrieval and screening results in EndNote. We used forms to extract the study data, and transferred the data to Excel. We deposited the data that were used in the meta-analyses into the Systematic Review Data Repository (SRDR) system.
Assessment of Methodological Risk of Bias of Individual Studies
We assessed the risk of bias of studies for behavioral outcomes of interest specified in the PICOTS (Table 1) according to the guidance in the “Methods Guide for Effectiveness and Comparative Effectiveness Reviews.”70 Two senior investigators independently assessed each included study. Disagreements between assessors were resolved through discussion.
We used the Cochrane Risk of Bias Tool71 (Appendix C) to assess risk of bias for randomized controlled trials (RCTs) of effectiveness. Reviewers rated six items from five domains of potential sources of bias (i.e., selection, reporting, performance, detection, and attrition) and one item for “other” sources of bias. We assessed detection bias by evaluating the outcome measurement and assessment methods used to detect effects. We evaluated potential risk of bias associated with fidelity for psychosocial interventions and included those assessments in the category of “other bias.” To assess risk of bias for study designs other than RCTs, we used the RTI Item Bank72 for cohort studies (i.e., nonrandomized controlled trials) and the AMSTAR tool for systematic reviews and meta-analyses (Appendix C).73-75 To assess the risk of bias associated with the reporting of harms, we used a four-question tool adapted from the McMaster Assessment of Harms Tool (Appendix C).76
Determining Risk of Bias Ratings
We assigned studies an overall rating of “low,” “moderate,” or “high” risk of bias. We expected RCTs to receive positive assessments for questions about randomization, allocation concealment, and blinding in order to be designated “low risk of bias.” We considered the feasibility of blinding in psychosocial studies and did not downgrade studies for which blinding would have been impossible. Cohort studies that received positive scores on all items were assessed as “low risk of bias.” Cohort studies with one or two negative ratings were assessed as “moderate risk of bias,” and studies with more than two negative scores were assessed as “high risk of bias.” We required that studies assessed for harms reporting receive a positive rating (i.e., affirmative response) on all four questions to receive a rating of “good.” Studies with three positive responses were considered “fair” quality, and those with fewer than three positive responses were assessed as “poor” quality.
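The rating thresholds described above amount to simple decision rules. The following sketch is illustrative only (it is not a tool used in the review, and the function names are hypothetical); it encodes the cohort-study and harms-reporting thresholds stated in this section:

```python
def rate_cohort_study(negative_items):
    """Overall risk-of-bias rating for a cohort study, given the number of
    RTI Item Bank items rated negatively (rules as described above)."""
    if negative_items == 0:
        return "low risk of bias"        # positive scores on all items
    elif negative_items <= 2:
        return "moderate risk of bias"   # one or two negative ratings
    else:
        return "high risk of bias"       # more than two negative ratings

def rate_harms_reporting(positive_items):
    """Quality rating for harms reporting on the four-question adapted
    McMaster tool, given the number of affirmative responses."""
    if positive_items == 4:
        return "good"
    elif positive_items == 3:
        return "fair"
    else:
        return "poor"
```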
Data Synthesis
We examined the appropriateness of each study for inclusion in a meta-analysis. Studies that were too heterogeneous or otherwise unsuitable to contribute data to the meta-analysis were included as part of a narrative synthesis.
Qualitative Synthesis of Results
We qualitatively synthesized the literature based on the data extracted (described above) for each Key Question. We present behavioral outcomes (KQ1 and KQ2) and harms data (KQ5) in summary tables within the text. For the qualitative summary of KQ1, we organized the results by age (preschool, school age, and teenage) and characterized the studies as those that evaluated a child-only, a parent-only, or a multicomponent intervention, based on the active treatment arm. We defined multicomponent interventions as those that included two or more of a child component, parent component, or other component (e.g., teacher component, family together component). We further grouped the summary of studies for KQ1 by named interventions (e.g., PCIT, Triple P, and Incredible Years) where possible. This categorization provided an organizational structure to characterize the literature and highlight key findings for similar interventions. For KQ2 we grouped the studies by individual pharmacologic agent or by pharmacologic class.
Quantitative Synthesis of Results
We developed a Bayesian multivariate, mixed treatment (network) meta-analysis to address the comparative effectiveness of psychosocial interventions for improving behavioral outcomes in children treated for disruptive behaviors (Key Question 1). These methods77-79 combine direct and indirect evidence to compare a large suite of treatments. Network meta-analysis allows a broader, integrated view of the available evidence, so that the relative merits of a set of treatments can be more readily compared. The approach borrows strength from indirect comparisons of interventions that have not been compared head-to-head in the same study. By combining direct and indirect evidence in the same framework, the resulting meta-analysis may be more robust, with more precise meta-estimates, than traditional meta-analyses. In the absence of network meta-analysis, we would have been compelled to construct a number of smaller, separate meta-analyses that would have been less powerful and less comprehensive, with more evidence excluded. Further, our model was multivariate in the sense that multiple outcome measures were considered simultaneously; this improves the analysis by recognizing that outcomes are correlated and estimating that correlation directly as part of the analysis. We present additional details of the meta-analysis methods in Appendix D.
Twenty-eight of the 66 studies included in the qualitative review for KQ1 met the additional criteria for inclusion in our meta-analysis. These additional criteria were that the study was an RCT that reported baseline and end-of-treatment means and standard deviations using one (or more) of the three most prevalent of the 16 instruments used in this literature to examine parent-reported outcomes: (1) Eyberg Child Behavior Inventory (ECBI), Intensity Subscale; (2) ECBI, Problem Subscale; and (3) Child Behavior Checklist (CBCL), Externalizing (T-score) (see Appendix E for a description of the instruments). Other instruments were not included in the analysis because of heterogeneity in the constructs examined and an inadequate number of studies per measure.
To account for the large suite of interventions employed by the constituent studies, we classified the study arms of each included study according to their treatment components or as a control. Specifically, the treatment arms of each study were classified as one of the following types: (1) interventions including only a child component; (2) interventions including only a parent component; and (3) multicomponent interventions. Multicomponent interventions were defined as those that included two or more of a child component, parent component, or other component (e.g., teacher component, family together component). All interventions classified as multicomponent included a parent component. Study arms not identified by any of these three classes were defined as a control arm (i.e., waitlist control or treatment-as-usual arm). Recognizing that these treatment categories are broad, encompassing a range of specific interventions, each component was modeled as a random effect. This allowed for variation in treatment effect within each class, due to factors not explicitly modeled here. All measurement instruments shared the same study arm treatment effect in our model.
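For illustration, the arm classification described above can be expressed as a simple rule. The sketch below uses hypothetical component labels and is not our extraction code:

```python
def classify_arm(components):
    """components: iterable of component labels for a study arm, e.g.
    ['child'], ['parent', 'teacher'], or [] (labels are hypothetical)."""
    # Collapse anything that is not a child or parent component into "other"
    # (e.g., a teacher component or family-together component).
    normalized = {c if c in ('child', 'parent') else 'other' for c in components}
    if len(normalized) >= 2:
        return 'multicomponent'
    if normalized == {'child'}:
        return 'child-only'
    if normalized == {'parent'}:
        return 'parent-only'
    return 'control'  # no qualifying treatment component: waitlist or treatment as usual
```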
Studies were included in the meta-analysis if they reported baseline and end-of-treatment means and standard deviations for one of the three instruments listed above. The baseline mean was subtracted from the end-of-treatment mean, and this change score was used as the response measure, along with the sum of the two standard deviations. The three outcomes were modeled jointly with a multivariate normal likelihood, with any unmeasured outcomes treated as missing data; this allowed the covariance among measures to be accounted for and estimated.
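For example, with made-up values for one study arm on one instrument, the response measure and its dispersion would be computed as follows:

```python
# Illustrative only: hypothetical summary statistics for one study arm on one
# instrument (e.g., the ECBI Intensity Subscale).
baseline_mean, baseline_sd = 152.0, 28.0   # hypothetical baseline values
endpoint_mean, endpoint_sd = 118.0, 30.0   # hypothetical end-of-treatment values

response = endpoint_mean - baseline_mean   # change score: -34.0
dispersion = baseline_sd + endpoint_sd     # sum of the standard deviations: 58.0
```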
The age of subjects in each study arm was included in the model as a categorical covariate, broadly grouped into prekindergarten, preteen child, or teenage categories. The preteen child category served as the reference level because it was the most prevalent among studies. The age covariate was combined additively with the intervention component effects and control/treatment-as-usual means to model the observed treatment differences relative to baseline. Although we considered age-by-treatment interactions, there was not enough balance among the age and treatment combinations to include them in the final model.
All unknown parameters were given weakly informative prior distributions and estimated using Markov chain Monte Carlo80 methods via the PyMC 2.3 software package.81 The model was run for 200,000 iterations, with the first 150,000 samples conservatively discarded as burn-in, leaving 50,000 for inference.
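A minimal, heavily simplified sketch of how a model of this general form might be specified and fitted with the PyMC 2.3-era API follows. The actual model is described in Appendix D; it is multivariate, includes the age covariate, and handles missing outcomes, none of which are shown here. All data values and variable names below are illustrative assumptions.

```python
import numpy as np
import pymc as pm  # PyMC 2.x-style API (pm.MCMC, decorator-based deterministics)

# Hypothetical arm-level data: change from baseline (end-of-treatment mean minus
# baseline mean), summed SDs, sample sizes, treatment class, and study index.
change   = np.array([ -5.0, -30.0,  -4.0, -25.0,  -6.0, -28.0])
sd_sum   = np.array([ 55.0,  58.0,  50.0,  52.0,  60.0,  57.0])
n        = np.array([   40,    42,    35,    33,    50,    48])
tx_class = np.array([0, 3, 0, 2, 0, 1])  # 0=control, 1=child-only, 2=parent-only, 3=multicomponent
study    = np.array([0, 0, 1, 1, 2, 2])

# Weakly informative priors: a mean change per study (control/treatment-as-usual
# level) and an effect per treatment class relative to control.
study_mean   = pm.Normal('study_mean', mu=0.0, tau=1.0e-4, size=3)
class_effect = pm.Normal('class_effect', mu=0.0, tau=1.0e-4, size=3)

@pm.deterministic
def theta(sm=study_mean, ce=class_effect):
    # Expected change in each arm: its study's level plus its class effect.
    effect = np.concatenate(([0.0], ce))  # zero effect for control arms
    return sm[study] + effect[tx_class]

# Precision of each arm's observed mean change (a simple normal approximation,
# treating the summed SD as the arm-level standard deviation).
tau_obs = n / sd_sum ** 2

y = pm.Normal('y', mu=theta, tau=tau_obs, value=change, observed=True)

# Fit by MCMC: 200,000 iterations, discarding the first 150,000 as burn-in.
M = pm.MCMC([study_mean, class_effect, theta, y])
M.sample(iter=200000, burn=150000)
print(M.trace('class_effect')[:].mean(axis=0))  # posterior mean class effects
```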
Incorporating Existing Systematic Reviews
We located reviews published between 2005 and 2014 and evaluated each for relevance using the review PICOTS (Appendix B). We summarize review data from relevant psychosocial and pharmacologic interventions in the “Discussion” section of the report and in a table in Appendix F. For the systematic reviews reporting harms, we assessed quality using AMSTAR73 and summarized the findings in KQ5.
Grading the Strength of Evidence
Strength of Evidence Assessments
We referenced the recommendations from the AHRQ EHC Methods Guidance and updated guidance for grading the strength of a body of evidence.82,83 In accordance with the methods guidance, we first assessed and graded “domains” using established concepts of the quantity and quality of evidence, and coherence or consistency of findings. Two senior staff independently graded the body of evidence; disagreements were resolved through discussion.
We assessed strength of evidence for the direction or estimate of effect for the behavioral outcomes and interventions listed in Table 3.
We assessed an overall evidence grade based on the ratings for the following domains: study limitations; directness; consistency; precision; and reporting bias. We considered additional domains, as appropriate: dose-response association, plausible confounding, and strength of association (i.e., magnitude of effect). The fifth required domain, reporting bias, includes publication bias, selective outcome reporting, and selective analysis reporting.82 To assess publication bias in the pharmacologic literature, we sought study protocols and data from regulatory sources and compared this information to the results in the published literature. The issue of publication bias in psychological science is difficult to address given the current lack of standards regarding the registration of study protocols in social sciences. We attempted to minimize the potential for bias introduced by the “file drawer effect” (i.e., nonpublication of studies with nonsignificant results) by expanding the literature search to include unpublished sources (e.g., meeting abstracts) and asking Key Informants about current research or developments in the field that may not yet be published.
Overall Strength of Evidence
We summarize the four grades (high, moderate, low, and insufficient) we used for the overall assessment of the body of evidence in Table 4 (adapted from the AHRQ “Methods Updated Guidance for Grading the Strength of a Body of Evidence”82). When no studies were available for an outcome or comparison of interest, we graded the evidence as insufficient.
Assessing Applicability
We assessed the applicability of the findings to the population being treated for disruptive behavior disorders and the settings in which treatment occurs. We summarized common features of the study population and documented diagnoses. We considered patient age, intervention setting, treatment history, co-occurring diagnoses, and symptom severity reported in the included studies and the degree to which the populations studied reflect the target population for practice. Because resource-poor environments may be limited in the options and types of interventions available, we characterized the resources needed to implement effective interventions, including the types of providers and the involvement of nonclinical providers or families, to give end users adequate data for feasibility and implementation planning. We present applicability tables for each intervention in Appendix G.