Trikalinos TA, Dahabreh IJ, Lee J, et al. Defining an Optimal Format for Presenting Research Needs [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2011 Jun. (Methods Future Research Needs Reports, No. 3.)
Empirical Assessment of Evidence-based Documents
Systematic Reviews With Meta-Analysis
Our literature search identified 414 systematic reviews with meta-analysis. Of the first 98 randomly selected abstracts, 48 were not considered eligible and were excluded. The remaining 50 studies were considered potentially eligible and were retrieved in full text. No studies were excluded after full text review (Figure 1).
The majority of systematic reviews included some discussion of future research needs (n=40 of 50, 80%). Most identified specific research questions that should be addressed by future studies (n=36, 72%). However, specific research designs were suggested in 23 of the 50 papers (46%); in 20 of these 23, the recommendation was that more randomized controlled trials are needed. Only 13 studies (26%) devoted a whole paragraph to discussing future research needs. None of the papers reported whether any specific methodology was used to identify or prioritize future research needs. Table 1 summarizes our findings from systematic reviews with meta-analysis.
Cost-Effectiveness and Cost-Utility Analyses
Our literature search for cost-effectiveness and cost-utility analyses identified 612 citations. Of the first 121 randomly selected abstracts, 66 were not considered eligible and were excluded. The remaining 55 studies were considered potentially eligible and were retrieved in full text. Of those, 5 studies were excluded, resulting in 50 eligible studies (Figure 2).
Cost-effectiveness and cost-utility analyses discussed future research needs less frequently (29 studies, 58%) than systematic reviews (p=0.030 by Fisher’s exact test). Twenty-four (48%) reported specific key questions that merit further research, and only 10 (20%) proposed specific designs to address these questions. Compared with systematic reviews, cost-effectiveness and cost-utility analyses were less likely to propose further randomized trials (4 of 10) and more likely to propose observational designs (3 of 10) for future research. As with systematic reviews, the text devoted to future research needs was limited, with only 5 studies (10%) devoting a whole paragraph. Four studies used formal methods to identify or prioritize research needs; three employed VOI methodologies and the fourth used a variance components analysis. Table 2 summarizes these findings.
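For readers who wish to reproduce the comparison above, the following is a minimal sketch of Fisher’s exact test applied to the 2×2 table implied by the counts reported in the text (40 of 50 systematic reviews versus 29 of 50 economic analyses discussing future research needs). The use of scipy here is our illustration, not a description of how the original analysis was carried out; the same approach extends to the other pairwise comparisons reported in this section.

```python
# Minimal sketch: Fisher's exact test on the 2x2 table implied by the counts above
# (40/50 systematic reviews vs. 29/50 cost-effectiveness/cost-utility analyses
# that discussed future research needs).
from scipy.stats import fisher_exact

# Rows: document type; columns: discussed future research needs (yes, no)
table = [
    [40, 10],  # systematic reviews with meta-analysis
    [29, 21],  # cost-effectiveness / cost-utility analyses
]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, two-sided p = {p_value:.3f}")
```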
VOI Analyses
Our MEDLINE search for VOI analyses identified 3,723 citations; 3,490 were excluded at the abstract level and 233 were considered potentially eligible and were retrieved in full text. Of those, 167 were excluded after full text screening and 66 studies, reporting on 72 independent VOI analyses, were included in this review. The search flow, including a list of reasons for exclusion, is presented in Figure 3. VOI analyses are specifically dedicated to appraising the value of future research on a given topic. They are frequently based on systematic reviews of the relevant literature and use cost-utility analysis methodologies. For these reasons we have extracted a more extensive set of methodological and reporting characteristics for the VOI studies that we considered.
Figure 4 depicts the increase in published VOI applications since the mid-1990s. The majority of studies originated from the United Kingdom (UK), and most were conducted as part of Health Technology Assessments for UK-based agencies, including the National Institute for Health and Clinical Excellence (NICE). Consequently, the majority of studies received government funding.
Table 3 summarizes the methods employed and the reporting practices in the 72 different VOI analyses. Almost all studies calculated EVPI, the value of obtaining perfect information for all parameters (n=68, 94%), but fewer calculated EVPPI, the corresponding value for subsets of parameters (n=42, 58%), and only a minority calculated EVSI, the value of the information expected from a study of finite sample size (5 studies, 7%), or EVSI-P (1 study, 1%).
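As a point of reference for readers less familiar with these quantities, the sketch below illustrates how per-person EVPI is typically estimated from probabilistic sensitivity analysis (PSA) output, using the standard identity EVPI = E_theta[max_d NB(d, theta)] − max_d E_theta[NB(d, theta)], where NB is net monetary benefit. The simulated values, the two strategies, and the willingness-to-pay threshold are hypothetical placeholders, not data from any of the reviewed studies.

```python
# Minimal sketch: per-person EVPI from hypothetical probabilistic sensitivity
# analysis (PSA) output. All numbers are illustrative, not from reviewed studies.
import numpy as np

rng = np.random.default_rng(0)
n_sim, wtp = 10_000, 50_000  # PSA iterations; willingness to pay per QALY

# Simulated incremental QALYs and costs for two strategies vs. a common baseline.
qalys = np.column_stack([rng.normal(0.00, 0.01, n_sim),   # strategy A (reference)
                         rng.normal(0.02, 0.03, n_sim)])  # strategy B
costs = np.column_stack([np.zeros(n_sim),
                         rng.normal(800, 300, n_sim)])

nb = wtp * qalys - costs  # net monetary benefit, one column per strategy

# EVPI = E[max over strategies of NB] - max over strategies of E[NB]
evpi = nb.max(axis=1).mean() - nb.mean(axis=0).max()
print(f"Per-person EVPI at WTP {wtp}: {evpi:.0f} (monetary units)")
```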
Figure 5 shows examples of typical graphs used in economic and VOI analyses, along with a brief explanation of their interpretation. In general, graphical presentations of results were underutilized, with most studies presenting “standard graphs” for cost-effectiveness or cost-utility analyses, such as cost-effectiveness acceptability and frontier graphs. Thirty-seven (51%) presented line graphs of EVPI over different willingness-to-pay thresholds, but only a minority presented EVPPI bar charts (n=14, 19%) or EVPPI line graphs over willingness to pay (2 studies, 3%).
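To make these graph types concrete, the sketch below derives a cost-effectiveness acceptability curve and an EVPI-versus-willingness-to-pay line graph from the same kind of hypothetical PSA samples used in the previous sketch; the strategies, numbers, and plotting choices are illustrative only and do not reflect the figures in the reviewed studies.

```python
# Minimal sketch: cost-effectiveness acceptability curve (CEAC) and EVPI across
# willingness-to-pay (WTP) thresholds, from hypothetical PSA samples.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_sim = 10_000
# Simulated incremental QALYs and costs for two strategies (mirrors the previous sketch).
qalys = np.column_stack([rng.normal(0.00, 0.01, n_sim), rng.normal(0.02, 0.03, n_sim)])
costs = np.column_stack([np.zeros(n_sim), rng.normal(800, 300, n_sim)])

wtp_grid = np.linspace(0, 100_000, 101)
ceac, evpi = [], []
for wtp in wtp_grid:
    nb = wtp * qalys - costs                       # net monetary benefit per strategy
    ceac.append((nb.argmax(axis=1) == 1).mean())   # fraction of simulations where B is optimal
    evpi.append(nb.max(axis=1).mean() - nb.mean(axis=0).max())

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(wtp_grid, ceac)
ax1.set(xlabel="Willingness to pay per QALY", ylabel="P(strategy B is cost-effective)",
        title="Cost-effectiveness acceptability curve")
ax2.plot(wtp_grid, evpi)
ax2.set(xlabel="Willingness to pay per QALY", ylabel="Per-person EVPI",
        title="EVPI across WTP thresholds")
fig.tight_layout()
plt.show()
```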
A minority of studies (n=25, 35%) proposed specific study designs for future research. Of those, the majority suggested that further randomized controlled trials are necessary (n=22, 88%); however, observational studies were also proposed (n=6, 24%).
Overall, the frequency of recommending further randomized controlled trials differed significantly between meta-analyses, cost-effectiveness analyses, and VOI analyses (p=0.008 by Fisher’s exact test): such a recommendation was more common among meta-analyses and VOI analyses and less common among cost-effectiveness analyses. There was also a suggestion that the frequency of proposing observational designs differed (p=0.059 by Fisher’s exact test), with such designs proposed more frequently by cost-effectiveness analyses and less frequently by meta-analyses or VOI analyses.
Qualitative Interviews
A number of themes emerged during the qualitative interviews. They are described below.
Face Validity of the Stakeholder Group and of Stakeholder Participation
By their very nature, assessments of future research needs are subjective. This is true not only for future research needs that have been prioritized by stakeholders (such as clinicians, researchers, insurers, payers, or funders) using qualitative methods, but also for exercises informed by quantitative approaches such as decision, economic, or VOI modeling, since modeling depends on assumptions. Therefore, the face validity of a future research needs document will depend not only on the appropriateness and soundness of the methods used, but also on the composition of the stakeholder group and the assumptions used in modeling. Most potential users of a future research needs document are likely to be aware of the challenges in identifying and prioritizing future research needs; however, optimal presentation methods can increase the usability of future research needs documents.
All interviewed experts agreed that when presenting results of qualitative research with stakeholders, it is important to justify the appropriateness of the stakeholder group, to convincingly demonstrate their expertise, and to state that all stakeholders had the opportunity to provide input. While all interviewed experts agreed that it is impossible to include all important leaders in a field, they differed in how strongly they would criticize an exercise that did not include a specific thinker whose opinion they value: Expert A stated that the face validity of the whole process would likely suffer, while others did not feel as strongly. Therefore, a description of the credentials of the stakeholders and the perspective they bring is probably sufficient for the reader to judge the face validity of the composition of the group.
All interviewed experts concurred that it is important to clearly state whether all stakeholders participated in a meaningful way. For example, in large teleconferences (with more than 6 to 9 participants), it is uncommon for all participants to contribute; such a situation would be an example of questionable face validity. Thus, future research needs documents should assess and report the degree to which stakeholders were engaged in the process.
Description of Methods Other Than Stakeholder Selection
As in all scientific documents, methods should be described concisely and should follow standard reporting guidelines whenever these are available (expert B). Expert B did not suggest specific reporting guidelines, but we identified several, both for reporting qualitative research (references 8–11) and for modeling (references 12–15).
Based on input from the interviewees, the length of the methods section, the detail presented, and the technical language used should be similar to what one would read in a general medical journal. For example, three to five pages of double-spaced text may be an appropriate length. To save space, a future research needs document could refer to standard guidance or methods documents that could be developed by the Effective Healthcare Program, and relegate detailed descriptions to an appendix, as needed.
Description of Future Research Needs
The experts interviewed agreed that different potential users of a future research needs document have different interests or needs. Based on the interviews, we decided to distinguish between a more abstract presentation of the areas that merit future research (hereafter called “areas that merit future research”) and a more detailed presentation of a research design along with specification of populations, interventions or exposures, comparators (if applicable), and outcomes (hereafter referred to by the acronym PICO).
All experts agreed that a description of the “area” that merits future research is important. Examples of descriptions of areas meriting further research include “effectiveness of drug-eluting stents versus bypass surgery in coronary artery disease” or “quantification of preferences or quality-of-life ratings for patients who experienced stroke or other health events.”
There was disagreement as to whether a more detailed specification of PICO elements is useful. An example of a detailed statement would be: “What is the effectiveness of sirolimus-eluting stents versus on-pump bypass surgery with respect to revascularization in patients with coronary artery disease who are older than 75 years of age and have diabetes?”h Experts A and B cautioned that a more detailed description may be overinterpreted as being too prescriptive. One of the two experts was concerned that a too-prescriptive document may “undermine [the current paradigm of] investigator-initiated research,” and that in an extreme case this “would drive the best researchers out of the field.” The other expert noted that a PICO-level description could inevitably select, for example, one subpopulation over equally important subpopulations, and that this could be a point of contention. However, both experts A and B agreed that their reservations depended on the framing of a future research needs document. Experts C and D did not share the reservations of experts A and B to any appreciable degree. They suggested that specifying the PICO elements can be useful in that it shows examples of what research is needed.
Description of the Ranking of Future Research Needs
All experts appreciated that explicit ranking of future research needs is subjective and challenging. They suggested that a tiered presentation of future research needs may be preferable, as it may attract less criticism. The interviewer suggested that such a tiered presentation could, for example, group research needs into thematic entities according to whether they address effectiveness of interventions, safety of interventions, role of testing, disease epidemiology, health care costs, patient preferences, or development of new resources (e.g., planning new registries). The experts appeared to agree but did not expand on this point.
Description of How Proposed Research Designs Are Selected
All experts agreed that it is useful to provide advantages and disadvantages of various research designs, and to describe the algorithms that were used to favor one design over another.
Description of Feasibility of Future Research and of Projected Future Research Cost
Experts B and C commented that it is very difficult to assess the actual feasibility of a future study. For example, many planned trials have been overly optimistic regarding their projected accrual rates and were terminated early. It is even more difficult to project the cost of future research. When asked explicitly, experts A and D were also skeptical of any projections regarding the feasibility or cost of future research.
The interviewer proposed that operational definitions of feasibility and future research costs could be used. For example, if a randomized trial is deemed important, one could perform a power analysis under several scenarios and compare the calculated trial sample sizes with the largest trials in the field, as a yardstick for research feasibility or cost. The experts agreed that this can be useful, but cautioned that it would yield only an approximate estimate. Three experts commented that funders are unlikely to rely on projections of research feasibility or cost, but appreciated that such projections may serve as a sounding board for those who actually prioritize future research needs.
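As an illustration of the operational definition proposed here, the sketch below computes required sample sizes for a hypothetical two-arm trial under several assumed control and treatment event rates, using a standard two-proportion power calculation from statsmodels. The event rates, alpha, and power are placeholders chosen for illustration, not values discussed in the interviews.

```python
# Minimal sketch: per-arm sample sizes for a hypothetical two-arm trial under
# several assumed event-rate scenarios, as a rough yardstick for feasibility.
# All rates, alpha, and power values are illustrative placeholders.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

analysis = NormalIndPower()
control_rate = 0.20
for treatment_rate in (0.15, 0.12, 0.10):  # hypothetical effect-size scenarios
    effect = abs(proportion_effectsize(control_rate, treatment_rate))
    n_per_arm = analysis.solve_power(effect_size=effect, alpha=0.05, power=0.80,
                                     ratio=1.0, alternative="two-sided")
    print(f"control 20% vs. treatment {treatment_rate:.0%}: "
          f"~{int(round(n_per_arm))} participants per arm")
```

Comparing such per-arm numbers with the enrollment of the largest completed trials in the field gives the approximate feasibility yardstick the interviewer described.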
Appropriateness of Using Modeling To Inform Prioritization of Research Needs
The experts interviewed differed in their familiarity with modeling and quantitative methods. Here, “modeling” is taken to mean quantitative analyses that enumerate choices (decisions) under hypothetical clinical scenarios and explore their potential outcomes by assigning probabilities of outcome occurrence and utility values; as such, modeling encompasses decision analyses, cost-effectiveness/utility analyses, and value of information analyses. All experts were receptive to using modeling to inform the prioritization of future research, but all agreed that modeling should not be the only method used for this purpose. The experts agreed that the assumptions required and the insights afforded by modeling methods should be clearly stated. An example of a clear statement is: “based on modeling, the comparative effectiveness of the treatments for frequency of revascularizations, rather than the prevalence of coronary artery disease, is a more important target for future research.” Three experts favored graphs over tables for presenting relevant insights from modeling (one expert was not asked).
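To make this working definition of modeling concrete, the sketch below evaluates a deliberately tiny two-strategy decision tree with assumed branch probabilities, utilities (QALYs), and costs, and reports each strategy’s expected net monetary benefit at an assumed willingness-to-pay threshold. None of the numbers or strategy names come from the interviews or from the reviewed studies.

```python
# Minimal sketch: a two-strategy decision tree with assumed outcome probabilities,
# utilities (QALYs), and costs. Expected net monetary benefit identifies the
# preferred strategy at a given willingness to pay. All numbers are illustrative.
WTP = 50_000  # willingness to pay per QALY (hypothetical)

strategies = {
    # strategy: list of (probability, qalys, cost) branches; probabilities sum to 1
    "usual care":    [(0.70, 0.80, 2_000), (0.30, 0.40, 6_000)],
    "new treatment": [(0.85, 0.80, 5_000), (0.15, 0.40, 9_000)],
}

for name, branches in strategies.items():
    assert abs(sum(p for p, _, _ in branches) - 1.0) < 1e-9  # sanity check
    exp_qalys = sum(p * q for p, q, _ in branches)
    exp_cost = sum(p * c for p, _, c in branches)
    nmb = WTP * exp_qalys - exp_cost
    print(f"{name}: expected QALYs={exp_qalys:.3f}, expected cost={exp_cost:,.0f}, "
          f"expected net monetary benefit={nmb:,.0f}")
```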
Footnotes
c. One study explicitly stated that further research on the topic it examined was not necessary.
d. Three studies used VOI methods and one study used a variance components analysis (attributable variance) to determine parameters that required further research.
e. Relative frequencies may not sum to 100% when studies compared more than one active intervention (for example, a surgical and a medical treatment), reported more than one outcome (for example, life years and quality-adjusted life years), were supported by multiple funding sources, or performed multiple types of sensitivity analyses. Percentages have been rounded to the nearest integer.
f. Relative frequencies may not sum to 100% when studies compared more than one active intervention (for example, a surgical and a medical treatment), reported more than one outcome (for example, life years and quality-adjusted life years), were supported by multiple funding sources, or performed multiple types of sensitivity analyses. Percentages have been rounded to the nearest integer.
g. In two cases a recommendation that no further research is necessary was based on low estimated EVPI values.
h. Emphasis added to highlight differences in the specificity of descriptions.