Raftery J, Hanney S, Greenhalgh T, et al. Models and applications for measuring the impact of health research: update of a systematic review for the Health Technology Assessment programme. Southampton (UK): NIHR Journals Library; 2016 Oct. (Health Technology Assessment, No. 20.76.)

Appendix 3 The included studies in the updated review

TABLE 14 Included studies

Number | Authors | Year | Type | Location of research assessed (or to be assessed) | Programme/specialty | Concepts and techniques/methods for assessing health research impact | Impact: examined and found | Comments: strengths/weaknesses; factors associated with impact
1 | Action Medical Research54 | 2009 | Application | UK | Action Medical Research: training fellowships | Payback Framework. Survey to past fellows (130: 72% completed) | Furthering medical science and developing research capacity; knowledge production; patient and health-care sector benefits; economic benefits; and benefits to the charity | There was a good response rate; however, it relied solely on self-reported data. Example of assessing a capacity building/training award that moves beyond the capacity building – also considers impact of the research conducted, but, as here, such studies sometimes take a career perspective, making comparisons with projects more difficult
Found: impact on career development – 78% had some research involvement; 30% had an impact on clinical guidelines; 42% on clinical practice or service improvements; and 46% patient benefit, much from follow-on research
2 | Adam et al.84 | 2012 | Application | Catalonia | Catalan Agency for Health Information, Assessment and Quality: clinical and HSR | ROI payback model from CAHS. Bibliometric analysis; surveys to researchers [99 (71%) response]; interviews – researchers (n = 15), decision-makers (n = 8); in-depth case study of translation pathways. Includes the ROI focus on assessment in relation to targets the programme intended to reach | The article focused on two of the impact categories: advancing knowledge and impact on informed decision-making by policy-makers, managers, health-care professionals, patients, etc. | Achieved aim of filling a gap in the knowledge needs; study informed funding agency’s subsequent actions, but both internal and external validity ‘have room for improvement’. Noted limitations of attribution, time lags, counterfactual
Factors: the studies ‘provide reasons to advocate for oriented research to fill specific knowledge gaps’. Interactions and participation of health-care and policy decision-makers in the projects was crucial
Found: 40 out of 70 said decision-making changes had been induced by research results: 29 of those said research had changed clinical practice, and a maximum of 16 said that there had been organisational/management/policy changes
3 | Anderson55 | 2006 | Application | Australia | NSW health survey | Payback Framework adapted. Subanalysis of a survey of users of the health survey; follow-up telephone survey of selected respondents; case studies using supplementary data sources | Categories adapted from Payback Framework. Emphasis on policy impacts | Useful analysis of assessment problems because of the diverse potential users of population surveys. Low response rate and difficult to go beyond policy impact:
  • Simpler surveys of whether and how data are used by policy makers may be the only realistic option

Found: some examples of uses to inform policy. Most of the information could not have come from other sources
4 | Arnold133 | 2012 | Both | EU | EU FPs: including brain research | Developed a list of expected impacts from FP in terms of outputs, outcomes, mid-term impacts and long-term impacts. Attempted a more systematic approach to the role of funding in the social shaping of the research and innovation system. Limited account of methods in this paper that draws on a range of reports | Expected impacts – particular emphasis on knowledge and commercialisation | Fuller account given in a report
Found: ‘impact mechanisms’ in the brain research FP case study area included: knowledge, agenda-setting, promoting self-organisation of stakeholder communities; co-ordinating or influencing policy; leveraging funding for R&D; mobility and development of human capital; behavioural additionality – learning a ‘new’ innovation model
5 | Aymerich et al.56 | 2012 | Both | Catalonia | Network centre for research in epidemiology and public health | Adaptation of Payback Framework, informed by CAHS and Berra et al.58 Survey of PIs of 217 small-scale projects (173 responded: 80%) ‘on the outcomes and potential impact of each project’; and a survey of external reviewers, with the aim of assessing the performance of each project in terms of how far the objectives were met | Survey to PIs covered three main categories of social impact, each with various dimensions, e.g. knowledge translation (including ‘To carry out clinical trials’ and ‘To the development of synthesis reports of scientific evidence, clinical practice or public health guidelines, or similar products’) | Good response rate; opinions of external reviewers and researchers. Claim to report:
  • a tool for ex-post evaluation with good discriminating ability that makes it possible to measure the extent to which a project’s objectives have been met . . . and the extent to which the project contributed benefits in terms of impact on the group’s scientific performance and social payback

However, considered potential not actual health-care benefits. Mixing of data for impact on synthesis reports in the same category as guidelines makes comparisons with other studies difficult
Found: PIs reported on ‘knowledge translation’. From the 173 projects, 59 examples contributing to synthesis reports or guidelines, many projects thought to have potential health-care benefits
6 | Banzi et al.4 | 2011 | Methodological | N/A | All health research | Cochrane review of approaches: highlighted five categories of impact from the CAHS/Payback Framework approach | Highlighted the payback categories: knowledge; capacity building; informing policies and product development; health and health sector; economic and social | Independent review confirmed findings from HTA review2 that the Payback Framework is the most frequently used approach. Limitations in previous studies: often based on information from researchers or funders; attribution; counterfactual and rarely a control situation; choosing appropriate time
  • A shared and comprehensive conceptual framework does not seem to be available yet and its single components (epidemiological, economic, and social) are often valued differently in different models

7 | Barnhoorn et al.346 | 2013 | Methodological | EU | EU: public health innovation projects | Described the development and operation of a web survey of country informants | General focus on uptake of the innovations but no details given of the specific impacts. No details supplied about results | Illustrates increased interest in assessing impact; encouraged more thinking on the issue
8 | Battles et al.347 | 2014 | Application | USA | AHRQ: Healthcare-Associated Infections Prevention Programme | No conceptual framework stated and no methods explicitly described as the account appears in a section of the journal headed: Agency Perspective. Presumably an ‘insider account’ | Examined: explored the knowledge produced, potential and actual impacts in terms of improved health care | Being conducted from within the agency provided an understanding of how the programme developed. However, it is not a formal or independent study of impact. As an agency covering both health-care research and quality, AHRQ seemed to be well placed to be involved in implementing/rolling out findings of a successful project
Found: a range of projects with notable findings and others that were promising. Some considerable impact:
  • CUSP for CLABSI [central line-associated bloodstream infections] achieved a successful national roll-out and significant results, reducing CLABSI rates by 40% from baseline

9 | Bennett et al.57 | 2013 | Both | Kenya and Uganda | FIC: research training programmes | Informed by the Payback Framework and contribution mapping, and by approaches to assessing research capacity building. Case studies based on semistructured interviews (53 trainees and others), focus groups (26 participants), structured surveys, document review, e.g. reviewing policy documents for citations to trainees’ publications | Contribution to policy and programme development
  • There were numerous examples of work conducted by former FIC trainees that influenced national and global policies

Impact not solely as a result of FIC training grants, but that support often critical. Various barriers to policy impacts
Multimethod approach. An important example is that it shows considerable impacts from development training grants. However, some of the lines of attribution are particularly difficult to establish for training grants
  • Facilitators for this influence included . . . professional networks spanning research and policy communities

10 | Bernstein et al.348 | 2006 | Methodological | Canada | CIHR | Payback Framework: describes how CIHR adapted the multidimensional categorisation of benefits from the Payback Framework. Describes the sources of information that might be used for each category | Five main categories of Payback Framework with some adaptations in the details: knowledge production; research targeting and capacity building; informing policy; health and health sector benefits; economic benefits | Could be seen as an important step between the Payback Framework and its adoption/adaptation in the ROI report by the CAHS
11 | Catalan Agency for Health Technology Assessment and Research58 | 2006 | Application | Catalonia | TV3 telethon for biomedical research in Catalonia (different specialty each year) | Payback Framework: adapted and applied somewhat differently from how the original categorisation and model were brought together. Review of documents related to all projects (320) funded from 1993 to 2002. Survey of researchers for all completed projects (164 PIs = 72%). Bibliometric analysis. Desk analysis for comparisons with other programmes | Examined: patents, publications, research training and targeting of future research, application of the results in health care and health gain | Multimethods approach provides an assessment of how the programme contributed significantly to the Catalan health research system. However, difficulties were encountered with collecting the diverse data and most of the data on impacts on health care seemed to be potential impacts. Findings were presented in a way that makes it difficult to compare with some other quantitative assessments of multiproject programmes, but built on in a series of later studies
Found: considerable impact in terms of primary results (publications), secondary results (PhDs, targeting further research), and potentially on health care
12 | Boaz et al.5 | 2009 | Methodological | N/A | Review covering policy-making in any field but particular focus on strategic levels and on waste, environment and pollution policy | A total of 14 frameworks revealed: many little used; the lower in the list, the more it is used (as at 2007):
  1. BSC
  2. correlation matrix
  3. RIF
  4. ROAMEF
  5. Results-based management
  6. logical framework approach
  7. RAPID outcome assessment
  8. simulation
  9. episode studies
  10. social analysis
  11. benchmarking
  12. outcome mapping
  13. HERG payback model
  14. economic analysis
In addition, 16 approaches, with the three most used being documentary analysis, case study analysis and semistructured interviews
Very few frameworks are actually being used
In the international development literature, studies traditionally favour qualitative, participatory evaluations with a focus on learning and service improvement. Methods such as ethnographic site visits and storytelling are used to capture multiple voices in what are usually presented as ‘positive utilisation narratives’. However, government donors, in particular, increasingly question the veracity of such narratives and favour external evaluations that use quantitative methods and predefined performance indicators
EU research programmes – usually assessed by panel reviews
Extensive systematic search of multiple databases to identify primary papers. Good review of difficulties assigning attribution
  • The approach selected to evaluate the impact of research programmes on policy also needs to be sensitive to the context in which the evaluation will be undertaken

Panel reviews attract criticism for their reliance on experts, but they do have the advantage of building ownership through the participation of individuals from diverse EU countries. This is particularly important in a European context and underlines the importance of selecting methods that are fit for purpose and appropriate to the needs of key stakeholders
13 | Bodeau-Livinec et al.82 | 2006 | Application | France | French Committee for the Assessment and Dissemination of Technological Innovations: HTA | None stated, but approach to scoring impact appears to follow the earlier studies of the HTA Council (CETS) in Quebec reported in the 2007 review: Jacob.43,44 Semidirective interviews with stakeholders affected by the recommendations (n = 14); and case studies that used surveys in hospitals to examine the impact of the recommendations (n = 13) | Examined: the impact in terms of interest in the recommendations and how they are used in decision-making, etc. | Using two mutually supportive approaches to data collection increases credibility. However, small numbers were interviewed and there was difficulty in establishing attribution. Main factor fostering compliance with recommendations ‘appears to be the existence of a system of regulation’. Reviewed other studies:
  • All these experiences together with our own work suggest that the impact of HTA on practices and introduction of new technologies is higher the more circumscribed is the target of the recommendation

Found: widespread interest, ‘used as decision-making tools by administrative staff and as a negotiating instrument by doctors in their dealings with management . . . ten of thirteen recommendations had an impact on the introduction of technology in health establishments’: seven considerable and three moderate
14 | Brambila et al.137 | 2007 | Both | Guatemala | Population Council: programme of operations research projects in reproductive health in Guatemala | Developed an approach that considered process, impact and contextual factors. Drew on literature such as Weiss.34 Key informant interviews; document review; and site visits to health centres and NGOs implementing or testing one or more operations research interventions. Based on the information collected, the evaluation team scored 22 projects (out of 44 conducted between 1988 and 2001). Used a 3-point scale to score each project on 14 process indicators, 11 impact indicators and six context indicators | Aim:
  • to evaluate ‘impact’ (the extent to which change occurred)

Indicators included: was the intervention effective in improving service delivery; did the organisation act on the results after the operations research study; if effective, was it scaled up; was it replicated in another country; did the donor, or other donors, fund new activities based on the results
Used the impact literature to inform detailed analysis of a programme. However, no indication given on how the 22 projects were selected for scoring. The several 5-year cycles of funding:
  • allowed for the accumulation of evidence in addition to the development of collaborative ties between researchers and practitioners, which ultimately resulted in changes to the service delivery environment

In highlighting how impact can arise from a long-term approach it refers to the even longer-term projects such as Matlab in Bangladesh, which has had
  • a profound impact on national health policy, donor priorities and public health action

Found: out of the 22, 13 projects found that the intervention was effective in improving results, three found interventions were not effective; in 14 studies, implementing agency acted on the results; nine interventions scaled up in the same organisation; five were adopted by another organisation in Guatemala; some studies led to policy changes, mainly at the programme level
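For illustration only of the kind of scoring just described (the individual scores and the simple averaging rule below are assumptions, not Brambila et al.’s actual procedure), scoring a single project on 3-point scales grouped into process, impact and context indicators could be sketched as:

# Illustrative sketch: 3-point scores grouped into the three indicator sets
# described above. All values and the averaging rule are hypothetical.
PROJECT_SCORES = {
    "process": [3, 2, 2, 3, 1, 2, 3, 2, 2, 3, 2, 2, 3, 2],  # 14 process indicators
    "impact":  [2, 3, 1, 2, 2, 3, 2, 1, 2, 3, 2],           # 11 impact indicators
    "context": [3, 2, 2, 1, 3, 2],                          # 6 context indicators
}

def group_averages(scores_by_group):
    """Average the 3-point scores within each indicator group."""
    return {group: round(sum(scores) / len(scores), 2)
            for group, scores in scores_by_group.items()}

print(group_averages(PROJECT_SCORES))  # {'process': 2.29, 'impact': 2.09, 'context': 2.17}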
15 | Brutscher et al.37 | 2009 | Methodological | Canada (review conducted from UK for CAHS report) | Conducted for health research by reviewing approaches in all fields | Reviews eight frameworks for evaluating research. (For a full analysis of how we drew on these see Appendix 5). Four are health specific: Leiden University Medical Centre (in our review see Mostert et al.100); measure of research impact and achievement – Australia, but ex ante; Payback Framework; and congressionally directed medical research programmes. Three are for research in general: VINNOVA (Swedish government agency for innovation – see Eriksen and Hervik94); UK Department for Innovation, Universities and Skills (economic impacts – see Department for Innovation, Universities and Skills106); and EU FP. One is used widely for USA government programmes: programme assessment rating tool (see Williams et al.92) | Analysed: five key elements in the frameworks: evaluation objectives; outcome measures; levels of aggregation; timing and evaluation methods | Identified there were likely to be trade-offs in the choice of key elements of evaluation frameworks
  • In particular, the choice of an evaluation objective, we find, is immensely important. It, directly or indirectly, influences the appropriateness of all other key elements

Found: significant differences between the evaluation frameworks. Only four frameworks go as far as assessing actual impacts
16 | Bumgarner et al.349 | 2006 | Application | International research (USA research organisation and review conducted by team from USA) | Center for Global Development: international development | No framework stated. In-depth interviews (more than 150); worldwide audience survey (more than 1200 respondents); documentary analysis and mapping; and case studies | Does the research agenda meet the needs of its policy-making targets? Is the research high quality? Does it influence other researchers? Is the communications strategy achieving its desired impact? Is it building appropriate partnerships? | Comprehensive evaluation of impact of the programme of work of the Centre, independently conducted, and commissioned by a consortium of organisations that fund the centre. Having a large number of interviews addressed the challenge of showing impact on policy. In addition, evidence was cross-checked with publication dates and the perceived relevance of the research was important. However, the wide range of activities and the large potential audience were challenging to address. Much of this international research was not in health. Center for Global Development’s research and advocacy work for policy influence was widely seen as ‘timely, empirically or analytically based, and highly effective among its audience’
Found: many policy-makers in the audience survey discuss or use Center for Global Development’s policy research and analysis, which is generally well respected (though some complaints). Some limitations on the extent of outreach. Some work had a major impact on policy, including on the Making Markets for Vaccines Initiative350 where its work turned an existing idea into a concrete policy proposal
17 | Bunn59 | 2010 | Both | UK | Various health topics covered by systematic reviews | Payback Framework and RIF (Kuruvilla et al.123). Bibliometrics and documentary review | Knowledge production, research targeting, informing policy development, impact on practice | Limitations discussed; many factors associated with impact identified. PhD study examining impact of the author’s own systematic reviews and links to the author’s wider attempt to highlight the impact of Cochrane reviews351
Found:
  • The reviews had influenced the development of national and international policy, although much of the impact was at a ‘micro’ level in the form of practice guidelines

18 | Bunn and Kendall60 | 2011 | Both | UK | Health visiting | Informed by aspects of Payback Framework and RIF. Mixture of working forwards and backwards. Documentary and literature review (starting with 30 published policy documents and checking for impact from research), citation tracking (Web of Science, Scopus, and Google Scholar) and interviews with researchers (n = 7) about the impact of their own research | Impact on policy | Range of methods. Drew on analysis in Hanney et al.2 to combine mostly working backwards from policy documents in the specific field, but also forwards from individual researchers. However, not focused on a specific programme of research
Found:
  • Although there were examples of policy documents being informed by health visiting research it was not always clear what role research had played in the development of recommendations

Researchers gave examples of impact on local, national and international policy impact, though often from bodies of research
19 | Bunn et al.61 | 2014 | Application | UK | NIHR: Cochrane systematic reviews from the supported CRGs | Applied same framework as developed in Bunn59 that was informed by aspects of Payback Framework and RIF. Questionnaires to review groups (20; 87% responded); documentary review; data verification and analysis; semistructured telephone interviews with users (i.e. eight guideline developers); detailed study of 60 reviews: questionnaire to review lead authors (60; 48% responded) and bibliometric analysis | As above for Bunn59: knowledge production, research targeting, informing policy development, impact on practice | Strength: overall, considerable data collection using a range of methods, with interviews providing explanations for figures showing the impact being made on policies:
  • Results from the semi-structured interviews suggest that searching for relevant Cochrane reviews is part of the guideline development process

Also limitations on use of reviews, e.g. out of date or not fit for the guideline scope. Possible weakness: only able to conduct detailed analysis on 60 of the 1502 reviews. Identified limited evidence on impact on practice:
  • more work is needed to develop suitable methods for evaluating the impact of systematic reviews

Found: 1502 reviews between 2007 and 2011. Of the 60 reviews, 27 had been cited > 100 times. Identified 40 examples where reviews informed further primary research. Thirteen (22%) of the surveys from authors said their review had influenced primary studies. Overall, there were 722 citations in 248 guidelines or evidence reviews behind them. A total of 481 reviews were in at least one guideline. Eight CRGs and 12 authors gave examples of impact on practice or services, but most did not know
20 | Buykx et al.352 | 2012 | Methodological | Australia | HSR | Developed the health services RIF, synthesising RIF (Kuruvilla et al.123), Lavis et al.,353,354 and the Payback Framework – the version developed by Kalucy et al.65 | Identified main areas of impact to assess: research related, policy, service and societal. Also planned to consider whether the impact originated with the researcher (i.e. producer push) or the end-user (i.e. user pull) | Planned to use to evaluate own research
21 | Cacari-Stone et al.176 | 2014 | Both | USA | Public (ethnic) research | CBPR policy-making framework. Consists of four linked circles: context (which sets the stage for) the CBPR process (which feeds into) the policy-making process (which influences) outcomes
Six case studies (using multiple methods – individual in-person interviews, focus groups, policy-maker telephone interviews, archival media and document review, and participant observation)
Outcomes include political action, policies (formal/informal), changed policy landscape (e.g. improved distributive justice) and hence health outcomes | Strong, trusting pre-existing community–campus relationships. Tension for change (e.g. widespread local awareness of injustice). Dedicated funding for the CBPR activity. Wide range of research methods used to collect multiple data sources (‘street science’ and academic research) to produce robust and credible account of local inequalities. Effective use of these data in civic engagement activities including media campaign and broad-based public awareness with different sectors of the community
Diverse and effective networking with intersectoral partners including advocacy organisations
22 | Caddell et al.96 | 2010 | Application | Canada | IWK Health Centre, Halifax, NS, Canada. Research operating grants (small grants): women and children’s health | RIF: adapted. Online questionnaire to 64 principal investigators and co-principal investigators (39 completed surveys: 61%) | Five subsections: impact on research; policy; practice; society; personal: self or career development | Pioneering study of small grants. However, relatively low response rate and sample size, and reliant on a single data collection method – self-report from researchers. An association between presenting at conferences and practice impacts. Authors stress the link between research and excellence in health care:
  • It is essential that academic health centres engage actively in ensuring that a culture of research inquiry is maintained and that support is available to those researchers that may ultimately contribute to the excellence in health care

Found: 16% policy impact: 8% in health centre; 8% beyond; 32% said resulted in a change in clinical practice; 55% informed clinical practice by providing broader clinical understanding and increased awareness; 46% improved quality of care; 87% improved research skills
23 | CAHS7 | 2009 | Methodological | Canada | All health research | Appendix C reviewed a range of frameworks starting with Payback Framework on which the framework in the main report built. Other frameworks reviewed: Walt and Gilson’s analytical model, RIF, research utilisation ladder, Lavis’s decision-making model, the AP Weiss logic model approach, HTA organisation assessment framework, societal impact framework, and the BSC | Appendix E identifies indicators of impact organised according to the five categories of impact from the Payback Framework | The full report is probably the most systematic and important analysis of research impact assessment in the review period. An account of the report’s key recommendations is contained in our comments below on the article by Frank and Nason.115 Appendix D analyses a series of key issues facing research evaluation including: attribution (and the role of logic models and case studies in addressing this); the counterfactual (‘Having a framework that can understand the different external contextual factors that may have been involved in impacts makes understanding the counterfactual easier’); internal and external threats to evaluation validity; time lags to impact
24 | CIHR355 | 2005 | Methodological | Canada | All CIHR health research | All three approaches to measuring research impact reviewed were found to have intellectual agreement on key issues, although the ways of conceptualising returns differed. It was decided to adapt the five-dimensional categorisation in Buxton and Hanney’s payback model39 for CIHR’s framework. Recommended that a variety of approaches be used as appropriate for the subject area and stakeholder concerns, including case studies or narratives and indicators of achievement in specific areas defined by the five impact categories | Adapted the five-dimensional categorisation in Buxton and Hanney’s payback model39 | Following an expert meeting, a draft framework was developed. It was reconciled with CIHR’s Common Performance Measurement and Evaluation Framework to ensure consistency with existing evaluation activities and to build on initiatives under way within the 13 institutes of CIHR
25 | Carden356 | 2009 | Application | International studies (Canadian report) | International development: wide range (one case study health) | Case studies (n = 23): interviews and documentary analysis | Policy | Full report from International Development Research Centre allows detailed analysis of strengths and weaknesses of case study approach. Seems to be separate from funder(s) of projects
26 | Cohen et al.52 | 2015 | Both | Australia | NHMRC: intervention studies in various programmes | Adapted categories from the Payback Framework and the CAHS framework, and aspects of case study scoring from UK’s REF. Mixed-method sequential methodology. Chief investigators of 70 eligible intervention studies who completed two surveys and an interview were included in the final sample (n = 50; 71%), on which post-research impact assessments were conducted. Data from the surveys and interviews triangulated with additional documentary analysis to develop comprehensive case studies. Case studies that indicated policy and practice impacts were summarised and the reported impacts were scored by an expert panel using criteria for four impact dimensions: corroboration, attribution, reach and importance. The scoring system separately considered reach and importance/significance ‘so as not to downplay the potential impact of smaller studies, or studies with small target groups’ | Developed a categorisation of impacts and mapped it onto the Payback categories. There were four main categories: scholarly outputs, translational outputs, policy and practice impacts, and long-term outcomes. Each has subcategories, five of which are relevant to this study: within the translational outputs subcategory ‘intervention packaged for implementation’, and within policy and practice:
  • changes to practice, changes to services, policy change, commercialisation

Case studies on all of the projects that met the inclusion criteria rather than applying any selection criteria. Multiple methods combined to form an innovative approach that included attempting independent verification of claimed impacts and advances in the approach to scoring impact case studies. Addressed issues such as attribution and corroboration by the data collection methods used and the scoring system. However, overall, there were issues with the scoring; the whole process was resource intensive. Had sufficient time been allowed for the impacts to occur? Authors note that:
  • We found that single intervention research studies can and do have concrete and measurable post-research real-world impacts . . . on policy and practice

This study adds to the view that the degree of impact identified (at least over relatively short time scales) might vary depending on the type of research, and the context in which it is conducted and the findings presented
Found: 19 (38%) of studies had at least one impact in the categories listed. There were a total of 21 translational impacts and 21 policy and practice impacts. Examples given of each. Scoring resulted in projects being classified into high, medium and low impact score groups. Case studies provided illustrations of high impact
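As a purely illustrative sketch of panel scoring of the kind described for this study (the 1–5 scale, the unweighted sum and the band thresholds are assumptions, not Cohen et al.’s published criteria), a case study could be scored on the four dimensions and banded as follows:

# Illustrative sketch: score an impact case study on the four dimensions named
# above and band the total. Scale, weighting and thresholds are hypothetical.
DIMENSIONS = ("corroboration", "attribution", "reach", "importance")

def band_case_study(panel_scores):
    """Sum 1-5 scores on each dimension and assign an impact band."""
    total = sum(panel_scores[d] for d in DIMENSIONS)  # possible range 4-20
    if total >= 16:
        return total, "high"
    if total >= 10:
        return total, "medium"
    return total, "low"

example = {"corroboration": 4, "attribution": 3, "reach": 5, "importance": 4}
print(band_case_study(example))  # (16, 'high')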
27 | Contandriopoulos et al.163 | 2010 | Methodological | N/A | HSR | Review of knowledge exchange; evidence into policy | How research knowledge is taken up and used in health-care organisations and policy settings – and why this process often fails | Knowledge may be individual or collective. Collective knowledge exchange differs when there is disagreement on values and priorities. If this is the case, scientific criteria, such as ‘strength of evidence’, may be overshadowed by political ones
28 | Currie et al.357 | 2005 | Both | Canada | Health promotion and community development | Impact model for community–campus partnerships. Methods not clearly stated. Implicitly, the model is used to guide the development of bespoke metrics and indicators that can be used to track the project. A brief example sketched of the work of the Research Alliance for Children with Special Needs. See King et al.188 for a later application | Three domains of impact in community–campus partnerships:
  1. Knowledge generation (research)
  2. Knowledge sharing (publications, website and other outputs)
  3. Research education and training (capability development in community to do research)
Each domain is analysed in terms of (proximal) functions, outputs, utilisation (e.g. requests for information, hits on website), mid-term impacts and long-term impacts
Strength: ‘systems’ model that recognises multidirectional influence, direct and indirect. Impact at multiple levels (individual, organisational, system)
Limitation (to some extent acknowledged by researchers): model is outcome focused and does not address ‘structural elements of partnerships and audiences, nor processes that could be utilized to enhance research impacts’. (Hence does not look at partnership dynamics, absorptive capacity, social capital, or engage with literature on CBPR or with the political/emancipatory nature of much CBPR)
29 | de Goede et al.158 | 2012 | Both | The Netherlands | Epidemiology and public health | Adaptation of Weiss’s model of research utilisation (meaning most policy impact is indirect and non-linear, achieved via interaction and enlightenment). Three case studies: interviews, ethnography and document analysis, with special focus on analysing the interactions in deliberative meetings among policy-makers | Model is more focused on processes than end-impacts: assumption that much of the impact is indirect and diffuse, hence it is the interactions and influences that matter. Structured framework for capturing the complexity of research utilisation, consisting of three phases:
  1. describe the research network and the policy network
  2. describe the types of research utilisation (classified as instrumental, conceptual, symbolic)
  3. describe the (reciprocal) interactions between researchers and policy-makers
Explore barriers to effective uptake of research: expectation (are policy-makers ‘ready’ for these findings?), transfer (how effectively and appropriately are findings communicated?), acceptance (are findings seen as credible and true?) and interpretation (what value do policy-makers place on them?). A significant barrier to uptake was different framings of problems (epidemiologists focused on ‘healthy behaviours’ whereas policy-makers took a more social framing and considered things like school drop-outs or domestic violence). Some policy-makers found the local health messages so irrelevant to their view of the problem that they did not use them at all. Early involvement of policy-makers in the process of producing local health messages appeared to make it more likely that they would be used
Found: most impacts were conceptual. Case studies considered the fate of ‘local health messages’ produced by researchers but intended for local policy influence. Policy-makers rarely took a local health message and immediately implemented it in ways that resonated with researchers’ model of the problem
30 | de Goede et al.358 | 2012 | Both | The Netherlands | Epidemiology and public health | Extension of de Goede158 described above. Again adaptation of Weiss’s model of research utilisation. Extended above study by including a quantitative scale for measuring kinds of utilisation. Case studies. This paper reports development and use of a questionnaire to 155 local policy-makers. Sets of questions linked to each of the three ways evidence is used (see the next column) | Use of epidemiological evidence instrumentally (in specific and direct ways to solve a particular problem), conceptually (in a more general way to improve understanding of problems) or symbolically | More about how research is used than the impact of a programme of research. The various ways that the research was used were treated as dependent variables, with various independent variables tested for correlation (e.g. whether the policy-maker had previous experience of research or was involved in the research process)
31 | Deloitte Access Economics25 | 2011 | Both | Australia | NHMRC: subset – cardiovascular disease, cancer, sudden infant death syndrome, asthma and muscular dystrophy | ROI analysis (cost–benefit analysis). Outcomes measured as: net benefit and benefit-to-cost ratio. Collation of funding data and estimation of projected benefits | Gains in well-being measured as reductions in DALYs; gains in averted costs incorporated, as well as productivity and indirect gains. Benefit-to-cost ratio: cardiovascular disease, 6.1; cancer, 2.7; sudden infant death syndrome, 1.2; and muscular dystrophy, 0.7 | Valid attempt to value monetised health benefits and equate with a lagged investment period, also accounting to some extent for problems of attribution. However, weaknesses include the use of projected health gains – ‘unknown unknowns’ and a weak basis for time lag between R&D and health gains. Does not seem to account for delivery costs of new interventions. Some disagreement about robustness of DALYs
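As an arithmetic illustration only (not the Deloitte Access Economics model itself; every figure and parameter below is a hypothetical assumption), a benefit-to-cost ratio of this general kind compares monetised DALY gains, averted costs and productivity gains, scaled by an attribution share, with the lagged research investment:

# Illustrative sketch of a benefit-to-cost ratio of the general kind described
# above. All parameter names and numbers are hypothetical assumptions.
def benefit_cost_ratio(dalys_averted, value_per_daly, averted_costs,
                       productivity_gains, attribution_share,
                       annual_research_spend, years_of_funding,
                       discount_rate, lag_years):
    """Monetise health gains, apply an attribution share, and compare them
    with research spending compounded forward over an assumed time lag."""
    gross_benefits = dalys_averted * value_per_daly + averted_costs + productivity_gains
    attributable_benefits = gross_benefits * attribution_share
    investment = sum(
        annual_research_spend * (1 + discount_rate) ** (lag_years + year)
        for year in range(years_of_funding)
    )
    return attributable_benefits / investment

# Hypothetical inputs: 10,000 DALYs averted valued at $180,000 each, $200M
# averted costs, $50M productivity gains, 5% attribution, $50M/year of funding
# for 10 years, 3% discount rate and a 15-year lag.
print(round(benefit_cost_ratio(10_000, 180_000, 200e6, 50e6, 0.05,
                               50e6, 10, 0.03, 15), 2))  # ~0.11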
32 | Dembe et al.124 | 2014 | Methodological | USA | All health | The translational research impact scale: informed by a logic model from W.K. Kellogg Foundation, by the RIF (Kuruvilla et al.123) and the Becker Library (Sarli et al.118). Identified 79 possible indicators used in 25 previous articles and reduced them to 72 through consulting a panel of experts, but further work was being undertaken to develop the requisite measurement processes
  • Our eventual goal is to develop an aggregate composite score for measuring impact attainment across sites

Three main domains: research-related impacts; translational impacts (that include ‘improvements result in better quality of care’); societal impacts | Would be comprehensive; however, feasibility is yet to be reported on and the number of subdomains and indicators in each domain varies considerably
33 | Department for International Development8 | 2014 | Methodological | International studies (review conducted in UK) | Review of studies assessing impact of international development research, including health | Devised own theory of change that combines four major pathways by which research has been hypothesised to contribute to development. There are four pathways going from the supply of research and from the demand for research outputs towards poverty reduction and improved quality of life: economic growth, human capital, products and technologies, and evidence-informed policy/practice | Broadly structured review around the pathways developed: economic growth, human capital, pro-poor products and technologies, evidence-informed policy/practice, quantifying economic impact | Reviewed and organised a large number of papers and usefully challenges some assumptions about the benefits from research, but did not include some important papers from the health sector, e.g. Brambila et al.,137 showing how long-term programmes of health research can make an impact
34 | Donovan128 | 2008 | Methodological | Australia | All research fields | RQF recommended the impact of Australian higher education could be assessed by research groups producing:
  • A statement of claims against impact criteria, up to four case studies illustrating examples of impact, and details of end users who can verify the impact claims

Assessment panels would review the evidence portfolios and:
  • apply their collective expert judgement to determine the validity of the claims made against the impact criteria. Impact ratings will be assigned . . . The Working Group recommended the Payback consensus scoring approach as particularly suited for this purpose

Wider economic, social, environmental and cultural benefits of research | Describes the RQF that became a key element in the development of methods to assess the impact of research in a national scheme across the entire system of higher education. Although it was not eventually introduced in Australia, it was drawn on in recommendations that the REF adopt impact case studies
35 | Donovan et al.62 | 2014 | Application | Australia (assessment by UK team) | National Breast Cancer Foundation: key programmes | Payback Framework. Documentary analysis, bibliometrics, survey (242 sent – 63% response rate), 16 case studies and cross-case analysis. Applied to a range of funding schemes used by the charity | The five payback categories: knowledge, research, policy and product development, health and economy | Thorough study funded by the charity that announced the findings would be used to inform their research strategy; however, many projects had only recently been completed
Found: citation rate between 2006 and 2010 was double the world benchmark; 185 higher degrees (121 PhDs); 46% career progression; 66% generated tools for future research use; leveraged an additional AUS$1.4 for each dollar spent; 10% impact on policy – 29% expected to do so; 11% contributed to product development; 14% impact on practice/behaviour, 39% expected. Brief accounts of case studies included some important examples of impacts achieved
36 | Drew et al.90 | 2013 | Both | USA | NIEHS: all programmes of environmental health sciences research | High-impacts tracking system. A framework informed by stream of work from NIEHS (Engel-Cox et al.,63 Orians et al.17), also the Becker Library approach (Sarli et al.118) ‘closely aligned with the categories we used in our logic models, and also informed our ontology’. Also informed by the development in the UK of researchfish. High-impacts tracking system:
  • an innovative, Web-based application intended to capture and track short- and long-term research outputs and impacts

Imports much of the data from existing NIH databases of grant information, also text of progress reports and notes of programme officers/manager
Outputs: scientific findings, publications, patents, collaborations, animal models, biomarkers, curricula and guidelines, databases and software, measurement instruments and sensors | This is one part of ambitious plans being developed by NIEHS for use by them. However, they are still evolving and recognise the need to develop ways to capture long-term impacts and ‘expert opinion and review of specific program areas’. A growing interest in UK developments in impact assessment is illustrated by the fact that the system was informed by the researchfish approach
Impacts: improved health/disease reduction, exposure reduction, policies and regulations, community benefit, economic benefit
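Purely as an illustrative sketch of the kind of per-grant record such a web-based tracking system might hold (the field names and categories are hypothetical, not the actual NIEHS schema), one could model it as:

# Illustrative sketch: a per-grant record of outputs and impacts of the kind a
# tracking system like the one described above might store. Hypothetical schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class GrantImpactRecord:
    grant_id: str                                           # imported from existing grant databases
    outputs: List[str] = field(default_factory=list)        # e.g. publications, patents, databases
    impacts: List[str] = field(default_factory=list)        # e.g. policy change, exposure reduction
    officer_notes: List[str] = field(default_factory=list)  # programme officer/manager notes

record = GrantImpactRecord(grant_id="R01-XX-0000")  # hypothetical identifier
record.outputs.append("publication: biomarker validation study")
record.impacts.append("informed an exposure-reduction guideline")
print(record)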
37 | Druce et al.359 | 2009 | Application | International | IAVI | None stated beyond assessing the extent to which IAVI met its strategic objectives over the period 2003–7. Qualitative and quantitative: documentary review; interviews (100+); and field visits | Assessed the initiative’s accomplishments in the following: R&D; clinical trials; advocacy and communications; policy | Comprehensive evaluation independently commissioned and conducted, using a range of methods. Denied access to individual partnership agreements for reasons of confidentiality. Not specifically an assessment of impacts, but the IAVI objectives included items such as policy. However, IAVI produces policy analysis more than producing research used in policy
Found: added scientific knowledge; built capacity; been a leader in advocacy for HIV vaccines; ‘important value-adding contributions to policy issues’
38 | Engel-Cox et al.63 | 2008 | Both | USA | NIEHS: intended for all programmes of environmental health sciences research | NIEHS logic framework developed and identified a range of outcomes informed by the Payback Framework and Bozeman’s public value mapping;360 then added a list of metrics for logic model components. The logic model is complex, as in addition to inputs, activities, outputs, and outcomes (short term, intermediate and long term), there are also four pathways: NIEHS and other government pathways; grantee institutions; business and industry; and community. ‘Each pathway illustrates the research process that would be carried out most directly by a given institutional partner that is being evaluated.’ The model also included the knowledge reservoir, and contextual factors. No explicit description of methods used to conduct the illustrative case studies, but implied documentary review and ‘expert elicitation’ | From payback and value mapping: translation into policy, guidelines, improved allocation of resources, commercial development; new and improved products and processes; the incidence, magnitude, and duration of social change; HSC welfare gain and national economic benefit from commercial exploration and a healthy workforce; environmental quality and sustainability. Range of impacts identified in case studies | Important methodological development, illustrated in two case studies rather than a full application. Builds comprehensively on earlier work. Having the various pathways allows a broader perspective to be developed (e.g. by the grantee institution pathway) than that of individual projects. However, challenges include:
  • the lack of direct attribution of NIEHS-supported work to many of the outcome measures and the lack of robust electronic databases that can be easily searched to help establish these linkages

The logic model is complex and:
  • Distinctions drawn between the institutional pathways are artificial to some degree, and there is considerable cross-over between sub-models

39 | Ensor et al.361 | 2009 | Application | Nepal | Maternity: skilled attendance | None stated. Key informant interviews to identify the role of specific research in a key policy | Policy: specific area of financing of skilled attendance at birth | Factors for rapid utilisation: the research processes helped ensure wide communication of the findings; political expediency meant that there was a key political champion advocating a strong policy response
40 | Eriksen and Hervik94 | 2005 | Application | Sweden | VINNOVA: neck injuries research | VINNOVA approach. Mixed approach: economic analysis of benefits for Swedes, the industry and the world compared with the costs. Classic R&D assessment of contribution to the research field: publications, research training, quality of the research, development of networks, etc.: desk analysis followed by review by a panel. Best estimates also made of value of future research | Benefits to society, the companies involved and the research field | Wide-ranging approach including important long-term impacts from long-term funding of a centre. Admitted ‘there is a great deal of uncertainty in these calculations’. Perhaps it is even more challenging than is acknowledged in the report
Found:
  • The main impression, therefore, is that the systems for protection against injury described above provide major economic benefits

41 | Evans et al.101 | 2014 | Application | UK | Public involvement in health research | Realist evaluation. Mixed-method case study | Factors and interactions that influence successful involvement of patients and the public in research | Strength – clear and rigorous application of realist method supplemented with other perspectives where appropriate. Various mechanisms – collaborative action, relationship building, engagement, motivation, knowledge exchange and learning – which interact with context to produce different outcomes in different parts of the programme
42 | Expert Panel for Health Directorate of the European Commission’s Research Innovation Directorate General53 | 2013 | Application | EU | EU FPs 5, 6 and 7: public health projects | Payback Framework. Documentary review: all 70 completed projects; 120 ongoing; key informant interviews with particularly successful and underperforming projects (n = 16). A data extraction form was constructed based on the categories from the Payback Framework, with each of the main categories broken down into a series of specific questions. Distinction made between expected and achieved | Dimensions adapted from Payback Framework: knowledge production; research capacity building; informing health policy (and practice added after pilot); health and health sector benefits; economic and social impact; dissemination | Used documentary review and therefore had data about the whole set of completed projects. However,
  • Extensive follow-up of the post-project impact of completed projects was not possible . . . we were unable to determine whether project deliverables were accessed and used by the relevant end-users, nor could we examine possible effects on population health

The study did not include the health equity aspect included in the Payback Framework, or some aspects that could help analysis of the level of attribution. Illustrates a way of conducting impact assessment broadly (apart from the selected interviews) using a framework to interrogate documents and thus have comprehensive coverage of a programme without requiring additional data from the researchers. Also shows the limitations of such an approach
Found: only six out of the 70 completed projects did not achieve the primary intended output; 56 produced peer-reviewed publications; 42% took actions to engage or inform policy-makers; four projects led to a change of policy (22% expected to do so); six had an impact on health service delivery; seven had an impact on health practitioners; six had an impact on health; one had a beneficial impact on small and medium enterprises
43 | Frank and Nason115 | 2009 | Methodological | Canada | All health | Adapted the Payback Framework to develop the CAHS framework. No application but identified 66 validated indicators. Framework based on a report overseen by an expert international panel and supported by a series of commissioned appendices. The panel recognised that ‘The 66 validated indicators currently listed do not cover the full spectrum of possibilities’ and identified a series of implementation challenges | Adapted the five categories in the dimensional categorisation of Buxton and Hanney’s payback model:
  • advancing knowledge, capacity building, informing decision-making [changed from the original informing policy and product development]; health benefits; and broad economic and social benefits [changed from broader economic benefits]’

The authors report the CAHS panel:
  • recommend a method that builds on the advantages of the ‘payback model’ but adapts it to target-specific impacts in multiple domains at multiple levels

Based on the work of an international panel of experts informed by a major review and analysis of many aspects of impact assessment. Highlighted a series of challenges facing any assessment of research impacts, including: attribution, the counter-factual and time lags. The CAHS framework has informed a series of studies. Attempting to develop an inclusive set of indicators has generated additional challenges, whereas the Payback Framework put more emphasis on addressing issues such as attribution through use of case studies
44 | Garfinkel et al.362 | 2006 | Methodological | USA | Perinatal health | Societal outcomes map. ‘Technology road mapping’ can be described as ‘graphical overviews of potential solutions over time to specific concerns’, aimed at clarifying what inputs are needed to produce desired outcomes | Start by cataloguing desired outcomes, then specify inputs, identify potential research and policy paths, and mine the academic literature quantitatively to profile a research domain | Essentially an ‘engineering’ theory. Speculative and deterministic, seems to be a formalised brainstorming process that generates complicated (but not complex) boxes-and-arrows diagrams. Marginal for our review and does not link to assessment of specific programmes of research
45 | Gibbons et al.179 | 1994 | Methodological | N/A | University–society relationships | Co-production. ‘Mode 2’ knowledge is:
  1. generated within its context of application
  2. transdisciplinary and intersectoral rather than narrowly ‘academic’
  3. produced in increasingly diverse ways and contexts
  4. highly reflexive, i.e. no longer seen as the objective investigation of hard reality but the intersection of multiple views and approaches on how science should be ‘done’
  5. accountable to a wide range of users, not merely to academic peer reviewers
Reframing of impact in terms of the increasingly complex and diverse infrastructures and relationships that support knowledge production rather than as dissemination, implementation or translation of research ‘findings’ | Pre-dates the current review period, but important for the philosophical taxonomy. Strength: novel and important reconceptualisation. Weakness: few detailed empirical examples hence (when initially published) largely speculative
46 | Gibson et al.363 | 2014 | Application | USA | Comparative effectiveness research: four technologies | Before/after. Time trend. Each of the Comparative Effectiveness Trials identified a clinical practice guideline citing it and included the publication date of each in the analysis | Practice change | ‘This study demonstrates that evaluating the impact on clinical practice, based on results of published CER trials and CPGs, is complex’
Found: no clear pattern of utilisation in the first four quarters after publication. (While this study was not measuring impact by inclusion on a guideline, all four were rapidly cited on one and would have been counted as making an impact in other studies)
47 | Glasgow et al.178 | 2012 | Methodological | N/A | Health research | Implementation science. New model proposed | Alignment of mode 1, research-based evidence, with mode 2, socially engaged implementation process | Strength: attempt to integrate mode 1 and mode 2. Weakness: may have inadvertently compromised core assumptions and principles of some models in producing the hybrid. Speculates that by aligning the ‘internal validity’ of RCTs with the ‘external validity’ of social engagement, research will have greater impact
48 | Godin and Doré364 | 2005 | Methodological | N/A | Science in general, including health | Developed an approach based on a range of dimensions of society, beyond the economic one, on which science has an impact. Challenges identified (see columns 8 and 9) might help inform methods for impact assessment | Identified a very preliminary list of 72 impacts and indicators within 11 dimensions on which impact could be assessed: science, technology, economy, culture, society, policy, organisation, health, environment, symbolic and training | Suggests three challenges must be met before one conducts any measurement of these types of impact:
  • One is to distinguish conceptually between output and impact (or outcome). The second is to identify specifically the transfer mechanisms by which science translates into impact. The last is to develop appropriate and reliable instruments and indicators

49 | Gold and Taylor138 | 2007 | Application | USA | AHRQ: Integrated Delivery Systems Research Network | Documentary review; descriptive interviews (85); four case studies. Mixed | Changes in operations. ‘Of the 50 completed projects studied, 30 had an operational effect or use’ | Partnerships. Success factors: responsiveness of project work to delivery system needs, ongoing funding, development of tools that helped users see their operational relevance
50 | Graham et al.85 | 2012 | Both | Canada | AIHS: tested on several programmes, e.g. independent investigators programme (all fields) | Developed and tested an AIHS version of the CAHS framework. Archival review of applications, progress reports, etc. For example, from 215 grantees on the independent investigators programme whose awards ended 2004–8 | Started with the five CAHS categories (knowledge, capacity, decision-making, health, social and economic). Added an additional category on organisational performance and additional indicators on items such as innovation | Mainly methodological, describing how the AIHS version of the CAHS framework was developed, with a particular focus on developing data capture approaches for the many indicators identified. The products and tools generated by AIHS through the framework’s implementation included:
  • (1) a performance measurement system that tracks progress to impact; (2) aggregated and pooled reporting capabilities through the standardization of indicators and metrics across programs

The third point highlights the organisational focus:
  • adoption of additional impact categories, indicators, and measures which improved the organization’s ability to assess and demonstrate its contributions to health system impacts in addition to the contribution of its grantees

Found (for independent investigators programme): advancing knowledge, e.g. 3901 publications; building capacity, e.g. CAD$217M leveraged in additional funding; informing decision-making, e.g. guidelines were developed in collaborations with health authorities, industry, government, and non-profit organisations; health care, e.g. 42 improvements to health care were identified through improvements to 30 therapeutics and 12 diagnosis/prognosis techniques; economic and social, e.g. 10 products in development, 5 spin-offs
51 | Grant et al.38 | 2009 | Methodological | UK | Review for HEFCE: all university research | Four methods for evaluating impact of university research. Reviewed against HEFCE criteria. Recommended adoption of case study approach as set out in the RQF from Australia (see Donovan128) | Wide range of impacts | Methodological review funded by HEFCE to inform its plans for assessing university research impact. Recommendation informed the development of the REF – see HEFCE33
52Grazier et al.3652013MethodologicalUSANIH CTSA: clinical research units programmeROI analysis (cost–benefit analysis). Essentially adopts a traditional cost–benefit approach. Paper details development of a ROI protocol to enable project-based evaluations of CTSA programme awards. Model development was an iterative process involving stakeholders to identify important components – beginning simply, and may be limited by difficulties in identifying, measuring and monetising benefits/costs, hence qualitative data can support the quantitativeROI – timing and magnitude of expected gains/timing and magnitude of expected costs. Proposes methods (e.g. survey, scoping, interviews) for identifying the availability, accessibility and quality of data and suggests supplementing with qualitative dataAcknowledges that although not all benefits can be quantitatively measured, capturing them is important for a wider understanding of impact. Does it add much in comparison to other, more formal/considered approaches? Conjecture: value of return will be a function of a number of characteristics. These include awards through the CTSA and other sources; the institutions at the time of the award, before it and after; the investigator; number of collaborations in the award; length and extent of ‘exposure’ to the clinical research unit of research programmes; all dependent on the scope and boundary discussions with stakeholders and on the synthesised model constructed. Note difficulties in attribution if there are multiple sources of funding, and time lag between investment and health gain
53Group of Eight and Australian Technology Network of Universities1052012BothAustraliaAll research fieldsEIA based on the REF. Case studies (n = 162) developed by researchers in institutions to describe the impact achieved from 2007 to mid-2012 by their research conducted from 1992. Case studies then assessed and scored by expert panels containing many people from outside higher education. Panels rated the impacts according to their reach and significanceExplicitly adopted the definition of impact set out in the UK’s 2014 REF assessment exercise. (The EIA trial was designed after the criteria for the REF had been set out, but was conducted before the REF). Focus on measuring the innovation dividend in areas of: defence, economic development, society (including health) and environmentHad the strength of building on the 2014 REF. Reported that the case study methodology ‘to assess research impact is applicable as a way forward to a national assessment of impact’. Weaknesses or problems related to the time taken to put together the case studies, especially if the exercise was scaled up to a national impact assessment exercise, and given the time involved in assessing the case studies ‘more extensive Panel briefings would be essential should this assessment method be adopted at national level’
Found: 87% of cases rated as being in the top three categories out of five (plus a not classified category)
  • High quality research carried out in Australian universities has had enormous benefits for the health, security, prosperity, cultural and environmental wellbeing of Australia, the region and the world

54Guinea et al.642015BothInternationalSeventh EU FP: international development public health research projectImpact-oriented monitoring informed by Payback Framework. Tested and proposed elements: (1) project results framework (to be developed by each project co-ordinator during the grant negotiation process to help link objectives with activities, results and impacts, and which can be updated throughout the life of the project); (2) co-ordinators survey (web based and linked to five payback categories plus dissemination; completed at end of project and after 3 years, and in the middle of projects of ≥ 4 years); (3) end-user survey (web based, to people identified by the project co-ordinator at end of project); and (4) assessment tool (online/web to assess individual projects at end of project and 3 years after on the basis of data gathered as above). In developing the tool the team also conducted nine case studiesBased on the five dimensions of the Payback multidimensional categorisation, but with the additional category being ‘Pathway to impact’: advancing knowledge; capacity building and research targeting; informing decision-making, practice and policy; population health and health sector benefits; broad economic and social benefits; dissemination and knowledge transferDeveloped its own comprehensive methodology informed by existing frameworks and tested a range of methods, but had a low response rate: 28 out of 116 projects. Large-scale EU project funded in the light of EU Court of Auditors criticism of the lack of evaluation of EU FP4, FP5 and part of FP6. Aim:
  • Structured information is intended to facilitate and underpin the decision-making of EC officers in charge of project management, and support them in the design of future research topics and programmes

Some interesting methodological observations:
  • results from case studies revealed a high concordance with the coordinators’ survey on several facets, for instance, in . . . providing evidence of project performance, and revealing some types of impacts

Found: findings were generally mentioned only in relation to comments on the methodology
55Guthrie et al.92013MethodologicalUSA (review conducted in UK)All fieldsA total of 14 frameworks, six reviewed in detail (REF,33 Excellence in Research for Australia,366 STAR METRICS®,367 CAHS,7 NIHR Dashboard,368 Productive Interactions81). Criteria analysed included purpose: analysis, accountability, advocacy, allocationThe research evaluation frameworks and tools examined covered evaluation in general, not just impacts. Found considerable variety in role and potential of the different frameworks. Suggested CAHS payback could have advantagesFull report analysing many aspects of research evaluation systems. Conducted for Association of American Medical Colleges. In various places our review draws on aspects of this analysis
56Guthrie et al.272015BothUKHTA programmeROI. Selected 10 key HTA studies, mostly RCTs but a few systematic reviews, and applied desk analysisKey impact: per patient QALY gains associated with the intervention monetised at a health-care opportunity cost of £20,000–30,000 per QALY, net of health-care costs. Net benefit calculated for a hypothetical full year of implementationHas the strength, compared with most other ROI studies, of having a clear picture of the cost of the research inputs and of providing detailed case study analysis. Weaknesses: small sample size (10/743); does not adequately address attribution problems but assumes HTA studies were responsible for all post-HTA research implementation as they were considered to be ‘definitive’ evidence
Found: realising only 12% of the potential net benefit would cover the £367M invested in the HTA programme
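The net-benefit arithmetic summarised in this entry can be illustrated with a minimal sketch, using placeholder values rather than the figures from Guthrie et al.: per-patient QALY gains are monetised at an assumed opportunity-cost threshold, additional health-care costs are netted off, and the result is scaled to a hypothetical full year of implementation before being compared with the research spend.

```python
# Minimal sketch of the net-monetary-benefit arithmetic described above.
# All inputs are illustrative placeholders, not figures from Guthrie et al.

def net_monetary_benefit(qaly_gain_per_patient, extra_cost_per_patient,
                         eligible_patients_per_year, value_per_qaly=20_000.0):
    """Monetise per-patient QALY gains at an assumed opportunity-cost
    threshold, net of additional health-care costs, and scale to one
    hypothetical full year of implementation."""
    per_patient = qaly_gain_per_patient * value_per_qaly - extra_cost_per_patient
    return per_patient * eligible_patients_per_year


if __name__ == "__main__":
    # Hypothetical intervention: 0.05 QALYs gained per patient at an extra
    # cost of £400, offered to 50,000 patients in one year.
    nmb = net_monetary_benefit(0.05, 400.0, 50_000)
    research_spend = 367_000_000  # programme investment cited in the entry above
    print(f"Annual net monetary benefit: £{nmb:,.0f}")
    print(f"Share of the research spend covered: {nmb / research_spend:.1%}")
```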
57Gutman et al.1322009BothUSARWJF: ALR programme (see also Ottoson et al.369 on the same programme)The conceptual model used in the ALR programme ‘was used to guide the evaluation’ but aspects needed refinement to give more emphasis to context, attracting additional research funding, and translating research into policy change. Aspects of Weiss’s model used for analysing policy contributions. A retrospective, in-depth, descriptive study utilising multiple methods, both qualitative and quantitative. Qualitative data derived mainly from 88 interviews with key informants: a random sample of grantees; other funding organisations; policy and advocacy organisations; programme leadership and RWJF staff. Quantitative data derived primarily from a web-based survey of grantee investigators (sent to PIs, chief investigators and applicants); of the 74 projects, 68 responses analysed. Analysed the early examples of policy impacts by using Weiss’s framework and found examples of instrumental, political and conceptual useBuilding knowledge base, including creation of a new field; building human resources; growing financial resources; contribution to policy debate and changeComprehensive data collection from diverse sources attempted to assess the impact of the research programme as part of a wider intervention; however, only 16% of competitively awarded grants had been completed prior to the year of the evaluation. The author commented on the limitations:
  • the study design is descriptive rather than quasi-experimental, and therefore does not include a comparison group composed of other RWJF national programs

The author also commented:
  • some approaches utilised by the program worked well, including developing a multifaceted, ongoing, interactive relationship with advocacy and policy-maker organisations

Grantees who completed both interviews and surveys generally gave similar responses, but researchers included in the random sample of interviewees reported a higher percentage of policy impact than researchers surveyed, although the questions were slightly different
Found:
  • ALR was the catalyst to build a new field of transdisciplinary research focused on policy and environmental factors conducive to physical activity

ALR investigators leveraged more than two-thirds of the ALR investment; dissemination included synthesis and translation activities by the programme office; 55% of PIs produced policy-related products; effective two-way liaison with some organisations was brokered by the ALR programme; and interviewees from various organisations reported that the relationship with the ALR programme was beneficial, for example in bolstering the case for action and providing materials. It was generally thought to be too early for much policy impact, but 25% of survey respondents and 43% of interviewees reported a policy impact
58Hage1122009MethodologicalCanada (part of appendix A of the CAHS report)Treatment sector of the Canadian health care systemInformed development of CAHS frameworkIdentifies a series of meso-level metrics of impact that identify, for example, the detailed aspects of impacts on health care that might arise from different phases of research and shows how these might help an impact assessmentReferred to in the main CAHS report which states:
  • This aligns with the paper by Hage (appendix A, p. A79), which argues that meso-level factors—those at the health category level—are vital in understanding the impacts and pathways to impacts of health research

    CAHS7

Although only a few aspects of the proposals seem to have been taken up in the final main CAHS report, it did inform the thinking
59Hanney et al.22007BothUKNHS: HTA programmePayback Framework. Multiple methods approach, literature review, funder documents, survey, case studies, interviews (n = 16) (between 1993 and 2003)The review found that the Payback Framework is the most widely used approach/model. Impact on knowledge generation can be quantified more than that on policy, behaviour or health gain. A higher level of impact on policy than is often assumed. Primary study: used categories from Payback Framework. The HTA programme had considerable impact in terms of publications, dissemination, policy and behaviour. Different parts of the programme had different impacts, as expected: NICE TARs had a 96% rate of impact on past policy, compared with 60% for primary and secondary HTA research. The mean number of publications per project was 2.93. Case studies showed large diversity in the levels and forms of impacts and the ways in which they arose. NICE TARs demonstrated the importance of having a customer (receptor) body for achieving impact on policyThe survey showed that the data on payback can be collected but more than one-third of the projects did not respond. The review conducted as part of the study identified the Payback Framework as the most appropriate approach to use to assess the impact of the HTA programme. It facilitated capture of key factors in achieving high levels of impact, i.e. the agenda setting to meet the needs of the health-care system; the generally high scientific quality of the research; and the existence of a range of ‘receptor bodies’ to receive and use the findings
60Hanney et al.3702006ApplicationUKOne diabetes researcher’s body of research over a certain periodBibliometrics, surveys to key authors, semistructured interviews with the researcher and experts/users. The bibliometric analysis traced citations through first-, second- and third-generation papers with qualitative analysis of the importance of the work in citing papersPartly informed by the categories from the Payback Framework, with the articles describing the knowledge produced being particularly importantQualitative approaches important alongside the bibliometric analysis
Found: various examples of impact, and not all papers thought to have made an impact were highly cited
61Hanney et al.512013BothUKAsthma UK: all programmes of asthma researchPayback. Survey, documents, case studies, some expanding the approachFive categories from the Payback Framework: publications, including follow on; targeting further research and capacity building; policy influence and product development; health and health sector benefits; broader economic benefitsExtended Payback Framework to assess impact from long-term professorial chair funding and cofunding with MRC of a research centre. Also, as intended, informed strategy of the medical research charity
Found: various categories of social impact arose from only a minority of projects (13% on policy, 17% on product development, 6% on health gain) but some important influence on guidelines, potentially major breakthroughs in several asthma therapies, establishment of pioneering collaborative research centre
62Hanney et al.462013MethodologicalN/AInvolvement in research in any health field/impact on health-care performanceThe review identified papers assessing how far there were improvements in health-care performance associated with engagement in research. Hourglass review – focused and wide review. Built a matrix using an iterative approach:
  • our initial planning and mapping exercise explored several major theoretical approaches that could, potentially, contribute to informing the conduct of the review and to building a framework within which to analyse the mechanisms through which engagement in research can improve performance

To map and explore plausible mechanisms through which research engagement might improve health services performance at clinician, team, service or organisational levels. (Improve understanding of the impact of engagement in health research.) Identified two main dimensions to categorise studies that assessed whether research engagement led to improved performance or not:
  • We have called these two dimensions the degree of intentionality and the scope of the impact

Of the 33 studies, 28 studies were positive in showing impacts: 13 described broader impact and 15 described specific impact
The focused review collated more evidence than had previously been thought to exist, and although it was generally positive it was difficult to interpret
One difficulty of applying the matrix arose because some of the papers in our focused review had features that fitted into more than one category on a certain dimension. Nevertheless, it was:
  • important to attempt to make such categorisations because of the potentially very different mechanisms that may be at work in these different circumstances on the two dimensions

63Hansen et al.1112013MethodologicalThe European Observatory on Health Systems and PoliciesPublic healthSummarises four conceptual frameworks: Payback; RIF; European Commission seventh Framework (see next column); and research utilisation ladder. All essentially variants of case studyEuropean Commission FP7: Description of main dissemination activities and exploitation of results; synergies with science education (involving students or creating science material); engagement with civil society and policy-makers (e.g. NGOs, government, patient groups) and production of outputs which could be used by policy-makers; use of dissemination mechanisms to reach the general public in appropriate languages; use and dissemination (peer-reviewed journal articles, patent applications, intellectual property rights, spin-offs); and the employment consequences of the projectNon-systematic review that usefully summarises some approaches but omits others. Importantly suggests:
  • it may not be necessary to develop another version, but rather find clever ways to combine elements from different frameworks to best fit the particularities of a certain research topic

64Hera1362014ApplicationAfricaAHSI-RESKey element of the design: adoption of an interactive model of knowledge translation. A theory-driven approach was used by constructing post hoc results frameworks for 6 of the 10 research projects according to a generic theory of change model for the programme. Then participatory workshops were held with the research teams to test the frameworks against the reality of implementation. Data gathered by a range of methods including documentary review; interviews at programme level; project-level information – for six projects workshops (see next column), and for the remaining four a total of 12 interviews with 16 members of research teams; participant observation of end-of-programme workshop, at which the team also presented some preliminary findingsRelevance; research capacity building; policy impact; social and gender equity; sustainabilityThere was a range of methods and it did assess using the logic model of the programme; however, it was completed just before the end of the programme and could therefore identify only early impacts. Abstract:
  • Research teams who started the policy dialogue early and maintained it throughout the study, and teams that engaged with decision-makers at local level, district and national levels simultaneously were more successful in translating research results into policy action

Timing of evaluation raises interesting questions. Positive – were able to observe the final programme workshop and present preliminary findings. Negative – too early for some of the impact, but interactive approach of whole programme led to some policy impact during project
Found: a highly relevant structure that responded to the needs for research capacity building (but mainly only in institutions with little background in the field) and knowledge translation. Policy impact was created during the research process: 7/10 projects reported policy impact already. More progress in focusing research on pro-poor issues than on gender
  • . . . the uptake of generated evidence in national health policy or international policy guidelines. Because of the interactive nature of AHSI-RES significant results have already been achieved, but the policy dialogue is not yet complete and further uptake can be anticipated

65HEFCE332012MethodologicalUKREF: Medicine and Life SciencesREF Impact Assessment. Impact template is a description of the environment and activities in a higher education institution oriented to maximising impact
Impact case study is a four-page description of a research project/programme and ensuing impact, with references and corroborating sources
  • . . . benefits to one or more areas of the economy, society, culture, public policy and services, health, production, environment, international development or quality of life, whether locally, regionally, nationally or internationally

  • . . . manifested in a wide variety of ways including, but not limited to: the many types of beneficiary (individuals, organisations, communities, regions and other entities); impacts on products, processes, behaviours, policies, practices; and avoidance of harm or the waste of resources

At the time that this guidance was published, the approach was largely untested, though there had been a ‘dry run’. The development of the approach can be traced through from the RQF in Australia (Donovan128), through the HEFCE commissioned review (Grant et al.38) to these plans
66Home3712008ApplicationUKUK prospective diabetes study: type 2 diabetes mellitusDrew on aspects of the application of the Payback Framework that Home had co-authored (Hanney et al.370). Narrative review by expert diabetes ‘insider’Publications, guidelines, education material, changes in monitoring, treatment and health outcomesThe UK Prospective Diabetes Study resembled a programme in that it consisted of a group of clinical trials, epidemiological analyses and health modelling studies. Not a formal impact assessment but more of an insider’s review drawing on the author’s experience as a leading diabetes medical academic, and his involvement in a previous research impact assessment in the field
Found: 85 full papers, 78% in leading journals; cited in many guidelines, and not just in those relating to diabetes, but traces a complex picture of how citations of papers can get overtaken by citation of reviews that cite the papers: ‘considerable impact on available educational material’. Influenced monitoring and treatment:
  • By inference it must be responsible for a significant part of the improvement in health outcomes in people with Type 2 diabetes over the last decade

67Jagosh et al.1732012BothN/ACBPRCBPR. Systematic review using realist principlesIf delivered effectively, CBPR can:
  1. ensure culturally and logistically appropriate research
  2. enhance recruitment capacity
  3. generate professional capacity and competence in stakeholder group
  4. result in productive conflicts followed by useful negotiation
  5. increase the quality of outputs and outcomes over time
  6. increase the sustainability of project goals beyond funded time frames and during gaps in external funding
  7. create system changes and new unanticipated projects and activities
Strength: rigorous application of realist methodology. Weakness: findings pertain only to CBPR; relatively small sample of high-quality studies. Factors: extent to which research designs are culturally and logistically appropriate; extent of measures to develop capacity and capability in stakeholder groups; how and to what extent conflicts are managed; and extent to which mutual trust builds over time
68JISC3722013BothUKJisc Business and Community Engagement programme: partnership projects in a range of fields including healthThe central team provided projects with an evaluation framework designed to help projects ‘to target and identify the emerging impact of their project’. Various projects drew on existing frameworks and tools including the Research Contribution Framework,373 stakeholder analysis. This report details the learning from nine tripartite partnership projects set up to develop capacity in universities to embed impact analysis in research using the expertise of business and community engagement practitioners and information management specialists. Projects were ‘experiential and action learning approaches’Progress in the individual projects including: drawing on existing knowledge and frameworks about impact; developing and testing a model; delivering a system that is theoretically robust, practical to use and meets needs of stakeholdersNot health specific but some projects related to health, e.g. the Emphasising Research Impacts project at Newcastle University faculty of Medical Sciences. Of marginal relevance because it is not a standard research impact assessment: the programme consisted of projects developing the capacity to enhance and assess impact. However, this is another dimension of the increasing emphasis on impact assessment
Found: the overall accounts report challenges, but, ‘in spite of these challenges, the projects reported many positive outcomes’ – improved understanding of impact, improved evaluation strategies embedded within research groups, greater awareness of training needs, ‘improved focus on who to engage to maximise the potential of the impact of the research’
69Johnston et al.752006BothUSANational Institute of Neurological Disorders and Stroke: programme of clinical trials – all pre 2000 Phase III RCTsROI analysis. Health economic modelling used to estimate ROI from 28 RCTsNIH funding of Phase III trials was set against quasi-societal returns (aggregate treatment costs/savings and health gains measured in QALYs, valued based on USA GDP per capita) based on projected usage. Eight RCTs had adequate data to estimate usage/cost/effects. Twenty-eight trials with a total cost of US$335M were included. Six trials (21%) led to improvements in health. Four (14%) resulted in cost savings. An estimated 470,000 QALYs were gained in the 10 years since funding of the trials, at a cost of US$3.6B. The projected net benefit was US$15.2B. Yearly ROI 46%Innovative quantitative attempt to value benefits of a specific programme of research, net of the costs of delivery of new interventions. Incorporates standard health economic information. However, limited by incomplete data and reliance on published work for model inputs. Inherent problems with cost–utility analyses and imprecise estimates. Unclear about the quality of data on usage, in particular. Another example of economic value assessment showing large gains, and an important development in the field because of its application to the programme level
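As a rough, hedged illustration of the programme-level ROI arithmetic this entry describes (health gains monetised, treatment cost changes and research costs netted off, and the result annualised), the following sketch uses invented inputs and a simple annualisation rather than the study’s actual model or data.

```python
# Illustrative programme-level ROI sketch; inputs are invented, not the
# figures or model from Johnston et al.

def programme_roi(qalys_gained, value_per_qaly, extra_treatment_costs,
                  research_cost, years):
    """Return (net benefit, crude annualised ROI) for a programme of trials:
    health gains are monetised, treatment cost changes and the research
    cost are netted off, and the ROI is spread evenly over the period."""
    gross_benefit = qalys_gained * value_per_qaly
    net_benefit = gross_benefit - extra_treatment_costs - research_cost
    annualised_roi = net_benefit / research_cost / years
    return net_benefit, annualised_roi


if __name__ == "__main__":
    # Hypothetical programme: 10,000 QALYs valued at US$50,000 each over 10
    # years, US$100M in additional treatment costs, US$50M of research funding.
    nb, roi = programme_roi(10_000, 50_000, 100e6, 50e6, 10)
    print(f"Net benefit: ${nb:,.0f}; crude annualised ROI: {roi:.0%}")
```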
70Kagan et al.3742009MethodologicalGlobal research (analysis conducted in USA)NIH: global HIV clinical trials research programmeConstructed a new conceptual framework, through a participatory process, for an evaluation system for the National Institute of Allergy and Infectious Diseases HIV/AIDS clinical trials programmeDeveloped a concept map of success factors. The evaluation framework depicts a broad range of factors that affect the success of the clinical research networks. There is an average importance rating for each of the ideas (on a 1–5 scale) and an average importance rating of the statements within each cluster: biomedical objectives (highest average rating score: 4.11), scientific agenda setting (4.06), collaboration, communication and harmonisation (3.81), operations and management (3.80), Division of AIDS policies and procedures (3.96), resource utilisation (4.00), community involvement (4.05) and relevance to participantsStrengths: careful analysis of a pioneering and participatory process to develop a new conceptual framework mapping success factors
Limitations: the results of the development of the conceptual framework are context specific. The ideas generated, organised and prioritised by the stakeholders were specific to the Division of AIDS clinical networks, thus limiting the generalisability of the results
71Kalucy et al.652009ApplicationAustraliaPrimary health carePayback. Telephone interviews (n = 13) plus bibliometric analysis of publications and analysis of range of project documentsPayback categoriesPioneering application of the Payback Framework – tested in the context of plans for the introduction of RQF in Australia (see Donovan128). Some limitations:
  • Interviews were labour intensive
  • Incomplete data, e.g. some key players were unavailable for interview; early documentation was missing; researchers could generally provide evidence of dissemination but not of system-wide, indirect and economic impacts
  • Some questions were not understood by interviewees and did not provide useful data
Concluded:
  • Assessing impact of a substantial number of projects would be more feasible if the burden of response could be reduced by refining and streamlining the methods

Found: in testing the approach the Payback Framework was found to be robust and applicable. Advantage over bibliometric analysis – picked up many more publications especially in applied research where most outputs were not indexed on Web of Science. Four case studies consisted of one RCT, one action research study and two case note audits. The ‘logic model’ of the Payback Framework worked better for the RCT than for the action research study where there was input from ‘users’ at every stage in the research
72King et al.3752009MethodologicalCanadaHSC, community–campus partnershipsCIROP questionnaires. (See King et al.84 for application as part of wider study)Four domains, psychometrically independent: personal knowledge development; personal research skill development; organisational/group access to and use of information; community and organisational development (for research), i.e.:
  1. how much did this person feel they had gained in [research] knowledge?
  2. how much did this person feel they had gained in research skills?
  3. how much did the organisation this person worked for seek information from the researchers and use it to improve services, etc.?
  4. how much did the community/organisation improve in its capacity and capability to undertake research?
Appears psychometrically ‘internally valid’ (76% of variance was accounted for by the principal component analysis) but measures only a very small range of possible impacts, and only people’s perceptions of these. The aim was ‘to develop a generic survey measure of the influence of research partnerships on skills, decisions, and community capacity, in the eyes of target audience members’, i.e. it was intended to measure research-oriented outputs, not service-oriented ones. Hence very researcher-focused
73King et al.1882010ApplicationCanadaCommunity development for HSCImpact model for community–campus partnerships: see Currie et al.357 The impact model specifies:
  1. the functions of research partnerships (i.e. knowledge generation and sharing, research education/training)
  2. types of outputs corresponding to these functions (e.g. information products)
  3. indicators of the utilisation of these outputs (e.g. website use statistics)
  4. mid-term impacts (i.e. impacts on knowledge, research skills, actual application of ideas, findings, and materials)
  5. long-term impacts (e.g. enhanced quality of life, consumer satisfaction) – although this study did not look at point 5
The study analysed community partnerships’ structure, process and outcomes. Structure was measured by number and types of partner, local/national orientation, the ratio of university to community staff, and grant income. Process was measured by indicators of research utilisation. Outcome was measured using the CIROP scale (see King et al.375). CIROP items were used as dependent variables in a regression analysis. Main findings: mean impact scores indicate that research partnerships have small to moderate impacts on community and organisational development, and personal research skill development, but moderate to fairly great levels of impact on personal knowledge developmentInnovative because it applies an ambitious impact model (Currie et al.357) and outcome measure scale (King et al.375), but convenience sample. There could be questions about how far the instruments have captured the key dimensions of the partnerships – very quantitative and tick-list focused. The finding that ‘personal knowledge development’ increased more than other dependent variables may be an artefact of the study design (asking personal recall of what happened)
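A hypothetical sketch of the kind of regression described in this entry is given below: a CIROP-style domain score is treated as the dependent variable and partnership structure/process indicators as predictors. All variable names and data are invented for illustration and are not taken from the study.

```python
# Hypothetical sketch of a regression with a CIROP-style domain score as the
# dependent variable and partnership structure/process indicators as
# predictors. Data and variable names are invented for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_partnerships = 40

# Illustrative predictors for each partnership.
X = np.column_stack([
    rng.integers(2, 12, n_partnerships),     # number of partners
    rng.uniform(0.2, 3.0, n_partnerships),   # university-to-community staff ratio
    rng.uniform(0.1, 5.0, n_partnerships),   # grant income (arbitrary units)
    rng.uniform(1.0, 5.0, n_partnerships),   # research utilisation score
])

# Simulated outcome: a 'personal knowledge development' score on a 1-7 scale,
# driven mainly by the research utilisation score plus noise.
y = np.clip(2.5 + 0.6 * X[:, 3] + rng.normal(0, 0.5, n_partnerships), 1, 7)

model = LinearRegression().fit(X, y)
print("Coefficients:", np.round(model.coef_, 2))
print("R^2:", round(model.score(X, y), 2))
```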
74Kingwell et al.1392006ApplicationAustraliaNHMRC: projects completed in 1992, 1997 and 2003No framework explicitly described, but used the new end-of-grant form developed by the NHMRC to capture impacts. Expert panel review of end-of-grant reports from researchers completing in 2003: 139 reports out of 454 expected (29%); retrospective surveys of investigators of earlier projects using a simplified version of the new end-of-grant report as the survey instrument: 1997 – 131/259 in contactable sample (51%); 1992 – response too low to use for full analysis but examples of impact identified. Separately, a computer-assisted telephone interview survey of recipients of people awards was conducted: 596 of 1897 (31%) completed the surveyKnowledge gain; health gain: any research with a self-reported effect on clinical practice or other health service delivery practice or outcomes, changes in public health practice, or changes in health policy. Wealth gain: any research with self-reported commercial activity, including commercial potential and patents. People award recipients: career prospectsHelped identify ways to take impact assessment forward, e.g. on issues of timing, but generally quite low response rates. Also highlighted some projects with clinically relevant outcomes for showcasing to the community
Findings: papers per grant for 1997 ranged from basic research (7.0) to health services (3.0). For 2003, basic (7.5) and health services (4.3). For 1997, 24% of grants were deemed to have affected clinical practice, 14% public health practice and 9% health policy. Commercial potential: 41% of grants were deemed to have such potential. Patents arose from 20%. 89% of people award recipients thought career prospects improved (but there are barriers)
75Kogan and Henkel491983BothUKDepartment of Health and Social Security-funded programmes: wide rangeDeveloped a collaborative approach informed by Weiss’s taxonomy of research utilisation, Caplan et al.376 etc. Case study of the new ‘Rothschild’ approach to organising and funding of Department of Health and Social Security research: including ethnography, participant observation, document analysisEvaluation of the model introduced by Rothschild in which government departments had a chief scientific officer charged with commissioning research from scientists. Assessment of how far the department could successfully commission research to meet the needs of policy-makersRigorous and extensive ethnography focused more on the process of government commissioning of research/policy-making than on the impacts of specific programmes. Now somewhat dated, but likely that key principles are still transferable. Science and government are very different cultural ‘worlds’, but also mutually shaping and interdependent. The key to success is sustained interaction over time. Linear models fail to do justice to the sheer complexity of both research and government. The policy timescale fits poorly to the research cycle
76Kok and Schuit1162012MethodologicalN/AHealth researchContribution Mapping. A three-phase process map that includes actors, activities and alignment efforts during research formulation (vision, aims, set-up), research production and ‘knowledge extension’ (dissemination and utilisation by both linked and unlinked actors). The contribution map is produced in a four-stage process beginning with an in-depth interview with the lead researcher, covering the three phases above and showing the contribution of the research to the realignment of actors and activities‘Impact’ is conceptualised as a new alignment of people, ideas, theories and artefacts. Knowledge may be formalised in journal articles but may also be captured more diffusely as ‘the knowledge available to a group of people’ or inscribed in technologies, e.g. magnetic resonance imagingElegant and richly theorised framework based on actor–network theory; however, no empirical application described in this paper (but contributed to thinking in Bennett et al.57). It challenges the view that impact can be attributed to a single research project:
  • Research does not work like a cannon shooting knowledge into the world of action, where the targeting and force of the knowledge determines its ‘impact’.

    Instead, the productivity of research for health ultimately lies with the users who have to pick up and combine knowledges (in the plural), using them for their own purposes . . .

    The consequence is that achieved changes cannot realistically be attributed to a single research project as ‘source’

77Kryl et al.3772012BothUKNICE guidelines: dementia, chronic obstructive pulmonary diseaseAnalysis of the papers cited in guidelines to check for the origin/funder of the researchIdentified whether papers had an author from a UK institution. Checked funder acknowledgement and categorised by type/organisationIt argues that it is possible to track the source of funding, but improved accessibility of data and reporting of funding are needed
  • We found that there is great potential for national and international guidelines to be used as sources of information to help further our understanding on the impact of research on practice: the challenge is to be able to harness that information in an efficient way

Found: over one-third of papers had at least one UK-based author. In over 40% of cited papers, no funding acknowledgement was found. The MRC, Department of Health/NHS and the Wellcome Trust ‘were overtly linked to only a small proportion of papers cited in the guidelines’
78Kuruvilla et al.1232006MethodologicalUKHealth researchRIF. Policy impact assessment element was informed by Weiss’s taxonomy. Semistructured interview and document analysis leading to one-page ‘researcher narrative’, which was sent to the researcher for validationFour broad areas of impact:
  1. research-related impacts
  2. policy impacts
  3. service impacts: health and intersectoral
  4. societal impacts
Each has a range of subcategories (see Chapter 3). Application described in Kuruvilla et al.97
Pragmatic, carefully tested and richly informed by an extensive literature review. Designed to help researchers develop their narratives of impact. Inclusive and imaginative, and has strong face validity. The way the RIF is intended to be used is as follows:
  • designed as a ‘DIY’ approach with descriptive categories that prompt researchers to systematically think through and describe the impact of their work.

    While initially sceptical, LSHTM researchers found [it] prompted them to identify a wide range of impacts related to their work in a relatively systematic manner (compared to the ad hoc approaches they had previously used)

79Kuruvilla et al.972007ApplicationUKHealth researchRIF. (See Kuruvilla et al.123) Case studies: 11 projects in total, selected for maximum variety. Semistructured interview and document analysis leading to one-page ‘researcher narrative’ which was sent to the researcher for validationSee companion paper above123
Prior relationships with policy-makers and reputation in the field led to invitations to bid for funding.
Research networks and collaborations were key in helping communication
Communication with other academics was straightforward but communication with policy-makers was challenging. Media and funders’ websites/reports were important channels. Policy impact occurred through different mechanisms, theorised using Weiss’s 1998 taxonomy.343 Instrumental use (research findings drive policy-making); mobilisation of support (research provides support for policy proposals); conceptual use; redefining/wider influence. The structured impact narratives facilitated analysis across projects
Describes the successful application of the above framework. The framework helped develop researcher impact narratives which were mostly found to be objectively verifiable ‘and facilitated comparisons across projects, highlighting issues for research management and assessment,’ but some ‘putative impacts were not as easily verifiable within the scope of this study, for example social capital or economic impact’. It was useful to help researchers ‘think through and describe the impact of their work across a range of instances when they are asked to account for these: in writing grant proposals, in research assessment exercises and in contributing to complex research, policy and service interventions aimed at improving health and promoting societal development’. Despite many strengths, it was not specifically designed for assessing the impact of programmes of funded research
80Kwan et al.662007ApplicationHong KongHealth and HSR fundPayback Framework. Adapted Payback survey sent to 205 PIs of completed projects: 178 (87%) responded. Statistical analysis including multivariate analysisKnowledge production; research targeting and capacity building; informing policy- and decision-making; application of the findings through changed behaviour; health and health services benefitRigorous adaptation and application of existing framework plus detailed statistical analysis. High response rate and some real examples very briefly described; however, it relied solely on self-reports by PIs. Multivariate analysis found that investigators’ participation in policy committees as a result of the research, and liaison with potential users before and during research, were significantly associated with health service benefit, policy and decision-making, and change in behaviour (an illustrative sketch of this type of analysis follows the findings below)
Found: 5.4 publications per project
  • career advancement – 34%
  • higher qualifications – 38%
  • use in policy-making – 35%
  • changed behaviour – 49%
  • health service benefit – 42%
  • subsequent research – 45%
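The sketch below illustrates, with invented data, the type of multivariate analysis reported in this entry: a binary self-reported outcome (e.g. health service benefit) regressed on engagement variables such as liaison with users and policy committee participation. It is not Kwan et al.’s model; all names and values are assumptions.

```python
# Invented-data sketch of the type of multivariate analysis reported above:
# a binary self-reported outcome regressed on engagement variables. This is
# not Kwan et al.'s model; names and values are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_projects = 178  # matches the number of survey responses reported above

liaison_with_users = rng.integers(0, 2, n_projects)   # liaised before/during research
policy_committee = rng.integers(0, 2, n_projects)     # PI joined a policy committee

# Simulated outcome: probability of a reported health service benefit rises
# with both engagement variables.
logit = -1.0 + 1.2 * liaison_with_users + 0.8 * policy_committee
benefit_reported = rng.random(n_projects) < 1 / (1 + np.exp(-logit))

X = np.column_stack([liaison_with_users, policy_committee])
model = LogisticRegression().fit(X, benefit_reported)
print("Odds ratios (liaison, committee):", np.round(np.exp(model.coef_[0]), 2))
```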
81Latour3782005MethodologicalN/ACritical social scienceActor–network theory. Case studyAn actor–network consists of both people and technologies (or artefacts); they are inherently unstable. ‘Impact’ is conceptualised as achieving a new alignment of the actor-network through a process called ‘translation’Strength: novel, imaginative and potentially useful framework for considering multistakeholder research networks. Weakness: the claim that ‘objects have agency’ is widely contested; exclusive focus on ‘network effects’ means that human agency is undertheorised
82Laws et al.882013ApplicationAustraliaSchools Physical Activity and Nutrition SurveyBanzi’s research impact model. Used the framework proposed by Banzi et al.,4 which has a range of potential areas of impact ‘which largely reflect the range of other commonly used models, for example, the payback framework’. Semistructured interviews with PIs (n = 3) and users (n = 9) of Schools Physical Activity and Nutrition Survey data; bibliometric analysis; verification using documentary evidence. Triangulation of data to produce case studiesFive categories: advancing knowledge; capacity building; policy impacts; practice impacts; broader impactsCombined several methods and triangulated data to produce detailed analysis including illustrative quotes, but noted that there may have been some social desirability response bias. Discusses difficulty of attributing impacts to a single piece of research, particularly the longer-term societal, health and economic impacts. ‘The use of “contribution mapping” as proposed by Kok and colleagues may provide an alternative way forward.’ Factors: perceived credibility of survey findings; active dissemination; contextual factors, including continuity and partnerships between researchers and end-users, mechanisms and structures in place to implement recommendations; good fit with organisational culture
Found: each of the three surveys was reported in multiple peer-reviewed articles (32 in total); two PhDs and two post-doc positions; broad agenda-setting for policy and some examples of underpinning new policies; informed programme planning; more difficult to identify broader health, economic or societal impacts
83Lewis et al.1132009BothCanadaManitoba Centre for Health PolicyROI analysis. Stakeholder interviews; bibliometrics/altmetrics; cost analysisPolicy and culture of decision-making; financial; health status; public confidence; capacity building. The impact of Manitoba Centre for Health Policy demonstrated in ‘numerous ways, including reputation, research revenues and productivity, varying influence on policy and system management, and a major cultural and intellectual influence on the Manitoba environment’. Quantifiable ROI was 200%Relatively explicit set of criteria/framework from which to evaluate. Problems of attribution. Described as ROI, akin to Payback Framework in many respects
84Liebow et al.912009ApplicationUSANIEHS: Extramural Asthma Research ProgrammeNIEHS logic model (see Engel-Cox et al.63). The logic model was tailored to inputs, outputs and outcomes of the NIEHS asthma portfolio. Data from existing NIH databases were used and in some cases matched with public data from, for example, the US Food and Drug Administration website (for the references in new drug applications), plus available bibliometric data and structured review of expert opinion stated in legislative hearingsPublications, clinical policy and application of findings, community interventions, environmental policy and practice, health outcomes, and technology developmentsBased on key aspects of the framework specifically developed previously for the research funder. The aim of obtaining readily accessible, consistently organised indicator data could not in general be realised:
  • Beyond publications, indicators of other activities, outputs, and outcomes are not as well supported

Did not use all the pathways set out in the original Engel-Cox framework. Highlights that other activities beyond current databases would be needed to gather the data
Found: PI publications ranged from 0 to > 100; 2057 publications attributable to 30 years’ funding; PI membership of various advisory panels, etc.; four patents; matching of databases identified NIEHS-funded trials cited in new drug applications, but it was not possible to link trends in environmental impacts or health and social impacts to specific research activity
85Lomas1542007MethodologicalN/AHSRKnowledge brokering; linkage and exchangeHow the link between researchers and policy-makers works – and why it so often fails to workStrength: clear and authoritative summary of a component of the literature. Weakness: ‘non-systematic’ review, hence omits other perspectives
  • Researchers and policy-makers come from different worlds and do not understand each other. There is much evidence that effective knowledge exchange occurs through sustained linkage and dialogue

86Longmore672014ApplicationUKAddenbrooke’s Charitable Trust: fellowship schemePayback Framework informed the study. Developed a fellowship review form based on items from sources such as researchfish and the Payback Framework. Used the form to gather data from fellows through face-to-face interviews or via e-mails (14/18 fellows participated)Career development; publications and dissemination; leveraged funding; capacity building; patient or health-care benefits; intellectual property/research tools/spinoutsThe fellowship scheme was a small-scale (support for 1 year or less) attempt to nurture future clinical academics. Total value of 18 awards < £1M. The way the evaluation of such a small-scale scheme was conducted reflects the growing interest in impact assessment
Found: most secured external fellowships leading to PhDs; research projects have ‘potential to inform future research that may ultimately deliver benefits for patients’. Some fellows reported the research improved aspects of how they approached patients
87Martin and Tang3792007BothUKRange of UK publicly funded research, e.g. the social benefit of preventative dental careUpdates the earlier Science Policy Research Unit framework for assessing the channels that might lead to economic and social benefits from public funding of basic research. Channels are: useful knowledge; skilled graduates and researchers; new instruments and methodologies; enhanced problem-solving capacity; new firms; provision of social knowledge. Desk analysis. Examined a series of case studies to identify the elements of the framework that might apply in each oneKey focus is on the channels, but some channels are items sometimes included in lists of impactsImportant analysis, informed by a review of case studies, of the exploitation channels to consider when assessing the benefits from publicly funded research. Shows the linear model has shortcomings, and highlights how some impacts will take a long time to move through the various channels and that incorrect science policy options might be adopted unless the long-term impacts are taken into account
Found: the social benefit of preventative dental care would be improved oral health care and avoidance of fillings – identified two exploitation channels for this research: spin-off; scientific instrumentation. Despite problems facing research impact assessment:
  • a growing body of empirical work demonstrates that those benefits are substantial

88Martin1742010MethodologicalUKSocial researchCo-production (focus on practitioners)Practitioner involvement in research. Involvement may be nominal (to confer legitimacy on a project), instrumental (to improve delivery), representative (to avoid creating dependency) or transformative (to enable people to influence their own destiny). The greater the involvement, the greater the potential impact in HSRAn elegant and simple tool for assessing (semiquantitatively) the level of lay involvement in a study. Weakness: not tested
89McCarthy3802012ApplicationEUEU: Public Health Innovation and Research in EuropeNone stated. Through the European Public Health Association, experts assessed the uptake of the eight public health collaborative projects (within areas of health promotion, health threats and health services), for 30 European countries. National public health associations reviewed the reports. Methods varied between countries. Following stakeholder workshops, or internal and external consultations, 11 national reports discussed impacts of the public health innovationsFocus on uptake of the innovations: impact on policy and practiceQuite wide-ranging input into the study: in total, 111 stakeholders were involved in workshops. Reports only produced for 11 countries. Methods varied: in Ireland, one person provided information. Strategies noted most often to spread the results were: reports; websites and national conferences; and seminars and lectures. Background: European Court of Auditors critical of the first EU Public Health Programme – project co-ordinators could not demonstrate ‘take up’ by target groups
Found: in 11 countries, there were reports on the eight innovations for 45 (51%) of the possible public health markets. The innovations contributed positively to policy, practice and research, across different levels and in different ways, in 35 (39%) markets, while competing innovation activities were recorded in 10 (11%) markets
90McLean et al.3812012MethodologicalCanadaCIHR: knowledge translation funding programmesA framework inspired by the principles of integrated knowledge translation. Develops a logic model for evaluating the knowledge translation funding programme. The paper is a protocol but sets out planned methods:
  • study will employ a novel method of participatory, utilisation-focused evaluation

Mixed methods using quantitative and qualitative data with participation from researchers, knowledge users, knowledge translation experts, other health research funding organisations. Environment scan, documentary review, interviews, targeted surveys, case studies and an expert review panel
Immediate outcomes:
  • (knowledge user and researcher partnerships established; knowledge generated; relevant research results are disseminated and/or applied by partners and knowledge users; advancement of knowledge translation science)

    Intermediate outcomes (knowledge users and researchers learn from each other; knowledge users are informed by relevant research; generalisable knowledge is created and disseminated)

Long-term outcomes (improved health, more effective health services and products and a strengthened health-care system)
The logic model used includes a focus on the processes as intermediate outcomes
91Meagher et al.952008BothUKESRC: all responsive mode funded projects in psychology
  • Flows of knowledge, expertise and influence

Survey of chief investigators, survey of heads of department, two focus groups, 20 semistructured interviews, media-related searches, case studies. The team reported:
  • no evident contradictions between results obtained by different methods. However, the level of detail provided did vary; unsurprisingly, some methods, such as the survey, enabled us to achieve greater breadth of data while others, such as the case study interviews, enabled us to probe issues in more depth

Collected data on six domains:
  1. primary knowledge producers (who were they)
  2. knowledge users, beneficiaries, brokers and intermediaries (who were they, how were they involved)
  3. impacts (outcomes)
  4. research impact processes (what led to the outcomes)
  5. lessons learned and recommendations
  6. methods for identifying and assessing non-academic research impacts
Thorough methods, but noted as a limitation that the exclusive focus on responsive mode projects:
  • meant there was no special research orientation towards users or ‘relevant themes’

Authors question the value of tracking impacts in the absence of specific activities aimed at facilitating uptake
  • Changes in practice or policy often appear to stem from a general ‘awareness-raising’ or conceptual shift. Precise measurement of the impact of research upon a particular change in practice is likely to be unattainable

Found: conceptual (indirect, enlightenment-based) impacts were more common than instrumental (direct, knowledge-driven) ones
92MRC762013BothUKMRC: all programmesThe MRC economic impact report ‘is part of the research council’s performance management framework implemented by the Department for Business, Innovation and Skills (BIS)’. Combination of information from the MRC’s own databases and data gathered from PIs through researchfishThe BIS metrics framework 2011/12 includes range of items: budget allocation and funding leveraged from other sources; expenditure on different types of research; number of researchers funded in various categories; publications; research training; knowledge exchange activities; commercialisation: patents, spin-off, IP income; policy influenceBroad picture of activity and some key impacts by drawing on extensive data gathering through researchfish, etc. Questions about some of the data categories, e.g. classifying systematic reviews as a policy document, and questions of definition of economic impact. The MRC Economic Impact Report has been published each year since 2005. It illustrates several themes:
  1. the increasing number of annual reports on impact
  2. the impact reporting for medical research being part of a national research-wide initiative
  3. very broad definition of what counted as ‘economic impact’
  4. highlights the issue of reporting on the number of posts (of various kinds) supported by the funding as part of the list of metrics included in the impact report
Found: data collected for all items, including:
  • 2,879 reports of policy influences between 2006 and 2012 . . . in 1,083 awards

A total of 2267 were influences on policy setting processes, e.g. participation in an advisory committee; ‘610 reports of value/policy changes induced through citation in key policy documents between 2006 and 2012’, including clinical guidelines, systematic reviews. Some examples described
93MRC1032013ApplicationUKMRC: all programmesResearchfish
  1. Publications
  2. Collaborations
  3. Further funding
  4. Next destination (career progression)
  5. Engagement activities
  6. Influence on policy
  7. Research material
  8. Intellectual property
  9. Development of products or interventions
  10. Impacts on the private sector
  11. Awards and recognition
Lots of impact, e.g. publications – 83% published: 40,000 in total. Normalised citation impact twice world average. See above76 also for policy impacts, etc.
Can be implemented regularly and collects data from a wide range of researchers, but possibly neglects some areas of impact. There are also questions about how fully it is completed
94Meijer802012MethodologicalThe NetherlandsDutch research but EU perspective: all fieldsScientific and societal quality of research. Combines three approaches:
  1. logical framework analysis
  2. science communication
  3. productive interactions, plus interactions within the research community
  • The method consisted of a process to get a societal relevance score per research department based on its (research) outreach to relevant societal stakeholders. These quantitative scores were then compared to standardised scientific quality scores (CWTS indicator) based on scientific publications and citations of peer-reviewed articles

Creating societal relevance is a four-step process:
  1. defining a societal mission and objectives of a research group
  2. defining stakeholders and activities/interactions
  3. measuring societal relevance
  4. reflection on findings
Long list of ‘possible indicators’ of impacts of research in private sector, public professional sector and general public
Methodologically innovative but not published in peer-reviewed journal and speculative. Seems tied in with a wider Dutch-led effort to measure effect of research funding in context of competitiveness of EU countries. See also Mostert et al.100
95Meyer1522009MethodologicalCanada: (part of appendix A of CAHS report)Clinical researchStarting point is the Payback Framework, as amended in CIHR versionsProvides details about health and economic gains being the most important of the Payback Framework categories for assessing the impact of clinical research; working towards showing them is important but more difficultPart of the evidence base that encouraged the CAHS panel to adopt an adapted version of Payback Framework. It contributes to the discussion about assessing negative impacts
96Milat et al.892013ApplicationAustraliaNew South Wales Health Promotion Demonstration Research Grants SchemeBanzi’s research impact model. This draws on the range of five potential areas of impact set out in the Payback Framework (see also Laws et al.88 above for a parallel study). Semistructured interviews with CIs (n = 17) and end-users (n = 29) of the 15 projects. Thematic coding of interview data and triangulation with other data sources to produce case studies for each project. Case studies individually assessed against four impact criteria and discussed to reach group assessment consensus at a verification panel meeting where key influences on research impact were also identifiedAdvancing knowledge and better targeting of future research; capacity building; informing policies and product development; health, societal and economic impactsDetailed multimethod case study analysis of all (n = 15) projects in the programme, including a range of elements in the various payback categories. An independent panel conducted scoring. Illustrative quotes were supplied. Some potential for social response bias as some end-users may have been inclined to over-inflate positive impacts. The team identified a range of factors linked to high-impact projects. These included:
  • the nature and quality of the intervention itself . . . high quality research, champions who advocated for adoption, and active dissemination strategies. Our findings also highlight the need for strong partnerships between researchers and policy makers/practitioners to increase ownership over the findings and commitment to action

Found: both CIs and end-users indicated capacity building occurred through staff development, partnership building and follow-on research funding; 13/15 projects scored above the minimum for impact on policy and practice combined, and 10/15 were in the moderate or high categories; no project was independently assessed as high impact in the health, societal and economic impacts category, but 13/15 were above the minimum
97Milat et al.1292015MethodologicalN/AAll health, but started with focus on public health and health promotionReview:
  • A total of 16 different impact assessment models were identified, with the ‘payback model’ being the most frequently used conceptual framework

The most frequently applied methods: publications and citations analysis, interviews with PIs, peer assessment, case studies and documentary analysis. Only four of the included studies interviewed non-academic end-users of research
The review notes the growth of hybrids of previous conceptual frameworks that categorise impacts and benefits in many dimensions and try to integrate them. Of the main frameworks analysed, ‘all attempted to quantify a mix of more proximal research and policy and practice impacts, as well as more distal societal and economic benefits of research’While the review identified just 31 primary studies and one systematic review that met their review criteria, ‘88% of studies that met the review criteria were published since 2006’
The attempts to broaden evaluation of research ‘raise an important question of how to construct an impact assessment process that can assess multidimensional impacts while being feasible to implement on a system level’. The potential for bias because of the involvement of PIs in impact assessments means end-users should routinely be interviewed in impact assessments and claims should be verified by documentary evidence
98Moreira3822013ApplicationN/AHealth careCo-production. Case study with extensive ethnography of health service researchers and systematic reviewersIntersectoral interaction between university, society and marketStrength: rigorous application of the ‘mode 1 vs. mode 2’ taxonomy to a detailed case study. ‘Impact’ occurs through the coevolution of three activities: market-driven reforms oriented to efficiency (‘market’), epidemiologically driven research oriented to clinical effectiveness (‘laboratory’) and patient and public involvement (‘forum’). This process is messy, organic, largely unpredictable and contested
99Morlacchi and Nelson3832011BothUSA-focused international studyNational Heart Lung and Blood Institute stream of research on the LVADPropose medical practice evolves as a result of progress along three pathways: ‘improvements in the ability to develop effective medical technologies, learning in medical practice, and advances in biomedical scientific understanding of disease’. Longitudinal and contextual case study of the development of the LVAD using interviews with key actors, direct observation and documentary analysis to produce an historical analysisStudy analyses sources of advances in medical practice, and challenges the idea that scientific understanding of disease is the single sourceStudy has a different focus than most others: it attempts to show the impact on advances in medical practice made by three pathways, of which scientific understanding is only one
Found: case study of the emergence of the LVAD therapy showed the importance of progress along all three pathways, though an essential aspect was the collective and cumulative learning that requires experience that can only be gained through the use of the LVAD
100Mostert et al.1002010BothThe NetherlandsLeiden University Medical Centre: various departments/groups including public healthSocietal quality score. Van Ark and Klasen’s125 theory of communication in which audiences are segmented into different target groups needing different approaches. Scientific quality depends on communication with the academic sector; societal quality depends on communication with groups in society – specifically, three groups: lay public, health-care professionals and private sector
  • Step 1: list the indicators; count how many of each indicator occurred in each group
  • Step 2: allocate weightings to each indicator (e.g. a television appearance is worth x, a paper is worth y)
  • Step 3: multiply the counts from step 1 by the weights from step 2 to give a ‘societal quality’ value for each indicator
  • Step 4: use the average of all the indicators in a group to get the total societal quality score for each department (a brief numerical sketch of these steps follows this entry)
Three types of communication: knowledge production, e.g. papers, briefings, radio/television, services, products; knowledge exchange, e.g. running courses, giving lectures, participating in guideline development, responding to invitations to advise or give invited lectures (these can be divided into ‘sender to receiver’, ‘mutual exchange’ and ‘receiver to sender’); and knowledge use, e.g. citation of papers, purchase of products; plus earning capacity, i.e. the ability of the research group to attract external fundingCareful development of a new approach to assessing research impact appropriate for the specific circumstances of the medical faculties being integrated with their academic hospitals. Heavily quantitative – basically a counting exercise ‘how many of x have you done?’ Only looks at process because, they say, ultimate societal quality takes a long time to happen and is hard to attribute to a single research group. Did not control for the size of the group. Only a weak correlation was found between societal and scientific quality
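A minimal numerical sketch of the counting-and-weighting steps described for this entry; the indicator names, counts and weights below are purely hypothetical and are not taken from Mostert et al.100

    # Hypothetical sketch of a Mostert et al.-style societal quality score:
    # count each indicator, weight it, then average across the indicators in a group.

    # Step 1: counts of each indicator for one department (invented values)
    counts = {"television_appearances": 3, "professional_briefings": 10, "courses_run": 4}

    # Step 2: weighting allocated to each indicator (invented values)
    weights = {"television_appearances": 5.0, "professional_briefings": 2.0, "courses_run": 3.0}

    # Step 3: weighted value ('societal quality') for each indicator
    weighted = {name: counts[name] * weights[name] for name in counts}

    # Step 4: average across the indicators in the group = total societal quality score
    societal_quality_score = sum(weighted.values()) / len(weighted)

    print(weighted)                # {'television_appearances': 15.0, 'professional_briefings': 20.0, 'courses_run': 12.0}
    print(societal_quality_score)  # approximately 15.67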
101Muir et al.1072005MethodologicalAustraliaAll public researchNone explicitly stated: focus on ‘measurements/indicators to monitor economic benefit flowing from commercialisation of research funded by the public sector’Metrics for commercialisation of public research need to be broadened to match understanding that commercialisation contributes ‘to Australia’s economic, social and environmental well-being. This is achieved through developing intellectual property, ideas, know-how and research-based skills resulting in new and improved products, services and business processes transferable to the private sector.’ Fourteen metrics covering: IP, consultancies and contracts, skills development and transfer. They form the basis of future data collectionCommissioned by the Australian government to inform policy on impact assessment in an area that has been important in Australia
102Murphy772012BothNew ZealandNew Zealand publicly sponsored clinical trialsMultistrand mixed methods. Survey; cost–benefit analysisHealth outcomes; stakeholder perceptions (perceived value); economic outcomes (CBA) with QALYs as measurement of health benefit. QALYs valued using societal valuation of statistical life and health-care system opportunity cost (based on average Pharmaceutical Management Agency positive recommendation). Suggestion that benefit outweighs costs for all stakeholdersTriangulation methods. Use of well-validated QALYs in economic modelling. But only applied to two trials as a PhD study, and limited to assessing the benefits that accrued for patients in the trial
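As a hedged illustration of the cost–benefit logic summarised for this entry (not Murphy’s77 actual model, valuations or results), the benefit side can be expressed as QALYs gained multiplied by a monetary value per QALY and set against trial costs; all figures below are invented.

    # Hypothetical cost-benefit sketch: monetise QALYs gained and compare with trial costs.
    # Figures are invented for illustration and do not come from the New Zealand study.

    qalys_gained = 120.0        # QALYs attributed to the trial's findings (invented)
    value_per_qaly = 45_000.0   # societal monetary valuation of one QALY (invented)
    trial_cost = 2_500_000.0    # cost of running the trial (invented)

    monetised_benefit = qalys_gained * value_per_qaly      # 5,400,000
    net_benefit = monetised_benefit - trial_cost           # 2,900,000
    benefit_cost_ratio = monetised_benefit / trial_cost    # 2.16

    print(net_benefit, round(benefit_cost_ratio, 2))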
103Nason et al.472011ApplicationIrelandHealth Research Board: all health fieldsPayback Framework
  • Part 1 (context): consultation/interviews with eight ‘key’ informants from stakeholders
  • Part 2: case study selection – eight high-impact studies
  • Part 3: case studies built around the Payback Framework
Applied to a wide range of biomedical and health research, and focused on examples from the full portfolio of the main national funder of health research
  1. Knowledge production
  2. Research targeting and capacity building
  3. Informing policy and product development
  4. Health and health sector benefits
  5. Broad social and economic benefits
Several of the case studies were expanded to consider whole streams of research to more comprehensively identify wider benefits. However, difficult to compare across case studies, because of the limited number owing to the resource-intensive nature of exercise. Article expands concept of economic impact to consider: major growth in funding and focus on ‘knowledge economy’ strategy, enabling international sources of funding to be attracted
Found: a range of impacts in all categories including: world-class articles; new clinical assays; improved recovery time; development of new drug company
104NHS SDO682006ApplicationUKNIHR: SDO programmeInformed by the Payback Framework. Purposive selection of 23 projects to reflect the range of SDO research and where some evidence of impact was known to be available. Data collected in two stages, starting with primary outputs (publications); snowballing to capture data on secondary outputs (policy and practice impact). Internal sources used first: annual reports, projects database, programme managers and leads. External databases checked and all non-case study PIs e-mailed to provide data. Secondary outputs (policy documents, etc.) identified using web searches and sent to PI for verification. Eleven of the 23 projects purposively selected as case studies, including semistructured interviews with researchers and usersTwo main categories taken from the Payback Framework: primary outputs (publications: SDO reports, academic papers, policy); and secondary outputs (citing in policy documents, practice guidance, newsletters, website and the media). In the 11 case studies, the impact and utilisation of the research is presented in five domains: services, policy, practice, research and capacity buildingThe range of methods used did identify a range of impacts, and the accounts of the case studies provide good examples. However, 2006 was quite early to conduct an impact assessment of first 5 years of programme from 2001 to 2006
Found: publications – all had at least one published by SDO and average of 1.7 articles; policy and practice guidelines: 12 projects cited in total of 24 documents, e.g. five citations in NICE guidance
105Niederkrotenthaler et al.3842011MethodologicalAustriaAll fieldsSocietal impact factor tool developed to consider the effect of a publication on a wide set of non-scientific areas, and also the motivation behind the publication, and efforts by the authors to translate their findings. Self-evaluation of papers by authors: in three main categories they score their paper, and provide textual justification/evidence for the score (see dimensions assessed). The self-evaluation sheet would then be sent to a reviewer for independent checking. Authors would be invited to submit their publications for reassessment if there were any developmentsIt was intended that the tool would be refined, but the version tested had three main elements: the aim of the publication (1 point if aim was gain of knowledge, application of knowledge or increase in awareness); the authors’ efforts to translate their research into social action (1 point); size of translation (geographical area: 1, 2 or 3 points; status: 1 or 2 points; target group: 1 point for individuals, 2 points for subgroups or 3 points for whole population)Niederkrotenthaler et al.384 claim that an advantage of their tool over that developed, for example by Sarli et al.,118 is that the tool does not specify the precise nature of any kind of translation (e.g. devices, guidelines) but leaves that to the author applying for a score to describe. But, as they admit, the tool ‘cannot be considered ready for routine implementation’. Because it aims to develop the equivalent to the scientific impact factor, the focus is at the publication level, which might seem even more limiting than a focus on projects
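A rough sketch of the point scheme as described for this entry; the function, argument names and category labels are a simplification for illustration and are not the authors’ instrument.

    # Hypothetical scoring function following the described point scheme:
    # 1 point for a qualifying aim, 1 point for translation efforts, plus the
    # three 'size of translation' sub-scores (geographical area, status, target group).

    def societal_impact_score(aim_qualifies: bool,
                              translation_effort: bool,
                              geographical_area: int,     # 1, 2 or 3 points
                              status: int,                # 1 or 2 points
                              target_group: int) -> int:  # 1 individuals, 2 subgroups, 3 whole population
        score = 0
        if aim_qualifies:       # aim was gain/application of knowledge or increase in awareness
            score += 1
        if translation_effort:  # authors made efforts to translate findings into social action
            score += 1
        return score + geographical_area + status + target_group

    # e.g. a qualifying aim + translation efforts + sub-scores of 2, 2 and 2 gives 1 + 1 + 2 + 2 + 2 = 8
    print(societal_impact_score(True, True, geographical_area=2, status=2, target_group=2))  # 8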
106Oliver and Singer3852006ApplicationUSACHBRP: HSRInformed by framework for analysis of policy design and political feasibility based on the typologies of Wilson386,387 and Arnold.388 Sixteen interviews with 20 key informants and documentary analysisImpact of HSR on legislative debates and decisionsTheoretically informed analysis of the role of research identified various examples of use in Californian health policy-making; however, few details were given on the precise methods used. It uses a range of models from political science to help analyse the theoretical and actual role for a university-based body specifically designed to feed research and analysis into the legislative process around health insurance mandates
Found:
  • participants inside and outside of state government have used the [CHBRP] . . . reports as both guidance in policy design and as political ammunition . . . Almost every respondent noted that CHBRP reports, however authoritative, served as ‘political ammunition’ in the manner described by Weiss (1989)

107Oortwijn692008ApplicationThe NetherlandsZonMw Health Care Efficiency Research programme: HTAPayback Framework. Logic model; survey data collected from PIs of 43 studies conducted using health-care efficiency research funds (response rate 79%); case study analysis (including 14 interviews) of five HTA projects. Developed and applied a two-round scoring systemKnowledge production; research benefits; informing policy; changing health practice; broader impact on health
There was a total of 101 papers, 25 PhD theses, citation in guidelines in six projects, implementation of new treatment strategies in 11 projects
Use of triangulation methods and presentation of scores that account for wider range of impacts. Potentially not long enough to witness benefits for many of projects. The programme was mainly conducted in academic hospitals, with a large responsive mode element and most studies were prospective clinical trials
108Orians et al.172009ApplicationUSANIEHS: Extramural Asthma Research ProgrammeNIEHS logic model (see Engel-Cox et al.63). Web-based survey of 1151 asthma researchers who received funding from the NIEHS or comparison federal agencies from 1975–2005. A total of 725 responded (63%). While the researchers all received federal funds, most of the questions covered respondents’ body of research. Key informant interviews with end-users (n = 16). Analysis of the NIEHS model in the light of the findings. Companion article to Liebow et al.91 that described the attempt to apply the NIEHS framework using databasesWide range of impacts considered as set out in Engel-Cox et al.63 Examples set out below from survey findings of asthma researchersLarge numbers surveyed and the focus on their role as researchers rather than on specific projects allowed nuanced assessment of dissemination and product development. However, the contribution to understanding outcomes is more limited:
  • this method does not support attribution of these outcomes to specific research activities nor to specific funding sources. Nor did we gain many insights from the survey into the pathways by which research is translated and used to affect these outcomes

Asking about research from any funder reduces the relevance for assessing the impact of specific programmes of research, but study concluded:
  • the model guiding this assessment, with its “pathways”, is a reasonable representation of how research may result in such impacts, at least as they pertain to asthma research

Found: papers – 96%; research tools and methods – 29%; improved environmental measurement techniques – 20%; spin-off companies – 4%; licensing a patent – 38% of patent holders; changes in guidelines – 19%; changes in environmental standards/regulations (indoor air) – 8%; changes in business practices regarding air – 8%; changes in public knowledge – 33%; changes in clinical practice – 27%. End-users saw research use falling into various categories: professional development; intervention/regulation, e.g. reducing environmental tobacco smoke and exposures to lead, etc.; new drug development and regulation; clinical practice
109Ottoson et al.3692009ApplicationUSARWJF: ALR programmeUtilisation-focused evaluation. Telephone interviews with 136 key informants (first-line consumers and implementers, policy-shapers) representative of four out of five levels in logic model (professional community, policy, scientific community, funders); bibliometric analysisIs a field emerging? (If so, what is its name?) Is there an awareness of ALR within the field? Has ALR contributed to policy discussions?Neat logic model. Limitations of snowball sampling in being representative of opinions. It makes five recommendations, which imply association with impact/greater utilisation, e.g. bridging research and policy (‘substantial and coordinated investment’); boosting visibility and relevance of policy (engage end-users and intermediaries early in research process); emphasising collaboration and co-ordination. Pair of evaluations of the same initiative – Gutman et al.132
Found: it had contributed to development of transdisciplinary field; ‘ALR’s contributions to policy discussions were found across a spectrum of policy-development phases’
110Ovseiko et al.3892012BothUKOxford University Clinical MedicineREF pilot impact indicators. Describes survey (48% response) and other approaches used for data collection for this piloting of the REFDelivering highly skilled people; creating new businesses, improving the performance of existing businesses, or commercialising new products or processes; attracting R&D investment from global business; better-informed public policy-making or improved public services; improved patient care or health outcomes; cultural enrichment, including improved public engagement with science and research; improved social welfare, social cohesion or national securityImportant contribution to the analysis of the development of the REF. Problems with retrospective collection of data. Existence of self-selection bias? All known REF gaming issues. It concluded that:
  • Assessing impact is feasible

Found:
  • While the majority of the proposed indicators have some validity, there are significant challenges in refining the current indicators and methodologies in a number of ways

111Penfield et al.142014MethodologicalUK in particularAll fieldsFrameworks examined: Payback; SIAMPI; Australian RQF; RAND report that led to REF impact case study approach (Grant et al.38). Plus overview of specific indicators. Metrics, e.g. social ROI; narratives (case studies), surveys and testimonies; citations outside academia/documentation. Also describes history of the impact component of the REFCites the REF definition of impact:
  • an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia

Rounded analysis led to the conclusion:
  • While aspects of impact can be adequately interpreted using metrics, narratives, and other evidence, the mixed-method case study approach is an excellent means of pulling all available information, data, and evidence together, allowing a comprehensive summary of the impact within context

112Percy-Smith et al.3902006ApplicationScotlandHealth Scotland: five evidence informed policy and practice initiativesNot restricted to a single framework but drew heavily on Social Care Institute for Excellence taxonomy (see Walter et al.391). Also drew on and amended Health Development Agency framework in the light of findings to produce a new, more complex model (see next column). ‘Mixed method’: literature review plus document analysis plus stakeholder seminar (n = 1) plus stakeholder interviews (n = 22) plus survey of participants in learning networks (631 completed forms received). Evaluated at three levels: the programme as a whole, the five separate initiatives and the individual practitionersImpact was assessed in terms of:
  • production of evidence briefings/syntheses
  • production/commissioning of new research
  • evidence into policy – dissemination and active engagement
  • evidence into practice: dissemination/active engagement/support for practice change
  • capacity building in relation to research, research use and evaluation
  • practice into evidence – collection, appraisal and collation
Successful programmes were characterised by a high degree of opportunism (i.e. use policy windows). Extensive ‘reach’ into professional and other stakeholder groups. Clarity and accessibility of presentation of key messages
  • engagement with target audiences
  • ownership and involvement of stakeholder groups in the production of materials
  • dissemination events well run
  • networks – up to a point
Where practice change occurred, the main facilitators were:
  1. relevance of evidence/guidance to practice
  2. credibility of evidence/guidance
  3. availability of good practice examples
Personal relationships with policy-makers meaning researchers became first port of call for advice
Some case studies produced to showcase good practice may have been questionable in some aspects of quality: hence, there is a quality control issue in relation to ‘practice based evidence’
113Pittman and Almeida3442006BothInternationalInternational Development Research Centre and Pan American Health Organization: programme on social protection in health in Latin America and the CaribbeanResearch programme informed by concept of linkage between researchers and users (Lomas342) and therefore was structured with the intention of achieving impact. Not described in detail but appears to be mostly an insider account while the initiative was still under way and drawing also on early papers and discussions with researchersThe linkage model aims to ensure opportunities for users to be involved throughout a project, and that as a consequence impacts will ariseThe detailed knowledge of the programme enabled an informed analysis; however, there were timing problems. The programme itself ran into difficulties of providing incentives for decision-makers:
  • to remain active in the project, when their timelines tend to be more short-term

The authors also recognised that it was too early for a final assessment of whether or not the type of research design would have an impact on innovation in social protection in health. The linkages between researchers and users were seen as very important to achieving the impacts that had already arisen. Interesting contribution to debates about whether or not scientific quality and impacts are associated:
  • all but one team felt the scientific rigor of the projects had improved as a result of the extended planning and interaction with decision-makers as well

Found: a negotiated research question that influenced ‘not only the project design, but the decision-makers’ ways of thinking about the problem as well’. In four out of the five cases, turnover among government officials impaired the process, but in the fifth team:
  • the interaction has led to use of data in decision-making, as well as a clear recognition by both parties that different kinds of evidence were at play

114Poortvliet et al.1402010BothBelgiumKCE: HTA, HSR and good clinical practiceDeveloped own framework. Documentary review; two group discussions, with 11 KCE experts and with two KCE managers; interviews with stakeholders (n = 20); web-based survey to project managers: 66 external (28% responded) and 101 KCE (72% responded) – a total of 88 managers reported on 126 projects; nine detailed case studies selected by stratified random approach; and international comparisons with three agencies using documentary/literature review and interviews (n = 3)Dissemination (outputs, dissemination activities, stakeholders addressed, engagement of stakeholders in outputs, actual take-up of outputs); contribution to the decision-making process (familiarity with research projects, research utilisation, relevance of projects for policy-making/practice, reactions on finished projects from stakeholders); external image of KCEComprehensive range of methods and analysis that focused on the processes as well as impacts. However, in addition to some conflicts of interest, the data collection phases were simultaneous, thus reducing scope for triangulation through a sequence of data collection activities. While some important conditions for achieving impact were not realised, the report concludes that KCE had established some of them:
  • Research questions come from primary stakeholders. There is involvement from these stakeholders in agenda and priority setting. The quality of KCE research itself is high and in general beyond discussion. The relevance of KCE research findings is generally judged as high

Some similarities with other/earlier findings about the HTA impact: of the three fields included, more HTA project co-ordinators thought research made a policy impact than in GCP or HSR
Found: 16 stakeholders said findings influenced decision-making, four said not in their organisation; 58% of project co-ordinators thought projects contributed to policy development: more for HTA than Good Clinical Practice or HSR
115Reed et al.702011ApplicationAustraliaPrimary care researchPayback Framework. Online survey to 41 contactable CIs (out of 59 projects). Asked impacts expected, how many achieved. Some projects excluded as still under way, others refused. Out of 23 completed, 17 were relevantFive domains: research transfer (including knowledge production); research targeting, capacity building and absorption; informing policy and product development; health and health sector benefits; and broader economic benefitsInclusion of questions about what expectations CIs had had about impact: allowed an interesting comparison with what was achieved. However, quite a large number of CIs could not be located, or refused to participate. For those who did, there may have been ‘risks of a bias towards positive benefits’ and the study relied solely on the survey. Interesting comparison with Kalucy et al.65 from the same team:
  • In an earlier study we collected qualitative data through interviews with CIs and gathered copious information that provided more context to better understand the results

Found: 13 CIs (76%) considered they had achieved at least half of the impacts expected; 11 PhDs came from 10 projects; further research, 65% achieved; 13 projects (76%) expected to influence national/state policy-making, four (24%) did so, but eight (47%) influenced decision-making at organisational, local or regional level [combined, nine separate projects (53%) had a policy/decision impact]; 10 (59%) expected to lead to improved health outcomes, five (29%) did so. Few broader economic impacts achieved. Three of the examples of impact overall were unexpected
116Higher Education Funding Council1062015BothUKAll medical, health, biological, agricultural, veterinary and food sciencesREF 2014. Panel reflections on the methods and results of the REF 2014 exercise. However, did not have access to the findings of the analysis funded by HEFCE of the impact case studies that was still ongoing at the time of publication of the reportAll aspects of impacts assessed in the REFGiven the scale and apparent success of the REF, the comments supporting the case study approach are highly important. The main panel recognised the difficulties in demonstrating the link between research and impact, which may be non-linear, but thought that the narrative case study largely succeeded in capturing the complex links between research and impact. Submissions could be strengthened in future
  • if HEIs were proactive in collecting more qualitative and quantitative data evidencing the reach and significance of the impact. International MPA members cautioned against attempts to ‘metricise’ the evaluation of the many superb and well-told narrations describing the evolution of basic discovery to health, economic and societal impact

    p. 11

While cautioning against attempts to metricise the evaluation, the MPA said that future exercises should attempt to provide more accurate quantification of the extent and reach of the impact, and also of the proportionality of an individual contribution to ‘impact’
Found: MPA
  • believes that the collection of impact case studies provide a unique and powerful illustration of the outstanding contribution that research in the fields covered by this panel is making to health, wellbeing, wealth creation and society within and beyond the UK

International members described it as:
  • the boldest, largest and most comprehensive exercise of its kind of any country’s assessment of its science . . . to our knowledge, [it is] the first systematic and extensive evaluation of research impact on a national level. We applaud this initiative by which impact, with its various elements, has received considerable emphasis

MPA said best case studies had a:
  • clear and compelling narrative linking the research programme to the claimed impact; verifiable evidence (qualitative or quantitative) to support the claimed impact provided within the text . . . and (where appropriate) spread of the impact beyond the immediate beneficiaries to a much broader and possibly global audience

117Rispel and Doherty1352011ApplicationSouth AfricaCHPNone explicitly stated. Interviews with 25 purposively selected key informants (12 CHP alumni, seven current members of staff, six external stakeholders); documentary review; aspects of ‘insider account’ as both researchers had spent some time at CHP – but also time working elsewhereContribution to health policy development and implementation. Found: CHP ‘has contributed directly to health policy development and implementation while also changing the way government understood or approached policy issues’. ‘All key informants acknowledged that CHP had a significant impact on national health policy at one point or another’Authors claim that:
  • there is great value in comparing and contrasting our own ‘insider-outsider’ perspectives with those of pure “insiders and outsiders” as this has helped to clarify the reasons behind differences of opinion

A range of methods:
  • triangulating information from the interviews, earlier evaluations, and external review reports

Only able to interview a few government officials with no prior link to CHP. Factors: research quality and trustworthiness; strategic alliances and networking; capacity building – training future policy leaders
  • At CHP, discussing research ideas and the possible implementation of research findings with policy makers at the start-up of projects, and then presenting findings face-to-face, became a powerful mode of influencing policy

Reported examples of instrumental and conceptual impact on policy
118Ritter and Lancaster3922013BothAustraliaNational Drug and Alcohol Research Centre: illicit drug epidemiological monitoring systemsInformed by previous frameworks, developed a three-component approach to assessing research influence:
  • examination of the use of research in policy documents (which speaks to research utilisation theory); use of research in policy processes (which speaks to interactive engagement theories) and dissemination of research in the public sphere via the media

Retrospective systematic review of data sourced from the public domain (using the names of the centre’s two monitoring systems as search terms): policy documents; policy processes; media mentions. The review considers the number of mentions and ‘type of mention’: the ways in which the research was being used (e.g. conceptually), the purpose for which it was referenced (e.g. informing priority areas) and the value placed on the research (e.g. providing important knowledge in the decision-making process)
Use of research in policy documents, policy processes and mediaDeveloped an original framework and successfully used it to structure and implement their study. Then also analysed the findings in relation to a series of key theoretical perspectives on the link between policy and research, e.g. epistemic communities393 and linkage and exchange.394 Had the strength of being entirely independent of the researchers for data collection. However, it recognised that the approach did not extend to considering how far the research findings have changed drug policies in Australia
  • The approach is less complicated than others that have been suggested (Donovan and Hanney;45 Hanney et al., 2003;35 Lavis et al., 2003354) but goes beyond a simple checklist approach (Smith, 2001), whilst also being grounded in policy theory

Found: the majority of major drug strategy documents do not reference research, but the monitoring systems referenced in more detailed policy documents. In terms of policy processes, it found that 18 parliamentary committees and inquiries contained a total of 87 mentions of one of the monitoring systems, and often these were in submissions from other bodies. Sixty-eight mentions in the media: 0.2 of total drug mentions, but only 7.4% of drug-related media stories refer to any research
119Rosas et al.3952013BothUSANIH: HIV/AIDS Clinical Trials NetworkProcess marker approach. Out of 419 publications in 2006–8, selected 22 from the network’s flagship studies in terms of scientific priority as primary interventional clinical trials. Obtained data about protocol dates from the network database. Identified publication date and citations on Web of Science. Used PubMed database to identify which citations were meta-analyses, reviews, and guidelines. Operationalises ‘date markers’ in citation data to track dissemination progress by selecting key end pointsCitation of research in reviews, meta-analyses and guidelinesDoes not need any direct input from the researchers of the projects assessed. However, used a very small sample and the simple citation metrics used have limitations. The time from study approval to citation in guidelines is shorter than identified in some other papers, but the HIV/AIDS field ‘is quite specialized’
Found: 11 of the 22 publications were cited in guidelines within 2 years of publication, mean time from study approval to first guideline citation: 74.1 months
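A minimal sketch of the date-marker arithmetic implied by this entry: take key dates for a study and compute the elapsed months to each dissemination end point. The dates below are invented.

    # Hypothetical date markers for one trial publication and the elapsed months between them.
    from datetime import date

    def months_between(start: date, end: date) -> int:
        return (end.year - start.year) * 12 + (end.month - start.month)

    protocol_approved = date(2001, 3, 1)          # invented
    published = date(2004, 9, 1)                  # invented
    first_guideline_citation = date(2006, 5, 1)   # invented

    print(months_between(protocol_approved, published))                 # 42
    print(months_between(protocol_approved, first_guideline_citation))  # 62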
120Royal Netherlands Academy of Arts and Sciences792010ApplicationThe NetherlandsAll fieldsUpdate of evaluation framework previously used by the organisations to assess research, not only its impact, at the level of research organisations and groups or programmes. Self-evaluation and external review, including site visit every 6 yearsLists a range of specific measures and indicators, some more qualitative, that might be used in the self-evaluation. Also for assessment of societal relevance: ‘Several methods have been developed for specific areas (the payback methods for health research, for example) and new methods are being developed’ and here gives a link to the website for the ERiC project – see Spaapen et al.109 Uses the concept of societal quality to refer to productive interactions (as with some other Dutch studies)As the major approach to assessing publicly funded research in the Netherlands, it has the strength of ‘a broad scope’, including a focus on the societal relevance of research. However, while the range of options of methods to show societal relevance provides flexibility, there might be uncertainties for institutions and programmes in deciding the best approach for a formal assessment
121RSM McClure Watters et al.712012ApplicationUKNorthern Ireland Executive: HSC researchPayback Framework. Desk analysis of documents and literature, consultations with stakeholders, survey informed by Payback Framework, three case studies, benchmarking. Surveys to all 169 PIs for projects funded between 1998 and 2011 who could be contacted. There was a response rate of 84 (50%)Used five Payback categories: knowledge/dissemination; benefits to future research (career development, qualifications, extended category to include jobs supported by the research funding); informing policy; health and health sector benefits (health gain, improved service delivery, cost reductions, increased equity); economic benefits (note: main factor considered here was additional funding brought in to support the HSC-funded projects, and leveraging for further funds)Wide range of methods used to provide comprehensive picture of the context and impact of HSC research funding. There were both strengths and weaknesses in the focus on the economic impact. Given the clear geographic focus of the HSC R&D funding, it was valid to demonstrate the role of the funding from the Northern Ireland Executive in leveraging funding from outside Northern Ireland to support health research in Northern Ireland, but, ‘Much more detailed analysis would be necessary to demonstrate what proportion of this follow-on money was fully leveraged by the original HSC funding’. The case studies ‘provide good examples of the incremental nature of the impacts associated with research within the Health and Social Care field’
Found: 66 PhDs/master’s/MDs supported; considerable career progression; over 100 posts supported; 19% impact on policy development; 20% health gain; 13% cost reductions; 17% increased equity; additional funding covered as much as HSC-funding on projects, substantial leveraged funds for follow-on projects came from outside Northern Ireland
122Runyan et al.3962014BothUSAInjury Control Research CentersNot explicitly stated beyond noting that further refinement and identification of systems to support their use:
  • should consider carefully the correspondence of the indicators with the agency’s implementation logic model

Article mainly describes the methods used to develop the set of 58 indicators (27 priority, 31 supplemental)
The background refers to ‘impacts of academic research centres’: they have a teaching role as well as research role; therefore, indicators are comprehensive. Proposed indicators include: scholarly products; funding (including leveraged funds); policy-related work in relation to government and/or organisational policies (including both consultations and actual impact on policies); technical assistance (including consultation requests, e.g. participation on planning committees for community initiatives); improving community capacity (e.g. collaborations with community members involving collecting or evaluating data); interventions developedDespite detailed work to develop the list of indicators, the process still needed further refinement and piloting. While the authors reflect the view that the richness of centres’ contributions exceeds the sum of the individual components, it is not clear how far the list of indicators reflect previous theoretical discussions about how this perspective can inform research impact assessments. A growing number of studies assess the impact of long-term funding for research centres, and might take a broader perspective than that applied to specific programmes of research. This feeds into discussion about the role of impact assessment within wider assessment of research:
  • centers will be more receptive to the evaluation process if it is clear how the data are being used and if the information is considered in the context of how centers are designed and operate

123Rycroft-Malone et al.1022013ApplicationUKNIHR: CLAHRCs – HSRRealist evaluation. Mixed-method case study including interviews, ethnography, document analysis and participant observationWhat works for whom in what circumstances in multistakeholder research collaborations, using a CLAHRC as a worked exampleStrength: clear and rigorous application of realist method supplemented with other perspectives where appropriate. Weakness: not yet replicated in other CLAHRCs or similar models. Various mechanisms were identified – collaborative action, relationship-building, engagement, motivation, knowledge exchange and learning – which interact with context to produce different outcomes in different parts of the programme
124Sainty992013ApplicationUKOccupational Therapy Research FoundationBecker Medical Library model. All 11 grantees who had completed a UK Occupational Therapy Research Foundation-funded project were invited to complete a ‘personalised impact assessment form’ (equivalent to a survey). Eight responded (73%). Two researchers were invited to provide an independent review of the collated findingsBased on the Becker Model, four main categories: research output/advancement of knowledge (e.g. publications, PhD completion, career progression, follow-on funding); clinical implementation (e.g. assessment tools/outcome measures generated, guidelines, loan of final report from library, training materials, clinicians report change in practice); community or public benefits (e.g. service users engagement activities in project, presentations to public); economic benefitsStudy informed by a literature review and based on a model. Possibility of recall and selection bias in responses from researchers – the only data source. In relation to the clinically related activities of three projects:
  • Important to note, was the extent to which respondents highlighted this as being in the context of the participating services or host organisations

This links with a growing number of examples where it is the local application that is recorded, which is important. It has credibility in that those claiming impact know the local area best, and it fits with the claims that engaging in research is likely to improve health care
Found: one PhD, one MPhil, six career progression, four further grants; and three projects – local clinical application
125Sarli et al.1182010Methodological (plus case study)USAAll healthDeveloped a new approach called The Becker Medical Library model for Assessment of Research. Started from the logic model of W.K. Kellogg Foundation397 ‘which emphasises inputs, activities, outputs, outcomes, and impact measures as a means of evaluating a program’. Methods proposed in the new model: main emphasis is on the indicators for which the data are to be collected (see column Impact: examined and found), but referring to the website on which the indicators are made available, the authors state:
  • ‘Specific databases and resources for each indicator are identified and search tips are provided’. However, in relation to their pilot case study they state: ‘For some indicators, supporting documentation was not publicly available. In these instances, contact with policy makers or other officials was required’

For each of a series of main headings lists the range of indicators, and the evidence for each indicator. Main headings: research outputs; knowledge transfer; clinical implementation; and community benefitA comprehensive list, but could be questions about the diversity of items included in some of the categories, and how far they have been fully linked with the organising framework. It was also challenging to establish a clear pathway of diffusion of research output into knowledge transfer, clinical implementation or community benefit outcomes as a result of a research study. This was, in part, due to ‘the difficulty of establishing a direct correlation from a research finding to a specific indicator’. The Becker Model is mainly seen as a tool for self-evaluation:
  • may provide a tool for research investigators not only for documenting and quantifying research impact, but also . . . noting potential areas of anticipated impacts for funding agencies

126Sarli and Holmes3982012MethodologicalUSAAll healthThe Becker Medical Library model for Assessment of Research – updateSame basic method as in original model, but updated ‘to include additional indicators of research impact based on review of other research projects’Claimed that the changes also ‘reflect the authors’ intention to make the model more user friendly’. It has similarities with researchfish. After this, the web version updated on regular basis: https://becker.wustl.edu/impact-assessment (accessed 19 July 2016)
127SHRF862013ApplicationCanadaSHRFCAHS. Review by external consultant: 22 interviews, including with five researchers whose SHRF-funded work formed the basis of case studiesFive categories from CAHS’s framework: research capacity (e.g. personnel, additional research activity funding, infrastructure); advancing knowledge; informing decision-makers; health impacts; broad economic and social impacts (e.g. research activity)Applies a framework SHRF had helped develop, but framework applied to just five case studies. Among the claimed facilitators of impact are SHRF’s ‘group-building grants, creation of research groups and networks’. The emphasis on the economic benefits in terms of research jobs attracted is linked to research infrastructure
Found: examples of capacity building and researcher retention: three out of five research groups reported limited impact on clinical and policy decision-makers, but one case study describes how a clinic launched as a demonstration project has now been used as the model by others; some guidelines have been developed; ‘for every dollar awarded the researcher attains four dollars from external sources of funding . . . suggested that with the presence of special research infrastructure . . . comes higher paying jobs, resulting in a higher tax base and a highly sought knowledge economy’
128Schapper et al.722012BothAustraliaMurdoch Children’s Research InstituteInstitute’s own research performance evaluation framework ‘based on eight key research payback categories’ from the Payback Framework and drawing on the RIF (Kuruvilla et al.123). A committee oversees the annual evaluation with a nominee from each of six themes plus an external member and chairperson. Evaluation ‘seeks to assess quantitatively the direct benefits from research’. Data are gathered centrally and verified by the relevant theme. The theme with the highest score on a particular measure is awarded maximum points; others are ranked relative to this. Each theme nominates the best three research outcomes over 5 years, and these are then examined in interviews by the research strategy team using a detailed questionnaire to gain evidence and verify outcomes. Research outcomes assessed using a questionnaire based on the RIFThree broad categories: knowledge creation; inputs to research; and commercial, clinical and health outcomes. The six major areas of outcomes: development of an intervention; development of new research methods or applications; communication to a broad audience; adoption into practice and development of guidelines and policy; translation into practice – implementation of guidelines and policy; and impact of translation and on healthFramework developed for use in the institute to provide a balanced assessment across the wide range of modes of research conducted in the institute. Despite issues of the weighting to give various factors and the relative scoring of data, the evaluation ‘is generally viewed positively by researchers at the Institute’. However, it might appear rather formulaic. Impact embedded into the performance evaluation and strategic management of a research institute; ‘provides a fair and transparent means of disbursing internal funding. It is also a powerful tool for evaluating the Institute’s progress towards achieving its strategic goals, and is therefore a key driver for research excellence.’ It claims that the evaluation ‘is unique’
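A minimal sketch of one way to implement the relative scoring rule described for this entry (the theme with the highest raw score on a measure receives the maximum points and the others are scaled against it); the theme names, raw scores and 10-point maximum are invented, and proportional scaling is only one possible reading of ‘ranked relative to this’.

    # Hypothetical relative scoring across research themes for a single measure.
    raw_scores = {"theme_A": 250, "theme_B": 125, "theme_C": 50}  # e.g. publication counts (invented)
    max_points = 10.0                                             # maximum points per measure (assumed)

    top = max(raw_scores.values())
    points = {theme: max_points * score / top for theme, score in raw_scores.items()}

    print(points)  # {'theme_A': 10.0, 'theme_B': 5.0, 'theme_C': 2.0}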
129Schumacher and Zechmeister1342013BothAustriaInstitute for Technology Assessment and Ludwig Boltzmann Institute for HTA: HTAAn account that goes with the Zechmeister and Schumacher83 study in developing a framework informed by various strands of previous work including Weiss’s enlightenment concept and a ‘multidimensional concept of impact’. Combination of interviews (same 15 as Zechmeister and Schumacher83), download analysis, retrospective routine data analysis and media analysisWhether or not the HTA research programmes ‘have had an impact on the Austrian healthcare system’. Considered seven impact categories: awareness, acceptance, process, decision, practice, final outcomes (i.e. economic impact), and enlightenmentComprehensive methods. They developed a matrix to show the methods used, the impact categories to which they relate and the indicators addressed by each method. ‘The strength of our approach is that it takes into account that HTA research may affect all system levels, in a multi-dimensional manner rather than a linear one.’ However, there were only 15 interviews so they were not able to cover all target groups and ‘selection bias may have occurred . . . Further limitations are a lack of benchmarks.’ The study seems to underplay extent to which earlier studies used a combination of methods
Found: rising downloads, some reports high media interest; increasingly used for investment decisions and negotiation; ‘Economic impact was indicated by reduced expenditures due to HTA recommendation . . . in places, an “HTA culture” can be recognized’
130Scientific Management Review Board3992014MethodologicalUSANIHDescribes early stages in developing an approach. Assessments should:
  • examine connections between the generation and communication of basic and clinical knowledge and the impact of this knowledge along different translational pathways

Assessments should attribute outcomes to all contributors, allow sufficient time to have elapsed and ‘begin with identifying the purpose of the study and its audiences’Identified issues but progress limited. Relevance for our current review is that it highlights the view from the world’s largest health research funder that considerable further work is required to develop an approach to assess the value of biomedical research supported by the NIH. Referring to data systems such as STAR METRICS says:
  • NIH’s data infrastructure was built primarily to manage grants and contracts during their life cycle, not to track outcomes

131Scott et al.732011BothUSANIH: Mind–Body Interactions and Health programmePayback Framework incorporated into the design as a conceptual framework and adapted with greater focus on how the agenda for the programme was shaped and the extent of community outreach and engagement. Centres: documentary review, database analysis, interviews (centre PIs), bibliometric data, construction of narrative templates for centres based on the combined data. Similar approach for projectsMain payback categories as related to the stages of the payback model: primary outputs (knowledge production; research targeting and capacity building – including an emphasis on the effect on the host institution); secondary outputs; final outcomes (improvements in health, well-being and quality of life, changes to health-care service delivery, and broader social and economic impacts)Adaption of conceptual framework to meet needs of specific evaluation, and use of wide range of complementary methods, but no findings presented in this paper. They considered the range of issues facing impact assessment: timing of evaluation with many outcomes not likely to be evident at the time evaluations usually requested – used term latency to describe that situation; attribution
132Shiel and Di Ruggiero1142009MethodologicalCanada (part of appendix A of CAHS report)Population and public health researchRecommends that the CAHS should adopt the Payback Framework, as amended by CIHR. Identifies sources for data for the various payback categories: bibliometrics; case studies; evaluation studies of clinical guidelines; database analysis; special studies of specific itemsShows how framework could be introduced to cover population and public health research. Covers the main payback categories: knowledge production; research targeting and capacity; informing policy; health and health sector benefits; and economic impactsAnalyses and contrasts payback studies and ROI studies (and by ROI they mean the economic valuation studies, and not the broader use of the term by the CAHS, which put the term ROI in the title of its report that recommended a variation of Payback). Also briefly highlights receptor capacity benefits from conducting research (e.g. p. 53)
133Solans-Domènech et al.872013ApplicationCataloniaAgency for Health Quality and Assessment of Catalonia: respiratory diseaseROI model from CAHS: this paper describes a study within the overall study described in Adam et al.84 Interviews with 23 key informants: researchers and decision-makers. Differences between achieved and expected impact described: expected defined as what hoped to achieve at startFive main categories from CAHS: advancing knowledge; capacity building; informed decision-making; health and social; and broad economicThe detailed case studies allowed a thorough analysis of expected and achieved impacts, but there were only six cases:
  • teams that include decision-makers or users of health information were more effective in achieving outcomes in health policy or practice from the research findings

Found: 3/6 projects achieved all expected impacts; there were some unexpected impacts, and some expected impacts not achieved in final outcomes and adoption phase
134Soper et al.1852013BothUKNIHR: CLAHRCs – HSRDid not start with a specific theoretical perspective in the evaluation – adopted a formative and emergent approach, but it was informed by various studies: Quebec Social Research Council, Quality Enhancement Research Initiative, Need to Know project. Stakeholder survey of CLAHRCs, in-depth case studies of two CLAHRCs, validation interviews with nine CLAHRCsFrom the two CLAHRC case studies they looked at 1. Establishing the CLAHRC, 2. Working as a CLAHRC, and 3. Emerging impacts and legacies. Stated that ‘both CLAHRCs had some comparatively rapid success in making an impact on health care provided locally and more widely across the NHS’. A common feature was the use of a range of knowledge transfer and exchange strategies. Authors used Mannion et al.’s framework to assess CLAHRCs’ efforts to encourage cultural change and explain variable success with different groups. Four types of culture: clan, hierarchical, developmental and rationalDid not specify which theories they would test; instead they used a formative and emergent approach to analyse whether or not CLAHRCs met their remit, and considered how best to relate their data to existing theories. Limited to two case studies of CLAHRCs and people closely associated with them
135Spaapen et al.1092007BothThe NetherlandsMostly describes methodological development, illustrated by earlier case studies, including one on the pharmaceutical science programme at Groningen UniversitySciQuest approach of the ERiC initiative. Mixed-method case studies using qualitative methods, a quantitative instrument called contextual response analysis and quantitative assessment of financial interactions (grants, spin-outs, etc.). SciQuest methodology is deliberately non-prescriptive and context-sensitive. There are four key stages:
  1. Mission: in this phase the mission of a group/programme is established
  2. The REPP: a phase in which a more or less objective (quantitative) picture of the group’s production and interaction with the environment is established
  3. Stakeholders: a phase in which the environment is consulted about the impact of the group’s work (in the pharmaceutical sciences case study six user surveys were sent out, three returned)
  4. Feedback: in this phase the results of phases 2 and 3 are confronted with phase 1, to organise a debate on the strategy of the group
Productive interactions (direct, indirect and financial) must happen for impact to occur. There are three social domains: science and certified knowledge; industry and market; and policy and societal. For the REPP in the pharmaceutical sciences example they set 15 benchmarks – five for each domain. For each benchmark there were five levels. The faculty had the highest benchmark in: relative citation impact; representation in editorial boards, involvement in industry/market; and additional grants societal/policyIt focuses the assessment on the context and is designed to overcome what were seen as the linear and deterministic assumptions of logic models. The primary goal of the study of the pharmaceutical sciences faculty was to support the faculty in conducting the self-evaluation required under the assessment system for academic research in the Netherlands. SciQuest is described as a ‘fourth generation’ evaluation approach. It draws on the work of Gibbons et al.179 on ‘Mode 2 knowledge production’. SciQuest is theoretically elegant but there are concerns about its feasibility for regular use:
  • The REPP profiles represent the various existing constellations of interactions between a program and its environment. Because of the large quantity of indicators, it is not always easy to transform them interpretatively in the profile to construct such a constellation

However, as noted, a link to this 2007 report was included in the new evaluation guidelines drawn up by the Royal Netherlands Academy of Arts and Sciences79
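A minimal Python sketch of a REPP-style profile structure is given below, assuming the three social domains and the five-level benchmark scale described in the entry above; apart from the benchmarks quoted in that entry, the benchmark names (e.g. ‘licences and spin-outs’, ‘contributions to guidelines’) are invented placeholders rather than the actual ERiC/SciQuest indicator set.

  # Illustrative sketch only: domain names follow the entry above; most benchmark names
  # are invented placeholders, not the actual ERiC/SciQuest indicator set.
  from typing import Dict

  # A REPP profile: three social domains, five benchmarks per domain (two shown here),
  # each scored on a five-level scale (1 = lowest, 5 = highest).
  repp_profile: Dict[str, Dict[str, int]] = {
      'science and certified knowledge': {
          'relative citation impact': 5,
          'representation on editorial boards': 5,
      },
      'industry and market': {
          'involvement in industry/market': 5,
          'licences and spin-outs': 3,          # placeholder benchmark
      },
      'policy and societal': {
          'additional societal/policy grants': 5,
          'contributions to guidelines': 4,     # placeholder benchmark
      },
  }

  # Summarise the profile per domain, e.g. as an input to the stakeholder and feedback phases.
  for domain, benchmarks in repp_profile.items():
      top = max(benchmarks, key=benchmarks.get)
      print(f'{domain}: highest-scoring benchmark = {top} ({benchmarks[top]}/5)')

In the full approach each domain would carry five benchmarks, and the resulting quantitative picture would feed the stakeholder consultation (phase 3) and feedback debate (phase 4) described above.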
136Spaapen et al.812011BothEU – led from the NetherlandsFour case studies: health, ICT, nanotech, social/human sciencesProductive interactions. Based on SIAMPI collaboration. SIAMPI ‘involves two central tasks: to enlighten the mechanisms by which social impact occurs and to develop methods to assess social impact. SIAMPI . . . developed an analytical framework for the study of productive interactions and social impact’. Productive interactions, based on three categories of interaction:
  1. direct personal contacts
  2. indirect (e.g. via publications)
  3. financial or material
Mixed-method case studies using qualitative methods (interviews with researchers and beneficiaries), a quantitative instrument called contextual response analysis and quantitative assessment of financial interactions (grants, spin-outs, etc.)
The focus of the study was mainly on assessing the interactions. They found:
  • the distinction between impact and interaction may be fuzzy
  • a wide variety of different interactions was documented
  • social impacts were not always wholesale and directly attributable; they were also ‘piecemeal’ or incremental alterations of policies or professional practices
  • social impacts take a long time to emerge
  • when networks become larger, social impacts become more remote from the research process
  • the quality of the interactions varied from very incidental and informal to highly organised and professionalised networks
  • in health care in particular, changes might involve political decision-making in a field of stakeholders with varying interests
  • most interactions with stakeholders were shaped by multiple productive interactions
  • productive interactions can be managed and institutionalised
Definition:
  • Social impact of scientific research refers to measurable effects of the work of a research group or program or a research funding instrument in a relevant social domain. The effect regards the human well-being (‘quality of life’) and/or the social relations between people or organisations

The SIAMPI/ERiC stream is widely seen as a major, innovative contribution to the field and is sensitive to institutional goals; however, it is resource intensive, challenging to implement and ‘assumes interactions are good indicators of impact’ (Guthrie et al.9)
137Spoth et al.4002011MethodologicalUSAFamily-focused prevention sciencePrimarily a review. Brief account of the PROSPER model ‘for the delivery of evidence-based intervention’, but also involves evaluation of interventions that are community–university partnerships. Collection of outcome dataIncludes analysis of long-term effects. Explores four translational impact factors: effectiveness of interventions; extensiveness of their population coverage; efficiency of interventions; and engagement of populationsNot a traditional model for assessing research impact, but the review covers a range of issues
138Sridharan et al.4012013BothInternationalTeasdale-Corti Global Health Research Partnership Programme. The evaluation was informed by a theory-driven approach (from Pawson et al. 2004) that emphasised iterative learning and assessing performance in light of anticipated timelines of impact. Interviews with planners of the programme. Analysis of the 8/14 final reports then available, and their proposals. Surveys: the first to 87 grant recipients attending an initiative symposium; the second to four different groups: co-PIs from Canada, co-PIs from southern countries, users, leadership award recipients; each group receiving a different version. Filmed interviews with some grantees. Bibliometric analysis. Brief case studies of three grantees including Skype™ (Microsoft Corporation, Redmond, WA, USA) interviews. Documentary analysis and iterative development of the theory of change. Wide range of impacts identified in the proposals and linked to the activities that supported the theory of change. Included knowledge production, capacity building to conduct and use research, enhanced recognition of the role for research in policy and practice and alignment with local needs, strengthened health systems and enhanced health equity, and improved health. Very wide-ranging and aimed to conduct a theory-driven evaluation, but there were limitations in how far they were able to implement a theory-driven approach, and the timing limited what impacts could be identified, including from those projects that had not finished at the time of the evaluation. Again, there is an increasing desire to include impacts in programme evaluation, but the timing of such evaluations is too soon for many of the impacts to have arisen. Also, the findings were mainly presented in terms of the different methods of data collection, which adds clarity to the report on each method but reduces the overall description of how far the various impacts were achieved
Found:
  • wide range of research [publication], capacity building and knowledge translation outputs

Case studies show examples of impact on policies and practice. The proposal analysis identified various discussions about addressing health inequalities, but the evaluation found that little seemed to have been done on this
139Sullivan et al.4022011BothUKUK cancer centres (part of international programme of accreditation – but not a specific stream of funding)Scientometric/bibliometric analysis, including guidelines. Broad bibliometric analysis but key issue:
  • introduces two new indicators of a centre’s impact, namely the percentages of their papers that are cited on a set of 43 UK cancer clinical guidelines and in stories appearing on the BBC website

Found:
  • We have found substantial variation in the propensity of papers originating from UK cancer centres to be cited on guidelines and in the media. There does not appear to be a correlation with the conventional citation impact of the papers, nor indeed with the size of the centre

Method not applied to funded programmes in this study but might have potential for such application
140Sutherland et al.4032011BothUKAny field: but example on research into means of conserving wild bee populations in the UK. Propose ‘a quantitative approach in which impact scores for individual research publications are derived according to their contribution to answering questions of relevance to research end users’. Tested the proposed approach by evaluating the impact of the bee research. Identified key interventions (n = 54) and searched for publications (n = 159) that test the interventions. Forty-four stakeholders allocated 1000 points between the different interventions according to how they should be prioritised. Three experts assessed the evidence for each intervention and the contribution and relevance of each publication. Finally, an impact score was calculated for each paper using an equation that incorporated the various elements (priority score, certainty of knowledge, contribution of knowledge, relevance and number of interventions for which the paper provides evidence); an illustrative sketch of this kind of per-paper scoring follows this entry. How far the knowledge would be relevant, etc., for important policies and practice. Developed a new approach having discussed the problems with either assessing impact through stages to direct benefits to society or by counting the amount of communication. The approach could possibly be adapted for application to a programme of research
It does not face the problems identified with attempts to track the actual impact from research, but the method is time-consuming. The authors report ‘A potential bias is introduced by the practitioners’ prior knowledge’, but the attempt to identify ways to reduce this suggests that the approach is attempting to identify a way of scoring impact that is insulated from the very context within which knowledge has in reality to exist if it is to make a real-world impact
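A minimal Python sketch of this kind of per-paper scoring is given below. Sutherland et al.’s actual equation is not reproduced in this table, so the simple multiplicative combination, the value ranges and the example numbers are assumptions made purely for illustration.

  # Illustrative sketch only: the weighting scheme below is an assumed simplification,
  # not the equation published by Sutherland et al.
  def paper_impact_score(priority, certainty, contribution, relevance, n_interventions):
      """Combine the elements named in the entry above into a single per-paper score.

      priority        -- stakeholder priority for the intervention(s) the paper informs
                         (e.g. its share of the 1000 allocated points)
      certainty       -- expert-assessed certainty of the knowledge (assumed 0-1 here)
      contribution    -- the paper's contribution to that knowledge (assumed 0-1 here)
      relevance       -- relevance of the paper to the intervention(s) (assumed 0-1 here)
      n_interventions -- number of interventions for which the paper provides evidence
      """
      return priority * certainty * contribution * relevance * n_interventions

  # Hypothetical example: a paper informing two interventions that together attracted
  # 60 of the 1000 stakeholder points, with moderately certain and relevant evidence.
  print(paper_impact_score(priority=60, certainty=0.7, contribution=0.5,
                           relevance=0.8, n_interventions=2))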
141The Madrillon Group742011ApplicationUSANIH: Mind–Body Interactions and Health Programme. Payback Framework incorporated into the design as a conceptual framework (called the Research Payback Framework) and adapted with a greater focus on how the agenda for the programme was shaped and the extent of community outreach and engagement (see Scott et al.73). Mixed-methods cross-sectional evaluation design. Used qualitative and quantitative data to build three snapshots of the programme as a whole, the research centres and the research projects. The request for semistructured interviews received a 100% response rate from PIs of all 15 centres and all 44 investigator-initiated projects. Impacts of centres were scored by adapting the scales used previously in payback studies and presenting the scores as radar graphs. Main payback categories: knowledge production; research targeting and capacity building; influence on policy; influence on health outcomes and health-care delivery; and broader economic and social impacts. Thorough multimethod study applying and adapting the conceptual framework. Conducted innovative analysis by examining three overlapping levels (programme, centre and projects), but while interviews were used rather than a survey, most of the data on wider benefits came from researchers and ‘[d]etermining whether a research project could be credited with an effect within a given benefit category still involved a degree of subjectivity’. The NIH feasibility study had led to a call stating that the Payback Framework should be used for this assessment:
  • As a conceptual framework, the Payback Framework drew attention to a range of outcomes less often considered in other NIH program evaluations and ordered these in a logical manner

Found: achieved the programmatic goals and objectives of facilitating interdisciplinary collaboration in research ideas and building research capacity for mind–body research through the development of research personnel and funding of research core services. The centres and projects ‘produced clear and positive effects across all five of the Payback Framework research benefits categories’. Projects: 34% influenced policies; 48% led to improved health outcomes
142Theobald et al.4042009ApplicationKenya/Malawi/NigeriaLiverpool School of Tropical Medicine’s Global Health Development Group: three operational research projects. Case studies organised using the RAPID framework from the Overseas Development Institute. They selected three cases using pre-defined criteria, including that it was known they had demonstrated an impact on health policy and practice. Analysed the research impact process using the RAPID framework for research–policy links, which focuses on: the political context; links between policy-makers and other stakeholders; and the evidence. External influences were examined as the wider context. Desk analysis with inputs from the PIs and the Global Health Development Working Group. Impact on health system strengthening and promotion of equity in health service provision. Using a common framework for the analysis of three case studies allowed the analysis to go further than had been reported in individual accounts of the projects, but there were only three case studies and they were from one research group rather than a single funder’s programme – the paper is at the margins of meeting the inclusion criteria
Key factor:
  • working in partnership with all relevant stakeholders to refine and develop research agendas and undertake research builds an appreciation of research that intensifies the effectiveness of the research to policy and practice interface

Found: in all cases, new knowledge and approaches were needed to fulfil policy requirements; they all involved partnerships between researchers and users; the links with policy-makers were not only at the policy agenda-setting stage – developing partnerships at multiple levels and with multiple players was key in all cases; and the use of equity considerations was central in each case. Capacity building was also important
143Tremblay et al.4052010BothCanadaCFI: research infrastructure in a range of fields. Developed their own method: the Outcome Measurement Study. The unit of analysis is multidisciplinary research themes at individual institutions. A theme includes CFI expenditures on a range of research infrastructure – laboratories, data and computing platforms, etc. Methods combine: the research institution completes an in-depth questionnaire covering a series of indicators – the 50-plus questions cover quantitative and qualitative data; expert panel review of the survey data plus other documents, followed by a visit to the institution. Evaluates contributions of CFI expenditure to improving outcomes in five categories: strategic research planning; research capacity (physical and human); training of highly qualified personnel; research productivity; and innovation and extrinsic benefits. The indicators listed for the final category include: leverage of CFI expenditures; type and amount of knowledge and technology exchange; and key innovations – economic, social and organisational, evolving industrial clusters. A panel report is produced for each outcome measurement study. Thorough analysis of approaches to impact assessment led to an approach that combines a range of methods and multiple outcome categories and ‘provides for a richness of analysis’. However, it is quite resource intensive for those being assessed and the assessors; some of the categories of outcomes, and indicators, relate to items such as recruitment of researchers that in other approaches would probably be viewed as inputs because they are directly paid for by the research funding. Perhaps different factors come into play because this is infrastructure funding, which might therefore facilitate the recruitment of researchers
Found: examples of social and economic benefits include:
  • Improvements in health care (e.g. improved surgical treatment of brain tumours through pre-op MRI and intra-op ultrasound)

144Trochim et al.4062008BothUSAPilot application to the National Cancer Institute’s TTURC. Evaluation of large initiatives – a logic model. Concept mapping, logic modelling, a detailed researcher survey, content analysis and systematic peer-evaluation of progress reports, bibliometric analysis and peer evaluation of publications and citations, and financial expenditures analysis. Concept mapping produced domains of collaboration, scientific integration, professional validation, communication and health impacts. Some of these were mainly short term and process oriented; others were medium or longer term, and outcome oriented
This paper is mostly about the short-term processes and interactions (details of managing the research programme and, especially, how to measure transdisciplinary collaboration). Brief on long-term markers:
  • TTURC researchers report considerable impact on policies at the state and local levels and on practice with respect to tobacco control

Describes just a single worked example. Authors also noted that the researchers surveyed expressed:
  • optimism that their research will help lead to significant positive health outcomes, especially for consumption and prevalence [however] they reflect researcher attitudes but do not constitute evidence of long-term impact

145UK Evaluation Forum192006MethodologicalUKAll health fields. Discusses a range of approaches to assessing the impact of medical research, but uses the categorisation of economic impacts from Buxton et al.28 as an organising framework for key sections. Reviewed a range of dimensions of impact but argued that socioeconomic benefits were an area where there was a particular need for further studies. An important step in developing health research impact assessment as a field of study, in particular the assessment of economic impacts. Key recommendations included:
  • Research funders should identify and fund further research into evaluation methods with a rigour and quality equivalent to other research fields . . . UK research funders should support research to assess the economic impact of UK medical research, which should include critiques of existing economic approaches

Subsequently a consortium of funders was formed under the umbrella of the UK Evaluation Forum to advance such studies
146Upton et al.1492014MethodologicalUKAll fields: especially in relation to knowledge exchangePropose an approach that would shift the focus from the outcomes of knowledge exchange to the process of engagement between academics and external audiences. They claim:
  • by rewarding the process of knowledge exchange, rather than specific impacts, this approach would incentivize participation by a broader range of academics

Focused on analysing roles for assessing outcomes or processes. Recommendations for assessing impact, in particular, related to higher education innovation funding and knowledge exchange funding
It was based on responses from many academics, but the situation might have changed as a result of the REF. Addresses key issue of assessment of processes or outcomes
Found:
  • evidence on the varied types of knowledge exchange activity undertaken by academics in different disciplines and institutional types calls into question the likely effectiveness of knowledge exchange drivers based on outcomes

147Van de Ven and Johnson2032006MethodologicalN/AGeneral: could be applied to HSR. Co-production. Interdisciplinarity and productive intersectoral conflict enable the salient features of reality to surface in the co-production process through ‘arbitrage’ (see Chapter 4). Intersectoral conflict, especially the ‘two cultures’ of academia and policy-making. Strength: introduces and justifies the pragmatic approach to exploring and achieving impact. Weakness: no well-worked-up examples
148Walter et al.3912004MethodologicalScotlandSocial care. Social Care Institute for Excellence. Three models of research utilisation:
  1. The research-based practitioner model
  2. The embedded research model
  3. The organisational excellence model. Care delivery organisations develop a research-minded culture by creating partnerships with local universities, adapting research findings to local settings and encouraging ongoing learning within organisations. Review found limited evidence of its effectiveness in practice
Pre-dates our review period, but thought important to include because applied in Percy-Smith et al.390
149Weiss1202007MethodologicalUSAAll health. Draws on the United Way model for measuring programme outcomes to develop a medical research logic model. Moves from inputs to activities, outputs and outcomes: initial, intermediate and long term. Discusses various approaches that could be used, e.g. surveys of practitioners to track awareness of research findings; changes in guidelines and education and training; use of DALYs or QALYs to assess patient benefit. Range of dimensions from outputs such as publications, to clinician awareness, to guidelines, etc., to implementation and patient well-being. While it draws on much existing literature, it is seen as an important article in moves towards increased interest in the assessment of research impact, especially through the use of logic models
150Weiss341979MethodologicalN/ASocial scienceWeiss’s taxonomy of research utilisation. Impact is rarely direct (either knowledge-driven or problem-solving). It is usually indirect, through sustained interaction between policy-makers and scientists, and occurs through conceptual changes (e.g. ‘enlightenment’). Research findings may be used instrumentally, and also symbolically to support a political decision, or a study may be commissioned tactically to delay a decisionUtilisation of research findings in policy-makingPlausible explanatory model for successes and failures of efforts to get social science research to influence policy. May be less relevant for other types of research/impact
151Wellcome Trust932014BothUKWellcome Trust: all funding, some health programmes highlighted. The framework has six outcome measures and 12 indicators of success. Range of qualitative and quantitative measures linked to the indicators and collected annually. Wide range of internal and external sources, including end-of-grant forms. The information gathering and report production are led by the evaluation team, though it relies on many sources
Complementing the quantitative and metric-based information contained in volume 1 of the report, volume 2 contains a series of research profiles that describe the story (to date) of a particular outcome or impact. The Wellcome Trust research profiles – taking the form of highlights and histories – are agreed with the researchers involved and validated by senior trust staff
  • The Assessment Framework Report predominantly describes outputs and achievements associated with Trust activities though, where appropriate, inputs are also included where considered a major Indicator of Progress

Another example of a major funder including impact in annual collection of data about the work funded
  • Case studies and stories have gained increasing currency as tools to support impact evaluation, and are a core component of institution submissions within the UK Research Excellence Framework

  • In future years, as the Trust further integrates its online grant progress reporting system throughout its funding activities (through Wellcome Trust e-Val and MInet) it will be easier to provide access to, and updates on grant-associated outputs throughout their lifecycle

Discoveries: applications of research – contributions to the development of enabling technologies, products and devices, uptake of research into policy and practice; engagement; research leaders; research environment; influence
Found: 6% of grants ending 2012/13 reported filing a patent; 17% engaged with commercial collaborators during their research; £218M in venture capital; 40 inventions during 2012/13. A total of 28% of grants that ended in 2012/13 reported engagement with policy-makers and health-care professionals; 14% reported production of software and/or databases
152Westhorp1642014MethodologicalN/AEvaluation and programme planning. Realist evaluation. This approach involves mixed-method case studies aimed at developing and testing programme theories about what works for whom in what circumstances. No empirical examples given. It does not specifically refer to research impact, but impact in general. Reading across to research, the assumption is that different research findings will have different impacts in different contexts because the context–mechanism–outcome link will be different. Strength: clear and authoritative summary of the realist approach and how it might be applied in impact assessment. Weakness: no examples are given, and it is for programme evaluation in general
153White1721996MethodologicalN/ACBPRCBPRLay involvement in research. Involvement may be nominal (to confer legitimacy on a project), instrumental (to improve delivery), representative (to avoid creating dependency) or transformative (to enable people to influence their own destiny)Elegant and simple tool for assessing (semiquantitatively) the level of lay involvement in a study. Weakness: not tested in HSR. The greater the involvement, the greater the potential impact
154Williams et al.782008ApplicationUKNHS HTA TARs: economic evaluations. The whole study focused on the use of economic evaluations in NHS decision-making. The part that is relevant for this review is the specific case study on the use of economic evaluations by NICE technology appraisal committees, because they explicitly drew on one programme of research, i.e. the HTA TARs. In that case study, the economic evaluations were specifically commissioned to be used in the committee’s decision-making. Data collection was informed by grounded theory. Documentary review for the seven identified appraisal topics included; 30 semistructured interviews with members of the NICE appraisal committee; group discussion with the NICE technical support team; observation of meetings related to the selected topics. Triangulation of data. Use of economic evaluations in decision-making by NICE about whether or not to recommend that specific interventions be made available for patients treated by the NHS. Provides an interesting contrast with many other studies in which the evidence is not produced so directly to be used by formal decision-making structures operating as a ‘receptor body’. However, the economic evaluation was just one part of the evidence from the TAR
Found:
  • economic analysis is highly integrated into the decision-making process of NICE’s technology appraisal programme

155Williams et al.922009BothUSANIOSH. Developed the NIOSH logic model: inputs, activities, outputs, transfer, intermediate customers, intermediate outcomes, final customers and end outcomes. Also developed outcome worksheets based on the historical tracing approach (see Project Hindsight407 and TRACES408), which reverse the order:
  • This reversal of order moves beyond the program theory articulated in the logic model and essentially places the burden on research programs to trace backward how specific outcomes were generated from research activities

The report describes the development of tools that research programmes could apply to develop an outcome narrative to demonstrate and communicate impact to the National Academies’ external expert review panels established to meet the requirements of the Government’s Program Assessment Rating Tool
The expert panels receiving the documents would score the impact on a 5–0 scale, and separately score relevance of the research, i.e. did the programme appropriately set priorities among research needs and how engaged is the programme in transfer activities?
Intermediate outcomes include: adoption of new technologies; changes in workplace policies, practices and procedures; changes in the physical environment and organisation of work; and changes in knowledge, attitudes, and behaviour of the final customers (i.e. employees, employers). End outcomes include:
  • reduced work-related hazardous exposures or reductions in occupational injuries, illnesses, and fatalities within a particular disease- or injury-specific area

Thorough account of the development and implementation of tools to prepare data for an outcome narrative for expert panel assessment, which has the strength of involving analysis that moves both forwards from the research and backwards from the outcomes. However, inevitably, this account provides only a partial picture and does not provide a complete example of what was prepared for the panel by any programme, or how the panel scored it. Also, while working backwards was seen as a strength in that it focused attention on a collective body of research rather than individual projects, it is not entirely clear what happens to the data identified when working backwards that come from research programmes other than the one for which the narrative is being prepared. This places the report in the context of increased interest in the USA in evaluating/accounting for the outcomes from publicly funded programmes: the Government Performance and Results Act and the Program Assessment Rating Tool. Quotes from a government document on the Program Assessment Rating Tool:
  • The most significant aspect of program effectiveness is impact – the outcome of the program, which otherwise would not have occurred without the program intervention

Found: the report provides a few examples of impacts found by some NIOSH programmes, but they are illustrations and not full presentation of findings, e.g. the Hearing Loss Prevention programme initially identified 44 outcomes across 10 major research areas
156Wilson et al.4092010ApplicationUKApplied and public health research but from a range of UK programmes. As part of a wider survey, the questions related to impact were informed by questions from RIF. Survey to 485 PIs (232 completed) mainly focusing on dissemination of their research in general, but PIs were asked to select one example of a project they had disseminated and answer more questions, including some on research impact. It compared the responses to the various questions to analyse dissemination factors associated with the reporting of research impact. Impact on policy, health services and citation in clinical guidelines. Comparison of data from separate questions on (a) aspects of dissemination and (b) impacts claimed provides interesting findings, but it is only partially relevant for impact from programmes because the focus was on PIs from a range of programmes and they could choose examples of research on which to concentrate, etc. Those ‘receiving dissemination advice and support, and/or who believe that researchers need to do more than publish academic journal articles were more likely to report policy impacts . . . having access to dissemination support . . . appears to increase the chance that research findings are misreported’. ‘Although only a minority indicated that they routinely recorded formal or informal feedback about the impact of their research, when asked about impact in relation to specific research they had recently completed most respondents were able to provide examples’
Found: 70% provided some details on impact on health policy and practice; 51% said their research had led to discussions or interactions with policy-makers or had been cited in policy documents; 28% had been cited in clinical guidelines; 49% said their research had, or was likely to have, an influence on the acceptability or availability of a health intervention or on the organisation of health services: evidence claimed for this included citation in guidelines, or named specific interventions where acceleration was anticipated. A total of 29 respondents felt their findings had been misrepresented or used in ways they felt were inappropriate; 15 of these referred to the media
157Wooding et al.362014BothAustralia/Canada/UKLeading medical research funders: cardiovascular and stroke. The study, called Project Retrosight, built on the Payback Framework and developed new methods to expand the analysis. Web-based survey, interviews, bibliometric analysis
A total of 29 case studies (12 Canadian, nine UK and eight Australian). Scoring of case studies by experts and research team. Analysis of factors associated with impact by comparing the scores with features of the research processes recorded in the case studies
Assessed factors associated with impact in two groupings of the five payback categories – academic impact and wider impact. The 29 case studies revealed a number of findings: a diverse range of impacts, variations between the impacts of basic biomedical and clinical research, no correlation between knowledge production and wider impacts, engagement with practitioners and patients associated with high academic impact and high wider impacts, etc. Biomedical research = more academic impact, clinical research = more wider impacts (on health policies and health gain). Collaboration = wider impact. Using case studies enabled Project Retrosight to identify paybacks that otherwise would not have been detected. A complex set of steps, methods and innovations was required to understand success and translation. These methods produced a large and robust data set which, when analysed, supported new approaches to understanding the factors associated with high and low payback. However, there were only 29 case studies, so there could have been inconsistencies between projects and countries. Health research funders wishing to achieve wider impacts should not use academic impacts as a proxy. Pathways to impact = funders should consider ways to assist researchers to maximise the potential for research to be translated
158Wooding et al.1042009BothUKARC: wide range. The framework developed and applied in this study was subsequently named the RAND/ARC Impact Scoring System – see Grant et al.,38 although that name was not used in this report. It was developed from the Payback Framework and linked to research pathways developed to show the roles of different types of research within an overall portfolio. A web-based tick-list survey was completed by PIs. It was piloted in 2007 on 136 grants ending in 2002 and 2006 (118 replies: 87% response rate). All data from each grant were presented on one sheet called an ‘impact array’, in which each row of blocks shows all the answers from one grant, and each column represents all the answers to a single question. Each coloured block shows a positive response to the question. The research pathway and impact array data were then combined to explore which types of research gave rise to particular types of impact. Included: future research (e.g. further funding/new collaborations, contribution to careers, new tools); dissemination; health policy (citations on guidelines, contribution to guideline committees and discussion of health policy); training (undergraduates and health professionals); interventions/products (new treatments, public health advice, etc.). The authors claim:
  • Overall the indications are that the instrument is an effective and low burden way of collecting an overview of the impacts arising from a portfolio of research . . . [But] depends on the researchers’ knowledge and honesty, and simplifies quantification of impacts and their attribution

While the presentation of the data in an impact array clearly shows the overall picture, in which some types of impact are much more common than others, it provides a rather complex picture (an illustrative sketch of constructing such an array follows this entry). Quantification was recognised as an issue because, while adopting a tick-list approach provided simplicity, it limited the information obtained, e.g. it could not record whether there was more than one example of an impact from the same grant. Development work on the Research Impact Questionnaire fed into further developments of surveys for routine assessment such as e-Val
Found: > 80% of grants generated new research tools, 50% of which were shared; 2.5% of research led ‘to diagnostics, therapeutics or public health advice that is in or nearing use, and 7.6% has generated intellectual property that has been protected or is in the process of being so’ and six projects (5%) led to policy impact
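A minimal Python sketch of how an impact array of this kind might be assembled from tick-list responses is given below; the grant identifiers and question names are invented placeholders rather than items from the actual RAND/ARC questionnaire.

  # Illustrative sketch only: grant identifiers and question names are placeholders.
  from typing import Dict, List

  # Each grant's tick-list responses: question -> positive response (True) or not (False).
  responses: Dict[str, Dict[str, bool]] = {
      'grant_A': {'further funding': True, 'new research tools': True, 'guideline citation': False},
      'grant_B': {'further funding': False, 'new research tools': True, 'guideline citation': False},
      'grant_C': {'further funding': True, 'new research tools': False, 'guideline citation': True},
  }
  questions: List[str] = ['further funding', 'new research tools', 'guideline citation']

  # Print the array: one row per grant, one column per question, '#' marking a positive
  # response - a plain-text analogue of the coloured blocks described in the entry above.
  for grant, answers in responses.items():
      row = ' '.join('#' if answers.get(q, False) else '.' for q in questions)
      print(f'{grant}  {row}')

  # Column totals show which types of impact are most common across the portfolio.
  for q in questions:
      total = sum(answers.get(q, False) for answers in responses.values())
      print(f'{q}: {total}/{len(responses)} grants')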
159Xia4102011ApplicationUSAAHRQ: The Effective Health Care Programme. None stated. Database analysis: they examined the AHRQ online database and compiled a list of conclusions from completed comparative effectiveness reviews in various therapy areas. They then compared these conclusions with the current access to these therapies in a selection of the largest US plans (by lines covered) using their online formulary databases. Impact of AHRQ findings on formulary decisions of health-care organisations. Very little detail is available from the abstract and the findings are not strong, but the approach is potentially relevant for analysis of the NIHR HTA programme
Found:
  • It appears that these comparative effectiveness reviews by the AHRQ have some indirect impact on formulary access in the US. However, price and contracting, in addition to efficacy and safety are among the key determinants for plans

160Zachariah et al.1412014ApplicationInternationalWorld Health Organization/Special Programme for Research and Training in Tropical Diseases Structured Operational Research and Training Initiative: adopted an existing training initiative. None stated. Retrospective cohort study, by survey, of attendees of eight operational research capacity-building courses: 93 enrolled, 83 completed. A total of 89 papers were published and 88 respondents replied about policy and practice. For each impact claimed from a published paper, a description of the attributed effect was requested. Categorisation was done by two independent researchers. Course completion (i.e. capacity building); attendees’ published papers assessed for impact on policy and practice. High response rate to the survey; a rare example of assessing wider impacts from a capacity-building course, with a range of examples briefly described. However, the effects were self-reported and there may have been responder bias. Possible factors for rapid translation include:
  • participant selection that favours programme staff, studies being focused on healthcare delivery; research questions being of direct programme relevance; early engagement and buy-in of programme managers and/or policy makers; and inclusion of stakeholders as co-authors on publications

Found: 65 (74%) stated an impact on policy and practice, including: 11 where national strategic policy and/or national guidelines were influenced; 12 where national data monitoring was adapted; 13 where routine implementation of a national programme changed; and 14 where routine implementation of a local programme changed
161Zechmeister and Schumacher832012BothAustriaInstitute for Technology Assessment and Ludwig Boltzmann Institute for HTA: HTA. Developed their own methods but were partly informed by the work in Quebec described in our 2007 review,2 i.e. Jacob.43,44 Identified all HTA reports aimed at use before reimbursement decisions were made (category 1) or for disinvestment of oversupplied technologies (category 2). There were 11 full HTA reports and 58 rapid assessments. Descriptive quantitative analysis of administrative data informed by studies such as Jacob (1993, 1997), and 15 interviews with administrators and payers. Impact on ‘reimbursements/investment and disinvestment decisions that may have consequences for volume supplied, expenditure/or resource distribution . . . an impact in terms of rationalisation (reducing oversupply) or re-distribution of resources into evidence-based technologies, and we attempt to calculate this in monetary dimensions’. It uses two main methods and the authors ‘were able to present a rich picture of different perspectives’. The report provides a brief narrative of a range of examples and the calculations of the cost savings. However, while they were able to claim that the majority of reports ‘have been used in decision processes’, the wording about the level of impact is much less clear:
  • they had at least a selective effect on volumes supplied, expenditure and/or resource distribution

It raises a series of important questions about the nature of informing decisions: 67 of the 69 reports ‘were used for’ decisions, and other factors also played a role in most cases. It looked at both investment and disinvestment decisions
Found: five full HTA reports and 56 rapid assessments ‘were used for reimbursement decisions’. Four full HTAs and two rapid assessments ‘used for disinvestment decisions and resulted in reduced volumes and expenditure’. Two full HTAs had no impact. Other factors also played a role, but in only 45% of reports ‘the recommendation and decision were totally consistent’ and 53% partially consistent

AHRQ, Agency for Healthcare Research and Quality; AHSI-RES, Africa Health Systems Initiative Support to African Research Partnerships; AIDS, acquired immunodeficiency syndrome; ALR, Active Living Research; Application, application; ARC, Arthritis Research Campaign; Both, conceptual and application; CFI, Canada Foundation for Innovation; CHBRP, California Health Benefits Review Program; CHP, Centre for Health Policy; CIHR, Canadian Institutes of Health Research; CIROP, Community Impacts of Research Oriented Partnerships; Co-I, coinvestigator; CRG, Cochrane Review Group; CTSA, Clinical and Translational Science Awards; FIC, Fogarty International Center; FP, Framework Programme; GCP, good clinical practice; HEI, higher education institute; HIV, human immunodeficiency virus; HSC, Health and Social Care; HSR, health services research; IAVI, International AIDS Vaccine Initiative; ICT, information and communications technology; KCE, Belgium Health Care Knowledge Centre; LVAD, left ventricular assist device; MD, Doctor of Medicine; Methodological, conceptual/methodological; MPA, Main Panel A; MPhil, Master of Philosophy; N/A, not applicable; NGO, non-governmental organisation; NIH, National Institutes of Health; NSW, New South Wales; PhD, Doctor of Philosophy; PI, principal investigator; PROSPER, PROmoting School–community–university Partnerships to Enhance Resilience; R&D, research and development; RAPID, Research and Policy in Development programme; REPP, Research Embedment and Performance Profile; ROAMEF, Rationale, Objectives, Appraisal, Monitoring, Evaluation and Feedback; ROI, return on investment; RWJF, Robert Wood Johnson Foundation; SDO, Service and Delivery Organisation; SHRF, Saskatchewan Health Research Foundation; TTURC, Trans-disciplinary Tobacco Use Research Centre.

Copyright © Queen’s Printer and Controller of HMSO 2016. This work was produced by Raftery et al. under the terms of a commissioning contract issued by the Secretary of State for Health. This issue may be freely reproduced for the purposes of private research and study and extracts (or indeed, the full report) may be included in professional journals provided that suitable acknowledgement is made and the reproduction is not associated with any form of advertising. Applications for commercial reproduction should be addressed to: NIHR Journals Library, National Institute for Health Research, Evaluation, Trials and Studies Coordinating Centre, Alpha House, University of Southampton Science Park, Southampton SO16 7NS, UK.

Included under terms of UK Non-commercial Government License.
