Included under terms of UK Non-commercial Government License.
NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.
Raftery J, Hanney S, Greenhalgh T, et al. Models and applications for measuring the impact of health research: update of a systematic review for the Health Technology Assessment programme. Southampton (UK): NIHR Journals Library; 2016 Oct. (Health Technology Assessment, No. 20.76.)
TABLE 14
Number | Authors | Year | Type | Location of research assessed (or to be assessed) | Programme/specialty | Concepts and techniques/methods for assessing health research impact | Impact: examined and found | Comments: strengths/weaknesses; factors associated with impact |
---|---|---|---|---|---|---|---|---|
1 | Action Medical Research54 | 2009 | Application | UK | Action Medical Research: training fellowships | Payback Framework. Survey to past fellows (130: 72% completed) | Furthering medical science and developing research capacity; knowledge production; patient and health-care sector benefits; economic benefits; and benefits to the charity | There was a good response rate; however, it relied solely on self-reported data. Example of assessing a capacity-building/training award that moves beyond capacity building to also consider the impact of the research conducted; however, as here, such studies sometimes take a career perspective, making comparisons with projects more difficult |
Found: impact on career development – 78% had some research involvement; 30% had an impact on clinical guidelines; 42% on clinical practice or service improvements; and 46% patient benefit, much from follow-on research | ||||||||
2 | Adam et al.84 | 2012 | Application | Catalonia | Catalan Agency for Health Information, Assessment and Quality: clinical and HSR | ROI payback model from CAHS. Bibliometric analysis; surveys to researchers [99 (71%) response]; interviews – researchers (n = 15), decision-makers (n = 8); in-depth case study of translation pathways. Includes the ROI focus on assessment in relation to targets the programme intended to reach | The article focused on two of the impact categories: advancing knowledge and impact on informed decision-making by policy-makers, managers, health-care professionals, patients, etc. | Achieved aim of filling a gap in the knowledge needs; study informed funding agency’s subsequent actions, but both internal and external validity ‘have room for improvement’. Noted limitations of attribution, time lags and the counterfactual. Factors: the studies ‘provide reasons to advocate for oriented research to fill specific knowledge gaps’. Interactions and participation of health-care and policy decision-makers in the projects were crucial |
Found: 40 out of 70 said decision-making changes had been induced by research results: 29 of those said research had changed clinical practice, and a maximum of 16 said that there had been organisational/management/policy changes | ||||||||
3 | Anderson55 | 2006 | Application | Australia | NSW health survey | Payback Framework adapted. Subanalysis of a survey of users of the health survey; follow-up telephone survey of selected respondents; case studies using supplementary data sources | Categories adapted from Payback Framework. Emphasis on policy impacts | Useful analysis of assessment problems because of the diverse potential users of population surveys. Low response rate and difficult to go beyond policy impact: |
Found: some examples of uses to inform policy. Most of the information could not have come from other sources | ||||||||
4 | Arnold133 | 2012 | Both | EU | EU FPs: including brain research | Developed a list of expected impacts from FP in terms of outputs, outcomes, mid-term impacts and long-term impacts. Attempted a more systematic approach to the role of funding in the social shaping of the research and innovation system. Limited account of methods in this paper that draws on a range of reports | Expected impacts – particular emphasis on knowledge and commercialisation | Fuller account given in a report |
Found: ‘impact mechanisms’ in the brain research FP case study area included: knowledge, agenda-setting, promoting self-organisation of stakeholder communities; co-ordinating or influencing policy; leveraging funding for R&D; mobility and development of human capital; behavioural additionality – learning a ‘new’ innovation model | ||||||||
5 | Aymerich et al.56 | 2012 | Both | Catalonia | Network centre for research in epidemiology and public health | Adaptation of Payback Framework, informed by CAHS and Berra et al.58 Survey of PIs of 217 small-scale projects (173 responded: 80%) ‘on the outcomes and potential impact of each project’; and a survey of external reviewers, with the aim of assessing the performance of each project in terms of how far the objectives were met | Survey to PIs covered three main categories of social impact, each with various dimensions, e.g. knowledge translation (including ‘To carry out clinical trials’ and ‘To the development of synthesis reports of scientific evidence, clinical practice or public health guidelines, or similar products’) | Good response rate; opinions of external reviewers and researchers. Claim to report: |
Found: PIs reported on ‘knowledge translation’. From the 173 projects, there were 59 examples of contributions to synthesis reports or guidelines; many projects were thought to have potential health-care benefits | ||||||||
6 | Banzi et al.4 | 2011 | Methodological | N/A | All health research | Cochrane review of approaches: highlighted five categories of impact from the CAHS/Payback Framework approach | Highlighted the payback categories: knowledge; capacity building; informing policies and product development; health and health sector; economic and social | Independent review confirmed findings from HTA review2 that the Payback Framework is the most frequently used approach. Limitations in previous studies: often based on information from researchers or funders; attribution; counterfactual and rarely a control situation; choosing appropriate time |
7 | Barnhoorn et al.346 | 2013 | Methodological | EU | EU: public health innovation projects | Described the development and operation of a web survey of country informants | General focus on uptake of the innovations but no details given of the specific impacts. No details supplied about results | Illustrates increased interest in assessing impact; encouraged more thinking on the issue |
8 | Battles et al.347 | 2014 | Application | USA | AHRQ: Healthcare-Associated Infections Prevention Programme | No conceptual framework stated and no methods explicitly described as the account appears in a section of the journal headed: Agency Perspective. Presumably an ‘insider account’ | Examined: explored the knowledge produced, potential and actual impacts in terms of improved health care | Being conducted from within the agency provided an understanding of how the programme developed. However, it is not a formal or independent study of impact. As an agency covering both health-care research and quality, AHRQ seemed to be well placed to be involved in implementing/rolling out findings of a successful project |
Found: a range of projects with notable findings and others that were promising. Some considerable impact: | ||||||||
9 | Bennett et al.57 | 2013 | Both | Kenya and Uganda | FIC: research training programmes | Informed by the Payback Framework and contribution mapping, and by approaches to assessing research capacity building. Case studies based on semistructured interviews (53 trainees and others), focus groups (26 participants), structured surveys, document review, e.g. reviewing policy documents for citations to trainees’ publications | Contribution to policy and programme development | Multimethod approach. An important example is that it shows considerable impacts from development training grants. However, some of the lines of attribution are particularly difficult to establish for training grants |
10 | Bernstein et al.348 | 2006 | Methodological | Canada | CIHR | Payback Framework: describes how CIHR adapted the multidimensional categorisation of benefits from the Payback Framework. Describes the sources of information that might be used for each category | Five main categories of Payback Framework with some adaptations in the details: knowledge production; research targeting and capacity building; informing policy; health and health sector benefits; economic benefits | Could be seen as an important step between the Payback Framework and its adoption/adaptation in the ROI report by the CAHS |
11 | Catalan Agency for Health Technology Assessment and Research58 | 2006 | Application | Catalonia | TV3 telethon for biomedical research in Catalonia (different specialty each year) | Payback Framework: adapted and applied somewhat differently from how the original categorisation and model were brought together. Review of documents related to all projects (320) funded from 1993 to 2002. Survey of researchers for all completed projects (164 PIs = 72%). Bibliometric analysis. Desk analysis for comparisons with other programmes | Examined: patents, publications, research training and targeting of future research, application of the results in health care and health gain | Multimethod approach provides an assessment of how the programme contributed significantly to the Catalan health research system. However, difficulties were encountered with collecting the diverse data and most of the data on impacts on health care seemed to be potential impacts. Findings were presented in a way that makes it difficult to compare with some other quantitative assessments of multiproject programmes, but built on in a series of later studies |
Found: considerable impact in terms of primary results (publications), secondary results (PhDs, targeting further research), and potentially on health care | ||||||||
12 | Boaz et al.5 | 2009 | Methodological | N/A | Review covering policy-making in any field but particular focus on strategic levels and on waste, environment and pollution policy | A total of 14 frameworks revealed: many little used; the lower in the list, the more it is used (as at 2007): | Very few frameworks are actually being used. In the international development literature, studies traditionally favour ‘qualitative, participatory evaluations with a focus on learning and service improvement’. Methods such as ethnographic site visits and storytelling are used to capture multiple voices in what are usually presented as ‘positive utilisation narratives’. However, government donors, in particular, increasingly question the veracity of such narratives and favour external evaluations that use quantitative methods and predefined performance indicators. EU research programmes – usually assessed by panel reviews | Extensive systematic search of multiple databases to identify primary papers. Good review of difficulties assigning attribution |
13 | Bodeau-Livinec et al.82 | 2006 | Application | France | French Committee for the Assessment and Dissemination of Technological Innovations: HTA | None stated, but approach to scoring impact appears to follow the earlier studies of the HTA Council (CETS) in Quebec reported in the 2007 review: Jacob.43,44 Semidirective interviews with stakeholders affected by the recommendations (n = 14); and case studies that used surveys in hospitals to examine the impact of the recommendations (n = 13) | Examined: the impact in terms of interest in the recommendations and how they are used in decision-making, etc. | Using two mutually supportive approaches to data collection increases credibility. However, small numbers were interviewed and there was difficulty in establishing attribution. Main factor fostering compliance with recommendations ‘appears to be the existence of a system of regulation’. Reviewed other studies: |
Found: widespread interest, ‘used as decision-making tools by administrative staff and as a negotiating instrument by doctors in their dealings with management . . . ten of thirteen recommendations had an impact on the introduction of technology in health establishments’: seven considerable and three moderate | ||||||||
14 | Brambila et al.137 | 2007 | Both | Guatemala | Population council: programme of operation research projects in reproductive health in Guatemala | Developed an approach that considered process, impact and contextual factors. Drew on literature such as Weiss.34 Key informant interviews; document review; and site visits to health centres and NGOs implementing or testing one or more operations research interventions. Based on the information collected the evaluation team scored 22 projects (out of 44 conducted between 1988–2001). Used a 3-point scale to score each project on 14 process indicators, 11 impact indicators and six context indicators | Aim: | Used the impact literature to inform detailed analysis of a programme. However, no indication given on how the 22 projects were selected for scoring. The several 5-year cycles of funding: |
Found: out of the 22, 13 projects found that the intervention was effective in improving results, three found interventions were not effective; in 14 studies, implementing agency acted on the results; nine interventions scaled up in the same organisation; five were adopted by another organisation in Guatemala; some studies led to policy changes, mainly at the programme level | ||||||||
15 | Brutscher et al.37 | 2009 | Methodological | Canada (review conducted from UK for CAHS report) | Conducted for health research by reviewing approaches in all fields | Reviews eight frameworks for evaluating research. (For a full analysis of how we drew on these see Appendix 5). Four are health specific: Leiden University Medical Centre (in our review see Mostert et al.100); measure of research impact and achievement – Australia, but ex ante; Payback Framework; and congressionally directed medical research programmes. Three are for research in general: VINNOVA (Swedish government agency for innovation – see Eriksen and Hervik94); UK Department for Innovation, Universities and Skills (economic impacts – see Department for Innovation, Universities and Skills106); and EU FP. One is used widely for USA government programmes: programme assessment rating tool (see Williams et al.92) | Analysed: five key elements in the frameworks: evaluation objectives; outcome measures; levels of aggregation; timing and evaluation methods | Identified there were likely to be trade-offs in the choice of key elements of evaluation frameworks |
Found: significant differences between the evaluation frameworks. Only four frameworks go as far as assessing actual impacts | ||||||||
16 | Bumgarner et al.349 | 2006 | Application | International research (USA research organisation and review conducted by team from USA) | Center for Global Development: international development | No framework stated. In-depth interviews (more than 150); worldwide audience survey (more than 1200 respondents); documentary analysis and mapping; and case studies | Does research agenda meet the needs of its policy-making targets? Is the research high quality? Does it influence other researchers? Is the communications strategy achieving its desired impact? Is it building appropriate partnerships? | Comprehensive evaluation of impact of the programme of work of the Centre, independently conducted, and commissioned by a consortium of organisations that fund the centre. Having a large number of interviews addressed the challenge of showing impact on policy. In addition, evidence was cross-checked with publication dates and the perceived relevance of the research was important. However, the wide range of activities and the large potential audience were challenging to address. Much of this international research was not in health. Center for Global Development’s research and advocacy work for policy influence widely seen as ‘timely, empirically or analytically based, and highly effective among its audience’ |
Found: many policy-makers in the audience survey discuss or use Center for Global Development’s policy research and analysis, which is generally well respected (though some complaints). Some limitations on the extent of outreach. Some work had a major impact on policy, including on the Making Markets for Vaccines Initiative350 where its work turned an existing idea into a concrete policy proposal | ||||||||
17 | Bunn59 | 2010 | Both | UK | Various health topics covered by systematic reviews | Payback Framework and RIF (Kuruvilla et al.123). Bibliometrics and documentary review | Knowledge production, research targeting, informing policy development, impact on practice | Limitations discussed; many factors associated with impact identified. PhD study examining impact of the author’s own systematic reviews and links to the author’s wider attempt to highlight the impact of Cochrane reviews351 |
Found: | ||||||||
18 | Bunn and Kendall60 | 2011 | Both | UK | Health visiting | Informed by aspects of Payback Framework and RIF. Mixture of working forwards and backwards. Documentary and literature review (starting with 30 published policy documents and checking for impact from research), citation tracking (Web of Science, Scopus, and Google Scholar) and interviews with researchers (n = 7) about the impact of their own research | Impact on policy | Range of methods. Drew on analysis in Hanney et al.2 to combine mostly working backwards from policy documents in the specific field, but also forwards from individual researchers. However, not focused on a specific programme of research |
Found: | ||||||||
19 | Bunn et al.61 | 2014 | Application | UK | NIHR: Cochrane systematic reviews from the supported CRGs | Applied same framework as developed in Bunn59 that was informed by aspects of Payback Framework and RIF. Questionnaires to review groups (20; 87% responded); documentary review; data verification and analysis; semistructured telephone interviews with users (i.e. eight guideline developers); detailed study of 60 reviews: questionnaire to review lead authors (60; 48% responded) and bibliometric analysis | As above on Bunn et al.:59 knowledge production, research targeting, informing policy development, impact on practice | Strength that overall considerable data collection using a range of methods, with interviews providing explanations for figures showing the impact being made on policies: |
Found: 1502 reviews between 2007–11. Of the 60 reviews, 27 had been cited > 100 times. Identified 40 examples where reviews informed further primary research. Thirteen (22%) of the authors surveyed said their review had influenced primary studies. Overall, there were 722 citations in 248 guidelines or evidence reviews behind them. A total of 481 reviews were in at least one guideline. Eight CRGs and 12 authors gave examples of impact on practice or services, but most did not know | ||||||||
20 | Buykx et al.352 | 2012 | Methodological | Australia | HSR | Developed the health services RIF, synthesising RIF (Kuruvilla et al.123), Lavis et al.,353,354 and the Payback Framework – the version developed by Kalucy et al.65 | Identified main areas of impact to assess: research related, policy, service and societal. Also planned to consider whether the impact originated with the researcher (i.e. producer push) or the end-user (i.e. user pull) | Planned to use to evaluate own research |
21 | Cacari-Stone et al.176 | 2014 | Both | USA | Public (ethnic) research | CBPR policy-making framework. Consists of four linked circles: context (which sets the stage for) the CBPR process (which feeds into) the policy-making process (which influences) outcomes. Six case studies (using multiple methods – individual in-person interviews, focus groups, policy-maker telephone interviews, archival media and document review, and participant observation) | Outcomes include political action, policies (formal/informal), changed policy landscape (e.g. improved distributive justice) and hence health outcomes | Strong, trusting pre-existing community–campus relationships. Tension for change (e.g. widespread local awareness of injustice). Dedicated funding for the CBPR activity. Wide range of research methods used to collect multiple data sources (‘street science’ and academic research) to produce robust and credible account of local inequalities. Effective use of these data in civic engagement activities including media campaign and broad-based public awareness with different sectors of the community. Diverse and effective networking with intersectoral partners including advocacy organisations |
22 | Caddell et al.96 | 2010 | Application | Canada | IWK Health Centre, Halifax, NS, Canada. Research operating grants (small grants): women and children’s health | RIF: adapted. Online questionnaire to 64 principal investigators and co-principal investigators (39 completed surveys: 61%) | Five subsections: impact on research; policy; practice; society; personal: self or career development | Pioneering study of small grants. However, relatively low response rate and sample size, and reliant on a single data collection method – self report from researchers. An association between presenting at conferences and practice impacts. Authors stress the link between research and excellence in health care: |
Found: 16% policy impact: 8% in health centre; 8% beyond; 32% said resulted in a change in clinical practice; 55% informed clinical practice by providing broader clinical understanding and increased awareness; 46% improved quality of care; 87% improved research skills | ||||||||
23 | CAHS7 | 2009 | Methodological | Canada | All health research | Appendix C reviewed a range of frameworks starting with Payback Framework on which the framework in the main report built. Other frameworks reviewed: Walt and Gilson’s analytical model, RIF, research utilisation ladder, Lavis’s decision-making model, the AP Weiss logic model approach, HTA organisation assessment framework, societal impact framework, and the BSC | Appendix E identifies indicators of impact organised according to the five categories of impact from the Payback Framework | The full report is probably the most systematic and important analysis of research impact assessment in the review period. An account of the report’s key recommendations is contained in our comments below on the article by Frank and Nason.115 Appendix D analyses a series of key issues facing research evaluation including: attribution (and the role of logic models and case studies in addressing this); the counterfactual (‘Having a framework that can understand the different external contextual factors that may have been involved in impacts makes understanding the counterfactual easier’); internal and external threats to evaluation validity; time lags to impact |
24 | CIHR355 | 2005 | Methodological | Canada | All CIHR health research | All three approaches to measuring research impact reviewed were found to have intellectual agreement on key issues, although the ways of conceptualising returns differed. It was decided to adapt the five-dimensional categorisation in Buxton and Hanney’s payback model39 for CIHR’s framework. Recommended that a variety of methods be used as appropriate for subject area and stakeholder concerns. These include case studies or narratives and indicators of achievement in specific areas defined by the five impact categories | Adapted the five-dimensional categorisation in Buxton and Hanney’s payback model39 | Following an expert meeting, a draft framework was developed. It was reconciled with CIHR’s Common Performance Measurement and Evaluation Framework to ensure consistency with existing evaluation activities and to build on initiatives under way within the 13 institutes of CIHR |
25 | Carden356 | 2009 | Application | International studies (Canadian report) | International development: wide range (one case study health) | Case studies (n = 23): interviews and documentary analysis | Policy | Full report from International Development Research Centre allows detailed analysis of strengths and weaknesses of case study approach. Seems to be separate from funder(s) of projects |
26 | Cohen et al.52 | 2015 | Both | Australia | NHMRC: intervention studies in various programmes | Adapted categories from the Payback Framework and the CAHS framework, and aspects of case study scoring from UK’s REF. Mixed-method sequential methodology. Chief investigators of 70 eligible intervention studies who completed two surveys and an interview were included in the final sample (n = 50; 71%), on which post-research impact assessments were conducted. Data from the surveys and interviews triangulated with additional documentary analysis to develop comprehensive case studies. Case studies that indicated policy and practice impacts were summarised and the reported impacts were scored by an expert panel using criteria for four impact dimensions: corroboration, attribution, reach and importance. The scoring system separately considered reach and importance/significance ‘so as not to downplay the potential impact of smaller studies, or studies with small target groups’ | Developed a categorisation of impacts and mapped it onto the Payback categories. There were four main categories: scholarly outputs, translational outputs, policy and practice impacts, and long-term outcomes. Each has subcategories, five of which are relevant to this study: within the translational outputs subcategory ‘intervention packaged for implementation’, and within policy and practice: | Case studies on all of the projects that met the inclusion criteria rather than applying any selection criteria. Multiple methods combined to form an innovative approach that included attempting independent verification of claimed impacts and advances in the approach to scoring impact case studies. Addressed issues such as attribution and corroboration by the data collection methods used and the scoring system. However, overall, there were issues with the scoring; the whole process was resource intensive. Had sufficient time been allowed for the impacts to occur? Authors note that: |
Found: 19 (38%) of studies had at least one impact in the categories listed. There were a total of 21 translational impacts and 21 policy and practice impacts. Examples given of each. Scoring resulted in projects being classified into high, medium and low impact score groups. Case studies provided illustrations of high impact | ||||||||
27 | Contandriopoulos et al.163 | 2010 | Methodological | N/A | HSR | Review of knowledge exchange; evidence into policy | How research knowledge is taken up and used in health-care organisations and policy settings – and why this process often fails | Knowledge may be individual or collective. Collective knowledge exchange differs when there is disagreement on values and priorities. If this is the case, scientific criteria, such as ‘strength of evidence’, may be overshadowed by political ones |
28 | Currie et al.357 | 2005 | Both | Canada | Health promotion and community development | Impact model for community–campus partnerships. Methods not clearly stated. Implicitly, the model is used to guide the development of bespoke metrics and indicators that can be used to track the project. A brief example sketched of the work of the Research Alliance for Children with Special Needs. See King et al.188 for a later application | Three domains of impact in community–campus partnerships: | Strength: ‘systems’ model that recognises multidirectional influence, direct and indirect. Impact at multiple levels (individual, organisational, system). Limitation (to some extent acknowledged by researchers): model is outcome focused and does not address ‘structural elements of partnerships and audiences, nor processes that could be utilized to enhance research impacts’. (Hence does not look at partnership dynamics, absorptive capacity, social capital, or engage with literature on CBPR or with the political/emancipatory nature of much CBPR) |
29 | de Goede et al.158 | 2012 | Both | The Netherlands | Epidemiology and public health | Adaptation of Weiss’s model of research utilisation (meaning most policy impact is indirect and non-linear, achieved via interaction and enlightenment). Three case studies: interviews, ethnography and document analysis, with special focus on analysing the interactions in deliberative meetings among policy-makers | Model is more focused on processes than end-impacts: assumption that much of the impact is indirect and diffuse, hence it is the interactions and influences that matter. Structured framework for capturing the complexity of research utilisation, consisting of three phases: | Explore barriers to effective uptake of research: expectation (are policy-makers ‘ready’ for these findings?), transfer (how effectively and appropriately are findings communicated?), acceptance (are findings seen as credible and true?) and interpretation (what value do policy-makers place on them?). A significant barrier to uptake was different framings of problems (epidemiologists focused on ‘healthy behaviours’ whereas policy-makers took a more social framing and considered things like school drop-outs or domestic violence). Some policy-makers found the local health messages so irrelevant to their view of the problem that they did not use them at all. Early involvement of policy-makers in the process of producing local health messages appeared to make it more likely that they would be used |
Found: most impacts were conceptual. Case studies considered the fate of ‘local health messages’ produced by researchers but intended for local policy influence. Policy-makers rarely took a local health message and immediately implemented it in ways that resonated with researchers’ model of the problem | ||||||||
30 | de Goede et al.358 | 2012 | Both | The Netherlands | Epidemiology and public health | Extension of de Goede158 described above. Again adaptation of Weiss’s model of research utilisation. Extended above study by including a quantitative scale for measuring kinds of utilisation. Case studies. This paper reports development and use of a questionnaire to 155 local policy-makers. Sets of questions linked to each of the three ways evidence used (see the next column) | Use of epidemiological evidence instrumentally (in specific and direct ways to solve a particular problem), conceptually (in a more general way to improve understanding of problems) or symbolically | More about how research is used than the impact of a programme of research. The various ways that the research was used were treated as dependent variables, with various independent variables tested for correlation (e.g. whether the policy-maker had previous experience of research or was involved in the research process) |
31 | Deloitte Access Economics25 | 2011 | Both | Australia | NHMRC: subset – cardiovascular disease, cancer, sudden infant death syndrome, asthma and muscular dystrophy | ROI analysis (cost–benefit analysis). Outcomes measured as: net benefit and benefit-to-cost ratio. Collation of funding data and estimation of projected benefits | Gains in well-being measured as reductions in DALYs; gains in averted costs incorporated, as well as productivity and indirect gains. Benefit-to-cost ratio: cardiovascular disease, 6.1; cancer, 2.7; sudden infant death syndrome, 1.2; and muscular dystrophy, 0.7 | Valid attempt to value monetised health benefits and equate with a lagged investment period, also accounting to some extent for problems of attribution. However, weaknesses include the use of projected health gains – ‘unknown unknowns’ and a weak basis for time lag between R&D and health gains. Does not seem to account for delivery costs of new interventions. Some disagreement about robustness of DALYs |
32 | Dembe et al.124 | 2014 | Methodological | USA | All health | The translational research impact scale: informed by a logic model from W.K. Kellogg Foundation, by the RIF (Kuruvilla et al.123) and the Becker Library (Sarli et al.118). Identified 79 possible indicators used in 25 previous articles and reduced them to 72 through consulting a panel of experts, but further work was being undertaken to develop the requisite measurement processes
| Three main domains: research-related impacts; translational impacts (that include ‘improvements that result in better quality of care’); societal impacts | Would be comprehensive; however, feasibility had yet to be reported on, and the number of subdomains and indicators in each domain varies considerably |
33 | Department for International Development8 | 2014 | Methodological | International studies (review conducted in UK) | Review of studies assessing impact of international development research, including health | Devised own theory of change that combines four major pathways by which research has been hypothesised to contribute to development. There are four pathways going from the supply of research and from the demand for research outputs towards poverty reduction and improved quality of life: economic growth, human capital, products and technologies, and evidence-informed policy/practice | Broadly structured review around the pathways developed: economic growth, human capital, pro-poor products and technologies, evidence-informed policy/practice, quantifying economic impact | Reviewed and organised a large number of papers and usefully challenges some assumptions about the benefits from research, but did not include some important papers from the health sector, e.g. Brambila et al.,137 showing how long-term programmes of health research can make an impact |
34 | Donovan128 | 2008 | Methodological | Australia | All research fields | RQF recommended the impact of Australian higher education could be assessed by research groups producing:
| Wider economic, social, environmental and cultural benefits of research | Describes the RQF that became a key element in the development of methods to assess the impact of research in a national scheme across the entire system of higher education. Although it was not eventually introduced in Australia, it was drawn on in recommendations that the REF adopt impact case studies |
35 | Donovan et al.62 | 2014 | Application | Australia (assessment by UK team) | National Breast Cancer Foundation: key programmes | Payback Framework. Documentary analysis, bibliometrics, survey (242 sent – 63% response rate), 16 case studies and cross-case analysis. Applied to a range of funding schemes used by the charity | The five payback categories: knowledge, research, policy and product development, health and economy | Thorough study funded by the charity that announced the findings would be used to inform their research strategy; however, many projects had only recently been completed |
Found: citation rate between 2006 and 2010 was double the world benchmark; 185 higher degrees (121 PhDs); 46% career progression; 66% generated tools for future research use; leveraged an additional AUS$1.4 for each dollar spent; 10% impact on policy – 29% expected to do so; 11% contributed to product development; 14% impact on practice/behaviour – 39% expected. Brief accounts of case studies included some important examples of impacts achieved | ||||||||
36 | Drew et al.90 | 2013 | Both | USA | NIEHS: all programmes of environmental health sciences research | High-impacts tracking system. A framework informed by a stream of work from NIEHS (Engel-Cox et al.,63 Orians et al.17), and also by the Becker Library approach (Sarli et al.118), which was ‘closely aligned with the categories we used in our logic models, and also informed our ontology’. Also informed by the development in the UK of researchfish. High-impacts tracking system:
| Outputs: scientific findings, publications, patents, collaborations, animal models, biomarkers, curricula and guidelines, databases and software, measurement instruments and sensors | This is one part of ambitious plans being developed by NIEHS for their own use. However, the plans are still evolving, and NIEHS recognises the need to develop ways to capture long-term impacts and ‘expert opinion and review of specific program areas’. A growing interest in UK developments in impact assessment is illustrated by the fact that the system was informed by the researchfish approach |
Impacts: improved health/disease reduction, exposure reduction, policies and regulations, community benefit, economic benefit | ||||||||
37 | Druce et al.359 | 2009 | Application | International | IAVI | None stated beyond assessing the extent to which IAVI met its strategic objectives over the period 2003–7. Qualitative and quantitative: documentary review; interviews (100+); and field visits | Assessed the initiative’s accomplishments in the following: R&D; clinical trials; advocacy and communications; policy | Comprehensive evaluation independently commissioned and conducted, using a range of methods. Denied access to individual partnership agreements for reasons of confidentiality. Not specifically an assessment of impacts, but the IAVI objectives included items such as policy. However, IAVI produces policy analysis more than research that is used in policy
Found: added scientific knowledge; built capacity; been a leader in advocacy for HIV vaccines; ‘important value-adding contributions to policy issues’ | ||||||||
38 | Engel-Cox et al.63 | 2008 | Both | USA | NIEHS: intended for all programmes of environmental health sciences research | NIEHS logic framework developed and identified a range of outcomes informed by the Payback Framework and Bozeman’s public value mapping;360 then added a list of metrics for logic model components. The logic model is complex, as in addition to inputs, activities, outputs, and outcomes (short term, intermediate and long term), there are also four pathways: NIEHS and other government pathways; grantee institutions; business and industry; and community. ‘Each pathway illustrates the research process that would be carried out most directly by a given institutional partner that is being evaluated.’ The model also included the knowledge reservoir and contextual factors. No explicit description of methods used to conduct the illustrative case studies, but implied documentary review and ‘expert elicitation’ | From payback and value mapping: translation into policy, guidelines, improved allocation of resources, commercial development; new and improved products and processes; the incidence, magnitude, and duration of social change; HSC welfare gain and national economic benefit from commercial exploitation and a healthy workforce; environmental quality and sustainability. Range of impacts identified in case studies | Important methodological development, illustrated in two case studies rather than a full application. Builds comprehensively on earlier work. Having the various pathways allows a broader perspective to be developed (e.g. by the grantee institution pathway) than that of individual projects. However, challenges include:
|
39 | Ensor et al.361 | 2009 | Application | Nepal | Maternity: skilled attendance | None stated. Key informant interviews to identify the role of specific research in a key policy | Policy: specific area of financing of skilled attendance at birth | Factors for rapid utilisation: the research processes helped ensure wide communication of the findings; political expediency meant that there was a key political champion advocating a strong policy response |
40 | Eriksen and Hervik94 | 2005 | Application | Sweden | VINNOVA: neck injuries research | VINNOVA approach. Mixed approach: economic analysis of benefits for Swedes, the industry and the world compared with the costs. Classic R&D assessment of contribution to the research field: publications, research training, quality of the research, development of networks, etc.: desk analysis followed by review by a panel. Best estimates also made of value of future research | Benefits to society, the companies involved and the research field | Wide-ranging approach including important long-term impacts from long-term funding of a centre. Admitted ‘there is a great deal of uncertainty in these calculations’. Perhaps it is even more challenging than is acknowledged in the report |
Found:
| ||||||||
41 | Evans et al.101 | 2014 | Application | UK | Public involvement in health research | Realist evaluation. Mixed-method case study | Factors and interactions that influence successful involvement of patients and the public in research | Strength – clear and rigorous application of realist method supplemented with other perspectives where appropriate. Various mechanisms – collaborative action, relationship building, engagement, motivation, knowledge exchange and learning – which interact with context to produce different outcomes in different parts of the programme |
42 | Expert Panel for Health Directorate of the European Commission’s Research Innovation Directorate General53 | 2013 | Application | EU | EU FPs 5, 6 and 7: public health projects | Payback Framework. Documentary review: all 70 completed projects; 120 ongoing; key informant interviews with particularly successful and underperforming projects (n = 16). A data extraction form was constructed based on the categories from the Payback Framework, with each of the main categories broken down into a series of specific questions. Distinction made between expected and achieved | Dimensions adapted from Payback Framework: knowledge production; research capacity building; informing health policy (and practice added after pilot); health and health sector benefits; economic and social impact; dissemination | Used documentary review and therefore had data on the whole set of completed projects. However,
|
Found: only six out of the 70 completed projects did not achieve the primary intended output; 56 peer-reviewed publications; 42% took actions to engage or inform policy-makers; four projects led to a change of policy (22% expected to do so); six had an impact on health service delivery; seven had an impact on health practitioners; six had an impact on health; and one had a beneficial impact on a small and medium enterprise | ||||||||
43 | Frank and Nason115 | 2009 | Methodological | Canada | All health | Adapted the Payback Framework to develop the CAHS framework. No application but identified 66 validated indicators. Framework based on a report overseen by an expert international panel and supported by a series of commissioned appendices. The panel recognised that ‘The 66 validated indicators currently listed do not cover the full spectrum of possibilities’ and identified a series of implementation challenges | Adapted the five categories in the dimensional categorisation of Buxton and Hanney’s payback model:
| Based on the work of an international panel of experts informed by a major review and analysis of many aspects of impact assessment. Highlighted a series of challenges facing any assessment of research impacts, including: attribution, the counter-factual and time lags. The CAHS framework has informed a series of studies. Attempting to develop an inclusive set of indicators has generated additional challenges, whereas the Payback Framework put more emphasis on addressing issues such as attribution through use of case studies |
44 | Garfinkel et al.362 | 2006 | Methodological | USA | Perinatal health | Societal outcomes map. ‘Technology road mapping’ can be described as ‘graphical overviews of potential solutions over time to specific concerns’, aimed at clarifying what inputs are needed to produce desired outcomes | Start by cataloguing desired outcomes, then specify inputs, identify potential research and policy paths, and mine the academic literature quantitatively to profile a research domain | Essentially an ‘engineering’ theory. Speculative and deterministic, it seems to be a formalised brainstorming process that generates complicated (but not complex) boxes-and-arrows diagrams. Marginal for our review and does not link to assessment of specific programmes of research
45 | Gibbons et al.179 | 1994 | Methodological | N/A | University–society relationships | Co-production. ‘Mode 2’ knowledge is:
| Reframing of impact in terms of the increasingly complex and diverse infrastructures and relationships that support knowledge production rather than as dissemination, implementation or translation of research ‘findings’ | Pre-dates the current review period, but important for the philosophical taxonomy. Strength: novel and important reconceptualisation. Weakness: few detailed empirical examples hence (when initially published) largely speculative |
46 | Gibson et al.363 | 2014 | Application | USA | Comparative effectiveness research: four technologies | Before/after. Time trend. Each of the Comparative Effectiveness Trials identified a clinical practice guideline citing it and included the publication date of each in the analysis | Practice change | ‘This study demonstrates that evaluating the impact on clinical practice, based on results of published CER trials and CPGs, is complex’ |
Found: no clear pattern of utilisation in the first four quarters after publication. (While this study was not measuring impact by inclusion on a guideline, all four were rapidly cited on one and would have been counted as making an impact in other studies) | ||||||||
47 | Glasgow et al.178 | 2012 | Methodological | N/A | Health research | Implementation science. New model proposed | Alignment of mode 1, research-based evidence, with mode 2, socially engaged implementation process | Strength: attempt to integrate mode 1 and mode 2. Weakness: may have inadvertently compromised core assumptions and principles of some models in producing the hybrid. Speculates that by aligning the ‘internal validity’ of RCTs with the ‘external validity’ of social engagement, research will have greater impact
48 | Godin and Doré364 | 2005 | Methodological | N/A | Science in general, including health | Developed an approach based on a range of dimensions of society, beyond the economic one, on which science has an impact. Challenges identified (see columns 8 and 9) might help inform methods for impact assessment | Identified a very preliminary list of 72 impacts and indicators within 11 dimensions on which impact could be assessed: science, technology, economy, culture, society, policy, organisation, health, environment, symbolic and training | Suggests three challenges must be met before one conducts any measurement of these types of impact:
|
49 | Gold and Taylor138 | 2007 | Application | USA | AHRQ: Integrated Delivery Systems Research Network | Documentary review; descriptive interviews (85); four case studies. Mixed methods | Changes in operations. ‘Of the 50 completed projects studied, 30 had an operational effect or use’ | Partnerships. Success factors: responsiveness of project work to delivery system needs, ongoing funding, development of tools that helped users see their operational relevance
50 | Graham et al.85 | 2012 | Both | Canada | AIHS: tested on several programmes, e.g. the independent investigators programme (all fields) | Developed and tested an AIHS version of the CAHS framework. Archival review of applications, progress reports, etc. For example, from 215 grantees on the independent investigators programme whose awards ended 2004–8 | Started with the five CAHS categories (knowledge, capacity, decision-making, health, social and economic). Added an additional category on organisational performance and additional indicators on items such as innovation | Mainly methodological, describing how the AIHS version of the CAHS framework was developed, with a particular focus on developing data capture approaches for the many indicators identified. The products and tools generated by AIHS through the framework’s implementation included:
|
Found (for independent investigators programme): advancing knowledge, e.g. 3901 publications; building capacity, e.g. CAD$217M leveraged in additional funding; informing decision-making, e.g. guidelines were developed in collaborations with health authorities, industry, government, and non-profit organisations; health care, e.g. 42 improvements to health care were identified through improvements to 30 therapeutics and 12 diagnosis/prognosis techniques; economic and social, e.g. 10 products in development, 5 spin-offs | ||||||||
51 | Grant et al.38 | 2009 | Methodological | UK | Review for HEFCE: all university research | Four methods for evaluating impact of university research. Reviewed against HEFCE criteria. Recommended adoption of case study approach as set out in the RQF from Australia (see Donovan128) | Wide range of impacts | Methodological review funded by HEFCE to inform its plans for assessing university research impact. Recommendation informed the development of the REF – see HEFCE33 |
52 | Grazier et al.365 | 2013 | Methodological | USA | NIH CTSA: clinical research units programme | ROI analysis (cost–benefit analysis). Essentially adopts a traditional cost–benefit approach. Paper details development of a ROI protocol to enable project-based evaluations of CTSA programme awards. Model development as an iterative process involving stakeholders to identify important components – beginning simple and may be limited by difficulties in identifying, measuring and monetising benefits/costs, hence qualitative data can support quantitative | ROI – timing and magnitude of expected gains/timing and magnitude of expected costs. Proposes methods (e.g. survey, scoping, interviews) for identifying availability, accessibility and quality of data and suggests supplementing with qualitative data | Acknowledges that although not all benefits can be quantitatively measured, it is important for wider understanding of impact. Does it add much in comparison to other, more formal/considered approaches? Conjecture: value of return will be a function of a number of characteristics. These include awards through the CTSA and other sources; the institutions at the time of the award, before it and after; the investigator; number of collaborations in the award; length and extent of ‘exposure’ to the clinical research unit of research programmes; all dependent on the scope and boundary discussions with stakeholders and on the synthesised model constructed. Note difficulties in attribution if there are multiple sources of funding, and time lag between investment and health gain
53 | Group of Eight and Australian Technology Network of Universities105 | 2012 | Both | Australia | All research fields | EIA based on the REF. Case studies (n = 162) developed by researchers in institutions to describe the impact achieved from 2007 to mid-2012 by their research conducted from 1992. Case studies then assessed and scored by expert panels containing many people from outside higher education. Panels rated the impacts according to their reach and significance | Explicitly adopted the definition of impact set out in the UK’s 2014 REF assessment exercise. (The EIA trial was designed after the criteria for the REF had been set out, but was conducted before the REF). Focus on measuring the innovation dividend in areas of: defence, economic development, society (including health) and environment | Had the strength of building on the 2014 REF. Reported that the case study methodology ‘to assess research impact is applicable as a way forward to a national assessment of impact’. Weaknesses or problems related to the time taken to put together the case studies, especially if the exercise was scaled up to a national impact assessment exercise, and given the time involved in assessing the case studies ‘more extensive Panel briefings would be essential should this assessment method be adopted at national level’ |
Found: 87% of cases rated as being in the top three categories out of five (plus a not classified category)
| ||||||||
54 | Guinea et al.64 | 2015 | Both | International | Seventh EU FP: international development public health research project | Impact-oriented monitoring informed by Payback Framework. Tested and proposed elements: (1) a project results framework (to be developed by each project co-ordinator during the grant negotiation process to help link objectives with activities, results and impacts, and which can be updated throughout the life of the project); (2) a co-ordinators survey (web based and linked to five payback categories plus dissemination; completed at end of project and after 3 years, and in the middle of projects of ≥ 4 years); (3) an end-user survey (web based, to people identified by the project co-ordinator at end of project); and (4) an assessment tool (online/web to assess individual projects at end of project and 3 years after on the basis of data gathered as above). In developing the tool, also conducted nine case studies | Based on five dimensions of Payback multidimensional categorisation, but with the additional category being ‘Pathway to impact’: advancing knowledge; capacity building and research targeting; informing decision-making, practice and policy; population health and health sector benefits; broad economic and social benefits; dissemination and knowledge transfer | Developed own comprehensive methodology informed by existing frameworks and tested a range of methods, but had a low response rate: 28 out of 116 projects. Large-scale EU project funded in the light of EU Court of Auditors criticism of lack of evaluation of EU FP4, FP5 and part of FP6. Aim:
|
Found: generally findings mentioned only in relation to commenting on methodology | ||||||||
55 | Guthrie et al.9 | 2013 | Methodological | USA (review conducted in UK) | All fields | A total of 14 frameworks, six reviewed in detail (REF,33 Excellence in Research for Australia,366 STAR METRICS®,367 CAHS,7 NIHR Dashboard,368 Productive Interactions81). Criteria analysed included purpose: analysis, accountability, advocacy, allocation | The research evaluation frameworks and tools examined covered evaluation in general, not just impacts. Found considerable variety in role and potential of the different frameworks. Suggested CAHS payback could have advantages | Full report analysing many aspects of research evaluation systems. Conducted for Association of American Medical Colleges. In various places our review draws on aspects of this analysis |
56 | Guthrie et al.27 | 2015 | Both | UK | HTA programme | ROI. Selected 10 key HTA studies, mostly RCTs but a few systematic reviews, and applied desk analysis | Key impact: per patient QALY gains associated with the intervention monetised at a health-care opportunity cost of between £20,000 and £30,000, net of health-care costs. Net benefit calculated for a hypothetical full year of implementation | Has the strength compared with most other ROI studies of having a clear picture of the cost of the research inputs and detailed case study analysis. Weaknesses: small sample size (10/743); does not adequately address attribution problems but assumes HTA studies were responsible for all post-HTA research implementation as they were considered to be ‘definitive’ evidence
Found: only 12% of the potential net benefit would be needed to cover the £367M invested in the HTA programme | ||||||||
57 | Gutman et al.132 | 2009 | Both | USA | RWJF: ALR programme (see also Ottoson et al.369 on the same programme) | The conceptual model used in the ALR programme ‘was used to guide the evaluation’ but aspects needed refinement to give more emphasis to context, attracting additional research funding, and translating research into policy change. Aspects of Weiss’s model used for analysing policy contributions. A retrospective, in-depth, descriptive study utilising multiple methods, both qualitative and quantitative. Qualitative data derived mainly from 88 interviews with key informants: a random sample of grantees; other funding organisations; policy and advocacy organisations; programme leadership and RWJF staff. Quantitative data derived primarily from a web-based survey of grantee investigators (sent to PIs, chief investigators and applicants); of the 74 projects, 68 responses analysed. Analysed the early examples of policy impacts by using Weiss’s framework and found examples of instrumental, political and conceptual use | Building knowledge base, including creation of a new field; building human resources; growing financial resources; contribution to policy debate and change | Comprehensive data collection from diverse sources attempted to assess the impact of the research programme as part of a wider intervention; however, only 16% of competitively awarded grants had been completed prior to the year of the evaluation. The authors commented on the limitations:
|
Found:
| ||||||||
58 | Hage112 | 2009 | Methodological | Canada (part of appendix A of the CAHS report) | Treatment sector of the Canadian health care system | Informed development of CAHS framework | Identifies a series of meso-level metrics of impact that identify, for example, the detailed aspects of impacts on health care that might arise from different phases of research and shows how these might help an impact assessment | Referred to in the main CAHS report which states:
|
59 | Hanney et al.2 | 2007 | Both | UK | NHS: HTA programme | Payback Framework. Multiple methods approach, literature review, funder documents, survey, case studies, interviews (n = 16) (between 1993 and 2003) | The review found that the Payback Framework is the most widely used approach/model. Impact on knowledge generation can be quantified more than that on policy, behaviour or health gain. A higher level of impact on policy than is often assumed. Primary study: used categories from Payback Framework. The HTA programme had considerable impact in terms of publications, dissemination, policy and behaviour. Different parts of the programme had different impacts, as expected: NICE TARs had 96% impact on past policy compared with 60% for primary and secondary HTA research. The mean number of publications per project was 2.93. Case studies showed large diversity in levels and forms of impacts and the way in which they arose. NICE TARs demonstrated the importance of having a customer (receptor) body for having impact on policy | The survey showed that the data on payback can be collected, but more than one-third of the projects did not respond. The review conducted as part of the study identified the Payback Framework as the most appropriate approach to use to assess the impact of the HTA programme. It facilitated capture of key factors in achieving high levels of impact, i.e. the agenda setting to meet the needs of the health-care system; the generally high scientific quality of the research; and the existence of a range of ‘receptor bodies’ to receive and use the findings
60 | Hanney et al.370 | 2006 | Application | UK | One diabetes researcher’s body of research over a certain period | Bibliometrics, surveys to key authors, semistructured interviews with the researcher and experts/users. The bibliometric analysis traced citations through first-, second- and third-generation papers with qualitative analysis of the importance of the work in citing papers | Partly informed by the categories from the Payback Framework, with the articles describing the knowledge produced being particularly important | Qualitative approaches important alongside the bibliometric analysis
Found: various examples of impact, and not all papers thought to have made an impact were highly cited | ||||||||
61 | Hanney et al.51 | 2013 | Both | UK | Asthma UK: all programmes of asthma research | Payback. Survey, documents, case studies, some expanding the approach | Five categories from the Payback Framework: publications, including follow on; targeting further research and capacity building; policy influence and product development; health and health sector benefits; broader economic benefits | Extended Payback Framework to assess impact from long-term professorial chair funding and cofunding with MRC of a research centre. Also, as intended, informed strategy of the medical research charity |
Found: various categories of social impact arose from only a minority of projects (13% on policy, 17% on product development, 6% on health gain) but some important influence on guidelines, potentially major breakthroughs in several asthma therapies, establishment of pioneering collaborative research centre | ||||||||
62 | Hanney et al.46 | 2013 | Methodological | N/A | Involvement in research in any health field/impact on health-care performance | The review identified papers assessing how far there were improvements in health-care performance associated with engagement in research. Hourglass review – focused and wide review. Built a matrix using an iterative approach:
| To map and explore plausible mechanisms through which research engagement might improve health services performance at clinician, team, service or organisational levels. (Improve understanding of the impact of engagement in health research.) Identified two main dimensions to categorise studies that assessed whether research engagement led to improved performance or not:
| The focused review collated more evidence than previously thought, and although it was generally positive it was difficult to interpret. One difficulty of applying the matrix arose because some of the papers in our focused review had features that fitted into more than one category on a certain dimension. Nevertheless, it was:
|
63 | Hansen et al.111 | 2013 | Methodological | The European Observatory on Health Systems and Policies | Public health | Summarises four conceptual frameworks: Payback; RIF; European Commission seventh Framework (see next column); and research utilisation ladder. All essentially variants of case study | European Commission FP7: Description of main dissemination activities and exploitation of results; synergies with science education (involving students or creating science material); engagement with civil society and policy-makers (e.g. NGOs, government, patient groups) and production of outputs which could be used by policy-makers; use of dissemination mechanisms to reach the general public in appropriate languages; use and dissemination (peer-reviewed journal articles, patent applications, intellectual property rights, spin-offs); and the employment consequences of the project | Non-systematic review that usefully summarises some approaches but omits others. Importantly suggests:
|
64 | Hera136 | 2014 | Application | Africa | AHSI-RES | Key element of the design: adoption of an interactive model of knowledge translation. A theory-driven approach was used by constructing post hoc results frameworks for 6 of the 10 research projects according to a generic theory of change model for the programme. Then participatory workshops were held with the research teams to test the frameworks against the reality of implementation. Data gathered by a range of methods including documentary review; interviews at programme level; project-level information – for six projects workshops (see next column), and for the remaining four a total of 12 interviews with 16 members of research teams; participant observation of end-of-programme workshop, at which the team also presented some preliminary findings | Relevance; research capacity building; policy impact; social and gender equity; sustainability | There was a range of methods and it did assess using the logic model of the programme; however, it was completed just before the end of the programme, therefore, only identifying early impact. Abstract:
|
Found: a highly relevant structure that responded to the needs for research capacity building (but mainly only in institutions with little background in the field) and knowledge translation. Policy impact was created during the research process: 7/10 projects reported policy impact already. More progress was made in focusing research on pro-poor issues than on gender | ||||||||
65 | HEFCE33 | 2012 | Methodological | UK | REF: Medicine and Life Sciences | REF impact assessment. The impact template is a description of the environment and activities in a higher education institution oriented to maximising impact; the impact case study is a four-page description of a research project/programme and ensuing impact, with references and corroborating sources | | At the time that this guidance was published, the approach was largely untested, though there had been a ‘dry run’. The development of the approach can be traced from the RQF in Australia (Donovan128), through the HEFCE-commissioned review (Grant et al.38), to these plans
66 | Home371 | 2008 | Application | UK | UK prospective diabetes study: type 2 diabetes mellitus | Drew on aspects of the Payback Framework, which Home had co-authored in Hanney et al.370 Narrative review by an expert diabetes ‘insider’ | Publications, guidelines, education material, changes in monitoring, treatment and health outcomes | The UK Prospective Diabetes Study resembled a programme in that it consisted of a group of clinical trials, epidemiological analyses and health modelling studies. Not a formal impact assessment, but more an insider’s review drawing on the author’s experience as a leading diabetes medical academic and his involvement in a previous research impact assessment in the field
Found: 85 full papers, 78% in leading journals; cited in many guidelines, and not just those relating to diabetes, but traces a complex picture of how citations of papers can be overtaken by citations of reviews that cite the papers: ‘considerable impact on available educational material’. Influenced monitoring and treatment | ||||||||
67 | Jagosh et al.173 | 2012 | Both | N/A | CBPR | CBPR. Systematic review using realist principles | If delivered effectively, CBPR can yield a range of benefits | Strength: rigorous application of realist methodology. Weakness: findings pertain only to CBPR; relatively small sample of high-quality studies. Factors: extent to which research designs are culturally and logistically appropriate; extent of measures to develop capacity and capability in stakeholder groups; how and to what extent conflicts are managed; and extent to which mutual trust builds over time
68 | JISC372 | 2013 | Both | UK | Jisc Business and Community Engagement programme: partnership projects in a range of fields, including health | The central team provided projects with an evaluation framework designed to help them ‘to target and identify the emerging impact of their project’. Various projects drew on existing frameworks and tools, including the Research Contribution Framework373 and stakeholder analysis. This report details the learning from nine tripartite partnership projects set up to develop capacity in universities to embed impact analysis in research, using the expertise of business and community engagement practitioners and information management specialists. Projects were ‘experiential and action learning approaches’ | Progress in the individual projects, including: drawing on existing knowledge and frameworks about impact; developing and testing a model; delivering a system that is theoretically robust, practical to use and meets the needs of stakeholders | Not health specific, but some projects related to health, e.g. the Emphasising Research Impacts project at Newcastle University Faculty of Medical Sciences. Of marginal relevance: not a standard research impact assessment, because the programme consisted of projects developing the capacity to enhance and assess impact. However, this is another dimension of the increasing emphasis on impact assessment
Found: the overall accounts report challenges, but, ‘in spite of these challenges, the projects reported many positive outcomes’ – improved understanding of impact, improved evaluation strategies embedded within research groups, greater awareness of training needs, ‘improved focus on who to engage to maximise the potential of the impact of the research’ | ||||||||
69 | Johnston et al.75 | 2006 | Both | USA | National Institute of Neurological Disorders and Stroke: programme of clinical trials – all pre-2000 Phase III RCTs | ROI analysis. Health economic modelling used to estimate ROI from 28 RCTs | NIH funding of Phase III trials was compared with quasi-societal returns (aggregate treatment costs/savings and health gains measured in QALYs, valued on the basis of US GDP per capita) based on projected usage. Eight RCTs had adequate data to estimate usage/costs/effects. Twenty-eight trials with a total cost of US$335M were included. Six trials (21%) led to improvements in health. Four (14%) resulted in cost savings. There were 470,000 QALYs gained in the 10 years since funding of the trials, at a cost of US$3.6B. The projected net benefit was US$15.2B, a yearly ROI of 46% | Innovative quantitative attempt to value the benefits of a specific programme of research, net of the costs of delivering new interventions. Incorporates standard health economic information. However, limited by incomplete data and reliance on published work for model inputs. Inherent problems with cost–utility analyses and imprecise estimates. Unclear about the quality of data on usage, in particular. Another example of economic value assessment showing large gains, and an important development in the field because of its application at the programme level
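The headline net-benefit arithmetic in this row can be illustrated with a minimal sketch. The per-QALY value of US$40,000 is an assumed round figure near US GDP per capita at the time – it is not taken from the paper, and is chosen here only to make the calculation concrete:

```python
def monetised_net_benefit(qalys, value_per_qaly, delivery_cost):
    """Gross benefit = QALYs gained valued at a GDP-per-capita figure,
    net of the aggregate cost of delivering the new interventions."""
    return qalys * value_per_qaly - delivery_cost

# Headline figures reported for the 28-trial portfolio:
# 470,000 QALYs gained over 10 years; US$3.6B delivery cost.
# value_per_qaly = US$40,000 is a hypothetical assumption (see above).
net = monetised_net_benefit(qalys=470_000,
                            value_per_qaly=40_000,
                            delivery_cost=3.6e9)
print(f"projected net benefit: US${net / 1e9:.1f}B")  # US$15.2B
```

Note that the study's 46% yearly ROI figure comes from the authors' fuller model (including the timing of costs and benefits), not from this simplified calculation.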
70 | Kagan et al.374 | 2009 | Methodological | Global research (analysis conducted in USA) | NIH: global HIV clinical trials research programme | Constructed a new conceptual framework, through a participatory process, for an evaluation system for the National Institute of Allergy and Infectious Diseases HIV/AIDS clinical trials programme | Developed a concept map of success factors. The evaluation framework depicts a broad range of factors that affect the success of the clinical research networks. Average importance ratings (on a 1–5 scale) are given for each idea and for the statements within each cluster: biomedical objectives (highest average rating: 4.11), scientific agenda setting (4.06), collaboration communication and harmonisation (3.81), operations and management (3.80), Division of AIDS policies and procedures (3.96), resource utilisation (4.00), community involvement (4.05) and relevance to participants | Strengths: careful analysis of a pioneering and participatory process to develop a new conceptual framework mapping success factors. Limitations: the results of the development of the conceptual framework are context specific. The ideas generated, organised and prioritised by the stakeholders were specific to the Division of AIDS clinical networks, thus limiting the generalisability of the results
71 | Kalucy et al.65 | 2009 | Application | Australia | Primary health care | Payback. Telephone interviews (n = 13) plus bibliometric analysis of publications and analysis of a range of project documents | Payback categories | Pioneering application of the Payback Framework – tested in the context of plans for the introduction of the RQF in Australia (see Donovan128). Some limitations were noted |
Found: in testing the approach, the Payback Framework was found to be robust and applicable. Advantage over bibliometric analysis – it picked up many more publications, especially in applied research, where most outputs were not indexed on Web of Science. The four case studies consisted of one RCT, one action research study and two case note audits. The ‘logic model’ of the Payback Framework worked better for the RCT than for the action research study, where there was input from ‘users’ at every stage of the research | ||||||||
72 | King et al.375 | 2009 | Methodological | Canada | HSC, community–campus partnerships | CIROP questionnaires. (See King et al.84 for application as part of a wider study) | Four psychometrically independent domains: personal knowledge development; personal research skill development; organisational/group access to and use of information; community and organisational development (for research) | Seems psychometrically ‘internally valid’ (76% of variance was accounted for by the principal component analysis), but seems to measure a very small range of possible impacts, and only measures people’s perceptions of these. The aim was ‘to develop a generic survey measure of the influence of research partnerships on skills, decisions, and community capacity, in the eyes of target audience members’, i.e. it was intended to measure research-oriented outputs, not service-oriented ones. Hence very researcher focused
73 | King et al.188 | 2010 | Application | Canada | Community development for HSC | Impact model for community–campus partnerships: see Currie et al.357 The impact model specifies structure, process and outcome components | The study analysed community partnerships’ structure, process and outcomes. Structure was measured by number and types of partner, local/national orientation, ratio of university to community staff, and grant income. Process was measured by indicators of research utilisation. Outcome was measured using the CIROP scale (see King et al.375), with CIROP items used as dependent variables in a regression analysis. Main findings: mean impact scores indicate that research partnerships have small to moderate impacts on community and organisational development and personal research skill development, but moderate to fairly great impacts on personal knowledge development | Innovative because it applies an ambitious impact model (Currie et al.357) and outcome measure scale (King et al.375), but a convenience sample. There could be questions about how far the instruments captured the key dimensions of the partnerships – very quantitative and tick-list focused. The finding that ‘personal knowledge development’ increased more than other dependent variables may be an artefact of the study design (asking for personal recall of what happened)
74 | Kingwell et al.139 | 2006 | Application | Australia | NHMRC: projects completed in 1992, 1997 and 2003 | No framework explicitly described, but used the new end-of-grant form developed by the NHMRC to capture impacts. Expert panel review of end-of-grant reports from researchers completing in 2003: 139 reports out of 454 expected (29%); retrospective surveys of investigators of earlier projects using a simplified version of the new end-of-grant report as the survey instrument: 1997 – 131/259 in contactable sample (51%); 1992 – response too low to use for full analysis, but examples of impact identified. Separately, a computer-assisted telephone interview survey of recipients of people awards: 596 of 1897 (31%) completed the survey | Knowledge gain; health gain: any research with a self-reported effect on clinical practice or other health service delivery practice or outcomes, changes in public health practice, or changes in health policy. Wealth gain: any research with self-reported commercial activity, including commercial potential and patents. People award recipients: career prospects | Helped identify ways to take impact assessment forward, e.g. on issues of timing, but generally quite low response rates. Also highlighted some projects with clinically relevant outcomes for showcasing to the community
Findings: papers per grant for 1997 ranged from basic research (7.0) to health services (3.0). For 2003, basic (7.5) and health services (4.3). For 1997, 24% of grants were deemed to have affected clinical practice, 14% public health practice and 9% health policy. Commercial potential: 41% of grants were deemed to have such potential. Patents arose from 20%. 89% of people award recipients thought career prospects improved (but there are barriers) | ||||||||
75 | Kogan and Henkel49 | 1983 | Both | UK | Department of Health and Social Security-funded programmes: wide range | Developed a collaborative approach informed by Weiss’s taxonomy of research utilisation, Caplan et al.,376 etc. Case study of the new ‘Rothschild’ approach to organising and funding Department of Health and Social Security research, including ethnography, participant observation and document analysis | Evaluation of the model introduced by Rothschild, in which government departments had a chief scientific officer charged with commissioning research from scientists. Assessment of how far the department could successfully commission research to meet the needs of policy-makers | Rigorous and extensive ethnography of the process of government commissioning of research/policy-making, more so than of the impacts of specific programmes. Now somewhat dated, but key principles are likely still transferable. Science and government are very different cultural ‘worlds’, but also mutually shaping and interdependent. The key to success is sustained interaction over time. Linear models fail to do justice to the sheer complexity of both research and government. The policy timescale fits poorly with the research cycle
76 | Kok and Schuit116 | 2012 | Methodological | N/A | Health research | Contribution Mapping. A three-phase process map that includes actors, activities and alignment efforts during research formulation (vision, aims, set-up), research production and ‘knowledge extension’ (dissemination and utilisation by both linked and unlinked actors). The contribution map is produced in a four-stage process beginning with an in-depth interview with the lead researcher, covering the three phases above and showing the contribution of the research to the realignment of actors and activities | ‘Impact’ is conceptualised as a new alignment of people, ideas, theories and artefacts. Knowledge may be formalised in journal articles but may also be captured more diffusely as ‘the knowledge available to a group of people’ or inscribed in technologies, e.g. magnetic resonance imaging | Elegant and richly theorised framework based on actor–network theory; however, no empirical application is described in this paper (but it contributed to thinking in Bennett et al.57). It challenges the view that impact can be attributed to a single research project |
77 | Kryl et al.377 | 2012 | Both | UK | NICE guidelines: dementia, chronic obstructive pulmonary disease | Analysis of the papers cited in guidelines to check the origin/funder of the research | Identified whether papers had an author from a UK institution. Checked funder acknowledgements and categorised by type/organisation | Argues that it is possible to track the source of funding, but that improved accessibility of data and reporting of funding are needed |
Found: over one-third of papers have at least one UK-based author. In over 40% of cited papers, no funding acknowledgement was found. The MRC, Department of Health/NHS and the Wellcome Trust ‘were overtly linked to only a small proportion of papers cited in the guidelines’ | ||||||||
78 | Kuruvilla et al.123 | 2006 | Methodological | UK | Health research | RIF. The policy impact assessment element was informed by Weiss’s taxonomy. Semistructured interview and document analysis leading to a one-page ‘researcher narrative’, which was sent to the researcher for validation | Four broad areas of impact: research-related, policy, service and societal impacts | Pragmatic, carefully tested and richly informed by an extensive literature review. Designed to help researchers develop their narratives of impact. Inclusive and imaginative, and has strong face validity. RIF was intended to be used by researchers themselves to describe the impact of their work |
79 | Kuruvilla et al.97 | 2007 | Application | UK | Health research | RIF. (See Kuruvilla et al.123) Case studies: 11 projects in total, selected for maximum variety. Semistructured interview and document analysis leading to one-page ‘researcher narrative’ which was sent to the researcher for validation | See companion paper above123 Prior relationships with policy-makers, reputation in field meaning invitation to bid for funding. Research networks, collaborations were key in helping communication Communication with other academics was straightforward but communication with policy-makers was challenging. Media and funders’ websites/reports were important channels. Policy impact occurred through different mechanisms, theorised using Weiss’s 1998 taxonomy.343 Instrumental use (research findings drive policy-making); mobilisation of support (research provides support for policy proposals); conceptual use; redefining/wider influence. The structured impact narratives facilitated analysis across projects | Describes the successful application of above framework. The framework helped develop researcher impact narratives which were mostly found to be objectively verifiable ‘and facilitated comparisons across projects, highlighting issues for research management and assessment,’ but some ‘putative impacts were not as easily verifiable within the scope of this study, for example social capital or economic impact’. It was useful to help researchers ‘think through and describe the impact of their work across a range of instances when they are asked to account for these: in writing grant proposals, in research assessment exercises and in contributing to complex research, policy and service interventions aimed at improving health and promoting societal development’. Despite many strengths, not specifically designed for application to assess impact of programmes of funded research |
80 | Kwan et al.66 | 2007 | Application | Hong Kong | Health and HSR fund | Payback Framework. Adapted Payback survey sent to 205 PIs of completed projects: 178 (87%) responded. Statistical analysis including multivariate analysis | Knowledge production; research targeting and capacity building; informing policy- and decision-making; application of the findings through changed behaviour; health and health services benefit | Rigorous adaptation and application of an existing framework, plus detailed statistical analysis. High response rate and some real examples very briefly described; however, it relied solely on self-reports by PIs. Multivariate analysis found that investigators’ participation in policy committees as a result of the research, and liaison with potential users before and during the research, were significantly associated with health service benefit, policy and decision-making, and change in behaviour
Found: 5.4 publications per project | ||||||||
81 | Latour378 | 2005 | Methodological | N/A | Critical social science | Actor–network theory. Case study | An actor–network consists of both people and technologies (or artefacts); they are inherently unstable. ‘Impact’ is conceptualised as achieving a new alignment of the actor–network through a process called ‘translation’ | Strength: novel, imaginative and potentially useful framework for considering multistakeholder research networks. Weakness: the claim that ‘objects have agency’ is widely contested; the exclusive focus on ‘network effects’ means that human agency is undertheorised
82 | Laws et al.88 | 2013 | Application | Australia | Schools Physical Activity and Nutrition Survey | Banzi’s research impact model. Used the framework proposed by Banzi et al.,4 which has a range of potential areas of impact ‘which largely reflect the range of other commonly used models, for example, the payback framework’. Semistructured interviews with PIs (n = 3) and users (n = 9) of Schools Physical Activity and Nutrition Survey data; bibliometric analysis; verification using documentary evidence. Triangulation of data to produce case studies | Five categories: advancing knowledge; capacity building; policy impacts; practice impacts; broader impacts | Combined several methods and triangulated data to produce a detailed analysis including illustrative quotes, but noted that there may have been some social desirability response bias. Discusses the difficulty of attributing impacts to a single piece of research, particularly the longer-term societal, health and economic impacts. ‘The use of “contribution mapping” as proposed by Kok and colleagues may provide an alternative way forward.’ Factors: perceived credibility of survey findings; active dissemination; contextual factors, including continuity and partnerships between researchers and end-users, mechanisms and structures in place to implement recommendations; good fit with organisational culture
Found: each of the three surveys reported in multiple peer-review articles (32 in total); two PhDs and two post-doc positions; broad agenda-setting for policy and some examples of underpinning new policies; informed programme planning; more difficult to identify broader health, economic or societal impacts | ||||||||
83 | Lewis et al.113 | 2009 | Both | Canada | Manitoba Centre for Health Policy | ROI analysis. Stakeholder interviews; bibliometrics/altmetrics; cost analysis | Policy and culture of decision-making; financial; health status; public confidence; capacity building. The impact of Manitoba Centre for Health Policy demonstrated in ‘numerous ways, including reputation, research revenues and productivity, varying influence on policy and system management, and a major cultural and intellectual influence on the Manitoba environment’. Quantifiable ROI was 200% | Relatively explicit set of criteria/framework from which to evaluate. Problems of attribution. Described as ROI, akin to Payback Framework in many respects |
84 | Liebow et al.91 | 2009 | Application | USA | NIEHS: Extramural Asthma Research Programme | NIEHS logic model (see Engel-Cox et al.63). The logic model was tailored to the inputs, outputs and outcomes of the NIEHS asthma portfolio. Data from existing NIH databases were used and, in some cases, matched with public data, for example from the US Food and Drug Administration website for the references in new drug applications, plus available bibliometric data and structured review of expert opinion stated in legislative hearings | Publications, clinical policy and application of findings, community interventions, environmental policy and practice, health outcomes, and technology developments | Based on key aspects of the framework specifically developed previously for the research funder. The aim of obtaining readily accessible, consistently organised indicator data could not, in general, be realised |
Found: PI publications from 0 to > 100; 2057 publications attributable to 30 years’ funding; PI membership of various advisory panels, etc.; four patents; matching of databases identified NIEHS-funded trials cited in new drug applications, but not able to link trends in environmental impacts or health and social impacts to specific research activity | ||||||||
85 | Lomas154 | 2007 | Methodological | N/A | HSR | Knowledge brokering; linkage and exchange | How the link between researchers and policy-makers works – and why it so often fails to work | Strength: clear and authoritative summary of a component of the literature. Weakness: ‘non-systematic’ review, hence omits other perspectives |
86 | Longmore67 | 2014 | Application | UK | Addenbrooke’s Charitable Trust: fellowship scheme | Payback Framework informed the study. Developed a fellowship review form based on items from sources such as researchfish and the Payback Framework. Used the form to gather data from fellows through face-to-face interviews or via e-mail (14/18 fellows participated) | Career development; publications and dissemination; leveraged funding; capacity building; patient or health-care benefits; intellectual property/research tools/spin-outs | The fellowship scheme was a small-scale (support for 1 year or less) attempt to nurture future clinical academics; the total value of the 18 awards was < £1M. The way the evaluation of such a small-scale scheme was conducted reflects the growing interest in impact assessment
Found: most secured external fellowships leading to PhDs; research projects have ‘potential to inform future research that may ultimately deliver benefits for patients’. Some fellows reported the research improved aspects of how they approached patients | ||||||||
87 | Martin and Tang379 | 2007 | Both | UK | Range of UK publicly funded research, e.g. the social benefit of preventative dental care | Updates the earlier Science Policy Research Unit framework for assessing the channels that might lead to economic and social benefits from public funding of basic research. Channels are: useful knowledge; skilled graduates and researchers; new instruments and methodologies; enhanced problem-solving capacity; new firms; provision of social knowledge. Desk analysis. Examined a series of case studies to identify the elements of the framework that might apply in each one | Key focus is on the channels, but some channels are items sometimes included in lists of impacts | Important analysis, informed by a review of case studies, of the exploitation channels to consider when assessing the benefits from publicly funded research. Shows the linear model has shortcomings, and highlights how some impacts will take a long time to move through the various channels and that incorrect science policy options might be adopted unless the long-term impacts are taken into account |
Found: the social benefit of preventative dental care would be improved oral health care and avoidance of fillings – two exploitation channels were identified for this research: spin-off and scientific instrumentation. The problems facing research impact assessment are acknowledged | ||||||||
88 | Martin174 | 2010 | Methodological | UK | Social research | Co-production (focus on practitioners) | Practitioner involvement in research. Involvement may be nominal (to confer legitimacy on a project), instrumental (to improve delivery), representative (to avoid creating dependency) or transformative (to enable people to influence their own destiny). The greater the involvement, the greater the potential impact in HSR | An elegant and simple tool for assessing (semiquantitatively) the level of lay involvement in a study. Weakness: not tested |
89 | McCarthy380 | 2012 | Application | EU | EU: Public Health Innovation and Research in Europe | None stated. Through the European Public Health Association, experts assessed the uptake of the eight public health collaborative projects (within areas of health promotion, health threats and health services), for 30 European countries. National public health associations reviewed the reports. Methods varied between countries. Following stakeholder workshops, or internal and external consultations, 11 national reports discussed impacts of the public health innovations | Focus on uptake of the innovations: impact on policy and practice | Quite wide-ranging input into the study: in total, 111 stakeholders were involved in workshops. Reports only produced for 11 countries. Methods varied: in Ireland, one person provided information. Strategies noted most often to spread the results were: reports; websites and national conferences; and seminars and lectures. Background: European Court of Auditors critical of the first EU Public Health Programme – project co-ordinators could not demonstrate ‘take up’ by target groups |
Found: in 11 countries, there were reports on the eight innovations for 45 (51%) of the possible public health markets. The innovations contributed positively to policy, practice and research, across different levels and in different ways, in 35 (39%) markets, while competing innovation activities were recorded in 10 (11%) markets | ||||||||
90 | McLean et al.381 | 2012 | Methodological | Canada | CIHR: knowledge translation funding programmes | A framework inspired by the principles of integrated knowledge translation. Develops a logic model for evaluating the knowledge translation funding programme. The paper is a protocol that sets out the planned methods | Immediate outcomes, as specified in the logic model | The logic model used includes a focus on the processes as intermediate outcomes
91 | Meagher et al.95 | 2008 | Both | UK | ESRC: all responsive mode funded projects in psychology | | Collected data on six domains | Thorough methods, but the exclusive focus on responsive mode projects was noted as a limitation |
Found: conceptual (indirect, enlightenment-based) impacts were more common than instrumental (direct, knowledge-driven) ones | ||||||||
92 | MRC76 | 2013 | Both | UK | MRC: all programmes | The MRC economic impact report ‘is part of the research council’s performance management framework implemented by the Department for Business, Innovation and Skills (BIS)’. Combination of information from the MRC’s own databases and data gathered from PIs through researchfish | The BIS metrics framework 2011/12 includes a range of items: budget allocation and funding leveraged from other sources; expenditure on different types of research; number of researchers funded in various categories; publications; research training; knowledge exchange activities; commercialisation: patents, spin-offs, IP income; policy influence | Broad picture of activity and some key impacts, drawing on extensive data gathering through researchfish, etc. Questions remain about some of the data categories, e.g. classifying systematic reviews as a policy document, and about the definition of economic impact. The MRC Economic Impact Report has been published each year since 2005 and illustrates several themes |
Found: data were collected for all items | ||||||||
93 | MRC103 | 2013 | Application | UK | MRC: all programmes | Researchfish | | Can be implemented regularly and collects data from a wide range of researchers, but possibly neglects some areas of impact. There are also questions about how fully it is completed
94 | Meijer80 | 2012 | Methodological | The Netherlands | Dutch research but EU perspective: all fields | Scientific and societal quality of research. Combines three approaches | Creating societal relevance is conceptualised as a four-step process | Methodologically innovative, but not published in a peer-reviewed journal and speculative. Seems tied in with a wider Dutch-led effort to measure the effect of research funding in the context of the competitiveness of EU countries. See also Mostert et al.100
95 | Meyer152 | 2009 | Methodological | Canada: (part of appendix A of CAHS report) | Clinical research | Starting point is the Payback Framework, as amended in CIHR versions | Argues that health and economic gains are the most important of the Payback Framework categories for assessing the impact of clinical research, but that demonstrating them is more difficult | Part of the evidence base that encouraged the CAHS panel to adopt an adapted version of the Payback Framework. It contributes to the discussion about assessing negative impacts
96 | Milat et al.89 | 2013 | Application | Australia | New South Wales Health Promotion Demonstration Research Grants Scheme | Banzi’s research impact model. This draws on the range of five potential areas of impact set out in the Payback Framework (see also Laws et al.88 above for a parallel study). Semistructured interviews with CIs (n = 17) and end-users (n = 29) of the 15 projects. Thematic coding of interview data and triangulation with other data sources to produce case studies for each project. Case studies were individually assessed against four impact criteria and discussed to reach group consensus at a verification panel meeting, where key influences on research impact were also identified | Advancing knowledge and better targeting of future research; capacity building; informing policies and product development; health, societal and economic impacts | Detailed multimethod case study analysis of all (n = 15) projects in the programme, including a range of elements in the various payback categories. An independent panel conducted the scoring. Illustrative quotes were supplied. Some potential for social response bias, as some end-users may have been inclined to over-inflate positive impacts. The team identified a range of factors linked to high-impact projects |
Found: both CIs and end-users indicated capacity building occurred through staff development, partnership building and follow-on research funding; 13/15 projects scored above the minimum for impact on policy and practice combined, and 10/15 were in the moderate or high categories; no project was independently assessed as high impact in the health, societal and economic impacts category, but 13/15 were above the minimum | ||||||||
97 | Milat et al.129 | 2015 | Methodological | N/A | All health, but started with focus on public health and health promotion | Review:
| The review notes the growth of hybrids of previous conceptual frameworks that categorise impacts and benefits in many dimensions and try to integrate them. Of the main frameworks analysed, ‘all attempted to quantify a mix of more proximal research and policy and practice impacts, as well as more distal societal and economic benefits of research’ | While the review identified just 31 primary studies and one systematic review that met their review criteria, ‘88% of studies that met the review criteria were published since 2006’ The attempts to broaden evaluation of research ‘raise an important question of how to construct an impact assessment process that can assess multidimensional impacts while being feasible to implement on a system level’. The potential for bias because of the involvement of PIs in impact assessments means end-users should routinely be interviewed in impact assessments and claims should be verified by documentary evidence |
98 | Moreira382 | 2013 | Application | N/A | Health care | Co-production. Case study with extensive ethnography of health service researchers and systematic reviewers | Intersectoral interaction between university, society and market | Strength: rigorous application of the ‘mode 1 vs. mode 2’ taxonomy to a detailed case study. ‘Impact’ occurs through the coevolution of three activities: market-driven reforms oriented to efficiency (‘market’), epidemiologically driven research oriented to clinical effectiveness (‘laboratory’) and patient and public involvement (‘forum’). This process is messy, organic, largely unpredictable and contested |
99 | Morlacchi and Nelson383 | 2011 | Both | USA-focused international study | National Heart Lung and Blood Institute stream of research on the LVAD | Propose medical practice evolves as a result of progress along three pathways: ‘improvements in the ability to develop effective medical technologies, learning in medical practice, and advances in biomedical scientific understanding of disease’. Longitudinal and contextual case study of the development of the LVAD using interviews with key actors, direct observation and documentary analysis to produce an historical analysis | Study analyses sources of advances in medical practice, and challenges the idea that scientific understanding of disease is the single source | Study has a different focus than most others: it attempts to show the impact on advances in medical practice made by three pathways, of which scientific understanding is only one |
Found: case study of the emergence of the LVAD therapy showed the importance of progress along all three pathways, though an essential aspect was the collective and cumulative learning that requires experience that can only be gained through the use of the LVAD | ||||||||
100 | Mostert et al.100 | 2010 | Both | The Netherlands | Leiden University Medical Centre: various departments/groups including public health | Societal quality score. Van Ark and Klasen’s125 theory of communication in which audiences are segmented into different target groups needing different approaches. Scientific quality depends on communication with the academic sector; societal quality depends on communication with groups in society – specifically, three groups: lay public, health-care professionals and private sector
| Three types of communication: knowledge production, e.g. papers, briefings, radio/television, services, products; knowledge exchange, e.g. running courses, giving lectures, participating in guideline development, responding to invitations to advise or give invited lectures (these can be divided into ‘sender to receiver’, ‘mutual exchange’ and ‘receiver to sender’); and knowledge use, e.g. citation of papers, purchase of products; earning capacity, i.e. the ability of the research group to attract external funding | Careful development of a new approach to assessing research impact appropriate for the specific circumstances of the medical faculties being integrated with their academic hospitals. Heavily quantitative – basically a counting exercise: ‘how many of x have you done?’ Only looks at process because, as the authors say, ultimate societal quality takes a long time to emerge and is hard to attribute to a single research group. Did not control for the size of the group. Only a weak correlation was found between societal and scientific quality
101 | Muir et al.107 | 2005 | Methodological | Australia | All public research | None explicitly stated: focus on ‘measurements/indicators to monitor economic benefit flowing from commercialisation of research funded by the public sector’ | Metrics for commercialisation of public research need to be broadened to match understanding that commercialisation contributes ‘to Australia’s economic, social and environmental well-being. This is achieved through developing intellectual property, ideas, know-how and research-based skills resulting in new and improved products, services and business processes transferable to the private sector.’ Fourteen metrics covering: IP, consultancies and contracts, skills development and transfer. They form the basis of future data collection | Commissioned by the Australian government to inform policy on impact assessment in an area that has been important in Australia
102 | Murphy77 | 2012 | Both | New Zealand | New Zealand publicly sponsored clinical trials | Multistrand mixed methods. Survey; cost–benefit analysis | Health outcomes; stakeholder perceptions (perceived value); economic outcomes (CBA) with QALYs as measurement of health benefit. QALYs valued using societal valuation of statistical life and health-care system opportunity cost (based on average Pharmaceutical Management Agency positive recommendation). Suggests that benefits outweigh costs for all stakeholders | Triangulation methods. Use of well-validated QALYs in economic modelling. But only applied to two trials as a PhD study, and limited to assessing the benefits that accrued for patients in the trial
103 | Nason et al.47 | 2011 | Application | Ireland | Health Research Board: all health fields | Payback Framework
|
| Several of the case studies were expanded to consider whole streams of research to more comprehensively identify wider benefits. However, difficult to compare across case studies, because of the limited number owing to the resource-intensive nature of exercise. Article expands concept of economic impact to consider: major growth in funding and focus on ‘knowledge economy’ strategy, enabling international sources of funding to be attracted |
Found: a range of impacts in all categories including: world-class articles; new clinical assays; improved recovery time; development of new drug company | ||||||||
104 | NHS SDO68 | 2006 | Application | UK | NIHR: SDO programme | Informed by the Payback Framework. Purposive selection of 23 projects to reflect the range of SDO research and where some evidence of impact was known to be available. Data collected in two stages, starting with primary outputs (publications); snowballing to capture data on secondary outputs (policy and practice impact). Internal sources used first: annual reports, projects database, programme managers and leads. External databases checked and all non-case study PIs e-mailed to provide data. Secondary outputs (policy documents, etc.) identified using web searches and sent to PI for verification. Eleven of the 23 projects purposively selected as case studies, including semistructured interviews with researchers and users | Two main categories taken from the Payback Framework: primary outputs (publications: SDO reports, academic papers, policy); and secondary outputs (citation in policy documents, practice guidance, newsletters, website and the media). In the 11 case studies, the impact and utilisation of the research is presented in five domains: services, policy, practice, research and capacity building | The range of methods used did identify a range of impacts, and the accounts of the case studies provide good examples. However, 2006 was quite early to conduct an impact assessment of the first 5 years of the programme, which ran from 2001 to 2006
Found: publications – all projects had at least one publication by SDO and an average of 1.7 articles; policy and practice guidelines – 12 projects were cited in a total of 24 documents, e.g. five citations in NICE guidance
105 | Niederkrotenthaler et al.384 | 2011 | Methodological | Austria | All fields | Societal impact factor tool developed to consider the effect of a publication on a wide set of non-scientific areas, and also the motivation behind the publication, and efforts by the authors to translate their findings. Self-evaluation of papers by authors: in three main categories they score their paper, and provide textual justification/evidence for the score (see dimensions assessed). The self-evaluation sheet would then be sent to a reviewer for independent checking. Authors would be invited to submit their publications for reassessment if there were any developments | It was intended that the tool would be refined, but the version tested had three main elements: the aim of the publication (1 point if aim was gain of knowledge, application of knowledge or increase in awareness); the authors’ efforts to translate their research into social action (1 point); size of translation (geographical area: 1, 2 or 3 points; status: 1 or 2 points; target group: 1 point for individuals, 2 points for subgroups or 3 points for whole population) | Niederkrotenthaler et al.384 claim that an advantage of their tool over others, such as that developed by Sarli et al.,118 is that the tool does not specify the precise nature of any kind of translation (e.g. devices, guidelines) but leaves that to the author applying for a score to describe. But, as they admit, the tool ‘cannot be considered ready for routine implementation’. Because it aims to develop the equivalent to the scientific impact factor, the focus is at the publication level, which might seem even more limiting than a focus on projects
106 | Oliver and Singer385 | 2006 | Application | USA | CHBRP: HSR | Informed by framework for analysis of policy design and political feasibility based on the typologies of Wilson386,387 and Arnold.388 Sixteen interviews with 20 key informants and documentary analysis | Impact of HSR on legislative debates and decisions | Theoretically informed analysis of the role of research identified various examples of use in Californian health policy-making; however, few details were given on the precise methods used. It uses a range of models from political science to help analyse the theoretical and actual role for a university-based body specifically designed to feed research and analysis into the legislative process around health insurance mandates |
Found:
| ||||||||
107 | Oortwijn69 | 2008 | Application | The Netherlands | ZonMw Health Care Efficiency Research programme: HTA | Payback Framework. Logic model; survey data collected from PIs of 43 studies conducted using health-care efficiency research funds (response rate 79%); case study analysis (including 14 interviews) of five HTA projects. Developed and applied a two-round scoring system | Knowledge production; research benefits; informing policy; changing health practice; broader impact on health. There was a total of 101 papers, 25 PhD theses, citation in guidelines in six projects and implementation of new treatment strategies in 11 projects | Use of triangulation methods and presentation of scores that account for wider range of impacts. Potentially not long enough to witness benefits for many of the projects. The programme was mainly conducted in academic hospitals, with a large responsive mode element and most studies were prospective clinical trials
108 | Orians et al.17 | 2009 | Application | USA | NIEHS: Extramural Asthma Research Programme | NIEHS logic model (see Engel-Cox et al.63). Web-based survey of 1151 asthma researchers who received funding from the NIEHS or comparison federal agencies between 1975 and 2005. A total of 725 responded (63%). While the researchers all received federal funds, most of the questions covered respondents’ body of research. Key informant interviews with end-users (n = 16). Analysis of the NIEHS model in the light of the findings. Companion article to Liebow et al.91 that described the attempt to apply the NIEHS framework using databases | Wide range of impacts considered as set out in Engel-Cox et al.63 Examples set out below from survey findings of asthma researchers | Large numbers surveyed and the focus on their role as researchers rather than on specific projects allowed nuanced assessment of dissemination and product development. However, the contribution to understanding outcomes is more limited:
|
Found: papers – 96%; research tool and methods – 29%; improved environmental measurement techniques – 20%; spin-off companies – 4%; licensing a patent – 38% of patent holders; changes in guidelines – 19%; changes in environmental standards/regulations indoor air – 8%; changes in business practices regarding air – 8%; changes in public knowledge – 33%; changes in clinical practice – 27%. End-users saw research use being in various categories: professional development; intervention/regulation, e.g. reducing environmental tobacco smoke and exposures to lead, etc.; new drug development and regulation; clinical practice
109 | Ottoson et al.369 | 2009 | Application | USA | RWJF: ALR programme | Utilisation-focused evaluation. Telephone interviews with 136 key informants (first-line consumers and implementers, policy-shapers) representative of four out of five levels in logic model (professional community, policy, scientific community, funders); bibliometric analysis | Is a field emerging? (If so, what is its name?) Is there an awareness of ALR within the field? Has ALR contributed to policy discussions? | Neat logic model. Limitations of snowball sampling in being representative of opinions. It makes five recommendations, which imply association with impact/greater utilisation, e.g. bridging research and policy (‘substantial and coordinated investment’); boosting visibility and relevance of policy (engage end-users and intermediaries early in research process); emphasising collaboration and co-ordination. One of a pair of evaluations of the same initiative – see also Gutman et al.132
Found: it had contributed to development of transdisciplinary field; ‘ALR’s contributions to policy discussions were found across a spectrum of policy-development phases’ | ||||||||
110 | Ovseiko et al.389 | 2012 | Both | UK | Oxford University Clinical Medicine | REF pilot impact indicators. Describes survey (48% response) and other approaches used for data collection for this piloting of the REF | Delivering highly skilled people; creating new businesses, improving the performance of existing businesses, or commercialising new products or processes; attracting R&D investment from global business; better-informed public policy-making or improved public services; improved patient care or health outcomes; cultural enrichment, including improved public engagement with science and research; improved social welfare, social cohesion or national security | Important contribution to the analysis of the development of the REF. Problems with retrospective collection of data. Existence of self-selection bias? All known REF gaming issues. It concluded that:
|
Found:
| ||||||||
111 | Penfield et al.14 | 2014 | Methodological | UK in particular | All fields | Frameworks examined: Payback; SIAMPI; Australian RQF; RAND report that led to REF impact case study approach (Grant et al.38). Plus overview of specific indicators. Metrics, e.g. social ROI; narrative (case study) surveys and testimonies; citations outside academia/documentation. Also describes history of the impact component of the REF | Cites the REF definition of impact:
| Rounded analysis led to the conclusion:
|
112 | Percy-Smith et al.390 | 2006 | Application | Scotland | Health Scotland: five evidence-informed policy and practice initiatives | Not restricted to a single framework but drew heavily on Social Care Institute for Excellence taxonomy (see Walter et al.391). Also drew on and amended Health Development Agency framework in the light of findings to produce a new, more complex model (see next column). ‘Mixed method’: literature review plus document analysis plus stakeholder seminar (n = 1) plus stakeholder interviews (n = 22) plus survey of participants in learning networks (631 completed forms received). Evaluated at three levels: the programme as a whole, the five separate initiatives and the individual practitioners | Impact was assessed in terms of:
| Successful programmes were characterised by a high degree of opportunism (i.e. use of policy windows). Extensive ‘reach’ into professional and other stakeholder groups. Clarity and accessibility of presentation of key messages
Some case studies produced to showcase good practice may have been questionable in some aspects of quality: hence, there is a quality control issue in relation to ‘practice based evidence’ |
113 | Pittman and Almeida344 | 2006 | Both | International | International Development Research Centre and Pan American Health Organization: programme on social protection in health in Latin America and the Caribbean | Research programme informed by concept of linkage between researchers and users (Lomas342) and therefore was structured with the intention of achieving impact. Not described in detail but appears to be mostly an insider account while the initiative was still under way and drawing also on early papers and discussions with researchers | The linkage model aims to ensure opportunities for users to be involved throughout a project, and that as a consequence impacts will arise | The detailed knowledge of the programme enabled an informed analysis; however, there were timing problems. The programme itself ran into difficulties of providing incentives for decision-makers:
|
Found: a negotiated research question that influenced ‘not only the project design, but the decision-makers’ ways of thinking about the problem as well’. In four out of the five cases, turnover among government officials impaired the process, but in the fifth team:
| ||||||||
114 | Poortvliet et al.140 | 2010 | Both | Belgium | KCE: HTA, HSR and good clinical practice | Developed own framework. Documentary review; two group discussions, one with 11 KCE experts and one with two KCE managers; interviews with stakeholders (n = 20); web-based survey to project managers: 66 external (28% responded) and 101 KCE (72% responded) – a total of 88 managers reported on 126 projects; nine detailed case studies selected by stratified random approach; and international comparisons with three agencies using documentary/literature review and interviews (n = 3) | Dissemination (outputs, dissemination activities, stakeholders addressed, engagement of stakeholders in outputs, actual take-up of outputs); contribution to the decision-making process (familiarity with research projects, research utilisation, relevance of projects for policy-making/practice, reactions on finished projects from stakeholders); external image of KCE | Comprehensive range of methods and analysis that focused on the processes as well as impacts. However, in addition to some conflicts of interest, the data collection phases were simultaneous, thus reducing scope for triangulation through a sequence of data collection activities. While some important conditions for achieving impact were not realised, the report concludes that KCE had established some:
|
Found: 16 stakeholders said findings influenced decision-making, four said not in their organisation; 58% of project co-ordinators thought projects contributed to policy development: more for HTA than Good Clinical Practice or HSR | ||||||||
115 | Reed et al.70 | 2011 | Application | Australia | Primary care research | Payback Framework. Online survey to 41 contactable CIs (out of 59 projects). Asked what impacts were expected and how many were achieved. Some projects excluded as still under way, others refused. Out of 23 completed, 17 were relevant | Five domains: research transfer (including knowledge production); research targeting, capacity building and absorption; informing policy and product development; health and health sector benefits; and broader economic benefits | Inclusion of questions about what expectations CIs had had about impact: allowed an interesting comparison with what was achieved. However, quite a large number of CIs could not be located, or refused to participate. For those who did, there may have been ‘risks of a bias towards positive benefits’, and the study relied solely on the survey. Interesting comparison with Kalucy et al.65 from the same team:
|
Found: 13 CIs (76%) considered they had achieved at least half of the impacts expected; 11 PhDs came from 10 projects; further research, 65% achieved; 13 projects (76%) expected to influence national/state policy-making, four (24%) did so, but eight (47%) influenced decision-making at organisational, local or regional level [combined, nine separate projects (53%) had a policy/decision impact]; 10 (59%) expected to lead to improved health outcomes, five (29%) did so. Few broader economic impacts achieved. Three of the examples of impact overall were unexpected
116 | Higher Education Funding Council106 | 2015 | Both | UK | All medical, health, biological, agricultural, veterinary and food sciences | REF 2014. Panel reflections on the methods and results of the REF 2014 exercise. However, did not have access to the findings of the analysis funded by HEFCE of the impact case studies that was still ongoing at the time of publication of the report | All aspects of impacts assessed in the REF | Given the scale and apparent success of the REF, the comments supporting the case study approach are highly important. The main panel recognised the difficulties in demonstrating the link between research and impact, which may be non-linear, but thought that the narrative case study largely succeeded in capturing the complex links between research and impact. Submissions could be strengthened in future
|
Found: MPA
| ||||||||
117 | Rispel and Doherty135 | 2011 | Application | South Africa | CHP | None explicitly stated. Interviews with 25 purposively selected key informants (12 CHP alumni, seven current members of staff, six external stakeholders); documentary review; aspects of ‘insider account’ as both researchers had spent some time at CHP – but also time working elsewhere | Contribution to health policy development and implementation. Found: CHP ‘has contributed directly to health policy development and implementation while also changing the way government understood or approached policy issues’. ‘All key informants acknowledged that CHP had a significant impact on national health policy at one point or another’ | Authors claim that:
|
118 | Ritter and Lancaster392 | 2013 | Both | Australia | National Drug and Alcohol Research Centre: illicit drug epidemiological monitoring systems | Informed by previous frameworks, developed a three-component approach to assessing research influence:
| Use of research in policy documents, policy processes and media | Developed an original framework and successfully used it to structure and implement their study. Then also analysed the findings in relation to a series of key theoretical perspectives on the link between policy and research, e.g. epistemic communities393 and linkage and exchange.394 Had the strength of being entirely independent of the researchers for data collection. However, it recognised that the approach did not extend to considering how far the research findings have changed drug policies in Australia
Found: the majority of major drug strategy documents do not reference research, but the monitoring systems were referenced in more detailed policy documents. In terms of policy processes, it found that 18 parliamentary committees and inquiries contained a total of 87 mentions of one of the monitoring systems, and often these were in submissions from other bodies. Sixty-eight mentions in the media: 0.2% of total drug mentions, but only 7.4% of drug-related media stories refer to any research
119 | Rosas et al.395 | 2013 | Both | USA | NIH: HIV/AIDS Clinical Trials Network | Process marker approach. Out of 419 publications in 2006–8, selected 22 from the network’s flagship studies in terms of scientific priority as primary interventional clinical trials. Obtained data about protocol dates from the network database. Identified publication date and citations on Web of Science. Used PubMed database to identify which citations were meta-analyses, reviews, and guidelines. Operationalises ‘date markers’ in citation data to track dissemination progress by selecting key end points | Citation of research in reviews, meta-analyses and guidelines | Does not need any direct input from the researchers of the projects assessed. However, used a very small sample and the simple citation metrics used have limitations. The time from study approval to citation in guidelines is shorter than identified in some other papers, but the HIV/AIDS field ‘is quite specialized’ |
Found: 11 of the 22 publications were cited in guidelines within 2 years of publication, mean time from study approval to first guideline citation: 74.1 months | ||||||||
120 | Royal Netherlands Academy of Arts and Sciences79 | 2010 | Application | The Netherlands | All fields | Update of evaluation framework previously used by the organisations to assess research, not only its impact, at the level of research organisations and groups or programmes. Self-evaluation and external review, including site visit every 6 years | Lists a range of specific measures, some indicators and some more qualitative, that might be used in the self-evaluation. Also for assessment of societal relevance: ‘Several methods have been developed for specific areas (the payback methods for health research, for example) and new methods are being developed’ and here gives a link to the website for the ERiC project – see Spaapen et al.109 Uses the concept of societal quality to refer to productive interactions (as with some other Dutch studies) | As the major approach to assessing publicly funded research in the Netherlands, it has the strength of ‘a broad scope’, including a focus on the societal relevance of research. However, while the range of options of methods to show societal relevance provides flexibility, there might be uncertainties for institutions and programmes in deciding the best approach for a formal assessment
121 | RSM McClure Watters et al.71 | 2012 | Application | UK | Northern Ireland Executive: HSC research | Payback Framework. Desk analysis of documents and literature, consultations with stakeholders, survey informed by Payback Framework, three case studies, benchmarking. Surveys to all 169 PIs for projects funded between 1998 and 2011 who could be contacted. There were 84 responses (a 50% response rate) | Used five Payback categories: knowledge/dissemination; benefits to future research (career development, qualifications, extended category to include jobs supported by the research funding); informing policy; health and health sector benefits (health gain, improved service delivery, cost reductions, increased equity); economic benefits (note: main factor considered here was additional funding brought in to support the HSC-funded projects, and leveraging for further funds) | Wide range of methods used to provide comprehensive picture of the context and impact of HSC research funding. There were both strengths and weaknesses in the focus on the economic impact. Given the clear geographic focus of the HSC R&D funding, it was valid to demonstrate the role of the funding from the Northern Ireland Executive in leveraging funding from outside Northern Ireland to support health research in Northern Ireland, but, ‘Much more detailed analysis would be necessary to demonstrate what proportion of this follow-on money was fully leveraged by the original HSC funding’. The case studies ‘provide good examples of the incremental nature of the impacts associated with research within the Health and Social Care field’
Found: 66 PhDs/master’s/MDs supported; considerable career progression; over 100 posts supported; 19% impact on policy development; 20% health gain; 13% cost reductions; 17% increased equity; additional funding brought in matched the HSC funding on projects; substantial leveraged funds for follow-on projects came from outside Northern Ireland
122 | Runyan et al.396 | 2014 | Both | USA | Injury Control Research Centers | Not explicitly stated beyond noting that further refinement and identification of systems to support their use:
| The background refers to ‘impacts of academic research centres’: they have a teaching role as well as a research role; therefore, indicators are comprehensive. Proposed indicators include: scholarly products; funding (including leveraged funds); policy-related work in relation to government and/or organisational policies (including both consultations and actual impact on policies); technical assistance (including consultation requests, e.g. participation on planning committees for community initiatives); improving community capacity (e.g. collaborations with community members involving collecting or evaluating data); interventions developed | Despite detailed work to develop the list of indicators, the process still needed further refinement and piloting. While the authors reflect the view that the richness of centres’ contributions exceeds the sum of the individual components, it is not clear how far the list of indicators reflects previous theoretical discussions about how this perspective can inform research impact assessments. A growing number of studies assess the impact of long-term funding for research centres, and might take a broader perspective than that applied to specific programmes of research. This feeds into discussion about the role of impact assessment within wider assessment of research:
|
123 | Rycroft-Malone et al.102 | 2013 | Application | UK | NIHR: CLAHRCs – HSR | Realist evaluation. Mixed-method case study including interviews, ethnography, document analysis and participant observation | What works for whom in what circumstances in multistakeholder research collaborations, using a CLAHRC as a worked example | Strength: clear and rigorous application of realist method supplemented with other perspectives where appropriate. Weakness: not yet replicated in other CLAHRCs or similar models. Various mechanisms were identified – collaborative action, relationship-building, engagement, motivation, knowledge exchange and learning – which interact with context to produce different outcomes in different parts of the programme
124 | Sainty99 | 2013 | Application | UK | Occupational Therapy Research Foundation | Becker Medical Library model. All 11 grantees who had completed a UK Occupational Therapy Research Foundation-funded project were invited to complete a ‘personalised impact assessment form’ (equivalent to a survey). Eight responded (73%). Two researchers were invited to provide an independent review of the collated findings | Based on the Becker Model, four main categories: research output/advancement of knowledge (e.g. publications, PhD completion, career progression, follow-on funding); clinical implementation (e.g. assessment tools/outcome measures generated, guidelines, loan of final report from library, training materials, clinicians report change in practice); community or public benefits (e.g. service users engagement activities in project, presentations to public); economic benefits | Study informed by a literature review and based on a model. Possibility of recall and selection bias in responses from researchers – the only data source. In relation to the clinically related activities of three projects:
|
Found: one PhD, one MPhil, six career progression, four further grants; and three projects – local clinical application | ||||||||
125 | Sarli et al.118 | 2010 | Methodological (plus case study) | USA | All health | Developed a new approach called the Becker Medical Library model for Assessment of Research. Started from the logic model of the W.K. Kellogg Foundation397 ‘which emphasises inputs, activities, outputs, outcomes, and impact measures as a means of evaluating a program’. Methods proposed in the new model: the main emphasis is on the indicators for which the data are to be collected (see column Impact: examined and found) but, referring to the website on which the indicators are made available, the authors state:
| For each of a series of main headings, the model lists the range of indicators, and the evidence for each indicator. Main headings: research outputs; knowledge transfer; clinical implementation; and community benefit | A comprehensive list, but there could be questions about the diversity of items included in some of the categories, and how far they have been fully linked with the organising framework. It was also challenging to establish a clear pathway of diffusion of research output into knowledge transfer, clinical implementation or community benefit outcomes as a result of a research study. This was, in part, due to ‘the difficulty of establishing a direct correlation from a research finding to a specific indicator’. The Becker Model is mainly seen as a tool for self-evaluation:
|
126 | Sarli and Holmes398 | 2012 | Methodological | USA | All health | The Becker Medical Library model for Assessment of Research – update | Same basic method as in the original model, but updated ‘to include additional indicators of research impact based on review of other research projects’ | Claimed that the changes also ‘reflect the authors’ intention to make the model more user friendly’. It has similarities with researchfish. After this, the web version was updated on a regular basis: https://becker
127 | SHRF86 | 2013 | Application | Canada | SHRF | CAHS. Review by external consultant: 22 interviews, including with five researchers whose SHRF-funded work formed the basis of case studies | Five categories from CAHS’s framework: research capacity (e.g. personnel, additional research activity funding, infrastructure); advancing knowledge; informing decision-makers; health impacts; broad economic and social impacts (e.g. research activity) | Applies a framework SHRF had helped develop, but the framework was applied to just five case studies. Among the claimed facilitators of impact are SHRF’s ‘group-building grants, creation of research groups and networks’. The emphasis on the economic benefits in terms of research jobs attracted is linked to research infrastructure
Found: examples of capacity building and researcher retention: three out of five research groups reported limited impact on clinical and policy decision-makers, but one case study describes how a clinic launched as a demonstration project has now been used as the model by others; some guidelines have been developed; ‘for every dollar awarded the researcher attains four dollars from external sources of funding . . . suggested that with the presence of special research infrastructure . . . comes higher paying jobs, resulting in a higher tax base and a highly sought knowledge economy’
128 | Schapper et al.72 | 2012 | Both | Australia | Murdoch Children’s Research Institute | Institute’s own research performance evaluation framework ‘based on eight key research payback categories’ from the Payback Framework, and draws on the RIF (Kuruvilla et al.123). A committee oversees the annual evaluation with a nominee from each of six themes and an external member and chairperson. Evaluation ‘seeks to assess quantitatively the direct benefits from research’. Data are gathered centrally and verified by the relevant theme. The theme with the highest score on a particular measure is awarded maximum points; others are ranked relative to it. Each theme nominates the best three research outcomes over 5 years; nominees are then interviewed by the research strategy team using a detailed questionnaire to gain evidence and verify outcomes. Research outcomes assessed using a questionnaire based on the RIF | Three broad categories: knowledge creation; inputs to research; and commercial, clinical and health outcomes. The six major areas of outcomes: development of an intervention; development of new research methods or applications; communication to a broad audience; adoption into practice and development of guidelines and policy; translation into practice – implementation of guidelines and policy; and impact of translation on health | Framework developed for use in the institute to provide a balanced assessment across the wide range of modes of research conducted in the institute. Despite issues over the weighting to give various factors, and the relative scoring of data, the evaluation ‘is generally viewed positively by researchers at the Institute’. However, it might appear rather formulaic. Impact embedded into the performance evaluation and strategic management of a research institute; ‘provides a fair and transparent means of disbursing internal funding. 
It is also a powerful tool for evaluating the Institute’s progress towards achieving its strategic goals, and is therefore a key driver for research excellence.’ It claims that the evaluation ‘is unique’ |
129 | Schumacher and Zechmeister134 | 2013 | Both | Austria | Institute for Technology Assessment and Ludwig Boltzmann Institute for HTA: HTA | An account that goes with the Zechmeister and Schumacher83 study in developing a framework informed by various strands of previous work, including Weiss’s enlightenment concept and a ‘multidimensional concept of impact’. Combination of interviews (same 15 as Zechmeister and Schumacher83), download analysis, retrospective routine data analysis and media analysis | Whether or not the HTA research programmes ‘have had an impact on the Austrian healthcare system’. Considered seven impact categories: awareness, acceptance, process, decision, practice, final outcomes (i.e. economic impact), and enlightenment | Comprehensive methods. They developed a matrix to show the methods used, the impact categories to which they relate and the indicators addressed by each method. ‘The strength of our approach is that it takes into account that HTA research may affect all system levels, in a multi-dimensional manner rather than a linear one.’ However, there were only 15 interviews, so they were not able to cover all target groups and ‘selection bias may have occurred . . . Further limitations are a lack of benchmarks.’ The study seems to underplay the extent to which earlier studies used a combination of methods
Found: rising downloads, with some reports attracting high media interest; increasingly used for investment decisions and negotiation; ‘Economic impact was indicated by reduced expenditures due to HTA recommendation . . . in places, an “HTA culture” can be recognized’
130 | Scientific Management Review Board399 | 2014 | Methodological | USA | NIH | Describes early stages in developing an approach. Assessments should:
| Assessments should attribute outcomes to all contributors, allow sufficient time to have elapsed and ‘begin with identifying the purpose of the study and its audiences’ | Identified issues, but progress was limited. The relevance for our current review is that it highlights the view from the world’s largest health research funder that considerable further work is required to develop an approach to assess the value of biomedical research supported by the NIH. Referring to data systems such as STAR METRICS, it says:
|
131 | Scott et al.73 | 2011 | Both | USA | NIH: Mind–Body Interactions and Health programme | Payback Framework incorporated into the design as a conceptual framework and adapted with a greater focus on how the agenda for the programme was shaped and the extent of community outreach and engagement. Centres: documentary review, database analysis, interviews (centre PIs), bibliometric data, construction of narrative templates for centres based on the combined data. Similar approach for projects | Main payback categories as related to the stages of the payback model: primary outputs (knowledge production; research targeting and capacity building – including an emphasis on the effect on the host institution); secondary outputs; final outcomes (improvements in health, well-being and quality of life, changes to health-care service delivery, and broader social and economic impacts) | Adaptation of the conceptual framework to meet the needs of a specific evaluation, and use of a wide range of complementary methods, but no findings presented in this paper. They considered the range of issues facing impact assessment: timing of evaluation, with many outcomes not likely to be evident at the time evaluations are usually requested – they used the term latency to describe that situation; attribution
132 | Shiel and Di Ruggiero114 | 2009 | Methodological | Canada (part of appendix A of CAHS report) | Population and public health research | Recommends that the CAHS should adopt the Payback Framework, as amended by CIHR. Identifies sources for data for the various payback categories: bibliometrics; case studies; evaluation studies of clinical guidelines; database analysis; special studies of specific items | Shows how the framework could be introduced to cover population and public health research. Covers the main payback categories: knowledge production; research targeting and capacity; informing policy; health and health sector benefits; and economic impacts | Analyses and contrasts payback studies and ROI studies (by ROI they mean the economic valuation studies, not the broader use of the term by the CAHS, which put the term ROI in the title of the report that recommended a variation of Payback). Also briefly highlights receptor capacity benefits from conducting research (e.g. p. 53)
133 | Solans-Domènech et al.87 | 2013 | Application | Catalonia | Agency for Health Quality and Assessment of Catalonia: respiratory disease | ROI model from CAHS: this paper describes a study within the overall study described in Adam et al.84 Interviews with 23 key informants: researchers and decision-makers. Differences between achieved and expected impact described, with expected impact defined as what the researchers hoped to achieve at the start | Five main categories from CAHS: advancing knowledge; capacity building; informed decision-making; health and social; and broad economic | The detailed case studies allowed a thorough analysis of expected and achieved impacts, but there were only six cases:
|
Found: 3/6 projects achieved all expected impacts; there were some unexpected impacts, and some expected impacts not achieved in final outcomes and adoption phase | ||||||||
134 | Soper et al.185 | 2013 | Both | UK | NIHR: CLAHRCs – HSR | Did not start with a specific theoretical perspective in the evaluation – adopted a formative and emergent approach, but it was informed by various studies: Quebec Social Research Council, Quality Enhancement Research Initiative, Need to Know project. Stakeholder survey of CLAHRCs, in-depth case studies of two CLAHRCs, validation interviews with nine CLAHRCs | From the two CLAHRC case studies they looked at 1. Establishing the CLAHRC, 2. Working as a CLAHRC, and 3. Emerging impacts and legacies. Stated that ‘both CLAHRCs had some comparatively rapid success in making an impact on health care provided locally and more widely across the NHS’. A common feature was use of a range of knowledge transfer and exchange strategies. Authors used Mannion et al.’s framework to assess CLAHRCs’ efforts to encourage cultural change and explain variable success with different groups. Four types of culture: clan, hierarchical, developmental and rational | They did not specify which theories they would test; instead, they used a formative and emergent approach, considering how best to relate their data to existing theories, and analysed whether or not the CLAHRCs met their remit. Limited to two case studies of CLAHRCs and people closely associated with them
135 | Spaapen et al.109 | 2007 | Both | The Netherlands | Mostly describes methodological development, illustrated by earlier case studies, including one on the pharmaceutical science programme at Groningen University | SciQuest approach of the ERiC initiative. Mixed-method case studies using qualitative methods, a quantitative instrument called contextual response analysis and quantitative assessment of financial interactions (grants, spin-outs, etc.). SciQuest methodology is deliberately non-prescriptive and context-sensitive. There are four key stages:
| Productive interactions (direct, indirect and financial) must happen for impact to occur. There are three social domains: science and certified knowledge; industry and market; and policy and societal. For the REPP in the pharmaceutical sciences example they set 15 benchmarks – five for each domain. For each benchmark there were five levels. The faculty had the highest benchmark in: relative citation impact; representation on editorial boards; involvement in industry/market; and additional societal/policy grants | It focuses the assessment on the context and is designed to overcome what were seen as the linear and deterministic assumptions of logic models. The primary goal of the study of the pharmaceutical sciences faculty was to support the faculty in conducting the self-evaluation required under the assessment system for academic research in the Netherlands. SciQuest is described as a ‘fourth generation’ evaluation approach. It draws on the work of Gibbons et al.179 on ‘Mode 2 knowledge production’. SciQuest is theoretically elegant but there are concerns about its feasibility for regular use:
|
136 | Spaapen et al.81 | 2011 | Both | EU – led from the Netherlands | Four case studies: health, ICT, nanotech, social/human sciences | Productive interactions. Based on SIAMPI collaboration. SIAMPI ‘involves two central tasks: to enlighten the mechanisms by which social impact occurs and to develop methods to assess social impact. SIAMPI . . . developed an analytical framework for the study of productive interactions and social impact’. Productive interactions, based on three categories of interaction:
| The focus of the study was mainly on assessing the interactions. They found:
| Definition:
|
137 | Spoth et al.400 | 2011 | Methodological | USA | Family-focused prevention science | Primarily a review. Brief account of the PROSPER model ‘for the delivery of evidence-based intervention’, but also involves evaluation of interventions that are community–university partnerships. Collection of outcome data | Includes analysis of long-term effects. Explores four translational impact factors: effectiveness of interventions; extensiveness of their population coverage; efficiency of interventions; and engagement of populations | Not a traditional model for assessing research impact, but the review covers a range of issues |
138 | Sridharan et al.401 | 2013 | Both | International | Teasdale-Corti Global Health Research Partnership Programme | The evaluation was informed by a theory-driven approach (from Pawson et al. 2004) that emphasised iterative learning and assessing performance in light of anticipated timelines of impact. Interviews with planners of the programme. Analysis of the 8/14 final reports then available, and their proposals. Surveys: the first to 87 grant recipients attending an initiative symposium; the second to four different groups: co-PIs from Canada, co-PIs from southern countries, users, leadership award recipients; each group receiving a different version. Filmed interviews with some grantees. Bibliometric analysis. Brief case studies of three grantees including Skype™ (Microsoft Corporation, Redmond, WA, USA) interviews. Documentary analysis and iterative development of the theory of change | Wide range of impacts identified in the proposals and linked to the activities that supported the theory of change. Included knowledge production, capacity building to conduct and use research, enhanced recognition of the role for research in policy and practice and alignment with local needs, strengthened health systems and enhanced health equity, and improved health | Very wide-ranging and aimed to conduct a theory-driven evaluation, but there were limitations in how far they were able to implement a theory-driven approach, and the timing limited what impacts could be identified, including from those projects that had not finished at the time of the evaluation. Again, there is an increasing desire to include impacts in programme evaluation, but the timing of such evaluations is too soon for many of the impacts to have arisen. Also, the findings were mainly presented in terms of the different methods of data collection, which adds clarity in terms of the report on each method, but reduces the overall description of how far the various impacts were achieved
Found:
| ||||||||
139 | Sullivan et al.402 | 2011 | Both | UK | UK cancer centres (part of international programme of accreditation – but not a specific stream of funding) | Scientometric/bibliometric analysis, including guidelines. Broad bibliometric analysis but key issue:
| Found:
| Method not applied to funded programmes in this study but might have potential for such application |
140 | Sutherland et al.403 | 2011 | Both | UK | Any field: but example on research into means of conserving wild bee populations in UK | Propose ‘a quantitative approach in which impact scores for individual research publications are derived according to their contribution to answering questions of relevance to research end users’. Tested the proposed approach by evaluating the impact of the bee research. Identified key interventions (n = 54), searched for publications (n = 159) that test the interventions. Forty-four stakeholders allocated 1000 points between the different interventions according to how they should be prioritised. Three experts assessed the evidence for each intervention and the contribution and relevance of each publication. Finally, an impact score was calculated for each paper using an equation that incorporated the various elements (priority score, certainty of knowledge, contribution of knowledge, relevance and number of interventions for which the paper provides evidence) | How far the knowledge would be relevant, etc., for important policies and practice | Developed a new approach after discussing the problems with either assessing impact through stages to direct benefits to society or counting the amount of communication. The approach could possibly be adapted for application to a programme of research. It does not face the problems identified with attempts to track the actual impact from research, but the method is time-consuming. The authors report ‘A potential bias is introduced by the practitioners’ prior knowledge’, but the attempt to identify ways to reduce this suggests that the approach is trying to score impact in a way that is insulated from the very context within which knowledge has, in reality, to exist if it is to make a real-world impact
141 | The Madrillon Group74 | 2011 | Application | USA | NIH: Mind–Body Interactions and Health Programme | Payback Framework incorporated into the design as a conceptual framework (called the Research Payback Framework) and adapted with a greater focus on how the agenda for the programme was shaped and the extent of community outreach and engagement (see Scott et al.73). Mixed-methods cross-sectional evaluation design. Used qualitative and quantitative data to build three snapshots of the programme as a whole, the research centres and the research projects. The request for semistructured interviews received a 100% response rate from PIs of all 15 centres and all 44 investigator-initiated projects. Impacts of centres scored by adapting the scales used previously in payback studies and presenting the scores as radar graphs | Main payback categories: knowledge production; research targeting and capacity building; influence on policy; influence on health outcomes and health-care delivery; and broader economic and social impacts | Thorough multimethod study applying and adapting the conceptual framework. Conducted innovative analysis through examining three overlapping levels (programme, centre and projects), but while interviews were used rather than a survey, most of the data on wider benefits came from researchers and ‘[d]etermining whether a research project could be credited with an effect within a given benefit category still involved a degree of subjectivity’. The NIH feasibility study had led to a call stating that the Payback Framework should be used for this assessment:
|
Found: achieved the programmatic goals and objectives of facilitating interdisciplinary collaboration in research ideas and building research capacity for mind–body research through the development of research personnel and funding of research core services. The centres and projects ‘produced clear and positive effects across all five of the Payback Framework research benefits categories’. Projects: 34% influenced policies; 48% led to improved health outcomes | ||||||||
142 | Theobald et al.404 | 2009 | Application | Kenya/Malawi/Nigeria | Liverpool School of Tropical Medicine’s Global Health Development Group: three operational research projects | Case studies organised using the RAPID framework from the Overseas Development Institute. They selected three cases using predefined criteria, including that they were known to have demonstrated an impact on health policy and practice. Analysed the research impact process using the RAPID framework for research–policy links, which focuses on: the political context; links between policy-makers and other stakeholders; and the evidence. External influences examined as the wider context. Desk analysis with inputs from the PIs and the Global Health Development Working Group | Impact on health system strengthening and promotion of equity in health service provision | Using a common framework for the analysis of the three case studies allowed the analysis to go further than had been reported in individual accounts of the projects, but there were only three case studies, and they were from one research group rather than a single funder’s programme – the paper is at the margins of meeting the inclusion criteria. Key factor:
|
Found: in all cases, new knowledge and approaches were needed to fulfil policy requirements; all the projects involved partnerships between researchers and users; the links with policy-makers were not only at the policy agenda-setting stage – developing partnerships at multiple levels and with multiple players was key in all cases; the use of equity considerations was central in each case. Capacity building was also important
143 | Tremblay et al.405 | 2010 | Both | Canada | CFI: research infrastructure in a range of fields | Developed their own method: the outcome measurement study. The unit of analysis is multidisciplinary research themes at individual institutions. A theme includes CFI expenditures on a range of research infrastructure, laboratories, etc., and data and computing platforms. Methods combine: the research institution completes an in-depth questionnaire covering a series of indicators – the 50-plus questions cover quantitative and qualitative data; an expert panel reviews the survey data plus other documents, followed by a visit to the institution | Evaluates contributions of CFI expenditure to improving outcomes in five categories: strategic research planning; research capacity (physical and human); training of highly qualified personnel; research productivity; and innovation and extrinsic benefits. The indicators listed for each include, for the final category: leverage of CFI expenditures; type and amount of knowledge and technology exchange; key innovations – economic, social and organisational, evolving industrial clusters. Panel report for each outcome measurement study | Thorough analysis of approaches to impact assessment led to an approach that combines a range of methods and multiple outcome categories and ‘provides for a richness of analysis’. However, it is quite resource intensive for those being assessed and the assessors; some of the categories of outcomes, and indicators, relate to items such as recruitment of researchers that in other approaches would probably be viewed as inputs because they are directly paid for by the research funding. Perhaps different factors come into play because this is infrastructure funding, which might therefore facilitate the recruitment of researchers
Found: examples of social and economic benefits include:
| ||||||||
144 | Trochim et al.406 | 2008 | Both | USA | Pilot application to National Cancer Institute’s TTURC | Evaluation of large initiatives – a logic model. Concept mapping, logic modelling, a detailed researcher survey, content analysis and systematic peer evaluation of progress reports, bibliometric analysis and peer evaluation of publications and citations, and financial expenditure analysis | Concept mapping produced domains of collaboration, scientific integration, professional validation, communication and health impacts. Some of these were mainly short term and process oriented; others were medium or longer term, and outcome oriented. This paper is mostly about the short-term processes and interactions (details of managing the research programme and, especially, how to measure transdisciplinary collaboration). Brief on long-term markers:
| Describes just a single worked example. Authors also noted that the researchers surveyed expressed:
|
145 | UK Evaluation Forum19 | 2006 | Methodological | UK | All health fields | Discusses a range of approaches to assessing the impact of medical research, but uses the categorisation of economic impacts from Buxton et al.28 as an organising framework for key sections | Reviewed a range of dimensions of impact but argued that socioeconomic benefits were an area where there was a particular need for further studies | An important step in developing health research impact assessment as a field of study, in particular the assessment of economic impacts. Key recommendations included:
|
146 | Upton et al.149 | 2014 | Methodological | UK | All fields: especially in relation to knowledge exchange | Propose an approach that would shift the focus from the outcomes of knowledge exchange to the process of engagement between academics and external audiences. They claim:
| Focused on analysing roles for assessing outcomes or processes | Recommendations for assessing impact related, in particular, to higher education innovation funding and knowledge exchange funding. It was based on responses from many academics, but the situation might have changed as a result of the REF. Addresses the key issue of assessment of processes or outcomes
Found:
| ||||||||
147 | Van de Ven and Johnson203 | 2006 | Methodological | N/A | General: could be applied to HSR | Co-production. Interdisciplinarity and productive intersectoral conflict enable the salient features of reality to surface in the co-production process through ‘arbitrage’ (see Chapter 4) | Intersectoral conflict, especially the ‘two cultures’ of academia and policy-making | Strength: introduces and justifies the pragmatic approach to exploring and achieving impact. Weakness: no well-worked-up examples
148 | Walter et al.391 | 2004 | Methodological | Scotland | Social care | Social Care Institute for Excellence | Three models of research utilisation:
| Pre-dates our review period, but thought important to include because it was applied in Percy-Smith et al.390
149 | Weiss120 | 2007 | Methodological | USA | All health | Draws on the United Way model for measuring programme outcomes to develop a medical research logic model. Moves from inputs to activities, outputs and outcomes: initial, intermediate and long term. Discusses various approaches that could be used, e.g. surveys of practitioners to track awareness of research findings; changes in guidelines and education and training; use of DALYs or QALYs to assess patient benefit | Range of dimensions from outputs, such as publications, to clinician awareness, to guidelines, etc., to implementation and patient well-being | While it draws on much existing literature, it is seen as an important article in the move towards increased interest in the assessment of research impact, especially through the use of logic models
150 | Weiss34 | 1979 | Methodological | N/A | Social science | Weiss’s taxonomy of research utilisation. Impact is rarely direct (either knowledge-driven or problem-solving). It is usually indirect, through sustained interaction between policy-makers and scientists, and occurs through conceptual changes (e.g. ‘enlightenment’). Research findings may be used instrumentally, and also symbolically to support a political decision, or a study may be commissioned tactically to delay a decision | Utilisation of research findings in policy-making | Plausible explanatory model for successes and failures of efforts to get social science research to influence policy. May be less relevant for other types of research/impact |
151 | Wellcome Trust93 | 2014 | Both | UK | Wellcome Trust: all funding, some health programmes highlighted | Framework has six outcome measures and 12 indicators of success. Range of qualitative and quantitative measures linked to the indicators and collected annually. Wide range of internal and external sources, including end-of-grant forms. The information gathering and report production are led by the evaluation team, though reliant on many sources. Complementing the quantitative and metric-based information contained in volume 1 of the report, volume 2 contains a series of research profiles that describe the story (to date) of a particular outcome or impact. The Wellcome Trust research profiles – taking the form of highlights and histories – are agreed with the researchers involved and validated by senior trust staff |
| Another example of a major funder including impact in annual collection of data about the work funded
|
Discoveries: applications of research – contributions to the development of enabling technologies, products and devices, uptake of research into policy and practice; engagement; research leaders; research environment; influence | ||||||||
Found: 6% of grants ending 2012/13 reported filing a patent; 17% engaged with commercial collaborators during their research; £218M in venture capital; 40 inventions during 2012/13. A total of 28% of grants that ended in 2012/13 reported engagement with policy-makers and health-care professionals; 14% reported production of software and/or databases | ||||||||
152 | Westhorp164 | 2014 | Methodological | N/A | Evaluation and programme planning | Realist evaluation. This approach involves mixed-method case studies aimed at developing and testing programme theories about what works for whom in what circumstances | No empirical examples given. It does not specifically refer to research impact, but impact in general. Reading across to research, the assumption is that different research findings will have different impacts in different contexts because the context–mechanism–outcome link will be different | Strength: clear and authoritative summary of the realist approach and how it might be applied in impact assessment. Weakness: no examples given, and is for programme evaluation in general |
153 | White172 | 1996 | Methodological | N/A | CBPR | CBPR | Lay involvement in research. Involvement may be nominal (to confer legitimacy on a project), instrumental (to improve delivery), representative (to avoid creating dependency) or transformative (to enable people to influence their own destiny) | Elegant and simple tool for assessing (semiquantitatively) the level of lay involvement in a study. Weakness: not tested in HSR. The greater the involvement, the greater the potential impact |
154 | Williams et al.78 | 2008 | Application | UK | NHS HTA TARs: economic evaluations | The whole study focused on the use of economic evaluations in NHS decision-making. The part that is relevant for this review is the specific case study on the use of economic evaluations by NICE technology appraisal committees, because they explicitly drew on one programme of research, i.e. the HTA TARs. In that case study, the economic evaluations were specifically commissioned to be used in the committee’s decision-making. Data collection was informed by grounded theory. Documentary review of the seven identified appraisal topics; 30 semistructured interviews with members of the NICE appraisal committee; a group discussion with the NICE technical support team; and observation of meetings related to the selected topics. Triangulation of data | Use of economic evaluations in NICE decision-making about whether or not specific interventions should be made available to patients treated by the NHS | Provides an interesting contrast with many other studies in which the evidence is not produced so directly to be used by formal decision-making structures operating as a ‘receptor body’. However, the economic evaluation was just one part of the evidence from the TAR
Found:
| ||||||||
155 | Williams et al.92 | 2009 | Both | USA | NIOSH | Developed the NIOSH logic model: inputs, activities, outputs, transfer, intermediate customers, intermediate outcomes, final customers, intermediate outcomes and end outcomes. Also developed outcome worksheets based on the historical tracing approach (see Project Hindsight407 and TRACES408), which reverse the order:
The expert panels receiving the documents would score the impact on a 5–0 scale, and separately score relevance of the research, i.e. did the programme appropriately set priorities among research needs and how engaged is the programme in transfer activities? | Intermediate outcomes include: adoption of new technologies; changes in workplace policies, practices and procedures; changes in the physical environment and organisation of work; and changes in knowledge, attitudes, and behaviour of the final customers (i.e. employees, employers). End outcomes include:
| Thorough account of the development and implementation of tools to prepare data for an outcome narrative for expert panel assessment, with the strength of involving analysis that moves both forwards from the research and backwards from the outcomes. However, this account inevitably provides only a partial picture: it does not give a complete example of what was prepared for the panel by any programme, or of how the panel scored it. Also, while working backwards was seen as a strength in that it focused attention on a collective body of research rather than on individual projects, it is not entirely clear what happens to data identified when working backwards that come from research programmes other than the one for which the narrative is being prepared. This places the report in the context of increased interest in the USA in evaluating/accounting for the outcomes from publicly funded programmes: the Government Performance and Results Act and the Program Assessment Rating Tool. Quotes from the Government document on the Program Assessment Rating Tool:
|
Found: the report provides a few examples of impacts found by some NIOSH programmes, but these are illustrations rather than a full presentation of findings, e.g. the Hearing Loss Prevention programme initially identified 44 outcomes across 10 major research areas | ||||||||
156 | Wilson et al.409 | 2010 | Application | UK | Applied and public health research but from a range of UK programmes | As part of a wider survey, the questions related to impact were informed by questions from RIF. A survey was sent to 485 PIs (232 completed), mainly focusing on dissemination of their research in general, but it asked PIs to select one example of a project they had disseminated and answer further questions, including some on research impact. The study compared the responses to the various questions to analyse dissemination factors associated with the reporting of research impact | Impact on policy, health services and citation in clinical guidelines | Comparison of data from separate questions on (a) aspects of dissemination and (b) impacts claimed provides interesting findings, but is only partially relevant for impact from programmes because the focus was on PIs from a range of programmes and they could choose examples of research on which to concentrate, etc. Those ‘receiving dissemination advice and support, and/or who believe that researchers need to do more than publish academic journal articles were more likely to report policy impacts . . . having access to dissemination support . . . appears to increase the chance that research findings are misreported’. ‘Although only a minority indicated that they routinely recorded formal or informal feedback about the impact of their research, when asked about impact in relation to specific research they had recently completed most respondents were able to provide examples’
Found: 70% provided some details on impact on health policy and practice; 51% said their research led to discussions or interactions with policy-makers or was cited in policy documents; 28% were cited in clinical guidelines; 49% said their research had, or was likely to have, an influence on the acceptability or availability of a health intervention or on the organisation of a health service: evidence claimed for this included citation in guidelines, or named specific interventions where acceleration was anticipated. A total of 29 respondents felt their findings had been misrepresented or used in ways they felt inappropriate; 15 of these referred to the media | ||||||||
157 | Wooding et al.36 | 2014 | Both | Australia/Canada/UK | Leading medical research funders: cardiovascular and stroke | The study, called Project Retrosight, built on the Payback Framework and developed new methods to expand the analysis. Web-based survey, interviews and bibliometric analysis. A total of 29 case studies (12 Canadian, nine UK and eight Australian). Scoring of case studies by experts and the research team. Analysis of factors associated with impact by comparing the scores with features of the research processes recorded in the case studies | Assessed factors associated with impact in two groupings of the five payback categories – academic impact and wider impact. The 29 case studies revealed a number of findings: a diverse range of impacts, variations between the impacts of basic biomedical and clinical research, no correlation between knowledge production and wider impacts, engagement with practitioners and patients associated with high academic impact and high wider impacts, etc. Biomedical research = more academic impact, clinical research = more wider impacts (on health policies and health gain). Collaboration = wider impact | Using case studies enabled Project Retrosight to identify paybacks which otherwise would not have been detected. A complex set of steps, methods and innovations was required to understand success and translation. These methods produced a large and robust data set which supported new approaches to understanding factors associated with high and low payback. However, there were only 29 case studies, so there could have been inconsistencies between projects and countries. Health research funders wishing to achieve wider impacts should not use academic impacts as a proxy. Pathways to impact = funders should consider ways to assist researchers to maximise the potential for research to be translated
158 | Wooding et al.104 | 2009 | Both | UK | ARC: wide range | The framework developed and applied in this study was subsequently named the RAND/ARC Impact Scoring System – see Grant et al.,38 although that name was not used in this report. It was developed from the Payback Framework and linked to research pathways developed to show the roles of different types of research within an overall portfolio. A web-based tick-list survey was completed by PIs. It was piloted in 2007 on 136 grants ending in 2002 and 2006 (118 replies: 87% response rate). All data from each grant were presented on one sheet called an ‘impact array’, in which each row of blocks shows all the answers from one grant, and each column represents all the answers to a single question. Each coloured block shows a positive response to the question. The research pathway and impact array data were then combined to explore which types of research gave rise to particular types of impact | Included: future research (e.g. further funding/new collaborations, contribution to careers, new tools); dissemination; health policy (citations in guidelines, contribution to guideline committees and discussion of health policy); training (undergraduates and health professionals); interventions/products (new treatments, public health advice, etc.) | The authors’ claim:
|
Found: > 80% of grants generated new research tools, 50% of which were shared; 2.5% of research led ‘to diagnostics, therapeutics or public health advice that is in or nearing use, and 7.6% has generated intellectual property that has been protected or is in the process of being so’; and six projects (5%) led to policy impact | ||||||||
159 | Xia410 | 2011 | Application | USA | AHRQ: The Effective Health Care Programme | None stated. Database analysis: the authors examined the AHRQ online database and compiled a list of conclusions from completed comparative effectiveness reviews in various therapy areas. They then compared these conclusions with current access to these therapies in a selection of the largest US health plans (by lines covered), using the plans’ online formulary databases | Impact of AHRQ findings on formulary decisions of health-care organisations | Very little detail is available from the abstract and the findings are not strong, but the approach is potentially relevant for analysis of the NIHR HTA programme
Found:
| ||||||||
160 | Zachariah et al.141 | 2014 | Application | International | World Health Organization/Special Programme for Research and Training in Tropical Diseases Structured Operational Research and Training Initiative: adopted an existing training initiative | None stated. Retrospective cohort study, by survey, of attendees of eight operational research capacity building courses: 93 enrolled, 83 completed. A total of 89 papers were published, and 88 attendees replied about policy and practice. For each reported impact claimed from a published paper, a description of the attributed effect was requested. Categorisation was done by two independent researchers | Course completion (i.e. capacity building); attendees’ published papers assessed for impact on policy and practice | High response rate to the survey; a rare example of assessing wider impacts from a capacity building course, with a range of examples briefly described. However, the effects were self-reported and may have been subject to responder bias. Possible factors for rapid translation include:
|
Found: 65 (74%) stated an impact on policy and practice, including: 11 influenced national strategic policy and/or national guidelines; in 12, national data monitoring was adapted; 13 led to a change in routine implementation of a national programme; and 14 led to a change in routine implementation of a local programme | ||||||||
161 | Zechmeister and Schumacher83 | 2012 | Both | Austria | Institute for Technology Assessment and Ludwig Boltzmann Institute for HTA: HTA | Developed their own methods, partly informed by the work in Quebec described in our 2007 review,2 i.e. Jacob.43,44 Identified all HTA reports aimed at use before reimbursement decisions were made (category 1) or for disinvestment of oversupplied technologies (category 2). There were 11 full HTA reports and 58 rapid assessments. Descriptive quantitative analysis of administrative data, informed by studies such as Jacob (1993, 1997), and 15 interviews with administrators and payers | Impact on ‘reimbursements/investment and disinvestment decisions that may have consequences for volume supplied, expenditure/or resource distribution . . . an impact in terms of rationalisation (reducing oversupply) or re-distribution of resources into evidence-based technologies, and we attempt to calculate this in monetary dimensions’ | The study used two main methods and the authors ‘were able to present a rich picture of different perspectives’. The report provides a brief narrative of a range of examples and calculations of the cost savings. However, while the authors could claim that the majority of reports ‘have been used in decision processes’, the wording about the level of impact is much less clear:
|
Found: five full HTA reports and 56 rapid assessments ‘were used for reimbursement decisions’. Four full HTAs and two rapid assessments were ‘used for disinvestment decisions and resulted in reduced volumes and expenditure’. Two full HTAs had no impact. Other factors also played a role: in only 45% of reports ‘the recommendation and decision were totally consistent’, and in 53% they were partially consistent
AHRQ, Agency for Healthcare Research and Quality; AHSI-RES, Africa Health Systems Initiative Support to African Research Partnerships; AIDS, acquired immunodeficiency syndrome; ALR, Active Living Research; Application, application; ARC, Arthritis Research Campaign; Both, conceptual and application; CFI, Canadian Foundation for Innovation; CHBRP, California Health Benefits Review Program; CHP, Centre for Health Policy; CIHR, Canadian Institutes of Health Research; CIROP, Community Impacts of Research Oriented Partnerships; Co-I, coinvestigator; CRG, Cochrane Review Group; CTSA, Clinical and Translational Science Awards; FIC, Fogarty International Center; FP, Framework Programme; GCP, good clinical practice; HEI, higher education institute; HIV, human immunodeficiency virus; HSC, Health and Social Care; HSR, health services research; IAVI, International AIDS Vaccine Initiative; ICT, information and communications technology; KCE, Belgian Health Care Knowledge Centre; LVAD, left ventricular assist device; MD, Doctor of Medicine; Methodological, conceptual/methodological; MPA, Main Panel A; MPhil, Master of Philosophy; N/A, not applicable; NGO, non-governmental organisation; NIH, National Institutes of Health; NSW, New South Wales; PhD, Doctor of Philosophy; PI, principal investigator; PROSPER, PROmoting School–community–university Partnerships to Enhance Resilience; R&D, research and development; RAPID, Research and Policy in Development programme; REPP, Research Embedment and Performance Profile; ROAMEF, Rationale, Objectives, Appraisal, Monitoring, Evaluation and Feedback; ROI, return on investment; RWJF, Robert Wood Johnson Foundation; SDO, Service and Delivery Organisation; SHRF, Saskatchewan Health Research Foundation; TTURC, Trans-disciplinary Tobacco Use Research Centre.