Lau F, Kuziemsky C, editors. Handbook of eHealth Evaluation: An Evidence-based Approach [Internet]. Victoria (BC): University of Victoria; 2017 Feb 27.

Chapter 26 Building Capacity in eHealth Evaluation: The Pathway Ahead


26.1. Introduction

While much progress has been made in recent years, accommodating the growing demand for evidence relating to eHealth will require a continued focus on capacity building. Similar interest in evaluation capacity building extends into other disciplines and, as a consequence, has been the focus of discussion in the literature.

Labin, Duffy, Meyers, Wandersman, and Lesesne (2012) define evaluation capacity building as an “intentional process to increase individual motivation, knowledge and skills, and to enhance a group or organization’s ability to conduct or use evaluation” (p. 308). For the purpose of this discussion, the focus will be on the broader health system’s ability to conduct or use evaluation related to digital solutions. Preskill and Boyle (2008) describe the goal of evaluation capacity building as being “where members continuously ask questions that matter, collect, analyze, and interpret data, and use evaluation findings for decision-making and action” (p. 448). They go on to describe essential inputs including leadership support, incentives, resources and opportunities to transfer learning (Preskill & Boyle, 2008). This is consistent with the themes emerging from the capacity building experience in eHealth.

26.2. Motivation for Benefits Evaluation and Benefits Realization

Evaluation is a core component of an overall approach to benefits realization (Hagens, 2009). Clear and specific articulation of the benefits being targeted is an important starting point. With this step, expectations can be set and the mobilization of required participants can begin. A next step is identification of the key assumptions or conditions necessary for benefits to materialize, and of the actions required to address them. These actions may be many and varied. Examples include decision support, user interface considerations, workflow or other process redesign, policy or practice change, or approaches to harvesting quality or productivity gains. A structured change management methodology can help ensure success. As part of this process, measurement against objectives provides ongoing opportunities to adapt and adjust based on the findings, thereby improving results. Information to manage course corrections and subsequent steps is always required. Stakeholders and funders will also want to know the value produced and have other accountability considerations addressed.

The most effective evaluations are managed with the end in mind, informed by the stakeholders who have the ability to apply the findings. Ideally, evaluation work will meet the needs of multiple stakeholders, and thus consider multiple perspectives. As discussed, funders and decision-makers are an important audience. Clinicians and other staff in clinical settings will also be interested. Evaluation can inform clinicians of progress achieved and help them get the most out of investments. Evidence to inform optimization of benefits is also critical for implementers and vendors. Academia supports knowledge translation through teaching and encourages rigour and quality of methods and analysis. The varied interests of stakeholder groups require consideration in the design and execution of evaluation. Meeting the needs of all stakeholders may require trade-offs. For example, formative or process evaluation can heavily inform adoption and optimization. Summative or outcome evaluation is required to effectively assess value.

As a result, many stakeholders need to have skills and assets to contribute to the evaluation process, but not all have an equal capacity to do so. The ability to design, execute, and be responsive to evaluations is a top capacity need. Academia contributes substantially to addressing capacity needs and can be effective at publishing and communicating findings. Clinicians, health sector leaders, implementers, the vendor community, internal and external evaluators, and training providers can also play important roles. With growing needs for these skill sets, there is an opportunity for greater participation by all.

Effective evaluation also requires the focused engagement of those involved in digital health initiatives, from users to implementation teams to leadership. For instance, time and support are needed to co-design evaluation frameworks, gain approvals, contribute insights, facilitate data collection, provide other input, and respond to evaluation findings.

26.3. The Foundation

Encouraging and supporting capacity development is best built upon a foundation of tried and tested frameworks, tools and processes. There are a number of cross-sector structured approaches that have been applied to support benefits realization for digital health, such as Val IT (see http://www.isaca.org/knowledge-center/val-it-it-value-delivery-/pages/val-it1.aspx), Prosci (see https://www.prosci.com/) and value chains. There are also a number of tools that have been tailored to the health sector's needs. Some of the important contributors to this body of knowledge and resources are discussed in greater depth in chapter 1 of this handbook.

In Canada, digital health-related resources have been developed by a variety of individuals and organizations. For instance, Canada Health Infoway's Benefits Evaluation Framework (discussed in detail in chapter 2) provides a high-level, coherent, evidence-based model to guide discussion of benefits and evaluation approaches (Lau, Hagens, & Muttitt, 2007). This framework, along with sets of indicators that focus on various types of digital health, has been regularly used to support measurement as part of a benefits realization cycle. The broader Clinical Adoption Framework is also a useful reference for considering the range of inputs influencing success (Lau, Price, & Keshavjee, 2011). Likewise, the Newfoundland and Labrador Centre for Health Information (NLCHI, n.d.) produced a range of materials, including an evaluation framework and a series of successful evaluations as examples. Faculty at the University of Victoria's School of Health Information Science have also been productive, developing a series of eHealth evaluation frameworks and tools through the jointly funded CIHR/Infoway eHealth Observatory (Lau, n.d.). In addition, a number of open and proprietary evaluation tools and frameworks are offered by solution vendors, consulting firms, and think tanks.

Internationally, there have also been many contributions. A notable health IT evaluation framework and toolkit was produced by the United States Agency for Healthcare Research and Quality (AHRQ, n.d.) to support their demonstration projects (Cusack & Poon, 2007). It was informed by some of the groundbreaking U.S. research which began emerging a number of decades ago. Another important contribution comes from the Organisation for Economic Co-operation and Development (OECD), which has been developing benchmark measures to allow comparison and knowledge sharing (OECD, 2013). They cover four major domains: provider-centric electronic records, patient-centric electronic records and services, health information exchange, and telehealth. While there are challenges with differing terminology and approaches to eHealth across countries, the OECD effort is proving important for supporting cross-national benchmarking and efforts by countries to enhance digital health measurement (Adler-Milstein, Ronchi, Cohen, Winn, & Jha, 2014).

Foundational outputs of the work of organizations such as those discussed above also include practical tools to assist with conducting evaluations. The System & Use Assessment survey developed by Canada Health Infoway and its partners is one such example. It has been extensively applied across Canada over the last decade (Infoway, 2006, 2012). There are many other examples of well-tested tools that make the collection and interpretation of data easier for organizations building capacity.

Virtual communities have been important for sharing all of these resources, as well as the experiences of those involved. They also provide a forum to build fruitful relationships, such as connecting experienced evaluators with those in need of support.

While these resources provide a helpful starting point for those embarking on eHealth evaluation, there is an ongoing need for development and evolution. The rapid change of technology and its application in healthcare requires development of evaluation methodologies and tools to keep pace. Similarly, the growing demand for evidence to inform decision-making requires evaluation approaches aligned to evolving priorities and questions being posed. The increasing digitization of health has generated new data sources with substantial potential to improve evaluation options, as well as broadly inform the health system. Seizing this opportunity, however, takes careful planning and cooperation.

26.4. Approaches for Building Capacity

Just as contributions to the knowledge base came from many different stakeholder groups, evaluation capacity building has come from across the sector, often by leveraging evaluation expertise and capacity developed in other domains.

Basic undergraduate education through universities and colleges, a traditional approach to capacity building, has been impactful. While the University of Victoria offered the first health informatics program in Canada, there are now more than ten programs that train undergraduate students in the fundamentals of eHealth and its implications for the health system, and several that produce experts through their graduate programs. Fewer have courses specifically dedicated to evaluation.

More broadly, Canada's faculties of medicine, nursing, and pharmacy have undertaken specific initiatives focused on better preparing students to practice in modern, technology-enabled clinical environments (Baker, Charlebois, Lopatka, Moineau, & Zelmer, 2016). The specific goals of this Infoway-supported program were to:

  • Ensure that clinicians-in-training are ready to practice in, and gain value from, an ICT-enabled environment when they graduate; and
  • Integrate concepts and expectations related to the use of ICT in practice into curricula design and educational processes.

In a number of cases, these efforts include embedding competencies related to evaluation in health professional undergraduate education. Continuing education through academia and other education providers is also essential, as many professionals seek core skills to embark on evaluation work or to enrich their knowledge in key areas.

Recognizing the importance of capacity building and the critical role of academia, the Canadian Institutes of Health Research (CIHR) and Infoway partnered in 2008 to offer a five-year CIHR-Infoway Chair in eHealth. This award, won by Francis Lau of the University of Victoria, proved a successful example of targeted funding making a significant impact, with outputs including some of the frameworks, publications, and communities referenced earlier (Lau, 2014).

Practical experience in undertaking and addressing the findings of evaluations is also important for building capacity, particularly given that the volume of evaluation activity has increased in recent years. This growth parallels the rise in evaluation in the public sector as a whole. Many investments in eHealth today have an explicit requirement for measurement, be it around the implementation process, the change effort, the adoption, and/or the impacts. This was not previously the norm, but more sophisticated approaches to project delivery and an increasing demand for evidence-informed decisions have changed expectations. With greater funding and attention from leadership, implementers have sought out evaluators from academia and the private sector, and often take the opportunity to grow in-house capabilities. Arguably, the most effective work comes from collaboration between these groups, matching those in a position to shape evaluations and generate knowledge with those who are in a position to apply the findings.

Growth in the volume of evaluation activity has required investments of financial, human, and other resources. Granting agencies have an important role in this area. CIHR has made some very important contributions over the past decade, with eHealth an explicit focus of a number of grant competitions and knowledge translation activities. Embedding evaluation as part of project plans and budgets is also increasingly common. Organizations delivering eHealth solutions are now more likely to require evaluation as a deliverable, and are able to budget for it and engage skilled internal or external evaluators to support the work.

26.5. Approaches for Knowledge Translation and Benefits Realization

As important as increasing the capacity for conducting evaluations is increasing the application of findings. The Canada Health Infoway Benefits Evaluation Framework focuses on three purposes: accountability, informing clinicians and other digital health users, and driving benefits realization.

Accountability for investments made is increasingly important in the public sector and has been an important driver of expanded evaluation and performance management practices in Canada. Methodologies and reporting approaches must be tailored for this purpose. Clinicians, steeped in a culture of evidence-informed practice, similarly expect evidence to shape digital health design, implementation, and adoption, as well as its effective integration into clinical practice.

This includes supporting evidence-informed strategic planning and implementation. All stakeholders involved in implementation can benefit from evidence to inform optimization and realization of benefits. For instance, initial strategic planning typically includes a review of the evidence and critical success factors to inform priorities, assess options, and guide plans. Subsequently, evidence may help to drive enabling functionality such as decision support, the redesign of workflows to capture potential productivity improvements, efforts to address barriers to adoption such as user interface challenges or inconsistent policies, or the harnessing of data for secondary use. While any of these factors may be identified during project planning, the full value proposition often emerges over time, with thoughtful observation, analysis, and ongoing response to feedback from users.

Traditional approaches to knowledge translation (KT), such as publications and conferences, remain central to the long-term objective of building a rich and robust knowledge base. Both enable communication with a range of audiences, and conferences add the opportunity to build collaboration from that communication. Peer-reviewed literature helps to create quality standards that allow readers to apply the results appropriately and confidently. Limitations of peer-reviewed publication include delays (often in excess of a year), the effort required to complete the process, and disincentives for many outside the academic community to contribute findings.

In addition, KT approaches have been rapidly evolving, both to get evidence into the hands of decision-makers more quickly and to encourage broad participation. Within specific projects, rapid-cycle improvement methods can help to get actionable information into the hands of those with the ability to adapt plans and processes. Ideally, projects are designed with an optimization period. This ensures that resources are available to make adjustments as the process unfolds. Often quality improvement cycles are built into broader change management methodologies. The National Change Management Framework and supporting toolkit, developed by the Pan-Canadian Change Management Network with the support of Canada Health Infoway, positions evaluation as a central activity and provides some of the practical guidance required to enable long-term success (Infoway, 2013).

An expanding range of approaches beyond peer-reviewed journals is also being used to share knowledge across organizational boundaries. For instance, webinars, often tied to the kinds of communities described above, are increasingly prevalent and valuable. There are also well-regarded print/online journals and magazines, and growing online and social media options. Each of these has unique pros and cons, with considerations such as reducing disincentives to sharing experiences, streamlining process requirements and prerequisites, reducing the complexity of accessing information, and ensuring that the quality of information can be assessed by users. Integrated KT and multi-channel communications are important considerations.

26.6. Capacity Building Examples

This section provides selected examples of the capacity building outputs that are mentioned in the Foundation section (26.3) of this chapter. The examples cover peer support communities, knowledge and learning resources, and formal evaluation courses.

26.6.1. Peer Support Communities

Canada Health Infoway initiated a Benefits Evaluation community in 2007, as work was underway to put the evaluation strategy into operation. Early roadblocks had emerged in gaining buy-in from project teams to take accountability for evaluation and in ensuring that there were people with the right skills to be successful. The community directly addressed these roadblocks, bringing stakeholder groups together and showcasing practical methodologies, effective partnerships, and the value of having evidence. Much credit for the early success of this community goes to the staff of the Newfoundland and Labrador Centre for Health Information, who brought substantial expertise to this forum and demonstrated the collaborative relationship they had achieved between implementers and evaluators (NLCHI, n.d.). Today, there is strong participation from many groups across Canada, and the community has contributed substantially to the development of a series of indicator sets, which are included in Infoway's Benefits Evaluation Technical Report (Infoway, n.d.). It has evolved over the years to focus on emerging areas of need and to engage a broader audience. In addition, Canada Health Infoway frequently brings evaluation expertise into other Infoway-facilitated communities, such as jurisdictional implementers groups, clinician reference groups, or InfoCentral communities.

A further example is the virtual eHealth Benefits Evaluation Knowledge Translation (BE-KT) community, which evolved from the University of Victoria's (UVic) eHealth Observatory (Lau, n.d.). In 2012-13, researchers at the eHealth Observatory facilitated a virtual learning community in eHealth evaluation with a broad membership including implementers, policy-makers, and academia. This community featured live online sessions with presentations from mentors, follow-up questions to prompt online discussions, and resources and links to support members in their evaluation activities (Bassi, Lau, Hagens, Leaver, & Price, 2013). The community attracted over 130 participants, many from outside academia, who were seeking the knowledge and network to increase the use of evaluation in their organizations. Over an 18-month period, the BE-KT community website was visited 4,425 times and viewed 14,683 times by both registered and unregistered members. Additionally during that period, 28 live seminar sessions were held on different topics related to eHealth evaluation. The presenters included researchers from the eHealth Observatory, Infoway benefits realization staff, and jurisdictional representatives. The overall feedback from community members was largely positive: the effort had raised awareness of the importance of BE, where to find BE resources, and how to apply BE frameworks, methods, and tools. Interested readers can refer to the final report and lessons learned on the eHealth Observatory website (Bassi, 2014).

26.6.2. Knowledge and Learning Resources

Over the years, a growing number of online knowledge and learning resources on eHealth evaluation have been published. Examples of the organizations and groups that provide publicly available eHealth evaluation resources over the Internet are listed below.

  • Canada Health Infoway maintains a rich repository of knowledge resources on benefits evaluation on its website (Infoway, n.d.). These resources include the Infoway BE Framework, the BE technical indicator report, and published jurisdictional BE reports in its online resource centre.
  • The Newfoundland and Labrador Centre for Health Information has published the outputs of its benefits evaluation work done over the years on its website (NLCHI, n.d.). These resources include an inventory of published electronic health record (EHR) initiatives across Canada, a review of published EHR evaluation literature and reports, and a proposed evaluation framework for EHR initiatives. In particular, the proposed framework describes a collaborative process of working with stakeholders to develop meaningful and relevant evaluation study designs and measures that can be implemented by healthcare organizations.
  • University of Victoria eHealth Observatory: This is part of a five-year Chair in eHealth award jointly funded by CIHR and Infoway to examine the effects of health information systems deployment in Canada. The website contains a set of eHealth evaluation frameworks, rapid evaluation methods, and sample evaluation tools that can be applied and/or adapted in field evaluation studies of different eHealth systems (Lau, n.d.).
  • The Agency for Healthcare Research and Quality was funded as part of the national strategy in the United States to improve the quality of care through health IT. Over the years, the AHRQ Health IT website has amassed a rich set of resources that include health IT evaluation toolkits, AHRQ-funded health IT projects, published health IT evaluation studies, and position papers on health IT adoption and evaluation (AHRQ, n.d.).
  • Members of the European Federation for Medical Informatics (EFMI) working group on Evaluation (EVAL) and the International Medical Informatics Association (IMIA) working group on Technology Assessment and Quality Improvement have published a set of guidelines for the reporting of evaluation studies in health informatics, called STARE-HI (Talmon et al., 2009; Brender et al., 2013), and for good evaluation practice in health informatics, called GEP-HI (Nykänen et al., 2011). These guidelines are invaluable resources that provide guidance on how one should design, conduct, and report high-quality eHealth evaluation studies in the field setting.
  • The Organisation for Economic Co-operation and Development (OECD, 2013) offers model surveys and other benchmarking tools related to health information and communications technologies.
  • Institute for Health Information Studies, UMIT: Researchers at the University for Health Sciences, Medical Informatics and Technology (UMIT) have published an online inventory of evaluation studies in medical informatics called the web-based evaluation database, or Evaldb (see Ammenwerth & de Keizer, 2005). This database contains over 1,800 published health IT evaluation studies and systematic reviews, and is updated on an ongoing basis. It is one of the most comprehensive inventories of eHealth evaluation studies published to date.
  • The National Institutes of Health Informatics (NIHI) provides a suite of online education sessions, including a series on evaluation with sections on qualitative and quantitative methods, which can be accessed at www.nihi.ca.

26.6.3. Formal Evaluation Courses

The School of Health Information Science at the University of Victoria has been offering a graduate level course on eHealth evaluation since 2010 as part of its MSc program in health informatics. This course is delivered as a five-day intensive on-campus workshop with two weeks of online follow-up through Web-conference sessions. The course goals are to help students: (a) understand the types of evaluation frameworks, methods and studies available; (b) become knowledgeable in how evaluation studies are designed, conducted and reported; and (c) apply evaluation findings to inform healthcare policy and practice. The workshop is made up of class lectures and discussions, case studies, guest speakers, and individual and group assignments. The assignments provide students with opportunities to appraise published eHealth evaluation studies, and to apply best eHealth evaluation practice guidelines in eHealth case examples while designing an eHealth field evaluation study. The course covers (but is not limited to) the following topics:

  • Methods of appraising and reporting eHealth evaluation studies (e.g., assessment of methodological quality, best practices in eHealth evaluation);
  • eHealth evaluation frameworks (e.g., Infoway Benefits Evaluation Framework, Clinical Adoption Framework);
  • eHealth evaluation study design and methods (e.g., quantitative versus qualitative, mixed methods, experimental, observational studies, surveys, usability studies); and
  • examples of published eHealth evaluation studies (e.g., reviews, controlled and descriptive studies).

There are other Canadian universities that offer health-related evaluation courses as part of their graduate programs in eHealth. For example, students in the MSc eHealth program at McMaster University can enrol in such elective courses as Health Economics and Evaluation (C711), Fundamentals of Health Research & Evaluation Methods (HRM721), Economic Analysis for the Evaluation of Health Services (HRM737), and Approaches to the Evaluation of Health Services (HRM762). Students in the MSc in Health Informatics program at the University of Waterloo can enrol in the Evaluation of Public Health Program (PHS614) course as an elective. There is also an MSc program in Health Evaluation at the University of Waterloo with its entire curriculum focused on program evaluation in public health and health systems. Note that the courses mentioned at these universities are not necessarily specific to eHealth.

26.7. Looking Ahead

Some important opportunities emerge from exploring capacity building for evaluation. Partnerships between academia and other stakeholders such as implementation teams, clinical users, and funders have proven so mutually beneficial as to warrant expansion. There is value in continuing to build, maintain, and share the pool of resources such as data collection tools and sample methodologies. Diversification of training opportunities, from degrees to courses, workshops, and online offerings, has been important for expanding the pool of evaluators. Integrating evaluation and optimization into the project life cycle has likewise proven valuable. Sharing and acting on the results of evaluation, both locally and more broadly, is also important, just as evidence-informed care has become the standard for clinical practice. Much progress has been made, but many opportunities remain to continue building capacity in this domain.

References

Copyright © 2016 Francis Lau and Craig Kuziemsky.

This publication is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0): see https://creativecommons.org/licenses/by-nc/4.0/

Bookshelf ID: NBK481599
