Institute of Medicine (US) Roundtable on Translating Genomic-Based Research for Health. The Value of Genetic and Genomic Technologies: Workshop Summary. Washington (DC): National Academies Press (US); 2010.

3 Pharmacogenomic Testing to Guide Warfarin Dosing

In the second clinical scenario, David Veenstra of the Department of Pharmacology and the Institute of Public Health Genetics at the University of Washington described how pharmacogenomic testing could be used to guide initial warfarin dosing and management.1 Because warfarin has a very narrow therapeutic range and because there is high inter- and intra-patient variability in response, finding the optimal dose can be challenging. While there are non-genetic factors that affect individual response, it is known that variations in two specific genes are associated with response to warfarin, and it has been suggested that pharmacogenomic-based dosing could speed up the determination of the appropriate initial therapeutic dose.

WARFARIN PHARMACOGENOMICS

Warfarin (known also by the brand name Coumadin) is an anticoagulant used for the prevention of thromboembolic events. Most commonly prescribed for patients with atrial fibrillation, it is also used to prevent clotting events in patients with mechanical heart valves or deep vein thrombosis, and it is given prophylactically prior to major orthopedic surgery. Warfarin has been in use since 1954, and in 2004 more than 16 million prescriptions were dispensed in the United States. There are currently no direct competitor drugs on the market. Warfarin is highly effective, reducing the risk of ischemic stroke by more than 50 percent compared with aspirin and by nearly 70 percent compared with no antithrombotic therapy at all.

Initiation of Treatment

The International Normalized Ratio (INR), which is used to measure treatment response in patients receiving warfarin, is the ratio of the patient’s prothrombin time to a control or “normal” sample, corrected for the sensitivity of the control reagent used relative to an international standard. If the INR is either too low or too high, the patient has a three times higher risk of a clotting or bleeding event, respectively. Serious, life-threatening bleeding events (those requiring medical intervention, such as gastrointestinal or intracranial bleeding) happen in about 2 to 10 percent of patients during the first year of warfarin treatment, and approximately 1 percent of these events are fatal. Warfarin is generally underutilized, particularly in the elderly, because of concerns about bleeding events.
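Written out, the correction described above takes the form of an exponent. With ISI denoting the International Sensitivity Index of the local thromboplastin reagent (a standard term introduced here for clarity; the paragraph above simply calls it the sensitivity of the control reagent),

    INR = (patient prothrombin time / mean normal prothrombin time)^ISI

so an INR of 2 to 3, a commonly cited therapeutic target, indicates that the patient’s plasma takes roughly two to three times as long to clot as normal plasma once reagent sensitivity has been accounted for.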

It is well known that certain clinical and demographic factors influence warfarin dose requirements, including age, race, sex, co-morbidities, concomitant medications, and diet. A clinician will adjust the starting warfarin dose based on such known factors. Once the patient begins taking warfarin, the clinician monitors the patient closely: every three to four days at first, then weekly or every two weeks, and then, once the patient is stable, perhaps every four to eight weeks. As such, warfarin dose management can already be considered “personalized medicine.” Still, a given patient’s INRs are generally in the appropriate range only 50 to 70 percent of the time. The question then, Veenstra said, is whether genomics can be used to improve warfarin management.

Warfarin Genetics

There are two genes known to be involved in outcomes related to warfarin therapy. The first, CYP2C9, codes for an enzyme that is primarily responsible for the metabolism of warfarin. Early studies identified two variants, *2 and *3, that affect the half-life of the drug. Warfarin metabolism is reduced by 40 percent in patients with the *2 variant and by 90 percent in those with the *3 variant. The prevalence of these variants in populations varies by race, occurring most often in patients of European descent and least commonly in patients of Asian descent.

Variant CYP2C9 genotypes account for about 10 percent of the variability in warfarin dose requirements. Clinical outcomes studies indicate that patients with the *2 or *3 variants have approximately twice the risk of a life-threatening bleeding event and that, during the first 90 days of therapy, that risk is actually about four times higher (Higashi et al., 2002; Limdi et al., 2008). CYP2C9 variants also affect the length of time required to achieve stable dosing. The hypothesis, according to Veenstra, is that with an increased half-life of warfarin, people with one of these gene variants are much slower to respond to dose adjustments. Correspondingly, at the University of Washington anticoagulation clinic Veenstra has observed that it takes, on average, three months longer to stabilize these patients than patients without a variant.
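The pharmacokinetic reasoning behind this hypothesis can be made explicit with a simplified first-order model (an illustration added here rather than part of Veenstra’s remarks). For a drug eliminated by metabolism, the half-life is

    t1/2 = ln(2) × Vd / CL

where Vd is the volume of distribution and CL is clearance. To the extent that CYP2C9 dominates clearance of the active S-enantiomer of warfarin, a 90 percent reduction in metabolism (as with the *3 variant) implies a roughly tenfold longer half-life, and because about four to five half-lives are needed to approach a new steady state, carriers take correspondingly longer to settle after each dose adjustment.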

The other gene of interest, VKORC1, codes for vitamin K epoxide reductase, the drug target of warfarin. The VKORC1 genotype is responsible for 20 to 25 percent of the variation in the required warfarin dose. The “A” haplotype group of polymorphisms is associated with a lower required warfarin dose, while patients with group “B” haplotypes require higher doses (Rieder et al., 2005). Interestingly, unlike the case with the CYP2C9 variants, VKORC1 gene variants have not been found to be associated with bleeding risk.

The rationale for learning an individual’s CYP2C9 and VKORC1 genotypes is that this information could guide the determination of the initial warfarin dose, allowing the clinician to stabilize the patient’s INR more quickly, reducing the number of necessary office visits, and ultimately putting the patient at a lower risk of a bleeding event. While most of the studies to date have focused on the safety-related issue of reducing bleeding events, there are also issues of efficacy in terms of administering higher doses to patients who need them.

Pharmacogenetic Testing

There is convincing evidence of the clinical validity of warfarin pharmacogenomic testing, Veenstra said. Testing for the select, informative CYP2C9 and VKORC1 single nucleotide polymorphisms is straightforward, and the International Warfarin Pharmacogenetics Consortium has recently developed a warfarin dose prediction algorithm (IWPC, 2009) using findings from nine different countries and based on the relationship between dose requirements and the known clinical and genetic factors. The consortium found that a starting dose estimated with the pharmacogenetic algorithm was closer to the required stable therapeutic dose than a starting dose estimated using clinical factors alone. The largest difference between estimation approaches was observed in patients with high dose requirements—greater than 49 mg per week—although these patients were not a high proportion of the study population. There was also some benefit of including genetic information when making estimates for patients who required low doses, less than 21 mg per week. Further support for the analytic and clinical validity of pharmacogenomic testing for warfarin has been provided by a systematic review completed in 2006 by the American College of Medical Genetics (ACMG) (Flockhart et al., 2008).
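The IWPC model predicts the square root of the stable weekly dose as a linear function of clinical and genetic covariates. The Python sketch below shows only the structure of such an algorithm: every coefficient is an illustrative placeholder chosen to give plausible magnitudes, not a value published by the consortium, and the function is not suitable for any clinical use.

    # Structural sketch of a pharmacogenetic warfarin-dosing algorithm of the IWPC type.
    # The model form (a linear predictor of the square root of the weekly dose) follows
    # the published approach; every coefficient below is a made-up placeholder, NOT a
    # published IWPC value.
    ILLUSTRATIVE_COEF = {
        "intercept": 5.6,
        "age_per_decade": -0.26,      # older patients tend to need less warfarin
        "height_cm": 0.009,
        "weight_kg": 0.013,
        "vkorc1": {"GG": 0.0, "AG": -0.9, "AA": -1.7},   # "A" haplotypes lower the dose
        "cyp2c9": {"*1/*1": 0.0, "*1/*2": -0.5, "*1/*3": -0.9,
                   "*2/*2": -1.1, "*2/*3": -1.9, "*3/*3": -2.3},
        "amiodarone": -0.6,           # interacting drug lowers the required dose
        "enzyme_inducer": 1.2,        # e.g., rifampin raises the required dose
    }

    def predicted_weekly_dose_mg(age_years, height_cm, weight_kg, vkorc1, cyp2c9,
                                 on_amiodarone=False, on_enzyme_inducer=False,
                                 coef=ILLUSTRATIVE_COEF):
        """Return an illustrative (non-clinical) weekly warfarin dose estimate in mg."""
        sqrt_dose = (coef["intercept"]
                     + coef["age_per_decade"] * (age_years / 10.0)
                     + coef["height_cm"] * height_cm
                     + coef["weight_kg"] * weight_kg
                     + coef["vkorc1"][vkorc1]
                     + coef["cyp2c9"][cyp2c9]
                     + (coef["amiodarone"] if on_amiodarone else 0.0)
                     + (coef["enzyme_inducer"] if on_enzyme_inducer else 0.0))
        return sqrt_dose ** 2   # undo the square-root transformation

    # With these placeholder coefficients, a 70-year-old, 170 cm, 70 kg patient comes out
    # at roughly 39 mg per week with wild-type genotypes ("GG", "*1/*1") and roughly
    # 13 mg per week with the sensitive genotypes ("AA", "*1/*3").

The point of the example is that a high-sensitivity genotype sharply lowers the predicted starting dose, corresponding to the low-dose patients (less than 21 mg per week) for whom the consortium found some benefit of including genetic information.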

While such studies support the clinical validity of pharmacogenomic testing to guide warfarin dosing, few studies have been done that provide direct evidence of clinical utility. Indeed, the 2006 ACMG report concluded that no study had shown testing to be effective in reducing high INRs, shortening the time to a stable INR, or limiting the number of serious bleeding events. A more recent systematic review by researchers at the University of California, San Francisco, did not find sufficient evidence to support the use of genetic testing to guide warfarin therapy (Kangelaris et al., 2009). Additionally, the Clinical Practice Guidelines of the American College of Chest Physicians state explicitly that “we suggest against pharmacogenetic-based dosing until randomized data indicate that it is beneficial” (Ansell et al., 2008).

Veenstra raised several issues that should be considered when the necessary randomized controlled trials are conducted, including: selection of comparator (should it be against an algorithm that uses clinical information, standard of care, or intense monitoring?); statistical power (if bleeding events are the primary outcome, the trial will require 5,000 to 10,000 patients to be sufficiently powered, assuming a 4 to 8 percent risk of a major bleed in the first year of therapy and a 25 percent relative risk reduction); and use of surrogate markers (trials may be designed with the percentage of time that INR is in range as the primary outcome).
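The sample-size figure can be reproduced to a first approximation with a standard two-proportion calculation. In the Python sketch below, the two-sided 5 percent significance level and 80 percent power are conventional assumptions added here, and the 25 percent figure is interpreted as a relative risk reduction; only the baseline event rates come from Veenstra’s remarks.

    # Rough two-arm sample-size check for a major-bleeding endpoint.
    # Assumptions added for illustration: two-sided alpha = 0.05, power = 80 percent,
    # control-arm bleed risk of 4-8 percent in year one, 25 percent relative risk reduction.
    from statistics import NormalDist

    def total_patients(p_control, rel_risk_reduction, alpha=0.05, power=0.80):
        p_treated = p_control * (1.0 - rel_risk_reduction)
        z_alpha = NormalDist().inv_cdf(1.0 - alpha / 2.0)
        z_beta = NormalDist().inv_cdf(power)
        variance = p_control * (1 - p_control) + p_treated * (1 - p_treated)
        n_per_arm = (z_alpha + z_beta) ** 2 * variance / (p_control - p_treated) ** 2
        return 2 * round(n_per_arm)

    for risk in (0.04, 0.06, 0.08):
        print(f"baseline risk {risk:.0%}: about {total_patients(risk, 0.25):,} patients")
    # Totals of roughly 10,600, 6,900, and 5,100 patients, bracketing the
    # 5,000 to 10,000 range cited above.

Nothing in this arithmetic is specific to warfarin; it simply illustrates why a trial powered on infrequent bleeding events, rather than on a surrogate such as time in therapeutic range, needs thousands of patients.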

Veenstra described several randomized controlled trials that assessed or are in the process of assessing the impact of genotype-guided dosing on clinical outcomes. A study by Anderson et al. randomly assigned 200 patients to two groups that used either genetic information in addition to clinical information or clinical information alone (Anderson et al., 2007). Overall, the investigators found no significant difference between these two groups in the time that INR was in range, but there did seem to be a trend toward benefit for certain patient groups. Another study by Caraco et al. reported a shorter time to first therapeutic INR and to first stable INR in patients using CYP2C9-guided warfarin therapy (Caraco et al., 2008), although Veenstra noted that this study is somewhat difficult to interpret because the control and study groups had different follow-up periods. Both studies give some indication of potential clinical utility, though neither is conclusive. An ongoing trial that may provide more definitive results is the NIH-funded Clarification of Optimal Anticoagulation through Genetics (COAG) trial. The trial will enroll about 1,200 patients and will compare a clinical algorithm with a clinical-plus-genomic algorithm, assessing the percentage of time that INR is in range in the first month of treatment as the primary outcome.2 Patients will be followed for at least three months and up to one year. Secondary outcomes will include bleeding and clotting events.

There have been several cost-effectiveness studies, Veenstra said. Recent studies have come to the conclusion that there is a great potential for cost savings, but that it cannot be realized until testing costs decrease and the uncertainty concerning effectiveness is reduced (Hughes and Pirmohamed, 2007; Veenstra, 2007; Eckman et al., 2009; Patrick et al., 2009; Meckley et al., 2010).

The Centers for Medicare and Medicaid Services (CMS) recently issued a coverage decision for warfarin pharmacogenomic testing based on the current evidence available. The decision states that CMS will only reimburse testing if the patient is enrolled in a randomized controlled trial with sufficient power to detect major bleeding and thromboembolic events. This is a “coverage with evidence development” approach, Veenstra explained. Likewise, in January 2010 the Food and Drug Administration (FDA) used the information derived from the IWPC report to update the drug label for warfarin to include dose ranges based on pharmacogenomic information. Together, this and a previous label update in 2007 inform healthcare providers about the association between warfarin dosing and variants of the CYP2C9 and VKORC1 genes, but they do not require that pharmacogenomic testing be done.

In summary, Veenstra said, there is a validated relationship between warfarin dosing and two different genes. There is a plausible benefit that could be derived from testing, but there are not enough data providing direct evidence of clinical utility. Lastly, the evidence requirements for decision-making are variable and dependent to a certain degree on stakeholder perspective.

PANEL REACTION

Pharmacy Perspective

Anna Garrett, manager of outpatient clinical pharmacy services and a clinical pharmacist practitioner at Mission Hospital in Asheville, North Carolina, discussed her experience managing a large group of patients being treated with warfarin and other injectable agents. About four years ago, Garrett said, sales representatives began talking about genetic testing and warfarin dosing. At that time there was not a lot of evidence to support it, and she noted that she is still of the opinion that there is not enough evidence to suggest that pharmacogenomic testing should change what is being done clinically.

One challenge in an outpatient pharmacy clinic setting is the time it takes to obtain pharmacogenomic test results. If patients are being carefully managed in a controlled situation, then it will be possible to identify those patients who are highly sensitive to warfarin even before the genomic test results come back, which can take as long as five days after the test is sent out to the laboratory. It is instead the patients who are slow responders to warfarin for whom the genomic information could be of value.

Economics is another concern, Garrett noted. The majority of her patients live in rural areas, are largely on Medicare, and cannot pay the $300 to $400 cost of testing. Thus, in order for pharmacogenomic testing to be useful for this patient base, the price would have to drop significantly, she said.

Although such tests are not available, Garrett said it would be a real benefit to have genetic testing that could identify those patients who are more likely to have adverse drug interactions (e.g., an exaggerated reaction to warfarin and amiodarone, or an adverse response to the combination of warfarin and acetaminophen or ciprofloxacin). While such reactions are uncommon, they do occur in a subset of patients. As a pharmacist, Garrett said, she finds the prospect of individualized drug therapy based on genetic sequence to be exciting, especially the ability to know which products to treat patients with first, rather than having to try three or four different drugs before finding the one that works best for that patient.

Regulatory Perspective

Elizabeth Mansfield of the Office of In Vitro Diagnostic Devices at the FDA offered firsthand insight into the warfarin label change that Veenstra discussed. The FDA’s primary responsibility is the safety and effectiveness of products, Mansfield said. When information becomes available that could potentially reduce adverse events, a timely label change is warranted. An earlier warfarin label change in 2007 told care providers they could use VKORC1 and CYP2C9 testing to try to adjust the patient’s warfarin dose, but it did not provide any further information, as there was little known at the time. As more data became available, the agency was able to conduct a meta-analysis of a number of studies and derive dose recommendations based on genotype. This is not predictive in any way, Mansfield cautioned, as there are still limited data on outcomes, but the label was changed to provide information to those who felt that they could use it.

Mansfield agreed with Garrett that, while genomic testing could be beneficial in guiding the initial warfarin dose, the testing generally does not have a fast enough turnaround time to be useful in this capacity, although some types of point-of-care tests could be envisioned in the near future.

Another focus of Mansfield’s office at FDA is the quality of the test being performed. Many of the available genomic tests are laboratory developed and most likely are not reviewed by FDA for their performance characteristics. Of particular concern for genomic testing for warfarin dosing is that before taking warfarin, patients do not express any observable phenotype that would suggest that they have a particular allele of CYP2C9 or VKORC1. Then the patients have one laboratory test done, and the results are assumed to be correct. As such, testing needs to be very accurate, and FDA sets a very high bar for approval—greater than 99 percent accuracy with a 95 percent lower bound confidence interval, Mansfield said. But the FDA can only regulate the tests that it is aware of.

Pharmacogenomic testing, if used, must be used carefully, and in concert with INR, Mansfield concluded.

Private Practice Perspective

Dennis Salisbury, a family physician at the Rocky Mountain Clinic in southwest Montana, addressed how doctors in private practice make the decision whether to recommend pharmacogenomic testing to patients. Reiterating some of the points he made when discussing the Lynch syndrome example, Salisbury said that it is important that there be a clear association between a genomic test and a condition and that there should also be relevant information about the incidence and prevalence of the condition, the severity of the illness’s impact, and the potential benefits of the genomic testing. The cost of testing is also a factor. Ultimately, whether to proceed with the testing must be the patient’s autonomous decision, but the validity and predictive value of the testing have the greatest weight in determining his recommendations to patients, Salisbury said.

Considering the warfarin example, Salisbury recalled a male patient in his mid 40s who had worked in rice paddies in Southeast Asia most of his life before moving to the United States. The patient had had a valve replacement to address an aortic murmur and aortic insufficiency, and he was sent home, apparently without proper anticoagulation management. He arrived back at a hospital with cardiac tamponade and a prothrombin time greater than 100, which is very high, Salisbury noted. Following pericardiocentesis, the patient developed infective pericarditis and osteomyelitis, and he had to have a muscle flap transposed to hold the bones of his chest together. Salisbury commented that this is a very unfortunate example of what can happen when warfarin is not well managed.

Pharmacogenomic testing to guide warfarin dosing has some potential benefits for patients, Salisbury said, as well as some potential to improve the way providers manage these patients. Furthermore, the fact that a 2005 report found warfarin to be one of three drugs responsible for one-third of prescription drug-related emergency department visits suggests that there is a potential financial benefit for health systems as well.

Anticoagulation management is challenging for a small practice, Salisbury said, as it involves calling the patients in, adjusting their dosing, and arranging for another test. Thus, finding a way to make this an easier and more accurate process would also benefit private practice.

Overall, Salisbury said, the data are very exciting and very hopeful, but they do not yet prove that the test is really of value. There are costs to consider besides the cost of the genetic test, he noted. There is a potential for cost savings if INRs could be done less frequently, for example. There are also cost and time savings for patients if they can travel less frequently and miss less work for testing. Finally, there is the cost of the low molecular weight heparin that patients are given during the time when the correct warfarin dose is being determined, which could be reduced if a therapeutic dose is reached sooner. All these costs should be considered when weighing benefits. But right now, Salisbury concluded, the benefit cannot be established, in part, because of the laboratory turnaround time for the test, but also because there are not yet sufficient data to prove cost savings, time savings, or, most important, improvement of outcomes.

Insurance Provider Perspective

Palmetto GBA is a subsidiary of Blue Cross and Blue Shield of South Carolina that provides technology, training, finance, and customer service solutions for health care, including overseeing Medicare benefits. Arthur Lurvey, an endocrinologist and internist and currently a Medicare medical director at Palmetto GBA, provided his perspective on coverage decisions.

Medicare is an insurance plan, Lurvey reminded participants, not a health plan. Medicare pays for the services involved in the diagnosis and treatment of an illness or injury or to repair a damaged organ, but it does not pay for screening, as per the original law. A number of laws passed subsequent to the establishment of Medicare specified additional coverage for colorectal screening, diabetic screening, lipid screening, mammograms, and PSA tests for prostate screening. Pharmacogenomic testing to guide warfarin dosing is different from these cases in that it involves patients with no particular disease, so coverage is not straightforward based on Medicare law. However, Lurvey said, if the government were to decide that testing is related to a particular condition, then perhaps it could be covered under existing precedent. In fact, the test is currently covered when conducted in the context of a randomized, controlled clinical trial.

For the government—and for many of the insurance companies that tend to follow the government—there are two types of coverage, Lurvey explained. National coverage is determined based on a study of all the relevant literature and expert testimony at an open meeting of the Medicare Evidence Development and Coverage Advisory Committee (MEDCAC). Local coverage is determined by medical directors such as Lurvey, who work with others to determine if Medicare should pay for a particular cost in a particular region when there is no national decision.

In California, for example, there are many individual, small- and medium-sized laboratories that are seeking coverage approval for “home brew” genetic and genomic tests. Since the laboratories conducting these tests are located in California, the “local” coverage decisions are essentially for the whole country, Lurvey said. Unfortunately there are no long-term or even medium-term studies sufficient to guide a coverage decision at this time.

When considering coverage, medical directors review white papers or recommendation guidelines from the relevant specialty societies as well as technical advisory committee reports from Blue Cross and Blue Shield or others, and they talk directly with professors and other academicians in the field. They are not necessarily trying to determine if the test has validity (because most of them do have reproducibility and validity) but rather whether it also has utility. Does it make a difference in either the prognosis of a patient or the treatment path? The other aspect they must consider is cost. While CMS generally does not factor cost into coverage decisions if the test has efficacy, quality, and reproducibility, Lurvey said that a particular concern is that these individual pharmacogenomic tests are going to add a large cost to the system. Everyone will bear the costs through insurance premiums or taxes, he said, whether the tests are paid for by insurance companies or federal or state agencies.

Ultimately, more data are needed on whether these tests have utility and on whether they make a difference in the way that physicians and other providers give treatment or therapy to their patients.

Panel Discussion

Defining Value

Marc Williams, who moderated the discussion, asked the panelists what they look for in determining the value of a test. All the panelists agreed that value encompasses improvements in clinical outcome and quality of life as well as reductions in complications and in morbidity and mortality. Some type of measurable difference in the outcome for the patient is important. Garrett said that a test should provide information that cannot be obtained by other means. Cost is also important, Salisbury added, especially given the number of patients in the community who do not have insurance. Convenience and satisfaction should also be considered. In the case at hand, for example, it would be easier to do an INR test once every three weeks than once every three days. Lurvey said that prognostic information is very helpful, particularly if it can guide decisions on whether or not to treat. In assessing value, Veenstra said, evidence-based medicine is key. Representing the FDA view, Mansfield said that the test must also be safe and effective.

Data Collection and Interpretation

Williams noted that it is difficult to translate randomized controlled trial findings into real world clinical practice. Large population-based studies may better reflect what is really happening in practice, but these studies have other issues. Williams asked the panel to comment on how data collection should be balanced in order to better answer the question of value in a timely fashion.

All agreed that no single approach to data collection is sufficient. A challenge in interpreting randomized controlled trials, Veenstra said, is that, for the most part, patients in control groups are not really getting “usual care.” They will be receiving high-quality care at an anticoagulation clinic or academic medical center. Evidence arising from real-world trials may be more useful and more generalizable. Garrett concurred, adding that investigators and participants in randomized controlled trials “behave very well” and may not give a good indication of real-world experience versus usual care. Lurvey added that the Hawthorne effect, in which behavior changes because investigators or participants know they are part of a study, also needs to be taken into consideration. While controlled trials do not necessarily represent the real world, Mansfield said, they do give a sense of the potential for what could be achieved if everyone received controlled care. There needs to be a way to determine the potential for a test before looking at the uncontrolled situations in which it is difficult to tease out the variables. Salisbury agreed and suggested that more attention should be paid to practice-based research networks. He also said that the broader use of electronic health records will make it easier to gather data on real care situations and real outcomes. Lurvey suggested that it also might be useful to consider the tremendous amount of claims data that is available.

In terms of the design of trials, Veenstra said, if this test were a drug that had significant potential for revenue, there would have been a lot more invested in its development. Trials of the test have been designed without much information about the optimal design of the intervention. Veenstra and colleagues ran clinical trial simulations and found that elements such as the design of the trial, when dose adjustments are made, and how information on the half-life effect is utilized can modify the effectiveness of the intervention. These are the types of factors that need to be considered when developing a randomized controlled trial. The first step is to define the questions that really need to be answered in order to have well-designed, adequately powered studies that will generate useful data.

Mansfield noted that the FDA works to ensure that any device being developed is answering a specific question to begin with, which helps guide companies toward defining the intended use of the device. The FDA also has the authority to require post-market studies for devices, but only to the extent of determining safety and efficacy in populations or circumstances beyond the original application. Warfarin is an old drug, and there is limited authority to go back and enforce any changes unless there is a safety issue (which is how the label change was posed). Mansfield said that it would certainly be possible for FDA to work with others to define the key questions and the appropriate trial design, but the agency could not mandate those things.

Genotype Versus Expressed Phenotype

Management at anticoagulation clinics has been shown to improve outcomes in patients that are on anticoagulant medications, but it is not possible to have anticoagulation clinics everywhere, Williams said. Given that, would genotyping be more appropriate in certain settings? Would an inexpensive point-of-care test be of greater value in a rural practice than in a large academic center that can afford to have an anticoagulation clinic? And how do we explore when other types of interventions may be more appropriate than genotypic methods? As an example, Williams noted that home INR monitoring is not available in the United States, but it is done in Europe, and outcomes are much better there.

Veenstra responded that there is a great potential benefit for genetic testing in populations that are rural or underserved, but those are populations that are challenging to study. Epidemiological approaches may help. One question could be, for example, whether someone with a variant CYP2C9 gene who lives in a rural area, where he or she may not be seen as frequently, is at a higher risk than someone who is seen on a regular basis. Correspondingly, is it better to try to provide such people with genetic testing or with improved anticoagulation services? Garrett agreed that genetic testing would be beneficial for people who are in more rural areas and do not have access to anti-coagulation clinics, especially if it is given in combination with home INR testing, although she noted that this type of change in service would lead to a severe drop in laboratory revenue.

Lurvey said that a fair number of INR tests are still necessary when warfarin therapy is initiated. Genotyping can help to identify a starting level and may indicate if a much higher dose will be needed, but it does not change the amount of testing required. He reminded the panel that the genetic test has to be interpreted and is only helpful if used correctly. Just making the test available will not necessarily be of much help to providers who do not do such testing very often and do not know how to use it. Mansfield replied that the testing, as described in the warfarin label, can provide some confidence to the physician, helping to guide the physician if the patient has a poor INR.

The study done at Intermountain found that the total number of INRs done in the entire population that was genotyped was, in fact, reduced, and this was factored into the economic analysis, Williams said. One question that has not been addressed is when to draw the INR. Particularly for those with CYP2C9 variants, when the INR is drawn and when the dosage adjustments are made may have a significant influence on safety. The INR measurement can also be imprecise. A measurement of 3, for example, could be anywhere between about 2.7 and 3.3. One problem that was discovered, Williams said, was that if the patient had a 3.1 measurement, a dose adjustment was made to bring it closer to 3. Williams referred to this as “tampering,” where a process that is already stable is adjusted. Intermountain then put into effect a process improvement which calls for the dose to remain unchanged but the INR to be measured again a little sooner if the patient is in a 10-percent range on either side of the target range. As a result, the amount of time spent in range increased from 50 percent to 75 percent for the patients in that population.
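A minimal sketch of the “don’t tamper” rule is given below in Python, under one reasonable reading of the description: results inside the target range are handled as usual, results just outside either end of the range (within the 10 percent buffer) prompt an earlier re-check without a dose change, and anything farther out follows the dosing protocol. The target range of 2 to 3, the re-check intervals, and that reading of where the buffer sits are assumptions added for illustration.

    # Illustrative triage of a single INR result under a "don't tamper" rule.
    # Range boundaries and re-check intervals are assumptions, not Intermountain's
    # actual protocol parameters.
    def next_step(inr, target_low=2.0, target_high=3.0, buffer=0.10,
                  usual_recheck_days=28, early_recheck_days=7):
        if target_low <= inr <= target_high:
            return "in range: keep dose, re-check in %d days" % usual_recheck_days
        # Just outside either end of the target range, within the 10 percent buffer:
        # leave the dose alone and simply bring the patient back a little sooner.
        if target_low * (1 - buffer) <= inr < target_low or \
           target_high < inr <= target_high * (1 + buffer):
            return "near range: keep dose unchanged, re-check in %d days" % early_recheck_days
        return "out of range: adjust dose per dosing protocol and re-check"

    print(next_step(3.1))   # near range: keep dose unchanged, re-check in 7 days

Under this rule the 3.1 result in Williams’s example would no longer trigger a dose adjustment, which is the behavior change credited with raising time in range from 50 to 75 percent.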

OPEN DISCUSSION

Electronic Health Records and Data Capture

A participant noted that it is currently very challenging to try to compare data across electronic health record systems because there are no data element standards. Williams said that a provision in the HITECH Act of 2009 calls for meaningful use standards for electronic health records, but it does not recognize the potential that standardized electronic health records could have for research. The FDA Amendments Act of 2007 (FDAAA) addresses post-market data gathering, and the issue of infrastructure, Mansfield said, is at the forefront both for healthcare organizations with electronic health records and for the FDA with its own data management infrastructure. Mining data from electronic health records can help provide a picture of how medicine is really practiced across the country and what care patients are getting, Salisbury said. Lurvey concurred and added that examining claims data allows one to see the existence of marked differences in practice patterns between urban and rural areas as well as across different states.

Williams noted that there are rich amounts of observational information in medical records, but virtually none of it is captured in a structured or organized way. As such, researchers must resort to working with billing codes or other procedure codes in their search for data. More focus is needed on how to capture this clinical information at a rich level so it can be used not only for the current study but for future analysis as well. Lurvey pointed out that it is difficult to anticipate what data will be of interest in the future. Williams said the goal should be to capture all possible information, but Lurvey said that such an approach would be expensive. Williams responded that data capture and storage are not expensive beyond the original infrastructure costs.

Coverage with Evidence Development

A coverage decision from CMS regarding evidence development directly applies only to the Medicare population. A participant noted that she is working with Aetna, United, and a number of other plans around the country to develop a similar protocol across systems and thus a greater patient population. Currently each plan does its own evidence development and technology assessment, she said, so bringing plans together to develop a unified approach is complicated. With the tremendous pressure on private sector plans to cover genetic tests, this type of wide-ranging coverage decision could produce the evidence required to determine clinical utility.

Payers do pay for coverage with evidence development in children’s oncology, Williams said. In essence, every child with cancer in the United States is being treated with experimental protocols. This has had a tremendous impact on the knowledge base about these rare tumors and how best to treat them—an effect that is measurable in terms of outcomes and morbidity and mortality. It is not clear, however, whether such a model could be translated to other areas, such as pharmacogenomics.

Translation into Clinical Practice

A participant mentioned the recent black box warning added to the label for Plavix, which notes that patients with CYP2C19 variants may not metabolize the drug as expected. There has been a lot of discussion that this might drive an increased use of genetic testing for prescribing Plavix, he said, even though there is an alternative—prasugrel—whose metabolism does not appear to be affected by the same genetic mutation. Would a different standard be applied in a case like this in which there is an alternative, or would the same standard be applied to all pharmacogenetic opportunities? Garrett responded that decisions will probably be on a case-by-case basis. Offering the physician perspective, Salisbury said that he does not anticipate that situations such as the Plavix warning will drive a more general adoption of genetic testing; rather, it will drive physicians and other providers toward a product that is easier to prescribe.

Because every case has a different risk–benefit trade-off, Mansfield predicted that decisions will continue to be made on a case-by-case basis. Veenstra agreed, noting that the Plavix case has the potential to change people’s perceptions to a certain degree about pharmacogenetic testing. If the warfarin test was for a drug interaction, there would be no question, he said, and testing would be the standard of care. Or if the test cost $5 and the results were available in 5 minutes, it would likely be widely done. There is some point at which the evidence threshold changes, and where that point is will most likely be determined by safety considerations. Another important consideration, Williams added, is how broadly a drug is used. A drug like warfarin, which may have two million new users each year, will require a different evidentiary standard than a drug for a rare disorder that might be used 100 to 200 times a year.

Lurvey noted that many groups have been urging CMS to make a national decision on genetics and genomics in general. At some point the government may make a decision on payment or coverage for this wide range of tests, but the government won’t determine which tests are best.

For this particular case, roundtable chair Burke observed, there is agreement across the panel that there just is no convincing evidence yet. This raises two key questions for the translation of genomic-based research: What evidence is compelling in each case, and how can that evidence be obtained? Certain kinds of research infrastructures might lead to more efficiency, she said, but ultimately it comes down to who is going to pay for what level of evidence. Given that cost is always a concern in this kind of situation, are there ways to collect quality evidence more efficiently? Burke said that one model might be provided by the development of the Oncotype DX test, a prospective-retrospective approach in which a hypothesis was developed and then addressed using existing specimens from prior randomized controlled trials. We need to think innovatively about evidence in that way and apply it as broadly as we can, she said.

Veenstra suggested that two approaches will come out of the comparative effectiveness research area: one a priority-setting process involving multiple stakeholders providing input on study design, and the other a quantitative approach to assessing the potential benefit of the research, such as value-of-information analyses.

A participant noted that there are different settings in which health care is provided, and different ways that warfarin treatment is initiated (i.e., inpatient versus outpatient initiation, rural versus academic medical center, different indications for warfarin use, and different target INRs depending on the indication). When considering a warfarin genotyping study, which of those settings should be used, what are the most important questions to answer, and how should those questions be formulated?

Another gap, Williams said, is research around implementation. If something is found to work, how is it then implemented so that it works for everyone? At Intermountain, for example, implementation of Lynch syndrome testing is automated. Tissue from colorectal cancer patients follows a standardized pathway, and ordering Lynch screening is not dependent on any individual care provider. Veenstra added that ideally there would be a generalizable model that can be modular and adaptable because it is not acceptable to wait an average of 17 years from the point when enough evidence is collected to demonstrate clinical utility until the time that a test is fully implemented.

Footnotes

1. The complete scenario provided to workshop participants is available in Appendix D.

2.

Copyright © 2010, National Academy of Sciences.