NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Madame Curie Bioscience Database [Internet]. Austin (TX): Landes Bioscience; 2000-2013.


Why Government Is the Problem (and What to Do about It)

Introduction

P.J. O'Rourke, the political satirist and author, has said that giving money and power to government is like giving whiskey and car keys to teenage boys. I agree, but he raises a central question addressed from many perspectives in this volume: Why is government the problem?

There are a number of reasons. At the root of many is bureaucratic and commercial self-interest in conflict with the public interest.

In this chapter I consider both the relationship between the biotechnology industry (as represented by its trade associations) and federal agencies; and the system of bureaucratic incentives, disincentives, rewards and punishments within the FDA and EPA that affects officials' decision-making.

There are some possible solutions but there is no magic bullet or instant formula. Some extend from first principles, others from empirical experience. Some are sweeping and attempt to introduce systematic changes, others are specific and mechanistic. The point is—change is both needed and possible. This chapter presents constructive proposals.

Essay 1 Self-Interest and the Formulation of Public Policy

Onerous regulatory policies? Blame industry!

America learned long ago that what's good for General Motors isn't necessarily good for the country. This axiom applies equally well to the biotechnology industry. For years a triad of industry titans (the Monsanto Company, Ciba-Geigy Corporation and Pioneer Hi-Bred International) has dominated the biotechnology industry's trade associations. It hardly sounds plausible that industry would wish to crank up the level of regulation, but consider how these companies stand to benefit from regulation that acts as a market-entry barrier. Such barriers can be a potent weapon against smaller, highly innovative competitors.

The biotechnology industry has not driven a single major regulatory improvement in the past 20 years. Its trade associations have been, at best, tentative on a variety of regulatory issues. Since the mid-1980s, they have lobbied feebly, accepted—and even proposed—policies and compromises that disadvantage their own members in the competitive marketplace. They have often shown a myopic fixation on creating governmental assurances for environmental and consumer activists, at any cost to innovation and competition. But this myopia is neither the only nor the best explanation. In a Faustian bargain, overregulation gives big companies market-share protection and shelter for existing products while government bureaucrats get bigger budgets and empires. Highly innovative small and mid-sized companies and the nation's academic research enterprise find R&D impeded by excessive costs and delays. American innovation and good old-fashioned competition go to hell.

Biotechnology trade groups' checkered past

A few examples will illustrate how the biotechnology trade associations served the interests of a few big and influential companies in an anticompetitive campaign inimical to the public interest.

In the mid-1980s, North Carolina enacted a law creating a new state entity specifically to regulate field trials of rDNA-modified organisms. The new regulations required extensive evaluation of field tests of common crops such as corn, wheat, tomatoes and tobacco. The state's rules were both scientifically flawed and superfluous to public and environmental health. Even the state regulators involved with their creation and implementation were apologetic about them.

Representing the industry at the time were the Association of Biotechnology Companies (ABC) and the Industrial Biotechnology Association (IBA), which later merged to form the single Biotechnology Industry Organization (BIO). To the bewilderment of the biotechnology community, the ABC endorsed the North Carolina law. An ABC official observed privately that his organization, vying with the IBA for visibility and members, had to do something—anything—to distinguish itself. It succeeded.

Another episode occurred during a 1986 public meeting of the federal interagency Biotechnology Science Coordinating Committee. The BSCC had been established to, among other goals, enable various biotechnology stakeholders and interest groups to have a voice in policy decisions. At the time, the federal agencies were struggling with the question of the “scope of regulation,” that is, whether regulation should focus on the use of gene-splicing techniques or on the risk-related characteristics of products. At the 1986 meeting, ABC and the Pharmaceutical Manufacturers Association (PMA) described the enormous investments that their members routinely made in R&D and the importance of their new products to consumers and the U.S. economy. They argued for a scientifically sound, risk-based and predictable approach to regulation. Poorly conceived and burdensome regulation, they said, could put whole industrial sectors and tens of thousands of jobs at risk. Then, the IBA's representative took the podium. He told the assembled audience, including TV cameras, that his members were tired of discussing regulation: what companies really needed from government was—more databases. Databases? One person in the audience was so astonished that he literally fell out of his chair.

These bewildering incidents were typical of the trade associations' contributions to the debate in the 1980s. The associations were characteristically unreliable whenever Reagan and Bush administration officials requested their support for scientifically defensible regulation at EPA and USDA.

The BIO monolith

BIO, which now represents the ostensibly coalesced national interests of the U.S. biotechnology industry, has pursued a strategy defined largely by myopia and low expectations on major issues. Its communications with the Clinton administration have typically focused on resource issues. With two exceptions, regulation has fallen from BIO's agenda. The two exceptions are FDA reform and the Convention on Biological Diversity (CBD, or more colloquially, the Biodiversity Treaty).

Consider the recent BIO proposals for FDA reform. It was only after relentless attacks on the FDA in major newspapers,1 the announcement of several reform initiatives by think-tanks,2 and the publication of a detailed reform proposal3 that BIO finally was compelled to address the need for reform. By contrast, European companies have consistently worked through the Brussels-based Senior Advisory Group, Biotechnology, exerting pressure to revise EU regulations. The slow-moving BIO finally released its FDA reform proposals in early 1995 but offered little beyond what the Clinton administration and the FDA had already conceded (see chapter 3).

BIO's most serious policy failings have been in the realm of EPA and FDA regulation, and the CBD. R&D on bioremediation and pest-resistant garden and crop plants is among the early casualties. A report on the evolution of U.S. biotechnology prepared for Canada's National Biotechnology Advisory Committee even listed Carl Feldbaum, BIO's president, among those promoting the “guilty until proven innocent” view of biotechnology product regulation.4 BIO's position does not bode well for the entrepreneurial agricultural and environmental biotechnology companies. The association has actually been hostile to innovation and small business development in these emerging sectors (vide infra).

BIO and the EPA

Consider the cozy alliance between the powerful BIO and the EPA. For a decade, new regulations under the EPA's pesticide and toxic substances statutes have sidestepped scientific advice in order to single out for special scrutiny new biotechnology products that could supplant chemical pesticides and clean up toxic wastes. Scientific and professional associations such as the American Society for Biochemistry and Molecular Biology and the American Society for Microbiology, and the University of California's Systemwide Biotechnology Program, have criticized the EPA's general approach and its specific proposals as being unscientific, barriers to early stage research and, ultimately, contrary to the public interest. A common criticism has been that agency policies impede the progression from publicly funded research to tangible public benefits. (See, also, the discussion in chapter 3 of a report by 11 scientific societies which criticizes EPA's plant-pesticide policy.)

BIO, in contrast, has not joined the denunciation. In fact, it has consistently endorsed and defended the EPA's proposals including those (under both the pesticide and toxic substances statutes) that single out small-scale field trials of rDNA-manipulated microorganisms for regulation while exempting all others. In return, EPA policies have been crafted in a manner that allows exemptions from regulation for certain specific (and seemingly arbitrarily-selected) products—plants modified by the addition of a viral coat protein gene, for example. Such actions by EPA selectively benefit those companies far enough along in R&D to have gotten their products anointed and which possess sufficient regulatory experience to recognize the opportunity. Other companies, most of which are small and in earlier stages of R&D, are left behind to struggle with the new regulatory barriers.

BIO even supports the EPA's bizarre 1994 proposal to expand regulation of chemical pesticides to include whole plants manipulated with rDNA techniques—products that would enable farmers to use smaller amounts of chemical pesticides while sustaining crop yields. (The final rule still has not been issued; presumably, EPA intends to delay until after the November 1996 elections, which could change the now Republican-controlled Congress.) This proposed rule serves no useful function. New pest-resistant plant varieties selected or crafted by older and less precise techniques have, after all, a long history of safe use—without any governmental regulation, let alone regulation as pesticides.

In 1995, BIO lobbied Congress feverishly to defeat a pivotal legislative rider, the so-called Walsh amendment, which would have denied EPA funding where “whole agricultural plants [are] subject to regulation by another federal agency” (see chapter 3). The Walsh amendment would have both limited the agency's expanding influence and shifted the regulation of many of these products to the FDA under its risk-based 1992 policy on new-biotechnology foods.5 Unblushingly, BIO asserted that it and its member companies had “enjoyed a good working relationship with EPA” and described EPA officials as “flexible in their regulatory approach” and as being eager to minimize the impact of regulation on biotechnology.6

The amendment passed the House by a 210-210 procedural vote despite the attack. At the time, a spokesman for BIO promised that the association would continue the battle in the Senate. The Senate subcommittee chairman responsible for the EPA appropriation was Christopher S. Bond, a Republican from Missouri, Monsanto's home state. The Senate subcommittee did not take up the amendment, and, to no one's surprise, Bond saw to it that it died in the House-Senate conference.

In their lobbying against the Walsh amendment, BIO and Monsanto worked closely to court key congressmen and strong-arm supporters of the amendment. An official of the Institute of Food Technologists, a nationwide professional association that supported the amendment, described their experience with BIO and Monsanto:

I just had a meeting this morning with a delegation from BIO and Monsanto who trooped into town to educate me. All went well for the first hour and then, as time was nearing a close, they got down to the brass tacks of our [pro-amendment] position on the Walsh amendment. Still in my ‘sweet’ finding-common-ground way, I indicated that many [in our organization] supported our position and that we did not see any way to yield on that position. Bob Harness, VP for Registrations and Regulatory Services [at Ceregen, a Monsanto subsidiary] converted to Mr. Hyde when I indicated that we saw no reason to adjust our views. At that, he rose from the table, beet red, and declared that they would continue to fight us.7

Before leaving, the visitors demanded lists of the Institute's financial supporters.

BIO has thus been lobbying for new and duplicative regulation of plants by the EPA. It is significant that the BIO-favored EPA approach creates new barriers to commercial success for smaller, more cash-strapped companies and for academic researchers. The proposed plant-pesticide policy would discourage fundamental university research on the nature of pest-host interactions needed to develop biological pest control products. The most pernicious effect of BIO's approach is that it treats academic research as expendable. Some in industry seem to have forgotten that the genesis of the new biotechnology was the confluence and synergy of basic research in academic biochemistry, microbiology and analytical chemistry. Alternatively, do BIO members consider publicly funded research to be a competitive threat? Either way, BIO's cynical strategy undermines the foundation of U.S. competitiveness in high technology—viz, federally supported R&D.

Devotees of conspiracy theories may find it interesting that the implausible EPA plant-pesticide policy was crafted by Bush administration EPA Assistant Administrator Linda Fisher, who passed through the government's revolving door and is now vice president for government relations at Monsanto. It is noteworthy, too, that the large agricultural companies that control BIO are among the world's largest producers of chemical pesticides and control significant market shares of the pest control market. (Monsanto's sales of crop protection and lawn-and-garden products in 1995 were $2.47 billion, for example.)

BIO and the FDA

BIO and the Clinton regulators have often danced cheek-to-cheek. In 1993, BIO was an ally in FDA's plan to require the premarket registration of new biotechnology foods (see chapter 3). This single change would disrupt a successful and progressive 15-year FDA policy of treating biotechnology products the same as similar nonbiotechnology products. The principal beneficiaries would be only those who wish to slow the development of the new biotechnology products. The July 18, 1994 BIO Bulletin claimed that the association was playing hardball with the FDA, by accepting premarket registration while rejecting premarket approval. But premarket approval had never been a serious option.

Experts in plant science, food and nutrition persistently criticized the proposal for a premarket registration requirement. Following the November 1994 elections, the new Republican Congress made pointed inquiries about the FDA's intentions. Then, suddenly, BIO had a change of heart. On January 5, 1995, BIO withdrew support from the proposal. The Clinton administration followed suit, and shortly thereafter the FDA quietly deleted the food biotechnology registration policy from its regulatory agenda. Calling it “completed,” the agency buried the announcement in a single line in an OMB publication.8 It will be instructive to see BIO's response to the newly-resurrected FDA policy of discriminating against biotech-derived foods (chapter 3, essay 1).

BIO and the convention on biological diversity

While the primary goals of the CBD are laudable (but of little conceivable advantage to U.S. industry), the implementation is vague; worse still, the panels that have met to craft and implement the biosafety protocol mandated by the treaty have employed regressive, unscientific approaches, singling out products made with rDNA techniques. BIO's vacillation on the CBD (chapter 4) is another sorry episode. After initially aligning itself with the Bush administration's opposition to the CBD, BIO flip-flopped (with the change in presidential administration) and became a major booster. According to environmental writer Russ Hoyle, BIO may now have reversed course again:

Thanks in part to the efforts of U.S. observers [of the drafting of the biosafety protocol] who have witnessed the campaigns of disinformation and bald deceit employed by the forces arrayed against U.S. biotechnology interests, [BIO] has at last begun to bestir itself for the debate over the biosafety protocol.9

BIO deserves few kudos for finally coming around on this issue. Reading the leaked copies of the draft biosafety protocol (prepared by a Washington D.C. “public interest” group called the Community Nutrition Institute), BIO must have begun to feel like Dr. Frankenstein with his monster running amok. Again, according to Hoyle,

Its preamble declares that genetic diversity “is dependent on the socioeconomic conditions of the peoples maintaining it,” code for a regime that includes social and cultural studies, sociology and “history relevant to risk assessment” in its definition of science. It designates illegal traffic in genetically engineered goods as a criminal act and includes jail sentences for responsible corporate and national officials. Besides requiring exporters of biotechnology products to submit complete safety information on a case-by-case basis, the draft protocol establishes an independent international body of experts to conduct risk assessments and make decisions on all transboundary trade in [rDNA-manipulated organisms].10

Follow the self-interest

Congressional committees with jurisdiction over these agencies and issues have been nonplussed by the goings-on. But, as Milton Friedman counsels, look for the self-interest. Sometimes the self-interest of industry, and often that of government regulators, is inimical to what is best for the rest of us.

Who benefits from the EPA's policies? The EPA grows in size and power, as it spawns new bureaucracies for regulating rDNA-derived organisms; and the big agricultural chemical/biotechnology companies thrive. BIO continues to be the darling of its largest, most influential members and reinforces the illusion of influence through its obsequious relationship with the Clinton administration.

Who loses? Small corporate competitors in biotechnology struggle against artificially high market entry barriers; several companies, in fact (including DeKalb, Agracetus and Calgene), have recently been purchased, in whole or part, by Monsanto, Pioneer and Ciba-Geigy. The research community is discouraged by regulatory hurdles and shrinking resources; and ultimately, farmers, consumers and those interested in toxic waste cleanup are left with continuing problems but no new solutions.

The complex nature of BIO's self-interest, even beyond the immediate interests of its dominant members, is apparent in its actions. First, the strategy of lowered expectations and demands enables the organization to claim almost any outcome as a BIO victory. Second, BIO has cultivated an aura of “influence,” even as it bows to government politicos, regulators and even the antibiotechnology troglodytes, while consistently eschewing efforts to do the right thing. Finally, the BIO leadership has listed noticeably to the political left. The following example is illustrative of BIO's shortcomings.

In 1995, BIO decided on a major new policy initiative. What was it? Something like transferable tax credits for research spending, so that its constituents, even small start-up companies without revenues, could receive tax benefits? Deregulation? Nope. Bioethics! In an August 4, 1995, letter to BIO's members, President Carl Feldbaum explained:

As our industry progresses, bioethics issues grow increasingly important and they are frequently raised by the media in response to new biotechnology discoveries and developments. Currently, we are facing gene patenting issues, but the debate looms much broader and deeper. Ethical questions are also raised about gene therapy, transgenics and privacy and discrimination in genetic testing. Accordingly, BIO has designated bioethics as a long-term top priority (emphasis added).

Choosing bioethics as a top policy priority is like throwing a computer terminal to a drowning man. But it is characteristic of BIO: bioethics is a subject likely to be of low priority for most small and medium-sized companies, who worry most about product approvals, financing and cash flow; it elicits minimal expectations from member companies; it creates a great deal of “busy work” for BIO staff and consultants; and it enables BIO to claim success from even the most meager, insubstantial result.

Individual biotechnology companies and trade associations can and should take an aggressive position on federal regulatory policies that will shape the future of the technology and the industries that use it. The long view of regulation should be dictated by scientific principles, economics and common sense. It should resist the temptation of flawed, short-term fixes.

My solution to BIO's skewed priorities? Link the financial remuneration of BIO's top officials to tangible improvements in federal policy. These would include regulatory improvements at the government oversight agencies, as well as tax credits for research, patent reform, and so forth. The association's officials should get no credit for trying hard, staying late at the office, getting President Clinton to speak at the annual meeting, flowery oratory, slick press releases, voluminous reports or scintillating panel discussions. After all, BIO's mandate is to represent members' collective interests in the nation's capital. BIO officials' bottom line should depend on the legislative and regulatory bottom line. (I'd bet the mortgage money that this innovation would change BIO's priorities and level of effectiveness overnight.)

Bureaucratic versus societal risk

Government regulators, like the rest of society, generally act in their own self-interest, even when those actions are inimical to the best interests of others. The FDA and EPA both offer examples of self-interest clashing with the public interest. It's no secret that many EPA and FDA officials have a zero-risk mind set and believe they have a blank check to protect public safety. Aren't these same people eager to approve safe and innovative new products? Don't they want to encourage the development of improved ways to control pests, clean up toxic wastes and treat diseases? As a matter of fact, no.

They have other things on their minds. Civil servants and political appointees think a lot about simply staying out of trouble.

There is a marked asymmetry between the two kinds of mistaken decisions that a regulator can make that would get him into trouble: (1) a harmful product is approved for marketing (a “Type 1” error) or (2) a useful product that treats disease or promotes environmental protection is rejected, delayed, or never achieves marketing approval (a “Type 2” error).11 In other words, the regulator commits a Type 1 error by permitting something harmful to happen and a Type 2 error by not permitting something beneficial. The consequences for the regulator are very different in the two cases.
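The asymmetry can be made concrete with a toy decision model. All the numbers below are illustrative assumptions, not figures from this chapter: they simply show that when the personal penalty for a Type 1 error dwarfs that for a Type 2 error, a self-interested regulator will reject even products that are very likely beneficial.

```python
# Toy model of a regulator's incentives (all numbers are illustrative assumptions).
# The regulator approves only when the expected personal cost of approving
# is lower than the expected personal cost of rejecting.

def approves(p_harmful: float,
             cost_type1: float = 100.0,  # career damage if an approved product proves harmful
             cost_type2: float = 1.0     # mild consequence for blocking a beneficial product
             ) -> bool:
    """Return True if a self-interested regulator would approve the product."""
    expected_cost_of_approving = p_harmful * cost_type1
    expected_cost_of_rejecting = (1.0 - p_harmful) * cost_type2
    return expected_cost_of_approving < expected_cost_of_rejecting

# With a hypothetical 100:1 penalty asymmetry, a product with even a 2%
# chance of harm is rejected, although it is 98% likely to be beneficial.
print(approves(0.02))   # → False (rejected)
print(approves(0.005))  # → True (approved only when risk is nearly negligible)
```

The point of the sketch is that the regulator's conservatism is rational given the payoffs, which is why the chapter argues for changing the payoffs rather than exhorting individual officials.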

Consider two examples of a Type 1 error—the FDA's approval of the “swine flu” vaccine, which caused temporary paralysis in a significant number of patients, and the EPA's approval of a pesticide that causes birth defects in humans and in endangered species of birds. These kinds of mistakes are made highly visible by the media and are denounced by the public and legislative oversight bodies. Both the developers of the product and the regulators who allowed it to be marketed are excoriated and punished: modern-day pillories include congressional oversight hearings, CBS' 60 Minutes and editorials in the New York Times. A regulatory official's career might be damaged irreparably by his good-faith but mistaken approval of a high-profile product.

Type 2 errors, in the form of unreasonable governmental demands or requirements, can delay or prevent entirely the marketing of a new product. But a Type 2 error caused by a regulator's bad judgments, timidity or anxiety usually gains little or no attention outside the company that makes the product. And if the regulator's mistake precipitates a company's decision to abandon the product, that is seldom widely known. There may be no direct evidence that continued patient suffering, farmers' loss of crops to insects or the reliance on outmoded technology for cleaning up oil spills is avoidable (chapter 3)—or that regulatory officials are culpable. The only counter-example is where activists closely scrutinize agency review of certain products and aggressively publicize Type 2 errors—such as the AIDS activist groups that monitor FDA.

Agencies and reviewers frequently justify and accept Type 2 errors, remonstrating that they are merely “erring on the side of caution.” Too often this euphemism is accepted uncritically by the media and the public.

Consider, however, the possible societal impact of Type 2 errors. As related in chapter 3, the Monsanto Company several years ago proposed a scientifically interesting and potentially important small-scale field trial—biological control of a voracious corn-eating insect. The experiment would have used a harmless soil bacterium, Pseudomonas fluorescens, into which, using new biotechnology techniques, scientists had introduced a single gene from another, equally innocuous bacterium.

In spite of the unanimous conclusion of the EPA's panel of extramural scientific experts and other federal agencies that there was virtually no likelihood of significant risk in the field trial (and leaving aside the enormous potential benefit to farmers and consumers), the EPA refused to permit it.

Two aspects of this prototypic Type 2 error are so striking that they bear repeating: the field trial would have been subject to no government regulation at all had the researchers used an organism with identical characteristics but crafted with less precise “conventional” genetic techniques instead of the new biotechnology, and Monsanto's response to the rejection was to dismantle its entire research program on microorganisms for pest control.

In the ensuing decade, few other companies have pursued these products and dared to test the regulatory waters. This whole sector of biotechnology has been dampened by regulatory disincentives—ironically, at a time when new markets have appeared as competing products (chemical pesticides) have fallen into societal and governmental disfavor.

Egregious and costly Type 2 errors can also take the form of broad agency policies. A good example is the 1994 EPA proposal to regulate as pesticides whole plants whose resistance to pests, disease or environmental stress has been enhanced by new biotechnology (discussed in chapter 3). EPA intends to regulate these plants, such as corn, tomatoes and marigolds, more stringently than chemical pesticides such as DDT or parathion. You don't have to be a scientist to know that this makes no sense, affords no added protection to human health or the environment, and discourages research.

Many Type 2 errors are not so readily apparent, however. Rather, they reflect the “culture” of risk-aversion in which every decision, every choice is overly conservative. For example, in the early 1980s when I was at FDA, the agency confronted an interesting decision about a new vaccine.

The first-generation vaccine for hepatitis B, which had been on the market for several years, was not a popular product. It was purified from the pooled plasma of patients with chronic active hepatitis, a population likely to be harboring many dangerous pathogens. Even though each batch of the vaccine was exhaustively inactivated, tested and purified, it was not used enthusiastically by physicians or patients.

The manufacturer of that product, Merck and Company, also developed the second generation, rDNA-derived vaccine. It had its origins in baker's yeast, Saccharomyces cerevisiae, which was modified by the addition of a single surface antigen gene from the hepatitis B virus. During fermentation, the yeast produced large amounts of the viral antigen which was highly purified. In this case the likelihood of contamination of the vaccine with human pathogens is virtually nil.

Demonstration of safety for this vaccine was straightforward. FDA's more vexing decision concerned efficacy—specifically whether the manufacturer had to perform clinical trials to show that the product actually prevented hepatitis, or whether a laboratory surrogate would be adequate. Arguably, it would be adequate to demonstrate that vaccine recipients synthesize the appropriate amounts and types of antiviral antibodies (a great deal was known about this “seroconversion” from the first-generation vaccine).

FDA's decision was important. Large amounts of time and money were at stake. Clinical trials to demonstrate hepatitis prevention had to be done in high-risk populations which are only found abroad, primarily in Asia; and organizing, performing and analyzing the studies would take years and be very costly.

In the end, FDA opted for the full clinical trials, rejecting even a middle course where seroconversion would be the primary measure of efficacy but a pilot study in Asia would confirm hepatitis prevention. The result was several years' delay, during which tens of thousands of cases of hepatitis B occurred annually in the United States that could have been prevented by the vaccine. (Approximately 5% of hepatitis B cases have serious complications and 0.1% are fatal.)
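The human cost of that delay can be sketched with back-of-the-envelope arithmetic using the complication rates just cited. The annual case count and the length of the delay below are hypothetical assumptions chosen only for illustration; the text says only “tens of thousands” of cases and “several years.”

```python
# Back-of-the-envelope cost of delaying the vaccine (illustrative inputs).
preventable_cases_per_year = 30_000  # hypothetical; the text says "tens of thousands"
serious_complication_rate = 0.05     # ~5% of hepatitis B cases (from the text)
fatality_rate = 0.001                # ~0.1% of cases are fatal (from the text)
years_of_delay = 3                   # hypothetical; the text says "several years"

serious = preventable_cases_per_year * serious_complication_rate * years_of_delay
deaths = preventable_cases_per_year * fatality_rate * years_of_delay
print(f"{serious:.0f} serious complications, {deaths:.0f} deaths")  # → 4500, 90
```

Even under these conservative assumptions, a regulatory choice that attracts no headlines translates into thousands of serious illnesses and dozens of deaths per year of delay.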

In my experience, in the current climate of regulatory agencies' risk-aversion and fear of Type 1 errors, judgments like this one occur systematically and commonly.

Real reform will require a change in the culture that prevails at the agencies. A system of rewards and punishments that responds to both Type 1 and Type 2 errors is needed. Government regulators have learned to avoid Type 1 errors at almost any cost because of the potent influence of the “stick” in a regulator's reward-punishment system. The agencies now need to create new sticks and carrots to redress the imbalance.

Some “carrots” exist in performance plan goals and the like, but these are, as the English say, weak as water. The clever regulator can always take refuge in the “inadequacy” or “inconclusiveness” of the data, and demand yet another study, another way of analyzing the data, another meeting of an advisory committee. Procrastination is a perennial problem of “pretesting” or “premarket” regulation.

Procrastination also has its benefits. The longer that product approvals take, and the more functions the agency assigns to itself, the larger the number of regulators needed. Larger budgets and larger empires to be managed are hardly disincentives.

The regulatory bureaucracy's current spectrum of incentives and disincentives encourages regulators to act like special interests. They tend to do what's best for themselves instead of what's best for patients or society as a whole. While most agency employees are dedicated and hardworking (and underpaid), they are forced to respond to the rewards and punishments of a system not of their own making. They are unlikely to rise above the level of expectations set by that system.

Essay 2 Strategies for Reform

The conundrum of what to do about government regulation of technology and its products requires a multifaceted solution. Industry trade associations need to define and better represent the long-term interests of their constituencies. Agencies need to implement specific reforms and also to change their culture of risk-aversion, in order to alter the current interplay of incentives and disincentives. The system needs to be better “gamed,” in order to make self-interest constructive, rather than detrimental. Society's less visible stakeholders—patient, consumer and environmental groups—need to participate in the policymaking process and demand better.

The Quest for Risk-Based Regulation

The uses of the new biotechnology in “contained” laboratories, pilot plants, greenhouses and production facilities have engendered little controversy. The NIH Guidelines for Research Involving Recombinant DNA Molecules have exempted from oversight virtually all laboratory experiments, which has allowed organisms of low risk to be handled under minimal containment conditions. These conditions permit large numbers of living organisms to be present in the workplace and even to be released from the laboratory.1 Despite extensive work in thousands of laboratories throughout the United States with millions of individual genetic clones, there has been no report of these incidental releases causing a human illness or any injury to the environment.

As discussed extensively in this volume, government regulation often has discriminated against the testing of rDNA-modified organisms in the environment. The key issue has centered on the scope of regulation—in other words, what experiments and products should fall into the regulatory net. Many negligible-risk experiments have been subjected to extreme regulatory scrutiny and lengthy delays solely because rDNA techniques were employed, even when the genetic change was completely characterized and benign and the organism demonstrably innocuous. As discussed in chapters 3 and 4, the impacts have been substantial. Investigators have shied away from areas of research that require field trials of recombinant organisms;2 companies have avoided the newest, most precise and powerful techniques in order to manage R&D costs;3 and investors have avoided companies whose recombinant DNA-derived products became caught up in the public controversy and new regulation.4

Government agencies have variously regulated new biotechnology products using either previously existing regimes (FDA, until recently) or crafted new ones (EPA, USDA and NIH). Whether regulatory strategies are new or old, certain cardinal principles should apply. First, triggers to regulation—the criteria that determine which products and experiments warrant regulation—must be scientifically defensible. Second, the degree of oversight and compliance burdens must be commensurate with scientifically measurable risk. Some have contended that this may be obvious in theory but difficult to achieve in practice. Skeptics or critics of risk-based oversight contend that if we knew a priori which experiments and products were risky, agencies could just perform “armchair” risk assessment and exempt those proposals that pose negligible risk. Both assertions are weak.

The United States and other nations have devised other regulatory nets based on assumptions about the magnitude or the distribution of risk. For example, we require permits for field trials with certain organisms known or considered to be plant pests, whereas we exempt similar organisms based on a knowledgeable assessment of predicted risk. The validity of these assumptions determines the integrity of the regulatory scheme; without them, we might as well flip a coin or exempt field trials performed on certain days of the week.

Consistent with this regulatory philosophy, the federal government's 1986 Coordinated Framework for the Regulation of Biotechnology attempted, at least on paper, to focus oversight and regulatory triggers on the characteristics of products and on their intended use, rather than on the processes used for genetic manipulation.5

In spite of the Coordinated Framework's clearly-stated goals, the USDA and EPA have created oversight regimes for tests in the environment that conflicted with them. The agencies should have benefited from two landmark documents produced by the National Academy of Sciences6 and the National Research Council.7 They did not, however, and a number of U.S., foreign and international regulatory proposals have been based on process (chapters 3 and 4). For the most part, these proposals capture all rDNA-manipulated organisms. Sometimes the review requirement is limited to those which manifest phenotypes that “do not exist in nature,” according to the rationale that such organisms are “unfamiliar,” and by extension, potentially high risk. These proposals focus implicitly or explicitly on a process-determined definition of “familiar,” an approach that seems to be derived from the prosaic meaning of the word, “accepted, accustomed, well-known.” “Familiarity” is inappropriately equated with safety. Demonstrating the wrong-headedness and circularity of this approach, organisms are considered “familiar” solely because they are “natural” or have been created by older, more “familiar” genetic manipulation techniques. No matter how pathogenic, invasive or otherwise hazardous these “familiar” or “natural” organisms may be, they are intentionally exempted from the regulatory net.

A risk-based algorithm for field trials

As discussed extensively above, rDNA-manipulated organisms can be regulated in the same manner as other organisms: according to intended use (such as vaccines, pesticides or food additives) or to intrinsic risk (a function of characteristics such as pathogenicity, toxigenicity and invasiveness). It is ironic that while various regulatory agencies in many nations have been struggling to make technique-based regulatory schemes plausible, risk-based alternatives have been widely available—and working.

Field trials of organisms that pose significant risk warrant biosafety oversight and appropriate precautions. In 1995, several colleagues and I refined an earlier biosafety algorithm that was mentioned in chapter 3. Like its predecessor, the more recent version of the algorithm8 is scientifically defensible and risk-based. The basis of the algorithm is the tabulation of organisms into risk categories. The algorithm accommodates any organism, whether naturally occurring or genetically modified by old or new methods. It can provide the foundation for a cost-effective oversight system. It is adaptable to the resources and needs of different forms of oversight and regulatory mechanisms, whether they are implemented by governments or by other institutions.

First, in order to ascertain the degree of oversight appropriate to a wildtype, unmodified or parental organism in a field trial, a researcher would determine the “preliminary biosafety level,” based on lists that stratify or categorize organisms according to risk. This tabulation would be based on scientific knowledge and experience as compiled by experts. A number of factors would determine this overall level of safety concern, including pest/pathogen status, ability to establish, location of centers of origin and dynamics of pollination (for plants), other ecological relationships and potential for monitoring and control.

Thus, the lists would provide an indication of the intrinsic level of risk of the organism, ranging from, say, Level 1 (lowest safety concern) to Level 5 (greatest safety concern). An important factor in stratifying plants according to potential risk, for example, would be the presence in a geographic area of cross-hybridizing relatives of the plant to be tested.9 The proximity of a relative does not, however, alone confirm a risk. For example, there is limited gene flow from maize to nearby teosinte (and vice versa). Even when such gene flow occurs, it appears neither to be detrimental to the teosintes nor to change their basic nature as distinctive wild races and species.10 Thus, the presence of teosinte near a field trial of maize does not alter the assessment of maize as posing negligible risk (category 1). By contrast, distinct varieties of oilseed rape (Brassica napus, or canola) with widely differing concentrations of erucic acid (and intended for different applications) should be kept segregated to avoid outcrossing between varieties. For example, high-erucic acid canola might be classified as category 1 in regions where that variety of the plant is grown but perhaps category 4 where low-erucic acid canola is grown.

This approach is analogous to that used for categorizing microorganisms by the U.S. Centers for Disease Control and Prevention (CDC) and the National Institutes of Health (NIH) to establish laboratory safety standards for the handling of pathogens,11 and to one proposed (but never implemented) by the USDA.12 Foreign countries' regulatory approaches that employ inclusive lists of regulated articles such as plant pests or animal pathogens operate on similar principles.13

As a practical matter, it would be difficult for experts to stratify every organism in every geographic region according to risk. However, categorizing, say, a hundred of the major crop plants that are the likeliest candidates for field trials would be a feasible and useful beginning. Additional panels of experts could then address subsets of fish, terrestrial animals, microorganisms, insects and other groups.

Next, the preliminary categorization would be subject to reconsideration and adjustment in light of any new traits introduced, independent of the technique used for modification. The biosafety level could be adjusted up or down—on the basis of a major change in evolutionary fitness, pathogenicity, toxigenicity or invasiveness. Adjustments to higher categories might include, for example, field trials with pathogens manifesting new multiple antibiotic resistance or increased host range. Adjustments to lower categories could include a plant with decreased pollen production or a toxigenic microorganism after the complete deletion of its major toxin gene(s). Given the kinds of changes currently being made with molecular genetic techniques,14 reclassification to higher biosafety levels would rarely be indicated.

Using the algorithm to determine degree of oversight

For regulators, especially those in the developing world, a crucial “first cut” is the designation of organisms that are considered to be of negligible risk (Level 1) or low risk (Level 2). This is important because, arguably, field trials in these lowest risk categories can be exempt from case-by-case review and managed using standard research practices appropriate to the test organism. By contrast, field trials with organisms in the highest risk categories should automatically require biosafety evaluation. (It is worth noting that this graduated approach to regulated and nonregulated field trials is both more scientific and more risk-based than either the pre- or post-rDNA regulatory regimes of EPA15 or USDA;16 see chapter 3.)

In theory, the degree of oversight of proposed field trials can vary widely between exempt (that is, subject to only the usual standard of practice for an agricultural experiment with that organism) and prior approval required (that is, by a national, regional or international agency), with various levels in between. However, we proposed only three levels of oversight: exempt, notification (to a local or international agency), or prior approval required.

In our scheme, the degree of required oversight can take into consideration not only the perceived level of risk but also other factors such as the available regulatory resources and the financial and manpower burden that regulation exacts from researchers and the government. Within the algorithm, a national or other policymaking authority could choose to apply regulatory strictures more stringently (tending toward more prior approval) or less stringently (tending toward more risk categories being exempt or requiring only notification). Of paramount importance, however, is that the algorithm ensures that the overall approach is always within a scientifically sound context and that the degree of oversight is commensurate with risk.

Many regulatory authorities would likely require case-by-case review for organisms in categories 4 and 5, exempt experiments in categories 1 and 2, and require a simple notification (describing the organism to be tested, the site, the risk management measures and so forth) for category 3. Within this internally-consistent scheme, other permutations are possible that would be chosen to meet regional preferences and needs.
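The three steps of the algorithm (preliminary categorization from expert lists, trait-based adjustment, and mapping of the final level to a degree of oversight) can be sketched as a short program. This is only an illustration: the organism entries, trait adjustments and oversight thresholds shown here are hypothetical placeholders, standing in for the tabulations that expert panels would actually produce.

```python
# Illustrative sketch of the risk-based field-trial algorithm described above.
# All lists and values are hypothetical placeholders, not actual expert tabulations.

# Step 1: preliminary biosafety level (1 = lowest concern, 5 = greatest concern),
# tabulated by experts for each organism in each geographic region.
PRELIMINARY_LEVEL = {
    ("maize", "north_america"): 1,
    ("canola_high_erucic", "low_erucic_region"): 4,
    ("canola_high_erucic", "high_erucic_region"): 1,
}

# Step 2: adjustments for introduced traits, independent of the technique
# used for the genetic modification.
TRAIT_ADJUSTMENT = {
    "multiple_antibiotic_resistance": +1,
    "increased_host_range": +1,
    "decreased_pollen_production": -1,
    "major_toxin_genes_deleted": -1,
}

def oversight(organism, region, traits):
    """Return the degree of oversight for a proposed field trial."""
    level = PRELIMINARY_LEVEL.get((organism, region), 3)  # default: mid-level concern
    for trait in traits:
        level += TRAIT_ADJUSTMENT.get(trait, 0)
    level = max(1, min(5, level))  # clamp to the 1-5 scale

    # Step 3: map the final level to a degree of oversight. A policymaking
    # authority could shift these thresholds to be more or less stringent.
    if level <= 2:
        return "exempt"          # standard research practices suffice
    if level == 3:
        return "notification"    # describe organism, site, risk management
    return "prior approval"      # case-by-case biosafety review
```

A more stringent authority could simply lower the thresholds in the final step, for example requiring notification for Level 2 as well; the structure of the algorithm is unchanged.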

This algorithm is very flexible and applicable to any organism. It meets the basic requirements of a biosafety regime—it is risk-based, scientifically defensible and focused on the characteristics of the test organisms and the environment of the field trial. The algorithm is highly adaptable; it can be incorporated into existing regulatory regimes in industrialized countries or be used by nations that currently lack such mechanisms. Moreover, it can offer adequate safety precautions to protect the public health and the environment from significant risk, coupled with the cost-effectiveness demanded by limited government resources.

Reform and the agencies

The EPA

As discussed in chapter 3, EPA's biotechnology policies make neither scientific nor economic sense. They would require governmental review and high compliance costs for largely innocuous organisms, while exempting field trials of “naturally occurring” organisms and organisms manipulated by techniques other than rDNA that could foul waterways or pose other environmental risks.

It is both dismaying and ironic that the EPA's expenditure of hundreds of thousands of staff-hours in order to craft and implement new policies, over more than a decade, was unnecessary. Had agency officials simply applied the prevailing scientific consensus to their FIFRA and TSCA proposals, they would have concluded that no changes were needed from policies and requirements that were in effect before the advent of new biotechnology (just as FDA did). Products of the new rDNA technology would have been held to the same standards for testing and marketing as similar products used for similar purposes. Had regulators taken that course, the application of the new biotechnology to fields such as bioremediation, mining, oil recovery and pest control would now be more advanced.

Little magic is needed to improve EPA's sorry state. The regulation of biotechnology products would benefit immediately from the application of the risk-based algorithm described above to the FIFRA and TSCA regulations. This could be easily accomplished.

It is unlikely that there is either interest or willingness to do so at EPA, however. The agency's regulatory infrastructure and personnel need a thorough, unbiased, nonpartisan, extramural review—and the recommendations should, for once, be implemented. The agency needs to incorporate avoidance of Type 2 errors into employees' performance plans and reviews, and would benefit from the kind of ombudsman panel described below that has the power to discipline agency officials for flawed policies or decisions. EPA needs to introduce accountability for regulators.

Equally important, we need national leaders committed to a strong role for science in public policy, including a role that includes comparative risk assessment in federal spending priorities and a “marketplace of ideas” in policy formulation. We need a knowledgeable, tough and competent EPA Administrator who will clean house and work with the Inspector General to deal severely with the kind of chicanery described above and in chapter 3. None of this is likely to happen during the Clinton-Gore administration. We can expect only more of the same.

The FDA

For the regulation of biotechnology, the FDA adopted a scientific paradigm early on that avoided discriminating against biotechnology products. Yet, the FDA still suffers from some of the same systemic problems as the EPA and USDA—distorted bureaucratic incentives and disincentives, ill-conceived policies and mismanagement. The FDA, which oversees more than $1 trillion worth of products annually, badly needs regulatory reform.

As discussed in chapter 3, the fear of comprehensive congressionally-mandated FDA reform stimulated the Clinton administration to announce a series of so-called “reforms” during 1995 and 1996. That Clinton administration officials are not serious about FDA reform, however, is clear from what was not included in the announced changes. They chose not to roll back recently instituted policies such as: broad new guidelines that require certain proportions of women and minorities in all federally funded clinical research, thereby slowing and increasing the costs of clinical trials, and encouraging companies to do the research abroad (these requirements were actually promulgated by NIH, as mandated by the 103rd Congress); new rules on the reporting of drug side effects; and restrictions on “promotional activities” such as informing physicians about peer-reviewed research findings and convening focus groups.17

The administration's changes seem more intended to impress the FDA's critics with a lengthy laundry list of “accomplishments” than to implement genuine change. The minimal benefits will accrue principally to larger, established companies that already have products on the market. Entrepreneurial companies, whose products are primarily in early developmental stages, are left out.

The most serious flaw in the administration's proposals—one likely to vitiate much of their impact—is that the reforms are to be implemented by the FDA itself. Experience should have taught the futility of an agency being directed to reform itself. Similar reforms, instigated by President Bush's Council on Competitiveness, were announced by the FDA in 1991.18 These were conservative but potentially consequential. They included such changes as outside organizations performing reviews (under contract to FDA); expanded use of extramural advisory committees; an expanded role for Institutional Review Boards; a more flexible interpretation of the efficacy standard; U.S. recognition of foreign approvals (that is, reciprocity) and various management improvements. An important element of many of these reforms was structural change—which would actually diminish the scope of FDA's discretion or jurisdiction—rather than mere managerial tinkering. To no one's surprise, the agency studied them, literally, to death. As an FDA official at the time, I recall agency officials' amusement at the prospect of their reforming themselves. And that was during a presidential administration that really did care about streamlining regulation.

Minimalist but effective reform

No one who has worked at a regulatory agency is likely to be terribly optimistic about the prospect of dramatically improving the “culture” of rank-and-file agency employees (although there are suggestions for accomplishing this, below). Therefore, a “first practical principle” of reform should be to strive also for structural changes which reduce the agency's discretion and influence. This can be accomplished by exempting entirely certain regulated activities or products, or by transferring regulatory functions to nongovernmental entities where the professional culture and the incentives and disincentives that shape behavior are more propitious.

Applying these principles, the Congress could, at a stroke, achieve real reform. A few narrow but critical amendments to the FDA's enabling statutes would remove certain functions from the governmental monopoly and reduce the agency's opportunities for mischief:

  • Exempt from FDA jurisdiction early small-scale clinical trials, which are overseen already by research institutions' Institutional Review Boards;
  • Reduce or eliminate FDA control over drug advertising and promotion;
  • Require the FDA to recognize drug approvals by comparable regulatory apparatuses abroad (e.g., the U.K., Canada and the European Medicines Evaluation Agency);
  • Direct the FDA to certify private-sector entities to perform reviews of clinical trials;
  • Eliminate the FDA from the review of exports of experimental drugs and medical devices; and
  • Establish statutory “hammers” (waiting periods after which approval is automatic), which would compel the agency to meet mandated time limits for product review.

These genuine reforms would offer no less safety to consumers but would confer several benefits. They would lessen the regulatory load—and the costs—of pharmaceutical development, permit the FDA to focus on essential functions and provide physicians and patients more (and less expensive) therapeutic alternatives.

While these reforms would improve the efficiency of drug regulation and do no harm to consumers in the process, none of them addresses the fundamental problem of the distorted incentives, disincentives, rewards and punishments that influence federal regulators' behavior.

Avoidance of Type 1 errors implicitly is already built into reviewers' and managers' performance plans. An employee whose actions compel his superiors to defend his mistaken approval of a hazardous product will suffer during his annual performance review. The system should similarly foster aversion to Type 2 errors. For all FDA (and EPA) employees involved in product evaluation or compliance, therefore, performance plans and employee annual reviews should be required to give equivalent weight to Type 1 and Type 2 errors. (Similar performance plan elements, such as meeting affirmative action goals, are currently applied at least as widely at federal agencies.) This new element would require no more than the kind of cost/benefit critical judgments that managers are routinely supposed to perform.

Another remedy is an ombudsman panel that evaluates agency actions and disciplines misbehavior. Cases could be submitted to the panel by drug companies, associations, patient groups or others.

My own experience as an FDA reviewer and manager suggests a number of cases appropriate for the ombudsman. During the 1980s, despite a demonstration that a certain proven anticancer agent could shrink the malignant Kaposi's sarcoma lesions found in AIDS, the FDA would not consider approval for that use. The agency said, in effect, that such an effect was merely cosmetic, and that the drug's sponsor needed to show improvement of a “meaningful” endpoint, such as patient survival. Even though that decision was eventually reversed, the officials who were responsible for it should be held accountable.

An ombudsman panel must have several characteristics: (1) an organizational location outside the agency, to provide arm's length from FDA officials, including the Commissioner (the Department of Health and Human Services' Office of Inspector General might be an appropriate location); (2) access to a wide spectrum of scientific, medical and regulatory expertise, either via a large membership or ad hoc experts as necessary; and (3) authority to recommend disciplinary sanctions, ranging from censure to forfeited pay and bonuses or demotion, depending on the egregiousness and impact of the decision.

The actions of the ombudsman panel would redress, in part, the agency's tendency to guard against approving a harmful product even at the expense of erecting huge economic barriers to R&D and marketing. This innovative mechanism could help to balance regulators' incentives and disincentives. More fundamentally, it could be applied to other regulatory agencies. It is a conservative governmental mechanism for correcting the entrenched bias toward eliminating product risk regardless of the cost of lost benefits.

More sweeping reform: the Progress and Freedom Foundation proposal and HR3199

In February 1996, the Washington D.C.-based Progress and Freedom Foundation (PFF) published a comprehensive analysis of the FDA, along with proposals for the reform of drug and medical device regulation.19 (I was one of the authors.) Their solution to the systemic problems is to turn over much of the evaluation of drugs to nongovernmental entities—a recommendation that has been made repeatedly by blue-ribbon expert groups convened to evaluate the drug-approval system.

The PFF proposal would retain the basic requirements that a drug be proved safe and effective before marketing. It mandates, however, that “Drug Certification Bodies” (DCBs) supplant the FDA in two important ways: overseeing drug companies' clinical testing and, after testing is completed, performing the primary review of the data that supports marketing.

The DCBs can be private- or public-sector organizations (universities, for example), profit-making or nonprofit. They would be subject to FDA certification and auditing, and staffed with “experts qualified by scientific training and experience,” as required by statute.

The drug sponsor would submit the request for approval to market a drug to the DCB rather than to the FDA. After evaluating it, the DCB would submit the results (if favorable) to the FDA. The FDA Commissioner would have a designated period of time in which to accept or deny the DCB's recommendation for approval. In the event that the FDA denied approval, an appeal mechanism would be available to the drug sponsor.

The PFF proposal is derived both from first principles and careful study of three decades' experience with the FDA's drug regulation. As discussed in this essay and in chapter 3, a fundamental problem at the FDA is the agency's tendency to slow the approval process to avoid even the remotest possibility of approving a product that might be harmful. The costs are high and the profitability threshold has risen, as the price to bring a drug to market has rocketed to around $500 million.20 Many important therapies have been delayed or abandoned. A contributing factor is the FDA's absolute regulatory “monopoly.” For drug approval, the feds are the only game in town.

The PFF plan redresses this problem ingeniously by introducing the element of competition into the drug review process. DCBs would compete for clients, and thus, the system would favor those that offer the greatest expertise, the best service and a reputation for integrity. In contrast to the FDA, where bureaucratic incentives and disincentives encourage a “go slow” mind set, competition would encourage DCBs to devise new, more innovative and efficient ways for their clients to demonstrate product safety and efficacy.

DCBs would be discouraged in several ways from the temptation of a “quick and dirty” but lucrative review: the threat of losing FDA certification; the need for FDA's final sign-off on the approval and the risk of legal liability, should the product ultimately cause harm after an inadequate review. This balance is not unlike that confronted by Underwriters' Laboratories, which certifies that electrical equipment meets certain safety standards. It also closely resembles the system of medical device regulation in the European Union.

It is remarkable that within two months of its publication major elements of the PFF proposal found their way into proposed legislation, HR3199, the House of Representatives' Drugs and Biological Products Reform Act of 1996.

The bill addresses in several ways the ponderousness and length of drug testing and evaluation. It clarifies that the voluminous raw data from clinical trials—often running to hundreds of thousands, sometimes millions of pages—will not always be required by the FDA. Condensed, tabulated or summarized data often will be adequate. Agency reviewers would have access to additional material if it were requested by supervisory FDA officials.

HR3199 also emphasizes that the demonstration of efficacy of a new drug may be based on even a single “well-controlled” trial, if the statistical analysis and reviewers' judgment support it. This change removes ambiguity in the language of the existing statute. In addition, taking aim at the FDA's tendency to require double-blind trials (where both patient and physician are ignorant whether the treatment is with active drug or placebo) under all circumstances, the bill mandates that clinical trial design should be “appropriate to the intended use of the drug and the disease.”

The legislation establishes a new, more liberal efficacy standard for drugs intended to treat “serious or life-threatening” conditions. Like the current standard for AIDS drugs that has spurred rapid approvals, experts would have to conclude that “there is a reasonable likelihood that the drug will be effective in a significant number of patients and that the risk from the drug is no greater than the risk from the condition.” This is just common sense. It is also humane: it would permit other patients the same treatment currently reserved for those with AIDS.

The bill would change in important ways FDA's censorship of scientific and medical information (chapter 3). The FDA currently prohibits drug companies from distributing textbooks and articles to health professionals if they contain information about not-yet-approved uses of drugs—even though these constitute almost half of all physicians' prescriptions, about 70% of all cancer chemotherapy and 90% of drugs used in pediatrics.21 The bill would permit the legitimate dissemination of information via textbooks and articles from peer-reviewed journals.

Equally important because it goes to the basic issue of how new uses are sanctioned for an already-approved drug, HR3199 would permit retrospective evidence from clinical research (instead of expensive and time-consuming prospective studies) as an alternate basis for approving additional uses.

The bill includes some modest incentives for the FDA to improve its performance, such as mandatory annual reports to the Congress that summarize the agency's success in meeting goals and statutory deadlines and that compare the FDA's performance with its foreign counterparts. To ensure that the FDA's attempts at international harmonization are sensible, the bill would require congressional notification before the FDA enters into international agreements.

The most significant changes wrought by the legislation—similar to but less sweeping than the PFF proposal—address the FDA's monopoly over the drug approval process. These provisions would provide a partial answer to systemic problems in the regulation of drug development, by turning over part of the evaluation of drugs to nongovernmental entities.

These proposals to “privatize” certain regulatory activities have been savaged by defenders of big government. There is, however, nothing sacrosanct about a government regulatory monopoly that offers manufacturers no alternative route to the review and certification of products. Moreover, regulation that assures public safety is not a binary choice—that is, either government or private-sector. That is illustrated by the bill's establishment of nongovernmental alternatives to some FDA oversight. Drug sponsors could opt to have their products reviewed by nongovernmental organizations that could be private- or public-sector (universities, for example), profit-making or nonprofit. These organizations would be subject to FDA accreditation and auditing. Strict requirements backed by civil and criminal sanctions would assure the confidentiality of data and the management of potential conflicts of interest.

This new approach closely resembles regulatory apparatuses already operating elsewhere, except that the sponsor could choose to have the FDA perform the review and in all cases the agency would retain the responsibility for final sign-off.

HR3199 omits any concrete provisions for reciprocity, which would hasten the approval of a drug in the United States after its sanction by a major foreign regulatory authority. Reciprocity could be achieved, for example, simply by giving the FDA a finite period of time (say, 60 days) from the date of a UK or EU approval to show cause why a product should not be approved. In the absence of such evidence from the FDA (which carries the burden of proof), the drug would be approved automatically.

Overall, the House Commerce Committee's solutions to FDA reform are almost too good to be true. They are logical, carefully targeted and bipartisan, and—most important—they favor the public interest. They would get drugs to patients who need them, faster and cheaper. Legislation should balance momentous social and economic issues, legal precedents and the public interest. HR3199, which strongly reflects the influence of House Commerce Committee majority counsel John Cohrssen (and is a kind of reprise of his earlier attempts to reform FDA while at the Bush administration's Council on Competitiveness), is a stunning example.

Essay 3 Building a Constituency for Science-Based Policies

Habituation, the gradual adaptation to a specific stimulus or to the environment, is a biological phenomenon that may also be said to apply to responses to political influences. Irrational and burdensome public policies can become so much a part of the landscape that their victims—consumers, businesses and research institutions—no longer experience the appropriate rush of self-righteous anger and push for reform. The worst becomes the norm.

Habituation can be observed in the unresponsive attitudes of academic and industrial scientists toward the excessive regulation of agricultural and environmental biotechnology. Scientists should actively question the flawed paradigms underlying the regulations. Few do. Put another way, members of the research community need to pursue their self-interest aggressively, as other groups do.

We have seen the behavior of industry trade associations toward ill-conceived or excessive biotechnology regulation (above and chapter 3). The expected chorus of indignation from individual agricultural biotechnology researchers and companies has been absent. Surprisingly, most have settled for clarity instead of reasonableness. They have settled for predictability, even though it is the predictability of delay and frustration. Often, rather than working to make regulation more reasonable investigators have changed the direction of their research to avoid regulatory strictures.

Exceptions include a handful of individual scientists, the University of California's Systemwide Biotechnology Program, and a few professional societies that have called for the rationalization of regulation in editorials and letters to government agencies.1 Another notable exception is the August 1996 report from no fewer than eleven scientific societies that excoriates EPA's plant-pesticide regulatory proposal.2 This development illustrates that the threshold for definitive action is high—the plant-pesticide proposal is only the latest in a decade-long string of scientifically bankrupt EPA policies—but that when it is reached, the scientific community can be mobilized.

Federal regulatory policies have given rise to disincentives to R&D in various sectors of U.S. biotechnology, accompanied by the disenchantment of researchers and investors (chapters 2 and 3). Milton Friedman diagnosed correctly that the “government is the problem” and that the problem is with the system, not with the people (although an exception may be EPA, where, as discussed in chapter 3, individuals appear to be culpable).3 The self-interest of government officials often causes them to behave in a way that is inimical to the self-interest of the rest of us.

If those who are interested in the vitality of science, technology and innovation sit on their hands, nothing will change—except, perhaps, for the worse. The agencies seem unwilling and unable to reform themselves. The Clinton administration certainly is not serious about reforms.

Earlier in this chapter, I suggested ways that government regulation could be improved by structural and management changes, along with a risk-based algorithm for the oversight of field trials. But government left to its own devices won't adopt these changes. It is past time that those outside government began to hold policy makers accountable and to exert pressure. But whence is this pressure to come?

I suggest six strategies for “progress”—defined as the integration into public policy of scientific, risk-based, regulatory approaches. First, scientists, as individuals, must do more of what physicist and writer Freeman Dyson, paleontologist Stephen Jay Gould and the late microbiologist Bernard Davis have done in their articles and books: they have participated in the dialogue on public policy issues. As scientists, they have made unique contributions, especially when exposing nonscientific arguments. Whether one agrees or disagrees with their arguments, their contributions are invaluable. This kind of involvement should occur in every possible forum, including scientific and “popular” articles, communication with the news media, and—especially—scientific advisory panels at government agencies.

This strategy is not, however, without its risks. No matter how brilliant a scientist may be in his specialty, acuity in public policy requires a different perspective. I was reminded of this by a 1996 Science editorial by yeast geneticist Gerry Fink.4 Fink recounted how in 1977, National Science Foundation administrator Herman Lewis found a way to circumvent the NIH recombinant DNA guidelines' prohibition on performing certain cloning experiments in yeast; this permitted Fink and his co-workers to do the experiments years before they otherwise could have. Their experiments accelerated research leading ultimately to the development of a much improved, second-generation hepatitis B vaccine (of which I was one of the FDA reviewers; see discussion above). But in his editorial Fink neglected the critical point—that other U.S. researchers who lacked a governmental good Samaritan were stymied for years by regressive, unnecessarily restrictive federal regulatory policies. Because Lewis was such a notable exception to the rule, the story should have been about bad science making bad policy, and the real-world impacts of bad policy. But Fink lacked a perspective on the wider dimensions of the NIH policy.

The second strategy pertains to science in its institutional forms—the professional associations, faculties, academies and journals. These institutions should explore and elucidate the controversies over public policy and seek to elevate the level of discourse on them. Scientific societies can, for example, help to create and promote a broader policy perspective by building public policy symposia into national and international conferences. And to return to the example of the Fink editorial, the editors at Science could have steered the piece to the more didactic and broader theme.

Reporters—and by extension, their bosses, the editors—have tremendous power to illuminate public policy issues that have a scientific component. Too often, in the interest of “balance,” all of the views on an issue are presented as though they were of equal weight or value—even after the dialogue has progressed to a point where some views have already been discredited. The Flat Earth Society should not receive the same attention and credence as the geography department at Berkeley, when the former eschews empirical evidence. It should be a “given” that advocates of “creation biology” are less deserving than Harvard cosmologist and paleontologist Stephen Jay Gould of having their views sought on evolutionary issues. Likewise, on many biotechnology issues, the media need to—but often do not—reflect the current status of the “debate.” Unfortunately, by manufacturing or exaggerating a controversy they often get a “better” story.

Third, companies and trade associations should consistently take the long view of regulation, one dictated by scientific and free-market economic principles, and resist the temptation of flawed short-term fixes. There is no sound reason for the U.S. biotechnology industry to support or encourage policies like the USDA's Plant Pest Act regulations, the EPA's technique-based proposals or the FDA's proposal to require the submission of data for all new biotechnology foods.

In the long run, commercial interests will benefit from the predictability and logic of science-based policies—and from a robust academic research enterprise. Productivity is squandered by strategies that are anticompetitive, that make experiments more expensive and difficult, and that force researchers to do mountains of unnecessary paperwork instead of experiments.

Fourth, those who are not directly involved in science but who are important stakeholders in the ultimate applications of science and technology—venture capitalists, consumer groups, patients' groups—should commission experts to help them discern and advocate for rationality in governmental oversight of R&D and product marketing.

Fifth, government officials need dramatic behavioral modification. Consider the old story about the city slicker who sees a farmer whacking his mule with a two-by-four. The city slicker asks the farmer what he wants the animal to do. “Nothing, yet,” the farmer replies, “I'm just trying to get his attention.” As in the story, American taxpayers need to get bigger sticks. Negative reinforcement, appropriately applied, can redress some of the existing asymmetry between Type 1 and Type 2 errors. For example, independent ombudsman panels with the authority to discipline individuals at regulatory agencies for egregious errors would introduce accountability into regulators' policy-making and decision-making.

Sixth, the government/nongovernment balance of influence should be shifted. There is nothing sacrosanct about a government monopoly over regulation.

There are a variety of alternative institutional arrangements that could serve to oversee the premarket approval and monitoring of products like drugs, medical devices, food additives and pesticides. As economist Robert Tollison has said of pharmaceutical regulation:

There are a variety of alternative institutional arrangements which could be called upon to regulate and monitor the safety and effectiveness of the nation's supply of new drugs and medical devices. These range from the present system, which can be classified as a government monopoly on drug and device certification, to a free market system in which government would have no regulatory role at all. In between these two extremes are a number of institutional alternatives which combine more or less government oversight with more or less private involvement. There are costs and benefits of each system, and it is through a careful consideration of these costs and benefits that one can proceed to choose a system that provides the most net benefits to medical consumers.5

As I have argued throughout this book, the costs of the present system often outweigh its benefits; I have identified several examples where the balance is overwhelmingly negative. It is past time for the regulatory pendulum to swing away from government monopoly to a part of the arc that emphasizes the innovation and efficiency favored by nongovernmental mechanisms.

The remedies I have suggested seem the only routes to altering, even in this small realm and in a limited way, the validity of historian Barbara Tuchman's observation that “[m]ankind, it seems, makes a poorer performance of government than of almost any other human activity.”6

References

    Essay 1: Onerous Regulatory Policies: Blame Industry!

    1.
    Bovard J. First Step to an FDA Cure: Dump Kessler. The Wall Street Journal, December 8, 1994:A16. See also Miller H I. When Politics Drives Science. The Los Angeles Times, December 12, 1994; and Miller H I. Dr. Kessler's regulatory obsessions. The Washington Times, December 15, 1994:A21.
    2.
    Bovard J. Ibid.
    3.
    Crooke S T. Comprehensive Reform of the New Drug Regulatory Process. Bio/Technology. 1994;13:25–29. [PubMed: 9634747]
    4.
    Sears G, van Beek J, Golder G. Improving Canadian Biotechnology Regulation—A Study of the U.S. Experience. 1995. Consultants' report; I was a consultant on this report but not one of its authors.
    5.
    Anon Statement of policy: foods derived from new plant varieties. Federal Register. 1992;57:22984–23005.
    6.
    Feldbaum C F. Letter from BIO to Congressman Don Young July 17, 1995.
    7.
    Nettleton J. Personal communication.
    8.
    Anon Unified regulatory agenda. Federal Register. 1995;60:23291.
    9.
    Hoyle R. Biosafety protocol draft spooks U.S. biotechnology officials. Nat Biotechnol. 1996;14:803. See also Kaiser J. U.S. Frets Over Global Biosafety Rules. Science. 1996;273:299. [PubMed: 9630993]
    10.
    Idem.
    11.
    Helms R B. Preface to Drugs and Health: Economic Issues and Policy Objectives. Washington D.C.: American Enterprise Institute for Public Policy Research, 1981:xx–xxiii.

    Essay 2: Reform Strategies

    1.
    Lincoln D R, Fisher E S, Lambert D. et al. Release and Containment of Microorganisms from Applied Genetics Activities report submitted in fulfillment of EPA Grant No. R–808317–01, 1983.
    2.
    Ratner M. BSCC addresses scope of oversight. Biotechnology. 1990;8:196–8. See also UW researchers stymied by genetic test limits. The Capital Times (Madison, WI), March 16, 1988:31.
    3.
    Naj A K. Clouds Gather Over the Biotechnology Industry. The Wall Street Journal. 1989;11
    4.
    Miller H I. Governmental regulation of the products of the new biotechnology: A U.S. perspective In: Proceedings of Trends in Biotechnology: An International Conference Organized by the Swedish Council for Forestry and Agricultural Research and the Swedish Recombinant DNA Advisory Committee Stockholm: AB Boktryck HBG 1990.
    5.
    Anon Coordinated framework for regulation of biotechnology. Federal Register. 1986;51:23302–93. [PubMed: 11655807]
    6.
    Anon Introduction of Recombinant DNA-Engineered Organisms into the Environment: Key Issues Washington D.C.: Council of the U.S. Academy of Sciences/National Academy Press, 1987.
    7.
    Anon Field Testing Genetically Modified Organisms: Framework for Decisions Washington D.C.: U.S. National Research Council/National Academy Press, 1989. [PubMed: 25144044]
    8.
    Miller H I, Altman D W, Barton J H. et al. Biotechnology Oversight in Developing Countries: A Risk-Based Algorithm. Biotechnology. 1995;13:955–59.
    9.
    Anon Field Testing Genetically Modified Organisms: Framework for Decisions. Ibid. [PubMed: 25144044]
    10.
    Idem.
    11.
    Anon Biosafety in Microbiological and Biomedical Laboratories, Centers for Disease Control/National Institutes of Health, U.S. Department of Health and Human Services Washington D.C.: U.S. Government Printing Office, 1988.
    12.
    Anon Proposed guidelines for research involving the planned introduction into the environment of organisms with deliberately modified traits. Federal Register. 1991;56:4134. [PubMed: 11656084]
    13.
    Frommer W, Ager B, Archer L. et al. Safe biotechnology: III. Safety precautions for handling microorganisms of different risk classes. Appl Microbiol Biotech. 1989;30:541.
    14.
    Chrispeels M J, Sadava D E. Plants, Genes and Agriculture Boston: Jones and Bartlett 1994. chapter 15 .
    15.
    Anon Coordinated framework for regulation of biotechnology. Ibid.
    16.
    Idem.
    17.
    Miller H I. Anti-Medicine Man. National Review. 1995:48–51.
    18.
    Anon Council on Competitiveness Fact Sheet: Improving the Nation's Drug Approval Process Washington D.C.: The White House, November 13, 1991.
    19.
    Epstein R A, Lenard T M, Miller H I. et al. Advancing Medical Innovation Washington D.C.: The Progress and Freedom Foundation 1989.
    20.
    The Boston Consulting Group analysis based on data of DiMasi J et al., as quoted by the Office of Technology Assessment in Pharmaceutical R&D: Costs, Risks, and Rewards. Washington D.C., February 1993. Estimate is in pretax 1990 dollars.
    21.
    Henderson D R. FDA censorship can be hazardous to your health. Policy Brief 158. St. Louis: Center for the Study of American Business, September 1995.

    Essay 3: Building a Constituency for Science-Based Policies

    1.
    Arntzen C. Regulation of Transgenic Plants. Science. 1992;257:1327. Also Huttner S L, Arntzen C, Beachy R. et al. Revising Oversight of Genetically-Modified Plants. Biotechnology. 1992;10:967–971; and Anon. ASBMB [American Society for Biochemistry and Molecular Biology] News. Winter 1993;2:1.
    2.
    Anon Appropriate Oversight for Plants with Inherited Traits for Resistance to Pests. 1996.
    3.
    Friedman M. Why government is the problem. In: Hoover Institution Essays in Public Policy. Stanford: Hoover Institution Press, 1993.
    4.
    Fink G. Bureaucrats Save Lives. Science. 1996;271:1213. [PubMed: 8638092]
    5.
    Tollison R D. Institutional alternatives for the regulation of drugs and medical devices. In: Advancing Medical Innovation. Washington D.C.: The Progress & Freedom Foundation, 1996:18.
    6.
    Tuchman B W. The March of Folly: From Troy to Vietnam. Boston: Knopf, 1984:4.
Copyright © 2000-2013, Landes Bioscience.
Bookshelf ID: NBK6046
