
Woloshin S, Schwartz LM, Welch HG. Know Your Chances: Understanding Health Statistics. Berkeley (CA): University of California Press; 2008.


Chapter 10. Who’s Behind the Numbers?

The last issue we want to raise also has to do with whether to believe the numbers. In addition to being aware of the underlying science, as the previous chapter discussed, it’s also important to be aware of the people who produce the numbers. Ideally, researchers do not have a vested interest in how their study turns out; they are concerned only that the research was performed correctly. In reality, however, researchers may stand to benefit personally or professionally if the test or treatment being studied works well—in other words, they may have a conflict of interest.

The most blatant conflicts of interest involve money. The most obvious example is the direct involvement of industry in research. Pharmaceutical companies and device manufacturers need to sell their products. Research showing that the products work well is crucial, as is generating excitement about the products among physicians and the public. Financial conflicts of interest can influence every phase of the process, from the design of the research through its dissemination (as summarized in the table below). A growing literature suggests that, unfortunately, these conflicts of interest are common and influential.1

Tactics used to generate exaggerated results, by phase of research or dissemination:

Study design
• Conducting studies that stack the deck in favor of the product
  • by comparing it to a placebo rather than to another drug that treats the same problem, because it is much easier to look better than nothing (the placebo) than to look better than a proven drug
  • by comparing it to the “weakest” drug that treats the same problem (for example, choosing the least effective drug for comparison, or using the other drug at a low dose)
  • by measuring less important surrogate outcomes, where it is easier and faster to show a difference

Publication of scientific results
• Selectively publishing only the studies with the most favorable results (rather than all studies)
• Selectively reporting only favorable outcomes in medical journal articles or in prescription drug labels (or purposely omitting worrisome outcomes)

“Spinning” the results to the public
• Using unrepresentative patient anecdotes, citing a “miracle cure” rather than the typical effect among patients
• Making strong statements about how impressive the results are (but failing to provide any numbers)
• Using the biggest numbers possible to describe how many people have the problem or how big the benefit is (typically by providing only the relative change in outcomes)
• Exaggerating what is good and minimizing what is bad about the product

Public campaigns to promote use of the intervention
• Scaring people into adopting the intervention, by highlighting the great danger of the problem or the great danger of failing to take action
• Shaming people into adopting the intervention, by equating its use with being a socially responsible person

Studies can be designed to stack the deck in favor of a company’s product. Outcome measures can be crafted to show impressive differences that distract everyone from asking fundamental questions, such as “Does the finding really matter?” There can be selective publication of only the most favorable studies and the most favorable findings within studies. And, after publication, results can be spun for public consumption by launching public relations and ad campaigns that use the compelling presentation tactics outlined in chapter 9. In addition, disease advocacy groups or paid research consultants can be mobilized to use the same tactics to reach the public directly and through the news media.

Of course, money can also influence physicians directly. For example, a doctor who invents and patents a new test for heart disease (or owns stock in the company that holds the patent) can earn a lot more money if the test appears to work really well. So can the researchers and investors involved with the growing number of companies that offer genetic testing services. Papers touting the association of specific genes with increased risk of diseases as diverse as prostate cancer and restless legs syndrome can mean big money when published in high-profile journals—even if no one knows what, if anything, to do based on the results.

Financial conflicts of interest have been the focus of great attention in the past few years, and many medical journals have begun to require that researchers disclose potential conflicts when they publish their work. To be honest, it’s not clear how well this is working. Most journals just don’t have the resources to verify disclosures for accuracy and instead must rely on the honesty of the researchers.

But there are other, less blatant forces that can also create a conflict of interest. We understand these forces because we are researchers too. Many researchers desire prestige and publicity, both of which help us advance in the academic world. Most researchers strongly believe that what they’re studying does, in fact, work—understandably, that belief motivates us to do the research in the first place and to see it through. These forces lead to what we call professional conflicts of interest. All of us have them to various degrees.

While financial conflicts of interest are the most powerful, both types of conflict can affect the quality of scientists’ work. In the most extreme cases, research has actually been faked. A recent, infamous example involved bone marrow transplants as a treatment for breast cancer patients.2 Researchers conducting a randomized controlled trial in South Africa reported amazing results: 51 percent of women who received bone marrow transplants had no evidence of tumor after treatment, compared to only 4 percent of those who received standard treatment. Unfortunately, the researchers had lied, and the results were fiction. Two randomized trials subsequently showed that bone marrow transplantation—which has serious side effects—did not help breast cancer patients. Luckily, such deception is pretty rare. The much more common problem is that conflicts of interest lead researchers to exaggerate the importance of their findings.

Researchers occasionally act more like advocates than scientists, which can lead them to “spin” their results. As outlined in the table earlier in this chapter, they may make extreme overstatements to the media about the importance of their work, or they may use anecdotes irresponsibly—telling the story of the one patient who experienced a “miracle cure” and ignoring the less impressive effects on more typical patients. They sometimes make strong assertions about how important their results are (typically without providing actual numbers), or they endeavor to present the biggest numbers possible. And they often exaggerate what is good (information that is favorable to their test or treatment) and minimize what is bad (information that is unfavorable, such as side effects).

In addition, well-meaning organizations—patient advocacy groups and public health agencies—may use these same tactics to promote their particular causes.3 They strongly believe that they have identified problems that really matter and offer solutions that really work. They argue that the main obstacle they face is getting people to listen and to do the right thing—that is, to follow their advice and eat less fat, get more flu shots, undergo screening for cancer, and so on. Unfortunately, they sometimes resort to fear and shame to achieve these ends. For example, the March of Dimes ran a campaign that equated an expectant mother who does not take folate (a vitamin supplement that reduces the chance of rare neural tube defects from about 2 in 1,000 births to about 1 in 1,000) with a mother who lets her baby crawl into oncoming traffic. And slogans such as “If you haven’t had a recent mammogram, you may need more than your breasts examined” are pretty clear: no sane woman would choose to forgo screening.
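The folate figures above also show how much the choice of framing matters. Here is a minimal worked sketch (our illustration, not part of the original text) that computes the relative and absolute risk reductions from those numbers; "50 percent fewer defects" sounds far more impressive than "1 fewer case per 1,000 births," even though both describe the same effect.

```python
# Minimal sketch (not from the book): relative vs. absolute risk reduction,
# using the folate figures quoted in the text.

baseline_risk = 2 / 1000   # neural tube defects without folate: about 2 in 1,000 births
treated_risk = 1 / 1000    # with folate: about 1 in 1,000 births

absolute_reduction = baseline_risk - treated_risk        # 0.001 -> 1 fewer case per 1,000 births
relative_reduction = absolute_reduction / baseline_risk  # 0.5   -> "50 percent fewer defects"
number_needed_to_treat = 1 / absolute_reduction          # about 1,000 women take folate to prevent one defect

print(f"Absolute risk reduction: {absolute_reduction:.3%} (1 fewer case per 1,000 births)")
print(f"Relative risk reduction: {relative_reduction:.0%}")
print(f"Number needed to treat:  {number_needed_to_treat:.0f}")
```

Both numbers are correct; the relative figure is simply the bigger-sounding one, which is why it is the number most likely to appear in a campaign or press release.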

Regardless of who is behind exaggeration—or what their motivation is—a host of other individuals and organizations are eager to amplify it, including university public relations machines, advocacy groups, and, of course, the news media.

So it’s important to consider whether the people behind the numbers benefit from the health messages you receive. Whenever you hear someone touting a new test or treatment, it’s a good idea to ask whether the researcher or the organization that paid for the work stands to benefit financially (or otherwise) from its success. Increasingly, you can find answers about who is funding research. Disclosure information is now routinely available in medical journal articles and in the article index in the U.S. National Library of Medicine.4 The federal government’s registry of clinical trials5 also provides funding information. Discovering the affiliations of organizations or professional experts quoted in the news can be tricky. Journalists sometimes report this information in news stories, but not reliably. One useful source is the Integrity in Science Project, a database sponsored by the Center for Science in the Public Interest, which allows you to search for corporate conflicts of interest among scientists and nonprofit organizations.6

Be wary of information from sources that have important interests—besides your health—in promoting a new treatment or product. This doesn’t mean that you should dismiss what they have to say out of hand. It just makes it more imperative that you get all the relevant numbers. Then you can decide for yourself whether the news is too good to be true.

We hope this book will help you approach health messages critically—not with cynicism, but with healthy skepticism. This means not accepting claims at face value simply because they come from a prestigious source or because it seems that everyone else accepts them. It is worth reexamining a diagram that we presented earlier in the book:

[Diagram reproduced from earlier in the book]

Healthy skepticism helps you push back against unfounded and exaggerated claims and avoid unnecessary fear and false hope. It takes discipline to look beyond claims—to find out the numbers, to evaluate the science they are based on, and to learn who is behind the claims. But that’s what you have to do—and what you are now ready to do—to really know your chances.

Copyright © 2008, The Regents of the University of California.

Know Your Chances: Understanding Health Statistics is hereby licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license, which permits copying, distribution, and transmission of the work, provided the original work is properly cited, not used for commercial purposes, nor is altered or transformed.
