“Faith-based research is okay, shoddy research is common, but the two interact and end up ... in PNAS?”1
Now that the debate over health care reform is beginning to heat up, expect to hear a growing chorus of voices insisting that the key to the future health of Americans is more research funding for the National Institutes of Health. An early salvo in this direction was published recently in PNAS, the flagship publication of the National Academy of Sciences.
The article2 is an extraordinary exercise in statistical distortion. Its basic points are these: (1) rising expenditures on NIH research correlate with rising indicators of health in America; (2) as Americans get (on average) older, the economic well-being of the nation will depend increasingly on their ability to lead economically productive lives; (3) this, in turn, will demand better health interventions for an aging population; therefore, (4) NIH budgets need to keep up with this economic imperative.
The paper concludes: “the size of NIH expenditures relative to GDP should quadruple to about 1% (≈$120 billion) and be done sufficiently rapidly (10 years) to compensate for the slowing growth of the …”
The paper includes four graphs (figure 2), each of which shows a curve of rising funding level over time for a particular NIH research institute, and a curve showing death rates from the diseases that each institute focuses on. On three of the graphs (heart disease and the Heart, Lung and Blood Institute; stroke and the …)
As
aggravating (and common) as it is when scientists use illogical or unscientific
arguments to promote science, it’s perhaps even more irritating when they employ
bad or deceptive scientific arguments. Let me just point out a few problems with
this paper, and highlight a few issues that it raises.
There are
many pathways to good health, many variables that contribute to good health,
and complex, incompletely understood relations among these pathways and
variables. Standard of living, level of
education, access to affordable medical care, levels of income equity, diet,
climate, and other factors all have been shown to have a bearing on public
health outcomes. Claiming a direct
causal relation between health outcomes and a single variable (in this case,
NIH funding) without considering how other variables may be contributing to the
outcome is inherently misleading. To
understand why this is so, imagine that the Clinton Administration’s efforts in
the mid-1990s to implement meaningful health care reform had actually
succeeded, so that in the ensuing 15 years, millions more people had had
access to affordable health care than has actually been the case. The resulting improvements in the health of the
average American would have occurred independently of whatever was going on
with the NIH budget. But a similar
analysis to the one in the PNAS paper
would have shown the same strong correlations between NIH budget increases and
the enhanced health outcomes; the key causal role of changing health policy
would be invisible.
A variant of
this hypothetical case is on display in countries that actually do make an
effort to provide health care access for all citizens. As recently summarized in an article in the June 25th issue of The Economist,3 “Comparisons with other rich countries and within the …”
The authors
state that the fit between total NIH funding and death rate curves (their
Figure 4) explains “98% of the variation of age-adjusted mortality rates. Although [this] does not prove causation it
makes the search for alternate explanatory variables of equal power
difficult.” Nonsense. Given that the authors don’t look at any other variables, they cannot test the real-world validity of their correlation. This is an act of faith, not science; it is a
classic formula for generating spurious correlations. For example, given that budgets for pretty
much everything have gone up during the last fifty years, and that budget
trends across government programs tend to track one another, there are no doubt
many other budget curves that could be nicely matched to the death-rate
curve. And in any case we have seen already
that other countries can deliver better health to their citizens for less money
and with less research—so even if the correlation had some validity, it would
merely underscore the inequity and inefficiency of the U.S. system.
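The shared-trend problem is easy to demonstrate. In the toy sketch below (hypothetical numbers, no connection to the paper’s actual data), a “budget” series that does nothing but grow with time and a “death rate” series that does nothing but shrink with time nonetheless show a strong correlation, despite having no causal link whatsoever:

```python
import numpy as np

# Hypothetical illustration: two series driven by nothing except the
# passage of time. Neither causes the other, yet they correlate strongly
# because both track a smooth trend.
years = np.arange(1950, 2005)
t = years - 1950

budget = 1.05 ** t             # a "budget" growing ~5% per year
death_rate = 600 * 0.98 ** t   # a "death rate" falling ~2% per year

# Pearson correlation between the two trending series
r = np.corrcoef(budget, death_rate)[0, 1]
print(f"r = {r:.2f}, R^2 = {r * r:.2f}")
```

Swapping in defense spending, postage rates, or any other steadily rising budget line would work just as well, which is exactly the point: a high R² against a smooth trend is cheap.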
The paper
further errs by attributing to NIH (and the NIH budget) activities and outcomes
that in fact had little to do with NIH.
For example, the authors state that, in addition to medical
interventions, “public health initiatives against smoking, and promoting
screening for breast and colon cancers, led to the initiation of …”
Obviously my
point is not that NIH does not contribute to the nation’s health in important
ways, but that the contribution—one of many, many variables—cannot, in theory
or practice, be teased out by discovering correlations between budget trends
and health trends. This sort of analysis
contributes to the notion that funding policy for NIH amounts to health policy
for the nation. We’ve already tried that
trick. After the failure of health care
reform during the Clinton Administration, the government’s fall-back policy was
to double NIH’s budget between 1998 and 2003.
Surprise: health care costs continued to skyrocket, millions more
people became disenfranchised from an ever-more-unaffordable health care
system, and more and more municipalities and corporations began to sink under
the mounting obligation of providing unaffordable health care for their
employees and pensioners. How much
healthier might the nation have been if these trends had been reversed (even if
NIH funding had stayed flat!)?
One final
point: Imagine a publication in a
prestigious journal claiming that pharmaceutical company revenues were strongly
correlated with positive public health outcomes—that the more drugs the
companies sold, the healthier the nation became. And imagine that the authors concluded, based
on their analysis, that government policies should therefore encourage
pharmaceutical profits, e.g., by extending patent lives or providing tax
credits to the industry. And now finally
imagine that the authors of the paper acknowledged that their research had been
supported by millions of dollars of research funding from the pharmaceutical
industry. Would this paper have any
credibility? Could it even be published?
The PNAS article recommends a ridiculous four-fold NIH budget increase over the next decade. The article also includes, on the bottom of the first page, in small print, this statement: “The authors declare no conflict of interest.” Yet the first author of the paper was described in an August 21, 2002 New York Times article5 as “among the 10 biggest recipients of National Institutes of Health grants,” and the research reported in the PNAS article was also NIH supported. What’s the difference between the hypothetical case and the real one?
About the Author: Daniel Sarewitz is the co-director of CSPO.
1 Comment offered by a colleague who,
having yet to achieve tenure, prefers to remain anonymous (which in itself
raises the obvious question of how the tenure process is protecting freedom of
expression—but that’s another post). I
thank this same invisible person for help with this Soapbox post.
2 Manton, K., Gu, X.-L., Lowrimore, G., Ullian, A., and Tolley, H.D., 2009, “NIH funding trajectories and their correlations with U.S. health dynamics from 1950 to 2004,” PNAS 106(27): 10981-10986.
3 “Heading for the Emergency Room,” 2009, The Economist, June 25, p. 75.
4 Proctor, R., 2000, The Nazi War on Cancer.
5 Zernike, K., 2002, “Duke Repays $700,000 in Grant Money and Reports a Swindle,” The New York Times (August 21).

