Public failures in US science policy (Draft) By Barry Bozeman and Daniel Sarewitz
A considerably revised version of this article is published in Science and Public Policy.
Bozeman, B. & Sarewitz, D. (2005, April). Public values and public failure in US science policy. Science and Public Policy, 32(2), 119-136.
Introduction
Science policy in the United States is constructed upon a foundation of economic rationality. In this respect, science is no different from many other policy domains. When Americans consider institutional arrangements for delivery of public goods and services, they either begin with systematic economic reasoning or, more often, with less systematic assumptions filtered through the laissez-faire economics nostrums embedded deeply in U.S. political culture. When Americans seek to determine the value of resources, goods and services, they reflexively look for price indices, eschewing more complex and indeterminate approaches to assessing value.
In some public policy realms the predominance of neoclassical economic reasoning poses few, if any, problems. But for many years it has been clear that the economic efficiency shoe is an uncomfortable fit for some policy feet. The focus of this paper is on the manifest problems of economic valuing for science and technology activities. We argue that the pervasive use of market valuation, market failure assumptions, and economic metaphors has two important consequences that shape the structure of science policy in undesirable ways. In the first place, reliance on economic reasoning tends to shift the discourse about science policy away from political questions of “why?” and “to what end?” to economic questions of “how much?” If we assume that everyone is made better off by investments in science, then the only sensible policy question is “how much science can we afford?” But if we assume that science’s benefits and costs, both economic and social, are extremely varied and affect citizens in very different ways and to different degrees, public value questions emerge as at least as important as the economic ones.
A second and related consequence of dominant economic frameworks for public policy is that the role of the public in science policy is easily shunted aside. If science policy is about nonzero sum investments with universally distributed benefits, then an investment ideology is appropriate and, thus, decisions are technical ones best handled by scientific elites and public sector “fund managers.” This approach has especially dominated state governments’ technology-based economic development programs (see Bozeman, 2000). But if one considers public values as just as important as science and technology’s economic return on investment, the choices among public values become of paramount importance. With a public value orientation, rather than an economic investment framework, an expanded role for ordinary citizens is required because the choices are political ones about higher order values and individual preferences rather than technical ones about whether innovation in extruded plastics will spur economic growth more than innovation in ceramics.
We argue that economic rationales for science and technology, important as they are, should never be permitted to displace public value rationales, just as intrinsic criteria for judging science, important as they are, should never displace external criteria. But social theory offers few alternative ways of thinking systematically about science policy. There is no social theory of scientific outcomes; there is no social choice theory for science. There is a market failure model that tells us conditions under which government should “intervene.” We provide an alternative schema, which we call the “public failure” model. This model gives rise to a number of criteria that can allow policy analysts, decision makers, and administrators to make rigorous judgments about the public value and distributional implications of science policy, in addition to questions of economic efficiency and economic growth.
The public failure model depends on the notion that there are such things as "public values," just as the market failure model derives from an idealized notion of a market, generally defined in terms of efficiency. As is the case with so many concepts, especially normative ones, “public value” is more often defined by the context in which it is used than by stipulation. In some cases, “public value” seems to mean the same as “public interest” (Flathman, 1958), a notoriously fuzzy concept (Schubert, 1960; Sorauf, 1957). In others, “public value” equates with the core legal features and jurisprudential warrants of a polity (e.g. Fuller, 1964). Some argue that public managers convey public value in their authoritative acts. By “public values” we mean those values that embody the prerogatives, normative standards, social supports, rights, and procedural guarantees that a given society aspires to provide to all citizens. This is not the same as a public good, because public values are not goods, either tangible (dishwashers) or less tangible (e.g. scientific information). Public values are not Platonic ideals; rather, they vary across cultures and time, depending on the common values prized in a given culture. Public values can be posited but, apart from careful and lengthy empirical inquiry, cannot be specified for any given locus. Even in the absence of empirical inquiry into the particular public values of particular cultures, public value is a useful enabling concept, not unlike the enabling concept of the perfect market.
A key assumption of our paper, and of the public failure model (Bozeman, 2002), is that market efficiency and public value are not closely correlated. Public failure can occur alongside market success; public failure and market failure can occur simultaneously; and, in some happy circumstances, public success and market success can coincide. To illustrate the disjunction between market efficiency and public value, one need only consider the case of AIDS drugs, an excellent illustration that market failure and public failure are not the end points of a single dimension. AIDS drugs represent a remarkable market success in rich nations, where initial public investment in research under conditions of market failure led to private sector development of effective pharmaceutical interventions. But the global perspective is different, since on the order of 95 percent of HIV and AIDS sufferers worldwide do not have access to these expensive drugs. It is possible, that is, to have extremely efficient markets and unconscionable failure of public values. The case of AIDS medicines is an illustration of the constricting moral and operational knots we tie ourselves in when we rely on traditional concepts of market efficiency as a guide to science policy and its concomitant social outcomes.
In this paper we begin by briefly summarizing the importance of market thinking in science policy. We then outline our alternative, public failure theory, an approach that, like market failure theory, suggests diagnostic characteristics and guidelines for action. Much of the paper is devoted to developing case examples of particular types of public values failure in contemporary science and technology policy. In a concluding section we return to the two problems cited above: limited public participation in science policy and the displacement of outcomes and social impact issues by a narrow focus on expenditure increments.
Market Thinking in U.S. Science Policy
While market-based assumptions affect all U.S. policy-making, they are particularly prevalent in science policy. For example, economic markets have been invoked as the central metaphor for the efficient operation of the science enterprise itself. In a seminal paper, Michael Polanyi (1962) portrayed science as a self-regulating marketplace of ideas whose efficiency could not be improved, but could only be compromised, by outside intervention. Such economic metaphors have had a powerful impact on public discourse about science policy (see discussion in chapter 3 of Guston, 2000). More relevant to our discussion here, however, are the dual convictions that science and technology are the major contributors to economic growth, and that the role of government in science is predominantly to "intervene" only when markets fail to promote sufficient private investment in science. Our point is not that these convictions are misplaced—they are, on the contrary, quite robust. But they are an insufficient foundation for science policy.
If one looks at the evidence of both case studies (Nelson, Peck and Kalachek, 1967; Rosenbloom and Spencer, 1996; Mowery and Rosenberg, 1989) and econometric studies (e.g. Griliches, 1995; Jones, 1995; Denison, 1962), it is difficult to challenge the idea that science is, indeed, the engine of economic growth. Whether familiar estimates (Solow, 1957; Bureau of Labor Statistics, 1989) are correct, and whether the contribution of science and technology is 30%, more, or less, depends upon the particular combination of unrealistic assumptions one wishes to embrace. But the contribution of science and technology, taken together with education and human capital, does indeed seem to account for the majority of economic growth in most industrialized nations.
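To see where such figures come from, and what they assume, consider the standard growth-accounting decomposition (a sketch in our notation, not a formula reproduced from the studies cited):

$$ Y = A\,K^{\alpha}L^{1-\alpha} \quad\Longrightarrow\quad \frac{\dot{Y}}{Y} = \frac{\dot{A}}{A} + \alpha\,\frac{\dot{K}}{K} + (1-\alpha)\,\frac{\dot{L}}{L}, $$

where $Y$ is output, $K$ and $L$ are capital and labor inputs, $\alpha$ is capital's share of income, and the residual $\dot{A}/A$ (growth in "total factor productivity") is read as the contribution of technical advance. The attribution holds only under assumptions such as competitive factor markets and constant returns to scale, and it labels as "technology" whatever growth the measured inputs cannot explain, which is one reason the resulting percentage estimates vary so widely.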
Our concern with the notion of science as engine of the economy has nothing to do with its veracity. Rather, we are concerned that the power of this market perspective sometimes overwhelms our ability to think systematically about science as the engine of social change, and thus acts as a powerful force for conservatism. The market perspective implies that if we leave science unfettered, the best of all possible worlds will emerge. The very idea that studies of economic growth employ production function economics says much about the worldview analysts bring to the problem. Growth is taken as an object for maximization. While we are happy to assume that most analysts are pursuing a value-neutral means of understanding growth, the cumulative effect of seeking knowledge for the maximization of growth (when taken with a world-wide social obsession with monitoring growth rates) is that it is all too possible to begin thinking of science and technology exclusively in terms of their contribution to economic growth while giving short shrift to the effects on social change, quality of life, or even equitable growth.
Evidence for this phenomenon is overwhelming. Virtually every science policy document issued by the federal government or scientific organizations is grounded on assertions about the role of science and technology in economic growth, with social outcomes portrayed as secondary consequences of this growth. As suggested by recent White House policy statements (e.g. Clinton and Gore, 1994; Lindsay, 2001), and a thousand other policy documents, economic arguments are both the literal and the metaphorical leading edge of science policy discourse, and have been throughout the post-World-War-II era (e.g., see V. Bush, 1945). When it comes to science policy, all good things flow from economic growth. Few science policy documents pay more than lip service to the idea that scientific and technological advance may contribute at once to economic growth and to social change that can be inequitable, disruptive, or in other ways problematic for modern, democratic societies.
But policy for economic growth is not the same as policy for social well-being. For example, while it is arguable that science’s contribution to economic growth is best attained with relatively limited government planning and adjustment, it is clearly not the case that quality-of-life goals or social goals will be achieved by a “hands off” approach. But even if lack of interference in both economic markets and the marketplace of scientific ideas is the best route to efficiency, no one posits an efficient social choice market, much less an invisible hand ensuring optimal outcomes.
Dominant though it may be in policy discourse, however, the market failure paradigm—either its sophisticated political economy rationale or its unsophisticated argument about the necessity of keeping business and government institutions in separate roles—cannot be a sufficient basis for clear thinking about science policy. The arguments for focusing on public value, rather than market failure, as the first-order governing principle for science policy are quite simple and unavoidable, and we develop them in the sections that follow.
An Introduction to Public Failure Theory
The concept of the public interest and public value has had an important role both in the articulation of the founding documents of the United States, and in political discourse ever since. As we have said, however, analysis based on economic interests and values has had, especially in the past half-century, a much stronger influence on many types of policy making than arguments rooted in consideration of public values. The reasons for this situation are clear (Bozeman, 2002). The market failure model is backed by theory, diagnostic criteria, and measurement tools that are much more concrete and usable than anything that has been mustered on behalf of public values. Simply put, it is easier to conceptualize, implement, and evaluate policies based on market failure models than those based on articulations of public values. Moreover, the market failure model is easy for the public and politicians to understand, and for that reason alone it may be commonly invoked. Policies based on market failure analysis are likely to be much less politically contentious than those based on assessments of public values. And, in the case of science policy, it is certainly not surprising that scientists themselves—who play a large role in determining such policy—gravitate toward a frame of reference that can be quantified and unambiguously defined.
For economists and policy makers alike, the market failure rationale for policy making derives from the belief that open markets are not just the most efficient allocators of goods and services, but also the most effective indicators and implementers of public preferences. Government intervention in the market is thus justified only when market processes fail and “prices lie—that is, when the prices of goods and services give false signals about their real value, confounding the communication between consumers and producers” (Donahue, 1991, p. 18). The causes of such failure are oft-articulated and well-understood: externalities; steep transaction costs; distortion of information or inhibition of information flow about a good or service; and monopolistic behavior or other competitive failures. (For a precise definition of market failure, see Bannock, Baxter and Davis, 1998, p. 117.)
Despite the relative rigor of market failure theory, no economic analysis can encompass the full range of policy choices facing society. Most generally, economics is about the private value of public things, not about the public value of things both private and public. Efficient pricing cannot solve all problems of public values, and indeed, there is a strong societal sense that in some domains, economically optimal solutions are entirely inappropriate—for example, in creating a market for blood or organ donors. And some political actions explicitly eschew economic efficiency in favor of social equity—for example, the Americans with Disabilities Act. Yet economic approaches have been applied, often controversially, in domains that encompass public values, such as in the valuation of natural resources, the creation of markets for pollution-permit trading, and the design of science and technology programs.
Even if one accepts market-failure analysis on its own terms, it is unavoidably incomplete, for two rather obvious reasons. First, just because market failure occurs does not mean that government action can eliminate or avoid the causes of this failure. For example, environmental externalities may signal market failure, but a government decision to respond to these externalities is not an a priori demonstration that doing so will lead to more social benefit. While the U.S. government invested billions in collaboration with the Big Three automakers in the Partnership for a New Generation of Vehicles program, Japanese automakers on their own produced a first generation of hybrid-electric automobiles that outstripped anything the Americans had to offer. Second, the absence of market failure does not a priori imply public success, and therefore is not a sufficient reason to eschew government investment. That is, market value does not equal public value. The situation with AIDS drugs in the developing world is the archetypal example: the market has succeeded marvelously, while millions wait to die.
As obvious as these arguments may be, they are insufficiently available for purposes of public policy making. The aim of public failure theory is to provide an analytical framework that can compete with market failure theory in terms of analytical rigor and political effectiveness. The notion of public failure derives from the obvious reality that sometimes neither the market nor the public sector is providing goods and services necessary to achieve certain core public values. A public failure approach to policy making changes the terms of debate by making government (and public values) something other than a subsidiary issue of efficiency in market performance. The key question in market failure rhetoric is this: “Are prices distorted due to a failure of communication between consumers and producers?” The key question in public failure goes an essential step further: “Regardless of market efficiency, is there nonetheless a failure to provide an essential public value?”
We recognize, of course, that “prices” are more tangible—or at least quantifiable—than “public values” as a unit of analysis. However, there is very little, if any, fundamental disagreement in the United States about the existence of a fairly comprehensive set of core public values, especially those embodied in the nation’s founding documents, such as the right to subsistence, the rule by consent of the governed, freedom of speech and religious practice, and habeas corpus. This basis is more than sufficient for us to proceed, because public failure theory is not a decision-making tool (a la cost-benefit analysis), but a framework to promote rigorous deliberation about the relation between economic value and public value. We note, by analogy, that just as any discussion of public values derives from a set of democratic ideals that can at best be imperfectly achieved, so is market failure rhetoric rooted in the idealized construct of a perfectly free marketplace. Conversely, just as this unattainable economic state does not prevent the pursuit of rigor in discussing market behavior, neither should the idealized nature of public values preclude an analogous rigor in public value discourse. Our purpose here is to promote such rigor in the discussion and development of science policy.
Public Failure in Science Policy
Public failure occurs when core public values are not reflected in social relations, either in the market or in public policy. Bozeman (2002) elucidates criteria for identifying public values failure, criteria that to some extent mirror market failure criteria, but which focus on public values rather than efficiency of market transactions. While there is no implication that this is a set of canonical public value criteria, they provide a starting point. We suggest that these public failure criteria may enhance discourse and decisions about the allocation of responsibilities between the public and private sectors, and about the shaping of allocations within the public sector. In this section we briefly describe each of the six criteria and use an example from R&D policy to illustrate its application and implications. We do not provide an extensive review of the criteria; more detail is provided in Bozeman (2002).
1. Public Failure in Values Articulation and Aggregation: The Case of Peer Review
Public failure can occur when expression of public values is somehow stifled or distorted. The ability of those entrusted to take action on the basis of such expression may thus be compromised. For example, if campaign financing procedures lead to conspicuous conflict between public values (as elicited, say, through polling) and the values of elected officials, then there is a potential for public failure. This type of conflict may provide an incentive for private investment in lobbying that is rational by economic standards yet counter to the larger public interest—as when efforts by the insurance industry and its allies to prevent health care reform have overcome a broader public desire for a more affordable and equitable health care delivery system. In science policy, a pervasive cause of values-articulation public failure is the absence of mechanisms that allow non-scientists to have a significant say about public investments in science. The reasons for this failure are clear: How can a non-scientist be expected to make a sensible choice between, say, funding research aimed at understanding polycystic ovarian syndrome versus research on the impact of bovine hormone in milk on breast cancer?
Of course it would take a great deal of effort even for most scientists to have an informed opinion about the largely unknowable trade-off between two important lines of applied medical research. Moreover, politicians (and the general public) are as likely as scientists to understand their own preferences for, say, favorable outcomes in breast cancer research vs. favorable outcomes in astronomy or in polar sciences or in mental health. This is not an overestimation of the ability of the “median voter” to understand science trade-offs but, rather, a perhaps more realistic (and pessimistic) estimate of the relative ability of experts to make high level values choices.
Public institutions are, in general, poorly equipped to address public value in science policy. In large measure this is because of the expert-driven nature of science policy, a process whereby programs and spending alternatives emerge in large part from scientists, mediated by science agencies or professional organizations (e.g., the National Academy of Sciences), with almost no cross-science comparisons and little attention to the outcomes and values served by disparate packages of science and technology research. The institutionalization of peer review as the paradigmatic means of choosing among research projects and, more recently, programmatic awards and grants for new research centers and national science and engineering facilities, sometimes has the effect of suppressing consideration of public values. When the hallmark of a “good decision” is peer review, then “good” has no necessary connection with public value but, instead, with the scientific credentials of the individuals and institutions performing science. While the capabilities of the performers are obviously of great importance, they are less important than the content of the science itself: what is performed. When a focus on peer review (or any other such procedural mechanism) displaces deliberation about public value, it constitutes a public failure.
There is some evidence that science bureaucrats are aware of the role of peer review in goal displacement. In response to Congressional prodding, the National Science Foundation (NSF) has sought to use the peer review process to enhance the societal value of its research portfolio. In addition to standard criteria of scientific merit, NSF added in 1997 the criterion of social benefit to its peer review process. NSF then commissioned the National Academy of Public Administration (NAPA) to evaluate how well this effort to incorporate public values into peer review was working. NAPA reported back (NAPA, 2001) that the approach, as implemented, was unlikely to have much positive effect. Problems with NSF’s approach ranged from a lack of “quantitative measures and performance indicators to track the objectives of the new merit review criteria” (p. 7) to skepticism or even outright opposition on the part of reviewers to the inclusion of social impact criteria to begin with. NAPA went on to recommend a variety of actions that NSF could take to correct these problems, such as improving “the conceptual clarity of the objectives of the new criteria” (p. 8) and ensuring “genuine attention to the goals of the new criteria throughout the entire review cycle” (p. 9).
The irony of the NSF social impacts criteria and the NAPA report, of course, is that neither addressed the underlying source of public failure. No one denies that the scientific community has great skill in assessing technical quality of research, but who has vested it with special training, skill, or legitimacy in assessing its social value? Moreover, there is no particular reason to believe that the social priorities of scientists are representative of society. Indeed, surveys of scientists’ political opinions and values would lead us to believe there are often large differences between scientists and the general public (e.g., Bauer, et al., 2000; Plutzer, et al, 1998; Barke and Jenkins-Smith, 1993), though scientists are closer to other elite groups such as journalists. Just as with other professions and demographic groups, scientists' values probably resemble those of persons who have socioeconomic attributes and life experiences similar to their own. And, of course, citizens have not voted for scientists or in any way designated them as the "official" articulators or judges of public value.
Due to its jurisdictional structure, Congress is generally unwilling to make either cross-science choices or systematic choices among the public values that science serves. Likewise, the science bureaucracy works within scientific fields, disciplines or objectives, rarely among them. Scientists themselves have even less incentive for the internecine warfare that would arise with a more systematic assessment of the values associated with a diverse set of desired scientific outcomes. But public values operate at a level where science policy receives very little attention: cross-science comparisons, and the opportunity costs associated with resource allocation decisions. It is also at this level of broad preferences where the general public, or at least the attentive public, can best contribute, but the structure of U.S. science policy provides little opportunity to do so. This is a built-in public failure affording limited opportunity for values articulation. (See Kitcher, 2001, for a formal argument about enhancing the public role in articulating public values for science.)
When the putatively "value neutral" science policy funding machine is temporarily thrown off kilter, it is generally because an issue emerges that grips the public imagination to such a degree that public value issues simply cannot be ignored. The most recent case in point is the stem cell controversy, which provides an excellent demonstration that it is not the complexity of scientific issues that forestalls public participation and public values articulation. The scientific issues in the various stem cell debates are no less technical and esoteric than in other areas of research, but the values issues are so fundamental and compelling that research cannot simply proceed apace. When the conduct of research requires one to consider such issues as “what constitutes a human being?” then the momentum of the science policy funding machine slows down and the role of public values is brought to the fore. The result highlights the limited capacity and the dearth of institutions available for including public values in science. With respect to stem cells, a cacophony of approaches has emerged, from legislation to reports of science organizations to presidentially appointed committees. In the latter case, it is noteworthy that the Clinton and subsequent Bush administrations each appointed their own expert bioethics committees with distinctly different memberships, to reflect the respective administrations' ideologies. Nevertheless, the stem cell controversy is noteworthy not only because it highlights the difficulty of articulating public value but because it represents one of the few cases where public values (or at least ethical dilemmas) receive much attention. For good or ill, the stem cell controversy slowed down the science policy funding machine.
2. Public Failure and “Imperfect Monopolies”: The Case of Clinical Trials
Whereas private sector monopoly is an indication of market failure, in some cases the inability of a government activity to protect its monopoly may lead to erosion of the public good. For example, foreign policy is a legitimate government monopoly, and any competition from unauthorized envoys could damage the broader public interest, as when individuals negotiate with foreign regimes that are holding hostages or political prisoners. Similarly, implementation of justice is a necessary government monopoly that, when taken into private hands, represents a well-recognized public failure.
Regulation of private sector activities to protect public welfare is a widely accepted role of government, although the appropriate degree of regulation is often highly contentious. In the area of clinical trials to determine the efficacy of pharmaceuticals and other medical interventions, the government has granted research institutions and scientists considerable autonomy. This autonomy is justified by faith in the self-policing capacity of the scientific community, especially as embodied in its claim to objectivity—or at least disinterest—through the scientific method, and to quality control through the peer review mechanism.
The protection of humans in scientific experiments is a well-established public value, enshrined in international law through the Nuremberg Code and the Helsinki Declaration (Woodward, 1999), and nationally through such codified principles as informed prior consent. All experiments involving humans that are funded, in whole or part, with federal dollars are overseen by Institutional Review Boards—decentralized, self-policing oversight bodies aimed at “protecting the rights and welfare of human subjects of research” (CFR Title 45, Part 46.103) and ensuring that “[r]isks to the subjects are reasonable in relation to anticipated benefits” (CFR 45: 46.111). Such experiments include tests and trials to demonstrate the efficacy of new drugs, therapies, and procedures, both as part of the process of gaining government approval for general use, and as a means of informing physicians about the relative value of available options.
In September of 1999, 18-year-old Jesse Gelsinger died while undergoing gene therapy for a rare liver disease. Gene therapy had long been touted as a potentially miraculous emerging line of treatment for a wide variety of serious genetic disorders, but its promise had remained unfulfilled, and Gelsinger’s death made national news. Early reporting on his death suggested only that something had gone terribly wrong in the experiment, but that all appropriate processes and procedures had been followed to ensure that risk was minimized and his participation was fully consensual (Wade, 1999). But deeper investigations revealed irregularities. The consent forms that Gelsinger signed misrepresented the dosages that were to be administered, and did not include information about animal deaths from similar treatments. Evidence of high toxicity and adverse side effects in earlier experiments was ignored. The doctor in charge of the experiments—as well as the university he worked for—had a financial stake in the company that would have produced the new gene therapy (Nelson and Weiss, 1999).
Before Gelsinger’s death grabbed the headlines, academic studies of clinical trials had been painting a more dispassionate, less publicized picture of public failure. A number of studies revealed that clinical trials directly or indirectly supported by pharmaceutical companies often yielded more favorable assessments of new therapies than trials that were not tied to the private sector in any way. In one analysis, only five percent of company-sponsored studies on anticancer drugs yielded unfavorable assessments, while for studies sponsored by non-profits, the unfavorable rate was 38 percent (Friedberg et al, 1999). An investigation of calcium-channel antagonists, a class of drug used to treat cardiovascular disease, demonstrated “a strong association between authors’ published positions on the safety of calcium-channel antagonists and their financial relationships with pharmaceutical manufacturers” (Stelfox et al, 1998, p. 101). An analysis of published symposia proceedings showed that “[a]rticles with drug company support are more likely than articles without drug company support to have outcomes favoring the drug of interest” (Cho and Bero, 1996, p. 485).
Few have argued that such results demonstrate scientific fraud. More likely, “[c]lose and remunerative collaboration with a company naturally creates goodwill [that] can subtly influence scientific judgment in ways that may be difficult to discern” (Angell, 2000, p. 1517). Some scientists have publicly claimed that they are not subject to such influences. James Wilson, the researcher in charge of the Gelsinger trial, said: “To suggest that I acted or was influenced by money is really offensive to me. . . You’ve got to be on the cutting edge and take risks if you’re going to stay on top [scientifically]” (Nelson and Weiss, 1999, p. A1). While such claims to special immunity from human weakness may or may not ring true, researcher ties to industry by definition constitute a conflict of interest which, if not revealed to patients in trials, undermines the principle of informed consent and, if not apparent to peer reviewers and publishers of research articles, can obscure the implications of research results. Bodenheimer (2000) catalogued a variety of ways in which a drug test can be carried out to favor one result or another without rendering the data itself invalid (for example, “[i]f a drug is tested in a healthier population . . . than the population that will actually receive the drug, a trial may find that the drug relieves symptoms and creates fewer adverse effects than will actually be the case” (p. 1541)).
In terms of public failure theory, a particularly troublesome attribute of this problem lies in the difficulty of actually documenting the threat to public values. The Gelsinger story was atypical in that the connection between the conflict of interest and the public consequence— Gelsinger’s death—was obvious (even if not explicitly causal). However, when published studies comparing one drug to another influence a physician to prescribe one drug rather than another, the very existence of the public failure may be difficult to ascertain, and the public-failure consequences highly diffused. It is, indeed, a testimony to the transparency of the biomedical and medical-legal research enterprises that this problem did emerge.
Federal regulations for oversight of human subjects research do not explicitly require Institutional Review Boards to consider conflict of interest in the approval process (CFR 45:46), although the Food and Drug Administration, for example, requires that applications for drug approval be accompanied by disclosure of investigator conflicts in all research supporting the application. The final report of the now-defunct National Bioethics Advisory Commission (2000) recommended that the government develop specific guidelines for defining and regulating conflict of interest in human-subjects research, and that conflict of interest should be disclosed to research participants as part of the prior consent process. Enforcement of these recommendations would help to reestablish the government monopoly over protecting human subjects of medical research, and thus help to reverse a case of public failure in science. (See Goldner, 2000, for a comprehensive discussion of conflict of interest in biomedical research.)
3. Public Failure due to Scarcity of Providers: The Case of Geographic Information
Protection of a core public value may depend on the presence of a sufficient number of providers of that value. If market signals are insufficient to attract the necessary number of providers, and if the government fails to step in, then public failure may occur. Few would disagree that the number of high quality public school teachers is less than optimal, due to many factors including relatively low salaries and other disincentives. This may be counted a public failure. About half of all Americans believe in the right of a woman to obtain an abortion (and less than half are firmly opposed), so arguably the scarcity of doctors who can provide abortions is a public failure. When certain government activities are deregulated, provider scarcity may follow. For example, when airline deregulation leads to decreased services for rural areas, significant portions of the population may be adversely affected. The market may be operating efficiently, but public failure has occurred.
The federal government has long been recognized as the appropriate source of support for developing and disseminating data on the geographic and physiographic characteristics of the nation. The Lewis and Clark expedition was a famous early example of a government research project aimed at garnering geographic information, and surveying and mapping activities were early mainstays of federal support for science prior to World War I (e.g., Dupree, 1986).
In recent decades, the importance of geographically referenced, or geospatial, data has increased rapidly. This growth has been fueled by new technologies, from remote sensing and geographic positioning satellites to sophisticated computer graphics software, and also by societal demand for new capabilities to monitor and address complex challenges, ranging from environmental protection to emergency management. At the same time, private sector involvement in both the collection and use of geospatial data has increased, leading to a number of dilemmas regarding the appropriate allocation of public and private activities. Considerable attention has been focused on the need to ensure that this increasing private sector role does not result in erosion of public access to information and products that are recognized as public goods.
The revolution in geographic information began with NASA’s launching of the first civilian remote sensing satellite—Landsat 1—in 1972. Efforts to gradually privatize the Landsat program were initiated in 1979 by the Carter administration. Two years later, the Reagan administration began advocating a more rapid shift to privatization, which in turn led to the passage of legislation in 1984 (P.L. 98-365) that privatized the sale of Landsat data, and encouraged private sector development of future satellites. These actions took place despite studies indicating that privatization was not yet economically sustainable. And indeed, for the next five years, the Landsat program was in a state of constant fiscal crisis. By the early 1990s, with the operational satellites (Landsats 4 and 5) beyond their design life, and no concrete plans for replacing them either with public or private satellites, the very existence of the Landsat program was in jeopardy (NASA, 1998; NRC, 1991).
In 1992, Congress took action to ensure the continued provision of satellite-based geospatial data. In doing so, it explicitly noted that the privatization effort had failed to provide the public good for which it was intended: “The cost of Landsat data has impeded the use of such data for scientific purposes, such as for global environmental change research, as well as for other public sector applications” (H.R. 6133). A new law was enacted (P.L. 102-555) to ensure the continuity of the Landsat program, but also to ensure that publicly supported scientists and others who depended on satellite imagery for non-commercial uses would have access both to archived data and to newly acquired data—access that had been compromised by high prices during the privatization effort.
The near debacle created by the premature privatization of the Landsat program represents a clear example of how a scarcity of providers can deprive society of a public good upon which it depends, and how such public failure can be corrected by appropriate government action. In this case, the obvious failure of a putative market solution—privatizing Landsat—made it easy to recognize the public failure. The more interesting and problematic case, however, occurs when markets are functioning well, but the provision of the public good is not automatically preserved. Recent development of geospatial data policy exemplifies an awareness of this tension, and is in fact something of a success story: a case where policy intervention ensures that market success is combined with public success.
As we have discussed, overcoming public failure depends on general agreement about the desirability of a particular public value, and indeed the idea of geospatial data as a public good is well accepted. For example, a study by the National Academy of Public Administration states that “[m]any believe that [geospatial] data should be made widely available at no cost or at reasonable cost to the user, and that this will satisfy an almost infinite variety of governmental, commercial, and societal needs” (NAPA, 1998, p. 2). One National Research Council (NRC) committee asserted that “it is in the public interest for government to play a leading and facilitating role in coordinating the development of spatial data and to make those data available for public use and exchange” (NRC, 1995, p. 1). Another NRC committee made the even more specific claim “that it is in the public interest and a federal responsibility for the . . . development of an interdisciplinary, multidatabase architecture that will allow disparate databases to become nondestructively interoperable in a common geospatial context” (NRC, 2001, p. 77). While such language cannot easily be derived from the Constitution, it can nonetheless be justified by “countless applications (e.g., facility management, real estate transactions, taxation, land-use planning, transportation, emergency services, environmental assessment and monitoring, and research)” (NRC, 1993, p. 2), and the consequent public benefits that access to geospatial data can confer.
The major obstacle to ensuring such benefits has become the coordination of rapidly expanding private and public sector capabilities in acquiring, processing, and disseminating a wide variety of geographic information. The transition from analog (“paper”) maps and photos to digital databases has enabled a thriving private sector effort to apply spatial data to a diversity of public and private needs. In particular, the rise of “geographic information systems” (GIS) has created the capability of bringing together very different types of data to support decision making. The challenge of assuring that data, software, and hardware capabilities arising from a multitude of providers did not create a sort of geographic information tower of Babel even led to the formation, in 1994, of a non-profit organization, the Open GIS Consortium, “to address the lack of interoperability among systems that process georeferenced data” (Open GIS Consortium, 1999, p. 2).
The situation had rapidly changed from one of a scarcity of providers of a single type of data—satellite imagery—to a scarcity of providers of an integrated product. The point is worth emphasizing: in the first case, market failure resulted in provider scarcity; in the second, the culprit was market success. A new need rapidly arose: to ensure “a common spatial data foundation organized according to widely accepted layers and scales (or resolution) that is available for the entire area of geographic coverage . . . to which other geospatial data can be easily referenced” (NAPA, 1998). For example, if a municipality needed to develop geospatial data to support ecosystem management, it might require spatially referenced data about the location of wetlands and other sensitive areas, about demographic and land-use trends, groundwater chemistry, surface water flow, sources of pollution, distribution of animal and plant species, power lines and pipelines, and of course traditional physiographic data. For these data to be useful, they must be combined as part of a single geospatial database, which means they must be available in compatible formats and coverage, and usable with one software package, on one computer. Such compatibility was not arising from private sector providers acting individually to maximize profit and capture market share.
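As a purely illustrative sketch of the kind of integration such compatibility is meant to make routine (the file names are placeholders, and the use of the open-source geopandas library is our choice for illustration, not anything specified in the documents cited):

```python
import geopandas as gpd

# Hypothetical input layers; the file names are placeholders, not real datasets.
wetlands = gpd.read_file("wetlands.shp")      # sensitive-area layer (polygons)
parcels = gpd.read_file("parcels.geojson")    # land-use / parcel layer (polygons)

# Interoperability step 1: put both layers in a common coordinate reference
# system, here the CRS of the wetlands layer.
parcels = parcels.to_crs(wetlands.crs)

# Interoperability step 2: spatially join the layers so attributes from both
# can be queried together, e.g. parcels that intersect mapped wetlands.
sensitive = gpd.sjoin(parcels, wetlands, how="inner", predicate="intersects")
print(len(sensitive), "parcels intersect mapped wetlands")
```

The sketch works only because both layers can be read into a common format and reprojected into a shared spatial reference; absent agreed standards and foundation data, every such combination becomes a bespoke and costly exercise.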
The need for government intervention has been broadly accepted and recognized in both the public and private sectors. In 1994, President Clinton issued Executive Order 12906, “Coordinating geographic data acquisition and access: the national spatial data infrastructure [NSDI]” (p. 17671), to establish (1) a Federal Geographic Data Committee to coordinate the development of the NSDI; (2) a national geospatial electronic data clearinghouse that would encompass all data collected by the public sector; (3) a process for developing standards to ensure compatibility among public, private, and non-profit sector sources of geospatial data; and (4) a framework of basic geospatial data—“data you can trust”—for a variety of applications that would be available for all. The framework represented a clear embodiment of public action to ensure a public good: “basic geographic data in a common format and an accessible environment that anyone can use and to which anyone can contribute . . . a nationwide community for data sharing” (Federal Geographic Data Committee, 2001).
4. Public Failures and Short Time Horizons: Energy R&D
Human beings pay attention to unborn generations, but they do not do so out of economic rationality. Pricing will not account for consequences that are expected to emerge in the distant future. Thus, there is clearly a public role in guaranteeing the long-term perspective even if there is no short-term market failure. This type of problem has emerged most conspicuously in the area of environmental protection. For example, the price of gasoline in the past did not reflect the public health costs associated with high levels of lead in the air; currently gas prices do not account for the long-term global environmental costs associated with climate change. In the case of lead, government action created a public success—the mandatory introduction of lead-free gasoline—in the absence of market signals. For climate change, the necessary regulatory and R&D investments have yet to be made, so here we see both public failure and market failure arising from short time horizons.
The market-failure paradigm has provided a politically robust rationale for long-term investment in research where no foreseeable application exists. Oddly enough, the paradigm has been less successful as a justification for public research investment where the long-term application is clear, but the short-term incentives for private sector involvement are weak. This irony reflects the apparent repugnance, within market failure dogma, toward choosing "winners and losers." The consequences are starkly illustrated in the case of energy R&D.
The energy crises of the 1970s demonstrated that the long-term U.S. dependence on foreign sources of oil could have far-reaching economic and political consequences. More recently, the growing awareness of the connections between fossil fuel use and global climate change has created a compelling long-term incentive to switch to other types of less-polluting energy technologies. In the face of these two realities, the trend in both public and private investment in energy R&D is striking: such investment has declined by almost two-thirds, in real dollars, since the late 1970s (Dooley, 1999, Fig. 4).
The reasons for this lack of investment are clear: over short time horizons, the market has been working quite well. In particular, and contrary to general expectations, oil prices have been fairly stable as a result of aggressive exploration, enhanced extraction technologies, open global markets, and the willingness of the U.S. to intervene militarily to protect access to oil supplies. In market-failure thinking, continued low energy prices justify neither aggressive public funding of energy R&D, nor government regulatory action to promote efficiency, which could stimulate private sector R&D. Absent the price-pressures of oil embargoes, there has been little motive to innovate. Annual energy technology patents, for example, have been declining since the 1980s (Margolis and Kammen, 1999).
Reduced commitment to research and innovation seems to be reflected in both energy production and consumption trends. In 1970, fossil fuels accounted for ninety-three percent of all U.S. energy production. This proportion declined five percent over the next five years—during the first energy crisis—but only eight percent more over the succeeding fifteen years (Energy Information Administration, 2000). On the consumption end, energy intensity (energy use per unit of economic output) declined 2.4 percent per year in the 1980s; in the 1990s the average annual decline was only 1.5 percent (Energy Information Administration, 2001).
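To make the slowdown concrete (a back-of-the-envelope calculation of ours, not a figure from the Energy Information Administration), compounding those annual rates over a decade gives

$$ (1 - 0.024)^{10} \approx 0.78 \qquad \text{versus} \qquad (1 - 0.015)^{10} \approx 0.86, $$

that is, roughly a 22 percent reduction in energy intensity over ten years at the 1980s rate, but only about a 14 percent reduction at the 1990s rate.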
Given the ongoing volatility of politics in the Middle East, and given the increasing evidence that carbon emissions influence the behavior of the global climate, it is difficult not to see the declining public investment in research on alternative energy sources and more efficient energy technologies as deeply problematic. This declining investment documents a continued failure to overcome the short-term thinking in government energy R&D policy that is made possible by a well-functioning market that continues to deliver stable energy supplies and low prices.
5. Over-Reliance on Substitutability of Resources: Organ Sales as a Medical Public Failure
Market mechanisms may indemnify against the loss of particular resources, or offer substitutes for lost resources. While such mechanisms may be efficient from a market perspective, they may also represent public failure. An obvious example is the calculation that automobile manufacturers might use when determining how safe to make a vehicle. Part of this calculation includes the price of adequate indemnification against lawsuits for wrongful death. While such trade-offs are unavoidable (a totally safe car would be either unaffordable or immobile), they may still represent public failure, for example if the manufacturer determines that the costs of fixing a known problem exceed the expected legal costs, as occurred when Ford failed to correct the exploding gas tanks in its Pinto model (Tietz, 1993). The idea that life is explicitly substitutable offends the sensibilities of most non-economists, and may often imply public failure.
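Stated schematically (the notation is ours, offered only to make the logic of the calculation explicit): a manufacturer facing a known defect corrects it only if

$$ C_{\text{fix}} < p \times D, $$

where $C_{\text{fix}}$ is the cost of the fix, $p$ the expected probability of successful wrongful-death or injury claims, and $D$ the expected damages. When the inequality fails, the economically "rational" course is to pay damages rather than prevent deaths, which is precisely the substitution of money for life that this criterion identifies as a public failure.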
A related example comes from environmental policy. “No-net-loss” policies allow for developers who fill in existing wetlands to construct artificial wetlands as a substitute. However, ecological research suggests that artificial wetlands tend not to have the same species diversity or ecological value as the natural ones that they replace (Kaiser, 2001). Similarly, when old-growth forests are clear-cut and replaced by planting of monoculture forests, the ecological value of the original forest has not been replaced, even if the economic value, as measured by board-feet of lumber, is maintained.
In policies grounded in market failure reasoning, public value failures most often stem from the substitution of money for a tangible or natural resource. One especially interesting case implicating science policies and pertaining to medical practice is money-for-body-parts transactions. In the United States, trafficking in human organs is illegal—a clear signal that public values should take precedence over market efficiency. But elsewhere in the world, evidence of a market in human organs to supply rapidly advancing capabilities in medical science continues to crop up. An active market in kidneys and corneas has been documented in India (Kumar, 1994), and an organ market is also thought to exist in the Philippines (Medical Industry Today, 1998). Such transactions degrade humans, victimize the poor, and invariably occur under some type of economic or political duress; they exemplify public failure. But from a market standpoint, money-for-body-parts transactions may be viewed as efficient, with money being an acceptable substitute for personal health. From a consumer sovereignty standpoint this logic is unimpeachable. As one Harvard economist writes: “If a desperately ill individual who would die without a kidney is able to buy one from a healthy individual, both are made better off. Why . . . stand in the way of market transactions that will not only make those who engage in them happier but also save lives?” (Shavell, 1999, p. 22). Perhaps whether or not one keeps one’s corneas is really a matter of one’s view about the substitutability of health for other assets. But in a world where millions live in abject poverty, the notion of consumer sovereignty and of rational choice of one good for another seems less about markets than about massive public values failure.
6. Public Failure and Benefit Hoarding: The Case of Terminator Technology
In the marketplace, externalities may distort prices and thus skew costs or benefits toward particular consumers. For example, the costs of cleaning up pollution are rarely included in the price of the polluting good. Thus, those who produce and consume that good may benefit preferentially. Analogously, if the benefits of a public policy meant to aid a large group are captured preferentially by a much smaller group, public failure may be occurring. Recent attention to the “digital divide” may illustrate such a failure. Disparities in health care may be another example. Development of the internet and many medical technologies was made possible by public support of the necessary R&D. If only certain segments of the population are benefiting from this investment, then benefit hoarding may be taking place.
In the early 1980s, following a decade of disappointing economic performance, U.S. policy makers were anxious to find ways to stimulate economic growth. One area of action focused on creating incentives to transfer the results of government-funded research to the private sector as a stimulus to technological innovation, and resulted in such laws as the Stevenson-Wydler Act of 1980, the Bayh-Dole Act of 1980, and the Federal Technology Transfer Act of 1986. The Technology Transfer Act legalized public-private research partnerships, called "CRADAs" (Cooperative Research and Development Agreements), meant to stimulate collaboration between government and corporate laboratories. The "Findings" that articulate the rationale for the Act include brief mention of "social well-being," "increased public services" and "public purposes," but in fact focus almost entirely on economic arguments, for example: "Increased industrial and technological innovation would reduce trade deficits, stabilize the dollar, increase productivity gains, increase employment, and stabilize prices." (15 USC Sec. 3701) While these are all laudable goals, the Findings make no mention of possible social impacts that could run counter to the public interest.
On March 3, 1998, the U.S. Patent Office granted a patent, entitled "Control of Plant Gene Expression," jointly to the U.S. Department of Agriculture's (USDA) Agricultural Research Service and the Delta and Pine Land Co., a breeder of cotton and soybeans. This patent arose from joint work funded through a CRADA, and embodied the type of technology transfer envisioned by legislators more than a decade earlier.
The patent covered a process—called the Technology Protection System (TPS)— that would allow seeds to be genetically engineered so that they did not give rise to fertile offspring. The intent was to protect the technological innovation embodied in new varieties of seeds (for example, resistance to drought or herbicides), by ensuring that farmers could not plant second-generation seeds produced by the first generation crop. Rather, they would have to buy new seeds for each planting. In the words of the USDA (ARS, 2001), the new technology "would protect investments made in breeding or genetically engineering these crops. It would do this by reducing potential sales losses from unauthorized reproduction and sale of seed." This economic argument was causally linked to a social-benefits argument via standard market logic: "The knowledge that the seed companies could potentially recoup their investment through sales will provide a stronger incentive for the companies to develop new, more useful varieties that the market demands. Today's emerging scientific approaches to crop breeding—especially genetic engineering—could be crucial to meeting future world food needs, conserving soil and water, conserving genetic resources, reducing negative environmental effects of farming, and spurring farm and other economic growth. TPS technology will contribute to these outcomes by encouraging development of new crop varieties with increased nutrition to benefit consumers and with stronger resistance to drought, disease and insects to benefit farmers for example." (ARS, 2001)
TPS technology does appear to hold considerable interest for plant-breeding companies, and TPS patents continue to be granted in the U.S. and abroad (ETC Group, 2002). In essence, TPS makes protection of intellectual property a biological process, rather than a legal one. At present, seed companies must count on the honesty of farmers to respect intellectual property by not "brown-bagging" second-generation seeds, or the companies must resort to policing of farms to enforce their intellectual property rights. Indeed, in pursuing the latter course, Monsanto suffered a public relations disaster when it sued a Saskatchewan rapeseed farmer for patent infringement (e.g., Margoshes, 1999).
TPS is a testament to remarkable progress in genetic engineering. The process described in the original patent involves enormously complex, integrated manipulation of transgenic components that are inserted into the DNA of the plant that is to be protected. A plant gene "normally activated late in seed development" must be fused with a "promoter to the coding sequence for a protein that will kill an embryo going through the last stages of development" and then coupled to a mechanism that represses the promoter until the seed is treated with a specific chemical (Crouch, 1998).
Less than two years after the TPS patent was granted, M.S. Swaminathan, one of the founders of the Green Revolution and an advocate of biotechnology in the service of global agriculture, declared that if TPS was widely adopted, "[s]mall farmers will then experience genetic enslavement since their agricultural destiny will be in the hands of a few companies." (Swaminathan, 1999) The Consultative Group on International Agricultural Research (CGIAR)—the organization that provided much of the science for the Green Revolution—banned TPS from its research agenda (Service, 1998), and Monsanto Company, which was attempting to acquire Delta and Pine Land Company (co-holder of the original patent), pledged, under pressure from public interest groups and philanthropic foundations, "not to commercialize sterile seed technologies" (Shapiro, 1999).
The Rural Advancement Foundation International (RAFI), which mobilized opposition to TPS, coined the phrase "terminator technology," and asserted that the "seed-sterilizing technology threatens to eliminate the age-old right of farmers to save seed from their harvest and it jeopardizes the food security of 1.4 billion people—resource poor farmers in the South—who depend on farm-saved seed." (ETC Group, 1998) RAFI also argued that TPS would further contribute to the diminution of global agricultural genetic diversity, especially for plant varieties of importance to developing countries.
The argument against TPS is multifaceted (our summary is drawn from Visser et al., 2001; Eaton et al., 2002; Service, 1998; ETC Group, 1998, 1999). At the heart of the issue is the practice of many farmers, especially (but not only) in the developing world, of continually seeking to find and create better plant varieties for local growing conditions, through careful selection of kept seed as well as purchase of new varieties from seed distributors (private or public). TPS was alleged to threaten this process in many interconnected ways. First, it would allow commercial breeders to capture markets for crops that are not amenable to hybridization, including wheat, rice, and cotton. (Commercial breeders do not focus on such crops precisely because they cannot control farmers' use of kept seed. Hybrid seed, on the other hand, tends not to express its desirable attributes homogeneously in the second generation, and thus offers some inherent protection of intellectual property.) This commercialization of seed varieties would in turn reduce the available sources of such seed, owing to the advantages that economies of scale confer on larger breeders and seed purchasers. Local plant breeders' access to new genetic materials would thus become increasingly restricted, and their ability to select for improved seed varieties would be impaired.
Because commercial plant breeders would be aiming their products at the most profitable markets—i.e., those of the rich countries—they would be unlikely to engineer plant varieties to meet the needs of poorer farmers, as is generally the case with hybrid products. At the same time, publicly funded plant breeding organizations, such as CGIAR, might be blocked from using engineered traits developed by private breeders unless they also accepted TPS. Such trends would exacerbate agricultural technology gaps between rich and poor. In addition, because poor farmers would find it increasingly difficult to acquire seed without terminator technology, their exposure to year-to-year food-supply disruption due to economic, political, climatic, or other factors would increase. Finally, genetic diversity of agricultural varieties would decline, because the largest source of such diversity is the seed-production activity of farmers themselves; large breeding companies tend to reduce, not increase, genetic diversity.
In defense of TPS, USDA focuses on market arguments (ARS, 2001): "[L]oss of cost savings from brown-bagging also must be weighed against the productivity gains to the farmer from having superior new varieties that could increase crop values such as yield and quality, input cost reductions such as for fertilizers and pesticides, and reduced losses such as those due to pests or adverse soils and weather." Such arguments assume a level playing field, where the attributes of new, engineered seed varieties will be those needed by small and poor farmers, where such farmers will be able to afford the new varieties, and where they will therefore no longer depend on their own seed-selection skills to optimize crops for particular local growing conditions. But even if such an optimistic scenario were to transpire, the argument ignores the effects of reduced genetic diversity on the resilience of agricultural systems worldwide.
Terminator technologies thus create a possibility for corporations to gain control of a process—seed selection—and a product—plant varieties—that have been in the hands of farmers for millennia. The effect is a private hoarding of previously public goods. This effect is less troubling in the context of affluent nations, where agriculture has become increasingly industrialized, than in poor ones, where small farmers continue to depend upon kept seed and selective breeding for crop improvement and adaptation, and for food security from year to year. A particularly conspicuous element of this story is that the original research was partly funded by public money and conducted at a public research laboratory. As such, it is an exemplar of the way that market values can displace public values in justifying public funding of science and technology.
Conclusions: How Might a Public Values-Based Science Differ?
As is the case for most policy domains, science policy in the United States is viewed through an economic lens. With great regularity, the terms “price” and “value” are used interchangeably. We have shown how economic rationales for science and technology, important as they are, may tend to displace social outcomes rationales and how, similarly, intrinsic criteria for judging science, important as they are, may displace external criteria.
One reason this happens, we argue, is that there are few alternative ways of thinking systematically about science policy. There is no social theory of scientific outcomes; there is no social choice theory for science. There is a market failure model that tells us the conditions under which government should “intervene.” We provided an alternative set of criteria based on a public values failure model (Bozeman, 2002) and devoted most of this paper to illustrating cases of public values failure, including many where there is no apparent market failure. In this light, we conclude by reflecting on the question: “How would science policy differ if, in addition to economic reasoning and market failure criteria, public values and public failure criteria were important in decision-making?”
We have more answers to this question than we have space. So we give our attention to just two issues, mentioned in the introduction, that flow logically from a market-based science policy: (1) a limited role for the public and (2) a focus on the expenditure increment.
From "How Much?" to "What?" and "For Whom?"
Traditionally, science policy has proceeded with a tacit, sometimes an explicit, assumption that everyone is made better off by investments in science. With that logic, the most important question, naturally, is “how much science can we afford?” But if one assumes that both the benefits and the negative impacts of science are unevenly distributed, it becomes important to pay more attention to the value bases of science and its outcomes, and to questions of who benefits.
Each year, science policy, like so much of the rest of federal policy, becomes riveted on the federal budget and, specifically, the amount of money available for science. There is great deliberation about the actual amount, the rate of growth, the percentage of the budget, the amount for science vs. other outlay categories, and the amount (and percentage and rate of growth) for particular agencies and particular programs within agencies. Every year, even during years of considerable growth in spending, at least a few “funding crises” are identified, and these become grist for about a month’s worth of op-eds and sound bites. During this same period there is generally a profound silence about the science that will or will not be performed and the impact it will or will not have on society. How one knows that “there is a crisis in funding for [insert your favorite science here]” is never exactly clear. In one notable example, the President-elect of the American Association for the Advancement of Science surveyed scientists and discovered that many were unhappy because they felt that they did not have enough funding (Lederman, 1991).
If we set aside the comedic value of this yearly ritual, its value to science policy is likely negative. Few people (perhaps none) have the breadth of understanding even to begin to provide a valid account of what will happen as a result of a 5% decrease or increase in, say, chemistry funding. Thus, the change in the budget number becomes a conveniently available surrogate for knowledge about how science supports public values.
It is easy to criticize the numbers ritual, but more difficult to identify any means of moving from “How much?” to “Why?” This is a general problem in federal budgeting, but it is especially acute in science policy. Some government officials are on the right track. When agency heads and their budget officers are asked to report on what they expect to accomplish with an increment of 10% as opposed to a decrement of 10%, there is sure to be a great deal of smoke and mirrors, but at least there is some chance that a bit of truth will be reflected. An even better means of ensuring some increased focus on outcomes is to become serious about examining previous outcomes. Almost all government reforms pay some lip service to doing this, but there is limited staff, method, and will to follow through. Thus, the real answer may lie in political leadership. If policy-makers can articulate, in a more sophisticated manner, a vision for what is to be accomplished, science funding agencies, and ultimately scientists, often prove a remarkably malleable and effective means of achieving that vision. The problem, however, is that political vision usually results either in ungovernable and highly ineffective windfalls such as the War on Cancer (Studer and Chubin, 1980) or in impossible, immeasurable demands, such as the proposal that the National Science Foundation use 60% of its funding for science that will have a social impact (Stokes, 1997). When the U.S. Congress provides a vision statement for science, it is either focused on a target so broad that it may defy the intent (e.g., the War on Cancer) or it is based on a set of generalities. For example, in Title 42 (U.S. Code) Congress provides the following representatively toothless mandate:
Congress finds and declares that science and technology should contribute to the following priority goals without being limited thereto:
(1) fostering leadership in the quest for international peace and progress toward human freedom, dignity, and well-being by enlarging the contributions of American scientists and engineers to the knowledge of man and his universe, by making discoveries of basic science widely available at home and abroad, and by utilizing technology in support of United States national and foreign policy goals; (2) increasing the efficient use of essential materials and products, and generally contributing to economic opportunity, stability, and appropriate growth; (3) assuring an adequate supply of food, materials, and energy for the Nation's needs; (4) contributing to the national security; (5) improving the quality of health care available to all residents of the United States; (6) preserving, fostering, and restoring a healthful and esthetic natural environment;...
(U.S. Code, Title 42 - The Public Health And Welfare, Chapter 79 - Science And Technology Policy, Organization And Priorities, Subchapter I - National Science, Engineering, And Technology Policy And Priorities, 1997)
If there is hope of moving from “how much?” to “why?”, the likely vehicle is the same sort of blue-ribbon panel that has been employed in such projects as Healthy People 2010 (U.S. Public Health Service), in which problem-focused experts identified policy-making priorities to address social (health) needs. The example is instructive not only as an effective approach to identifying problems and possible solutions, but also as a caution, given its limited influence on policy and the near invisibility of the work among medical researchers. Going from “how much?” to “what?” will not be easy.
The Public as Part of Public Value
Science policy that takes greater heed of public value criteria would necessarily require more public participation in setting the highest-level goals for science. And in the largest-scale value choices, the public seems no less equipped than scientific elites to participate. The idea that science policy must be closely held by science policy elites is a canard. No one is interested in having cab drivers, school teachers, homemakers or factory workers spearheading human genome research. But the idea that an informed public cannot be trusted (or, at least, trusted as much as scientific elites) to determine the highest-level public values is insupportable. We concur with Philip Kitcher (2001), Donald Stokes (1997), and others who have argued that the general public is certainly prepared to articulate its basic values and preferences once the scientific issues have been clarified. According to Stokes (pp. 141-2):
At the retail level, choices among alternative projects of use-inspired basic research should be largely in the hands of those who can judge scientific promise, constrained by wholesale decisions as to the societal needs a given program of research is to address. By contrast, there is at the wholesale level of research funding much asymmetry between the ability of those with scientific expertise to judge the social value of alternative programs of research and the ability of nonscientists to judge the scientific promise of these programs…(The scientists’) advantage rapidly melts away when the promise of alternative programs of research must be judged across fields of widely varied conceptual and empirical content.
Despite the many difficulties of developing an effective role for public participation in science, we feel confident that a broad public, the “consumers,” can play a larger role in the “wholesale” choices. This is not to say that science budgeting (or any budgeting) by plebiscite— what Kitcher (2001) calls "vulgar democracy"—is a good idea. Rather, public participation is likely the best single guarantee of maintaining an outcomes-focused science policy. Furthermore, broad-based public participation encourages scientific elites to take great care in communicating their work and work aspirations in terms linking research efforts to desired social and economic outcomes. Public participation can, thus, help change the discourse of science from market failure and internal scientific and technical issues to public value and extrinsic benefit.
In most instances there is no elite conspiracy to lock out citizens from the science and values issues that will affect them. The problem is that few adequate mechanisms for public participation have been developed and, absent such institutions, the “default option” is exercised: market-based decisions and economic rationality exercised by elites. The one very important exception is the increasingly prominent role of elected officials in science policy, a development many abhor. We have seen in science policy, as in virtually every realm of public policy, that vote calculus is often not an improvement over economic self-interest calculus. But public officials are, at least, working within a process that sometimes allows public value to enter into the calculus.
References
ARS (USDA Agricultural Research Service), 2001,"Why USDA's Technology Protection System (aka 'Terminator') Benefits Agriculture," ARS News & Information website: www.ars.usda.gov/is/br/tps/
Bannock, G., R. Baxter and E. Davis (1998). The Penguin Dictionary of Economics. New York: Penguin Books.
Barke, R. and H. Jenkins-Smith (1993) “Politics and Scientific Expertise: Scientists, Risk Perception, and Nuclear Waste Policy,” Risk Analysis, 13, 4, 425-439.
Bauer, M., K. Petkova, and P. Boyadjieva (2000) “Public Knowledge of and Attitudes to Science: Alternative Measures That May End the ‘Science War’,” Science, Technology and Human Values.
Bodenheimer, T., 2000, Uneasy alliance—clinical investigators and the pharmaceutical industry, New England Journal of Medicine, 342(20): 1539-1544.
Donahue, J., 1991, The Privatization Decision (New York: Basic Books).
Eaton, D., Van Tongeren, F., Louwaars, N., Visser, B. & Van der Meer, I. (2002), "Economic and policy aspects of 'terminator' technology," Biotechnology and Development Monitor, No. 49, pp. 19-22.
Energy Information Administration, 2000, Annual Energy Review, Table 1.2, at http://www.eia.doe.gov/emeu/aer/txt/tab0102.htm
Energy Information Administration, 2001, Annual Energy Outlook, Table 7, at www.eia.doe.gov/oiaf/aeo/tbl7.html
ETC Group, 1998, "Terminator Technology Targets Farmers," Communique, March 30, available at: www.etcgroup.org/article.asp?newsid=188
ETC Group 1999, "The Terminator's Wider Implications," Communique, January 30, available at: www.etcgroup.org/article.asp?newsid+184
ETC Group, 2002, "Sterile Harvest: New Crop of Terminator Patents Threatens Food Sovereignty," News Item, January 31, available at: www.etcgroup.org/article.asp?newsid=290
Federal Geographic Data Committee, 2001, Overview: What the framework approach involves, at http://www.fgdc.gov/framework/frameworkintroguide/overview.html, accessed Nov. 2001.
Flathman, R. (1966). The Public Interest. New York: Wiley.
Friedberg, M., B. Saffran, T. Stinson, W. Nelson, and C.L. Bennett, 1999, Evaluation of conflict of interest in economic analyses of new drugs used in oncology, Journal of the American Medical Association, 282(15): 1453-1457.
Fuller, Lon (1964). The Morality of Law. (New Haven, CT: Yale University Press).
Goldner, J.A., 2000, Dealing with conflicts of interest in biomedical research: IRB oversight as the next best solution to the abolitionist approach. The Journal of Law, Medicine, & Ethics, 28(4): 379-404.
Griliches, Z. (1995) “R&D Productivity: Econometric Results and Measurement Issues,” in P. Stoneman (ed.) The Handbook of the Economics of Innovation and Technological Change (Oxford: Blackwell).
Guston, D. (2000) Between Politics and Science: Assuring the Integrity and Productivity of Research. Oxford: Oxford University Press.
Kumar, S., 1994, Curbing the trade in human organs in India, Lancet, v. 344, p. 750.
Kitcher, Philip (2001), Science, Truth, and Democracy. New York, NY: Oxford University Press.
Lindsay, L (2001). Speech (untitled) to the American Association for the Advancement of Science Science and Technology Policy Colloquium, May 3, at http://www.ostp.gov/html/010515.html
Margolis, R.M., and Kammen, D.M., 1999, Underinvestment: The energy technology and R&D challenge, Science, v. 285, July 30, pp. 690-692.
Margoshes, Dave, "Saskatchewan Farmer Battles Agro-Chemical Giant," The Vancouver Sun, August 14, 1999, p. B1.
Medical Industry Today, 1998, Suspected murder for organ trade probed in Philippines, Sept. 9, pp. 127-128.
Mowery, D. C. and N. Rosenberg (1989) Technology and the Pursuit of Economic Growth. New York: Cambridge University Press.
NAPA (National Academy of Public Administration), 1998, Geographic Information for the 21st Century—Building a Strategy for the Nation (Washington, DC, National Academy of Public Administration), at http://www.gsdi.org/docs/canberra/naparepo.html.
NAPA (2001), A Study of the National Science Foundation's Criteria for Project Selection (Washington, DC, National Academy of Public Administration), 97 p.
NASA, 1998, Landsat Program Chronology, at http://geo.arc.nasa.gov/sge/landsat/lpchron.html (last updated Nov. 4, 1998).
National Bioethics Advisory Commission, 2001, Ethical and Policy Issues in Research Involving Human Participants (Bethesda, MD: NBAC).
Nelson, D. and R. Weiss, 1999, Hasty decisions in the race to a cure?: Gene therapy study proceeded despite safety, ethics concerns. Washington Post, November 21, p. A1.
Nelson, R. R., M. J. Peck, and E. D. Kalachek (1967) Technology, Economic Growth, and Public Policy; A Rand Corporation and Brookings Institution Study. Washington: Brookings Institution.
NRC (National Research Council), 1991, Assessment of Satellite Earth Observation Programs 1991 (Washington, DC: National Academy Press).
NRC, 1993, Toward a Coordinated Spatial Data Infrastructure for the Nation (Washington, DC: National Academy Press).
NRC, 1995, A Data Foundation for the National Spatial Data Infrastructure (Washington, DC: National Academy Press).
NRC, 2001, Future Roles and Opportunities for the U.S. Geological Survey (Washington, DC: National Academy Press).
Open GIS Consortium, 1999, Spatial Connectivity for a Changing World (promotional brochure) (Wayland, MA: Open GIS Consortium).
Peterson, T. and B. Bonning (2002) “Trait Protection System: A Case Study,” Iowa State Biotechnology Ethics Program downloaded February 13, 2002 from http://www.biotech.iastate.edu/Bioethics/case/trait_protection.html posted January 12, 2002.
Plutzer, Eric, A. Maney, and R. O'Connor (1998) “Ideology and Elites' Perceptions of the Safety of New Technologies,” American Journal of Political Science, 42, 1, 190-209.
Polanyi, Michael (1962). “The Republic of Science: Its Political and Economic Theory,” Minerva, 1, 54-73.
Rosenbloom, R. and W. Spencer (1996). Engines of Innovation: U.S. Industrial Research at the End of an Era. Boston: Harvard Business School Press.
Schubert, G. (1960) The Public Interest. Glencoe, IL: The Free Press.
Service, Robert F., "Seed-Sterilizing 'Terminator Technology' Sows Discord" Science, Oct. 30, 1998, 282: 850-851.
Shapiro, R.B., 1999, "Open Letter from Monsanto CEO Robert B. Shapiro to Rockefeller Foundation President Gordon Conway," October 4, available at: www.biotech-info.net/monsanto_letter.pdf
Shavell, S., 1999, Why not sell organs?, New York Times, May 11, p. 22.
Smith, Bruce (1990) American Science Policy since World War II. Washington, DC: The Brookings Institution.
Solow, R. (1957) “Technical Change and the Aggregate Production Function,” Review of Economics and Statistics, 39, 3, 312-320.
Sorauf, F. (1957). “The Public Interest Reconsidered,” Journal of Politics, 19, 616-639.
Stelfox, H.T., G. Chua, K. O’Rourke, and A.S. Detsky, 1998, Conflict of interest in the debate over calcium-channel antagonists, New England Journal of Medicine, 338(2): 101-106.
Stokes, D. (1997) Pasteur’s Quadrant: Basic Science and Technological Innovation. Washington, DC: The Brookings Institution.
Studer, K. E., and D. E. Chubin (1980) The Cancer Mission: Social Contexts of Biomedical Research. Beverly Hills, CA: Sage.
Swaminathan, M.S., 1999, "Genetic Engineering and Food, Ecological Livelihood Security in Predominantly Agricultural Developing Countries," speech to CGIAR/NAS Biotechnology Conference, October 21, at: www.cgiar.org/biotechc/swami.htm
Tietz, G. F., 1993, Strict products liability, design defects and corporate decision-making: greater deterrence through stricter process, Villanova Law Review, v. 38, pp. 1361-1459.
U.S. Congress (1993) PL 103-62. The Government Performance and Results Act of 1993.
Visser, B., van der Meer, I., Louwaars, N., Beekwilder, J. & Eaton, D. (2001), "The impact of 'terminator' technology," Biotechnology and Development Monitor, No. 48, pp. 9-12.
Wade, N., 1999, Patient dies during a trial of therapy using genes, New York Times, September 29, p. A24.
Woodward, B., 1999, Challenges to human subject protection in US medical research, Journal of the American Medical Association, 282(20): 1947-1952.