Perspectives

Where Responsibility Lies

By Daniel Sarewitz

In thinking about the responsibility of science in light of rapidly
emerging technologies today, Einstein presents us with a problem. He of
course embodies both our image and our ideal of what it means to be a
“scientist.” A bronze Einstein watches benignly over the grounds of the
National Academy of Sciences in Washington, DC, the self-proclaimed
“temple of science.” This statue seeks to show Einstein as a familiar
human being, ruffled sweater, droopy eyes, oversized hands and feet,
wise, avuncular, worldly, and a bit tired. In his left hand is a
notebook, etched with a few of his most famous equations, the
manifestation of his brain.

So here is the problem: Let’s say this is the real Einstein. Certainly
this is the mythic Einstein. Yet almost no scientist is really like
this, at least not anymore. Einstein did his research in his head; he
transformed the way we understand our world, and became a sort of moral
compass for humanity; he sought truth, fled oppression, pursued peace.
Einstein equaled what we wanted science, and scientists, to be. But
there can be no more Einsteins.

In the century since Einstein’s incredible year, science has changed, and
our understanding of science has changed. Today, something like 8
million scientists work in the world: in universities, corporations,
governments, civic organizations. Some of them are geniuses, some are
leaders, some are loners, some make a difference in the world, and
advance noble ideas and causes. But, really, the idea of an individual
scientist is now pretty much an abstraction and an anachronism. The
world has become a global innovation machine. Scientists are gears in
this machine. They are part of a disseminated, complex process that is
constantly remaking the world itself. What can one say about the
scientific responsibility of a gear?

It is true, of course, that science and technology have been constantly
remaking the world for some time now, and that each new wave of remaking
has brought with it both enormous advances and huge challenges. But
important things seem to be different now.

Most obviously, the scale and reach of the enterprise have grown enormously.
Many aspects of the scientific endeavor, such as the numbers of
scientists and the numbers of publications, display exponential
growth. Moore’s law is perhaps the most famous of these exponential
curves, and stands as a surrogate for the growth of both data gathering
and data processing capabilities. Obviously the global investment in
research and development today, which I would estimate at roughly a
trillion dollars per year, dwarfs expenditure levels of
Einstein’s day.

Second, the distinction between science and technology is increasingly
blurred, to the point where it confuses more than it clarifies.
Technological advance drives science in various ways, for example, by
constantly raising challenges that demand new scientific exploration,
and also by providing scientific tools that create their own demand for
use. Public sector investments in scientific research are often
justified in the context of technological application, while
intellectual property rights are extended farther and farther upstream
to capture the fruits of scientific discovery, and scientific advances
in some fields are instantly translatable into marketable innovations.
So, not only are science and technology blurring together, but so are
their public and private aspects.

Third, science and technology now act on a world already created by
science and technology. This is what Ulrich Beck termed “reflexive
scientization”: “the sciences are confronted with their own products,
defects, and secondary problems . . . . scientific skepticism [extends]
to the inherent foundations and external consequences of science
itself.” (p. 155) These foundations and consequences help to constitute
everyday existence; they are not perceived, indeed are not
comprehensible, as something added on. As a consequence of this
integration, science also descends from its pedestal to become part of
the everyday. Science’s claim to disinterest, which we see in Einstein,
and privilege, which we afford to him, are undermined. Beck says:
“Science becomes more and more necessary, but at the same time less and
less sufficient for the socially binding definition of truth.” (p.
156).

Fourth, and related to the previous point, is this: we are moving farther
away from any sort of grand synthesis. From our studies of the climate
to our mapping of the genome, we just keep on finding more complexity
and more uncertainty. This is just science doing its work, of course,
but it means that the prospects of an Einstein changing the way we see
everything have pretty much vanished. When a scientist is sufficiently
impudent to try to posit a grand theory, as the biologist E.O. Wilson
did in his book Consilience, it seems more like desperation, or
at least religion, than science. Similarly, the idea that physics can
provide us with a “theory of everything” attests mostly to an absurdly
limited notion of “everything.”

Fifth, the pace, breadth, and depth of scientific and technological
change seem to outstrip anything that has come before. Accelerating
trends in computer power and machine miniaturization, coupled with
advances in materials, energy systems, communications technologies, and
robotics, will transform all aspects of society: the structure of
manufacturing and labor; the ways that we acquire, transmit, and use
information; our manner of interaction with fellow humans in venues
ranging from bedroom to battlefield.

Perhaps many of these changes will be no more culturally profound than
the social disruptions caused by, say, the convergence of the steel and
railroad industries, or the proliferation of electricity and the
telephone. This is still saying a lot. Yet something new does seem to
be emerging in terms of the relation between technology and the human
species itself. Technology has always been about enhancing the power of
the human being, yet the enhancement has been external to our
essence—prosthetic, as it were. Eyeglasses, telescopes, running shoes,
guided missiles, supercomputers, these are all magnifiers or extenders
of human reach. But on the horizon now is the potential disappearance
of any conceptual stability to the idea of human identity and the
distinction between humans and machines. Steroids and other performance
enhancers in athletes are just the trivial foreshadowing of, say,
ubiquitous in-body sensors, implanted memory expansion, and even virtual
immortality. As just one specific example, the inventor and
technological visionary Ray Kurzweil believes that in the next fifty
years: “Brain implants based on massively distributed intelligent
nanobots will ultimately expand our memories a trillionfold, and
otherwise vastly improve all of our sensory, pattern recognition, and
cognitive abilities.” If, as some scientists suggest, germ-line
enhancement therapies become possible as well, then we will finally be
able to reconcile Darwin and Lamarck. Beck’s “reflexive scientization”
will become literally internalized, as we remake ourselves. We have
hardly begun to consider the implications of these changes for our
society.

So we are faced with the prospect of enormous, culturally disorienting and
transforming changes across multiple dimensions, and the question is
how, in the face of such changes, we can understand the responsibility
of science and scientists.

Existing concepts just won’t do. In the years following World War II,
as the U.S. began a gradual and unprecedented ramping up of its
commitment to publicly funded science, the responsibility of the
scientist was widely portrayed simply as a responsibility to science
itself. Often spoken of in terms of a social contract, the
responsibility of the scientist was just to do honest science. The
trick here was that the fruits of science in society were supposed to be
unpredictable, yet automatic. Through the unfettered exploration of
nature, and the serendipitous translation of insight into innovation,
science made the world better. The logic of the progression, though,
was only visible in hindsight. It could not be planned.

Symbolically, then, we can trace wave after wave of transformational
innovation and wealth creation in the decades after World War II to the
massive Cold War commitment to publicly funded science, a commitment
made compelling by the cathartic wedding of basic science to national
defense in the development—and use—of the atomic bombs that ended the
war with Japan, which in turn finds its origin in Einstein’s famous
letter to Roosevelt about the feasibility of nuclear weapons, itself
made possible by the most famous equation of them all, E=mc².

Thus did post-industrial society spring from the head of Einstein and other
mythic figures, in the apparent absence of human intent and agency—and
thus, responsibility. Society needed only to provide resources for
scientists, and scientists needed only to bring integrity and openness to
their work. To do more was not only unnecessary, but, as Michael Polanyi
liked to argue, entirely counterproductive. Scientists would operate in
a self-regulating, autonomous republic, and the world would unfold. If,
of course, it did not blow itself up in the process. But in either
case, to suggest that responsibility for long-term outcomes lay with
individual scientists was incoherent.

Thus, questions about the social responsibility of science were addressed
within the context of the social contract. During the Cold War, some
scientists expressed their social responsibility outside the laboratory,
as social activists, opposing the arms race, or within the laboratory,
by refusing to participate in classified research programs. But the
ideal of the autonomous republic of science was not called into
question. The burden of responsibility always fell on the world outside
of the republic.

In the 1980s and 1990s some significant attention was paid in the U.S. to
instances, real and alleged, of “scientific misconduct.” The research
community, embodied in such organizations as the National Academy of
Sciences, constituted committees to study the problem, while politicians
wondered whether scientists could, after all, be trusted to meet their
internal responsibilities. The deep anxiety expressed by leaders of the
scientific community on this issue was also a fear that the supposed
social contract might be called into question—as it was. But the
solution—greater independent oversight of various funding bodies and
research activities, plus codes of conduct and classes on research
ethics—only served to maintain an inward focus. The problem was about
ensuring the integrity of a still-autonomous republic, not about the
validity of the ideal itself.

Public investments in science have always, of course, been justified
explicitly in terms of meeting social needs, and from time to time,
politicians have lost patience with the metaphysical model connecting
autonomous knowledge creation to societal benefits. In the U.S. in the
1960s and 90s, for example, scientists were called upon to more
explicitly devote themselves to research that addressed urgent social
challenges ranging from environmental degradation to emerging diseases
to economic competitiveness. Science policy experts, meanwhile, have
sought to resolve the apparent tension between the requirements of
scientific autonomy, and a responsibility for contributing to social
betterment, through theoretical constructs with names like “use-inspired
basic research” and “Jeffersonian science.” The idea was that
fundamental science could as well be carried out in the context of a
particular societal problem—say, some type of cancer, or global climate
change—as it could in the context of no problem at all. The autonomous
republic of science could thus be preserved, while also serving explicit
social ends. Social responsibility for the scientist could still be
conceived very much in terms of the relationship between an individual
scientist and her or his work. Is the science of high quality? Is he
doing research in an ethical manner? Is she choosing projects that are
likely to be beneficial?

But the individual scientist has little if any meaning in the global
innovation system. There are millions of such scientists, and more
coming on line each day. The scale, complexity, aims, and outcomes of
the enterprise may be completely opaque to an individual scientist, but
this does not compromise her or his ability to perform. To locate scientific
responsibility in the individual scientist is thus to render it
meaningless as well. It’s not that we shouldn’t expect scientists to
behave ethically; it’s just that such behavior, even when considered
cumulatively across a community of scientists, has little, and probably
no, connection to the dynamics and impacts of the innovation system. Or,
put somewhat differently, we can populate our innovation system with
scientists pursuing truth and behaving ethically, and still end up with
outcomes that we cannot manage, or that challenge our values and
principles.

So, the key question is this: is there some notion of scientific
responsibility that can play a useful role in confronting the waves of
technological transformation that seem likely to inundate society in the
next few decades?

In my previous list of how the science enterprise has changed since Einstein’s
heyday, I didn’t mention one important thing: we now have a
considerably deeper understanding of how this enterprise actually works.
We know that scientists negotiate not only with nature to advance
knowledge, but with each other, with their funders, with politicians,
corporate executives, and various publics. We know that the directions and
velocities of science reflect decisions made by people, and that those
decisions emerge within a context. We know that this context is strongly
embodied in the institutions where science is conducted and planned. These
understandings have not yet been brought to bear on new notions of
scientific responsibility, but, if we are to have any prospect of
guiding the accelerating technological change that engulfs us, now is
the time to consider how this might be done.

There is huge inertia and low predictability in the global innovation system,
so my intent is not to be particularly optimistic about, say, ensuring
the public value of emerging technologies, but rather to suggest where
we might intervene if we are to move toward a meaningful notion of
scientific responsibility—and thus accountability. For reasons I’ve
tried to explain, it seems unavoidable to me that responsibility must be
located in the processes by which decisions about science are made and
implemented in the institutions of public science, rather than in the
motives and norms of individuals who conduct science. What we need to
aim for is a capacity for reflexiveness—for social learning that expands
the realm of choice available within public research institutions. This
means, above all, that scientific institutions, broadly defined, will
have to build the capability for scientists and technologists, as well
as research planners and administrators, to understand where their
research agendas come from, what interests are embedded in the agendas,
who supports the agendas, who opposes them, and why. Institutions need,
in other words, to enable consciousness—consciousness of the contexts
within which scientists do their work and make their decisions. This
quest for consciousness can emerge from persistent collaboration with
social science and humanities researchers, from open and informed
engagement with various interest groups and publics, and from
unflinching reflection and deliberation about motives and values. These
things need to be built into research institutions. In the ideal, the
creation of consciousness can help signal emerging problems, support
anticipatory governance, and enable better choices to be made about the
directions and pace of knowledge creation.

There is, by the way, a weak but perhaps valuable analog for what I am
suggesting here, and that is the private sector laboratory.
Corporations go to great lengths to understand the market context that
can justify—always in terms of profitability, of course—the research
being pursued, while economic performance indicators provide feedback
for accountability. Can public science engender a commitment to
understanding its social and cultural context that matches the
commitment of private science to understanding its marketplace context?

I realize that what I’m suggesting is somehow less satisfying than simply
demanding that scientists take moral ownership of their work and its
consequences, and do the right thing. But if such an individualistic
approach ever made sense, it no longer does. If we are to have any
prospect of guiding and modulating the advance of rapidly emerging
transformational technologies in accord with goals, values, and
interests other than those dictated by the marketplace and the military,
then the responsibility of the scientist must expand from doing the
right thing as an individual, to participating in the reflexive process
of creating institutional consciousness. This, in turn, demands a
radical shift away from the constricting notion of science as an
autonomous republic, to embrace the realization that science and society
are moving together in an intimate, co-evolutionary dance.

Daniel Sarewitz is the Director of the Consortium for Science, Policy
and Outcomes and a Professor of Science and Society at Arizona State
University. This Perspective was adapted from a presentation given at
"Thinking With Einstein: The Responsibility of Science for Peace in the
21st Century" in Berlin, Germany, on October 15, 2005.

The views expressed here are those of the author.