Science and Engineering Indicators—Of What?
By Wil Lepkowski
Number 2, posted February 16, 2001

Every two years the National Science Foundation produces for its National Science Board, Congress, and the President an increasingly dense efflorescence of statistics known as Science and Engineering Indicators. The S&T world waits expectantly for this tome, looking for the latest macro and micro reflections of the whole colossal enterprise.

The year 2000 edition, commemorating NSF’s 50th birthday, came out last June, and NSF is currently deep into preparation for the 2002 edition, due out, according to Indicators acting director Rolf F. Lehming, on January 15 of next year. Indicators is free for the asking and can be downloaded from NSF’s statistics website (www.nsf.gov/sbe/srs/stats.htm). Tucked into its back cover is a highly portable CD-ROM version that can also be obtained separately.

What people see and expect in Indicators are such items as R&D spending by federal agencies, levels of state spending, defense versus non-defense research spending, numerical facts and trends in the science and engineering workforce, figures of all sorts on higher education in science and engineering, the usually sorry state of elementary and high school education in science and math, funding patterns in universities, arrays of data on industry and the global marketplace including competitiveness in particular industries, patents granted in the U.S. to foreign inventors, and the latest word on how well the public "understands" science.

Those expecting to see in Indicators any serious assessment of science and technology’s social and industrial outcomes, however, are almost certain to be disappointed. Indicators will help you build your own policy analyses upward, downward, or sideways off the numbers, and it often muses gingerly upon statistical trends. But as for the social and economic ramifications of research and development, Indicators has largely fallen flat.

Except for a few brave sidebars that discuss the implications of the data, Indicators says little about what the data actually suggest about changes in social or economic policy. It is really a set of review articles written around data trends. As one close NSF observer of Indicators says, the work has never been a true indicators report. The question is whether a different format is needed to make it a true indicators volume, with data that tell a whole story.

The Indicators editors acknowledge the limitations of the report, but explain that developing the methodologies needed to measure outcomes would be too expensive and would divert resources from what Indicators does best and is in fact charged to do: collect and improve statistics, across a continuum of time, that point inward toward the state of the research enterprise itself, not outward toward its broader impact.

The editors more or less agree with critics that their methodology—dividing research into "basic" and "applied," for example—reflects the rigid and outdated “linear” model of science and technology, which holds that science leads to technology with nothing significantly helping out or getting in the way. A lot, of course, does, in the form of feedback loops, organizational arrangements, tacit perceptions, luck, market acceptance, economic incentives, and a host of other informational pathways. It would be nice if Indicators could do a better job of reflecting such realities.

The ideal format for Indicators, says Don Kash, Hazel Professor of Public Policy at George Mason University in Fairfax, Va., "would be one which gives data on how technology is developed and used, and how innovation actually occurs." Kash is currently studying how a country’s culture affects that country’s ability to innovate technologically. Indicators never gets into such things.

"From my point of view," he says, "Indicators would try to provide data on the non-R&D elements of industry. These kinds of data are out there. The Organization for Economic Cooperation and Development in Paris is trying to do it. A lot of that stuff comes from surveys. One could surely ask for linkages with other organizations and for information coming from, say, the factory floor. I use Indicators, but not very much. It is of little value to someone with my interests in how various factors tie into industrial innovation. But I would certainly not want to see it go away.

“In a sense," Kash adds, "Indicators is a product of the old linear model. All of us agree that the model doesn’t really make any sense. We keep using it for the framework needed in talking about these things. But the trouble with Indicators right now is that it seems to function at the front end of the old linear model."

Fortunately, Indicators is being forced into the future. Back when the 2000 edition was being planned, the Board insisted that it be bold and commemorative. One thing it did was commission an overview insert that attempted to put the R&D enterprise into perspective. The result was a thoughtful piece that discussed the policy implications of today's accelerating scientific and technological revolution and how Indicators could be used to raise key questions for a deeper understanding of change. It suggested that Indicators ought to head in more social directions but made no actual commitment.

Still, the Board did appear to follow through. It commissioned for the 2000 edition a special chapter on the implications of information technologies. If anything harbors vast social and economic material, it is the ITs. Indicators will repeat the act for 2002, and for 2004 it will add a chapter on environment, ecology, and complexity. Just how it will balance all three is currently up for grabs, says Lehming, who will review his thinking on the matter before the Board at its forthcoming March meeting.

So how did that new information chapter shape up? Did it seek out the numbers of computers in business and in the home? Did it measure the growth of the Internet, set forth an inventory of the various sub-technologies that expand the scope and versatility of the Internet, present the numbers of people engaged in the enterprise at all its levels, assess the gender and minority breakdown of people working in the field, and depict the kinds and amounts of public and private spending in the field? Did it measure the profile of publicly funded R&D against an assessment of public needs in information technology? And did it assess the diffusion patterns of these technologies into the poorer parts of the world?

And, finally, did it take a stab at assessing the changes in the social fabric wrought by IT—from, say, General Electric’s enthusiastic adoption of the Internet in its business transactions, to evaluating the impact of the home computer on family life, to including a rundown of some of the major equity issues in government (local, state, and federal) programs in the use of these technologies? In short, could this new Indicators chapter be a test bed for a new social function for Indicators?

Actually the new chapter, written by SRI International under an NSF contract, did meet many of the criteria. It runs a sidebar on Moore’s law, the credo that the number of transistors per chip doubles every 18 months. It tracks the steep downward price curve for chips and microprocessors. It profiles government support for computing research, lists industry spending on IT equipment in the 1990s, and prints a bar graph of the number of PCs per 100 white-collar workers in 1997. At the conclusion, as though teasing the reader, it lists a set of five “potential information technology indices” needed for further work on outcomes: data on interconnectivity, correlations between information and quality of life, sets of leading information indicators, a home media index, and a marginalization index. Not too bad a list for establishing a basic handle on the information enterprise; perhaps the shape of things to come for Indicators.

Many informed people want Indicators to try to extend itself in more social ways. A National Research Council panel assembled to assess NSF’s Science Resources Studies division, which produces Indicators, recommended last year that SRS move in the direction of broader payoff. It said, “SRS should revise the data it collects on R&D funding, performance, outputs, and outcomes to improve comparability across surveys.” All of this should begin, it continued, “by addressing structural changes in industrial research and development, the relationship between R&D and innovation, the apparent increase in intra- and inter-sectoral partnerships and alliances, and claims that interdisciplinary research is necessary.”

The NRC panel stayed clear of social impact, but it at least got the ball rolling in the right direction. Lehming indicates that Indicators is definitely trending toward the social. Its chapter on public attitudes toward science and technology is a social output chapter in the broadest sense. It finds repeatedly that the public (with the exception of environmental, energy, and health advocates, who assess research outcomes with vigor) likes and appreciates science but has hardly any idea of how scientists go about their work. Lehming acknowledges that health and energy research outcomes should be a natural fit for any science indicators. But the usually cautious Board, fearful of treading on National Institutes of Health or Department of Energy territory, is steering clear of those.

Lehming anticipates practically every question thrown at him related to making Indicators broader and better. The question always comes down to money, methodology, and volume. Just how big should Indicators be? Anytime a new chapter is added to Indicators, he says, no other chapter is subtracted. “The notion that we have been slowly nudging in the direction of outcomes is absolutely correct,” he says. “But we’re feeling our way. It all has to be based on empirical data.”
