Strategic Research Partnerships: Proceedings from an NSF Workshop

Technology Innovation Indicator Surveys

John A. Hansen
State University of New York College at Fredonia

  1. Background
    1. The nature of indicators
    2. Theoretical foundations for innovation indicator development
  2. Early Work on New Innovation Indicators
    1. Object-based studies
    2. Subject-based studies
  3. The First Oslo Manual and the First Community Innovation Survey
    1. The first Oslo Manual
    2. Statistical units for data collection
    3. The first Community Innovation Survey (CIS-1)
  4. The Second Oslo Manual and the Second Community Innovation Survey
    1. Overview
    2. Definitions and basic concepts
    3. Degree of novelty
    4. Statistical unit of analysis revisited
    5. Topics covered in Oslo-2
    6. The second Community Innovation Survey (CIS-2)
  5. Other Recent Innovation Surveys Outside the United States
    1. Canadian innovation surveys
    2. Other European surveys
  6. Innovation Surveys in the United States
    1. NSF-sponsored innovation indicator surveys
    2. The Yale/CMU surveys
    3. Other innovation indicator studies in the United States
  7. Issues for U.S. Innovation Indicator Development
    1. The reporting unit
    2. The composition of the questionnaire
    3. Sector coverage
    4. Response rate maximization

I. Background

This paper summarizes work that has occurred over the last three decades geared toward the development of new indicators of technological product and process innovation in the private sector of the economy. In addition to surveying the indicator development work that has occurred in Europe and North America, it is intended to frame the issues that will confront a new effort to construct innovation indicators for the United States. The research that underlies this paper was originally performed under contract with SRI International on behalf of the United States National Science Foundation. These efforts were not designed to focus exclusively (or even primarily) on Strategic Research Partnerships (SRPs). While a number of the innovation surveys described below do ask questions related to inter-firm relationships more broadly, and while these surveys have occasionally contained questions about "R&D limited partnerships" or "R&D joint ventures," these topics played bit parts rather than starring roles in their respective surveys. Instead, the purpose of this paper is to provide an historical context within which current and future innovation surveys may be viewed. Over the past two decades much has been learned about how to usefully structure and administer innovation surveys. That information may be of use to those contemplating future surveys on related topics.

A. The nature of indicators

The importance of the development of technologically new products or production processes has been widely appreciated since at least as early as the industrial revolution. Writing in 1776, Adam Smith felt that this concept was so self-evident that "It is unnecessary to give any example."[1] In the ensuing two centuries the pace of technological change has quickened dramatically. Government policies with regard to innovation have sometimes played the role of promoter, sometimes regulator, and sometimes referee between competing private interests. To support these functions a substantial effort has been made to understand the nature of technological innovation and to measure various facets of technological development.

Technological innovation is a concept that is sufficiently complex and multi-dimensional that it is impossible to measure directly. In this sense it is a bit like measuring the health of a human being. There is no single measure of human health, so we must rely on a range of indicators, such as body temperature, skin color, level of pain or discomfort, the levels of various different components of the blood, dark and light areas on x-rays, and so forth. Each of these indicators is based on our fundamental understanding of how the various biological systems in humans work. As our understanding of human physiology improves, so does our capacity to develop better indicators of human health. The underlying system is sufficiently complex and multi-faceted that it is reasonable to conclude that no single measure of human health will ever be developed.

So it is with innovation. Technological innovation is a process that involves the interaction of many different resources within and among firms. It also results in a wide variety of outputs that cannot be measured along any single-dimensioned scale. As a result, innovation can never be measured directly. Instead, indicators of innovation provide information on various facets of the innovation process, helping us to understand the phenomenon better and assisting those (both in the public and private sectors) who must formulate innovation policies.

B. Theoretical foundations for innovation indicator development

Changes in our understanding of the innovation process over time have resulted in substantial changes in indicators of innovation. Decades ago, innovation was perceived as a process that took place almost entirely within individual firms. Innovation was viewed as a procedure that often started with basic research and then moved in a linear fashion through applied research, development, and trial production runs, and continued through to the market introduction and diffusion of new products or products produced by new production processes. At each stage, inputs such as R&D expenditures, scientific and engineering employment, etc. could be monitored, and intermediate outputs such as patents and professional literature citations could be gauged. Much less attention was paid to developing indicators of innovation outputs. There were occasional efforts to assess output directly, and a considerable amount of effort was devoted to understanding and measuring productivity changes (as an indicator of process innovation). However, there was usually an unstated but implicit assumption that differences in innovation outputs among firms or between time periods could be understood by examining differences in innovation inputs. Thus, measures of innovation inputs could reasonably serve as indicators of innovation outputs.

The early work that was done developing innovation input indicators was extremely valuable and has provided, together with productivity data and patent data, the only long-term time-series data related to innovation in the U.S. However, because of its focus on inputs, and the implicit relationship of inputs to a linear model of innovation that largely neglected inter-firm linkages, it also distorted our understanding of innovation. For example, since a relatively small number of large firms in the U.S. economy accounted for the vast majority of R&D expenditures, it was assumed that they were also responsible for almost all technological innovation. Thus, if you believed these data, innovation policy could be usefully directed at large R&D-based firms, and small and medium-sized firms, which generally had no central R&D lab, could safely be ignored. In the heyday of the large corporate R&D lab this was an easy enough mistake to make, but it became harder to justify this view in the face of the extremely rapid growth of small, technology-based enterprises in the past two decades. Thus, many felt it was essential to look beyond indicators based on innovation inputs, toward indicators that were then described as being "downstream" from research and development.

At the same time advances were being made in our understanding of the innovation process itself. For example, innovation was no longer perceived as being a linear process within each firm. Some innovations occurred without any traditional "research" at all. Others began at a stage that had previously been thought of as downstream from research, but then required scientific and engineering expertise later, to solve problems related to the commercialization of a new product or process. This view of innovation, popularized by Kline and Rosenberg as the "chain-link model" of innovation, serves as a key foundation for most of the recent innovation indicator development.[2]

Innovation is also not an activity that occurs wholly within firms. Studies in many countries have confirmed that a significant portion of firms that introduce new products or new production processes have no formal R&D process at all. In many cases this is because they rely on technologies developed elsewhere. Their role in innovating consists of adapting these technologies or combining technologies developed elsewhere to produce improved or completely new products or production techniques. The importance of backward linkages with supplier firms, at least in some industries, has been understood for a very long time. In the 1970s, largely based upon the work of Eric von Hippel, considerable attention was focused on the role of users and customers in the innovation process as well. More recently Chris DeBresson has argued that the relationships between firms involved in innovation really consist of complex networks with an array of communications and interactions among firms.[3]

The conclusion from all of this is that without a better understanding of the nature of interactions among firms (whether they be customers, suppliers, or more complex relationships) any examination of the linkage between innovation (as measured by older indicators) and economic performance may be tenuous at best. For example, correlating firm performance measures such as sales growth or profitability with R&D expenditures has always been difficult because it was nearly impossible to specify the lag structure between innovation investment and improved performance. But if the underlying R&D that serves as a basis for sales growth due to new products or processes isn't even performed by the firm that introduced the innovation, then documenting this relationship with traditional innovation indicators will be impossible. Furthermore, policies that rely on traditional measures to indicate where innovation is occurring may be fundamentally flawed.

The newer indicators of innovation were designed to paint a more detailed picture of innovation by more directly examining innovative outputs, by collecting data on the structure of innovative activities within firms and by tracing the linkages between firms that give rise to innovation. While development work on these indicators has been going on for at least two decades, they are "new" in the sense that their collection is only now becoming regularized, and they are not being regularly collected at all in the U.S. Thus, they are the focus of this chapter. This is not to imply that the collection of innovation input data (R&D expenditures, technical employment) or patent or bibliometric data is not continuing to improve, only that they are beyond the scope of this chapter.

II. Early Work on New Innovation Indicators

A. Object-based studies

Two fundamentally different approaches have been used to collect data for new indicators of technological innovation.[4] The earliest work took the innovation itself as the unit of analysis and attempted to collect data on the number of innovations produced, expenditures on their development, their rate of diffusion and their significance. This approach has sometimes been referred to as innovation-based and is called the object approach by the latest version of the Oslo Manual.[5] Studies of this type were undertaken in Britain, the United States, and Canada.[6] The usual method for this type of data collection project is to develop a list of significant innovations through literature searches or panels of experts, identify the firms that introduced the innovations, and then send questionnaires to those firms about the specific innovations. A variation on this approach is to send a questionnaire to firms that asks them to identify their most significant innovation or innovations and then answer a series of questions about the specific development project that led to that innovation.

This approach, however, has not been the one favored by most data collection projects sponsored by or performed by government agencies. There are a number of reasons for this. First, firms have a very difficult time responding to detailed questions about innovation activities that are related to specific innovations. They simply don't retain this type of data. Second, the innovation-based approach only collects data about successful innovations. When studies are limited to successes, it is more difficult to use the data to distinguish factors that relate to successful innovative outcomes. Finally, government statistical agencies are generally geared toward collecting contemporaneous data for relatively brief periods not exceeding a few years. Studies based on literature searches or expert identification of innovations are inherently historical in perspective and generally cover rather long time periods. More recently some government innovation surveys (notably Canada's) have included questions asking firms to identify their most important innovation. But this approach adopts the firm's view of which innovations are important and generally limits firms to identifying only one or two representative innovations.

B. Subject-based studies

The second approach is to collect data from firms about the totality of their innovation efforts, not merely those that are associated with specific innovations. In this approach, firms are asked to provide information on expenditures for various innovation-related activities, information about the structure of these activities within their firms, information on their innovation-based relations with other firms and institutions and information about the firm's view of its innovation goals, policies and the obstacles it faces. This approach is sometimes referred to as a firm-based approach and is described by the Oslo Manual as subject-based.

Before 1992, a number of subject-based surveys were conducted in a variety of countries, mostly in Western Europe. One of the earliest of these was undertaken by Lothar Scholz at the IFO Institute in Germany.[7] Conceptualized in 1977-78, this survey was first conducted in 1979 and has been performed annually thereafter. The centerpiece of this survey is its request for data on innovation expenses by category: research, experimental development, construction and design, patents and licenses, production preparation for new products, production process innovation, and administrative process innovation. This survey is noteworthy in that it is one of the few that has been successful in obtaining answers from firms on innovation expenditure by function over a significant period of time. This is partly because the survey is repeated annually (so firms can refer to their previous year's responses). In addition, the IFO Institute works fairly closely with respondent firms, and respondents are provided with reports of the resulting data disaggregated by industrial sector, which they find very useful. This survey also asked a range of questions about the number and types of innovations introduced, the sources of ideas and barriers to innovation, and the technologies that underlay the innovations.

The largest innovation indicators survey undertaken during this period was performed in Italy by the National Research Council and the Central Statistical Office.[8] This was an incredibly ambitious project that began with a fairly brief survey sent to every manufacturing company in Italy with more than 20 employees (about 35,000 firms in all). This was the first large-scale survey to demonstrate that innovation was a pervasive phenomenon. While the percentage of innovators did increase with firm size, the Italian survey showed that even among the very smallest firms, almost two thirds had introduced new products or production processes. Thus the Italian survey clearly showed that innovation was a much more widespread phenomenon than in-house R&D.

The initial Italian survey was followed by a more detailed questionnaire administered to all those firms that had reported some innovation activity in the initial comprehensive survey. Questions were asked about the number and types of innovations, their costs, the types of technologies involved, sources of information, obstacles to innovation, their impact on sales and future technological opportunities. Many of these questions served as the basis for items included in the first edition of the Oslo Manual.

In the United States, early subject-based innovation surveys were sponsored by the U.S. National Science Foundation and conducted at MIT's Center for Policy Alternatives and Boston University's Center for Technology and Policy beginning in 1981.[9] The goal of this series of projects was to develop workable new indicators of innovation that would relate either to innovation outputs or to significant factors in a firm's environment that affected the innovation process. It is interesting to note that at the time this work was going on the investigators were completely unaware of the European firm-based studies, yet developed questions that were strikingly similar to those used in Germany and Italy.

III. The First Oslo Manual and the First Community Innovation Survey

A. The first Oslo Manual

In the late 1970s and mid-1980s the Organization for Economic Cooperation and Development (OECD) held a series of workshops to bring together individuals who were working on new indicators of technological innovation to compare notes. By this point the new indicator work was maturing to the stage where the policy community was beginning to seriously evaluate the need for standards in data collection that would facilitate international comparisons, in much the same way that the OECD's Frascati Manual provided standards for the collection of innovation input data.[10]

In 1988 the first multinational study that collected data for new innovation indicators was undertaken in Scandinavia, under the aegis of the Nordic Fund for Industrial Development.[11] The questions on this group of surveys revolved around many of the same themes that were pursued in the earlier surveys in Europe and the United States. From the beginning, however, it was anticipated that the surveys would be constructed in such a way that international comparisons of the results between the participating countries (Norway, Denmark, Finland, and Sweden) would be possible.

About the same time the Nordic Industrial Fund also sponsored a series of workshops to move toward a standardized approach to innovation indicator data collection. The initial intent was to provide some input for the ongoing Nordic Survey. The keynote paper for the first set of meetings, developed by Keith Smith of the Resource Policy Group in Oslo, referenced only the Nordic Survey,[12] but from the beginning the group, which comprised most of the individuals who had developed extant survey instruments,[13] framed the discussion more generally in terms that could be applied across the OECD. An additional workshop was held the following year, and the general framework for a guide to collecting innovation indicator data was in place. Drafting of what came (at the suggestion of Alfred Kleinknecht) to be known as "The Oslo Manual" was left to Keith Smith and Mikael Akerblom. The first edition of the Manual was adopted and published by the OECD in 1992.[14]

The first Oslo Manual did not contain specific questions that were recommended to be included on innovation surveys. Instead it laid out a conceptual framework for developing innovation indicators and discussed the general areas in which data had been collected by various existing surveys. Specific topic areas were then recommended for inclusion in future national surveys. The principal topic areas were:

  1. Firm objectives in undertaking innovation. This included the firm's technological strategies such as developing radically new products, imitation of market leaders, adapting technologies developed elsewhere, etc. It also discussed the firm's specific strategies with respect to product innovation (replace existing products, open up new markets, etc.) and process innovation (lowering production costs, increasing production flexibility, etc.)
  2. Sources of innovative ideas. This included cooperation with customers, suppliers, subcontractors, research institutes, government facilities and universities. It also included the acquisition of embodied or disembodied technology, and ideas from the scientific, technical, or commercial literature, trade fairs, exhibitions etc.
  3. Factors that hamper innovation. These included high risk, expense, lack of information, lack of technological opportunities, resistance to change within the firm, regulatory barriers, etc.
  4. The proportion of sales and exports due to new products. This was a measure of the importance of new products to the firm. Products were deemed to be "new" during their first three years on the market.
  5. The structure of R&D. This included a collection of issues concerning whether firms have a central R&D facility, the proportion of their R&D budget that is spent in such a facility, and the degree to which they have cooperative R&D relationships with other firms or research organizations.
  6. The acquisition and sale of technology. This included the degree to which firms rely on patents or other mechanisms for the protection of intellectual property and the degree to which they have licensing arrangements with other firms.
  7. Innovation costs by activity. This item dealt with total expenditures related to new product and process development disaggregated by type of activity, for example, internal and external R&D, acquisition of disembodied technology, expenditures for tooling up, engineering and manufacturing start-up and marketing.

B. Statistical units for data collection

A key issue that has been a source of some frustration for innovation indicator researchers concerns the proper statistical unit at which to collect the data. If data is collected at the corporate or enterprise level, it is fairly easy to merge it with other data collected at this level, including data on R&D. This approach has been attractive to many national statistical offices because they are already collecting other data at this level. Also, the technology strategies of firms are sometimes developed at this level, and collecting data at a lower level of aggregation may make it difficult to pursue these issues. On the other hand, much more data, and often better quality data, can be collected at the establishment level, where most innovation activities actually occur. However, it is often impossible to reaggregate data that is collected at the establishment level to provide information about the enterprise as a whole.

This topic will be treated in more detail in section VII.A below. At this point it is worth noting that for most of the early European studies this was less of an issue than in the United States. In the early 1980s, when asked about this issue, one European data collector said that for most firms he had dealt with there was very little difference between the enterprise and the establishment. Each of the half dozen firms in his country that was large enough to create a problem was treated as a special case.

In the United States, however, the difference is enormous. All of the NSF-sponsored U.S. studies up to this point have used the enterprise as the unit of analysis, in order to be consistent with NSF's other data collection procedures. Sometimes exceptions were made in individual cases where the enterprises were essentially holding companies or where the firms themselves asked that the survey be sent to establishments. Where inconsistent procedures were used, significant data analysis problems resulted.

The original Oslo Manual treats this issue fairly ambiguously. Initially it says that the unit of analysis is the "enterprise-type" unit, by which it means the smallest possible separate legal entity. However, it also approves of the use of a smaller unit (a division or establishment) in cases where the firm is engaged in many different types of activities. It then says that the "enterprise group" should not be used unless its activities are relatively homogeneous.

C. The first Community Innovation Survey (CIS-1)

In the early 1990s the European Community sought to design a common questionnaire that would be based on the Oslo Manual and could be administered in all of the EU countries. This project was implemented as a joint venture of Eurostat and the SPRINT / European Innovation Modeling System (EIMS) program of DGXIII. In 1991-92 there was a small-scale pretest of the survey in five countries. The survey instrument was revised in early 1992, and in 1992/93 data collection was completed in Belgium, Germany, Denmark, France, Greece, Italy, Ireland, Luxembourg, the Netherlands, Portugal, the United Kingdom, and Norway. Over 40,000 firms were surveyed in the course of this project.[15]

The goal of attaining comparability between nations was not fully achieved, for a number of reasons. First, the survey instrument itself differed between countries. Each country was free to modify the survey as it saw fit, and indeed some did, by adding questions, deleting questions, or altering the CIS core questions. Sometimes these alterations were subtle. For example, an otherwise identical question that asks for categorical responses but provides different response categories may make comparisons impossible. As Arundel et al. noted, "even minor differences such as a change in layout or a change in scale can have substantial effects on the comparability of the results."[16]

Second, sampling and follow-up procedures varied substantially between countries. Some countries conducted sample surveys; others conducted a census. In some cases the population was taken to be all manufacturing firms; in others the questionnaire was targeted toward firms believed to be innovators. Some surveys were conducted by mail; others were based on interviews. In some countries firms were legally mandated to respond; in others they were not. The guidelines for filling out the survey that were provided to firms also differed substantially between countries. In part these differences occurred because of different legal and institutional requirements of the different countries, in part because countries wished to make their surveys comparable with previous surveys they had done, and in part because in some areas there were no recommended procedures for the EU as a whole. The Commission was reluctant to provide detailed procedures on data collection because it felt it would be presumptuous to do so, given that it is only empowered to make recommendations, not promulgate requirements.

After the first Community Innovation Survey was completed, substantial revisions were made both to the survey instrument and to the Oslo Manual that served as the basis for the instrument. Since CIS-1 has largely been superseded, it will not be considered in detail here.

IV. The Second Oslo Manual and the Second Community Innovation Survey

A. Overview

A revised version of the Oslo Manual was published in 1997 (hereafter Oslo-2). The revisions to the Manual were in part based upon the field survey experience of CIS-1, but were also driven by fundamental changes in the economy itself. In particular, for the first time an attempt was made to draft innovation indicator data collection recommendations that would apply to service industries as well as manufacturing industries. Just as the first version of the Oslo Manual served as a basis for the first CIS survey, the second version of the Manual laid the underpinnings for the second CIS survey.

B. Definitions and basic concepts

In Oslo-2, technological innovation is divided into two categories: Technological Product Innovation and Technological Process Innovation. Product innovations are further subdivided into new products and improved products. Oslo-2 recommends that when surveys ask firms whether they have made any innovations during the relevant survey period, firms be asked about each of these three categories separately. The definitions for the three types of innovation are provided as follows:

A technologically new product is a product whose technological characteristics or intended uses differ significantly from those of previously produced products. Such innovations can involve radically new technologies, can be based on combining existing technologies in new uses, or can be derived from the use of new knowledge.

A technologically improved product is an existing product whose performance has been significantly enhanced or upgraded. A simple product may be improved (in terms of better performance or lower costs) through use of higher-performance components or materials, or a complex product which consists of a number of integrated technical sub-systems may be improved by partial changes to one of the sub-systems.

Technological process innovation is the adoption of technologically new or significantly improved production methods, including methods of product delivery. These methods may involve changes in equipment, or production organization, or a combination of these changes, and may be derived from the use of new knowledge. The methods may be intended to produce or deliver technologically new or improved products, which cannot be produced or delivered using conventional production methods, or essentially to increase the production or delivery efficiency of existing products.[17]

Definitions for concepts that are as amorphous as technologically new products and processes are very difficult to develop. One way to be clearer about them is to provide many examples of the types of things that should and should not be included within the definition, in the hope that respondents will be able to draw analogies between the product and process examples mentioned in the definition and their own products and processes. The problem with this approach is that it leads to relatively lengthy definitions. Experience has shown that if the definitions become so lengthy that a separate sheet containing definitions must be included with the questionnaire, many, if not most, of the respondents will not read the definitions section at all. Therefore an effort was made in Oslo-2 to be clear, but also brief, in providing definitions.

Not only is it difficult for firms to determine what is "new" but it is also difficult for them to determine the difference between "new" and "improved" consistently. When deciding whether a product is wholly new or simply improved, they are most likely to take their own company's history as their frame of reference. As a result, a wholly new product produced by a company that manufactures household cleaners may have less technological innovation content than a product that is classified as merely improved by a manufacturer of embedded microcontrollers.

Oslo-2 provides a brief discussion of some of the problems that arise when attempting to apply these definitions directly to the service sector. In particular, in service industries the distinction between the product and the production process becomes blurred because the product is generally intangible and consumption occurs simultaneously with production. While the Manual does offer a wide range of examples of service sector innovations, it provides little guidance for separating them into product and process innovation categories. For a more detailed discussion of options with regard to the service sector see section VII.C, below.

C. Degree of novelty

Another problem that must be addressed by innovation indicator data collection efforts is the specification of the degree of novelty required in order for a product or process to be considered truly "new." To take two extremes, an innovation might be considered new only if it was being introduced for the first time anywhere in the world in any industry. Alternatively, it might be considered new if it were simply being used for the first time by the "innovating firm" even if it had been previously widely used in other firms in the same industry.

Oslo-2 refers to these two extremes as "world-wide technological product or process (TPP) innovation" and "firm-only TPP innovation," respectively. They are defined as follows:

Worldwide TPP innovation occurs the very first time a new or improved product or process is implemented. Firm-only TPP innovation occurs when a firm implements a new or improved product or process which is technologically novel for the unit concerned but is already implemented in other firms and industries.[18]

While it may be reasonably argued that every application of existing technology in a different setting requires a degree of adaptation and, thus, innovation, it is also clear that the adoption of existing technology that is widely used elsewhere involves a substantially reduced degree of innovation relative to the creation and first use of new technology. After all, the introduction of a production process that is "new to the firm" might simply occur because the firm was expanding its product line into a new area which required different equipment based on existing (and possibly quite ancient) technologies.

On the other hand, firms generally know when a product or production process is new to their firm. Often they do not know whether it is also new to their industry, new to their country or region, or new to the world. In fact, in DeBresson's object-based study of innovation in Canada, he found that a rather large number of firms claimed to have developed world-first innovations.[19] Indeed, in a number of cases more than one Canadian firm claimed to have been the first in the world to develop a particular innovation.

It is generally the case in innovation indicator research that there is a tension between the data that would be most helpful from a policy perspective and the data that firms are readily able to provide. Survey design is often something of a balancing act. The more the survey focuses on obtaining the best possible data, the less able and willing firms are to supply that data and the lower the response rate. However, by moving too far in the other direction and asking firms only questions that they can easily answer, high response rates can be achieved, but the resulting information may be uninteresting. One of the most important reasons for pretesting a new survey vehicle is to determine whether a proper balance has been achieved between minimizing respondent burden and maximizing the usefulness of information obtained.

Oslo-2 takes the position that any TPP that is new to a firm is to be classified as an innovation. It also recommends asking firms about their world-first innovations and perhaps also about some intermediate degrees of novelty, such as TPPs that are new to the country or new to the region.

D. Statistical unit of analysis revisited

The 1997 Oslo Manual revisited the issue of which level within the firm should provide data and drew a distinction between the reporting unit (the part of the firm that is asked to provide the data) and the statistical unit (the part of the firm about which the data is collected). Oslo-2 notes that if the reporting unit is larger than the statistical unit, it may be difficult to determine how to distribute the data gathered among the various portions of the firm that comprise the different statistical units. Suppose, for instance, that there is a large enterprise with a number of different production divisions. If the reporting unit is the enterprise, but the statistical unit is the production division, it may be difficult to allocate data reported by the enterprise among the various divisions. In asking questions about the objectives of innovation, for example, if the enterprise prioritizes its objectives, it is unclear whether these priorities are the same for all divisions or whether they reflect the priorities of the largest, or most profitable, or most innovative divisions. Or they might reflect some sort of weighted average of the objectives of the various divisions.

Oslo-2 does not explore the implications of the statistical unit being larger than the reporting unit, but the problems are similar. Suppose that the reporting unit is the production division but the statistical unit is the enterprise. If data is collected from the various production divisions, it may be difficult or impossible to reaggregate this data back to the level of the enterprise. This is clearly the case if a sample survey is taken rather than a census, or if some divisions are among the non-responders, because sampling is generally done from the population of establishments as a whole, not on an enterprise-by-enterprise basis. Furthermore, even if all divisions are surveyed and all divisions respond, it may not be possible to reaggregate all of the data. Quantitative data poses no problem: one could simply add up the R&D expenditures of all divisions to obtain a total R&D expenditure for the firm. But on qualitative questions such as the strategic objectives of the firm's innovation activities, if two divisions report one objective and three divisions report another, it is not clear how these would be aggregated to get an overall firm objective.
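The asymmetry can be made concrete with a small sketch. The Python fragment below is purely illustrative; the division names, spending figures, and objective categories are invented, and the two aggregation rules shown are merely two of many defensible choices.

```python
from collections import Counter

# Hypothetical division-level responses for a single enterprise
# (all names, figures, and categories invented for illustration).
divisions = [
    {"name": "Division A", "rd_spend": 4.0, "objective": "open new markets"},
    {"name": "Division B", "rd_spend": 1.5, "objective": "reduce production costs"},
    {"name": "Division C", "rd_spend": 0.5, "objective": "reduce production costs"},
]

# Quantitative data reaggregates by simple addition.
enterprise_rd = sum(d["rd_spend"] for d in divisions)  # 6.0

# Qualitative data has no natural aggregate: two defensible rules
# yield two different "enterprise objectives".
modal_objective = Counter(d["objective"] for d in divisions).most_common(1)[0][0]
largest_division = max(divisions, key=lambda d: d["rd_spend"])

print(enterprise_rd)                  # 6.0 -- unambiguous
print(modal_objective)                # reported by the most divisions
print(largest_division["objective"])  # reported by the biggest R&D spender
```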

The principal difference between the treatment of the statistical unit in the first and second Oslo Manuals is that Oslo-2 shows a much greater recognition of the problems involved in selecting any one unit of analysis. Its basic recommendation is the same: that the enterprise-type unit generally be used, but it makes this recommendation "Taking into account how innovation activities are usually organized." It also recommends that when enterprises are involved in several industries, a smaller unit like the kind-of-activity unit (KAU), "an enterprise or part of an enterprise which engages in one kind of economic activity without being restricted to the geographic area in which that activity is carried out,"[20] may be more appropriate.[21]

Oslo-2 also recognizes the problems that will arise from attempting to evaluate innovation within multinational companies. Specific recommendations for dealing with the problems of attempting to calculate national data in the face of multinational corporations are not included.

E. Topics covered in Oslo-2

As with the first Manual, Oslo-2 does not recommend the wording of specific questions that might be included in the survey. It does, however, go into substantial detail about what should be included in the questions, occasionally to the point where the wording of the questions can be derived fairly directly from the text of the Manual. The major topic areas covered in Oslo-2 are:

Note that there are no questions here devoted directly to research relationships among firms. More broadly, connections between firms are explored in the questions that concern the sources of information for innovations (where other firms are a possible source) and purchases and sales of new technology.

A substantial amount of time and effort has been spent attempting to develop innovation survey questions that measure the resources devoted to the wide range of innovation activities within firms. Almost all of these efforts have resulted in data of questionable value. The biggest problem stems from attempts to separate the part of each category of expenditure that is related to new and improved products and processes from the part that relates to routine activities. For example, research and development expenditures are relatively easy to collect because almost all R&D expenditures are directly related to the development of new products or processes. However, even R&D expenditures may not be trivial to calculate, because many firms' research personnel spend a portion of their time working on products that reflect style variations, which are not properly viewed as technologically new products. Interviews with the individuals in these firms who are responsible for reporting R&D expenditures reveal that they are only partially successful in separating out R&D expenditures related to technologically new products from those which are not.

As data is collected about firm activities that are closer to the market introduction of new products, it becomes much more difficult to collect data that relates solely to expenditures for technologically new products and processes. Expenditures for plant and equipment, for example, often cannot be segregated into expenditures for new products (to say nothing of expenditures for technologically new products) and expenditures for expansion of the production of existing products. The same is true for marketing expenditures. Most survey response analysis has shown that the questions on innovation expenditures are the most difficult for respondents, have the lowest response rates, and produce results of questionable value.

Oslo-2 recognizes many of these problems and devotes a substantial section to attempting to hone the definitions to be clear about which items should be included and which should not. Despite this, Oslo-2 clearly sees the problem as "not which data to collect, but how to collect reliable data on innovation expenditures other than R&D expenditures."[27] To try to improve the situation, Oslo-2 recommends that surveys ask firms to indicate whether the data provided in this area are fairly accurate or are rough estimates only. The Manual notes that this may result in more firms simply doing rough estimates, but it might also raise response rates.[28] In this context it is worth noting that high response rates are not always desirable. If the alternatives are high response rates but poor quality data or low response rates but carefully answered questions, the latter may, in fact, be preferable.

Oslo-2 also includes a very useful section on survey implementation, imputation of data from non-respondents, and tabulation of the results. The latter issue needs substantially more attention. In a comparison of the data obtained from six countries, Hansen found that even when the questions asked were essentially identical and even when the responses were gathered in a similar way, the results of the various national surveys might not be comparable if the results are not reported using the same protocol. This occurs because the results are published in aggregate form and if the same aggregation procedures are not used, useful comparisons between the data sets will not be possible.[29]
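A minimal hypothetical example of Hansen's point, with invented microdata and two assumed tabulation protocols, shows how identical firm-level responses can produce headline figures that are not comparable:

```python
# Hypothetical microdata: (employees, reported at least one innovation?).
firms = [(10, True), (20, True), (50, False), (200, False), (1000, True)]

# Protocol 1: unweighted share of firms reporting innovation.
share_by_firm = sum(1 for _, innovated in firms if innovated) / len(firms)

# Protocol 2: share of total employment located in innovating firms.
total_employment = sum(emp for emp, _ in firms)
share_by_employment = sum(emp for emp, i in firms if i) / total_employment

print(f"{share_by_firm:.0%}")        # 60% of firms innovate
print(f"{share_by_employment:.0%}")  # 80% of employment is in innovating firms
```

Published tables that report only one of these aggregates, without documenting which protocol was used, cannot safely be compared across countries.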

F. The second Community Innovation Survey (CIS-2)

The most recent EC survey is the second Community Innovation Survey (CIS-2). As with the first CIS, the EC developed a model or "harmonized" set of questions for the second round of surveying. Individual countries were then free to modify, add or delete questions as they saw fit. The questionnaire was developed in early 1997. By the end of 1998, fourteen of the European Union countries and Norway had implemented this survey. Many of these countries had submitted the resulting data to Eurostat and that data is in the process of being cleaned and analyzed as this paper is being written.

The second Community Innovation Survey is actually two surveys, one that is designed to cover manufacturing industries and one for the service sector. The inclusion of the service sector represents a major step forward since each year services account for a larger fraction of most national economies. Service sector industries that are covered by CIS-2 include electricity, gas and water supply (NACE 40-41), wholesale trade (51), transportation (60-62), telecommunications (64.2), financial intermediation (65-67), computer and related activities (72) and engineering services (74.2 in part). Notably absent is the health care sector.

The changes made to the CIS manufacturing questionnaire to adapt it to the service sector are actually relatively minor. So, instead of treating the two questionnaires separately, they will be discussed at the same time, with the adaptations made for the service sector noted along the way. The service sector questions were pre-tested in Germany and the Netherlands. In addition, an early, larger-scale test was conducted in Italy. A more complete discussion of the nature and status of innovation indicators in the service sector will be found in section VII.C, below.

The survey begins with a list of questions about the nature of the firm and its activities, including employment, sales, and exports. These questions are rather routine and probably presented few response difficulties for firms. In many countries this data may already be known, and the questions could therefore be deleted from the survey.

The definitions for new products and processes are included with the questions about whether firms introduced any innovations, rather than on a separate sheet. Technological innovation is defined as "technologically new products and processes and significant technological improvements in products and processes." By new, the survey means new to the enterprise. The manufacturing questionnaire asks separately about new products and new production processes. With regard to products, it asks firms to specify the percentages of sales during the period 1994-96 that were generated by new products, improved products, and unchanged products. It also asks firms about the percentage of sales that was generated by products that were not only new to the firm, but new to the market as well. Note the distinction here between the CIS survey and the Oslo Manual. The Oslo Manual specifically refers to "world-wide TPP innovation," while the CIS survey asks about products that are new to the firm's market. These are not necessarily the same thing. A firm could view its market as being regional or national. If a product existed in other national or regional markets but was being introduced to the firm's market for the first time, the firm might count it as new. In fact there is substantial ambiguity here, since different firms in the same market may view the concept of a "market" differently. The survey provides no guidance for interpretation of this term.
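To make the structure of these product questions concrete, the fragment below sketches how a single response might be checked for internal consistency. The figures are invented, and the two checks are assumptions about what a sensible processing step might look like, not documented CIS-2 procedures.

```python
# Hypothetical response: shares of 1994-96 sales by product category.
sales_shares = {"new": 25.0, "improved": 35.0, "unchanged": 40.0}
new_to_market = 10.0  # share of sales from products also new to the firm's market

# Assumed check 1: the three categories should exhaust total sales.
assert abs(sum(sales_shares.values()) - 100.0) < 1e-9

# Assumed check 2: a product new to the market is presumably also new
# to the firm, so the new-to-market share cannot exceed the "new" share.
assert new_to_market <= sales_shares["new"]
```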

In the service sector survey, no distinction is drawn between product and process innovation. Instead firms are asked whether they introduced "any new or significantly improved services or methods to produce or deliver services." This was done because of the problems of segregating innovations into product and process innovations when production and consumption occur simultaneously. In addition, the question on the percentage of sales due to new products was dropped from the service questionnaire because it was found that firms in the service sector had much more difficulty answering it than firms in manufacturing.

One key distinction between the first and second CIS questionnaires is that the first one refers to innovations that were "developed or introduced" in the relevant period while the second refers to innovations that were "introduced onto the market" or "used within a production process." Arundel et al. argue that this leads to a confusion of the technology creation process with the diffusion process and suggest rewording this question so that it asks separately about the introduction of innovations that were developed within the firm and the introduction of innovations developed elsewhere.[30] This approach, it is argued, would clarify the interpretation of many of the remaining questions, which seem to apply mostly to the creators of new products rather than those who diffuse the technology.

CIS-2 contains a significant section asking firms to disaggregate 1996 innovation costs between the following categories of expenditure:

One-paragraph definitions are included for each of these areas. The service sector question differs slightly in that it refers to "technological innovations" rather than "product and process innovations." It also refers to "software and other external technology" rather than just "other external technology." Finally, it omits the "industrial design" category and in its place has "preparations to introduce new or significantly improved services or methods to produce or deliver them."

This section also asks four additional questions concerning the resources devoted to innovation. Firms are asked about R&D employment (again assuming the data is not available from other surveys), and whether they performed R&D continuously, occasionally, or not at all. They are also asked for a categorical (yes/no) response as to whether they received government financial support for innovation activities; this support included subsidized loans and grants, but there is no reference to provisions of the tax code that might in effect have provided subsidies. Finally, they are asked whether they applied for any patents in any country during the 1994-96 period.

CIS-2 asked for categorical responses regarding the objectives of firms' innovation activities during the period 1994-96. Firms were not asked to rank order their objectives; rather, for each of the listed objectives they were asked to specify whether the objective was not relevant, slightly important, moderately important, or very important. The list of objectives provided was virtually the same for the service sector as for manufacturing (the word "service" replaced the word "product"):

Firms were also asked to specify the main sources of information for innovations during the 1994-96 period. The same scale was used as in the objectives question, and the question was identical on the services and manufacturing questionnaires. The sources included were:

Note here that the wording of this question tends to exclude SRPs, although almost every other form of inter-firm relationship (customer, supplier, competitor) is included. In an additional question, however, firms were asked to specify whether they had been involved in any joint R&D or other innovation projects during the 1994-96 time frame. If so, they were asked to specify whether their partners were located in the same country, in Europe, the United States, Japan, or elsewhere. The types of partners specified were identical to the sources of information list, except for the obvious deletions of the enterprise itself, patents, information networks and fairs.

Finally, firms were asked if they had had at least one innovation project seriously delayed, abolished, or prevented from ever being started. If so, they were given a list of possible reasons and asked, for each, whether it resulted in the project being delayed, abolished, or not started. Possible "hampering factors" include:

This question was identical on the manufacturing and service sector surveys. The value of this question will be considered in detail below. It is worth noting, however, that the survey provided no guidance to firms concerning how to separate the twin issues of economic risk and innovation costs. Nor did it ask firms whether having innovation hampered by factors like a lack of finance and the need to meet standards was a good thing or a bad thing from the standpoint of the firm. Rather, there seems to be an underlying assumption in this question that all things that hamper innovation are undesirable.

In a footnote, CIS-2 recommends that the national surveys also ask firms to describe their most important technologically new or improved product or process. There are no recommendations on the core survey for the wording of such a question, nor is there any guidance about which types of questions should be asked about the most important new or improved product or process. In addition, there is no discussion of this question in the guidelines for submitting the data to Eurostat. One is left with the impression that Eurostat feels the question is important, but is unsure of the best way to implement it or to collect and present the results.

Currently, Eurostat is gearing up for the third iteration of the Community Innovation Survey. It is hoped that the questionnaire will be finalized by the end of 2000 so that it can be administered in 2001. There may well be a more detailed exploration of the relationships between firms on this questionnaire, though the direction this exploration might take has yet to be determined.

V. Other Recent Innovation Surveys Outside the United States

A. Canadian innovation surveys

The Science and Technology Redesign Project of Statistics Canada has been most directly responsible for the collection of innovation indicator data in that country. Canada is worth treating separately here because, in addition to conducting surveys that stem from questions in the Oslo Manual, Statistics Canada has also expanded that design toward drawing connections between technology developers and technology users. In addition, it has performed significant development work in service sector innovation surveys and has conducted a couple of industry-specific surveys that generate useful information for future innovation indicator development. The discussion below is not intended to provide a comprehensive review of the Canadian surveys; rather, it will focus on those areas where the Canadian questions were significantly different from those used elsewhere.

In 1993, Statistics Canada conducted an Oslo-style survey of innovation in Canadian manufacturing industries. A similar methodology was applied in a 1997 survey (1996 data) of the communications, financial services and technical business services industries. Little, if anything, about this survey is service-sector specific; most of its questions are general enough that they could be applied to both the manufacturing and service sectors of the economy.

The Canadian surveys are significantly longer than those contemplated in either of the Oslo Manuals. In part this is because completion of the survey is mandated by Canadian law, which permits the Canadians to survey a wider range of topics than is possible in other countries. Canada has the additional advantage of having a single statistical data collection agency for the country. As a result, routine questions about the firm that might have to be duplicated on multiple surveys in other countries can simply be obtained in Canada by linking data sets based on a tax identification number.

Questions that have been added to the survey by the Canadians include the usage of employee development and training programs, employee access to the Internet, and the firm's use of the Internet for selling its products. In addition, there is a rather detailed section on the qualitative impacts of innovation activities on the firm, including the impact on productivity, the quality of service, the range of products offered, the size of the geographic market, and the environment. Firms are also asked the degree to which new products replaced products that the firm previously offered. In addition to asking questions about the impact of new and improved products on firm sales and exports, firms are also asked about the frequency with which new products are introduced.

As in most Oslo-based surveys, firms are asked about their objectives in pursuing innovation development programs. The Canadian survey uses a threshold question (whether the objective is relevant at all) followed by a five-point scale for rating how important the objective is to the firm. The Canadian survey lists potential innovation objectives in more detail than most other surveys. So, for example, while CIS-2 lists "open up markets or increase market share" as a possible objective, the Canadian survey subdivides this into two categories ("open up" and "increase share") and further subdivides the increase-share category by geographic region (domestic, European, USA, Japan, other Pacific rim, and other).

Aggregating this data in such a way that it will be comparable with the results of the CIS-2 survey will be difficult or impossible for three reasons. First, the CIS survey uses a 3-point scale rather than a 6-point scale. Second, because the data is not quantitative, responses cannot simply be added up across the finer categories of the Canadian study to create the coarser categories of the CIS survey. Finally, the wording and design of the question itself may affect the comparability of the two surveys.
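The first two obstacles can be sketched in a few lines of hypothetical code; the categories, ratings, and cut points below are invented for illustration only.

```python
# One firm's hypothetical ratings on the finer Canadian categories
# (0 = not relevant, then a 1-5 importance scale).
canadian = {
    "open up new markets": 4,
    "increase market share: domestic": 5,
    "increase market share: USA": 1,
    "increase market share: Japan": 0,
}

# The ratings are ordinal labels, not quantities, so no arithmetic
# combination gives a defensible rating for the single coarser CIS
# category "open up markets or increase market share":
print(sum(canadian.values()))                  # 10  -- meaningless as a rating
print(sum(canadian.values()) / len(canadian))  # 2.5 -- equally arbitrary
print(max(canadian.values()))                  # 5   -- discards the low ratings

# Collapsing six response levels onto three is likewise non-unique:
# some pair of levels must share a bin wherever the cut points fall.
def to_three_point(rating: int) -> str:
    return ["low", "low", "medium", "medium", "high", "high"][rating]
```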

The same situation extends to questions on sources of information for firm innovation and barriers to innovation as well. In each case the Canadian survey presents a substantially finer subdivision of categories for firm responses. The implication here is not that the Canadian survey is either better or worse than the CIS version. Rather, it simply points up the importance of explicitly deciding whether or not to conform one's national survey design to those used elsewhere. There is clearly a trade off here between gaining more information about the domestic economy and obtaining results that are comparable with other nations.

As noted above, CIS-2 recommends that surveys ask firms to specify their most important technologically new or improved product or process. The Canadian survey takes up this issue and pursues it in some detail. Firms are asked to describe their most important innovation, are provided with a list of novel attributes, and are asked to specify all of those that apply to the innovation. These potentially novel attributes are:

The principal reason the Canadians have focused on the firm's most important innovation is that the Canadian theoretical framework views innovation as having three parts: the generation of knowledge, the diffusion of knowledge and the use of knowledge. Very often these three functions do not occur in the same firm. This theory is moving in the direction of looking at clusters of firms which may have complex relationships and information flows. As a result, it is important to pursue, in some detail, the linkages between technology developers and technology users. So, for example, firms are asked to specify the industry or industries and country or countries that were the main suppliers of ideas for the specified innovation as well as the ones that were the main customers for the new product. The idea here is to move beyond questions that, for example, might ask firms to specify the percentage of new ideas obtained from customers, and instead begin to identify networks of firms and industries that produce and use new technology. Note that this concept of "clusters" or "networks" of firms may be substantially less formal than that which is implied by SRP's. In particular, these clusters may have little in the way of a contractual foundation and may be more ephemeral in nature, appearing and disappearing as the firms see fit.

In addition, firms are asked whether the innovation was a world first, a first for Canada, or simply a first for a local market. If it was not a world first, firms are asked where it was first developed and the length of time between its initial development and its adoption by the responding firm. Firms are also asked about the effect of this innovation on firm employment and on the skill requirements of the firm's workers.

The Canadians devote a substantial section of their questionnaire to intellectual property rights. They ask both how frequently various mechanisms were used over a three year period, in categorical brackets (none, 1 to 5 times, 6 to 20 times, 21 to 100 times, more than 100 times), and how effective these mechanisms were in protecting intellectual property (using a threshold question and a five point scale). In this question they specifically ask about copyrights, patents, industrial designs, trade secrets, trademarks, integrated circuit designs, and plant breeders' rights. They also ask firms about the effectiveness of two additional strategies: being the first to market, and having a complex product design.

Finally, firms are asked to rate the importance of various factors to their overall competitive strategy and to the overall success of their firm. This permits one to assess in a more general way the role of innovation in the firm. A fairly detailed list of factors that might contribute to firm success is included under general areas such as technology and R&D, management, production, markets, financing and human resources. The factors that might contribute to a firm's competitive strategy include such items as price, quality, and customer service in addition to introducing new or improved products. In each case firms are not asked to compare the importance of these factors; they are merely asked to rate the importance of each on a five point scale (or as not applicable at all).

The Canadian Survey of Innovation was repeated in the fall of 1999, collecting data covering the 1997–1999 period. The questionnaire was similar to that used previously, though some changes were made in the order of the questions in an attempt to minimize respondent misinterpretation. In addition, a new statistical unit was introduced called the "provincial enterprise". This unit consisted of all establishments of a multi-provincial firm that are located in a particular province. The idea was to develop a database with regional data. However, the survey was sent to each company's national head office, and some problems of double-counting resulted when the head office attributed a single innovation to each statistical unit.[31]

In addition to the more general innovation surveys, Canada has conducted surveys on biotechnology in 1997 and 1999 and has recently completed a survey of the Canadian construction industry. The biotechnology survey was conducted in two parts, one surveying biotechnology companies themselves and the other surveying firms in industries that were likely to be users of biotechnology-based products. While these surveys do not have a great deal of relevance to more general innovation indicator surveys, in a couple of cases they resulted in the creation of interesting new survey questions that may have some broader relevance. In particular, in addition to asking whether intellectual property had been purchased or sold during the survey period, the biotechnology survey also asked whether firms had ever been forced to abandon a project because further work was blocked by intellectual property held by another firm. It also asked the number of times in the past year that the firm had been involved in patent litigation.

Canadian analysts report that firms had considerable difficulty answering the questions about the education of employees, about whether an innovation was new to the world or just new to the local market, and about the amount of time that elapsed between the introduction of an innovation by another firm and its adoption by the responding firm. In addition, the Canadian survey contained a question asking firms to allocate the costs of innovation among categories, and it found that Canadian firms had a great deal of trouble doing so.

The biotechnology survey also has an interesting variation on the barriers to innovation question. Instead of asking directly about barriers to innovation it asks firms to check the three most important "problems" to successful commercialization:

This provides a potential alternative to the usual problems with separating out the effects of risk and cost in assessing innovation barriers.

B. Other European surveys

A number of other countries (both within and outside the EU) have built upon the basic structure of the CIS and other reported surveys, adding questions on issues of concern to them. In addition to the EU countries, surveys have been carried out in at least the following countries: Switzerland, Norway, Poland, the Slovak Republic, Russia, Japan and Australia. These will not be considered in detail here, but will be taken up to the extent that they offer questions that are significantly different from those discussed above.

The Italian survey, for example, asks about the impact of innovation on firm employment, and also adds opinion questions about the impact of innovation on firm performance and about the firm's innovation plans for the future.

Like the Canadian survey, the Polish and Slovak Republic surveys ask questions about the purchase and transfer of new technologies.

The Swiss survey explores a number of interesting new areas. Firms are asked to evaluate, on a 1-5 scale, the technological opportunities available in their industry. The questionnaire also asks firms to evaluate the level of competition (with separate questions for price competition and other kinds of competition) using the same 1-5 scale. Firms are asked to characterize their products as standardized, differentiated, and/or custom built, though the survey does not ask firms to specify the percentage of products that fall in each category. In addition, firms are asked to rate the contribution of external information to the effectiveness of internal innovation development.

Switzerland also conducted a separate survey on the diffusion of basic technology in industry, focusing mainly on the conditions surrounding the adoption of computer-assisted production and the diffusion of microelectronic-based technologies. This raises an interesting question of whether it is advisable to bundle special surveys on multiple topics together when they are targeted at the same respondents. On the one hand, doing so no doubt reduces the cost of administering the survey and avoids repeating basic questions about the firm on both surveys. On the other hand, it may cause the survey to become so large that response rates are adversely affected. The Swiss survey, including the computer-assisted production and microelectronics special topics, ran to sixteen pages of fairly small type.

In the second round of the CIS, most countries implemented the core CIS-2 questionnaire with relatively few changes. Exceptions include the United Kingdom and Germany, which both implemented significantly more detailed surveys. The UK survey asks an interesting question about the degree to which firms have implemented technologically-oriented management or organizational changes. They ask specifically about electronic data interchange, just in time (or similar) planning systems, electronic mail, use of the Internet, investments in people, quality management systems or standards (such as ISO9000), and benchmarking performance against other firms.

Under the topic of information sources, the UK survey asks the usual question about external sources of information that provide ideas for new or improved products or production processes, but it also goes on to ask which sources of information were used to actually carry out innovation projects as opposed to just suggesting ideas.

VI. Innovation Surveys in the United States

Efforts to collect innovation indicator data (other than input indicators) in the United States have been ongoing for at least twenty-five years. In the mid-1970s, NSF sponsored a group of pilot studies that were geared toward measuring the resources devoted to innovation on a project by project basis.[32] These studies encountered significant problems because it was determined that firms rarely kept records that attributed specific costs to specific development projects.

A. NSF-sponsored innovation indicator surveys

In the early 1980s the focus shifted to collecting data about firms rather than about specific innovation projects. Christopher Hill, et al. explored the feasibility of a very wide range of potential innovation indicators. These indicators were developed through an exhaustive search of the extant literature on innovation theory and tested by conducting in-depth interviews with potential respondents using a series of trial innovation questionnaires.[33] Some of the questions in CIS-1 and CIS-2 can trace their roots to this indicator development project. The project culminated in a survey of 600 manufacturing firms in 1983-84 (collecting 1982 data).[34] The topics covered in this survey were:

This survey achieved a response rate in excess of fifty percent. The completion rate for individual questions on the returned surveys was in excess of ninety percent. The survey was repeated in 1986 (collecting 1985 data) by Audits and Surveys, Incorporated. Roughly two thousand firms were involved in this latter study, but the response rate was substantially lower than with the previous survey. Almost 100 firms were respondents to both surveys. While minor changes were made to a few of the questions, for the most part the data collected were the same in both surveys. In some cases the same individual answered both questionnaires while in others, the questionnaire was answered by two different individuals within the same firm. One effort to assess the quality of the data consisted of analyzing whether the differences between the survey responses were greater when a different person responded than when the same person completed both surveys.[35]

In 1994 the U.S. Census Bureau conducted another pilot survey to develop innovation indicators in the U.S., covering the 1990-1992 period. In this project 1000 firms were surveyed with a questionnaire that contained many of the same topics as the Eurostat surveys. Questions were included on issues such as:

The response rate obtained from this survey was 57%. One hundred thirty of the firms were the subject of intensive follow-up, and for these firms a response rate in excess of 80 percent was achieved.

One of the most interesting results of this survey was the finding that of those firms introducing innovations, 84% were also R&D performers.[36] This is a stark contrast with most of the European studies, which found a very large number of innovating firms that performed no R&D at all.

B. The Yale/CMU surveys

In the 1980s, Levin, et al. conducted a survey designed to elicit information concerning the ability of firms to appropriate the results of their own technology development programs.[37] This survey, which came to be known as the Yale survey, was later adapted and expanded by Wes Cohen, et al. at Carnegie Mellon University.[38] The second survey (hereafter CMU) is the focus of this section. It is significant for a number of reasons. Most importantly, the reporting unit for this survey is the business unit, rather than the enterprise. As a result, significantly more detailed questions could be asked.

The sampling frame for this survey was constructed from the Directory of American Research and Technology[39] as supplemented by Standard and Poor's Compustat database. The sample was thus limited to R&D labs or units within firms that actually conducted R&D. The focus on firms that perform R&D was driven by the fact that the survey principally concerned the R&D function within the firm rather than the broader range of innovation activities; a firm that did not perform R&D would have found little on the survey that pertained to it. The CMU survey did not, however, focus primarily on measuring R&D inputs, but rather looked carefully at research objectives, information sources, and the structure of the environment in which R&D occurred within the firm. As a result, many of the questions are similar to those found on the CIS and MIT questionnaires. In addition, the CMU questionnaire also asked about the competitive environment in which the firm operated and the mechanisms that were used to protect intellectual property rights.

It is useful to focus on those areas of the CMU survey that asked questions that were wholly different from those incorporated on previous surveys. For example, in attempting to pin down characteristics of the R&D environment in firms, questions were asked about how frequently R&D personnel interacted face to face with personnel in the firm's marketing and production units or in other R&D units. This is an example of the type of question that would be impossible to explore at the enterprise level, but certainly makes sense at the establishment level and seems to work at the business unit level as well. Firms were also asked questions about the relationships between R&D and other firm functions, such as whether personnel were rotated across units or whether teams were constructed drawing on various cross-functional units. They were also asked to specify the percentage of R&D projects that were started at the request of another unit within the firm. All of these questions stem from a more complex, non-linear model of innovation within firms.

While a number of other studies ask about the importance of firm interactions with universities and government labs, the CMU questionnaire was able to ask for more information about the nature of these relationships. For example, in each of three categories (research findings, prototypes and new instruments and techniques) it asked the percentage of R&D projects that used research results from universities or government labs. It also presented a series of scientific fields (Biology, Chemistry, Electrical Engineering, etc.) and asked on a four-point scale what the significance of university or government research was to the firm's R&D activities.

The CMU survey also included a section that asked about the relationship between the firm and its competitors. Firms were asked to name the most innovative firms in their industry and to assess their own level of innovation (disaggregated by product and process innovation) relative to other firms in the industry. Then firms were asked to assess the overall rate of product and process innovation in the industry as a whole. Firms were asked questions aimed at assessing how early in the innovation process they became aware of their competitors' innovations and what percentage of their innovation projects have the same technical goals as their competitors'. Finally, firms were asked to estimate the number of competitors they have by region of the world, and how many were able to introduce competing innovations in time to effectively diminish the profitability of the firm's own innovations.

A significant section of the CMU questionnaire is devoted to assessing the firm's ability to capture the returns from innovation using various mechanisms (patents, trade secrets, etc.). First, firms were asked to specify the percentage of their innovations (disaggregated by product innovations and process innovations) that were effectively protected by:

A number of additional questions were asked about the firm's patenting behavior, including the number of patents applied for, the reasons that patent applications are made (to prevent copying by other firms, measure researcher performance, obtain revenue, etc.) and the reasons the firm might specifically decide not to patent a new discovery (information disclosure, cost of patent application, difficulty in demonstrating novelty, etc.). In addition, firms were asked how long it took competitors to introduce similar alternatives both in cases where patents had been obtained and in cases where they had not. This question was asked separately for product and process innovations.

C. Other innovation indicator studies in the United States

There has been a range of other studies in the United States that have attempted to develop innovation indicators. As was noted in the introductory section, the NSF sponsored a number of these projects in the 1970s. In addition, over the past couple of decades the U.S. Small Business Administration has developed a database of U.S. introduced innovations, with a focus on those developed by small businesses.[40] This database assigned each innovation to a 4 digit SIC category and also contained information on the geographical area where the innovating establishment was located and a ranking that reflected the significance of the innovation. The database is limited to product innovations. In 1993, Gelman Research Associates attempted to sample from this database and obtain data about the timing of key events in innovation development, the sources and uses of funds, the markets served, the commercial impacts of innovations, and the innovations' degree of novelty. The study was marred by extremely low response rates and was eventually limited to participation by small firms only.[41]

One other survey that is worthy of note is an R&D survey conducted by the Industrial Research Institute and Center for Innovation Management Studies. In addition to asking about R&D spending, it also asked about the organization of R&D within firms (notably the degree of centralization), the sources of R&D funds and the percentage of sales attributable to new or improved products.[42]

VII. Issues for U.S. Innovation Indicator Development

As the previous section demonstrated, a number of efforts to develop innovation indicator surveys have been undertaken in the United States, sponsored both by governmental and non-governmental agencies. We have learned a considerable amount from this experience. Most importantly, we have learned that useful data can be collected, though achieving acceptable response rates is rather difficult. We have also learned that the United States has some unique characteristics that confront data collectors with a different set of challenges than are faced elsewhere.

Two of these characteristics are especially important. First, the United States has no single central statistical office, so developing linkages between innovation data and other economic data is inherently more difficult. Second, the United States has an extremely rich variety of organizational forms among its firms. Hence, while the Europeans might take the view that it is reasonable to collect data at the enterprise level and then handle any firms for which this presents difficulties as individual special cases, this approach will probably not work in the United States. These two characteristics combine to create an even more difficult problem. If it were possible for the National Science Foundation to determine the optimal statistical unit of analysis for innovation surveys, it could decide that innovation survey data would be collected at that level. However, it cannot determine the statistical unit of analysis for other government agencies that collect data from firms. Thus it must take into account both the theoretically preferable manner in which to collect data and the desire to have the data linked to other data sets when determining the statistical unit.

This section focuses on issues that need to be addressed before a new innovation indicator survey could be mounted in the United States. The topics here include the content of the survey instrument itself, the reporting unit, coverage of service sector firms, and procedures to maximize the response rate.

A. The reporting unit

The Oslo Manual draws a distinction between the reporting unit and the statistical unit. The reporting unit within a firm is the level of the organization that actually receives the questionnaire and is asked to fill it out. The statistical unit within a firm is the level of the organization about which the data are actually collected. These need not be identical. For example, it is possible to ask that the firm as a whole report the percentage of sales from new products for each of its establishments. In this instance the reporting unit would be the enterprise, but the statistical unit would be the establishment. The Oslo Manual suggests, however, that whenever possible the reporting unit and the statistical unit should be the same.

The content of the survey depends in part on the reporting unit selected. Detailed questions, especially concerning various types of innovation expenditure, cannot be collected at the enterprise level because the data simply are not known at that level. On the other hand, questions concerning firm strategy, such as those revolving around the firm's innovation objectives, may be developed at the enterprise level, making it difficult to collect these data at the establishment level. Thus it is important to make a decision about the level at which data will be collected before making final decisions about what data to collect.

While the issue of the reporting unit has often been framed in terms of the alternatives of the enterprise or the establishment, Archibugi, et al. point out that there are really a number of different candidates for the reporting unit:

The legally defined enterprise is a unit which has legal status in a given country. It might have one or several establishments, one or several business units. In several cases, it corresponds to the unit registered for tax purposes. According to this definition, establishments or business units located outside the borders of the nation should not be considered.

The economically defined enterprise is classified according to the ownership or control. It includes all establishments or business units which are owned or controlled by the enterprise, located in the same or in a different country than the enterprise's headquarters. Often, large economically defined enterprises are subdivided even within one country, into several legally defined enterprises.

The business unit is part of the enterprise, although several enterprises are composed by a single business unit. A business unit may have one or more establishments. [note: this unit is intended to be similar to the "line of business" concept in the U.S.]

The establishment is a geographically specific production unit. Several enterprises, especially among those of smaller size, have a single establishment only.[43]

Given the importance of developing national data on innovation, the economically defined enterprise is unlikely to be adopted as the reporting unit. As a result, we will focus on the other three candidates, which will be concisely referred to as the enterprise, the business unit, and the establishment.

Most ongoing innovation indicator studies use the enterprise as the reporting unit (and the statistical unit of analysis). The Oslo Manual specifically makes this recommendation, but adds that diversified firms may be subdivided according to the type of economic activity that they engage in. To date all of the U.S. National Science Foundation innovation indicator data has been collected from enterprises as well. In the first round of CIS surveys, only two countries used something other than the enterprise as the basis for their survey. The second CIS survey instructions clearly indicate a strong preference for using the enterprise as the statistical unit:

The statistical unit for CIS 2 should be the enterprise, as defined in the Council Regulation on statistical units or as defined in the statistical business register. If the enterprise for some exceptional reasons is not feasible as statistical unit other units like divisions of enterprise groups or kind of activity units could be used. These exceptional units should be indicated in the database. Some adjustments for these might be needed in the processing of data.[44]

One key reason for relying on the enterprise as the reporting and statistical unit is pragmatic. Other data (notably R&D expenditures) are collected with the enterprise as the reporting unit. Thus data collectors both have a great deal more experience collecting data from enterprises and have other historical data series that are collected on an enterprise basis.

Policy makers have traditionally wanted firm-level innovation data so that they could link it to other firm-level data sets and so that they could address questions that were inherently firm-level questions, such as the distribution of innovation activities by firm size. It is notable that virtually every innovation indicator study has attempted to collect data and report results disaggregated by firm size. Without firm-level data this is impossible.

If having firm-level data is important, the only practical way to obtain it is to collect data at the enterprise level. This may be observed by considering the methodology that would be required to collect firm-level data at a lower level within the firm. In principle, if the data collected were purely quantitative, it should be possible to collect the data from each of the firm's establishments or business units and then re-aggregate it back to the level of the firm as a whole. This would be possible if either a census were taken of all of the firm's establishments or business units or some method were established for imputing values for the missing components of the firm. Even if a census were used, it is likely that there would be some non-respondents among the firm's establishments, requiring imputation of some missing values in any case. This technique would probably require that each firm be treated as a "special case," so that the analyst has a list of each of the units within the firm and is able to keep track of which units responded and which did not. The analyst would have to be sufficiently well informed about the firm's operations that he or she could intelligently estimate the missing values.
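
A minimal sketch of this re-aggregation procedure for quantitative data follows. The unit names, values, and naive mean imputation are all invented for illustration; a real analyst would condition the imputation on unit size, sector, and similar characteristics.

    # Hypothetical four-unit firm; one business unit did not respond.
    unit_expenditure = {"unit_a": 12.0, "unit_b": 7.5, "unit_c": 4.0,
                        "unit_d": None}  # millions; None = non-respondent

    reported = [v for v in unit_expenditure.values() if v is not None]
    imputed_value = sum(reported) / len(reported)  # naive mean imputation

    # Re-aggregate to the firm level, substituting the imputed value
    # for the missing unit.
    firm_total = sum(v if v is not None else imputed_value
                     for v in unit_expenditure.values())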

In the case of qualitative data, it is likely to be impossible to reconstruct firm data from data provided by the various establishments or business units. For example, consider the following question from CIS-2:

Between 1994-96 has your firm introduced any technologically new or improved processes? If yes, who developed these processes?

Mainly other enterprises or institutes O
Your enterprise and other enterprises or institutes O
Mainly your enterprise O

Suppose a firm has four establishments or business units. Three of them indicate the first response (mainly other enterprises or institutes) and one indicates the third (mainly your enterprise). How should we re-aggregate this data to the firm level? Should we assume that since this work is done both within and outside the firm the appropriate response for the firm as a whole is item 2 (even though no entity has checked it)? Should we conclude that the answer should be the first response because three of the four units checked it? Should we weight the responses by sales or R&D expenditure to come up with an average response?
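
The sketch below makes the dilemma explicit, using invented responses and weights. Three defensible aggregation rules do not agree, and two of them select a response that no unit actually gave:

    from collections import Counter

    # 1 = mainly others, 2 = jointly developed, 3 = mainly your enterprise
    unit_responses = [1, 1, 1, 3]          # hypothetical four-unit firm
    unit_weights = [0.1, 0.1, 0.1, 0.7]    # e.g., shares of firm R&D spending

    majority = Counter(unit_responses).most_common(1)[0][0]           # -> 1
    weighted = round(sum(r * w for r, w in
                         zip(unit_responses, unit_weights)))          # -> 2
    mixed = 2 if len(set(unit_responses)) > 1 else unit_responses[0]  # -> 2

    # Three rules, two different firm-level answers, and response "2"
    # was checked by no unit at all.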

On the other hand, suppose the one establishment that indicated the third choice also contains the firm's central R&D lab. Ought we not to conclude from this that the establishment with the central R&D lab is fundamentally different from the rest of the firm and that no single answer to this question will adequately describe the firm's behavior? This raises a fundamental problem with collecting data at the enterprise level. If it is not possible for the data collector to construct a reasonable answer to the question based on information obtained from the various firm establishments (or business units), this may be because a single reasonable answer for the firm as a whole does not exist.

Another problem with collecting data at the enterprise level is that it makes sector-level analyses rather difficult. Many, if not most, enterprises span more than one industrial sector. The Oslo Manual recommends using International Standard Industrial Classification (ISIC) codes or NACE codes to classify enterprises by sector. The recommended divisions are only to the two digit classification level, so the categories tend to be fairly broad.[45] Even at the two digit level, however, it is extremely difficult to classify even moderately diversified firms. Oslo-2 recommends classification by principal area of economic activity. Thus for a firm in more than one two digit category, all of its activities will be attributable to its principal category. This creates problems at the two digit level, but classification of enterprises at any finer level of stratification than two digits is virtually impossible.
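
A sketch of this principal-activity rule, with invented two-digit codes and sales shares, shows how much activity the rule can misattribute:

    # Share of enterprise sales by two-digit NACE division (hypothetical).
    sales_share_by_nace = {"24": 0.40, "29": 0.35, "72": 0.25}

    # Oslo-2 style classification: attribute the whole enterprise to the
    # division with the largest share of activity.
    principal = max(sales_share_by_nace, key=sales_share_by_nace.get)  # "24"

    # The entire enterprise is attributed to division "24" even though
    # 60 percent of its activity lies elsewhere; classification at any
    # finer level than two digits only magnifies the distortion.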

If we come to the conclusion that the only practical way to collect data about the enterprise as a whole is to survey at the enterprise level rather than the establishment or business unit level, it has a substantial effect on the type of data that can be collected, for two reasons. First, enterprises know less about the activities going on in the business units than the units themselves do, so they are less able to answer detailed questions (especially in areas such as innovation costs) than are business units or establishments. Thus, surveys of enterprises rely heavily on either qualitative data or on rough estimates of quantitative data. Second, asking for qualitative data at the enterprise level does not eliminate the aggregation problem described above; it merely causes it to be dealt with by the firm itself rather than by the data collector. It is still necessary for someone to look at the various behaviors of the business units within the firm and make a judgment about what data should be reported for the firm as a whole. While it is arguable that individuals inside the firm are in a better position to make judgments about how to aggregate qualitative data from disparate business units, this does not mean that it will be possible to report meaningful summary data in situations where no meaningful summary data actually exist.

If it were possible to do without data aggregated on an enterprise-wide basis, it would be possible to collect the data on either the establishment or business unit basis. An establishment represents an entity that is limited to a single geographic area. As a result, respondents at this level generally tend to have more detailed information available than do respondents at the enterprise level. Another advantage of collecting at the establishment level is that it is the only mechanism that will permit analysis of data disaggregated by geographic region. Neither enterprise-level data nor business unit data permit tracking the geographic location of innovation activities.

There are a number of problems with the establishment approach, however. First, there is a much larger population of establishments than of firms. This would represent a very significant increase in cost for those surveys that attempt to conduct a census rather than select a sample. Even for those researchers who only wish to survey a sample of establishments, significant problems will arise in identifying the population from which the sample is to be drawn.

There are particular problems associated with achieving high response rates when surveying establishments. At the enterprise level, there is generally someone whose job it is to be concerned with innovation within the firm. This person may have a title such as Vice President for Research and Development, Chief Technical Officer, or Director of Technology. This person is likely to have at least some sympathy for the goals of the innovation indicator data collection project and some interest in the underlying concepts. In doing surveys of this type we have found that many of these individuals have a great deal of enthusiasm for the innovation indicators project and have launched their own ongoing internal innovation data collection efforts. It is far less likely that a similar individual will exist at the establishment level. At the establishment level, potential respondents are more likely to find the survey purely an inconvenience that interferes with the flow of their work. Previous studies have found that one thing that contributes to increasing response rates is that the survey be addressed to an individual within the firm by name. This requires identifying the person within the firm who is the most appropriate individual to fill out the questionnaire. Because establishments generally do not have offices or individuals who are specifically responsible for innovation, respondent identification will be substantially more difficult than it is for enterprises.

At the establishment level response rates may also be hampered if potential respondents do not believe that they have the authority to complete and return the questionnaire. In these cases respondents may forward the questionnaire back to the enterprise level rather than completing it themselves.

The third alternative is to collect data at the level of the business unit. A business unit consists of all establishments within an enterprise that are in the same line of business. While the activities of individual business units may span multiple NACE code categories, all of the activities of a business unit would be attributed to its principal NACE category. Because the activities of business units are substantially more homogeneous than the activities of enterprises, this problem is significantly less serious than in the case of enterprises. As a result, line of business reporting can generally be successfully achieved at a more disaggregated sectoral level than enterprise-based reporting.

There are a number of advantages to this approach. To the extent that it is desirable to analyze innovation on a sector basis, the business unit approach provides data that will most clearly facilitate this analysis. Companies themselves often view business units as natural divisions for record keeping and strategic planning, so it would be easier for them to provide data at this level. However, it is worth noting that there is no particular reason that companies would view the boundaries between business units as being the same as those that were called for by the various standard industrial classification systems.

In addition, since the data are designed to summarize firm behavior, it makes sense to collect this data at a level where the data within each reporting unit is relatively homogeneous and the differences between reporting units are greatest. Because the line of business often dictates the type of technology developed and used and the way it is applied, these categories occur most naturally when the statistical unit is based on business units.

Some of the problems identified in conjunction with collecting data at the establishment level also exist in the case of business unit reporting. Obtaining a population of business units from which to sample (or to conduct a census) is likely to be even more difficult than obtaining a list of establishments. This is because establishments at least have a relatively unambiguous identifying characteristic (a distinct geographical address) whereas the identifying characteristic of business units is more amorphous. Identifying the appropriate individual within the company to respond to the survey will also be more difficult, but since lines of business are generally a higher level of aggregation within a firm than establishments, there is a better chance that someone is specifically responsible for innovation.

The degree of difficulty posed by these considerations depends on how the firm is organized. If firms are already organized along business unit lines (for example, with divisions that correspond to NACE business units) then locating someone to provide the data and obtaining the data will be relatively easy. If, however, the firm is not internally divided along business unit lines, simply trying to explain to a potential respondent (who may never have heard of SIC, ISIC or NACE codes) what data is being requested will pose a daunting task. It might be useful to discuss this issue in some detail with representatives of the Federal Trade Commission who attempted to collect data along business unit lines in the 1980s. Their perspective on the level of difficulty associated with requesting data from firms might provide some guidance as to whether it is reasonable to expect acceptable response rates if data is collected in this fashion.

The focus up until now on collecting data from enterprises is based on the view that innovation is an activity that is firm-centric. That is, information flows and new product and process development are activities that occur mostly within firms. As we have begun to understand the degree to which linkages with customers, suppliers and others are important to the innovation process, these linkages have been dealt with as exceptions... important exceptions, it is true, but exceptions nonetheless. Thus it was considered reasonable to argue that collecting data on an establishment level was problematic because central R&D labs, which would be treated as separate establishments, report R&D but no sales. However, if the R&D that underlay a new product innovation was conducted in a completely different firm (either because of an SRP or some other arrangement), this was for some reason not viewed as grounds for abandoning enterprise-based data collection. The situation is made worse because the reporting unit has generally been the legally defined enterprise, not the economically defined enterprise. Thus R&D that is performed within the firm, but in a subsidiary that is in a different country, is not counted either.

Recent research results from Statistics Canada cause one to wonder if the problem isn't even more serious. In a recent data collection effort on the construction industry in Canada, researchers found that the very concept of a "firm" was beginning to disappear. On some construction projects "firms" as we think of them have no persistence. The firm is essentially a joint venture of contractors (not working as subcontractors for a general contractor) which come together to form a "firm" for the life of a single construction project. It is argued that this results in economies in the design process and also reduces litigation costs if something goes wrong.

Similar behavior can be seen in other industries as well. Engineering expertise is being contracted out by firms on a project by project basis. In some cases these relationships are with engineering consulting firms, while in others independent contractors are hired. Some of these relationships will persist for long periods of time while others will relate to just one project. The research capacity of firms using this technique is thus extremely fluid. Perhaps most interesting is the fact that the firms that are consumers of these engineering services are often firms that have almost no internal development capacity of their own. They may regularly introduce new products or new production processes, but have done essentially no development themselves. This model has been observed for quite some time in computer software, where firms with no in-house software development capability would hire outside consultants or firms to create custom software packages, and in the process substantially alter their production processes. However, the approach has been picked up in a number of other industries, and is now quite common, for example, in the development of custom embedded microprocessor applications and even such traditionally less technological industries as toys.

B. The composition of the questionnaire

A great deal of time has been spent over the past two decades on the development of specific questions that might be included on innovation indicator surveys. It is important to design these surveys so that the results will be comparable with previous surveys and with surveys conducted in other countries. Thus it is useful to begin by considering questions that have been included on previous surveys both in the United States and elsewhere. However, since the field is not likely to stand still, it is also important not to ignore ongoing theoretical developments that may result in productive new areas of inquiry.

CIS-2 asks for innovation data in six basic areas: the scope and importance of innovation activities, the resources devoted to innovation activities, the objectives of innovation, the sources of information, cooperative innovation ventures, and factors hampering innovation.

Questions on the scope and importance of innovation activities ask whether the firm is involved in the introduction of new products and processes and the extent to which these activities have contributed to firm sales. These questions have now been tested in quite a number of countries over a substantial period of time. All indications are that firms are able to answer them and that the data produced are reliable.[46]

The remaining issue with regard to these questions is one of scope. Arundel, et al. argue that questions of this type should take care to ask about the development of new technological products and processes (TPPs) as well as their introduction on the market.[47] Some surveys have asked firms to differentiate between sales attributable to new products and sales attributable to improved products. In the United States, when these data were collected at the enterprise level, most firms answered with "educated guesses" rather than calculations based on the firm's financial records.[48] In this case the question of whether to ask for a further subdivision is partly psychological. Firms are often reluctant to provide responses when there is no hard data to support the answers given. The more questions that are included that firms feel uncomfortable answering, the more likely they are to not respond to the survey at all. Asking for more detailed breakdowns of items where the respondent has little confidence in the accuracy of the aggregate estimate may therefore result in a lower response rate.

Questions on the resources devoted to innovation have caused significant problems for most studies in which they have been included. Oslo-2 concedes that "Not many enterprises keep separate records of other [non-R&D] TPP innovation expenditures," but nevertheless concludes that "experience has shown that it is quite possible for them to give acceptable estimates of the non-R&D portion."[49] Later though, the Manual notes that most studies that have attempted to collect this data have found that firms simply don't have it.[50]

Some work has been done to assess the validity of this indicator. For example, comparisons of data collected in the U.S. in 1982 and 1985 found large, unexplainable differences in the responses to this question. They also found that the percentage of total innovation expense accounted for by R&D was much higher than was indicated by previous studies. For example, in 1985, firms reported that on average their expenditures for new plant and equipment related to the introduction of new products were only twice as high as their expenditures on R&D. Just three years earlier, an admittedly smaller sample of firms reported that they were 21 times higher.[51] Other studies have reported similar anomalies. For example, a survey conducted in the Nordic countries, sponsored by the Nordic Industrial Fund, found that R&D accounted on average for more than two thirds of all innovation expenditures in Norway.[52]

This is not to say that this question has never worked. In a series of annual studies of innovation expenditures in Germany, Lothar Scholz found that these data could be collected in a meaningful way. However, doing so required a substantial amount of close work with the companies involved in the survey. When the survey first began, response rates were rather poor. However, as the survey continued over time and firms themselves began to see the value in it, response rates improved, as did the apparent quality of the data. The firms believed the survey had value because, as participants, they received a sector report that summarized the collected data for their specific industry. Scholz believed that firms became more skilled at preparing these estimates as they became more experienced with them. He also suggested that experienced firms used a procedure of estimating the change from the preceding survey rather than constructing a wholly new estimate for each year's survey.

Questions on the firm's objectives for innovation, sources of information, and cooperative arrangements with others are relatively easily answered. As discussed above, it is sometimes difficult to know how to interpret the answers to these questions when the response is from an enterprise with many disparate business units. With regard to all survey questions that ask simply whether a firm has a particular activity, relationship or goal, the larger and more diversified the enterprise, the more likely it is to answer "yes." Diversified firms simply do more different kinds of things than smaller, less diversified firms. If the activity, relationship or goal exists in any of the diversified firm's various units, the answer to the question for the enterprise as a whole will be in the affirmative. However, the total amount of innovation produced by a large firm that does a wide range of things is not necessarily more than the innovation produced by a group of small firms which, taken together, would have the same range of activities.
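
The point can be illustrated with a stylized calculation. If, purely for illustration, each unit of an enterprise independently engages in a given activity with some fixed probability, the chance that the enterprise as a whole answers "yes" rises rapidly with the number of units:

    # Assumes (unrealistically) independent units, each with probability p
    # of engaging in the activity; the enterprise answers "yes" if any does.
    def prob_enterprise_yes(p: float, n_units: int) -> float:
        return 1 - (1 - p) ** n_units

    for n in (1, 5, 20):
        print(n, round(prob_enterprise_yes(0.2, n), 2))
    # 1 0.2, 5 0.67, 20 0.99 -- for large diversified firms, the yes/no
    # indicator mostly measures firm scope rather than innovativeness.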

In some areas, CIS-2 asks firms to specify whether each item is not relevant, slightly important, moderately important, or very important. In a highly diversified firm, if an item is critically important, but only to a single business, it is unclear whether the firm will answer very important, because the item is critical to the one unit, or some lower level of importance, because it affects only one unit. Firms are offered no guidance on this issue on the survey itself.

One additional difficulty that has come up with regard to the objectives of innovation question is that unless the question is very carefully worded, firms may answer from the perspective of whether each of the goals on the list is an objective of the firm's competitive strategy in general, rather than a goal of the firm's innovation strategy.

The final area on CIS-2 concerned factors that hamper innovation. Questions of this type have appeared on a large number of surveys over the years, framed either as factors that hamper innovation or as obstacles to innovation. The importance of this subject stems from a desire on the part of policy makers to promote innovation in the economy. Policy makers' concerns over the level of innovation stem from two sources. First, early economic studies pointed out both theoretically and empirically that there is a divergence between the private and social returns to investment in innovation. As an innovation becomes diffused through the economy, the firm that introduced it will only be able to capture a portion of the benefits that accrue from that innovation. As a result, the incentive to develop innovations in the first place is less than it would be if firms could capture all of the benefits that they produce.

Second, the government necessarily has a role in the innovation process. For example it determines the rules and regulations surrounding firms' use of patents and technology licensing. It finances a significant amount of research either directly through grants and contracts or indirectly through its purchases of goods and services that have new technologies embedded in them. It also establishes environmental (and other) regulations that affect technological development. As a result, it is concerned about the degree to which these policies promote or hamper innovation in private firms.

While recognizing that assessing the degree to which firm innovation is hampered by various factors is important, it may not be that the best way to do this is to ask firms directly. There are a number of reasons for this. First, it is not clear that firms (or anyone else for that matter) can usefully disaggregate hampering factors that are inherently intertwined. For example, CIS-2 asks firms whether they are hampered by "excessive perceived economic risks," by "innovation costs [being] too high," or by a "lack of appropriate sources of finance." The decision to invest in new product or process development stems from an analysis (albeit sometimes an informal one) of the likely return on the investment, adjusted for the perceived risk, and the cost of the investment. Lower risks or higher returns will justify innovation investments with higher costs. It is difficult to see how a firm could look at these three factors one at a time rather than considering them as a group.
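
A stylized investment calculation, with all numbers invented, shows why. The decision turns on risk, cost, and financing jointly, so a firm that declines a project cannot meaningfully report which single factor "hampered" it:

    expected_return = 10.0   # anticipated payoff from the innovation
    risk_premium = 0.30      # stands in for "perceived economic risks"
    cost_of_capital = 0.10   # stands in for access to "sources of finance"
    project_cost = 7.0       # stands in for "innovation costs"

    # Invest only if the risk- and finance-adjusted return covers the cost.
    invest = expected_return / (1 + risk_premium + cost_of_capital) > project_cost

    # Changing any one of the three inputs can flip the decision, so the
    # three CIS-2 hampering factors are not separately observable.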

Even when it is possible to disentangle the various hampering factors, it is not clear that firms actually know the answer to this question. We can find out from a survey how important they perceive these factors to be (or at least what they report this importance to be), but it is quite possible that one of the most significant factors hampering innovation is that firms do not have a good understanding of what obstacles they actually face. It is also possible that on a government questionnaire asking whether government regulations or standards hamper innovation, firms may view the survey as an opportunity to alter government policies in this area.

Finally, there is a substantial bias built into most of the questions of this type. The words used in the question are almost always pejorative. Firms are asked if they are "hampered" by "obstacles" or "barriers." They aren't asked if they are restrained from making unwise and unprofitable investments in products or processes that have little market potential.

Aside from those questions specifically included on the CIS-2 survey, there are some areas where it might be useful to consider making additions. One area that deserves consideration is the collection of data that will help trace the relationships that are part of an innovation production/diffusion network. The "sources of information" question is designed to move in that direction, but it collects data concerning only one kind of interaction (information exchange) and looks only at very broad categories of firms (customers, suppliers, competitors).

Another approach is that taken by Canada and a few other countries, where firms are asked to identify specifically their most important innovation. Follow up questions can then be asked about other firms that were involved in either the development or diffusion of this innovation. This provides much more detailed information about the inter-relationships of various firms' innovation activities.

A key problem with this approach is that it generally asks about only one innovation. The firm's "most important" innovation may not be a typical innovation. It may stand out in the mind of a respondent precisely because it was so unusual. On the other hand, it would be rather difficult to ask firms to name a "typical" innovation, since these are likely to be relatively routine and unmemorable. In any case, this may be the type of question that is best addressed at the establishment or business unit level, since large, diversified firms are likely to have trouble answering it at all.

An alternative approach has been at least partially explored in the CMU study. Instead of only asking about the importance of sources of information by various categories of firms, the CMU study disaggregates the sources of information question by type of technology (at least when asking about university or government contributions). For example, it asked whether university or government research yielded significant results to the firm in the area of biology, or physics, or chemistry. It is possible to envision extending this to the questions about sources of information from customers and suppliers as well, asking firms to specify the industries that had some relationship to their innovation efforts. This might facilitate identifying the clusters of firm-types that are responsible for innovation.

The Oslo Manual offers one other suggestion along these lines. It proposes "asking firms to indicate the proportion of sales due to technologically new or improved products by the sector of main economic activity of their main client(s) for those technological product innovations."[53] Particularly at the enterprise level, this type of data may be difficult or impossible to obtain.

While it is mentioned in the current Oslo Manual, the latest CIS questionnaire does not ask for any information about the mechanisms a firm might use to appropriate the benefits of its technology developments. At a minimum, it may be worth considering whether a question or two about the relative importance of various forms of intellectual property protection ought to be included. Such information is, of course, useful on its own, particularly since the legal environment created by government policies has a significant impact on firms' strategic decisions with regard to protecting intellectual property. Moreover, since patents themselves have often been the subject of data collection efforts as intermediate outputs of the innovation process, understanding how firms view the importance of patents relative to other forms of protection is critical to interpreting the patent data itself.

A range of questions has also been asked on surveys about the structure of R&D within firms. These include such items as the percentage of R&D that is spent in a central research facility as opposed to production divisions, whether R&D conducted in a central facility is financed from outside the facility (either from production division budgets or from outside the firm), and the amount of time that R&D personnel spend on a range of activities including meeting with people from marketing or production, attending conferences, receiving additional education, etc. Other questions of this type included on many surveys concern the degree of contracting out of R&D or participation in joint R&D relationships with universities, government laboratories, or other firms. Many of the concerns cited by the recent NRC report on industrial innovation in the U.S. revolve around issues of the structure of R&D within firms.[54] These include questions concerning the alleged "hollowing out" of firms' research capabilities. Questions of this type could be structured to gather information about these concerns.

One concern raised by a number of analysts is that qualitative questions are often not anchored to any reference point shared among firms.[55] Firms are asked, for example, whether an innovation objective is slightly important, important, or very important. As responses are collected, it is reasonable to think that various respondents will have very different ideas of what "important" means. Thus two identical firms with identical sets of objectives might provide different answers, one rating an objective "important" while the other rates it "very important." One way around this is to ask firms to identify the most important factors, rather than evaluating the importance of each one separately, as sketched below.
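To make that concrete, the following is a minimal sketch in Python of the ranking approach. All firms, objectives, and ratings here are hypothetical, not drawn from any actual survey; the point is that two firms whose ratings differ only by a constant shift in scale interpretation give identical answers once only each firm's internal ordering is kept.

    # Hypothetical responses: each firm rates innovation objectives on a
    # 1-4 importance scale, but firms interpret the scale anchors differently.
    responses = {
        "firm_a": {"cost_reduction": 3, "new_markets": 4, "quality": 2},
        "firm_b": {"cost_reduction": 2, "new_markets": 3, "quality": 1},
    }

    def top_factors(ratings, k=2):
        """Keep only each firm's k highest-rated objectives, discarding
        the absolute scale values that lack a shared anchor."""
        ranked = sorted(ratings, key=ratings.get, reverse=True)
        return ranked[:k]

    for firm, ratings in responses.items():
        print(firm, top_factors(ratings))
    # Both firms yield ['new_markets', 'cost_reduction'], even though
    # firm_b's absolute ratings are uniformly one step lower.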

C. Sector coverage

The service sector of the economy continues to grow relative to manufacturing and now accounts for well over half of all employment. Until now, however, innovation indicators in the United States have focused exclusively on manufacturing. The reason for this was partly pragmatic: it was deemed more difficult to collect meaningful data from the service sector. It was also partly policy driven. Evangelista, et al., point out that innovation policy is almost exclusively directed toward the manufacturing and university sectors,[56] so the need for innovation data for policy purposes was limited to those sectors. However, not only has the service sector become a large portion of our economy, it is also a major contributor to technological innovation. In OECD countries, the service sector accounted for nearly a quarter of all business R&D in 1991.[57] As the importance of the service sector grows, it is difficult to imagine that the collection of innovation indicator data could be limited to the industrial sector for much longer.

If the decision is made to include service sector firms in innovation indicator data collection projects, it is reasonable to ask what, if anything, about these firms requires that they be treated differently from manufacturing enterprises. While there are a number of distinctions, the key element cited in most studies is that in the service sector production and consumption occur simultaneously:[58] there is no tangible product that can be stored or inventoried. From an indicators standpoint, this leads to a general concern about whether product and process innovations can be treated separately, since the process by which a service is produced is generally also the product. An example of this problem can be found in the introduction of the automatic teller machine (ATM) in the banking industry. The ATM is a production process because it is the mechanism by which banking services are delivered to consumers. Consumers, however, view the ATM as the product. A clear distinction here is probably impossible.

Eurostat approached this problem by sponsoring a series of pilot studies of service industry innovation. Initially, twenty interviews were conducted in Germany and the Netherlands (ten in each country) to determine whether the definitions in the Oslo Manual would have to be changed to accommodate the service sector. Note the underlying assumption: the questionnaire would remain largely the same for the service sector as for the manufacturing industries, but some changes might be required in the definitions of "new products," "new processes," and so on.

A number of significant changes in the definitions were recommended as a result of the pretest. Most importantly, separate definitions for product and process innovation were dropped. The final version of CIS-2 makes clear that both types of innovation are to be included, but it does not ask firms to attempt to separate them:

A new or improved service is considered to be a technological innovation when its characteristics and ways of use are either completely new or significantly improved qualitatively or in terms of performance and technologies used. The adoption of a production or delivery method which is characterized by significantly improved performance is also a technological innovation. Such adoption may involve change of equipment, organization of production, or both, and may be intended to produce or deliver new or significantly improved services which cannot be produced or delivered using existing production methods, or to improve the production or delivery efficiency of existing services.

The introduction of a new or significantly improved service or production or delivery method can require the use of radically new technologies or a new combination of existing technologies or new knowledge. The technologies involved are often embedded in new or improved machinery, equipment, or software. The new knowledge involved could be the result of research, acquisition or utilization of specific skills and competencies.[59]

Another early effort to develop innovation indicators for the service sector was undertaken in 1995 in Italy. The survey was limited to in-person interviews with nine companies. Here the researchers were careful to focus on innovation that was "technological" rather than also allowing for innovation based on "new knowledge," and they preserved the definitional distinction between product and process innovation. The Italians found that firms initially exhibited a great deal of confusion about whether innovations were product or process innovations, but because the research was based on in-person interviews, the interviewers were able to explore means of clarifying the distinction. They found that if they explained that a process innovation was aimed at increasing the overall efficiency of the firm while a product innovation involved the introduction of a new or improved service, firms were able to distinguish between them.[60]

Another major change that came about as a result of the pretest was that questions attempting to assess the significance of innovations by asking about their contribution to sales were dropped. The reason is that firms have trouble identifying the sales that result from a product addition or change. In these industries, services are often bundled together and sold as a package, and this sales method is often dictated by the product itself. Returning to the ATM example, the services provided by these machines are most often packaged with a range of other bank account services. It might be possible to calculate the amount paid by consumers (in fees and foregone interest) for the services associated with a particular type of account (though even this is questionable), but it is impossible to isolate the component of the fee that is related to ATM services.

The inability to develop data for the new product sales indicator is disappointing because in manufacturing, firms have generally been able to provide this data. It is perhaps the only quantitative measure we have that provides information on diffusion. Its value as an indicator is demonstrated in part by the quantity and range of surveys on which it has been used.
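In manufacturing, where the underlying figures are obtainable, the indicator itself is a simple ratio. Below is a minimal sketch of the computation in Python, using invented product-line data; the three-year "new" window mirrors the reference period CIS-style questionnaires typically use, and all names and figures are hypothetical.

    # Hypothetical product-line sales for one manufacturing firm. "new"
    # marks products introduced or significantly improved within the
    # survey's reference period (typically the last three years).
    product_sales = [
        {"line": "widget_v3", "sales": 4.0e6, "new": True},
        {"line": "widget_v1", "sales": 9.0e6, "new": False},
        {"line": "gadget",    "sales": 2.0e6, "new": True},
    ]

    total = sum(p["sales"] for p in product_sales)
    new = sum(p["sales"] for p in product_sales if p["new"])
    share_new = new / total  # the "sales from new products" indicator
    print(f"Share of sales from new products: {share_new:.1%}")  # 40.0%

As the text notes, the computation presupposes that sales can be attributed to individual products, which is precisely what bundled services prevent.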

As yet, few efforts have been made to develop any new indicators of innovation in the service sector that do not have counterparts in the manufacturing sector. It may be that there simply are none. However, when the current crop of indicators was developed, the researchers who developed them clearly had manufacturing in mind. Had they focused on the service sector instead, it is not clear that this same group of indicators would have emerged. As a result, it might be worth considering devoting some resources to taking a fresh look at the service sector from this perspective.

One other item is worth mentioning. None of the work that has been done to assess the feasibility of applying these indicators to the service sector has been performed in the healthcare industry. In fact, this sector is not mentioned in the classification list of service sector enterprises in the Oslo Manual, nor was it treated in the Canadian service sector survey. The reason is that in these countries the healthcare industry is generally viewed as part of the public sector of the economy rather than the private sector. This raises two interesting issues. First, should health care be included in a U.S. survey of innovation in the service sector? Second, should public sector service providers in the U.S. (the U.S. Postal Service, for example, or public universities as education providers, not R&D providers) be included? As long as innovation data collection was related solely to manufacturing, this issue did not arise, since there is very little public sector manufacturing. As the focus shifts to the service sector, however, it must be addressed.

D. Response rate maximization

A key concern for future U.S. innovation indicator data collection projects is achieving a high response rate. To date, voluntary surveys on innovation activities as detailed as those contemplated by Oslo-2 or CIS-2 have rarely achieved response rates in excess of 60 percent, both in the United States and in European countries. Previous NSF-sponsored innovation indicator studies have achieved response rates ranging from below 30 percent to almost 60 percent. Surveying at the establishment level (as in the Yale/CMU studies) resulted in similar response rates; the initial Yale study obtained a response rate of just over 40 percent. All of these rates are significantly lower than NSF and other government agencies are accustomed to obtaining on other surveys.

Response rates do matter. If the characteristics of non-responders are fundamentally different from those of responders, substantial doubt is cast on the quality of the information obtained by the survey. In recognition of this, many innovation surveys have been accompanied by studies comparing the characteristics of those who responded to the survey with those who did not. In many instances, this consisted of collecting publicly available data about responders and non-responders to determine whether there were any systematic differences between them. For example, Hansen, et al. analyzed the size and industry classification of respondents and non-respondents to an NSF-sponsored survey of 600 companies in 1982. They found a higher response rate from larger firms than smaller firms, especially in mature industries such as food, primary metals, paper, and stone, glass, clay, and concrete.[61]
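A comparison of this kind can be formalized as a standard test of independence between response status and firm characteristics. The sketch below uses entirely hypothetical counts by firm-size class (not the figures from the Hansen study) and the scipy library:

    import numpy as np
    from scipy.stats import chi2_contingency

    # Hypothetical counts: rows are firm-size classes, columns are
    # (responded, did not respond). In practice these counts would be
    # assembled from public sources such as directories or registers.
    table = np.array([
        [120, 180],  # small firms
        [ 90,  90],  # medium firms
        [ 80,  40],  # large firms
    ])

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.4f}")
    # A small p-value indicates that response propensity varies with
    # firm size, i.e., respondents are not representative of the frame.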

CIS-2 requires any country whose response rate to the initial survey falls below 70 percent to conduct a follow-up non-respondent analysis. This analysis goes beyond simply collecting data from public sources and instead attempts to gather information from a sample of the non-respondents themselves. The goal of this second round of surveying, a 100 percent response rate from the sample of non-respondents, may be ambitious given that the sampling frame is a group of firms that have previously declined to participate. Preliminary indications are that the level of innovation among non-responders is actually higher than among those who responded to the survey. These results would seem to be consistent with those of the Hansen study.
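Where such a follow-up sample exists, its results can be folded back into the survey estimate with a simple two-phase weighting, in which the follow-up sample stands in for all initial non-respondents. A sketch under wholly invented numbers (not actual CIS results):

    # Hypothetical frame: N firms, of which n_resp answered the initial
    # survey and a sample of the non-respondents was re-surveyed.
    N = 1000
    n_resp = 550          # initial respondents
    p_resp = 0.45         # innovation rate among initial respondents
    p_followup = 0.60     # innovation rate in the non-respondent sample

    # Weight each group by its share of the frame.
    n_nonresp = N - n_resp
    p_adjusted = (n_resp * p_resp + n_nonresp * p_followup) / N
    print(f"Unadjusted: {p_resp:.1%}, adjusted: {p_adjusted:.1%}")
    # Unadjusted: 45.0%, adjusted: 51.8%. If non-responders innovate
    # more, as the preliminary CIS-2 evidence suggests, the unadjusted
    # estimate understates the true level of innovation.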

There are a number of things that can be done to maximize the response rate. Questionnaire length and organization are key considerations, since excessively long questionnaires that are difficult to follow are more likely to be discarded. Some argue that the more difficult questions should be reserved for the end of the questionnaire: once respondents have invested time in answering the easier questions, they are more likely to continue to the end, whereas difficult questions up front result in the survey being discarded before the respondent has invested any time in it.

Follow-up is also crucial. Initial surveying is unlikely to produce a response rate higher than 25 percent; follow-up by telephone can easily double this. The 1994 NSF-sponsored study of 1,000 firms clearly demonstrates how effective this type of follow-up can be when pursued aggressively. The survey team selected 130 firms that had not responded to the survey for intensive follow-up, and ultimately it obtained responses from 80 percent of these firms.[62]
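The arithmetic behind "easily double" is straightforward; a brief illustration with hypothetical rates:

    # Hypothetical illustration of how telephone follow-up compounds
    # the overall response rate.
    initial_rate = 0.25          # mail-only response rate
    followup_conversion = 0.35   # share of non-responders converted by phone

    overall = initial_rate + (1 - initial_rate) * followup_conversion
    print(f"Overall response rate: {overall:.0%}")  # 51%, roughly double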

One factor that is often neglected is the tendency for response rates to rise over time when a survey is repeated on a regular basis. Lothar Scholz found that this was the case even when the survey was intended to collect data on innovation expenditures by category, a subject that is among the most difficult for firms. There are a number of reasons for this. First, response rates will be higher if it is possible to determine, before the survey is mailed, the most appropriate individual within the firm to receive it. While this can be done for a one-time survey, it is difficult and expensive. When a survey is repeated, however, a database of previous respondents exists that can be drawn upon to target appropriate individuals.

In addition, respondents have an easier time completing a survey if they have previously answered similar questions; less time is required for reading definitions and developing an understanding of the survey's basic concepts. Response rates also benefit substantially if the agency collecting the data has been careful to publish summary data from the previous round of surveys in a form useful to respondents (notably, disaggregated by industrial sector) and has made certain that respondents have ready access to those summaries. This is one factor Scholz cites as essential to the relatively high response rates he obtained in Germany.

Finally, repeated surveys containing identical data requests may affect firms' views about what data are important to collect in assessing their own level of innovation. Previous work in the United States found that many firms were searching for a metric of innovation within their own organizations, and some adopted questions from the U.S. survey for ongoing internal use. If a survey is repeated over time, and the results are regularly published, firms may come to collect this data for their own purposes.

These factors lead to a general conclusion: in beginning innovation indicator data collection efforts, it is important not to be too discouraged by response rates somewhat below those obtained in studies that are conducted on a regular basis.[63]



Footnotes

[1] Smith, Adam, The Wealth of Nations, (New York: The Modern Library, 1937, 1965) p. 9.

[2] Kline, S.J., and Rosenberg, N., "An Overview of Innovation," in Landau, R. and Rosenberg, N. (eds.) The Positive Sum Strategy: Harnessing Technology for Economic Growth (Washington: National Academy Press, 1986).

[3] DeBresson, Christian, Economic Interdependence and Innovation: An Input-Output Analysis (London: Edward Elgar, 1996).

[4] For a more detailed discussion of surveys conducted during this period see: Hansen, J., "Innovation Indicators: Summary of an International Survey." OECD Workshop on Innovation Statistics (OECD/DSTI/IP/86.8), 1986. In addition to those surveys discussed above, early surveys were also conducted in France, the Netherlands, and Canada.

[5]  OECD, OSLO Manual: Proposed Guidelines for Collecting and Interpreting Technological Innovation Data (Paris: OECD, 1997).

[6] In Britain: Townsend, J. et al., "Science Innovations in Britain since 1945." SPRU Occasional Paper Series No. 16 (Brighton: SPRU, 1981). Pavitt, Keith, "Characteristics of Innovation Activities in British Industry." OMEGA, 1983, vol. 11, no. 2, pp. 113-130. Pavitt, K., M. Robson, and J. Townsend, "The Size Distribution of Innovating Firms in the U.K. 1945-1983." (Brighton: SPRU, 1985).
    In the United States: Gellman Research Associates, "Indicators of International Trends in Technological Innovation." (Gellman Research Associates, 1976).
    In Canada: DeBresson, Chris and Brent Murray, "Innovation in Canada. A Retrospective Survey: 1945-1978." (New Westminster, B.C.: Cooperative Research Unit on Science and Technology, 1984).

[7] Scholz, Lothar and Heinz Schmalholz, "IFO - Innovation Survey. Efforts to Inform Decision-Makers of Innovation Activities in the Federal Republic of Germany." Paper prepared for the OECD Workshop on Patent and Innovation Statistics, June, 1982.
    Schmalholz, Heinz and Lothar Scholz, "Innovation in der Industrie: Struktur und Entwicklung der Innovationsaktivitäten, 1979-1982." Ifo-Studien zur Industriewirtschaft, 28.

[8] Avveduto, S. and Sirilli, G., "The Survey on Technological Innovation in Italian Manufacturing Industry: Problems and Perspectives." OECD Workshop on Innovation Statistics, 1986. Archibugi, D., Cesaratto, S. and Sirilli, G., "Sources of Innovation Activities and Industrial Organization in Italy", Research Policy, 20, 1991, pp. 299-314.

[9] Hill, C.T., Hansen, J.A., and Maxwell, J.H., "Assessing the Feasibility of New Science and Technology Indicators." (Cambridge, MA: MIT Center for Policy Alternatives, 1982) CPA 82-4. Hansen, J.A., Stein, J.I., and Moore, T.S., "Industrial Innovation in the United States: A Survey of Six Hundred Companies." (Boston: BU Center for Technology and Policy, 1984) Report 84-1.

[10] OECD, The Measurement of Scientific and Technical Activities: Frascati Manual. (Paris, OECD, 1980).

[11] Nordic Industrial Fund, Innovation Activities in the Nordic Countries (Oslo: Nordic Industrial Fund, 1991).

[12] Smith, Keith, "The Nordic Innovation Indicators Project: Issues for Innovation Analysis and Technology Policy." (Oslo: Gruppen for Ressursstudier, April, 1989).

[13] Including Sirilli from Italy, Scholz from Germany, DeBresson from Canada, Hansen from the United States, Mikael Akerblom from Finland, Alfred Kleinknecht from the Netherlands, Pari Patel from Britain, and Andre Piatier from France, as well as representatives from the Nordic Countries and the OECD.

[14] OECD, OECD Proposed Guidelines for Collecting and Interpreting Technological Innovation Data: Oslo Manual (Paris: OECD, 1992).

[15] Evangelista, R., Sandven, T., Sirilli, G. and Smith, K. "Measuring the Cost of Innovation in European Industry" presented at the International Conference on Innovation Measurement and Policies, May, 1996.

[16] Arundel, Anthony, with Keith Smith, Pari Patel, and Giorgio Sirilli. "The Future of Innovation Measurement in Europe: Concepts, Problems and Practical Directions." IDEA Paper Number 3, The STEP Group, 1998. All of the IDEA papers referenced in this report can be conveniently downloaded from: http://www.step.no/Projectarea/IDEA/papers.htm

[17] OECD, OSLO Manual: Proposed Guidelines for Collecting and Interpreting Technological Innovation Data (Paris: OECD, 1997), pp. 48-49.

[18] ibid., p. 52.

[19] DeBresson, Chris and Brent Murray, "Innovation in Canada. A Retrospective Survey: 1945-1978" (New Westminster, B.C.: Cooperative Research Unit on Science and Technology, 1984).

[20] From OECD, 1997, op. cit. p. 121: "The kind-of-activity unit (KAU) groups all the parts of an enterprise contributing to the performance of an activity at class level (four digits) of NACE Rev. 1 and corresponds to one or more operational subdivisions of the enterprise. The enterprise's information system must be capable of indicating or calculating for each KAU at least the value of production, intermediate consumption, manpower costs, the operating surplus and employment and gross fixed capital formation." (Council Regulation (EEC) No 696/93 of 15 March 1993 on the statistical units for the observation and analysis of the production system in the Community, OJ No. L 76, p. 1, Section III/F of the Annex).

[21] OECD, 1997, op. cit. pp. 63-65.

[22] Note that the difference between these first two items is that a firm may have engaged in innovation activities that were either aborted before the introduction of a new product or process or have not yet come to fruition.

[23] If not already available from other surveys.

[24] In the case of the first two items, it is also recommended that they be broken down into products that are new to the market and products that are new only to the firm.

[25] Some concern is expressed in the Manual that firms with short product lifecycles would naturally have a higher percentage of sales from new products and that it might be useful to separate the effect of large new product sales due to short lifecycles from large new product sales due to other factors.

[26] As in the previous item, it is useful to be able to account separately for those firms who engage primarily in custom production (where virtually everything is new).

[27] OECD, 1997, op. cit. p. 89.

[28] ibid.

[29] Hansen, J. A. "New Indicators of Industrial Innovation in Six Countries: A Comparative Analysis." Final Report to the National Science Foundation, June 22, 1992.

[30] Arundel, Anthony, with Keith Smith, Pari Patel, and Giorgio Sirilli. "The Future of Innovation Measurement in Europe: Concepts, Problems and Practical Directions." IDEA Paper Number 3, The STEP Group, 1998. pp. C-IV to C-VI.

[31] Information for this section was provided by Daood Hamdani of Statistics Canada.

[32] Fabricant, S., et al. Accounting by Business Firms for Investment in Research and Development (New York: New York Univ. Dept of Economics, 1975) NSF/RDA 73-191.
    Posner, L. and Rosenberg, L. The Feasibility of Monitoring Expenditures for Technological Innovation (Washington: Practical Concepts Inc., 1974.)
    Roberts, R.E., et al. Investment in Innovation prepared by Midwest Research Institute for the National R&D Assessment Program, National Science Foundation, 1974.
    Hildred, W., and Bengstom, L. Surveying Investment in Innovation (Denver: Denver Research Institute, 1974) NSF/RDA 73-21.

[33] Hill, Hansen, and Maxwell, 1982, op. cit.

[34] Hansen, Stein, and Moore, 1984, op. cit.

[35] Hansen, J. A. "New Innovation Indicator Data Validation." Final Report to the National Science Foundation. 1991.

[36] Rausch, Lawrence, "R&D Continues to be an Important Part of the Innovation Process." NSF Data Brief, vol 1996, no. 7, August 7, 1996.

[37] Levin, R., Klevorick, A., Nelson, R. and Winter, S. "Appropriating the Returns from Industrial R&D," Brookings Papers on Economic Activity (1987) pp. 783-820.

[38] Cohen, W., Nelson, R., and Walsh, J. "Appropriability Conditions and Why Firms Patent and Why They Do Not in the American Manufacturing Sector." Mimeo, Carnegie Mellon University.

[39] Bowker Press, Directory of American Research and Technology (New York: Bowker Press, 1984).

[40] The Futures Group, Characterization of Innovations Introduced on the U.S. Market in 1982. Study prepared for the U.S. Small Business Administration, 1984.

[41] Gellman Research Associates, A Survey of Innovative Activity (Jenkintown, PA: Gellman Research Associates, 1993). Final Report Prepared for the U.S. Small Business Administration.

[42] Bean, A., Russo, M., and Whitely, R. Benchmarking your R&D: Results from IRI/CIMS Annual R&D Survey for FY '96. Cited in Cooper, R. and Merrill, S. "Trends in U.S. Industrial Innovation: An Assessment of National Data Sources and Information Gaps." Forthcoming.

[43] Archibugi, Daniele, Cohendet, Patrick, Kristensen, Arne, and Schaffer, Karl-August, "Evaluation of Community Innovation Survey (CIS)-Phase I," European Innovation Monitoring System (EIMS) Publication No. 11, 1995.

[44] Eurostat, "The Second Community Innovation Survey: Annex II.3: Methodological Recommendations", 1997.

[45] It is interesting to note that the initial Yale study found that a two-digit sector analysis was sufficient to elucidate most of the important inter-industry differences; see Levin, et al., 1987, op. cit.

[46] Hansen, J. A. "New Innovation Indicator Data Validation" Final Report prepared for the U.S. National Science Foundation, 1991.
    Archibugi, et al., 1995. op. cit., chapter 5.

[47] Arundel, et al., 1998, op. cit., appendix C.

[48] Hill, C. Hansen, J., and Maxwell, J. Assessing the Feasibility of New Science and Technology Indicators (Cambridge: MIT Center for Policy Alternatives, 1982).
    This was not the case with all firms. Some had collected data themselves and used it for strategic planning. A few even included it in the firm's annual report.

[49] OECD, 1997, op. cit. p. 81.

[50] ibid. p. 89.

[51] Hansen, 1991, op. cit. p. 23.

[52] Nordic Industrial Fund, Innovation Activities in the Nordic Countries (Oslo: Nordic Industrial Fund, 1991). p. 56.

[53]  OECD, OSLO Manual: Proposed Guidelines for Collecting and Interpreting Technological Innovation Data (Paris: OECD, 1997). p. 76.

[54] Cooper, R., and Merrill, S., op. cit.

[55] See for example, Arundel, et al. "The Future of Innovation Measurement in Europe" IDEA Paper No. 3, The STEP Group, 1998. Appendix A, pages IV-V.

[56] Evangelista, R., Sirilli, G. and Smith, K., "Measuring Innovation in Services" IDEA Paper No. 6, The STEP Group, 1998.

[57] Evangelista, R., and Sirilli, G. "Innovation in the Service Sector: Results from the Italian Survey" IDEA Paper No. 7, The STEP Group, 1998. p. 1.

[58] See for example, Miles, I., Services Innovation: Statistical and Conceptual Issues, Working Group on Innovation and Technology Policy, OECD (DSTI/EAS/STP/NESTI(95)12).

[59] OECD, "The Second Community Innovation Survey," Core Questionnaire: Service Sector, 1997.

[60] Evangelista, Sirilli, and Smith, op. cit. p. 20. This is something of an oversimplification. Firms were also given a number of examples.

[61] Hansen, Stein, and Moore, 1984, op. cit. pp. 57-59.

[62] Rausch, op. cit.

[63] Hansen, Stein, and Moore, 1984. op. cit., p. 153.

