Strategic Research Partnerships: Proceedings from an NSF Workshop

Strategic Research Partnerships and Economic Performance:
Data Considerations

Donald Siegel
Nottingham University Business School

  I. Introduction
  II. Review of Empirical Research on SRPs
  III. Measurement Issues
  IV. Suggestions for Data Collection
  V. References

I. Introduction

The number of strategic research partnerships (SRPs) involving firms, universities, non-profit organizations, and public agencies has increased markedly in recent years. Some of this growth can be attributed to three policy initiatives and a key economic trend:

Despite the ubiquity of SRPs and their potential importance as a mechanism for generating technological spillovers, it is difficult to evaluate the impact of these partnerships on economic performance, given the limitations of existing data. That is unfortunate because an assessment of the desirability of these policy initiatives ultimately depends on our ability to derive accurate estimates of the private and social returns to SRPs.

In this paper, I identify these data limitations and also outline the salient measurement issues, based on a comprehensive review of the burgeoning empirical literature on SRPs. I offer some suggestions for the collection of additional data that might ultimately enable researchers to determine which policy initiatives are effectively addressing market failures and stimulating improvements in economic performance.

Although some of this discussion constitutes a "wish list" for information that would be useful in policy analysis, I focus on suggestions that are feasible, given the federal government's limited resources for data collection. My review of the literature reveals that there is good news, in terms of feasibility, because much of this additional data has already been collected by private and non-profit organizations. Thus, it is conceivable that some of these institutions might be willing to engage in an SRP with NSF to exchange data and pool resources. This would reduce the cost of a data expansion initiative, as well as obviate the need to significantly add to the considerable response burden currently placed on high technology firms. At minimum, NSF should facilitate the process of linking existing data to these new, richer sources of information.

The remainder of this paper is organized as follows. Section II provides a brief review of recent empirical studies of the relationship between SRPs and economic performance. Much of this discussion focuses on the characteristics and shortcomings of the data analyzed in these papers. Section III outlines the salient measurement issues. The final section presents suggestions for the formulation of new indicators of SRPs and a specific data collection strategy. The objectives of this strategy are to target data collection efforts to the "most important" SRPs (those that are most likely to enhance economic growth) and to place a stronger emphasis on measuring SRP (and R&D) outputs.

II. Review of Empirical Research on SRPs

Before discussing recent empirical studies of SRPs, it is useful to define some terms and characterize the wide variety of collaborative relationships that have emerged in recent years. SRPs are defined as any co-operative relationship involving organizations that conduct or sponsor R&D.[1] Many of these partnerships are potential sources of R&D spillovers and economic growth. The following are examples of SRPs:

Note that this definition is quite broad and includes SRPs that have gained in prominence in the "new" economy, with its greater emphasis on intellectual property, venture capital, entrepreneurial start-ups, and university-industry technology transfer (UITT). As described in Siegel, Waldman, and Link (1999), the recent increase in UITT, through a technology transfer office (TTO) and other formal means, has led to a concomitant rise in the incidence and complexity of research partnerships involving universities and firms. The authors also report that in recent years, universities have become more receptive to the idea of accepting an equity position in an entrepreneurial startup, in lieu of up-front licensing revenue.

The last two categories of SRPs (faculty consulting and educational partnerships involving universities and firms) constitute informal means of transferring technologies from universities to firms. According to a recent National Academy of Engineering (NAE) study, summarized in a forthcoming paper by Grossman, Morgan, and Reid (2001), these SRPs may also be important determinants of technological spillovers. The NAE study examined the contributions of academic research to industrial performance in five major industries and concluded that in some sectors, faculty consulting and educational partnerships between universities and firms played a critical role in the introduction of new production processes.

In characterizing SRPs, it is also important to distinguish between private-private partnerships and public-private partnerships.[3] Most SRPs fall into the latter category. Public-private partnerships receive some level of support from a public institution. Such support can assume various forms, such as government subsidies for projects funded by private firms (e.g., ATP), shared use of expertise and laboratory facilities (e.g., ERC or IUCRC), university technology incubators, science parks, licensing agreements between universities and firms, and university-based startups. Private-private partnerships are defined as relationships involving firms only. Examples of such partnerships include research joint ventures, strategic alliances, and networks involving two or more companies.

This distinction serves to underscore the "strategic" aspect of SRPs. For private-private partnerships, it is assumed that the key strategic objective is profit maximization. Hence, scholars who examine such relationships (see the burgeoning literature on SRPs in the field of strategic management) tend to focus on the impact of SRPs on stock prices or accounting profits. In the case of public-private partnerships, the government agency also has a "strategic" goal in establishing such an initiative. Typically, its objective is to address an innovation market failure (see Martin and Scott (2000)) and, ultimately, to enhance economic growth.

Thus, from a public policy perspective, once appropriate antitrust and intellectual property laws have been designed, public-private partnerships are likely to be of greater interest than collaborations involving firms only.[4] In theory, they should generate technological spillovers and ultimately, high social returns. If SRPs are achieving their goals, one would expect to see a reduction over time in the magnitudes of the market failures they address.

On the other hand, an assessment of the performance impact of private-private SRPs is more likely to reflect a private return to this activity. Although it is certainly relevant to calculate private returns, it is primarily the divergence between the private and social return that provides the fundamental rationale for government intervention in high technology industries. This is especially true when the private return is not sufficient to justify private investment. Note that private-private partnerships may also generate spillovers, although presumably of a smaller magnitude than public-private partnerships. The key difference is that for the private-private partnership, the private return is sufficient to warrant private investment, even if it falls short of the social return. I will return to this point later on, as I believe that much of the data collection effort should be focused on tracking the performance impact of public-private partnerships, so as to allow researchers to generate a better estimate of these social returns.

Another interesting policy issue involving public-private partnerships is the trend towards greater scrutiny of public investments in R&D. As described in Link (1996) and (1998), this stems, in part, from recent initiatives to hold public technology-based institutions more accountable for documenting the economic impact of the R&D projects they have supported. Universities may face similar pressures from legislative bodies that provide funding. In contrast, for private-private partnerships, shareholder accountability has always been a powerful force in constraining self-serving behavior on the part of corporate managers, ensuring that they will closely monitor the financial return on investment in SRPs.

Table 1, included at the end of the paper, summarizes the key features of 47 recent studies of SRPs. For each study, I denote the type of SRP, nature of the institutions involved in the SRP, unit of observation, data sets used in the empirical analysis, methodology, and proxies for performance. Note that scholars in a wide variety of disciplines, such as economics, finance, sociology, public policy, and strategic management have examined SRPs.

Interdisciplinary interest in this topic offers several advantages:

I now consider each of these in turn.

Three major datasets analyzed in these studies are the MERIT-CATI (Maastricht Economic Research Institute on Innovation and Technology-Cooperative Agreements & Technology Indicators) file, NSF's CORE (CO-operative REsearch) database, and the NCRA-RJV (National Co-operative Research Act-Research Joint Venture) database.[5] Many authors have examined special datasets consisting of firms that have received funds from government programs that support technology-based SRPs, such as the ATP and SBIR programs. Typically, these authors then link this information to firm-level surveys of production, R&D, accounting profitability, and stock prices (e.g., COMPUSTAT and CRSP), in order to assess the impact of the SRP on economic or financial performance.

It is interesting to note that the papers constitute a mix of quantitative and qualitative research. In fact, some researchers have designed their own surveys of firms involved in SRPs, typically with government or foundation support. More importantly, numerous authors have made liberal use of proprietary databases, such as files created by the Securities Data Company, Science Citation Index, Recombinant Capital, Corporate Technology Directory, and Venture Economics. Studies examining SRPs resulting from university-industry technology transfer (UITT) have been based on the comprehensive survey conducted by the Association of University Technology Managers (AUTM), as well as archival data on patents, licenses, and startups at several major universities (Stanford, Columbia, MIT, and the University of California system). Several authors, especially in the field of strategic management, have collected data on specific industries, such as chemicals, biotechnology, and semiconductors.

Table 1 also reveals that authors have used a wide variety of performance/output indicators for SRPs. These include the following conventional measures:

Many authors have interpreted these indicators as different ways of characterizing the spillover mechanism.

Not surprisingly, management and finance studies focus mainly on SRPs involving firms only and concentrate on explaining short-run financial performance and accounting profitability. On the other hand, economists devote their attention to public-private partnerships: the search for R&D spillovers, program evaluation (SBIR, ATP, EUREKA, the Framework Programmes), the effects of consortia (SEMATECH), "crowding out" of private R&D investment, and the impact of SRPs on total factor productivity.

Many studies of research joint ventures and strategic alliances in the management and finance literatures use the event study methodology, which is based on the capital asset pricing model (CAPM). Event studies have been used widely by researchers in the fields of accounting, economics, and finance to assess the stock price effect that is conveyed by a major corporate announcement, such as announcements of quarterly earnings, mergers and acquisitions, new products and investments, legislation and regulatory changes, and other economically relevant events. This method measures the average change in share price that arises when an unanticipated event is announced. The event presumably provides new information on the future profitability of companies that experience it. In this instance, the event is the announcement of the formation of an SRP.
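To make the method concrete, the market-model calculation described above can be sketched in a few lines. This is a toy illustration with simulated returns; the window lengths, parameter values, and the size of the announcement effect are all assumptions, not data from any actual SRP announcement:

```python
import numpy as np

def abnormal_returns(firm_ret, mkt_ret, est_window, event_window):
    """Market-model event study: fit alpha/beta over the estimation
    window, then measure abnormal returns in the event window."""
    # OLS fit of firm returns on market returns (the market model)
    beta, alpha = np.polyfit(mkt_ret[est_window], firm_ret[est_window], 1)
    # Abnormal return = actual return minus model-predicted return
    ar = firm_ret[event_window] - (alpha + beta * mkt_ret[event_window])
    return ar, ar.sum()  # daily ARs and the cumulative abnormal return

# Simulated daily returns: 250 estimation days, then a 3-day window
# around a hypothetical SRP announcement that lifts the firm's price
rng = np.random.default_rng(0)
mkt = rng.normal(0.0005, 0.01, 254)
firm = 0.0002 + 1.2 * mkt + rng.normal(0, 0.005, 254)
firm[251] += 0.03  # assumed announcement-day jump

ar, car = abnormal_returns(firm, mkt, slice(0, 250), slice(250, 253))
print(round(car, 4))
```

In practice, researchers average these cumulative abnormal returns (CARs) across a sample of announcing firms and test whether the mean CAR differs significantly from zero.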

It is quite tempting to use the event study approach because firms and other organizations involved in SRPs typically do not report direct performance measures (for a given SRP), whereas share price information is available for all publicly held firms. The method also obviates the need to confront the difficult issues associated with measuring total factor productivity (especially physical and technical capital). Furthermore, it is much more difficult (if not impossible) for managers to manipulate share prices than measures of accounting profitability.

Despite these considerable strengths, event studies suffer from several critical limitations. First, as noted in McWilliams and Siegel (1997), they are based on a set of rather heroic assumptions that may be invalid for managerial decisions. One such assumption is that events are exogenous, which is clearly violated for most strategic decisions, such as the formation of an SRP.[6] Furthermore, the unit of observation in an event study is the firm, because stock prices are available only at the firm level and only for publicly traded companies. Thus, event studies preclude an analysis of SRPs below the firm level and of SRPs involving privately held companies. That is unfortunate because many SRPs involve the creation of a small venture, which can easily be masked within a large organization. Finally, many leading economists (see Shleifer (2000)) have recently become more skeptical regarding the validity of the "efficient markets" hypothesis, which provides the theoretical basis for the CAPM and the associated event study methodology.

If short-run shifts in stock prices are not a good proxy for the long-run performance of SRPs, we need to identify alternative measures. In the next section, I outline a set of measurement issues that help us identify "better" indicators, defined as measures that improve our estimates of the private and social returns.

III. Measurement Issues

The growth of public investment in R&D, through "National Innovation Systems" and other programs, has led to greater interest in evaluating the social returns to publicly funded R&D. A missing link in the assessment of the social returns to publicly-funded R&D (at universities, federal research labs, and other nonprofit/public institutions) is the role that public R&D plays in the creation of new industries. A discussion of the problems researchers have encountered in quantifying the benefits of public R&D can be linked more broadly to the literature on the difficulties of measuring prices and productivity in high technology industries.

Currently, the government does a very poor job of tracking economic activity in embryonic industries and the emergence of new industries within existing sectors. This lack of coverage could result in a downward bias in estimates of the social returns to publicly-funded R&D, since it might lead to an underestimation of the impact of public R&D on economic efficiency. Presumably, these errors may also reduce the accuracy of estimates of the impact of SRPs on economic performance.

This conclusion is based on the following line of reasoning: Total factor productivity (TFP) is generally regarded as the best metric of economic performance and thus should be used to assess the social returns to SRPs. However, TFP is notoriously difficult to measure, mainly because of inadequate adjustments for changes in product and input quality. Using an industry's rate of introduction of new products as a proxy for mismeasurement of the quality of its output, Siegel (1994) examined the incidence of measurement errors in output prices across 348 manufacturing industries. He found that the producer price index (PPI), the inflation measure most commonly used to calculate TFP, missed about 40% of quality improvements in the 1970s and early 1980s. In a subsequent paper (Siegel (1997)), the author reported that these measurement errors are especially severe in industries that invest heavily in computers and R&D. More importantly, he found that controlling for an industry's ability to generate new products yielded substantially more accurate estimates of the social returns to investment in computers and R&D.
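The logic of this argument can be illustrated with a toy Solow-residual calculation (all growth rates below are invented for illustration): because TFP growth is computed by deflating nominal output growth with a price index, a deflator that misses quality improvement overstates inflation and thereby understates measured TFP growth by the same amount.

```python
def tfp_growth(nom_output_growth, deflator, k_growth, l_growth, k_share=0.3):
    # Solow residual: real output growth minus share-weighted input growth
    real_output_growth = nom_output_growth - deflator
    return real_output_growth - (k_share * k_growth + (1 - k_share) * l_growth)

# Illustrative annual (log) growth rates, not actual data
nom, k, l = 0.06, 0.03, 0.01

# Measured TFP growth, using a deflator that overstates inflation
measured = tfp_growth(nom, deflator=0.03, k_growth=k, l_growth=l)

# If the price index missed quality improvement worth one percentage
# point, true inflation is lower and true TFP growth is higher by
# exactly that point
true = tfp_growth(nom, deflator=0.02, k_growth=k, l_growth=l)
print(measured, true)
```

The gap between the two figures is precisely the unmeasured quality change, which is why better tracking of new products and industries feeds directly into more accurate TFP (and hence social-return) estimates.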

It is reasonable to assume that the same logic might apply to assessing the social returns to SRPs, since some of these partnerships are specifically formed to develop a new product or to perfect a new production process. That is, the benefits of SRPs may be poorly measured because they show up in new products and industries. More comprehensive and more timely measures of the emergence of new industries by relevant statistical agencies, e.g., the U.S. Census Bureau, would likely result in more precise measures of the benefits of SRPs, in terms of stimulating product innovation and quality improvements (see Trajtenberg (1990)).

Additional critical aspects of SRP performance include their role in stimulating the diffusion of new technologies, fostering economic growth, and creating new jobs. These are considered to be of paramount importance for many public-private partnerships and it is essential, from a public policy perspective, that such institutions be able to document the global economic impact of these relationships. A notable example concerns SRPs resulting from university-industry technology transfer (UITT) activities. In this regard, it is interesting to note that a primary justification for the Bayh-Dole Act of 1980, the landmark legislation that spurred growth in university ownership and management of intellectual property, was that it would foster a more rapid rate of technological diffusion and enhance economic growth. An evaluation of the "success" of UITT SRPs should ideally be based on an assessment of their impact on these variables.

Thus, collecting information on multiple outputs would be useful. For instance, universities have two options when they engage in commercialization of their intellectual property. One is to negotiate a licensing agreement with an existing company. Another avenue is to establish a relationship with a new company that is formed to commercialize the new technology. In some cases, the university assumes an equity position in the venture. According to the Association of University Technology Managers (AUTM), over 2,000 university technology transfer startups have been formed in the U.S. since 1980, some with funding from venture capital firms.

Despite the potential importance of university technology transfer startups as a mechanism for generating local technological spillovers and revenue to the university, there has been no systematic analysis of the determinants and consequences of university involvement in these new entrepreneurial ventures. As a result, it is difficult for policymakers and university administrators to assess the private and social returns to this activity. With regard to measures of the "outputs" of this process, special attention should be paid to three key potential dimensions of the social returns to university technology transfer: "time to market," firm growth, and survival.

Another measurement issue concerns the role of SRPs in the innovation process. SRPs can be viewed as an intermediate output of R&D, or as the emergence of a new organizational form (such as an RJV or strategic alliance) that allows R&D to be conducted more effectively. This underscores the importance of tracking this activity and following these organizational entities over time, in order to determine which SRPs are accomplishing their objectives. This may be especially critical, given the embryonic nature of the technologies and industries involved in these relationships (e.g., biotechnology) and hence, the long lag between the formation of a partnership and the realization of returns to the organizations involved in the transaction.

As noted in the previous section of the paper, NSF does indeed track RJVs, and there is some existing information on the survival of RJVs (e.g., case studies of SEMATECH; see Link (1996) and Link, Teece, and Finan (1996)). However, there needs to be a considerable expansion in the scope of coverage of SRPs, including many of the SRPs presented in Table 1. Also, more comprehensive, direct indicators of SRP "performance" (broadly defined) need to be systematically collected.

In gathering this additional information on SRPs, NSF should consider modifying its current data collection strategy with regard to R&D activity. That is, given the objective of deriving more accurate estimates of the private and social returns to innovative activity (as manifested in an SRP), there needs to be a fundamental shift from gathering data on R&D inputs to a greater focus on R&D outputs. Currently, the government does an excellent job of tracking R&D inputs, especially information on the scientific workforce and other human resource management factors, firm and university R&D expenditure, and patenting activity in academia and the industrial sector.[7]

A more fruitful approach for SRPs would involve stressing the collection of information on outcomes, such as new products, licensing agreements, formations of strategic networks, launching of startups, research collaborations (co-authoring), citations, the creation of new jobs and industries, and sales growth. This would enable researchers to extend some of the excellent work on evaluating the distribution of the private and social value of patents (see Henderson, Jaffe, and Trajtenberg (1998) and Jaffe, Trajtenberg, and Henderson (1993)) to licensing activities and other dimensions of output. Another useful methodology is the technique outlined in (2001), which is based on computing the expected private and social returns at the inception of an SRP.[8]

It is important to note that, with regard to certain outcome measures, such as total factor productivity, NSF does not have a comparative advantage in collecting its own performance data. In these instances, the best course of action would be for NSF to facilitate linkages between its own data on SRPs and other government data on economic performance. Indeed, there is a precedent for this, as some researchers have succeeded (with NSF financial support) in linking NSF's firm-level R&D survey to the U.S. Census Bureau's establishment-level Longitudinal Research Database (LRD) (see Adams and Jaffe (1996) and Lichtenberg and Siegel (1991)).
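The kind of linkage described above can be sketched with a toy example. The identifiers and variable names below are hypothetical, not actual fields from the NSF survey or the LRD: survey records at the firm level are joined to establishment-level performance data that has first been aggregated up to the firm.

```python
import pandas as pd

# Hypothetical extract of a firm-level R&D survey
# (all column names are illustrative, not actual survey fields)
rd_survey = pd.DataFrame({
    "firm_id": [101, 102, 103],
    "rd_spend": [5.0, 12.5, 3.2],   # R&D expenditure, $M
    "in_srp": [True, False, True],  # participates in a partnership
})

# Hypothetical establishment-level performance records
establishments = pd.DataFrame({
    "estab_id": [1, 2, 3, 4],
    "firm_id": [101, 101, 102, 103],
    "value_added": [20.0, 15.0, 40.0, 8.0],
})

# Aggregate establishments to the firm level, then link to the survey
firm_perf = establishments.groupby("firm_id", as_index=False)["value_added"].sum()
linked = rd_survey.merge(firm_perf, on="firm_id", how="inner")
print(linked)
```

The practical obstacles, of course, are not the mechanics of the join but confidentiality clearance and the construction of reliable common identifiers across agencies.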

In the following section, I present a proposed strategy for the collection of additional data on SRPs and economic performance.

IV. Suggestions for Data Collection

Given the arguments and evidence presented in previous sections of the paper, I suggest that NSF contemplate adopting the following initiatives:

I now consider each of these in turn.

Given the rise in the incidence and variety of SRPs, it is useful for NSF to broaden its coverage of this activity. I have also maintained that it would be desirable to target the data collection effort to public-private partnerships, since there is typically more interest in assessing the social, as opposed to the private, returns to R&D. Furthermore, since private organizations involved in such relationships have accepted some form of direct or indirect governmental support or subsidy, it may be easier to convince them to respond to a new survey or an expanded version of an existing survey.[9]

I have also argued that greater attention should be paid to gathering information on R&D (and SRP) outputs, as opposed to the current data collection strategy, which appears to be focused on R&D inputs. This approach could potentially yield more precise estimates of R&D spillovers associated with publicly funded innovative activity. In a similar vein, it would also be useful to systematically collect information from as many firms as possible, including those that are not involved in SRPs and those that apply for subsidies yet fail to receive them. This would allow for a much more accurate assessment of the effects of public support of R&D and potentially enable us to identify those SRPs that generate the highest social returns. Longitudinal analysis would also allow us to determine whether certain SRPs are indeed effectively targeting market failures, since economic theory predicts that government intervention is warranted when there is a substantial divergence between private and social returns.
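The comparison proposed above (firms that receive support versus otherwise similar firms that do not, observed before and after the program) amounts to a difference-in-differences design. A minimal sketch, with purely illustrative numbers:

```python
# Hypothetical mean R&D outputs for funded and unfunded firms,
# before and after a support program (all figures are invented)
outcomes = {
    ("funded", "before"): 10.0,
    ("funded", "after"): 16.0,
    ("unfunded", "before"): 9.0,
    ("unfunded", "after"): 11.0,
}

# Difference-in-differences: change for funded firms minus the
# change for unfunded firms, netting out common time trends
did = ((outcomes[("funded", "after")] - outcomes[("funded", "before")])
       - (outcomes[("unfunded", "after")] - outcomes[("unfunded", "before")]))
print(did)  # estimated program effect
```

This is exactly why data on non-participants and rejected applicants matter: without the second difference, common shocks to all firms would be misattributed to the program.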

Given the existence of limited resources for additional data collection, a cost-effective approach would be for NSF itself to engage in public-private partnerships with organizations that have been systematically collecting data on various aspects of the new economy. These include non-profit organizations, such as the Association of University Technology Managers (AUTM). For instance, with AUTM's support, NSF could collect information from universities on the various dimensions of output and performance discussed in this paper, such as faculty/graduate student involvement in UITT, along with more detailed questions on licensing activity and the formation, growth, and survival of university technology transfer startups. An alternative is to add a few questions on firms' relationships with universities to an existing NSF survey. It is also useful to note that some information reported to NSF is also reported to AUTM (e.g., both organizations collect data on R&D expenditures). Another potential partner for NSF is the Technology Transfer Society (TTS), an organization of technology transfer professionals that publishes the Journal of Technology Transfer.

Finally, it is unwise for NSF to re-invent the wheel. As shown in Table 1, there now exist numerous proprietary databases on SRPs, such as files created by the Securities Data Company, Science Citation Index, Recombinant Capital, Corporate Technology Directory, and Venture Economics. Furthermore, some researchers have collected their own quantitative and qualitative data on firms involved in SRPs, often with NSF support. Thus, to maximize the return on the data collection effort, linkages of existing datasets should be facilitated. Perhaps a unit could be established within NSF to assist researchers in constructing files that combine private and public data on SRPs and economic performance. A model for such a unit is the Center for Economic Studies at the U.S. Census Bureau, where researchers have been analyzing linked datasets since the late 1980s, subject to clearance procedures that preserve confidentiality. Similar clearance procedures could be implemented at NSF.



Footnotes

[1]  An even broader definition might include any individual or organization that has an interest or stake in this relationship.

[2]  ERCs and IUCRCs are NSF-sponsored public-private partnerships designed to promote technological diffusion, commercialization, and integration of research and education.

[3]  Hagedoorn, Link, and Vonortas (2000) distinguish between research partnerships that are formal and informal. While that may also be an important distinction, the analysis presented in this section presumes that all SRPs are formal relationships.

[4]  Of course, the choice of appropriate antitrust and intellectual property laws requires an accurate assessment of the performance of collaborations involving firms only.

[5]  These datasets are described in greater detail in Hagedoorn, Link, and Vonortas (2000).

[6]  While it is possible to control for partially anticipated events, many authors do not incorporate such effects in their empirical analysis.

[7]  There is considerable debate regarding whether patents constitute an input or output of the R&D process.

[8]  This method is an extension of the standard Griliches/Mansfield approach to evaluating the private and social returns to innovation.

[9]  That is, one suspects that the response rate would be significantly higher for a survey involving public-private partnerships than a survey of SRPs involving private firms only.

