Economic Assessment Office (EAO)
Overview of the EAO

The Economic Assessment Office plans, develops, and implements ATP's evaluation program. In other words, our focus is on measuring the success of the ATP. To this end, we carry out a variety of evaluation studies, aided by leading experts. If you would like to know more about the evaluation program, continue with the overview below.
Contents:
Program Assessment in EAO
Integration of Evaluation into Management of ATP
Measure Against Mission
A Logical Framework for Evaluation
Increasing Spillover Benefits Over Time
Two Paths to Long-Run Economic Benefits
Better Tools/Multiple Approaches for Assessing Technology Impacts
Substantial Involvement of Outside Experts
Three Tests for ATP's Success
EAO Objectives
Program Assessment in EAO
ATP initiated evaluation at the outset, several years prior to the passage of the Government Performance and Results Act (GPRA). We put an evaluation effort into place for two reasons:
Integration of Evaluation into Management of ATP
As illustrated by Exhibit 1, evaluation is most potent when it is integrated into program management. Note the sequence that begins with program design and is followed by implementation, assessment, lessons learned and feedback. In the next cycle, the assessment, lessons learned and feedback would be reflected in appropriate modifications to program design; then implementation would follow, etc. Our goal, which is to have program evaluation fully integrated into the program's dynamic structure and contributing to continuous program improvement, is being realized.
This achievement, not surprisingly, has not happened overnight. First, we had to develop the evaluation program and put it into practice. Then, we had to track programs, compile data, perform analysis, and begin deriving lessons from the early results. Now we are gaining insights into what is working and what is not, and this information is providing a basis for program modifications to improve effectiveness.
Measure Against Mission
There are some basic principles to follow in setting up an evaluation program. One basic principle is to measure against the mission. We examined our Statute for the essential mission and goals against which to measure the program's success. Exhibit 2 lists key elements from the ATP Statute, in paraphrased form, which guide what we seek to measure.
A Logical Framework for Evaluation
Another basic principle in setting up an evaluation program is to link in a systematic way the program's activities to the mission; the outputs to the activities; and the shorter- and longer-run outcomes to the outputs. In the parlance of program evaluation, this is sometimes called developing an "evaluation logic model," as illustrated in Exhibit 3.
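As a minimal sketch, the stage-to-stage linkage of such a logic model can be represented as an ordered chain that lets any measured outcome be traced back to the mission. The stage contents below are illustrative placeholders, not drawn from Exhibits 2 or 3:

```python
# A minimal sketch of an evaluation "logic model": each stage links to
# the next, so outcomes can be traced back through outputs and
# activities to the mission. Entries are illustrative, not ATP's actual
# exhibit content.
logic_model = {
    "mission": ["accelerate development of high-risk enabling technologies"],
    "activities": ["run award competitions", "fund cost-shared R&D projects"],
    "outputs": ["new technology platforms", "publications and patents"],
    "short_run_outcomes": ["early commercialization by awardees"],
    "long_run_outcomes": ["broad technology diffusion", "national economic benefits"],
}

STAGE_ORDER = ["mission", "activities", "outputs",
               "short_run_outcomes", "long_run_outcomes"]

def trace(model, stages=STAGE_ORDER):
    """Return the stage-by-stage chain as (stage, items) pairs."""
    return [(stage, model[stage]) for stage in stages]

for stage, items in trace(logic_model):
    print(f"{stage}: {'; '.join(items)}")
```

The value of the chain is that an evaluator can ask, for any proposed metric, which stage it measures and how that stage connects upward to the mission.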
Increasing Spillover Benefits Over Time
"Time" is an obvious issue in measuring impact. It is not a simple matter of R&D dollars in and economic impact immediately out. R&D takes time; commercialization of goods and services based on the technology platforms developed in ATP projects takes more time; and widespread technology dissemination can take a very long time. Exhibit 4 illustrates conceptually the time entailed for ATP projects to be done and to have impact.
Time in years is measured along the horizontal axis, starting with the announcement of an ATP competition for proposals, progressing to the announcement of awards, then indicating completion of projects in 2 to 5 years (on average between 3 and 4 years), followed by the post-project period. Economic impact is measured conceptually on the vertical axis. The lower curve illustrates slowly rising benefits to awardees. The upper curve illustrates increasing total economic benefits to the nation over time. The difference between the two curves indicates spillover benefits that extend beyond the ATP award recipients, as others benefit from the new technologies. The kinds of effects that may be expected for a successful project in approximately each of the time periods are listed in the shaded columns.

From an evaluation perspective, the time line means that we have had to make use of "indicators" of progress, as well as projections, in estimating long-term impacts. With the passage of more time, retrospective estimates of project benefits based on a long view back will become more feasible.
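The two-curve picture can be sketched numerically. The functional forms and parameters below are entirely hypothetical; only the qualitative shape follows the text: total national benefits sit at or above awardee benefits, and the gap between them (the spillover) widens over time.

```python
import math

# Illustrative sketch of the two benefit curves described for Exhibit 4.
# Logistic curves with arbitrary units; parameters are hypothetical.
def awardee_benefits(t):
    """Slowly rising benefits to ATP award recipients at year t."""
    return 10.0 / (1.0 + math.exp(-(t - 6.0)))

def total_benefits(t):
    """Total national benefits: a larger ceiling and a longer ramp-up."""
    return 40.0 / (1.0 + math.exp(-0.6 * (t - 9.0)))

def spillover(t):
    """Spillover = benefits accruing beyond the award recipients."""
    return total_benefits(t) - awardee_benefits(t)

for year in (2, 5, 10, 15):
    print(f"year {year:2d}: awardee={awardee_benefits(year):5.1f}  "
          f"total={total_benefits(year):5.1f}  spillover={spillover(year):5.1f}")
```

Under these assumed parameters, awardee benefits plateau within roughly a decade while spillovers continue to grow, which mirrors the document's point that widespread dissemination takes a very long time and that retrospective measurement becomes more feasible as time passes.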
Two Paths to Long-Run Economic Benefits
ATP's success is tracked along two principal paths as illustrated in Exhibit 5.
One path may be described as a direct marketplace path, and the other a more indirect path of knowledge and institutional effects. Of course, both paths are conditional on the technical success of the projects funded, which means that technical progress against the project goals is also important.

Looking at the direct marketplace path of success, we ask if the technology developed in an ATP project is being commercialized in the post-project period in one or more applications by the ATP award recipients or their direct collaborators. We also look at how users of the resulting products and services are affected. This is ATP's principal path for accelerating commercialization of the technology as called for by its mission.

Looking at the indirect path (which may actually be an array of indirect paths), we investigate whether the knowledge and institutional effects created by the project may be influential outside the boundaries of the ATP-funded project, eventually translating into measurable marketplace effects. These indirect effects may be as important as, if not eventually more important than, the direct effects, but they typically occur at a slower pace than the direct effects, tend to be serendipitous rather than intended, and provide less opportunity for deliberate acceleration of national economic benefits. Both paths may lead to substantial spillover effects.
Better Tools/Multiple Approaches for Assessing Technology Impacts
We have found that evaluating a complex program like the ATP requires all the evaluation tools in the toolkit, and then some, to address the many questions raised by ATP management, Congress, industry, and others. Exhibit 6 summarizes the main approaches we are taking.
Substantial Involvement of Outside Experts
In formulating our evaluation program and identifying the important questions to address, we have had the advantage of advice from leading experts in the field. Important contributions to ATP's evaluation have been made by many academics, private consultants, and non-profit organizations who have worked with us to develop, test, and apply new models and methodologies and to develop the databases needed for the research. The credibility of the evaluation effort is enhanced by the outstanding reputations of those involved.
Three Tests for ATP's Success
Ultimately, there are three tests for ATP's success, and these are illustrated in Exhibit 7:
Date created: June 1996
ATP website comments: webmaster-atp@nist.gov
ATP inquiries: InfoCoord.ATP@nist.gov
NIST is an agency of the U.S. Commerce Department's Technology Administration.