
Practical Evaluation
of
Public Health Programs

Workbook


[Graphic: Framework for Program Evaluation]



[CDC and PHTN logos]




 


Practical Evaluation

of

Public Health Programs






Prepared by

The University of Texas-Houston Health Science Center
School of Public Health
and
The Texas Department of Health











Supported by

The Centers for Disease Control and Prevention
and
The Association of Schools of Public Health

Cooperative Agreement 97-0531(P)-BHPR


Provided through

The Public Health Training Network

 

To order the videotape of the live satellite broadcast that originally accompanied this workbook, call 1-800-41-TRAIN. You can also call this number to enroll in the course for credit or to request information on ordering other Public Health Training Network (PHTN) course materials.

PRACTICAL EVALUATION OF
PUBLIC HEALTH PROGRAMS
DEVELOPMENT TEAM


Principal Investigator
Hardy Loe, Jr., MD, MPH
Associate Dean, UT-Houston School of Public Health

Course Design

Suzanne Adair, Ph.D.
PHTN Distance Learning Coordinator
Coordinator, Health Workforce Development
Texas Department of Health
Pamela Mathison, BSN, MA
Nurse Consultant
Adult Health Programs
Texas Department of Health

Instructional Design
Allan J. Abedor, Ph.D.
Professor Emeritus
UT-Houston Health Science Center

Writing Consultant
Susan Griffin, B.S., M. P. Aff.

Project Management

Ann Abbott, Ph.D.
Research Associate
UT-Houston School of Public Health
David Gillmore, M.Ed.
Research Associate
UT-Houston School of Public Health

 

In Cooperation With

Jenny Lewis, M.Ed.
Instructional Designer
Association of Schools of Public Health
Cathy Shoemaker, M.Ed.
Senior Instructional Designer
Centers for Disease Control and Prevention

 

Technical Reviewers

Bobby Milstein, MPH
Behavioral Scientist
Office of Program Planning and Evaluation,
Office of the Director
Centers for Disease Control and Prevention
Scott Wetterhall, MD, MPH
Medical Officer
Office of Program Planning and Evaluation,
Office of the Director
Centers for Disease Control and Prevention

Nursing Consultant
Patricia Drehoble, RN, MPH
Director, Continuing Nursing Education Provider Unit
Centers for Disease Control and Prevention

PHTN Learner Support
Brian Siegmund, MS Ed
Supervisor, Distance Learning Support
Centers for Disease Control and Prevention

 


LISTING OF SECTIONS

I. Building Commitment for Evaluation

    Part I Objectives
    Building Commitment for Evaluation
        1. Why Do Evaluation?
        2. Questions Evaluation Answers
        3. Definition of Practical Evaluation
        4. Standards for Practical Evaluation
            a. Useful
            b. Feasible
            c. Proper
            d. Accurate
    Framework for Practical Evaluation
    Panel Discussion: The Importance of Evaluation in the Current Public Health Environment
    Why Evaluation Is Important to Your Program
        1. Improve Health Status
        2. Demonstrate Accountability
        3. Manage Resources
        4. Improve Program Operations
    A Learning Program
    Barriers to Program Evaluation
        1. Practice Exercise
        2. Types of Barriers
            a. Lack of Management Support
            b. Lack of Resources
            c. Lack of Skills
            d. Lack of Relevant Data
            e. Fear of Consequences
    Comparison of the Academic Research Model and Practical Evaluation
    Overcoming Barriers to Evaluation through a Collaborative Approach
        1. Reduces Suspicion and Fear
        2. Increases Awareness and Commitment
        3. Increases the Possibility of Achievement of Objectives
        4. Broadens Knowledge Base
        5. Teaches Evaluation Skills
        6. Teaches Stakeholders
        7. Increases Possibility Findings Will Be Used
        8. Allows for Differing Perspectives
    Panel Discussion: The Importance of Evaluation to Your Program
    Part I Summary

II. Framework for Planning and Implementing Program Evaluation

    Part II Objectives
    Overview of the Evaluation Framework
    Evaluation Framework
    Evaluation Framework Notes (Steps & Standards)
    Applying the Evaluation Framework
        Step 1: Engage Stakeholders
        Step 2: Describe the Program
            a. Types of Objectives
                (1) Outcome Objectives
                (2) Process Objectives
            b. Well-Written Objectives
        Practice Exercise 1
        Step 3: Focus the Evaluation Design
        Step 4: Gather Credible Evidence
            a. Types of Data
            b. Sources of Data
            c. Ways to Improve the Quality of Data
        Practice Exercise 2
        Step 5: Justify Conclusions
        Step 6: Ensure Use and Share Lessons Learned
        Practice Exercise 3
    Standards
    Part II Summary
    Practice Exercise 4 (Case Study)
    Panel Discussion: Implementing Practical Evaluation

III. Additional Resources

    Suggested Enrichment Activities
    Recommended Process and Activities for Small Groups Viewing Videotaped Version of the Course
    Recommended Process and Activities for Individuals Viewing Videotaped Version of the Course
    Bibliography & Web Sites of Additional Evaluation Resources

Part I



Building Commitment

for Evaluation



PART I   OBJECTIVES


By the end of this program, you will be able to:

  • Define program evaluation in practical terms
  • Explain the importance of evaluation in the current public health environment
  • Name three reasons evaluation is important to your program
  • Using your program as an example, identify three barriers to evaluation and determine how they can be overcome


BUILDING COMMITMENT FOR EVALUATION


Why Do Evaluation?

With increased emphasis on accountability and quality of services from elected officials, the public, and the media, it is essential that health care organizations evaluate their programs, focusing on improved outcomes, quality of services, and cost-effectiveness. The word "evaluation" often suggests a lengthy and costly process that requires the services of expert researchers and/or consultants. The purpose of this course is to present an alternative to the research model of evaluation. This approach is known as "practical evaluation."


Questions Evaluation Answers
In summary, an evaluation will provide a public health program with the answers to the following questions:

  • What have you done?
  • How well have you done it?
  • How much have you done?
  • How effective have you been?

 



Definition of Practical Evaluation

There are innumerable definitions of evaluation. The definition used in this course is one proposed by Michael Quinn Patton in Practical Evaluation (1982):

"The practice of evaluation involves the systematic collection of information about the activities, characteristics and outcomes of programs, personnel, and products for use by specific people to reduce uncertainties, improve effectiveness and make decisions with regard to what those programs, personnel, or products are doing and affecting."

This approach to evaluation is practical and doable, recognizing real world constraints such as lack of time, resources, knowledge, and political boundaries.

There are a number of important components to this definition of practical evaluation. The first component is that data is collected in a systematic manner. This requires that the collection of data for the evaluation be planned from the beginning. It is too late when report time arrives to discover that essential data elements have not been collected.

Secondly, this definition specifies that evaluation focus on the activities, characteristics, and outcomes of the program being evaluated. More and more grants and contracts are calling for performance measures, which is consistent with this component of the definition.

Another component of this definition states that the purposes of evaluation are to:

  • reduce uncertainties
  • improve effectiveness
  • make decisions about the program

Practical program evaluation can provide feedback on program operations, expenditures, and results. It can be useful in deciding among budget alternatives, and in managing and reporting on uses of public funds.

Two other components of this definition that are especially relevant to public health are context and collaboration.


A practical evaluation is one that can reasonably be implemented within the context and constraints of a particular situation. Such constraints include inevitable limitations of time, resources, and knowledge as well as specific political boundaries. One example of context lies in the method of data collection and analysis for the evaluation. Data may not be available for certain evaluation questions within the time frame studied; personnel and automation resources may be limited in the collection of the required data; budgets may not include extra dollars to "out-source" the analysis of the evaluation data; and some of the evaluation questions may be politically "taboo" in some areas.

Collaboration is another important concept in the definition of evaluation. The team or collaborative approach involves a group of people who share the decision making for the major aspects of the evaluation. This approach to evaluation has many benefits. Using a collaborative approach to evaluation is one way to expand or maximize available resources. Collaboration is discussed in more detail later in this section.


Four Standards to Consider in Practical Evaluation

Useful
Is it useful to evaluate your program? Will the results be used to improve practice or allocate resources better? Will the evaluation answer stakeholders' questions?

Evaluation is useful in developing new program proposals, in reauthorizing existing programs, in justifying requests for additional funding, in accounting for use of public funds, and especially in improving program performance.

Feasible
Is it feasible to evaluate your program? Given your specific political environment and current resources, can you afford to do it? Do you have adequate support to conduct the evaluation? Does your budget allow for contracting with outside professionals to conduct the evaluation? If not, do you have the personnel, time, and monetary resources to do it in-house? If you cannot evaluate all aspects of your program, what parts can you evaluate?


Proper
Are you able to conduct the evaluation properly? Is the approach fair and ethical? For example, when you conduct a survey, respondents expect their answers to be kept confidential. Are you truly able to maintain that confidentiality within the constraints of your organization and with the types of data processing and analysis required?

Accurate
Is the evaluation accurate? It is important that appropriate data collection methods have been used and that the data have been consistently collected. For example, if you have several people involved in conducting interviews, have you provided adequate training to ensure consistency and quality in gathering the information?



Framework for Practical Evaluation


[Graphic: Framework for Program Evaluation]



This framework was developed by the Centers for Disease Control and Prevention Evaluation Working Group. It is discussed in depth in Part II of this course as a guide for organizing and implementing evaluation.

 


PANEL DISCUSSION

TOPIC: The Importance of Evaluation in the Current Public Health Environment





























 

 


WHY EVALUATION IS IMPORTANT TO YOUR PROGRAM


Improve Health Status

The primary focus of evaluation is to maximize your program's ability to improve health status in your community. For example, the Michigan Diabetes Outreach Network was established to ensure comprehensive diabetes management for persons with diabetes. The purpose of the evaluation study was to collect data on services delivered and their impact on patient outcomes, and to provide feedback to the partnering agencies to improve their performance. The evaluation results revealed a major reduction in hospitalizations, amputations, and mortality among the persons served by the network.

Demonstrate Accountability

Evaluation can help you demonstrate accountability to your funding sources, stakeholders, and community. For example, with the Michigan Diabetes Outreach Network, not only did the results demonstrate improved patient outcomes, but these positive outcomes also resulted in significant increases in state appropriations for comparable programs throughout the state.


Manage Resources

Evaluation can help you examine what resources you need and/or whether you are using the resources you have effectively. For example, inspectors with a food and drug safety program were struggling with how to get all their inspections done and still have time to complete the reports from each inspection in a timely manner. After evaluating the problem and the potential solutions, along with the program's resources, the program decided to purchase a laptop computer for each inspector. Now the inspectors take their laptops with them on their inspections and complete most of their reports on-site. The computers turned out to be the resource the inspectors needed to use their time efficiently and get their jobs done.


Improve Program Operations

Evaluation can help you improve your program operations. In the previous example, the purchase of laptop computers significantly improved the food and drug safety program's operations.


In another example of using evaluation for program improvement, a nurse in a tuberculosis prevention and control program started reporting the number of patients on directly-observed therapy (DOT) by region. Tuberculosis program personnel in each region did not want to be lowest on the chart. The number of patients receiving DOT increased in every region, showing that collecting and sharing the numbers resulted in a change in staff behavior and an overall improvement in program outcomes.

An important benefit of carrying out the evaluation process is the opportunity for program personnel to learn about their own programs. Persons who view evaluation as an important way to learn about the program and make improvements are better able to influence their program activities and outcomes than those who view evaluation as a time-consuming bureaucratic requirement. They are also more competent in representing both accomplishments and resource needs to policy-makers and the public at large. A graphic representation of how programs can "learn" from a practical evaluation is shown on the next page.


[Graphic: The program learning cycle]

As suggested by the graphic, a program evaluation generates data on program effectiveness and areas of weakness. The next program planning cycle attempts to correct as many weaknesses as possible. The revised program is implemented, and another program evaluation assesses the effect or impact of the changes. This cycle of evaluation, planning, implementing, and re-evaluating should result in improved program effectiveness - in other words, the program learns and begins operating at a higher level of effectiveness. The cycle is repeated continuously. Over time, a gradual improvement in all programmatic activities should occur as evaluation data are used in the next planning cycle, changes are implemented, and evaluation is conducted on an ongoing basis.
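The cycle described above can be sketched as a simple loop. This is only an illustration: the evaluate and plan_and_implement functions, the "effectiveness" score, and the half-the-gap improvement rule are hypothetical stand-ins, not part of the workbook.

```python
# Illustrative sketch only: the numbers and the "half the gap" improvement rule
# are hypothetical stand-ins for real evaluation findings and program changes.

def evaluate(effectiveness):
    """Evaluation step: measure how far the program is from its goal."""
    return 1.0 - effectiveness          # the remaining 'weakness' or gap

def plan_and_implement(effectiveness, gap):
    """Planning/implementation step: correct part of the identified gap."""
    return effectiveness + 0.5 * gap    # assume half the gap is closed each cycle

effectiveness = 0.50                    # hypothetical baseline indicator
for cycle in range(1, 4):               # three evaluation cycles
    gap = evaluate(effectiveness)
    effectiveness = plan_and_implement(effectiveness, gap)
    print(f"After cycle {cycle}: effectiveness = {effectiveness:.2f}")
```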

 


 

BARRIERS TO PROGRAM EVALUATION

PRACTICE EXERCISE 1

INSTRUCTIONS: List barriers that prevent you or your program from doing program evaluation.

















BARRIERS TO EVALUATION


Lack of Management Support
One of the most common barriers to evaluation is lack of management support. Unless an evaluation is initiated from the upper levels of management, it will often not receive the latitude and resources necessary to be conducted properly. Program personnel are often expected to collect additional data and perform the analyses in addition to their original duties. The result is often a poorly constructed and conducted evaluation, with unreliable information on which to base decisions.

Lack of Resources
Lack of resources is probably one of the most common barriers to evaluation. Lack of time, adequate personnel resources, and necessary equipment can seriously hamper the evaluation process.

Lack of Skills
Lack of skills and resources in the collection, analysis, and interpretation of data fosters incomplete or inaccurate evaluation results. Many organizations simply do not have the necessary automation or personnel with the skills to construct interview or data collection instruments, or analyze and interpret the data. This could result in false assumptions and conclusions from inaccurate, missing, or irrelevant data.

Lack of Relevant Data
Lack of relevant data can render the evaluation useless. Unless data collection instruments and methods are carefully planned from the beginning of the evaluation period, missing, inconsistent, and untimely data will result in an incomplete evaluation. Certain questions will not be answered and the "holes" in the data can render the results meaningless.

Fear of Consequences
Fear of the consequences of the results will often preclude an evaluation. Evaluation is often threatening to the administrators or program managers whose programs are being evaluated. Evaluations may provide ammunition for those who want to reduce program expenditures or dramatically change the program's direction.


COMPARISON OF THE ACADEMIC RESEARCH MODEL AND PRACTICAL EVALUATION

Perhaps the biggest misunderstanding about program evaluation is that it must follow an academic research model. This is not necessary. The following is a comparison of the academic research and practical evaluation models.

                Academic Research            Practical Evaluation
Purpose         Test hypotheses              Improve program and practice
Method          Controlled environment       Context sensitive
Statistics      Sophisticated                Simpler statistics

In the academic research model, the focus is on testing hypotheses; the intent of practical evaluation is to improve practice. For example, in the academic research model, the hypothesis tested might be that smoking behavior has decreased as a result of a public health wellness promotion program. In the practical evaluation model, the emphasis might be on which health promotion methods are getting the best results.

Most people think of research as requiring a controlled environment or control groups. In public health practice, this is seldom realistic. So practical evaluation focuses on the context in which people work, making this a more realistic model for public health practitioners. For example, in the academic research model, there would have to be a stratified random sample of both wellness program participants and a control group. The practical model would examine the data over time from the wellness program participants.

The research model requires sophisticated data collection and analysis. In practical evaluation, data collection needs to be well thought out and organized. Sophisticated statistical analyses may or may not be used, depending on what is being evaluated, the type of data collected, and the availability of resources for the data collection and analysis.
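As an illustration of the "simpler statistics" practical evaluation often relies on, the sketch below compares smoking rates among wellness program participants before and after the program. The survey counts are hypothetical; the point is the simple before/after trend comparison with no control group.

```python
# Hypothetical before/after smoking rates among wellness program participants.
# No control group -- just the simple trend comparison a practical evaluation
# might report, with the counts behind each rate shown for transparency.

smokers_before, surveyed_before = 130, 500   # baseline survey (hypothetical)
smokers_after, surveyed_after = 95, 480      # follow-up survey (hypothetical)

rate_before = smokers_before / surveyed_before * 100
rate_after = smokers_after / surveyed_after * 100

print(f"Smoking rate before program: {rate_before:.1f}%")
print(f"Smoking rate after program:  {rate_after:.1f}%")
print(f"Change: {rate_after - rate_before:+.1f} percentage points")
```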

 


OVERCOMING BARRIERS TO EVALUATION

Many of the barriers to evaluation can be addressed through a collaborative approach. One way to meet the useful/feasible/proper/accurate standards for practical evaluation is to involve people who will assist in the design and implementation of the evaluation.

The collaborative approach means forming a team of people who make shared decisions about the purpose of evaluation, how it is going to be conducted, and how the results will be interpreted and used. Team members should be chosen based on the skills and knowledge needed to conduct the evaluation. Often people outside the program can bring some important skills, objective viewpoints, and fresh insights to the process. A team approach can help overcome the barrier of lack of resources.

The Collaborative Approach
A collaborative approach can have the following benefits:

Reduces Suspicion and Fear
By involving stakeholders in the evaluation process, you reduce suspicions and fears. If people are involved in the decision-making process about what to evaluate and how the evaluation will be conducted, they are more likely to support the process. If they are not involved, they can be suspicious about the process and fearful of the outcome. They may not trust the results or the people involved in the evaluation. People often fear that evaluation will result in the termination of the program or of their job. Involving them will help them understand that the focus of the evaluation is to improve the program.

Increases Awareness and Commitment
Through a collaborative approach, participants become more aware of and committed to the evaluation process. If people are involved in the evaluation process, they have ownership in the results. An open presentation and discussion of the program's data and information can lead to the development of a consensus about interpretation and follow-up action.


Increases the Possibility of Achievement of Objectives
A collaborative approach increases the possibility of achieving objectives. If people understand what is being evaluated and why, they are more likely to work toward improving those elements of the program. This approach promotes ownership of the process and responsibility for the outcomes.


Broadens Knowledge Base
A collaborative approach draws on a broader base of knowledge, skills, and experience in evaluation. In choosing the people to be on the evaluation team, the role each person will play in the process and the knowledge, skills, and experiences they bring to the evaluation should be considered. This is a good opportunity to look both inside and outside the organization to tap the needed resources for the type of evaluation planned. Team composition should include people with a variety of backgrounds, including those with front line service experience, statistical and epidemiological expertise, management and policy perspectives, and planning skills. All of these skills may not be available within the program or even the organization. If this is the case, it is often helpful to call upon a community college or university in your area for assistance from either faculty or student interns.


Teaches Evaluation Skills

A collaborative approach can serve as a method for teaching evaluation skills to team members. When people work together, they share ideas, knowledge, skills, and abilities. The evaluation team members can learn about program objectives, data collection methods, making evaluation decisions, and even how to work on a team. Team members learn by doing and come away from the process with a greater set of skills.


Teaches Stakeholders

A collaborative approach can serve as a method for teaching community stakeholders about public health. Involving people outside the program in the evaluation process can broaden their knowledge about your program and its role in public health. This is an excellent way to involve the community in a discussion of health problems and possible interventions. For example, you might involve members of a local clinic, local elected officials, or representatives from community-based organizations such as the American Heart Association. Any of these representatives would walk away from the evaluation experience with a better understanding of the strengths of your program, the constraints under which public health programs operate, what is needed for improvement, and how the program relates to their interests.


Increases Possibility Findings Will Be Used
By involving stakeholders, there is a greater possibility that the findings will be used and implemented. Many organizations are in the process of developing quality assurance programs. When a variety of staff are involved in conducting chart audits, identifying problems, and determining solutions to correct the identified problems, they are more likely to buy into the implementation of the solutions decided upon by the group.


Allows for Differing Perspectives
A team allows for differing perspectives. This goes back to the old adages: "Two heads are better than one" and "The whole is greater than the sum of its parts." Including people outside the program provides unique points of view.


PANEL DISCUSSION

TOPIC: The Importance of Evaluation to
Your Program




























 


PART I: BUILDING COMMITMENT FOR EVALUATION


SUMMARY


Evaluation Tells You

  1. What you have done
  2. How well you have done it
  3. How much you have done
  4. How effective you have been


Definition of Practical Evaluation:

Program evaluation is the systematic collection of data related to a program's activities and outcomes so that decisions can be made to improve efficiency, effectiveness, or adequacy.

Components of Practical Evaluation

1. Data is collected in a systematic manner.
2. Evaluation focuses on the activities, characteristics, and outcomes of the program being evaluated.
3. The purpose of evaluation is to:

  • Reduce uncertainties
  • Improve effectiveness
  • Make decisions

4. Practical evaluation is implemented within the context and constraints of your particular situation.
5. Practical evaluation involves collaboration.


Reasons to Do Evaluation

1.  To measure program achievement
2.  To demonstrate accountability
3.  To examine what resources are needed and how effectively they are being used
4.  To improve program operations


Barriers to Evaluation

1.  Lack of management support
2.  Lack of resources
3.  Lack of skills and resources in the collection, analysis, and interpretation of data
4.  Lack of relevant data
5.  Fear of the consequences of the results of the evaluation

Comparison of the Academic Research Model and Practical Evaluation

                Academic Research            Practical Evaluation
Purpose         Test hypotheses              Improve program and practice
Method          Controlled environment       Context sensitive
Statistics      Sophisticated                Simpler statistics


Overcoming Barriers Through a Collaborative Approach

  1. Reduces suspicion and fear
  2. Increases participant awareness and commitment
  3. Increases the possibility of achieving objectives
  4. Draws on a broader base of knowledge, skills, and experience
  5. Teaches evaluation skills to team members
  6. Teaches community stakeholders about public health
  7. Increases the possibility that findings will be used
  8. Incorporates differing perspectives


Part II



Framework for Planning

and

Implementing Program

Evaluation




PART II OBJECTIVES

By the end of this module, you should be able to:

  • List the six steps of the evaluation framework
  • Name the four standards for "good" evaluation
  • Recognize the component parts of a well written program objective
  • Apply the evaluation framework to a case study

OVERVIEW OF THE EVALUATION FRAMEWORK
Evaluation is the key to creating a very special kind of public health practice, a practice characterized by programs that actually learn and constantly -- systematically and intelligently -- improve their chances for success. All across public health, there are real-world examples of a shift toward learning through practical program evaluation. The changing circumstances of the work demand it.

The following framework shows how evaluation helps programs learn. This framework summarizes the steps in evaluation practice and the standards that define good evaluation.

 


EVALUATION FRAMEWORK

 

[Graphic: Framework for Program Evaluation]

This framework has been developed by the Centers for Disease Control and Prevention Evaluation Working Group.



EVALUATION FRAMEWORK
NOTES

 

Steps in Evaluation:

Step 1: Engage Stakeholders









Step 2: Describe the Program







Step 3: Focus the Evaluation Design

 

 

 


Step 4: Gather Credible Evidence








Step 5: Justify Conclusions








Step 6: Ensure Use and Share Lessons Learned







Standards of "Good" Evaluation

1.  Utility








2. Feasibility

 

 

 

3.  Propriety

 

 

 

4.  Accuracy

 


APPLYING THE EVALUATION FRAMEWORK


Step 1. Engage Stakeholders
Public health work nearly always involves partnerships. Stakeholders are people who have a stake in what will be learned from an evaluation and what will be done with the knowledge. Stakeholders include:

  • Insiders -- people who manage or work in the program or organization
  • Outsiders -- those who are served or affected by the program, or who work in partnership with the program to achieve its goals
  • Primary intended users of the evaluation -- people who are in a position to do or decide something about the program

The evaluation cycle begins by engaging stakeholders. Once involved, they help to carry out each of the other steps.

The evaluation team should include a variety of program staff and external stakeholders. Potential team members should represent:

  • management
  • program staff, including clerical or support positions
  • people from the community, including contractors
  • people receiving services of the program
  • those with relevant technical expertise.

Some or all of these people should also be involved in the planning stage of the program. Each evaluation team member should have a clear understanding of their role and responsibilities in the evaluation process.


Step 2. Describe the Program

Before stakeholders can discuss the value of a program, they must agree on the description and scope of the program. Different stakeholders may have different ideas about what the program is supposed to achieve and why. Evaluations done without agreement on program description are likely to be of questionable use.


Often, the very act of describing the program in a way that can be evaluated produces benefits for the program long before results from the evaluation are available. With a solid understanding of the program, it is possible to summarize its key features and set up a frame of reference for all the evaluation decisions that follow.

A complete program description should include:

  • Need -- What problem/opportunity does the program address (e.g., clean food and water, poor nutrition, dental health, diabetes)?
  • Expectations -- What are the anticipated outcomes or objectives of the program (e.g., fewer reported food-borne and water-borne illnesses, improved pregnancy outcomes, reduction in the incidence of dental caries, prevention of complications of diabetes)?
  • Activities -- What are the tasks involved in operating the program (e.g., regulatory actions, direct services, education)?
  • Context -- What is the operating environment of the program (e.g., limitations of time, skills, resources, political climate)?


Types of Objectives

In terms of evaluation, the most practical way to describe your program is to have well-written, measurable objectives. Outcome and process objectives provide a quantitative measurement of change that the program/health department can and should accomplish by some future date.

Outcome Objectives
An outcome objective is:

a statement of the amount of change expected for a given health problem/condition for a specified population within a given time frame.


An example of an outcome objective:

By December 31, 2003, the incidence of measles among people in Marvelous County born after December 31, 1956 will be reduced from 7.9 cases per 100,000 in 1996 to no more than 2.5 cases per 100,000 population.

Outcome objectives are:

  • long term (usually 3-5 years)
  • realistic
  • measurable

Outcome objectives are usually measured by:

  • levels of mortality, morbidity, and/or disability, e.g., infant mortality rate, low birth weight rate, number of cases of measles
  • levels of health conditions, e.g., hypertension, hypercholesterolemia
  • behavioral measures, such as rates of smoking.
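To make "measurable" concrete, the sketch below computes the measles incidence per 100,000 population for the Marvelous County objective above and checks it against the 2.5 per 100,000 target. The case count and population figure are hypothetical; only the target comes from the objective.

```python
# Hypothetical year-end data for the Marvelous County measles outcome objective.
# The 2.5 per 100,000 target comes from the objective; everything else is made up.

cases = 4                # measles cases in the target population (hypothetical)
population = 210_000     # county residents born after 12/31/1956 (hypothetical)
target_rate = 2.5        # cases per 100,000, from the outcome objective

incidence = cases / population * 100_000
print(f"Observed incidence: {incidence:.1f} per 100,000")
print(f"Objective {'met' if incidence <= target_rate else 'not met'} "
      f"(target: no more than {target_rate} per 100,000)")
```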

Process Objectives

A process objective is:

a statement that measures the amount of change expected in the performance and utilization of interventions that impact on the outcome.

Interventions can include:

  • health services
  • health education
  • counseling
  • regulatory actions
  • legislative and policy changes

An example of a process objective is:

By December 31, 1999, 95% of the children six years of age in Marvelous County will have completed the primary immunization series.


Process objectives are:

  • short-term (usually one year)
  • realistic
  • measurable

There must be a logical, practical relationship between the outcome and process objectives. This relationship is based on educated projections of how much of what types of interventions will result in the expected or desired change in the identified health problem. For any one outcome objective, there may be several process objectives. That is, there may be several different interventions that all lead to the same desired change in health status.

An example of how process objectives relate to an outcome objective is:

Outcome objective:

By December 31, 2003, reduce the percentage of eligible patients ages 3-6 with dental caries in Utopia County from 60% in 1996 to no more than 20%.

Process objectives:

By December 31, 1999, increase the percent of eligible patients less than 4 years of age in Utopia County who have received sealants from 85% in 1997 to at least 90%.

By 1999, 80% of eligible patients in Utopia County will receive oral hygiene instruction and preventive education twice a year.

Other interventions to address this outcome might include: dental hygiene and nutrition education at appropriate grade levels, water fluoridation, and parent contacts/education.


Well-written Outcome and Process Objectives

All outcome and process objectives must include certain components to make them useful tools for measuring changes in health problems and interventions.

When*
    Outcome objective:  the time (month, fiscal year, or calendar year) by or during which the change in health status would be achieved
    Process objective:  the time (month, fiscal year, or calendar year) by or during which the intervention should be accomplished

What
    Outcome objective:  the targeted health problem or health behavior to be decreased, increased, or maintained
    Process objective:  the targeted intervention (health service, health education, counseling, regulatory action, or legislative/policy change) to be accomplished

Whom
    Outcome objective:  the target population who will benefit from the change in health status
    Process objective:  the target population who will benefit from the accomplishment of the intervention

Where
    Outcome objective:  the area in which the target population is located (e.g., the city, the county, a specific clinic setting, or the state)
    Process objective:  the area in which the target population is located (e.g., the city, the county, a specific clinic setting, or the state)

Who**
    Outcome objective:  the staff or agency responsible for correcting the health problem
    Process objective:  the staff or agency responsible for carrying out the proposed intervention

How Much***
    Outcome objective:  the quantity of change in the health problem
    Process objective:  the amount of the intervention to be utilized, performed, or accomplished

* When is expressed as a fiscal year (local, state, or federal) or calendar year. Whether fiscal year or calendar year is used often depends on how the data are collected; e.g., vital statistics are collected on a calendar-year basis, whereas clinic visit data may be collected by calendar year or by fiscal year. Units of time smaller than one year are used only in activities.


** Who is not always spelled out in objectives, particularly in outcome objectives, if the responsible staff or agency is self-evident or is later defined in process objectives or activities. The other five elements are usually included.

***Preferably, how much is written as a target rate, a percentage or a specific number, with the current rate, percentage, or specific number included for reference purposes.

With the increasing emphasis on accountability and performance measurement, objectives must include a measurable component and the data to measure the objective must be available. In reviewing objectives you have written, you may find that you do not have the data sets or data collection systems to measure some of the objectives. You may have to rewrite some of the objectives to reflect what you can measure. Afterward, you can work on developing the data collection systems pertinent to what you need to be able to measure in the future.
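One way to keep objectives measurable is to store each one with its six components plus its baseline and target values. The sketch below is only an illustration of that idea, using the Utopia County sealant process objective; the field names, the "met" rule, and the responsible agency are hypothetical, not from the workbook.

```python
# Illustrative only: a structured record for one process objective, with the
# six components (when, what, whom, where, who, how much) as explicit fields.
from dataclasses import dataclass

@dataclass
class Objective:
    when: str          # time by which the intervention should be accomplished
    what: str          # targeted intervention
    whom: str          # target population
    where: str         # area in which the population is located
    who: str           # staff or agency responsible (sometimes omitted)
    baseline: float    # current rate or percent, for reference
    target: float      # "how much" -- the rate or percent to be achieved

    def met(self, observed: float) -> bool:
        """True if the observed value reaches the target (higher is better here)."""
        return observed >= self.target

sealants = Objective(
    when="By December 31, 1999",
    what="receipt of dental sealants",
    whom="eligible patients less than 4 years of age",
    where="Utopia County",
    who="county dental program",       # hypothetical responsible agency
    baseline=85.0,
    target=90.0,
)
print(sealants.met(92.0))   # True -- a hypothetical observed coverage of 92%
```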


PRACTICE EXERCISE 1
STEPS 1 AND 2

ENGAGING STAKEHOLDERS and
DESCRIBING THE PROGRAM

A. Identify specific people in your program who could be involved in the evaluation.







B. Identify specific people, groups of people, or organizations who are served or affected by your program.







C. Identify specific people or groups of people who will use the evaluation findings to make decisions about your program.







D. What problem/opportunity does your program address? What are the anticipated outcomes of your program?






E. What are some specific activities that you or your program do to achieve the program goal?






F. What are current issues or trends affecting your program?






Step 3: Focus the Evaluation Design

It is not possible for an evaluation to address all questions for all stakeholders. There must be a focus. Is the purpose of the evaluation to improve your program, to improve health status, or to demonstrate accountability? The evaluation team must do advance planning and reach agreement regarding exactly:

  • what questions the evaluation will attempt to answer (e.g., what outcomes will be addressed, what is the real intent or purpose of the evaluation)
  • the process that will be followed (e.g., scheduling of meetings, deadlines)
  • what methods will be used to collect, analyze, and interpret the data (e.g., available census data, client records, logs, interviews, surveys, expenditure reports)
  • who will perform the different activities (e.g., collecting the data, analyzing and interpreting the data, writing the report)
  • how the results will be disseminated (e.g., who is the intended audience, which specific people are in a position to actually use the findings)

By utilizing a team approach to the evaluation, a wide range of resources, skills, experiences, and viewpoints can be brought into the process. This ensures optimal input from both internal and external sources and increases the possibility that the evaluation will be useful, feasible, proper, and accurate.
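The agreements reached in Step 3 can be captured in a short written plan. The sketch below shows one hypothetical way to record them; the field names and every entry are illustrative, not drawn from the workbook.

```python
# Hypothetical example of recording an evaluation team's Step 3 agreements.
evaluation_plan = {
    "purpose": "program improvement",
    "questions": [
        "Did immunization coverage among 2-year-olds increase?",
        "Which outreach activities reached the most families?",
    ],
    "methods": ["clinic records review", "client survey", "expenditure reports"],
    "roles": {
        "data collection": "program nurses",
        "analysis": "epidemiology intern",
        "report writing": "program manager",
    },
    "schedule": {"data collection ends": "June 30", "report due": "August 15"},
    "dissemination": ["program staff", "county health director", "funding agency"],
}

# Print the plan so the team can review and agree to it.
for item, value in evaluation_plan.items():
    print(f"{item}: {value}")
```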

Step 4: Gather Credible Evidence

Credible information is the raw material of a good evaluation. It must be perceived by all stakeholders as trustworthy and relevant to answer their questions. Evidence should be viewed in a broad sense. Evidence consists of information and data from a variety of sources.


Types of data include:

  • Demographic descriptors
  • Indicators of health status
    Morbidity
    Mortality
    Disability
    Health behavior related to illness, injury
  • Qualitative indicators
    Community values
    Public and private policies
  • Quality of life indicators
  • Inventories of interventions
    Implemented by health agency
    Implemented by other agencies, practitioners
  • Capacity of interventions
  • Eligibility
  • Utilization
  • Expenditures

Sources of data include:

  • Routine statistical reports
    Census data
    Demographic information about populations at risk
    Vital statistics
    Reported morbidity
    National Health and Nutrition Examination Survey (NHANES) - National Center for Health Statistics
    National Hispanic Health and Nutrition Examination Survey (HHANES) - National Center for Health Statistics
    Behavioral risk factor survey
    Health Plan Employer Data and Information Set (HEDIS) - National Committee for Quality Assurance (HMO data)
    Hospital discharge diagnoses
    Financial reports of agencies, hospitals, etc.
    Economic analyses

  • Published studies and surveys in local, state, and national literature
  • Official reports of governmental legislative, executive, and judicial branches
  • Documents from task forces, working groups, advisory groups, and health and medical care organizations
  • Media articles and reports
  • Program reports
    Surveillance reports
    Log sheets of various program activities
    Service utilization
        Agency program
        Other community programs relevant to agency
    Program user satisfaction survey
    Special surveys
    Personnel time sheets
    Expenditure reports

It should be recognized that there will be data limitations and that it is acceptable to use imperfect data (e.g., state vs. local data; under-reporting of conditions) as long as these data are analyzed and interpreted appropriately. These limitations should be clearly stated in any report or presentation.


Ways to improve the quality of the data include:

  • Ensuring that everyone is using the same versions of forms
  • Thorough training of interviewers
  • Spot checking of consistency of coding
  • Quality assurance activities (e.g., record reviews)

It all depends on what kind of information stakeholders will find credible.
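As one concrete example of spot-checking coding consistency, the sketch below scans hypothetical interview records for codes outside an agreed code list and for missing required fields. The field names, codes, and records are illustrative, not from the workbook.

```python
# Illustrative spot check for coding consistency on hypothetical interview records.
ALLOWED_SMOKING_CODES = {"current", "former", "never", "unknown"}
REQUIRED_FIELDS = ("record_id", "interviewer", "smoking_status")

records = [
    {"record_id": 1, "interviewer": "A", "smoking_status": "current"},
    {"record_id": 2, "interviewer": "B", "smoking_status": "quit"},   # not an agreed code
    {"record_id": 3, "interviewer": "A"},                             # missing a required field
]

for rec in records:
    missing = [field for field in REQUIRED_FIELDS if field not in rec]
    if missing:
        print(f"Record {rec.get('record_id')}: missing {missing}")
    elif rec["smoking_status"] not in ALLOWED_SMOKING_CODES:
        print(f"Record {rec['record_id']}: unrecognized code {rec['smoking_status']!r}")
```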


PRACTICE EXERCISE 2
STEPS 3 AND 4

FOCUS THE EVALUATION DESIGN and
GATHER CREDIBLE EVIDENCE

A.    What is the purpose of the evaluation (improvement, accountability, or knowledge)?




B.   Identify the specific users of the evaluation findings (e.g., program managers, funding sources, elected officials) and how they will use that information.


C.  What data will you need to collect to measure the outcomes and activities or interventions of your program? How will you collect it?


 

D. What resources (e.g., people, skills, technology, funds) will you need to carry out the evaluation?



E.  Name three types of data you collect in your program.



F.    For each one, how do you ensure that the data collected is credible and accurate?


 


Step 5. Justify Conclusions

Once the evaluation evidence is collected, care must be taken in reaching justified conclusions. The process of reaching justified conclusions involves several questions:

  • What are the findings?
  • What do these findings mean? That is, how significant are they (e.g., statistical significance, program/policy implications, size of the effect)?
  • How do the findings compare to the objectives for the program? How have we decided to what degree the program is successful or not (e.g., are the findings good/bad, high/low, favorable/unfavorable)?
  • What claims or recommendations are indicated for program improvement (e.g., what, if anything should be done)?


Step 6. Ensure Use and Share Lessons Learned

Finally, it is important to ensure that the evaluation findings are used and that the lessons learned are shared. One of the main purposes for doing an evaluation is to improve practice. One way to increase program success is to learn from evaluation feedback and then to use that knowledge for program improvement. Evaluation provides a way to discriminate between program strategies that are successful and those that are not. Evaluations that are not used or are inadequately disseminated are simply not worth doing. The likelihood that the evaluation findings will be used increases through deliberate planning, preparation, and follow-up.


Some of the activities that promote use and dissemination include:

  • Designing the evaluation from the start to achieve intended uses by intended users.
  • Providing continuous feedback to stakeholders about interim findings, provisional interpretations, and decisions to be made that might affect likelihood of use.
  • Scheduling follow-up meetings with primary intended users to facilitate the transfer of evaluation conclusions into appropriate actions or decisions.
  • Disseminating both the procedures used and the lessons learned from the evaluation to stakeholders, using communications strategies tailored to their particular needs.

PRACTICE EXERCISE 3

STEPS 5 AND 6

JUSTIFY CONCLUSIONS and
ENSURE USE AND SHARE
LESSONS LEARNED


A.    If you were to evaluate your program today, to what degree has your program achieved its objectives? On what factors and information did you base your decision? How would you justify your decision? What information do you wish you had to help justify your conclusion?



B.    Based on your conclusion, what recommendations would you make to improve your program?



C.    How and to whom will you communicate the findings of your program evaluation?



D.    What recommendations would you make to improve the evaluation efforts of your program?


STANDARDS

Evaluation is closely tied to program practice. Many evaluation activities are part of everyone's daily job. In a sense, everyone conducts evaluations all day long, but these evaluations are very informal and generally the stakes involved are low. As the stakes increase, for example when there are hard decisions to be made regarding the future of the program, evaluations become formalized.

The center of the evaluation framework depicts standards for a "good" evaluation: utility, feasibility, propriety, and accuracy. These standards help make evaluation practical. They are well-supported decision principles to follow when faced with having to make real-world tradeoffs or compromises. For example, the standards can guard against creating an imbalanced evaluation -- such as one that is accurate and feasible, but useless; or one that would be genuinely useful, but is not feasible or ethical. An optimal (as opposed to ideal) evaluation strategy is one that accomplishes all steps in the evaluation framework in a way that accommodates the program context and meets or exceeds all relevant standards.

Evaluation is a way to improve and account for public health programs using methods that are useful, feasible, proper and accurate. In a sense, we evaluate all the time as a routine part of program planning and operation. But do we evaluate well? And do we agree on what evaluation is? The framework presented in this program and summarized below provides guidance for improving existing evaluation practice and a standard for further improvement.


PART II: FRAMEWORK FOR PLANNING AND IMPLEMENTING PRACTICAL PROGRAM EVALUATION


SUMMARY

Steps in Evaluation Practice

  • Engage Stakeholders
    Those involved or affected, primary intended users
  • Describe the Program
    Need, expectations, activities, stage, context
  • Focus the Evaluation Design
    Purpose, users, uses, questions, methods, protocol
  • Gather Credible Evidence
    Content, sources, quality, quantity, logistics
  • Justify Conclusions
    Standards, techniques
  • Ensure Use and Share Lessons Learned
    Design, preparation, feedback, follow-up, and dissemination

Standards for "Good" Evaluation

  • Utility
    Serve the information needs of intended users
  • Feasibility
    Be realistic, prudent, diplomatic, and frugal
  • Propriety
    Behave legally, ethically, and with due regard for the welfare of those involved and affected
  • Accuracy
    Reveal and convey technically accurate information
The steps and standards are used together throughout the evaluation process.

 


PRACTICE EXERCISE 4
CASE STUDY


In this case study, Ms. Lupe Mandujano-Garcia describes her program's efforts to raise immunization rates among 2-year-olds in Texas. She mentions some of the stakeholders involved in building coalitions for the program. She describes how certain activities and interventions were carried out in different communities.


DIRECTIONS: Based on the interview with Lupe, answer the questions related to "Engage Stakeholders" and "Describe the Program." Then, answer the remaining questions related to the other steps in the evaluation framework, as if you had Lupe's job.

Engage Stakeholders

  1. What stakeholders were involved in Lupe's program and how were they involved?


  2. What were some of their interests or values?


  3. How did Lupe use any or all of these stakeholders in evaluating the program?



    Describe the Program

  4. What was the outcome objective for Lupe's program?



  5. What were the interventions or activities used to achieve the desired outcome?



    Focus the Evaluation

  6.     Based on what you heard about Lupe's program, would you focus your evaluation of this program on program improvement or on accountability? Why?


  7.     Who would use the evaluation results and how would they use these results?


  8.     What kind(s) of data would need to be collected?


  9.     What resources (money, personnel, time, etc.) will be needed to conduct the evaluation?






    Gather Credible Evidence
  10.     What methods or tools will be used to collect the necessary data?



  11.     Who will be involved in collecting the data?



  12.     How will you ensure that your data is credible?



    Justify Conclusions

  13.    If the objective of Lupe's program was to raise the immunization rate of 2-year-olds to 75%, but the results showed that this rate was not achieved, what could this mean and how would you explain this?


  14.     Based on your answer to the previous question, what recommendations would you make to improve your program and/or achieve the program objective?






    Ensure Use and Share Lessons Learned
  15.     How and when would you share the results of the evaluation with the stakeholders you identified?

PANEL DISCUSSION

TOPIC: Implementing Practical Evaluation



































 

 

 

Additional

Resources

 

 

 


 

SUGGESTED ENRICHMENT ACTIVITIES

A.   DIRECTIONS: Choose another program in your agency and interview two or three people in that program about how they evaluate their program. Use the following questions to guide your interviewing process.

  1. What are your objectives for the program?
  2. What are the major program activities?
  3. Why will those activities achieve those objectives?
  4. What resources are available to the program?
    •    Number of staff
    •    Total budget
    •    Sources of funds
       
  5. What evidence is necessary to determine whether objectives are met?
  6. What happens if objectives are met? Not met?
  7. How is the program related to local priorities?
  8. What data or records are maintained?
    •    Costs
    •    Services delivered
    •    Outcomes
    •    Other
  9. How often are these data collected?
  10. How is this information used? Does anything change based on these data or records?
  11. What major problems are you experiencing?
  12. What results have been produced to date?
  13. What accomplishments are likely in the next few years?

B.    DIRECTIONS: Interview a program manager about program evaluation. Use the following questions to guide your interviewing process.

  1. From your perspective, what is the program trying to accomplish and what resources does it have?
  2. What results have been produced to date?
  3. What accomplishments are likely within the next year or two?
  4. Why would the program produce those results?
  5. What are the program's main problems?
  6. How long does it take to solve those problems?
  7. What kinds of information do you get on the program's performance and results? What kinds of information do you need?
  8. How do you (how would you) use that information?
  9. What kinds of information are requested by the Office of Management and Budget and the Legislature (for public programs), or by the board of directors (for private programs)?

C.    DIRECTIONS: Write a set of objectives for your own program. Be sure that your objectives are measurable and that they contain all the elements of well-written objectives (when, what, whom, where, who, and how much).

D.    DIRECTIONS: Plan the evaluation for your program. Ensure that you have incorporated all the elements below into your planning process. You should determine:

  1. The names of the team members and their roles.
  2. The name of the team coordinator and his/her responsibilities.
  3. Data:
    •    What types of data will be collected?
    •    How will the data be collected?
    •    How often will the data be collected?
  4. Schedule: Calendar dates for:
    •    team meetings
    •    data collection
    •    data analysis
    •    report preparation
    •    report distribution

  5. Data Analyses:
    •    What types of data analyses will be done?
    •    Who will do the different analyses?
  6. Evaluation Report:
    •    Who will be responsible for writing the report?
    •    How will the evaluation report be distributed?
  7. Implementing evaluation results - How will the changes be implemented?


RECOMMENDED PROCESS AND ACTIVITIES
FOR SMALL GROUPS VIEWING VIDEOTAPED
VERSION OF THE COURSE

 

Selection of a Group Leader
A group leader should be selected based on: (1) previous experience and skills in managing small group discussions; and (2) previous completion of both modules (five hours) of instruction in "Practical Evaluation of Public Health Programs." (A group leader is not required, but could enhance the learning process.)

Facilities and Logistics
These recommendations assume a group size of six to twelve participants and at least two instructional sessions totaling approximately five hours of instructional time. An appropriate viewing/discussion room should be obtained so that all participants can comfortably see and hear the video tapes and, ideally, be seated at a table so they can write easily. Each participant must have a workbook.

Introductions
We strongly recommend that the group leader begin each session by introducing him/herself and then asking all participants to introduce themselves, indicating where they are currently employed and the degree to which they have responsibility for or are involved in some aspect of program evaluation. Ask the participants what they specifically want to learn from the instructional program and list these issues in summary form on a flip chart for later reference and discussion. The facilitator should ask participants to share their own experiences with program evaluation.


Viewing the Video Tape

Ask participants to write down questions or problematic areas in their workbooks as the group is viewing the video. At periodic intervals, stop the video tape and post the participants' questions on the flip chart or white board.


Practice Exercise
It is important that the practice exercises embedded in the video tapes and workbooks be completed by all participants. We recommend that each participant complete the practice exercises individually, then compare results during a group discussion.


Discussion Following Completion of the Video Tapes

This is the time when participants' questions are answered through a group discussion. Have the group select from the list of posted questions on the flip chart or white board. Allow three to five minutes of discussion for each issue. For particularly difficult or problematic questions, suggest additional readings found in the workbook bibliography.


Review of "Enrichment Activities" Prior to Viewing Second Videotape
We recommend that the second session of the small group begin with a short presentation of some of the "homework" done between sessions by the participants. This will increase understanding of the issues involved and provide additional motivation for those who did not finish the enrichment activities to do so following the second group session.


Review Logistics of Obtaining Continuing Education Credits and Completion of the CDC Evaluation

Many participants will be interested in obtaining continuing education credits. The group leader should be familiar with the procedures for filling out the Optical Scan forms ("bubblesheets") that must be returned to CDC to obtain credits. S/he should review the evaluation questionnaire and post-test, the instructions on the Optical Scan forms, and the supplemental instructions included in the materials packet. The leader should emphasize to participants that the Optical Scan forms must be filled out correctly, or credit may not be granted. For questions about the procedure or forms, call 1-800-41-TRAIN.


RECOMMENDED PROCESS AND ACTIVITIES FOR
INDIVIDUALS VIEWING VIDEOTAPED VERSION OF
THE COURSE (Self-Instruction)


Logistics
The two modules of instruction, (1) Building Commitment for Program Evaluation and (2) Framework for Planning and Implementing Practical Program Evaluation, can be used successfully by an individual working at his/her own pace. Of course, you must have both video tapes, access to a VHS video tape player, and a copy of the program workbook.

Contact the Centers for Disease Control and Prevention (CDC) Public Health Training Network for videos and printed materials at 1-800-41-TRAIN or http://www.cdc.gov/phtn.


Pros and Cons of Self-Instruction
The advantage of self-instruction is that you, the learner, control the pace of the presentation. You can stop the videotape, rewind it, and replay difficult concepts as many times as needed until you are satisfied that you fully understand them. You can also take notes or write down questions in your workbook as the tape plays; both activities help you synthesize the information presented on the video. The workbook, however, is designed to minimize the need for note taking because the major concepts are highlighted and briefly summarized there. It also serves as a convenient vehicle for reviewing important evaluation ideas without viewing all five hours of videotape again.

The downside of self-instruction is that you do not have anyone else with whom to discuss questions or issues, and such discussions can enhance learning. To offset this disadvantage, we have included several additional (optional) practice exercises with feedback and several additional enrichment activities that, when completed, will help ensure that your learning in the self-instruction mode is equal to or better than that of individuals who complete the two modules in the group-based instructional mode.

Using the Modules as Self-Instruction
Take your time viewing the videotapes. Write down in the workbook any questions that occur to you. Stop the tape, rewind, and replay segments you are unsure of. Do the exercises in the workbook and the optional enrichment activities; these will greatly assist you in applying the ideas presented to your own job situation. If you are unsure of the correct answer to a question on the post-test, feel free to find the relevant section in the workbook and reread it. Finally, remember that these two modules are an introduction to the major ideas in program evaluation. If all your questions are not answered, do some additional reading using the bibliography in the workbook.

Obtaining Continuing Education Credits and Completion of the CDC Evaluation
When you have completed the course, familiarize yourself with the procedures for filling out the Optical Scan forms ("bubble sheets") that must be returned to CDC to obtain credit. Review the evaluation questionnaire and post-test, the instructions on the Optical Scan forms, and the supplemental instructions included in the materials packet. The Optical Scan forms must be filled out correctly or credit may not be granted. For questions about the procedure or forms, call 1-800-41-TRAIN.


BIBLIOGRAPHY & WEB SITES OF ADDITIONAL
EVALUATION RESOURCES


1. A Practical Guide to Prevention Effectiveness: Decision and Economic Analyses
    Haddix, Teutsch, Shaffer and Dunet, editors
    U.S. Department of Health & Human Services, Public Health Service,
    Centers for Disease Control, 1994, Document Number PB95-138149

2. A Vision of Evaluation: A Report of Learnings from Independent Sector's Work on Evaluation
    Edited by Sandra Trice Gray, 1993
    Published by Independent Sector, 1828 L Street NW, Washington, DC 20036
    (202) 223-8100

3. Centers for Disease Control and Prevention, Public Health Training Network Web Site http://www.cdc.gov/phtn

4. Evaluating Community Efforts to Prevent Cardiovascular Disease
    U.S. Department of Health & Human Services, Public Health Service,
    Centers for Disease Control and Prevention, National Center for
    Chronic Disease Prevention and Health Promotion,
    Work Group on Health Promotion and Community Development, University of Kansas, 1995
    Order from:
    Technical Information Service Branch
    National Center for Chronic Disease Prevention
      and Health Promotion
    CDC Mail-Stop K-13, 4770 Buford Highway, N.E.
    Atlanta, Georgia 30341-3724

5. Evaluating Health Promotion Programs
    The Health Communication Unit at the Centre for Health Promotion,
    University of Toronto, 100 College Street, Toronto, Ontario M5G 1L5
    Web Site: http://www.utoronto.ca/chp/hcu/hcu-publications.aspl#workbooks.com

6. Handbook of Practical Program Evaluation
    Edited by Joseph S. Wholey, Harry P. Hatry, & Kathryn E. Newcomer
    Jossey-Bass Inc., Publishers, 1994, 350 Sansome Street, San Francisco, California 94104

7. How to Assess Program Implementation
    Jean A. King, Lynn Lyons Morris, & Carol Taylor Fitz-Gibbon
    Sage Publications, 1987, 2111 W. Hillcrest Dr., Newbury Park, California 91320

8. How to Communicate Evaluation Findings
    Carol Taylor Fitz-Gibbon, Lynn Lyons Morris, & Marie E. Freeman
    Sage Publications, 1987, 2111 W. Hillcrest Dr., Newbury Park, California 91320

9. How to Design a Program Evaluation
    Carol Taylor Fitz-Gibbon and Lynn Lyons Morris
    Sage Publications, 1987, 2111 W. Hillcrest Dr., Newbury Park, California 91320

10. How to Measure Performance and Use Tests
      Carol Taylor Fitz-Gibbon, Lynn Lyons Morris, & Elaine Lindheim
      Sage Publications, 1987, 2111 W. Hillcrest Dr., Newbury Park, California 91320

11. Measurements in Prevention: A Manual on Selecting and Using Instruments to Evaluate Prevention Programs
      U.S. Department of Health & Human Services, Public Health Service, Substance Abuse and Mental Health Services
      Administration, Center for Substance Abuse Prevention, 1993
      CSAP Technical Report 8, DHHS Publication Number (SMA) 93-2041
      Order from the National Clearinghouse for Alcohol and Substance Abuse Education, 1-800-729-6686

12. Measuring Program Outcomes: A Practical Approach
      Harry Hatry, Therese van Houten, Margaret C. Plantz, & Martha Taylor Greenway
      United Way of America, 1996, Item No. 0989

13. Performance Improvement 1997: Evaluation Activities of the U.S. Department of Health and Human Services
      Printed in June 1997
      U.S. Department of Health and Human Services, Office of the Assistant Secretary for Planning and Evaluation
      HHS/OASPE Publication Number 97-001

14. Practical Evaluation
      Michael Quinn Patton
      Sage Publications, 1982
      2111 W. Hillcrest Dr., Newbury Park, California 91320

15. Preparing Instructional Objectives
      Robert F. Mager
      Fearon Publishers, 1965
      2165 Park Blvd., Palo Alto, California

16. Prevention Plus III: Assessing Alcohol and Other Drug Prevention
      Programs at the School and Community Level:
      A Four Step Guide to Useful Program Assessment
      U.S. Department of Health & Human Services, Public Health Service, Alcohol,
      Drug Abuse and Mental Health Administration, Printed in 1991.
      DHHS Publication Number (ADM) 91-1817

17. The Program Evaluation Standards, 2nd Edition: How to Assess Evaluations of Educational Programs
      The Joint Committee on Standards for Educational Evaluation
      James R. Sanders, Chair
      Sage Publications, 1995
      2111 W. Hillcrest Dr., Newbury Park, California 91320

18. Utilization-Focused Evaluation, 3rd edition.
      Michael Quinn Patton
      Sage Publications, Thousand Oaks, CA. 1997

