National Center for Chronic Disease Prevention and Health Promotion
Table 1. Distinguishing Principles of Research and Program Evaluation

| Concept | Research Principles | Program Evaluation Principles |
|---|---|---|
| Planning | Scientific method | Framework for program evaluation |
| Decision Making | Investigator-controlled | Stakeholder-controlled |
| Standards | Validity; repeatability | Program evaluation standards |
| Questions | Facts | Values |
| Design | Isolate changes and control circumstances | Incorporate changes and account for circumstances |
| Data Collection | Sources; indicators/measures | Sources; indicators/measures |
| Analysis & Synthesis | Timing; scope | Timing; scope |
| Judgments | Implicit | Explicit |
| Conclusions | Attribution | Attribution and contribution |
| Uses | Disseminate to interested audiences | Feedback to stakeholders; disseminate to interested audiences |
Surveillance and evaluation are terms that are often used together, but they are two distinct concepts, and it is important to clarify the purpose of each.
Surveillance is the continuous monitoring or routine collection of data on various factors (e.g., behaviors, attitudes, deaths) over a regular interval of time. Surveillance systems have existing resources and infrastructure. Although data gathered by surveillance systems can be useful for evaluation, they serve other purposes besides evaluation. Some surveillance systems (e.g., the Current Population Survey [CPS] and state cancer registries) have limited flexibility when it comes to adding questions that a particular program evaluation might like to have answered. Additional examples of surveillance systems include the Behavioral Risk Factor Surveillance System (BRFSS), Youth Tobacco Survey (YTS), and Youth Risk Behavior Survey (YRBS).
Evaluation provides tailored information to answer specific questions about a program. Data collection in evaluation is more flexible than in surveillance and may allow program areas to be assessed in greater depth. For example, states can use detailed surveys to evaluate how well a program was implemented and the impact of a program on participants' knowledge, attitudes, and behavior. States can also use qualitative methods (e.g., focus groups, feedback from program participants, and semistructured or open-ended interviews with program participants) to gain insight into the strengths and weaknesses of a particular program activity.
Surveillance and evaluation can and should be conducted simultaneously. To assess tobacco-use prevention and control efforts adequately, states will usually need to supplement surveillance data with data collected to answer specific evaluation questions. States can collect data on, for example, knowledge, attitudes, behaviors, and environmental indicators (e.g., local legislative information, public opinion/poll data, and data on community norms). They can also collect program planning and implementation information to document and measure the effectiveness of a program, including its policy and media efforts.
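The idea of pairing surveillance trends with program-specific questions can be sketched in a few lines. The sketch below is purely illustrative: the yearly prevalence figures and the 2001 program-launch year are hypothetical, not drawn from BRFSS, YTS, or any real surveillance system.

```python
# Hypothetical yearly adult smoking-prevalence estimates (percent) from a
# state surveillance system. A tobacco control program launches in 2001,
# so an evaluator might compare average prevalence before and after.
from statistics import mean

surveillance = {
    1998: 24.1, 1999: 23.8, 2000: 23.9,  # pre-program years
    2001: 22.7, 2002: 21.9, 2003: 21.2,  # post-program years
}

def mean_prevalence(data, years):
    """Average prevalence across the given years."""
    return mean(data[y] for y in years)

pre = mean_prevalence(surveillance, [1998, 1999, 2000])
post = mean_prevalence(surveillance, [2001, 2002, 2003])
change = post - pre  # negative value indicates a decline in prevalence
```

Note that a simple pre/post difference like this cannot by itself attribute the decline to the program; that is exactly why the text recommends supplementing surveillance data with evaluation-specific data collection.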
Data gathered during evaluation enable managers and staff to create the best possible programs, to learn from mistakes, to make modifications as needed, to monitor progress toward program goals, and to judge the success of the program in achieving its short-term, intermediate, and long-term outcomes. Tobacco-use prevention and control programs are designed to promote social and behavioral change and create an environment that reinforces nonsmoking behaviors and supports healthy lifestyles. These changes will lead to reductions in tobacco use and exposure to ETS. Through program evaluation, we can track these changes and, with careful evaluation designs, assess the effectiveness and impact of a particular program, intervention, or strategy (Box 1).
Recognizing the importance of evaluation in public health practice and the need for appropriate methods, the World Health Organization (WHO) established the Working Group on Health Promotion Evaluation. The Working Group prepared a set of conclusions and related recommendations to guide policymakers and practitioners.19 Recommendations immediately relevant to the evaluation of comprehensive tobacco control programs include
This manual illustrates how to apply CDC's Framework for Program Evaluation in Public Health Practice3 to the field of tobacco prevention and control. The framework is organized into the following six steps:
1. Engage stakeholders.
2. Describe the program.
3. Focus the evaluation design.
4. Gather credible evidence.
5. Justify conclusions.
6. Ensure use and share lessons learned.
These six steps must be taken in any evaluation of tobacco prevention and control efforts. The steps are interdependent and not necessarily linear. Looking at Figure 1, you can see that each step builds on the successful completion of earlier steps. Each step in the framework is also associated with standards for "good" evaluation. There are four standards of evaluation that will help you design a good and practical evaluation: utility, feasibility, propriety, and accuracy.20
Utility: Does the evaluation have a constructive purpose? Will the evaluation meet the information needs of the various stakeholders? Will the evaluation provide relevant information in a timely manner?
Feasibility: Are the planned evaluation activities realistic? Are resources used prudently? Is the evaluation minimally disruptive to your program?
Propriety: Is the evaluation ethical? Does the evaluation protect the rights of individuals and protect the welfare of those involved?
Accuracy: Will the evaluation produce valid and reliable findings?
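The four standards above work well as a planning checklist. The sketch below encodes them as a simple lookup; the question wording paraphrases the text, and the yes/no answer format is a hypothetical simplification of what is really a judgment call.

```python
# Illustrative checklist for the four evaluation standards (utility,
# feasibility, propriety, accuracy). The pass/fail logic is a deliberate
# simplification for demonstration purposes.
STANDARDS = {
    "utility": "Will the evaluation meet stakeholders' information needs in a timely manner?",
    "feasibility": "Are the planned activities realistic, prudent with resources, and minimally disruptive?",
    "propriety": "Does the evaluation protect the rights and welfare of those involved?",
    "accuracy": "Will the evaluation produce valid and reliable findings?",
}

def unmet_standards(answers):
    """Return the standards whose checklist question was answered 'no' (or left unanswered)."""
    return [s for s in STANDARDS if not answers.get(s, False)]

# Example: a draft plan that has not yet resolved its resource constraints.
plan = {"utility": True, "feasibility": False, "propriety": True, "accuracy": True}
```

In practice each standard is weighed against the others rather than scored pass/fail, but making the gaps explicit early, as `unmet_standards` does, mirrors the intent of reviewing all four before finalizing a design.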
The evaluation team should include internal program staff, external stakeholders, and possibly consultants or contractors with evaluation expertise. An initial step in the formation of a team is deciding who will be responsible for planning and implementing evaluation activities. At least one program staff person should be selected as the lead evaluator to coordinate program evaluation efforts on behalf of the health department. This lead evaluator should be responsible for evaluation activities, including planning and budgeting for evaluation, developing program objectives, addressing data-collection needs, reporting findings, and working with consultants. The lead evaluator is ultimately responsible for engaging stakeholders, consultants, and other collaborators who bring the skills and interests needed to plan and conduct the evaluation. Although this staff person should have the skills necessary to competently coordinate evaluation activities, he or she can, if necessary, look elsewhere for technical expertise to design and implement specific evaluation tasks. However, developing in-house evaluation expertise and capacity is a beneficial goal for the health department. See Box 2 for a list of the characteristics of a good evaluator.
Additional evaluation expertise can be found in other programs within the health department, through external partners (e.g., universities, organizations, and companies), from other states' tobacco control programs, and through technical assistance offered by CDC. An additional resource for states is CDC's Prevention Research Centers (PRC) program, a national network of 24 academic research centers committed to prevention research and the translation of that research into programs and policies. The centers work with state health departments and members of their communities to develop and evaluate state and local interventions that address the leading causes of death and disability in the nation. Linking university researchers, health agencies, community organizations, and national nonprofit organizations facilitates the translation of promising research findings into practical, innovative, and effective programs. Additional information on the PRCs is available at www.cdc.gov/prc/index.htm.
To supplement the internal evaluation capacity of the health department, you can also use outside consultants as volunteers, advisory panel members, or contractors. External consultants can provide high levels of evaluation expertise from an objective point of view. Important factors to consider when selecting consultants are their level of professional training, experience, and ability to meet your needs. Overall, it is important to find a consultant whose approach to evaluation, background, and training best fits your program's evaluation needs and goals (Box 2). The Evaluation Contracts Checklist presented in Appendix D was designed to help evaluators and clients identify key issues when contracting for an evaluation or pieces of an evaluation. Advance agreement on the scope and process of the evaluation can mean the difference between an evaluation's success and failure.
Box 2. Characteristics of a good evaluator
To generate discussion around evaluation planning and implementation, several states have formed evaluation advisory panels. Advisory panels typically gather input from selected local, regional, or national experts who would otherwise be difficult to access. The formation of an evaluation advisory panel will lend additional credibility to your efforts and prove useful in cultivating widespread support for evaluation activities.
In summary, select a lead evaluator who has experience in conducting the type of evaluation you need and a history of evaluating similar programs. In addition, be sure to check all references carefully before you enter into a contract with any consultant. All of the characteristics of a good evaluator listed are important; however, given the value of working with a team, the evaluator's ability to work with a diverse group of stakeholders warrants highlighting. The lead evaluator should be willing and able to draw on community values, traditions, and customs and to work with knowledgeable community members in designing and conducting the evaluation.
The evaluation team members should clearly define their respective roles. One approach is to develop a written agreement that describes who will conduct the evaluation and assigns specific roles and responsibilities to individual team members. The agreement may either be formal or informal, but it is necessary to clarify 1) the purpose of the evaluation, 2) the potential users of the evaluation findings and plans for dissemination, 3) the way the evaluation will be conducted, 4) the resources available, and 5) protection for human subjects. The agreement should also include a time line and a budget for the evaluation.
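The elements of such a written agreement can be captured as a simple record. The sketch below is a hypothetical format, not a CDC-prescribed template; the field names are invented for illustration, mirroring the five clarifications plus the time line and budget mentioned above.

```python
# Illustrative record for an evaluation team agreement. Field names are
# hypothetical; a real agreement would be a prose document.
from dataclasses import dataclass, field

@dataclass
class EvaluationAgreement:
    purpose: str                      # 1) purpose of the evaluation
    users_and_dissemination: str      # 2) potential users and dissemination plans
    methods: str                      # 3) how the evaluation will be conducted
    resources: str                    # 4) resources available
    human_subjects_protection: str    # 5) protection for human subjects
    timeline: str                     # time line for the evaluation
    budget_usd: float                 # evaluation budget
    team_roles: dict = field(default_factory=dict)  # member -> responsibility

    def missing_elements(self):
        """List required narrative fields that were left empty."""
        required = ["purpose", "users_and_dissemination", "methods",
                    "resources", "human_subjects_protection", "timeline"]
        return [f for f in required if not getattr(self, f).strip()]
```

A check like `missing_elements` simply makes explicit the same review a team would do by hand before signing off on the agreement.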
This page last reviewed September 11, 2003.
United States Department of Health and Human Services