Introduction to Program Evaluation for Comprehensive Tobacco Control Programs



Introduction

The health consequences of tobacco use

Tobacco use is the single most preventable cause of death and disease in our society. Annually, in the United States, tobacco use causes more than 430,000 deaths.1 Direct medical costs related to smoking total at least $50 billion per year;5 lost productivity adds another $50 billion.6 Tobacco use is addictive: nearly 70% of smokers want to quit smoking, but only 2.5% are able to quit permanently each year.7 Most smokers start smoking as adolescents.8 One in three teenagers who are regular smokers will eventually die of smoking-related causes.9

Other tobacco products also have serious health consequences. Use of smokeless tobacco is associated with leukoplakia and oral cancer.10,11 There is also strong evidence of causal relationships between regular cigar use and cancers of the lungs, larynx, oral cavity, and esophagus.12 These consequences are of particular concern because in 1999, 15.3% of U.S. high school students smoked cigars and 6.6% used smokeless tobacco.13

The risks of tobacco use extend beyond the actual users. Nearly 9 of 10 nonsmoking Americans have been exposed to environmental tobacco smoke (ETS).14 Exposure to ETS increases nonsmokers' risk for lung cancer and heart disease.15 Among children, ETS is also associated with serious respiratory problems, including asthma, pneumonia, and bronchitis.15,16 In addition, scientific evidence now links ETS with sudden infant death syndrome (SIDS) and low birth weight.15



How to prevent and control tobacco use

Data from California, Massachusetts, Oregon, Arizona, and a growing number of other states have shown that implementing comprehensive tobacco control programs produces substantial reductions in tobacco use. Comprehensive tobacco control programs seek ultimately to reduce disease, disability, and death related to tobacco use by fulfilling the four CDC program goals:

  • Preventing the initiation of tobacco use among young people.
  • Promoting quitting among young people and adults.
  • Eliminating nonsmokers' exposure to environmental tobacco smoke (ETS).
  • Identifying and eliminating the disparities related to tobacco use and its effects among different population groups.

To achieve these goals, CDC recommends that states establish tobacco control programs that are comprehensive, sustainable, and accountable. On the basis of its analyses of comprehensive state tobacco control programs, CDC has identified a number of "best practices" to prevent and control tobacco use.2 Best Practices for Comprehensive Tobacco Control Programs2 is a guide to help states plan and budget for comprehensive tobacco control programs. Best Practices provides a justification for each program element, budget estimates for successful implementation, core resources to assist implementation, and references to scientific literature.

As outlined in Best Practices, a comprehensive tobacco control program must include surveillance and evaluation to ensure that tobacco control programs are achieving their goals.4,17



What is program evaluation?

Program evaluation is "the systematic collection of information about the activities, characteristics, and outcomes of programs to make judgments about the program, improve program effectiveness, and/or inform decisions about future program development."18 Program evaluation does not occur in a vacuum and is influenced by real-world constraints. Evaluation should be practical and feasible and must be conducted within the confines of resources, time, and political context. Moreover, evaluation should serve a useful purpose, be conducted in an ethical manner, and produce accurate findings. Evaluation findings should be used to make decisions about program implementation and to improve program effectiveness.

These are some of the questions program evaluation can answer:

  • Is your program making a difference?
  • Is your program effective in reducing tobacco consumption?
  • Can your program be improved?
  • What exactly is your program achieving?
  • Is your program accomplishing what it was intended to accomplish?
  • Was your program implemented as planned?
  • Are you using resources efficiently and effectively?
  • Is your program's performance on par with established standards?

The difference between research and program evaluation

Perhaps the greatest misunderstanding about program evaluation is that it must follow an academic research model. Academic research focuses primarily on testing hypotheses. A key purpose of practical program evaluation is to improve practice. We tend to think of research as requiring a controlled environment or control groups. In tobacco prevention and control, this is seldom realistic. Table 1 shows the principles that distinguish research (conducted, for example, to find the cause of a disease) and evaluation (conducted, for example, to find whether a particular intervention works or whether the program is reaching its intended audience).

Table 1. Distinguishing Principles of Research and Program Evaluation

Planning
Research: Scientific method
  • State hypothesis.
  • Collect data.
  • Analyze data.
  • Draw conclusions.
Program evaluation: Framework for program evaluation
  • Engage stakeholders.
  • Describe the program.
  • Focus the evaluation design.
  • Gather credible evidence.
  • Justify conclusions.
  • Ensure use and share lessons learned.

Decision Making
Research: Investigator-controlled
  • Authoritative.
Program evaluation: Stakeholder-controlled
  • Collaborative.

Standards
Research: Validity and repeatability
  • Internal validity (accuracy, precision).
  • External validity (generalizability).
  • Repeatability.
Program evaluation: Program evaluation standards
  • Utility.
  • Feasibility.
  • Propriety.
  • Accuracy.

Questions
Research: Facts
  • Descriptions.
  • Associations.
  • Effects.
Program evaluation: Values
  • Merit (i.e., quality).
  • Worth (i.e., value).
  • Significance (i.e., importance).

Design
Research: Isolate changes and control circumstances
  • Narrow experimental influences.
  • Ensure stability over time.
  • Minimize context dependence.
  • Treat contextual factors as confounding (e.g., randomization, adjustment, statistical control).
  • Comparison groups are a necessity.
Program evaluation: Incorporate changes and account for circumstances
  • Expand to see all domains of influence.
  • Encourage flexibility and improvement.
  • Maximize context sensitivity.
  • Treat contextual factors as essential information (e.g., system diagrams, logic models, hierarchical or ecological modeling).
  • Comparison groups are optional (and sometimes harmful).

Data Collection
Research:
  Sources
    • Limited number (accuracy preferred).
    • Sampling strategies are critical.
    • Concern for protecting human subjects.
  Indicators/measures
    • Quantitative.
    • Qualitative.
Program evaluation:
  Sources
    • Multiple (triangulation preferred).
    • Sampling strategies are critical.
    • Concern for protecting human subjects, organizations, and communities.
  Indicators/measures
    • Mixed methods (qualitative, quantitative, and integrated).

Analysis & Synthesis
Research:
  Timing
    • One-time (at the end).
  Scope
    • Focus on specific variables.
Program evaluation:
  Timing
    • Ongoing (formative and summative).
  Scope
    • Integrate all data.

Judgments
Research: Implicit
  • Attempt to remain value-free.
Program evaluation: Explicit
  • Examine agreement on values.
  • State precisely whose values are used.

Conclusions
Research and program evaluation: Attribution and contribution
  • Establish time sequence.
  • Demonstrate plausible mechanisms.
  • Account for alternative explanations.
  • Show similar effects in similar contexts.

Uses
Research: Disseminate to interested audiences
  • Content and format vary to maximize comprehension.
Program evaluation:
  Feedback to stakeholders
    • Focus on intended uses by intended users.
    • Build capacity.
  Disseminate to interested audiences
    • Content and format vary to maximize comprehension.
    • Emphasis on full disclosure.
    • Requirement for balanced assessment.

What is surveillance?

Surveillance is the continuous monitoring or routine collection of data on various factors (e.g., behaviors, attitudes, deaths) over a regular interval of time. Surveillance systems have existing resources and infrastructure. Although data gathered by surveillance systems can be useful for evaluation, they serve other purposes besides evaluation. Some surveillance systems (e.g., Current Population Survey [CPS], and state cancer registries) have limited flexibility when it comes to adding questions that a particular program evaluation might like to have answered. Additional examples of surveillance systems include the Behavioral Risk Factor Surveillance System (BRFSS), Youth Tobacco Survey (YTS), and Youth Risk Behavior Survey (YRBS).
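
To make the surveillance concept concrete, the sketch below shows, in Python, how a weighted prevalence estimate might be computed for each survey year. The records and field names are invented for illustration; real systems such as the BRFSS use far more elaborate sampling designs and weighting.

# Minimal sketch of surveillance-style trend monitoring; the records
# and fields below are hypothetical stand-ins for survey data.
from collections import defaultdict

# Each record: (survey_year, is_current_smoker, survey_weight)
records = [
    (2001, True, 1.2), (2001, False, 0.9), (2001, False, 1.1),
    (2002, True, 1.0), (2002, False, 1.3), (2002, False, 0.8),
]

def weighted_prevalence(rows):
    """Weighted current-smoking prevalence per survey year."""
    totals = defaultdict(float)   # sum of weights per year
    smokers = defaultdict(float)  # sum of weights among smokers per year
    for year, is_smoker, weight in rows:
        totals[year] += weight
        if is_smoker:
            smokers[year] += weight
    return {year: smokers[year] / totals[year] for year in sorted(totals)}

for year, prevalence in weighted_prevalence(records).items():
    print(f"{year}: {prevalence:.1%} current smokers")

Run over successive survey years, a calculation like this yields the trend line that a surveillance system monitors at regular intervals.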

The relationship between surveillance and evaluation

Although surveillance and evaluation are often mentioned together, they are two distinct concepts, and it is important to clarify the purpose of each.

Evaluation provides tailored information to answer specific questions about a program. Data collection in evaluation is more flexible than in surveillance and may allow program areas to be assessed in greater depth. For example, states can use detailed surveys to evaluate how well a program was implemented and the impact of a program on participants' knowledge, attitudes, and behavior. States can also use qualitative methods (e.g., focus groups, feedback from program participants, and semistructured or open-ended interviews with program participants) to gain insight into the strengths and weaknesses of a particular program activity.

Surveillance and evaluation can and should be conducted simultaneously. To assess tobacco-use prevention and control efforts adequately, states will usually need to supplement surveillance data with data collected to answer specific evaluation questions. States can collect data on, for example, knowledge, attitudes, behaviors, and environmental indicators (e.g., local legislative information, public opinion/poll data, and data on community norms). They can also collect program planning and implementation information to document and measure the effectiveness of a program, including its policy and media efforts.



Why evaluate tobacco control programs?

Box 1. Why evaluate tobacco prevention and control programs?

  • To monitor progress toward the program's goals.
  • To demonstrate that a particular tobacco control program or activity is effective.
  • To determine whether program components are producing the desired effects.
  • To permit comparisons among groups, particularly among populations with disproportionately high tobacco use and adverse health effects.
  • To justify the need for further funding and support.
  • To learn how to improve programs.
  • To ensure that only effective programs are maintained and resources are not wasted on ineffective programs.

Data gathered during evaluation enable managers and staff to create the best possible programs, to learn from mistakes, to make modifications as needed, to monitor progress toward program goals, and to judge the success of the program in achieving its short-term, intermediate, and long-term outcomes. Tobacco-use prevention and control programs are designed to promote social and behavioral change and create an environment that reinforces nonsmoking behaviors and supports healthy lifestyles. These changes will lead to reductions in tobacco use and exposure to ETS. Through program evaluation, we can track these changes and, with careful evaluation designs, assess the effectiveness and impact of a particular program, intervention, or strategy (Box 1).
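
As one way to picture the chain from program activities to short-term, intermediate, and long-term outcomes, the Python sketch below encodes a hypothetical logic model as a plain data structure; the activities and indicators are invented examples, not a prescribed set.

# Minimal sketch of a program logic model as a data structure;
# every activity and indicator below is a hypothetical example.
logic_model = {
    "school-based prevention curriculum": {
        "short-term": ["student knowledge of tobacco harms"],
        "intermediate": ["youth smoking initiation rate"],
        "long-term": ["adult smoking prevalence"],
    },
    "quitline promotion campaign": {
        "short-term": ["quitline call volume"],
        "intermediate": ["quit attempts among adult smokers"],
        "long-term": ["smoking-attributable mortality"],
    },
}

# An evaluator can then ask, for each activity, which indicators
# already have a data source and which still need one.
for activity, outcomes in logic_model.items():
    print(activity)
    for horizon, indicators in outcomes.items():
        print(f"  {horizon}: {', '.join(indicators)}")

Laying the model out this way makes explicit which outcomes each activity is expected to influence, and on what time scale.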

Recognizing the importance of evaluation in public health practice and the need for appropriate methods, the World Health Organization (WHO) established the Working Group on Health Promotion Evaluation. The Working Group prepared a set of conclusions and related recommendations to guide policymakers and practitioners.19 Recommendations immediately relevant to the evaluation of comprehensive tobacco control programs include—

  • Encourage the adoption of participatory approaches to evaluation that provide meaningful opportunities for involvement by all of those with a direct interest in initiatives (programs, policies, and other organized activities).
  • Require that a minimum of 10% of the total financial resources for a health promotion initiative be allocated to evaluation.
  • Support the use of multiple methods to evaluate health promotion initiatives.
  • Support further research into the development of appropriate approaches to evaluating health promotion initiatives.
  • Support the establishment of a training and education infrastructure to develop expertise in the evaluation of health promotion initiatives.
  • Create and support opportunities for sharing information on evaluation methods used in health promotion through conferences, workshops, networks, and other means.

This manual illustrates how to apply CDC's Framework for Program Evaluation in Public Health Practice3 to the field of tobacco prevention and control. The framework is organized into the following six steps:

  • Engage stakeholders.
  • Describe the program.
  • Focus the evaluation.
  • Gather credible evidence.
  • Justify conclusions.
  • Ensure use of evaluation findings, and share lessons learned.

These six steps must be taken in any evaluation of tobacco prevention and control efforts. The steps are interdependent and not necessarily linear. Looking at Figure 1, you can see that each step builds on the successful completion of earlier steps. Each step in the framework is also associated with standards for "good" evaluation. There are four standards of evaluation that will help you design a good and practical evaluation: utility, feasibility, propriety, and accuracy.20

Figure 1. The CDC framework for program evaluation in public health practice.

Utility: Does the evaluation have a constructive purpose? Will the evaluation meet the information needs of the various stakeholders? Will the evaluation provide relevant information in a timely manner?

Feasibility: Are the planned evaluation activities realistic? Are resources used prudently? Is the evaluation minimally disruptive to your program?

Propriety: Is the evaluation ethical? Does the evaluation protect the rights of individuals and protect the welfare of those involved?

Accuracy: Will the evaluation produce valid and reliable findings?
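
One way to keep the six framework steps and the four standards in view during planning is a simple cross-cutting checklist, as in the Python sketch below. This scaffolding is purely illustrative and is not part of the CDC framework itself.

# Illustrative checklist pairing each framework step with the four
# standards; the structure is an invented planning aid, not CDC's.
FRAMEWORK_STEPS = [
    "Engage stakeholders",
    "Describe the program",
    "Focus the evaluation",
    "Gather credible evidence",
    "Justify conclusions",
    "Ensure use of evaluation findings, and share lessons learned",
]
STANDARDS = ["Utility", "Feasibility", "Propriety", "Accuracy"]

# Because the standards apply at every step, the checklist holds one
# entry per (step, standard) pair; None means "not yet assessed."
checklist = {step: {standard: None for standard in STANDARDS}
             for step in FRAMEWORK_STEPS}

checklist["Engage stakeholders"]["Utility"] = (
    "Stakeholder information needs documented"
)

Reviewing the unfilled cells during planning meetings is one low-effort way to confirm that no standard has been overlooked at any step.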

How to select a lead evaluator and establish an evaluation team

A prevention research center in action

The West Virginia University Prevention Research Center worked with the American Lung Association and schools and communities in West Virginia and across the United States to develop and evaluate a smoking-cessation program for teenagers called Not On Tobacco (N-O-T).

The evaluation team should include internal program staff, external stakeholders, and possibly consultants or contractors with evaluation expertise. An initial step in the formation of a team is deciding who will be responsible for planning and implementing evaluation activities. At least one program staff person should be selected as the lead evaluator to coordinate program evaluation efforts on behalf of the health department. This lead evaluator should be responsible for evaluation activities, including planning and budgeting for evaluation, developing program objectives, addressing data-collection needs, reporting findings, and working with consultants. The lead evaluator is ultimately responsible for engaging stakeholders, consultants, and other collaborators who bring the skills and interests needed to plan and conduct the evaluation. Although this staff person should have the skills necessary to competently coordinate evaluation activities, if necessary he or she can choose to look elsewhere for technical expertise to design and implement specific evaluation tasks. However, developing in-house evaluation expertise and capacity is a beneficial goal for the health department. See Box 2 for a list of the characteristics of a good evaluator.

Additional evaluation expertise can be found in other programs within the health department, through external partners (e.g., universities, organizations, and companies), from other states' tobacco control programs, and through technical assistance offered by CDC. Another resource for states is CDC's Prevention Research Centers (PRC) program. The PRC program is a national network of 24 academic research centers committed to prevention research and the translation of that research into programs and policies. The centers work with state health departments and members of their communities to develop and evaluate state and local interventions that address the leading causes of death and disability in the nation. Linking university researchers, health agencies, community organizations, and national nonprofit organizations facilitates the translation of promising research findings into practical, innovative, and effective programs. Additional information on the PRCs is available at www.cdc.gov/prc/index.htm.

To supplement the internal evaluation capacity of the health department, you can also use outside consultants as volunteers, advisory panel members, or contractors. External consultants can provide high levels of evaluation expertise from an objective point of view. Important factors to consider when selecting consultants are their level of professional training, experience, and ability to meet your needs. Overall, it is important to find a consultant whose approach to evaluation, background, and training best fits your program's evaluation needs and goals (Box 2). The Evaluation Contracts Checklist presented in Appendix D was designed to help evaluators and clients identify key issues for contracting an evaluation or pieces of an evaluation. Advance agreement on the scope and process of the evaluation can mean the difference between an evaluation's success and failure.

Box 2. Characteristics of a good evaluator

  • Has experience in the type of evaluation needed.
  • Is comfortable with qualitative and quantitative data sources and analysis.
  • Is able to work with a wide variety of stakeholders.
  • Can develop innovative approaches to evaluation while considering the realities affecting a program (e.g., a small budget).
  • Incorporates evaluation into all program activities.
  • Understands both the potential benefits and risks of evaluation.
  • Educates program personnel about designing and conducting the evaluation.
  • Will give staff the full findings (i.e., will not gloss over or fail to report certain findings for any reason).
  • Has strong coordination and organization skills.
  • Explains material clearly and patiently.
  • Respects all levels of personnel.
  • Communicates well with key personnel.
  • Exhibits cultural competency.
  • Delivers reports and protocols on time.

To generate discussion around evaluation planning and implementation, several states have formed evaluation advisory panels. Advisory panels typically generate input from select local, regional, or national experts who would otherwise be difficult to access. Forming an evaluation advisory panel can lend additional credibility to your efforts and help cultivate widespread support for evaluation activities.

In summary, select a lead evaluator who has experience in conducting the type of evaluation you need and a history of evaluating similar programs. In addition, be sure to check all references carefully before you enter into a contract with any consultant. All of the characteristics of a good evaluator listed in Box 2 are important; however, given the value of working with a team, the evaluator's ability to work with a diverse group of stakeholders warrants particular emphasis. The lead evaluator should be willing and able to draw on community values, traditions, and customs and to work with knowledgeable community members in designing and conducting the evaluation.

The evaluation team members should clearly define their respective roles. One approach is to develop a written agreement that describes who will conduct the evaluation and assigns specific roles and responsibilities to individual team members. The agreement may be either formal or informal, but it should clarify 1) the purpose of the evaluation, 2) the potential users of the evaluation findings and plans for dissemination, 3) the way the evaluation will be conducted, 4) the resources available, and 5) protection for human subjects. The agreement should also include a time line and a budget for the evaluation.
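
As an illustration only, the elements that such an agreement should clarify could be captured in a structured form like the Python sketch below; the field names and sample values are hypothetical.

# Hypothetical sketch of a written evaluation agreement as a record;
# fields mirror the five items above plus the time line and budget.
from dataclasses import dataclass, field

@dataclass
class EvaluationAgreement:
    purpose: str                    # why the evaluation is being done
    intended_users: list            # who will use the findings; dissemination plans
    methods: str                    # how the evaluation will be conducted
    resources: str                  # staff, funds, and data available
    human_subjects_protection: str  # how participants will be protected
    timeline: dict = field(default_factory=dict)  # milestone -> due date
    budget: float = 0.0

agreement = EvaluationAgreement(
    purpose="Assess implementation of a county quitline media campaign",
    intended_users=["state health department", "county coalition"],
    methods="Mixed methods: call-volume data plus participant interviews",
    resources="One full-time evaluator; access to quitline records",
    human_subjects_protection="IRB review; informed consent for interviews",
    timeline={"data collection complete": "Q3", "final report": "Q4"},
    budget=45000.00,
)

Whether kept as a document or as structured data, writing these elements down early keeps the team's roles and expectations explicit.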


