Chapter 4: Gather Credible Evidence



Measuring program outcomes

Now that you have written measurable objectives, developed a logic model, and selected your evaluation questions, you can refine the outcomes you want to measure in your evaluation. Although you selected outcomes to prepare your logic model, during evaluation many tobacco control programs expand their set of outcomes for each goal area.

When choosing outcomes to measure, keep in mind the purpose, users, and intended uses of the evaluation. In addition, the outcomes you choose should be relevant, important, and discrete. Although it may be tempting to evaluate only the long-term outcomes of your program, monitoring short-term and intermediate outcomes is also important so you can relate changes in health outcomes to program activities or identify gaps in the program. Moreover, demonstrating short-term impact may help justify continued or additional funding. Measuring the implementation of program activities is also important to ensure that the program is functioning as it should.

On the basis of the ETS logic model shown in Figure 5 (page 35), here are some example outcomes you may choose to measure, stratified by outcome level or process:

Long-term outcomes

  • Reduced exposure to ETS.

Intermediate outcomes

  • Increased percentage of smoke-free homes.
  • Increased percentage of smoke-free private cars.
  • New legislation restricting or prohibiting smoking in enclosed public places.
  • Increased percentage of workplaces with voluntary bans restricting or prohibiting smoking.
  • Increased percentage of public places with nonsmoking policies.
  • Increased percentage of restaurants with nonsmoking policies.
  • Increased adherence to and enforcement of nonsmoking policies.

Short-term outcomes

  • Increased knowledge and awareness about ETS.
  • Increased public support for smoke-free public places, workplaces, and schools.
  • Increased public exposure to information about ETS.
  • Education of policymakers, legislators, workplace managers and owners, and school officials about the harmful effects of ETS exposure.

In process evaluation, the outcome is really an output. Outputs are the direct products of program activities, often measured in terms of the amount of work accomplished, such as the number of clients served or sessions held.

Outputs

  • A counter-marketing campaign against ETS has been designed.
  • A counter-marketing campaign against ETS has been implemented.
  • Model voluntary smoke-free policies have been developed.
  • Model smoke-free work-site policies have been distributed.

Before choosing outputs and outcomes to measure, you should first ask yourself these three key questions:

  • Is it reasonable to believe the program can influence the outcome, even though it cannot control it?
  • Would measuring the outcome show program successes or pinpoint and address problems or shortcomings?
  • Would the program's stakeholders accept the outcome or output as a valid result of program activities?

Once you have selected a set of outputs and outcomes to measure, you should ask yourself these questions:

  • Do program activities and outputs and short-term, intermediate, and long-term outcomes logically relate to each other?
  • Do these relationships reflect the logic of the program—the sequence of influences and changes that program inputs, activities, and outputs are intended to set in motion?
  • Do the longer-term outcomes represent meaningful benefits or changes in participants' status, condition, or quality of life?
  • Have you considered possible negative outcomes of your program?

The outcomes you choose to measure should be—

  • Relevant to the goal and objectives of your program.
  • Important to achieve if your program is to attain its objectives.
  • Indicative of meaningful changes.
  • Influenced by your program.
  • Realistic about the scope of influence of your program.
  • Useful in identifying both problems and successes of your program.
  • Effective in representing the changes or benefits attributable to your program.

As discussed earlier, an evaluation should be focused, have a specific purpose and use, and reflect the program's stage of development. Depending on that stage, prepare to conduct a process evaluation, an outcome evaluation, or both, as appropriate; the two use different types of data. For example, if you have a well-established program, it may be appropriate to expect changes in intermediate or long-term outcomes. The outputs and outcomes you include in the evaluation should reflect important dimensions of the program at each stage of development. In addition, select outputs and outcomes that will be most informative given the purpose(s) of your evaluation. Identifying and measuring outputs and outcomes provides the information you need to fully assess and understand the impact of program efforts and to make appropriate program decisions.19



Selecting indicators to measure outcomes

Once you have determined the outcomes you want to measure, you need to select indicators. Indicators are specific, observable, and measurable characteristics or changes that show the progress a program is making toward achieving a specified outcome.27 For example, the percentage of adult nonsmokers who report they have not been exposed to cigarette smoke in the previous 7 days is an indicator that can be used to measure the long-term outcome of "decreased exposure of adult nonsmokers to ETS."
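To make this concrete, the sketch below computes such an indicator from individual survey records. It is an illustration only: the column names, weighting approach, and data layout are assumptions made for this example, not the actual structure of the Adult Tobacco Survey.

    # Illustrative sketch only: field names and weighting scheme are assumptions,
    # not the actual Adult Tobacco Survey layout.
    import pandas as pd

    def ets_exposure_indicator(df: pd.DataFrame) -> float:
        """Percentage of adult nonsmokers who report no exposure to
        cigarette smoke during the previous 7 days (weighted)."""
        nonsmokers = df[df["current_smoker"] == 0]
        weights = nonsmokers["survey_weight"]
        not_exposed = (nonsmokers["days_exposed_past7"] == 0).astype(float)
        return 100 * (not_exposed * weights).sum() / weights.sum()

    # Invented records for illustration.
    records = pd.DataFrame({
        "current_smoker":     [0, 0, 1, 0],
        "days_exposed_past7": [0, 3, 0, 0],
        "survey_weight":      [1.2, 0.8, 1.0, 1.0],
    })
    print(f"{ets_exposure_indicator(records):.1f}% not exposed")  # 73.3% not exposed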

Indicators must be relevant to identified focus areas and questions. Be sure that the cost of collecting data on the indicators is within the evaluation budget, and check the source and availability of expected data. Evaluation staff must decide 1) which data collection, management, and analysis strategies are most appropriate for each indicator, and 2) whether needed technical assistance is available and affordable.

To establish indicators for each outcome, you should review selected outcomes and identify "specific, observable accomplishment(s) or change(s) that will tell you whether the outcome has been achieved."27 Keep the following tips in mind when selecting your indicators:

  • There should be at least one indicator for each outcome.
  • The indicator must be focused and must measure an important dimension of the outcome.
  • The indicator must be clear and specific in terms of what it will measure.
  • The change measured by the indicator should represent progress that the program has made toward achieving the outcome.

Commonly used indicators include—

  • Participation rates.
  • Attitudes.
  • Individual behavior.
  • Community norms.
  • Policies.
  • Health status.

Indicators specific to tobacco prevention and control programs include—

  • The number of clean indoor air ordinances that have been passed during a given period.
  • The proportion of a targeted population group who report having smoked in the last 30 days.
  • The percentage of health insurance companies that reimburse for cessation services.

Table 2 provides examples of outcomes, outputs, indicators, and data sources for programs to eliminate exposure to ETS. The indicators are used to document change over time and measure progress toward objectives. Appendix B has examples for the goal of preventing initiation of tobacco use among young people, and Appendix C has examples for the goal of promoting quitting among young people and adults.

Table 2. Example Outcomes, Outputs, Indicators, and Data Sources
for the Goal of Eliminating Exposure to Environmental Tobacco Smoke (ETS)

Long-Term Outcomes

Outcome: Decreased exposure of adult nonsmokers to ETS.
Indicators:
  • Percentage of adult nonsmokers who report they have not been exposed to cigarette smoke during the previous 7 days.
  • Percentage of adults who report they are never exposed to cigarette smoke in restaurants.
  • Percentage of adults who report they are not exposed to cigarette smoke at work during a typical work day.
Data sources*: Adult Tobacco Survey.

Outcome: Decreased exposure of young people to ETS.
Indicators:
  • Percentage of young people who report they have not been in the same room as someone smoking in the previous 7 days.
  • Percentage of young people who report they have not been in a car with someone who was smoking in the previous 7 days.
  • Percentage of mothers who report their baby is never in a room with someone who is smoking.
Data sources*: Youth Tobacco Survey; Pregnancy Risk Assessment Monitoring System.

Intermediate Outcomes

Outcome: Increased percentage of smoke-free homes and cars.
Indicators:
  • Percentage of adults who report smoking is not allowed in their home.
  • Percentage of adults who report smoking is not allowed in the family car.
Data sources*: Adult Tobacco Survey; state surveys.

Outcome: Increased percentage of workplaces with restrictions or prohibitions on smoking.
Indicators:
  • Percentage of workplaces with policies that prohibit or restrict smoking.
  • Percentage of adults employed at work sites with formal policies that prohibit smoking.
Data sources*: Behavioral Risk Factor Surveillance System (Optional Module); state or local policy tracking.

Outcome: Increased percentage of enclosed public places and restaurants with restrictions on smoking.
Indicators:
  • Percentage of counties with clean air ordinances.
  • Percentage of restaurants that prohibit smoking.
Data sources*: State legislative tracking; local policy tracking.

Outcome: Increased enforcement of no-smoking laws.
Indicators:
  • Percentage of schools, workplaces, and public places that comply with smoke-free policies or regulations.
  • Percentage of adults who report asking someone not to smoke around them.
Data sources*: Site-specific surveys; Adult Tobacco Survey.

Short-Term Outcomes

Outcome: Increased awareness of, and exposure to, messages about the hazards of ETS.
Indicators:
  • Percentage of adults who recall the content of an ETS media campaign (which includes brochures, posters, presentations).
Data sources*: State surveys.

Outcome: Increased knowledge and improved attitudes and skills related to ETS.
Indicators:
  • Percentage of adults who believe breathing secondhand smoke is bad for them.
  • Percentage of adults who believe smoking around children is harmful.
  • Percentage of young people who believe breathing secondhand smoke is bad for them.
  • Percentage of young people who believe smoking around children is harmful.
Data sources*: Youth Tobacco Survey; Adult Tobacco Survey.

Outcome: Increased public support for no-smoking policies.
Indicators:
  • Percentage of people who report that they support smoke-free policies.
  • Percentage of people who believe smoking should not be allowed in restaurants, schools, workplaces, and other enclosed public places.
Data sources*: Adult Tobacco Survey.

Process Outputs

Output: Increased number of smoke-free homes and private cars.
Process indicators:
  • A media campaign under way about the negative health effects of ETS.
Data sources*: Media materials.

Output: Increased number of smoke-free workplaces.
Process indicators:
  • The number of local coalitions that report they distributed examples of smoke-free workplace policies to at least 50% of the manufacturing plants in their area.
Data sources*: State progress reports; copy of the model smoke-free policy.

Output: Increased public support for smoke-free environments.
Process indicators:
  • The number of news stories on ETS in major newspapers.
  • The number of news stories on ETS in Spanish newspapers.
Data sources*: Media tracking.

*For more information on data sources, see Appendix A.


Selecting data sources for indicators

Now that you have determined the outcomes you want to measure and the indicators you will use to measure progress toward those outcomes, you need to select the data sources you will use to gather information on your indicators. Sources of data fall into three categories: people, documents, and observations. Box 3 lists possible sources of information for evaluations within these categories.

Sources of information

People

  • Clients, program participants, nonparticipants.
  • Staff, program managers, administrators.
  • Partner agency staff.
  • General public.
  • Community leaders or key members of a community.
  • Funders.
  • Critics or skeptics.
  • Representatives of advocacy groups.
  • Elected officials, legislators, policymakers.
  • Local and state health officials.

Observations

  • Meetings, special events or activities, job performance.
  • Service encounters.

Documents

  • Grant proposals, newsletters, press releases.
  • Meeting minutes, administrative records.
  • Registration or enrollment forms.
  • Publicity materials, quarterly reports.
  • Publications, journal articles, poster presentations.
  • Previous evaluation reports.
  • Needs assessments.
  • Surveillance summaries.
  • Database records.
  • Records held by funders or collaborators.
  • Web pages.
  • Graphs, maps, charts, photographs, videotapes.

When choosing data sources, pick those that meet your data needs. Try to avoid choosing a data source that may be familiar or popular but does not necessarily answer your questions. Keep in mind that budget issues alone should not drive your evaluation planning efforts. Consider the following questions:

  • What do you need to know?
  • When do you need the data?
  • How often do you need the data?
  • Will the data be compared with similar data from elsewhere?
  • Is credibility of the data an issue?
  • How much money do you have to spend?

In evaluating tobacco-use prevention and control programs, you have the option of using existing data systems or building new ones customized to your program's components. Some existing data sources include—

  • Behavioral Risk Factor Surveillance System (BRFSS).
  • Youth Risk Behavior Survey (YRBS).
  • Pregnancy Risk Assessment Monitoring System (PRAMS).
  • Cancer registries.
  • Vital statistics.
  • National Health Interview Survey (NHIS).
  • Youth Tobacco Survey (YTS).
  • Adult Tobacco Survey (ATS).
  • School Health Policies and Programs Study (SHPPS).

To ensure that these data sources meet your evaluation needs, you may need to modify them. If you use an existing surveillance system to inform aspects of your evaluation, you might want to add state-specific questions or expand the sample size. Expanding the sample size allows for more stable estimates and possible sub-state estimates. Likewise, to produce much-needed data, you may want to invest in oversampling disparate populations.
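As a rough illustration of why a larger sample yields more stable estimates: the margin of error of a percentage estimated from a simple random sample shrinks roughly with the square root of the sample size. The sketch below ignores the design effects of complex survey sampling, which a statistician would need to account for, and the prevalence and sample sizes are invented for the example.

    from math import sqrt

    def margin_of_error(p, n, z=1.96):
        """Approximate 95% margin of error, in percentage points, for a
        proportion p estimated from a simple random sample of size n."""
        return 100 * z * sqrt(p * (1 - p) / n)

    # Invented example: a 25% prevalence estimate at two sample sizes.
    print(f"n=1,000: +/- {margin_of_error(0.25, 1000):.1f} points")  # about +/- 2.7
    print(f"n=4,000: +/- {margin_of_error(0.25, 4000):.1f} points")  # about +/- 1.3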

Keep in mind that, although large ongoing surveillance systems have the advantages of collecting data routinely and having existing resources and infrastructure, some of them (e.g., Current Population Survey [CPS]) have little flexibility with regard to the questions asked in the survey. Therefore, it is difficult (sometimes impossible) to use these systems to collect the special data you need for your evaluation. In contrast, surveys such as YTS, BRFSS, or PRAMS are flexible with regard to the questions asked: you can supplement their questions with your questions to get the data you need. However, the drawback to these surveys is that they are conducted only occasionally, and usually they require an expenditure of funds or other resources.

If the existing data systems cannot answer your evaluation questions, you will need to build a new data system or adopt a system that is not already in your state.

Examples of new data systems:

  • State or local policy tracking systems or site-specific surveys (such as those monitoring compliance with the Synar Amendment, and work-site, restaurant, or day-care-center surveys).
  • Key informant surveys.
  • Health systems and clinical settings surveys.
  • Media tracking surveys.
  • Systems that monitor pro-tobacco activities (including advertising, event sponsorship, promotional items, discounts).
  • Systems that monitor program activities (such as local program monitoring).
  • Systems that track sales data.
  • Systems that monitor the use of services (e.g., cessation services, education programs, quitlines).

Examples of useful systems that may not yet be in your state:

  • School Health Education Profiles (SHEP).
  • School Tobacco Survey (STS) (which includes the Lead Health Educator Survey and School Principal Survey).



Suggested data-collection activities for different levels of resources

In general, the purpose of evaluation—rather than the amount of available resources—should determine data-collection strategies. However, the following information is offered as a general guide to help you plan your evaluation with the resources you have available.

Available resources vary widely across states, from low to high, and evaluation activities must vary accordingly. As resources increase, investment in key evaluation activities should also increase. In Table 3, we suggest evaluation activities for low, medium, and high levels of resources. However, not all programs should strictly follow this guide because the needs of an evaluation will vary not only with the amount of resources available, but with the intended use of the evaluation data. For example, although only limited resources may be available, evaluation of a program that is primarily focused on funding local activities should include regional or local data on both outcome and process measures.

Table 3. Evaluation Activities You Can Accomplish with
Low, Medium, and High Levels of Resources

Sample evaluation activity: Improving your state's infrastructure* for surveillance and evaluation.
  • With a low level of resources: improving state competency† and capacity‡ to conduct evaluation.
  • With a medium level of resources: improving local competency to conduct evaluation.
  • With a high level of resources: improving local capacity to conduct evaluation.

Sample evaluation activity: Using or improving existing data systems for program evaluation.
  • With a low level of resources: using existing national and state surveys and data-collection systems.
  • With a medium level of resources: improving existing national and state surveys or data-collection systems.
  • With a high level of resources: further improving national or state surveys and data-collection systems.

Sample evaluation activity: Creating new data systems.
  • With a low level of resources: creating and conducting a state survey to collect state data.
  • With a medium level of resources: creating and conducting regional surveys to collect regional data.
  • With a high level of resources: creating and conducting local surveys to collect local data.

    * Infrastructure: All the components necessary to conduct evaluation (e.g., experienced staff, adequate funding).
    † Competency: Staff with the knowledge and experience needed to conduct surveillance and evaluation.
    ‡ Capacity: The resources (e.g., competent staff, appropriate data-collection systems) to conduct evaluation.

    Infrastructure

    To enhance your program's internal capacity to coordinate and direct evaluation activities, program staff should develop competency in evaluation planning and implementation. Capacity also includes having partnerships and in-kind resources within your agency to support program evaluation. You should dedicate staff time for a lead evaluator or evaluation coordinator. As your resources increase and activities expand to the local level, you should develop similar competencies and capacity at that level.

    Existing data systems

    At a minimum, states should use data from national surveys and state data-collection systems (e.g., BRFSS, YRBS, PRAMS, YTS, Legislative Tracking, NTCP Chronicle). National data systems provide comparison outcome and some process measures for state activities. Comparison data from national surveys and other data-collection systems can be used to evaluate activities across states and to document any lack of change that can be used to justify additional tobacco program funding. By working with system representatives, you can include additional tobacco-related measures on state data-collection instruments and increase the amount and type of data collected on regional and local measures. For example, tobacco control representatives are encouraged to build a partnership with the state BRFSS coordinator to include optional modules or state-added questions on the state BRFSS.

    Some state data are easily accessible via the State Tobacco Activities Tracking and Evaluation (STATE) System (www2.cdc.gov/nccdphp/osh/state). The STATE System is the first on-line compilation of state-based tobacco information from many different data sources; it allows the user to view summary information on tobacco use in all 50 states and the District of Columbia. The STATE System contains up-to-date and historical data on the prevalence of tobacco use, tobacco control laws, the health impact and costs of tobacco use, and tobacco agriculture and manufacturing.

    New data systems

    We strongly encourage states to develop and implement new data-collection systems such as a youth tobacco survey, an adult tobacco survey, subpopulation prevalence surveys, community capacity and infrastructure assessments, a health care provider survey, a media tracking survey, and local policy tracking, as appropriate. New data systems can be developed specifically to provide process and outcome measures for focused or unique program activities. Some states have implemented comparable systems that provide comparison data across certain states. These systems can be designed to provide data at the state or sub-state (e.g., health region, county) levels.

    Appendix A describes the different types of national, state, and topic-specific tobacco-related data sources. It also includes a description of the source, tobacco indicators, sampling frame, methodology, years completed, and contact information. (An Internet address is provided for most national data sources.) In the "comments" section is a description of the past use of the data source, advantages, disadvantages, and other details. Many of these data sources provide general and category-specific measures that assess changes in social norms at individual and community levels. You should choose a data source that will provide reliable and credible information about the outcome. You can also use more than one data source for a specific indicator, because multiple data sources will provide a more comprehensive view of your program. Although the data sources listed in Appendix A are almost all quantitative, qualitative data from focus groups, feedback from program participants, and semistructured or open-ended interviews with program participants or key informants are also important sources of information for an evaluation.



    Collecting data

    Once you have specified the outcomes you want to measure, selected indicators, reviewed existing sources of data, and determined which resources can be devoted to data collection, it is time to collect your data. The data you gather will be used to assess the effectiveness of your program and help you make decisions about your program. Therefore, data collection must produce informative, useful, and credible results. The quality and quantity of data, the collection method used, and the timing of the data collection are all factors that contribute to the credibility of the evidence that you gather in your evaluation. Keep in mind that you may not need to implement annual surveys for some information needs.

    For example, community assessments of capacity and infrastructure may only need to be administered every 5 years. And periodic sampling of subpopulations for tobacco use patterns may need to be done only every 2 to 3 years and possibly aggregated for analysis.



    Selecting data-collection methods

    It is important that the data-collection methods be the most appropriate for measuring the outcomes and indicators you have selected. Some methods are geared toward collecting qualitative data, and others toward collecting quantitative data. Some methods are more appropriate for specific audiences or resource considerations. The methods used must give adequate consideration to the evaluation purpose, the intended users, and what will be viewed as credible evidence.

    When choosing a method, think about the following:

    The purpose of the evaluation: Which method seems most appropriate for your purpose and the questions that you want to answer?

    The users of the evaluation: Will the method allow you to gather information that can be analyzed and presented in a way that will be seen as credible by your intended audience? Will they want standardized quantitative information from a data source such as the Adult Tobacco Survey, or descriptive, narrative information from focus groups, or both?

    The respondents from whom you will collect the data: Where and how can respondents best be reached? What is culturally appropriate? For example, is conducting a phone interview or personal, door-to-door interview more appropriate for certain population groups?

    The resources available (time, money, volunteers, travel expenses, supplies): Which method(s) can you afford and manage well? What is feasible? Will your evaluation be completed in time for the next legislative session or prior to the end of the school year? Consider your own abilities and time. Do you have an evaluation background or will you have to hire an evaluator? Do program funds and relevant policies allow you to hire external evaluators?

    The degree of intrusiveness—interruptions to the program or participants: Will the method disrupt the program or be seen as intrusive by the respondents? Also consider issues of confidentiality, if the information that you are seeking is sensitive.

    Type of information: Do you want representative information that applies to all participants (standardized information such as that from a survey, structured interview, or observation checklist that will be comparable nationally and across states)? Or, do you want to examine the range and diversity of experiences, or tell an in-depth story of particular people or programs (e.g., descriptive data as from a case study)?

    The advantages and disadvantages of each method: What are the key strengths and weaknesses in each? Consider issues such as time and respondent burden, cost, necessary infrastructure, access to sites and records, and overall level of complexity. What is the most appropriate for your evaluation needs?

    Mixed data-collection methods refer to collecting both quantitative and qualitative data. Mixed methods can be used sequentially, when one method is used to prepare for the use of another, or simultaneously, when both methods are used in parallel. An example of sequential use is using focus groups (qualitative) to develop a survey instrument (quantitative), and then conducting personal interviews (qualitative) to investigate issues that arose during coding or interpretation of the survey data. An example of simultaneous use is using personal interviews to verify the response validity of a quantitative survey.

    Different methods reveal different aspects of the program. For example—

    • You might conduct a group assessment at the end of a school-based tobacco control program to hear the group's viewpoint, as well as individual student interviews to get a range of opinions.
    • You might conduct a survey of all legislators in a state to gauge their interest in managed care support of cessation services and products, and you might also interview certain legislators individually to question them in greater detail.
    • You might conduct a focus group with community leaders to assess their attitudes regarding tobacco industry support of cultural and community activities. You might follow the focus group with individual structured or semi-structured interviews with the same participants.

    Using mixed methods increases the cross-checks on different subsets of findings and generates increased stakeholder confidence in the overall findings. In addition, combining methods provides a way to triangulate findings, which maximizes the strengths and minimizes the limitations of each method. Using mixed methods enables you to validate your findings, enhance reliability, and build a more thorough evaluation for improving program effectiveness.28

    Quality of data

    A quality evaluation produces data that are reliable, valid, and informative. An evaluation is reliable to the extent that it repeatedly produces the same results, and it is valid if it measures what it is intended to measure. The advantage of using existing data sources such as the YTS, BRFSS, YRBS, or PRAMS is that they have been pretested and designed to produce valid and reliable data. If you are designing your own evaluation tools, you should be aware of the factors that influence data quality:

    • The design of the data-collection instrument and how questions are worded.
    • The data-collection procedures.
    • Training of data collectors.
    • The selection of data sources.
    • How the data are coded.
    • Data management.
    • Routine error checking as part of data quality control.
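    Much of the routine error checking listed above can be automated. The sketch below is a minimal illustration of range and completeness checks on incoming records; the field names and valid ranges are assumptions made for this example, not part of any CDC instrument.

        # Minimal, hypothetical range and completeness checks for survey records.
        # Field names and valid ranges are invented for this example.
        VALID_RANGES = {
            "age": (18, 110),              # adult survey
            "days_exposed_past7": (0, 7),  # days in the past week
        }

        def check_record(record: dict) -> list:
            """Return a list of data-quality problems found in one record."""
            problems = []
            for field, (low, high) in VALID_RANGES.items():
                value = record.get(field)
                if value is None:
                    problems.append(f"{field}: missing")
                elif not low <= value <= high:
                    problems.append(f"{field}: {value} is outside {low}-{high}")
            return problems

        print(check_record({"age": 17, "days_exposed_past7": 9}))
        # ['age: 17 is outside 18-110', 'days_exposed_past7: 9 is outside 0-7']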

    Quantity of data

    You will also need to determine the amount of data you want to collect during the evaluation. Your study must have a certain minimum quantity of data to detect a specified change produced by your program. In general, detecting small amounts of change requires larger sample sizes. For example, detecting a 5% increase would require a larger sample size than detecting a 10% increase. If you use tobacco data sources such as the YTS, the sample size has already been determined. If you are designing your own evaluation tool, you will need the help of a statistician to determine an adequate sample size.
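    As a rough illustration of how the required sample size grows as the change to be detected shrinks, the sketch below applies the standard two-proportion sample-size approximation. It assumes a simple random sample and an invented baseline prevalence, and it is not a substitute for consulting a statistician.

        from math import ceil, sqrt
        from scipy.stats import norm

        def n_per_survey(p1, p2, alpha=0.05, power=0.80):
            """Approximate sample size per survey (e.g., baseline and follow-up)
            to detect a change from proportion p1 to p2, two-sided test."""
            z_a = norm.ppf(1 - alpha / 2)
            z_b = norm.ppf(power)
            p_bar = (p1 + p2) / 2
            numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                         + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
            return ceil(numerator / (p1 - p2) ** 2)

        # Invented baseline: 60% of homes are smoke-free.
        print(n_per_survey(0.60, 0.65))  # 5-point increase: about 1,470 per survey
        print(n_per_survey(0.60, 0.70))  # 10-point increase: about 360 per survey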

    When assessing the quantity of data you need to collect (often expressed as sample size), you will also need to consider the level of detail and the types of comparisons you hope to make. You will also need to determine the jurisdictional level for which you are gathering the data (e.g., state, county, region, congressional district). Counties often appreciate and want county-level estimates; however, this usually means larger sample sizes and more expense.

    The next step is choosing a data-collection method. Although it is practical to use or adapt data-collection methods that have been pretested and evaluated for validity and reliability, the methods you choose must be able to answer the questions you want answered. Again, do not settle on a particular method because it is easy, familiar, or popular—the methods should be appropriate to the outcomes you want to measure. Examples of data-collection methods are surveys, interviews, observation, document analysis, focus groups, and case studies.

    The most widely used data-collection methods in tobacco prevention and control are surveys, such as the Youth Tobacco Survey. Other methods used include tracking policy changes, running focus groups to test antitobacco counter-marketing messages, reviewing vital statistics for deaths attributed to smoking, and conducting Synar Amendment inspections. For more information on specific data-collection systems, see Appendix A.

    You will need to outline procedures to follow when collecting the evaluation data. Consider these issues:

    • When will you collect the data? You will need to determine when (and at what intervals) it is most appropriate to collect the information. If you are measuring whether your objectives have been met, your objectives will provide guidance as to when to collect certain data. If you are evaluating specific program interventions such as a smoking-cessation program, you might want to obtain information from participants before they begin the program, upon completion of the program, and several months after the program. If you are assessing the effects of a counter-marketing campaign, you might want to assess tobacco-related knowledge, attitudes, and behaviors among your target audience before and after the campaign.
    • Who will be considered a participant in the evaluation? Are you targeting a relatively specific group (African American young people), or are you assessing trends among a more general population (all young people, grades 6–12)?
    • Are you going to collect data from all participants or a sample? Many tobacco control programs are community-based, and surveying a sample of the population participating in such programs is appropriate. However, if you have a small number of participants (such as students exposed to a tobacco curriculum in two schools), you may want to survey all the participants.
    • How will the information be collected? Will the information be collected by telephone, by mail, or through interviews? How will the information be computerized?
    • Who will collect the information? Are those collecting the data trained and trained consistently? Will the data collectors uniformly gather and record information? Your data collectors will need to be trained to ensure that they all collect information in the same way and without introducing bias. Preferably, interviewers should be trained together and by the same person.
    • How will the security and confidentiality of the information be maintained? It is important to ensure the privacy and confidentiality of the evaluation participants. You can do this by collecting information anonymously and making sure you keep data stored in a locked and secure place.
    • Do you need approval from an institutional review board (IRB) before collecting the data? What will be your informed consent procedures?

    The answers to some of these questions depend on your evaluation questions and the design you select to answer those questions. If you mainly want to monitor progress in meeting your objectives (e.g., assess the proportion of work sites with smoke-free policies), you may not need a particular evaluation design beyond monitoring the work sites that go smoke-free. If, however, you want to attribute the change to your program, you would want to use an experimental or quasi-experimental evaluation design.
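    One common quasi-experimental approach is to compare the change in program areas with the change over the same period in similar areas that did not receive the program (a difference-in-differences comparison). The sketch below uses invented numbers purely to illustrate the arithmetic; it is not a full evaluation design.

        # Invented numbers: percentage of work sites with smoke-free policies,
        # before and after the program, in program and comparison counties.
        program_before, program_after = 40.0, 55.0
        comparison_before, comparison_after = 42.0, 47.0

        program_change = program_after - program_before            # 15 points
        comparison_change = comparison_after - comparison_before   # 5 points

        # The comparison counties estimate what would have happened anyway;
        # the remainder is the change attributable to the program, all else equal.
        estimated_effect = program_change - comparison_change
        print(f"Estimated program effect: {estimated_effect:.1f} percentage points")  # 10.0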

    Checklist for gathering credible evidence
    checkmark Prepare to collect process and outcome data.
    checkmark Confirm the outcomes are logically linked to program activities.
    checkmark Confirm that outcomes are logically linked at the national, state, and local levels.
    checkmark Address a continuum of outcomes (short-term, intermediate, and long-term).
    checkmark Link outcomes to indicators and data sources.
    checkmark Identify at least one indicator for each outcome.
    checkmark Determine if you need to create a new data-collection system.
    checkmark Pilot test new instruments to identify and/or control sources of error.
    checkmark Consider adding evaluation questions to already existing surveillance systems.
    checkmark Consider a mixed-method approach to data collection.
    checkmark Take into account available resources.
    checkmark Consider issues of timing for data collection and reporting needs.



    Resources

    1. CDC Evaluation Working Group
      www.cdc.gov/eval
    2. State Tobacco Activities Tracking and Evaluation (STATE) System
      www2.cdc.gov/nccdphp/osh/state




