National Center for Chronic Disease Prevention and Health Promotion
Table 2. Example Outcomes, Outputs, Indicators, and Data Sources for the Goal of Eliminating Exposure to Environmental Tobacco Smoke (ETS)

| Outcomes and Outputs | Indicators | Data Sources* |
|---|---|---|
| **Long-Term Outcomes** | **Long-Term Indicators** | **Data Sources*** |
| Decreased exposure of adult nonsmokers to ETS. | | |
| Decreased exposure of young people to ETS. | | |
| **Intermediate Outcomes** | **Intermediate Indicators** | **Data Sources*** |
| Increased percentage of smoke-free homes and cars. | | |
| Increased percentage of workplaces with restrictions or prohibitions on smoking. | | |
| Increased percentage of enclosed public places and restaurants with restrictions on smoking. | | |
| Increased enforcement of no-smoking laws. | | |
| **Short-Term Outcomes** | **Short-Term Indicators** | **Data Sources*** |
| Increased awareness of, and exposure to, messages about the hazards of ETS. | | |
| Increased knowledge and improved attitudes and skills related to ETS. | | |
| Increased public support for no-smoking policies. | | |
| **Process Outputs** | **Process Indicators** | **Data Sources*** |
| Increased number of smoke-free homes and private cars. | | |
| Increased number of smoke-free workplaces. | | |
| Increased public support for smoke-free environments. | | |
*For more information on data sources, see Appendix A.
Now that you have determined the outcomes you want to measure and the indicators you will use to measure progress toward those outcomes, you need to select the data sources you will use to gather information on your indicators. Sources of data fall into three categories: people, documents, and observations. Box 3 lists possible sources of information for evaluations within these categories.
Box 3. Sources of Information
- People
- Documents
- Observations
When choosing data sources, pick those that meet your data needs. Try to avoid choosing a data source that may be familiar or popular but does not necessarily answer your questions. Keep in mind that budget issues alone should not drive your evaluation planning efforts. Consider the following questions:
In evaluating tobacco-use prevention and control programs, you have the option of using existing data systems or building new ones customized to your program's components. Some existing data sources include—
To ensure that these data sources meet your evaluation needs, you may need to modify them. If you use an existing surveillance system to inform aspects of your evaluation, you might want to add state-specific questions or expand the sample size. Expanding the sample size allows for more stable estimates and possible sub-state estimates. Likewise, to produce much-needed data, you may want to invest in oversampling disparate populations.
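To see why expanding the sample size yields more stable estimates, consider how the standard error of an estimated proportion shrinks as the sample grows. The sketch below uses a hypothetical prevalence value purely for illustration:

```python
from math import sqrt

def proportion_se(p, n):
    """Standard error of an estimated proportion p from a simple random
    sample of size n: sqrt(p * (1 - p) / n)."""
    return sqrt(p * (1 - p) / n)

# Hypothetical 22% prevalence estimated at several sample sizes.
for n in (500, 2000, 8000):
    se = proportion_se(0.22, n)
    print(f"n={n:>5}: SE = {se:.4f}, 95% CI half-width = {1.96 * se:.4f}")
```

Quadrupling the sample size halves the standard error, which is why oversampling a subpopulation is often necessary before sub-state estimates become usable.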
Keep in mind that, although large ongoing surveillance systems have the advantages of collecting data routinely and having existing resources and infrastructure, some of them (e.g., Current Population Survey [CPS]) have little flexibility with regard to the questions asked in the survey. Therefore, it is difficult (sometimes impossible) to use these systems to collect the special data you need for your evaluation. In contrast, surveys such as YTS, BRFSS, or PRAMS are flexible with regard to the questions asked: you can supplement their questions with your questions to get the data you need. However, the drawback to these surveys is that they are conducted only occasionally, and usually they require an expenditure of funds or other resources.
If the existing data systems cannot answer your evaluation questions, you will need to build a new data system or adopt a system that is not already in your state.
Examples of new data systems:
Examples of useful systems that may not yet be in your state:
In general, the purpose of evaluation—rather than the amount of available resources—should determine data-collection strategies. However, we are including the following information as a general guide to help you plan your evaluation using the resources that you have available.
Available resources vary across states from low to high, and evaluation activity must vary with them. As resources increase, investment in key evaluation activities should also increase. In Table 3, we suggest evaluation activities for low, medium, and high levels of resources. However, not all programs should follow this guide strictly, because the needs of an evaluation vary not only with the amount of resources available but also with the intended use of the evaluation data. For example, even when only limited resources are available, evaluation of a program focused primarily on funding local activities should include regional or local data on both outcome and process measures.
Table 3. Evaluation Activities You Can Accomplish with Low, Medium, and High Levels of Resources
* Infrastructure: All the components necessary to conduct evaluation (e.g., experienced staff, adequate funding).
* Competency: Staff with the knowledge and experience needed to conduct surveillance and evaluation.
* Capacity: The resources (e.g., competent staff, appropriate data-collection systems) to conduct evaluation.
To enhance your program's internal capacity to coordinate and direct evaluation activities, program staff should develop competency in evaluation planning and implementation. Competency also includes having partnerships and in-kind resources within your agency to support program evaluation. You should dedicate staff time for a lead evaluator or evaluation coordinator. As your resources increase and activities expand to the local level, you should develop similar competencies and capacity at that level.
At a minimum, states should use data from national surveys and state data-collection systems (e.g., BRFSS, YRBS, PRAMS, YTS, Legislative Tracking, NTCP Chronicle). National data systems provide comparison outcome and some process measures for state activities. Comparison data from national surveys and other data-collection systems can be used to evaluate activities across states and to document any lack of change that can be used to justify additional tobacco program funding. By working with system representatives, you can include additional tobacco-related measures on state data-collection instruments and increase the amount and type of data collected on regional and local measures. For example, tobacco control representatives are encouraged to build a partnership with the state BRFSS coordinator to include optional modules or state-added questions on the state BRFSS.
Some state data are easily accessible via the State Tobacco Activities Tracking and Evaluation (STATE) System (www2.cdc.gov/nccdphp/osh/state). The STATE System is the first on-line compilation of state-based tobacco information from many different data sources; it allows the user to view summary information on tobacco use in all 50 states and the District of Columbia. The STATE System contains up-to-date and historical data on the prevalence of tobacco use, tobacco control laws, the health impact and costs of tobacco use, and tobacco agriculture and manufacturing.
We strongly encourage states to develop and implement new data-collection systems such as a youth tobacco survey, an adult tobacco survey, subpopulation prevalence surveys, community capacity and infrastructure assessments, a health care provider survey, a media tracking survey, and local policy tracking, as appropriate. New data systems can be developed specifically to provide process and outcome measures for focused or unique program activities. Some states have implemented comparable systems that provide comparison data across certain states. These systems can be designed to provide data at the state or sub-state (e.g., health region, county) levels.
Appendix A describes the different types of national, state, and topic-specific tobacco-related data sources. It also includes a description of the source, tobacco indicators, sampling frame, methodology, years completed, and contact information. (An Internet address is provided for most national data sources.) In the "comments" section is a description of the past use of the data source, advantages, disadvantages, and other details. Many of these data sources provide general and category-specific measures that assess changes in social norms at individual and community levels. You should choose a data source that will provide reliable and credible information about the outcome. You can also use more than one data source for a specific indicator, because multiple data sources will provide a more comprehensive view of your program. Although the data sources listed in Appendix A are almost all quantitative, qualitative data from focus groups, feedback from program participants, and semistructured or open-ended interviews with program participants or key informants are also important sources of information for an evaluation.
Once you have specified the outcomes you want to measure, selected indicators, reviewed existing sources of data, and determined which resources can be devoted to data collection, it is time to collect your data. The data you gather will be used to assess the effectiveness of your program and help you make decisions about your program. Therefore, data collection must produce informative, useful, and credible results. The quality and quantity of data, the collection method used, and the timing of the data collection are all factors that contribute to the credibility of the evidence that you gather in your evaluation. Keep in mind that you may not need to implement annual surveys for some information needs.
For example, community assessments of capacity and infrastructure may only need to be administered every 5 years. And periodic sampling of subpopulations for tobacco use patterns may need to be done only every 2 to 3 years and possibly aggregated for analysis.
It is important that the data-collection methods be the most appropriate for measuring the outcomes and indicators you have selected. Some methods are geared toward collecting qualitative data, and others toward collecting quantitative data. Some methods are more appropriate for specific audiences or resource considerations. The methods used must give adequate consideration to the evaluation purpose, the intended users, and what will be viewed as credible evidence.
When choosing a method, think about the following:
The purpose of the evaluation: Which method seems most appropriate for your purpose and the questions that you want to answer?
The users of the evaluation: Will the method allow you to gather information that can be analyzed and presented in a way that will be seen as credible by your intended audience? Will they want standardized quantitative information from a data source such as the Adult Tobacco Survey, or descriptive, narrative information from focus groups, or both?
The respondents from whom you will collect the data: Where and how can respondents best be reached? What is culturally appropriate? For example, is conducting a phone interview or personal, door-to-door interview more appropriate for certain population groups?
The resources available (time, money, volunteers, travel expenses, supplies): Which method(s) can you afford and manage well? What is feasible? Will your evaluation be completed in time for the next legislative session or prior to the end of the school year? Consider your own abilities and time. Do you have an evaluation background or will you have to hire an evaluator? Do program funds and relevant policies allow you to hire external evaluators?
The degree of intrusiveness—interruptions to the program or participants: Will the method disrupt the program or be seen as intrusive by the respondents? Also consider issues of confidentiality, if the information that you are seeking is sensitive.
Type of information: Do you want representative information that applies to all participants (standardized information such as that from a survey, structured interview, or observation checklist that will be comparable nationally and across states)? Or, do you want to examine the range and diversity of experiences, or tell an in-depth story of particular people or programs (e.g., descriptive data as from a case study)?
The advantages and disadvantages of each method: What are the key strengths and weaknesses in each? Consider issues such as time and respondent burden, cost, necessary infrastructure, access to sites and records, and overall level of complexity. What is the most appropriate for your evaluation needs?
Mixed data-collection methods refers to the collection of both quantitative and qualitative data. Mixed methods can be used sequentially, when one method is used to prepare for the use of another, or simultaneously, when both methods are used in parallel. An example of sequential use of mixed methods is when focus groups (qualitative) are used to develop a survey instrument (quantitative), and then personal interviews (qualitative) are conducted to investigate issues that arose during coding or interpretation of survey data. An example of simultaneous use of mixed methods would be using personal interviews to verify the response validity of a quantitative survey.
Different methods reveal different aspects of the program. For example—
Using mixed methods increases the cross-checks on different subsets of findings and generates increased stakeholder confidence in the overall findings. In addition, combining methods provides a way to triangulate findings, which maximizes the strengths and minimizes the limitations of each method. Using mixed methods enables you to validate your findings, enhance reliability, and build a more thorough evaluation for improving program effectiveness.28
A quality evaluation produces data that are reliable, valid, and informative. An evaluation is reliable to the extent that it repeatedly produces the same results, and it is valid if it measures what it is intended to measure. The advantage of using existing data sources such as the YTS, BRFSS, YRBS, or PRAMS is that they have been pretested and designed to produce valid and reliable data. If you are designing your own evaluation tools, you should be aware of the factors that influence data quality:
You will also need to determine the amount of data you want to collect during the evaluation. Your study must have a certain minimum quantity of data to detect a specified change produced by your program. In general, detecting small amounts of change requires larger sample sizes. For example, detecting a 5% increase would require a larger sample size than detecting a 10% increase. If you use tobacco data sources such as the YTS, the sample size has already been determined. If you are designing your own evaluation tool, you will need the help of a statistician to determine an adequate sample size.
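The relationship between effect size and sample size can be made concrete with a standard two-proportion power calculation. The sketch below uses hypothetical exposure rates; a statistician should still confirm the design assumptions (simple random sampling, two-sided test) for your actual survey:

```python
from math import ceil, sqrt
from statistics import NormalDist

def two_proportion_n(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size to detect a change from proportion p1 to p2
    with a two-sided test at significance alpha and the given power."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Hypothetical baseline of 25% ETS exposure: detecting a 5-point drop
# requires a much larger sample than detecting a 10-point drop.
print(two_proportion_n(0.25, 0.20))
print(two_proportion_n(0.25, 0.15))
```

Running this shows the smaller change demanding roughly four times the sample, which is the trade-off the paragraph above describes.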
When assessing the quantity of data you need to collect (often expressed as sample size), you will also need to consider the level of detail and the types of comparisons you hope to make. You will also need to determine the jurisdictional level for which you are gathering the data (e.g., state, county, region, congressional district). Counties often appreciate and want county-level estimates; however, this usually means larger sample sizes and more expense.
The next step is choosing a data-collection method. Although it is practical to use or adapt data-collection methods that have been pretested and evaluated for validity and reliability, the methods you choose must be able to answer the questions you want answered. Again, do not settle on a particular method because it is easy, familiar, or popular—the methods should be appropriate to the outcomes you want to measure. Examples of data-collection methods are surveys, interviews, observation, document analysis, focus groups, and case studies.
The most widely used data-collection methods in tobacco prevention and control are surveys, such as the Youth Tobacco Survey. Other methods used include tracking policy changes, running focus groups to test antitobacco counter-marketing messages, reviewing vital statistics for deaths attributed to smoking, and conducting Synar Amendment inspections. For more information on specific data-collection systems, see Appendix A.
You will need to outline procedures to follow when collecting the evaluation data. Consider these issues:
The answers to some of these questions depend on your evaluation questions and the design you select to answer those questions. If you mainly want to monitor progress in meeting your objectives (e.g., assess the proportion of work sites with smoke-free policies), you may not need a particular evaluation design beyond monitoring the work sites that go smoke-free. If, however, you want to attribute the change to your program, you would want to use an experimental or quasi-experimental evaluation design.
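One widely used quasi-experimental approach (offered here as an illustration, not as the only design option) is a pre/post comparison against a similar area without the program, i.e., a difference-in-differences estimate. The percentages below are hypothetical:

```python
def diff_in_diff(program_pre, program_post, comparison_pre, comparison_post):
    """Difference-in-differences: the change in the program area minus the
    change in the comparison area. The excess change is attributed to the
    program, assuming both areas would otherwise have followed parallel trends."""
    return (program_post - program_pre) - (comparison_post - comparison_pre)

# Hypothetical percentages of work sites with smoke-free policies,
# before and after the program, in program and comparison areas.
effect = diff_in_diff(40.0, 62.0, 42.0, 50.0)
print(f"Estimated program effect: {effect:.1f} percentage points")
```

Here the program area gained 22 points and the comparison area 8, so 14 points of the change are attributed to the program rather than to the statewide trend.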
This page last reviewed September 11, 2003.
United States Department of Health and Human Services