
Glossary

baseline data — Initial information on program participants or other program aspects collected prior to receipt of services or program intervention. Baseline data are often gathered through intake interviews and observations and are used later for comparing measures that determine changes in your participants, program, or environment.

bias — (Refers here to statistical bias.) An inaccurate representation that produces systematic error in a research finding. Bias may result in overestimating or underestimating certain characteristics of the population. It may result from incomplete information or invalid data collection methods and may be intentional or unintentional.

comparison group — Individuals whose characteristics (such as race/ethnicity, gender, and age) are similar to those of your program participants. These individuals may not receive any services, or they may receive a different set of services, activities, or products. In no instance do they receive the same service(s) as those you are evaluating. As part of the evaluation process, the experimental (or treatment) group and the comparison group are assessed to determine which type of services, activities, or products provided by your program produced the expected changes.

confidentiality — The assurance that information participants provide will not be openly disclosed or associated with them by name. Because an evaluation may entail exchanging or gathering privileged or sensitive information about individuals, participants should be given a written form stating this assurance; such a form helps ensure that their privacy will be maintained.

consultant — An individual who provides expert or professional advice or services, often in a paid capacity.

control group — A group of individuals whose characteristics (such as race/ethnicity, gender, and age) are similar to those of your program participants but who do not receive the program (services, products, or activities) you are evaluating. Participants are randomly assigned to either the treatment (or program) group or the control group. A control group is used to assess the effect of your program on participants compared with similar individuals who do not receive the services, products, or activities you are evaluating. The same information is collected for people in the control group as for those in the experimental group.

cost-benefit analysis — A type of analysis that compares the costs of operating a program (program expenses, staff salaries, etc.) to the benefits (gains to individuals or society) it generates. For example, a cost-benefit analysis of a program to reduce cigarette smoking would compare the dollars expended to convert smokers into nonsmokers with the dollar savings from reduced medical care for smoking-related disease, fewer days lost from work, and the like.

cost-effectiveness analysis — A type of analysis that compares the costs of operating a program with the extent to which the program met its goals and objectives. For example, a cost-effectiveness analysis of a program to reduce cigarette smoking would estimate the dollars that had to be expended to convert each smoker into a nonsmoker.
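
To make the two calculations concrete, here is a minimal sketch of the smoking-cessation example. All figures (program cost, number of quitters, savings per quitter) are hypothetical and chosen only to illustrate the arithmetic; they are not drawn from this manual.

```python
# Hypothetical illustration of cost-benefit vs. cost-effectiveness arithmetic
# for a smoking-cessation program. All figures are invented for the example.

program_cost = 250_000.00        # total dollars spent operating the program
quitters = 120                   # participants who became nonsmokers
savings_per_quitter = 3_000.00   # estimated savings per quitter (medical care, work days)

# Cost-benefit analysis: compare dollars spent with dollars saved.
total_benefit = quitters * savings_per_quitter
net_benefit = total_benefit - program_cost
print(f"Net benefit: ${net_benefit:,.2f}")            # $110,000.00

# Cost-effectiveness analysis: dollars spent per smoker converted to a nonsmoker.
cost_per_quitter = program_cost / quitters
print(f"Cost per quitter: ${cost_per_quitter:,.2f}")  # $2,083.33
```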

cultural relevance — Demonstration that evaluation methods, procedures, and/or instruments are appropriate for the culture(s) to which they are applied. (Other terms include cultural competency and cultural sensitivity.)

culture — The shared values, traditions, norms, customs, arts, history, institutions, and experience of a group of people. The group may be identified by race, age, ethnicity, language, national origin, religion, or other social category or grouping.

data — Specific information or facts that are collected. A data item is usually a discrete or single measure. Examples of data items might include age, date of entry into program, or reading level. Sources of data may include case records, attendance records, referrals, assessments, interviews, and the like.

data analysis — The process of systematically applying statistical and logical techniques to describe, summarize, and compare data collected.

data collection instruments — Forms used to collect information for your evaluation. Forms may include interview instruments, intake forms, case logs, and attendance records. They may be developed specifically for your evaluation or modified from existing instruments. A professional evaluator can help select those that are most appropriate for your program.

data collection plan — A written document describing the specific procedures to be used to gather the evaluation information or data. The plan describes who collects the information, when and where it is collected, and how it is to be obtained.

database — An accumulation of information that has been systematically organized for easy access and analysis. Databases typically are computerized.

design — The overall plan and specification of the approach expected in a particular evaluation. The design describes how you plan to measure program components and how you plan to use the resulting measurements. A pre- and post-intervention design with or without a comparison or control group is the design needed to evaluate participant outcome objectives.

evaluation — A systematic method for collecting, analyzing, and using information to answer basic questions about your program. It helps to identify effective and ineffective services, practices, and approaches.

evaluator — An individual trained and experienced in designing and conducting an evaluation that uses tested and accepted research methodologies.

evaluation plan — A written document describing the overall approach or design you anticipate using to guide your evaluation. It includes what you plan to do, how you plan to do it, who will do it, when it will be done, and why the evaluation is being conducted. The evaluation plan serves as a guide for the evaluation.

evaluation team — The individuals, such as the outside evaluator, evaluation consultant, program manager, and program staff, who participate in planning and conducting the evaluation. Team members assist in developing the evaluation design, developing data collection instruments, collecting data, analyzing data, and writing the report.

exit data — Information gathered after an individual leaves your program. Exit data are often compared to baseline data. For example, a Head Start program may complete a developmental assessment of children at the end of the program year to measure a child's developmental progress by comparing developmental status at the beginning and end of the program year.

experimental group — A group of individuals receiving the treatment or intervention being evaluated or studied. Experimental groups (also known as treatment groups) are usually compared to a control or comparison group.

focus group — A group of 7-10 people convened for the purpose of obtaining perceptions or opinions, suggesting ideas, or recommending actions. A focus group is a method of collecting data for evaluation purposes.

formative evaluation — A type of process evaluation of new programs or services that focuses on collecting data on program operations so that needed changes or modifications can be made to the program in its early stages. Formative evaluations are used to provide feedback to staff about the program components that are working and those that need to be changed.

immediate outcomes — The changes in program participants' knowledge, attitudes, and behavior that occur early in the course of the program. They may occur at certain program points or at program completion. For example, acknowledging substance abuse problems is an immediate outcome.

impact evaluation — A type of outcome evaluation that focuses on the broad, longer-term impacts or results of a program. For example, an impact evaluation could show that a decrease in a community's overall infant mortality rate was the direct result of a program designed to provide early prenatal care.

in-kind service — Time or services donated to your program.

informed consent — A written agreement by program participants to voluntarily participate in an evaluation or study after having been advised of the purpose of the study, the type of information being collected, and how the information will be used.

instrument — A tool used to collect and organize information. Includes written instruments or measures, such as questionnaires, scales, and tests.

intermediate outcomes — Results or outcomes of a program or treatment that may require some time before they are realized. For example, part-time employment would be an intermediate outcome of a program designed to assist at-risk youth in becoming self-sufficient.

internal resources — An agency's or organization's resources including staff skills and experiences and any information you already have available through current program activities.

intervention — The specific services, activities, or products developed and implemented to change or improve program participants' knowledge, attitudes, behaviors, or awareness.

logic model — See the definition for program model.

management information system (MIS) — An information collection and analysis system, usually computerized, that facilitates access to program and participant information. It is usually designed and used for administrative purposes. The types of information typically included in an MIS are service delivery measures, such as sessions, contacts, or referrals; staff caseloads; client sociodemographic information; client status; and treatment outcomes. Many MISs can be adapted to meet evaluation requirements.

measurable terms — Specifying, through clear language, what it is you plan to do and how you plan to do it. Stating time periods for activities, "dosage" or frequency information (such as three 1-hour training sessions), and number of participants helps to make project activities measurable.

methodology — The way in which you find out information; a methodology describes how something will be (or was) done. The methodology includes the methods, procedures, and techniques used to collect and analyze information.

monitoring — The process of reviewing a program or activity to determine whether set standards or requirements are being met. Unlike evaluation, monitoring compares a program to a set standard or ideal state.

objective — A specific statement that explains how a program goal will be accomplished. For example, an objective of the goal to improve adult literacy could be to provide tutoring to participants on a weekly basis for 6 months. An objective is stated so that changes, in this case, an increase in a specific type of knowledge, can be measured and analyzed. Objectives are written using measurable terms and are time-limited.

outcome — A result of the program, services, or products you provide; outcomes refer to changes in participants' knowledge, attitudes, or behavior. They are referred to as participant outcomes in this manual.

outcome evaluation — Evaluation designed to assess the extent to which a program or intervention affects participants according to specific variables or data elements. These results are expected to be caused by program activities and are tested by comparing results across sample groups in the target population. Also known as impact or summative evaluation.

outcome objectives — The changes in knowledge, attitudes, awareness, or behavior that you expect to occur as a result of implementing your program component, service, or activity. Also known as participant outcome objectives.

outside evaluator — An evaluator not affiliated with your agency prior to the program evaluation. Also known as a third-party evaluator.

participant — An individual, family, agency, neighborhood, community, or State receiving or participating in services provided by your program. Also known as a client or target population group.

pilot test — Preliminary test or study of your program or evaluation activities to try out procedures and make any needed changes or adjustments. For example, an agency may pilot test new data collection instruments that were developed for the evaluation.

posttest — A test or measurement taken after a service or intervention takes place. It is compared with the results of a pretest to show evidence of the effects or changes as a result of the service or intervention being evaluated.

pretest — A test or measurement taken before a service or intervention begins. It is compared with the results of a posttest to show evidence of the effects of the service or intervention being evaluated. A pretest can be used to obtain baseline data.
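
As a rough illustration of how pretest and posttest results can be compared, the sketch below runs a paired t-test on hypothetical reading scores. The scores are invented, and the use of the scipy library is an assumption made for illustration; your evaluator may choose a different procedure.

```python
# Hypothetical pretest/posttest comparison using a paired t-test.
# Scores are invented for illustration only.
from scipy import stats

pretest  = [52, 61, 48, 70, 55, 63, 58, 49]   # baseline reading scores
posttest = [58, 66, 55, 74, 60, 70, 61, 57]   # scores after the intervention

t_statistic, p_value = stats.ttest_rel(posttest, pretest)
print(f"t = {t_statistic:.2f}, p = {p_value:.4f}")
# A small p-value suggests the pre/post change is unlikely to be due to chance alone.
```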

process evaluation — An evaluation that examines the extent to which a program is operating as intended by assessing ongoing program operations and whether the targeted population is being served. A process evaluation involves collecting data that describe program operations in detail, including the types and levels of services provided, the location of service delivery, staffing, the sociodemographic characteristics of participants, the community in which services are provided, and the linkages with collaborating agencies. A process evaluation helps program staff identify needed interventions and/or change program components to improve service delivery. It is also called formative or implementation evaluation.

program implementation objectives — What you plan to do in your program, component, or service. For example, providing therapeutic child care for 15 children and serving them 2 hot meals per day are program implementation objectives.

program model (or logic model) — A diagram showing the logic or rationale underlying your particular program. In other words, it is a picture of a program that shows what it is supposed to accomplish. A logic model describes the links between program objectives, program activities, and expected program outcomes.

qualitative data — Information that is difficult to measure, count, or express in numerical terms. For example, a participant's impression about the fairness of a program rule/requirement is qualitative data.

quantitative data — Information that can be expressed in numerical terms, counted or compared on a scale. For example, improvement in a child's reading level as measured by a reading test.

random assignment — The assignment of individuals in the pool of all potential participants to either the experimental (treatment) or control group in such a manner that their assignment to a group is determined entirely by chance.
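
A minimal sketch of random assignment, assuming the pool of potential participants is simply a list of identifiers and that a shuffled, even split is acceptable:

```python
# Minimal sketch: randomly assign a pool of applicants to treatment or control.
import random

applicants = ["A01", "A02", "A03", "A04", "A05", "A06", "A07", "A08"]  # hypothetical IDs

random.shuffle(applicants)                 # order is now determined entirely by chance
midpoint = len(applicants) // 2
treatment_group = applicants[:midpoint]    # receive the program being evaluated
control_group = applicants[midpoint:]      # do not receive the program

print("Treatment:", treatment_group)
print("Control:  ", control_group)
```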

reliability — The extent to which a measurement (such as an instrument or a data collection procedure) produces consistent results over repeated observations or administrations of the instrument under the same conditions each time. It is also important that reliability be maintained across data collectors; this is called interrater reliability.
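
One simple way to check interrater reliability is to have two data collectors rate the same cases and compute their percent agreement. The sketch below uses hypothetical ratings; more rigorous measures (such as Cohen's kappa, which corrects for chance agreement) may be preferred by your evaluator.

```python
# Hypothetical check of interrater reliability as simple percent agreement.
rater_a = ["yes", "no", "yes", "yes", "no", "yes", "no", "yes"]
rater_b = ["yes", "no", "yes", "no",  "no", "yes", "no", "yes"]

agreements = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
percent_agreement = agreements / len(rater_a) * 100
print(f"Percent agreement: {percent_agreement:.1f}%")   # 87.5% for these ratings
```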

sample — A subset of participants selected from the total study population. Samples can be random (selected by chance, such as every 6th individual on a waiting list) or nonrandom (selected purposefully, such as all 2-year-olds in a Head Start program).

standardized instruments — Assessments, inventories, questionnaires, or interviews that have been tested with a large number of individuals and are designed to be administered to program participants in a consistent manner. Results of tests with program participants can be compared to reported results of the tests used with other populations.

statistical procedures — The set of standards and rules, based in statistical theory, by which one can describe and evaluate what has occurred.

statistical test — Type of statistical procedure, such as a t-test or Z-score, that is applied to data to determine whether your results are statistically significant (i.e., the outcome is not likely to have resulted by chance alone).
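
As an illustration only, the sketch below applies an independent-samples t-test to invented outcome scores from a treatment group and a control group, using the scipy library (an assumption, not a tool prescribed by this manual), and checks the result against the common 0.05 significance threshold.

```python
# Hypothetical example: test whether treatment and control outcome scores differ
# by more than chance alone. Scores are invented for illustration.
from scipy import stats

treatment_scores = [74, 81, 69, 77, 85, 72, 79, 83]
control_scores   = [70, 68, 75, 66, 71, 73, 65, 69]

t_statistic, p_value = stats.ttest_ind(treatment_scores, control_scores)
significant = p_value < 0.05   # a common (but not universal) significance threshold
print(f"t = {t_statistic:.2f}, p = {p_value:.4f}, significant: {significant}")
```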

summative evaluation — A type of outcome evaluation that assesses the results or outcomes of a program. This type of evaluation is concerned with a program's overall effectiveness.

treatment group — Also called an experimental group, a treatment group is composed of a group of individuals receiving the services, products, or activities (interventions) that you are evaluating.

validity — The extent to which a measurement instrument or test accurately measures what it is supposed to measure. For example, a reading test is a valid measure of reading skills, but is not a valid measure of total language competency.

variables — Specific characteristics or attributes, such as behaviors, age, or test scores, that are expected to change or vary. For example, the level of adolescent drug use after being exposed to a drug prevention program is one variable that may be examined in an evaluation.
