Chapter 5: How Do You Prepare for an Evaluation?

When you build a house, you start by laying the foundation. If your foundation is not well constructed, your house will eventually develop cracks and you will be constantly patching them up. Preparing for an evaluation is like laying a foundation for a house. The effectiveness of an evaluation ultimately depends on how well you have planned it.

Begin preparing for the evaluation when you are planning the program, component, or service that you want to evaluate. This approach will ensure that the evaluation reflects the program's goals and objectives. The process of preparing for an evaluation should involve the outside evaluator or consultant (if you decide to hire one), all program staff who are to be part of the evaluation team, and anyone else in the agency who will be involved. The following steps are designed to help you build a strong foundation for your evaluation.

Step 1: Decide what to evaluate. Programs vary in size and scope. Some programs have multiple components, whereas others have only one or two. You can evaluate your entire program, one or two program components, or even one or two services or activities within a component. To a large extent, your decision about what to evaluate will depend on your available financial and staff resources. If your resources are limited, you may want to narrow the scope of your evaluation. It is better to conduct an effective evaluation of a single program component than to attempt an evaluation of several components or an entire program without sufficient resources.

Sometimes the decision about what to evaluate is made for you. This often occurs when funders require evaluation as a condition of a grant award. Funders may require evaluations of different types of programs including, but not limited to, demonstration projects. Evaluation of demonstration projects is particularly important to funders because the purpose of these projects is to develop and test effective program approaches and models.

At other times, you or your agency administrators will make the decision about what to evaluate. As a general rule, if you are planning to implement new programs, components, or services, you should also plan to evaluate them. This step will help you determine at the outset whether your new efforts are implemented successfully and are effective in attaining expected participant outcomes. It will also help identify areas for improvement.

If your program is already operational, you may decide you want to evaluate a particular service or component because you are unsure about its effectiveness with some of your participants. Or, you may want to evaluate your program because you believe it is effective and you want to obtain additional funding to continue or expand it.

Step 2: Build a model of your program. Whether you decide to evaluate an entire program, a single component, or a single service, you will need to build a model that clearly describes what you plan to do. A model will provide a structural framework for your evaluation. You will need to develop a clear picture of the particular program, component, or service to be evaluated so that everyone involved has a shared understanding of what they are evaluating. Building a model will help you with this task.

There are a variety of types of models. The model discussed in this chapter focuses on the program's implementation and participant outcome objectives. The model represents a series of logically related assumptions about the program's participant population and the changes you hope to bring about in that population as a result of your program. A sample completed program model and a worksheet that can be used to develop a model for your program appear at the end of this chapter. The program model includes the following features.

Assumptions about your target population. Your assumptions about your target population are the reasons why you decided to develop a program, program component, or service. These assumptions may be based on theory, your own experiences in working with the target population, or your review of existing research or program literature.

Using the worksheet, you would write your assumptions in column 1. Some examples of assumptions about a participant population that could underlie development of a program and potential responses to these assumptions include the following:

Assumption: Children of parents who abuse alcohol or other drugs are at high risk for parental abuse or neglect.

»Response: Develop a program to work with families to address substance abuse and child abuse problems simultaneously.

Assumption: Runaways and homeless youth are at high risk for abuse of alcohol and other drugs.

»Response: Develop a program that provides drug abuse intervention or prevention services to runaway and homeless youth.

Assumption: Families with multiple interpersonal, social, and economic problems need early intervention to prevent the development of child maltreatment, family violence, alcohol and other drug (AOD) problems, or all three.

»Response: Develop an early intervention program that provides comprehensive support services to at-risk families.

Assumption: Children from low-income families are at high risk for developmental, educational, and social problems.

»Response: Develop a program that enhances the developmental, educational, and social adjustment opportunities for children.

Assumption: Child protective services (CPS) workers do not have sufficient skills for working with families in which substance abuse and child maltreatment coexist.

»Response: Develop a training program that will expand the knowledge and skill base of CPS workers.

Program interventions (implementation objectives). The program's interventions or implementation objectives represent what you plan to do to respond to the problems identified in your assumptions. They include the specific services, activities, or products you plan to develop or implement. Using the worksheet, you can fill in your program implementation objectives in column 2. Some examples of implementation objectives that correspond to the above assumptions include the following:

  • Provide intensive in-home services to parents and children.
  • Provide drug abuse education services to runaway and homeless youth.
  • Provide in-home counseling and case management services to low-income mothers with infants.
  • Provide comprehensive child development services to children and families.
  • Provide multidisciplinary training to CPS workers.

Immediate outcomes (immediate participant outcome objectives). Immediate participant outcome objectives can be entered in column 3. These are the changes in participants' knowledge, attitudes, and behaviors that you expect to result from your intervention by the time participants complete the program. Examples of immediate outcomes linked to the above interventions include the following:

  • Parents will acknowledge their substance abuse problems.
  • Youth will demonstrate changes in their attitudes toward use of alcohol and other drugs.
  • Mothers will increase their knowledge of infant development and of effective and appropriate parenting practices.
  • Children will demonstrate improvements in their cognitive and interpersonal functioning.
  • CPS workers will increase their knowledge about the relationship between substance abuse and child maltreatment and about the appropriate service approach for substance-abusing parents.

Intermediate outcomes. Intermediate outcomes, entered in column 4, represent the changes in participants that you think will follow after immediate outcomes are achieved. Examples of intermediate outcomes include the following:

After parents acknowledge their AOD abuse problems, they will seek treatment to address this problem.

After parents receive treatment for AOD abuse, there will be a reduction in the incidence of child maltreatment.

After runaway and homeless youth change their attitudes toward AOD use, they will reduce this use.

After mothers have a greater understanding of child development and appropriate parenting practices, they will improve their parenting practices with their infants.

After children demonstrate improvements in their cognitive and interpersonal functioning, they will increase their ability to function at an age-appropriate level in a particular setting.

After CPS workers increase their knowledge about working with families in which AOD abuse and child maltreatment coexist, they will improve their skills for working with these families.

Anticipated program impact. The anticipated program impact, specified in the last column of the model, represents your expectations about the long-term effects of your program on participants or the community. These expectations are derived logically from your immediate and intermediate outcomes. Examples of anticipated program impact include the following:

After runaway and homeless youth reduce their AOD abuse, they will seek services designed to help them resolve other problems they may have.

After mothers of infants become more effective parents, the need for out-of-home placements for their children will be reduced.

After CPS workers improve their skills for working with families in which AOD abuse and child maltreatment coexist, collaboration and integration of services between the child welfare and the substance abuse treatment systems will increase.

Program models are not difficult to construct, and they lay the foundation for your evaluation by clearly identifying your program implementation and participant outcome objectives. These models can then be stated in measurable terms for evaluation purposes.

Step 3: State your program implementation and participant outcome objectives in measurable terms. The program model serves as a basis for identifying your program's implementation and participant outcome objectives. Initially, you should focus your evaluation on assessing whether implementation objectives and immediate participant outcome objectives were attained. This task will allow you to assess whether it is worthwhile to commit additional resources to evaluating attainment of intermediate and final or long-term outcome objectives.

Remember, every program, component, or service can be characterized by two types of objectives — implementation objectives and outcome objectives. Both types of objectives will need to be stated in measurable terms.

Often, program managers believe that stating objectives in measurable terms means that they have to establish performance standards or some kind of arbitrary "measure" that the program must attain. This is not correct. Stating objectives in measurable terms simply means that you describe what you plan to do in your program and how you expect the participants to change in a way that will allow you to measure these objectives. From this perspective, measurement can involve anything from counting the number of services (or determining the duration of services) to using a standardized test that will result in a quantifiable score. Some examples of stating objectives in measurable terms are provided below.

Stating implementation objectives in measurable terms. Implementation objectives typically address the following:

What you plan to do — The services/activities you plan to provide or the products you plan to develop, and the duration and intensity of the services or activities.

Who will do it — What the staffing arrangements will be; the characteristics and qualifications of the program staff who will deliver the services, conduct the training, or develop the products; and how these individuals will be recruited and hired.

Who you plan to reach and how many — A description of the participant population for the program; the number of participants to be reached during a specific time frame; and how you plan to recruit or reach the participants.

These objectives are not difficult to state in measurable terms. You simply need to be specific about your program's operations. The following example demonstrates how general implementation objectives can be transformed into measurable objectives.

General objective: Provide substance abuse prevention and intervention services to runaway youth.

»Measurable objectives:

What you plan to do — Provide eight drug abuse education class sessions per year, with each session lasting 2 weeks and consisting of 2-hour classes held 5 days a week.

Develop a curriculum that will include at least two self-esteem building activities, four presentations by youth who are in recovery, two field trips to recreational facilities, four role playing activities involving parent-child interactions, and one educational lecture on drugs and their effects.

Who will do it — Classes will be conducted by two counselors. One will be a certified addictions counselor, and the other will have at least 2 years of experience working with runaway and homeless youth.

The curriculum for the classes will be developed by the two counselors in conjunction with the clinical director and an outside consultant who is an expert in the area of AOD abuse prevention and intervention.

Counselors will be recruited from current agency staff and will be supervised by the agency clinical director, who will provide 3 hours of supervision each week.

Who you plan to reach and how many — Classes will be provided to all youth residing in the shelter during the time of the classes (from 8 to 14 youth for any given session) and to youth who are seeking crisis intervention services from the youth services agency (approximately 6 youth for each session). All youth will be between 13 and 17 years old. Youth seeking crisis intervention services will be recruited to the classes by the intake counselors and the clinical director.

A blank worksheet that can be used to state your implementation objectives in measurable terms is provided at the end of this chapter. From your description of the specific characteristics for each objective, the evaluation will be able to assess, on an ongoing basis, whether the objectives were attained, the types of problems encountered during program implementation, and the areas where changes may need to be made. Continuing the example above, you may discover that the first class session included only two youth from the crisis intervention services. You will then need to assess your recruitment process, asking the following questions:

How many youth sought crisis intervention services during that time frame?

How many youth agreed to participate?

What barriers were encountered to participation in the classes (such as youth or parent reluctance to give permission, lack of transportation, or lack of interest among youth)?

Based on your answers to these questions, you may decide to revise your recruitment strategies, train crisis intervention counselors to be more effective in recruiting youth, visit the family to encourage the youth's participation, or offer transportation to youth to make it easier for them to attend the classes.

Stating participant outcome objectives in measurable terms. This process requires you to be specific about the changes in knowledge, attitudes, awareness, or behavior that you expect to occur as a result of participation in your program. One way to be specific about these changes is to ask yourself the following question:

How will we know that the expected changes occurred?

To answer this question, you will have to identify the evidence needed to demonstrate that your participants have changed. The following examples demonstrate how participant outcome objectives may be stated in measurable terms. A worksheet for defining measurable participant outcome objectives appears at the end of this chapter.

General objective: We expect to improve the parenting skills of program participants.

»Measurable objective: Parents participating in the program will demonstrate significant increases in their scores on an instrument that measures parenting skills from intake to completion of the parenting education classes.

General objective: We expect to reduce the use of alcohol and other drugs by youth participating in the substance abuse intervention program.

»Measurable objective: Youth will indicate significant decreases in their scores on an instrument that measures use of alcohol and other drugs from intake to completion of the program.

General objective: We expect to improve CPS workers' ability to work effectively with families in which child maltreatment and parental substance abuse problems coexist.

»Measurable objective: CPS workers will demonstrate significant increases in their scores on instruments that measure knowledge of substance abuse and child maltreatment issues and skills for working with these families from before to after training.

General objective: We expect to reduce the risk of child maltreatment for children in the families served.

»Measurable objective: Families served by the program will be significantly less likely than a similar group of families to be reported for child maltreatment for 6 months after they complete the program.

Step 4: Identify the context for your evaluation. Part of planning for an evaluation requires understanding the context in which the evaluation will take place. Think again about building a house. Before you can design your house, you need to know something about your lot. If your lot is on a hill, you must consider the slope of the hill when you design your house. If there are numerous trees on the lot, you must design your house to accommodate the trees.

Similarly, program evaluations do not take place in a vacuum, and the context of an evaluation must be considered before the evaluation can be planned and designed. Although many contextual factors can affect your evaluation, the most common factors pertain to your agency, your staff, and your participant population.

The agency context. The characteristics of an agency implementing a program affect both the program and the evaluation. The aspects of your agency that need to be considered in preparing for your evaluation include the following:

The agency's evaluation-related resources. Does the agency have a management information system in place that can be used to collect data on participants and services? Does the agency have an advisory board that includes members who have experience evaluating programs? Does the agency have discretionary funds in the budget that can be used for an evaluation?

The agency's history of conducting program evaluations. Has the agency evaluated its programs before? If yes, was the experience a negative or positive one? If it was negative, what were the problems encountered and how can they be avoided in the current evaluation? Are the designs of previous agency evaluations appropriate for the evaluation you are currently planning?

If the agency has a history of program evaluation, you may be able to use the previous evaluation designs and methodology for your current evaluation. Review these with your outside evaluator or consultant to determine whether they are applicable to your current needs. If they are applicable, this will save you a great deal of time and money.

The program's relationship to other agency activities. Is the program you want to evaluate integrated into other agency activities, or does it function as a separate entity? What are the relationships between the program and other agency activities? If it is integrated, how will you evaluate it apart from other agency activities? This can be a complicated process. If your evaluation team does not include someone who is an experienced evaluator, you may need assistance from an outside consultant to help you with this task.

The staff context. The support and full participation of program staff in an evaluation are critical to its success. Sometimes evaluations are not successfully implemented because program staff who are responsible for data collection do not consistently administer or complete evaluation forms, follow the directions of the evaluation team, or make concerted efforts to track participants after they leave the program. The usual reason for staff-related evaluation problems is that staff were not adequately prepared for the evaluation or given the opportunity to participate in its planning and development. Contextual issues relevant to program staff include the following:

The staff's experiences in participating in program evaluations. Have your staff participated in evaluations prior to this one? If yes, was the experience a positive or negative one? If no, how much do they know about the evaluation process and how much training will they need to participate as full partners in the evaluation?

If staff have had negative experiences with evaluation, you will need to work with them to emphasize the positive aspects of evaluation and to demonstrate how this evaluation will be different from prior ones. All staff will need careful training if they are to be involved in any evaluation activities, and this training should be reinforced throughout the duration of the evaluation.

The staff's attitudes toward evaluation. Do your staff have positive or negative attitudes toward evaluation? If negative, what can be done to make them more positive? How can they be encouraged to support and participate fully in the evaluation?

Negative attitudes can sometimes be counteracted when program managers demonstrate enthusiasm for the evaluation and when evaluation activities are integrated with program activities. It may also be helpful to show staff how evaluation instruments can double as assessment tools, helping them develop treatment plans or needs assessments for individual participants.

The staff's knowledge about evaluation. Are your staff knowledgeable about the practices and procedures required for a program evaluation? Do any staff members have a background in conducting evaluations that could help you with the process?

Staff who are knowledgeable about evaluation practices and procedures can be a significant asset to an evaluation. They can assume some of the evaluation tasks and help train and supervise other staff on evaluation activities.

The participant population context. Before designing an evaluation, it is very important to understand the characteristics of your participant population. The primary issue relevant to the participant population context concerns the potential diversity of your program population. For example, is the program population similar or diverse with respect to age, gender, ethnicity, socioeconomic status, and literacy levels? If the population is diverse, how can the evaluation address this diversity?

Participant diversity can present a significant challenge to an evaluation effort. Instruments and methods that are appropriate for some participants may not be appropriate for others. For example, written questionnaires may be easily completed by some participants, but others may not have adequate literacy levels. Similarly, face-to-face interviews may be appropriate for some of the cultural groups the program serves, but not for others.

If you serve a diverse population of participants, you may need to be flexible in your data collection methods. You may design an instrument, for example, that can be administered either as a written instrument or as an interview instrument. You also may need to have your instruments translated into different languages. However, it is important to remember that just translating an instrument does not necessarily mean that it will be culturally appropriate.

If you serve a particular cultural group, you may need to select the individuals who are to collect the evaluation information from the same cultural or ethnic group as your participants. If you are concerned about the literacy levels of your population, you will need to pilot test your instruments to make sure that participants understand what is being asked of them. More information related to pilot tests appears in Chapter 7.

Identifying contextual issues is essential to building a solid foundation for your evaluation. During this process, you will want to involve as many members of your expected evaluation team as possible. The decisions you make about how to address these contextual issues in your evaluation will be fundamental to ensuring that the evaluation operates successfully and that its design and methodology are appropriate for your participant population.

After you have completed these initial steps, it is time to "frame" your house. To frame a house, you need blueprints that detail the plans for the house. The blueprint for an evaluation is the evaluation plan. Chapter 6 discusses the elements that go into building this plan.
