
Chapter 6: What Should You Include in an Evaluation Plan?

If you decided to build a house, you probably would hire an architect to design the house and draw up the plans. Although it is possible to build a house without hiring an architect, this professional knows what is and is not structurally possible and understands the complex issues relevant to setting the foundation and placing the pipes, ducts, and electrical wires. An architect also knows what materials to use in various parts of the house and the types of materials that are best. However, an architect cannot design the house for you unless you tell him or her what you want.

An evaluation plan is a lot like an architect's plans for a house. It is a written document that specifies the evaluation design and details the practices and procedures to use to conduct the evaluation. Just as you would have an architect develop the plans for your house, it is a good idea to have an experienced evaluator develop the plans for your evaluation. Similarly, just as an architect cannot design your house without input from you, an experienced evaluator cannot develop an effective evaluation plan without assistance from you and your staff. The evaluator has the technical expertise, but you and your staff have the program expertise. Both are necessary for a useful evaluation plan.

If you plan to hire an outside evaluator to head your evaluation team, you may want to specify developing the evaluation plan as one of the evaluator's responsibilities, with assistance from you and program staff. If you plan to conduct an in-house evaluation and do not have someone on your evaluation team who is an experienced evaluator, this is a critical point at which to seek assistance from an evaluation consultant. The consultant can help you prepare the evaluation plan to ensure that your design and methodology are technically correct and appropriate for answering the evaluation questions.

This chapter provides information about the necessary ingredients to include in an evaluation plan. This information will help you:

  • Work with an experienced evaluator (either an outside evaluator or someone within your agency) to develop the plan.
  • Review the plan that an outside evaluator has developed to make sure all the ingredients are included.
  • Understand the kinds of things that are required in an evaluation and why your outside evaluator or evaluation consultant has chosen a specific design or methodology.

An evaluation plan should be developed at least 2 to 3 months before the time you expect to begin the evaluation so that you have ample time to have the plan reviewed, make any necessary changes, and test out information collection procedures and instruments before collecting data.

Do not begin collecting evaluation information until the plan is completed and the instruments have been pilot-tested. A sample evaluation plan outline that may be used as a guide appears at the end of this chapter. The major sections of the outline are discussed below.

 

Section I. The evaluation framework

This section can be used to present the program model (discussed in Chapter 5), program objectives, evaluation questions, and the timeframe for the evaluation (when collection of evaluation information will begin and end). It also should include a discussion of the context for the evaluation, particularly the aspects of the agency, program staff, and participants that may affect the evaluation (also discussed in Chapter 5). If an outside evaluator is preparing the plan, the evaluator will need your help to prepare this section.

 

Section II. Evaluating implementation objectives — procedures and methods

This section should provide detailed descriptions of the practices and procedures that will be used to answer evaluation questions pertaining to your program's implementation objectives. (Are implementation objectives being attained and, if not, why not? What barriers were encountered? What has facilitated attainment of objectives?)

 

Types of information needed. In an evaluation, information is often referred to as data. Many people think that the term "data" refers only to numerical information. In fact, data can be facts, statistics, or any other items of information. Therefore, any information that is collected about your program or participants can be considered evaluation data.

The types of information needed will be guided by the objective you assess. For example, when the objective refers to what you plan to do, you must collect information on the types of services, activities, or educational/training products that are developed and implemented; who received them; and their duration and intensity.

When the objective pertains to who will do it, you must collect information on the characteristics of program staff (including their background and experience), how they were recruited and hired, their job descriptions, the training they received to perform their jobs, and the general staffing and supervisory arrangements for the program.

When the objective concerns who will participate, you must collect information about the characteristics of the participants, the numbers of participants, how they were recruited, barriers encountered in the recruitment process, and factors that facilitated recruitment.

Sources of necessary information. This refers to where, or from whom, you will obtain evaluation information. Again, the selection of sources will be guided by the objective you are assessing. For example:

  • Information on services can come from program records or from interviews with program staff.
  • Information on staff can come from program records, interviews with agency administrators, staff themselves, and program managers.
  • Information on participants and recruitment strategies can come from program records and interviews with program staff and administrators.
  • Information about barriers and facilitators to implementing the program can come from interviews with relevant program personnel.

 

This section of the plan also should include a discussion of how confidentiality of information will be maintained. You will need to develop participant consent forms that include a description of the evaluation objectives and how the information will be used. A sample participant consent form is provided at the end of this chapter.

How sources of information will be selected. If your program has a large number of staff members or participants, the time and cost of the evaluation can be reduced by including only a sample of these staff or participants as sources for evaluation information. If you decide to sample, you will need the assistance of an experienced evaluator to ensure that the sampling procedures result in a group of participants or staff that are appropriate for your evaluation objectives. Sampling is a complicated process, and if you do not sample correctly you run the risk of not being able to generalize your evaluation results to your participant population as a whole.

There are a variety of methods for sampling your sources; a brief sketch illustrating random selection appears after this list.

  • You can sample by identifying a specific timeframe for collecting evaluation-related information and including only those participants who were served during that timeframe.
  • You can sample by randomly selecting the participants (or staff) to be used in the evaluation. For example, you might assign case numbers to participants and include only the even-numbered cases in your evaluation.
  • You can sample based on specific criteria, such as length of time with the program (for staff) or characteristics of participants.
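
To make the random-selection approach concrete, here is a minimal sketch in Python. The case numbers and sample size are hypothetical, and an experienced evaluator should still confirm that whatever procedure you use fits your evaluation objectives.

    import random

    # Hypothetical list of participant case numbers -- substitute your own records.
    case_ids = list(range(1, 201))

    # Approach A: keep only the even-numbered cases (a simple systematic sample).
    even_numbered_sample = [case_id for case_id in case_ids if case_id % 2 == 0]

    # Approach B: draw a simple random sample of a fixed size.
    random.seed(42)  # fixed seed so the draw can be reproduced and documented
    random_sample = random.sample(case_ids, k=50)

    print(len(even_numbered_sample), "even-numbered cases;", len(random_sample), "randomly drawn cases")

Either approach yields a well-defined subset of cases; the key is to write the selection rule into the evaluation plan so the sample can be reproduced.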

Methods for collecting information. For each implementation objective you are assessing, the evaluation plan must specify how information will be collected (the instruments and procedures) and who will collect it. To the extent possible, collection of evaluation information should be integrated into program operations. For example, in direct services programs, the program's intake, assessment, and termination forms could be designed so that they are useful for evaluation purposes as well as for program purposes.

In training programs, the registration forms for participants can be used to collect evaluation-related information as well as provide information relevant to conducting the training. If your program uses a management information system (MIS) to track services and participants, it is possible that it will incorporate much of the information that you need for your evaluation.

There are a number of methods for collecting information, including structured and open-ended interviews, paper and pencil inventories or questionnaires, observations, and systematic reviews of program or agency records or documents. The methods you select will depend upon the following:

  • The evidence you need to establish that your objectives were attained
  • Your sources
  • Your available resources

Chapter 7 provides more information on these methods. The instruments or forms that you will use to collect evaluation information should be developed or selected as part of the evaluation plan. Do not begin an evaluation until all of the data collection instruments are selected or developed. Again, instrument development or selection can be a complex process and your evaluation team may need assistance from an experienced evaluator for this task.

Confidentiality. An important part of implementing an evaluation is ensuring that your participants are aware of what you are doing and that they are cooperating with the evaluation voluntarily. People should be allowed their privacy, and this means they have the right to refuse to give any personal or family information, the right to refuse to answer any questions, and even the right to refuse to be a part of the evaluation at all.

Explain the evaluation activities to participants and describe what will be required of them as part of the evaluation effort. Tell them that their names will not be used and that the information they provide will not be linked to them. Then, have them sign an informed consent form documenting that they understand the scope of the evaluation, know what is expected of them, agree (or decline) to participate, and understand that they have the right to refuse to give any information. They should also understand that they may drop out of the evaluation at any time without losing any program services. If children are involved, you must get the permission of their parents or guardians for the children's participation in the evaluation.

A sample informed consent form appears at the end of this chapter. Sometimes programs will have participants complete this form at the same time that they complete forms agreeing to participate in the program, or agreeing to let their children participate. This reduces the time needed for the evaluator to secure informed consent.

Timeframe for collecting information. Although you will have already specified a general timeframe for the evaluation, you will need to specify a timeframe for collecting data relevant to each implementation objective. Times for data collection will again be guided by the objective under assessment. You should also consider collecting evaluation information at the same point for all participants; for example, after they have been in the program for 6 months.

Methods for analyzing information. This section of an evaluation plan describes the practices and procedures for use in analyzing the evaluation information. For assessing program implementation, the analyses will be primarily descriptive and may involve tabulating frequencies (of services and participant characteristics) and classifying narrative information into meaningful categories, such as types of barriers encountered, strategies for overcoming barriers, and types of facilitating factors. An experienced evaluator can help your evaluation team design an analysis plan that will maximize the benefits of the evaluation for the program and for program staff. More information on analyzing program implementation information is provided in Chapter 8.
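
As a simple illustration of the descriptive analyses described above, the Python sketch below tabulates hypothetical service records and sorts a few narrative notes into barrier categories using a made-up keyword scheme; it is only a sketch of the idea, not a prescribed analysis.

    from collections import Counter

    # Hypothetical service records -- one entry per service contact.
    services_delivered = [
        "parenting class", "home visit", "parenting class",
        "case management", "home visit", "parenting class",
    ]

    # Tabulate how often each type of service was delivered.
    for service, count in Counter(services_delivered).most_common():
        print(service, count)

    # Classify narrative notes into broad categories with a simple keyword scheme.
    keyword_to_category = {"transportation": "access barrier",
                           "child care": "access barrier",
                           "turnover": "staffing barrier"}
    notes = ["Families lacked transportation to evening sessions",
             "Staff turnover delayed the second cohort"]
    for note in notes:
        categories = {cat for kw, cat in keyword_to_category.items() if kw in note.lower()}
        print(note, "->", sorted(categories) or ["uncategorized"])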

 

 

Section III. Evaluating participant outcome objectives

The practices and procedures for evaluating attainment of participant outcome objectives are similar to those for evaluating implementation objectives. However, this part of your evaluation plan will need to address a few additional issues.

Selecting your evaluation design. A plan for evaluating participant outcome objectives must include a description of the evaluation design. Again, the assistance of an experienced evaluator (either an outside evaluator, consultant, or someone within your agency) is critical at this juncture.

The evaluation design must allow you to answer these basic questions about your participants:

  • Did program participants demonstrate changes in knowledge, attitudes, behaviors, or awareness?
  • Were the changes the result of the program's interventions?

 

Two commonly used evaluation designs are:

  • Pre-intervention and post-intervention assessments
  • Pre-intervention and post-intervention assessments using a comparison or control group

 

A pre- and post-intervention design involves collecting information only on program participants. This information is collected at least twice: once before participants begin the program and again either immediately after or some time after they complete or leave the program. You can collect outcome information as often as you like after participants enter the program, but you must collect information on participants before they enter the program. This is called baseline information, and it is essential for demonstrating that a change occurred.

If you are implementing an education or training program, this type of design can be effective for evaluating immediate changes in participants' knowledge and attitudes. In these types of programs, you can assess participants' knowledge and attitudes prior to the training and immediately after training with some degree of certainty that any observed changes resulted from your interventions.

However, if you want to assess longer-term outcomes of training and education programs, or any outcomes of service delivery programs, the pre-intervention and post-intervention design by itself is not recommended. Collecting information only on program participants does not allow you to answer the question: Were participant changes the result of program interventions? The changes may have occurred as a result of other interventions, or they might have occurred without any intervention at all.
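
As an illustration of the simple pre- and post-intervention comparison described above, the sketch below uses made-up knowledge scores for the same participants before and after a training. The paired t-test from SciPy is one common way to check whether the average change is larger than chance alone would produce; it is shown only as an example, not as the required analysis.

    from statistics import mean
    from scipy import stats  # used here only for a paired t-test

    # Hypothetical knowledge scores for the same eight participants, before and after training.
    pre_scores  = [12, 15,  9, 14, 11, 13, 10, 16]
    post_scores = [16, 18, 12, 17, 15, 17, 13, 19]

    changes = [post - pre for pre, post in zip(pre_scores, post_scores)]
    print("Average change in score:", mean(changes))

    # Paired t-test: is the average change reliably different from zero?
    t_statistic, p_value = stats.ttest_rel(post_scores, pre_scores)
    print("t =", round(t_statistic, 2), ", p =", round(p_value, 4))

Even a clear pre-to-post gain, however, cannot by itself show that the program caused the change; that is the role of the comparison and control group designs discussed next.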

To be able to attribute participant changes to your program's intervention, you need to use a pre- and post-intervention design that incorporates a comparison or control group. In this design, two groups of individuals are included in your evaluation:

  • The treatment group (individuals who participate in your program).
  • The nontreatment group (individuals who are similar to those in the treatment group but who do not receive the same services as the treatment group).

The nontreatment group is called a control group if all eligible program participants are randomly assigned to the treatment and nontreatment groups. Random assignment means that members of both groups can be assumed to be similar with respect to all key characteristics except program participation. Thus, potential sources of bias are "controlled." A comparison group is a nontreatment group whose members are not randomly assigned. A comparison group could be families from another program, children from another school, or former program participants.
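
A minimal sketch of how random assignment might be carried out appears below; the participant names and even split are hypothetical, and in practice the evaluator would document the procedure (and any random seed) so the assignment can be audited.

    import random

    # Hypothetical pool of eligible participants.
    eligible = ["participant_" + str(n) for n in range(1, 41)]

    random.seed(7)          # fixed seed so the assignment can be reproduced
    random.shuffle(eligible)

    midpoint = len(eligible) // 2
    treatment_group = eligible[:midpoint]   # receive program services now
    control_group   = eligible[midpoint:]   # receive services later or elsewhere

    print(len(treatment_group), "assigned to treatment;", len(control_group), "assigned to control")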

Using a control group greatly strengthens your evaluation, but there are barriers to implementing this design option. Program staff may view random assignment as unethical because it deprives eligible participants of needed services. As a result, staff sometimes will prioritize eligible participants rather than use random assignment, or they may simply refuse to assign individuals to the control group. Staff from other agencies may also feel that random assignment is unethical and may refuse to refer individuals to your program.

To avoid these potential barriers, educate staff from your program and from other agencies in your community about the benefits of the random assignment process. No one would argue with the belief that it is important to provide services to individuals who need them. However, it is also important to find out whether those services actually work. The random assignment process helps you determine whether or not your program's services are having the anticipated effect on participants. Staff from your program and from other agencies also must be informed that random assignment does not mean that control group members cannot receive any services or training. They may participate in the program after the evaluation data have been collected, or they may receive other types of services or training.

Another potential barrier to using a control group is the number of program participants that are recruited. If you find that you are recruiting fewer participants than you originally anticipated, you may not want to randomly assign participants to a control group because doing so would reduce the size of your service population.

A final barrier is the difficulty of enlisting control group members in the evaluation process. Because control group members have not participated in the program, they are unlikely to have an interest in the evaluation and may refuse to be interviewed or to complete a questionnaire. Some evaluation efforts set aside funds to provide money or other incentives to encourage both control group and treatment group members to participate in the evaluation. Although there is some potential for bias in this situation, it is usually outweighed by the need to collect information from control group members.

If you are implementing a program in which random assignment of participants to treatment and control groups is not possible, you will need to identify a group of individuals or families who are similar to those participating in your program and whom you can assess as part of your evaluation. This group is called a comparison group. Like members of a control group, members of a comparison group may receive other types of services or no services at all. Although using a comparison group means that the program does not have to deny services to eligible participants, you cannot be sure that the two groups are completely similar. You may have to collect enough information at baseline to try to control for potential differences as part of your statistical analyses.
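
One common way to control statistically for baseline differences between a treatment group and a comparison group is to include the baseline measure as a covariate in a regression model. The sketch below uses hypothetical column names and the pandas and statsmodels libraries purely as an illustration of the idea; your evaluator will choose the model that fits your actual data.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data: one row per person in the treatment (1) or comparison (0) group.
    data = pd.DataFrame({
        "post_score":     [18, 20, 15, 22, 14, 16, 13, 17],
        "baseline_score": [12, 14, 11, 15, 12, 13, 10, 14],
        "in_program":     [ 1,  1,  1,  1,  0,  0,  0,  0],
    })

    # Regress the outcome on program participation while adjusting for the baseline score.
    model = smf.ols("post_score ~ in_program + baseline_score", data=data).fit()
    print(model.params["in_program"])   # estimated program effect, adjusted for baseline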

Comparison group members may be participants in other programs provided by your agency or in programs offered by other agencies. If you plan to use a comparison group, you must make sure that this group will be available for assessments during the time frame of your evaluation. Also, be aware that comparison group members, like control group members, are difficult to enlist in an evaluation. The evaluation plan will need to specify strategies for encouraging nontreatment group members to take part in the evaluation.

Pilot-testing information collection instruments. Your plans for evaluating participant outcome objectives will need to include a discussion of plans for pilot-testing and revising information collection instruments. Chapter 7 provides information on pilot-testing instruments.

Analyzing participant outcome information. The plan for evaluating participant outcomes must include a comprehensive data analysis plan. The analyses must be structured to answer the questions about whether change occurred and whether these changes can be attributed to the program. A more detailed discussion on analyzing information on participant outcomes is provided in Chapter 8.

 

Section IV. Procedures for managing and monitoring the evaluation

This section of the evaluation plan can be used to describe the practices and procedures you expect to use to manage the evaluation. If staff are to be responsible for data collection, you will need to describe how they will be trained and monitored. You may want to develop a data collection manual that staff can use. This will ensure consistency in information collection and will be useful for staff who are hired after the evaluation begins. Chapter 7 discusses various types of evaluation monitoring activities.

This final section of the evaluation plan also should include a discussion of how changes in program operations will be handled in the evaluation. For example, if a particular service or program component is discontinued or added to the program, you will need to have procedures for documenting the time that this change occurred, the reasons for the change, and whether particular participants were involved in the program prior to or after the change. This will help determine whether the change had any impact on attainment of expected outcomes.

Once you and your experienced evaluator have completed the evaluation plan, it is a good idea to have it reviewed by selected individuals for their comments and suggestions. Potential reviewers include the following:

  • Agency administrators who can determine whether the evaluation plan is consistent with the agency's resources and evaluation objectives.
  • Program staff who can provide feedback on whether the evaluation will involve an excessive burden for them and whether it is appropriate for program participants.
  • Advisory board members who can assess whether the evaluation will provide the type of information most important to know.
  • Participants and community members who can determine if the evaluation instruments and procedures are culturally sensitive and appropriate.

After the evaluation plan is complete and the instruments have been pilot-tested, you are ready to begin collecting evaluation information. Because this process is so critical to the success of an evaluation, the major issues pertaining to information collection are discussed in more detail in the following chapter.

 

Sample Outline for Evaluation Plan

 

  1. Evaluation framework
    1. What you are going to evaluate
      1. Program model (assumptions about target population, interventions, immediate outcomes, intermediate outcomes, and final outcomes)
      2. Program implementation objectives (stated in general and then measurable terms)
        1. What you plan to do and how
        2. Who will do it
        3. Participant population and recruitment strategies
      3. Participant outcome objectives (stated in general and then measurable terms)
      4. Context for the evaluation
    2. Questions to be addressed in the evaluation
      1. Are implementation objectives being attained? If not, why (that is, what barriers or problems have been encountered)? What kinds of things facilitated implementation?
      2. Are participant outcome objectives being attained? If not, why (that is, what barriers or problems have been encountered)? What kinds of things facilitated attainment of participant outcomes?
        1. Do participant outcomes vary as a function of program features? (That is, which aspects of the program are most predictive of expected outcomes?)
        2. Do participant outcomes vary as a function of characteristics of the participants or staff?
    3. Timeframe for the evaluation
      1. When data collection will begin and end
      2. How and why timeframe was selected
  2. Evaluating implementation objectives — procedures and methods
    (question 1: Are implementation objectives being attained, and if not, why not?)
    1. Objective 1 (state objective in measurable terms)
      1. Type of information needed to determine if objective 1 is being attained and to assess barriers and facilitators
      2. Sources of information (that is, where you plan to get the information including staff, participants, program documents). Be sure to include your plans for maintaining confidentiality of the information obtained during the evaluation
      3. How sources of information were selected
      4. Time frame for collecting information
      5. Methods for collecting the information (such as interviews, paper and pencil instruments, observations, records reviews)
      6. Methods for analyzing the information to determine whether the objective was attained (that is, tabulation of frequencies, assessment of relationships between or among variables)
    2. Repeat this information for each implementation objective being assessed in the evaluation
  3. Evaluating participant outcome objectives — procedures and methods
    (question 2: Are participant outcome objectives being attained and if not, why not?)
    1. Evaluation design
    2. Objective 1 (state outcome objective in measurable terms)
      1. Types of information needed to determine if objective 1 is being attained (that is, what evidence will you use to demonstrate the change?)
      2. Methods of collecting that information (for example, questionnaires, observations, surveys, interviews) and plans for pilot-testing information collection methods
      3. Sources of information (such as program staff, participants, agency staff, program managers, etc.) and sampling plan, if relevant
      4. Timeframe for collecting information
      5. Methods for analyzing the information to determine whether the objective was attained (i.e., tabulation of frequencies, assessment of relationships between or among variables using statistical tests)
    3. Repeat this information for each participant outcome objective being assessed in the evaluation
  4. Procedures for managing and monitoring the evaluation
    1. Procedures for training staff to collect evaluation-related information
    2. Procedures for conducting quality control checks of the information collection process
    3. Timelines for collecting, analyzing, and reporting information, including procedures for providing evaluation-related feedback to program managers and staff

Sample Informed Consent Form

We would like you to participate in the Evaluation of [program name]. Your participation is important to us and will help us assess the effectiveness of the program. As a participant in [program name], you will be asked to [complete a questionnaire, answer questions in an interview, or other task].

We will keep all of your answers confidential. Your name will never be included in any reports and none of your answers will be linked to you in any way. The information that you provide will be combined with information from everyone else participating in the study.

[If information/data collection includes questions relevant to behaviors such as child abuse, drug abuse, or suicidal behaviors, the program should make clear its potential legal obligation to report this information — and that confidentiality may be broken in these cases. Make sure that you know what your legal reporting requirements are before you begin your evaluation.]

You do not have to participate in the evaluation. Even if you agree to participate now, you may stop participating at any time or refuse to answer any question. Refusing to be part of the evaluation will not affect your participation or the services you receive in [program name].

If you have any questions about the study you may call [name and telephone number of evaluator, program manager or community advocate].

By signing below, you confirm that this form has been explained to you and that you understand it.

Please Check One:

 [ ] AGREE TO PARTICIPATE

 [ ] DO NOT AGREE TO PARTICIPATE

Signed: __________________________________________
Participant or Parent/Guardian

Date: __________________________________________
