
Chapter 8: How Do You Make Sense of Evaluation Information?

For evaluation information to be useful, it must be analyzed and interpreted. Many program managers and staff are intimidated by this activity, believing that it is best left to an expert. This is only partially true. If your evaluation team does not include someone who is experienced in analyzing qualitative and quantitative evaluation data, you will need to seek the assistance of an outside consultant for this task. However, it is important for you and all other members of the evaluation team to participate in the analysis activities. This is the only way to ensure that the analyses will answer your evaluation questions, not ones that an outside consultant may want to answer.

Think again about building a house. You may look at a set of blueprints and see only a lot of lines, numbers, and arrows. But when a builder looks at the blueprints, this person sees exactly what needs to be done to build the house and understands all of the technical requirements. This is why most people hire an expert to build one. However, hiring an expert builder does not mean that you do not need to participate in the building process. You need to make sure that the house the builder is working on is the house you want, not one that the builder wants.

This chapter will not tell you how to analyze evaluation data. Instead, it provides some basic information about different procedures for analyzing evaluation data to help you understand and participate more fully in this process. There are many ways to analyze and interpret evaluation information. The methods discussed in this chapter are not the only methods one can use. Whatever methods the evaluation team decides to use, it is important to realize that analysis procedures must be guided by the evaluation questions. The following evaluation questions are discussed throughout this manual:

Are program implementation objectives being attained? If not, why not? What types of things were barriers to or facilitated attaining program implementation objectives?

Are participant outcome objectives being attained? If not, why not? What types of things were barriers to or facilitated attaining participant outcome objectives?

The following sections discuss procedures for analyzing evaluation information to answer both of these questions.


Analyzing information about program implementation objectives

In this manual, the basic program implementation objectives have been described as follows:

  • What you plan to do
  • Who will do it
  • Whom you plan to reach (your expected participant population) and with what intensity and duration
  • How many you expect to reach

You can analyze information about the attainment of program implementation objectives using a descriptive process. You describe what you did (or are doing), who did it, and the characteristics and number of participants. You then compare this information with your initial objectives and determine whether there is a difference between objectives and actual implementation. This process answers the question: Were program implementation objectives attained?

If there are differences between your objectives and your actual implementation, you can analyze your evaluation information to identify the reasons for the differences. This step answers the question: If not, why not?

You also can use your evaluation information to identify barriers encountered during implementation and factors that facilitated implementation. This information can be used to "tell the story" of your program's implementation. An example of how this information might be organized for a drug abuse prevention program for runaway and homeless youth is provided in a table at the end of this chapter. The table presents an analysis of the program's measurable implementation objectives concerning what the program plans to do.

You may remember that the measurable objectives introduced as examples in this manual for what you plan to do for the drug abuse prevention program were the following:

  • The program will provide eight drug abuse education class sessions per year.
  • Each session will last for 2 weeks.
  • Each 2-week session will involve 2 hours of classes per day.
  • Classes will be held for 5 days of each week of the session.

In the table, these measurable objectives appear in the first column. The actual program implementation information is provided in the second column. For this program, there were differences between objectives and actual implementation for three of the four measurable objectives. Column 3 notes the presence or absence of differences, and column 4 provides the reasons for those changes.

Columns 5 and 6 in the table identify the barriers encountered and the facilitating factors. These are important to identify regardless of whether implementation objectives were attained, because they provide the context for understanding the program and will help you interpret the results of your analyses.
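
If your evaluation team keeps this information electronically, the same six-column structure can be captured in a simple data file. The sketch below is one hypothetical way to do this, assuming Python with the pandas library; the column names and entries are illustrative only and are not part of the manual's actual worksheet.

    import pandas as pd

    # Each row is one measurable objective; the columns mirror the six columns
    # of the implementation-analysis table described above. Entries are
    # illustrative, drawn loosely from the example program.
    implementation_analysis = pd.DataFrame([
        {
            "measurable_objective": "Provide 8 drug abuse education sessions per year",
            "actual_implementation": "6 sessions were provided",
            "difference": True,
            "reasons": "Program startup was delayed",
            "barriers": "Recruiting and hiring qualified staff took longer than expected",
            "facilitating_factors": "Volunteers helped organize events and transport youth",
        },
        {
            "measurable_objective": "Each session will last for 2 weeks",
            "actual_implementation": "Later sessions lasted only 1 week",
            "difference": True,
            "reasons": "Youth interest and attendance dropped in the second week",
            "barriers": "Youth moved to other placements or returned home",
            "facilitating_factors": "Access to youth residing in the shelter",
        },
    ])

    # List the objectives that were not implemented as intended, with reasons,
    # to answer the question "If not, why not?"
    not_attained = implementation_analysis[implementation_analysis["difference"]]
    print(not_attained[["measurable_objective", "reasons"]])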

By reviewing the information in this table, you would be able to say the following things about your program:

The program implemented only six drug abuse prevention sessions instead of the intended eight sessions.

» The smaller-than-expected number of sessions was caused by a delay in program startup.

» The delay was caused by the difficulty of recruiting and hiring qualified staff, which took longer than expected.


» With staff now on board, we expect to be able to implement the full eight sessions in the second year.

» Once staff were hired, the sessions were implemented smoothly because there were a number of volunteers who provided assistance in organizing special events and transporting participants to the events.

Although the first two sessions were conducted for 2 weeks each, as intended, the remaining sessions were conducted for only 1 week.

» The decreased duration of the sessions was caused by the difficulty of maintaining the youth's interest during the 2-week period.

» Attendance dropped considerably during the second week, usually because of lack of interest, but sometimes because youth were moved to other placements or returned home.

» Attendance during the first week was maintained because of the availability of youth residing in the shelter.

For the first two sessions, the class time was 2 hours per day, as originally intended. After the sessions were shortened to 1 week, the class time was increased to 3 hours per day.

» The increase was caused by the need to cover the curriculum material during the session.

» The extensive experience of the staff, and the assistance of volunteers, facilitated covering the material during the 1-week period.

» The youth's interest was high during the 1-week period.

The classes were provided for 5 days during the 1-week period, as intended.

» This schedule was facilitated by staff availability and the access to youth residing in the shelter.

» It was more difficult to get youth from crisis intervention services to attend for all 5 days.

Information on this implementation objective will be expanded as you conduct a similar analysis of information relevant to the other implementation objectives of staffing (who will do it) and the population (number and characteristics of participants).

As you can see, if this information is provided on an ongoing basis, it gives the program opportunities to improve its implementation and better meet the needs of program participants.


Analyzing information about participant outcome objectives

The analysis of participant outcome information must be designed to answer two questions:

Did the expected changes in participants' knowledge, attitudes, behavior, or awareness occur?

If changes occurred, were they the result of your program's interventions?

Another question that can be included in your analysis of participant outcome information is:

Did some participants change more than others and, if so, what explains this difference? (For example, characteristics of the participants, types of interventions, duration of interventions, intensity of interventions, or characteristics of staff.)

Your evaluation plan must include a detailed description of how you will analyze information to answer these questions. It is very important to know exactly what you want to do before you begin collecting data, particularly the types of statistical procedures that you will use to analyze participant outcome information.

Understanding statistical procedures. Statistical procedures are used to understand changes occurring among participants as a group. In many instances, your program participants may vary considerably with respect to change. Some participants may change a great deal, others may change only slightly, and still others may not change or may change in an unexpected direction. Statistical procedures will help you assess the overall effectiveness of your program and its effectiveness with various types of participants.

Statistical procedures also are important tools for an evaluation because they can determine whether the changes demonstrated by your participants are a chance occurrence or are caused by the variables (program or procedure) being assessed. This is called statistical significance. Usually, a change may be considered statistically significant (not just a chance occurrence) if the probability of its happening by chance is less than 5 in 100 cases. However, in some situations, evaluators may set other standards for establishing significance, depending on the nature of the program, what is being measured, and the number of participants.
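
For example, a paired t-test is one common procedure for checking whether a before-and-after change among the same participants is larger than chance alone would explain. The sketch below assumes Python with the scipy library, uses invented scores, and applies the 5-in-100 (p < .05) standard described above.

    from scipy import stats

    # Invented scores for the same participants at intake and at program exit.
    intake_scores = [55, 60, 48, 62, 70, 58, 66, 52]
    exit_scores = [61, 63, 55, 70, 72, 57, 74, 60]

    # Paired t-test: did scores change more than chance alone would explain?
    result = stats.ttest_rel(exit_scores, intake_scores)

    # Apply the conventional 5-in-100 standard for statistical significance.
    if result.pvalue < 0.05:
        print(f"The change is statistically significant (p = {result.pvalue:.3f})")
    else:
        print(f"The change could be a chance occurrence (p = {result.pvalue:.3f})")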

Another use for statistical procedures is determining the similarity between your treatment and nontreatment group members. This is particularly important if you are using a comparison group rather than a control group as your nontreatment group. If a comparison group is to be used to establish that participant changes were the result of your program's interventions and not some other factors, you must demonstrate that the members of the comparison group are similar to your participants in every key way except for program participation.

Statistical procedures can be used to determine the extent of similarity of group members with respect to age, gender, socioeconomic status, marital status, race or ethnicity, or other factors.
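For example, you might compare the average age of the two groups with a t-test and their gender breakdown with a chi-square test. The following sketch assumes Python with the scipy library; all data are invented for illustration.

    from scipy import stats

    # Invented ages for participants (treatment) and comparison group members.
    treatment_ages = [19, 22, 25, 31, 27, 24, 29, 33]
    comparison_ages = [20, 23, 26, 30, 28, 25, 27, 32]

    # Independent-samples t-test: do the two groups differ in average age?
    age_test = stats.ttest_ind(treatment_ages, comparison_ages)
    print(f"Age difference p-value: {age_test.pvalue:.3f}")

    # Chi-square test on a gender-by-group table: counts of [female, male].
    gender_counts = [[5, 3],   # treatment group
                     [4, 4]]   # comparison group
    chi2, p_value, dof, expected = stats.chi2_contingency(gender_counts)
    print(f"Gender difference p-value: {p_value:.3f}")

    # Non-significant differences (p >= .05) support, but do not prove,
    # that the groups are similar on these characteristics.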

Statistical tests are a type of statistical procedure that examines the relationships among variables in an analysis. Some statistical tests include a dependent variable, one or more independent variables, and potential mediating or conditioning variables.

Dependent variables are your measures of the knowledge, attitude, or behavior that you expect will change as a result of your program. For example, if you expect parents to increase their scores on an instrument measuring understanding of child development or effective parenting, the scores on that instrument are the dependent variable for the statistical analyses.

Independent variables refer to your program interventions or elements. For example, the time of data collection (before and after program participation), the level of services or training, or the duration of services may be your independent variables.

Mediating or conditioning variables are those that may affect the relationship between the independent variable and the dependent variable. These are factors such as the participant's gender, socioeconomic status, age, race, or ethnicity.

Most statistical tests assess the relationships among independent variables, dependent variables, and mediating variables. The specific question answered by most statistical tests is: Does the dependent variable vary as a function of levels of the independent variable? For example, do scores on an instrument measuring understanding of child development vary as a function of when the instrument was administered (before and after the program)? In other words, did attendance at your program's child development class increase parents' knowledge?

Most statistical tests can also answer whether any other factors affected the relationship between the independent and dependent variables. For example, was the variation in scores from before to after the program affected by the ages of the persons taking the test, their socioeconomic status, their ethnicity, or other factors? The more independent and mediating variables you include in your statistical analyses, the more you will understand about your program's effectiveness.

As an example, you could assess whether parents' scores on an instrument measuring understanding of child development differed as a result of the time of instrument administration (at intake and at program exit), the age of the parent, and whether or not they completed the full program.
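
One hypothetical way to set up such an analysis is an ordinary least squares model with interaction terms, as sketched below. The sketch assumes Python with the pandas and statsmodels libraries and uses invented data; for simplicity it ignores the paired (before-and-after) structure of the scores, and a real analysis would use many more participants and might use a repeated-measures or mixed model instead.

    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per participant per test administration ("long" format).
    # Scores, age groups, and completion status are invented.
    scores = pd.DataFrame({
        "score": [52, 61, 48, 50, 66, 75, 58, 59, 62, 71, 45, 47],
        "time": ["intake", "exit"] * 6,
        "age_group": ["younger"] * 4 + ["older"] * 4 + ["older"] * 2 + ["younger"] * 2,
        "completed": [1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
    })

    # Does the exit-vs-intake change depend on age group, after accounting
    # for whether the participant completed the full program?
    model = smf.ols("score ~ C(time) * C(age_group) + completed", data=scores).fit()
    print(model.summary())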

Suppose your statistical test indicates that, for your population as a whole, understanding of child development did not change significantly between the two administrations of the instrument. That is, "program exit" scores were not significantly higher than "program intake" scores. Taken alone, this finding would suggest that you were not successful in attaining this expected participant outcome.

However, lack of a significant change among your participants as a group does not necessarily rule out program effectiveness. If you include the potential mediating variable of age in your analysis, you may find that older mothers (ages 25 to 35) did demonstrate significant differences in before-and-after program scores but younger mothers (ages 17 to 24 years) did not. This would indicate that your program's interventions are effective for the older mothers in your target population, but not for the younger ones. You may then want to implement different types of interventions for the younger mothers, or you may want to limit your program recruitment to older mothers, who seem to benefit from what you are doing. And you would not have known this without the evaluation!

If you added the variable of whether or not participants completed the full program, you may find that those who completed the program were more likely to demonstrate increases in scores than mothers who did not complete the program and, further, that older mothers were more likely to complete the program than younger mothers. Based on this finding, you may want to find out why the younger mothers were not completing the program so that you can develop strategies for keeping younger mothers in the program.
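
A simple cross-tabulation can show whether completion is related to age group. The sketch below assumes Python with the pandas and scipy libraries; the data are invented for illustration.

    import pandas as pd
    from scipy import stats

    # Invented completion status (1 = completed the full program) by age group.
    participants = pd.DataFrame({
        "age_group": ["younger"] * 6 + ["older"] * 6,
        "completed": [0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1],
    })

    # Cross-tabulate completion by age group.
    table = pd.crosstab(participants["age_group"], participants["completed"])
    print(table)

    # Chi-square test: is program completion associated with age group?
    chi2, p_value, dof, expected = stats.chi2_contingency(table)
    print(f"Completion-by-age p-value: {p_value:.3f}")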


Using the results of your analyses

The results of your analyses can answer your initial evaluation questions:

  • Are participant outcome objectives being attained?
  • If not, why not?
  • What factors contributed to attainment of objectives?
  • What factors were barriers to attainment of objectives?

These questions can be answered by interpreting the results of the statistical procedures performed on the participant outcome information. However, to fully address these questions, you will also need to look to the results of the analysis of program implementation information. This will provide a context for interpreting statistical results.

For example, if you find that one or more of your participant outcome objectives is not being attained, you may want to explain this finding. Sometimes you can look to your analysis of program implementation information to understand why this may have happened. You may find, for example, that your program was successful in attaining the outcome of an increase in parents' knowledge about child development, but was not successful in attaining the behavioral outcome of improved parenting skills.

In reviewing your program implementation information, you may find that some components of your program were successfully implemented as intended, but that the home-based counseling component was not fully implemented as intended. The problems encountered in implementing this component included difficulty in recruiting qualified staff, extensive staff turnover in the counselor positions, and insufficient supervision for staff. Because the participant outcome most closely associated with this component was improving parenting skills, the absence of change in this behavior may be attributable to those implementation problems.

The results of integrating information from your participant outcome and program implementation analyses are the content for your evaluation report. Ideally, evaluation reports should be prepared on an ongoing basis so that you can receive feedback on the progress of your evaluation and your program. The specified times for each report would depend on your need for evaluation information, the time frame for the evaluation, and the duration of the program. Chapter 9 provides more information on preparing an evaluation report.
