Administration for Children and Families, US Department of Health and Human Services

Chapter 7: How Do You Get the Information You Need for Your Evaluation?

As Chapter 6 noted, a major section of your evaluation plan concerns evaluation information — what kinds of information you need, what the sources for this information will be, and what procedures you use to collect it. Because these issues are so critical to the success of your evaluation effort, they are discussed in more detail in this chapter.

In a program evaluation, the information you collect is similar to the materials you use when you build a house. If you were to build a house, you would be very concerned about the quality of the materials used. High-quality materials ensure a strong and durable house. In an evaluation, the quality of the information you collect likewise affects the evaluation's strength and durability. The higher the quality of the information collected, the better the evaluation.

At the end of the chapter, there are two worksheets to help you plan out the data collection process. One is a sample worksheet completed for a drug abuse prevention program for runaway and homeless youth, and the other is a blank worksheet that you and your evaluation team can complete together. The following sections cover each column of the worksheet.


What specific information do you need to address objectives?

Using the worksheet, fill in your program implementation (or participant outcome) objectives in column 1. Make sure that these objectives are stated in measurable terms. Stating your objectives in measurable terms will determine the kinds of information you need and will avoid the problem of collecting more information than is actually necessary.

Next, complete column 2 by specifying the information that addresses each objective. This information is sometimes referred to as the data elements. For example, if two of your measurable participant outcome objectives are to improve youths' grades and academic test scores and to reduce their incidence of behavioral problems, as reported by teachers and by student self-reports, you will need to collect the following information:

  • Student grades
  • Academic test scores
  • Number of behavior or discipline reports
  • Teacher assessments of classroom behaviors
  • Student self-assessments of classroom behaviors

These items are the data elements.
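If your team keeps the worksheet electronically, the first two columns can be sketched as a simple structure. The layout below is a hypothetical illustration using the example objectives above, not a format prescribed by this manual; columns 3 and 4 are left empty to be filled in as the later sections describe.

```python
# A hypothetical electronic version of the planning worksheet described above.
# The example rows follow the drug abuse prevention example; the structure
# itself is an illustration, not a prescribed format.
worksheet = [
    {
        "objective": "Improve youths' grades and academic test scores",
        "data_elements": ["student grades", "academic test scores"],
        "sources": [],       # column 3, filled in later
        "instruments": [],   # column 4, filled in later
    },
    {
        "objective": "Reduce behavioral problems reported by teachers and youth",
        "data_elements": [
            "number of behavior or discipline reports",
            "teacher assessments of classroom behaviors",
            "student self-assessments of classroom behaviors",
        ],
        "sources": [],
        "instruments": [],
    },
]

# Listing the rows gives the team a quick check that every objective
# has at least one data element attached to it.
for row in worksheet:
    print(row["objective"], "->", len(row["data_elements"]), "data elements")
```

Keeping the worksheet in one place like this makes it easy to confirm, before data collection begins, that no objective is missing its data elements, sources, or instruments.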


What are the best sources?

Column 3 can be used to identify appropriate sources for specific evaluation data. For every data element, there may be a range of potential sources, including:

  • Program records (case records, registration records, academic records, and other information)
  • Program management information systems
  • Program reports and documents
  • Program staff
  • Program participants
  • Family members of participants
  • Members of a control or comparison group
  • Staff of collaborating agencies
  • Records from other agencies (such as health agencies, schools, criminal justice agencies, mental health agencies, child welfare agencies, or direct service agencies)
  • Community leaders
  • Outside experts
  • The general public
  • National databases

In deciding the best sources for information, your evaluation team will need to answer three questions:

  • What source is likely to provide the most accurate information?
  • What source is the least costly or time consuming?
  • Will collecting information from a particular source pose an excessive burden on that person?

The judgment regarding accuracy is the most important decision. For example, it may be less costly or time consuming to obtain information about services from interviews with program staff, but the information staff provide may not be as accurate as what can be obtained from case records or program logs.

When you interview staff, you are relying on their memories of what happened, but when you review case records or logs, you should be able to get information about what actually did happen. If you choose to use case records or program logs to obtain evaluation-relevant data, you will need to make sure that staff are consistent in recording evaluation information in the records. Sometimes case record reviews can be difficult to use for evaluation purposes because they are incomplete or do not report information in a consistent manner.

Another strategy is to identify existing information on your participants. Although your program may not collect certain information, other programs and agencies may. You might want to seek the cooperation of other agencies to obtain their data, or develop a collaboration that supports your evaluation.


What are the most effective data collection instruments?

Column 4 identifies the instruments that you will use to collect the data from specified sources. Some options for information collection instruments include the following:

  • Written surveys or questionnaires
  • Oral interviews (either in person or on the telephone) or focus group interviews (either structured or unstructured)
  • Extraction forms to be used for written records (such as case records or existing databases)
  • Observation forms or checklists to be used to assess participants' or staff members' behaviors

The types of instruments selected should be guided by your data elements. For example, information on barriers or facilitators to program implementation would be best obtained through oral interviews with program administrators and staff. Information on services provided may be more accurate if obtained by using a case record or program log extraction form.

Information on family functioning may be best obtained through observations or questionnaires designed to assess particular aspects of family relationships and behaviors. Focus group interviews are not always useful for collecting information on individual participant outcomes, but may be used effectively to assess participants' perceptions of a program.

Instruments for evaluating program implementation objectives. Your evaluation team will probably need to develop instruments to collect information on program implementation objectives. This is not a complicated process. You must pay attention to your information needs and potential sources and develop instruments designed specifically to obtain that information from that source. For example, if you want to collect information on planned services and activities from program planners, it is possible to construct an interview instrument that includes the following questions:

  • Why was the decision made to develop (the particular service or activity)?
  • Who was involved in making this decision?
  • What plans were made to ensure the cultural relevancy of (the particular service or activity)?

If case records or logs are viewed as appropriate sources for evaluation information, you will need to develop a case record or program log extraction form. For example, if you want to collect information on actual services or activities, you may design a records extraction form that includes the following items:

  • How many times was (the particular activity or service) provided to each participant?
  • Who provided or implemented (the particular activity or service)?
  • What was the intensity of (the particular activity or service)? (How long was it provided for each participant each time?)
  • What was the duration of (the particular activity or service)? (What was the timeframe during which the participant received or participated in the activity or service?)

Instruments for evaluating participant outcome objectives. Participant outcome objectives can be assessed using a variety of instruments, depending on your information needs. If your evaluation team decides to use interview instruments, observations, or existing records to collect participant outcome information, you will probably need to develop these instruments. In these situations, you would follow the same guidelines as you would use to develop instruments to assess program implementation objectives.

If your evaluation team decides to use questionnaires or assessment inventories to collect information on participant outcomes, you have the option of selecting existing instruments or developing your own. Many existing instruments can be used to assess participant outcomes, particularly with respect to child abuse potential, substance use, family cohesion, family stress, behavioral patterns, and so on. It is not possible to identify specific instruments or inventories in this manual as particularly noteworthy or useful, because the usefulness of an instrument depends to a large extent on the nature of your program and your participant outcome objectives. If you do not have someone on your evaluation team who is knowledgeable regarding existing assessment instruments, this would be a critical time to enlist the assistance of an outside consultant to identify appropriate instruments. Some resources for existing instruments are provided in the appendix.

There are advantages and disadvantages to using existing instruments. The primary advantages of using existing instruments or inventories are as follows:

They often, but not always, are standardized. This means that the instrument has been administered to a very large population and the scores have been "normed" for that population. When an instrument has been "normed," it means that a specified range of scores is considered "normal," whereas scores in another range are considered "non-normal." Non-normal scores on instruments assessing child abuse potential, substance use, family cohesion, and the like may be indicators of potential problem behaviors.

They usually, but not always, have been established as valid and reliable. An instrument is valid if it measures what it is supposed to measure. It is reliable if individuals' responses to the instrument are consistent over time or within the instrument.
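For teams with some technical support, one common summary of internal-consistency reliability is Cronbach's alpha, which compares the variance of individual items to the variance of the total score. The sketch below uses invented pilot-test scores purely for illustration; this manual does not prescribe any particular statistic.

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a multi-item scale.

    item_scores: one list per respondent, each holding that respondent's
    score on every item of the scale.
    """
    k = len(item_scores[0])                       # number of items
    item_vars = [pvariance([resp[i] for resp in item_scores]) for i in range(k)]
    total_var = pvariance([sum(resp) for resp in item_scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Invented responses: five pilot-test respondents on a four-item scale
responses = [
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
]
print(round(cronbach_alpha(responses), 2))  # -> 0.94
```

Values near 1.0 suggest the items are measuring the same underlying construct; low values are a signal to revisit the items. Interpreting such statistics is a task for the team member (or consultant) with measurement expertise.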

The primary disadvantages of using existing instruments are as follows:

They are not always appropriate for all cultural or ethnic populations. Scores that are "normed" on one cultural group may not reflect the norm of members of another cultural group. Translating the instrument into another language is not sufficient to make it culturally appropriate. The items and scoring system must reflect the norms, values, and traditions of the given cultural group.

They may not be useful for your program. Your participant outcome objectives and the interventions you developed to attain those objectives may not match what is being assessed by a standardized instrument. For example, if you want to evaluate the effects that a tutoring program has on runaway and homeless youth, an instrument measuring depression may not be useful.

If an outside consultant selects an instrument for your program evaluation, make sure that you and other members of the evaluation team review each item on the instrument to ensure that the information it asks for is consistent with your expectations about how program participants will change.

If your evaluation team is unable to find an appropriate existing instrument to assess participant outcome objectives, they will need to develop one. Again, if there is no one on your team who has expertise in developing assessment instruments, you will need the assistance of an outside consultant for this task.

Whether you decide to use an existing instrument or develop one, the instrument used should meet the following criteria:

  • It should measure a domain addressed by your program. If you are providing parenting training, you would want an instrument to measure changes in parenting knowledge, skills, and behaviors, not an instrument measuring self-esteem, substance use, or personality type.
  • It should be appropriate for your participants in terms of age or developmental level, language, and ease of use. These characteristics can be checked by conducting focus groups of participants or pilot testing the instruments.
  • It should respect and reflect the participants' cultural backgrounds. The definitions, concepts, and items in the instrument should be relevant to the participants' community and experience.
  • The respondent should be able to complete the instrument in a reasonable timeframe. Again, careful pilot testing can uncover any difficulties.


What procedures should you use to collect data?

It is critical that the evaluation team establish a set of procedures to ensure that the information will be collected in a consistent and systematic manner. Information collection procedures should include:

When the information will be collected. This will depend on the schedule the evaluation team has established for the specific time intervals that information must be collected.

Where the information will be collected. This is particularly relevant when information is to be collected from program participants. The evaluation team must decide whether the information will be collected in the program facility, in the participants' homes, or in some other location. It is a good idea to be consistent about where you collect information. For example, participants may provide different responses in their own home environments than they would in an agency office setting.

Who will collect the information. In some situations, you will need to be sure that information collectors meet certain criteria. For example, they may need to be familiar with the culture or the language of the individuals they are interviewing or observing. Administering some instruments also may require that the information collector has experience with the instruments or has clinical experience or training.

How the information will be collected. This refers to procedures for administering the instruments. Will they be administered as a group or individually? If you are collecting information from children, will other family members be present? If you are collecting information from individuals with a low level of literacy, will the data collectors read the items to them? The methods you use will depend in large part on the type of program and the characteristics of the participants. Training and education programs, for example, may have participants complete instruments in a group setting. Service delivery programs may find it more appropriate to individually administer instruments.

Everyone involved in collecting evaluation information must be trained in data collection procedures. Training should include:

  • An item-by-item review of each of the instruments to be used in data collection, including a discussion of the meaning of each item, why it was included in the instrument, and how it is to be completed
  • A review of all instructions on administering or using the instruments, including instructions to the respondents
  • A discussion of potential problems that may arise in administering the instrument, including procedures for resolving the problems
  • A practice session during which data collection staff administer the instrument to one another, use it to extract information from existing case records or program logs, or complete it themselves, if it is a written questionnaire
  • A discussion of respondent confidentiality, including administering an informed consent form, answering respondents' questions about confidentiality, keeping completed instruments in a safe place, and procedures for submitting instruments to the appropriate person
  • A discussion of the need for frequent reviews and checks of the data and for meetings of data collectors to ensure data collection continues to be consistent

It is useful to develop a manual that describes precisely what is expected in the information collection process. This will be a handy reference for data collection staff and will be useful for new staff who were hired after the initial evaluation training occurred.


What can be done to ensure the effectiveness of instruments and procedures?

Even after you have selected or constructed the instruments and trained the data-collection staff, you are not yet ready to begin collecting data. Before you can actually begin collecting evaluation information, you must "pilot test" your instruments and procedures. The pilot test will determine whether the instruments and procedures are effective — that they obtain the information needed for the evaluation, without being excessively burdensome to the respondents, and that they are appropriate for the program participant population.

You may pilot test your instruments on a small sample of program records or individuals who are similar to your program participants. You can use a sample of your own program's participants who will not participate in the actual evaluation or a group of participants in another similar program offered by your agency or by another agency in your community.

The kinds of information that can be obtained from a pilot test include:

  • How long it takes to complete interviews, extract information from records, or fill out questionnaires
  • Whether self-administered questionnaires can be completed by participants without assistance from staff
  • Whether the necessary records are readily available, complete, and consistently maintained
  • Whether the necessary information can be collected in the established time frame
  • Whether instruments and procedures are culturally appropriate
  • Whether the notification procedures (letters, informed consent, and the like) are easily implemented and executed

To the extent possible, pilot testing should be done by data collection staff. Ask them to take notes and make comments on the process of administering or using each instrument. Then review these notes and comments to determine whether changes are needed in the instruments or procedures. As part of pilot testing, instruments should be reviewed to assess the number of incomplete answers, unlikely answers, comments on items that may be included in the margins, or other indicators that revisions are necessary.

In addition, you can ask questions of participants after the pilot test to obtain their comments on the instruments and procedures. Frequently, after pilot testing the evaluation team will need to improve the wording of some questions or instructions to the respondent and delete or add items.
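If pilot responses are entered electronically, part of this review can be automated, for example tallying blank answers per item so the team can spot questions that respondents skip. The item names and responses below are invented for illustration.

```python
# Hypothetical pilot-test review: count missing answers per item.
# Item names (q1, q2, q3) and responses are invented for illustration.
pilot_responses = [
    {"q1": "yes", "q2": "weekly", "q3": None},
    {"q1": "no",  "q2": None,     "q3": None},
    {"q1": "yes", "q2": "daily",  "q3": "staff read items aloud"},
]

# Collect every item name that appears in any response form
items = sorted({key for resp in pilot_responses for key in resp})

for item in items:
    missing = sum(1 for resp in pilot_responses if resp.get(item) is None)
    print(f"{item}: {missing} of {len(pilot_responses)} blank")
```

An item that most respondents leave blank is a strong signal that its wording, placement, or instructions need revision before full data collection begins.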


How can you monitor data collection activities?

Once data collection begins, this task will require careful monitoring to ensure consistency in the process. Nothing is more damaging to an evaluation effort than information collection instruments that have been incorrectly or inconsistently administered, or that are incomplete.

There are various activities that can be undertaken as part of the monitoring process.

Establish a routine and timeframe for submitting completed instruments. This may be included in your data collection manual. It is a good idea to have instruments submitted to the appropriate member of the evaluation team immediately after completion. That person can then review the instruments and make sure that they are being completed correctly. This will allow problems to be identified and resolved immediately. You may need to retrain some members of the staff responsible for data collection or have a group meeting to re-emphasize a particular procedure or activity.

Conduct random observations of the data collection process. A member of the evaluation team may be assigned the responsibility of observing the data collection process at various times during the evaluation. This person, for example, may sit in on an interview session to make sure that all of the procedures are being correctly conducted.

Conduct random checks of respondents. As an additional quality control measure, someone on the evaluation team may be assigned the responsibility of checking with a sample of respondents on a routine basis to determine whether the instruments were administered in the expected manner. This individual may ask respondents if they were given the informed consent form to sign and if it was explained to them, where they were interviewed, whether their questions about the interview were answered, and whether they felt the attitude or demeanor of the interviewer was appropriate.

Keep completed interview forms in a secure place. This will ensure that instruments are not lost and that confidentiality is maintained. Completed data collection instruments should not be left lying around, and access to this information should be limited. You may want to consider number-coding the forms rather than using names, while keeping a secured database that links the names to the numbers.
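A number-coding scheme of the kind just described can be sketched as follows. The code format and the name are invented for illustration; the point is that forms carry only the code, while the table linking codes to names is stored separately under restricted access.

```python
import secrets

# Hypothetical ID-coding scheme. The id_link table must be kept apart from
# the completed forms, in a locked or access-controlled location.
id_link = {}

def assign_code(participant_name):
    """Assign a random four-digit code (e.g. "P4821") to a participant."""
    code = f"P{secrets.randbelow(9000) + 1000}"
    while code in id_link.values():               # avoid duplicate codes
        code = f"P{secrets.randbelow(9000) + 1000}"
    id_link[participant_name] = code
    return code

code = assign_code("Jane Doe")
# Data collection instruments record only `code`; only the evaluation team
# member responsible for the id_link table can connect it back to a name.
```

Using `secrets` rather than sequential numbering means a code on a misplaced form reveals nothing about enrollment order; the design choice matters less than the separation between the coded forms and the linking table.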

Encourage staff to view the evaluation as an important part of the program. If program staff are given the responsibility for data collection, they will need support from you for this activity. Their first priority usually is providing services or training to participants, and collecting evaluation information may not be valued. You will need to emphasize to your staff that the evaluation is part of the program and that evaluation information can help them improve their services or training to participants.

Once evaluation information is collected, you can begin to analyze it. To maximize the benefits of the evaluation to you, program staff, and program participants, this process should take place on an ongoing basis or at specified intervals during the evaluation. Procedures for analyzing and interpreting evaluation information are discussed in the following chapter.
