Wired for Health and Well-Being: The Emergence of Interactive Health Communication

Editors: Thomas R. Eng, David H. Gustafson

Suggested Citation: Science Panel on Interactive Communication and Health. Wired for Health and Well-Being: The Emergence of Interactive Health Communication. Washington, DC: US Department of Health and Human Services, US Government Printing Office, April 1999.

Appendix A: Evaluation Reporting Template for IHC Applications

The template is divided into four sections. Section I focuses on identification of the developer(s), the source(s) of funding for the application, the purpose of the application and its intended audience(s), technical requirements, and issues of confidentiality. Assurance of confidentiality will become increasingly important as applications that collect and utilize personal health information, such as those that assess individual risk for sensitive health conditions, proliferate.

Section II focuses on the results of formative and process evaluations, which contribute to application design and development. These items elicit information to help potential users and purchasers judge the validity of the content, the appropriateness of the application to their specific needs, and whether sufficient testing was done to ensure that the application functions as intended. This section attempts to go beyond simple disclosure of the descriptive elements (e.g., identity of the developers, sponsorship, and purpose of the application) to encourage disclosure of whether and how potential users and other "experts" were involved in application development and how extensively the application was tested prior to release.

Section III focuses on the results of any outcome evaluations performed. The list of outcomes is not exhaustive but includes those most commonly encountered, ranging from user satisfaction to changes in morbidity or mortality, reduced costs, or organizational change. Potential outcomes are broadly defined because individual developers, users, and purchasers may have very different needs and expectations. For example, while one developer or potential purchaser may be interested in an application that improves management of specific chronic disease symptoms, another may be solely interested in improving general patient satisfaction. Classifications of evaluation designs from the US Preventive Services Task Force are included to provide information relevant to the internal validity of the results (i.e., the strength of evidence that the observed results are due to the intervention). Descriptions of samples also are included to provide information relevant to the "generalizability" of results.

Section IV focuses on information about evaluators and funding to provide potential users and purchasers with information about possible biases or conflicts of interest relevant to the evaluation. The template also attempts to increase accountability for IHC applications by encouraging the disclosure of the person(s) responsible for design and content (Section I) and evaluation (Section IV).

Evaluation Reporting Template for IHC Applications, Version 1.0, Science Panel on Interactive Communication and Health

This is an evaluation reporting template for developers and evaluators of interactive health communication (IHC) applications to help them report evaluation results to those who are considering purchasing or using their applications. Because the template is designed to apply to all types of applications and evaluations, some items may not apply to a particular application or evaluation. Complete only those items that apply. This and subsequent versions of the template and other resources on evaluation of IHC are available at: http://www.scipich.org
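
Developers who wish to share a completed template electronically could also capture it as structured data. The sketch below is a hypothetical illustration, not part of the template itself: it maps the four sections onto Python dataclasses, with optional fields mirroring the instruction to complete only applicable items. All class and field names are illustrative assumptions.

    # Hypothetical sketch (not part of the template): capturing a completed
    # report as structured data. Class and field names are illustrative
    # assumptions; optional fields mirror "complete only those items that apply."
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class ApplicationDescription:      # Section I
        title: str
        application_type: str          # e.g., "Web site", "CD-ROM/DVD"
        developers: list[str]
        funding_sources: list[str]
        target_audiences: list[str]
        other_languages: Optional[list[str]] = None     # item 10, if applicable
        includes_paid_content: bool = False             # item 11
        confidentiality_measures: Optional[str] = None  # item 13

    @dataclass
    class OutcomeResult:               # one Section III question
        question: str                  # e.g., "Do users increase their knowledge?"
        design_type: str               # "I", "II-1", "II-2", "II-3", or "III"
        sample_description: str
        assessment_methods: str
        results: str

    @dataclass
    class EvaluationReport:
        application: ApplicationDescription                              # Section I
        formative_process: dict[str, str] = field(default_factory=dict)  # Section II
        outcomes: list[OutcomeResult] = field(default_factory=list)      # Section III
        evaluators: list[str] = field(default_factory=list)              # Section IV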

Comments and suggestions regarding the content, scope, utility, and practicality of this template should be directed to: SciPICH, Office of Disease Prevention and Health Promotion, US Department of Health and Human Services, 200 Independence Ave., SW, Room 738G, Washington, DC 20201 or e-mail comments to: scipich@health.org

I. Description of Application

  1. Title of product/application:
  2. Type of application (e.g., Web site, CD-ROM/DVD):
  3. Name(s) of developer(s):
  4. Relevant qualifications of developer(s):
  5. Contact(s) for additional information:
  6. Funding sources for development of the application (e.g., commercial company, government, foundation/nonprofit organization, individual):
  7. Category of application (e.g., clinical decision support, individual behavior change, peer support, risk assessment):
  8. Specific goal(s)/objective(s) of the application (What is the application intended to do? List multiple if applicable):
  9. Intended target audience(s) for the application (e.g., age group, gender, educational level, types of organizations and settings, disease groups, cultural/ethnic/population groups):
  10. Available in languages other than English? No / Yes (specify):
  11. Does the application include paid advertisements, content, or links? No / Yes
  12. Technological/resource requirements of the application (e.g., hardware, Internet, on-site support available):
  13. Describe how confidentiality or anonymity of users is protected (one possible safeguard is sketched after this list):
  14. Indicate who will potentially be able to get information about users:
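
Item 13 asks how confidentiality or anonymity is protected. One common safeguard, sketched below as a hypothetical illustration, is to replace user identifiers with keyed pseudonyms before usage data are stored; the key handling and record layout here are assumptions, not requirements of the template.

    # Hypothetical sketch of one confidentiality safeguard (item 13): replace
    # user identifiers with keyed pseudonyms before storing usage data, so
    # analysts never handle raw identities. Key handling and the record layout
    # are illustrative assumptions, not requirements of the template.
    import hashlib
    import hmac

    SECRET_KEY = b"example-key-kept-separate-from-the-data"  # assumption

    def pseudonymize(user_id: str) -> str:
        """Return a stable, non-reversible pseudonym for a user identifier."""
        return hmac.new(SECRET_KEY, user_id.encode("utf-8"), hashlib.sha256).hexdigest()

    # The stored record carries only the pseudonym, never the identifier itself.
    record = {"user": pseudonymize("jane.doe@example.org"), "module": "risk assessment"}
    print(record["user"][:12])  # the same input always yields the same pseudonym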

II. Formative and Process Evaluation*

  1. Indicate the processes and information source(s) used to ensure the validity of the content (e.g., peer-reviewed scientific literature, in-house "experts," recognized outside "experts," consensus panel of independent "experts," updating and review processes and timing):
  2. Are the specific original sources of information cited within the application? No / Yes
  3. Describe the methods of instruction and/or communication used (e.g., drill and practice, modeling, simulations, reading generic online documents, interactive presentations of tailored information, specify methods used):
  4. Describe the media formats used (e.g., text, voice/sound, still graphics, animation/video, color):
  5. For each applicable evaluation question below, indicate (i) the characteristics of the sample(s) used and how they were selected, (ii) the method(s) of assessment (e.g., specific measures used), and (iii) the evaluation results:
  6. If text or voice is used, how was the reading level or understandability tested? (A readability-scoring sketch follows this list.)
  7. What is the extent of expected use of the application (e.g., average length and range of time, number of repeat uses)?
  8. How long will it take to train a beginning user to use the application proficiently?
  9. Describe how the application was beta tested and debugged (e.g., by what users, in what settings):
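
Item 6 asks how reading level was tested. One widely used automated check is the Flesch-Kincaid grade level; the minimal sketch below computes it with a rough vowel-group syllable counter. The counting heuristic is an assumption made here for brevity; formal evaluations would use a validated readability instrument.

    # Minimal sketch of an automated reading-level check (item 6) using the
    # standard Flesch-Kincaid grade-level formula. The vowel-group syllable
    # counter is a rough heuristic assumed here, not a validated instrument.
    import re

    def count_syllables(word: str) -> int:
        # Count runs of consecutive vowels as syllables; at least one per word.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_kincaid_grade(text: str) -> float:
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        if not sentences or not words:
            return 0.0
        syllables = sum(count_syllables(w) for w in words)
        # FK grade = 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59
        return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

    sample = "Take one tablet twice a day. Call your doctor if symptoms persist."
    print(round(flesch_kincaid_grade(sample), 1))  # lower grades indicate easier text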

III. Outcome Evaluation**

  1. For each applicable evaluation question below, indicate (i) the type of evaluation design (I-III),*** (ii) the characteristics of the sample(s) used and how they were selected, (iii) the method(s) of assessment (e.g., specific measures used), and (iv) the evaluation results:
  2. How much do users like the application?
  3. How helpful/useful do users find the application?
  4. Do users increase their knowledge?
  5. Do users change their beliefs or attitudes (e.g., self-efficacy, perceived importance, intentions to change behavior, satisfaction)?
  6. Do users change their behaviors (e.g., risk factor behaviors, interpersonal interactions, compliance, utilization of resources)?
  7. Are there changes in morbidity or mortality (e.g., symptoms, missed days of school/work, physiologic indicators)?
  8. Are there effects on costs/resource utilization (e.g., cost-effectiveness analysis)? (A worked cost-effectiveness sketch follows this list.)
  9. Do organizations or systems change (e.g., resource utilization, effects on "culture")?
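
For item 8, a basic cost/resource comparison often reduces to the incremental cost-effectiveness ratio (ICER): the extra cost of the application divided by the extra effect it produces relative to usual care. The sketch below illustrates the arithmetic with entirely hypothetical numbers.

    # Sketch of the incremental cost-effectiveness ratio (ICER) behind item 8,
    # comparing an IHC application with usual care. All figures are hypothetical.

    def icer(cost_app: float, cost_usual: float,
             effect_app: float, effect_usual: float) -> float:
        """Extra cost per extra unit of effect (e.g., per symptom-free day)."""
        return (cost_app - cost_usual) / (effect_app - effect_usual)

    # Hypothetical example: the application costs $120 more per patient and
    # yields 6 more symptom-free days, i.e., $20 per additional symptom-free day.
    print(icer(cost_app=420.0, cost_usual=300.0, effect_app=24.0, effect_usual=18.0))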

IV. Background of Evaluators

  1. Names and contact information for evaluator(s):
  2. Do any of the evaluators have a financial interest in the sale/dissemination of the application? No / Yes (specify):
  3. Funding sources for the evaluation(s) of the application (e.g., developer’s funds, other commercial company, government, foundation/nonprofit organization):
  4. Has the evaluation been published in a peer-reviewed scientific journal? No / Yes
  5. Is a copy of the evaluation report(s) available for review on request? No / Yes (how to obtain):

 


* Formative evaluation is used to assess the nature of the problem and the needs of the target audience, with a focus on informing and improving program design before implementation. It is conducted prior to or during early application development and commonly consists of literature reviews, reviews of existing applications, and interviews or focus groups with "experts" or members of the target audience. Process evaluation is used to monitor the administrative, organizational, or other operational characteristics of an intervention. It helps developers successfully translate the design into a functional application and is performed during application development. It commonly includes testing the application for functionality and may also be known as alpha and beta testing.

** Outcome evaluation is used to examine an intervention’s ability to achieve its intended results under ideal conditions (i.e., efficacy) or under real-world circumstances (i.e., effectiveness), and also its ability to produce benefits in relation to its costs (i.e., efficiency or cost-effectiveness). This helps developers learn whether the application is successful at achieving its goals and objectives, and is performed after the implementation of the application.

*** Evaluation design types are grouped according to level of quality of evidence as classified by the US Preventive Services Task Force and the Canadian Task Force on the Periodic Health Examination. (US Preventive Services Task Force. Guide to Clinical Preventive Services. 2nd Ed. Washington, DC: US Department of Health and Human Services; 1996.)

I. Randomized controlled trials. Experiments in which potential users are randomly assigned to use the application or to a control group. Randomization promotes comparability between groups. These designs can be (a) double-blinded: neither the participants nor the evaluators know which participants are in the intervention group or the control group, (b) single-blinded: the participants are not aware which experimental group they are in, or (c) non-blinded: both the participants and the evaluators are aware of who is in the intervention group and who is in the control group. Greater blinding lessens the chance of bias.
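
As a hypothetical illustration of the assignment step only, the sketch below performs simple two-arm randomization; real trials would add concealed allocation and often stratification, which are not shown, and the participant IDs are placeholders.

    # Hypothetical sketch of simple random assignment for a two-arm trial of an
    # IHC application. Participant IDs are placeholders; real trials would add
    # concealed allocation (and often stratification), which is not shown.
    import random

    def randomize(participants: list[str], seed: int = 42) -> dict[str, list[str]]:
        rng = random.Random(seed)  # fixed seed makes the assignment reproducible
        shuffled = list(participants)
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        return {"intervention": shuffled[:half], "control": shuffled[half:]}

    groups = randomize(["p01", "p02", "p03", "p04", "p05", "p06"])
    print(groups["intervention"], groups["control"])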

II-1. Nonrandomized controlled trials. Experiments comparing users and nonusers (or "controls") in which participants are not randomly assigned to the groups. For this type of design, specify how the participants were recruited, selected, and assigned to the groups and how the groups compare (similarities and differences between users and nonusers) prior to the evaluation.

II-2. Cohort study/observational study. An evaluation of users with no comparison or control group.

II-3. Multiple time series. Observations of participants as they go through periods of use and nonuse of the application.

III. Descriptive studies, case reports, testimonials, "expert" committee opinions.

Original version was published in: Robinson TN, Patrick K, Eng TR, Gustafson D, for the Science Panel on Interactive Communication and Health. An evidence-based approach to interactive health communication: a challenge to medicine in the Information Age. JAMA. 1998;280:1264-1269.

 
