Hospital CAHPS Stakeholder Committee

Meeting Summary


Under its CAHPS® II initiative, the Agency for Healthcare Research and Quality (AHRQ) began to develop a national standard for assessing hospital patient experiences. The preliminary survey instrument for Hospital CAHPS and its methodology are designed to measure and publicly report patient experiences with hospital care.

On November 7, 2002, AHRQ and the Centers for Medicare & Medicaid Services (CMS) sponsored a stakeholder meeting in Rockville, MD.

Purpose / Instrument Development / Status of Hospital Survey Efforts / Public Reporting Issues / Next Steps / Participants


Purpose

Carolyn Clancy, M.D., Acting Director of AHRQ, welcomed participants and thanked them for their attendance. Since the development of a hospital instrument is of critical interest to so many different audiences, there must be opportunities for participation and feedback in the HCAHPS process.

In addition to events such as this stakeholder meeting, there will be other opportunities in the future for questions or comments about survey development and implementation.

Barbara Paul, M.D., CMS Division of Beneficiary Analysis, also welcomed the participants. She briefly reviewed CMS' strategy for hospital safety and quality and indicated that improvement opportunities depend on selecting the appropriate intervention, with the goal of reducing performance variance. Dr. Paul reviewed past efforts (Managed Care Plan Compare, 1999; Dialysis Facilities Comparison, 2001; Nursing Home Quality Initiative, 2002) and current initiatives (Home Health Quality Initiative, 2003; Hospital Quality Initiative, 2003) that could reduce variance in service delivery.

Beth Kosiak, AHRQ, began the discussion by stating that the primary purpose of the meeting is to hear stakeholder concerns about patient assessment.

HCAHPS development is an open process, and both AHRQ and CMS are requesting input from stakeholders throughout its development and implementation.

After introductions, Beth Kosiak facilitated the discussion. Several questions were generated before the meeting to guide the discussion. However, these were only suggested topics, and there was flexibility to explore other avenues if desired.


Instrument Development

Overview

Chuck Darby, AHRQ, gave an overview of the CAHPS® team's instrument development processes. Undertaken by AHRQ and funded by CMS, this initiative calls for the development and testing of a survey instrument that can be used to collect patient reports and ratings of their hospital experiences. The process was initiated with a call for measures in the Federal Register. The seven resulting submissions were reviewed by the CAHPS® team to determine which items might be useful for the hospital instrument. A literature review was also completed to identify other instruments that may be valuable.

Chuck Darby indicated that a single instrument probably would not be selected as "the instrument," but that items from several instruments were more likely to find their way into the HCAHPS survey. In addition, other items might be developed specifically for this effort. It was envisioned that the content of the instrument would consist of existing questions from other instruments, possibly modified (e.g., in the response scale), along with newly developed items. After an initial survey is developed, it will undergo cognitive and field testing, which will result in a revision of the initial instrument. The preliminary instrument was to be delivered to CMS by December 31, 2002.

Then CMS would begin its pilot testing of the instrument. Liz Goldstein indicated that the primary purpose of the pilot is to refine the instrument and to test strategies for public reporting. Chuck Darby indicated that AHRQ was interested in other opportunities to field test the instrument.

The American Hospital Association (AHA) voiced concern about publicly reporting data from an instrument being field tested. CMS indicated that it will not report data from questions found to be problematic, but that it will rely on the results of field testing to determine which questions are problematic and which will be reported.

Development Issues

Discussion questions were posed to guide this part of the meeting.

Topics To Be Covered

Several participants said that there are different levels of intensity of care in departments as well as different factors (such as gender, age, race, education, self-reported health, etc.) that may affect the results of the instrument. AHRQ staff stated that this is still under research and that it was anticipated that casemix adjustors would be needed. For different departments, the question might have to be asked differently and then combined for public reporting. AHRQ staff indicated that potentially there would be a core set of items and then supplemental sets for various departments.

Another comment was that in the hospital environment, unless there is a way to link with other data sets, the impact of the effort would be undercut. There need to be linkages back to patient-specific information to determine what type of intervention is warranted, i.e., either institution-wide or more targeted interventions. AHRQ staff indicated that throughout the history of CAHPS®, the CAHPS® team has looked for ways to ease the burden of response on consumers as well as the costs of administration. Linkages with other data sets may raise privacy issues, which could be expensive to address.

The first question put to the group was whether there were any reactions or comments about the domains selected for the instrument. There was some concern about the domain of pain management, which needs to be clarified in relation to physical comfort. Also, it was stated that amenities or creature comforts may or may not have a place in a survey of patient experiences, but if they do, they should not be entwined with categories like pain management. Consideration will be given to separating pain management from the other areas of physical comfort.

Patient Safety as a Topic

In comments about patient safety as a domain, it was pointed out that a question could be relevant to multiple domains and that patient safety could be added as a domain even though it might be applicable across several. Other questions were raised as well.

According to some participants, patients may not grasp the intent of the question, so there will be responses like "add more security guards" or "check on me more often." However, it was indicated that consumers are much more aware of the issues and that the patient should not be taken out of consideration with regard to questions about patient safety. There was general agreement that patient safety should be added as a domain.

The transition of care and continuity domain is an important area for Medicare, which involves transitions between facilities. However, it was pointed out that assessing transitions is difficult and cannot be done without going outside the acute care experience to get information. Patients may not have strong expectations about the experience, and results may not be readily actionable because it may be difficult to determine who is responsible for the transition or continuity. There was concern about not holding the hospital accountable for what the receiving institution should be doing. It was noted that transition could also apply to family and friends, i.e., discharge to home, and participants asked how that could be measured.

Standard Items and Custom Questions

The next question had to do with a standardized set of questions. Participants agreed that the survey must be standardized in order to be useful. There may be custom questions that can be developed, but they should be either developed or approved by AHRQ before being used by hospitals.

AHRQ staff asked whether trending data might be lost if the instrument is changed. One participant said that it is not realistic for a core instrument to answer all the policy questions that need to be answered; other modules can be developed around the core. It was suggested that hospitals have the flexibility to include some of their own questions alongside the standard core. It might be useful to test both surveys to see how certain questions correlate with current trending data. The National Committee for Quality Assurance (NCQA) allows up to 13 additional items for customization, which are submitted for approval and signoff before use with the standard questionnaire.

A participant suggested that adding other items might change responses to the standardized items in the questionnaire. Vendors have completed some earlier testing and found that adding customized questions at the end of the questionnaire works better; questions inserted elsewhere may affect the flow and could compromise the validity of the instrument.

Administration

Both mail and telephone versions are to be used in the pilot because a different population is reached by phone than by mail. Participants discussed the length of the instrument and whether a short version and the longer one could be used interchangeably. It was determined that they could not, because the instrument has to be standardized for comparison.

Collection of Data

How data are collected may affect responses. It was suggested that patients, not their proxies, should complete surveys. Collection depends on whether the sample will be continuous or noncontinuous, i.e., a snapshot in time. If noncontinuous, then a 3-4 month sampling period may be appropriate. One participant stated that seasonality is not a real issue, but others felt that summer months should be avoided because of turbulence among hospital personnel. A continuous sample may become too complicated, making it difficult to obtain clean sample frames. Although it was suggested that the instrument be sent out at least 2 weeks after the hospital experience, testing would be needed to confirm this timing.


Status of Hospital Survey Efforts

Implementation Issues

Discussion questions were posed to guide this part of the meeting.

Participants discussed the potential uses of the data from HCAHPS. Audiences include consumers, hospitals, hospital boards, health plans, consumer advocacy groups, employers and many others. It was pointed out that the data have not cascaded down to individual providers and that some sort of promotional effort should be undertaken to inform and educate providers.

Data should provide information on how to improve care, and that information should be actionable. It was suggested that the system adopted be paperless. The Hospital Corporation of America (HCA) has this type of system, with password protection and limited viewing access. Data collection in this system is continuous and updated weekly, with HCA issuing quarterly reports. Public reporting should consider continuous collection, since the industry is already collecting data this way. It was also noted that online customized data reporting capabilities should be built into this project; without them, the perceived value of the effort may be diminished. Customized reporting will also help hospitals improve their quality, so information should be current.

It was suggested that vendors need to align themselves with what is being valued as important. They have the ability to provide additional help around the quality improvement aspect and potentially data collection. Vendors may be able to provide the linkage between what's there now and what is proposed. It would be ideal if all the data were housed at AHRQ for benchmarking purposes.

There were discussions about whether the instrument should be voluntary or mandatory, and participants were divided. Noting that many voluntary efforts are for all intents and purposes mandatory, some participants suggested that HCAHPS should start out voluntary, structured in such a way that no one could afford not to take part. Several participants favored a voluntary effort because of the newness of both the tool and the process. Others indicated that the instrument should be required, calling a mandate "clean and explicit" and a necessary ingredient for public reporting efforts. It was pointed out that CMS could only mandate participation for Medicare and Medicaid.


Public Reporting Issues

Liz Goldstein of CMS provided an overview of public reporting issues to the group, and discussion questions were posed to guide the conversation.

The pilot is an opportunity to test different public reporting methods, and a couple of States with Quality Improvement Organizations (QIOs) will be testing these methods. The field test was to begin in February 2003, with some data reported by December 2003. (Since this meeting, CMS has decided not to report the HCAHPS data as part of the pilot test of the instrument.) CMS already has two Web sites that report clinical information, one for providers and one for consumers. It was suggested that there be a drill-down capability for consumers. Reporting will have to be compatible with the Web site's key messages, formats, and navigation structure. CMS staff indicated that on November 15, 2002, there was to be a Hospital Public Reporting Roundtable in Crystal City, VA.

Participants discussed how the public uses reports as well as the key messages of this effort. It was suggested that the report could be used as the basis of discussion between consumer and physician; however, physicians must be on board with this effort. One person reported that doctors seemed to be interested in talking about the report with tumor boards, etc. It was suggested that an outreach effort be directed at educating both consumers and physicians. It was also suggested that data be rolled up to the aggregate level, because if there is too much information, consumers will not use it. In the California (Patients' Evaluation of Performance, or PEP-C) project, the California HealthCare Foundation reports 11 measures, including an overall score, overall by service line, and overall by dimension.

With regard to benchmarking, participants indicated that it would be most useful to compare by hospital type, geographic region, and county level. Hospitals might be interested to know how they are doing compared with competitors. Hospitals might also be interested in how they compare with the "best hospital." It was pointed out that HCAHPS should take advantage of the Medicare fee-for-service (FFS) levels, which may be applicable to this project.

This effort will potentially lead to more competition. In addition, by making information public, CAHPS® has driven improvement in performance. The problems as seen by hospitals include cost, efficiency, and duplication of effort. One person said that data collection may present challenges to smaller hospitals. It was also noted that casemix adjustment of the data would be considered in this effort.

Participants discussed measuring cultural competency, which may be difficult. Doing so may be a research question for the future, and it might be easier to address communication issues first. The potential language options for the survey may include Spanish and Chinese, but it was pointed out that hospitals sometimes did not use the alternative languages. Several domains raise cultural competency issues, e.g., communication. Some items in the instrument may "turn them [people] off," and this should be considered.

The issue of frequency of administration was raised again. No decisions have been made in this regard. On a practical note, it was pointed out that trying to get hospitals to submit data in the proper format and on time can be challenging. CMS staff are hoping that they can learn about implementation issues from the pilot test.


Next Steps

A summary of the meeting will be sent to the attendees and then posted on the AHRQ Web site. AHRQ and CMS are interested in continuing a dialog, possibly by E-mail or conference call with the members of this group. The stakeholder group may potentially meet after the pilot and after the cognitive testing of the instrument so that some data are available for review. It was noted that there is some urgency related to the effort, and if other stakeholders should be included in this meeting, then AHRQ or CMS should be notified.

A vendor meeting was to be held at CMS on November 18, 2002. A Web chat on HCAHPS was held October 24, 2002.


Participants

J. Brett Bennett, Premier, Inc.
James Bost, National Committee for Quality Assurance (NCQA).
Robert Dickler, Association of American Medical Colleges.
Joyce Dubow, AARP.
Nancy Foster, American Hospital Association.
Marsha Nelson, California Institute for Health System Performance.
Ellen Kurtzman, National Quality Forum.
Gerald Roche, Voluntary Hospitals of America (VHA).
Elroy Schuler, Hospital Corporation of America (HCA).
Paul Schyve, Joint Commission on Accreditation of Healthcare Organizations (JCAHO).
Bettinna Signori, Ford Motor Company.
Susan Van Gelder, Federation of American Hospitals.
Lt. Col. Thomas Williams, TRICARE.

AHRQ Staff

Christine Crofton
Charles Darby
Marybeth Farquhar
Beth Kosiak

CMS Staff

Barbara Crawley
Elizabeth Goldstein
Mark Koepke


Current as of March 2003


Internet Citation:

Hospital CAHPS Stakeholder Committee: Meeting Summary. March 2003. Agency for Healthcare Research and Quality, Rockville, MD. http://www.ahrq.gov/qual/cahps/hcahpsstake.htm

