[This Transcript is Unedited]

NATIONAL COMMITTEE ON VITAL AND HEALTH STATISTICS

DEPARTMENT OF HEALTH AND HUMAN SERVICES

WORK GROUP ON QUALITY

OF THE SUBCOMMITTEE ON POPULATIONS

December 12, 2001

Hubert H. Humphrey Building
200 Independence Avenue
Washington, D.C.

Reported By:
CASET Associates
10201 Lee Highway, Suite 160
Fairfax, Virginia 22030
(703) 352-0091

TABLE OF CONTENTS

Call to Order, Introductions - Kathryn Coltin

Public/Private Sector Patient Safety Initiatives - Jim Battles

Public/Private Sector Patient Safety Initiatives - Noel Eldridge

Public/Private Sector Patient Safety Initiatives - Janet Corrigan

Public Sector Patient Safety Initiatives - Dr. Allen Vaida

Private Sector Patient Safety Initiatives - S. Delbanco


WORK GROUP MEMBERS:

LIAISON/STAFF REPRESENTATIVES


P R O C E E D I N G S (1:30 p.m.)

Agenda Item: Call to Order, Introductions.

MS. COLTIN: This is the meeting of the work group on quality of the National Committee on Vital and Health Statistics.

My name is Kathryn Coltin. I chair the work group. I am a member of the committee, and I am with Harvard Pilgrim Health Care in Boston.

We are going to go around the room and introduce all the participants: presenters, observers, and committee members.

MS. JACKSON: Debbie Jackson with the National Center for Health Statistics, committee staff.

DR. STARFIELD: Barbara Starfield from the Johns Hopkins University, committee member.

DR. COHN: Simon Cohn from Kaiser Permanente, and a committee member.

MR. BATTLES: Jim Battles from the Agency for Health Care Research and Quality.

DR. MAYS: Vickie Mays, University of California, Los Angeles, committee member.

MS. LABULETTE: Cecelia Lobulette(?), American Association of Health Plans.

MS. PETERSON: Joanne Peterson, Institute for Safe Medication Practices. I am the safe medication management fellow.

MS. BECKER: I am Sean Becker. I am with the United States Pharmacopoeia. I am the director for patient safety initiatives.

MR. VAIDA: Allen Vaida, executive director of the Institute for Safe Medication Practices.

MR. ELDRIDGE: I am Noel Eldridge from the Department of Veterans Affairs, national center for patient safety.

MS. CORRIGAN: Janet Corrigan with the Institute of Medicine.

MS. WHITE: Gracie White, NCHS.

MS. COLTIN: Okay, we are going to hear first from Jim Battles. Jim has informed me that he is going to have to leave shortly after he concludes his presentation. So, we will take some questions after Jim's presentation.

Agenda Item: Public/Private Sector Patient Safety Initiative.

MR. BATTLES: Thank you very much. My presentation today will cover the patient safety initiatives of the Agency for Health Care Research and Quality. I think many of you know that the IOM report, To Err is Human, caused sort of an avalanche of activity on patient safety.

AHRQ was responsible, as one of the lead agencies in the federal government, for establishing a patient safety initiative and research program, and I am going to talk a little bit about what we have done with that. I think it is important for us to look at the issue of patient safety in terms of what our goals really are, and I hope that is reflected in how we are moving our research agenda. In a sense, the goal of patient safety is to reduce the risk of iatrogenic injury to patients. It is the injury to patients that we are really interested in focusing on; hence, patient safety.

In order to do that, we have to identify, remove and/or minimize the hazards which increase the risk of injury to patients. That is the central focus of where we have gone. I think the big news for the agency, last year about this time, was the appropriation from Congress of $50 million to do a patient safety initiative program. Congress was fairly specific in what they thought they would like us to do. It is like anything else; there is usually no free lunch. They did attach a few strings -- some would say ropes -- in letting us know what their intent was. They wanted us to develop guidelines for the collection of uniform data, establish competitive demonstration programs, and determine ways to improve provider training.

They wanted us to have competitive demonstration programs for health care facilities and organizations, to look at the causes of medical errors and at models and mechanisms to encourage reporting. They wanted to engage the health systems and providers participating in the demonstration programs to use all available technologies to reduce error. The research agenda was also framed by our reauthorization language: the agency was specifically charged to take on the mission of patient safety, and our name was changed from AHCPR to AHRQ.

The appropriations language let it be known what Congress expected us to do. We had input from our own national advisory committee as to what they felt was appropriate in the direction we should proceed. Of course, we had the QuIC report, Doing What Counts for Patient Safety. Then, we held a research summit, and we continue to have interactions with other national organizations and partners.

Let me tell you a little bit about the national summit. We felt it was essential to begin to get bottom-up input: what did the users and funders of patient safety need, in terms of the kinds of products that were needed out there to address the issue of patient safety? So, we got input from a number of agencies that were funding patient safety and from the private sector.

We actually held two summits, one, rather interestingly, on September 11 of 2000, and another in the fall, at which we began to formulate an agenda from this user-driven input. The important items they wanted us to look at were the epidemiology of error, the infrastructure needed to improve safety within organizations, and the kinds of information systems one needs to have.

Then, we need to have some idea of which innovations should be adopted. You know, whenever there is a new agenda item, everybody's agency and program, from George's Garage Shop to a computer order entry system, has solutions. What really works? Do we know? What facilitates implementation? We may have many good ideas, but what gets people to adopt, to take on, an innovation in patient safety? Then, of course, there is disseminating the information.

So, this was sort of a framework for a research agenda. We took that, plus what Congress had told us, and created a series of requests for applications, to begin to allocate the funds. The first, and probably the largest, RFA was devoted to the area of reporting of errors: the health systems error reporting, analysis and improvement demonstration projects. We also recognized that there were a number of organizations throughout the country that had quite the capacity for patient safety research.

One of the things they were lacking was a stable source of funding to carry on that research. So, we have had an RFA for centers of excellence in patient safety, to concentrate on the infrastructure and structure to produce research. We also recognized that there were a significant number of institutions that had the capacity to carry out research, but needed some help building the infrastructure within their organizations.

They needed some funding to begin to put in the glue, to get the resources, to move ahead to be viable centers for research. So, we have the developing centers program. Clearly, in the field of informatics, there are a number of potential innovations that can promote patient safety. So, we had the clinical informatics program, CLIPS.

Another major activity area, and an area of interest to Congress and, therefore, of interest to us, was working conditions: what are the conditions surrounding quality of care and patient safety? This is an area that has had a lot of attention and some study, but not nearly enough. Then, clearly, there was a need to disseminate patient safety research and education programs, to take what we know and what works and disseminate it to the community. That was sort of our pantheon of RFAs.

Where we are now: we have had our grant reviews and we have made funding decisions. Looking at the supported demonstration projects, we have funded 24 demonstration projects totaling $24.7 million; these are the reporting demos. They include issues of public disclosure for error reporting systems, confidential versus mandatory reporting, and analysis of administrative data and malpractice claims, as well as projects that look at changes in information systems and improved data.

We hope that, with this level of funding and number of projects, we can begin to take a multi-dimensional approach to the issue of collecting and analyzing information about reported events. Under the CLIPS program and the use of technology, we funded 22 projects totaling $5.3 million, to develop state-of-the-art applications, including hand-held computers with decision support and simulation tools; to examine how to improve specific devices, like infusion pumps; and to assess other technologies for improvement.

In working conditions, we funded a total of eight projects specifically looking at patient safety, for about $3 million. We have also had a number of other projects related to working conditions that are focused more on quality. So, there is a total of 20 working condition projects, but eight of them specifically address issues of working conditions, such as staffing, fatigue, stress and sleep deprivation, and their relationship to quality of care. We have a nice mix of different types of provider settings in which to look at this.

The last area is how we can disseminate the information, and the centers of research. This includes both the research dissemination and the centers of excellence. We funded a total of 23 projects: three centers of excellence and more developing centers. So, we have a nice balance, I think, of a variety of programs, activities and types of institutions among our developing centers, and then three fairly well established centers in Boston, Philadelphia and Houston, Texas.

We funded seven projects looking at different types of dissemination, to demonstrate and evaluate new approaches to provider education, new ways of applying applications including simulation, and work with national organizations, such as the American College of Surgeons, on disseminating what we know about medical errors and patient safety. So, we have a nice balance of projects in that aspect of the portfolio.

Some additional activities included in the $50 million were not part of the six RFAs. We funded a number of meetings with state and local officials to advance the patient safety initiative through our user liaison program, the ULP. So, we did a number of traveling road shows to address the issue of patient safety. We know that if we are in patient safety for the long haul, we are going to have to address the training of a core of individuals who will be the professionals carrying this forward.

As you may know, the Joint Commission has mandated that every institution must have a patient safety officer. Well, where are these patient safety officers going to come from? What background knowledge do they need, what experience level, and where are they going to get the training? We are doing a feasibility study on that. Then, we have contracts to look at uniform vocabulary and standards.

This gives you an idea of the spread of where the projects are. We wanted to be sure that we had a nice geographic blend, as well as a blend of provider groups, to address the different dimensions. Not surprisingly, there is a concentration where you would expect it to be. We were generally pleased that we have a geographic spread.

One of the other projects that we did: our agency funds the evidence-based practice centers, and we asked one of them to develop a report on the evidence base for a variety of patient safety practices. The Stanford-UCSF EPC conducted that study. Some of you may have seen it; if you haven't, it is available on the web for free download.

We have a warning message: it is 750 pages long, and your colleagues will probably give you some static because you will tie up your printer. If you want to request a copy that is already printed, we would be happy to send you one and save your printer and ink cartridge. If you haven't seen that report, I think it is fairly useful as a baseline of where we are on the evidence for certain safety practices.

We are working with the Department of Defense to evaluate training programs, and we are developing patient safety measures. One area where we know there is a need for common assessment is things like the safety culture of organizations. Rather than having all of our grantees go off in multiple directions, we want to build on the activities that have already gone forward in the VA, some things in the DOD, and other funded projects, to get some public domain, valid, reliable instruments that we can make available. Again, we are doing a feasibility study there.

One thing that was a bit new for our agency is that this is a fairly large portfolio of grants. There are actually 94 grants and contracts that constitute the money we spent within the $50 million. We want to make a conscious effort to maximize the interaction among individual grantees, as well as mutual support.

To manage a portfolio of 94 projects, we could either increase our staff geometrically, or create a coordinating center under contract to help us do that. So, our coordinating center for patient safety research is there to provide additional support; to facilitate grantee communications and interactions through web-based and list serve communications, annual meetings, and periodic teleconferences; to provide technical assistance in the areas of methodology and design; to look at common instrumentation across multiple grants as it becomes available; and to collect and coordinate data sharing among the grantees. Obviously, at the later stages of the projects, it will assist in the dissemination of the information. Again, our goal is to manage a rather uniform portfolio, knowing that we have a variety of different projects, but to keep them moving together as a whole.

Another major area that we are engaged in, and we presented this information before to this group, so I will be fairly brief, on the activities of coordinating the existing federal reporting system. We have created the patient safety task force to bring together our CDC, CMS and FDA, to look at how the existing reporting activities that go on amongst the federal agencies as part of HHS can be more integrated, both in terms of providing some front end to those who are asked to report -- the users, the hospitals and health care facilities -- and then the integration of the data once it comes in.

What we found was that each agency has its own responsibility and regulation and has designed its reporting and data needs based on its specific requirements. Often, the data that is collected doesn't fit well once you get it together, simply because of the somewhat narrow approach. The people at the other end, the users, tend to report information in a more integrated fashion, but it gets segmented and can't be put back together for analysis.

To help us get started in that area, we have an implementation planning study that Medstat is doing for us, to peel back and see where the opportunities and possibilities for integration of the data are, what would be necessary to provide the linkage to get this data together, and then what plans would be most beneficial. A feasibility planning study follows on from that, and then we will actually look at how we can design and implement.

Another important element is the fact that we need some data guidance on various aspects of patient safety. We have asked the IOM to do that. I think you will get more information from Janet on those activities in a few minutes. I won't steal any of Janet's thunder in that area.

One of the things that we think is important, as we look at different types of errors, is the kinds of categories as they relate to reporting. It is the iceberg model. At the top of the iceberg is what gets lots of attention: the sentinel events, the misadventures, where there was actual harm to patients.

Below that, we have an awful lot of no-harm events. With a no-harm event, because of either luck or the robust nature of human physiology, the event never came to fruition in harm to a patient, but the potential for harm was there. The thing about a no-harm event is that it got through all your barriers; nothing stopped it. We were just lucky. With a near miss, somebody actually intervened and interrupted it. It is important to study all three of these categories of events.

The concept of this iceberg really comes from Heinrich's work in transportation, where he found that for every major injury there are 29 minor injuries and 300 no-injury accidents. So, the potential to study this dimension is quite significant if we move past just the specific harm events. There is lots of opportunity to study our system.
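To make the arithmetic of that ratio concrete, here is a minimal sketch in Python. The 1:29:300 ratio is Heinrich's; the observed count and the function name are hypothetical illustrations, not anything from AHRQ.

```python
# Heinrich's ratio: for every major injury, roughly 29 minor injuries
# and 300 no-injury accidents.
HEINRICH_RATIO = {"major_injury": 1, "minor_injury": 29, "no_injury": 300}

def estimate_iceberg(major_injuries_observed: int) -> dict:
    """Scale the 1:29:300 ratio by the number of observed major injuries."""
    return {kind: ratio * major_injuries_observed
            for kind, ratio in HEINRICH_RATIO.items()}

# A facility seeing 5 major-injury events a year would, on this model,
# have roughly 145 minor injuries and 1,500 no-injury events to study.
print(estimate_iceberg(5))
```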

One of the things we have to consciously keep in mind is that the first step in error management is detection. If you don't know about an error, you can only react. From an organizational point of view, it is important for the detection of events to be high, because it is the errors that you don't detect that can have disastrous consequences. That creates the context of the detection sensitivity level. We would anticipate that the number of reports any organization gets should increase, and increase dramatically; what we are really managing for is the reduction in risk of the events reported.

We get this information from other organizations that have looked at event reporting. So, one of the things we have to keep in mind is that our goal in patient safety is to reduce the risk of iatrogenic injury. It is risk that we want to reduce, not necessarily the number of errors or error reports. In fact, if we are successful, we are going to know a whole lot more about error than we did before.

What we want is for the risk associated with events to go down. That is important for us to keep in mind, particularly as we communicate with Congress. If we suddenly see the number of reported events increase, that is actually a good thing and not a bad thing, and it takes some effort to turn that understanding around.
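As a minimal sketch of that distinction, with hypothetical severity weights (any real system would use its own severity scale), the average risk carried per reported event can fall even as report volume rises:

```python
# Distinguish report volume from the risk carried by reported events.
# The severity weights below are illustrative placeholders.
SEVERITY_WEIGHT = {"close_call": 0, "no_harm": 1, "minor_harm": 5, "major_harm": 25}

def mean_risk_per_report(reports):
    """Average severity weight across a year's reports."""
    return sum(SEVERITY_WEIGHT[r] for r in reports) / len(reports)

year_one = ["major_harm", "minor_harm", "no_harm"]              # 3 reports
year_two = ["close_call"] * 25 + ["no_harm", "minor_harm"] * 2  # 29 reports
print(mean_risk_per_report(year_one))  # ~10.3: few reports, high average risk
print(mean_risk_per_report(year_two))  # ~0.41: many reports, low average risk
```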

Well, what are our next steps? Obviously, the major task now is to implement the agenda and set the priorities for those things we have already funded. We have to coordinate the 94 separate projects we have going and try to -- call it herding cats, if you like -- move them in a relatively coordinated manner, communicating well together and sharing the data. We want to be sure that we leverage every single dollar of that research portfolio for maximum benefit. Clearly, we need to be building partnerships, from our federal role out to our state and local institutions.

The other thing is that we have to keep the momentum going. That is going to be a challenge, because we have intervening events, like those of September 11, that shift attention. We have to keep the momentum going. I can entertain any questions.

DR. STARFIELD: Hi, thanks very much. I am Barbara Starfield. I have two related questions, and they have to do with terminology. Most of your presentation was on patient safety. Two or three times you used the term errors. Patient safety is more than errors; it is unanticipated or adverse effects as well.

I wanted to know how you conceptualize your work and how do your grantees conceptualize it? Are they really dealing with the broad field of patient safety? Are they dealing mostly with errors, and what is the balance?

The second is an unrelated thing. You used the term iatrogenic injury.

MR. BATTLES: Yes.

DR. STARFIELD: I wrote an editorial in JAMA once and used the term iatrogenesis, and it was taken up on a lot of web sites which said, doctors kill patients. I want to know what you think of as iatrogenic. Is it more than doctors doing things? Is it the system? Is it the poorly tested drugs and that sort of thing?

MR. BATTLES: On the first question, clearly, we look at it as more than just error. That would be fairly limiting.

The use of the term medical error can be helpful, because the public understands it. But clearly, on the research dimension, if we are going to understand this, there is a very close interrelationship between patient safety, quality of care, and the structures within which the activity goes on.

So, if we only look at a very narrow definition of, is this an error or not, we will probably miss most of the important areas.

DR. STARFIELD: On adverse effects.

MR. BATTLES: Absolutely. Again, our goal is to reduce injury to patients, however they are injured. So, we take a longer view of that -- you can debate where the injury lies. We may not know how to prevent some injuries today, but we certainly ought to be looking at that, and that would be part of the agenda. Iatrogenic is my personal choice of term because it is in the dictionary and has been used; it means caused by treatment.

I think one of the push-backs people have had is that they have assumed that meant the action of a physician or another health care provider. If we look at the potential for creating error, I like Jim Reason's term; he uses the term latent failures. Those are the organizational structures that one has. He claims that the higher one is up the organizational ladder, the greater one's capacity to create latent error.

So, our CEOs and other people may be equally dangerous in this setting. If we hold anybody at the sharp end accountable, those at the blunt end are equally accountable for error. Everyone in the system can produce an iatrogenic injury, at least in my concept. Others may push back.

MS. COLTIN: I had a question. When you were talking about the projects that have been funded in the area of effectiveness research and patient safety, what basically works to reduce errors, you mentioned a project around uniform vocabulary and coding standards. Does that relate to vocabulary and coding standards in existing administrative data systems and developing electronic medical records, or in the types of error reporting systems that you are also funding, or is it all of those?

MR. BATTLES: That is what we asked Janet and the IOM to help us with, in part because it covers a variety of that activity. The concepts there -- getting guidance on data standards -- are fairly broad. We didn't punt, but we wanted to take a broader approach to looking at that. That is why we have engaged the Institute of Medicine.

Janet may, in a few minutes, be able to answer that question more specifically. I am avoiding it.

MS. COLTIN: Other questions? Well, thank you very much. We will move on to our next presenter, and that is Noel Eldridge.

Agenda Item: Public/Private Sector Patient Safety Initiative.

MR. ELDRIDGE: I have brought a copy of everything; you can go grab one. They are the same charts that the people on the committee have, plus a bunch of other material from the VA. You can see there is a lot of material I brought with me today, and I can't cover it in 20 minutes. So, I am going to cover it as quickly as I can and skip over some things in my presentation.

If it is okay, I would like to address a couple of comments that Dr. Starfield made after Dr. Battles, just to give you a little context on our program, because I think they are informative. We have tried to scrub the word error out of almost everything in the program, because it means a lot of different things to a lot of different people.

Our program is a lot bigger than just talking about errors. We also look into attempted suicides. We look into something that is maybe more of a problem for us than for maybe other health care organizations, what we call patient elopements, when patients go missing or they disappear from the hospital or health care setting.

We look at falls which, per se, aren't really errors, but there can be a lot of adverse consequences from patient falls, and we try to prevent those. We look at all manner of adverse drug events, even those that aren't associated with an error per se. So, we really have tried to de-emphasize the word error.

I remember I was at a meeting with the National Quality Forum last year, shortly after taking this job. I heard Dr. Ken Shine from the IOM say -- and it made sense to me at the time based on the context -- we are not talking about mistakes; we are talking about errors. Looking back from a year ago, I don't remember what he was trying to say, but at the time it made sense.

I think this error/mistake distinction really gets out of hand. So, we have tried to de-emphasize the word error. We have also tried to de-emphasize definitions. For example, iatrogenic: if you look it up in my dictionary -- and I did look it up -- it says physician-induced adverse consequence, or whatever the wording is.

It is clearly a physician. I looked it up in more than one dictionary and it says physician. It doesn't say pharmacist. It doesn't say nurse. It doesn't say X-ray technician or MRI tech or whatever. It says physician. So, it gets out of hand sometimes when you really try to focus on the definitions. If you make the definitions the most important thing, you have a hard time getting started even.

That wasn't my decision. I haven't been with the program for a year, but I think it was a good decision by those people who were working on the program, to get started. One of the things that is different about our program is that it started in 1997-1998, and it has been in its current form pretty much since 1998-1999.

Most other projects or initiatives were at least jump started, if they were initiated in response to the IOM report, in late 1999. This one wasn't. We already had a program going at that point, and this just brought more attention to it, really.

What I want to focus on on this page is in the middle. Our system is designed for learning -- learning and application to reduce future occurrences -- rather than punishment and going after the individuals associated with the adverse events. We set up an external panel several years ago, and they gave us some advice we have been trying to implement. One piece is that the program needs to be viewed as non-punitive by people in the health care setting. Punishment doesn't just include formal punishment, like being reprimanded or being forced to take some training course, but also the shame and the embarrassment.

A system has to be set up so that it is not seen to be a punitive system, and we have tried to do that. Then, adverse events should be reviewed by multidisciplinary teams. So, when we set up an RCA team, a root cause analysis team, we try to involve different types of health care professionals on that team, to bring in different perspectives, different backgrounds and different types of knowledge.

Now, the other item is that it is important to give people timely and relevant feedback on what they have reported, so that it doesn't seem to be a pointless exercise. The last one that we have really tried to focus on a lot is close call analysis. We use the term close call rather than near miss in our program. It is kind of semantics.

We think that is important because people are much more willing to talk about close calls than they are to talk about actual adverse events. We thought that was the case and we have seen it in practice. Over 90 percent of our reports are close call reports, rather than actual adverse event reports.

That may be because, as Dr. Battles showed, close calls are much more common than adverse events, and it may also be that people are more willing to report them. Then, the last point is one that Dr. Battles also made, at the bottom: the relationship between the number of events reported and the number occurring is unclear. It is clear that fewer reports do not mean fewer events. That is a tricky thing to explain to some people.

It can be a big problem when the number of reports goes up. You try to tell people that it is not really a less safe system. We emphasize close calls. We have got this picture here of a canary in a coal mine, and that is kind of how we see close calls.

They provide warning that bad things can happen before they actually happen. For example, in our system we had a reported close call related to an MRI machine last January. Someone was almost injured in an MRI, due to the powerful magnet in the MRI. Then we started asking around and we found out that, in our facilities, this was very common. There were a lot of close calls occurring. Any kind of metallic object can turn into a projectile near an MRI machine.

We put out an alert on this within the VHA, the Veterans Health Administration, to the medical centers, and gave them directions on how to try to prevent this from happening. There are a lot of other things associated with MRIs. For example, a lot of tattoos have iron in them. People are getting third degree burns on their tattoos from going into MRIs, and this is not widely known. People with those circumferential tattoos around their arms are getting a circumferential burn around their arms. We try to get that information out.

We didn't wait until we had a real disaster. Probably a number of you heard about the case a few months ago in New York where a six-year-old child was killed by an oxygen bottle that smashed into his head and actually killed him. It was just made into a projectile by a magnet. We didn't wait for something like that to get it out. I think the close calls are very important to us.

Another thing we did was bound what it is that we are talking about within our program. We have organized it so that it can be about learning rather than punishment. One of the reasons we can do that is that we have taken out what we call the intentionally unsafe act. If something is an intentionally unsafe act, we don't look at it; there are other systems within the Veterans Administration that look at that.

This is why we can look into these incidents, these events, and not have in mind that we are trying to go after these people, because they are not making these mistakes intentionally. Another thing about our program is the first bullet: it is led by a director who reports directly to the chief of the organization. It is not part of the quality program; that is just a difference in our program. It is a separate safety program. It is not part of the occupational safety program either. It is a patient safety program alone. So, the first thing we did was narrow it down by taking out all the intentionally unsafe acts.

The next thing we did, to make the job of looking at the adverse events possible, was to use root cause analysis, which you are probably familiar with. We came up with a system to figure out which adverse events to look at. Let me show you this a little bit here.

Basically, we look at two aspects of the event: the severity of the event and the probability of the event occurring. We have got a little scale here that allows us to standardize the handling of the reports and permit a rational selection of the cases to be considered. There is more information attached at the back here, and I won't go through it too much. These here, you can see the numbers.

These are the ones where, if an event falls into this category, the person in the facility has to do a root cause analysis on it. They have to establish a team and do a root cause analysis. If it doesn't fall into category three, they don't have to do a root cause analysis on the event. I can go into that in more detail, if people are interested, after the core presentation.
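A minimal sketch of that triage rule, assuming hypothetical severity and probability categories; only the rule that a score of three mandates a root cause analysis comes from the presentation, and the scores below are illustrative placeholders, not the official matrix values:

```python
# Safety assessment code triage: severity x probability -> 1, 2 or 3.
# Placeholder categories and values; the official matrix assigns its own.
SAC_MATRIX = {
    ("catastrophic", "frequent"): 3, ("catastrophic", "remote"): 3,
    ("major", "frequent"): 3,        ("major", "remote"): 2,
    ("moderate", "frequent"): 2,     ("moderate", "remote"): 1,
    ("minor", "frequent"): 1,        ("minor", "remote"): 1,
}

def requires_rca(severity: str, probability: str) -> bool:
    """A score of 3 makes a root cause analysis mandatory."""
    return SAC_MATRIX[(severity, probability)] == 3

print(requires_rca("major", "frequent"))   # True: the facility must do an RCA
print(requires_rca("moderate", "remote"))  # False: an RCA is not required
```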

We set up a reporting system and a system to conduct root cause analysis. Our system depends on the care givers at the front end to receive the initial reports and conduct root cause analyses. The reports come in to the hospital from within that hospital. They don't send it to us at our central office.

When they are done with the root cause analysis, then they send it to us. So, there is no blue ribbon panel or anything that comes out to look at every adverse event. The people in the hospital are empowered or tasked or forced or however you want to look at it, to look at these adverse events themselves and try to come up with ways to address them.

To enable them to do that, we have had a very thorough training program. We have trained over 1,000 people in a three-day course in root cause analysis and other aspects of patient safety. Each of these courses, offered over approximately the last two-and-a-half years, has been led by the director of the national center for patient safety, Dr. James Bagian.

So, he comes out to every training program and he leads the whole thing. It is not something that is delegated down the line as a kind of a low level effort. It is a very high priority effort within our program, to train people to be able to do these analyses. The chief of the whole program is the key person in doing it, quite frankly. Then, we have also developed cognitive tools, cognitive aids and other things to help people, once they leave the training session.

That is the little flip book that you have in your package there. That is one of the cognitive aids that we have developed. We call them triage questions. They help you get to the kinds of things that can be the root cause of adverse events. We have printed up thousands of them. People keep asking us for them. In fact, the American Hospital Association has ordered 6,000 copies of this to send to their hospitals. Now we are in the process of trying to figure out how to legally print them up and get paid without getting in trouble or anything.

This one is a screen shot of the program we developed, called the SPOT program. It doesn't really stand for anything; we just made up a nice acronym for it. The nice thing about it is that it gives people a way to do the root cause analysis such that everybody is doing it the same way across the country; they can learn how to use it, and it can guide the whole process. For example, let me show you: up here, you can see that you are going to put in the SAC score. That is the one, two or three that I mentioned before.

In the actual program, at the bottom there are little boxes that say continue, or I choose not to do an RCA. If you put a three in there, the little box that says I choose not to do an RCA disappears, because you have to do an RCA if you put a three in there. There are little things like this that help guide the user through the whole program. It was developed by us, and the day before yesterday I actually got an e-mail from somebody who is running the health program in Japan, asking if they can get a copy of this software, because they are interested in using it. We have had a lot of interest in this.
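A minimal sketch of that form behavior, with hypothetical names; only the rule that a score of three removes the skip option comes from the presentation:

```python
# SPOT-style gating: a SAC score of 3 removes the option to skip the RCA.
def available_actions(sac_score: int) -> list:
    actions = ["continue"]
    if sac_score < 3:
        # Lower-scored events may decline a root cause analysis.
        actions.append("I choose not to do an RCA")
    return actions

print(available_actions(2))  # ['continue', 'I choose not to do an RCA']
print(available_actions(3))  # ['continue'] -- the skip box disappears
```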

Well, after you do the root cause analysis and you have some confidence in why the event occurred and what can be done about it, how do you apply it? One thing that we do is have the RCA team that was put together to do the review communicate its findings back to the reporter, the person who initially told them about the adverse event or close call.

Also, at the end of the process, the team has to send their report or brief their report directly to the facility director which, in this case, is the hospital director. The report has what we call corrective actions, things that the team thinks should be implemented to try to address this event so that it doesn't recur, or prevent it from recurring the first time, if it was a close call.

Basically, the director has to concur or not concur on all of these. If the director non-concurs -- say, it is too expensive -- the team has to come up with something else, or at least contemplate coming up with something else, and we document what the director concurs and doesn't concur with. The director is not forced to do whatever the team says, obviously, but he has to document his non-concurrence if he is not going to do it.

Another good thing is that once the RCA is complete in this data base I showed you before, it is sent directly to our head office in Ann Arbor, Michigan. There, it is kept in the data base, and the data base is available for searches and research to determine what types of alerts should be developed and what types of information should be widely disseminated throughout the Veterans Health Administration.

This is really the most challenging part of the whole program right now. You get a lot of adverse events and you get a lot of corrective actions. You don't know which ones necessarily to try to recommend to the whole organization and which ones to say, well, that happened in Big Springs, Texas, but it is not likely to happen in White River Junction, Vermont or whatever. So, it is a challenge for us right now.

Here are some of the results. Overall, we have seen a 30-fold increase in reports, and about 90 percent of those reports are close calls right now. I have a little bit of a philosophy there. You can read that. Basically, we are trying to identify systems vulnerabilities that can be reduced or eliminated, to prevent these things from occurring. We don't emphasize numbers in our program.

Another result is that we have been able to put in place patient safety managers throughout our whole system, at all the hospitals. I can't tell you that they have all been hired. They were all supposed to be hired by July of this year. There are probably a few laggards that have not been hired so far, but by and large, it is probably 150 out of 163 that have been hired.

The whole system is broken up into 22 networks within the Veterans Health Administration. Each one has a patient safety officer for that network of typically about six to 10 hospitals. The system primarily focuses on making changes at the facility, not system-wide. That is the challenge for us: to figure out how to capture the best things that are found at a facility and bring them nationwide, without overly prescribing to intelligent people how they should do their jobs.

This is outside our core program, but I thought it might be of interest to people. We have set up a new patient safety reporting system in concert with NASA. NASA has a long history of doing aviation safety, and they are seen as an impartial party in aviation and probably in health care also. I don't think anybody would feel that NASA has a vested interest in some point of view regarding health care.

It is a complementary system to what we currently have, and it is totally de-identified. People will send in reports of adverse events and close calls to NASA directly, actually on hard copy; they can print out a copy off the internet, write it down, and mail it to NASA. NASA has put together a team of physicians, pharmacists and nurses who are going to look at these reports and then disseminate them. They will be disseminated without any identification as to who submitted the report, what hospital it came from, what state it came from, et cetera.

They have been doing this in aviation for 25 years, they have received over half a million reports, and they haven't disclosed one in that whole 25-year history. This, I think, has the potential to give a lot of insight into what is really going on out there, from people who may be too afraid to report through our regular system.

Even though it is not focusing on punishment, people may still be embarrassed or have some other shame aspect that prevents them from reporting. So, we are paying NASA to do this. It is written in the memorandum of agreement, that we basically can't ask for any information that is not in the memo. Basically, we can't say, this is a really terrible event and you have to tell us what has happened. We cannot say that and they cannot give it to us.

Here are some -- I put this chart in because of the topic on the agenda. I thought you might be interested in some of the things we have accomplished that have been applied elsewhere. Congress directed the DOD to implement a program modeled after ours; I think that was about two years ago. They have been doing it, but they have been slow, partially, I think, because of funding problems.

We have had a number of other organizations come to us to see what we are doing, and to start implementing their programs based on ours. I already mentioned that the American Hospital Association is disseminating some of the things we have developed. A couple of other items that were developed -- training materials in health care failure modes and effects analysis, and the safety assessment codes, the matrix with the ones and twos and threes -- they want to package up with the training module and disseminate to their facilities also.

I mentioned the contact from Japan; they actually went to our Ann Arbor office and spent a couple of days there talking to our people. Dr. Bagian has been to the United Kingdom to help them set up their patient safety program. We have had inquiries from Denmark and Canada also.

The last bullet is a new item. We were just notified last month that our program was one of five winners in the innovations in American Government award program, which is administered by the Harvard University Kennedy School of Government. That was a big kudo for us, and we are pretty happy about that.

That is about all of this presentation. I have some additional information. If you are interested, you can peruse that. The only thing I want to touch upon is this one here. Sometimes you hear people talk about safety culture or the culture of safety, and it sounds like kind of another buzz word or flavor of the month or whatever.

I tried to find a nice definition of it for people to look at and to use within our own organization. This one actually came from an electrical engineering group in Britain, but I think it gets at the same thing we are trying to get to in health care.

If there are any questions, I would be glad to try to answer them.

DR. MAYS: I just wanted to ask, when you were talking about measuring suicide in the beginning, is that because what you were looking at is the safety of the patient, or is that because what you are looking at is the error in judgement in terms of protecting the patient?

MR. ELDRIDGE: It is both. For example, we might have a suicide where somebody who was an outpatient coming in to be an inpatient, and who obviously had psychological problems, smuggled a gun into the hospital and shot himself.

So, we look into that. We don't want him to kill himself. We also don't want him bringing a gun into the hospital where he could kill somebody else. So, we try to do something about addressing that. When we do the reviews of the suicides, we do look at the care the person received, to try to see, did he have to wait six months for an appointment, this kind of thing. That is an effect of the treatment, potentially, if he wasn't getting the kind of treatment that he needed.

MS. COLTIN: I was noticing on the input screen shot for the SPOT program, that there is an area in number five where you indicate the type of event, and you have fall, medication, parasuicide and missing, and then none of these types. I am curious what proportion are falling into the none of these types and whether, out of that, you envision any emerging classification.

MR. ELDRIDGE: Yes, this is something we are working on, and we would be glad to get input from the IOM; actually, we are meeting with some of the IOM staff, who are doing their terminology work for patient safety. As for the reason we have these four categories, I was simplifying the presentation, but since you ask, I will give you the details.

These are probably our four most common types of events. Since there are so many of them, we have come to the conclusion that it doesn't make sense to do an RCA on each one. What we have done is set up a system to do aggregated reviews. So, each facility, every quarter, will be doing a review of all their falls that quarter, all their medication errors or adverse drug events, all their parasuicides and completed suicides, and all their missing patient events.

If you choose one of those, the system will prompt you to see whether you want to do an aggregated review or a review of this particular adverse event. So, once a quarter, the facility puts together a team to look at 10 falls or five parasuicides or whatever. Those, very often, fall into category three, which would otherwise require a lot of individual root cause analyses, especially falls.
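A minimal sketch of that routing; the four type names are paraphrased from the screen shot, and the prompt strings are hypothetical:

```python
# The four most common event types are offered a quarterly aggregated
# review instead of an individual RCA.
AGGREGATED_TYPES = {"fall", "medication", "parasuicide", "missing_patient"}

def review_path(event_type: str) -> str:
    if event_type in AGGREGATED_TYPES:
        # The system prompts the user to choose between the two paths.
        return "prompt: quarterly aggregated review or individual review"
    return "individual review (RCA if the score requires it)"

print(review_path("fall"))         # routed toward the quarterly aggregate
print(review_path("transfusion"))  # handled as an individual event
```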

MS. COLTIN: Is the system that you are working on with NASA using a similar classification for these?

MR. ELDRIDGE: No, I am not sure exactly how we are going to handle that. These classifications are just for the aggregated reviews; that is all we ask for. We don't have the whole system worked out yet -- how we are going to use key words, whether we are going to have a controlled vocabulary, that kind of thing. That is the big question for a lot of people.

MS. COLTIN: You said that that system, the de-identified one, is going to be widely available.

MR. ELDRIDGE: The NASA one?

MS. COLTIN: Yes, what do you mean by widely available?

MR. ELDRIDGE: What we are going to do, in every VA facility, every VA medical center, there are going to be forms that people can just pick up, go home, fill it out, mail it in.

What they have actually done in the aviation reporting system is, there is a section on the form where you write your name, your home address, et cetera, and there is a dotted line beneath that.

Then, when they are done analyzing that event, they actually cut that piece of paper off and mail it back to you. So, they have no record of it, and they expunge it from their data base.

In fact, I don't even know if they enter that part of the data into their electronic data base to begin with. It would be widely available to report.

Then, afterwards, for aviation, what NASA has done, they scrub out the type of plane, if it is an unusual type of plane, or they scrub out the exact place where this incident occurred, so people can't work backwards to figure out, well, this is Delta Flight 700 or something.

Likewise, they are going to do the same thing for us, take out information that would enable somebody to work backwards to figure out where the event occurred.
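As a minimal sketch of that scrubbing step, with hypothetical field names (the real NASA review is an editorial process, not a simple script):

```python
# Drop fields that could let a reader work backwards to the source.
IDENTIFYING_FIELDS = {"reporter_name", "home_address", "facility", "state"}

def deidentify(report: dict) -> dict:
    """Keep only fields that cannot identify the reporter or the site."""
    return {key: value for key, value in report.items()
            if key not in IDENTIFYING_FIELDS}

raw = {"reporter_name": "...", "facility": "...", "state": "...",
       "narrative": "close call involving an infusion pump"}
print(deidentify(raw))  # only the narrative survives
```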

Exactly how that information is going to be made available afterwards is not 100 percent set. We intend it to be information that will be available through the internet and just completely widely available to anybody who wants to look at it.

Maybe there will be a monthly report. Maybe there will be an annual report. Maybe it will be electronic. It is not 100 percent worked out yet. There is going to be no secrecy in that. The stuff that we have within our system is kept very confidential.

DR. STARFIELD: I just wanted to follow up on iatrogenic. You are going to have to develop another term.

As you can imagine, I look in dictionaries, too, and most of them have several definitions, iatro meaning medically caused.

I actually think that iatrogenic could be used more broadly if you consider the medical care system. Otherwise, you have to develop another term, because you are clearly doing something much broader than medically related or doctor related.

MR. ELDRIDGE: That is right. I don't know, maybe there needs to be another term. So far, we have benefitted from not focusing on definitions.

DR. FITZMAURICE: I wanted to ask you, referring back to the patient safety reporting system, does that mean that there will be two sets of forms to fill out, one for the VA and one for NASA, describing the same incident?

MR. ELDRIDGE: Yes, people could report the same incident both ways if they wanted to. There is really not a form to fill out within the VA. We haven't even made a nationwide form.

DR. FITZMAURICE: Are they little cards?

MR. ELDRIDGE: Basically, we have a person who is supposed to be contacted within each hospital. Then, some hospitals have a particular form they use. Some hospitals rely on e-mails or phone calls or whatever.

This turned into a very contentious topic, so we decided to avoid it. Patient incident reports was the term. Since we have 22 different systems, some places had a reporting form they liked and didn't want to use another form. Some people just liked to be able to send an e-mail or phone it in.

We said, look, whatever you want to do to get that incident reported, we don't care. We care about the work afterwards to analyze the incident and find out what happened.

For the NASA one, it is a standardized form that will be the same throughout the whole VA. Right now it is in one of our networks completely, which is Southern California.

It is being moved out into two other networks. By two months from now, it will be nationwide. We are having a giant training session next month.

We are bringing in two or three people from every hospital, in different groups over two weeks, and training them in this program, so that when they go back there are a few people in each hospital who know something about the NASA patient safety reporting system.

DR. FITZMAURICE: Why do you suppose this has happened at VA and not at other moderately large organizations?

What do you think was the catalyst, the spur, that got you going and didn't get into other health plans, for example, or large provider organizations?

MR. ELDRIDGE: I have only been with the organization for a year, so I can just tell you the folklore that I have heard.

I think it is probably because of a few individuals, like Dr. Ken Kizer, who was running the VA, and Dr. Jim Bagian, who was hired to make this program happen. He is my boss, a phenomenal person. That is one of the big reasons: those people.

Also, there are legal reasons, like people aren't quite as accountable in the VA in terms of, if you get sued for malpractice in the VA, the government is going to cover it.

If you are working for the VA and you are a VA doctor, you are not personally going to be sued. Now, you could still get in the national practitioner data bank or other bad things can happen to you, but there are things that provide some protection for people within the VA.

Also, it is the largest health care system in the country. It is a huge system. So, there is a little more uniformity to it. It is distributed, but there is a headquarters that runs the whole thing. So, there are a lot of reasons.

DR. FITZMAURICE: Thank you.

MS. PETERSON: You talked about how a root cause analysis is done and practice changes might be put in place in one hospital, but not necessarily across the system.

Does that information as to changes that they made to make things better get shared within the system?

MR. ELDRIDGE: The way that gets done is -- that is a good question. It gets sent to our office in Ann Arbor.

Right now, we are developing the software that will allow people to see it nationwide. There are very big concerns about the confidentiality of this -- about people being able to peruse all the root cause analyses.

So, we are trying to set it up with the right kind of permission so that, if you are working in one hospital, you can review all the adverse events that happened in another hospital, and see what happened when they studied their falls or when they studied their adverse drug events or these sorts of things.

That is part of our job at the national center for patient safety, to be a national center, to see the things that really need to be disseminated nationwide.

When I was saying that this is something that we haven't totally worked out yet, that is not to say that it is not something that we are working on and working out, or it is not something that we are hoping to be a leader in, in the future.

Right now, I can't come up here and say that if one hospital figured out the best way to prevent whatever from occurring, that everyone else in the VA, within two weeks, would know about it, and would be changing their system to be the same as it is in the good place.

MS. COLTIN: Thank you very much. Now we are going to hear from Janet Corrigan from the Institute of Medicine.

Agenda Item: Public/Private Sector Patient Safety Initiative.

MS. CORRIGAN: What I thought I would try to do is bring you up to date on a few of the developments that relate very specifically to building sort of the data infrastructure that we need for both safety and quality purposes. I know your focus right now is on safety.

At the Institute of Medicine, we view safety as a component of quality. So, each of our projects that relates to quality usually has some part of it that is specific to safety.

We have a variety of projects that cross safety, effectiveness, patient centeredness -- a variety of the components of quality overall. I will try to focus mainly on the data infrastructure aspects of some of the different projects underway, and there are four efforts I will touch on.

The first is some ongoing work that relates to creating the ability to do national safety and quality tracking. The purpose of those projects is mainly to look at the magnitude of safety and quality problems, and to have some idea over time of whether we are getting better or worse in terms of our performance overall in the country.

The second area I will touch on has to do with this project that we have on patient safety data standards, a project which Jim Battles has talked a little bit about earlier. I will follow up specifically on what we are doing there.

The third area I want to touch on is a federal quality oversight project, which is looking very specifically at comparative quality information across the various federal health care programs, and at what needs to be done if that also is going to be publicly reported.

Then the fourth area I will touch on is some development efforts that have to do with building a national health information infrastructure.

Let me go back and start with the national safety and quality tracking system. The Institute of Medicine did a project called the National Health Care Quality Report; you have all seen the report that was released, and I think I came here and talked about it once before. Now there is an extensive effort underway in the Department of Health and Human Services to put together that national health care quality report.

Among the dimensions of that report, one component is safety, another is effectiveness, a third is patient centeredness, and the fourth is timeliness; equity is viewed as a cross-cutting issue.

We have a project underway now that is taking a look at that equity cross cutting issue, and specifically, it is a project on health disparities.

We will be doing a workshop in the spring that is going to take a look at the kinds of measures that one might want to use, as well as the underlying data sources, to look at those four dimensions of safety, effectiveness, timeliness and patient centeredness, as they relate very specifically to different subpopulations that we have reason to believe are not receiving the kind of care that they should be receiving, and different issues that relate to health disparities.

That project will release a report in September of next year. It is not a full-blown committee report, as we call them at the IOM; rather, it will be a report on the workshop, with preliminary findings about the types of measures that might be useful in this disparities report, which is viewed as part of the family of reports in the national health care quality effort.

Once again, the emphasis there is on the magnitude of safety and quality concerns: measuring how big the problem is and what the direction of change is over time. In terms of source data sets, they will be looking at three potential areas, the same ones we looked at in the national health care quality report.

The first is the potential to draw useful information from survey data, namely MEPS, the Medical Expenditure Panel Survey, which is clearly the largest. MEPS has a lot of advantages and is probably the most useful source of information in the near term. It is expensive, though, and if one wants to expand the sample size to get state level estimates, that introduces a whole new aspect, with much more extensive sampling and sample size issues.

Our earlier committee, on the national health care quality report, also focused some attention on the use of administrative data, and I think concluded that, to a great extent, we have milked administrative data about as far as we can go. It has real limitations in terms of the clinical richness of the data, as well as differences across the various programs, so it is difficult to get comparability.

Then one immediately gets into the whole area of automated clinical records, and I will touch a little bit later on an effort we have underway to address some of the barriers to automated clinical records and their development.

The second effort we have underway is what we call the patient safety data standards project. This project, as Jim indicated, really grew out of all the attention being paid to patient safety issues and to the reporting systems being set up, not only within the federal programs -- the VA has one that goes back a good long way, obviously, and is really one of the best examples of patient safety reporting.

The CDC also has a reporting system in place, and there are discussions underway in Medicare, DOD, and other federal programs. In addition, there are now about 14 or 15 states that have reporting systems in place, mostly focusing on adverse events, and there is a great deal of variability in those systems. The work of the National Academy for State Health Policy has really started to document very well what the various state level systems are collecting. I think there is an expectation that more states will get into the area of patient safety reporting systems.

Indeed, the IOM recommended that they should, at least for the most egregious and serious events that result in death or very serious permanent injury, which is, granted, a very limited set of adverse events. Still, that will be an important development. So, we want very much to think through what could and should be standardized across these various systems, so that they get more useful information out of them, and also so there could be some comparability, so we could actually begin to look at comparative data.

In the case of adverse events, one of the things that the IOM committee will be taking a look at is the ability to define events and data sources such that one might be able to calculate rates.

Rate calculation is less of an issue in the near miss category -- though it may be a sizeable issue there, too -- because part of the emphasis there, clearly, is on analyzing individual events so that you understand what the root causes or contributing factors were. So, there are different objectives for different kinds of reporting systems, but the committee will be trying to get below the surface and identify what kind of standardization is possible, both in the events being reported and in the basic data sources used to report on those events.
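
To make that concrete in a purely illustrative way: a rate needs a standardized numerator (what counts as an event) and a standardized denominator (the exposure over which events are counted), or comparisons across reporting systems are meaningless. A minimal sketch in Python, with every number hypothetical and no connection to any actual reporting system:

    # Toy illustration: adverse event rates are only comparable across
    # reporting systems when the event definition (numerator) and the
    # exposure measure (denominator) are standardized. All figures are
    # hypothetical.
    def adverse_event_rate(event_count, patient_days, per=1000):
        """Events per `per` patient-days."""
        if patient_days <= 0:
            raise ValueError("patient_days must be positive")
        return event_count * per / patient_days

    # Identical event counts, very different exposure:
    print(adverse_event_rate(12, 48000))  # 0.25 events per 1,000 patient-days
    print(adverse_event_rate(12, 6000))   # 2.0 events per 1,000 patient-days

Two systems reporting the same raw count can imply very different rates once exposure is standardized, which is why defining events and data sources consistently matters for rate calculation.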

This project is a two-year effort that just began a couple of months ago. It now has a team of three people working on it. The project director is Philip Aspden, and he has only been on board for two weeks, or he would probably be here telling you a lot more detail about this project than I am prepared to today.

As a first step, what he and the other members of the team are doing is meeting with people like Noel Eldridge, where there has been an extensive reporting system in place, to get an understanding not only of the types of events being reported, but also of the kinds of forms being used to report that information, and what they see as the major standardization issues going forward. So, they are getting the lay of the land by looking at the different reporting systems currently in place.

I expect the committee will be finalized in January. We are in the process of moving a potential slate through the internal structures of the National Academies which actually takes a little bit of time, as you might imagine. The final report for that project is due in the fall of 2003.

The third effort underway that looks at data infrastructure issues for purposes of safety and quality is a congressionally mandated project that has been going on now for a little over a year. It is called the federal quality oversight project, and it is looking at federal quality oversight activities in Medicare, Medicaid, DOD, the VA, SCHIP, the Indian Health Service, and some other programs of the Public Health Service. This project has a very, very broad scope.

The quality oversight programs it is looking at include things like certification, accreditation, and licensure requirements -- the basic capabilities that have to be there in facilities and among the health professions -- and also the variety of quality measurement and oversight programs that are in place, like the PROs for Medicare, which are also being used by a variety of other programs as well.

It also is looking very closely at the extent to which these programs have, over the last five to 10 years, moved toward direct measurement of clinical quality and safety, and the kinds of measures they are using for that purpose. It is also looking at the extent to which they either currently are producing, or in the near future will likely begin to produce, publicly available information in that area.

As soon as you start to go down that road, one of the things that is quite clear is that most of these federal programs have moved pretty extensively to direct measures of safety and quality, clinical quality, and that there is a particular focus on a subset of chronic conditions that pop up high on the list for most of the programs.

There is some variability, but there is also a good deal of commonality in the kinds of conditions and clinical measures being looked at. That, as you would expect, raises a whole set of issues about standardization of performance measures, not only for making the information more useful to the programs that might want to share it, but also because, if you are going to put comparative quality information in the public domain, you would like some commonality in how you are measuring things and reporting them.

So, that committee is looking very closely at the infrastructure issues. Indeed, if our quality oversight and reporting activities are going to move even more extensively toward technical measures of safety and quality, we have got to have a much more sophisticated information infrastructure to support that kind of emphasis. So, they are looking closely at ways one might encourage the development of a more sophisticated information infrastructure.

There are numerous options that have been kicked around at a workshop that we held recently, and elsewhere within the community. Clearly, if there are consistent requirements for the reporting of various performance measures, that puts pressure on that delivery system to automate clinical data and to be able to report that information.

There is also a great deal of discussion -- and I know there has been at the framework board of the National Quality Forum as well -- about whether we need to think more about minimum requirements for clinical information systems. One possible approach mentioned at our workshop was conditions of participation that require things like automated medication order entry systems.

If, indeed, we can achieve a 70 or 80 percent reduction in adverse medication events -- which I think is what David Bates's most recent studies show for places like Brigham and Women's, where they have gone a good way down this road -- then it begins to say that this is part of the necessary infrastructure that a hospital or medical group or others must have to deliver safe and high quality care.

That is kind of a flavor of the set of issues that are being addressed. I don't have a clue where they will come out. I wouldn't tell you if I did, but that is the active debate that is going on, and went on at a workshop we had just two months ago.

The fourth area where we have some developments underway grows out of all the various products of the IOM and the Quality of Health Care in America Committee, which released the Crossing the Quality Chasm report. I think it is becoming clearer that we have moved past the phase where we can think about relying on administrative and billing data exclusively.

That is not to say they aren't rich sources of information, but our health care record committee concluded that we must move to automated clinical data overall, not only for purposes of reporting, measuring, and monitoring quality. They reached that conclusion because they felt you really can't deliver safe and effective care without the information support, without decision support systems.

If you want those decision support systems, you have got to have the automated clinical data. Having reached that conclusion, we have now started to identify what we think are the major barriers to the development of that infrastructure.

They fall into four categories. The first is an absence of leadership and public will. That is a broad category, but there is a need to really begin to cultivate strong leadership to carry us forward and to address clinical information infrastructure issues. Part and parcel of that is continuing to build the public will to address these issues.

We actually view raising public awareness about the magnitude of the safety issues and of the quality of care issues -- the quality gap -- as an important part of building the public will to have such an infrastructure, and to put in place the investments that are needed to have automated clinical information systems.

The second barrier is standards. You have heard a little bit about our efforts, and I know the NCVHS itself has made a tremendous contribution in the area of standards.

I will mention one other effort that you might want to get more information on, if you are not already aware of it. It is the National Quality Forum's summit, which they are holding on March 6 and 7.

That two-day effort will focus a lot of attention, I think, on the standards issue and you will hear a lot from different vendors and different groups about what they see as the barriers in the standards area.

The third barrier we see to developing a better health information infrastructure is work force issues: the ability of the health professions and other groups to function in an information rich environment, and their willingness to be a part of what it takes to automate clinical data and to use the decision support systems.

Toward that end, one effort we are working on with ASPE, and in particular with COGME and NACNEP -- the nursing counterpart of the Council on Graduate Medical Education -- is to collaboratively sponsor a health professions summit on June 17 and 18.

One of the focus areas of that summit will be how we better train members of the health professions -- physicians, nurses, pharmacists, dentists, and others -- to work in a very different environment, one that has automated clinical data as well as decision support systems.

So, how do we train them, in the early phases, to want to work in that environment, to realize and appreciate its benefits, and to have the minimum skills required to feel comfortable with that technology?

Then the fourth and last barrier that we see to the health information infrastructure -- I am sure there are lots of others, but this is the fourth major one our committees have identified -- has to do with financing and incentive systems.

There, the issues are not only capital -- how much capital is required, where it is going to come from, and how we make sure the available capital goes to building the health information infrastructure -- but also what incentives we provide to health institutions and professional groups to improve quality, and to do it in a way that builds on a strong health information infrastructure.

To address those four areas, we clearly have some efforts underway. At IOM, we are trying to chip away at these one by one, and we are hoping that other groups will do so, too, and many of them are.

We also have in the planning phase a broader project that will look at all four and how they fit together, and it is a project to build a health information infrastructure.

We hope to get that off the ground in the spring. We have some support for it. We are actively trying to raise the additional support that we need, and also to help figure out how to best position this project.

Although it has grown to a great extent out of our recent emphasis on quality and safety in health care delivery, any health information infrastructure needs to serve research needs, public health, and a variety of other audiences.

We are trying to take a good deal of time at the front end to make sure that this project has been framed and specified well, and to make sure that we have the right cast of characters and the right expertise at the table, and groups involved at the front end.

MS. COLTIN: Questions?

DR. STARFIELD: Thanks, Janet. As always, very enlightening. Under your fourth category there, the national health information infrastructure, you actually address the computerized patient record.

I am sure you are aware of the efforts of this committee to develop the thinking about an NHII.

How do you see us working together so that we come up with something that, together, has the same force?

MS. CORRIGAN: Hopefully very closely. We absolutely don't want to reinvent the wheel. I will say that the Board on Health Care Services, which is the board that oversees the work in the division that I am in, has been chaired by Don Detmer for the last six years.

Paul Clayton is a member of the board, and so is Paul Tang from the Palo Alto Clinic. At a recent meeting that we had, to try to begin to think through this project -- and Ted Shortliffe also weighed in on this issue at our IOM council meeting -- everyone was very complimentary about the work here within NCVHS.

If anything, we want to find ways to amplify that message, to reinforce it and to build on it, and to figure out ways that we, as a group, all of us can just push this agenda forward.

Some of the ways we would try to do that are, first, to identify potential committee members who are really versed in the prior work and have a lot of knowledge about it.

Second, we would certainly want to talk with all of you about ways we could collaborate on workshops and really bring that thinking into the effort, and make sure that both groups, as they go forward, are very much aware of each other's agendas.

DR. STARFIELD: Could I just follow up on that? As you know, the NHII report that is coming out of here has got three parts, and the computerized patient record is only one of them. Then there is the population one and the personal one. To what extent are you focusing on those other two?

MS. CORRIGAN: They are very much on the table, yes, but we are at a very early stage in that project. We aren't very far down the road.

It is just beginning to be put into draft and descriptive documents, and we are just beginning to sort through it.

MS. COLTIN: Other questions? Okay, thank you very much. So, up to this point, we have heard a lot about what has been going on in the public sector, and the work of the IOM, which bridges both the public and private sectors.

Now we are going to hear a bit about some of the activities that are occurring in the private sector. Our first speaker is Dr. Allen Vaida from the Institute for Safe Medication Practices.

Let's take a few minutes for a break.

[Brief recess.]

Agenda Item: Public Sector Patient Safety Initiative.

MR. VAIDA: Thank you very much. It is a real pleasure to be invited here. What I would like to do is spend the time telling you a little bit about the Institute for Safe Medication Practices.

We are a non-profit, public organization with over 25 years' experience in medication error reporting and the dissemination of safety recommendations. As we heard from the speaker from the Veterans Administration, we actually started our program a year before NASA's aviation program. We have never had a breach of confidentiality in over 26 years.

We have a multidisciplinary board of trustees: health professionals, academia, consumers, and also industry. Our mission is to encourage voluntary reporting, translate errors into education, promote a non-punitive culture, and help prevent medication errors by productively interacting with several constituencies -- regulatory agencies, professional organizations, practitioners, and industry -- although we are a totally independent organization.

The foundation is our medication errors reporting program, which is operated by the United States Pharmacopoeia in cooperation with ISMP. Practitioners, consumers, any health professionals can report in confidence, either through toll free numbers or through a self mailer reporting form. In the handout you have there, I have actually included a copy of the reporting form, the ISMP MERP reporting form, which is a self mailer. We also get error reports in via the web site, as well as by phone.

USP and ISMP are FDA MedWatch partners. What that means is that all the reports we get in, we share with the FDA. Conversely, we get a copy of the FDA reports that come in that deal with medications.

We disseminate our information through journal publications; we have articles in all the journals listed. With the journals and newsletters, at last count -- in fact, we just went through this at our organization -- the top half there runs from the ASHP newsletter down to ePocrates. ePocrates is drug information that is provided to physicians for free on hand held devices.

We actually have what we call DocAlerts that we put on there; the FDA also has alerts on there. Through the publications in the top half, we average about two million in circulation a month.

We also have our ISMP medication safety alert. This is a publication that goes out 25 times a year, every two weeks. It is sent to every hospital in the United States and to about 30 foreign countries.

We estimate our readership by telling the hospitals and organizations that get it to make sure they share it freely within the organization; at last count, close to 600,000 people share that publication. We also have a book, Medication Errors, edited by our president, Michael Cohen. And we have medication safety self assessments, which I am going to talk about in a few moments.

We have a very active web site. If anybody has an opportunity to take a look at our web site, we also have publications on there. We have links on there, but we also have some testimonies that we gave in front of Congress and a number of legislative organizations on reporting programs and voluntary versus mandatory reporting.

Some of the success stories, I am just going to touch upon a couple of them. Several years ago, there were over 70 deaths reported with an injectable lidocaine bolus product. ISMP worked with the manufacturers, worked with FDA, and actually had this product pulled from the market.

More recently, there was a product called Cerebyx that is used for seizures. Labeling issues with it actually led to seven deaths, and we were instrumental in getting that labeling changed. Concentrated potassium chloride injection is something we have worked on with the Joint Commission, to have it almost entirely removed from hospitals.

We have listed dangerous abbreviations and recommended changes, which we also publish with the United States Pharmacopoeia and which have been widely disseminated through journals and other media. The Joint Commission has a sentinel event alert, and they have actually picked up a lot of our recommendations and put them in their alerts.

Just a couple of pictures of success stories. With the United States Pharmacopoeia, there is a national coordinating council made up of several organizations: pharmacy, nursing, the American Medical Association, and the FDA is part of that organization as well.

What we have done is looked at the labeling and actually had the caps changed. You can see on the cap here, it actually says "paralyzing agent" for neuromuscular blocking agents. Also, if you look at the vial on the left, this is another problem we saw when the product first came on the market: the label says 20 milligrams per ml, and you have to look up in the corner to see that it is actually a five ml vial. There were seven deaths reported with this.

These are the types of things that we try to disseminate around the country. With the labeling change, if you look at the vial on the other side, it now says 100 milligrams per 5 ml, and a lot of manufacturers have picked up on this as the way labeling should be done on new products that come out.

We also work with chemotherapeutic products. This is cisplatin, a chemotherapeutic agent. We suggested the tall man lettering and the box warnings you see up there, and we worked with the manufacturer on this; they actually redid their entire chemotherapy labeling.

Along with that, as you saw on that last slide, tall man lettering is something USP and ISMP have been promoting over the years for institutions to use to differentiate look alike, sound alike medications -- for example, printing CISplatin and CARBOplatin so the differing letters stand out.

We are glad to report that the FDA recently sent a letter to 144 manufacturers asking them to start using this tall man lettering on their labeling to differentiate products.

Although our newsletter goes out every two weeks and we try to get information to practitioners very rapidly, sometimes the two week time span isn't quick enough. So, several times during the year we will put out national alerts, and not just to our subscribers; we will send them to every hospital.

We will send out a press release, e mail, fax, and we will also mail them. We will get lists from associations, such as the American Society of Health System Pharmacists directors' list, and lists from other professional organizations, and put out what we consider hazardous conditions.

One of the things we heard before was, do you wait until an error happens? This is something we pride ourselves on: a lot of times we don't wait until an error happens. We recently sent out a national alert on these two vials here. The one marked pancuronium is a neuromuscular blocking agent, a paralyzing agent.

The vial on the other side is enalaprilat, which is used to lower blood pressure. When this enalaprilat came on the market, hospitals had been spending tens of thousands of dollars for the brand name, so they were all eager to get the generic. As soon as it hit the market, a major group purchasing organization notified us that these two vials looked exactly the same.

Within about two hours we had this on our web site. Within about 12 hours we had e mailed and faxed every hospital in the United States and, within 24 hours, we had a press release and a mailing out. We didn't wait until we had reports or numbers. This is the type of information we try to get out.

I think we have heard from some of the other speakers, too, that one of the important things about reporting is what you are going to do with the information. The information we get in, we try to get out to as many people as quickly as possible, and that dissemination also helps bring in more information.

As for consumer information and reports, we do get calls from consumers, who report through our consumer reporting program, and we have a spot on our web site for patient information. I just have a couple more graphic slides showing the types of products we try to warn consumers about.

Some of the information we get out to them comes from the information reported back to us. One product here on the right, the tear green, is actually an eye drop. The other product is a topical anti-fungal agent that you would use on wounds -- something consumers may mix up very easily.

The same thing with the next one; this is an actual report we received. The eye drop on the left there, the 0.5 percent, is timolol, a beta blocker used for glaucoma, which is very common in diabetics. The other one is actually the drops for their glucometer, for checking their blood sugar. Here is a diabetic with poor eyesight mixing these up. Fortunately, there wasn't any serious harm.

This is, again, the type of information we try to get out. We also see nomenclature problems -- look alike names. We get reports in on these, and now we are also getting a lot in from consumers.

There is a product pair down there where you can see Zyprexa and Zyrtec. We have gotten several reports in from consumers, actually, when they go to get prescriptions refilled. The majority of the time these two products appear side by side on pharmacy shelves, and one can be picked up for the other.

In a minute I will talk about our community self assessment, which is also in your packet. This is the type of information we are trying to put into community pharmacists' hands, along with what they can do to try to prevent some of these errors.

Now, a product we are very proud of: actually, several months before the IOM report came out, we were working on what we call our medication safety self assessment. What we have done is taken a lot of the information and recommendations from over 25 years of experience with the reporting program, along with some other information we get.

We go out and do safety reviews for hospitals -- hospital consults of two and a half days -- with an interdisciplinary team of a pharmacist, a nurse, a risk manager, and a physician. We go through the entire hospital medication use process, not just the pharmacy. We talk with administration, we talk with medical staff; we go through the entire system.

With a lot of those recommendations, we said, wouldn't it be nice if we could put them out to hospitals and have them do a self assessment to see how many they actually have in place. Well, we formed a partnership with the American Hospital Association, and through that partnership we received a grant from the Commonwealth Fund, which let us send this out to every hospital in the United States.

I will just mention now -- we don't have a lot of time -- that this is also being picked up internationally: the United Kingdom is looking at it, Canada is going to use the self assessment, a consortium of 20 or 30 hospitals in Australia is already using it, and we are also talking about using it in Spain.

I am going to spend just a few minutes going through a few things on the self assessment, because it went out in the spring of 2000, and data came back to us in the fall of 2000. We are still analyzing some of the data we got back, but I just want to give you some of the results.

When we sent this out, we wanted, as I said, to basically put something in hospitals' hands. We also set up the entire self assessment on the web so hospitals could respond with where they stood. Responses were de-identified, so we couldn't trace them back to any hospital. We wanted to look at aggregate data, so hospitals could compare themselves nationally to demographically similar hospitals.
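
As a purely hypothetical illustration of the mechanics being described -- strip the identifiers, then aggregate scores by demographic stratum so a hospital can compare itself to similar hospitals -- here is a minimal sketch. This is not ISMP's actual process or code; all field names and scores are invented:

    # Hypothetical sketch of de-identified, stratified benchmarking.
    from collections import defaultdict
    from statistics import mean

    responses = [
        {"hospital_id": "H001", "beds": "<100", "setting": "rural", "score": 640},
        {"hospital_id": "H002", "beds": ">=100", "setting": "urban", "score": 730},
        {"hospital_id": "H003", "beds": "<100", "setting": "rural", "score": 744},
    ]

    # De-identify: drop the identifier, keep only stratum variables and score.
    deidentified = [
        {"beds": r["beds"], "setting": r["setting"], "score": r["score"]}
        for r in responses
    ]

    # Aggregate by stratum so no individual hospital is traceable.
    by_stratum = defaultdict(list)
    for r in deidentified:
        by_stratum[(r["beds"], r["setting"])].append(r["score"])

    for stratum, scores in sorted(by_stratum.items()):
        print(stratum, f"n={len(scores)}, mean={mean(scores):.0f}")

Because only stratum-level aggregates are retained, a participating hospital sees how its own score compares to the mean for similar hospitals without any response being traceable back to a specific institution.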

Number one -- which is why I mention this -- in order to do this, hospitals had to get a multidisciplinary team together. I think this was one of the most important things for the hospitals that did it. That is the feedback we are still getting from hospitals, and from the hospital administrators and physicians who were involved in the process.

This was not something we said should go down to the director of pharmacy for him or her to fill out. You have to get a team together, work on it, and look very critically at where you stand on some of these recommendations.

What we do have on our web site are preliminary comparative results, available to all hospitals that participated. For hospitals that didn't participate, the results are up there and the self assessment is still on our web site, so they can download it now and still do it. It is not going into our data base, but we left it up there.

Here, you see we got back 1,435 responses out of about 5,200 acute care hospitals, a 23 percent response rate. We consider this excellent, because at the same time there were a lot of issues around peer review and letting this information outside your hospital, even though it was confidential.

There was a lot of pressure on hospitals not to send this information back -- to complete it but not send it back. So we looked at the respondent profile of the group we did get back and compared it to the national profile of hospitals, and we fell in pretty close.

We had a somewhat higher response from larger hospitals, but it wasn't that far off from the national data. The same with urban versus rural and teaching versus non-teaching: our numbers weren't that far off from the national breakdown, and the same for the different parts of the country.

Although the midwest has 28 percent of all acute care hospitals, 35 percent of our responses came from there, which is high, but you can see that in some of the other areas we were almost right on target with the information we got back. We weighted all the scores in the self assessment -- it was 194 questions -- so that we would have an aggregate score hospitals could compare.

One of the interesting things we saw is that how hospitals did really didn't depend on bed size, rural versus urban, or teaching versus non-teaching. The maximum score you could get on this self assessment was 1,294. These are the hospitals' raw scores.

Hospitals below 100 beds averaged 692; the larger categories were 709 and 709. You can see how close these hospitals were; there really wasn't a big difference by hospital size, or between rural and urban, in where they fell. We also expressed the scores as a percent of the maximum, because when hospitals compared themselves, we didn't want them to look at a raw score and say, well, I scored a 720, so I am pretty good on this.

We wanted them to see that a 720 is still only about 56 percent of the maximum. So, we put the information out that way. Again, this slide just shows rural and urban; you can see there wasn't much difference nationally, and the same for teaching and non-teaching hospitals. Those are the major comparisons.
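
A small worked sketch of the arithmetic behind that point, using the figures from the talk (the 1,294 point maximum and the raw group means of 692 and 709); the bed-size labels are approximate, and ISMP's actual question weighting is not reproduced here:

    # Raw scores can sound respectable until they are expressed as a
    # percent of the 1,294-point maximum cited in the talk.
    MAX_SCORE = 1294

    def percent_of_max(raw_score, max_score=MAX_SCORE):
        return 100.0 * raw_score / max_score

    for label, raw in [("under 100 beds", 692), ("larger hospitals", 709)]:
        print(f"{label}: raw {raw} -> {percent_of_max(raw):.0f}% of maximum")

    # A raw score of 720 is still only about 56% of the maximum.
    print(f"{percent_of_max(720):.1f}%")  # 55.6%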

When ISMP looks at medication errors, when we do hospital consults and reports, we use what we call the 10 system elements: patient information, how information is communicated, medical devices, the environment, quality processes, staff competency, and so on -- 10 areas in all. We focus on these because, especially when it comes to errors, a lot of the research and publications focus on prescribing errors, dispensing errors, and administration errors.

We try to promote a non-punitive environment. A lot of times, if it is an administration error, all the eyes go to the nurse and people say it must be nursing; if it is a prescribing error, everyone thinks of the physician. Instead, we will take the error and, if we are going to do a root cause analysis, ask whether it was due to a breakdown in patient information: did the physician not have the information he or she needed at the time of prescribing?

Was there a breakdown in communicating that information? Was there a breakdown in the environment -- were they stressed, was it hectic? That is how we try to look at this, and that is how our self assessment is broken up. What you see here are the maximum scores you could get for some of these sections; I have only listed five.

Looking at the mean score and the percent of the max, we saw that there were some areas that hospitals actually could use more help in than others, such as patient information, communication, patient education. Right now, we are working with the American Hospital Association and the American Society of Health System Pharmacists, and several physician organizations, to look at providing some educational programs that address some of these areas where we saw real shortcomings.

One of our hopes is to redo this assessment, because we are getting a lot of phone calls from health systems that have done this assessment as a system and now, after implementing some recommendations, are looking to redo it. What I have put up here -- again, you can look at this in your handouts -- are some of the drill down type questions.

Again, some of these questions are paraphrased. Where we saw that hospitals could use some help -- I put down the non-automated communication items because I am not talking about bar coding or CPOE here, but simple, low cost or nearly no cost things that hospitals and health systems could do immediately to help prevent some medication errors.

On the list of dangerous abbreviations, we saw that only 14 percent of hospitals actually had a list of abbreviations that should never be used in the hospital -- something simple, something they could put in place quickly. This is the type of information we are getting back to hospitals, even on a national scale, so they can do something about it now, without waiting for numbers or error reports to come in.

I have patient education down here, too: written information provided to patients before they leave the hospital. Only 21 percent of all the hospitals that responded had fully implemented this.

Did staff investigate patient questions before administering a medication, if the patient had a question? Only 24 percent. So, these are some of the areas where we are now looking to work with the AHA to put some programs together.

Again, what were some of the implications and opportunities we saw when we did this? There is an enormous opportunity to focus assistance; we now have some specific areas where we feel hospitals could use help. On leadership, we heard before, even from the IOM, that there is a lack of leadership.

We felt that by partnering with the American Hospital Association we could start to get some buy in from hospital administration, so it is not just front line staff involved in this, and we feel we are making some inroads. We are also working with a lot of professional organizations. As for impact, we feel the self assessment is an effective communication vehicle for getting information out; it brought multidisciplinary groups together and, as I mentioned before, it is now being used on an international scale as well.

I gave you a handout with our community version. After we did the acute care assessment, we received a seed grant from the American Pharmaceutical Association and the National Association of Chain Drug Stores to do one of these with a community ambulatory pharmacy focus. Again, we got a group of practitioners together when we developed these self assessments, and we alpha and beta tested them.

Right now we have a community ambulatory self assessment out, which is also available on our web site. I gave you a hard copy of the booklet that is being sent out. It is open right now for responses, the results are on our web site, and it will remain open until May 2002.

Then we hope to do the same type of thing: get the information in, aggregate the data, and put it back out. Just briefly, some of the other initiatives we are involved in, beyond the things we are doing with the American Hospital Association: purchasing alliances.

A lot of purchasing alliances have picked up on our self assessments and are using them within their organizations. The Institute for Healthcare Improvement is now looking at a redesign project, and we are working with them on that; we also collaborated closely with IHI on their breakthrough series on medication errors several years ago.

State coalitions: a lot of you have probably heard about the Massachusetts coalition. Actually, we were the ones who prepared the self assessment for the benchmarking project they used. We gave that to Massachusetts, and they are doing some great things with it. A lot of other states are doing tremendous quality projects with groups, there are regional initiatives going on around the country, and there are purchaser groups; we are going to hear about one, Leapfrog, as soon as I get off here.

We also heard about financial incentives. Michigan Blue Cross/Blue Shield actually put a program in place last year under which hospitals get an incentive bonus if they fill out our medication safety self assessment. We felt very good about that, because we know incentive bonuses like this are needed.

As a best practice, what we promote is to look at internal errors and internal near misses -- we heard some of this from the VA -- and, just as important, at external errors; that is something we are very big on. There are still some issues out there: there is still a punitive environment, and we still have a ways to go.

We see this with some reporting programs, too. A lot of reporting programs are springing up now, especially at the state level, that we have concerns with. We get a lot of calls from them, but basically a lot of them are punitive in nature; in many, the information goes to the state boards. We see this throughout the country.

We see it in Massachusetts, which has had one of the longest running programs. We constantly get phone calls from physicians and nurses who committed an error, were called in front of the board, and now have to spend two days in a safe hospital -- and they are calling us to ask, where is a safe hospital in Massachusetts?

Number one, that is punitive in its very nature; who is going to want to report again? Plus, they could spend a year in a safe hospital, but if they are going back into an environment that doesn't have safety features, that is really not going to do much, because we are talking about front line practitioners.

There are still a lot of problems with litigation; I am sure you have heard this before with respect to peer review. We do need peer review protection if we want to share errors. Then there are fragmented reporting systems. As I said, we just heard from the VA; I think they have an excellent program and are really making headway.

One thing, too, is that we have a national voluntary reporting program with USP, and we would love to get some of that information shared with 5,000 hospitals, not just 160 or 180 hospitals. So, those are some of the issues we see out there, too.

Also, USP and ISMP are working with ECRI, another group out there that has a medical device reporting program; they have had that for 30 years, and we share information.

Just recently, we have been trying to sit down and work with the blood banks so that practitioners could perhaps have one national place to report these errors, with the reports then going to the different organizations, because that way we could get something out nationally to everyone.

I put down a couple of other things there, such as having a clear understanding of accountability. I don't have time to go into that now, but we know those are issues, too. We know it is a struggle for a lot of state licensing boards to separate the accountability issues they feel they have to pursue from simply taking in the error reports. I would be happy to answer questions, or we can wait until after the next speaker.

MS. COLTIN: I think we will wait because we are running a little bit behind.

MR. VAIDA: Thank you.

Agenda Item: Private Sector Patient Safety Initiatives.

MS. DELBANCO: I also put some handouts in front of you. I am with the Leapfrog Group. Thanks for the opportunity to come and share with you today what we are up to.

Leapfrog started informally more than three years ago now. It is an effort sponsored by the Business Roundtable, but it began with what we call the usual suspects in health benefits purchasing.

These are the groups that tend to be more forward thinking and progressive when it comes to using their role as purchasers to drive improvement in the health care system. We call these usual suspects who founded Leapfrog the founding frogs. We try to have some humor in the group.

What is interesting, I think, about the founders is that they are a mix of household names -- Fortune 500 companies like General Electric, General Motors, and Verizon Communications, which at the time was still called GTE -- and two of the leading business coalitions, the Pacific Business Group on Health and the Buyers Health Care Action Group.

Then also, very early at the table, were the U.S. Office of Personnel Management, which oversees the federal employees benefits program, and, of course, what was then HCFA and is now CMS.

The group has grown quite a lot. I will tell you, of course, about what we do, but before I get there: today we are a group of more than 90 large health care purchasers who represent more than 26 million Americans, not including Medicare beneficiaries and those who get benefits through the Department of Defense, which is also joining Leapfrog as a liaison.

These purchasers account for more than $46 billion in health care expenditures every year. Although I know it is probably hard to read, both on the screen and on the handout in front of you, these slides show not only the vast number of members we have, but also that our membership is really quite diverse at this point.

While most of our members are, indeed, Fortune 500 companies, we also have many state agencies now, a growing number of unions, and many business coalitions, including those that do direct purchasing. Not even included on this list are another 10 or so organizations of purchasers that have joined Leapfrog as a way of communicating Leapfrog's initiatives to their members.

So, Leapfrog formed informally, as I said, three years ago, because large health benefits purchasers were frustrated that health care costs seemed to be rising ad infinitum, but quality was not rising with them. They felt something really needed to be done to make progress that was appropriately scaled to the size of the problem.

One of the things I think was unique about the founders of Leapfrog is that they were willing to look in the mirror and say, maybe we, as one of the many players in the health care system with some role in the quality of health care, haven't been buying right.

While we do have a fiduciary responsibility as benefits purchasers to make sure the care we are providing to our employees, retirees, and their families is of the utmost quality, when it comes down to it, we make decisions based on cost. Clearly, we are not sending the signal we really want to send, which is that we do value breakthroughs in quality and improvements in the overall value of health care, not just things being as cheap as possible.

One way to think of this is gridlock. When people ask me where the name Leapfrog comes from, I usually tell them it sort of has two meanings.

One is that we want to make big leaps in patient safety, but also that we are interested in leapfrogging the gridlock that we are experiencing in the health care system, that is really preventing us from taking advantage of the know how and technology that we have today to make big improvements in the quality and safety of care.

The best way to describe this gridlock is maybe as a picture of a four-way intersection. On one side you have purchasers who are trying to get through to the other side to make breakthroughs in quality. As I mentioned, they are not buying right; the vast majority of them recognize that they are buying based on cost.

Health plans, while they have been doing any number of things to drive quality improvement, including innovative case management techniques and disease management programs, often know more about the differences among the providers in their networks than they share with purchasers or with consumers.

So, it is very hard for those who are buying health care, in a sense, whether it is the employer or the individual health care consumer, to use information on the differences among providers, both to advantage themselves as well as to send a signal to the health care system that they, too, value quality.

Then there are the health care providers. Being a doctor's daughter, I have always believed that people go into health care because they want to make people's lives better. But it is hard for providers to reengineer the way they provide care unless they see a business case for doing so.

It is very difficult to convince a hospital board, for example, that they should invest millions of dollars in, for example, computerized physician order entry systems, unless there is a business case for doing that. Because health care purchasers, who really have the greatest opportunity to provide that business case, have not been doing so, I think some of the breakthroughs that providers want to make have not been possible.

Lastly, consumers -- patients, members, enrollees, whatever you like to call those of us who have to make health care choices -- really have not been engaged at all when it comes to considering quality information in making those choices. We have spent years now trying to make information available to consumers that focuses on health plans.

While that is helpful for some, I think very few of us think of health plans as really dictating the quality of our care in the way we think of our physicians or other care givers, or maybe the clinics or hospitals where we get care. I think in part we have been focusing on the wrong unit of analysis. That is partly because it has been easier to do that.

The other issue is that we have been focusing on topics that are very remote for consumers to relate to. For example, when I am choosing a health plan, I have information available to me about what proportion of the women who should have gotten pap smears did. I am supposed to translate in my head that, if I choose the health plan that gave pap smears to the highest proportion of the women who should have had them, maybe my chances of getting cancer down the road will be less.

That is a very long leap to make, if you will let me use that word. So, we are looking for something more equivalent to how people respond when they hear there is going to be a lightning storm in the next few minutes. Most of us take cover, even though our chances of being hit by lightning are far smaller than our chances of getting cancer one day down the road.

So, all this gridlock is problematic. Part of our new thinking was that if we focused on preventable medical mistakes, we might actually find that sweet spot, that sort of lightning rod, that gets consumers engaged -- a topic narrow enough and dramatic enough that we can get all the different players in the health care system to try to make breakthroughs.

Now, I am sure you have all heard the statistics from the Institute of Medicine report many, many times. The only comment I want to make is that you will see Leapfrog's focus right now is on hospitals. The reason is that we know much more about how errors occur in hospital settings, and about what kinds of interventions are successful in reducing those hospital-based errors. There is a large appetite among Leapfrog members to figure out what we can do about outpatient care, but because it is less clear from the research where we can focus, we are still waiting to see what we should do in that area.

Anyway, as the group was meeting in its early, formative way, Chuck Buck, who recently retired from General Electric, was on the Institute of Medicine committee that produced To Err Is Human. So, he was the one who brought back to the group the idea of focusing more narrowly on medical errors as a way of potentially engaging consumers and finding a common rallying point for purchasers to work together toward breakthroughs in quality.

The group then tried to figure out how to assemble a critical mass of purchasers to work together, because they knew that if they worked alone they wouldn't get very far. Bruce Bradley, the chair of the Leapfrog Group steering committee and also the director of managed care plans at General Motors, took the numbers from the Institute of Medicine's report and applied them to the GM family.

Now, of course, GM is huge; they provide benefits to 1.25 million Americans. When they looked at those numbers, they realized they were losing close to 500 people each year -- among their active employees and retirees -- to preventable mistakes in hospitals. So, it was easy for him to go to Jack Smith, who was then the chairman of General Motors, and say, we have a crisis on our hands. Can you imagine if this many deaths were happening in our manufacturing plants each year? We would probably be out of business.

So, Jack Smith said, why don't we take this to the Business Roundtable, the national association of Fortune 500 CEOs, and see if they would be willing to sponsor an initiative that would let us get the word out, at the CEO level, to many other employers that we have something we need to work on together. That is, indeed, what happened. So, in January 2000, the effort became more than an airport cafeteria conversation and actually started getting organized.

Leapfrog, as you will hear me describe it, is not another organization. It is not meant to be sort of the creation of another institution that will live on forever. Instead, what it really is, is an action plan that we can use to leapfrog that gridlock that I was describing to you.

It really boils down to a two-pronged approach in terms of our action plan. On the one hand, it is really an organized effort on the part of health care purchasers to buy right, to send a signal to the health care marketplace that we do want to see breakthroughs made in terms of quality and safety. On the other hand, it is really about engaging and activating consumers to get them to be part of the solution as well.

When our members, as I described them to you earlier, joined the Leapfrog Group, what that really means is that they are committing publicly to a common set of purchasing principles.

Those purchasing principles are as follows. When our members join, they agree to inform and educate their enrollees about the fact that preventable mistakes happen, and that there are certain practices they can look for in hospitals that might indicate that their care would be safer.

In addition, our members agree to start comparing performance at the hospital or physician level as more information becomes available. In the early days of Leapfrog, the comparison will focus on our three safety leaps, which I will describe to you in a minute.

Thirdly, all members agree to reward and recognize hospitals -- as I mentioned, we are focusing on hospitals right now -- that have implemented the three safety leaps, which we think are proven to reduce preventable mistakes.

Now, reward and recognition come in all shapes and sizes and forms. All of our members, as they are spread across the country, face different health care markets, where different types of incentives or rewards may have value or not. So, we are encouraging all of our members to engage in dialogue locally with hospitals about what would make a business case for them to make investments that could protect patients. That may come in different forms and shapes, as I said.

In addition, all our members, as they join the Leapfrog Group, agree to highlight a few discrete practices that we think could make a big difference when it comes to improving patient safety. Again, those are our safety leaps.

Because employers do not have a great track record of sticking to programs like this, we have a little insurance clause built in: we ask that, if at first their efforts don't succeed in changing things, they intensify their efforts over time until they actually see change -- whether that is an informed group of consumers making better choices, or more hospitals in their community adopting practices that can protect patients, or however we might go about measuring it.

Our safety leaps were chosen with some careful thought. We went around to the patient safety gurus and the quality improvement experts and asked them, essentially, to help us come up with the equivalent of antilock brakes, air bags, and seat belts for the health care system -- you can hear General Motors speaking when I say that. We asked them to think about what practices were proven by research to make a big difference for patients, and what would be understandable to the average American consumer.

Again, if we are trying to get consumers to be part of the solution and make different choices, we need them to understand their choices. We also asked what is feasible now, and what is readily ascertainable by outsiders like purchasers, to determine whether or not these practices are in place. I am sure Noel -- and I am sorry I had to miss his presentation -- talked to you about the importance of the culture of safety.

That is something that is very hard for an outsider to determine is in place or not. It is less difficult to find out whether a hospital has a computerized physician order entry system that meets certain specifications; that is an example. So, our three safety leaps are as follows. First, we are promoting the use of computerized physician order entry systems in hospitals.

I am sure you are familiar with research that says that these systems, when they are installed well, can make a big difference in terms of reducing serious medication errors. We are also focused on this because we know that they can come along with some very helpful clinical decision support tools that physicians can use.

The second safety leap that we are focusing on is what we call intensive care unit physician staffing. The basic idea here is that patients who are in ICUs should have their care managed or co-managed by doctors with special training in critical care, otherwise known as intensivists. The research suggests that there could be a big difference in terms of mortality when you have these types of specialists overseeing care.

In addition, there has been some very exciting research lately showing that telemonitoring of ICUs by intensivists can make a big difference. We have recently expanded our standard, or our safety leap, to allow the presence of an intensivist in an ICU to be accomplished through an eICU or telemonitoring.

In addition, in areas that participate -- or over time, as more areas come to participate -- in publicly reported, risk-adjusted ICU outcomes programs, those outcomes data would immediately replace the structural staffing standard we have created here. Ultimately, what we are after is to identify for consumers which hospitals might have the best ICUs, because patients going in for serious care are likely to end up in an ICU for some part of their stay.
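A minimal sketch of that staffing rule, assuming hypothetical field names and a simple pass/fail reading; the transcript gives the logic, not Leapfrog's actual survey scoring:

    # Sketch of the ICU physician staffing leap as described above: care
    # managed or co-managed by intensivists, with presence allowed via an
    # eICU/telemonitoring, and publicly reported risk-adjusted outcomes
    # replacing the structural test where they exist. Field names are
    # illustrative assumptions, not Leapfrog's specification.

    def meets_icu_staffing_leap(icu):
        if icu.get("public_risk_adjusted_outcomes") is not None:
            # Outcomes data immediately supersede the structural standard.
            return icu["public_risk_adjusted_outcomes"] == "favorable"
        return icu.get("intensivist_managed", False) and (
            icu.get("intensivist_on_site", False)
            or icu.get("eicu_telemonitoring", False)
        )

    print(meets_icu_staffing_leap(
        {"intensivist_managed": True, "eicu_telemonitoring": True}))  # True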

Lastly, we are promoting the idea of evidence-based hospital referral. The basic idea here is that patients who need certain high risk surgeries, or who have certain high risk neonatal conditions, should be referred to hospitals where their outcomes are likely to be better. Now, I didn't include the slide that outlines the procedures we are focusing on, sorry, but it is available on our web site.

We are talking mostly about coronary procedures as well as cancer surgery. In areas where the outcomes on those are publicly reported, we would obviously want patients to be referred based on those outcomes data. In the absence of that, we believe the evidence is strong enough that we can rely on volume -- the experience that hospitals have performing the procedure -- as a proxy, to give patients a sense of how good their outcomes may be.
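That referral logic can be sketched roughly as follows; the volume threshold and field names are assumptions for illustration, since the talk defers the actual procedure list and cutoffs to the web site:

    # Sketch of evidence-based hospital referral as described above:
    # prefer publicly reported risk-adjusted outcomes, and fall back to
    # procedure volume as a proxy where no outcomes are reported. The
    # threshold of 450 cases/year and the field names are assumptions.

    def rank_hospitals(hospitals, min_volume=450):
        with_outcomes = [h for h in hospitals
                         if h.get("risk_adjusted_mortality") is not None]
        if with_outcomes:
            # Lower risk-adjusted mortality is better.
            return sorted(with_outcomes,
                          key=lambda h: h["risk_adjusted_mortality"])
        # No public outcomes: higher volume serves as the proxy.
        eligible = [h for h in hospitals
                    if h.get("annual_volume", 0) >= min_volume]
        return sorted(eligible, key=lambda h: -h["annual_volume"])

    hospitals = [{"name": "A", "annual_volume": 520},
                 {"name": "B", "annual_volume": 300},
                 {"name": "C", "annual_volume": 700}]
    print([h["name"] for h in rank_hospitals(hospitals)])  # ['C', 'A']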

Again, for both the ICU standard and the evidence-based hospital referral standard, we are very eager to move more toward outcomes data reporting rather than the structural measures we have set up.

In the absence of those, we feel we can help patients seek what may be safer care by informing them about these relationships. So, you have heard about where Leapfrog came from. You have heard about our action plan, the purchasing principles, the safety leaps. Now let me just give you a little bit of a flavor of how we are working to implement this nationally.

One thing we know for sure is that purchasers can't do an initiative like this alone. So, we are spending a lot of time reaching out to consumer groups, health plans, hospital groups, individual hospitals, physician groups, et cetera, to find out how we can work collaboratively toward these goals.

Hospitals clearly are the initial focus of our efforts. One of the things we are asking them to do is to share information publicly, via an online survey that we have created with the help of the Medstat Group, on their progress toward implementing the three safety leaps. Those data will be publicly reported for the first time in January, and that will be the beginning of an ongoing public repository of information that consumers can look at in helping them make health care choices.

We are initially going to be featuring five regions of the country in terms of the data, because that is where our employers have made the biggest effort to get hospitals to report. Again, it is a voluntary program. So, in areas where there has been a lot of outreach, that is where we have seen the most participation.

The other thing we are really looking to hospitals for is to help us understand what makes a business case: what kinds of incentives can we put into the health care system so that hospitals that do go the extra mile to protect patients will be rewarded for doing so?

This is just a quick screen shot of what the online hospital survey looks like, and it is available for your viewing if you visit our web site. Health plans obviously play an incredibly important role. Many of our members delegate a lot of their purchasing responsibilities to health plans.

First and foremost, we hope they will help us educate consumers. They are more privy to information about when a patient is about to make a hospital choice than an employer is. So, that role is very important. We also are working within what we call a health plan lily pad that we set up, to learn about other opportunities we have to work together.

I think most of the national carriers have been extremely supportive and are planning to educate members, designate in their directories which hospitals meet our standards. They have been writing letters to their hospital CEOs encouraging them to fill out our survey. I think we will find lots of other opportunities to work together as time goes on.

Physicians play an incredibly important role. Research we did this past summer with consumers -- employees and retirees of our members -- made it clear, as we all know, that first and foremost, people want to listen to their physicians about which hospitals they should go to. So, we have been working hard to reach out to physician groups to inform them about our initiatives, and to find ways we can work together to really use Leapfrog as an opportunity to enhance shared decision making.

Clearly, the three leaps that we are promoting are not the only factors that consumers will consider in choosing a hospital. They will want to go places with which they are familiar, where their families can easily visit, and with lots of other features. We want to encourage physicians to help their patients think through all those factors when making a hospital choice. As I have mentioned several times, we clearly think that consumers, or patients, are part of the solution we are striving toward.

One of the things we have seen over the last few years is that public awareness really has grown about the issue of preventable medical mistakes. We still have a lot we need to communicate to patients to help them figure out what they need to do, both to protect themselves and to be part of the solution.

We were lucky to get a grant from the Robert Wood Johnson Foundation that allowed us to do a lot of consumer testing this past summer through focus groups, which has now led to the development of a complete enrollee communications tool kit that all of our members will be using.

It includes newsletter copy, direct mail letter copy, e-mail alerts, paycheck stuffers, and other ways in which employers can communicate with their active employees as well as their retirees about the fact that preventable mistakes happen, and about our safety leaps. That tool kit is available in the public domain on our web site, and we are encouraging all different parties, whether hospitals or physicians or health plans or others, to make use of it.

They can customize it, brand it, change it. It doesn't even have to say Leapfrog. They will at least have the confidence of knowing that the messages are consumer tested, understandable and believable. I just want to mention that we have a lot of different work groups set up that we call lily pads, where we are working with different players in the health care system to try to find ways to make Leapfrog succeed.

The most recent one we have just set up is going to be focused on designing some financially based incentive and reward models that our members can adapt in their own communities to help hospitals that have implemented our leaps get a preferential position in the marketplace.

We are working with health plans, benefits consultants, patient safety experts, communications experts and others to try to shape the future direction of where we are going. Leapfrog is a national effort, but we have a special focus on what we call our regional roll outs. We have started with seven. We plan to add many more next year.

We are trying to turn Leapfrog from being just a purchaser initiative to one that is really community wide, in terms of being more collaborative with all the different stakeholders in the health care system. These are the areas that we are focusing on right now. As I mentioned, I think the number of these will significantly expand in the next few months.

We are looking at adding another maybe 10 to 15 regions. In those areas, there will be a concerted effort to bring all the players to the table, to get the hospitals to report to the survey, to start educating consumers and all the other elements of Leapfrog's action plan. We are starting to see some impact. Experts have estimated that the number of hospitals meeting our computerized physician order entry system standard will more than double next year.

For what it is worth, Bear Stearns now thinks we are a fundamental change agent in the health care system. In New York, employers like IBM, Pepsi, Xerox and Empire Blue Cross/Blue Shield have agreed to give bonus payments to hospitals that meet our standards.

In Atlanta, major employers have struck a deal with the largest hospital system to implement all of our standards in the next few years. We are really starting to see different players working toward these goals that I think everybody agrees will benefit patients.

As I mentioned, we have a web site. There is more information in there than you probably care to read, but I invite you to take a look and see what interests you. I will just end by saying that part of what keeps us motivated and going with all of this is that we think all of this effort can make a big difference for patients.

We commissioned some researchers at Dartmouth, led by Dr. John Birkmeyer, to estimate for us what the potential benefit could be if all non-rural hospitals were to implement these standards. The conservative estimates are that we could prevent more than half a million serious medication errors each year and save close to 60,000 lives, with these three standards alone.

The research suggests that, for every preventable death, there are many more preventable disabilities. So, you can do the math and figure out where we might be with that opportunity. That is the quick view on Leapfrog, and I would be happy to answer any questions.
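For a reader who wants to do that math, a back-of-the-envelope sketch using the figures quoted above; the disabilities-per-death multiplier is a placeholder assumption, since the talk does not give one:

    # Back-of-the-envelope arithmetic on the Dartmouth estimates quoted
    # above. The two quoted figures come from the talk; the multiplier
    # is an assumed placeholder for illustration only.

    lives_saved_per_year = 60_000        # "close to 60,000 lives"
    med_errors_prevented = 500_000       # "more than half a million"
    disabilities_per_death = 10          # assumption, not a quoted figure

    print(f"{med_errors_prevented:,} serious medication errors prevented/year")
    print(f"{lives_saved_per_year * disabilities_per_death:,} "
          "preventable disabilities/year (illustrative)")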

MS. COLTIN: Thank you. Simon?

DR. COHN: Thank you very much for an interesting, very good presentation. I am actually a physician and obviously very sympathetic to, I think, a lot of the views of Leapfrog. I am sort of struggling, given the current environment, trying to figure out where the money is going to come from for all of these things.

As I said, I have been watching Leapfrog for a while. I live in one of the populous western states. Right now, health plan dues are being raised anywhere from 10 to 15 percent in many places. When I ask people in the Leapfrog Group, they always seem to be very resistant to the idea that there needs to be more money put into the system.

MS. DELBANCO: I will try to share with you some of my thoughts. When I think about what Leapfrog is about, it is first and foremost in my mind about helping consumers make better choices. The fact is that there are hospitals across the country that meet our standards. We would like to at least share that information. That is sort of our first step.

The other step is obviously to find a way, in what I call a zero sum game, to realign the way that payment is structured so that we are paying for quality, not for non-quality.

Right now, no matter what a hospital or physician does, they are paid the same. There is something a little bit crazy about that, if you think about the fact that, if a patient goes in and something goes wrong, the employer pays for those extra days, unless the hospital is being paid in the way that Medicare pays.

There is no other industry that is like that. There is also no way for a hospital or a physician who works hard to differentiate itself or himself or herself to be rewarded for that. I think that is part of the solution, although that is not an easy task to try to undertake.

I also think that some of our members are interested in becoming, in a sense, co-investors with some of the health care systems where many of their employees get care. We may see some models cropping up where they help with the outlay of capital up front, when it comes to computerized physician order entry, things like that.

I think there are going to be many different approaches taken. Leapfrog central, as we sort of call ourselves here in Washington, is not going to dictate to our members how that solution occurs, because we don't know what the answer is and we want to see some innovative ideas pop up.

All of our members know that they need to create the business case, they need to find a way to make this happen. So, I think we are going to see some creativity.

DR. COHN: Can I ask one other question? I know you can't promise money, and clearly one of the themes of the whole day has been, there are a lot more things that we could do if there was money around to do it.

I actually have a question about computerized physician order entry. I was trying to think in my own mind -- I am pretty familiar with a lot of order entry systems that exist in hospitals, most of which involve the physician scribbling something on a paper and handing it to a nurse or an assistant to enter into the computer system.

I don't think that is what you meant here. I think you meant sort of the Brigham and Women's model of the physician actually doing order entry. So, assuming that that is one of your three requirements, how many hospitals are you counting this year as having that order entry system in place?

MS. DELBANCO: There are many different estimates out there -- Allen, I am sure you can comment on this as well. I have heard anything from two percent to seven percent of hospitals would have these systems in place.

The Leapfrog standard is actually fairly specific. It doesn't just ask, do you have a system or not. It addresses the ability to intercept serious medication errors, whether physicians are using it, and also whether physicians have to provide a written acknowledgement to override a warning.

Again, a hospital can purchase the software and no one might be using it. They might have turned off all the annoying alerts that actually could protect patients, and not actually be succeeding at what we are trying to promote. We will certainly get a better sense over time, as more hospitals participate in the Leapfrog survey, of how many hospitals meet our standard. Like I said, the estimates are fairly small in terms of the number who will meet it this year, but it is expected to more than double next year.
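A minimal sketch of the standard's shape as described here -- owning the software is not enough; interception, actual physician use, and a written-acknowledgement override all matter. Field names and the usage threshold are assumptions, not the survey's actual scoring:

    # Sketch of the CPOE leap as described: installed software alone does
    # not meet the standard; it must intercept serious medication errors,
    # physicians must actually use it, and overriding a warning must
    # require written acknowledgement. The 75 percent usage threshold and
    # field names are illustrative assumptions.

    def meets_cpoe_leap(hospital):
        return (hospital.get("cpoe_installed", False)
                and hospital.get("intercepts_serious_errors", False)
                and hospital.get("physician_order_share", 0.0) >= 0.75
                and hospital.get("override_requires_written_ack", False))

    example = {"cpoe_installed": True,
               "intercepts_serious_errors": True,
               "physician_order_share": 0.40,  # purchased but lightly used
               "override_requires_written_ack": True}
    print(meets_cpoe_leap(example))  # False: too few orders entered by MDs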

DR. COHN: From two to four percent?

MS. DELBANCO: Anywhere from two to seven percent.

DR. COHN: I don't mean to be facetious on this. I am just trying to think in my own mind of the non-academic medical centers that have that sort of functionality. I can think of some, but they are actually mostly owned by my organization.

MS. DELBANCO: If you think of all the Leapfrog standards, these are stretch goals. Fortune 500 companies and other large purchasers didn't feel that there was a need to set standards that were already easily attainable and widespread.

DR. COHN: I am sorry. So, these are actually the stretch goals as opposed to a requirement.

MS. DELBANCO: There are no requirements in Leapfrog. Leapfrog is an entirely voluntary program where consumers are getting educated, hospitals are voluntarily sharing information, and purchasers will voluntarily provide positive incentives or rewards based on these standards.

We wanted to highlight practices we thought could make a big difference that weren't already in place, because that way we think we will get there a lot sooner than we would if we weren't pushing them.

MS. COLTIN: I have a question. We heard from some of the other presenters this morning about the need for reporting systems to be non-punitive in nature. They were talking about reporting systems that were somewhat different from the type that you were describing. They were more error reporting systems.

Clearly, the kind of reporting system that you are planning to put out for consumers has the potential to be punitive in the sense of embarrassing hospitals that fail to meet the standards.

Had you considered an approach that is more recognition based, such as the American Diabetes Association uses with its physician recognition program, or the top 100 hospitals type approach -- one that recognizes those that have these systems in place or meet your other standards, but doesn't necessarily embarrass or otherwise punish those that haven't yet been able to meet them?

MS. DELBANCO: We considered many, many, many different approaches. The approach that we settled on is one where hospitals can voluntarily report. We don't just ask them if they meet our standards or not; we ask them many questions about their progress toward, or intentions of, meeting the standards, and we will be reporting that out.

So, as I mentioned, very few hospitals meet the standards, but we will be sharing information on hospitals that are on the path. I think those numbers are much larger. Our feeling is that we, more than anything, want to commend hospitals that are willing to report at all, no matter how they look.

We will be working both in our national outreach to the press and to others, as well as locally, to make sure that that story gets heard, in addition to highlighting the hospitals that are on the path toward meeting the standard.

That was the approach we decided to take. We felt, from both the consumer testing that we did and other discussions we have had with many different stakeholders, that it would be more helpful to have a complete list of hospitals that report, along with the information they provide to us, and then to couch it in a context that makes that data understandable.

We have gotten feedback from many different hospital associations and consumer groups and others on what language to use and what displays to use, and things like that.

DR. STARFIELD: You mentioned that your focus was on hospitals. I don't know how the strategy to buy right works when you focus on hospitals, because I would assume that most of these employers are buying health plans and not buying hospital care.

MS. DELBANCO: Sure, but their relationship with health plans, especially for the large employers that we are working with, is such that they can work with the health plans to set up arrangements, such as the one that Empire Blue Cross/Blue Shield did. It is essentially a pass-through bonus payment to hospitals.

Health plans, in a sense -- at least the ones that we have been working with, most of the major national health plans -- say that they need the backing of employers to do this kind of thing. They are not going to do it without that backing, but with it, they would really like to do it.

For years, they have been collecting information on how quality varies among their providers, and they would like to make that more transparent, but without the support of the employer at the table with them, it is hard to do. I think it will be done in partnership with health plans. The idea is to create a more direct relationship between employers and hospitals than has existed before.

MS. COLTIN: Let me remind people that we didn't take questions earlier for Dr. Vaida. So, if anyone has any questions now, we will take them.

DR. COHN: Actually, I did want to just follow up to find out a little bit more of your thoughts. You mentioned, actually, in your last slide, issues for reporting and sharing.

You talked about fragmented reporting systems, which I think, based on our hearings today, one would certainly agree with -- there seem to be so many of them. Maybe you can comment: are there any thoughts that you have about ways to de-fragment what is going on? It seems there is so much out there that it is almost impossible to come to any conclusions about anything.

MR. VAIDA: I think one of the things -- the reason I put this on the slide, too, is not just to point the finger at our program, but we have had a national voluntary reporting program for over 25 years. ECRI, as I have mentioned, has had a national voluntary reporting program in medical devices for 30 years. The blood banking community has a reporting program. By fragmentation, what I am talking about is that there is a lot of pressure on states, and they feel they have to start their own reporting programs.

Why not utilize some of the reporting programs that are out there, rather than re-invent the wheel? If states feel pressure to start their own programs, they should make sure they share the information with national programs that are already out there, such as ours.

Our concern is that there are going to be 50 reporting programs, one in each state, and what are you going to do with that information? Even at the federal level, with the Safe Medical Devices Act of 1990 -- you remember Congress actually legislated that you had to report medical device problems -- that has been in place since 1990, and I don't know of anything that has come out of it from a learning perspective.

One of the things we look at is to ask: how are you going to share this information, how are you going to get it back, what are you going to do with it? That is why we say, report for learning.

I would like to just mention one thing, to follow up with Suzanne there, too. In our self-assessment, actually, 10 percent of the hospitals reported that they have computerized physician order entry systems.

Looking at California, they just passed legislation out there -- I think it is Bill 1845, on quality processes for acute care -- or 1875. They are looking at probably close to 50 hospitals. Now, there are 500 hospitals in California. So, you have about 10 percent there, too. That is the type of number we are seeing.

Now, the number is significantly lower if you ask whether 100 percent of the physicians enter all their orders. We are looking at about a 10 percent rate of hospitals that actually have systems in place, and it is growing every day.

MS. COLTIN: Any other questions? Thank you very much. I think at this point we were just going to take a moment to decide whether to discuss today the next steps for hearings we might hold to fill any gaps in the report we are preparing to embark on.

Given the time, I think I would prefer to devote the last 10 minutes to packing up and making it to the airport.

I think we will adjourn and we will defer that discussion to the next meeting when we will try to reserve time to do that. Thank you.

[Whereupon, at 4:30 p.m., the meeting was adjourned.]