Ambulatory Care Quality Alliance: Invitational Meeting
Report of the Reporting Workgroup
Randy Johnson, Motorola
Nancy Nielsen, American Medical Association
Principles for Public Reports on Health Care
Randy Johnson
stressed that employers want information about quality and said that the workgroup has made progress. Others are focused on cost, quality, and access, he said, while we're focused directly on reporting of quality and efficiency, both of which will affect cost. As we make improvements, he said, we will also make data more available.
Johnson provided
an overview on the status of the Principles for Public Reports on Health
Care and the Principles for Reporting to Physicians and Hospitals.1 He said the goal is for data to be
available that are meaningful and useful to patients. He added that the end
users for the public reports include both patients and physicians contemplating
referrals.
More broadly,
Johnson said that the aim of the reports is to be comprehensive and to provide
the greatest return on investment for making care safe and equitable. He said
the workgroup's goals for 2006 include testing the reporting principles in the
selected pilots and other ongoing local coalitions and discussing potential
designs for a template for reporting information.
Regarding the
format of the reports, Johnson said it was important that people focus on how
patients think. The reports should address hospitals, physicians, and
integrated delivery systems, he said. Johnson added that it was also important
to take into account cultural factors and patient literacy.
Johnson stressed
that the use of public reporting should support informed choice. These reports
should be continually improved so they are increasingly effective, he said.
They should also be as transparent as possible and allow for timely results. He
also noted that the reports should employ standard measures so they can be used
nationwide. And he said that when portraying performance differences, the methods used should highlight differences that are statistically significant.
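The principles do not prescribe a statistical method for deciding which performance differences are significant. As one hypothetical illustration only, a two-proportion z-test is a common way to check whether the gap between two providers' rates is larger than chance alone would explain; the function name, sample figures, and 1.96 threshold below are assumptions for the sketch, not anything the workgroup specified.

```python
import math

def significant_difference(hits_a, n_a, hits_b, n_b, z_crit=1.96):
    """Return True if the gap between two performance rates is
    statistically significant at roughly the 95 percent level."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    # Pooled rate under the null hypothesis that the providers do not differ.
    p_pool = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return abs(p_a - p_b) / se > z_crit

# With large samples, a 95 percent vs. 40 percent gap is clearly significant.
print(significant_difference(95, 100, 40, 100))  # True
# With only ten cases each, a 90 percent vs. 70 percent gap may be noise.
print(significant_difference(9, 10, 7, 10))      # False
```

The second call also illustrates a point raised later in the discussion: with small samples, an apparently large gap can fall within ordinary variation, which is why sample size and methodology matter to fair portrayal.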
Finally, Johnson
said he hoped participants could come to consensus on the two sets of revised
principles and he asked for feedback on the first two principles (regarding
comprehensive reporting and consumer-friendly formats).
Discussion
The discussion opened with a question about comprehensive
reporting and the language that says the reports "should address" a list of
items. Does this mean they should report in these areas, asked the participant,
who expressed concern that the wording did not encompass everything that needed
to be reported. He recommended instead that the language read that the reports
"should address these aspects for the broad health care system."
A second
participant warned that reporting could have adverse consequences and suggested
that the principles be worded to make clear that there be no unintended
consequences. Johnson noted that such language had been in an earlier draft of
the principles and he asked for more feedback from those in the room.
I disagree with
the comment, said one participant. She also noted difficulty understanding the
intent of the language about addressing "these areas for hospitals, physicians
and physician groups…" In response, Johnson said the language was intended to
mean treatment by the whole system, adding that perhaps more overview language
would make the intent more clear.
Continuing the
discussion on comprehensive reporting, another participant said a sentence
needed to be added after "reports should focus on areas that have the greatest
opportunities in making care safe, timely, effective, efficient, equitable, and
patient centered" to make clear that good contextual data was needed. Consumers
are interested, she said, and these data need to include what we're measuring
and why—especially when using clinically important measures. She noted that
while measures may already sing to physicians and nurses, it was important to
make them resonate with consumers as well.
Another
participant asked for clarification about the sentence that reads "Information
reported should include both information that consumers want based on the
literature as well as information that is important for consumers." Are these
two types of information? he asked. In response, Johnson noted that this is new language added since the last AQA meeting; the intent, he said, is that the literature identifies information consumers want, and we need to look at that as well.
I agree that
this is awkwardly worded, commented another participant. Research shows
consumers want certain information. While the statement is an effort to express
that thought, it's not clear, she said.
Does part of the
workgroup's process involve studying existing reporting systems? asked one
participant. If not, can we try to capture this? In response, Johnson said that
he did not recall looking at existing reporting systems. Carolyn Clancy stepped
in and suggested including language in the principles that makes clear the intent
to add to them when everyone gets smarter about collecting information. We can
get a lot better at making information useful to others, she said.
Another
participant recommended adding a principle around going back and designing the
reports for learning. This would address unintended consequences and contextual
understanding, she said. She added that the principle would have pieces of
contextual understanding within it regarding continual improvement. Someone
else pointed out that continual improvement was addressed under "use for public
reporting." I know, said the first participant, but I'd like to see this as a
separate bullet.
In addition to the portrayal of performance differences as articulated here, observed one participant, it is important that we explain small versus large data sets so that the data are portrayed fairly. Another suggested rolling out the reports after the data are
brought together and aggregated.
Finally, one
participant recommended not using the word "standard" (regarding "Reports
should rely on standard measures when available."), noting that many measures
are standard but not standardized.
Principles for Reporting to Physicians and Hospitals
Nancy Nielsen
discussed the Principles for Reporting to Physicians and Hospitals and
highlighted changes made since the beta set was endorsed by AQA last April. She
stressed that the principles were general and not intended to cover every
eventuality.
Under design,
she stressed that it was important for physicians to be involved in designing
the performance reporting system. She said that the language that says that the
performance measures "should be stable over time, unless there is compelling
evidence or a justifiable reason not to be," may need review. The rest, said
Nielsen, should be pretty clear and people ought to know what's being measured.
Under data collection, continued Nielsen, there are some
key issues. The overarching point is that administrative data shouldn't be the only data used, yet the principles recognize that going beyond administrative data could be a burden for everyone. Regarding
data accuracy, she said the intent has not changed significantly. Nielsen
stressed that there needs to be a way to correct inaccuracies, particularly as
patients move from one health plan to another.
Regarding data
aggregation, Nielsen stressed that the workgroup felt that it was important to
recognize that the more comprehensive the approach to how a physician
practices, the more accurate the snapshot.
Regarding the
report format section, Nielsen highlighted the final bullet, which says that:
- Results of individual provider performance should be displayed relative to peers. Any reported differences between individual providers should include the statistical significance of the differences and relevancy. Reports should focus on meaningful and actionable differences in performance.
This is
important, she said, and comes from the idea that an individual should be
displayed relative to his or her peers. The consensus of the workgroup, Nielsen
continued, was that providers would work harder if they could see where they
stand compared to their peers. She added that it was also important for the
reports to focus on providing information that is meaningful and can be acted
on.
Regarding report
purpose and frequency, Nielsen stressed the need for collaboration to advance
quality. She also highlighted new language:
Performance data
should, when available, reflect trend data over time (run charts and control
charts) rather than periodic snapshots to optimize data use for quality
improvement.
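Neither the quoted language nor the discussion spells out how run or control charts would be constructed. As a rough sketch under stated assumptions: a control chart plots each period's rate against a center line and control limits derived from the data's own variation, so a provider sees a trend rather than a single snapshot. The monthly figures below are invented, and the sample standard deviation stands in for the moving-range estimate of sigma that a formal individuals chart would use.

```python
import statistics

def control_limits(rates):
    """Center line and three-sigma limits for a run of periodic
    performance rates, as on a basic individuals control chart."""
    center = statistics.mean(rates)
    sigma = statistics.stdev(rates)
    return center - 3 * sigma, center, center + 3 * sigma

# Invented monthly rates of recommended care over one year.
monthly = [0.82, 0.85, 0.84, 0.88, 0.86, 0.87,
           0.90, 0.89, 0.91, 0.90, 0.93, 0.92]
lcl, center, ucl = control_limits(monthly)
print(f"center {center:.3f}, control limits [{lcl:.3f}, {ucl:.3f}]")
for month, rate in enumerate(monthly, start=1):
    if not lcl <= rate <= ucl:
        print(f"month {month}: {rate:.2f} falls outside the control limits")
```

Points inside the limits reflect ordinary variation; a point outside them, or a sustained run to one side of the center line, signals a real change worth investigating, which is the sense in which trend data optimize quality improvement.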
Finally, Nielsen
stressed that the final principle, regarding review period, was intended to
make sure that physicians have an opportunity to review performance results
prior to their release to the public.
Discussion
The discussion
opened with a question about the difference between physician and provider
reports. In response, Nielsen said that the reports to physicians are intended
to contain information meaningful to improving care. She said, for example,
that it is important for a physician to know if a patient has not had a mammogram so that the physician can reach out to that patient.
One participant
noted that a footnote to the principles addresses reporting physician-specific
information to hospitals. He pointed out that a physician's relationship to a
hospital could be minimal, and noted that if hospitals receive information in
advance other parties might want it as well.
Another
participant suggested that both sets of reporting principles needed brief
preambles that made clear that the measures used in public reports would be
attuned to AQA's performance measures. The preamble, he said, should refer
people to more detailed information on measures. The participant also offered a
second, unrelated comment: that a lot of employers and vendors will look to use
these reports to see if they are doing the right thing.
A participant
asked about the language under design that said that performance measures
should be "stable over time." In response, Nielsen stressed that this language
isn't meant to preclude the development of other performance measures.
What's your
definition of display? asked another participant (referring to the language
that "Results of individual provider performance should be displayed relative
to peers"). Who will individual physician reports be displayed to? he asked,
thinking of trial lawyers or The Washington Post.
In response,
Nielsen said that the display would include points on a graph. The question is,
she said, should we identify physicians by name as points on the graph so you
know who the high and low performers are?
Another
participant also took issue with displaying relative to peers. Does that mean
individuals or an aggregate? she asked. In response, Nielsen said that the
reports are intended to provide a comparison to other individual providers
doing the same work.
What was the
upshot of past AQA discussions about high and low providers? asked another
participant. The question was whether each provider should be identified by name, or only the physician receiving the report, replied Nielsen, who stressed that the principles leave it open for those
designing the report to answer the question.
One participant
offered general comments on assessing effectiveness and the potential for
unintended consequences regarding public reports. He gave an example related to
beta blockers in which one hospital uses them at a rate of 95 percent while a
second uses them at a rate of 40 percent. The second hospital appeared to fail on the reported measure, but its lower rate reflected a clinical trial it was running. The participant also said that
there would be concern in the physician community regarding discoverability and
legal implications. The principles should reflect these concerns, he said.
A participant
noted that a sole practitioner would get a report and asked whether someone who is part of a team would get an individual report or a team report. Another participant
suggested that the language under the review period should specifically say
that a provider has an opportunity to respond to the report. In response to the
latter comment, Carolyn Clancy noted that such a provision was specifically
left out so as not to get into a debate with the provider who comes out not
looking good.
Another
participant suggested that the two reports—public and practitioner—should be
released together and should say they are from AQA. A second person wondered,
however, how to do that since the end users for the reports are different. A
third person said the two reports should flow together and that there needed to
be language that explains why we're talking about different groups for each
document. A fourth participant stressed that it was important for physicians to
know how results are being reported to the public.
Regarding the
use of non-administrative data, a participant asked for clarification. Do you
mean claims data submitted for billing purposes? he asked. He noted that many
people say administrative when they mean electronic data. We need to define the
difference between claims and clinical data, he said, and show that we want to
move toward electronic data.
One participant
asked about the bullet under report format that says that:
- Justification and explanation of the rationale for setting specific targets for physician performance should be disclosed publicly to consumers, physicians, and hospitals.
Does this mean
that if 100 hospitals have normal quality then their scores won't be reported?
he asked. No, replied Nielsen, who explained that there would be some
explanation provided.
Given unintended
consequences, said one participant, do the principles consider the same quality
improvement process in the future? Yes, said Nielsen, who pointed to the
language on continual improvement incorporated into the Principles for
Public Reports on Health Care.
Noting that a
majority of the principles overlap, one participant suggested combining the two
sets into one document that highlights the differences. I'm personally opposed
to doing that, said Nielsen, because the reports are intended for different
purposes. Johnson added that the purpose of the reports for consumers is to
guide decision making, whereas the reports for providers are intended for
quality improvement. He asked for purchasers and consumers to comment on
whether the principles for providers would be of use to them.
I don't
understand why the public reporting principles are so different from those for
providers, said one participant. All of the provider principles under design,
collection, and accuracy would apply to anyone creating a public report. Which
wouldn't apply to public reports? he asked. In response, Nielsen noted that the
sample size and data collection methodology might be explained more to those
whose data are being analyzed—as opposed to consumers.
As a
consumer-oriented publisher, said one participant, I would want to know that
I'm providing consumers information about sample size. He added that he would rather have a system that reports its methodological strengths and weaknesses. He and a second participant also noted that the methodology provides context and helps to make the reports as transparent as possible.
One participant
wondered who in the consumer community would get the reports and noted that
there needs to be language that makes clear that these reports are for public
consumption. Another suggested including a public comment period, noting that
the main audience is plans and vendors—not individual consumers.
I would prefer
not to get into the level of detail that says, for example, that there would be
a 30-day comment period, said Nielsen. Another participant agreed.
When a plan
analyzes data on an obstetrician, asked one participant, would this imply that
my health plan would send letters to all the physicians and allow them to
respond? He noted that this would add a lot of bureaucracy. This question goes
to organization, said another person. Having people review the information and
having a conversation with the decision maker does go toward providing
accountability. As long as there's the corollary that a physician cannot share
the document with his attorney, it's okay, he said.
Carolyn Clancy
stepped into the discussion, noting that she saw the review as more a matter of
seeing whether something was missed or that the statistics were inaccurate.
One participant
suggested that language about the comments be embedded in the principles. The
workgroup should work with physician organizations to define "unintended
consequences," she said.
Next, Randy
Johnson asked how to proceed. He laid out two alternatives:
- That the principles be finalized when the workgroup considers them okay, or
- That the workgroup bring comments back to the full AQA body.
He asked what
participants preferred. "We tend to tweak and change the principles whenever we
look at them."
There was
discussion about the two alternatives. A few voiced interest in reviewing the
principles again over time (and noted that while people were comfortable with the thrust of the principles, the wording really matters), a process Nielsen
pointed out could go on forever. Others suggested approving the principles and
letting the workgroup move ahead.
Carolyn Clancy
noted the need to make a decision. A show of hands around the room indicated:
- A preference for recommending universal acceptance of the reporting principles.
- A direction to the workgroup to consider integration.
1. These principles and other information presented by the Reporting Workgroup are available at http://www.ambulatoryqualityalliance.org/january12meeting/reporting.