On April 28, 2008, Jeff Brady and Rosanna Coffey presented the AHRQ 2007 State Snapshots on an audio conference. This is the transcript of the event's presentation.
Part 2 of 2 (MP3 files split due to size of full-length recording)
(MP3 File, 33 minutes, 9.6 MB)
April 28, 2008
1:00-2:15 p.m. ET
Part 2 of 2
Jeff Brady: Rosanna, I'll pick up, I guess, there?
Rosanna Coffey: Yes, please. Thanks, Jeff.
Jeff Brady: Sure. So, just to orient folks, we're on slide 25 now. Again, there will be an opportunity for additional questions at the end; this part was mainly for clarifying questions, as was mentioned.
One thing we'd like to do -- a major objective or goal of this talk, I guess -- is to take the discussion beyond what happened in the state workshops at the end of last year and the beginning of this year. And so with that, again on
slide 25, we want to talk a little bit about answering this question for you
and also getting your input and feedback about how we envision it being used.
But it's simply this question: can the State Snapshots help states set priorities, and, as a subordinate point, which measures are most important?
The next point that we want to touch on is whether or not the Snapshots can
be redesigned. Some of the particular points that we've heard raised from various
users are things such as whether it's possible to let users define their own group of states against which to compare themselves.
I think most of you hopefully picked this up when Rosanna was talking, or maybe even from your own use of the site: at present the only comparisons that we really provide are to the national average and to states within the region that a particular state is in.
But, again, the point there is whether other comparisons are possible technically within the site, and what other comparisons would be appropriate to include.
The other point is about data and whether or not that can be provided by payer
type. This is something that we already do to some extent for some measures,
where the data support it. In the National Healthcare Quality and Disparities
Report, some of this is pulled into the print reports already, but more information
is available in the data table appendices for each of those reports. But, again, the point of this discussion is whether some of that could be incorporated into the State Snapshots website.
And the last discussion point is, to some extent, an example of how that happens, in that case for state employees as a particular subgroup. Some other groups that I think are of interest are the Medicaid group, the Medicare group, and others.
And then the final example that we've included here, in terms of discussions we've heard about redesigning the site, is whether the data would support substate geography. I think the best example of this, for folks that attended one of the workshops, is that the HCUP Mapping tool really does provide the information at a substate level, in smaller geographic units than states.
Currently within the National Healthcare Quality and Disparities Reports and
related derivative products, we don't go to a smaller geographic unit beyond
states. However, there are some examples where we've worked with other parts
of AHRQ and others outside of AHRQ to drill down beyond that on some specific
projects.
And then moving on to the last point on slide 25 in terms of some questions
that we've received, one in particular is about severity adjustments and whether
or not those could be made for state environments, and the context of this
question was the specific example of long-term care and patient acuity, and
to what extent, number one, does the site and data available within the site
control for that -- for patient acuity? And then, number two, what's available
if folks wanted to go beyond that to examine that?
So, again, our point with this part of the audio teleconference is to hopefully
stimulate some thinking in preparation for the discussion that we'll have in
a few minutes. But before that, I want to go ahead and turn to slide 26 and
explore just in a little bit more detail this idea of the Snapshots and setting
priorities, and whether or not that's a function of -- and an objective of
the site for users. The simple answer to that is, yes, definitely, that's what's
envisioned. And, as you can imagine, if you've spent any time on the site,
it's sort of set up to provide that answer and help states to set priorities.
In particular, thinking back to one of the new features where we provide the
dashboard of basically all of the performance meters for the different summary
measures, that's a quick way for users to get an overview of the landscape
and hopefully to identify both areas of high performance and low performance
in a particular state.
And then beyond that, among the steps that we envision users might take, the second item underneath the dashboard view is to do a sniff test: ask yourself whether the findings we present in the Snapshots confirm what you may already know about healthcare in your state. In most cases they will, because often you're looking at the exact same data that populates the site, and so there is consistency there. But, if not, we certainly want to hear about that; or if there are alternative data sources that you use more heavily, we'd like to know how they may or may not coincide with what we have on the site.
The second bullet is about examining the measures behind the meters to determine what exactly is being measured. The site is built so that users can drill down. Once you have the broad overview of how performance looks in your state, you can go down to the measure level to see which measures are driving either high or low performance. Because, really, the detailed view is what's needed in almost all cases to plan any sort of response or action.
And then the other point is simply, as it says here: are there some measures that are particularly problematic? And, again, always loop back to ask whether this is consistent with what you know about your particular
state?
And then, finally, we definitely envision the State Snapshots as a place, or at least a body of information, around which groups can talk and really vet their results. One very simple way that happens is that a user who is responsible for a broader scope of healthcare contacts subject matter experts in particular areas to discuss their awareness, or lack thereof, of a particular quality performance issue.
And then, more as a policy question, are the current priorities that are set
supported by the data that's presented in the site? Are there some that may
be surprising or new ones that are suggested by the site?
And then, finally, this is really a tool for action, so what are the next
steps that your state might take with this awareness?
And, again, an important thing that we'd like to achieve with this audio teleconference and conversation is to hear how this works in your state. Is our vision of how this tool might help state healthcare policymakers accurate? Is this actually playing out the way we hope it is in your state?
With that, I think Rosanna, you're going to cover slide 27?
Rosanna Coffey: I am. I'm going to cover the next few slides
about the redesign of the Snapshot. Before I do that, this is for Tim Dyeson.
We did look up your question to get a more accurate answer for you. It's the
Bureau of Labor Statistics 2004 Quarterly Census of Employment and Wages, and
that's where that came from, Tim.
Okay, the Snapshot redesign. One of the questions we had was, could you design your own group of states to compare yourself to? We find this question comes particularly from rural states, which would like the ability to compare themselves to other states that are as sparsely populated or have the same topography as they do. And so we gave some thought to this. Allowing you to draw just any group of states that you want complicates the design of the website considerably. Giving you the ability to do it on the fly is something we'd have to evaluate the feasibility and cost of, and AHRQ would need to decide whether to spend that kind of money on a complete redesign of the website. So, that's a question.
But we do have another approach, which wouldn't be defining states on the fly: you could perhaps specify some types of comparisons that you would like to have, and we could build those into the State Snapshots. That would not be a big, expensive deal.
For example, I've listed some here. Suppose you want to compare your state to other low population density states; that might be one type of comparison we could build. Another might be the high-poverty states. Another might be states that are magnets for tertiary care, where they have big, huge academic centers and draw people from all over the country for care. Some states might want to compare themselves with other states like that.
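As a rough illustration of the kind of peer-group comparison being described, here is a minimal Python sketch of how a predefined "low population density" group could be averaged. All state values, field names, and the density cutoff are hypothetical, purely for illustration; this is not the actual State Snapshots methodology.

    # Hypothetical sketch: compare a state's rate to a peer group of
    # low population density states instead of the national average.
    # All numbers below are made up for illustration.
    STATE_DATA = {
        "State A": {"density": 6.0, "rate": 0.78},
        "State B": {"density": 7.4, "rate": 0.81},
        "State C": {"density": 9.9, "rate": 0.83},
        "State D": {"density": 1210.0, "rate": 0.85},
    }

    LOW_DENSITY_CUTOFF = 20.0  # hypothetical persons-per-square-mile threshold

    def peer_group_average(data, my_state, cutoff=LOW_DENSITY_CUTOFF):
        """Average the rate over low-density peers, excluding my_state."""
        peers = [v["rate"] for s, v in data.items()
                 if s != my_state and v["density"] <= cutoff]
        return sum(peers) / len(peers) if peers else None

    # State A compares itself to the low-density peers B and C.
    print(peer_group_average(STATE_DATA, "State A"))  # 0.82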
And you may have other ideas, so we would really like two things from you when we open this for comment. One is, can you come up with the characteristics of the states you'd like your state compared to, and help us define those? And the other question is, do you think this would be a valuable tool or not? Because AHRQ needs to make the decision about whether to put additional resources into this.
The next redesign question was about reporting by payer. In this case it's a question of whether the data are available by payer within the state, and we're 100% sure of it only for the HCUP data, the Healthcare Cost and Utilization Project, which is the discharge records that have been put together by AHRQ for many, many states -- I think up to 30 or 40 states now.
So, that's one place where we have payer information: the discharge records indicate whether a stay is Medicaid, Medicare, private insurance, uninsured, or some other category. And that would be a feasible way to look at those.
But the HCUP measures are a very small proportion of the measures that are in the Quality and Disparities Reports. So, in order to go beyond those, we'd need to assess two things: which of the data sources are all-payer, and then, for those, we'd have to recalculate rates by payer and examine cell sizes to make sure that we're not identifying organizations or individuals.
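As a rough sketch of what that recalculation involves -- computing a measure rate separately for each payer and suppressing cells too small to report safely -- here is a hypothetical Python illustration. The record fields and the suppression threshold are assumptions for illustration, not the actual AHRQ rules.

    from collections import defaultdict

    SUPPRESSION_THRESHOLD = 11  # hypothetical minimum reportable cell size

    def rates_by_payer(discharges):
        """Recalculate a measure rate per payer, suppressing small cells."""
        cells = defaultdict(lambda: {"numer": 0, "denom": 0})
        for d in discharges:  # each d: {"payer": str, "flagged": bool}
            cell = cells[d["payer"]]
            cell["denom"] += 1
            if d["flagged"]:  # discharge meets the measure's numerator criteria
                cell["numer"] += 1

        rates = {}
        for payer, c in cells.items():
            if c["denom"] < SUPPRESSION_THRESHOLD:
                rates[payer] = None  # suppressed: cell too small to report
            else:
                rates[payer] = c["numer"] / c["denom"]
        return rates

    sample = [{"payer": "Medicaid", "flagged": True},
              {"payer": "Medicaid", "flagged": False}]
    print(rates_by_payer(sample))  # {'Medicaid': None} -- n=2 is suppressed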
And, also, some of these measures are clearly payer-specific. There are Medicare measures and Medicaid measures, and to be able to combine any of those we'd have to see whether they have enough measures in common. In other words, are they collecting the same types of measures? If it's only one or two measures, we probably wouldn't build a meter for that.
But, again, this is an area where we'd like to have your comments on the value
of this when we open it for discussion.
And then the last redesign issue that I have here is whether we can report by substate geography. Again, it really depends on the detail and the data sources. HCUP has ZIP code level data, but, again, that may identify hospitals, and we cannot do that with HCUP.
I have heard that the Behavioral Risk Factor Surveillance System may report by county in the future, so that would allow some county-level statistics, perhaps.
But in general we need to assess confidentiality issues, and also whether there are meaningful groups of counties in each state that could be combined, because we might be able to get around some of the confidentiality issues if we can group them. In Kentucky, for example, we might group the Appalachian counties together, or the counties that have particularly poor health status in the state.
But it would be a pretty major undertaking to do this, so definitely AHRQ
wants to hear your comments on the value of this kind of substate analysis,
where we're looking at groups of counties, most likely.
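As a minimal sketch of that county-grouping idea -- rolling counties up into predefined regions so that cell sizes become large enough to report -- here is a hypothetical Python illustration. The region map, counts, and minimum cell size are all invented for the example.

    # Hypothetical sketch: roll county counts up into predefined regions
    # so that reportable cell sizes are reached. Region map is invented.
    REGION_OF = {"County 1": "Appalachian", "County 2": "Appalachian",
                 "County 3": "Central"}

    MIN_CELL = 11  # hypothetical minimum reportable denominator

    def regional_rates(county_counts):
        """Sum county numerators/denominators by region, then compute rates."""
        totals = {}
        for county, (numer, denom) in county_counts.items():
            region = REGION_OF[county]
            n, d = totals.get(region, (0, 0))
            totals[region] = (n + numer, d + denom)
        return {r: (n / d if d >= MIN_CELL else None)  # suppress small regions
                for r, (n, d) in totals.items()}

    counts = {"County 1": (4, 7), "County 2": (6, 9), "County 3": (2, 5)}
    print(regional_rates(counts))  # Appalachian reportable; Central suppressed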
And with that, Jeff, I'm going to turn it back to you for the last question
that we had.
Jeff Brady: Okay. Just to let folks know, we're on slide 30 now, and this picks up on what was mentioned previously about a specific request from some discussions that we were having with a user of the site, and their interest in explaining, to some extent, their performance in long-term care. More specifically, they were wondering whether, or to what extent, patient acuity could be responsible for or explain their level of performance.
And what we found, to quickly summarize the experience in this particular case, was that this is something that required a level of detail that is currently, for sure, beyond what's included in the State Snapshots, and to some extent might always be beyond its scope.
But, nevertheless, I think we are open to considering whether there are particular elements of severity that could easily be summarized and incorporated into analyses that could be pulled into the Snapshots.
But, really, this is just provided as an example of the kinds of questions
that we get, and we try to field and consider whether or not it's something
that we could not only help a user with, but also potentially help other users
with if it's a common request that we're getting.
And then just a few points in general about severity and how it's handled in the Snapshots currently. As you all most likely know, many of the measures that are included in the Snapshots, and in the reports as well, do incorporate severity adjustment. One example is the AHRQ hospital quality measures: the Quality Indicators have some adjustments for severity built into the measures themselves. In many cases, for other data sources, the underlying data or the methods are just not available to adjust for severity.
So, again, I just want to make the point that although in long-term care -- the area this question specifically asked about -- the data were not necessarily adjusted, for many other measures they are. In most cases where there are no adjustments, there are definitely good reasons for that: those cases really require specialized data and research methods that extend beyond the scope of the State Snapshots tool.
With that, I think we want to stop on our prepared sort of overview of the
Snapshots. And so I think we want to ask the moderator to open it up. So, Denise,
if you would, I guess we're going to take questions now and feedback.
And then while we're waiting for questions, we actually did receive one question
in advance, and I'm going to go over that very briefly, and hopefully that
will even stimulate some further feedback and discussion from you all who are
participating.
This question actually was from one of the participants, Diane Feeney from the Maryland Department of Health and Mental Hygiene. We appreciate the
question, first of all. Thank you. And then I want to just touch on it briefly
and then, if necessary, give Diane an opportunity to make sure we've understood
the question correctly.
But it was a question that specifically related to data that's included also
on the Hospital Compare site, so the CMS data. There were a few points
embedded in this question. One was related to data currency, if you will.
By way of answering this question, I first want to say that the most current data in the State Snapshots for that source is from 2005. That's information you can find fairly easily on the site when you drill down to the measure level: a fairly clear table shows not only the baseline year but also the most recent data year.
But Diane's question related to, I think, some analysis that they performed
after looking at the State Snapshots site to look at more recent data and sort
of do some trending based on that. And I think her observation was that the
all-state average had actually declined to some extent. And I believe, Diane,
you said in your question that Maryland's average had actually improved. And
certainly that's definitely possible. You've done exactly what we have hoped
users will do and, again, gone beyond the data that's included in the State
Snapshots site to look in particular areas.
Let me stop there first and just ask, Diane, are you on the phone now still?
Operator: Ms. Feeney, if you're on the line, please hit the
1 key. Your line is open.
Diane Feeney: Thank you. I think you've characterized my question well. The issue is that people currently working with those heart attack measures, in particular, believe that they're topped off in terms of hospital performance. But what I saw, in terms of a trend for the all-state average from the 2005 data period to the current period, was that performance had actually gone down on four heart attack measures, which is not, I guess, the current thinking among people who are focused on this hospital measurement world.
Jeff Brady: Right, right. So, turning to the second part of your question -- why the all-state average you observed declined -- it would really be speculation on our part. But just a few points: of course, for any measure there will be variation, and that's something we try to characterize not only in the State Snapshots site but in the reports overall, whether it's variation at the national level, variation among states, or variation across different priority populations.
But your observation is that, yes, these are often measures that are pointed to as topping off -- not reaching complete maximal performance, if you will, but definitely pushing up against that ceiling. And, again, it would be speculation on our part, but I think it possibly makes the case that even for these types of measures, for which there has been a long history of measurement, focus, and effort in those particular areas, constant vigilance and a sort of renewed dedication to an across-the-board quality focus are still required. In terms of explaining the why, that's not something we always get to in the reports or even in the Snapshots, but that's sort of where we end up on this point, I guess. But we would open it up for other responses to that.
But I think your question makes a good point about how we hope folks use the
State Snapshots site. So, thanks again for the question.
Diane Feeney: Thank you.
Operator: Again, if you have a question or comment at this time, please press the 1 key on your touchtone telephone. Our first question comes from Jonathan Teague of OSHPD.
Mary Trent: Actually, this is Mary Trent. Same office. We'd
like to underscore the value of doing substate level geographic analyses for
California. In particular it's important because we have a very large, diverse
state, and because a lot of our health and healthcare planning is done at the
county level. So, substate level analyses would make the data much more actionable
and useful to policymakers.
The second is more of a question. Could you say more about adapting the Snapshots
methodology for facility level reporting?
Rosanna Coffey: Sure. This is Rosanna Coffey. I'll take the second part of that, and thank you for your comment about the substate level analyses. On adapting the methodology: this was work with the Maine Quality Forum. We had originally provided to them the programs that we use for the State Snapshots -- how we got to the meters, the particular software we used, how the measures were developed, and so on, and the programs that were used to develop those. When we provided those to Maine, it was very early in their planning, and that was the point where they knew they wanted to do analyses at the hospital level. We thought this methodology could be transferred to the hospital level and, in fact, when we started working with them on it, we were still thinking we were going to use these meter kinds of layouts. It turned out that we had issues with small cell sizes and small numbers of hospitals in Maine, and so we had to come up with a different kind of methodology.
I don't know whether there's anyone on the line from Maine, but there is a beta test site. If anyone is on the line from Maine, would you push 1 on your phone? -- okay, Diane Williams. Diane, are you familiar with the beta test site and whether it could be provided to other states at this point?
Diane Williams: No, I'm sorry. (Inaudible)
Rosanna Coffey: Okay. Well, Josh Cutler is the director of the Maine Quality Forum, and we could ask him, Mary, whether that could be provided to you, so you could at least see what it looks like and feels like. We had to actually combine measures differently there. We did -- not an all-or-nothing kind of thing, but we summed up the results across measures of the same type, so that we could sum up numerators and use the same denominator. So, it was a different approach.
And we also developed some regression methodology to test whether a particular hospital was above or below the average across a bunch of measures. That methodology is fully documented on their website.
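As a rough sketch of that approach -- pooling same-type measures by summing numerators and denominators, then testing whether a hospital's pooled rate differs from the statewide rate -- here is a hypothetical Python illustration. A simple two-proportion z-test stands in for the regression methodology actually documented on the Maine site, and all counts are made up.

    from math import sqrt

    def pool(measures):
        """Combine same-type measures by summing numerators and denominators."""
        return (sum(m["numer"] for m in measures),
                sum(m["denom"] for m in measures))

    def compare_to_average(h_num, h_den, s_num, s_den, z_crit=1.96):
        """Crude two-proportion z-test of a hospital's pooled rate
        against the statewide pooled rate. (Statewide totals here
        include the hospital -- a simplification.)"""
        p_h, p_s = h_num / h_den, s_num / s_den
        p = (h_num + s_num) / (h_den + s_den)  # pooled proportion
        se = sqrt(p * (1 - p) * (1 / h_den + 1 / s_den))
        z = (p_h - p_s) / se
        if z > z_crit:
            return "above average"
        if z < -z_crit:
            return "below average"
        return "not significantly different"

    # Two same-type measures for one hospital, pooled into one rate.
    h_num, h_den = pool([{"numer": 45, "denom": 50},
                         {"numer": 88, "denom": 100}])
    print(compare_to_average(h_num, h_den, s_num=8000, s_den=10000))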
So, if you would get in touch with me -- you can do that by e-mailing me at rosanna.coffey@thomson.com; that's rosanna, r-o-s-a-n-n-a, there is no "e" in there. We also have thomsonreuters.com, but you have to know how to spell Reuters.
So, anyway, that's sort of the background on that. I'd be glad to work with
you to get you more information.
Are there any more questions?
Operator: Again, if you have a question at this time, please
press the 1 key on your touchtone telephone. Our next comment or question comes
from the line of Jonathan Teague.
Mary Trent: Nobody spoke up. This is Mary again. Would it be feasible -- have you discussed doing some additional contextual analyses? For example, disease prevalence using the BRFSS or NHANES or something like that. Another would be leading causes of death, to alert states to medical problems that might not be picked up in the analysis of the patient data. Have you considered publishing something like MRSA rates?
Rosanna Coffey: We could add other contextual factors. That's a relatively simple thing to do. The ones that are on there do come out of BRFSS. I'd have to open the site up to remember exactly which they are -- I'm just doing that for a second.
The health status measures are overweight, obesity, and at risk of heart disease and stroke, and I'm not sure whether reporting for mental health is on there. But we could add others as well. And if you have ideas about ones you think we ought to include, let us know.
Leading causes of death and MRSA rates for the state -- we could definitely add any of those. We just need to find the source, find the information by state, and then put it on there in the same kind of setup.
Mary Trent: Thank you.
Rosanna Coffey: You're welcome.
Jeff Brady: Are there other questions or comments?
Operator: Again, if you have a comment or question at this time, please press the 1 key on your touchtone telephone.
Jeff Brady: Let me actually pose a question to the group, then. One thing that's not specifically addressed in the slides, but follows from some of the discussions we've had at the workshops and elsewhere, is whether packaging or including examples of particular quality improvement efforts -- or, in some cases, promising practices that might be new -- along with the quality performance information would be beneficial to users. I wonder if anyone listening now has any thoughts or comments on that point?
Rosanna Coffey: While we're waiting, if you have any questions for any of us, Margie Shofer is going to be collecting all the questions. Her e-mail address is at the end of the slide set, so you can send any information to her and we will get it. She needs to be included on anything that comes to us anyway.
I'm wondering if anyone has thought about this state group comparison idea
that we've been seeing from several states, whether you have a special group
of states that you would like to be compared to?
Operator: I'm not showing any further questions or comments
at this time.
Margie Shofer: Okay, then, I'm going to conclude the Web conference. I want to thank you all for your questions and your participation in this audio conference. We hope you found it useful. On the last slide you'll see where you can find information about the products we discussed today. From the first bullet, you can download the National Healthcare Quality Report or the National Healthcare Disparities Report. These reports are pretty long -- over 200 pages -- so we certainly welcome you to download them, but you can also request a hard copy. The number there is the number of our clearinghouse; you can get one copy for free if you call it. You can also download the State Snapshots, and you'll see that we provided the Web link there for you.
And as Rosanna mentioned, you can download a summary version of your individual State Snapshot. They're not available yet, but they will be soon, and we will send out an e-mail to alert you when they become available.
Rosanna Coffey: Margie, I actually forgot to mention that.
Thank you.
Margie Shofer: Oh, sure. And if you have any questions or comments about this tool or any other tools that we have at AHRQ, please do not hesitate to contact me. Again, I'm Margie Shofer, and you have my contact information there. For more information about our suite of tools developed for state quality improvement, or for details about additional follow-up technical assistance, that last bullet is where you should go.
Thanks again, and this concludes the audio conference. We really look forward
to hearing from you. So, please think about how we can help and please let
us know. Thank you, everyone.
Operator: Ladies and gentlemen, thank you for your participation
in today's conference. This concludes the program. You may now disconnect.
Thank you and have a great day.