Virtual Festival of Evidence | Dr. Daryl Kor


Whole systems thinking for complexity and fragility: challenges and guidance


Dr. Daryl Kor | Mayo Clinic

INTERVIEW

Daryl Kor: We need to make sure that the data we’re putting into our modelling efforts is good quality data, so that it’s telling us what we think it’s telling us. If we’re not putting quality data into these models, what comes out of them is not going to be reliable in any way.

Interviewer: You’ve used data and predictive modelling in the area of transfusions. How did this work?

Daryl Kor: What we had hoped to do was optimise our transfusion practice. We had seen through multiple publications, as well as our own experience with our own data, that we were transfusing patients much more liberally than the evidence would suggest. By using advanced analytic techniques, utilising data in a more meaningful way, and then presenting that data to the end users at the time when they need it, we were able to substantially improve our transfusion practice. By the time we had finished up the pilot work, it was apparent that we had saved quite a substantial amount of money. More importantly, patients were doing as well, if not better, than they had been doing with more liberal transfusion practices.

Interviewer: Do you think policy makers expect benefits to be visible sooner?

Daryl Kor: I think it depends on the specific domain of study. For this particular domain I think it was a two to three year interval, and we probably realised gains very early in the process. But we continued to realise gains until the very end. Depending on what the domain of interest might be, you might expect deliverables very quickly. For another domain it might be that the deliverables are going to take much longer, over the course of years.

Interviewer: The volume of data that’s out there is mind-boggling. Do you think there needs to be a better system for hospitals to be able to pinpoint what is useful, so modellers can use the appropriate data more easily?

Daryl Kor: Yes. What my experience has told me is that it has to be a very close interaction between the informaticists, who are on the information and data side, and the users of that data. Because the users understand their workflows, they understand their data needs. By having that close interaction, we can cut out a lot of the unnecessary, redundant, excessive volume of data that’s not being used clinically. We can focus and target our efforts specifically on those data elements that are most relevant to those who are providing care at the bedside.





FULL TRANSCRIPT

Daryl Kor: Thank you so very much. Let me start by thanking Terry for the invitation to come over. It’s a privilege to be here with all of you this morning, and I very much look forward to talking about things from a little bit different perspective. It was a little intimidating for me to accept the invitation. While my work is certainly related to modelling and simulation, I truthfully have most of my experience upstream: I am the stakeholder, I am the champion, and increasingly, in my role in the Centre for the Science of Healthcare Delivery, I am the one that helps to ensure, as best we can, that the data we’re using in the modelling strategies, such as Mike described this morning, is accurate and is the kind of data that we think it is.

And really that’s what I would like to spend most of the next 45 minutes talking about: how we use information systems and communication systems most effectively to optimise care delivery at the bedside. And we’ll talk very briefly about some modelling through the course of the [00:01:00] presentation. But really, as I indicated, this talk will be mostly upstream from the actual modelling procedures, really talking about the information and the data that lie within those models. So, disclosures: I have no conflicts of interest or relevant financial disclosures, and I won’t discuss any off-label use of pharmaceutical agents or technology.

I also need to disclose that I am not a health informaticist, much to my chagrin, but I am extremely excited by the promise of health IT, and I’m dedicated to implementing innovative informatics approaches to improve care delivery. And that’s, as I said, really what we’re going to be talking about over the next 45 minutes or so. Briefly, to outline, just two basic objectives: at the end of the presentation, I hope that all of you will be able to appreciate some of the common informatics and communication barriers which impede systems approaches to improving care delivery.

And I hope [00:02:00] that you’ll recognise some innovative solutions that we’ve employed, at least locally at Mayo Clinic in Rochester, Minnesota, to address some of these barriers. To frame the problem a bit: I’m sure most in this room are very well aware of this data, but in the United States, our average life expectancy is nowhere near where we would like it to be. In fact, at 78.7 years, it’s well below the OECD median. I apologise, but does anybody happen to have a pointer by any chance? I had one and I left it up in my room. Maybe I could borrow one.

But the lower graph is showing the percentage of GDP spent on healthcare expenditures. So we can see where the U.S. is at here compared to the median for life expectancy. But we see that there is one area that we do quite well in, and that’s spending a whole lot of money. [00:03:00] And so, clearly, there’s a great deal of interest in understanding how we can utilise the resources that we’re infusing into the system in a more meaningful way, one that will bring us from where we’re currently at up to where many, many others are, countries that are spending a far lower percentage of their gross domestic product on the healthcare industry. And in the States we always like to make excuses for some of these sorts of findings; we like to think it’s because our population’s behaviours are perhaps not as healthy as in other countries.

Maybe we smoke more, maybe it’s the issue with obesity, but some data suggest that maybe it’s not that either. This is, for example, smoking data that compares Japan with the U.S. The smoking rates are substantially higher amongst the Japanese population, but again, when we go back to the life expectancy curve, Japan’s actually number one on that list. So while there are certainly components of the U.S.’s sub-optimal [00:04:00] life expectancy numbers that relate to our behaviours and our social activities, much of it is likely related to the way we’re delivering care to our patients. And this was highlighted very much in 1999 with the To Err is Human report, which noted that adverse events occur in between 2.9 and 3.7 percent of all hospitalisations in the United States.

More concerning, greater than 50 percent of those adverse events are related to medical errors. Another quarter are related to medical negligence. That is estimated to be associated with, on the low end, 44,000 deaths due to medical errors, and on the upper end, up to 100,000 deaths. And even if we take the low end of that estimate, at 44,000, that would put medical errors as a top-seven cause of death in the United States. Beyond patient outcomes, just again on healthcare utilisation, it’s been estimated that 30 to 40 cents of every dollar spent in the healthcare industry in the United States [00:05:00] goes to overuse, underuse, misuse, duplications, systems failures, unnecessary repetition, poor communication, and inefficiency, and that equates to upwards of 500 billion dollars.

A lot of zeros behind that. So not only are patients not having the outcomes we would like them to have, but we’re spending a lot of money, perhaps inefficiently. And certainly, there’s a lot of room yet for discovery, for understanding what the best care processes are for specific diseases, for specific patient populations. But what we’ve also learned increasingly is that even amongst those diseases where we have fairly well defined care processes, they’re not being implemented with anywhere near the frequency that we would expect. So as you may be able to see, and I know the print is quite small on the far right side, only 15 to 20 percent of most very well accepted care processes are actually implemented in practice, despite a good volume of data that supports [00:06:00] their efficacy in that particular patient population.

And so the National Academy of Engineering and the Institute of Medicine convened a meeting titled “Building a Better Delivery System: A New Engineering/Health Care Partnership”. That was published in 2005, and they identified a number of essential elements that underlie the crisis in healthcare delivery in the United States. And we can read through those. There are rapid advances in medical science and technology, and the increasing complexity of care; I think we’ve discussed those at varying levels throughout this meeting. Still very much so, in the United States, it is a cottage industry structure. There’s very poor communication from one health system to another health system, which is, I think, quite a bit different perhaps from what we have here in the U.K. Certainly, I think it’s much more of a cottage industry in the United States than it seems to be here.

The patient population predominantly needs chronic, rather than acute, care, even though our systems, really, are set up for the delivery of acute care. [00:07:00] The structure of the U.S. market for healthcare services strongly supports innovation in doing things to people: procedures, administering drugs, devices, equipment. But there’s relatively little innovation in improving the quality and productivity of care, and preventative services. There’s a persistent under-investment in the healthcare delivery sector in information technology and communication. And this is really, as I said, where most of this presentation will find itself.

There’s an inability and unwillingness of the healthcare sector to take advantage of engineering-based systems design, analysis, and management tools, much of which we’ve discussed at varying levels through the first few days of this conference. So what are some of the challenges from a health IT standpoint? We’ve heard some of these in the prior talk, Mike’s excellent presentation, and we’ve heard these woven throughout the course of the meeting in the past couple of days that I’ve been here: data availability, standardisation of data, interoperability of data, so that we can communicate not only amongst different [00:08:00] sites within the same health system, but amongst health systems.

Data sources: aggregating data from siloed legacy source data systems, and harmonisation of the data. Data type: we’ve become increasingly efficient at acquiring structured data elements, but much of what’s very important for understanding patient outcomes comes in unstructured clinical narrative that can be very difficult to harvest from the informatics structure. Data quality and interpretation, as we’ve described, is a big concern, and that’s where we spend a lot of our time: really trying to understand the nature of the data that we’re going to be using for particular use cases, and understanding what the limitations of those data might be. And logistics, in terms of efficiencies, scalability, and the timeliness of the data.

We talked a little bit, I believe it was in yesterday afternoon’s session, about real time prediction models, and there’s a completely different set of concerns, issues, and limitations that arise when we talk about the real time acquisition of data. We’ll talk a little bit about that as well. Increasingly, there are concerns regarding visualisation [00:09:00] of data, particularly for the healthcare providers. We’re drowning in a sea of data, and it’s not presented in any meaningful way; I’ll show you some work from some colleagues of mine back at Mayo on trying to improve that process of data visualisation. Implementation and dissemination of data, as we described in a prior slide, for getting the best evidence to the bedside for the right patient at the right time, for the right indication. And the disconnect between the informatics domain and the clinical expertise.

If there’s anything I can leave this audience with from the complete talk, it would be the absolutely essential need for multi-disciplinary approaches to these problems. It can’t be just the informaticists, it can’t be just the clinicians, it can’t be just the engineers. It has to really be all of us sitting down at the table together to try to understand what the problems are and how best to address them. Then of course, as we’ve already described to some extent, we can’t ignore the issues of money. These are expensive processes; to really implement a fully realised, scalable, useful [00:10:00] IT system is not a cheap endeavour.

There’s a lot of money that goes into implementing these systems. So let’s move through this a little bit in terms of implementation of the electronic medical record in the United States. This is the classic HIMSS EMR adoption model, which has changed over time, but surprisingly is not changing at the pace we would like. You can see that as you move from the bottom, stage zero, to stage seven, it’s a more fully implemented electronic health record. And you can see that, for the most part, we’re still actually at relatively low levels of implementation. We have a few sites; in Rochester, we actually claim to be at stage seven, and we would meet criteria for stage seven, but as I’ll show you, even at stage seven of EHR implementation, we have substantial limitations that we’re still having to work through with our electronic health record.

This is really one of the big issues that we have to deal with: the siloed legacy systems of all the source databases. So if you need to understand [00:11:00] data that comes from multiple domains, you have to interact with multiple different source datasets, and the expertise within each of those source datasets doesn’t extend to the other source datasets that you need to work with. And so a substantial amount of our effort is in trying to pull all of the information from these varied source datasets into a single source dataset, with expertise that understands the data at a much deeper level than has typically been the case, historically. These are a couple of snapshots I thought would be somewhat useful to present.

This is a snapshot of our electronic health record, our anaesthesia information management system, in the operating room environment. Speaking of the operating room environment, I kind of chuckled this morning. Everybody’s very comfortable with the acronym O.R., and as everybody was talking about O.R., that gave me a very different visceral response: oh, operating room. Very different perspectives indeed from where I’m coming from. But, in our operating room, [00:12:00] this is our information management system, and you can see there’s a lot of data. This is data on vital signs, some data related to fluid therapies. We have data related to laboratory measurements. We have, again, more detailed data related to physiologic assessments.

But what’s really notable about this, as I look through it, is that it’s really just data elements. There’s no information, there’s no knowledge, that’s generated from this record; it’s forced upon the end user to accumulate data from the various source tables and then generate a story that’s more meaningful and actionable for the care that we want to deliver to the patient. And so really, what we’re working very, very diligently towards is moving away from the representation of data to the representation of information, and most importantly knowledge, to the end user, so that we can help to optimise care delivery at the bedside.

So where we’re at right now, in large part, I still feel, is the EMR done wrong. We have electronic source databases that capture an enormous volume of electronic data, but the data [00:13:00] are hidden, they’re fragmented, they exist in siloed legacy systems, as we described. The systems in the United States, and I think this is true really around the world, are primarily optimised to capture billing and to document care delivery for medico-legal purposes, particularly in the U.S. They are not well designed, nor intended at this point, to facilitate meaningful improvements in care delivery. And this is one of my favourite pictures that I ran across quite some time ago, and I come back to it frequently.

This is the small, select few pieces of information that are absolutely essential to the question we’re trying to answer. And this is the sea of data that they live within, which we need to sieve through so that we can find the data elements that we’re interested in. And increasingly over time, these data elements that we’re interested in stay the same size, but the size of the pie that we need to find them in just continues to get larger, and larger, and larger. And it’s making it more and more challenging all the time [00:14:00] to sift through this information.

We’re left increasingly with the issue of information overload, where information really, at this point, is no longer useful, but is just becoming noise. And so there’s been a great deal of interest from the highest levels of our institution. John Noseworthy is the President and CEO of the Mayo Clinic, and has been for a number of years. He had a quote back in 2010 that really, I think, highlights what we’re trying to accomplish with our informatics infrastructure back home. He said, essentially, that we’re going to be moving away from an electronic medical record which is just an electronic version of the paper record, which is, I think, in large part really where we’re still at with the electronic health record.

Rather, to a smart electronic medical record that brings together what we know from research, practice, and education so that we can actually provide better patient care. So I’d just like to describe a few of the strategies that we’ve taken here; maybe it will resonate with some of you as you’ve thought about the infrastructure that you have at your own locations. But we very much [00:15:00] have, again, come to appreciate the increasing need for interactions between not just the informaticists, but the users of the data, so that the informaticists have a sense of what the limitations of the data are, how the data are being used, and how we can present it most meaningfully to those that really need this information. We’ve done this primarily in the form of creating what are termed data marts. What data marts are is essentially an access layer of a much larger data warehouse, and the data warehouse is generally a very large accumulation of relatively unclean, dirty data, as we typically have described.

We take that data for a very specific purpose. For example, we have data related to a surgical encounter. So instead of having the data for that surgical encounter be distributed through multiple source databases, we accumulate all of the data related to that surgical encounter in a single surgical data mart, so that we can very rapidly get a complete picture, not only of patient flow from the time they present to the operating room to the time they leave the hospital, [00:16:00] but as well detailed information about their outcomes, long-term outcome information, and again, just a much more thorough and detailed picture of that patient’s surgical experience.
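To make the data mart idea concrete, here is a minimal sketch in Python, with an in-memory SQLite database standing in for the unified server. All table and column names are hypothetical illustrations, not Mayo’s actual schema.

```python
# Minimal sketch of a 'surgical data mart': pull the elements of a surgical
# encounter out of siloed source systems into one queryable access layer.
# Table and column names below are hypothetical, for illustration only.
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the unified data-mart server

conn.executescript("""
-- Siloed legacy sources, each normally living in its own system.
CREATE TABLE or_events (encounter_id INTEGER, room_in_time TEXT, room_out_time TEXT);
CREATE TABLE outcomes  (encounter_id INTEGER, hospital_los_days REAL, mortality_30d INTEGER);

-- The data mart: one row per surgical encounter, joined from the silos,
-- so a complex query no longer has to touch each legacy system separately.
CREATE VIEW surgical_mart AS
SELECT e.encounter_id, e.room_in_time, e.room_out_time,
       o.hospital_los_days, o.mortality_30d
FROM or_events e
LEFT JOIN outcomes o ON o.encounter_id = e.encounter_id;
""")

for row in conn.execute("SELECT * FROM surgical_mart"):
    print(row)  # empty here; in practice, fed by extracts from each source
```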

We have similar types of data marts for the intensive care unit environment and for transfusion therapies, and I’ll present some of that data in subsequent slides as well. And so again, as I described earlier, a lot of our effort goes to the source databases; at this point, they’re largely siloed, they don’t communicate well. And we pull the relevant data elements, relevant to the population of interest, into our own specific unified servers so that we can then perform very complex queries on those patient populations. We serve three primary purposes with these data marts. The first is research and quality improvement.

A lot of our effort has gone to generating data for observational research. We also very much want to facilitate administrative functions: understanding efficiency through [00:17:00] the operating room environment, understanding delayed operating room starts, extended operating room cases, and understanding patient flow through the surgical environment. And we also wanted to provide decision support to the end users at the bedside. And that requires, again, real time data feeds, which have brought a whole other set of concerns and limitations that we’ve had to address. But I think we have very successfully addressed them, and we are providing some very meaningful support at the bedside, which I’ll show in some slides coming up as well.

This gives a sense of the size and scope of the types of data acquisitions that we have. So this is just for the operating room environment. And you can see that just for vital signs, we have data now from 1998 until the current date, and as of probably a month or two ago, when I completed this slide, we have just short of three billion rows of data just for vital signs. And you can see that this has increased quite dramatically over a very short course. And again, to re-emphasise a [00:18:00] point that I’ve made twice already in this presentation, the key to our success, we believe very strongly, is this multi-disciplinary approach: bringing information science and computer science to the bedside with those that are delivering care. So let me start with a use case, optimising transfusion therapies, and provide a little bit of background information. For some of you this may be redundant, for some it may not.

But increasingly, evidence has suggested that transfusion therapies, in this particular case red blood cell transfusion, may actually not provide benefit to the patient, but may actually be associated with harm. In fact, there have been associations in multiple studies between red blood cell transfusion and an increased risk of death. There’s also an accumulating body of data that suggests that red blood cell administration is associated with hospital-acquired infections and respiratory failure. There have now been four large trials (and some additional trials that aren’t published, but will be shortly) looking at the efficacy of red blood cell transfusion, and they have shown that conservative red blood cell transfusion practices are not [00:19:00] only safe, but for many populations are associated with improved outcomes when compared to more liberal red blood cell transfusion practices.

Despite this, there’s very convincing data to suggest that practitioners aren’t necessarily transfusing in a conservative manner. There’s still great variability in the way we transfuse surgical patients in particular. This is data for all U.S. centres that perform coronary artery bypass surgery, and it looked at the frequency with which that patient population gets transfused with red blood cells. If we look at the far left, this would be an institution where essentially none of their patients who undergo coronary bypass surgery receive red blood cell transfusion. In comparison, on the right, this would be an institution where essentially everybody who has coronary bypass surgery gets transfused, and then we have a distribution in between.

That’s really true not just for red blood cells, but for fresh frozen plasma administration and for platelet administration. So despite the fact that there’s minimal data to suggest that these more aggressive transfusion therapies [00:20:00] on the right are associated with improved patient outcomes, there’s still a great deal of variability in how providers are transfusing patients. To make it more personal, and to bring it back to my own institution, we evaluated our own transfusion practices for two years, in 2012 and 2013, and over those two years we noted that we had administered approximately 75,000 red blood cell transfusions, a little less than that.

Published data suggest that every unit of red blood cells transfused in the United States costs approximately $750. That’s associated with a cost of 56 million dollars over that two-year interval. So each year at Mayo Clinic, just in Rochester, Minnesota, we’re spending around 28 million dollars on red blood cell transfusion therapies. And again, the evidence suggests that many of those transfusion events are probably not providing any benefit and, in fact, may be associated with patient harm. And as you can see with platelet and plasma transfusion, there’s a significant financial burden to those transfusion therapies as well, again, [00:21:00] questioning the efficacy of those interventions for most patient populations.
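As a quick back-of-the-envelope check of these figures, using the approximate unit count and per-unit cost quoted above:

```python
# Back-of-the-envelope check of the transfusion cost figures quoted above.
units_transfused = 75_000  # approx. RBC units over the two-year interval
cost_per_unit = 750        # published estimate, US dollars per unit

two_year_cost = units_transfused * cost_per_unit
print(f"Two-year cost: ${two_year_cost:,}")      # $56,250,000, ~56 million
print(f"Annual cost:   ${two_year_cost // 2:,}") # ~28 million per year
```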

The American Hospital Association, working with the American Association of Blood Banks, AABB, has in fact identified blood transfusion as one of the five most overused resources in the U.S., and endorsed patient blood management as a strategy to better manage the resource. Coming back to one of my prior slides where I noted that there is still clearly a lot of need for discovery, there’s also a great need for implementation and dissemination of current knowledge. This is one such example that ties into this transfusion therapy story: this is data from a colleague of mine, published back in 2001, describing a protocol to help optimise transfusion therapies for patients having cardiac surgery.

And this data again was published back in 2001, and when we evaluated in our own institution, the institution from which this data was generated, how well we were implementing [00:22:00] these strategies, we noted again a great deal of variability. In fact, for some providers, less than 40 percent of their practice was actually consistent with what our own data had suggested our practice ought to be. So again, highlighting that the evidence is there, but we’re not implementing the evidence at the bedside for those that are ordering the blood component therapies. And so we initiated the patient blood management program that had many, many, many components.

But what we found, at least what appears to us to be the most influential components, were some of the informatics approaches that we employed. And I’ll show just two of those components in the coming slides. The first was the importance of bedside decision support in modifying clinician behaviour. It wasn’t enough to educate clinicians about the risks of transfusion therapies and the costs associated with them; they had to be reminded of that at the time when they were actually going to implement the order. Without that reminder, they continued with the order even though, in the back of their mind, [00:23:00] they understood the data about unclear efficacy and substantial costs.

But when we provided them information at the time of order, we noted that it had a substantial impact on their practice, and I’ll show you that in just a second. What this does is actually look at the laboratory values within the information management system; it pulls the relevant laboratory values and actually tells the provider which blood products would appear to be justified based upon the values available to us in the informatics system. The other thing we found to be a very powerful motivator of change, perhaps the most powerful, was to provide individuals feedback about their practice and to compare their practice to their peer group.
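The talk doesn’t spell out the rule logic behind this order-time check, but a minimal sketch might look like the following. The 7 g/dL haemoglobin trigger is the commonly cited restrictive threshold, used here purely as an illustration, and the function and parameter names are hypothetical.

```python
from typing import Optional

def rbc_order_advisory(latest_hgb_g_dl: float,
                       restrictive_threshold: float = 7.0) -> Optional[str]:
    """Return an advisory if the most recent haemoglobin does not appear to
    justify a red blood cell order; None if no advisory is needed."""
    if latest_hgb_g_dl >= restrictive_threshold:
        return (f"Latest Hgb {latest_hgb_g_dl:.1f} g/dL is at or above the "
                f"{restrictive_threshold:.1f} g/dL restrictive threshold; "
                "RBC transfusion may not be indicated.")
    return None

print(rbc_order_advisory(9.2))  # fires an advisory at the time of order
print(rbc_order_advisory(6.4))  # None: transfusion appears justified
```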

When we looked at these two strategies together, there were four basic elements to this pilot. The first was essentially just providing some baseline information to the practice about the pilot. Point number two is when we started to initiate the pilot and began presenting some early pilot results to [00:24:00] the cardiac surgery team. Point three was when we implemented the bedside decision support, which is where we really started to see the most substantial influence on provider behaviour: actually providing the relevant data at the time of order. And point four was when we started to provide providers feedback about their practice, and compared their practice to their peer group.

You can see that point three and point four, over the temporal evolution of this program, were really the points where we started to see the most substantial impact on provider behaviour. So: meaningful decision support at the time of order, and providing providers data about their practice so that they understood how their practice compared to their peer group. What was the impact of this? You can see, for each of the components listed on the bottom, a very steady decline over time in the frequency of red blood cell administration, and similarly for fresh frozen plasma, platelets, and cryoprecipitate.

The overall cumulative transfusion savings over the course of this project were approximately 25 million dollars, [00:25:00] and for the investment that was put in, the return on investment was approximately 10-fold. So a very, very meaningful deliverable to the institution, and really a use case that, for our group, provided a lot of momentum and a lot of clout for us to pursue the additional interests that we’ve moved forward with subsequently. Just to show that this isn’t something unique to the Mayo Clinic environment, this has also been reproduced by Tim Goodnough at Stanford University, again in the States.

Using this same sort of concept, implementing meaningful decision support for the ordering providers, what they noted comparing 2009 to 2012 data was that despite an increased complexity of patient mix, total red blood cell transfusions decreased by over 7,000 units, or 25 percent. And for the estimated cost savings for red blood cell units, here they used a much more conservative figure, what we call the acquisition cost, of $225 per unit. It saved their institution approximately 1.6 million dollars [00:26:00] just for red blood cell therapies. Now, this is a modelling and simulation group, so I thought it would be very prudent of me to at least put in a single slide to describe the uses of this infrastructure for modelling.

I have to be truthful that, rather than at an operational level, we’re really primarily using most of this data at this point at an individual level, in trying to predict patient outcomes and identify high-risk cohorts of patients, not only for implementing well established clinical practices, but also for experimenting with new clinical practices on these high-risk populations to enrich our study population and improve study feasibility. We’re also using the infrastructure that we’ve developed to provide performance metrics. I know tomorrow there are going to be some presentations on the use of metrics in the hospital.

Again, coming back to the whole concept that providing data to users about their practice, and comparing that to their colleagues’, seems to be a very potent motivator of change: we’ve taken that concept from [00:27:00] blood transfusion therapies to many other performance metrics across the surgical environment, whether it be prolonged [Unintelligible 00:27:05] length of stay, the conversion from outpatient to inpatient surgical procedures, and a variety of additional endpoints that we’re hoping will meaningfully modify provider behaviour. Again, providing operating room efficiency reports; we talked about the need for very accurate source data.

So we spent a great deal of time, for these types of reports, very precisely documenting exactly when a patient leaves the operating room, exactly when they enter the operating room, and exactly when they enter the zone subsequent to that, whether it be the recovery area or the ICU environment. We’ve utilised this to very much modify how we manage patient flow through the operating room environment, whether it be staggered starts for certain anaesthesiologists that have multiple case rooms starting at the beginning of the day. These have been very useful in our efforts to optimise operating room efficiency. Moving a little bit away from retrospective [00:28:00] large data sets that are used for administrative purposes, observation, and research, a great deal of our interest at this point is in generating decision support tools using real time data.

I had mentioned this before: we pull data from various source databases into a common server so that we can run complex rules on those data elements and generate alerts for specific conditions, and I’ll provide a couple of examples of that. Acute respiratory distress syndrome: I don’t know how familiar the audience is with this particular syndrome, but it’s an acute inflammatory response within the lungs. It has approximately a 25 percent associated mortality. At our institution, we did some cost analyses on this specific outcome, and it’s associated with approximately 300 thousand dollars in incremental costs for those that experience this complication.

It’s very substantial to patient outcome, and it’s very substantial to healthcare resource utilisation as well. And we have learned over time that there are specific interventions that providers implement that can increase [00:29:00] the risk of developing this particular complication. One is how we manage patients’ ventilation when they’re in the operating room environment. A little bit of epidemiology: there are about 200 thousand cases of acute respiratory distress syndrome in the U.S. alone. Again, mortality is approximately 25 percent. And even amongst those who survive an episode of acute lung injury, a growing body of data has identified significant long-term functional outcome limitations as well.

As I’d mentioned, the way we ventilate patients is known to be a significant risk factor for whether or not they develop this syndrome. So we have developed decision support tools to help identify patients who may be ventilated in a manner that portends risk for this outcome. And what we’re doing is continually surveilling the anaesthetic record, the way we’re ventilating patients. If we identify parameters that are consistent with potentially injurious ventilator patterns, we then send alerts to the in-room provider to let them know of this fact. You can see the alerts that show up in the lower right hand corner. It [00:30:00] provides them with the data that indicates why the alert was generated, and then provides them some decision support for that particular patient: for that patient’s gender and that patient’s height, what the ideal tidal volume ought to be.
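The talk doesn’t give the alert’s internals, but the standard way to derive an ideal (predicted) body weight from gender and height is the ARDSNet formula, with lung-protective tidal volumes of roughly 6 to 8 mL per kg of predicted body weight. A minimal sketch of that underlying calculation:

```python
# Predicted body weight (ARDSNet formula) and a lung-protective tidal volume.
# The alert logic shown on the slide is not specified in the talk; this is
# an illustrative sketch of the underlying calculation only.
def predicted_body_weight_kg(height_cm: float, male: bool) -> float:
    base = 50.0 if male else 45.5
    return base + 0.91 * (height_cm - 152.4)

def protective_tidal_volume_ml(height_cm: float, male: bool,
                               ml_per_kg: float = 6.0) -> float:
    return ml_per_kg * predicted_body_weight_kg(height_cm, male)

# A 175 cm male has a predicted body weight of ~70.6 kg, so a 6 mL/kg
# target gives a tidal volume of ~423 mL.
print(round(protective_tidal_volume_ml(175, male=True)))
```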

Again, this is providing decision support at the bedside using real time data feeds. This is an example of some of what we’ve seen over the course of time; to what extent these results are directly related to the intervention that I just showed, or to overall educational efforts or awareness of this particular complication, is hard to say, but we have very clearly seen a very steady decline in the rate of this life-threatening condition after major surgical procedures. Interestingly, we also thought that maybe this had more to do with patient population, demographics, comorbidity, and disease burden, but what we’ve noted is that the rate of community-acquired acute respiratory distress syndrome is, in fact, unchanged over that time interval.

Almost all of the change in the incidence of ARDS has come from ARDS that occurs within the hospital care environment. So this is iatrogenic: [00:31:00] they didn’t come in with this, they acquired it while they were in the hospital environment, and that is the area where we’ve had by far the greatest impact. Which is, again, supportive of the efforts that we’ve implemented to help reduce the occurrence, and mitigate the severity, of this particular syndrome. That’s one example; there are many examples. We’ve developed similar decision support tools for a whole variety of conditions, whether it be transfusion-related acute lung injury, ventilator-induced lung injury, identifying patients with anaemia, or septic shock; and when the H1N1 epidemic came through, we utilised these strategies.

We’ve found them useful, actually, for a whole host of different conditions in the operating room environment, and we’ve published on their beneficial value, both in terms of workflow, patient outcome improvements, and cost reductions. Moving away from the operating room environment, back into the intensive care unit environment: I think all of us can appreciate that this is an incredibly data-rich environment, with data coming from incredibly diverse [00:32:00] heterogeneous sources, and it just amplifies many of the concerns that I mentioned in some of the prior slides in terms of information overload and our ability to identify the information that’s relevant to our patient at the right time, amongst a sea of information that is perhaps redundant and irrelevant.

This is one estimate for a 24-bed ICU, which is the average size of our ICUs back in Rochester: we generate approximately 47,000 data points per day that providers have to sift through and make meaning of. And so this really starts to present some of the work of two of my colleagues, Brian Pickering and Vitaly Herasevich, who really are pioneering the effort of data visualisation. How do we understand what data is most relevant to the provider and to the patient, so that we can provide it at the right time and avoid the concerns related to information overload and cognitive load that [00:33:00] I described in some of the previous slides? They’ve taken a very systematic approach, and we believe very strongly that a systematic approach is far better than a chaotic approach.

There are some exceptions, but generally speaking. So this started with field observation: going to the clinicians at the bedside and asking them, what are the key data elements that are most important to you in your practice? What are the data elements that are unimportant to you, that we’re still presenting to you, and that are really just clutter? Using the information that came from these field observations, surveys, and interviews, we’ve developed a user interface that’s far more usable for the end users, and I’m going to show that in the next couple of slides.

This is a nice slide that describes our initial understanding of what this interface would look like, and it emphasises why we need engineers: because as clinicians we can conceptualise, perhaps, but to actually operationalise, that’s perhaps not our strength. So with the involvement of the engineers, we moved from this figure to what we have termed AWARE, Ambient Warning And Response Evaluation, [00:34:00] which is a user interface, a data visualisation tool that, as I noted, presents the data that has been identified as relevant to the users, as opposed to the current EMR that I showed in the first couple of slides, which is really just a sea of data. It’s not information, it’s just data; it’s really not harnessing the potential of the electronic health record.

So we have over 700 rules, and this list continues to grow. It presents data to the end user in a format, and a workflow, that matches how we work through the intensive care unit: an organ-based representation of data. We have a systems-based view as well, so that we can understand capacity within the intensive care unit very quickly. It’s very common within our health system that we have an intensive care unit that’s full for its particular patient population and needs to overflow to a unit that doesn’t typically care for that population. So this has very much facilitated our preparedness for this issue of overflow.
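AWARE’s actual rule set isn’t described in the talk beyond its size and organ-based organisation, but a minimal sketch of that style of rule evaluation might look like this; the rule names, thresholds, and data fields are illustrative only:

```python
from typing import Callable

Patient = dict[str, float]

# A tiny, organ-grouped rule set in the spirit of the ~700 rules described.
RULES: dict[str, list[tuple[str, Callable[[Patient], bool]]]] = {
    "respiratory": [("SpO2 low", lambda p: p.get("spo2", 100.0) < 90.0)],
    "renal": [("Creatinine elevated", lambda p: p.get("creatinine", 0.0) > 1.5)],
    "haematology": [("Severe anaemia", lambda p: p.get("hgb", 15.0) < 7.0)],
}

def evaluate(patient: Patient) -> dict[str, list[str]]:
    """Return fired alerts grouped by organ system, mirroring the
    organ-based view of the data described above."""
    return {organ: [name for name, rule in rules if rule(patient)]
            for organ, rules in RULES.items()}

print(evaluate({"spo2": 87.0, "creatinine": 2.1, "hgb": 9.0}))
# {'respiratory': ['SpO2 low'], 'renal': ['Creatinine elevated'], 'haematology': []}
```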

[00:35:00] It’s been adopted very successfully, much more successfully than systems that were generated outside of our environment by providers who may not understand our data needs and our workflow. And you can see, over a very short course, from Q3 of 2012, when there were 17 thousand AWARE sessions, that has increased to just short of 120 thousand AWARE sessions in quarter two of 2014, so very rapid adoption of this software. Importantly, we’ve actually studied, and this has been published, whether the software does what we intended it to do.

We’ve provided, for example, many case scenarios in a research setting to providers with our standard anaesthesia information management system and our electronic health record, and we’ve compared those responses and performances to the performances we see with the AWARE platform. As we can see on the bottom panel, cognitive load has substantially improved. Providers are able to make decisions with far greater speed and accuracy, the rate of errors has substantially [00:36:00] fallen with the new user interface, and provider satisfaction, both in terms of the consultants as well as the bedside providers and the nursing staff, has substantially improved; they’ve indicated substantial satisfaction with this particular system when compared to the system that had been in place before.

Importantly, AWARE is not just a viewer; it addresses time-sensitive clinical interventions. There’s a resuscitation module to provide decision support at the time of resuscitation. It’s an essential element for handover: increasingly, we’ve recognised that the transition of care from one particular environment to another is where we have a lot of loss of information, a lot of loss of data, and a potential for a lot of patient harm, so this tool is very useful in maintaining data integrity and understanding the full patient story as we move from one care environment to the next. It allows group-level population management, as I described, in the multi-patient viewer.

It allows individuals to understand everybody who’s involved in a particular patient’s care. We have very [00:37:00] much a siloed practitioner practice as well, such that we have consultants from various specialties involved in each particular patient’s care, and it’s oftentimes difficult to coordinate all of those areas of expertise; AWARE has very much allowed us to do that by assigning all of the providers responsible for those patients to a specific location within the platform. Patient clinical information, as we described, sits in the single-patient viewer, and it provides a task list so that everybody that’s part of the group understands what the daily goals are for each patient in the intensive care unit.

And the rounding tool as well: checklists, structured clinical assessments, and generating clinical notes, to help, again, improve workflow, allowing providers to spend less time looking at the patient’s record and more time at the patient’s bedside discussing the care plan with the patient and family. Does it work? This is very preliminary data, so we are still very much evaluating what the actual impact of this software platform is [00:38:00] on patient outcomes. But at least initially, it appears that we’ve noticed substantial reductions in length of stay in three of the four units where it has been implemented.

That contrasts temporally with the unit where AWARE was not implemented, where we noted a substantial increase in length of stay. And when we look at checklist compliance, compliance with best practice as far as the current evidence would suggest, we’ve noted that the use of AWARE has substantially improved checklist compliance in the intensive care units where it’s implemented. All right, rounding things out, just finishing up with the last couple of slides on where I think the future of our informatics systems is largely going. We’ve talked a great deal about how we’ve been using informatics systems from the healthcare team’s perspective, the systems perspective; we haven’t talked a great deal about implementing informatics strategies to help improve patient-centredness, really bringing it back to the patient and including them in their care decisions.

That’s [00:39:00] very much where we’re moving. We heard a talk yesterday on the implementation of personal health records, where patients can access their health record at home. And in fact, we have implemented the same strategies within the Mayo Clinic health system. This is my health record, which I can access anywhere that we have an electronic environment, and it’s not just for PCs; we’ve obviously moved this to iOS systems as well. And increasingly, we’re interested in really bidirectional communications: not just providing information from the healthcare side to the patient, but actually acquiring information from the patient and bringing it into the healthcare environment.

What is their activity at home? Helping us understand which patients are frail. Can we predict which patients will have surgical complications based upon their activity levels at home? So we’re very much interested right now in this bidirectional flow of information, both from the healthcare environment to the patient, as well as from the patient, external to the healthcare environment, back into the electronic [00:40:00] healthcare record. And we’ve piloted this in specific…this is another surgical population. This is for cardiac surgery, where we provide patients a very detailed plan of what their expectations should be for the recovery process subsequent to cardiac surgery.

Patients become much more involved in their care and their recovery process, and have a much more thorough understanding of that recovery process. It describes what their responsibilities are, what we expect of the patient, to help them through their recovery process. And again, they input data as well. They provide us information on how they’re doing with their recovery process. Once they go home, they can provide us with information about how they’re doing from a mobility standpoint and what their pain scores are. As providers, we can see how they’re doing at home and understand how they’re doing in their home environment with that recovery process.

We can see how compliant they are with their care plan, what their pain levels are, and what their activity levels are, and increasingly, we’re interested in understanding how these sorts of assessments will [00:41:00] help us to identify patients who are at greatest risk of consuming a large proportion of the healthcare resource. So, to finish: innovative informatics and communication systems will play essential roles in future models applying a systems-based approach to improving care delivery. Despite increased computational power and capacity, the full potential of contemporary informatics and communication systems has not yet been realised.

I believe strongly that multi-disciplinarity is essential to optimising the value of these increasingly powerful, but expensive, electronic resources. And I think we’re finding increasingly that, when done well, the deliverables can in fact be very meaningful. I want to acknowledge all of the colleagues and co-workers that I have from the various areas that I’ve presented: the data marts, the clinical informatics program, the AWARE project, and the My Care project, which is the cardiac surgery recovery project. And with that, I’m happy to take any questions. Thank you very much for your attention.

