
Every year, predictive AI saves 50 lives in two ERs at UC San Diego Health

By Bill Siwicki

Editor’s Note: This is part two of our two-part interview with Dr. Karandeep Singh. To read part one, click here.

Yesterday in our new series of articles, Chief AI Officers in Healthcare, we spoke with Dr. Karandeep Singh, Chief Health AI Officer and associate CMIO for inpatient care at UC San Diego Health. 

He described how accountability for all AI in a health system must lie with the Chief AI Officer, and how executives who hold this hot new position must have skills spanning both clinical medicine and artificial intelligence, though the two need not be in perfect balance.

Today we talk more with the physician AI chief about where and how UC San Diego Health is finding success with artificial intelligence. We dissect one AI project that has shown clinical ROI – and get some tips for executives seeking to become Chief AI Officers at their own organizations. 

Q. Please talk at a high level about where and how UC San Diego Health is using artificial intelligence today.

A. We’re using it today largely in two different broad classes of use. One of those is predictive AI, and one is generative AI.

Predictive AI is where we use AI to estimate the risk of a bad outcome and then design and implement interventions to try to prevent that outcome. That’s something we currently have widely in use for sepsis in all of our emergency rooms across UC San Diego Health. It’s something we’re in the process of deploying across our inpatient and ICU beds, as well.

This is something we implemented as early as 2018, and we have rolled it out in a really careful way. It was designed by colleagues of mine at UC San Diego Health. One of the key things that differentiates this from other work in this space is that, in the process of rolling it out, they designed a study on top of that rollout to see whether the use of this model, linked to an intervention that largely alerts our nursing staff, was actually helping patients or not.

What the team found is that this model is saving about 50 lives across two ERs in our health system every year. It’s beneficial to people, and we’re keeping a really close eye on it and looking for further opportunities to improve. So that’s one example of where we’re using predictive AI.
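To make that pattern concrete, here is a minimal sketch of how a threshold-based predictive alert of this kind can work. The inputs, scoring logic and cutoff are hypothetical illustrations, not the health system's actual sepsis model:

```python
# Minimal sketch of the general predictive-alert pattern described above.
# All inputs, thresholds and the scoring function are hypothetical;
# this is not the health system's actual sepsis model.
from dataclasses import dataclass

@dataclass
class Vitals:
    heart_rate: float   # beats per minute
    resp_rate: float    # breaths per minute
    temp_c: float       # temperature, Celsius
    sbp: float          # systolic blood pressure, mmHg

def sepsis_risk(v: Vitals) -> float:
    """Toy risk score in [0, 1]; a real model would use many more inputs."""
    score = 0.0
    if v.heart_rate > 100:
        score += 0.3
    if v.resp_rate > 22:
        score += 0.3
    if v.temp_c > 38.0 or v.temp_c < 36.0:
        score += 0.2
    if v.sbp < 100:
        score += 0.2
    return min(score, 1.0)

ALERT_THRESHOLD = 0.6  # hypothetical cutoff

def maybe_alert_nursing(patient_id: str, v: Vitals) -> None:
    """Fire an alert when predicted risk crosses the cutoff."""
    risk = sepsis_risk(v)
    if risk >= ALERT_THRESHOLD:
        # In production this would route through the EHR to ED nursing staff.
        print(f"ALERT: patient {patient_id} sepsis risk {risk:.2f}")

maybe_alert_nursing("demo-001", Vitals(112, 24, 38.6, 95))
```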

Another one is predictive AI for forecasting purposes. I already highlighted in yesterday’s interview one of the use cases by our Mission Control, where we’re using a model to forecast our emergency department boarding patients. That helps us figure out what we need to do when we anticipate a busy day tomorrow, in two days or in three days. We’re still designing some of the workflows around it; some workflows are already implemented or in progress.
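A day-ahead forecast of this kind can be sketched in a few lines. The data below is synthetic and the features are illustrative; the actual Mission Control model is more sophisticated:

```python
# Hedged sketch of day-ahead forecasting for ED boarding counts.
# Synthetic data and simple lag features, for illustration only.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
days = pd.date_range("2024-01-01", periods=120, freq="D")
# Weekdays run a bit busier in this toy series.
boarders = 20 + 5 * (days.dayofweek < 5) + rng.normal(0, 3, len(days))

df = pd.DataFrame({"boarders": boarders}, index=days)
df["lag1"] = df["boarders"].shift(1)   # yesterday's count
df["lag7"] = df["boarders"].shift(7)   # same weekday last week
df["dow"] = df.index.dayofweek
df = df.dropna()

model = LinearRegression().fit(df[["lag1", "lag7", "dow"]], df["boarders"])
tomorrow = model.predict(df[["lag1", "lag7", "dow"]].tail(1))
print(f"Forecast boarding count for tomorrow: {tomorrow[0]:.0f}")
```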

So, the other broad category of use cases is generative AI. We’re using some of the capabilities within our electronic health record that allow generative AI capabilities. One example of that is when a patient sends a message to their primary care doctor, the doctor has the option to reply in the usual way where they type out the entire response, or they can see a preview of an AI draft response and can decide if they want to use it or not as a starting point, and then edit that response and send that one along.

If the clinician opts to do that, we append a message at the bottom that lets patients know this message was partially automatically generated so they know there was some process of drafting that message involved that wasn’t just the clinician being involved. That’s an example of one where we found that, surprisingly, it actually increases the amount of time it takes to reply to messages.

But the feedback we’ve gotten is that it is less of a burden to reply to a message when you have a little bit of boilerplate text to start with than to start with just a blank slate. That’s one that we’re still refining, and that’s an example of one that’s integrated into our EHR.
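In outline, the draft-reply workflow looks like the sketch below. The generate_draft function is a stand-in for whatever generative service the EHR invokes, and the disclosure wording is illustrative rather than UC San Diego Health's actual text:

```python
# Sketch of the draft-reply workflow: the clinician can start from an AI
# draft, edit it, and the system appends a disclosure line for the patient.
# generate_draft() is a stand-in; the disclosure text is illustrative.

DISCLOSURE = ("Note: portions of this message were automatically "
              "generated and reviewed by your care team.")

def generate_draft(patient_message: str) -> str:
    # Placeholder for an EHR-integrated generative model call.
    return "Thank you for your message. Based on what you describe, ..."

def compose_reply(patient_message: str, use_ai_draft: bool) -> str:
    if not use_ai_draft:
        return ""  # clinician types the entire response themselves
    draft = generate_draft(patient_message)
    edited = draft  # in practice, the clinician reviews and edits here
    return f"{edited}\n\n{DISCLOSURE}"

print(compose_reply("My incision is red and itchy.", use_ai_draft=True))
```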

There are others we have built in-house. In some cases, it’s work that was done in my academic lab, but in a lot of cases, it was work done by colleagues of mine that we’re now looking to implement as part of the Jacobs Center for Health Innovation. One example is a generative AI tool that can read patient notes and abstract quality measures.

Quality measure abstraction is usually very time-consuming. The main implication is that it takes a lot of people to do it. But more importantly, we’re only able to review a really small subset of people’s charts because it’s so time-consuming. So, we never get to reviewing most charts in the electronic health record.

What we’ve found so far is we can get more than 90% accuracy using generative AI to do some of these chart reviews and abstractions of quality measures, where we say, did they meet this quality measure or not? There’s still some room for improvement there. But the other critical thing is we can review a lot more cases.

So, we’re not limited to a small number per month because we can run this on hundreds of patients, thousands of patients. It really gives us a more holistic view into our quality of care beyond what we could even achieve currently, despite throwing a lot of resources and a lot of time at trying to do this well.
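The abstraction loop itself is simple in outline, as the sketch below shows. Here, ask_llm is a keyword stub standing in for a real call to a generative model, and the quality measure is a made-up example:

```python
# Sketch of generative-AI chart abstraction: ask a yes/no question about
# each note and tally results across many charts. ask_llm() is a stub;
# the quality measure is a made-up example.

MEASURE = "Was the patient counseled on smoking cessation?"

def ask_llm(prompt: str) -> str:
    # Placeholder: a real implementation would send the prompt to an LLM.
    return "yes" if "smoking cessation" in prompt.split("Note:")[-1] else "no"

def abstract_measure(notes: list[str]) -> float:
    """Return the fraction of charts that met the quality measure."""
    met = 0
    for note in notes:
        prompt = (f"Read this clinical note and answer yes or no.\n"
                  f"Question: {MEASURE}\nNote:\n{note}")
        if ask_llm(prompt).strip().lower().startswith("yes"):
            met += 1
    return met / len(notes)

notes = [
    "Discussed smoking cessation options; patient agreeable to patch.",
    "Follow-up for hypertension. No tobacco counseling documented.",
]
print(f"{abstract_measure(notes):.0%} of charts met the measure")
```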

Those are the two broad categories: predictive AI and generative AI. We’ve got a lot of other work, a lot of other use cases in progress or already implemented.

Q. This story is about what it’s like to be a Chief AI Officer in healthcare, and you’ve discussed a number of projects you’ve got going. For this next question, could you pick one project and talk about how you, as the Chief Health AI Officer, oversaw the project, what your role was?

A. I can talk about our Mission Control Forecasting Model. This was something already implemented in an initial version when I got here to UC San Diego Health. I’ve been here for 10 months now. Some of the things I’m working on are on the runway, and some are just starting to be implemented.

My role came in because, while the model was working somewhat well, there were clear days when it would predict a not-so-busy day tomorrow. Tomorrow would roll around, and it was much busier than what the model said it was supposed to be.

Anytime you have a model doing forecasting, where it predicts tomorrow using today’s information, and it’s really far off, the people using that tool start to lose faith in it, as I would, too. When this happened, I think once or twice, I said, “We can’t just tweak things now. We have to go back and look at what the model is assuming and what information it’s using, to figure out why tomorrow’s prediction is not accurate.”

What did we do here? I sat down with our data scientist. We went through that model line by line, looking at code. That helped us figure out key things we thought were in the model but actually weren’t, because they had been removed earlier after being found not to be helpful.

So, we said, “Well, why was it not helpful?” We did a bunch of digging and found that some of those predictors were not helpful because they were capturing the wrong information. Based on the description of the predictor, it was capturing something different from what the code was actually doing.
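A simple guard against that class of bug is a test that pins a predictor's code to its documented definition. The feature name and its definition below are hypothetical:

```python
# Sketch of the kind of check that catches the bug described above: a
# predictor whose code has drifted from its documented definition.
# The feature name and definition are hypothetical.

def count_ed_arrivals_last_24h(events: list[dict]) -> int:
    """Documented definition: number of ED *arrivals* in the last 24 hours."""
    # An earlier, buggy version might have counted all ED events instead.
    return sum(1 for e in events
               if e["type"] == "arrival" and e["hours_ago"] <= 24)

def test_counts_only_recent_arrivals():
    events = [
        {"type": "arrival", "hours_ago": 2},
        {"type": "discharge", "hours_ago": 3},  # must not be counted
        {"type": "arrival", "hours_ago": 30},   # outside the window
    ]
    assert count_ed_arrivals_last_24h(events) == 1

test_counts_only_recent_arrivals()
```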

Doing that over the course of about three to five months, we went from version 2 of our model, which was implemented when I first got here, to version 5.1 of the model, which went live last month. What’s happened as a result of that? Our predictions today are substantially better than our predictions were in January and February. And what that does is help us start to rely on the model to do workflows.

When the model is not accurate, there’s not a lot of appetite toward linking any workflow around it. But when the model gets more accurate, people start to realize the model actually says tomorrow is going to be a busy day, and it turns out it is a busy day, or it says it’s not going to be and it turns out not to be busy. That now lets us think about all kinds of things we could do to make our healthcare and access to care a bit more efficient.

What are my activities there? Figuring out, with the co-directors of our Center for Health Innovation, our data scientists and some of our PhD students, what is happening on the data side, in our AI modeling code, and in our processes for going live with new versions of models and our version control; and then making sure, as we upload those new models, that gets communicated to our Mission Control staff so they’re in the loop on when to expect the model to change and what is actually changing.

So, we develop model cards that we distribute, then we make sure that information is communicated to a broader set of health leaders at our Health AI Committee, which is our AI governing committee for the health system. So really, it’s soup to nuts, being involved in everything from how we’re pulling data all the way to how it’s being used clinically by the health system.
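A model card can be as lightweight as a structured summary. The sketch below shows the general shape; the fields and values are illustrative, not the health system's actual documentation:

```python
# Sketch of a minimal model card for the forecasting model discussed above.
# Fields and values are illustrative, not the health system's actual card.
model_card = {
    "name": "ED boarding forecast",
    "version": "5.1",
    "intended_use": "Day-ahead capacity planning by Mission Control staff",
    "inputs": ["recent boarding counts", "day of week", "scheduled volumes"],
    "output": "Predicted number of boarding patients, one to three days out",
    "limitations": "Forecasts degrade during atypical events such as outbreaks",
    "owner": "Center for Health Innovation data science team",
}

for field, value in model_card.items():
    print(f"{field}: {value}")
```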

None of that is stuff I can do alone. As you notice, each of those steps requires me to have some level of partnership, some level of someone who has domain knowledge and expertise. But what I have to do is make sure when a clinician notices a problem, we can think about and brainstorm what in the upstream processes might be creating that problem so we can fix it.

Q. Please offer a couple of tips for executives looking to become a chief AI officer for a hospital or health system.

A. One tip is you really need to understand two different worlds and understand how they connect. If you look online, there is a lot of chatter and discussion about AI. There’s a lot of excitement about AI. There’s a lot of people just sharing their experience of AI, and all of that is good information to capture.

It’s also important to read papers in the space of AI and understand some real limitations. When someone says, “We need to make sure we monitor this model because it might cause problems,” you should know roughly what kinds of problems it could cause, what are key historical examples of problems caused by health AI, because you’re essentially going to be the AI domain expert for the organization.

One of the key things is, it’s a bit difficult to pivot from being a healthcare administrative leader into a Chief Health AI Officer unless you already have a substantial amount of health AI knowledge, or are willing to engage in that world, get that knowledge and build that community.

Similarly, there are challenges for people who know the health AI side really well but don’t speak the language of healthcare or medicine, and can’t translate it into something digestible by the rest of healthcare leadership.

Depending on which of those two worlds you’re coming from, how you’ll need to develop to serve in that role is going to be a little bit different. If you’re coming from healthcare, then you’ve really got to make sure you have domain expertise in AI, so when you say you’re accountable, you actually are accountable.

And on the AI side, you need to understand how the healthcare system works so that, as you’re working with health leaders, you’re not just translating your excitement about a specific method, but saying, “With this new method, here’s something you need to do today that you can’t, that we could do. Here’s how much we would need to invest, and here’s what the return on investment would be if we were to invest in this capability.”

There really are a number of different skill sets you have to have, but thankfully, I think there are a lot of ways you can have a strength in one area without covering the entire spectrum.

That’s where different health systems will take slightly different approaches to how they look at this role. Other companies, like payers, are going to look at this role a little bit differently. That’s okay. You shouldn’t hire this role simply because you feel like you’re missing out. You should hire this role because you already are using AI or you want to use it, and you want to make sure someone at the end of the day is going to be accountable to how you use it and how you don’t use it.

Click here to watch the interview in a video that contains BONUS CONTENT not found in this story.

Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki

Email him: [email protected]

Healthcare IT News is a HIMSS Media publication

