Accountability for all artificial intelligence used by a health system lies with the Chief AI Officer

Editor’s Note: This is the second in our series, Chief AI Officers in Healthcare. To read the first, an interview with Dennis Chornenky at UC Davis Health, click here.

Some health systems across the country interviewing for their first Chief AI Officer – the hottest new executive role in healthcare – are looking for candidates with more clinical knowledge than artificial intelligence expertise. Conversely, some want executives with deeper AI savvy than knowledge of the clinical side of IT. But both types of expertise are important to the position. The balance is up to the health system.

This is one observation of someone deeply in the know: Dr. Karandeep Singh, Chief Health AI Officer and associate CMIO for inpatient care at UC San Diego Health, who previously served as associate chief medical information officer of artificial intelligence at Michigan Medicine. Chief AI Officers are not yet common in healthcare – so a two-time Chief AI Officer is quite the find.

Healthcare IT News sat down with Singh in a wide-ranging discussion on what UC San Diego Health was looking for in a Chief AI Officer, the skills anyone looking to become a healthcare Chief AI Officer should have, what is expected of Singh, and much more.

Q. How did UC San Diego Health approach you to become its Chief Health AI Officer? What were they looking for and who would you report to?

A. This was a role we mutually figured out and found together. I was interviewing for a role that was going to be helping to lead the Jacobs Center for Health Innovation at UC San Diego Health, which is our primary innovation hub located within the health system.

One of the things I wanted to make sure is that if we have an arm of the health system doing innovation, that we also have a role within the health system thinking about AI governance and thinking about responsible use of AI. When I was interviewing for this role, we had a lot of conversations around what that could look like. And ultimately, that evolved into a Chief Health AI Officer role.

Q. This is not your first AI leadership role. You served as the associate chief medical information officer of artificial intelligence at Michigan Medicine.

A. I’m an academic faculty member with a research interest in applied AI. I think there’s a lot of interesting work happening in the computer science and statistics space around AI. But my interest has always been at the intersection: when we take these tools and actually use them at the bedside or inside the health system, how well do they do?

Can we figure out, as a science, the things we can do to make sure these tools, when they’re used in the health system, actually help our patients and help our clinicians? At Michigan, I was tenure-track academic faculty with an operational interest in AI.

As I was there, I got a chance to join our nascent clinical intelligence committee, which was the health AI governance committee for Michigan Medicine. That role, in some ways, had some of the elements of what a Chief Health AI Officer may be doing, but I think was largely still focused on health AI governance.

What I really wanted out of the next stage in my career was still this academic interest in figuring out what things work, what things don’t, and how we get things to work.

But at the same time, I wanted a level of coordination and awareness at our leadership level so that, as we design and expand our health system, we’re thinking about the AI capabilities available to us and implementing things in a way that lets us study whether they work.

I wanted a top-down approach to figure out and prioritize what we should be doing in the space of AI. But I still, even to this day, retain an academic lab within the university that’s focused on understanding what things work, what things don’t, and how we can make care better.

Q. What skills do you think anyone looking to become a healthcare Chief AI Officer should have?

A. If you look around the country at the places hiring Chief AI Officers, the first question asked is, “Why do you even need a Chief Health AI Officer?” Then, based on why you think you need one, that really guides the skills you might look for in someone to fill that role. So, start by looking at why you might need one.

Many health systems already are using AI. At this point, it’s not so much that they’re looking into using AI. Almost every health system, if you look into their electronic health record and the things they’re doing, already has a lot of AI-driven clinical workflows. Then the question you ask is, “Who’s accountable for the use of AI at the health system? Who understands what’s actually happening in the space of AI and AI-implemented workflows within the health system?”

When you get into that, you start to realize that while there are folks in the health system who understand different elements of the AI-driven workflow, there’s not one person who really stands out as being accountable.

In my mind, a lot of the reason health systems should even look into this role is that if you’re using AI – where we know use in the wrong way or in an irresponsible way can harm people – you need someone who is ultimately accountable for that and is doing their best to make sure we have processes in place to prevent that from happening.

From that vantage point, the Chief Health AI Officer should understand basic principles about AI and potentially have somewhat of a computing background. But it should also be someone who understands the ultimate clinical use case and has enough informatics understanding to study whether something works once it’s integrated into the electronic health record and into a clinical workflow.

Based on that, it’s a very diverse set of skills. There are places around the country that are going to be hiring folks with more clinical knowledge and maybe less AI knowledge, and that’s okay. And there are places that are going to be looking for roles where the person has a lot of technical background and maybe doesn’t understand the clinical workflow as much because they’re involved in the weeds of the electronic health record integration.

I don’t think either of those is entirely wrong because at the end of the day, you need someone who understands and is accountable. An ideal person to fill that role is someone who understands a little bit of both of those worlds and can help translate the AI side to the clinician and the clinical side to the AI experts.

Q. Please describe the AI part of your job at UC San Diego Health. In broad terms, what is expected of you? And in more specific terms, what is a typical day for you like?

A. To describe the AI part of my job, I’d probably look at two different elements. The first is that we’re doing a fair amount of work at UC San Diego Health actually building AI in-house that we use for healthcare operations.

So, one example: we have patients in our ER who have been admitted to the hospital, and we often refer to those as emergency department inpatients, or patients who are boarding. The number of patients boarding in the ER often determines many other things about the health system, like ER wait times.

It’s something we care really deeply about and try to improve. One of the ways we do that is by forecasting the number of emergency department inpatients we’ll have over the next 10 days. While there may be companies you can go to for that capability, the tool is pretty specific to our institution in terms of some of the factors we’re considering and the things we want.

Even before I joined UC San Diego Health, they had decided this was a capability they wanted to build out internally. When I joined, we made several improvements to the model and to the tool. We’ve also engaged our mission control leadership, mission control being our nucleus for where we actually use that information to improve the emergency department wait times.

We’ve worked with that leadership to improve the model and be able to do better forecasting. That’s a tool that’s entirely built in-house. It’s something I’ve looked at the source code for. I’ve worked with our data scientists on identifying some of the data issues and trying to improve those.

We’ve looked at other methods of doing that same forecasting to see if any methods that have just recently come out are better than some of the ones we’re using. That’s the one part of my role that I think, again, requires a knowledge of some of the algorithms, knowledge of some of the code to understand key data pipeline issues, and the ability to direct our data scientists in an effective way so they can do the work of building those tools so they work accurately.
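The article doesn’t say what forecasting method UC San Diego Health actually uses, but the general shape of the problem – predicting daily ED boarding counts 10 days out – can be sketched with something as simple as a seasonal-naive baseline. Everything below (the function, the counts) is a hypothetical illustration, not the health system’s model.

```python
from statistics import mean

# Hypothetical sketch: forecast future daily ED boarding counts by
# averaging past observations that fall on the same day-of-week slot.
# This is NOT UC San Diego Health's actual model, just an illustration
# of the forecasting task described in the interview.

def forecast_boarding(history, horizon=10, season=7):
    """Forecast `horizon` future daily counts from a list of past daily
    counts, averaging observations with the same weekly (7-day) slot."""
    forecasts = []
    n = len(history)
    for step in range(1, horizon + 1):
        # Past indices sharing this future day's slot in the weekly cycle.
        same_slot = [history[i] for i in range(n)
                     if (i - (n + step - 1)) % season == 0]
        forecasts.append(mean(same_slot))
    return forecasts

# Three weeks of made-up daily boarding counts (Mon..Sun repeating).
history = [22, 25, 24, 23, 26, 18, 15,
           24, 27, 25, 24, 28, 19, 16,
           23, 26, 26, 25, 27, 20, 14]
print(forecast_boarding(history, horizon=10))
```

A real version would fold in institution-specific factors (scheduled admissions, staffing, seasonality beyond the week), which is exactly why Singh describes it as something worth building in-house.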

The other part of it is we work with a lot of vendors to implement things. Sometimes that partner is our electronic health record vendor. Sometimes that’s other partners, like startup companies or other groups deploying things within our health system. How do we know the things these companies are deploying actually work?

So, to some degree, we partner with them and ask them, “Can you help us figure out how well it’s working in our health system?” But to a degree, at some point, you have to make sure you independently check and make sure for some of these things, especially ones that have a prospect of having a large impact on your health system.

That’s another place where we have the capability to do that, where we can actually go and independently do some of that checking without having to rely entirely on our vendors. It gives us another direct view into whether the benefit we expect to find is actually the benefit we’re observing.

Being able to build things in-house and being able to evaluate things that are being developed by vendors are two capabilities I think health systems need to have. Otherwise, you end up implementing, implementing, implementing. You’re not really sure what’s working.

And then, if you don’t take the time to do the evaluation, it becomes a really difficult discussion about which things to keep going and which things to turn off.
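To make the independent-checking idea concrete, here is a minimal hypothetical sketch: recompute a simple metric (positive predictive value) from local predictions and outcomes and compare it to a vendor’s claimed figure. The function name, the data, and the claimed number are all invented for illustration.

```python
# Hypothetical sketch of independently checking a vendor model, as
# Singh describes: rather than relying on the vendor's validation,
# recompute a metric on local data. All numbers here are made up.

def ppv(predictions, outcomes):
    """Positive predictive value: fraction of positive predictions
    (1s) whose corresponding outcome was actually 1."""
    flagged = [o for p, o in zip(predictions, outcomes) if p == 1]
    if not flagged:
        return None
    return sum(flagged) / len(flagged)

predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # vendor model's flags
outcomes    = [1, 0, 0, 1, 0, 1, 0, 1, 0, 1]  # what actually happened
observed = ppv(predictions, outcomes)
vendor_claimed = 0.85  # assumed figure from the vendor's own validation
print(f"claimed PPV {vendor_claimed:.2f}, observed PPV {observed:.2f}")
```

A gap between the claimed and locally observed numbers is the kind of signal that would feed the keep-it-or-turn-it-off discussion Singh mentions.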

Click here to watch the interview in a video that contains bonus content not found in this story.

Part two of this two-part interview will be published tomorrow.

Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki

Email him: bsiwicki@himss.org

Healthcare IT News is a HIMSS Media publication
