Johns Hopkins has big plans for AI in Epic chart summarization

by News7

Yesterday, in part one of our in-depth interview, Dr. Brian Hasselfeld of Johns Hopkins Medicine, senior medical director of digital health and innovation and associate director of Johns Hopkins inHealth, discussed the role of artificial intelligence in healthcare overall.

Today, Hasselfeld, who also is a primary care physician in internal medicine and pediatrics at Johns Hopkins Community Physicians, turns his focus to Johns Hopkins itself, where he and a number of teams throughout the organization have implemented AI in ambient scribing and patient portal applications. They’re working with EHR giant Epic on deploying AI for chart summarization – a major step forward.

Q. Let’s turn to AI at Johns Hopkins Medicine. You are using ambient scribe technology. How does this work in your workflows and what kinds of outcomes are you seeing?

A. Certainly a very topical space. We’re seeing a number of products taking a wide range of strategies. We’re similar to many that have made some early moves in this space, recognizing that technology really hasn’t done what it’s supposed to do in healthcare.

Arguably, most of the data would say, at least to the clinician, technology has done more harm in some ways, at least to our own workflows and experience in healthcare. So, we’re trying to think about some of those pieces where we can move technology back to the center and make it more enjoyable.

Again, many have acknowledged the documentation burden that sits on top of our clinicians with the explosion of EHR content, driven both by regulatory requirements and by general workflow across many major systems. So, for most of the health systems that have picked up on ambient AI, the ambient part is a listening device capturing a clinical encounter, whether it be an outpatient visit, an ER history or inpatient rounds.

And on the back end, the AI tool, usually what’s now known as a large language model, such as GPT, takes the spoken words between the multiple parties and constructs them into newly generated narrative text.

It’s using the actual function of those large language models to generate a paragraph of content, usually around a specific prompt, giving that model an instruction like, “Please write a history based on this medical background.” We’ve deployed that currently across a number of ambulatory or outpatient clinics and a couple of different specialty areas, currently with our first product, and we’re thinking about how we might use more than one product to understand the different levels of functionality.
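Conceptually, the back-end step he describes is a prompted summarization call. Below is a minimal sketch of that idea in Python, assuming the OpenAI client library and a hypothetical model name; it is illustrative only, not the vendor product or prompts Johns Hopkins actually uses.

```python
# Minimal sketch: turning an ambient-captured transcript into a draft note.
# The client, model name and prompt text are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def draft_history(transcript: str) -> str:
    """Ask a large language model to draft a history section from a visit transcript."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a clinical documentation assistant. Write a concise "
                    "history of present illness from the conversation below. "
                    "Flag anything you are unsure about."
                ),
            },
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content
```

The returned text is only a draft; as he notes next, the clinician remains responsible for reviewing and editing it before it enters the chart.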

I myself just had clinic this morning and was fortunate enough to be using the ambient AI technology on a device, my own smartphone, with our EHR on the phone. I was able to launch the ambient AI product, which listens to the encounter and generates a draft note, which, of course, I’m responsible for and need to review and edit myself to ensure clinical accuracy. It’s really making that clinical interaction much better.

The ability to take my hands off the keyboard, look directly at the patient, and have an open conversation about a very intimate topic, their own personal health, bringing my eyes from the computer back to the patient, is, in my mind, the main benefit so far.

Q. Johns Hopkins Medicine also is using AI for patient portal message draft replies. Please explain how physicians and nurses use this and the kinds of outcomes they achieve.

A. This enterprise tool is out to early users. It is probably well-known now to many who follow HIMSS Media content that patient emails or in-basket messages, messages generated through the patient portal, have exploded through the pandemic.

Here at Hopkins, we saw a nearly 3X increase in the number of messages sent by patients to our clinicians from pre-COVID levels in late 2019 to the run rate we see now. And some of that’s a really good thing. We want our patients to be engaged with us. We want to know when they’re feeling well or not well, and to help triage.

But again, the clinical workflow, including payment models and clinical care models, is not built for this constant communication, this constant contact. It’s built around visits. We did a well-intentioned thing, increasing connectivity with our patients. It’s a very easy modality, something we all do every day – email and text.

We’re used to communicating what we would call asynchronously or through written communication. But we really didn’t change the other side of it. The unintended consequence was dumping all that volume onto an unchanged clinical practice system.

Now, all of us are trying to figure out how we accelerate improvement in that meaningful area of clinician burnout while maintaining the benefit to our patients in having freer contact with their clinical team.

So, a message comes in. Some things are excluded, especially messages with attachments and the like, as those are more difficult to interpret. Once the message lands with a clinical care team member, those who have access to the pilot deployment of the AI draft responses will see an option to generate a draft reply based on the content of the original message, and then see the large language model’s draft response, based on some instructions given to it to interpret the message in an appropriate way.
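The workflow described here, screening out messages that are hard to interpret and then offering the clinician an LLM-drafted reply to accept, edit or discard, can be sketched roughly as follows. The message structure, field names and prompt are assumptions for illustration, not Epic’s In Basket integration.

```python
# Illustrative sketch of the portal-message workflow described above.
# Field names, eligibility rule and prompt are hypothetical.
from dataclasses import dataclass, field

from openai import OpenAI

client = OpenAI()


@dataclass
class PortalMessage:
    patient_name: str
    body: str
    attachments: list = field(default_factory=list)


def eligible_for_draft(msg: PortalMessage) -> bool:
    # Messages with attachments are excluded, mirroring the pilot described above.
    return not msg.attachments


def draft_reply(msg: PortalMessage) -> str | None:
    if not eligible_for_draft(msg):
        return None  # clinician starts from a blank reply instead
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model
        messages=[
            {
                "role": "system",
                "content": (
                    "Draft a brief, empathetic reply to this patient portal message. "
                    "Do not give a diagnosis; suggest a visit when the question "
                    "cannot be answered safely by message."
                ),
            },
            {"role": "user", "content": f"Message from {msg.patient_name}:\n{msg.body}"},
        ],
    )
    return response.choices[0].message.content
```

When the sketch returns no draft, the clinician simply starts from a blank message, which mirrors the choice he describes next.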

I can choose, as a clinician, to start with that draft or start with a blank message. Stanford just put out a paper on this that articulates some of the pros and cons quite well; one of the benefits is decreased cognitive burden in trying to think about responses to very routine types of messages.

We have also seen that clinicians who have picked up this tool and use it on a regular basis are reporting decreased in-basket burnout and improved clinician wellness metrics. But at the same time, I think minimal time is saved right now, because the draft responses are really applicable and useful to the patient message only a minority of the time. In the published Stanford paper, it was 20% of the time.

We see our clinics ranging from low single-digit percentages to 30-40%, depending on the type of user, but still far less than half. The tool is not perfect, the workflow is not perfect, and it’s going to be part of that rapid but iterative process to figure out how we apply these tools to the most useful scenarios at this point.

Q. I understand Johns Hopkins Medicine is working on chart summarization via AI, with an initial emphasis on inpatient hospital course summary. How will AI work here and what are your expectations?

A. Of all the projects, this one is in its earliest phases. It’s a good example of the differences in application of the technology across the continuum of care and the depth of the problem being tackled.

In the previous examples, ambient documentation and in-basket draft replies, we’re really working on a very concise, transactional component of the clinical continuum: the single visit and its associated discussion, or the single message and drafting a response. That’s very contained data.

When we start to think about that broader topic of chart summarization, the sky’s the limit, unfortunately or fortunately, in the problem to be addressed – the depth of data that needs to be understood and, again, extracted from unstructured into structured form.

Really, this is the work we as clinicians do every time we interact with the chart: we move through the chart in various ways, we extract what we feel we need to know, and we re-summarize. It’s a complex task. We are trying to work in the most targeted area: during an inpatient admission, you are essentially more time-bound than in other versions of chart summarization.

In outpatient, you may have to chart summarize 10 years of information depending on why you’re coming to that clinician or your reason for a visit. I had a new patient earlier today. I needed to know everything about their medical history. That’s a massive chart summarization task.

In inpatient, we have an opportunity to create some time bounds around what needs to be summarized. So we’re not even starting with the entirety of everything about the hospitalization – which actually can include the reason for admission, which then can backtrack into the rest of the chart.

Inside of an admission, we have the day-to-day progression of your journey through your hospital stay and interval change. Those are addressed in daily progress notes and in handoffs between clinical teams. And we can narrow down the information to be summarized to the things that change and happen from yesterday to today, even though that’s a lot of potential things – images, labs, notes from the primary team, notes from the consultant, notes from the nursing team.

It is much more time-bound and still injects meaningful efficiency for the inpatient teams, and it certainly addresses a well-known area of risk, which is handoff. Anytime your clinical team changes during your inpatient stay, which is frequent, as we don’t ask clinicians to work 72 hours straight in most cases, we have an opportunity to help support those areas of high-risk handoff.

So, we’re trying to range-bound the problem, and even here in this very range-bound case, there is a lot of work to be done to get a potential tool ready for actual use in the clinical workflow, given, quite frankly, the breadth and depth of data that is available. We just started this discovery journey, working with our EHR partners at Epic, and are looking forward to seeing what might be possible here.
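The "interval change" framing he describes, summarizing only what has happened since the last handoff window and grouping it by data type, can be illustrated with a small sketch. The event structure and the 24-hour window are assumptions for illustration, not how Epic or Johns Hopkins organizes the underlying data.

```python
# Minimal sketch: gather only chart events since the last handoff window and
# group them by type before summarization. Structures here are hypothetical.
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class ChartEvent:
    timestamp: datetime
    category: str  # e.g. "lab", "imaging", "progress_note", "consult_note", "nursing_note"
    text: str


def interval_changes(events: list[ChartEvent],
                     now: datetime,
                     window: timedelta = timedelta(hours=24)) -> dict[str, list[str]]:
    """Return only the events from the last `window`, grouped by category."""
    cutoff = now - window
    grouped: dict[str, list[str]] = defaultdict(list)
    for event in sorted(events, key=lambda e: e.timestamp):
        if event.timestamp >= cutoff:
            grouped[event.category].append(event.text)
    return dict(grouped)
```

The grouped, time-bounded text is what would then be handed to a large language model to draft the day’s hospital-course or handoff summary.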

To watch a video of this interview with BONUS CONTENT not in this story, click here.

Editor’s Note: This is the seventh in a series of features on top voices in health IT discussing the use of artificial intelligence in healthcare. To read the first feature, on Dr. John Halamka at the Mayo Clinic, click here. To read the second interview, with Dr. Aalpen Patel at Geisinger, click here. To read the third, with Helen Waters of Meditech, click here. To read the fourth, with Sumit Rana of Epic, click here. To read the fifth, with Dr. Rebecca G. Mishuris of Mass General Brigham, click here. And to read the sixth, with Dr. Melek Somai of the Froedtert & Medical College of Wisconsin Health Network, click here.

Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki

Email him: [email protected]

Healthcare IT News is a HIMSS Media publication.

Source: Healthcare IT News