What genAI and a renewed interest in NLP mean for healthcare

When ChatGPT demonstrated its ability to respond to plain-English questions, it marked a significant milestone in AI development. Yet despite that milestone, and more than 700 FDA-approved AI applications, adoption in healthcare remains limited.

Dr. Ronald Razmi believes generative AI has the potential to revolutionize healthcare but advises caution, noting the essential need for real-world performance validation. Razmi is the author of “AI Doctor: The Rise of Artificial Intelligence in Healthcare” and cofounder and managing director of Zoi Capital.

We interviewed the physician for insights on everything from how generative AI can accelerate medical research to how another form of AI, natural language processing, can help extract narrative data to better analyze clinician reports.

Q. Healthcare has been slow to adopt AI, even amid the industry-wide explosion of interest that followed the introduction of generative AI like that used in ChatGPT. What does adoption of all types of AI look like in healthcare now?

A. A close examination of the short history of AI in healthcare indeed shows many of the systems launched to date have not gained significant traction. This includes systems in radiology, pathology, administrative workflows, patient navigation and more.

The reasons for this are many and complex, but the lessons from the first decade of digital health teach us there are business, clinical and technical barriers that can slow or prevent adoption of these technologies.

For an AI system to successfully gain traction, it needs to solve a mission-critical use case, receive complete and timely data in the real world, and fit within existing workflows. Many of the systems launched to date have struggled to check all of these boxes.

Since the launch of large language models and generative AI, the capabilities of natural language processing (NLP), a branch of AI, have greatly improved. This creates opportunities for AI to tackle a whole new set of use cases such as documentation, prior authorization workflows, decision support in various forms and more.

Some of these use cases have been works in progress for years, but a huge leap in NLP capabilities now makes them more feasible. While the use cases are countless and the potential benefits will ultimately be real and impactful, it is important to carefully monitor the real-world performance of these systems and declare victory only after they have shown reliable, consistent results to the satisfaction of users.

As has historically been the case with digital health systems, many of the use cases will not see short-term uptake, due either to reimbursement issues or to buyers spending their limited technology budgets on higher-priority issues.

Today we’re seeing pilots and launches of generative AI technologies in healthcare that address administrative and operational use cases such as copilots for documentation, clinical coding, prior authorization, resource management and more. These use cases are lower risk than clinical use cases and hold the promise of short-term benefits for users and clear ROI for buyers.

Whether these applications perform up to expectations remains to be seen, but the initial results are very promising. The clinical applications in radiology and other specialties will take longer to see widespread adoption, since larger-scale clinical trials to establish patient outcome benefits and safety have yet to be done. Payers will also use these studies to decide which clinical AI applications they will reimburse.

Q. You advise caution when it comes to the realm of generative AI, noting the essential need for real-world performance validation. Please elaborate.

A. All technologies used in the practice of medicine need to establish their efficacy and safety, and AI technologies are no exception. Generative AI is in its early stages, and we know “hallucinations” are a real issue that can compromise the quality of the output of solutions that use generative AI.

The issue is that when you rely on ChatGPT, the “fake” answers can look identical to correct responses, so the user may not know what’s real and what’s fake. This presents serious challenges to the use of generative AI, in its current form, for clinical applications.

It is possible that, over time, large language models built only on high-quality medical information will address this issue. Until then, caution needs to be exercised for these types of applications.

While operational and administrative use cases are lower risk and do not necessarily need to be validated in large clinical trials, their outputs still need to be validated to an acceptable level of performance before use.
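
To make that concrete, here is a minimal sketch of what validating outputs to an acceptable level of performance can look like for an administrative tool such as a coding assistant. The helper function, the example billing codes and the 0.95 threshold are illustrative assumptions, not any vendor’s actual acceptance process; the point is simply that outputs get scored against clinician-reviewed gold labels before rollout.

```python
# A minimal, hypothetical pre-deployment check: score an AI coding
# assistant's outputs against clinician-reviewed gold labels and gate
# rollout on an agreed threshold. All values are illustrative.

def precision_recall(predicted: set[str], gold: set[str]) -> tuple[float, float]:
    """Precision and recall of predicted items against a gold standard."""
    if not predicted or not gold:
        return 0.0, 0.0
    true_positives = len(predicted & gold)
    return true_positives / len(predicted), true_positives / len(gold)

# Billing codes the assistant produced for three encounters, alongside
# the codes a clinician reviewer approved.
cases = [
    ({"E11.9", "I10"}, {"E11.9", "I10"}),  # perfect agreement
    ({"E11.9", "J45.909"}, {"E11.9"}),     # one false positive
    ({"I10"}, {"I10", "N18.3"}),           # one missed code
]

precisions, recalls = zip(*(precision_recall(p, g) for p, g in cases))
avg_p, avg_r = sum(precisions) / len(cases), sum(recalls) / len(cases)
print(f"precision={avg_p:.2f} recall={avg_r:.2f}")

# Deploy only if both metrics clear the bar the clinical team agreed on.
THRESHOLD = 0.95
print("ready for pilot" if min(avg_p, avg_r) >= THRESHOLD else "needs more work")
```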

For example, one of the most coveted applications of AI in healthcare is clinical documentation. When I was a practicing physician, much of my time was spent on clinical documentation and administrative tasks; they were among the least enjoyable parts of my job. If AI can offload some or much of this from clinical staff, it will create significant value and improve their job satisfaction.

For years, there has been a significant push to use AI for this, and while the results were promising, they were not good enough to drive widespread adoption. Now, with generative AI, companies like Suki and Abridge are tackling this use case, and the early results suggest these systems may have reached a level of proficiency that could lead to everyday use.
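
As a rough illustration of the general shape of such an ambient documentation tool, and not Suki’s or Abridge’s actual implementation, the sketch below uses a canned transcript and a naive keyword router in place of real speech-to-text and generative models. The design point worth noting is that the draft is never filed automatically; it always awaits clinician sign-off.

```python
# A simplified, hypothetical sketch of an ambient-documentation pipeline:
# transcribe the visit, draft a structured note, require clinician sign-off.
# transcribe() and draft_note() are toy stand-ins, not any vendor's API.

def transcribe(audio_path: str) -> str:
    # Stand-in: a real system would run a speech-to-text model here.
    return ("Patient reports two weeks of dry cough. No fever. "
            "Lungs clear on exam. Plan: supportive care, follow up in two weeks.")

def draft_note(transcript: str) -> dict:
    # Stand-in: a real system would prompt a generative model to draft a
    # SOAP-style note; here we naively route sentences by keyword.
    note = {"subjective": [], "objective": [], "plan": []}
    for sentence in transcript.rstrip(".").split(". "):
        if sentence.lower().startswith("plan"):
            note["plan"].append(sentence)
        elif "exam" in sentence.lower():
            note["objective"].append(sentence)
        else:
            note["subjective"].append(sentence)
    return note

def document_visit(audio_path: str) -> dict:
    note = draft_note(transcribe(audio_path))
    # The draft is never filed automatically: the clinician reviews,
    # edits and signs it before it enters the record.
    note["status"] = "draft_pending_clinician_review"
    return note

print(document_visit("visit_recording.wav"))
```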

The risk of declaring victory for any technology, including generative AI, before it has been tested in enough settings for a reasonable period of time is that, if these products end up disappointing, users can become disillusioned and resistant to trying future iterations. We have seen this with AI already.

For the three years that I was writing my recent book, “AI Doctor: The Rise of Artificial Intelligence in Healthcare,” I spoke with clinicians and researchers who had tried the initial wave of AI systems in radiology and clinical research.

Many of the radiology systems underperformed in real-world settings, with too many false positives, and the clinical trial patient identification systems surfaced too many unrelated patients. Given these lessons, we should push hard to maximize the use of generative AI to create the next wave of health AI systems, but rigorously validate each system’s performance before making it available for widespread use.

Q. You’re keen on natural language processing, another form of AI. How can NLP assist in, for example, extracting narrative data to better analyze clinician reports?

A. Modern AI is based on machine learning. Deep learning is a subset of machine learning that has significant capabilities in analyzing large amounts of data to find patterns and make predictions. Deep learning is the basis for large language models and generative AI. More than anything else, LLMs have improved natural language processing.

This is important since previous versions of NLP in healthcare have severely underperformed. There are a variety of reasons for this, but key among them are that there is no accepted standard for clinical notes, and that notes are full of acronyms and specialty-specific jargon.

Given that more than 80% of healthcare data is unstructured and in narrative format, AI would have limited success in healthcare if it could not tap into this data and use it to inform its output. The renewed excitement around NLP is due to the incredible capabilities of LLMs and the enthusiasm that we can finally start to seriously analyze narrative data from clinical notes and the medical literature and extract key insights.
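
As a minimal sketch of what that extraction can look like, the hypothetical snippet below uses a canned llm() placeholder in place of a real model endpoint and adds a simple grounding check: any extracted value that does not appear verbatim in the source note is rejected, the hallucination guard discussed earlier.

```python
import json

# A minimal, hypothetical sketch of turning a narrative clinician note
# into structured fields with an LLM. llm() returns a canned response so
# the sketch runs end to end; a real system would call a model endpoint.

def llm(prompt: str) -> str:
    # Placeholder for a real model call; values are illustrative.
    return json.dumps({
        "medications": ["lisinopril 10 mg daily"],
        "diagnoses": ["hypertension"],
        "follow_up": "return in 3 months",
    })

def extract_fields(note: str) -> dict:
    prompt = ("Extract medications, diagnoses and follow-up plans from the "
              "note below as JSON, quoting the note verbatim.\n\n" + note)
    fields = json.loads(llm(prompt))
    # Grounding check: reject any value not found verbatim in the note,
    # since plausible-looking fabrications are the failure mode to catch.
    for key, values in fields.items():
        values = [values] if isinstance(values, str) else values
        for value in values:
            if value.lower() not in note.lower():
                raise ValueError(f"ungrounded output for {key!r}: {value!r}")
    return fields

note = ("Pt with hypertension, well controlled on lisinopril 10 mg daily. "
        "BP 128/82 today. Return in 3 months for recheck.")
print(extract_fields(note))
```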

While we’re in the early days of LLMs, their surprising capabilities open up a world of possibilities previously unimaginable. For example, some experts feel there are mountains of insights hidden inside the existing medical literature that can be discovered using LLMs.

Some of the most anticipated applications of AI, such as chatbots that help patients navigate their health, home voice support using smart speakers, and clinical notes created by listening to the doctor-patient encounter, are only possible with reliable NLP.

Investing in developing these applications using NLP based on LLMs will mean the dream of providing proactive care to most people on an ongoing basis can become a reality. Currently, this is not possible, as we do not have enough human resources to provide that kind of care at scale. Only with the help of technology, including NLP, will we be able to usher in better care delivery and discover the next generation of diagnostics and therapeutics.

Q. You are the author of the book “AI Doctor.” It certainly has a compelling title. Please talk a bit about the thesis of your book.

A. I wrote “AI Doctor” to provide a 360-degree view of what it will take to accelerate the adoption of this transformative technology in healthcare. In the first few years of AI in healthcare, significant attempts and investments have been made to build and commercialize the first wave of health AI systems.

Unfortunately, almost a decade and billions of dollars later, we don’t see widespread adoption of this technology. In fact, a recent survey showed 76% of healthcare workers, including doctors and nurses, indicated they have never used AI in their jobs. Where is the disconnect? For any digital technology to gain traction in healthcare, it needs to satisfy a number of requirements.

These include business, technical and clinical factors that need to be carefully navigated. For example, institutions that buy these technologies look for systems that will provide immediate ROI to their bottom lines, and they have limited budgets each year for new technologies.

So, if your technology does not improve their short-term financial performance, it will not be a top priority for them, even if you have AI in your name. Another issue is the availability of data in a reliable and consistent manner in the real world. Healthcare data is fragmented and often contains errors.

AI systems are useless without a reliable flow of high-quality data. Also, clinical and research workflows have been established over long periods of time and will not easily change to accommodate new technologies. As such, only well-designed systems that fit within these workflows have a good chance of adoption. If any of these elements are missing from a health AI system, it is highly doubtful it will see significant adoption.

In this book, I lay out a set of frameworks for users, buyers, entrepreneurs and investors to consider as they embark on their health AI journey. These frameworks allow careful analysis of the factors above for any AI product, to assess whether it will be able to provide value and navigate the barriers that have kept many other AI products from succeeding.

One of the issues that keeps impressive-sounding AI products from achieving more success is the lack of the cross-functional expertise required to build a winner. Data scientists know how to build algorithms but may not understand healthcare business models or workflows. Clinicians understand workflows but generally don’t know data science or how to build companies.

As a clinician who has training in computer and data science, and who has built and commercialized digital technologies, I have a unique perspective. I have been on all sides of this and appreciate how much analysis and forethought are required to build an AI product that will move the needle.

AI is very well suited to many of the issues in healthcare, such as resource shortages, the slow pace of research, inefficiencies and more. As such, I’ve tried to use my experience to make a contribution for everyone working hard to use AI to address these issues. If people have the right frameworks and create AI products that have a better chance at adoption, we will see the enormous potential of this technology in healthcare materialize sooner, and we will all benefit from that.

Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki

Email him: bsiwicki@himss.org

Healthcare IT News is a HIMSS Media publication.
