Artificial intelligence can seem like magic – but it’s crucial to see past the mirage


ORLANDO – Dr. Jonathan Chen began his thought-provoking performance at the HIMSS24 AI in Healthcare Forum on Monday by invoking a famous quote from science fiction titan Arthur C. Clarke: “Any sufficiently advanced technology is indistinguishable from magic.”

In a 21st century where technologies are advancing faster than ever – especially artificial intelligence, in all its forms – it can indeed feel like we’re living in a world of wizardry and illusion, said Chen, assistant professor at the Stanford Center for Biomedical Informatics Research.

“It’s becoming really hard to tell what is real and what is not nowadays,” he said.

To illustrate the point, Chen peppered his audience-participation-heavy demonstrations with some pretty impressive magic tricks involving mystery rope, card guessing and a trick copy of Pocket Medicine, the indispensable reference book for residents doing their rounds.

The sleight of hand was fun, but Chen had a very serious point to make: For all the value it offers, AI – especially generative AI – is fraught with risk if not developed transparently and used with a clear-eyed understanding about its potential risks.

“As a physician, my job is to restore patients back to health all the time. But I’m also an educator. So rather than try to trick you today, I thought it might be more interesting to show you step-by-step how such an illusion is created,” said Chen.

“It’s invisible forces at play,” he said, echoing the black-box concept of machine learning algorithms whose inner workings can’t be gleaned. “Nowadays, in the age of generative AI, what can we believe anymore?”

Indeed. Chen showed a video of a speaker who was the very spitting image of himself. In an ever-so-slightly stilted voice, this person said:

Before we dive in, allow me to introduce myself – although that phrase may take on a surreal meaning today. I’m not the real speaker. Nor did the real speaker write this introduction. The voice you’re hearing, the image you’re seeing on the screen, and even these introductory words were all generated by AI systems.

We are actively amidst the arrival of a set of disruptive technologies that are changing the way all of us do our work and live our lives. These profound capabilities and potential applications could reshape healthcare, offering both new opportunities and ethical challenges.

To make sure we’re still anchored in reality, however, let’s welcome the real-life version of our speaker. Take it away, Dr. Jonathan Chen, before they start thinking I’m the one who went to medical school.

“Whoa,” said the real Dr. Chen. “That was weird.”

No question, hospitals and health systems large and small are finding real and concrete success stories with a wide array of healthcare-focused use cases, from automating administrative tasks to turbocharging patient engagement offerings.

“I certainly hope that one day, hopefully soon, AI systems can manage the overwhelming flood of emails and in-basket messages I’m being bombarded with,” said Chen.

In the meantime, whether they’re “actual practical uses that can save us right now” or dangerous applications that can do harm with misinformation, “the Pandora’s box has been opened, good or bad,” he said. “People are using this for every possible application you can imagine – and many you wouldn’t imagine.”

He recalled a recent conversation with some medical trainees.

“One of them stopped me, said, ‘Wait a minute, we are totally using ChatGPT on ICU rounds right now. Are you saying we should not be using this as a medical reference?’

“I said, ‘No! We should not use this as a medical reference!’ That doesn’t mean you can’t use it at all. But you just have to understand what it is and what it is not.”

But what it is is evolving by the day. If generative LLMs are essentially just autocomplete on steroids, the models “are now demonstrating emergent properties which surprise many in the field, including myself,” said Chen. “Question answering, summarization, translation, generation of ideas, reasoning with a theory of mind – which is really bizarre.

“Although maybe it’s not that bizarre. Because what is all of your intellectual and emotional thought that you prize so deeply? How do you express and communicate that, but through the language and medium of words? So perhaps it’s not that strange that if you have a computer that’s so fast at manipulating words, it can create a very convincing illusion of intelligence.”

It’s crucial, he said, for clinicians to keep an eagle eye out for what he calls confabulation.

“The more popular term is hallucination, but I really don’t like that. It’s not actually a really medically accurate term here, because hallucination implies somebody who believes something that is not true. But these things, they don’t believe anything, right? They don’t think. They don’t know. They don’t understand. What they do is they string together words in a very believable sequence, even if there’s no underlying meaning. That is the perfect description of confabulation.

“Imagine if you were working with a medical student who is super book-smart, but who also just made up facts as you went on rounds. How dangerous would that be for patient care?”

Still, it’s becoming apparent that “we are converging upon a point in history where – human versus computer-generated content, real versus fabricated information – you can’t tell the difference anymore.”

What’s more, the technology may actually be getting more empathetic – or, of course, getting a lot better at making it appear that it is. Chen cited a recent study by some of his colleagues at Stanford that got a lot of attention this past year.

“They took a bunch of medical questions on Reddit where real doctors answered these questions, and then they fed those same questions through chatbots. And then they had a separate set of doctors grade those answers in terms of their quality on different levels, and found that the chatbot-generated answers scored higher, both in terms of quality and in empathy. Like, the robot was nicer to people than real doctors were!”

That and other examples “tell us that I don’t think we as humans have as much of a monopoly on empathy and therapeutic relationship as we might like to believe,” said Chen, who has written extensively on the topic.

“And for better and for worse, I fully expect that in the not-so-distant future, more people are going to receive therapy and counseling from automated robots than from actual human beings. Not because the robots are so good and humans are not good enough – but because there’s an overwhelming imbalance between supply and demand: the patients and people who need these types of support, and a human-driven healthcare workforce that can never keep up with that total demand.”

Still, there will always, always be a central need for humans in the healthcare equation.

Chen closed with another quote, from healthcare IT and informatics pioneer Warner Slack: “Any doctor who could be replaced by a computer should be replaced by a computer.”

“A good human, a good doctor, you cannot replace them no matter how good a computer ever gets,” said Chen. “Am I worried about a computer replacing my job? I’m totally not.”

What concerns him is a generation of physicians “burned out by becoming data entry clerks” and by the “overwhelming need of tens of millions of patients” in the U.S. alone.

“I hope computers and AI systems will help take over some work so we can get some joy back in our work,” he said. “While AI is not going to replace doctors, those who learn how to use AI may very well replace those who do not.”

Mike Miliard is executive editor of Healthcare IT News.

Email the writer: [email protected]

Healthcare IT News is a HIMSS publication.

