Some members of Congress are asking the U.S. Department of Health and Human Services to back away from a years-long effort to establish government-administered artificial intelligence assurance labs and from creating an AI assurance lab model in partnership with industry.
“We are writing to express our significant concerns with the potential role of assurance labs in the regulatory oversight of artificial intelligence technologies, and how this will lead to regulatory capture and stifle innovation,” Reps. Dan Crenshaw, R-Texas, Brett Guthrie, R-Ky., Jay Obernolte, R-Calif., and Dr. Mariannette Miller-Meeks, R-Iowa, said in a letter addressed to Micky Tripathi, acting chief AI officer at HHS.
WHY IT MATTERS
With deregulation a priority for the incoming Trump Administration in 2025, the Republicans say they are concerned about how AI in healthcare will be steered.
In writing to Tripathi, who also serves as Assistant Secretary for Technology Policy and National Coordinator for Health IT, the representatives asked for clarification on the overarching objectives of the agency’s reorganization, according to a story in Politico on Monday.
Part of a larger technology restructuring effort by HHS, the new ASTP – formerly the Office of the National Coordinator for Health Information Technology – announced in July that it would take on expanded responsibilities, including oversight of healthcare AI, along with new staff and additional funding.
The letter also calls into question ASTP/ONC’s statutory authority and role in the overall healthcare system in creating assurance labs to supplement the U.S. Food and Drug Administration’s review of AI tools, and suggests that doing so would create significant conflicts of interest.
“We are particularly troubled by the possible creation of fee-based assurance labs which would be comprised of companies that compete,” the representatives said, adding that larger, incumbent tech companies could gain an unfair competitive advantage in the industry and negatively impact innovation.
The representatives included eleven questions and requested responses by December 20.
A spokesperson for ASTP told Healthcare IT News by email that the agency is unable to comment on the letter at this time. The Coalition for Health AI (CHAI) has not responded to our request for comment, but this story will be updated if one is provided.
THE LARGER TREND
One of the letter’s signers, Rep. Miller-Meeks, had previously asked the FDA’s then-director of the Center for Devices and Radiological Health about CHAI and its members.
During a House Energy and Commerce Health Subcommittee hearing on the agency’s regulation of drugs, biologics and medical devices, Guthrie, as subcommittee chair, said during opening remarks that several regulatory missteps have caused “uncertainty among innovators.”
Miller-Meeks specifically asked if the FDA would outsource certification to the coalition. She noted that Google and Microsoft are founding members, while Mayo Clinic, which she said has more than 200 AI deployments, employs some of the coalition’s leaders.
“It does not pass the smell test,” she said at the time, adding that the arrangement shows “clear signs of attempt at regulatory capture.”
CHAI, which unveiled standards for healthcare AI transparency in line with those in ASTP’s requirements for certifying health IT, said a long-awaited AI nutrition label would be coming soon.
Earlier this year at HIMSS24, Dr. John Halamka, president of Mayo Clinic Platform, addressed the substantial potential benefits and real potential harms of predictive and generative AI used in clinical settings.
“Mayo has an assurance lab, and we test commercial algorithms and self-developed algorithms,” he said in March.
“And what you do is you identify the bias and then you mitigate it. It can be mitigated by retuning the algorithm to different kinds of data, or just an understanding that the algorithm can’t be completely fair for all patients. You just have to be exceedingly careful where and how you use it.”
Since its founding in 2021, CHAI says it has worked to deliver AI transparency and create guidelines and guardrails that address algorithmic bias in healthcare – accounting for government concerns and building on the White House’s AI Bill of Rights and NIST’s AI Risk Management Framework – and to support AI assurance as laid out in President Joe Biden’s executive order on AI, which directs HHS to establish a safety program.
ON THE RECORD
“The ongoing dialogue around AI in healthcare must consider the distinct authorities and duties of various agencies and offices to prevent overlapping responsibilities, which can lead to confusion among regulated entities,” the four Republican members of Congress said in their letter.
Andrea Fox is senior editor of Healthcare IT News.
Email: afox@himss.org
Healthcare IT News is a HIMSS Media publication.