DHS intros framework for AI safety and security in healthcare and elsewhere

The U.S. Department of Homeland Security on Thursday published a new set of actionable recommendations to help promote safe and secure development and deployment of artificial intelligence across all U.S. critical infrastructure, including healthcare and public health.

WHY IT MATTERS

The document was developed in consultation with numerous stakeholders across the public and private sectors, said Homeland Security Secretary Alejandro N. Mayorkas during a conference call. It is meant to align with the White House executive order on AI issued by President Biden a year ago, and to serve as a “living document” that can help guide AI use into the next administration.

DHS is charged with protecting the “methods by which Americans power their homes and businesses, make financial transactions, share information, access and deliver healthcare and put food on the table,” according to the new framework, Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure.

“As the entities that own and operate these critical infrastructure systems increasingly adopt AI, it is the Department’s duty to understand, anticipate, and address risks that could negatively affect these systems and the consumers they serve.”

The Department of Homeland Security identifies 16 critical infrastructure sectors it considers vital to domestic and global safety and stability, national economic security, and national public health or safety, “or any combination thereof.”

Healthcare and Public Health is among those key sectors.

“These sectors are increasingly deploying AI to improve the services they provide, build resilience, and counter threats,” according to the new framework. But “these uses do not come without risk, and vulnerabilities introduced by the implementation of this technology may expose critical systems to failures or manipulation by nefarious actors. Given the increasingly interconnected nature of these systems, their disruption can have devastating consequences for homeland security.”

The White House executive order of October 2023 directed the DHS secretary to convene a board to advise him and other public- and private-sector stakeholders on the safe and secure development and use of AI in the nation’s critical infrastructure.

Sec. Mayorkas first gathered that board – which includes leaders from OpenAI, Anthropic, AWS, IBM, Microsoft, Alphabet, Northrop Grumman and others – this past May.

Other members of the board include the Center for Democracy and Technology, the Leadership Conference on Civil and Human Rights, the Stanford Institute for Human-Centered Artificial Intelligence, the Brookings Institution and other leaders at the local and state level.

Members identified several areas of concern, such as the lack of common approaches to AI deployment, physical security flaws, and a reluctance to share information within industries.

The framework, which complements and advances existing guidance from the White House, the AI Safety Institute, the Cybersecurity and Infrastructure Security Agency and others, was developed with the layers of the AI supply chain in mind, from cloud and compute providers to developers, along with critical infrastructure owners and operators.

The Artificial Intelligence Safety and Security Board helped identify three primary categories of AI safety and security vulnerabilities in critical infrastructure: attacks using AI, attacks targeting AI systems, and design and implementation failures.

To address these vulnerabilities, the framework directs the following recommendations to key stakeholders along the AI supply chain, according to the Department of Homeland Security:

Cloud and compute infrastructure providers play an important role in securing the environments used to develop and deploy AI in critical infrastructure, from vetting hardware and software suppliers to instituting strong access management and protecting the physical security of data centers powering AI systems. The Framework encourages them to support customers and processes further downstream of AI development by monitoring for anomalous activity and establishing clear pathways to report suspicious and harmful activities.

AI developers develop, train, and/or enable critical infrastructure to access AI models, often through software tools or specific applications. The Framework recommends that AI developers adopt a Secure by Design approach, evaluate dangerous capabilities of AI models and ensure model alignment with human-centric values. The Framework further encourages AI developers to implement strong privacy practices; conduct evaluations that test for possible biases, failure modes and vulnerabilities; and support independent assessments for models that present heightened risks to critical infrastructure systems and their consumers.

Critical infrastructure owners and operators manage the secure operations and maintenance of key systems, which increasingly rely on AI to reduce costs, improve reliability and boost efficiency. They are looking to procure, configure and deploy AI in a manner that protects the safety and security of their systems. The Framework recommends a number of practices focused on the deployment level of AI systems, including maintaining strong cybersecurity practices that account for AI-related risks, protecting customer data when fine-tuning AI products, and providing meaningful transparency regarding their use of AI to provide goods, services or benefits to the public. The Framework encourages critical infrastructure entities to play an active role in monitoring the performance of these AI systems and to share results with AI developers and researchers to help them better understand the relationship between model behavior and real-world outcomes.

Civil society, including universities, research institutions and consumer advocates engaged on issues of AI safety and security, is critical to measuring and improving the impact of AI on individuals and communities. The Framework encourages civil society’s continued engagement on standards development alongside government and industry, as well as research on AI evaluations that considers critical infrastructure use cases. The Framework envisions an active role for civil society in informing the values and safeguards that will shape AI system development and deployment in essential services.

Public sector entities, including federal, state, local, tribal and territorial governments, are essential to the responsible adoption of AI in critical infrastructure, from supporting the use of this technology to improve public services to advancing standards of practice for AI safety and security through statutory and regulatory action. The United States is a world leader in AI; accordingly, the Framework encourages continued cooperation between the federal government and international partners to protect all global citizens, as well as collaboration across all levels of government to fund and support efforts to advance foundational research on AI safety and security.

“In his executive order on artificial intelligence, President Biden directed me to form a board that focused on the safe and secure deployment of AI in critical infrastructure,” said Sec. Mayorkas during a conference call on Nov. 15.

“I sought members who are leaders in their fields and who collectively would represent each integral part of the ecosystem that defines AI’s deployment in critical infrastructure. We have, in fact, assembled such a board, composed of leaders of cloud and compute infrastructure providers, AI model developers, critical infrastructure owners and operators, civil society, and the public sector.

“We believe the safety and security of our critical infrastructure is a shared responsibility,” he added. “The framework, if widely adopted, will go a long way to better ensure the safety and security of critical services that deliver clean water, consistent power, internet access, and so much more.”

Mayorkas said the new framework is “the first such product created through extensive collaboration with such a board, a broad, diverse set of stakeholders involved in the development and deployment of AI in our nation’s critical infrastructure.”

He added: “It is, quite frankly, exceedingly rare to have leading AI developers engaged directly with civil society on issues that are at the forefront of today’s AI debates and present such a collaborative framework.

“Second, it presents a new model of shared and separate responsibilities. It doesn’t focus on only one segment but instead is encompassing. It identifies specific recommendations that each participant in the ecosystem can and should implement to ensure the safe and secure deployment of AI in critical infrastructure.”

While the framework is “descriptive and not prescriptive,” said Mayorkas, “it is far more detailed and inclusive than voluntary commitments previously made. We are extremely proud of the board’s support for and endorsement of the framework. It is the product of close collaboration, and it will have a long-standing positive impact on AI safety and security as member organizations implement it, catalyze others to adopt it, and thereby advance our common interest by fulfilling our respective roles and responsibilities.”

Mayorkas noted that the industry stakeholders on the board were helpful in ensuring that the guidelines are relevant, practical and actionable.

“This is not a document that advances theories,” he said. “This is a document that provides practical guidance that can and should be implemented to advance safety and security.”

At DHS, said Mayorkas, “pilot projects have demonstrated tremendous AI capabilities to advance our mission. We are taking those pilots and actually integrating the AI successes into our operations. We are deploying AI to advance our mission, number one.”

Meanwhile, it was noted more than once during the call that just two months remain of the Biden White House before a new president takes charge.

“I, of course, cannot speak to the incoming administration’s approach to the board that we have assembled,” said Mayorkas. “I certainly hope it persists. It’s an extraordinary gathering of critical stakeholders across the ecosystem. But this framework will endure.”

THE LARGER TREND

At the recent HIMSS Healthcare Cybersecurity Forum, Greg Garcia, executive director of the Health Sector Coordinating Council Cybersecurity Working Group, explained how HSCC – along with the other sector coordinating councils across CISA – is working to help healthcare organizations be stronger and better prepared “against a flexible and resilient adversary.”

Across the healthcare ecosystem, health systems and public health agencies need to “mobilize ourselves” against cyber threats that are getting more sophisticated by the day, said Garcia, with the bad guys honing their social engineering exploits with artificial intelligence and becoming bolder and more relentless.

Healthcare IT News has written and reported extensively about the intersection of AI, security and safety, including new assessment tools and frameworks from NIST and CHAI. 

Meanwhile, major healthcare stakeholders have pledged to comport with the Biden EO on “Fair, Appropriate, Valid, Effective and Safe” use of artificial intelligence.

ON THE RECORD

“The choices organizations and individuals involved in creating AI make today will determine the impact this technology will have in our critical infrastructure tomorrow,” said Mayorkas in a press statement. “I am grateful for the diverse expertise of the Artificial Intelligence Safety and Security Board and its members, each of whom informed these guidelines with their own real-world experiences developing, deploying, and promoting the responsible use of this extraordinary technology.”

“As we move into the AI era, our foremost responsibility is ensuring these technologies are safe and beneficial,” said NVIDIA CEO Jensen Huang, who served on the AI board, in a statement. “The DHS AI Framework provides guiding principles that will help us safeguard society, and we support this effort.”

Mike Miliard is executive editor of Healthcare IT News.

Email the writer: mike.miliard@himssmedia.com
Healthcare IT News is a HIMSS publication.

