NIST publishes new guides on AI risk for developers and CISOs

The US National Institute of Standards and Technology (NIST) this week published four guides designed to give AI developers and cybersecurity professionals a deeper dive into the risks addressed by the organization’s influential 2023 “AI Risk Management Framework” (AI RMF).

Issued in draft form, the documents are the latest building blocks put in place by federal agencies following US President Joe Biden’s October 2023 executive order setting out how the US government will require the tech industry to mitigate different types of AI risk.

Although all make good background reading for decision-makers in tech, the first three cover areas of more acute concern for people in cybersecurity:

Generative AI risks

Drawing on NIST’s generative AI working group, the “AI RMF Generative AI Profile” (NIST AI 600-1) lists 13 risks relating to generative AI, including malware coding, cyberattack automation, the spreading of disinformation, social engineering, AI hallucinations (“confabulation”), and the possibility that generative AI might over-consume resources. The document concludes with 400 recommendations developers can adopt to mitigate these risks.

Malicious training data

An add-on to NIST’s “Secure Software Development Framework” (SSDF), the guide “Secure Software Development Practices for Generative AI and Dual-Use Foundation Models” (NIST Special Publication (SP) 800-218A) is broadly concerned with where AI models get their training data and whether that data, and the model weights derived from it, are open to tampering.

According to NIST, “Some models may be complex to the point that they cannot easily be thoroughly inspected, potentially allowing for undetectable execution of arbitrary code.”
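
SP 800-218A frames controls like these at the practice level rather than as code, but a minimal sketch helps illustrate one of them: verifying the integrity of a model artifact against a checksum published out of band by its provider before loading it. The file path and expected digest below are hypothetical placeholders, not values drawn from the NIST document.

```python
import hashlib
from pathlib import Path

# Hypothetical placeholders: substitute the real artifact path and the
# digest published by the model provider through a trusted channel.
MODEL_PATH = Path("models/foundation-model.bin")
EXPECTED_SHA256 = "replace-with-provider-published-digest"

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large weight files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of(MODEL_PATH) != EXPECTED_SHA256:
    raise RuntimeError("Model weights do not match the published checksum; refusing to load.")
```

Pinning artifacts to out-of-band digests does not prove the weights are benign, but it does detect tampering in transit or within the supply chain, which is the class of risk the guide is concerned with.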

Synthetic Content Risks

Today’s first-generation AI systems can already be used to maliciously synthesize images, sound, and video well enough to be indistinguishable from genuine content. The guide “Reducing Risks Posed by Synthetic Content” (NIST AI 100-4) examines how developers can authenticate, label, and track the provenance of content using technologies such as watermarking.
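
To make the idea concrete, here is a deliberately naive sketch of one watermarking primitive: embedding a bit pattern in the least significant bits of an image. This is illustrative only; the production provenance schemes NIST AI 100-4 surveys rely on far more robust, tamper-resistant techniques.

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, bits: list[int]) -> np.ndarray:
    """Write watermark bits into the least significant bit of the first pixels."""
    marked = pixels.copy().ravel()
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & 0xFE) | bit
    return marked.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, n_bits: int) -> list[int]:
    """Read the watermark bits back out of the least significant bits."""
    return [int(v & 1) for v in pixels.ravel()[:n_bits]]

# Round-trip demo on a random grayscale image.
image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
payload = [1, 0, 1, 1, 0, 0, 1, 0]
stamped = embed_watermark(image, payload)
assert extract_watermark(stamped, len(payload)) == payload
```

The obvious weakness of this naive approach, that any re-encoding or cropping destroys the mark, is one reason the guide also considers complementary measures such as provenance metadata and synthetic content detection.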

A fourth and final document, “A Plan for Global Engagement on AI Standards” (NIST AI 100-5), examines the broader issue of AI standardization and coordination in a global context. This is probably less of a worry now but will eventually loom large. The US is only one, albeit major, jurisdiction; without some agreement on global standards, the fear is that AI might eventually become a chaotic free-for-all.

“In the six months since President Biden enacted his historic Executive Order on AI, the Commerce Department has been working hard to research and develop the guidance needed to safely harness the potential of AI, while minimizing the risks associated with it,” said US Secretary of Commerce Gina Raimondo.

“The announcements we are making today show our commitment to transparency and feedback from all stakeholders and the tremendous progress we have made in a short amount of time.”

NIST guides are likely to become required cybersecurity reading

Once the documents are finalized later this year, they are likely to become important reference points. Although NIST’s AI RMF is not a set of regulations organizations must comply with, it sets out clear boundaries on what counts as good practice.

Even so, assimilating a new body of knowledge on top of NIST’s industry-standard Cybersecurity Framework (CSF) will still be a challenge for professionals, said Kai Roer, CEO and founder of Praxis Security Labs, who in 2023 participated in a Norwegian government committee on ethics in AI.

“CISOs already give lots of attention to NIST cybersecurity regulations, and those with enough resources may also start looking at AI. However, most are unlikely to be able to give it the focus it really needs,” Roer told CSO Online.

When regulation arrives, it will create a new layer of compliance anxiety.

“What keeps them [CISOs] up at night is new regulatory demands that might be impossible to implement.”

This includes the likelihood that employees or supply chain partners will adopt AI for perfectly good reasons but without seeking approval or assessing a project against any rules. All this comes at a time when criminals will surely pounce on AI as a way of improving the automation and scale of attacks.

“CISOs are already playing catchup in many areas, and AI is not going to improve that. However, AI is also likely to present better, more effective tools. The challenge will be to weed out the vaporware and identify the tools and vendors able to provide real value,” Roer said. 
