6 security items that should be in every AI acceptable use policy

An AI acceptable use policy (AI AUP) serves as a foundational component of an organization’s security framework, helping to mitigate risks and promote the responsible use of AI technologies.

Broadly speaking, an AI acceptable use policy is a set of rules and guidelines that govern the responsible, ethical, and effective use of artificial intelligence technologies. It outlines acceptable behaviors, practices, and procedures related to developing, implementing, and using AI systems.

The primary purpose of an AI AUP is to ensure that AI technologies are used in a manner that aligns with the goals, values, and legal requirements of an organization while maximizing the benefits and minimizing the risks of the technology.

Why is an AI AUP important for business security?

By defining acceptable behavior, data handling practices, and security protocols, an AI acceptable use policy helps mitigate risks, such as data breaches, malicious use of AI algorithms, and unauthorized access.

Additionally, this policy promotes accountability and ensures that AI technologies are used ethically and responsibly, safeguarding sensitive information and maintaining trust among stakeholders. A well-crafted AI AUP not only protects against potential threats but also fosters a culture of awareness and compliance regarding AI security measures.

Every organization should have an AI AUP that applies to all of its members and covers information handling, confidentiality, privacy, and threat management, says James Robinson, CISO of Netskope.

“The key implication of an AI AUP policy for security boils down to protecting a company’s information and services,” he says.

Jeff Pollard, principal analyst at Forrester, agrees: without an AI AUP, an organization leaves employees to explore and use various AI technologies and company data in whatever ways they see fit.

An AI AUP helps reduce the risk of breaches or misuse of information

“The AI AUP policy is designed to give clarity as to the guardrails, as to the permitted situations, and as to the unpermitted or impermissible uses of the AI technologies,” he says. “But once you have the policy then you give them specifics on what they can and can’t do. But it also guides part of your strategy and controls that you’re going to implement to give you the right levels of visibility into the environment to police it.”

And without clear guidelines around AI use, businesses expose themselves to risks of unintentional as well as malicious breaches and misuse of confidential information, according to JB Baker, vice president of product at ScaleFlux.

“Coming at this from ScaleFlux, which develops semiconductor components and firmware for servers and storage, protecting our intellectual property is crucial to our success,” he says. “Hence, having strict policies to control what data and documents may be used in training AI is critical, particularly considering that AI-training hardware may be hosted elsewhere.”

David Lee, founder and CEO of The Identity Jedi, adds that an AI AUP is critical for security because AI is everywhere and moving so fast that it's hard to keep up with what data is being used where.

Six security items that should be in every AI AUP

The number of security items in an AI acceptable use policy can vary depending on factors such as the nature of the AI system, the industry it operates in, regulatory requirements, organizational policies, and the specific risks associated with using AI.

However, a comprehensive AI acceptable use policy should typically include these key security items:

1. Protection of sensitive data

Corporate policies need to include a security item that addresses how the AI system uses sensitive data. By doing so, organizations can promote transparency, accountability, and trust in their AI practices while safeguarding individuals' privacy and rights.

“So if an AI system is being used to assess whether somebody is going to be getting insurance, or healthcare, or a job, that information needs to be used carefully,” says Nader Henein, research vice president, privacy and data protection at Gartner.

Companies need to ask what information is going to be given to those AI systems, and what kind of care is going to be taken when they use that data to make sensitive decisions, Henein says.

The AI AUP needs to establish protocols for handling sensitive data to safeguard privacy, comply with regulations, manage risks, and maintain trust with users and others. These protocols ensure that sensitive data, such as personal information or proprietary business data, is protected from unauthorized access or misuse.

“There are severe security risks due to the fact that you can generate sensitive content while using an AI based upon what seeds the data that feeds [the AI],” says Ryan O’Leary, research director, privacy and legal technology at IDC. “You could potentially leak private information. For example, if I’m taking a customer list and putting it into an AI tool to get some insights, I’m potentially leaking the personal information of all those people.”
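To make the customer-list risk concrete: one common safeguard is to strip or mask sensitive values before a prompt ever leaves the organization. The following is a minimal Python sketch of that idea; the regex patterns and the redact_pii helper are illustrative assumptions, and a production deployment would rely on a dedicated DLP or PII-detection service rather than hand-rolled patterns.

```python
import re

# Illustrative patterns only; real deployments would use a dedicated
# DLP/PII-detection service (these regexes miss names, addresses, etc.).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace anything matching a known PII pattern with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

prompt = (
    "Summarize churn risk for these customers: "
    "Jane Doe, jane.doe@example.com, 555-867-5309"
)
print(redact_pii(prompt))
# Summarize churn risk for these customers:
# Jane Doe, [REDACTED-EMAIL], [REDACTED-PHONE]
```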

2. Access controls

Access controls are guidelines for controlling who has access to the AI system, including user authentication mechanisms, authorization levels, and procedures for granting and revoking access rights.

“To prevent data breaches and system misuse, companies have to limit access to sensitive information and system components to those who need it,” says Ron Hawkins, director of industry relations at the Security Industry Association.
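In practice, limiting access often starts with a deny-by-default permission map keyed by role. The sketch below is illustrative only; the role names, actions, and ROLE_PERMISSIONS table are assumptions, not taken from any product or organization mentioned here.

```python
# Minimal role-based access control sketch for an internal AI service.
# In a real deployment these grants would come from the identity provider.
ROLE_PERMISSIONS = {
    "analyst": {"query_model"},
    "ml_engineer": {"query_model", "upload_training_data"},
    "admin": {"query_model", "upload_training_data", "manage_access"},
}

def is_authorized(role: str, action: str) -> bool:
    """True only if the user's role explicitly grants the requested action."""
    return action in ROLE_PERMISSIONS.get(role, set())

# Revoking access means removing the grant; deny-by-default covers the rest.
assert is_authorized("ml_engineer", "upload_training_data")
assert not is_authorized("analyst", "upload_training_data")
assert not is_authorized("contractor", "query_model")  # unknown role -> denied
```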

3. Compliance with regulatory requirements

Organizations must also ensure that their use of AI aligns with industry-specific regulations and standards related to security and privacy, such as the Health Insurance Portability and Accountability Act (HIPAA) in the U.S., as well as relevant laws, such as the European Union's General Data Protection Regulation (GDPR).

“Privacy laws, such as the European Union’s GDPR, often deem the use of AI to be a ‘high-risk’ activity that requires special precautions,” Hawkins says. “Having an AI policy that includes this protects against misuse and ensures compliance with applicable laws and industry regulations.”

4. User training and awareness

Organizations need to include a security item in their AI AUPs that provides information about training programs and resources to educate users about security best practices, potential risks, and their roles in maintaining a secure AI environment. This promotes a culture of security awareness and accountability across the organization.

“There needs to be an effort to educate and inform the workforce in terms of being aware of the risks,” O’Leary says. “[Employees] need to understand the models that underlie the AI that their organizations are using because no two models are created the same.”

Pollard says that companies have to educate employees as to what scenarios are permitted or not permitted.

For example, the security item might spell out what acceptable use looks like, as well as which products and services employees can use and the acceptable use cases for them, he says.

“An example would be that a company might have a policy that says employees can’t use a public source of AI, such as OpenAI or ChatGPT,” Pollard says. “An organization might say employees can’t put company data into the public instance of ChatGPT. However, if you’ve supplied a private instance of ChatGPT, then you might permit employees to share corporate data with that.”
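A policy like the one Pollard describes could be enforced technically with a destination allowlist at a proxy or inside a client wrapper. The sketch below uses placeholder hostnames (chatgpt.internal.example.com is hypothetical, not a real private instance); only the allow/block/review logic is the point.

```python
from urllib.parse import urlparse

# Hypothetical policy table: a private, company-managed instance is
# permitted for corporate data; public AI services are not.
APPROVED_AI_HOSTS = {"chatgpt.internal.example.com"}
BLOCKED_AI_HOSTS = {"chat.openai.com", "chatgpt.com"}

def check_destination(url: str) -> str:
    """Classify an outbound AI request against the AUP allowlist."""
    host = urlparse(url).hostname or ""
    if host in APPROVED_AI_HOSTS:
        return "allow"
    if host in BLOCKED_AI_HOSTS:
        return "block: public AI service, not approved for company data"
    return "review: unknown AI destination, route to security team"

print(check_destination("https://chatgpt.internal.example.com/v1/chat"))  # allow
print(check_destination("https://chat.openai.com/"))                      # block
```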

5. Reporting violations

An AI AUP should also include clear guidelines for reporting violations. This security item should outline the process for reporting any breaches, misuse, or suspicious activities related to the AI system.

“So if you see someone not following the policy, or you’ve experienced a situation where the policy is not being followed, what is the appropriate process to notify someone about it? You want to direct employees to the escalation path in the event that a policy violation occurs,” Pollard says.

6. Requesting new tech with AI capabilities

This security item would describe how departments submit requests for new technologies. The policy would state that business units must submit requests to the appropriate person; each request would then undergo a thorough security review to ensure the technology aligns with the company's security standards.

This review would include a comprehensive risk assessment detailing potential security vulnerabilities and strategies to mitigate those vulnerabilities.

“So for example, if you are in the marketing department and you want to adopt a new technology that has AI capabilities, you want to ensure that within the [AI AUP] policy there is a mechanism to seek approval.”
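To illustrate what such an approval mechanism might capture, here is a hypothetical intake record; every field name and value below is an assumption about what a review workflow could track, not a prescribed format.

```python
from dataclasses import dataclass, field

# Hypothetical intake record for an AI technology request.
@dataclass
class AIToolRequest:
    requesting_unit: str           # e.g., "marketing"
    tool_name: str
    ai_capabilities: str           # what the tool does with AI
    data_categories: list[str]     # data the tool would touch
    status: str = "pending_security_review"
    identified_risks: list[str] = field(default_factory=list)

req = AIToolRequest(
    requesting_unit="marketing",
    tool_name="ExampleCopyAssistant",
    ai_capabilities="generates campaign copy from product briefs",
    data_categories=["product roadmap", "customer segments"],
)
# The security team records findings and mitigations during the review.
req.identified_risks.append("vendor-hosted model may retain prompts")
```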
