April 12, 2024

Security Triad for Leveraging AI

Security Strategist and Vice President of User Experience at Networkcybersecurity visionary and technology evangelist.

Artificial intelligence (AI) has become prevalent in key business processes. For example, AI already filters potential scam uploads and provides recommendations in many popular apps and services. Its results are becoming increasingly important to organizations, and they are often given privileged access to sensitive and even regulated data.

However, the technology behind AI tools is very different from the well-known IT stack, and has emerged quite quickly. As a result, IT and security teams often do not fully understand its internal workings and dependencies.

All of this makes AI an attractive target for threat actors, who recognize that it offers new and powerful opportunities to compromise data security. Accordingly, security teams must ask an important question: how can we ensure the security of data accessed or generated by AI-powered tools?

Let’s explore how to secure AI technology using the CIA triad model: confidentiality, integrity, and availability.

AI and data confidentiality

Like it or not, some of your organization’s users are almost certainly using third-party AI tools like ChatGPT. While they may be aware of what is appropriate and inappropriate to share on social media, the perceived confidentiality of a “one-on-one chat” with an AI tool can lead to misplaced trust.

That’s why it’s critical today that your organization’s security training is updated with AI messaging. This training should guide users in maintaining compliance and security while using AI. In particular, inform them about the following best practices:

• Avoid sharing sensitive information with third-party AI tools.

• Anonymize all shared information to prevent identification of yourself or your organization.

• Be skeptical of AI advice as these tools are not yet capable of true intelligence but are advanced data analysis tools.

Even internal AI systems such as Microsoft 365 Copilot do not guarantee data confidentiality. An internally deployed AI model typically has access to private and confidential data, and you need to ensure that this does not become a backdoor for a data breach. For example, new documents generated by Copilot do not inherit sensitivity labels from the source documents. As a result, new documents containing sensitive data may be exposed to unauthorized users. Additionally, Copilot relies on the permissions assigned in Microsoft 365; if users have been given inappropriate access to content, sensitive information can quickly spiral out of control.

To meet these challenges, look for ways to leverage trusted security measures in the new reality. For example, it is critical to maintain a least-privilege approach to data access rights. Implementing automated data discovery and classification is critical to ensuring accurate and timely labeling of newly generated content so that you can apply the appropriate security controls around it to maintain confidentiality.

AI and data integrity

We don’t know how exactly AI models arrive at specific conclusions. Often they are better than average human decisions, but you’ve probably seen examples of poor judgment and even hilariously wrong responses from AI systems. And since the AI ​​decision-making process is a black box for us, it will be difficult to tell when the outcomes have been manipulated in an opponent’s favor.

One strategy to ensure confidence in the integrity of AI systems is to verify their decisions. For example, you can have human auditors examine a sample of AI output monthly or after a certain number of transactions. Manual inspection can reveal errors, biases and unexpected outcomes.

A more scalable but more complex approach is guided by a different AI model. That is, a secondary AI scrutinizes the primary AI’s decisions, looking for anomalies, biases, and deviations from the norm. A two-layer approach combines human insight with the efficiency of AI, creating a balanced mechanism for monitoring AI applications.

You should also consider more proactive controls, such as filtering and cleaning both inputs and outputs for the AI ​​models. It is particularly important to expose injection attacks (such as SQL injection), where threat actors hide malicious code in otherwise legitimate input data from a client to the application. Unlike traditional code, where data and syntax are different and therefore easy to identify, AI language models work within a natural conversation. This absence of a definitive syntax makes cleanup more complex. You should continuously improve the filters to identify potentially malicious input when it comes from a chatbot on your website or an AI tool implemented by the R&D team. A more complex secondary AI approach can also help identify typical input and output patterns and user behavior and then flag or even block deviations from those baselines.

AI and availability

Finally, you need to consider the availability of both AI models and the systems and processes that enable them. What happens when a system is overloaded by unnecessary or unauthorized requests, or even by a maliciously crafted request that consumes available computing power? How does this impact the rest of the pipeline that AI is part of? What impact will this have on your customers and the business?

To avoid availability issues, consider security controls such as comprehensive access control and high availability deployments, in addition to the input filtering discussed above.

Conclusion

AI industry experts tell us that we are still in the early stages of AI evolution. This insight implies that our understanding of associated risks and effective mitigation strategies is also still in its infancy. However, you can use the CIA triad, your current security expertise, and many existing security measures to build a solid foundation for securing AI-powered systems and processes.


Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Am I eligible?


Leave a Reply

Your email address will not be published. Required fields are marked *