Introduction

Whether or not to use Artificial Intelligence (AI) is a central discussion point in many organisations across Australia. Every cyber conference of late seems to have its own dedicated AI stream, so it’s essential to understand the context of AI in cybersecurity and what it means for boards and directors when it comes to risk management.

Most of the security technologies we use have had AI and Machine Learning (ML) capabilities built in for years. Behavioural anomaly detection, for example, is a foundational detection capability found in many products, from antivirus systems to SIEM platforms, and it has been evolving for decades into what it is today.

So, if that is the case, what’s changed? That one is easy to answer. Since ChatGPT, Stable Diffusion and DALL-E appeared on the market (and there are many more like them), a fresh wave of public and media interest has focused on one particular kind of AI technology: generative AI. With this level of interest, the media quickly sought stories that looked at the dark side of generative AI, especially its potential for misuse. And rightly so, as we are already seeing AI integrated into hacking workflows, with large language models (LLMs) used by attackers to hone phishing lures and build exploit code. None of this is hype; it is already happening, and these models have undoubtedly helped attackers work faster and better in their offensive campaigns. Yet the question remains: what do we need to know about AI, and what can organisations do to mitigate the increased risk?

Cyber Threats Using AI

With AI now at everyone’s disposal, we’re also witnessing a lowering of the barrier to entry for technical attacks. Cybercriminals can leverage LLMs like everyone else, helping even those with poor written English craft convincing phishing scams and making fraudulent emails harder for security operations teams to detect. Beyond that, attackers are also using AI to develop sophisticated exploits, enabling them to tailor and launch cyberattacks swiftly and making those attacks more challenging for our detection systems to identify.

And that’s just the overt attacks, the obviously offensive ones we are used to dealing with. But we need to be aware of other attacks on AI systems themselves, especially if businesses are pushing forward with implementing their own. Attacks on indexes, training databases, etc., can all lead to additional business risks that could see the AI system introduce bias into its responses, which could have a marked effect on the output. If the business uses that output for decision-making, bad decisions could follow.

More than a Technical Issue – It’s All About Culture

One of the most prominent concerns for modern organisations relates to AI systems handling sensitive business data, such as client records and confidential company secrets. There’s an inherent risk when personal data is fed into an ‘as-a-service’ LLM, for example. It’s vital to understand that this isn’t inherently an AI issue but one rooted in cultural, policy and control shortcomings, a problem that predates LLM use.

Mitigation begins with embracing security and privacy by design, as it should with every technology project the business undertakes. Integrating well-considered, robust security controls at every stage of implementation ensures every aspect of the threat model is addressed. Controls should align with the risks across the security holy trinity of people, processes and technology. For the human aspect of design, security awareness is paramount: we must continually educate our teams on the responsible use of cloud-based systems, including AI, and clarify the potential risks. From a process standpoint, we should reinforce the policies governing our organisation’s information assets, discuss how data is handled and shared, and ensure everyone understands that this extends to these new AI tools.

Addressing Broader Business Risks

Cyber plays a vital role in safeguarding against AI misuse, but the scope of AI’s impact extends beyond digital security. AI’s complexity and technical nature often place cybersecurity professionals at the forefront of managing the associated risks, yet areas such as explainability and ethical usage fall outside the traditional remit of cybersecurity and are equally significant. Cybersecurity teams seem to have become the focal point for highlighting AI risks within most businesses, but boards and directors should pull in a broader set of stakeholders to look at the big picture, even if the cyber team remains the first point of contact.

The best approach to integrating AI into business processes is to accompany it with a comprehensive risk assessment involving a wide array of stakeholders. This broad, consultative approach ensures that AI implementations are secure, that they align with the overall business strategy, and that ethical risks are identified and managed well beyond the cyber remit.

The Need for a Holistic Approach

Adopting AI presents a multifaceted challenge that cybersecurity cannot tackle alone. Boards and directors must recognise the need for a consultative approach across the whole business, taking in internal and external stakeholders, regulators and relevant legislation when de-risking AI projects. Cybersecurity teams can take the lead but shouldn’t be the sole voice in this conversation. By collaborating with departments such as legal, HR and operations, organisations can navigate the intricacies of AI with a strategy that is as comprehensive as it is robust.

We urge decision-makers to step beyond cybersecurity and engage a multidisciplinary team to ensure comprehensive discussions. Business red teams do this well and encourage playing devil’s advocate in broad risk-focused scenarios. Only then will the business gain a complete understanding of the risks of introducing or using AI.

As we push ahead in our AI-augmented world, the bad guys are using AI against us while we use it ourselves to work faster and smarter. Risks abound, so we must build a complete understanding of them, both inside and outside the organisation, and press ahead with strategies that mitigate the risks that matter while enabling the advantages this new technology brings.

Reach out to us today to talk about how we can help you uncover existing risks in your environment and get recommended remediation plans.
