Generative AI enables both sides of the cybersecurity game

Managing the security risks of generative AI

February 14, 2024
|
No items found.

While AI in general, and generative AI in particular, is a huge enabler for legitimate businesses, it is also a boon to cybercriminals. In the game of cat and mouse that is cybersecurity, generative AI has put both sides on steroids. But rather than shy away from this technology – and risk losing competitive advantage – organisations need to be as nimble and as well-informed as their unseen enemy.

How generative AI assists cybercriminals

Generative AI brings exciting new opportunities for businesses to increase efficiency and productivity, but it also significantly boosts the attacker toolkit.

The reality is that it’s almost impossible to know for certain if every cyber attacker is enhancing their methodology using this technology, but it pays to be aware of the advantages generative AI provides them with. While legitimate generative AI companies, such as ChatGPT’s creator Open AI, invest in providing restrictions to protect data sources, cybercriminals are also busy building their own data models and capability. This is potentially providing cybercriminals with the tools previously only available to state-sponsored actors and governments.

On a technical level, generative AI helps cybercriminals bridge a technical gap to enable them to perform more sophisticated and new types of cyberattacks. This includes generating malicious emails that look exactly like the real thing, or malware specifically crafted to bypass certain security tools.

On a bigger scale, large language models (LLMs) and knowledge base analysis tools provide attackers with the ability to sift through billions of lines of data in a very short period of time, to find the most vulnerable target in a company, and the easiest to exploit. This task, known as ‘recon’ or reconnaissance, has traditionally been one of the most time-consuming and labour-intensive jobs for cyber attackers.

The efficiency enabled by LLMs extends to post-breach scenarios as well. If a business is compromised and terabytes of data is exfiltrated, the attacker has to work out what is the most valuable data, which can be akin to finding a needle in a haystack. LLMs provide an efficient method for attackers to automate this time-consuming task.

How enterprises are responding globally to cybersecurity risks of AI

In a report on the state of AI in 2023, research company McKinsey noted 38% of organisations in its global survey say they are working to mitigate cybersecurity risks of AI.

That figure gives pause for thought, especially considering that it will be hard to obtain or onboard a technology without AI in the coming years. As a result, cybersecurity and resilience needs to be top of mind as organisations dive into AI.

An example is in the Microsoft and Google office suites, where it is possible to automatically generate transcripts for video calls. Soon generative AI technology will extend this tool to automatically populating the digital calendars and to-do lists of those in the call, post the discussion. For example, a participant may undertake to complete a task by a certain date, and then find a reminder note has been automatically inserted into their calendar.

Generative AI benefits outweigh the risks

With the potential for harm, you might be forgiven for walking away from generative AI and pining for the days when digital transformation meant migration to the cloud. But not getting on board with generative AI, and not exploring its capabilities, is by far the biggest risk of all.

New Zealand businesses are, in the main, relatively slower to adopt generative AI compared with their global counterparts, but that is not necessarily a bad thing, given that the whole world is at the starting blocks. As noted in the McKinsey report: “In these early days, expectations for gen AI’s impact are high: three-quarters of all respondents expect gen AI to cause significant or disruptive change in the nature of their industry’s competition in the next three years.”

IT professionals can start by taking a pragmatic approach to generative AI. Conducting proof of concepts that show proof of value and by having robust conversations with senior business leaders that on one hand explain the risks, while on the other show a realistic return on investment.

Solid data governance is a key line of defence

Good data governance in organisations will provide a solid line of defence against the potential threats of generative AI.

- Ali Mosajjal, Spark Security Operations Manager

Key to this is finding and plugging leaks before cybercriminals discover them. One area where they often occur is in the ‘data pipeline’, that is the connection between enterprise systems – for example, when transferring payroll data from the HR system to the IT system so that a new user can be set up with the right digital tools and access.

Compartmentalising data is also necessary, as some data, such as that in financial systems, will by its nature have restricted access control. While other data, such as collateral archived in marketing systems, is unlikely to need as much protection. Given it's impractical to lock down every data point, tiering data and matching the level of permissions to the level of commercial sensitivity, is crucial. As is putting in place frameworks and policies to guard against information being freely shared with chatbots and other common AI tools.

Large organisations in particular are vulnerable to insider threats (that is, a rogue employee seeking to disrupt or steal data) and theft using prompt injection, prompt manipulation, malicious context, and even malicious data sources are all potential risks.

New Zealand and global organisations focused on impact of generative AI

Protecting raw data with robust data governance has been a major area of focus for New Zealand businesses in the past few years, ably assisted by government organisations such as CERT NZ, GCSB (Government Communications Security Bureau) and the Office of the Privacy Commissioner, which provides meaningful documentation and guidance in this area.

Globally the recently formed AI Alliance, a community of technology companies, science organisations and universities, is helping to ensure conversations about AI and cybersecurity are open and not vendor centric. And independent US organisations such as NIST, CISA and OWASP have released risk assessment tools that can help with understanding the pitfalls and vulnerabilities associated with generative AI.

While these moves are welcome, they are no substitute for businesses conducting their own due diligence on generative AI, exploring how it will benefit their organisation, while being mindful about how it is also enabling cybercriminals. The cat will always be there, ready to swipe its prey, but the mouse doesn’t triumph by packing up and leaving the scene. It wins by being smarter – embracing new ways to claim ground, while always being aware of the paw’s shadow.

It's important to remember you don’t have to go it alone. Our team of experts is here to help you make the most of generative AI while managing the risks this new technology presents.

Ali Mosajjal
Ali Mosajjal
Spark Security Operations Manager
Listen to this insight:
0:00
/

ABOUT THE AUTHOR

Ali Mosajjal is a cybersecurity leader and incident response expert currently working at Spark New Zealand. He has over 10 years of experience insecurity operations, threat intelligence, vulnerability management, and leading high-performing teams. Previously, Ali served as senior security engineer and researcher.

Discover how Spark Business Group can help propel your organisation
No items found.