How AI is Changing the Cyber Security Landscape

Published by Graham Foss

Office-based work has gone through many changes in the last five years. There are many more remote workers, more cloud services, and systems are more integrated and interconnected. From server farms and supercomputers down to the smallest and simplest of IoT (Internet of Things) devices, the world is interconnected at ever-increasing speeds and scales.

Increasingly, companies cannot rely on humans alone to deal with the amount of data generated and the complexity of their Cyber Security needs. Artificial Intelligence (AI) is now used by many businesses to manage infrastructure, interpret data, and automatically counter cyberattacks.

In many cases, there is no alternative to AI. A much larger attack surface, the sheer mass of data being generated, and the speed of response required, coupled with the lack of skilled Cyber Security professionals, mean that companies must rely on automated processes, data analytics, and artificial intelligence to manage many of their Cyber Security activities.

AI is ideally suited to repetitive tasks, pattern recognition, and responding quickly to threats. It can learn and, at its best, stay one step ahead of cyber criminals.

It is logical that Artificial Intelligence should become part of the Cyber Security landscape, as it is already prevalent in much of our lives online. From speech and face recognition to language translation and the algorithms that select the content we view, AI quietly completes its tasks unnoticed by the user.

Whenever we tell a website ‘I am not a robot’, we are helping AI systems to learn and grow via the CAPTCHA system (Completely Automated Public Turing test to tell Computers and Humans Apart).

In the same way, an AI system can be trained to automatically detect cyber threats, identify new threats, generate alerts, and learn beyond its initial training and data.

DEFINING AI

The term AI is used a lot, and not always correctly. Many technologies exist that analyse data and produce outcomes based on that analysis, but this alone is not AI. A system does not qualify as AI unless it can automate tasks using cognitive abilities and reasoning.

AI systems are dynamic and self-improving. They become smarter when given more data to analyse, becoming more capable and responsive as they grow, effectively learning from experience.

AI should not be confused with Data Analytics. Data Analytics systems are static, or “hard coded”, and do not learn; they can only improve or adapt with human intervention of some kind.

The form of AI most relevant to Cyber Security is Machine Learning. Machine Learning uses the data supplied to it to methodically improve its performance. This improvement is not programmed in; it emerges from the learning process itself.

Machine Learning is ideal for narrow, specific tasks, and human intervention is still required to make changes and monitor performance. It is most useful for taking on tedious and repetitive work such as pattern recognition, classification, and anomaly detection: tasks that humans would do far more slowly and that cause “task fatigue” due to their monotony.
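
To make this concrete, here is a minimal sketch of the kind of narrow classification task Machine Learning handles well. It assumes scikit-learn is available, and the features, training data, and labels are invented purely for illustration, not taken from any real product.

```python
# Hypothetical example: classify security events as benign or suspicious.
from sklearn.tree import DecisionTreeClassifier

# Each row is one event: [bytes_transferred, failed_logins, off_hours (0/1)].
X_train = [
    [1_200, 0, 0],     # normal daytime traffic
    [900, 1, 0],       # normal traffic, one failed login
    [250_000, 0, 1],   # large transfer outside working hours
    [500, 12, 1],      # repeated failed logins at night
]
y_train = [0, 0, 1, 1]  # 0 = benign, 1 = suspicious

model = DecisionTreeClassifier().fit(X_train, y_train)

# A new event: a large off-hours transfer is flagged as suspicious.
print(model.predict([[180_000, 0, 1]]))  # expect [1]
```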

AI IN CYBER SECURITY

Companies are a long way from handing over their Cyber Security services entirely to AI. But AI can take on limited, tedious, and repetitive tasks to relieve the burden on Cyber Security personnel.

Examples include email filtering and warnings, automatic malware identification, and threat detection. Phishing via email is still one of the biggest Cyber Security threats, and AI can be used to find, highlight, and remove suspicious emails.
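
As a rough illustration of how such a filter might work, here is a minimal sketch of a text classifier trained to separate phishing from legitimate emails. It assumes scikit-learn; the example emails are invented, and a real filter would train on a large labelled corpus and use many more signals (headers, links, sender reputation).

```python
# Hypothetical example: a naive Bayes phishing filter over email text.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Your account is locked, verify your password immediately",  # phishing
    "Urgent: confirm your bank details to avoid suspension",     # phishing
    "Meeting moved to 3pm, agenda attached",                     # legitimate
    "Quarterly report draft for your review",                    # legitimate
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

# Convert words to counts, then fit a naive Bayes classifier.
phishing_filter = make_pipeline(CountVectorizer(), MultinomialNB())
phishing_filter.fit(emails, labels)

print(phishing_filter.predict(["Please verify your password now"]))  # expect [1]
```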

Reliance on AI is growing; it must, to keep up with the increasing speed and quantity of Cyber Security events. AI is excellent at detecting and managing known threats, handling large quantities of data, and managing vulnerabilities and security events in real time. In many cases, it can respond more quickly and effectively than humans, and it is immune to “threat alert fatigue”.

AI works well as a “cyber assistant”, working alongside humans and performing tasks that relieve the pressure on Cyber Security teams. By filtering out false positives, an AI system can ensure that humans are not inundated with low-importance information.
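
A sketch of what that filtering might look like in practice, with hypothetical alert fields and an arbitrary confidence threshold:

```python
# Hypothetical example: suppress low-confidence and duplicate alerts
# so analysts only see what matters.
def triage(alerts, min_score=0.8):
    """Keep high-confidence alerts, de-duplicated by source and rule."""
    seen = set()
    escalated = []
    for alert in alerts:
        key = (alert["source_ip"], alert["rule"])
        if alert["score"] >= min_score and key not in seen:
            seen.add(key)
            escalated.append(alert)
    return escalated

alerts = [
    {"source_ip": "10.0.0.5", "rule": "port-scan", "score": 0.95},
    {"source_ip": "10.0.0.5", "rule": "port-scan", "score": 0.91},    # duplicate
    {"source_ip": "10.0.0.9", "rule": "dns-anomaly", "score": 0.40},  # low confidence
]
print(triage(alerts))  # only the first port-scan alert survives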

Using Machine Learning techniques, Artificial Intelligence can learn from previous data and threats to better prepare for new ones. AI can recognise patterns, learn what constitutes regular usage, and quickly identify suspicious activity.
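
In its simplest form, learning “regular usage” can be as basic as building a statistical baseline from history and flagging deviations from it. The sketch below uses invented login counts and a plain z-score test rather than a trained model:

```python
# Hypothetical example: flag activity far outside the learned baseline.
from statistics import mean, stdev

# Hourly login counts observed during normal operation (invented data).
history = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13]
mu, sigma = mean(history), stdev(history)

def is_suspicious(count, threshold=3.0):
    """True if the count is more than `threshold` standard deviations above normal."""
    return (count - mu) / sigma > threshold

print(is_suspicious(14))  # False: within the normal range
print(is_suspicious(90))  # True: far above the learned baseline
```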

AI is also useful for Vulnerability Management. Thousands of new vulnerabilities are identified every year, and this amount of information is unmanageable for humans alone. AI can be used to prevent and manage threats quickly, enabling companies to identify suspicious activity and react almost instantly, even defending against attacks that have never been encountered before (known as “zero-day” attacks).
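
One simple way automation helps here is triage: scoring and ranking findings so the most urgent are patched first. The scoring formula and fields below are hypothetical; real tools combine severity scores with threat intelligence and asset criticality in far more sophisticated ways.

```python
# Hypothetical example: rank vulnerabilities for remediation.
vulns = [
    {"cve": "CVE-2023-0001", "cvss": 9.8, "exploit_seen": True, "critical_asset": True},
    {"cve": "CVE-2023-0002", "cvss": 5.3, "exploit_seen": False, "critical_asset": False},
    {"cve": "CVE-2023-0003", "cvss": 7.5, "exploit_seen": True, "critical_asset": False},
]

def priority(v):
    """Weight base severity by active exploitation and asset importance."""
    score = v["cvss"]
    if v["exploit_seen"]:
        score += 3  # exploitation in the wild raises urgency
    if v["critical_asset"]:
        score += 2  # business-critical systems come first
    return score

for v in sorted(vulns, key=priority, reverse=True):
    print(v["cve"], priority(v))
```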

There are many more tasks that AI systems can perform or assist with, but there can be disadvantages to using AI to manage parts of a Cyber Security strategy.

It can be very expensive to implement and may not be a viable solution for smaller businesses. AI systems are not infallible and may be tricked into incorrect behaviour where more rigid systems would not be.

Companies moving to AI systems may find that they must change some of their working practices so as not to generate false positives or introduce bias.

One final consideration is that the reverse of AI in Cyber Security is Cyber Security for AI. Artificial Intelligence systems can be as vulnerable to attack as any other system, and AI is only as clever as the data fed into it. By manipulating that data, attackers may be able to trick the AI into behaving against its intended design, giving false positives or bypassing security.

Protecting AI from attack is still a new field, but policies and standards for securing AI systems are being developed by organisations such as the Brookings Institution and the ETSI Industry Specification Group on Securing Artificial Intelligence.

THE RISKS ASSOCIATED WITH AI

Artificial Intelligence (AI) can be considered a double agent in terms of its role in Cyber Security. While it can be used to protect networks and systems it can also be used to attack them.

There will always be those who try to exploit new technology with malicious intent. Cyber threat actors are always looking for new opportunities and new ways to evade security measures.

Malicious AI can be used to identify patterns and weaknesses in Cyber Security systems and then exploit them. Companies will have to evolve and utilise the same AI techniques to counter these attacks.

Phishing emails created by AI help attackers convince victims that a communication is genuine. This is particularly useful to attackers targeting people in other countries whose language they do not speak.

Malware that has an AI component may change and adapt to avoid detection. It may then lurk inside a system, gathering data and observing users, and either stealthily sending the data back to the attackers or waiting until the most opportune time to launch another form of attack.

While regular Cyber Security defences may be enough to defend against low-level or individual AI cyber threat actors, the threat from nation-states (for instance China and Russia) is much greater.

The new threats from AI-supported cyber-attacks are one thing, but there are risks associated with the use of AI for Cyber Security in general.

AI should not be viewed by any company as a total solution to Cyber Security. It is not enough to purchase an expensive AI system, sit back, and believe nothing more needs to be done.

Cyber threats are constantly evolving, and malware and viruses are constantly adapting and becoming smarter. AI systems are good at identifying suspicious activity, but skilled Cyber Security professionals are still required to make administrative decisions, interpret new information when the AI cannot, and stay alert for false positives and other mistakes made by the AI.

The risks of AI are not limited to its use as a form of attack; there are also risks in using it at all. One set of issues concerns the data it learns from. Learning from data sets is fundamental to AI, and the more data it has to work with, the better it becomes. As a result, it may not work as well for smaller companies with smaller datasets. There are also concerns with any data set: its use must not conflict with privacy and data retention laws or infringe on the rights of any group or individual. A balance must be struck between keeping the data anonymous and keeping it usable.

Just as with Cyber Security, there is a shortage of AI and Machine Learning professionals, and demand is increasing as systems become more complex. Humans are far from obsolete: AI is there to augment a Cyber Security team, not to replace it.

Finally, to touch on a risk that is looming on the horizon rather than already here: it is impossible to ignore the buzz about recent developments in chatbots.

Chatbots such as ChatGPT and Bard are now perfectly capable of writing convincing emails and workable code. The fact that an AI chatbot can create code from a few simple lines of text is very impressive but also concerning.

How concerned should we be? The answer is that we don’t yet know. Cyber Security researchers have been testing how far they can go: generating phishing emails, spotting errors in existing code, and even writing malicious code. While a chatbot will not tell you “how to build a bomb”, researchers have found ways to bypass many of the protections. After all, the data set for these chatbots is effectively the entire internet.

By breaking the creation of malicious software into smaller, more innocent-looking parts, researchers could assemble the components into something that the chatbot would have been blocked from creating in its entirety.

ChatGPT, for instance, is designed to be extremely cautious about creating or revealing anything that is straightforwardly malicious or unethical. However, it is not as cunning as a human trying to social engineer it into doing so. For example, by framing a request as hypothetical or fictional, a user can get the AI to reveal or create more than it would in response to a direct question.

Chatbots have been known to “hallucinate”: they will spontaneously present made-up information as factual.

As with all new technologies, the regulators are racing to catch up. An unusual aspect of regulating AI is the “black box” nature of the code. While software companies can be secretive about how their systems work, they do at least know how they work. With AI systems, how the code arrives at its outputs can be a mystery even to those who developed it, which poses an interesting problem for anyone who wishes to impose policies and regulations on it.

How much of this is fearmongering though, remains to be seen.

WHAT AI MEANS FOR THE FUTURE

The hypothetical next step for AI is Artificial General Intelligence (AGI), the type of AI that can understand and learn as well as any human. AGI is either ten years away, a hundred years away, or impossible to attain, depending on who you talk to.

In the nearer future, AI will take over more tasks from humans, and any company that uses modern technology will find itself virtually unable to operate without some form of AI to help and protect it.

The law, in the form of policy and regulation, will eventually catch up with AI, in the way that it has with Information Technology and Cyber Security.

The continued improvement and utilisation of chatbots such as ChatGPT and Bard, and their potential for harm, are much speculated upon. Chatbots are already able to communicate conversationally with thousands of users each day about almost any subject known to man.

ChatGPT has caused a sensation since its launch. Although it is still capable of making mistakes or misinterpreting requests, many in the industry see it as the way that we will use the internet in the future. And the technology industry, which is traditionally sceptical about AI in general, has become almost obsessed with the potential security concerns around chatbots using machine learning to develop and instigate cyber-attacks.

On the other hand, it has also been suggested that chatbots could generate beneficial code quickly, to counter an urgent threat or neutralise a virus.

Artificial Intelligence already plays a significant part in Cyber Security, and on its current path will take over more tasks and decision-making from humans. It will be a long time before it is smart enough to do everything unattended, but with each new technological breakthrough, we come closer to that possibility.

Looking for an OT Cybersecurity solution? Get in touch today. 


Written by Graham Foss. As one of AGSL’s team of Technical Consultants, Graham Foss is responsible for implementing the company’s product development and technology strategy. Before joining AGSL in 2016, Graham was employed for 12 years as a lead software engineer at Aker Solutions Subsea Ltd, where he worked on projects in the North Sea, North Atlantic, and Norway. Graham holds a degree in Computing from Napier University in Edinburgh, where he graduated with distinction. A Chartered Engineer, he is a member of the Institution of Engineering and Technology (IET).
