Of machines and men: AI and the future of cybersecurity
For many in the cybersecurity community, ‘Ghost in the Shell’, both in its source material and recent film adaptation, is an inventive representation of where the sector is heading. We still have a way to go, but the foundations are in place for the melding of human and machine. Today, at least, this is not meant physically but rather in the operational sense.
Artificial intelligence (AI) has become so important for the industry that this relationship could already be described as symbiotic. As threats and attacks grow in number and complexity, organisations are increasingly looking to AI and machine learning to transform their security posture.
Evolution and revolution – AI in cybersecurity
AI can be thought of as applying data science and machine learning to a specific series of problems. When it looks advanced enough, we tend to call machine learning AI but should not mistake it for the level of general intelligence shown by Skynet or HAL 9000 in the movies. Simply put, machine learning allows non-human systems, such as software, to interpret and adapt to change, usually based on the information made available to the system. It is not necessarily a new technology, but one that has evolved over time and enjoyed significant advances in recent years.
Indeed, advances in machine learning have pushed forward the capabilities of cybersecurity software. They are why we have gone from using the technology to uncover credit card fraud by spotting anomalies in datasets to using it to actively hunt across networks for the subtlest signs of ransomware, malware and advanced targeted attacks. By absorbing, analysing and contextualising data relating to activity such as network traffic, machines can better distinguish between benign and malicious behaviour.
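To make the idea concrete, here is a minimal sketch of that anomaly-detection pattern in Python, using scikit-learn's IsolationForest on synthetic network-flow features; the feature names and figures are assumptions for illustration only, not drawn from any real product or dataset.

# A minimal, hypothetical anomaly detector over per-connection traffic features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Illustrative features per connection: bytes transferred, duration (seconds), distinct destination ports
normal_traffic = rng.normal(loc=[5_000, 30, 3], scale=[1_500, 10, 1], size=(1_000, 3))

# A few flows resembling slow data exfiltration: large, long-lived transfers touching many ports
suspicious = rng.normal(loc=[80_000, 600, 40], scale=[5_000, 60, 5], size=(5, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)         # learn what 'benign' looks like

print(model.predict(suspicious))  # 1 = looks normal, -1 = anomalous and worth an analyst's attention

A real deployment would draw on far richer telemetry and route its alerts to human analysts, but the underlying pattern of learning a baseline and flagging deviations from it is the same.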
However, the future is even more exciting. Increasingly, cyberattacker behaviours are being targeted with a variety of machine learning techniques. These include ensemble learning, in which multiple machine learning algorithms are combined for better predictive performance, and deep learning, in which multi-layered artificial neural networks tackle large or complex data sets. Both are expanding the ability to detect subtler attack patterns without leaning on human analysts for verification. While AI that can pass the Turing Test and display genuine cognitive intelligence is still years away, the ability to detect ‘low and slow’ attacks through moderate-scope AI is just around the corner.
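As a rough sketch of the ensemble idea, the example below stacks two base classifiers under a meta-learner using scikit-learn; the synthetic ‘benign versus malicious’ labels and the choice of models are assumptions made purely for illustration, not a production detection pipeline.

# Illustrative stacking ensemble: several base learners feed a meta-learner for the final call.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for labelled traffic: mostly benign (0), a small share malicious (1)
X, y = make_classification(n_samples=2_000, n_features=20, weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("forest", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    final_estimator=LogisticRegression(),  # the meta-learner combines the base models' predictions
)
stack.fit(X_train, y_train)
print(f"held-out accuracy: {stack.score(X_test, y_test):.3f}")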
Augmenting the human
Companies have limited time and limited human and technical resources with which to protect their ever-expanding attack surfaces. Global cyber skills demand is set to exceed supply by a third before 2019, and there are already large numbers of unfilled entry-level security jobs in major economies. With widespread skills shortages and high turnover, human cybersecurity teams simply cannot process the sheer weight of the workload by themselves.
In this context, AI is reducing the number of personnel required to operate detection systems. An AI does not tire – machine learning can process large data sets at speed and do what humans would never have the time or patience to do. Automation is the rational response to a rapidly diversifying and expanding threat landscape, especially as the availability of human cyber skills and resources is so constrained.
However, the intention is not to replace the human component but to augment it. Unlike machines, humans have critical knowledge of the messy context surrounding modern business. They know, for example, when an unusually large data transfer is part of a legitimate deal and when it is something to be worried about.
Automating data collection, threat detection and operational assignments frees staff to spend more time on higher-priority tasks. Machines do the legwork while humans make the system usable and worthwhile. The combination of these skillsets – tireless data processing and human-supplied context – yields the most effective defence.
An eye to the future
Today, the role of AI in cybersecurity is to ingest sensor data and provide automated intelligence. This in turn helps humans to react faster and smarter than they could otherwise. Yet, with the recent unveiling of Elon Musk’s ambitions for ‘neural lace’ brain implants, I can see this process expanding. In the near future, sensors that wirelessly communicate with an implant in a human analyst’s brain are certainly on the cards.
A human should, of course, always make the final decision. Yet, by combining human judgement with machine analytics and response automation, decisions and responses can be better informed and made almost instantaneously. These aspirations may still be in their infancy, but make no mistake: the baby has been born and it has started to grow and learn.
Ghost in the Shell’s ‘Major’ character is a depiction of this relationship running smoothly. She draws on the best of both human and machine, operating faster with the benefit of a global field of intelligence and the capacity to learn, while retaining the ability to apply human context to a situation.
For the foreseeable future, humans and AI must work together to thwart most cyber attackers. In movie terms, you need a ‘Major’ or a RoboCop rather than the Terminator – or you need a Terminator with human handlers. However, while Ghost in the Shell takes place in 2029, we will likely arrive there sooner. AI cyber threat hunters are already patrolling advanced organisations. While not physically embodied like the Major, the melding of AI capabilities with human intelligence is happening right now. Given another 12 years, a body-and-mind collaboration will no longer be science fiction.