How to use the power of artificial intelligence to protect our networks
Frontier technologies such as artificial intelligence (AI), machine learning (ML), quantum, and high-performance computing are radically altering the national security landscape.
Organisations need to understand and harness these shifts if they are to master this disruption, enable effective change and deliver better outcomes.
In this new series from Shephard Studio, we explore what happens when technology challenges us to develop new ways of doing things.
In examining today’s technological landscape, we also consider Japanese cultural practices and concepts from which innovators can draw inspiration.
Mushin is defined as “no mind” or “the mind without mind” – a state in which the brain is occupied by nothing other than the specific activity being performed at that moment.
Governments face rapidly evolving threats to their national security network architectures, particularly from technologies such as AI, cyber and electronic warfare.
Such threats are just as acute for commercial organisations supporting national security efforts, particularly those companies working in the defence sector.
But how can organisations protect their networks without disrupting the flow of actionable intelligence?
Such threats stem from a range of sources, which are often interlinked. Those managing government and military networks must consider a range of common security problems, explains Andy Laidler, Chief Digital Officer at Fujitsu Defence and National Security.
For example, he highlights the sheer volume of data generated in particular operational environments, which must then be communicated to major data centres for analytics.
This process could involve harvesting and analysing huge amounts of information at the edge before transmitting it as necessary to a centralised location for further analysis. There are broader technical challenges too, such as equipment performance in certain operational environments.
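The idea of harvesting and analysing data at the edge before transmitting it onward can be illustrated with a minimal sketch. The function name, thresholds and data below are illustrative assumptions, not a description of any Fujitsu system: raw readings are reduced to a compact summary, and only sharp deviations from the local baseline are forwarded in full.

```python
import statistics

def summarise_at_edge(readings, spike_threshold=2.0):
    """Reduce raw sensor readings to a compact summary plus any outliers,
    so only a fraction of the data needs to cross a constrained link.
    All names and thresholds are illustrative."""
    mean = statistics.mean(readings)
    stdev = statistics.pstdev(readings)
    # Forward in full only the readings that deviate sharply from the
    # local baseline; everything else is represented by the summary.
    outliers = [r for r in readings
                if stdev and abs(r - mean) / stdev > spike_threshold]
    return {"count": len(readings), "mean": mean,
            "stdev": stdev, "outliers": outliers}

# A burst of mostly routine readings with one spike worth forwarding
summary = summarise_at_edge([10, 11, 9, 10, 12, 10, 95])
```

In practice the aggregation would run continuously on edge hardware, with the summary and flagged outliers sent back for the deeper, centralised analysis the article describes.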
Perhaps most significant is the pace at which the threat evolves. This forces militaries to consider their response, from the tempo of their patching efforts as they respond to new vulnerabilities to broader, architectural responses to the dangers.
‘How am I resilient to whatever the threat might happen to be tomorrow, rather than worrying too much about what it is today?’ Laidler asks.
Kenneth Payne, professor of strategy at King’s College London, agrees that gathering information is no longer the major challenge.
On the contrary: ‘There’s a firehose of information that’s produced out there, increasingly online and digitised, and the question is how do you go about ensuring you’re gathering the right stuff? But then equally challenging, how do you go about parsing it?’
Richard Carter, a researcher at the Alan Turing Institute – which focuses on data science and AI – points to the ability of AI to understand what ‘normal’ looks like across a network.
‘And if you understand what normal looks like, that gives you a baseline through which you can begin to understand anomalies or something that looks different,’ Carter explains.
For example, he says, there could be a level of automated identification that captures metadata on particular network traffic.
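The baseline-and-anomaly idea Carter describes can be sketched in a few lines. This toy class (its name and threshold are assumptions for illustration) learns typical byte counts per host from traffic metadata and flags connections that deviate sharply; a real system would model far more features, such as ports, timing and peer relationships.

```python
from collections import defaultdict
import statistics

class TrafficBaseline:
    """Learn what 'normal' byte counts look like per host, then flag
    connections that deviate from that baseline. A toy sketch of
    baseline-driven anomaly detection on network metadata."""

    def __init__(self, z_threshold=3.0):
        self.history = defaultdict(list)
        self.z_threshold = z_threshold

    def observe(self, host, byte_count):
        # Record routine traffic to build the per-host baseline.
        self.history[host].append(byte_count)

    def is_anomalous(self, host, byte_count):
        past = self.history[host]
        if len(past) < 5:            # not enough data for a baseline yet
            return False
        mean = statistics.mean(past)
        stdev = statistics.stdev(past)
        if stdev == 0:
            return byte_count != mean
        # Flag anything far outside the host's learned normal range.
        return abs(byte_count - mean) / stdev > self.z_threshold
```

A host that usually transfers around a kilobyte per connection would be flagged the moment it suddenly ships tens of megabytes, which is exactly the ‘something that looks different’ Carter refers to.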
Such an approach ties into the concept of active cyber defence, which works on two levels. First, educating operators to become more competent at defending themselves against malign actors. And second, the idea that ‘offence is a good defence’ – using cyber and AI as part of an effort to deny an adversary the resources they could use to attack you.
‘The obvious answer is you have to use AI to take on adversarial AI… there’s no option just to throw loads of people at it, no matter how smart they are, because people just cannot operate at the speed and the scale that AI can,’ Carter says.
There are different ways in which information can become vulnerable, notes Dr Darminder Ghataoura, Director of AI and Data Science at Fujitsu Defence and National Security. He warns that any information system today must increasingly be developed on a zero-trust basis.
In the case of AI, for instance: ‘As it is becoming increasingly integrated with information systems and maybe sits within that firewall that the enterprise has created, the AI itself shouldn’t assume that the data it’s using is correct or hasn’t been manipulated.
‘So we’ve got to be careful when we train our algorithms to ensure that we’re doing a data poisoning check, making sure that data points haven’t been tampered with.’
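One simple form of the data-poisoning check Ghataoura describes is to screen incoming training samples against a small, vetted reference set. The function below is a deliberately minimal sketch under that assumption; its name and tolerance value are illustrative, not a Fujitsu API, and real checks would also examine labels, provenance and statistical distribution.

```python
def poisoning_check(trusted, incoming, tolerance=0.5):
    """Split incoming training samples into those consistent with a
    small, vetted 'trusted' set and those falling suspiciously far
    outside its range. An illustrative sketch of a poisoning check."""
    lo, hi = min(trusted), max(trusted)
    margin = (hi - lo) * tolerance
    # Anything well outside the vetted range is quarantined for review
    # rather than fed straight into training.
    clean = [x for x in incoming if lo - margin <= x <= hi + margin]
    suspect = [x for x in incoming if not (lo - margin <= x <= hi + margin)]
    return clean, suspect
```

Quarantined points are not necessarily malicious, but holding them back for inspection prevents a single tampered batch from silently skewing the model.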
He says that we also must be careful how we encourage AI models to become more robust. For example, the process must consider whether the data itself could be biased, perhaps even through adversarial manipulation, which could lead to the wrong outcomes.
The big step is to achieve data exploitation at scale.
Dr Dave Snelling is Director of Advanced Compute at the Fujitsu Center for Cognitive and Advanced Technologies.
He explains that in many instances, militaries are sifting information from multiple sources to track down a bad actor or perform another complex task. However, ‘building the models that adequately describe the interconnection between information spaces is not something that can happen automatically’.
In some AI applications, it may be possible to rely on large databases of pre-trained information.
‘But in many situations that we run into, the thing we’re looking for is an anomaly. So we don’t have a lot of instances of it to build a “recogniser”,’ Snelling says.
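When there are too few examples of the anomaly to train a recogniser, one common workaround is to model only what ‘normal’ looks like and score new samples by how far they sit from it. The sketch below uses a plain nearest-neighbour distance as a stand-in for a trained one-class model; the names and data are illustrative assumptions.

```python
def anomaly_score(sample, normal_examples):
    """Score a sample by its distance to the nearest known-normal
    example. Because anomalies are rare, we model only 'normal' and
    treat anything far from it as suspect - a one-class approach,
    sketched with nearest-neighbour distance rather than a model."""
    def dist(a, b):
        # Euclidean distance between two feature vectors
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(dist(sample, n) for n in normal_examples)

# A handful of known-normal feature vectors; no anomaly examples needed
normal = [(1.0, 1.0), (1.1, 0.9), (0.9, 1.2), (1.0, 1.1)]
```

A sample resembling the normal cluster scores near zero, while something unlike anything seen before scores high and can be escalated to a human analyst.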
This demands more complicated forms of AI models. However, it also has an effect at the human level, with a need for ‘more and more specialist training’.
Many organisations, particularly within government, possess multiple computer systems, meaning their data is often separated.
Joanne Benbrook, Senior Solution Architect at Fujitsu Defence and National Security, notes that some of these systems handle lower-risk information, while others hold data that must be kept particularly secure.
Such organisations are increasingly examining how they can exchange information between these systems, which could be necessary for various tasks.
One solution is the development of cross-domain gateway solutions to ensure this occurs in the most secure way possible, Benbrook says.
This approach requires an understanding of the network space and how to securely enable information transfer, a key element of retaining network security.
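The core discipline of a cross-domain gateway can be illustrated with a small sketch. The classification levels, field names and function below are hypothetical examples, not Benbrook’s or Fujitsu’s design: messages moving from a higher- to a lower-classification system are reduced to an explicitly allow-listed set of fields, so nothing is released by default.

```python
# Fields explicitly cleared for release to a lower-classification domain
ALLOWED_DOWNWARD_FIELDS = {"timestamp", "event_type", "severity"}

def cross_domain_filter(message, source_level, dest_level):
    """Sketch of a cross-domain guard: traffic moving 'down' in
    classification keeps only allow-listed fields. Levels and field
    names are illustrative; real guards also inspect content,
    format and volume."""
    levels = {"official": 1, "secret": 2, "top_secret": 3}
    if levels[source_level] <= levels[dest_level]:
        return dict(message)     # moving up or sideways: pass through
    # Moving down: allow-list (never block-list) the released fields.
    return {k: v for k, v in message.items()
            if k in ALLOWED_DOWNWARD_FIELDS}

msg = {"timestamp": "2024-05-01T12:00Z", "event_type": "login",
       "severity": "low", "analyst_notes": "internal detail"}
```

The key design choice is the allow-list: anything not positively cleared for release is stripped, which fails safe when new fields appear in the data.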
‘We’ve got knowledge and ability in our information exploitation tools, so things like data analytics, and data science and AI. Pulling all of these things together enables Fujitsu to deliver solutions and digital capability across all of these different facets,’ Benbrook explains.
Given all this, where do AI and similar technologies leave the human operator? Kenneth Payne argues that human analysts won’t find themselves jobless just yet.
While advances such as machine learning are excellent at finding even obscure patterns in vast datasets, whether these patterns are meaningful or not is another matter.
‘Machine learning lacks what we might call human common sense in trying to understand the meaning behind the patterns that it generates. There’s still a role for humans in intelligence analysis,’ he says.
Mushin is achieved when a person’s mind is free from fear, anger and the other emotions of everyday life. In that state, the person is free to act and react without hesitation or disturbance, responding instinctively rather than deliberating over their next move.