Artificial Intelligence (AI) is at the forefront of technological innovation. Even if it's not your primary product, elements of AI often become the subject of R&D projects. This story is a remarkable example.

Unusually, the Government Communications Headquarters (GCHQ) recently published a paper called ‘Pioneering a New National Security: The Ethics of Artificial Intelligence’. This lays out the many potential uses of AI in combating a range of threats to the UK, and the ethical questions raised by using the technology.

What is GCHQ?

Chapter 3 of the report describes GCHQ as “a world-leading intelligence, cyber and security agency at the heart of Britain’s national security community…Our extraordinary people use cutting-edge technology, technical ingenuity and wide-ranging partnerships to identify, analyse and disrupt threats.”

GCHQ work with other UK agencies, such as the police, military and intelligence services, and with similar European and global institutions that share the same aim – “Pioneering a new kind of security for an ever more complex world.”

What is the GCHQ attitude to using AI to combat existing and future threats?

In the Foreword to the report, Jeremy Fleming (Director of GCHQ) is very clear about how useful AI will continue to be in protecting the UK:

“The nation’s security, prosperity and way of life faces new threats from hostile states, terrorists and serious criminals, often enabled by the global internet. An ever-growing number of those threats are to the UK’s digital homeland – the vital infrastructure and online services that underpin every part of modern life.

“At GCHQ, we believe that AI capabilities will be at the heart of our future ability to protect the UK. They will enable analysts to manage the ever-increasing volume and complexity of data, improving the quality and speed of their decision-making. Keeping the UK’s citizens safe and prosperous in a digital age will increasingly depend on the success of these systems.”


What particular crimes can AI be used to tackle?

The different types of machine learning that make up the AI field can be used to fight different types of crime. This is similar to the work of the Serious Fraud Office, which uses innovative AI technology to process more data, more quickly, than humans possibly could.

Child Sexual Abuse (CSA)

National Crime Agency figures from 2020 show that the “most harmful” CSA dark websites had a global total of 2.88 million registered accounts, at least 5% of which were registered here in the UK. The agency estimates that there are 300,000 people in the UK who pose a sexual threat to children. Two horrifying statistics.

GCHQ are very positive about the role of AI in identifying, tracking and catching these predators. By analysing vast quantities of data from a variety of sources, AI can be trained to:

  • Track “disguised identities” across their multiple internet accounts
  • Identify possible grooming within chatrooms and messages
  • Shine a light on illegal services and covert individuals on the dark web
  • Uncover trading of illegal CSA images
  • Analyse seized evidence, necessary to support CSA prosecution. This means that human investigators might be spared some of the trauma of cataloguing CSA images, messages and videos themselves.
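GCHQ's actual methods are not public, but the first of these tasks – tracking “disguised identities” across multiple accounts – can be illustrated with a toy sketch. The idea: if two accounts ever share an identifier (say, a hashed recovery email or phone number), group them into one cluster with a union-find pass. All names and data below are invented for illustration.

```python
# Toy illustration only (real identity-resolution systems are far more
# sophisticated): cluster accounts that transitively share identifiers.
from collections import defaultdict


def link_accounts(accounts):
    """accounts: dict of account_id -> set of shared identifiers.
    Returns a list of clusters (sets of account ids) connected
    through at least one common identifier."""
    # Map each identifier to the accounts that use it
    by_identifier = defaultdict(set)
    for acct, idents in accounts.items():
        for ident in idents:
            by_identifier[ident].add(acct)

    # Union-find over accounts, with path halving
    parent = {a: a for a in accounts}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    # Any two accounts sharing an identifier belong to one cluster
    for accts in by_identifier.values():
        first, *rest = accts
        for other in rest:
            union(first, other)

    clusters = defaultdict(set)
    for a in accounts:
        clusters[find(a)].add(a)
    return list(clusters.values())
```

For example, an account sharing an email hash with one alias and a phone hash with another would pull all three into a single cluster, even though no single pair shares everything.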

National Cyber Security Threats

This GCHQ paper references the fact that “other states” are already using AI against us to spread malicious disinformation – what has become known as ‘fake news’, along with ‘deepfake’ images and videos. They can use the algorithms of social media to personalise such content and individualise their targeted messaging, all with the intention of destabilising our society and leaving us vulnerable to outside attack – for example, by manipulating election results.

GCHQ plan to use the same kind of AI technology to protect us, by:

  • Identifying troll farms and bots
  • Fact-checking information
  • Discovering and exposing the sources
  • Taking down malicious software and building defences into our future developments
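The report gives no implementation detail, but spotting bots often starts with simple behavioural signals. As a crude, purely illustrative heuristic – real detection systems combine many behavioural and network features, and the threshold below is an invented example:

```python
# Illustrative sketch only: flag an account whose peak posting rate
# exceeds a human-plausible threshold.
def flag_bot_like(post_timestamps, max_posts_per_minute=10):
    """post_timestamps: sorted list of posting times in epoch seconds.
    Returns True if any 60-second window contains more posts than
    the threshold."""
    for i in range(len(post_timestamps)):
        # Count posts inside the 60-second window starting at post i
        j = i
        while j < len(post_timestamps) and post_timestamps[j] - post_timestamps[i] < 60:
            j += 1
        if j - i > max_posts_per_minute:
            return True
    return False
```

An account posting thirty times in half a minute would be flagged; a human posting a few times an hour would not.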

International drug, weapons and people trafficking

Serious organised crime (SOC) groups make full use of the opportunities presented by new technologies, including the internet, dark web, crypto currencies and encryption tools.

AI can be used in several ways to help shut down these international SOC conglomerates:

  • Combining information from a number of sources to predict the likely locations of the next cargo drops.
  • Finding the connections between multi-layered networks that work together, globally, to fulfil the supply and delivery chains, and sorting out patterns that identify the individual players through their multi-step online transactions and multiple accounts.
  • ‘Following the money’ to find evidence against terrorist organisations or “state sponsors” through large-scale analysis of these trades.
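At its simplest, ‘following the money’ amounts to tracing chains through a directed graph of transfers. A minimal sketch, assuming nothing about real investigative tooling – all account names below are invented:

```python
# Toy "follow the money" sketch: breadth-first search through a
# directed graph of transfers to find the shortest chain linking
# two accounts of interest.
from collections import deque


def money_trail(transfers, source, target):
    """transfers: iterable of (payer, payee) pairs.
    Returns the shortest chain of accounts from source to target,
    or None if no chain exists."""
    graph = {}
    for payer, payee in transfers:
        graph.setdefault(payer, []).append(payee)

    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

Breadth-first search guarantees the shortest chain is found first, which matters when layered shell accounts create many longer, noisier routes between the same endpoints.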

What’s the ‘ethics’ part of this, for GCHQ?

The Executive Summary of this report explains an interesting aspect to the ethical considerations of developing their new AI technologies.

“Thinking about AI encourages us to think about ourselves, and what it means to be human: our preferred way of life, our guiding values and our common beliefs. The field of AI ethics has emerged over the last decade to help organisations turn these ethical principles into practical guidance for software developers – helping to embed our core values within our computers and software…Left unmanaged, our use of AI incorporates and reflects the beliefs and assumptions of its creators – AI systems are no better or no worse than the human beings that create them.”

Other specific questions are raised throughout the report:

  • How do you ensure that fairness and accountability are embedded in these new systems, for example?
  • How do you prevent AI systems replicating existing power imbalances and discrimination in society?
  • How should the traditional international rules-based system respond to AI and other emerging technologies?
  • How can governments and citizens build institutions capable of engaging with this digital age?

The whole report is part of GCHQ's attempt to avoid the criticism of how they use data that they have received in the past: to be as transparent as possible, even though a lot of their work is, necessarily, secret, and to reassure us that they are embedding ethical considerations into the development of any new technology. This is an interesting discussion for anyone involved in AI innovations. Do you have an AI ethics policy to support your R&D?

Jamie Smith