7 August 2023

UK calls artificial intelligence a “chronic risk” to its national security


The National Risk Register officially classes AI as a long-term security threat to the UK’s safety and critical systems.

Artificial intelligence (AI) has been officially classed as a security threat to the UK for the first time following the publication of the National Risk Register (NRR) 2023. The extensive document details the various threats that could have a significant impact on the UK's safety, security, or critical systems at a national level. The latest version describes AI as a "chronic risk", meaning it poses a threat over the long term, as opposed to an acute one such as a terror attack. The UK government also raised the assessed impact of cyber attacks from limited to moderate in the 2023 NRR.

Advanced AI technology could, at some stage, pose a significant security threat if it were used to launch a cyber attack against the UK. Meanwhile, the various cybersecurity implications of advancing AI technology such as generative AI are well documented.

AI poses a range of potential risks to the UK

"AI systems and their capabilities present many opportunities, from expediting progress in pharmaceuticals to other applications right across the economy and society, which the Foundation Models Taskforce aims to accelerate," the document reads. "However, alongside the opportunities, there are a range of potential risks and there is uncertainty about its transformative impact. As the government set out in the Integrated Review Refresh, many of our areas of strategic advantage also bring with them some degree of vulnerability, including AI."

"For this reason, the UK government has committed to hosting the first global summit on AI safety, which will bring together key countries, leading tech companies and researchers to agree safety measures to evaluate and monitor risks from AI," the NRR reads. "The National AI Strategy, published in 2021, outlines steps for how the UK will begin its transition to an AI-enabled economy, the role of research and development in AI growth and the governance structures that will be required."

Meanwhile, the government's white paper on AI, published in 2023, commits to establishing a central risk function that will identify and monitor the risks that come from AI. "By addressing these risks effectively, we will be better placed to utilise the advantages of AI." The NRR's treatment of AI stops short of a detailed assessment of how the technology could threaten the UK, but it does cite disinformation and potential threats to the economy.

Latest NRR helps the UK better prepare for risks

"This is the most comprehensive risk assessment we've ever published, so that government and our partners can put robust plans in place and be ready for anything," said Oliver Dowden, deputy prime minister.

The document provides invaluable information, giving the UK the power to invest in, prepare for, and respond to risks more effectively, added Rick Cudworth, Resilience First chair and board director. "With more detail than previously, and specific scenarios, assumptions, and response capabilities set out, we encourage organisations and resilience professionals to use it to stress test and strengthen their own resilience as we all move forwards together."

It is encouraging that the government is committed to further assessing and mitigating vulnerabilities to acute risks, commented James Ginns, head of risk management policy at the Centre for Long-Term Resilience. "We look forward to supporting their work in identifying and assessing chronic risks and related vulnerabilities, especially in AI and biosecurity, in order to reinforce our resilience."
