Artificial intelligence (AI) is having a transformative impact on organisations and public services, but it is vital that efforts are made to protect AI systems from growing cyber security threats. The government is therefore taking steps to address the cyber security risks to AI.
Following the National Cyber Security Centre’s (NCSC) Guidelines for secure AI system development, a Call for Views and the publication of supporting research, the government is now publishing a world-leading voluntary Code of Practice, which will be used as the basis for a new global standard.
From securing AI systems against hacking and sabotage to ensuring they are developed and deployed securely, the Code and future standard will help developers and system operators build secure, innovative AI products that drive growth and support the Plan for Change. This will give organisations confidence to adopt the technology, which will enhance public services and productivity and increase growth across the economy.
An overarching theme in responses to the Call for Views was that organisations would benefit from additional guidance explaining how to implement the Code of Practice. The government has therefore published an implementation guide to support organisations.
DSIT, in close collaboration with the NCSC, will work within the European Telecommunications Standards Institute (ETSI) to develop a global standard and accompanying guide. We will also engage with organisations across different sectors of the economy to support their adoption of the security requirements and to encourage participation in the standards development process. This work forms part of DSIT’s wider technology security programme and its secure by design approach across all digital technologies. Further information on how this work links with DSIT’s other cyber security codes is also available.
Brief
On 23/02/2023, the Department for Science, Innovation & Technology issued an update regarding AI cyber security. The government is publishing a world-leading voluntary Code of Practice to secure AI systems against hacking and sabotage, which will be used as the basis for a new global standard. This code aims to help developers and system operators build secure, innovative AI products that drive growth and support public services, with an accompanying implementation guide to support organisations.
Purpose
The purpose of this update is to address the growing cyber security threats to artificial intelligence (AI) systems by publishing a world-leading voluntary Code of Practice. The Code provides guidance on securing AI systems against hacking and sabotage and on developing and deploying them securely, helping to drive growth and support the Plan for Change.
The government has taken steps to address these risks, building on the NCSC’s Guidelines for secure AI system development and the research it has published. The Code of Practice will be used as the basis for a new global standard, helping developers and system operators build secure, innovative AI products that drive growth and support public services.
The government has also published an implementation guide to support organisations in applying the Code of Practice. It provides additional guidance on how to implement the Code, giving organisations the resources they need to adopt the security requirements and take part in the standards development process.
Effects on Industry
The effects of this update on industry will be significant. The publication of a voluntary Code of Practice and implementation guide will give organisations the guidance they need to build secure AI systems, reducing the risk of cyber attacks and sabotage. This will give organisations confidence to adopt AI technology, enhancing public services and productivity and increasing growth across the economy.
The development of a global standard through the European Telecommunications Standards Institute (ETSI) will also have a positive impact on industry. Participation by organisations from different sectors of the economy in the standards development process will help ensure that the security requirements are tailored to their specific needs, making it easier for them to adopt AI technology.
Relevant Stakeholders
The relevant stakeholders affected by this update include:
- Organisations developing and deploying AI systems
- System operators using AI technology
- Developers creating innovative AI products
- Public services relying on AI technology
- Consumers who will benefit from the increased security and reliability of AI systems
These stakeholders will be directly affected by the publication of the Code of Practice, implementation guide and global standard. Although the Code is voluntary, those who adopt it will need to align their practices with its requirements to ensure that their AI systems are secure and reliable.
Next Steps
The next steps for organisations include:
- Reviewing the voluntary Code of Practice and implementation guide
- Implementing the security requirements outlined in the Code
- Participating in the standards development process through ETSI
- Engaging with the government and other stakeholders to provide feedback and input on the new global standard
Organisations will also need to take action to ensure that their AI systems are secure and reliable, including conducting risk assessments, implementing security controls, and testing for vulnerabilities.
Any Other Relevant Information
This update forms part of the government’s wider technology security programme and its secure by design approach across all digital technologies. The development of a global standard through ETSI is a key aspect of this programme, ensuring that the UK is at the forefront of AI security and leading the way in international standards development.
The government will continue to engage with organisations across different sectors of the economy to support their adoption of the security requirements and encourage participation in the standards development process. This work will be closely monitored, and stakeholder feedback will be taken into account to ensure that the new global standard meets the needs of industry.