UK NCSC CEO Lindy Cameron reflects on the key cybersecurity challenges the UK faces from rapidly developing AI technologies like generative AI and LLMs.

The cybersecurity industry cannot rely on its ability to retrofit security into developing machine learning (ML) and artificial intelligence (AI) technology to prevent security risks introduced by innovations such as generative AI and large language models (LLMs), according to Lindy Cameron, CEO of the UK National Cyber Security Centre (NCSC). Cameron was speaking today in the opening keynote of the Chatham House Cyber 2023 conference, where she addressed the key cybersecurity challenges the UK faces from rapidly developing AI technologies such as OpenAI’s ChatGPT chatbot.

Security has often been a secondary consideration when the pace of technology development is high, but AI developers must predict possible attacks and identify ways to mitigate them, Cameron said. Failure to do so will risk designing vulnerabilities into future AI systems, she warned. “Amid the huge dystopian hype about the impact of AI, I think there is a danger that we miss the real, practical steps that we need to take to secure AI.”

UK NCSC focuses on three elements to help secure developing AI

Being secure is an essential prerequisite for ensuring that AI is safe, ethical, explainable, reliable, and as predictable as possible, Cameron said. “Users need reassurance that machine learning is being deployed securely, without putting personal safety or personal data at risk. In addition to the overarching need for security to be built into AI and ML systems, and for companies profiting from AI to be responsible vendors, the NCSC is focusing on three elements to help with the cybersecurity of AI.”

First, the NCSC believes it is essential that organisations using AI understand the risks they are running by using it – and how to mitigate them, Cameron stated.
“For example, machine learning introduces an entirely new category of attack: adversarial attacks. As machine learning is so heavily reliant on the data used for the training, if that data is manipulated, it creates potential for certain inputs to result in unintended behaviour, which adversaries can then exploit.”

LLMs pose entirely different security challenges, Cameron continued. “For example – an organisation’s intellectual property or sensitive data may be at risk if their staff start submitting confidential information into LLM prompts.”

As the disruptive power of AI becomes increasingly apparent, CEOs at major companies will be making investment decisions about AI, and we need to ensure that security considerations are central to these deliberations, Cameron argued.

Second, there is a need to maximise the benefits of AI to the cyber defence community. “AI has the potential to improve cybersecurity by dramatically increasing the timeliness and accuracy of threat detection and response. We [also] need to remember that in addition to helping make our country safer, the AI cybersecurity sector also has huge economic potential.”

Third, the cybersecurity sector must understand how adversaries – whether they are hostile states or cybercriminals – are using AI, and how to disrupt them, Cameron said. “We can be in no doubt that our adversaries will be seeking to exploit this new technology to enhance and advance their existing tradecraft.”

China is positioning itself to be a world leader in AI and, if successful, we must assume that it will use this to secure a dominant role in global affairs, Cameron added. “LLMs also present a significant opportunity for states and cybercriminals too. They lower barriers to entry for some attacks.
For example, they make writing convincing spear-phishing emails much easier for foreign nationals without strong linguistic skills.”
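The training-data manipulation Cameron describes can be sketched in a few lines. Below is a toy, invented illustration – not anything the NCSC published – using a 1-nearest-neighbour classifier whose verdict on a chosen input flips after an attacker injects a single mislabelled point into the training set:

```python
# Toy data-poisoning sketch (illustrative only; all points and labels invented).
# A 1-nearest-neighbour classifier labels an input by its closest training point,
# so one well-placed mislabelled sample can change the verdict for that input.

def predict(train, x):
    """Return the label of the training point nearest to x (squared distance)."""
    return min(train, key=lambda pl: sum((pl[0][i] - x[i]) ** 2 for i in range(2)))[1]

# Clean training data: "benign" samples near the origin, "malicious" far away.
train = [
    ((0, 0), "benign"), ((1, 0), "benign"),
    ((10, 10), "malicious"), ((11, 10), "malicious"),
]

target = (2, 2)
before = predict(train, target)   # nearest neighbour is (1, 0) -> "benign"

# Poisoning: the attacker slips one mislabelled point in next to the target.
train.append(((2, 1), "malicious"))
after = predict(train, target)    # the planted point is now nearest -> "malicious"

print(before, after)
```

Real poisoning attacks on deep models are far subtler, but the principle is the one Cameron states: control over training data translates into control over behaviour on chosen inputs.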