The UK's House of Lords Communications and Digital Committee will next week open its inquiry into large language models (LLMs) with evidence from leading figures in the artificial intelligence (AI) sector, including Ian Hogarth, chair of the government's AI Foundation Model Taskforce.

The Committee will assess LLMs and what needs to happen over the next three years to ensure the UK can respond to the opportunities and risks they introduce. LLMs are a form of generative AI that has seen significant advances in capability in recent years, including the development of OpenAI's GPT-3 and GPT-4 models. They hold immense potential for organizations and are an asset for companies that generate large amounts of data. However, the use and deployment of LLMs is not without risk, with security threats ranging from prompt injection and data leakage to inadequate sandboxing and unauthorized code execution.

Inquiry will explore uniqueness of LLMs, government's role in addressing risks

The first evidence session will take place on Tuesday, September 12, at the House of Lords. Giving evidence to the committee will be Ian Hogarth, chair, Foundation Model Taskforce; Jean Innes, incoming CEO, The Alan Turing Institute; Professor Neil Lawrence, DeepMind professor of machine learning at the University of Cambridge; and Ben Brooks, head of public policy, Stability AI.
Issues that will be covered in the session include:

- How LLMs differ from other forms of AI and how they are likely to evolve over the next three years
- The role and structure of the Foundation Model Taskforce, its objectives, priorities, and investment plans
- The appropriate role for government in responding to the opportunities and risks presented by LLMs, the adequacy of government preparedness, and priorities for action
- The differences between open- and closed-source language models and the implications of how these are likely to develop

AI a threat to NHS, UK national security

This week, Hogarth warned that cybercriminals could use AI to attack the National Health Service (NHS). He said that AI could be weaponized to disrupt the NHS, potentially rivalling the impact of the COVID-19 pandemic or the WannaCry ransomware attack of 2017. He highlighted the risks of AI systems being used to launch cyberattacks on the health service, or even to design pathogens and toxins. Meanwhile, advances in AI technology, particularly in code writing, are lowering the barriers for cybercriminals to carry out attacks, he added.

Last month, AI was officially classed as a security threat to the UK for the first time following the publication of the National Risk Register (NRR) 2023. The extensive document details the various threats that could have a significant impact on the UK's safety, security, or critical systems at a national level. The latest version describes AI as a "chronic risk," meaning it poses a threat over the long term, as opposed to an acute one such as a terror attack.