The UK's House of Lords Communications and Digital Committee will next week open its inquiry into large language models (LLMs) with evidence from leading figures in the artificial intelligence (AI) sector, including Ian Hogarth, chair of the government's AI Foundation Model Taskforce. The Committee will assess LLMs and what needs to happen over the next three years to ensure the UK can respond to the opportunities and risks they introduce.

LLMs are a form of generative AI that has seen significant advances in capability in recent years, including the development of OpenAI's GPT-3 and GPT-4 models. They have immense potential for organizations and are an asset for companies that generate large amounts of data. However, the use and deployment of LLMs are not without risk, with LLM security threats ranging from prompt injection and data leakage to inadequate sandboxing and unauthorized code execution.

Inquiry will explore uniqueness of LLMs, government's role in addressing risks

The first evidence session will take place on Tuesday, September 12, at the House of Lords. Giving evidence to the committee will be Ian Hogarth, chair, Foundation Model Taskforce; Jean Innes, incoming CEO, The Alan Turing Institute; Professor Neil Lawrence, DeepMind professor of machine learning at the University of Cambridge; and Ben Brooks, head of public policy, Stability AI.

Issues that will be covered in the session include:

AI a threat to NHS, UK national security

This week, Hogarth warned that cybercriminals could use AI to attack the National Health Service (NHS). He said that AI could be weaponized to disrupt the NHS, potentially rivalling the impact of the COVID-19 pandemic or the WannaCry ransomware attack of 2017. He highlighted the risks of AI systems being used to launch cyberattacks on the health service, or even to design pathogens and toxins.
Meanwhile, advances in AI technology, particularly in code writing, are lowering the barriers for cybercriminals to carry out attacks, he added.

Last month, AI was officially classed as a security threat to the UK for the first time following the publication of the National Risk Register (NRR) 2023. The extensive document details the various threats that could have a significant impact on the UK's safety, security, or critical systems at a national level. The latest version describes AI as a "chronic risk," meaning it poses a threat over the long term, as opposed to an acute one such as a terror attack.