Americas

  • United States

Asia

Oceania

mhill
UK Editor

UK’s Communications and Digital Committee to open inquiry into LLMs

News
Sep 07, 20233 mins
Critical InfrastructureGenerative AIRegulation

Committee to assess large language models and what needs to happen over the next three years to ensure the UK can respond to the risks they introduce.

Authentic Office: Enthusiastic Black IT Programmer Starts Working on Desktop Computer. Male Website Developer, Software Engineer Developing App, Video Game. Terminal with Coding Programming Language
Credit: Gorodenkoff / Shutterstock

The UK's House of Lords Communications and Digital Committee will next week open its inquiry into large language models (LLMs) with evidence from leading figures in the artificial intelligence (AI) sector including Ian Hogarth, chair of the government's AI Foundation Model Taskforce. The Committee will assess LLMs and what needs to happen over the next three years to ensure the UK can respond to the opportunities and risks they introduce.

LLMs are a form of generative AI which has seen significant advances in capability in recent years, including the development of OpenAI's GPT-3 and GPT-4 models. They have immense potential for organizations and are an asset for companies that generate large amounts of data. However, use and deployment of LLMs is not without risk, with LLM security threats ranging from prompt injections and data leakage to inadequate sandboxing and unauthorized code execution.

Inquiry will explore uniqueness of LLMs, government's role in addressing risks

The first evidence session will take place Tuesday, September 12, at the House of Lords. Giving evidence to the committee will be Ian Hogarth, chair, Foundation Model Taskforce; Jean Innes, incoming CEO, The Alan Turing Institute; Professor Neil Lawrence, DeepMind professor of machine learning at the University of Cambridge; and Ben Brooks, head of public policy, Stability AI.

Issues that will be covered in the session include:

  • How LLMs differ from other forms of AI and how they are likely to evolve over the next three years
  • The role and structure of the Foundation Model Taskforce, its objectives, priorities, and investment plans
  • The appropriate role for government in responding to the opportunities and risks presented by LLMs, the adequacy of government preparedness, and priorities for action
  • The differences between open and closed source language models and the implications of how these are likely to develop

AI a threat to NHS, UK national security

This week, Hogarth warned that cybercriminals could use AI to attack the National Health System (NHS). Hogarth said that AI could be weaponized to disrupt the NHS, potentially rivalling the impact of the COVID-19 pandemic or the WannaCry ransomware attack of 2017. He highlighted the risks of AI systems being used to launch cyberattacks on the health service, or even to design pathogens and toxins. Meanwhile, advances in AI technology, particularly in code writing, are lowering the barriers for cybercriminals to carry out attacks, he added.

Last month, AI was officially classed as a security threat to the UK for the first time following the publication of the National Risk Register (NRR) 2023. The extensive document details the various threats that could have a significant impact on the UK's safety, security, or critical systems at a national level. The latest version describes AI as a "chronic risk," meaning it poses a threat over the long term, as opposed to an acute one such as a terror attack.

mhill
UK Editor

Michael Hill is the UK editor of CSO Online. He has spent the past 8 years covering various aspects of the cybersecurity industry, with particular interest in the ever-evolving role of the human-related elements of information security. A keen storyteller with a passion for the publishing process, he enjoys working creatively to produce media that has the biggest possible impact on the audience.

More from this author