The UK House of Commons Science, Innovation and Technology Committee (SITC) has published an interim report urging the government to accelerate its implementation of a regulatory regime for AI, setting out 12 challenges of AI governance that policymakers and the frameworks they design must meet.

There is a growing imperative to ensure AI governance and regulatory frameworks are not left irretrievably behind by the pace of technological innovation, the report states. Policymakers must take measures to safely harness the benefits of AI technology and encourage future innovation, while providing credible protection against harm, it adds.

The report comes as the UK prepares to host the Global AI Safety Summit in November. In March, the UK government set out its proposed "pro-innovation approach to AI regulation" in a white paper, outlining five principles to frame regulatory activity and guide the future development and use of AI models and tools. Earlier this week, the UK National Cyber Security Centre (NCSC) published a pair of blog posts highlighting the importance of established cybersecurity principles when developing or implementing machine learning models, and calling for caution around the development and use of generative AI large language models (LLMs).

UK must introduce AI-specific legislation soon

The UK government should prioritize introducing AI-specific legislation in the next session of Parliament, a summary of the report stated. "A tightly focused AI Bill in the next King's Speech would help, not hinder, the Prime Minister's ambition to position the UK as an AI governance leader.
Without a serious, rapid, and effective effort to establish the right governance frameworks - and to ensure a leading role in international initiatives - other jurisdictions will steal a march, and the frameworks they lay down may become the default even if they are less effective than what the UK can offer."

The challenges highlighted in the report should form the basis for discussion, with a view to advancing a shared international understanding of the challenges of AI as well as its opportunities, it added. A forum should also be established for like-minded countries that share liberal, democratic values, to ensure mutual protection against actors, state and otherwise, who are enemies of these values and would use AI to achieve their ends, the summary read.

12 challenges of AI governance that must be met

The SITC report identifies 12 challenges of AI governance that policymakers and the frameworks they design must meet. These are:

The bias challenge: AI can introduce or perpetuate biases that society finds unacceptable.

The privacy challenge: AI can allow individuals to be identified and personal information about them to be used in ways beyond what the public wants.

The misrepresentation challenge: AI can allow the generation of material that deliberately misrepresents someone's behaviour, opinions, or character.

The access to data challenge: The most powerful AI needs very large datasets, which are held by few organisations.

The access to compute challenge: The development of powerful AI requires significant compute power, access to which is limited to a few organisations.

The black box challenge: Some AI models and tools cannot explain why they produce a particular result, which is a challenge to transparency requirements.

The open-source challenge: Requiring code to be openly available may promote transparency and innovation; allowing it to be proprietary may concentrate market power, but allow more dependable regulation of harms.
The intellectual property and copyright challenge: Some AI models and tools make use of other people's content. Policy must establish the rights of the originators of this content, and these rights must be enforced.

The liability challenge: If AI models and tools are used by third parties to do harm, policy must establish whether developers or providers of the technology bear any liability for harms done.

The employment challenge: AI will disrupt the jobs that people do and that are available to be done. Policymakers must anticipate and manage the disruption.

The international coordination challenge: AI is a global technology, and the development of governance frameworks to regulate its uses must be an international undertaking.

The existential challenge: Some people think that AI is a major threat to human life. If that is a possibility, governance needs to provide protections for national security.

New way of securing AI models is clearly needed

"Most AI models are open source or trained on public information - the next wave of AI models will need to be trained on private or proprietary information. This raises interesting security concerns," Laurie Mercer, security architect at HackerOne, tells CSO. For example, how can these models be protected against data breaches, unauthorized access, and cyberattacks?

"A new way of securing AI models is clearly needed," Mercer says. When cloud computing emerged, old vulnerabilities like server-side request forgery took on new meaning and new vulnerabilities like S3 bucket enumeration were discovered; the same will be the case with AI, he adds.

"The UK has already been a pioneer in AI. As the AI security domain emerges, the UK has the opportunity to lead in the setting of standards, the development of tools, and the discovery of novel vulnerabilities created in the new AI ecosystem.
The UK has a golden opportunity to leverage its existing talent, and develop new tools and techniques to secure the world's new AI-powered applications."
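Mercer's cloud analogy can be made concrete. Server-side request forgery gained new significance in cloud environments because a forged request to the link-local instance metadata endpoint (169.254.169.254) can leak cloud credentials. The sketch below, a minimal illustration not taken from the report (the function name and the screening policy are assumptions of this example), shows the kind of URL screening a server-side fetcher might apply to literal-IP URLs:

```python
import ipaddress
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Reject URLs whose host is a literal private, loopback, or
    link-local IP address (e.g. the 169.254.169.254 cloud metadata
    endpoint abused in cloud-era SSRF attacks)."""
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        # Hostname rather than a literal IP: a production filter would
        # resolve it and re-check the result; this sketch only screens
        # literal addresses.
        return True
    return not (addr.is_private or addr.is_link_local or addr.is_loopback)
```

A real SSRF defence would also resolve hostnames before checking (to block DNS-rebinding tricks) and apply an allow-list rather than a deny-list; the point here is only that old vulnerability classes demand new checks as the deployment environment changes.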