Recently, discussions on the race to develop Artificial Intelligence ("AI") – ranging from Elon Musk and Steve Wozniak's proposed six-month moratorium on training AI systems to Bill Gates' call for establishing "rules of the road" so that the benefits of AI exceed its risks – have dominated international debate. Against this backdrop, the UK Government ("Government") set out its current position in the race by publishing a white paper titled "A pro-innovation approach to AI regulation" ("White Paper"), detailing a proposed 'light touch' framework which seeks to balance regulation with spurring responsible AI innovation. However, more recent announcements by the Prime Minister, Rishi Sunak, may result in a re-assessment of this strategy, largely driven by the accelerated growth of generative AI and an awakening to the significant impact this may have on our lives and the economy.
This insight will provide an overview of the features of the proposed framework set out in the White Paper (the "Framework").
The White Paper is based on the original premise that a principles-based framework will ensure that the "…UK [is] on course to be the best place in the world to build, test and use AI technology…" (Rt Hon Michelle Donelan MP, Secretary of State for Science, Innovation and Technology). Fundamentally, it is based on existing regimes – it is in lieu of new legislation, and it does not amend the scope of existing legislation which relates to AI (e.g. data protection laws). According to the Government, the use of existing legislative regimes (coupled with proportionate regulatory intervention) will result in a future-proof framework that can be adapted according to AI trends, opportunities and risks. Whether this approach is retained after the current consultation remains to be seen.
In contrast to the light touch principles-based approach in the White Paper, the EU is over 2 years into the process of agreeing a detailed and prescriptive AI regulation (the "AI Act"). The AI Act is designed to regulate AI systems based on their level of risk to humans, prohibiting the use of particularly harmful AI systems, introducing stringent controls for "high-risk" AI, and imposing moderate transparency requirements for "low-risk" AI. Its granularity and detail are very similar to those of the General Data Protection Regulation.
Irrespective of the path ultimately to be taken by the UK, there are separate regimes on the horizon for the UK, EU and the rest of the world. Therefore, it is critical that businesses developing and offering AI products and systems, and those implementing AI into their operations, are aware of how the proposed rules will apply. Understanding the diverging regimes will be important in developing commercial approaches and preparing for compliance.
In light of the UK's ambition to become a science and technology superpower, the Framework aims to be an instrumental tool in "…getting regulation right…" so that international businesses feel confident in investing, and retaining their investment, in the UK. It aims to increase prosperity and growth in AI markets by removing barriers to innovation, augment public trust in AI systems to drive up AI adoption, and reinforce the UK's position as a global leader in AI.
The White Paper also notes that the Government wants to ensure that UK businesses benefit from global AI opportunities by managing cross-border risks in AI supply chains.
The Framework will apply to those developing, deploying and using AI systems across the UK, irrespective of whether they are based in the UK.
The Framework does not address all societal and global challenges associated with using and developing AI systems, such as data access and sustainability.
The Framework does not cover the allocation of liability during the AI life cycle. According to the White Paper, it would be premature at this stage to conclude on the topic of liability "…as it's a complex, rapidly evolving issue…", which requires careful manoeuvring so as to not disrupt the UK's AI ecosystem.
The Framework does not provide an oven-ready definition of AI. Instead, it describes AI by reference to "adaptivity" and "autonomy", features that are baked into the functionality of AI systems. It is designed to regulate the use – or in other terms, the outcomes – of AI systems, as opposed to regulating the technology itself. In summary:
Whilst this approach should avoid the application of a rigid definition, such a broad and flexible approach may lead to inconsistencies between regulators who could, in pure isolation, interpret the features according to the specificities of their respective industries. The Government recognises this potential pitfall and has put forward a combination of measures supporting inter-regulator coordination and centralised monitoring.
The Framework is based on a set of cross-sectoral principles ("Principles") that aim to encourage responsible AI design, development and use. The use of these Principles is intended to deliver a consistent and proportionate application of the Framework, whilst affording regulators a degree of flexibility in interpretation.
Since our insight on the DCMS AI Policy Statement, the Principles embedded in the Framework have been "…updated and strengthened…". The Principles now consist of:
Instead of establishing a standalone body for AI regulation, the White Paper proposes to enhance the remit and capacity of existing regulators to develop a sector-specific, principles-centred approach. Authorities such as the ICO, CMA, FCA, Ofcom, the Health and Safety Executive, and the Equality and Human Rights Commission will need to adhere to the Principles to foster trust and clarify guidelines for innovation.
As the Framework is in lieu of a standalone piece of cross-sectoral AI regulation, there is uncertainty as to how the Framework will effectively operate within "…a complex patchwork of legal requirements…". Without coordination, regulatory burdens on businesses could grow, leading to small players struggling to compete, market and public confidence in AI deteriorating, and innovation being stunted. In order to overcome this, the Government has proposed greater coordination at the central and regulator level to ensure that the Framework functions in a "cross-cutting, principles-based" manner.
The Government plans to offer centralised support for monitoring and evaluating the new AI regime, identifying barriers and inconsistencies, predicting emerging AI risks, fostering AI-focused sandboxes, promoting AI education for businesses and consumers, and maintaining compatibility with international frameworks. Although it is unclear which entity will fulfil this role, initial indications suggest responsibility will sit within government, potentially evolving into an independent regulator dedicated to AI in the future.
In practice, not all of the Principles will be relevant to a particular context and in some instances the Principles may come into conflict. In instances of conflict, regulators will be able to prioritise certain Principles in line with the White Paper's context-driven approach. The Government may adapt the Framework in the future should regulators find certain Principles irrelevant.
The Government may also adapt the Framework in light of the fact that some sectors, such as the AI-enabled military sector, already have their own principles which go beyond the scope of the Principles (e.g. The Ministry of Defence published its own AI strategy in June 2022).
The Framework will not operate in isolation; it puts forward a range of complementary tools:
The consultation under the White Paper closed on 21 June 2023, and within the next 6 months we should expect a more detailed response and potential guidance on the implementation of the White Paper principles and proposed framework. However, there is significant scope for the approach to change, and plenty of political talk of more internationally coordinated approaches.
A number of other regulatory developments in the UK, the EU and other jurisdictions will also affect the development, use and rollout of AI systems. Impacted businesses must therefore understand how these may affect their current and future AI endeavours. These developments include (but are not limited to):
AI providers must also consider non-legal regulation which may influence AI systems, such as AI assurance frameworks and regulatory guidance.
Whilst the regulatory frameworks in the UK and EU, and around the world, are yet to be finalised, there is sufficient information and there are sufficient common themes for organisations to implement steps to prepare for the new requirements that lie ahead. In fact, making headway on these now will almost certainly ease the compliance burden down the line. Actions to consider include:
Whilst the Framework's pro-innovation stance is intended to provide flexibility in how the use of AI is controlled in the UK, the fact that the Framework has to operate within a patchwork of regulations may create gaps that are, in practice, too burdensome and complicated to fill effectively. The White Paper's inclusion of complementary tools and centralised functions appears helpful in addressing this concern (if implemented properly by regulators); however, such a wide range of regulator tools and partly centralised mechanisms might lead to confusion if coordination and collaboration are not encouraged to the extent required to effect any meaningful change on the UK's AI regulatory landscape.
We eagerly await the outcome from the consultation on the White Paper and look forward to providing an update on the UK's approach in due course.