August 25, 2022

AI Regulation in the U.K. — New Government Approach

On July 18, 2022, the U.K. Government published a paper setting out its proposals for AI regulation, “Establishing a pro-innovation approach to regulating AI” (the AI Paper). The AI Paper was published alongside the Government’s AI Action Plan, the first update since the Government published its National AI Strategy in September 2021.

The AI Paper proposes an approach to AI regulation in the U.K. that differs markedly from the draft legislation recently proposed in the EU (the EU AI Act). The U.K. Government favours a more decentralised and less regimented approach: guidance rather than legislation; sector-based rather than cross-sector application; regulation at sector level rather than centrally; and a looser definition of what constitutes AI for the purposes of regulation. This is intended to make the U.K. an attractive environment for AI innovation, with more flexible and pragmatic regulation, although AI businesses operating in multiple sectors may need to review and comply with more than one set of principles and resolve any conflicts between them.

Definition of AI

The U.K. Government intends to “regulate the use of AI rather than the technology itself,” placing significant emphasis on effect rather than process. Accordingly, the AI Paper sets out the core characteristics of AI to inform the scope of regulation, with the intention that regulators will develop more detailed sector-based definitions. The AI Paper gives examples of regulators that are already taking action to support the responsible use of AI, or working together on AI issues, including the U.K. Information Commissioner’s Office, the Equality and Human Rights Commission, the Medicines and Healthcare products Regulatory Agency, the Health and Safety Executive, the Bank of England and the Financial Conduct Authority.

The core characteristics given in the AI Paper are the “adaptability” and “autonomy” of a system. “Adaptive” systems are those which operate on the basis of instructions that have not been expressly programmed with human intent but have instead been “learnt” or “trained” from data. To be sufficiently “autonomous,” a system must not require the ongoing control of a human. The U.K.’s approach to defining AI systems can therefore adapt as the technology develops, even to the point of reaching beyond the traditional general definition of AI as machine learning or deep learning towards other means of replicating a human-like response.

By contrast, the EU defines AI systems with a tighter, less flexible definition, set out in Article 3 of the EU AI Act, which details (in Annex I) the specific techniques and approaches that must be used in the development of software for it to qualify as an AI system. The Annex I techniques and approaches currently comprise machine learning approaches, logic- and knowledge-based approaches, and statistical approaches, but may be adapted over time as new techniques emerge.

U.K. vs. EU Approach to Regulation

U.K.

The AI Paper provides that AI should be regulated by existing regulatory bodies, allowing a sector-based approach that accounts for the differences in how AI is used, and its impact, in different contexts. To ensure sufficient uniformity of approach and to minimise confusion, the AI Paper sets out suggested overarching principles that would apply across the board and inform any sector-specific guidance.

The six principles are:

  • Ensuring AI is used safely.
  • Ensuring AI is technically secure, and functions as designed.
  • Ensuring AI is appropriately transparent and explainable.
  • Embedding fairness into AI.
  • Defining responsibility for AI governance.
  • Ensuring clarity of redress or contestability.

EU

The EU, by contrast, sets out in its draft legislation a centralised framework, regulated at EU level and overseen by a new European Artificial Intelligence Board advising the Commission. There are significant fines for breach: the highest penalties, up to the greater of EUR 30,000,000 or 6% of global annual turnover (for example, up to EUR 60,000,000 for a company with global turnover of EUR 1 billion), exceed even those available for a breach of the GDPR. Applications of AI are assigned to separate risk categories, with different requirements depending on the category.

The highest-risk systems are banned outright; these include the use of AI for subliminal manipulation or the exploitation of vulnerabilities, and the use of “social scoring” systems by public authorities to make general decisions affecting people’s lives on the basis of irrelevant factors. Systems deemed to be of minimal or no risk are permitted without restriction, although operators will be encouraged to self-regulate and to introduce measures themselves where sensible.

“High-risk” systems would be heavily regulated in a far more prescriptive way than anything currently proposed in the U.K. These are systems used in certain fields listed in the draft EU AI Act, such as biometric identification, the management of critical infrastructure and employment. Operators of high-risk systems (both providers and users) must meet certain obligations, including establishing a risk management system, meeting standards for data governance and record keeping, and ensuring effective human oversight.

There are additional transparency requirements for certain AI systems, such as the requirement to label deep fakes and to inform people where they are interacting with an AI system.

Although the U.K.’s AI Paper does not set out in detail the nature of the requirements that will need to be met by operators of regulated systems, it seems likely that sector regulators will draw from the EU approach for guidance on appropriate ways to ensure that the principles defined in the AI Paper are met.

U.K. Principles in More Detail

AI systems should be used safely.

This principle lends itself well to a sector-specific approach. There will clearly be circumstances in which safety is a more pressing concern, such as use in healthcare or critical infrastructure. Regulators will need to ensure that safety controls, such as override mechanisms, are in place, and that safety is effectively embedded in any AI system operating in a high-risk environment.

AI systems should be technically secure and function as designed.

This principle links to consumer expectations and mirrors, to some extent, the general consumer protections in place in the U.K. Essentially, an AI system should do what was intended and what is claimed. The data used in the system must be “relevant, high quality, representative and contextualised”. Subject to what is considered proportionate, this should be tested and proven.

AI systems should be appropriately transparent and explainable.

AI systems should not operate opaquely, particularly where outcomes could be of vital importance, such as in assessing job applications. The key term in this principle is “appropriately”: full explainability will be more necessary in some circumstances than in others, and will always be a challenge given the very nature of AI systems.

AI systems should be fair.

As with the other principles, fairness will clearly matter more in some sectors and circumstances than in others. In particular, the AI Paper highlights that where AI has a significant impact on people’s lives, such as in credit scoring, regulators must act to ensure fairness in the AI systems used within their remit.

Someone should be responsible.

There must always be someone who can be held liable for an AI system, even though it is the system itself that has autonomy in decision-making. Liability must lie with an identified or identifiable person or legal entity, and organisations should not be able to wash their hands of responsibility by claiming that they do not know how an AI system they are using produces its outputs.

Decisions should be contestable where reasonable.

Any person subject to a decision taken using an AI system should be able to contest it, so there must be systems in place to allow this to happen. As with the other principles set out in the AI Paper, this is subject to proportionality and context: the ability to contest a decision will be more important in some circumstances, such as decisions about benefits or those taken in an education or employment context.

Comment and Next Steps

The AI Paper gives a clear indication of how the U.K. Government plans to approach the regulation of AI and machine learning, but this is just the start. Unlike the detailed legislation proposed by the EU, it provides a framework rather than specific rules and requirements for businesses to follow day-to-day. Over time, as U.K. regulators develop specific policies for organisations operating in their sectors, the true impact on businesses operating in the U.K. will become clearer.

The Government is consulting on the approach set out in the AI Paper, with views and evidence accepted until September 26, 2022. The key areas still under consideration include the proposed framework and approach in its entirety, how it will be put into practice in terms of regulators’ powers and remits, and how progress will be assessed. The consultation provides an opportunity for all businesses with an interest in this area to have their say and influence how policy is shaped; a white paper is then expected towards the end of the year. Given that the U.K. is the second-largest economy in Europe and a key market for products and services, U.S. businesses operating in or selling into the U.K. should follow these developments and contribute to the consultation process.