The European Commission’s proposed approach to the regulation of AI was published last week and could have profound implications for your technology-enabled business and life. This is not a niche corner of regulation: it could apply widely, to everything from smart products to social media platforms and connected retailers.
This note distils the proposal into a few key points so you can quickly get across the main areas the new Regulation is proposed to cover, which will be hotly debated in the months to come. We will be looking at this further from a number of perspectives over the coming weeks.
This is your speed read:
- Concept: The proposed Regulation forms the foundation of the EU’s “ecosystem of trust” in AI. A few uses of AI will be prohibited altogether. Otherwise, the idea is to impose common standards on those AI systems which affect EU citizens in the most significant ways. From a lawyer’s point of view, the EU proposes to achieve these aims through risk-based regulation similar in approach to the existing data protection and product safety regimes.
- Extra-territorial scope: The Regulation will apply to high-risk AI which is available within the EU, used within the EU or whose output affects people in the EU. Because the aim is to reassure EU citizens and protect their key rights, it is irrelevant whether or not the provider or user is within the EU. So, for example, where the AI is hosted on a server outside the EU, and/or the decisions which the AI makes or enhances relate to an activity carried out outside the EU, the regime can still apply.
- Sanctions: Fines can be very substantial, reaching up to €30 million or 6% of global turnover (whichever is higher). These could apply in addition to fines for overlapping breaches under other regimes, such as the up to 4% of global turnover under the GDPR.
- AI: Artificial Intelligence is defined very broadly and will capture a number of technologies already in wide use. The definition is also future-proofed, because the proposal includes granting the Commission powers to update the techniques and approaches which fall within scope.
- Prohibited practices: AI practices which have significant potential to contravene fundamental rights or which seek to manipulate or take advantage of certain categories of person will be prohibited – including general surveillance, adverse behavioural advertising and social scoring.
- High-risk AI systems: AI may be classified as high-risk because of safety implications or because it can impact fundamental rights; Annex III of the Regulation includes a new list of AI systems deemed ‘high-risk’. In practice, many common uses of technology will fall into this category and become subject to the full compliance regime. For the remaining, non-high-risk AI, a few basic controls will apply, but the Commission will also encourage voluntary codes of conduct and additional commitments by providers.
- Providers: The most onerous controls will apply to Providers – a person or organisation that develops an AI system, or that has an AI system developed, with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge. Many of the controls which will apply to Providers will be familiar to those who have been tracking AI developments and proposals more generally: transparency; accuracy, robustness and security; accountability; testing; data governance and management practices; and human oversight. EU-specific aspects include the requirement for Providers to self-certify high-risk AI by carrying out conformity assessments and affixing the CE marking; the requirement that high-risk AI be registered on a new EU database before first use or provision; and the requirement for Providers to have in place an incident reporting system and take corrective action for serious breaches or non-compliance.
- Importers, Distributors and Users: Other participants in the high-risk AI value chain will also be subject to new controls. For example, Importers will need to ensure that the Provider carried out the conformity assessment and drew up the required technical documentation. Users will be required to use the AI in accordance with its instructions, monitor it for problems (and flag any to the Provider or distributor), keep logs, and so on. Personal use is excluded from the User obligations.
- Next steps: The proposal will be subject to intense scrutiny and lobbying, and could still be amended before adoption by the Council of the European Union and the European Parliament. Timescales are difficult to predict, but once finalised and formally published there will be a lead time of at least two years before it takes effect. This will allow Providers and others in the value chain time to ensure compliance in relation to existing AI.
More will follow from a UK perspective shortly.