Anyone who sells a device that captures facial images or uses artificial intelligence to process personal data, and who is considering moving their sales efforts into Europe, should take note of the recent proposals made by the European Commission.
On April 21, 2021, the European Commission published its long-awaited proposal for a Regulation on Artificial Intelligence (“AI Regulation”).
The proposed AI Regulation introduces a first-of-its-kind, comprehensive, harmonized regulatory framework for artificial intelligence, similar in ambition and reach to the General Data Protection Regulation (GDPR).
The AI Regulation adopts a broad regulatory scope, covering all aspects of the lifecycle of the development, sale and use of AI systems, including:
- Placing AI systems on the market;
- Putting AI systems into service; and
- Making use of AI systems.
Regulatory requirements are tiered according to the inherent risk associated with the AI system or practice being used. The tiers are:
- Prohibited AI practices: particularly intrusive methods of deploying AI, including social scoring, large-scale surveillance, and adverse behavioral targeting.
- High-risk AI systems: AI systems deployed in relation to credit scoring; essential public infrastructure; social welfare and justice; medical and other regulated devices; and transportation systems.
- Lower-risk AI systems: AI systems outside the scope of those identified as high risk.
Definition of AI System
The definition of an AI system covers:
Machine-learning approaches; logic- and knowledge-based approaches; and statistical approaches, which generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.
Prohibited AI Practices
The AI Regulation prohibits specific AI practices (rather than AI systems) which are considered to create an unacceptable risk (e.g., by violating fundamental rights). These cover:
- AI-based dark patterns and micro-targeting: These are AI systems that deploy subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behavior, whether individually or as part of a group;
- AI-based social-scoring: These are AI systems used by public authorities for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behavior or personality characteristics, with the social score leading to detrimental/unfavorable treatment in social contexts unrelated to the context in which the data was gathered;
- The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, except for certain immediate threats.
High-Risk AI Systems
The definition of a high-risk AI system covers:
- AI systems intended to be used as a safety component; and
- Stand-alone AI systems whose use may have an impact on fundamental rights, for example, ‘real-time’ and ‘post’ biometric identification systems.
Requirements applicable to high-risk AI systems
Complete and up-to-date technical documentation must be maintained and the outputs of the high-risk AI system must be verifiable and traceable. The system must be registered.
A risk management system must be established, implemented, documented and maintained as a continuous iterative process run throughout the entire lifecycle of the system.
Any data sets used to support training, validation and testing must be subject to appropriate data governance and management practices and must be relevant, representative, free of errors and complete and have the appropriate statistical properties to support the system use.
AI systems must be designed and developed in such a way that there is effective human oversight.
In addition to the above, providers must also:
- Set-up, implement and maintain a post-market monitoring system;
- Ensure the system undergoes the relevant conformity assessment procedure (prior to placing it on the market/putting it into service) and draw up an EU declaration of conformity;
- Immediately take corrective action in relation to non-conforming high-risk AI systems, and inform the national competent authority of the non-compliance and the actions taken; and
- Upon request of a national competent authority, demonstrate the conformity of the high-risk AI system.
Conformity Assessments and Notified Bodies / Notifying Authorities
The AI Regulation includes a conformity assessment procedure that must be followed for high-risk AI systems, with two levels of assessment.
Rules for Other (Low Risk) AI systems
- If the AI system is intended to interact with an individual, the provider must design the system to ensure the individual is aware they are interacting with an AI system (except where this is obvious, or it takes place in the context of the investigation of crimes);
- If the AI system involves emotion recognition or biometric categorization of individuals, the user must inform the individual that this is happening;
- If the AI system generates so-called ‘deep fakes,’ the user must disclose this (i.e., that the content has been artificially created or manipulated).
- Codes of conduct: providers of lower-risk AI systems are encouraged to adopt voluntary codes of conduct.
Governance, Enforcement and Sanctions
European Artificial Intelligence Board. The AI Regulation provides for the establishment of a European Artificial Intelligence Board (“EAIB”), clearly modeled on the European Data Protection Board established under the GDPR.
Enforcement. The AI Regulation requires Member State authorities to conduct market surveillance and control of AI systems in accordance with the product safety regime in Regulation (EU) 2019/1020. Providers are expected to co-operate by providing full access to training, validation and testing datasets etc.
Sanctions. Like the GDPR, the AI Regulation provides for monetary sanctions, with revenue-based fines of up to 2% – 6% of global annual revenue, depending on the nature of the infringement.
Clearly, these proposed regulations are complex and subject to vigorous review and oversight. If you are considering conducting business in Europe involving AI, you’d be wise to consult with a knowledgeable attorney in advance.