The AI Act introduces a comprehensive legal framework for companies dealing with AI systems in the EU. From 2 February 2025, companies subject to the regulation must take steps to ensure AI literacy and to refrain from prohibited AI practices. Non-compliance could lead to substantial fines.
The applicability of Chapter I and Chapter II of the AI Act
EU Regulation 2024/1689 (“AI Act”) establishes a uniform legal framework for the development, the placing on the market, the putting into service, and the use of artificial intelligence systems (“AI systems”) within the Union. The AI Act entered into force on 1 August 2024, with its rules becoming applicable at later dates. In particular, the first two chapters of the AI Act become applicable on 2 February 2025, for which companies must make the necessary preparations. Below is a brief summary of the provisions contained within these chapters:
I. Chapter I – AI Literacy
Chapter I includes general provisions, outlining the scope of the AI Act and providing definitions. Article 4 of the AI Act imposes a practical obligation on companies that provide or deploy AI systems to ensure AI literacy within the company.
AI literacy means skills, knowledge and understanding that allow providers, deployers and affected persons to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.
To meet the AI literacy requirements, companies that provide or deploy AI systems must take measures to ensure a sufficient level of AI literacy of their staff, as well as of other persons dealing with the operation and use of AI systems on their behalf. In practice, this means promptly organizing training and education for everyone involved in the provision, deployment and use of AI systems within the company.
II. Chapter II – Prohibited AI Practices
Chapter II, on prohibited AI practices, also becomes applicable on 2 February 2025; from that date, the practices listed in it are prohibited. Examples of such practices include:
- AI systems that deploy subliminal techniques beyond a person’s consciousness, or purposefully manipulative or deceptive techniques, with the objective or the effect of materially distorting behaviour;
- AI systems that exploit any of the vulnerabilities of a natural person or a specific group of persons with the objective or the effect of materially distorting the behaviour of that person or those persons;
- AI systems that infer emotions in workplaces or educational settings; and
- AI systems that create or expand facial recognition databases from internet images or CCTV footage.
Non-compliance with the rules on prohibited AI practices could lead to administrative fines of up to EUR 35,000,000 or 7% of the company’s total worldwide annual turnover, whichever is higher. Other sanctions, including sanctions for non-compliance with the AI literacy requirements, can be established by the Member States.
By Csaba Vari, Counsel, and Anna Howe, Junior Associate, Baker McKenzie