
Artificial Intelligence Regulatory Landscape: EU and Ukraine


The challenges posed by rapid technological advancements and the potential risks associated with AI continue to grow, spanning data protection, cybersecurity, and IP considerations, to name a few. With this in mind, policymakers strongly believe that it is crucial to establish a legal framework regulating, among other things, the development and use of AI.

The EU has probably made the most progress in this direction: on 13 March 2024, the European Parliament approved the AI Act, the world’s first comprehensive law explicitly regulating the use of AI. While the AI Act is still subject to final approval, this marks the entry into the final stage of a lengthy and highly debated legislative process.

Meanwhile, Ukraine is also making strides in AI regulation, seeking to develop its own rules for AI while keeping pace with international best practices.

This article provides a general overview of the legal framework governing the use of AI in the EU and Ukraine, briefly covering the purpose, scope, and key considerations of current or proposed regulation, along with potential further developments.

EU: AI Act

The AI Act introduces a comprehensive set of rules for the development, placing on the market, putting into service, and use of AI systems in the EU. Most provisions of the AI Act will apply 24 months after its official publication. Through the new AI framework, the EU plans to promote the uptake of human-centric and trustworthy AI, ensure a high level of protection of safety and fundamental rights against the harmful effects of AI systems, and support innovation.

Scope

Subject to certain national security and R&D exemptions, the AI Act is primarily intended to apply to:

  • EU-based providers and deployers of AI systems; and
  • providers and deployers based outside the EU placing AI systems on the EU market.

There are also rules applicable to importers, distributors and authorised representatives of providers of AI systems established or located within the EU. It is worth noting that the proposed definitions of the terms “AI system”, “provider”, and “deployer” are broad, aiming to cover key operations related to the deployment and use of AI systems.

Key requirements and penalties

The AI Act follows a risk-based approach, tailoring the proposed regulation to the specific level of risk associated with each AI system. It prohibits the use of certain AI systems while imposing specific requirements for the rest, including high-risk systems. These requirements include:

  • conformity assessment, registration and other similar requirements applicable to any provider putting a high-risk AI system on the EU market; and
  • transparency requirements applicable to various AI systems designed to ensure that individuals are informed when interacting with an AI system.

The AI Act proposes significant penalties for any violations, including fines of up to EUR 35 million or 7% of worldwide annual turnover for prohibited AI practices.

Further developments

On 24 January 2024, the European Commission also decided to establish the AI Office within the Commission, which will serve as the centre of AI expertise within the EU.

The AI Office will support the EU Member States’ authorities in implementing the AI Act. It will enforce the rules for general-purpose AI models, and will have the powers to conduct evaluations of general-purpose AI models, request information and measures from model providers, apply sanctions, etc.

Besides the AI Act, the EU has several other regulations relevant to AI systems, including the EU GDPR and non-binding recommendations.

Ukraine

Given its developed IT sector, with many companies engaged in building AI technologies, Ukraine aims to boost innovation and align its local framework with international best practices. Therefore, Ukraine is committed to following the EU approach to AI regulation.

Current framework

As Ukraine aims to align its framework with EU regulations, the scope of regulation is expected to be substantially similar. The precise scope will become clear once Ukraine presents the initial draft of its AI regulation.

At this stage, Ukraine has adopted:

  • the Roadmap of AI Regulation in Ukraine; and
  • the Concept of the Development of AI in Ukraine.

There have also been developments regarding the judiciary’s view of AI. Recently, the Ukrainian judiciary expressed its position on the use of AI systems, stating that reliance on the “positions” generated by ChatGPT in respect of a matter already considered by the court is regarded as an abuse of procedural rights and a manifestation of disrespect towards the judicial system.

Trends and further developments

For now, Ukraine has not introduced specific regulations applicable to the deployment and use of AI systems, except for limited sui generis protections for AI outputs. As for non-binding materials, the following apply:

  • the Ministry of Digital Transformation of Ukraine issued non-binding recommendations on the use of AI in media, aligning them with international best practices. These recommendations emphasise the key principles governing the use of AI, such as lawfulness, transparency, and responsibility. They also set out several recommended measures, including monitoring of AI systems and precautions to ensure compliance with, among other things, data protection requirements; and
  • Ukraine also follows the relevant recommendations published by the OECD and the Council of Europe, including the OECD Recommendation on Artificial Intelligence (OECD/LEGAL/0449) and the Council of Europe Recommendation CM/Rec(2020)1.

It is expected that these recommendations, along with the AI Act, will lay the foundations of Ukraine’s regulation of AI systems.

Final remarks

As both the EU and Ukraine continue to develop their regulatory frameworks and navigate the complexities of AI governance, it remains to be seen how effective such regulations will be. Importantly, and as follows from the approach taken by, among others, the OECD and the Council of Europe, any legal framework should be flexible enough to keep pace with rapid AI developments and, at the same time, promote human-centred values, ensuring that AI serves society’s best interests while mitigating potential risks.

By Oleksandr Kozhukhar, Managing Associate, and Olha Rudevych, Associate, Avellum

Avellum at a Glance

AVELLUM is a leading Ukrainian full-service law firm with a key focus on Finance, Corporate, Dispute Resolution, Tax, and Antitrust.

Our aim is to be the firm of choice for large businesses and financial institutions in respect of their most important and challenging transactions.

We build lasting relationships with our clients and make them feel secure in new uncertain economic and legal realities.

We incorporate the most advanced Western legal techniques and practices into our work. By adding our first-hand knowledge, broad industry experience, and unparalleled level of service, we deliver the best results to our clients in their business endeavours. Our partners take an active role in every transaction and ensure smooth teamwork.

AVELLUM is recognised as one of the leading law firms in Ukraine by various international and Ukrainian legal editions (Chambers, The Legal500, IFLR1000, The Ukrainian Law Firms, and others).

Firm's website: www.avellum.com