
Preparing for the Future: Romania’s National AI Strategy and the EU AI Act


Last week brought exciting AI-related news in Romania. First, the long-awaited AI Act was published in the EU’s Official Journal on 12 July, becoming Regulation (EU) 2024/1689. The AI Act is an essential part of the EU’s extensive digital strategy, alongside the Digital Services Act and the Digital Markets Act. The EU digital strategy aims to establish a thorough regulatory framework that tackles the diverse challenges and opportunities of the digital economy. Second, the Romanian Government approved the National Strategy regarding Artificial Intelligence (“AI Strategy”) on 11 July. The AI Strategy aims to contribute to Romania’s adoption of digital technologies in the economy and society, while respecting human rights and promoting excellence and trust in AI.

Timeline for the application of the AI Act

The AI Act will enter into force on 1 August 2024, 20 days after its publication in the Official Journal of the EU, and will become applicable on 2 August 2026. It is worth noting that the EU Commission plans to launch an AI Pact, under which AI developers voluntarily commit to implementing key obligations of the AI Act before its application. Further, there are certain exceptions to the two-year deadline for the applicability of the AI Act. To this end, the provisions prohibiting AI practices posing unacceptable risk will apply from 2 February 2025, six months after the AI Act’s entry into force. Furthermore, twelve months after the AI Act’s entry into force, specifically on 2 August 2025, the following provisions, among others, will become applicable:

  • The obligations related to notifying authorities and notified bodies (member states must have appointed their notifying authorities);
  • The obligations of providers of general-purpose AI models (for models placed on the market after that date); and
  • The provisions related to penalties (except for the fines applicable to providers of general-purpose AI models).

High-risk AI systems embedded in products covered by EU product safety legislation will have more time to comply, as the obligations concerning them will become applicable 36 months after the Act’s entry into force. The European AI Office, established in February 2024 within the Commission, will oversee the AI Act’s enforcement and implementation across member states. The AI Office will also lead the EU’s international cooperation on AI and strengthen ties between the European Commission and the scientific community, including the forthcoming scientific panel of independent experts.

Main rules introduced by the AI Act

The AI Act introduces different rules for different risk levels, establishing obligations for providers and users depending on the level of risk posed by an artificial intelligence system. While many AI systems pose only minimal risk to users, they will still need to be assessed. First and foremost, AI systems posing an unacceptable risk are considered a threat to people and will be prohibited. These include systems involving the following:

  • Cognitive behavioural manipulation of people or specific vulnerable groups: for example voice-activated toys that encourage dangerous behaviour in children
  • Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics
  • Biometric identification and categorisation of people
  • Real-time and remote biometric identification systems, such as facial recognition

Moreover, AI systems that negatively affect safety or fundamental rights will be considered high risk and will be divided into two categories, as follows:

  1. AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices and lifts.
  2. AI systems falling into specific areas that will have to be registered in an EU database:
    • Management and operation of critical infrastructure
    • Education and vocational training
    • Employment, worker management and access to self-employment
    • Access to and enjoyment of essential private services and public services and benefits
    • Law enforcement
    • Migration, asylum and border control management
    • Assistance in legal interpretation and application of the law

All high-risk AI systems will be assessed before they are placed on the market and throughout their lifecycle. Users will have the right to submit complaints about AI systems to designated national authorities. By contrast, generative AI, such as ChatGPT, will not be classified as high-risk, but will have to comply with transparency requirements and EU copyright law. The main obligations in this respect will be:

  • Disclosing that content was generated by AI
  • Designing the model in a manner that prevents it from generating illegal content
  • Publishing summaries of copyrighted data used for training

High-impact general-purpose AI models that might pose a systemic risk, such as the more advanced model GPT-4, will have to undergo thorough evaluations, and any serious incidents will have to be reported to the European Commission. Content that is generated or modified with the help of AI, including images, audio or video files (for example, deepfakes), needs to be clearly labelled as AI-generated so that users are aware when they encounter such content.

The idea behind the Romanian National Strategy regarding Artificial Intelligence

The AI Strategy is intended to support Romania’s central public administration in its efforts to standardise, operationalise, and regulate AI development, enhancing its positive effects. Moreover, the strategy is meant to ensure Romania’s alignment with European strategic directions regarding common rules applied to digital services. The AI Strategy sets out general objectives at a national level for the use of AI, namely:

  • Supporting education in research and development and the formation of AI-specific skills;
  • Developing a resilient infrastructure along with usable and reusable datasets;
  • Strengthening R&D in the field of AI;
  • Encouraging the transfer of technology from the research-innovation environment to production;
  • Supporting measures that encourage the adoption of AI in society;
  • Establishing a governance system and an appropriate regulatory environment for AI.

Preparing for the AI Act

The EU AI Act will undoubtedly have a significant impact on many organisations, particularly with regard to cost. However, there is still ample time to prepare and align your AI systems with the law. We may well face the same chaos and uncertainty seen when the GDPR became applicable, given how many organisations left their preparations for that event to the last minute. It is therefore crucial to begin preparations as soon as possible, to ensure that current and developing AI systems are compliant when the EU AI Act becomes mandatory. Otherwise, systems may need to be taken offline or may attract substantial fines.

By Flavius Florea, Counsel, Wolf Theiss