Artificial Intelligence Systems and the GDPR from a Data Protection Perspective

The General Secretariat of the Belgian data protection authority has published an information booklet outlining the relationship between the EU General Data Protection Regulation (GDPR) and the Artificial Intelligence Regulation (AI Regulation), which entered into force on 1 August 2024. The booklet aims to provide insight into how data protection requirements apply during the development and deployment of artificial intelligence systems. Data protection requirements and legal standards are key to ensuring that artificial intelligence systems operate ethically, responsibly and lawfully.

According to the authority, the two pieces of EU legislation establish complementary rules to ensure that the processing of personal data by artificial intelligence systems is lawful, fair and transparent. Legal professionals and data protection officers play a key role in ensuring compliance with data protection regulations, especially the rules on the processing of personal data. It is equally important that professionals working on the technical side of artificial intelligence systems, such as analysts, architects and developers, are knowledgeable about data protection requirements.

The authority refers to the definition of artificial intelligence systems under the AI Regulation. Under this definition, an artificial intelligence system is an IT system designed to analyse data, recognise patterns, and make decisions or predictions on that basis. The authority emphasizes that some systems evolve as they operate, learning from their own operation and thus making more detailed and accurate decisions.

The Belgian authority gives the following examples from everyday life:

  • E-mail spam filters: these filters distinguish between legitimate and unwanted (spam) emails and evolve over time, functioning as (i) an automated system; (ii) analysing data [the content of emails]; (iii) recognizing patterns [i.e., how a typical spam email is structured]; and (iv) making decisions [whether to direct the email to the spam folder or the inbox].
  • Content recommendation system on a streaming platform: the streaming service provider operates an artificial intelligence system on its platform to recommend and display content that may interest users, which is also (i) an automated system; (ii) analyses data [based on past video views]; (iii) recognizes patterns [based on the personal preferences of the user and other similar users]; and (iv) makes recommendations based on the identified patterns.
  • Virtual assistants: these assistants execute tasks based on voice commands, such as playing music, setting alarms, or controlling smart home devices. These assistants also (i) are automated systems; (ii) analyse data [the user’s voice commands]; (iii) recognize patterns [during interactions to understand specific commands]; (iv) make decisions [on how to respond to the user]; and (v) potentially improve themselves [e.g., by learning the user’s preferences].
  • Medical image analysis: in many hospitals and healthcare providers, image analysis solutions are used to assist doctors in evaluating, for instance, X-rays and MRI scans. These systems are typically trained with vast amounts of diagnostic images to recognize specific patterns and ultimately anomalies. These systems (i) are automated; (ii) analyse data [images]; (iii) recognize patterns [deviations, disease indicators, anomalies]; and (iv) provide decision support [helping doctors make more accurate diagnoses when evaluating images].

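The four elements the authority identifies in each example — an automated system that analyses data, recognizes patterns and makes decisions — can be sketched as a toy spam filter. This is purely illustrative: the keyword list, function names and threshold are invented for this sketch, and real spam filters use trained statistical models rather than fixed keyword lists.

```python
# Toy illustration of the authority's four elements:
# (i) an automated system, which (ii) analyses data [the email text],
# (iii) recognizes patterns [spam-typical keywords], and
# (iv) makes a decision [spam folder vs. inbox].
# Keywords and threshold are hypothetical, chosen only for this sketch.

SPAM_KEYWORDS = {"winner", "free", "prize", "urgent", "click"}

def classify_email(text: str, threshold: int = 2) -> str:
    """Return 'spam' if enough spam-typical words appear, else 'inbox'."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    score = len(words & SPAM_KEYWORDS)  # pattern recognition: keyword overlap
    return "spam" if score >= threshold else "inbox"

print(classify_email("You are a winner! Click for your free prize"))  # spam
print(classify_email("Meeting moved to 3pm, agenda attached"))        # inbox
```

A self-learning filter of the kind the authority describes would additionally update its pattern model from user feedback (e.g. emails manually moved out of the spam folder), which this static sketch omits.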
Requirements of the General Data Protection Regulation and the AI Regulation:

  • Lawfulness: the processing of data by AI systems must also rely on one of the six legal bases for data processing defined in the General Data Protection Regulation [note: throughout every phase of their lifecycle, bearing in mind that the applicable legal basis may change from phase to phase]. This lawfulness requirement must be interpreted together with Article 5 of the AI Regulation, which defines prohibited AI practices. According to the authority, examples of such prohibited practices include the use of social scoring systems or, with certain exceptions, the application of real-time facial recognition systems in public spaces.
  • Fairness: the authority points out that although the AI Regulation does not explicitly mandate the application of fair processes, it builds upon the principle of fair data processing as stated in Article 5(1)(a) of the GDPR. The AI Regulation’s rules are aimed at minimizing biases and discrimination during the development, deployment, and use of AI systems.
  • Transparency: the AI Regulation imposes minimum information requirements on all artificial intelligence systems. Accordingly, users must be informed that they are using an artificial intelligence system. For instance, a chatbot named "Nelson" must clearly indicate that it operates as a chatbot. High-risk AI systems require additional information: a clear and understandable explanation of how the system uses data, particularly during decision-making, and of the other factors that influence decisions, in order to mitigate bias.
  • Purpose limitation and data minimisation: the principles of purpose limitation and data minimisation outlined in Article 5(1)(b) and (c) of the GDPR ensure that AI systems do not process data for purposes they were not designed for, nor collect excessive data. The AI Regulation reinforces this principle, particularly for high-risk AI systems, requiring that the purposes of data processing be well-defined and documented in advance. The authority cites credit scoring AI systems as an example: in addition to identification data, these may also process geolocation and social media data, which makes their compliance with the principle of data minimisation questionable.
  • Accuracy and currency of data: in line with the accuracy requirement of Article 5(1)(d) of the GDPR, the accuracy of personal data processed within AI systems must be ensured. The AI Regulation, building on this principle, mandates that data processed in high-risk AI systems must be of "high quality" [free from errors] and "objective" [complete]. For instance, in the case of a credit scoring AI system that evaluates applications based on location [postal code], bias may occur if residents of certain neighbourhoods are generally classified as lower income, which could result in discrimination against high-income applicants from the same area, even if their actual income level does not justify automatic rejection of their loan applications.
  • Storage limitation: beyond Article 5(1)(e) of the GDPR, the AI Regulation does not impose additional requirements concerning storage limitation.
  • Automated decision-making: both the AI Regulation and the General Data Protection Regulation place significant emphasis on human participation and oversight in automated decision-making, but from different perspectives. Under the GDPR, data subjects have the right not to be subject to automated decision-making and may request human review of such decisions [see Article 22 of the GDPR]. The AI Regulation, on the other hand, requires human oversight during the development, deployment, and application of high-risk AI systems. This oversight includes not only guiding decision-making but also human review of training data, measuring the AI system’s performance, and intervening in the decision-making process to ensure responsible AI development and use.
  • Data security: a core requirement of the GDPR is to ensure the confidentiality of personal data throughout the entire data processing lifecycle. However, AI systems pose additional data security risks, therefore, additional measures are needed. High-risk AI systems face specific risks, such as biases in training data that could distort decision-making or malicious manipulation of training data by unauthorized individuals. Therefore, the AI Regulation prescribes preventive measures such as identifying risks, conducting risk assessments, continuously monitoring the system for data security [for example vulnerability assessments] and biases, and ensuring human oversight to safeguard security.
  • Data subjects’ rights: the General Data Protection Regulation provides individuals with the means to exercise control over their personal data by ensuring data subject rights. However, this requires informing the data subjects in a transparent manner about the details of the data processing. Building on this basis, the AI Regulation imposes additional transparency obligations.
  • Accountability: the General Data Protection Regulation lays down several requirements regarding liability for data processing, such as ensuring transparency in data processing, developing internal policies related to data processing activities, applying and documenting appropriate legal bases, keeping various records [e.g., records of data processing activities, records of data subject requests, records of data breaches], implementing organizational and technical security measures, conducting and documenting data protection impact assessments, and appointing a data protection officer [if required].

The AI Regulation does not separately address the principle of accountability, relying instead on the obligations outlined in the GDPR, while additionally requiring the application of a risk management framework. Risks must be assessed in two steps: (i) first, determine the risk classification of the AI system; (ii) for high-risk AI systems, a more detailed, system-specific risk assessment, such as a fundamental rights impact assessment, must be conducted. Furthermore, the design and installation of the AI system must be thoroughly documented, human oversight must be ensured for high-risk AI systems, and formal procedures must be established to detect and manage incidents related to the operation of AI systems or unintended outcomes.
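The two-step approach described above — first classify the system's risk tier, then require system-specific assessments for high-risk systems — can be sketched as a simple decision routine. The category labels, field names and lists of steps below are simplified assumptions for illustration, not the Regulation's own taxonomy.

```python
# Illustrative sketch of the two-step risk assessment described above:
# step (i) classify the AI system's risk tier; step (ii) for high-risk
# systems, require further system-specific assessments (e.g. a fundamental
# rights impact assessment). All labels here are hypothetical simplifications.

from dataclasses import dataclass

PROHIBITED_USES = {"social_scoring"}  # assumption: simplified use-case labels
HIGH_RISK_USES = {"credit_scoring", "medical_imaging", "recruitment"}

@dataclass
class AISystem:
    name: str
    intended_use: str

def classify_risk(system: AISystem) -> str:
    """Step (i): coarse risk classification by intended use."""
    if system.intended_use in PROHIBITED_USES:
        return "prohibited"
    if system.intended_use in HIGH_RISK_USES:
        return "high-risk"
    return "limited/minimal-risk"

def required_steps(system: AISystem) -> list[str]:
    """Step (ii): high-risk systems trigger further obligations."""
    tier = classify_risk(system)
    if tier == "prohibited":
        return ["must not be deployed [Article 5 AI Regulation]"]
    if tier == "high-risk":
        return [
            "fundamental rights impact assessment",
            "documentation of design and deployment",
            "human oversight",
            "incident detection and management procedures",
        ]
    return ["minimum transparency obligations"]
```

In practice the classification depends on detailed annexes and exceptions in the Regulation rather than a simple lookup, so this routine only mirrors the structure of the assessment, not its legal content.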

By Adam Liber and Tamas Bereczki, Partners, and Eliza Nagy, Associate, Provaris Varga & Partners
