AI & Law #3: GDPR Challenges in the Age of AI

In today’s data-driven world, Artificial Intelligence (AI) is revolutionizing industries, from healthcare to finance, by providing insights, automation, and innovation like never before. However, with great power comes great responsibility to safeguard individuals’ privacy and data rights. This is where the General Data Protection Regulation (GDPR) steps in. The GDPR, the EU’s comprehensive data protection law, sets stringent standards for how personal data is collected, processed, and secured. As AI technologies continue to evolve and become integral to business operations, the intersection of AI and GDPR becomes increasingly critical. In this blog post, we delve into this dynamic landscape, exploring the challenges that arise and the compliance strategies organizations may adopt to harness AI’s potential while respecting the GDPR’s principles and rules.

Balancing Innovation and Compliance

Artificial Intelligence (AI) refers to computer systems designed to replicate aspects of human intelligence. Its applications encompass customer service chatbots, predictive analytics, and process automation, revolutionizing how businesses operate. AI systems, especially machine learning models, rely on large and diverse datasets to learn and make predictions or decisions. Data is thus the key to AI’s game-changing potential.

Conversely, the General Data Protection Regulation (GDPR) is a European Union legal framework aimed at protecting individuals’ personal data. Some of the GDPR’s key principles involve obtaining consent for data processing, transparency obligations on whoever handles personal data, ensuring data security, and granting individuals rights over their data.

Organizations already using or thinking about implementing AI in their business operations should assess whether such deployment involves the processing of personal data. If this is the case, compliance with the GDPR is not just a choice, it’s a legal obligation.

The complex nature of AI algorithms and data processing poses a significant challenge when it comes to ensuring GDPR compliance. But how does AI interplay with the GDPR? How does the GDPR, with its stringent data protection rules, regulate the ever-evolving landscape of AI systems? And what challenges does AI bring to the protection of personal data?

Navigating GDPR Challenges in AI Deployment

Automated decision making and profiling

The GDPR has provisions that impact AI-based individual decisions, especially those concerning automated decision-making (ADM) and profiling. ADM involves using algorithms to make decisions based on predefined criteria without any direct human intervention. An example of ADM would be an automated system used by an employer to scan job applications, looking for keywords and qualifications. The system may automatically shortlist or reject candidates based on predetermined criteria, such as specific skills or experience. Profiling, on the other hand, involves collecting data about individuals to create a profile that characterizes their behaviour, preferences, or characteristics. That would be the case of e-commerce platforms that track users’ browsing and purchase history. Based on this information, the system generates personalized product recommendations to enhance the user’s shopping experience.

Article 22(1) grants individuals the right not to be subject to decisions based solely on automated processing that produce legal effects concerning them or similarly significantly affect them. According to the ‘Article 29 Data Protection Working Party’ (WP29), Article 22(1) only applies to situations where there is no meaningful human intervention and where the consequences are significant.[1]

Despite Article 22’s high threshold, it has been used to protect individuals even in cases not meeting its strict criteria.[2] This is because GDPR safeguards against ADM and profiling go beyond Article 22. For example, while ADM and profiling might be allowed under certain conditions (like obtaining explicit consent or contractual necessity), there must be adequate safeguards in place. This includes the right for individuals to seek human intervention, challenge, or express their opinions regarding automated decisions. Transparency is also crucial, meaning clear explanations of the decision-making process, including logic, importance, and expected outcomes, must be provided.

Ensuring Transparency and Fairness in AI Decision-Making

As mentioned earlier, a crucial GDPR requirement is providing clear and transparent information for personal data processing. Individuals need to be provided with a meaningful explanation of the logic behind the AI system, as well as the possible consequences of the processing.

For instance, in the context of credit scoring, if an AI system determines that an individual is not eligible for a loan, providing a meaningful explanation might involve mentioning factors such as their credit history, outstanding debts, and recent payment behaviour. This helps the individual understand why the decision was made and what aspects of their financial history influenced it. Similarly, if an AI system is used to screen job applications, explaining a decision could entail highlighting the specific qualifications or experience that led to a candidate’s rejection. This provides applicants with insights into why they were not selected and helps ensure fairness in the hiring process.

In the AI context, this can be difficult because AI models, particularly advanced machine learning and deep learning models, can yield outcomes that are hard to explain or that inadvertently introduce biases. Understanding how these systems reach their decisions, and ensuring they align with GDPR principles, is therefore a genuine challenge.

Legal basis: Is consent a viable option?

GDPR requires a legitimate legal basis for data processing. Identifying the appropriate legal grounds for each AI process is essential to ensure compliance. Securing informed consent for data processing is one of the possible options. The GDPR stresses that consent requests should be easy to understand and access, using plain language. When seeking consent online, the request should be brief and not disrupt the user experience unnecessarily. Implicit consent, like pre-checked boxes or just installing an app, is not considered acceptable.[3] Finally, consent must relate to specific purposes and uses of personal data.

Consent can be used as a legal basis for data processing when organizations have a direct relationship with the individuals involved. It can be used when personalizing services in AI deployment or making predictions or recommendations. However, as previously mentioned, it must be freely given, specific, informed, and require a clear affirmative action. While consent can build trust and give individuals control, it may not be suitable for complex AI operations. The challenge lies in ensuring genuine, well-informed consent, particularly when data processing becomes complex. The key is making sure individuals understand how their data will be used and obtaining their clear and valid consent. 

Data minimization

The GDPR requires data to be processed only for specific purposes, with minimal data collected and maintained accurately, securely, and confidentially. To align with data minimization principles, only necessary data should be processed, and retention periods should be clearly defined to avoid unnecessary data storage.

However, AI often requires vast amounts of data, which can include various types of information. It can be difficult to determine the minimum data needed, especially in complex machine learning. On the other hand, insufficient data may hinder the accuracy and performance of AI models. Thus, finding the right balance between minimal data use for GDPR compliance and enough data for accurate AI results is challenging.

Similarly, the GDPR places a strong emphasis on processing data for specific and well-defined purposes. Organizations are required to ensure that data is used only for the purposes for which it was originally collected. Yet, AI models evolve over time, which can lead to changes in their purpose, or they can process data for multiple purposes, potentially conflicting with the original data collection purpose. It can, therefore, be challenging to ensure that each use complies with the GDPR’s purpose limitation. At the same time, managing user consent and expectations as AI evolves is also a concern, especially when data usage extends beyond initial purposes.

Strategies for AI-GDPR Compliance

To address the challenges discussed earlier, organizations can put the following strategies into action to ensure their deployment of AI aligns with GDPR compliance.

Conducting Data Protection Impact Assessments (DPIAs) for AI Projects:

As a first step, organizations should conduct DPIAs early in AI project development and cooperate with data protection and AI experts to identify potential risks, assess data processing activities, and determine mitigation strategies. A DPIA is a risk assessment process aimed at identifying and addressing data protection risks in projects or systems involving personal data. It is mandatory under the GDPR in certain circumstances, including when the processing of personal data is likely to result in a high risk to the rights and freedoms of individuals.

Implementing Privacy-by-Design Principles in AI Development:

The concept of Privacy by Design in the GDPR requires incorporating privacy safeguards into the core development of systems and processes from the beginning rather than as an afterthought. Yet, integrating Privacy by Design into AI systems can be challenging when considering data minimization and purpose limitation principles. Organizations should ensure AI models are designed to collect only necessary data, have clear purposes, and adhere to GDPR principles. For example, techniques such as differential privacy, which introduces noise to data to protect individual identities, or federated learning, where the model learns from decentralized data, can safeguard data privacy while ensuring the efficient operation of AI systems.[4]
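To make differential privacy concrete, here is a minimal Python sketch of a differentially private count using the standard Laplace mechanism. The function name, data, and privacy budget are illustrative, not a production implementation:

```python
import math
import random

def dp_count(values, threshold, epsilon=1.0):
    """Report how many values exceed a threshold, plus Laplace noise.

    A counting query has sensitivity 1 (one person changes the count by
    at most 1), so noise drawn from Laplace(0, 1/epsilon) hides any
    single individual's contribution within the privacy budget epsilon.
    """
    true_count = sum(1 for v in values if v > threshold)
    # Sample Laplace(0, 1/epsilon) noise via inverse transform sampling.
    u = random.random() - 0.5
    scale = 1.0 / epsilon
    noise = -scale * (1 if u >= 0 else -1) * math.log(1 - 2 * abs(u))
    return true_count + noise
```

A smaller epsilon means more noise and stronger privacy: the aggregate statistic stays useful while any individual record is obscured.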

Anonymization and Pseudonymization:

Anonymizing or pseudonymizing data plays a crucial role in enhancing user privacy and ensuring compliance with the GDPR. Anonymization completely eliminates any personally identifiable information, rendering it impossible to trace data back to individuals. Pseudonymization, on the other hand, involves replacing identifying information with artificial identifiers. In the context of AI, including generative AI, these techniques can be employed during data pre-processing or integrated into the AI models themselves. This adds an extra layer of protection, ensuring that personal data remains secure.
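As a simple illustration, direct identifiers can be replaced with keyed-hash pseudonyms before data reaches an AI pipeline. The Python sketch below is minimal and illustrative: the key value and field names are hypothetical, and in practice the key must be stored separately and managed securely:

```python
import hashlib
import hmac

# Hypothetical secret key; keep it separate from the pseudonymized data.
SECRET_KEY = b"store-this-key-separately-and-rotate-it"

def pseudonymize(record, fields=("name", "email")):
    """Replace direct identifiers with deterministic keyed-hash pseudonyms.

    Using HMAC rather than a plain hash means the mapping cannot be
    reversed or brute-forced from common names without the key, while
    identical inputs still map to the same pseudonym, so records remain
    linkable across datasets for training purposes.
    """
    out = dict(record)
    for field in fields:
        if field in out:
            digest = hmac.new(SECRET_KEY, out[field].encode("utf-8"), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]
    return out
```

Note that pseudonymized data is still personal data under the GDPR, since re-identification remains possible for whoever holds the key; only true anonymization takes data outside the GDPR’s scope.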

Right to an explanation and transparency:

Privacy notices may serve as basic means to explain how AI systems operate while ensuring compliance with the principles of transparency and the right to an explanation. These notices should contain essential information, including:

  • The specific purposes for processing an individual’s personal data.
  • The planned retention periods for this data.
  • Details about who the data will be shared with.

 

The right to an explanation may involve providing information on what data is used to train the model, where the data originated from, and how the quality of the data is secured.

Explainable AI (XAI) is also an effective technical approach aimed at making AI systems more transparent and understandable by explaining their decision-making processes. While it doesn’t reduce the need for data, it helps pinpoint the specific data required to improve model accuracy.
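For simple models, an explanation can be produced directly. The sketch below, a hypothetical linear credit-scoring example in Python, decomposes a score into per-feature contributions so the most influential factors can be reported to the individual. All names and weights are illustrative:

```python
def explain_score(weights, features):
    """Break a linear score into per-feature contributions (weight * value).

    Returns the total score and the contributions ranked by absolute
    impact, i.e. the factors that most influenced the decision.
    """
    contributions = {name: weights[name] * value for name, value in features.items()}
    score = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked
```

For complex models, dedicated XAI techniques (such as local surrogate models or feature-attribution methods) serve the same purpose: identifying which inputs drove a given decision.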

Algorithmic fairness:

Algorithmic fairness is an approach for preventing discrimination in AI. It involves techniques such as adjusting datasets, modifying model training, and post-processing (e.g. re-ranking the model’s predictions) to reduce bias and promote fair outcomes. Algorithmic fairness requires regular monitoring and audits (e.g. examining historical data and model performance) to identify and rectify biases.
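One common audit metric is the demographic parity gap: the difference in positive-outcome rates between protected groups. A minimal Python sketch, with illustrative group labels and data:

```python
def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-outcome rates across groups.

    `decisions` holds 0/1 model outcomes (e.g. shortlisted or not) and
    `groups` the protected attribute for each record. A gap near 0
    suggests similar selection rates across groups; a large gap flags
    potential indirect discrimination that warrants closer auditing.
    """
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    ordered = sorted(rates.values())
    return ordered[-1] - ordered[0]
```

Other fairness criteria (such as equalized odds or calibration) may be more appropriate depending on the decision at stake; the point is to measure bias before attempting to mitigate it.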

Conclusion

In the rapidly evolving world of AI, GDPR compliance is not just a legal necessity but a cornerstone of ethical AI deployment. Challenges related to automated decision-making, transparency and fairness, or data minimization and purpose limitation highlight the difficulties of harmonizing AI with GDPR principles. There is growing discussion about the need to reform the GDPR in order to address these challenges. Yet, by adopting proactive strategies like Data Protection Impact Assessments (DPIAs), privacy-by-design, ensuring transparency, offering explanations, and adopting an algorithmic fairness approach, organizations can already address some of these challenges and be ready for any new regulations that might come along. Furthermore, the EU’s latest legislative proposal, the AI Act, which introduces stricter requirements for AI systems, offers an opportunity for enhanced coordination with the GDPR, aligning their shared goals and reinforcing ethical AI practices.

To explore AI-GDPR compliance further and get personalized guidance, don’t hesitate to reach out to our legal experts. Stay tuned for more insightful articles in our next blog post.

 

Olivier Van Raemdonck, Partner

Ward Verwaeren, Senior Associate

Aida Kaloci, Associate

 


[1] Article 29 Working Party, ‘Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679’ (wp251rev.01, 6 February 2018) at 22

[2] https://fpf.org/blog/fpf-report-automated-decision-making-under-the-gdpr-a-comprehensive-case-law-analysis/

[3] https://www.europarl.europa.eu/RegData/etudes/STUD/2020/641530/EPRS_STU(2020)641530_EN.pdf

[4] https://edps.europa.eu/press-publications/publications/techsonar/federated-learning_en
