Published: 22.10.2025
Last Updated: 22.10.2025

The Right to Be Forgotten in the Age of AI

Privacy Challenges and Solutions

Exploring the evolving landscape of the right to be forgotten in the age of generative AI, examining legal, technical, and ethical issues that shape its implementation and efficacy.

  • Data embedded in AI models
  • Model retraining
  • Risk of hallucinations reproducing personal data
  • Lack of traceability

The rapid advancement of Generative Artificial Intelligence (Gen AI) has transformed how personal data is processed, used, and even retained. As models like ChatGPT and Gemini become deeply embedded in everyday life, they raise complex questions about privacy, particularly the "right to be forgotten." The intersection of this right with the technical realities of generative AI presents unique challenges for regulators, developers, and affected individuals alike.

The Right to be Forgotten

The right to be forgotten, enshrined in Article 17 of the General Data Protection Regulation (GDPR), empowers individuals to request the erasure of personal data held by organisations when the data is no longer necessary, consent is withdrawn, or processing is unlawful. This right, first crystallised in the landmark judgment in Google Spain v AEPD and Mario Costeja González, has become a cornerstone of European data protection law and applies directly in Malta, where it is complemented by the Data Protection Act.

In the era of Gen AI, where personal data is used to train models like ChatGPT and Gemini, the right to be forgotten faces unprecedented challenges.

The Dilemma: Data Embedded, Not Stored

Unlike traditional databases, where personal data can be located and deleted record by record, generative AI models embed personal data within their neural network parameters.

Businesses face significant obstacles in fulfilling data erasure requests under regulations like the GDPR. Because personal data is diffused throughout a model’s architecture, locating and removing specific information is not straightforward. In many cases, meeting the legal requirement to erase an individual’s data could require retraining the entire AI model or investing in emerging “Machine Unlearning” solutions, both of which can be costly and technically demanding. At the same time, individuals may find their ability to control, rectify, or erase their personal data severely limited in the context of generative AI.
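To illustrate the point in purely technical terms, the following minimal Python sketch (toy data only; no real system or personal data is involved) fits a simple model on a dataset that includes one individual’s record and shows that deleting the record from the source data leaves the already-trained parameters unchanged; only retraining on the reduced dataset, or a machine-unlearning technique, would alter them.

    # Minimal illustrative sketch (toy data, not any real system): a record's
    # influence is spread across model parameters, so erasing the record from
    # the source dataset does not erase its imprint from a trained model.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "training set": 100 records with 3 features; assume row 0 belongs
    # to the individual making the erasure request.
    X = rng.normal(size=(100, 3))
    y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

    # Fit a simple linear model: the learned weights aggregate *all* records
    # and contain no retrievable copy of any single row.
    weights, *_ = np.linalg.lstsq(X, y, rcond=None)

    # "Erasing" the individual's record from the source data...
    X_erased, y_erased = np.delete(X, 0, axis=0), np.delete(y, 0)

    # ...leaves the already-trained weights untouched. Honouring the request
    # at the model level would mean retraining on the reduced dataset (or
    # applying a machine-unlearning technique).
    weights_retrained, *_ = np.linalg.lstsq(X_erased, y_erased, rcond=None)
    print("original weights:  ", weights)
    print("retrained weights: ", weights_retrained)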

The Persistent Memory of AI

The core of the right to be forgotten is informational self-determination, which empowers individuals to control how their data is retained and disseminated. However, Gen AI’s “infinite memory” challenges this principle:

  • Traces may remain in the model’s outputs even after data is deleted from source datasets.
  • The inability to fully erase personal data from AI models raises ethical concerns about consent, autonomy, and the long-term risks of data persistence.

This has led commentators to warn that, without significant legal and technological reform, the right to be forgotten risks becoming symbolic rather than enforceable.

Legal and Regulatory Gaps

The right to be forgotten was not drafted with AI in mind. It presumes that controllers can identify and erase personal data upon request. However, in the context of generative AI, even if a data subject’s information is removed from future training datasets, the model may still generate outputs that resemble or replicate that data, undermining the effectiveness of erasure.

The results:

  • Legal uncertainty for AI developers and organizations, who may have difficulty meeting erasure requests when using third-party models or datasets.
  • Legal uncertainty for individuals trying to exercise their right to be forgotten in the era of generative AI.

The Google Spain judgment remains the leading EU case, establishing that search engines must remove links to personal data upon request, balancing privacy rights against the public’s right to information. The European Court of Justice (ECJ) further clarified the scope of the right to be forgotten in GC and Others v. CNIL, emphasising that the right is not absolute and must be balanced against other fundamental rights. However, judgments addressing the unique challenges posed by AI systems are yet to be delivered.

Introducing Innovative Solutions

Addressing these challenges requires a multi-pronged approach:

  1. Legal frameworks evolving to provide clearer guidance on the obligations of AI developers and deployers. This includes updating privacy laws, clarifying the scope of erasure rights in the context of AI, and fostering regulatory innovation.
  2. Deploying privacy-enhancing technologies, such as differential privacy (adding calibrated noise to data or query results) and federated learning (training models without centralising raw data), as illustrated in the sketch after this list.
  3. Considering Machine Unlearning techniques that allow AI models to “forget” specific information without requiring full retraining.
  4. Businesses adopting privacy-by-design principles from the earliest stages of AI development.
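
As a simple illustration of the differential-privacy technique referred to in point 2 above (the query and figures below are hypothetical), the following Python sketch implements the Laplace mechanism: an aggregate query is answered with calibrated random noise so that the presence or absence of any one individual’s record cannot be reliably inferred from the result.

    # Illustrative sketch of the Laplace mechanism for a counting query.
    import numpy as np

    rng = np.random.default_rng(0)

    def dp_count(records, predicate, epsilon=1.0):
        """Noisy count of records matching `predicate`.

        A counting query has sensitivity 1: adding or removing one person's
        record changes the true count by at most 1, so Laplace noise with
        scale 1/epsilon gives epsilon-differential privacy for this query.
        """
        true_count = sum(1 for r in records if predicate(r))
        return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

    # Hypothetical records: ages of data subjects in a dataset.
    ages = [23, 34, 45, 29, 61, 38, 52]
    print(dp_count(ages, lambda age: age > 40, epsilon=0.5))

Smaller values of epsilon add more noise, giving stronger privacy at the cost of accuracy; choosing that trade-off is itself a governance decision.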

What this means for you

For Businesses

Upholding the right to be forgotten under European and Maltese data protection law has become increasingly challenging with the rise of Gen AI. Ensuring compliance now demands a proactive approach by:

  • implementing privacy-by-design principles
  • regularly reviewing internal policies
  • adopting advanced technical measures to manage and mitigate data retention risks
  • updating privacy practices
  • conducting thorough Data Protection Impact Assessments (DPIAs)
  • educating staff and users about the limitations of current AI technologies and the evolving legal landscape
  • prioritising transparency and user control

For Individuals

As regulatory and technological reforms progress, individuals must remain vigilant and informed, ensuring that their privacy expectations are respected and that their requests are handled with care and accountability by businesses deploying AI-driven solutions.

  • Deleted information may persist in model outputs, making complete removal difficult.
  • Individuals should be aware of both their legal rights and the practical limitations posed by AI systems: advocating for transparency from organizations, requesting clear information about data handling practices, and understanding the scope of their rights are crucial steps.

How we can help

Our team of technology and data protection lawyers brings together deep technical insight and legal acumen to help clients navigate the complexities of AI adoption and implementation. We are dedicated to ensuring that individuals’ rights—especially the right to be forgotten—are respected throughout the AI lifecycle.

By advising on compliance strategies and privacy-by-design principles, we empower organizations to embrace the opportunities of AI while safeguarding personal data and upholding the fundamental rights of individuals in accordance with evolving legal standards.

Copyright © 2025 Chetcuti Cauchi. This document is for informational purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking any action based on the contents of this document. Chetcuti Cauchi disclaims any liability for actions taken based on the information provided. Reproduction of reasonable portions of the content is permitted for non-commercial purposes, provided proper attribution is given and the content is not altered or presented in a false light.
