The GDPR enables the development of innovative and responsible AI in Europe. The two new recommendations from the CNIL illustrate this with concrete solutions to inform people whose data is used and facilitate the exercise of their rights.
GDPR enables innovative AI that respects personal data
The Artificial Intelligence Action Summit, organized by France from February 6 to 11, 2025, will host numerous events confirming that AI holds potential for innovation and competitiveness in the years to come.
Since 1978, France has adopted rules to regulate the use of personal data by digital technologies. In Europe, these rules have been harmonized by the General Data Protection Regulation (GDPR), the principles of which inspire many countries internationally.
Aware of the need to clarify the legal framework, the CNIL works, through all of its actions, to give stakeholders legal certainty in order to promote innovation in AI while ensuring respect for the fundamental rights of Europeans. Since the launch of its AI action plan in May 2023, the CNIL has notably adopted a series of recommendations for the development of AI systems. Clarified in this way, the application of the GDPR becomes a source of trust for individuals and of legal certainty for companies.
A necessary adaptation of GDPR principles to the specificities of AI
Some AI models are anonymous and therefore fall outside the scope of the GDPR. But other models, such as large language models (LLMs), may contain personal data. The European Data Protection Board has recently provided relevant criteria on the application of the GDPR to AI models.
When the GDPR applies, people’s data must be protected, whether in training datasets, within models that may have memorized it, or when models are used via prompts. The cardinal principles of this protection remain applicable, but they must be adapted to the specific context of AI.
The CNIL thus considered that:
- the purpose limitation principle applies in an adapted manner to general-purpose AI systems: an operator that cannot precisely define all of its future applications at the training stage may limit itself to describing the type of system developed and illustrating its main possible functionalities;
- the minimization principle does not prevent the use of large training databases: the data used should, in principle, be selected and cleaned so as to optimize the training of the algorithm while avoiding the use of unnecessary personal data;
- the retention period of training data can be long if it is justified and if the database is subject to appropriate security measures, particularly for databases that require significant scientific and financial investment and sometimes become standards recognized by the scientific community;
- the reuse of databases, particularly those accessible online, is possible in many cases, provided that the data was not collected in a manifestly unlawful manner and that the reuse is compatible with the purpose of the initial collection.
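The minimization principle above (selecting and cleaning data to avoid unnecessary personal data) can be illustrated with a minimal sketch: a pre-processing pass that scrubs obvious personal identifiers from training text before it enters a dataset. The patterns and function names below are illustrative assumptions, not tooling prescribed by the CNIL; real pipelines use far more extensive detection.

```python
import re

# Illustrative regexes for two common identifier types only; production
# pipelines also detect names, postal addresses, national IDs, etc.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d .-]{7,}\d")

def scrub(text: str) -> str:
    """Replace obvious personal identifiers with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

raw = "Contact Jane at jane.doe@example.org or +33 1 23 45 67 89."
print(scrub(raw))
# → Contact Jane at [EMAIL] or [PHONE].
```

Note that a purely pattern-based pass leaves names like "Jane" untouched, which is one reason such cleaning supports, but does not by itself achieve, minimization or anonymization.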
The new recommendations of the CNIL
The CNIL is today publishing two new recommendations for the use of AI that respects personal data, which confirm that the GDPR requirements are sufficiently balanced to address the specificities of AI. These recommendations propose concrete and proportionate solutions to inform and facilitate the exercise of individuals’ rights:
- When personal data is used to train an AI model and may be memorized by it, the individuals concerned must be informed.
How individuals are informed can be adapted to the risks they face and to operational constraints. In certain cases, notably when AI models are trained on third-party data and the provider is unable to contact individuals directly, the GDPR allows information to be provided in a general form (for example, on the organization’s website). When many sources are used, as is the case for general-purpose AI models, global information indicating the categories of sources, or even naming a few main sources, is generally sufficient.
See the CNIL recommendations for informing individuals
- European regulations provide for rights of access, rectification, objection and erasure of personal data.
However, these rights can be particularly difficult to implement in the context of AI models, whether to identify individuals within the model or to modify the model itself. The CNIL asks stakeholders to make every effort to take privacy into account from the model design stage. In particular, it invites them to pay particular attention to the personal data present in training datasets:
- by striving to make models anonymous, where this is not contrary to the objective pursued; and
- by developing innovative solutions to prevent the disclosure of confidential personal data by the model.
Cost, impossibility or practical difficulties may sometimes justify refusing a request to exercise these rights. Where a right must be guaranteed, the CNIL will take into account the reasonable solutions available to the model’s creator, and the deadlines for responding may be adjusted. The CNIL emphasizes that scientific research in this area is evolving rapidly and calls on stakeholders to keep abreast of the state of the art in order to best protect the rights of individuals.
See the CNIL recommendations on the rights of individuals
► See all CNIL recommendations on AI
Consultation with AI stakeholders and civil society
All of these recommendations were developed following a public consultation. Stakeholders (companies, researchers, academics, associations, legal and technical advisors, trade unions, federations, etc.) were able to share their views, enabling the CNIL to propose recommendations that address their questions and the reality of AI uses as closely as possible.
View the summary of contributions
The CNIL’s work to ensure a full and pragmatic application of the GDPR in the field of AI will continue in the coming months with new recommendations and support for organizations. In addition, the CNIL is following the work of the European Commission’s AI Office on a code of practice for general-purpose AI and is coordinating with the efforts to clarify the legal framework under way at the European level.