France: Personal data and “automated decisions”


published on 14 March 2025 | reading time approx. 4 minutes


The regulations governing the protection of personal data – primarily the GDPR – take a highly cautious view of the idea that data subjects may be subjected to automated decisions based on their personal information, i.e., decisions made solely by algorithms without human intervention that affect their private, public or professional lives (e.g., the denial of an employee’s promotion, personalized marketing, loan approvals, a tax audit triggered automatically by an algorithmic alert, surveillance, etc.).



Such processing is therefore prohibited by default unless it meets far stricter conditions than standard data processing, including (i) relying on a specific legal basis (explicit consent, contract, or law), (ii) informing the data subjects clearly and transparently in advance, (iii) carrying out a prior impact assessment and (iv) guaranteeing the data subjects’ right to object to such profiling or to withdraw their consent.
  
In a world where profiling and automated decision-making increasingly take center stage in key sectors such as banking, marketing, education, and public services – directly affecting data subjects – these practices raise concerns about privacy protection and the fundamental freedoms of the data subjects being profiled. In France, the algorithms employed by the “Caisse Nationale des Allocations Familiales” (National Family Allowances Fund) to detect fraud have recently faced criticism, highlighting the risk of discrimination against vulnerable groups. Similarly, the Affelnet platform, used in the educational system, has sparked debates about the transparency of the criteria applied by its algorithms and the potential biases in school placement decisions.
  
The key issue is to determine whether these algorithms are merely an “aid” to human decision-making – an approach that remains strictly regulated but is no longer outright prohibited – or whether they already constitute automated decision-making tools in themselves.
  
In this context, the questions of (i) the transparency of algorithms and their calculation methods for the data subjects and (ii) the liability of the actors involved in automated profiling are becoming increasingly significant. The rapid evolution of AI-driven tools further exacerbates these ethical and legal challenges.
  
A recent ruling by the Court of Justice of the European Union (CJEU)[1] of 27 February 2025 has helped clarify the level of transparency required in informing individuals subjected to automated decisions. The Court provides valuable insight into the delicate balance between two legally protected yet sometimes conflicting rights: on one hand, the right of data subjects under the GDPR to be clearly and fully informed about any automated decision affecting them, and on the other hand, trade secret protection (stemming from the 2016 EU Trade Secrets Directive, Directive (EU) 2016/943), which safeguards commercial secrets such as proprietary calculation methods. The Court highlights the challenge of balancing the confidentiality of such trade secrets with the protection of individual freedoms and invites further reflection on current and future practices regarding automated decision-making.
​ 
In this case – whose ruling applies in France as in all EU Member States – an Austrian citizen was denied a mobile phone subscription based on an automated creditworthiness assessment. Contesting this refusal, he exercised his GDPR “right of access” before the Austrian data protection authority, which ordered the company to provide explanations regarding its evaluation method. The company partially refused, citing the protection of its trade secrets. After several administrative appeals, the Vienna Administrative Court referred a question for a preliminary ruling[2] to the CJEU, seeking clarification on the extent of a data subject’s right of access to information regarding automated decisions affecting them, particularly in light of trade secret protection.

The Court first clarified the scope of the right provided under Article 15 of the GDPR to obtain “meaningful information about the logic involved” in profiling:
  • Every data subject has the right to demand concise, transparent, and accessible explanations from the data controller regarding how their personal data is used in automated systems.
  • Providing a complex mathematical formula (such as an algorithm) or a detailed description of every step in the method is insufficient, as such information would be neither concise nor intelligible to the data subject.
  • This transparency obligation imposed on data controllers engaged in profiling is all the more essential as it enables data subjects to express their views on the automated decision affecting them and to contest it.
 
The Court then examined potential limitations to this right to information in light of the protection of trade secrets or proprietary algorithms. While affirming that data subjects’ right to obtain transparent information about data processing is not absolute, the Court also ruled that trade secret protection cannot unduly restrict that right to the point of rendering it ineffective. It outlined both the responsibility of the data controller and the role of supervisory authorities:
  • If a data controller believes that the information to be disclosed contains trade secrets or third-party protected data, they cannot unilaterally refuse disclosure.
  • If the data controller does not directly provide the information to the data subjects, they must at least disclose it to the competent supervisory authority or court, which will then weigh the rights and interests involved to determine the appropriate scope of access for the individual concerned.

In other words, the CJEU holds that when two fundamental legal principles are in conflict, a balancing test must be conducted to achieve the delicate balance necessary to ensure the effectiveness of one right without nullifying the other. In this case, while maintaining the required level of protection for trade secrets, the ruling aims to fully uphold data subjects’ rights to clear information on how their data is processed, particularly when it involves an automated decision and the underlying information is highly technical.
 
Refusing to leave this decision to the discretion of data controllers, the Court entrusts supervisory authorities and national courts with the role of intermediaries responsible for evaluating, on a case-by-case basis, whether the disclosure of such technical information excessively infringes upon trade secret protection. It also directs them to establish and oversee appropriate measures to reconcile these interests, such as limiting disclosure to a restricted group or providing partial yet sufficient information to inform the data subject.
 
At the core of the GDPR lies the requirement that personal data processing involving innovative methods or technologies, such as AI processing, must be subject to heightened scrutiny and stricter safeguards compared to conventional processing. Companies frequently overlook the fact that, beyond the obligation to provide clear information to data subjects, they are generally required to conduct a demanding “Data Protection Impact Assessment” (DPIA) to evaluate the impact of such processing on the rights and freedoms of the affected data subjects. This DPIA is even mandatory for automated decisions concerning employees. The French Data Protection Authority (CNIL) regularly calls for increased vigilance to ensure that technological innovation does not come at the expense of individual freedoms.
 
With the proliferation of AI-driven solutions that streamline daily operations and reduce human intervention, companies and public administrations must be aware of the existing regulations that define the risks involved and their heightened obligations regarding automated data processing. Anticipation and implementation of appropriate measures are of the essence!
 
Above all, businesses must remember that automated decision-making is prohibited by default unless it falls within one of the strictly defined exceptions: (i) the data subject has given explicit consent, (ii) the decision is necessary for the performance of a contract, or (iii) it is explicitly authorized by law.
 
To ensure compliance with these exceptions, companies and organizations must: 
  • Choose a legitimate legal basis and implement specific safeguards beyond existing security measures;
  • Inform data subjects: provide concise, clear, easily accessible, and understandable information about automated processing, including profiling, and the underlying logic;
  • Respect (informed) data subjects’ right to refuse: allow them to withdraw their consent (if that is the legal basis) or to object to automated processing at any time (unless the processing is expressly required by law), and enable them to exercise their right of access to their data if they wish to verify that the processing complies with the strict rules imposed.
 
The CJEU ruling of 27 February 2025 underscores the need for businesses to anticipate the impact of transparency obligations regarding automated decision-making. They cannot simply invoke trade secret protection to avoid explaining the logic and criteria behind such decisions and will have to justify those decisions before regulatory authorities upon request.
 
It is therefore essential to implement specific processes to regulate these automated processing operations and manage information requests from data subjects. This ensures the ability to justify their lawfulness, address potential challenges and, if necessary, reintroduce a step of human intervention should an individual object to such processing.
 
This need becomes even more pressing as artificial intelligence and predictive algorithms increasingly shape the employment, digital, commercial, and administrative landscapes. While transparency obligations will continue to grow stronger, they will also become more complex – particularly due to the opaque and sometimes evolving nature of algorithmic models, which data controllers do not always fully master… unfortunately.




[1] Source: Judgment of the Court in Case C-203/22 (13 March 2025)
[2] A question referred to a higher court – here, the CJEU – to obtain clarification on the interpretation of a law or legal provision
