The Italian ChatGPT saga. How (not) to regulate AI
1. Introduction

ChatGPT, the state-of-the-art AI language model from OpenAI, recently encountered a temporary suspension in Italy following a restrictive order issued by the Italian Data Protection Authority (DPA). Citing privacy, data misuse, and ethical concerns, the order has sparked a debate on whether the decision is rooted in legal matters or driven by political agendas. This article aims to clarify the restrictive order, evaluate its legal soundness, and examine the DPA's seemingly politically-tinged commentary on the issue.

2. The Content of the Restrictive Order

The Italian Data Protection Authority (DPA), an independent administrative body, is responsible for overseeing the processing of personal data of individuals established in Italy. This applies to entrepreneurs, private companies, and state-owned enterprises worldwide. Composed of four members elected by the Italian Parliament for a seven-year term, the Italian DPA enforces the General Data Protection Regulation (GDPR). While its actions should ideally be pursued in dialogue with the DPAs of other EU member states within the European Data Protection Board, this is not always the case.

In the late morning of March 30, 2023, the Italian DPA unexpectedly, and without prior notice, issued a restrictive order against ChatGPT. While experts worldwide had raised concerns over ChatGPT's handling of users' personal data—particularly in light of a data leak that reportedly exposed chat history titles to other users—no other Data Protection Authority had taken steps to halt the AI-powered service. To date, the Italian DPA remains the only authority to have done so.

The order was issued using a special urgency procedure, as detailed in Article 5, Paragraph 8 of Regulation No. 1/2000 governing the DPA's functioning. This provision states that in cases of particular urgency, when the DPA's collegial body cannot be convened in a timely manner, the president may adopt measures on his own, which cease to have effect if not ratified by the collegial body at its first useful meeting, and in any case within 30 days.

In an attempt to break down the legalese, the Italian DPA's restrictive order against ChatGPT is primarily based on the following concerns:

  1. Lack of Information. The DPA notes that no information is provided to users or interested parties whose data was collected and processed through the ChatGPT service. This concern relates to Article 13 of the GDPR, which requires that individuals be given certain information when their personal data is collected. This includes who is collecting the data and why, who it will be shared with, how long it will be stored, and what rights the individual has regarding their data. If the data is going to be used for a different purpose than originally intended, the individual must be informed and given additional information.
  2. Absence of Legal Basis. The order points out the absence of a suitable legal basis for collecting personal data and using it to train the algorithms underlying ChatGPT's functioning. This is particularly relevant in the context of the GDPR, which prohibits the processing of personal data without a lawful basis (Article 6 of the GDPR). In other words, there must be a legitimate reason for collecting and processing personal data under the law.
  3. Inaccurate Data Processing. The DPA highlights that the information provided by ChatGPT does not always correspond to real data, resulting in inaccurate processing of personal data. Data accuracy is a key principle outlined in Article 5 of the GDPR, which companies are required to adhere to when processing personal data. This means that businesses are responsible for ensuring that the personal data they process is accurate, up-to-date, and not misleading.
  4. No Age Verification. The order emphasizes the lack of age verification for ChatGPT users, which, according to OpenAI's terms, should be reserved for individuals aged 13 and older. The DPA states that the absence of filters for minors under 13 years old exposes them to unsuitable responses, considering their level of development and self-awareness. Indeed, Article 8 of the GDPR states that when processing personal data of minors, companies must make "reasonable efforts" to verify that consent has been given by someone with parental responsibility.

As a result of these concerns, the DPA urgently established, pursuant to Article 58, Paragraph 2, Letter f) of the GDPR, a temporary limitation on the processing of personal data of data subjects established in the Italian territory. This measure was applied with immediate effect, with any further determinations to be made following the outcome of the ongoing investigation.

The restrictive order issued by the Italian DPA has raised some concerns among experts and the public. One point of contention is the use of the urgency procedure: the order gives no clear explanation of the urgent reasons justifying this approach. This lack of rationale casts doubt on the motives behind the decision to act so swiftly.

Another criticism revolves around the DPA's concerns about the potential inaccuracy of data provided by ChatGPT. This may be a misunderstanding of the primary function of large language models like ChatGPT, which is to generate text in response to specific prompts, not to supply correct information. This suggests that the DPA may not have a complete grasp of how large language models work and what they are designed to do.

Moreover, the DPA's emphasis on the lack of age filters exposing children to unsuitable responses must also be questioned. Although companies processing minors' personal data must make reasonable efforts to verify parental consent under Article 8 of the GDPR, the focus on inappropriate responses seems to stray from the core concern of data processing, and thus from the Italian DPA's jurisdiction. This raises questions about the relevance of this particular concern in the context of the restrictive order.

On April 11, 2023, the Italian DPA released a new provision outlining seven conditions that OpenAI must meet by April 30, 2023, in order to reinstate ChatGPT services in Italy. These conditions encompass drafting and publishing an information notice on OpenAI's website to inform data subjects about personal data collection, processing for algorithmic training, processing logic, data subjects' rights, and other GDPR-compliant elements.

Additionally, OpenAI must provide tools for data subjects to exercise their right to object to data processing, request rectification of inaccurately processed personal data, or request erasure of personal data if rectification is technologically unfeasible. The Italian DPA also mandates that OpenAI incorporate a user information notice link within the registration flow, ensuring that users, including those in Italy, can access the notice before registration and upon service reactivation.

The DPA further requires OpenAI to modify the legal basis for processing users' personal data for algorithmic training by eliminating references to contracts and relying on consent or legitimate interest. OpenAI must also furnish tools on their website, specifically for Italian users, to exercise their right to object to processing their data for algorithmic training when the legal basis is the company's legitimate interest. Lastly, the DPA calls for an age gate for all Italian users, registered or not, to filter out underage individuals based on inputted age.

Many experts and commentators have noted the complexity of these demands and their improbable achievement in the short term, casting doubt on the resumption of ChatGPT services in Italy. It remains uncertain how OpenAI will address the DPA's requirements and whether a mutually satisfactory resolution for both the regulatory body and Italian users can be reached.

3. The Aftermath. A Political Decision?

Following the initial restrictive order, the Italian Data Protection Authority (DPA) launched a public relations campaign, framing itself as a contemporary David against OpenAI's Goliath. The DPA issued a press release claiming it had "halted" ChatGPT in Italy, a victorious narrative that didn't necessarily resonate with public opinion.

The DPA's self-promotion was exemplified by Mr. Guido Scorza, a member of the DPA's board, who appeared on a YouTube channel shortly after the order wearing a "Privacy First" T-shirt. Later, he penned an article for the Italian Huffington Post, hidden behind a paywall. These public relations efforts raised questions, as YouTube interviews and blog articles, rather than official statements, became the main sources of insight into the order's rationale.

The DPA seemed to assume public support in its fight against OpenAI, but this assumption might have been misguided. On the night of March 30, ChatGPT was forced to block access for Italian users. It should be noted that the DPA lacks the authority to ban online services. OpenAI made the decision to restrict access, but with limited options: operating in Italy would have meant violating the DPA's order, as ChatGPT inherently collects personal data, such as the email addresses required for registration.

OpenAI's decision to block Italian users, while necessary for compliance, underscores the complexities businesses face when navigating various regulatory landscapes. In this instance, OpenAI had to adapt rapidly to avoid potential legal consequences in Italy, demonstrating the significant influence data protection authorities hold over global tech companies.

In response to the decision, the DPA faced backlash from displeased Italian users, who criticized the authority for perceived overreach. Surprisingly, support came from a number of Italian internet "influencers" defending the DPA's actions. To justify the seemingly unjustifiable, some mixed misleading arguments with outright falsehoods, claiming the service block was optional rather than mandatory, or even denying that an urgent procedure had been used.

The ChatGPT ban saga in Italy has produced fascinating moments, such as an interview with Pasquale Stanzione, the Italian DPA's head, conducted by Federico Fubini, Corriere della Sera's vice-director. Fubini aimed to challenge the DPA and solicit responses to criticisms of their decision.

Caught off guard, Stanzione defended the DPA's actions, insisting that banning ChatGPT was appropriate. The conversation took an unexpected turn, with Stanzione citing the plurality, convergence, and gravity of the potential violations as reasons for the provisional and urgent action. He also mentioned that the Italian DPA had ratified ChatGPT's suspension and launched further European-level investigations, consistent with EU legislation.

Regarding meetings with OpenAI and progress towards resolving the ban, Stanzione was non-committal, stating that discussions were indispensable for finding a solution. He acknowledged age verification as a significant issue and expressed willingness to reopen ChatGPT access if OpenAI took necessary steps by April 30.

Stanzione justified the ban by referencing potential risks to users and claimed Italy pursued a European path toward artificial intelligence, balancing freedom, democracy, and individual dignity. He described their approach as a "third way" between the US and China's regulatory stances. However, his comments on "extractive capitalism" appeared more grounded in political rhetoric than data privacy concerns.

«FUBINI: Do you really think you can block artificial intelligence in just one democratic country?

STANZIONE: Of course not. Ours, it must be specified, is a temporary limitation. We are dealing with a multinational that goes beyond geographical borders and the prohibitions of individual countries. But European rules, such as the GDPR, are setting the standard in the world. As Italy, we are pointing to a European way to artificial intelligence, independent of both the accentuated liberalism of the US and the autarkic sovereignty of China or North Korea, situated in the middle of this new cold war. Ours is an intermediate, arduous road toward freedom, democracy, and the dignity of the person in Europe.

FUBINI: Meanwhile, German or French companies use ChatGPT and GPT-4, becoming more competitive than Italian ones...

STANZIONE: It is true. But sacrificing rights and freedoms on the altar of the market is incompatible with our constitutional principles. Article 41 of the Constitution establishes that private economic initiative cannot be conducted in such a way as to damage human freedom and dignity. A market based on forms of oppression would not be sustainable; it would not work. We cannot accept these forms of extractive capitalism.»

In conclusion, the Italian DPA's actions surrounding the ChatGPT ban have raised eyebrows and ignited debates, casting doubt on the true motives behind the decision. The mix of political rhetoric, controversial actions, and the DPA's aggressive stance have led many to question whether the regulatory body's decisions are genuinely driven by data privacy concerns or influenced by political motivations. As the ChatGPT saga continues to unfold, we will keep covering the topic and providing updates in the coming weeks. This situation serves as a reminder of the importance of transparent, consistent, and unbiased regulatory frameworks for emerging technologies like AI. Ultimately, striking a balance between safeguarding personal data and fostering technological innovation will be critical in shaping the future of AI and its impact on societies worldwide.