While most of Europe was still nose-deep in the holiday chocolate selection box at the end of last month, ChatGPT maker OpenAI was busy sending out an email detailing an incoming update to its terms that appears to be aimed at shrinking its regulatory risk in the European Union.
The AI giant’s technology came under scrutiny early on in the region over ChatGPT’s impact on people’s privacy, with a number of open investigations into data protection concerns linked to how the chatbot processes people’s information and the data it can generate about individuals, including probes by watchdogs in Italy and Poland. (Italy’s intervention even prompted a temporary suspension of ChatGPT in the country until OpenAI revised the information and controls it provides to users.)
“We have changed the OpenAI entity that provides services such as ChatGPT to EEA and Swiss residents to our Irish entity, OpenAI Ireland Limited,” OpenAI wrote in an email to users sent on December 28.
A parallel update to OpenAI’s Privacy Policy for Europe further states:
If you live in the European Economic Area (EEA) or Switzerland, OpenAI Ireland Limited, with its registered office at 1st Floor, The Liffey Trust Centre, 117-126 Sheriff Street Upper, Dublin 1, D01 YC43, Ireland, is the controller and is responsible for processing your Personal Data as described in this Privacy Policy.
The new terms of use, which introduce the newly incorporated Dublin-based subsidiary as the data controller for users in the European Economic Area (EEA) and Switzerland, where the bloc’s General Data Protection Regulation (GDPR) applies, will come into force on February 15, 2024.
Users are advised that if they disagree with OpenAI’s new terms, they can delete their account.
The GDPR’s one-stop-shop (OSS) mechanism allows companies that process Europeans’ data to streamline privacy oversight under a single lead data supervisor located in the EU member state where they have their “main establishment,” as the regulatory terminology puts it.
Obtaining this status effectively reduces the ability of privacy watchdogs elsewhere in the bloc to act unilaterally on concerns. Instead, they typically refer complaints back to the company’s lead supervisor for consideration.
Other GDPR regulators still retain powers to intervene locally if they see urgent risks. But such interventions are usually temporary. They are also exceptional in nature, with most GDPR oversight channeled through a lead authority. That’s why the arrangement has proven so attractive to Big Tech, enabling the most powerful platforms to streamline privacy oversight of their cross-border processing of personal data.
Asked if OpenAI is working with Ireland’s privacy watchdog to obtain main establishment status for its Dublin-based entity under the GDPR’s OSS, a spokeswoman for the Irish Data Protection Commission (DPC) told TechCrunch: “I can confirm that Open AI is engaged with the DPC and other EU DPAs [data protection authorities] on this topic.”
OpenAI has also been contacted for comment.
The AI giant opened an office in Dublin in September, initially hiring a handful of policy, legal and privacy staff in addition to some back-office roles.
At the time of writing, it has just five Dublin-based vacancies out of a total of 100 listed on its careers page, so local recruitment is still limited. A Brussels-based EU Member State Policy and Partnerships lead role, which it is also currently recruiting for, asks applicants to specify whether they are available to work from the Dublin office three days a week. However, the vast majority of the AI giant’s open positions are listed as being based in San Francisco/USA.
One of the five Dublin-based roles that OpenAI is advertising is a privacy software engineer. The other four are: account manager, platform; international payroll specialist; media relations lead, Europe; and sales engineer.
Who, and how many, OpenAI hires in Dublin will be relevant to obtaining main establishment status under the GDPR, as it is not simply a matter of filing a few legal documents and ticking a box. The company will have to convince the bloc’s privacy regulators that the member state-based entity named as legally responsible for Europeans’ data is actually able to influence decision-making around it.
This means having the right expertise and legal structures in place to exercise influence and put meaningful privacy checks on a US parent.
In other words, opening a front office in Dublin that simply signs off on product decisions made in San Francisco is not enough.
That said, OpenAI may be looking with interest at the example of X, the company formerly known as Twitter, which has rocked all kinds of boats since an ownership change in the fall of 2022. Yet X has not fallen out of the OSS since Elon Musk took over, despite the erratic billionaire owner taking an axe to the company’s regional headcount, shedding relevant expertise and making what appear to be highly unilateral product decisions. (Well, so far, at least.)
If OpenAI gains main establishment status under the GDPR in Ireland, with lead oversight from the Irish DPC, it will join the likes of Apple, Google, Meta, TikTok and X, to name a few of the multinationals that have chosen to make Dublin the base of their EU operations.
The DPC, meanwhile, continues to attract substantial criticism over the pace and cadence of its GDPR oversight of local tech giants. And while recent years have seen a number of headline-grabbing penalties on Big Tech finally roll out of Ireland, critics point out that the regulator often advocates for significantly lower penalties than its counterparts. Other criticisms include the glacial pace and/or unusual trajectory of DPC investigations. Or cases where it chooses not to investigate a complaint at all, or chooses to reframe it in a way that sidesteps the underlying concern (for the latter, see, for example, this Google adtech complaint).
Any existing GDPR probes of ChatGPT, such as those by regulators in Italy and Poland, may still play a role in shaping the regional regulation of OpenAI’s AI chatbot, as the investigations are likely to run their course, given that they concern data processing that predates any future main establishment status the AI giant may acquire. But it’s less clear how much of an impact they might have.
As a refresher, Italy’s privacy regulator has been looking into a long list of concerns about ChatGPT, including the legal basis OpenAI relies on for processing people’s data to train its AIs. Poland’s watchdog, meanwhile, opened an investigation following a detailed complaint about ChatGPT, including how the AI bot hallucinates (i.e., fabricates) personal data.
Notably, OpenAI’s updated European privacy policy also includes more details about the legal bases it claims for processing people’s data, with some new wording that frames its claimed legitimate interest legal basis for processing people’s data for AI model training as being “necessary for our legitimate interests and those of third parties and broader society” [emphasis ours].
OpenAI’s current privacy policy, by contrast, contains a much drier line on this element of its claimed legal basis: “Our legitimate interests in protecting our Services from abuse, fraud, or security risks, or in developing, improving, or promoting our Services, including training our models.”
This suggests that OpenAI may seek to defend its massive, consent-free harvesting of Internet users’ personal data for generative AI profit to concerned European privacy regulators by making some kind of public interest argument for the activity, in addition to its own (commercial) interests. However, the GDPR has a strictly limited set of six valid legal bases for processing personal data. Data controllers cannot simply pick and mix elements from this list to invent their own customized justification.
It’s also worth noting that GDPR watchdogs have already been trying to find common ground on how to tackle the tricky intersection of data protection law and big data-powered AI, via a task force set up under the European Data Protection Board last year, although it remains to be seen whether consensus will emerge from that process. Given OpenAI’s move to establish a Dublin legal entity as the data controller for European users now, Ireland may well get a say in the direction of travel when it comes to generative AI and privacy rights.
If the DPC becomes OpenAI’s lead overseer, it would be in a position to, for example, slow the pace of any GDPR enforcement against the fast-moving technology.
Already, last April, in the wake of Italy’s ChatGPT intervention, current DPC commissioner Helen Dixon warned against privacy watchdogs rushing to ban the technology over data concerns, saying regulators should take time to figure out how to enforce the bloc’s data protection law on AIs.
Note: UK users are excluded from the move to OpenAI’s legal base in Ireland, with the company clarifying that they fall under the jurisdiction of its US corporate entity, based in Delaware. (Since Brexit, the EU’s GDPR no longer applies in the UK, although the country retains its own UK GDPR in national law, a data protection regime still historically based on the European framework. That looks set to change, however, as the UK diverges from the bloc’s gold standard for data protection via the rights-diluting “data reform” bill currently going through parliament.)