
OpenAI: Behind Calls For Regulation, Lobbying

A document provides a glimpse of OpenAI’s lobbying of the EU during the development of the AI Act.

“Regulating AI, yes, but…” This is how, a few weeks ago, we summarized what had played out before the American Congress, which had heard from tech figures broadly in favor of regulating the sector, despite differences over which levers to use.

Sam Altman, CEO of OpenAI, was there. A few days later, he would co-sign, along with more than 350 AI specialists and business leaders, a statement urging vigilance. Its message, in broad terms: mitigating the risk of extinction from AI should be a global priority, alongside issues such as pandemics and nuclear war. Behind the scenes, the tone is different. At least with regard to the artificial intelligence legislation (AI Act) that the European Union is preparing.

The text, expected to be adopted this year, establishes specific requirements for “high-risk” AI systems. OpenAI has tried to avoid having its services classified as such. This is evidenced by a “white paper” pushed to European elected officials in the fall of 2022… but not beyond. In four points, OpenAI sets out its “fears” and suggests reformulating, or even abandoning, various provisions contained in the European Commission’s initial proposal or in subsequent documents.

OpenAI Wants To Avoid The “High-Risk” Category

The white paper first draws attention to the position France expressed in June 2022, at the end of its presidency of the Council of the EU. In particular, it focuses on one aspect: how the risk level of general-purpose AI is assessed. OpenAI’s concern is that the AI Act could, by default, place all of its services in this category.

Article 4c(1) does establish an exemption when the supplier has explicitly excluded, in the instructions for use or in the information accompanying the AI system, any possibility of high-risk use. But Article 4c(2) potentially changes the situation: it provides that such an exemption cannot be claimed if the supplier has “sufficient reasons” to believe that its AI system could be misused.

According to OpenAI, such an approach will not encourage suppliers to ask themselves the question. It goes on to propose its own, “more incentive-based” version of Article 4c(2), illustrating it with an example from the HR field: “We would not condone the use of GPT-3 to determine a candidate’s suitability for a job posting, but we may accept assistance in writing the posting.” It concludes:

  1. GPT-3 is not, in itself, a high-risk system.
  2. Rather, it is its capabilities that can power high-risk use cases.
  3. Please let us set guidelines, best practices, and limits that we can implement in our APIs.

Protecting GPT-3 And DALL-E

OpenAI also makes a case regarding generative AI. It refers to an April 2022 report from the European Parliament, which discusses amending an annex to the AI Act so that many content-generation systems would be classified as high-risk. Specifically, those that create textual content that may appear human-generated, or audio/video content that seems authentic.

As it stands, this provision could mean that GPT-3 and DALL-E would have to be classified as high-risk systems, OpenAI laments. It suggests instead that Article 52 of the AI Act is sufficient on its own: it imposes a form of transparency about the origin of content generated or manipulated by AI.

Avoiding Too Many Compliance Checks

The white paper also includes a point on the compliance checks imposed by the AI Act. The principle is that an AI system must be re-evaluated as soon as it undergoes a “substantial” modification, understood in particular as one that changes the system’s intended purpose. OpenAI sees this as an obstacle to rolling out innovations that improve the safety of these systems.

From its point of view, modifications of this kind, such as those intended to reduce risk, should not be considered “substantial”; otherwise, their rollout would be delayed. At least as long as it can reasonably be assumed that they will not negatively affect the health, safety, or fundamental rights of individuals.

OpenAI Demands More Exceptions

Annex III, which lists high-risk use categories, also catches OpenAI’s eye. The fear is that it includes uses that should not be considered as such. OpenAI gives two examples. The first concerns recruitment (point 4(a)). By its reading, using AI systems to advertise vacancies would fall into the “high-risk” category. Yet some of the steps involved rely on AI only secondarily, with the final decision left to humans. Creating job descriptions would be one such case.

The same reasoning applies to point 3(b), which deals with assessment in admission exams and training. OpenAI calls for excluding from the “high-risk” category systems that merely help compose exams. Its appeal to elected officials could be summarized as follows: clarify your language and focus on use cases likely to have a significant impact on people’s employment or education opportunities.
