OpenAI Lobbied EU For Weaker AI Regulations And 'Got What They Asked For': Report
KEY POINTS
- OpenAI wanted the Act to remove the 'high risk' designation on general purpose AI systems
- A policy analyst said OpenAI was essentially asking for the freedom to regulate its own AI products
- OpenAI CEO Sam Altman is scheduled to meet the EU's Thierry Breton this month
ChatGPT maker OpenAI has been lobbying European Union officials to get the bloc to weaken its strict and detailed regulations for artificial intelligence – all while championing its willingness to cooperate with world powers to keep the fast-evolving tech in check, a new report revealed.
The AI leader proposed amendments to the bloc's AI Act, some of which made it into the regulation's final text, according to documents Time obtained from the European Commission through freedom of information requests.
The documents revealed how OpenAI repeatedly argued against European officials designating general-purpose AI systems such as GPT-3 and the image generator DALL-E 2 as "high risk" technology.
Three OpenAI staff members reportedly met with EU officials in Brussels, according to an official record of the meeting kept by the European Commission.
A European Commission source with direct knowledge of the meeting, speaking on condition of anonymity, told the outlet that European officials concluded OpenAI was wary of "overregulation" that could stifle innovation. The source said OpenAI "did not tell us what good regulation should look like" at the time.
"By itself, GPT-3 (the precursor to ChatGPT) is not a high-risk system. But [it] possesses capabilities that can potentially be employed in high risk use cases," the AI company said in a previously unpublished White Paper sent to the EU Commission and Council in September 2022.
Time noted that OpenAI's lobbying appears to have paid off, as the final draft of the AI Act dropped language from earlier drafts that would have treated general-purpose AI systems as high risk.
Sarah Chander, a senior policy advisor at advocacy group European Digital Rights, said OpenAI "got what they asked for" after reviewing the company's White Paper at Time's request.
Chander added that OpenAI's document shows how, like many big tech companies, it "used the argument of utility and public benefit of AI to mask their financial interest in watering down the regulation."
Daniel Leufer, a senior policy analyst on AI at digital rights nonprofit Access Now's Brussels office, said after reviewing the White Paper that OpenAI was basically telling EU officials to "trust us to self-regulate."
An OpenAI spokesperson told the outlet in a statement that, at the request of EU policymakers, the company "provided an overview of our approach to deploying systems like GPT-3 safely, and commented on the then-draft of the [AI Act] based on that experience."
OpenAI continued to engage with EU officials after submitting the White Paper last year: an official Commission record shows the company demonstrated ChatGPT's safety features to EU officials in late March.
News of OpenAI's efforts to water down the AI Act came after CEO Sam Altman said earlier this month that "heavy regulation" could block the growth of AI tech.
The European Parliament overwhelmingly approved the AI Act last week, with 499 votes in favor, 28 against and 93 abstentions. EU member states will still need to approve the Act before it becomes law.
Last month, Altman and other OpenAI executives called for the establishment of a regulatory body similar to the International Atomic Energy Agency (IAEA), which oversees the nuclear industry.
Altman also recommended "strong public oversight" and "some degree of coordination among the leading development efforts" to ensure that AI tech is rolled out safely.
The tech mogul has also testified before a U.S. Senate subcommittee, calling for "regulatory intervention by governments" to mitigate the risks associated with AI and similar technologies.
Altman is set to meet the EU's industry chief Thierry Breton this month to discuss the AI Act and how OpenAI will implement the bloc's AI rules. Breton is expected to propose a voluntary AI pact under which the American tech company would join other firms in implementing the rules ahead of the Act's enforcement.