OpenAI Tells Congress the U.S. Should Create AI 'Licenses' to Release New Models

Many AI researchers see this as an anti-competitive move, as requiring licensing will be much more beneficial for larger companies than smaller ones.
Image: Getty Images

Sam Altman told Congress Tuesday that AI developers should get a license from the U.S. government to release their models.

“The U.S. government should consider a combination of licensing or registration requirements for development and release of AI models above a crucial threshold of capabilities, alongside incentives for full compliance with these requirements,” Altman, the CEO of OpenAI, said in his written testimony.  

Altman explained that he wants a government agency with the power to grant and revoke licenses so that the U.S. can hold companies accountable. In April, an OpenAI research scientist first proposed the creation of such a regulatory agency, suggesting it could be called the Office for AI Safety and Infrastructure Security (OASIS).

In suggesting an AI license, Altman played a card from the big tech lobbying handbook, in which companies with significant market share propose "regulation" that serves mainly to raise the barrier to entry for their competitors.

Altman’s push to create a regulatory body suggests a company that wants to get ahead of regulation it predicts is inevitable, and to gain an edge over competitors by being the first to cooperate with the government. Many AI researchers see this as an anti-competitive move, as requiring licensing would benefit larger companies while harming smaller companies, independent researchers, and free, open-source alternatives.

“One concern is that the predominant AI companies such as OpenAI and Google could use this to slow down emerging competitors who would not have the resources—financial, legal, etc.—to comply with new regulations,” Mark Riedl, a professor in the Georgia Tech School of Interactive Computing and associate director of the Georgia Tech Machine Learning Center, told Motherboard. 

“The important thing is to avoid the belief that only AI companies are knowledgeable enough to draft and enforce any new laws. This can lead to something called ‘regulatory capture’ where companies pick and choose what they want to be held accountable for and how compliance is assessed. Their concerns are not the same as the concerns of consumers and everyday people affected by algorithmic decisions,” Riedl added. 

There is also the question of whether it would even be possible to enforce the idea of an "AI license" administered by the U.S. government. It is notoriously difficult to regulate the distribution of software or code; AI is being developed not just by corporations based in the United States, but by developers all over the world, many of whom are working on collaborative, free-and-open-source projects.

During the Tuesday hearing, Senator Lindsey Graham voiced his support for a government agency and asked Altman, “Do you agree the simplest way and most effective way is to have an agency which is more nimble and smarter than Congress, overlooking what you do?”

“Yes, we’d be enthusiastic,” Altman replied. 

Riedl said that a promising start for a legal framework is the AI Bill of Rights, released in October by the Biden administration, which lays out five principles: people should be protected from “inappropriate or irrelevant data” in automated systems, should not face discrimination by algorithms, should have agency over how their data is used, should know how an automated system is being used, and should be able to opt out of such systems.

At the conclusion of the hearing, Altman laid out a three-point plan he thinks the U.S. government should adopt: form a new government agency that can license AI models, create a set of safety standards for AI models, and require independent audits by experts to measure the performance of AI models. The plan leaves unaddressed a number of questions senators raised during the hearing, including copyright regulations and greater transparency about the datasets used to train AI models.