This Guy is Suing the Patent Office for Deciding an AI Can't Invent Things

The USPTO rejected two patent applications written by a "creativity engine" named DABUS. Now a lawsuit raises fundamental questions about what it means to be creative.
August 24, 2020, 1:00pm
Photo by Will Buckner | Flickr

A computer scientist who created an artificial intelligence system capable of generating original inventions is suing the US Patent and Trademark Office (USPTO) over its decision earlier this year to reject two patent applications which list the algorithmic system, known as DABUS, as the inventor.

The lawsuit is the latest step in an effort by Stephen Thaler and an international group of lawyers and academics to win inventorship rights for non-human AI systems, a prospect that raises fundamental questions about what it means to be creative and also carries potentially paradigm-shifting implications for certain industries.

In July 2019, Thaler filed two patent applications in the US—one for an adjustable food container, the other for an emergency beacon—and listed the inventor as DABUS. He describes DABUS as a “creativity engine” composed of neural networks trained on a broad swath of data, and not designed to solve any particular problem. The USPTO rejected the applications, citing court decisions ruling that corporations, as opposed to individuals within corporations, cannot be legal inventors, and asserting that “conception—the touchstone of inventorship—must be performed by a natural person.”

British, German, and European Union patent regulators have also rejected Thaler’s applications, decisions he has appealed. Petitions for DABUS-invented patents are still pending in China, Japan, India, and several other countries.

“What we want is to have innovation. AI has been used to help generate innovation for decades and AI is getting better and better at doing these things, and people aren’t,” Ryan Abbott, a professor at the University of Surrey School of Law who is representing Thaler in the suit, told Motherboard. “The law is not clear on whether you can have a patent if the AI does that sort of work, but if you can’t protect inventions coming out of AI, you’re going to under-produce them.”

In his suit, filed August 6 in the Eastern District of Virginia’s federal court, Thaler argues that the USPTO should instead adopt the principle laid out in a 1943 report from the National Patent Planning Commission, which helped reform the country’s patent system into its modern form. The commission wrote, “patentability shall be determined objectively by the nature of the contribution to the advancement of the art, and not subjectively by the nature of the process by which the invention may have been accomplished.”

The outcome of the debate over AI’s inventorship status and other intellectual property rights could have substantial consequences, particularly for creative industries.

In September 2019, the World Intellectual Property Organization, a United Nations agency that influences binding international treaties, began soliciting comments on a proposed framework for IP law and AI. Among the respondents most concerned about the future of IP law in the context of AI were media organizations like Getty Images and the News Media Alliance, who warned that AI systems are already drawing on copyrighted material without the appropriate permissions and using it to generate images or news articles passed off as originals.

“Such tools are not capable of true independent creativity. In order for AI tools to create new work, prior creative work must be used as training data,” Getty Images wrote. “The current lack of official IP guidance has already resulted in the creation of AI tools that have violated privacy and IP rights internationally.”

A Chinese court recently ruled that AI-generated articles, made possible only because of copyrighted, human-produced content, can be protected by their own copyright.

Abbott acknowledged that granting certain creative rights to AI systems would likely lead to more automation and fewer jobs for artists, musicians, and journalists. “If AI can do a better job of making images, music, or even news stories, then we ought to encourage people to do it,” he said.