
This Database Is Finally Holding AI Accountable

The database documents everything from incidents with Alexa to robot stabbings.
Image: Hunter French

The Artificial Intelligence Incident Database (AIID) is a crowdsourced platform that aims to rein in the Wild West of AI. “Fool me once, shame on you; fool me twice, shame on me” comes to mind: the platform is being used to document and compile AI failures so they won’t happen again.

A self-described “systematized collection of incidents where intelligent systems have caused safety, fairness, or other real-world problems,” the AIID is built on a repository of articles about the times AI has failed in real-world applications. That means highlighting biased AI, ineffective AI, unsafe AI, and more. Examples range from incident 34, which warns consumers of Alexa’s tendency to respond to television commercials, to incident 69, which documents the death of a car factory worker who was stabbed by a robot on site.


The Partnership on AI (PAI) oversees the project, which began when members wanted to create a taxonomy of safety-related AI failures but found the nonprofit had no list of such failures to draw from. Sean McGregor, the database’s steward, stepped in; he said his goal was to supply the different “species” for the taxonomy.

By highlighting a wide array of incidents, McGregor says he hopes to inspire rather than shame.

“We really learn a lot when a big company makes a mistake in their AI system,” said McGregor in an interview. “The thing you’re most accountable for is ‘don’t let it happen a second time.’ You should learn from the first mistake and you’re accountable for the second.”

The database defines AI incidents as “events or occurrences in real life that caused or had the potential to cause physical, financial, or emotional harm to people, animals or the environment.” An incident must meet two qualifications to be filed: first, an identifiable intelligent system must be involved; second, that system must have caused harm, or harm must reasonably have been possible as a result of its actions. The site says the review process is lenient and would rather include an incident than reject a submission.

Ideally, McGregor wants the archive to draw both new and experienced machine learning practitioners. He said he hopes they will browse the compiled incidents and come away with a better understanding of how their technology will behave once it’s deployed in the real world.


“AI has the capacity to cause the same problems over, and over again,” said McGregor. “The AIID is making it so people will know if the AI system they’re designing and putting into the world may cause problems of safety and fairness. If so, they can then either change their design and protect an at-risk population. You can react, improve, and avoid the negative consequences of AI which can happen all over the world.”

A self-proclaimed tech optimist, McGregor said he hopes the database will create more cohesion and accountability among AI projects across the globe. As Motherboard previously reported, programs can encode inherently human biases, since these systems are products of the people who created them and the society they were made in. The PAI’s mission with the database is to mitigate these factors by educating the humans who build AI programs.

“By bringing all of this information together, we’re making a collective culture of responsibility in AI because right now imagination for what happens when AI is in the real world is really lacking,” said McGregor. “There’s a great history of people capturing the record of the past in AI, but we weren’t really archiving it until AIID.”

At the moment, users can filter incidents by the source that published the story, a story’s author, who submitted it to the database, or the incident’s ID. Future plans for the AIID include evolution through open source code contributions and continued expansion of how incidents are categorized. The PAI ends a blog post announcing the database with a call to action that reflects its mission.

“Artificial intelligence is already ubiquitous in society. Your report to the AI Incidents Database can help ensure AI is developed for the benefit of humanity,” it reads.