
AI Could 'Harm the Global Financial System, Supply Chain' US Gov Guidelines Say

The new government guidelines present a framework for mitigating AI harms across a wide swath of society.
Image: Witthaya Prasongsin via Getty Images

For better or worse, artificial intelligence (AI) tools are permeating all aspects of society, and the U.S. government wants to ensure they don't break it.

AI chatbots like ChatGPT are being used on school assignments, even passing the tests required to become a doctor or earn a business degree. Automated art tools like DALL-E and Stable Diffusion are changing the art world, to the collective outrage of human artists. Scientists have developed AI methods that can generate new enzymes. Media companies are turning to AI to cheaply generate news articles and quizzes.


Because of these impressive advancements, many of which are aimed at putting people out of a job, people are worried about the rise of AI. Machine learning programs are known to have baked-in racist and sexist biases, and the institutions that use them aren't any better—multiple innocent Black men have been arrested after being misidentified by facial recognition systems. There are concerns about digital redlining and how algorithms might decide the fates of marginalized people in the criminal justice system. And then there are the bigger, existential risks: if AI is ever put in charge of an important sector of society, can we do anything to stop it if it breaks bad?

Now, the National Institute of Standards and Technology (NIST) has decided to take matters into its own hands. On January 26, the agency released a set of guidelines that, according to a press release, is intended for “voluntary use by organizations designing, developing, deploying or using AI systems to help manage the many risks of AI technologies.” The press release also notes that the framework was developed at the direction of Congress, has been in the works for 18 months, and was created in collaboration with more than 240 organizations across the private and public sectors.

The 40-plus-page document acknowledges AI’s potential, noting that it can transform people’s lives, drive inclusive economic growth, and support scientific advancements that improve the conditions of the world. However, it then pivots to discussing how the technology can also be a risk to society, and how the risks posed by AI systems are unique: the systems are complex, are often trained on data that can change over time, and are inherently socio-technical in nature. The document also outlines three broad types of risk: harm to people, harm to an organization, and harm to an ecosystem. Specifically, it refers to “harm to the global financial system, supply chain, or interrelated systems.”

As we now know all too well, thanks to the many shocks delivered by the COVID-19 pandemic, these are the kinds of harms that can bring important parts of society grinding to a halt.

NIST document outlining AI harms. Image: NIST

“Without proper controls, AI systems can amplify, perpetuate, or exacerbate inequitable or undesirable outcomes for individuals and communities,” the document says. “AI systems can mitigate and manage inequitable outcomes. AI risk management is a key component of responsible development and use of AI systems. Responsible AI practices can help align the decisions about AI system design, development, and uses with intended aim and values.” The document also cites human centricity, social responsibility, and sustainability as core concepts in responsible AI.

The rest of the first part of the document discusses how organizations can frame AI-related risks, and analyzes AI risks and trustworthiness. It points out that AI systems should be: valid, reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair “with harmful bias mitigated.”

According to the authors, bias is broader than demographic balance and data representativeness, and can occur in the absence of prejudice, partiality, or discriminatory intent. NIST recommends monitoring, recourse channels, written policies and procedures, and other documentation to reduce bias. 

Part two focuses on the “Core of the Framework,” in which the authors propose four specific functions (Govern, Map, Measure, and Manage) to help organizations address the AI risks at play.

The creators of the document see it as a new way to bring responsible practices and actionable guidance from other sectors into AI.

“AI risk management can drive responsible uses and practices by prompting organizations and their internal teams who design, develop, and deploy AI to think more critically about context and potential or unexpected negative and positive impacts,” the document says. “Understanding and managing the risks of AI systems will help to enhance trustworthiness, and in turn, cultivate public trust.”