SEC Chair Gary Gensler. Image Credit: Getty Images
AI could cause a “nearly unavoidable” financial crisis if regulators cannot get a handle on it soon, the head of the SEC said on Monday. Gary Gensler, the chairperson of the Securities and Exchange Commission, told the Financial Times that the lack of diversity in AI models used by companies could one day pose a significant threat to U.S. financial stability. While various open-source AI models exist, most entities today rely on a small number of tools developed by a select group of players, such as OpenAI’s ChatGPT.
“It’s frankly a hard challenge,” Gensler said. “It’s a hard financial stability issue to address because most of our regulation is about individual institutions, individual banks, individual money market funds, individual brokers; it’s just in the nature of what we do.” Gensler continued that implementing AI regulation would be a “horizontal issue,” because “many institutions might be relying on the same underlying base model or underlying data aggregator.”

Wall Street has begun introducing AI-powered technology for tasks ranging from market monitoring to automated account opening, the FT noted. “I do think we will in the future have a financial crisis,” Gensler told the FT. “In the after-action reports, people will say ‘Aha! There was either one data aggregator or one model…we’ve relied on.’” He continued that artificial intelligence has such powerful “economics of networks” that a crisis could happen as soon as the late 2020s or early 2030s. Such a crisis, he said, was at present “nearly unavoidable.”

Gensler’s comments are notable because something similar has already happened. In 2010, the stock market briefly “flash crashed” by over a trillion dollars and immediately rebounded—an unprecedented occurrence that did not result in wider shocks but left regulators and market participants looking for answers. Regulators eventually concluded that high-frequency trading algorithms had contributed to the crash with a cascade of rapid trades.

The cloud computing companies that offer AI services, and the tech companies that develop those models, are also often not subject to the strict regulations that govern Wall Street, Gensler noted. “And how many cloud providers do we have in this country? I think it’s really a cross-regulatory challenge.” U.S. regulators have slowly begun launching initiatives to regulate artificial intelligence, amid concerns that the current concentration in the market could have anticompetitive effects that might lead to monopolies.
The SEC has proposed a rule that would require stock brokerages and investment advisory firms to “address conflicts of interest” arising from their use of predictive analytics. And in July, the FTC opened an investigation into OpenAI over the “false” and “disparaging” information its ChatGPT model sometimes generates, according to the agency’s complaint.