Risk Management
Managing Systemic Risks in Tech
Lessons from finance
The models here are many, ranging from licensing requirements akin to those used in banking and pharmaceutical to stricter corporate legal liabilities. Credible whistleblower processes and governance standards such as organizational structures, boards, disclosure requirements, contingency plans, and transparency also need to be put in place. Of course, product safety requirements will continue to hold, but given the probabilistic nature of AI systems, new processes, such as continuous monitoring, will need to be developed.
Self-regulation is required, but insufficient
Similar to finance, certain tech actors—like Facebook or X (formerly Twitter)—are crucial to the entire system. Just as banks that are deemed domestically or globally “systemic” in terms of their importance face stricter regulatory oversight and liquidity requirements, so tech giants could be required to create redundancy for critical infrastructure, explainability standards for AI use, or mandatory stress tests and red teaming. Indeed, the DSA already imposes significantly stricter requirements on very large online platforms (defined as having more than 45 million unique monthly users in Europe).
Tech demands faster regulatory processes
Striking a balance between rigorous regulation and sector profitability is important to ensuring that there is continued investment in new technologies—including ways to make AI safer. For instance, stricter rules in EU banking have arguably affected overall profitability compared to U.S. banks. This asymmetry in a global financial market is simply not sustainable. It poses a risk that EU banks might not be able to efficiently recycle capital and fuel growth and stability of their countries, especially relative to their U.S. competitors. A parallel situation regarding AI would generate strategic costs for lagging behind in technology development and could mirror the huge gap in profitability that affects the U.S. and European bank sectors. This is not a call for weakening regulation, but for designing it in a thoughtful and more agile manner.
Learn with vigor, proceed swiftly, and remain prudent
The financial sector has grappled with the phenomenon of systemic risk—understood as the risk that a shock to specific components of the financial system (say, individual banks) may have cascading effects that endanger the entire system. This is what happened in 2007–2008, when a shock in the U.S. subprime mortgage lending space evolved into a global financial crisis. The repercussions extended well beyond finance, affecting global migration patterns and inequality within and across countries. The crisis was therefore “systemic” in yet another sense: A disruption within a single industry profoundly affected the entire “global system.” This is exactly the risk that many fear AI poses.
Regulatory dialogue should largely take place at industry level and aim to strike a balance between keeping an industry innovative and competitive while protecting society. Too often the debate is about regulators sanctioning a particular “systemic” agent. However, true effectiveness lies in industry and government partnering to govern and manage systemic risks. Interestingly, such partnerships have been more forthcoming in Canada and Scandinavia, which benefit from more collaborative and less individualistic cultures.
While self-regulation is insufficient, tech firms should nevertheless adopt strict risk management practices, with checks and balances and a governance structure not unlike that of banks. This essentially involves giving independent authority within the company to AI experts who can assess the appropriateness of deploying the technology in specific business cases. An “AI watchdog board” with real independence and teeth can enable companies that develop or use AI to define, implement, and evolve rigorous internal risk-management practices. Beyond individual firms, however, the tech industry needs to be regulated in each jurisdiction by appropriate agencies.
While the tech sector can learn valuable lessons from finance regarding industry-level oversight and international cooperation, there are also practices it should avoid emulating.
Tech needs to remain continuously mindful of its unknowns. Photo by Michael Shannon on Unsplash
New global institutions and international coordination are paramount
This may also be true for AI. For example, while large language models (LLMs) may come from big tech, applications by smaller players across industries could pose major risks in specific domains, for example, in critical infrastructure safety. A broader view of the tech system, considering sensitive applications within or by nontech companies, is essential to effectively manage risk.
As the industry becomes more interconnected, financial regulators have started to realize that size alone is an insufficient measure of risk. The recent collapses of Silicon Valley and Signature banks illustrate the point. Although the contagion was rapidly mitigated by regulators, it was clear that these institutions’ failure did pose significant risk to the system, despite falling below the size threshold for the strictest scrutiny by the Federal Reserve.
Published Aug. 29, 2023, on INSEAD.