Managing Systemic Risks in Tech


Lessons from finance

The models here are many, ranging from licensing requirements akin to those used in banking and pharmaceuticals to stricter corporate legal liabilities. Credible whistleblower processes and governance standards such as organizational structures, boards, disclosure requirements, contingency plans, and transparency also need to be put in place. Of course, product safety requirements will continue to hold, but given the probabilistic nature of AI systems, new processes, such as continuous monitoring, will need to be developed.
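
To make "continuous monitoring" concrete, here is a minimal sketch in Python, under the assumption that a deployed model emits a stream of prediction scores; the choice of the population stability index and the threshold value are illustrative, not a mandated standard.

    import numpy as np

    def psi(reference, live, bins=10):
        """Population stability index between a reference window of model
        scores and a live window; larger values signal distribution drift."""
        edges = np.histogram_bin_edges(reference, bins=bins)
        ref_frac = np.histogram(reference, bins=edges)[0] / len(reference)
        live_frac = np.histogram(live, bins=edges)[0] / len(live)
        ref_frac = np.clip(ref_frac, 1e-6, None)   # guard against empty bins
        live_frac = np.clip(live_frac, 1e-6, None)
        return float(np.sum((live_frac - ref_frac) * np.log(live_frac / ref_frac)))

    DRIFT_THRESHOLD = 0.2  # illustrative; 0.1 to 0.25 is a common rule of thumb

    def check_window(reference_scores, live_scores):
        """Compare the live window against the reference; escalate on drift."""
        score = psi(np.asarray(reference_scores), np.asarray(live_scores))
        if score > DRIFT_THRESHOLD:
            print(f"ALERT: output drift detected (PSI={score:.3f})")
        return score

In a real deployment the alert would feed an incident process rather than a print statement; the point is that monitoring is a running statistical test, not a one-off certification.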

Self-regulation is required, but insufficient

As in finance, certain tech actors—like Facebook or X (formerly Twitter)—are crucial to the entire system. Just as banks deemed domestically or globally “systemic” face stricter regulatory oversight and liquidity requirements, so tech giants could be required to build redundancy for critical infrastructure, meet explainability standards for AI use, or undergo mandatory stress tests and red teaming. Indeed, the DSA already imposes significantly stricter requirements on very large online platforms, defined as those with more than 45 million average monthly active users in the EU.

Tech demands faster regulatory processes

Striking a balance between rigorous regulation and sector profitability is essential to ensure continued investment in new technologies—including ways to make AI safer. For instance, stricter rules in EU banking have arguably hurt profitability relative to U.S. banks. This asymmetry in a global financial market is not sustainable: It risks leaving EU banks unable to efficiently recycle capital and fuel the growth and stability of their home economies, especially relative to their U.S. competitors. A parallel situation in AI would impose strategic costs for lagging behind in technology development and could mirror the wide profitability gap between the U.S. and European banking sectors. This is not a call for weakening regulation, but for designing it in a thoughtful and more agile manner.

Learn with vigor, proceed swiftly, and remain prudent

The financial sector has grappled with the phenomenon of systemic risk, understood as the risk that a shock to specific components of the financial system (say, individual banks) may have cascading effects that endanger the entire system. This is what happened in 2007–2008, when a shock in the U.S. subprime mortgage market evolved into a global financial crisis. The repercussions extended well beyond finance, affecting global migration patterns and inequality within and across countries. The crisis was therefore “systemic” in yet another sense: A disruption within a single industry profoundly affected the entire “global system.” This is exactly the risk that many fear AI poses.

Regulatory dialogue should largely take place at the industry level and aim to keep an industry innovative and competitive while protecting society. Too often the debate is about regulators sanctioning a particular “systemic” agent. True effectiveness, however, lies in industry and government partnering to govern and manage systemic risks. Interestingly, such partnerships have been more forthcoming in Canada and Scandinavia, which benefit from more collaborative and less individualistic cultures.

While self-regulation is insufficient, tech firms should nevertheless adopt strict risk management practices, with checks and balances and a governance structure not unlike that of banks. This essentially involves giving independent authority within the company to AI experts who can assess the appropriateness of deploying the technology in specific business cases. An “AI watchdog board” with real independence and teeth can enable companies that develop or use AI to define, implement, and evolve rigorous internal risk-management practices. Beyond individual firms, however, the tech industry needs to be regulated in each jurisdiction by appropriate agencies. 

While the tech sector can learn valuable lessons from finance regarding industry-level oversight and international cooperation, there are also practices it should avoid emulating.

[Photo caption: Tech needs to remain continuously mindful of its unknowns. Photo by Michael Shannon on Unsplash.]

New global institutions and international coordination are paramount

As the industry becomes more interconnected, financial regulators have started to realize that size alone is an insufficient measure of risk. The recent collapses of Silicon Valley Bank and Signature Bank illustrate the point. Although the contagion was rapidly contained by regulators, it was clear that these institutions’ failure did pose significant risk to the system, despite falling below the size threshold for the strictest scrutiny by the Federal Reserve.

The same may be true for AI. While large language models (LLMs) may come from big tech, applications by smaller players across industries could pose major risks in specific domains, for example, in critical infrastructure safety. A broader view of the tech system, one that considers sensitive applications within or by nontech companies, is essential to managing risk effectively.

First published Aug. 29, 2023, on INSEAD.


Last month, the heads of seven major American AI companies emerged from the White House with an agreement on “self-regulation.” On the other side of the Atlantic, Europeans are debating the long-awaited EU AI Act, the next major digital regulation following the EU’s Digital Services Act (DSA). The DSA aims to contain “systemic risks” from tech, including the “potentially rapid and wide dissemination of illegal content and of information” that is “incompatible with” large online platforms’ terms and conditions.

Large tech companies operate globally and must adapt to diverse regulatory environments. As has been the case in finance, global cooperation is crucial to prevent “jurisdictional arbitrage” and to coordinate responses to crises across governments. Some consistency and homogeneity of policies and their implementation, within and across geographies and business models, are necessary. For example, one safety net for the financial system in the event of a systemic crisis is to allow time (30 days in the case of the banking system) for G20 governments to coordinate their responses. Hence, those governments, through the liquidity coverage ratio, require all systemic institutions to be able to survive for 30 days if the world comes to a standstill.
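
In simplified form, that liquidity coverage ratio requirement (from the Basel III framework) reads:

    LCR = (stock of high-quality liquid assets) / (total net cash outflows over the next 30 calendar days) ≥ 100%

The numerator must be large enough to cover a full month of stressed outflows, which is precisely what buys governments their 30-day coordination window.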


Given the speed of innovation, managing tech’s systemic risks necessitates swift collaboration between regulators and the industry. Fortunately, tech can draw lessons from other sectors without repeating their costly mistakes, such as overreliance on self-regulation. The financial industry has spent decades, if not centuries, developing and refining mechanisms to contain, mitigate, and respond to broadly similar risks. These efforts can provide a starting point for tech regulation.

Learning from finance

While large banks are sizable, the concentration of power is considerably higher in tech, particularly in AI. The system is poised to depend on a small number of behemoths that control the critical IP and resources underpinning advanced AI products. This, coupled with the gap in technical understanding between firms and regulators, calls for more collaboration between large tech firms and regulators, as well as a stronger commitment by those firms to their public-interest duty. Tech firms can help regulators design the principles-based, rather than rules-based, regulatory framework that the rapidly evolving field of AI is likely to require.

Firms and regulators in finance can rely on quantitative risk models that leverage a wealth of historical data about previous crises. As noted earlier, finance has developed a clearer sense of what a crisis looks like, even if potential root causes aren’t always identified. Matters are very different in the age of AI: There is no comparable history to build on and no data about past crises. Any effort to replicate the “riskometers” used in finance may therefore overlook crucial sources of risk in the rapidly evolving tech landscape.
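
To see what such a “riskometer” looks like in practice, consider a minimal sketch of historical value-at-risk, one of finance’s standard quantitative tools; the data below are simulated purely for illustration, and the point is that the estimate rests entirely on a long record of past observations, the ingredient AI has no equivalent of.

    import numpy as np

    def historical_var(returns, confidence=0.99):
        """One-day value-at-risk estimated directly from past returns:
        the loss exceeded on only (1 - confidence) of historical days."""
        return float(-np.quantile(returns, 1.0 - confidence))

    # Simulated daily returns standing in for roughly 20 years of market history.
    rng = np.random.default_rng(0)
    daily_returns = rng.normal(loc=0.0003, scale=0.01, size=5000)
    print(f"99% one-day VaR: {historical_var(daily_returns):.2%} of portfolio value")

Strip out the 5,000 days of history and the estimator has nothing to take a quantile over; that, in miniature, is the tech sector’s problem.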

Collaborative learning is at the core of intelligence 

Regulatory dialogue should involve the whole industry

Tech likely requires a different engagement model 

Ongoing innovation requires balancing regulatory stringency with sector profitability and competitiveness

Tech needs ‘nested’ lines of defense 

If there is one lesson that the tech industry can learn from the financial sector, it is this: While it is not possible to eliminate or predict all risks, proactive and reactive regulations can coexist harmoniously. Ultimately, the key lies in continuously learning, adapting, and improving. The recent advances in AI are built upon the power of (machine) learning, which is at the core of intelligence. It should come as no surprise to the AI and tech community that establishing deep learning processes might be the most crucial guiding principle for regulating technology as well.

These are radically different approaches to addressing the AI challenge. The risks posed by AI have long been debated, including potentially systemic risks to political systems or public health from misinformation or disinformation amplified by recommender systems and deepfake technologies. Striking the right balance between fostering innovation and ensuring safety is at the center of the debate.

Hardly any attempts at self-regulation in tech have been successful (with the possible exception of the Japanese gaming sector). Even adequate risk management at the firm level may fail to address systemwide risks. Tech should embrace some form of external oversight and accept what the finance world long ago accepted: the role of regulators and independent third parties (such as auditing firms) in safeguarding the public interest and firms’ long-term “social license.”

There’s a notable difference in operating speed between tech and finance. Despite centuries of financial regulation, the quickest crisis-response window stands at 30 days. Most will agree that the response time for AI needs to be one day at most in serious crisis situations. This requires regulators and the industry to agree on rapid processes and protocols that finance doesn’t even contemplate today. The shift should balance swift and gradual methods, lest the regulator itself destabilize the system and become a risk factor.

While tech and finance may both create systemic risks, they differ significantly in their approach to risk management. The tech sector, as the newcomer, would be wise to learn from the world of finance, given the similarities between AI and finance. Both sectors rely on opaque mathematical models built on large amounts of data and complex computations. More important, in both industries these models end up being used by executives with limited understanding of the models themselves, while boards and regulators are distanced even further from the models they ought to govern. Similarities also extend to other risks, such as anti-money-laundering concerns and the need for effective processes for monitoring and handling so-called AI incidents.

‘Too big to fail’ is also true in tech…