Artificial intelligence and systemic risk

In recent months we have observed sizeable corporate investment in the development of large-scale models – those whose training requires more than 10²³ floating-point operations – such as OpenAI’s ChatGPT, Anthropic’s Claude, Microsoft’s Copilot and Google’s Gemini. While OpenAI does not publish exact numbers, recent reports suggest ChatGPT has roughly 800 million weekly active users.
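To give a sense of what that threshold implies, a common back-of-the-envelope rule from the scaling-law literature approximates training compute as roughly six floating-point operations per model parameter per training token. The sketch below applies that approximation to a hypothetical model; the parameter and token counts are illustrative assumptions, not figures for any of the systems named above.

    # Back-of-the-envelope training-compute check (illustrative assumptions only).
    # Rule of thumb: training FLOPs ~ 6 * parameters * training tokens.

    LARGE_SCALE_THRESHOLD = 1e23  # compute threshold used above to define 'large-scale' models

    def training_flops(n_parameters: float, n_tokens: float) -> float:
        """Approximate training compute in floating-point operations."""
        return 6.0 * n_parameters * n_tokens

    # Hypothetical example: a 70-billion-parameter model trained on 1 trillion tokens.
    flops = training_flops(70e9, 1e12)
    print(f"Estimated training compute: {flops:.2e} FLOPs")
    print("Large-scale" if flops > LARGE_SCALE_THRESHOLD else "Below the threshold")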

Figure 1 shows the sharp increase in the release of large-scale AI systems since 2020. That people find these tools intuitive to use is surely one reason for their rapid and widespread adoption. Partly because the tools slot seamlessly into existing day-to-day platforms, companies are working to integrate them into their processes.

Figure 1: Large-scale AI systems released per year

Notes: Data for 2025 are up to 24 August. The white box in the 2025 bar extrapolates the data available to that date to the full year.

Source: Our World in Data.

A growing literature examines the implications for financial stability of AI’s rapid development and widespread adoption (see, among others, Financial Stability Board 2024, Aldasoro et al 2024, Danielsson and Uthemann 2024, Videgaray et al 2024, Danielsson 2025, and Foucault et al 2025). In a recent report of the Advisory Scientific Committee of the European Systemic Risk Board (Cecchetti et al 2025), we discuss how the properties of AI can interact with the various sources of systemic risk. Identifying related market failures and externalities, we then consider the implications for financial regulatory policy.

Artificial intelligence – encompassing both advanced machine-learning models and the more recently developed large language models – can solve large-scale problems quickly and change how we allocate resources. General uses of AI include knowledge-intensive tasks such as (i) aiding decision making, (ii) simulating large networks, (iii) summarising large bodies of information, (iv) solving complex optimisation problems, and (v) drafting text.

There are numerous channels through which AI can create productivity gains, including automation (or deepening existing automation), helping humans complete tasks more quickly and efficiently, and allowing us to complete new tasks (some of which have not yet been imagined). However, current estimates of the overall productivity impact of AI tend to be quite low.

In a detailed study of the US economy, Acemoglu (2024) estimates the impact on total factor productivity (TFP) to be in the range of 0.05% to 0.06% per year over the next decade. Since TFP grew on average about 0.9% per year in the US over the past quarter century, this is a very modest improvement.
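To put the two growth rates side by side, a simple compounding calculation (using the mid-point of Acemoglu’s range; the figures below are just that arithmetic, not additional estimates) shows how small the implied cumulative gain is over a decade relative to the historical trend.

    # Compare the ten-year cumulative TFP gain implied by the AI estimate
    # with the gain implied by the historical US average (illustrative arithmetic only).

    ai_tfp_growth = 0.00055        # mid-point of the 0.05%-0.06% per-year range
    historical_tfp_growth = 0.009  # roughly 0.9% per year over the past quarter century
    years = 10

    ai_cumulative = (1 + ai_tfp_growth) ** years - 1                   # about 0.55%
    historical_cumulative = (1 + historical_tfp_growth) ** years - 1   # about 9.4%

    print(f"AI-driven TFP gain over {years} years: {ai_cumulative:.2%}")
    print(f"Trend TFP gain over {years} years: {historical_cumulative:.2%}")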

Estimates suggest an uneven impact across the labour market. For example, Gmyrek et al (2023) analyse 436 occupations and identify four groups: those least likely to be affected by AI (mainly manual and unskilled workers), those where AI will augment and complement tasks (occupations such as photographers, primary school teachers or pharmacists), those where the effect is difficult to predict (among others, financial advisors, financial analysts and journalists), and those most likely to be replaced by AI (including accounting clerks, word processing operators and bank tellers).

Using detailed task-level data, the authors conclude that 24% of clerical tasks are highly exposed to AI, with a further 58% having medium exposure. For other occupations, they conclude that roughly one-quarter of tasks are medium-exposed.


Our report emphasises that AI’s ability to process immense quantities of unstructured data and to interact naturally with users allows it both to complement and to substitute for human tasks. However, using these tools comes with risks. These include difficulty in detecting AI errors, decisions based on results that are biased because of the nature of the training data, overreliance resulting from excessive trust, and challenges in overseeing systems whose workings are hard to monitor.

As with all uses of technology, the issue is not AI itself, but how both firms and individuals choose to develop and use it. In the financial sector, uses of AI by investors and intermediaries can generate externalities and spillovers.

With this in mind, we examine how AI might amplify or alter existing systemic risks in finance, as well as how it might create new ones. We consider five categories of systemic financial risks: liquidity mismatches, common exposures, interconnectedness, lack of substitutability, and leverage.

As shown in Table 1, AI’s features that can exacerbate these risks include:

  • Monitoring challenges where the complexity of AI systems makes effective oversight difficult for both users and authorities.
  • Concentration and entry barriers resulting in a small number of AI providers creating single points of failure and broad interconnectedness.
  • Model uniformity in which widespread use of similar AI models can lead to correlated exposures and amplified market reactions.
  • Overreliance and excessive trust arising when superior initial performance leads people to place too much trust in AI, increasing risk taking and hindering oversight.
  • Speed of transactions, reactions, and enhanced automation that can amplify procyclicality and make it harder to stop self-reinforcing adverse dynamics.
  • Opacity and concealment in which AI’s complexity can diminish transparency and facilitate intentional concealment of information.
  • Malicious uses where AI can enhance the capacity for fraud, cyber-attacks and market manipulation by malicious actors.
  • Hallucinations and misinformation where AI can generate false or misleading information, leading to widespread misinformed decisions and subsequent market instability.
  • History constraints where AI’s reliance on past data makes it struggle with unforeseen ‘tail events’, potentially leading to excessive risk taking.
  • Untested legal status in which the ambiguity around legal responsibility for AI actions (e.g. the right to use data for training and liability for advice provided) can pose systemic risks if providers or financial institutions face AI-related legal setbacks.
  • Complexity that makes the system inscrutable, so that AI’s decision-making processes are difficult to understand; this can trigger runs when users discover flaws or unexpected behaviour.

Notes: Titles of existing features of AI are red if they contribute to four or more sources of systemic risk and orange if they contribute to three. Potential features of AI are coloured orange to show that they are not certain to occur in the future. In the columns, sources of systemic risk are coloured red when they relate to ten or more features of AI and orange if they relate to more than six but fewer than ten features of AI.

Source: Cecchetti et al (2025).
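For readers who want to see the colour-coding rule from the table notes in concrete terms, the sketch below applies the same thresholds to a hypothetical binary feature-by-risk mapping. The entries in the mapping are placeholders for illustration only, not the actual assignments in Cecchetti et al (2025).

    # Apply the colour-coding thresholds described in the Table 1 notes
    # (existing features only) to a hypothetical feature-by-risk mapping.

    RISKS = ["liquidity mismatches", "common exposures", "interconnectedness",
             "lack of substitutability", "leverage"]

    # feature -> set of systemic-risk sources it contributes to (placeholder entries)
    feature_matrix = {
        "Monitoring challenges": {"common exposures", "interconnectedness", "leverage"},
        "Model uniformity": {"liquidity mismatches", "common exposures",
                             "interconnectedness", "leverage"},
    }

    def feature_colour(risks_hit: set) -> str:
        """Red if a feature contributes to four or more risk sources, orange if three."""
        if len(risks_hit) >= 4:
            return "red"
        if len(risks_hit) == 3:
            return "orange"
        return "none"

    def risk_colour(n_features: int) -> str:
        """Red if a risk source relates to ten or more features, orange if seven to nine."""
        if n_features >= 10:
            return "red"
        if n_features > 6:
            return "orange"
        return "none"

    for feature, risks_hit in feature_matrix.items():
        print(f"{feature}: {feature_colour(risks_hit)}")

    for risk in RISKS:
        n = sum(risk in hit for hit in feature_matrix.values())
        print(f"{risk}: {risk_colour(n)}")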

Capabilities we have not yet seen, such as the creation of a self-aware AI or complete human reliance on AI, could further amplify these risks and create additional challenges arising from a loss of human control and extreme societal dependency. For the time being, these remain hypothetical.

In response to these systemic risks and their associated market failures (fixed costs and network effects, information asymmetries, and bounded rationality), we believe it is important to review competition and consumer protection policies as well as macroprudential policies. Regarding the latter, key policy proposals include:

  • Regulatory adjustments such as recalibrating capital and liquidity requirements, enhancing circuit breakers, amending regulations addressing insider trading and other types of market abuse, and adjusting central bank liquidity facilities.
  • Transparency requirements that include adding labels to financial products to increase transparency about AI use.
  • ‘Skin in the game’ and ‘level of sophistication’ requirements so that AI providers and users bear appropriate risk.
  • Supervisory enhancements aimed at ensuring adequate IT and staff resources for supervisors, increasing analytical capabilities, strengthening oversight and enforcement, and promoting cross-border cooperation.

In every case, it is important that authorities engage in the analysis required to obtain a clearer picture of the impact and channels of influence of AI, as well as the extent of its use in the financial sector.

In the current geopolitical environment, the stakes are particularly high. Should authorities fail to keep up with the use of AI in finance, they would no longer be able to monitor emerging sources of systemic risk. The result would be more frequent bouts of financial stress requiring costly public sector intervention.

Finally, we should emphasise that the global nature of AI makes it important that governments cooperate in developing international standards to avoid actions in one jurisdiction creating fragilities in others.

Stephen Cecchetti is the Rosen Family Chair in International Finance at Brandeis International Business School, Brandeis University, and Vice-Chair of the Advisory Scientific Committee at the European Systemic Risk Board; Robin Lumsdaine is the Crown Prince of Bahrain Professor of International Finance at the Kogod School of Business, American University, and Professor of Applied Econometrics at the Erasmus School of Economics, Erasmus University Rotterdam; Tuomas Peltonen is Deputy Head of the Secretariat of the European Systemic Risk Board; and Antonio Sánchez Serrano is Senior Lead Financial Stability Expert at the European Systemic Risk Board.

References

Acemoglu, D (2024), “The simple macroeconomics of AI”, NBER Working Paper No 32487.

Aldasoro, I, L Gambacorta, A Korinek, V Shreeti and M Stein (2024), “Intelligent financial system: how AI is transforming finance”, BIS Working Paper No 1194.

Cecchetti, S, RL Lumsdaine, T Peltonen and A Sánchez Serrano (2025), “Artificial intelligence and systemic risk”, Report of the ESRB Advisory Scientific Committee No 16.

Danielsson, J (2025), “Artificial intelligence and stability”, VoxEU.org, 6 February.

Danielsson, J and A Uthemann (2024), “Artificial intelligence and financial crises”, working paper.

Financial Stability Board (2024), “The financial stability implications of Artificial Intelligence”, November.

Foucault, T, L Gambacorta, W Jiang and X Vives (2025), Artificial Intelligence in Finance, The Future of Banking 7, CEPR Press.

Gmyrek, P, J Berg and D Bescond (2023), “Generative AI and jobs: A global analysis of potential effects on job quantity and quality”, ILO Working Paper No 96.

Videgaray, L, P Aghion, B Caputo, T Forrest, A Korinek, K Langenbucher, H Miyamoto and M Wooldridge (2024), Artificial Intelligence and economic and financial policymaking, A High-Level Panel of Experts’ Report to the G7, December.

This article was originally published on VoxEU.org.
