When artificial intelligence becomes a central banker

Jon Danielsson is the Director of the Systemic Risk Centre at the London School of Economics and Political Science

Artificial intelligence is expected to be widely used by central banks as it brings considerable cost saving and efficiency benefits. However, as this column argues, it also raises difficult questions around which tasks can safely be outsourced to AI and what needs to stay in the hands of human decision makers.

Senior decision makers will need to appreciate how AI advice differs from that produced by human specialists, and shape their human resource policies and organisational structure to allow for the most efficient use of AI without it threatening the mission of the organisation.

Central banks are rapidly deploying artificial intelligence (AI), driven by the promise of increased efficiency and cost reductions. AI engines are already serving as central bankers. But because most AI applications today are low level, and given the conservative nature of central banks, AI adoption is slower than in private sector financial institutions.

Still, the direction of travel seems inevitable, with AI set to take on increasingly important roles in central banking. That raises questions about what we can entrust to AI and where humans need to be in charge.

We might think that the economy, and especially the financial system – the domain of central banks – is the ideal application for AI. After all, the economy and the financial system generate almost infinite amounts of data, giving AI plenty to train on.

Every minute decision made by financial institutions is recorded, and trades are time-stamped to the microsecond. The emails, messages, and phone calls of traders, and important decision makers’ interactions with clients, are recorded, and central banks have access to very granular economic data.

But data do not equal information, and making sense of all these data flows is like drinking from a fire hose. Even worse, the information about the next crisis event or inflationary episode might not even be in observed data.

What AI can and can’t do

At the risk of oversimplifying, it is helpful to think of the benefits and threats of AI on a continuum.

On one end, we have problems with well-defined objectives, bounded and immutable rules, and a finite, known action space, like the game of chess. Here, AI excels, making much better decisions than humans. It might not even need external data, because it can generate its own training datasets through self-play.

For central banks, this includes ordinary day-to-day operations, monitoring, and decisions, such as the enforcement of microprudential rules, payment system operation, and the monitoring of economic activity. The abundance of data, clear rules and objectives, and repeated events make these tasks ideal for AI.

We already see this in the private sector, with BlackRock’s AI-powered Aladdin serving as the world’s top risk management engine. Robo-regulators in charge of ‘RegTech’ are an ideal AI application. At the moment, such work may be performed by professionals with a bachelor’s or master’s degree, and central banks employ a large number of these.

Central banks may first perceive value in having AI collaborate with human staff to tackle some of the many jobs that require attention, while not altering staff levels.

However, as time passes, central banks may grow to embrace the superior decisions and cost savings that come from replacing employees with AI. That is largely possible with today’s AI technology (Noy and Zhang 2023, Ilzetzki and Jain 2023).

As the rules blur, objectives become unclear, events infrequent, and the action space fuzzy, AI starts to lose its advantage. It has limited information to train on, and important decisions might draw on domains outside of the AI training dataset.

This includes higher-level economic activity analysis, which may involve PhD-level economists authoring reports and forecasting risk, inflation, and other economic variables – jobs that require comprehensive understanding of data, statistics, programming, and, most importantly, economics.

Such employees might generate recommendations on typical monetary policy decisions based on some Taylor-type rule, macroprudential tuning of the composition and the amount of liquidity and capital buffers, or market turmoil analysis.
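As a concrete illustration, the classic Taylor (1993) rule – one standard formulation of the Taylor-type rules mentioned above – sets the policy rate as a function of the inflation and output gaps:

```latex
% Taylor-type monetary policy rule
% i_t: nominal policy rate, r^*: equilibrium real rate,
% \pi_t: inflation, \pi^*: inflation target,
% y_t - \bar{y}_t: output gap
i_t = r^* + \pi_t + a_{\pi}\,(\pi_t - \pi^*) + a_{y}\,(y_t - \bar{y}_t)
```

with Taylor’s original weights of a_π = a_y = 0.5. Precisely because such rules are explicit and estimated on long histories of data, recommendations built on them are the kind of analysis an AI can readily reproduce.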

While the skill level for such work is higher than for ordinary activities, a long history of repeated research, coupled with standard analysis frameworks, leaves a significant amount of material for AI to train on. And crucially, such work does not involve much abstract analysis.

AI may in the future outperform human personnel in such activities, and senior decision makers might come to appreciate the faster and more accurate reports that AI produces. This is already happening rapidly, for example with ChatGPT and forecasting overseen by AI.

In extreme cases – such as deciding how to respond to financial crises or rapidly rising inflation, events that the typical central banker might face only once in a professional lifetime – human decision makers have the advantage. They may have to set their own objectives, while events are essentially unique, information is extremely scarce, expert advice is contradictory, and the action space is unknown. This is the one area where AI is at a disadvantage and may be outperformed by the human abstract analyst (Danielsson et al. 2022).

In such situations, mistakes can be catastrophic. In the 1980s, an AI called EURISKO used a cute trick to defeat all of its human competitors in a naval wargame, sinking its own slowest ships to achieve better manoeuvrability than its human competitors. And that is the problem with AI.

How do we know it will do the right thing? Human admirals don’t have to be told they can’t sink their own ships; they just know. The AI engine has to be told. But the world is complex, and creating rules covering every eventuality is impossible. AI will eventually run into cases where it takes critical decisions no human would find acceptable.

Of course, human decision makers mess up more often than AI. But there are crucial differences. Humans come with a lifetime of experience and knowledge of relevant fields, like philosophy, history, politics, and ethics, allowing them to react to unforeseen circumstances and make decisions that meet political and ethical standards without those standards having to be spelled out.

While AI may make better decisions than a single human most of the time, it currently has only one representation of the world, whereas each human has their own individual worldview based on past experiences. Group decisions made by decision makers with diverse points of view can be more robust than those of an individual AI. No current or envisioned AI technology can make such group consensus decisions (Danielsson et al. 2020).

Furthermore, before putting humans in charge of the most important domains, we can ask them how they would make decisions in hypothetical scenarios and, crucially, ask them to justify those decisions. They can be held to account and required to testify before Senate committees.

If they mess up, they can be fired, punished, incarcerated, and lose their reputation. You can’t do any of that with AI. Nobody knows how it reasons or decides, nor can it explain itself. You can hold the AI engine to account, but it will not care.

Conclusion

The use of AI is growing so quickly that decision makers risk being caught off guard and faced with a fait accompli. ChatGPT and machine learning overseen by AI are already used by junior central bankers for policy work.

Instead of steering AI adoption before it becomes too widespread, central banks risk being forced to respond to AI that is already in use. While one may declare that artificial intelligence will never be utilised for certain jobs, history shows that the use of such technology sneaks up on us, and senior decision makers may be the last to know.

AI promises to significantly aid central banks by assisting them with the increasing number of tasks they encounter, allowing them to target limited resources more efficiently and execute their job more robustly. It will change both the organisation and what will be demanded of employees.

While most central bankers may not become AI experts, they likely will need to ‘speak’ AI – be familiar with it – and be comfortable taking guidance from and managing AI engines.

The most senior decision makers then must both appreciate how AI advice differs from that produced by human specialists and shape their human resource policies and organisational structure to allow for the most efficient use of AI without it threatening the mission of the organisation.

References

Danielsson, J, R Macrae and A Uthemann (2020), “Artificial intelligence as a central banker”, VoxEU.org, 6 Mar.

Danielsson, J, R Macrae and A Uthemann (2022), “Artificial intelligence and systemic risk”, Journal of Banking and Finance 140, 106290.

Noy, S and W Zhang (2023), “The productivity effects of generative artificial intelligence”, VoxEU.org, 7 June.

Ilzetzki, E and S Jain (2023), “The impact of artificial intelligence on growth and employment”, VoxEU.org, 20 June.

This article was originally published on VoxEU.org.