Artificial bugs for enhanced cybersecurity

Most people are aware that cyberattacks occur, but few realise that the number of cyberattacks has doubled since 2019, before the onset of the COVID-19 pandemic (Jamilov et al. 2023). Most of us are unaware of this increase because the direct losses reported after these attacks have so far been relatively modest.

Yet this situation may soon change: the risk of extreme losses has grown along with the number of attacks, and the financial sector is particularly exposed. Currently, one-fifth of all cyberattacks target this sector, and this share is expected to rise in the short term (IMF 2024).

So far, cyber incidents in the financial sector have not been systemic, but the interconnectedness of the sector’s financial and technological aspects puts its stability at high risk (Glasserman and Young 2016). These risks vary with the type of interconnectedness concerned: at the technological level, for instance, a disruption can knock out critical services; more generally, disruptions can erode trust in the entire system.

In 2023, a ransomware attack on the Industrial and Commercial Bank of China disrupted trades in the US Treasury market (Financial Times 2023). Such global effects are reinforced by two further problems: first, the current level of reporting in the financial sector is insufficient; second, current approaches to insurability do not account for fast-changing risks.

Insurers struggle to keep pace with the continual emergence of new and changing risks, and the design of efficient boundaries for insurability is a recurring challenge.

As cyberattacks become more frequent, and uncertainty grows about potential future events, quantification and measurement of cyber risk and uncertainty will become pressing issues for policymakers and scholars alike.

For both insurers and the insured, the correct and efficient appraisal of losses is key, whether for anticipated risks – with the purpose of designing appropriate insurance solutions – or for losses already incurred, which must be estimated in case of damage.

Traditionally, a company’s losses from cyberattacks could be measured in terms of production losses, information losses, reputation losses, or through changes in market capitalisation. These categories no longer suffice. The assessment of risks must be diversified, and new policy approaches are now being explored.

There are several ways to evaluate, measure, mitigate, and prevent cyberattacks. Some methods advocate close monitoring for early risk recognition, while others are based on system testing (Dreyer et al. 2018). In this column, we focus on a new proposal to develop cyber risk testing with crowdsourced security and artificial bugs.

In 2019, a ‘significant flaw’ was discovered in the proposed Swiss e-voting system. Given the danger of potential vote manipulation, the Federal Council paused development and ordered a redesign of the system (Federal Chancellery of Switzerland 2019).

The flaw was discovered as part of a public intrusion test in which ‘ethical hackers’ probed the software and reported any vulnerabilities in exchange for monetary rewards. This type of programme, often called a ‘bug bounty’ or ‘crowdsourced security’ programme, has become a major tool for detecting software vulnerabilities, used by both firms and government organisations (Malladi and Subramanian 2020).

In fact, the success of crowdsourced security in recent years has led public authorities to adopt bug bounty programmes systematically as a governmental cybersecurity measure. In a recent press release, the Federal Council of Switzerland stated that “standardised security tests are no longer sufficient to uncover hidden loopholes. Therefore, in the future, it is intended that ethical hackers will search through the Federal Administration’s productive IT systems and applications for vulnerabilities as part of so-called bug bounty programmes” (Federal Department of Finance of Switzerland 2022).

Despite its promise, crowdsourced security suffers from major inefficiencies due to misaligned incentives (Akgul et al. 2023). First, bug bounty programmes create contests between ethical researchers. Since participating in a programme is costly in terms of time and resources, each hacker must make a strategic decision about which programme to participate in, if any. This creates a crowding-out effect, whereby the incentives to participate in a given programme decline as more hackers are expected to participate (see the sketch below).

Second, there is tremendous inefficiency in triage and verification efforts. Zhao et al. (2017) report that, across major bug bounty platforms, less than 25% of submissions are valid.

Third, interest and participation in these programmes are difficult to sustain. Maillart et al. (2017) and Sridhar and Ng (2021) show that bug bounty programmes attract less engagement over time, with the probability of finding bugs decaying according to a power law.
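To make the first of these inefficiencies concrete, here is a stylised numerical sketch (our own illustration with hypothetical numbers, not a model taken from any of the papers cited): with a single fixed reward, each hacker’s expected payoff falls as more hackers are expected to enter, and entry stops once the payoff turns negative.

```python
# Stylised illustration of the crowding-out effect (hypothetical numbers).
# Assumptions: each of n hackers independently finds the single organic bug
# with probability p; one fixed reward goes to a random finder; entering
# the contest costs each hacker `cost` in time and resources.

def expected_payoff(n: int, p: float = 0.3,
                    reward: float = 10_000.0, cost: float = 1_500.0) -> float:
    """Expected net payoff for one hacker when n hackers participate."""
    p_found = 1.0 - (1.0 - p) ** n        # at least one hacker finds the bug
    return reward * p_found / n - cost    # by symmetry, each expects 1/n of it

for n in range(1, 9):
    print(f"n={n}: expected payoff = {expected_payoff(n):8.2f}")
# The payoff shrinks as n grows and turns negative around n = 6 here,
# so expected entry by others crowds out participation.
```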

One of the most promising ways to evaluate risks and potential damage from cyberattacks is to simulate controlled cyberattacks under real-life conditions, as currently practised in the ECB’s cyber risk stress-testing framework (ECB 2024).

Along the same lines as the ECB’s framework, a recent paper by Gersbach et al. (2024) proposes a testing process that consists of an invited cyberattack against a system containing (1) artificially inserted bugs, and (2) potential yet unknown ‘real’ (organic) bugs.

In crowdsourced security, finding bugs is rewarded with money or reputational capital, so inserting artificial bugs into the system raises the chances that the invited ethical attackers find a bug and obtain a reward. This should provide greater incentives to participate in the bug search, inducing stricter testing than a process in which only organic bugs can be found. This ‘crowdsearch process’ (Gersbach et al. 2023) makes it possible to find the organic bugs in a system, fix them, and increase system safety.

The key to bug bounty programmes is the reward system, which generates incentives to participate in the search. Our suggestion – to augment the system by inserting artificial bugs, which would raise the chances of finding a bug and being rewarded – should prove especially helpful, as the total budget for rewards is usually limited.

Additionally, the importance that designers attach to finding organic bugs can vary: some organic bugs are less important and do not immediately endanger the functioning of a system, while more threatening ones must be found before they can damage the system.

If the designer values finding organic bugs, it is useful to augment the incentives for participating in the crowdsearch contest by inserting artificial bugs. The insertion of artificial bugs raises the incentives to participate enough to overcome the crowding-out effect, which in turn might allow for a smaller reward budget, so that organic bugs are found at lower cost.
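Extending the stylised sketch above (again with hypothetical numbers of our own choosing), easy-to-find artificial bugs carrying their own rewards raise each participant’s expected payoff, so participation remains profitable for a larger crowd of testers:

```python
# Extends the stylised contest above with k artificial bugs (hypothetical
# numbers). Each hacker finds the organic bug with probability p and each
# artificial bug with probability q; every bug carries its own reward, paid
# to a random hacker among those who found it.

def expected_payoff(n: int, p: float = 0.3, k: int = 0, q: float = 0.6,
                    r_org: float = 10_000.0, r_art: float = 2_000.0,
                    cost: float = 1_500.0) -> float:
    """Expected net payoff per hacker with k artificial bugs inserted."""
    def share(prob: float, reward: float) -> float:
        # A bug's reward is paid whenever at least one of the n hackers
        # finds it; by symmetry each participant expects a 1/n share.
        return reward * (1.0 - (1.0 - prob) ** n) / n
    return share(p, r_org) + k * share(q, r_art) - cost

for n in range(1, 11):
    print(f"n={n:2d}: organic only {expected_payoff(n, k=0):8.2f} | "
          f"plus 2 artificial bugs {expected_payoff(n, k=2):8.2f}")
# In this illustration the artificial bugs keep participation profitable up
# to roughly n = 9 instead of n = 5, so the designer can sustain a larger
# crowd of testers, or trim rewards while keeping the same crowd.
```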

Inserting artificial bugs can also address other inefficiencies in crowdsourced security. For example, artificial bugs can help screen out invalid and spurious submissions by allowing organisations to prioritise submissions from researchers who have also reported artificial bugs. Moreover, inserting artificial bugs may help sustain interest over time by creating new opportunities for rewards.
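A minimal sketch of the screening idea (the data model and ordering rule are our own assumptions, not a prescription from the paper): submissions from researchers who have already reported known artificial bugs are verified first, since those researchers have demonstrated genuine engagement with the system under test.

```python
# Minimal triage sketch (data model and ordering rule are our assumptions):
# reports from researchers who have found known artificial bugs are
# verified first, since those researchers have proven real engagement.

from dataclasses import dataclass

@dataclass
class Submission:
    researcher: str
    description: str
    artificial_bugs_found: int  # known artificial bugs this researcher reported

def triage(submissions: list[Submission]) -> list[Submission]:
    """Order the verification queue: proven researchers first."""
    return sorted(submissions, key=lambda s: s.artificial_bugs_found, reverse=True)

queue = triage([
    Submission("alice", "heap overflow in parser", artificial_bugs_found=2),
    Submission("bob", "vague report, no reproduction steps", artificial_bugs_found=0),
    Submission("carol", "auth bypass on admin route", artificial_bugs_found=1),
])
for s in queue:
    print(f"{s.researcher}: {s.description}")
```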

As to the design of the crowdsearch contest in practice, one could envision several ways to proceed. For instance, the reward granted for artificial bugs might differ from the reward for organic bugs. The organisation would then have to be able to prove to the crowdsearch participants which bugs were artificial and which were not.

The organisation might also have to prove that an artificial bug was indeed present in the system if it is not found during the crowdsearch. Such variants require credible approaches, such as commitment schemes based on asymmetric encryption or zero-knowledge proofs.
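As one possible implementation, a simple hash-based commitment scheme can serve as such a credible device (a sketch under our own assumptions; the bug description is hypothetical, and encryption-based or zero-knowledge variants are discussed in Gersbach et al. 2024): the organisation publishes a commitment to each artificial bug before the contest and reveals the underlying secret afterwards.

```python
# Sketch of a hash-based commitment to an artificial bug (our illustration).
# Publishing commit(bug) before the contest and revealing (salt, description)
# afterwards proves which reported bugs were artificial, or that an unfound
# artificial bug really existed, without letting participants locate the
# bug in advance.

import hashlib
import secrets

def commit(description: str) -> tuple[str, bytes]:
    """Return (public commitment, secret salt) for one artificial bug."""
    salt = secrets.token_bytes(32)  # random salt blinds the hash against guessing
    digest = hashlib.sha256(salt + description.encode()).hexdigest()
    return digest, salt

def verify(commitment: str, salt: bytes, description: str) -> bool:
    """Anyone can re-hash the revealed bug and compare to the commitment."""
    return hashlib.sha256(salt + description.encode()).hexdigest() == commitment

# Before the crowdsearch: only the commitment is published.
bug = "off-by-one in session-token validation (hypothetical example)"
commitment, salt = commit(bug)

# After the crowdsearch: reveal salt and description; anyone can verify.
assert verify(commitment, salt, bug)
print("commitment verified:", commitment[:16], "...")
```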

Which approach is most useful will depend on the specific application as well as on the reputation and engineering capabilities of the bug bounty designer. For example, the inserted bug can be chosen such that a zero-knowledge proof of its existence is available.

Constructing such a zero-knowledge proof requires sophisticated compilers, but a key benefit of zero-knowledge proofs over other approaches is that participants can verify at the start, rather than at the end, of the crowdsearch that an artificial bug exists.

With the simple device of artificial bugs, we expect to improve the appeal and effectiveness of bug bounty systems and to reduce cyber risk. The potential losses generated by organic bugs can endanger entire interconnected systems on which society depends, so it is crucial to find such bugs as efficiently and cheaply as possible.

Artificial bugs can be designed in versatile ways, from very simple bugs that yield fast rewards and attract many participants to sophisticated bugs that only cutting-edge crowdsearch participants will find, proving that the crowdsearch was operated at a high level.

It is now time to experiment with artificial bugs in all their variants, and with different reward frameworks, to assess which crowdsearch contest designs work best.

References

Akgul, O, T Eghtesad, A Elazari, O Gnawali, J Grossklags, M L Mazurek, D Votipka and A Laszka (2023), “Bug Hunters’ perspectives on the challenges and benefits of the bug bounty ecosystem”, 32nd USENIX Security Symposium (USENIX Security 23): 2275–91.

Dreyer, P, T M Jones, K Klima, J Oberholtzer, A Strong, J W Welburn and Z Winkelman (2018), “Estimating the Global Cost of Cyber Risk: Methodology and Examples”, RAND Corporation.

European Central Bank (2024), “ECB to stress test banks’ ability to recover from cyberattack”, press release, 3 January.

Federal Chancellery of Switzerland (2019), “Release of source code leads to discovery of flaw in Swiss Post’s new e-voting system”, press release, 12 March.

Federal Department of Finance of Switzerland (2022), “Federal Administration procures platform for bug bounty programmes”, press release, 3 August.

Financial Times (2023), “Ransomware attack on ICBC disrupts trades in US Treasury market”, 10 November.

Gersbach, H, A Mamageishvili and F Pitsuwan (2023), “Crowdsearch”, CEPR Discussion Paper DP18529, 16 October.

Gersbach, H, F Pitsuwan and P Blieske (2024), “Artificial Bugs for Bug Bounty”, CEPR Discussion Paper DP19047, 6 May.

Glasserman, P and H P Young (2016), “Contagion in financial networks”, Journal of Economic Literature 54(3): 779–831.

International Monetary Fund (2024), “Global Financial Stability Report: The Last Mile: Financial Vulnerabilities and Risks”, Washington, DC, April.

Jamilov, R, H Rey and A Tahoun (2023), “The Anatomy of Cyber Risk”, NBER Working Paper 28906.

Maillart, T, M Zhao, J Grossklags and J Chuang (2017), “Given enough eyeballs, all bugs are shallow? Revisiting Eric Raymond with bug bounty program”, Journal of Cybersecurity 3(2): 81–90.

Malladi, S S and H C Subramanian (2020), “Bug Bounty Programs for Cybersecurity: Practices, Issues, and Recommendations”, IEEE Software 37(1): 31–39.

Sridhar, K and M Ng (2021), “Hacking for good: Leveraging HackerOne data to develop an economic model of Bug Bounties”, Journal of Cybersecurity 7(1): 1–9.

Zhao, M, A Laszka and J Grossklags (2017), “Devising Effective Policies for Bug Bounty Platforms and Security Vulnerability Discovery”, Journal of Information Policy 7: 372–418.

This article was originally published on VoxEU.org.
