
ML accelerates the cyber arms race — we need real security more than ever


Machine learning is in vogue and is being applied to many classes of problems. One of them is cybersecurity, where ML is used to find vulnerabilities in code, simulate attacks, and detect when an intruder has breached a system’s defences. Ignoring that intrusion detection is an admission of defeat (it comes into play when your system is already compromised!), this sounds like a good development: it helps defenders find weaknesses faster, hopefully before the attacker does.

This rather optimistic view of the role of ML in cybersecurity ignores the fact that the attacker will use the same techniques to find weaknesses faster. Furthermore, let’s assume, optimistically, that ML speeds up the defence (proactively detecting weaknesses, detecting intrusions) as much as the attack (detecting and exploiting weaknesses). This is a big assumption, as the attacker can choose where to attack, while the defender must defend everywhere. But even if this assumption holds, detecting vulnerabilities is only part of the defence: the defender must also remove those vulnerabilities, and that part is not accelerated by ML, as it still requires humans to analyse, modify, test, and deploy programs.

[Figure: The patch-and-pray cycle — it works for the attackers]

In other words, the “patch” part of the traditional, reactive patch-and-pray cycle of software debugging isn’t accelerated, only the “pray” part. So, rather than strengthening the defender, the net effect of ML is to increase the attacker’s advantage in the cybersecurity arms race.

This is not an argument for stopping the defensive use of ML in cybersecurity. It is an argument that ML is not the technology to win the cybersecurity war — it will, at best, delay the inevitable defeat. That’s still better than doing nothing, but it’s a fatal mistake to think it’s all you need to do.

In this respect, it is bewildering to see the widespread ML-mania everywhere in cybersecurity; for example, the Australian Government’s National Security Challenges for the National Intelligence and Security research grants program mentions ML a lot, but is surprisingly quiet about anything that will prevent attacks in the first place. Other countries don’t seem much better. 

ML hasn’t changed the fact that systems will be compromised unless they are secure by design and their critical components operate to specification. Cybersecurity work needs to focus on these fundamental approaches. Anything else buys us at best some breathing space, and is at worst a distraction that creates a fatal illusion of security.

We need real cybersecurity more than ever, especially since the advances in ML shift the battlefield further in favour of the attacker. We need to re-focus on the fundamentals: Security-oriented design that enables proof of security enforcement, and implementations that can be proved to match the design. The seL4 microkernel and work based on it show that it is possible, but as a community, we need to continue to work on scaling these guarantees up to full, real-world systems. ML won’t do it for us.
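To make concrete what “implementations that can be proved to match the design” means: seL4’s functional-correctness result is, in essence, a refinement theorem. In simplified and hypothetical notation (not seL4’s actual formalisation), write C for the implementation’s transition relation, A for the specification’s, and R for an abstraction relation between concrete states s and abstract states σ:

\[
\forall s_1, s_2, \sigma_1.\;
(s_1, \sigma_1) \in R \;\land\; s_1 \xrightarrow{\,C\,} s_2
\;\implies\;
\exists \sigma_2.\; \sigma_1 \xrightarrow{\,A\,} \sigma_2 \;\land\; (s_2, \sigma_2) \in R
\]

Every step the implementation can take corresponds to a step the specification allows, so a security property proved of the specification carries over to the running code — a guarantee no amount of ML-driven bug hunting can provide.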

This post originally appeared on 2022-11-08 on the ACM SIGBED blog.

Comment by Matthew Fernandez:

    Isn’t this (advantaging the attacker more than the defender) true of most general-purpose technologies? E.g. attackers use verification technology like SMT solvers to derive control-flow paths to vulnerable code.

    Reply: Sure. Although it seems SMT solvers are fiddly beasts to get anything out of…
