
What happens when cybercriminals start to use machine learning?

Tamlin Magee | Oct. 23, 2017
Andrew Tsonchev, director of cyber analysis for AI threat defence business Darktrace, says it could enable hackers to launch attacks that were previously only within the reach of nation state actors.

Right now, Tsonchev said, Darktrace hasn't spotted a true machine learning attack in the wild.

"This is something we are super focused on - it's what we do - and we're very aware of the benefits so we are very worried about the stage when there is widespread access and adoption of AI-enabled malware and toolkits for attackers to use," explained Tsonchev.

"The danger with AI is that it threatens to collapse this distinction," Tsonchev said. "In that suddenly, you can use AI-enabled tools to replicate, at mass scale, the kind of targeting and tailoring that at the minute is only possible on a case-by-case basis.

"That is because, by and large, applications of AI unlock decision-making, and that is what human-driven attacks do. You have an attacker in a network, on a keyboard, and they can case the joint. They can see what the weak points are. They can adapt the attack path they follow to the particular environment they find themselves in; that's why they're hard to detect.

"We're very worried about malware that does that: malware that uses machine learning classifiers to land and observe the network and see what it can do."

This sort of thinking could be applied to all the attacks we have come to be familiar with. Take phishing: most attacks are 'spray and pray' approaches directed at the world in general, and if someone bites, then great.

Spearphishing - its highly targeted cousin - requires the attacker to pay close attention to their target, to stalk their social media accounts, to build a profile of them that they can manipulate with an email that's convincing enough to pass what Tsonchev calls the 'human sanity check'.

"The worry is that AI will be used to automate that process. Custom development, where the AI systems are trained to make phishing emails that pass the suspiciousness test. You can train an AI classifier on a bunch of genuine emails and learn what makes something convincing.

"And once you've got that, and if it works and gets into the wild, then there are no barriers to entry to do this to everybody, to every sized organisation.

"So you might get opportunistic attacks against small and medium-sized enterprises with the level of sophistication that currently only nation states direct at high-value targets.

"And that's really worrying."
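The classifier Tsonchev alludes to is, at its core, ordinary text classification. As a purely illustrative sketch of the underlying mechanism (here used defensively, to score how "phishy" a message looks), the following toy naive Bayes model is trained on a handful of invented example emails; all data, names, and thresholds are made up for illustration and are not from Darktrace:

```python
from collections import Counter
import math

# Invented toy corpora standing in for "a bunch of genuine emails"
# and a set of known phishing lures.
genuine = [
    "hi team attached is the quarterly report let me know thoughts",
    "reminder our standup moves to 10am tomorrow",
    "thanks for the update i will review the draft tonight",
]
phishing = [
    "urgent verify your account password now click this link",
    "your mailbox is full click here to avoid suspension",
    "invoice overdue confirm payment details immediately",
]

def word_counts(docs):
    # Bag-of-words counts across a list of documents.
    c = Counter()
    for d in docs:
        c.update(d.split())
    return c

g = word_counts(genuine)
p = word_counts(phishing)
vocab = set(g) | set(p)
g_total, p_total = sum(g.values()), sum(p.values())

def log_odds_phishing(text):
    # Naive Bayes log-odds with add-one smoothing:
    # a positive score means the wording looks more like the
    # phishing corpus than the genuine one.
    score = 0.0
    for w in text.split():
        prob_genuine = (g[w] + 1) / (g_total + len(vocab))
        prob_phish = (p[w] + 1) / (p_total + len(vocab))
        score += math.log(prob_phish / prob_genuine)
    return score

print(log_odds_phishing("click here to verify your password") > 0)   # prints True
print(log_odds_phishing("attached is the draft report") > 0)          # prints False
```

The same statistical machinery, run in reverse as a generator's scoring function, is what makes the scenario above plausible: anything that can score "convincing" can be used to search for it.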

