
AI isn't just for the good guys anymore

Maria Korolov | Feb. 2, 2017
Criminals are beginning to use artificial intelligence and machine learning to get around defenses

They can't just add in more randomness, since human behavior is not actually random, he said. Spotting subtle patterns in large amounts of data is exactly what machine learning is good at -- and exactly what criminals need to do to mimic human behavior convincingly.
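To make the point concrete, here is a minimal sketch of the difference, using keystroke timing as the behavioral signal. Everything in it is assumed for illustration -- the sample delays, the choice of a log-normal fit -- and it is not drawn from any known attacker's tooling. A naive bot draws delays uniformly at random; a bot that fits a distribution to recorded human delays produces timing with the same statistical shape humans do.

```python
import math
import random
import statistics

# Hypothetical sample of human inter-keystroke delays, in seconds.
# A real attacker would harvest these from logged user sessions.
human_delays = [0.11, 0.09, 0.14, 0.31, 0.08, 0.12, 0.27, 0.10, 0.13, 0.45]

def naive_delay():
    # Uniform "randomness" is easy to flag: real typing rhythms
    # are bursty and right-skewed, not flat.
    return random.uniform(0.05, 0.5)

# Fit a log-normal distribution to the observed delays, a common
# simple model for human response times.
logs = [math.log(d) for d in human_delays]
mu, sigma = statistics.mean(logs), statistics.stdev(logs)

def learned_delay():
    # Sampling from the fitted distribution reproduces the shape
    # of human timing rather than a giveaway uniform spread.
    return random.lognormvariate(mu, sigma)

print([round(naive_delay(), 2) for _ in range(5)])
print([round(learned_delay(), 2) for _ in range(5)])
```

The same idea generalizes to any behavioral signal a defender measures: mouse paths, page dwell times, navigation order.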

Smarter email scams

According to the McAfee Labs 2017 Threats Predictions report, cyber-criminals are already using machine learning to target victims for Business Email Compromise scams, which have been escalating since early 2015.

"What artificial intelligence does is it lets them automate the tailoring of content to the victim," said Steve Grobman, Intel Security CTO at Intel, which produced the report. "Another key area where bad actors are able to use AI is in classification problems. AI is very good at classifying things into categories."

For example, the hackers can automate the process of finding the most likely victims.
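A sketch of what that triage might look like appears below. The features, labels, and model choice are all hypothetical stand-ins; the point is only that ranking scraped employee profiles by likely susceptibility is a routine classification problem once an attacker has any labeled examples at all.

```python
# Hypothetical victim triage for a BEC campaign. Features per
# employee profile: [handles_payments, email_is_public, assists_exec].
# Labels mark who responded to a past lure (all data is invented).
from sklearn.linear_model import LogisticRegression

X = [
    [1, 1, 0],  # accounts-payable clerk, public address
    [0, 1, 0],  # engineer, public address
    [1, 0, 1],  # executive assistant who moves money
    [0, 0, 0],  # back-office role, little exposure
]
y = [1, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# Score freshly scraped profiles; the highest scorers receive the
# hand-tailored "urgent wire transfer" email first.
candidates = [[1, 1, 1], [0, 1, 1], [0, 0, 1]]
for person, score in zip(candidates, model.predict_proba(candidates)[:, 1]):
    print(person, round(score, 2))
```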

The technology can also be used to help attackers stay hidden inside corporate networks, and to find vulnerable assets.

Identifying specific cases where AI or machine learning is used can be tricky, however.

"The criminals aren't too open about explaining exactly what their methodology is," he said. And he isn't aware of hard evidence, such as computers running machine learning models that were confiscated by law authorities.

"But we've seen indicators that this sort of work is happening," he said. "There are clear indications that bad actors are starting to move in this direction."

Sneakier malware and fake domains

Security providers are increasingly using machine learning to tell good software from bad, good domains from bad.

Now, there are signs that the bad guys are using machine learning themselves to figure out what patterns the defending systems are looking for, said Evan Wright, principal data scientist at Anomali.

"They'll test a lot of good software and bad software through anti-virus, and see the patterns in what the [antivirus] engines spot," he said.

Similarly, security systems look for patterns in domain generation algorithms, so that they can better spot malicious domains.

"They try to model what the good guys are doing, and have their machine learning model generate exceptions to those rules," he said.

Again, there's little hard evidence that this is actually happening.

"We've seen intentional design in the domain generation algorithms to make it harder to detect it," he said. "But they could have done that in a few different ways. It could be experiential. They tried a few different ways, and this worked."

Or they could have been particularly intuitive, he said, or have hired people who previously worked for security firms.

One indicator that an attack is coming from a machine, and not a clever -- or corrupt -- human being, is the scale of the attack. Take, for example, a common scam in which fake dating accounts are created in order to lure victims to prostitution services.
