

AI isn't just for the good guys anymore

Maria Korolov | Feb. 2, 2017
Criminals are beginning to use artificial intelligence and machine learning to get around defenses

Last summer at the Black Hat cybersecurity conference, the DARPA Cyber Grand Challenge pitted automated systems against one another, trying to find weaknesses in the others' code and exploit them.

"This is a great example of how easily machines can find and exploit new vulnerabilities, something we'll likely see increase and become more sophisticated over time," said David Gibson, vice president of strategy and market development at Varonis Systems.

His company hasn't seen any examples of hackers leveraging artificial intelligence technology or machine learning, but nobody adopts new technologies faster than the sin and hacking industries, he said.

"So it's safe to assume that hackers are already using AI for their evil purposes," he said.

The genie is out of the bottle.

"It has never been easier for white hats and black hats to obtain and learn the tools of the machine learning trade," said Don Maclean, chief cybersecurity technologist at DLT Solutions. "Software is readily available at little or no cost, and machine learning tutorials are just as easy to obtain."

Take, for example, image recognition.

It was once considered a key focus of artificial intelligence research. Today, tools such as optical character recognition are so widely available and commonly used that they're not even considered to be artificial intelligence anymore, said Shuman Ghosemajumder, CTO at Shape Security.

"People don't see them as having the same type of magic as they did before," he said. "Artificial intelligence is always what's coming in the future, as opposed to what we have right now."

Today, for example, computer vision is good enough to allow self-driving cars to navigate busy streets.

And image recognition is also good enough to solve the puzzles routinely presented to website users to prove that they are human, he added.

For example, last spring, Vinay Shet, the product manager for Google's Captcha team, told Google I/O conference attendees that in 2014, they had a distorted text Captcha that only 33 percent of humans could solve. By comparison, the state-of-the-art OCR systems at the time could already solve it with 99.8 percent accuracy.

Criminals are already using image recognition technology, in combination with "Captcha farms," to bypass this security measure, said Ghosemajumder. The popular Sentry MBA credential stuffing tool has it built right in, he added.

So far, he said, he hasn't seen any publicly available tool kits based on machine learning that are designed to bypass other security mechanisms.

But there are indirect indicators that criminals are starting to use this technology, he added.

For example, companies already know that an unnaturally large amount of traffic from a single IP address is very likely malicious, so criminals use botnets to spread their requests across many addresses and bypass those filters, and defenders in turn look for more subtle indications that the traffic is automated rather than human, he said.
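The per-IP filter described above can be sketched as a simple sliding-window rate limiter. This is an illustrative toy, not any vendor's actual system; the threshold, window size, and IP addresses below are assumptions chosen to show why distributing traffic across a botnet defeats volume-based detection:

```python
import time
from collections import defaultdict, deque

class PerIPRateLimiter:
    """Flags traffic as suspicious when a single IP exceeds a
    request budget within a sliding time window (hypothetical values)."""

    def __init__(self, max_requests=100, window=60):
        self.max_requests = max_requests
        self.window = window
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, ip, now=None):
        now = time.time() if now is None else now
        q = self.hits[ip]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        q.append(now)
        return len(q) <= self.max_requests

limiter = PerIPRateLimiter(max_requests=3, window=60)

# One IP sending all five requests trips the filter on the 4th hit...
single_ip = [limiter.allow("203.0.113.7", now=t) for t in range(5)]
# single_ip -> [True, True, True, False, False]

# ...but the same five requests spread across five botnet IPs all pass.
botnet = [limiter.allow(f"198.51.100.{i}", now=i) for i in range(5)]
# botnet -> [True, True, True, True, True]
```

Because each compromised machine stays well under the per-address budget, defenders cannot rely on volume alone and move on to behavioral signals such as request timing and navigation patterns.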


