Network-security tools have long focused on identifying compromises that they recognise from past encounters, but what do you do about the attacks that you've never seen before - or even thought to look for?
This question is guiding the development and refinement of a new generation of security-intelligence tools that complement the search for well-known, well-understood attacks with advanced data analytics designed to identify threats by finding anomalous behaviour within an organisation's IT environment.
It's a more flexible approach that Mike O'Keeffe, Product Director for Financial Crime and Cyber with New Zealand data-analytics success story Wynyard Group, says is proving remarkably good at finding the 'unknown unknowns' of network and user behaviour - the threats that you not only can't detect, but don't even know to look for.
"Organisations are currently using technologies that are great at stopping the things that people know about using preventative technologies," he explains, "but they're not great at stopping the things that people don't know about. Those are the things that can cause the organisation to have a 'very bad day'."
Identification of those unknown unknowns happens through the application of unsupervised machine-learning algorithms against standard log files that represent user behavior, network activity and data movement. These algorithms - which Wynyard Group has extrapolated from years developing expertise in the highly specialised field of forensic data matching for law-enforcement authorities around the globe - were recently built into a new proactive monitoring tool called Advanced Cyber Threat Analytics (ACTA).
When applied to a corporate IT environment, ACTA uses unsupervised machine learning to build a baseline of activity that is considered normal, and then flags deviations from these patterns. These 'anomalies' may not necessarily be malicious - a user who suddenly logs in from overseas may simply be on holiday, but equally his login identity could be compromised.
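Wynyard does not publish the details of its algorithms, but the baseline-and-deviation idea can be illustrated with a minimal sketch. The Python below is a hypothetical example, not ACTA's method: it builds a statistical baseline from two made-up features of historical login events (hour of day and megabytes transferred) and flags any new event that deviates sharply from that baseline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical baseline of "normal" activity: 500 historical sessions,
# each described by login hour and data volume in MB.
baseline = np.column_stack([
    rng.normal(10, 2, 500),    # logins cluster around 10:00
    rng.normal(50, 10, 500),   # data volume clusters around 50 MB
])

# Learn what "normal" looks like: per-feature mean and spread.
mu, sigma = baseline.mean(axis=0), baseline.std(axis=0)

def is_anomaly(event, threshold=4.0):
    """Flag an event whose z-score on any feature exceeds the threshold."""
    z = np.abs((event - mu) / sigma)
    return bool(np.any(z > threshold))

print(is_anomaly(np.array([11, 48])))   # ordinary mid-morning login -> False
print(is_anomaly(np.array([3, 900])))   # 3 a.m. login moving 900 MB -> True
```

A production system would replace the simple z-score with richer unsupervised models and many more behavioural features, but the workflow is the same: fit on historical activity, then score new events against the learned baseline rather than against a list of known attack signatures.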
These machine-learning algorithms have proved astute at picking up anomalous behaviour that can often be attributed to previously unknown zero-day compromises, compromised user accounts and suspicious data movement. "If you're telling the computer what it is that it needs to look for, essentially you're going down the same route as rules and signatures," O'Keeffe says. "We want to let the machine figure out what's unusual for itself. The natural consequence of that is that we will find specific sets of activities that can be attributed to particular sets of attacks."
A trial with an unnamed UK-based risk consultancy identified a potential internal compromise carried out by a specific user who had downloaded a potentially unwanted program "that may have left the network open to being attacked," O'Keeffe says. "We're finding stuff that organisations are not aware of."