That, said Don McLean, chief technologist at DLT and an ICIT fellow, is both understandable and appropriate. “If a hospital administrator has limited funds, and needs to choose a new DLP system to protect data or a new defibrillator to rescue dying patients, they’ll pick the latter every time – and they should,” he said.
Given that, what are the chances that AI/ML will become common enough in the health sector to reverse, if not “crush” the ransomware trend?
On the “non-intrusive” front, it gets high marks. “One of the selling points of AI/ML is that it is not intrusive and works in the background,” said Mike Davis, CTO of CounterTack.
Scott agreed. “AI and ML solutions would automate and streamline the cyber-hygiene and security solutions that regularly inhibit health sector response time,” he said.
On the cost front, however, the picture is less clear, in part for the obvious reason that there is no “typical” healthcare organization.
But Davis said it isn’t cheap, and when it comes to the bottom line, administrators may conclude that it is cheaper to pay ransoms than to pay AI/ML vendors.
AI/ML can be three times the cost of anti-virus solutions, he said, “and healthcare organizations are already fighting for every budget dollar they have.
“If the average cost of a ransomware attack is $300 – which was reported by the ICIT in 2016 – why would I spend tens of thousands of dollars more per year to prevent that risk? I’d need 30 or 40 successful attacks before the cost makes sense.”
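Davis’s back-of-the-envelope math can be sketched as a simple break-even calculation. The figures below are illustrative, drawn from his quote: the $300 average attack cost is the ICIT’s 2016 figure, and the $10,000 annual premium is an assumed stand-in for his “tens of thousands of dollars.”

```python
# Break-even estimate: how many successful ransomware attacks per year
# would an AI/ML solution need to prevent before its extra cost over
# anti-virus pays off? Figures are illustrative, per the quote above.

AVG_RANSOM_COST = 300          # average cost per attack (ICIT, 2016)
AIML_ANNUAL_PREMIUM = 10_000   # assumed extra yearly cost of AI/ML vs. anti-virus

break_even_attacks = AIML_ANNUAL_PREMIUM / AVG_RANSOM_COST
print(f"AI/ML breaks even after ~{break_even_attacks:.0f} prevented attacks per year")
```

At these assumed figures the break-even point is roughly 33 attacks a year, which is in line with Davis’s “30 or 40 successful attacks” estimate.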
An even more significant barrier, however, is simply that nothing – not even AI/ML – is a “set it and forget it” security solution. It takes time both to configure and to maintain.
Experts, including advocates like Scott, agree that it is just one component of a “layered” security posture.
Matt Mellen, security architect, health care, at Palo Alto Networks, said AI and ML are “proving to be very effective at one of the hardest things to get right in security – to identify what’s normal versus malicious.”
But he, like others, adds the caveat that, “no single capability, like AI or ML, is going to be able to stop all attacks. Hence, it’s important to carefully employ multiple advanced prevention capabilities.”
Davis has the same warning. “Using AI/ML technologies raises the bar but it does not eliminate the risk. There is a lot more a company has to do to really address the risks discussed in the report,” he said.
“Attackers can simply move to different techniques – for example non-malware attacks that use scripts or macros rather than binaries – which are much harder for AI/ML systems to learn to detect. Any preventative technology that relies on the classification of good or bad is always susceptible to the arms race,” he said.