Reza Chapman, managing director of cybersecurity in Accenture’s health practice, said keeping AI/ML effective can require significant upkeep. “Detection thresholds need to be adjusted to reach a balance between false alarm rate and missed detection rate,” he said.
“Further, constant tuning is often necessary within the specific operating environment. Overall, this is not a reason to steer away from these technologies. Instead, consider AI and ML as complementary to the personnel in your security program.”
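The threshold tradeoff Chapman describes can be illustrated with a minimal sketch. This is not any vendor's product logic; the scores and labels below are invented example data, and the helper name `rates` is hypothetical. Raising the threshold cuts false alarms on benign activity but misses more malicious activity, and vice versa:

```python
# Illustrative only: how moving a detection threshold trades false alarms
# against missed detections. All scores here are made-up example data.

benign_scores = [0.05, 0.10, 0.20, 0.30, 0.45, 0.55]   # legitimate activity
malicious_scores = [0.40, 0.60, 0.70, 0.85, 0.90]       # known-bad activity

def rates(threshold):
    """Return (false_alarm_rate, missed_detection_rate) at a given threshold."""
    false_alarms = sum(s >= threshold for s in benign_scores)   # benign flagged as bad
    misses = sum(s < threshold for s in malicious_scores)       # bad left unflagged
    return (false_alarms / len(benign_scores),
            misses / len(malicious_scores))

for t in (0.3, 0.5, 0.7):
    fa, md = rates(t)
    print(f"threshold={t:.1f}  false-alarm rate={fa:.2f}  missed-detection rate={md:.2f}")
```

No single threshold minimizes both rates at once, which is why, as Chapman notes, tuning is an ongoing task tied to the specific environment rather than a one-time setting.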
And Chapman said he doubts AI/ML will discourage ransomware attackers. While those technologies will certainly raise a barrier, “the payoffs of mounting a ransomware or other malicious campaign are high, and attackers will likely continue to evolve,” he said. “Further, attackers are likely to employ AI and ML techniques for their own efforts.”
Perry Carpenter, chief evangelist and strategy officer at KnowBe4, agreed that, “like any technology, the devil is in the details. These systems need to be implemented, baselined, tuned, and proven effective in an ongoing manner.”
And he added another caveat: while AI/ML are promising technologies, both for detecting threats and for being “self-healing and self-protecting,” they can still be undermined by negligent humans.
While they can adapt to “nuances of human behavior, it would be a mistake to believe that they can fully account for the unpredictability of humans,” he said.
Beyond all that, a healthcare organization that decides to implement an AI/ML solution needs to do some advance due diligence as well.
Scott said while there are hundreds of companies offering it, “many of these organizations are faux experts and snake-oil salesmen that are using AI and ML as buzzwords and whose products lack any substance.
“A conservative guess would be that at the moment, there are less than a dozen actual, reputable vendors,” he said.
Davis was skeptical in the other direction: he said vendors that exclusively offer AI/ML may be charging a premium for a product that is not necessarily superior. “Many [antivirus] vendors have already moved to AI/ML models, and usually for a price much cheaper than the new ‘ML/AI only’ vendors,” he said.
So, the best advice is to ask around. “Ask for a demonstration,” Scott said. “Seek input from the product’s clients, and examine what technology the solution actually employs, how it deploys that technology, and whether it can deliver on its promised results.”
Finally, Scott acknowledged that attackers will eventually adapt to any new defense, but said he believes it will be five to 10 years before that happens. Meanwhile, “algorithmic solutions are adaptable, so they constantly learn and can be updated and retooled to respond to emerging threats,” he said.
“AI and ML will not become obsolete – they will be the foundation for all future defense-grade cybersecurity solutions.”