These are all specious arguments.
Let’s start with the claim that these were critical systems that couldn’t be shut down for patching. I’m sure some of them were indeed critical, but we’re talking about something like 200,000 affected systems. All of them were critical? It doesn’t seem likely. But even if they were, how do you argue that avoiding planned downtime is better than opening yourself up to the very real risk of unplanned downtime of unknown duration?

And this very real risk is widely recognized at this point. The potential for damage from wormlike viruses has been well established. Code Red, Nimda, Blaster, Slammer, Conficker and others have caused billions of dollars of damage. All of these attacks targeted unpatched systems. Organizations cannot claim that they did not know the risk they were taking by not patching systems.
But let’s say some systems really couldn’t be patched, or needed more time. There are other ways to mitigate the risk, known as compensating controls. For example, you can isolate vulnerable systems from the rest of the network, or implement application whitelisting, which restricts which programs are allowed to run on a computer.
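To make network isolation concrete: WannaCry’s worm component spread over SMB (TCP ports 139 and 445). A minimal sketch of a compensating control for an unpatchable host, using Linux iptables, might look like the following. The management subnet address is a hypothetical placeholder, and real deployments would of course use the organization’s own firewall tooling and change-control process.

```shell
# Sketch: isolate an unpatchable host by dropping inbound SMB traffic
# (TCP 139/445 -- the ports WannaCry's worm component spread over),
# while still allowing it from a hypothetical management subnet.
iptables -A INPUT -s 10.0.50.0/24 -p tcp -m multiport --dports 139,445 -j ACCEPT
iptables -A INPUT -p tcp -m multiport --dports 139,445 -j DROP
```

Rules like these don’t remove the vulnerability, but they shrink the attack surface enough that an unpatched machine is no longer reachable by the worm from arbitrary points on the network.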
The real issue is budget: underfunded and undervalued security programs. I doubt that a single unpatched system would have been left unprotected if security programs had been allocated an appropriate budget. With enough funding, patches could have been tested and deployed, and incompatible systems could have been replaced. At the very least, next-generation anti-malware tools such as Webroot, Crowdstrike and Cylance, which were able to detect and stop WannaCry infections proactively, could have been deployed.
So I see several scenarios for blame. If security and network teams never considered the well-known risks associated with unpatched systems, they are to blame. If they did consider the risk but their recommended solutions were rejected by management, management is to blame. And if management’s hands were tied because its budget is controlled by politicians, the politicians get a share of the blame.
But there’s plenty of blame to go around. Hospitals are regulated and have regular audits, so we can blame the auditors for not citing failures to patch systems or to have other compensating controls in place.
Managers and budget appropriators who undervalue the security function have to understand that, when they make a business decision to save money, they are assuming risk. In the case of hospitals, would they ever decide that they just don’t have the money to properly maintain their defibrillators? It’s unimaginable. But they seem to be blind to the fact that properly functioning computers are also critical. Most of the WannaCry infections were the result of the people responsible for those computers simply failing to patch them as part of any systematic practice, without justification. And if they did consider the danger, they apparently chose not to implement compensating controls either. It all potentially adds up to negligent security practices.