Photo - Drew Williams, CEO, ConZebra (Condition Zebra)
The media in and around Malaysia has been abuzz about the recent web attack on MyNIC, the Malaysian registrar of ".MY" domains. [One of the official news accounts is linked here.]
Actually, the attack was on the Domain Name System (DNS) service used by MyNIC to provide hosting for regional branches of such organisations as Kaspersky Lab, Skype, Dell, and others. The Domain Name System, just as a refresher, translates a recognisable name (like "Computerworld.com.my") into the numeric IP [Internet Protocol] address (such as "188.8.131.52") that computers actually use to reach a site. The attack type, called "DNS poisoning" or "DNS cache spoofing," is one in which a hostile entity feeds forged address records into a weakly guarded DNS server, ultimately gaining control of its short-term cached entries and rerouting visitors of legitimate sites to a third-party site, which usually displays some "Gotcha" message (as in the case with the MyNIC incident).
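The mechanics above can be pictured with a toy model (this is purely illustrative, not a real DNS implementation; the first IP is the example from the article, and the "attacker" address is a hypothetical one from the TEST-NET documentation range):

```python
# Toy model of a resolver's cache: a mapping from host names to the
# numeric addresses the resolver has cached for them.
cache = {"computerworld.com.my": "188.8.131.52"}  # illustrative record

def lookup(name, cache):
    """Return the cached address for a name, or None if not cached."""
    return cache.get(name)

# In a cache-poisoning attack, a forged answer overwrites the cached
# record, so the same familiar name now silently resolves to a server
# the attacker controls.
cache["computerworld.com.my"] = "203.0.113.66"  # hypothetical attacker IP

print(lookup("computerworld.com.my", cache))  # now the attacker's address
```

Users typing the same name they always have are handed the attacker's address, which is why the attack is invisible to them until the "Gotcha" page appears.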
This attack is quite old, as far as the Internet's age is concerned: it was first reported to the U.S.-based Internet Engineering Task Force (IETF) in 1999, and later expounded upon by veteran Internet security guru Dan Kaminsky. The attack has all but vanished from the scene, as most server vendors have advanced their technologies beyond the DNS spoofing exploits. But where an organisation relies on an archaic device or a weak infrastructure, or simply isn't paying attention, this type of attack can still have an impact.
DNS spoofing and other "man-in-the-middle" attacks usually cause limited damage on their own, but when they accompany something like a distributed denial-of-service attack, bad things can happen--as in the case of the recent attacks launched at Sony (which resulted in more than US$1 billion in lost assets and market share).
In the case of MyNIC, however, the attack was more embarrassing than anything else: not only is MyNIC the "upstream" domain registrar for the country, but the incident also revealed the need for long-overdue upgrades to ageing devices and systems. Never a good thing when hosting a whole nation's web addresses.
The incident at MyNIC, as well, does suggest a greater risk that dwells generally in the culture of Asian business computing: Apathy.
Such attacks take advantage of organisations that are not paying attention to their computing infrastructure, thereby allowing themselves to be compromised. Compounding the problem is the fact that many organisations are actually made aware of the risk, the exposed vulnerability, and the course of action needed to mitigate the problem before such incidents occur, and still do nothing.
This case highlights a growing trend across Asia: the idea that risk management, like car insurance perhaps, is best evaluated after the incident. This "don't-watch-but-wait-anyway" approach to managing and mitigating risk is becoming as great a danger as the threats themselves, the ones with the highest potential for loss of assets (and user confidence). The perception that doing nothing is better than taking even small steps to reduce one's digital risk invites the systematic and total collapse of communications infrastructures (e.g., "Well, there's nothing we can do about it anyway, so let's just sit here and wait for tragedy to strike").
History suggests that defeatist positioning is never good for advancing business, technology or humanity, for that matter.
So how does one ignite some level of passion into the minds of business leaders, IT managers and a society as a whole, where risk management and corrective steps to mitigation (and prevention) are concerned? Here are three ideas:
#1. Consider the fact that time is money. To maximise any investment, organisations need to be ready to advance their objectives at all times, but to do so safely and confidently. To achieve this, treat "risk" (in the beginning) as a series of check boxes on an inventory of courses of action to be taken on a regular basis. Start with simple directives: "Have we got back-up server addresses?" "Do we have a strong password procedure for accessing our DMZ?" "Are we using authentication when registering DNS addresses?" Before they know it, organisations will have a good start on a risk management policy.
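The check-box inventory idea can be sketched in a few lines (a minimal illustration only; the item names are the directives from the article, not any standard framework):

```python
# Each risk directive becomes an item an organisation ticks off on a
# regular schedule; anything still False is an open exposure.
checklist = {
    "Back-up server addresses in place": False,
    "Strong password procedure for DMZ access": False,
    "Authentication used when registering DNS addresses": False,
}

def outstanding(items):
    """Return the directives that have not yet been addressed."""
    return [name for name, done in items.items() if not done]

# Ticking one box immediately shrinks the list of open exposures.
checklist["Back-up server addresses in place"] = True
print(outstanding(checklist))  # the two directives still open
```

The point is not the code but the habit: a short, reviewable list beats an unwritten assumption that "someone is handling it."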
#2. People don't like uncertainty. If an organisation is unsure of how its hosting service operates, the first clue could be how much that service charges for monthly support, AND how much the client is willing to pay. For example, a multi-billion-dollar investment firm from Singapore was recently hacked with a common injection attack, which took our team all of four hours to fix but two days to revise. The reason? The organisation "invested" about US$2.50 per month on web support (that's two dollars and fifty cents), yet it was shocked to realise that having no back-ups of its web presence (since 2011) would lead to expensive upgrade challenges and a boatload of attack risks. Unbelievable.
#3. If IT infrastructure is an important, dependent component of your business, take the time to evaluate the processes you rely so heavily on, and don't be afraid to invest a little funding into protecting those sensitive assets your organisation worked so diligently to establish. It beats a free lunch and a bunch of boring speakers every time!
Finally, remember that consumers are tolerant, but only to a point, and only as long as they know corrective and preemptive action is being taken to protect their interests.
But when you have the answers right in front of you, well, there's no excuse.
Drew Williams is CEO of US-headquartered risk management consulting services company Condition Zebra (ConZebra) with its APAC office based in Malaysia.
All views expressed in Blog posts and Guest articles are not necessarily representative of those held by Fairfax Media or any of its related media channels.