Real tales of cyberattack response and recovery are hard to come by because organizations are reluctant to share details for a host of legitimate reasons, not the least of which is the potential for negative financial fallout. However, if we never tell our stories we doom others to blindly walk our path. Better to share our real-world battlefield experiences and become contributors to improved threat intelligence.
We are a SaaS-based supplier of Web content management for mid- and large-size enterprises. Our customers manage hundreds, sometimes thousands of websites around the globe in high profile industries such as pharmaceuticals and financial services. The customer in this story prefers to remain anonymous, but to provide some context, the company is a large, public healthcare services organization focused on helping providers improve financial and operational performance. The company counts thousands of hospitals and healthcare providers as clients, managing billions in spending.
The scale of this particular DDoS attack was enormous: at its peak, 86 million concurrent users were hitting our customer's website from more than 100,000 hosts around the world. The FBI was called. When it was all over 39 hours later, we had mounted a successful defense in what proved to be an epic battle. Here's how it happened.
Initial Attack Vector
On the eve of the company's annual conference, where it was set to host 15,000 attendees, we received a troubling alert: the company's web servers were being hit with an overwhelming volume of traffic. The company is a SaaS provider of content and analytics for its clients, so this slowdown had the potential to dramatically impact service availability and reputation. There was no time to waste.
On the initial attack vector:
- All of the requests targeted legitimate URLs, so we couldn't filter out malicious traffic based on what was being requested
- Attacks were originating from all around the world, including North Korea, Estonia, Lithuania, China, Russia, and South America
- 60% of the traffic was coming from inside the United States
- The attack was de-referencing DNS and hitting our IP addresses directly
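When every request targets a legitimate URL, filtering has to key on behavior rather than content. One common approach is per-source-IP rate limiting; the sketch below is purely illustrative (it is not the tooling we used) and shows a minimal sliding-window limiter:

```python
from collections import defaultdict, deque


class SlidingWindowLimiter:
    """Allow at most `limit` requests per source IP within `window` seconds."""

    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, ip: str, now: float) -> bool:
        q = self.hits[ip]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the per-IP budget: drop or challenge the request
        q.append(now)
        return True
```

The catch, as the 60%-domestic figure above suggests, is that with 100,000+ hosts each source can stay under any reasonable per-IP budget, which is why rate limiting alone rarely ends a distributed attack.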
We successfully defended against this initial wave by going in behind the scenes to our courtesy domain in Amazon's Route 53, rearranging the records, and immediately cutting off traffic to the targeted IP addresses. Things returned to normal and we breathed a sigh of relief -- in our ignorance, thinking everything was going to be OK. As it turns out, that was only the first wave. Next came the tsunami.
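Repointing a domain in Route 53 amounts to an UPSERT on its record set. A hypothetical sketch of the kind of change batch involved (the hosted-zone ID, domain, and IPs are placeholders, not the customer's):

```python
def build_repoint_change(domain: str, new_ips: list, ttl: int = 60) -> dict:
    """Build a Route 53 ChangeBatch that moves `domain` to `new_ips`.

    The dict is passed to boto3's route53.change_resource_record_sets();
    a short TTL lets resolvers pick up the new addresses quickly during
    an incident.
    """
    return {
        "Comment": "Repoint %s away from attacked addresses" % domain,
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": domain,
                "Type": "A",
                "TTL": ttl,
                "ResourceRecords": [{"Value": ip} for ip in new_ips],
            },
        }],
    }


# Example usage (hypothetical values):
# import boto3
# route53 = boto3.client("route53")
# route53.change_resource_record_sets(
#     HostedZoneId="Z0000000EXAMPLE",
#     ChangeBatch=build_repoint_change("www.example.com", ["203.0.113.10"]),
# )
```

Note this only helps while the attacker is targeting the old IP addresses; as we learned next, it does nothing once the attack follows the DNS name itself.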
That evening the attackers came back, and came back with a vengeance, targeting the site via its DNS name, which meant we couldn't employ the same IP blocking tactic we had used earlier. Traffic shot up dramatically.
The existential question you ask yourself in moments like this is, "Are we going to lie down and die, or are we going to step it up?" That question led to a seminal conversation with the customer's CIO, and with a handshake we decided to step it up. As SaaS companies, our ability to deliver continuous, reliable service is paramount, so both of our reputations were on the line. We agreed to share the cost -- potentially tens of thousands of dollars -- to fight the good fight.