A particularly influential event was the 300Gbps DNS reflection attack on anti-spam organisation Spamhaus in March 2013, which brought home the potential threat to small organisations. At the time, ill-informed journalists claimed this attack had 'slowed the Internet', a ludicrous claim of course. But the alarm about the potential damage of large DDoS attacks was palpable.
Google also invested in Dashboard, a tool giving researchers and journalists a means to track money laundering, and Password Alert, a service that warns users when they enter their Google account password on a non-Google site (later heavily criticised).
Who is it for?
Project Shield is not for everyone. Google has made clear that the initiative is for "news, human rights, or elections monitoring websites," which means that general businesses can't apply. It does appear, however, that commercial organisations will be eligible as long as they disseminate information of some kind within Google's general criteria.
"Generally, news sites should have original content, cite news sources, and report on timely and newsworthy topics. For example, websites with strictly informational content like stock data or weather forecasts aren't eligible," states Google's definition.
Geographically, sites based in Crimea, Cuba, Iran, North Korea, Sudan, and Syria can't apply, though some exceptions might be made even here.
Although individuals won't be eligible, sites set up within Google's own services (such as Blogger) are already protected. As plum targets, third-party hosting systems often already have their own protection too.
How does it work?
Google says that Shield can be set up by an admin in ten minutes, provided they have a Google account (the connection can also be turned off through DNS settings). Once configured as a reverse proxy (preferably with SSL turned on), the service works in two ways that parallel the sort of protection on offer inside Google's own services. A first layer filters traffic, distinguishing good from bad, something an Internet presence as massive as Google's should be adept at. Essentially, Google can see which attacks are unfolding and where, and work out whether some of that traffic is being directed at a particular website.
Because Google is effectively proxying a given site it is not itself hosting, another technique used is caching: Google holds a copy of the site on its servers, keeping it up and running even when the origin might be under stress. This approach might not suit everyone.
What appears to be on offer here is not exactly the same as dedicated DDoS mitigation, but it should work well as long as sites aren't large and complex. Some might notice added latency, while others might see better performance.