As an alternative to in-path detection and sampling, mirrored data packets provide full detail for analysis without sitting in the traffic path. This allows fast detection of anomalies in traffic that may have entered the network from other entry points. While setting up a scalable mirroring solution in a large network can be a challenge, it can also be an excellent foundation for a centralized analysis and mitigation center.
Watch your performance metrics
Bandwidth is an important metric for most people. When shopping for a home Internet connection, people most often compare bandwidth figures. While bandwidth matters, the devil is in the details. Networking devices ultimately process packets, which vary in size: many small packets consume little bandwidth, while the same number of large packets consumes far more. The real limit of a networking node is the number of packets it can process per second. By sending many small packets at a high rate, an attacker can quickly stress the infrastructure, especially traditional security devices such as firewalls and Intrusion Detection Systems. Because of their stateful security approach, these systems are also more vulnerable to stateless, high-rate assaults such as flooding attacks.
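To make this concrete, the relationship between line rate, frame size, and packets per second can be sketched as follows. The 10 Gbps link and the frame sizes are illustrative, not from the article; the extra 20 bytes per frame account for the Ethernet preamble and minimum inter-frame gap, which also occupy the wire.

```python
def max_pps(link_bps: float, frame_bytes: int) -> float:
    """Maximum packets per second a link can carry at a given frame size.

    Adds 20 bytes per frame for the Ethernet preamble (8 B) and the
    minimum inter-frame gap (12 B), which consume wire time as well.
    """
    bits_per_frame = (frame_bytes + 20) * 8
    return link_bps / bits_per_frame

link = 10e9  # illustrative 10 Gbps link

small = max_pps(link, 64)    # minimum-size Ethernet frames
large = max_pps(link, 1518)  # maximum-size Ethernet frames

print(f"64-byte frames:   {small / 1e6:.2f} Mpps")   # ~14.88 Mpps
print(f"1518-byte frames: {large / 1e6:.2f} Mpps")   # ~0.81 Mpps
```

At the same bandwidth, minimum-size packets generate roughly 18 times more packets per second than full-size ones, which is why a small-packet flood exhausts per-packet processing long before it saturates the link.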
Verizon's 2014 Data Breach Investigations Report notes that the mean packets-per-second (pps) attack rate is on the rise, increasing 4.5 times over 2013. Extrapolating carefully from these numbers, we can expect mean rates of roughly 37 Mpps in 2014 and 175 Mpps in 2015. These mean values demonstrate the trend, but we have seen much higher pps rates; to properly prepare your network, you should focus on worst-case values.
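As a rough sanity check on that extrapolation: the 4.5x annual growth factor comes from the report, while the implied 2013 baseline is back-calculated here and not stated in the article. Compounding the same factor lands near the 2015 figure cited above.

```python
growth = 4.5       # year-over-year growth in mean attack pps (from the report)
mean_2014 = 37.0   # Mpps, extrapolated figure cited above

mean_2013 = mean_2014 / growth  # implied baseline: ~8.2 Mpps
mean_2015 = mean_2014 * growth  # ~166 Mpps if the 4.5x trend holds

print(f"2013: ~{mean_2013:.1f} Mpps, 2015: ~{mean_2015:.0f} Mpps")
```

Compounding gives roughly 166 Mpps for 2015, in the same ballpark as the ~175 Mpps quoted, which suggests the cited figures assume slightly accelerating growth.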
Assure your scalability
Because DDoS attacks, and especially volumetric attacks, hit the network at extreme packet-per-second rates, you need a mitigation solution with adequate packet processing power.
Scaling the analytics infrastructure is also an important consideration. Flow technology scales rather well, but at a massive cost: it compromises granularity and time-to-mitigate.
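The granularity cost of sampled flow export can be quantified. Under 1-in-N packet sampling, a short attack burst may never be sampled at all, delaying or preventing detection. A minimal sketch, where the 1:1000 sampling rate and 500-packet burst are illustrative assumptions:

```python
def detection_probability(sample_rate_n: int, burst_packets: int) -> float:
    """Probability that at least one packet of a burst is sampled
    under independent 1-in-N packet sampling."""
    p_miss_one = 1.0 - 1.0 / sample_rate_n
    return 1.0 - p_miss_one ** burst_packets

# A 500-packet burst under 1:1000 sampling is seen less than 40%
# of the time; the rest is invisible to flow-based analytics.
p = detection_probability(1000, 500)
print(f"chance of seeing the burst at all: {p:.0%}")
```

Longer or faster floods are eventually sampled, but the first seconds of an attack, exactly when mitigation should start, are the hardest for sampled flow data to capture.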
If your vendor provides performance numbers that match your network size, be aware that real-world performance may be lower. The current trend is toward multi-vector attacks, in which several attack methods are launched simultaneously. Datasheet performance figures are a good first filter for matching a product to your needs, but it is advisable to validate your prospective mitigation solution through a series of tests to see how it holds up against a set of attack scenarios in your own environment.
The multi-vector attack trend illustrates the importance of validating performance. Running a basic attack such as a SYN flood puts a base stress level onto the CPUs - unless, of course, the attack is mitigated in hardware. Making the system simultaneously fight a more complex application-layer attack such as an HTTP GET flood attack could push a system over its limit.
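The effect of stacking vectors can be sketched as a simple capacity check. The device budget, per-vector rates, and CPU weights below are invented for illustration; the point is that application-layer packets cost far more processing per packet, so two vectors that each fit within the budget can exceed it together.

```python
# Hypothetical capacity check: does a multi-vector attack exceed the budget?
DEVICE_BUDGET_MPPS = 20.0  # assumed datasheet figure for an appliance

# Assumed per-vector attack rates, in Mpps (illustrative only)
vectors = {
    "syn_flood": 12.0,       # cheap per packet, partly handled in hardware
    "http_get_flood": 6.0,   # fewer packets, but each costs far more CPU
}

# Weight application-layer packets by their higher processing cost.
cpu_weight = {"syn_flood": 1.0, "http_get_flood": 3.0}

effective_load = sum(rate * cpu_weight[name] for name, rate in vectors.items())
print(f"effective load: {effective_load} Mpps-equivalent")  # 12 + 18 = 30
print("over budget" if effective_load > DEVICE_BUDGET_MPPS else "within budget")
```

Here either vector alone stays under the 20 Mpps budget, but their weighted sum does not, which is the scenario a multi-vector validation test should reproduce.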