Services are a relatively new concept in WANs. Traditionally, a WAN was made up of devices and configurations, with routers, switches, load balancers, firewalls, proxy servers and other components positioned at appropriate points in the network. Enterprises have long been accustomed to using appliances, or "middle boxes," to perform a single function, and maintaining and managing these devices can be a real headache for IT teams.
Service chaining first emerged as a concept for carriers and other network operators. The basic premise was that services such as firewalling, intrusion detection, carrier-grade NAT and deep packet inspection could be deployed on generic compute and storage resources at strategic points within the network, with traffic programmatically directed to (and through) these services as required. This may not be the most efficient path for the traffic from a network topology perspective, but that inefficiency is often outweighed by the efficiencies gained from deploying these services at scale.
We are now starting to see service chaining emerge as a concept in enterprise networks. As with many newer terms, various vendors have adopted it with meanings that relate to their own product sets and capabilities, which can lead to some confusion for the enterprise. However, some key principles, and benefits, apply to service chaining in the enterprise WAN.
Resources inside the WAN can be used more dynamically
Many enterprises already backhaul internet egress to headquarters or data center sites due to the placement of large firewalls, IDS/IPS infrastructure and proxy servers. In traditional WANs it is challenging to make this a flexible policy. Typically the redirection is performed using complex PAC files or a default route in the network. But suppose the requirement is “send web traffic to the best egress point based on path performance, except Office 365, Salesforce and a local banking site, which should go directly to the Internet.”
This is horrendously complicated in a traditional WAN, but it can be extremely straightforward in many SD-WAN solutions. Resources such as regionalized Internet egress points can be defined as services, and policies can then be created to "chain" these services into the traffic flow for specific application types. Real-time path performance can be used as a factor in determining which service is chosen, and the enterprise can manually adjust the ordering if required. This enables policies far more in line with what many enterprises now demand, given their application mix and traffic flows.
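To make the idea concrete, the policy described above can be sketched in a few lines of pseudocode-style Python. This is not any vendor's actual policy language; the application names, service definitions and latency figures are invented for illustration, with a simple latency metric standing in for real-time path performance.

```python
# Hypothetical sketch of an SD-WAN style egress policy. All names and
# metrics below are invented for illustration, not a real vendor API.

# Applications that should bypass the chained egress services and
# break out directly to the local Internet.
DIRECT_APPS = {"office365", "salesforce", "local_banking"}

# Regionalized Internet egress points defined as "services", each with a
# measured path latency (ms) standing in for real-time path performance.
EGRESS_SERVICES = [
    {"name": "egress-hq", "latency_ms": 42},
    {"name": "egress-regional-dc", "latency_ms": 18},
]

def select_egress(app: str) -> str:
    """Return the egress decision for a given web application."""
    if app in DIRECT_APPS:
        return "direct-internet"
    # Otherwise chain the best-performing egress service into the flow.
    best = min(EGRESS_SERVICES, key=lambda s: s["latency_ms"])
    return best["name"]

print(select_egress("salesforce"))   # direct-internet
print(select_egress("generic_web"))  # egress-regional-dc
```

The point of the sketch is that the policy is expressed in terms of applications and services, not PAC files or default routes, and that the performance-based choice can be re-evaluated as path metrics change.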
Services outside the WAN can replace boxes inside the WAN
Using internal appliances as services is only half the story. For many enterprises, the real value comes from leveraging services outside the network to replace physical devices in data centers. A perfect example is the growth in recent years of public cloud-based services such as Zscaler and Cisco Cloud Web Security, which provide internet content filtering and access control that were previously possible only via on-premises solutions.
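Replacing an on-premises box with an external service amounts to swapping one entry in the service chain for another. The sketch below illustrates this with invented service and traffic-class names (it does not reflect any particular product's configuration): web traffic is steered through a cloud-hosted security service instead of a data-center proxy, while other traffic still traverses the internal firewall.

```python
# Hypothetical sketch of service chains mixing internal appliances and
# external cloud services. All names are invented for illustration.

SERVICE_CHAIN = {
    # Web traffic is steered through a cloud-based security service,
    # replacing an on-premises proxy/content-filtering appliance.
    "web": ["cloud-web-security", "internet-egress"],
    # All other traffic still traverses the data-center firewall.
    "default": ["dc-firewall", "internet-egress"],
}

def chain_for(traffic_class: str) -> list:
    """Return the ordered list of services a flow is steered through."""
    return SERVICE_CHAIN.get(traffic_class, SERVICE_CHAIN["default"])

print(chain_for("web"))    # ['cloud-web-security', 'internet-egress']
print(chain_for("voice"))  # ['dc-firewall', 'internet-egress']
```

The design point is that the chain is just ordered policy data: moving a function from a data-center appliance to a cloud service changes one list entry, not the network topology.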