Is OpenFlow destined to become the new way to forward traffic through a network?
OpenFlow's long-term future is uncertain at this point. Arguably, OF has proven most useful in soft switches that run at the network edge in a hypervisor, relying on server-based x86 computing power to do the needed processing. However, when implemented in traditional network hardware switches, OF's usefulness has depended on the silicon in the switch and the ability of that silicon to handle OpenFlow operations at the scale required for a given use case.
Network designers evaluating OpenFlow hardware must vet vendors carefully, as not all OF switches are created equal. Another point against OpenFlow as a long-term replacement for traditional forwarding is that OF doesn't necessarily expose all the hardware capabilities that custom ASIC designers like Cisco, Juniper and Brocade bake into their chips. While these vendors might support OF as an adjunct means of populating forwarding tables and policies, they are also exposing their own APIs that take full advantage of their hardware's capabilities.
Some argue that OF has scalability problems because of limited flow entries and the latency of punting to the controller. Is this true?
It is true that network switches with OF capability tend to support fewer than 10,000 flow entries. Whether this is a limitation depends on the use case and overall network design. Vendors point out that when OF is used at the network edge (as opposed to the core), several thousand flow entries are unlikely to be a constraint, and that a simplified core (where edge tenants are hidden behind an overlay) can also succeed with a modest flow table.
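The edge-versus-core distinction can be sketched with a toy model. This is purely illustrative; the `FlowTable` class and `MAX_ENTRIES` value are hypothetical, not a vendor API, but the arithmetic shows why a roughly 10,000-entry ceiling pinches a core switch long before an edge switch.

```python
MAX_ENTRIES = 10_000  # typical upper bound for hardware OF flow tables (illustrative)

class FlowTable:
    """Toy capacity-limited flow table: match fields -> forwarding action."""
    def __init__(self, max_entries=MAX_ENTRIES):
        self.max_entries = max_entries
        self.entries = {}

    def install(self, match, action):
        """Install a flow entry; refuse if the table is full."""
        if match not in self.entries and len(self.entries) >= self.max_entries:
            return False  # table exhausted: further traffic would punt or drop
        self.entries[match] = action
        return True

# An edge switch serving a couple of thousand local endpoints fits comfortably...
edge = FlowTable()
edge_ok = all(edge.install(("10.0.0.0", "host-%d" % i), "output:1")
              for i in range(2_000))

# ...but a core switch carrying every tenant-to-tenant flow does not:
# 150 x 150 source/destination pairs = 22,500 candidate entries.
core = FlowTable()
core_ok = all(core.install((src, dst), "output:2")
              for src in range(150) for dst in range(150))

print(edge_ok, core_ok)  # the edge table fits; the core table is exhausted
```

Hiding tenant-to-tenant flows behind an overlay collapses the core's per-pair entries back down to a handful of tunnel endpoints, which is exactly the "simplified core" argument vendors make.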
It is also true that when an OpenFlow switch has no matching flow entry for a given packet, that packet must be punted to the controller, which introduces latency of anywhere from dozens to hundreds of milliseconds. In addition, an OpenFlow switch CPU can only punt so fast, typically sustaining 1,000 or fewer punt operations per second. While that sounds slow to a network designer used to line-rate forwarding of L2 and L3 traffic at terabit scale, vendors point out that in a typical deployment, flow tables can be pre-populated with entries, since endpoints are known to the controller. This minimizes the need for punting.
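The reactive-versus-proactive trade-off described above can be sketched as follows. The `Switch` class and its punt counter are hypothetical, a minimal model rather than any real controller API, but they show why pre-installed entries eliminate the table-miss round trip.

```python
class Switch:
    """Toy OF switch: looks up a destination, punts to the controller on a miss."""
    def __init__(self, flow_table):
        self.flow_table = flow_table  # destination -> action
        self.punts = 0

    def forward(self, dst):
        action = self.flow_table.get(dst)
        if action is None:
            # Table miss: punt to the controller. In hardware this costs
            # tens to hundreds of ms, with punting capped around 1,000/sec.
            self.punts += 1
            action = "output:controller-decided"
            self.flow_table[dst] = action  # reactive install
        return action

endpoints = ["host-%d" % i for i in range(100)]

# Reactive mode: the first packet of every new flow punts.
reactive = Switch({})
for dst in endpoints:
    reactive.forward(dst)

# Proactive mode: endpoints are known to the controller, so entries are
# pre-installed and no packet ever punts.
proactive = Switch({dst: "output:%d" % i for i, dst in enumerate(endpoints)})
for dst in endpoints:
    proactive.forward(dst)

print(reactive.punts, proactive.punts)  # 100 punts vs. 0
```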
Isn't an SDN controller a single point of failure?
One of SDN's big ideas is that a centralized controller knows the entire network topology, and can therefore program the network in ways that a distributed control plane cannot. Vendors recognize the mission-critical role of the controller, and typically offer the controller as a distributed application that can be run as a clustered appliance, or as a virtual machine that takes advantage of a hypervisor's high availability. In addition, it doesn't necessarily follow that if the controller goes down, the network goes down with it. While architectures vary by vendor, it's usually a reasonable assumption that the network will continue to forward traffic (at least for a while) even if the controller is no longer present.
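Why the data plane can outlive the controller is easy to see in a sketch. The classes and behavior below are hypothetical (no vendor implements exactly this), but they capture the common design: entries already installed in hardware keep forwarding; only new, unknown flows depend on the controller being reachable.

```python
class Controller:
    """Toy controller that decides actions for unknown destinations."""
    def decide(self, dst):
        return "output:%s" % dst

class Switch:
    """Toy switch whose installed flow entries survive controller loss."""
    def __init__(self, controller):
        self.controller = controller
        self.flow_table = {}

    def forward(self, dst):
        if dst in self.flow_table:   # hardware fast path, no controller needed
            return self.flow_table[dst]
        if self.controller is None:  # controller down and no matching entry
            return "drop"
        action = self.controller.decide(dst)
        self.flow_table[dst] = action
        return action

sw = Switch(Controller())
sw.forward("host-a")          # installs an entry while the controller is up

sw.controller = None          # controller fails
known = sw.forward("host-a")  # existing flow still forwards
unknown = sw.forward("host-b")  # only brand-new flows are affected
print(known, unknown)
```

Real switches make this behavior configurable: Open vSwitch, for example, distinguishes a "secure" failure mode (keep installed flows, install nothing new) from a "standalone" mode (fall back to ordinary MAC-learning forwarding).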