SDN showdown: Examining the differences between VMware's NSX and Cisco's ACI

Ethan Banks, Owner, Packet Pushers Interactive | Jan. 7, 2014
The arrival of Software Defined Networking (SDN), which is often talked about as a game-changing technology, is pitting two industry kingpins and former allies against each other: Cisco and VMware.

If the vSwitch is the heart of the NSX solution, the NSX controller is the brain. Familiar in concept to anyone comfortable with SDN architectures, the NSX controller is the arbiter between applications and the network. The controller uses northbound APIs to talk to applications, which express their needs, and programs all of the vSwitches under NSX control in a southbound direction to meet those needs. The controller can speak OpenFlow on those southbound links, but OpenFlow is not the only part of the solution, or even a key one. In fact, VMware de-emphasizes OpenFlow in general.
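To make that pattern concrete, here is a minimal Python sketch of the controller model described above. The class and method names are illustrative only, not VMware's actual API: an application states its intent through a northbound call, and the controller translates that intent into forwarding state it pushes southbound to every vSwitch it manages.

```python
# Illustrative sketch (not VMware's API): an application expresses intent
# "northbound" and the controller pushes forwarding state "southbound".

from dataclasses import dataclass, field


@dataclass
class VSwitch:
    """A hypothetical virtual switch managed by the controller."""
    name: str
    flow_table: list = field(default_factory=list)

    def program_flow(self, match: dict, action: str) -> None:
        # Southbound programming: in a real deployment this would travel over
        # a protocol such as OpenFlow or a vendor-specific channel.
        self.flow_table.append({"match": match, "action": action})


class Controller:
    """Hypothetical controller: accepts intent, programs every vSwitch."""

    def __init__(self, vswitches):
        self.vswitches = vswitches

    def connect_segment(self, segment_id: int) -> None:
        # A northbound request ("connect my VMs on segment 5001") becomes
        # concrete forwarding state on each vSwitch under the controller.
        for vs in self.vswitches:
            vs.program_flow(match={"segment": segment_id}, action="forward")


if __name__ == "__main__":
    switches = [VSwitch("host-a"), VSwitch("host-b")]
    Controller(switches).connect_segment(5001)
    for vs in switches:
        print(vs.name, vs.flow_table)
```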

With NSX, the controller can run as a redundant cluster of virtual machines in a pure vSphere environment, or on physical appliances for customers with mixed hypervisors.

A distributed firewall is another key part of NSX. In the NSX model, security is enforced at the network edge, in the vSwitch, while policy for this distributed firewall is managed centrally. Conceptually, the NSX distributed firewall is like having many small firewalls, but without the burden of maintaining many small firewall policies.
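A short, hypothetical sketch of that idea in Python: one centrally defined rule set, evaluated independently at each hypervisor edge. The tags, rule format and helper function are assumptions for illustration, not the NSX policy model.

```python
# Illustrative sketch (not the NSX policy model): one centrally managed rule
# set, enforced independently at every vSwitch edge where a VM attaches.

CENTRAL_POLICY = [
    {"src": "web-tier", "dst": "app-tier", "port": 8443, "allow": True},
    {"src": "any",      "dst": "db-tier",  "port": 3306, "allow": False},
]


def enforce_at_edge(packet: dict) -> bool:
    """Every hypervisor evaluates the same central policy locally."""
    for rule in CENTRAL_POLICY:
        src_match = rule["src"] in ("any", packet["src_tag"])
        if src_match and rule["dst"] == packet["dst_tag"] and rule["port"] == packet["port"]:
            return rule["allow"]
    return False  # default deny when no rule matches


if __name__ == "__main__":
    pkt = {"src_tag": "web-tier", "dst_tag": "app-tier", "port": 8443}
    print(enforce_at_edge(pkt))  # True: permitted by the central policy
```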

Overlay protocols create the virtual network segments. VMware's choice to support multi-hypervisor environments means it also supports multiple overlays. Supporting Virtual eXtensible LAN (VXLAN), Stateless Transport Tunneling (STT) and Generic Routing Encapsulation (GRE), NSX builds a virtual network by taking traditional Ethernet frames and encapsulating (tunneling) them inside an overlay packet. Each overlay packet is labeled with a unique identifier that defines the virtual network segment.
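As a concrete example of the tunneling, the sketch below builds the 8-byte VXLAN header defined in RFC 7348, whose 24-bit VXLAN Network Identifier is the unique identifier that names the segment. In a real deployment this header, plus the original frame, rides inside an outer UDP/IP packet between hypervisors; the frame contents here are placeholders.

```python
import struct


def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header defined in RFC 7348.

    The first byte sets the I flag (VNI is valid); the 24-bit VNI is the
    segment identifier described in the article.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags_reserved = 0x08 << 24          # I flag set, reserved bits zero
    vni_reserved = vni << 8              # 24-bit VNI, low reserved byte zero
    return struct.pack("!II", flags_reserved, vni_reserved)


if __name__ == "__main__":
    # A tunneled frame is: outer Ethernet/IP/UDP + this header + original frame.
    original_frame = b"\x00" * 64        # placeholder for a real Ethernet frame
    encapsulated = vxlan_header(5001) + original_frame
    print(len(encapsulated), encapsulated[:8].hex())
```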

Of course, not all networks would know what to do with NSX-defined virtual networks. To connect non-NSX networks to NSX environments and vice-versa, traffic passes through an NSX gateway, described by VMware as the "on ramp/off ramp" into or out of logical networks.

Multi-hypervisor support is an important part of the NSX strategy, adding, as it does, Citrix Xen and KVM users to the mix. In fact, NSX is agnostic to many elements of the environment, including the network hardware, which is a key attribute. From a network engineering perspective, this is critical to understand.

Hedlund put it this way:  "When you put NSX into the picture with network virtualization, you're separating the virtual infrastructure from the physical topology. With the decoupling and the tunneling between hypervisors, you don't necessarily need to have Layer 2 between all of your racks and all of your VMs. You just need to have IP connectivity. You could keep a Layer 2 network if that's how you like to build. You could build a Layer 3 fabric with a Layer 3 top of rack switch connected to a Layer 3 core switch providing a scale-out, robust, ECMP IP forwarding fabric. Now the Layer 2 adjacencies, the logical switching and the routing is all provided by the programmable vSwitch in the hypervisor."
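To illustrate the ECMP fabric Hedlund describes, here is a hypothetical Python sketch of equal-cost path selection: each flow between two hypervisor tunnel endpoints is hashed onto one of several equal-cost uplinks, so the physical network only needs to provide IP reachability. The next-hop names and hash inputs are made up for the example.

```python
# Illustrative sketch of ECMP in a Layer 3 leaf/spine fabric: a flow's tuple
# is hashed to pick one of several equal-cost next hops, so all packets of
# the flow take the same path while flows spread across the fabric.

import hashlib

NEXT_HOPS = ["spine-1", "spine-2", "spine-3", "spine-4"]  # equal-cost paths


def pick_next_hop(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> str:
    """Hash the flow tuple so every packet of a flow uses the same uplink."""
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(NEXT_HOPS)
    return NEXT_HOPS[index]


if __name__ == "__main__":
    # Two hypervisor tunnel endpoints exchanging VXLAN traffic over UDP 4789.
    print(pick_next_hop("10.0.1.11", "10.0.2.22", 49152, 4789))
```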
