We tested Microsoft's virtual network switching component and found it easy to manipulate, although our inept choices disconnected several remote hosts. The instructions weren't clear to us, and we managed to crater the communications of two of our test servers with ease. The drive to our network operations center is a long one.
The SDNs inside Hyper-V V3 are more easily manipulated through System Center 2012. Microsoft includes IP Address Management (IPAM) in Hyper-V, which, as VMware has found, is heaven-sent for those who want to build virtualization platforms where VMs can be easily moved from host to host, for either performance or isolation, within a defined fabric/VM farm.
Hyper-V V3 resources can be aggregated into clusters and, through the use of the new sharable VHDX disk stores, can form internal islands -- or, for cloud-hosted purposes, external clouds whose resources should be opaque to other cloud components. We could not find constructs to rigorously test the opacity of what should be isolated clouds, but rudimentary tests appeared to confirm isolation. VHDX files can also be dynamically resized as the need arises; we found the process fast, although disk and CPU use can peak until the modification completes. Conversely, heavy CPU or disk load slows resizing considerably.
We also tested Hyper-V, 2012 R2 IPAM, and Microsoft's SDN successfully under IPv4 (other limitations prevented heavy IPv6 testing). Software-defined networking (SDN) crosses turf that is divided in many organizations between the virtualization and network management teams. Network management staff have traditionally used IPS, routing, switching and infrastructure controls to balance traffic, hosts, even NOC hardware placement. With SDN, what were once separate disciplines are forced to work together inside the host server's hypervisor, where the demarcation was once the point at which the RJ-45 connector meets the server chassis.
IPAM allowed us to define a base allocation of routable and/or non-routable addresses, then allocate them to VMs hosted on Hyper-V hosts or to other hosts, VMs and devices on our test network. We could, in turn, allocate virtual switches -- public, private or internal -- connected with static/blocked and sticky DHCP. Inter-fabric VM movements still require a bit of homework, we found. Using a single IPAM instance is recommended.
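Conceptually, the workflow above -- define a base allocation, then hand addresses out to VMs -- can be sketched with Python's standard ipaddress module. This is an illustration of the idea, not Microsoft's IPAM implementation; the pool range and VM names are hypothetical.

```python
import ipaddress

# Hypothetical sketch: carve a non-routable base allocation into
# per-VM address assignments, the way an IPAM tool tracks them.
pool = ipaddress.ip_network("10.10.0.0/24")
available = list(pool.hosts())   # usable host addresses in the pool
assignments = {}                 # VM name -> assigned address

def allocate(vm_name):
    """Assign the next free address to a VM; fail when the pool is empty."""
    if not available:
        raise RuntimeError("address pool depleted")
    addr = available.pop(0)
    assignments[vm_name] = addr
    return addr

print(allocate("hyperv-vm-01"))  # 10.10.0.1
print(allocate("hyperv-vm-02"))  # 10.10.0.2
```

The point of keeping a single authoritative pool structure, rather than per-host spreadsheets, is that every allocation and depletion is visible in one place.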
What we like is that the SDN primitives and IPAM work well together, given well-implemented planning steps. We could create clouds easily and keep track of address relationships. A Microsoft representative mused over the spreadsheets that carry IP relationship management information in many organizations, calling the practice crazy. We would agree, and believe that hypervisor- or host-based IPAM is a great idea. If only DNS were mixed in more thoroughly -- and it's not -- we'd be complete converts to the concept. We found it very convenient nonetheless, although errors such as address pool depletions were more difficult to find when they occurred. Uniting the networking and virtualization/host management disciplines isn't going to be easy.
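Pool depletion is the kind of error that is easy to monitor for once the addresses live in one managed structure. A simple high-watermark check, sketched here in Python with the ipaddress module (the subnet, counts and 90% threshold are our assumptions, not anything built into Microsoft's IPAM), shows the sort of early warning we would have liked:

```python
import ipaddress

def pool_utilization(network_cidr, assigned_count):
    """Return the fraction of usable host addresses already assigned."""
    net = ipaddress.ip_network(network_cidr)
    # /31 and /32 have no network/broadcast addresses to subtract.
    usable = net.num_addresses - 2 if net.prefixlen < 31 else net.num_addresses
    return assigned_count / usable

# Warn before the pool actually runs dry (threshold is an assumption).
util = pool_utilization("192.168.100.0/24", assigned_count=229)
if util > 0.9:
    print(f"WARNING: pool {util:.0%} used")  # WARNING: pool 90% used
```

Spreadsheets can't raise this kind of alarm; a live IPAM database can.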