You will probably protest, but there is a lot of industry chatter about the inherent limitations in your overlay approach. What are those limitations in your view?
If you look at what Cisco has done, it's a very similar architecture. They do exactly what we do: they use overlays, but they use proprietary headers in VXLAN and they tie it to their physical hardware. I get what they're doing. They make money when they sell hardware, so they have to tie it to the physical hardware. We look at it and say, "Not necessarily." I think it's good to give the customer choice.
OK, but you didn't really answer the question about the limitations of the overlay approach. For example, you say rack and stack and leave it, and we'll do the rest, but you still have infrastructure provisioning, optimization and management issues to deal with, which capital-letter Software Defined Networking promises to address.
I've been in networking for 25 years and I can tell you that vision will never happen. People will talk about it for another five years and then they'll grow tired of it. Watch. It will never happen because it's not needed. There will be connections where there need to be connections, and there will be interfaces between the overlay and the underlay, but all that is needed is a loose coupling. It does not need to be a hard coupling.
People talk about elephant flows and mice flows, where an elephant flow is a long-lasting, high-volume flow that can stomp on the smaller mice flows and degrade their SLAs, and they say you need a tight coupling of the overlay and the underlay for that reason.
Hogwash. From inside the hypervisor we have a much better way to actually identify those elephant and mice flows, and then we signal to the physical infrastructure, "This is an elephant flow, this is a mouse flow, go do what you need to do." And we'll be able to have that coupling not just for one set of hardware, but for everybody, whether it's Arista, Brocade, Dell, HP, Juniper, etc. We'll be able to work with anyone and actually do that handoff between the overlay and the underlay. So you can go through every single one of those examples and show that a generalized solution and a loose coupling is actually as good or better and gives you the flexibility of choice.
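The detection step described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any vendor's implementation: it assumes a per-flow byte counter in the vSwitch and a simple size threshold, and it merely returns a label that a real system would translate into a signal to the underlay (for example, remarking the DSCP field).

```python
# Sketch of elephant/mice flow classification as a hypervisor vSwitch might do it.
# The threshold, flow-key fields, and class name are illustrative assumptions.

ELEPHANT_BYTES = 10 * 1024 * 1024  # hypothetical cutoff: 10 MB marks an elephant


class FlowTracker:
    def __init__(self, threshold=ELEPHANT_BYTES):
        self.threshold = threshold
        self.bytes_seen = {}    # flow key -> cumulative byte count
        self.elephants = set()  # flow keys already signalled as elephants

    def observe(self, flow_key, packet_len):
        """Count bytes per flow; return 'elephant' the first time a flow
        crosses the threshold, so the caller can signal the underlay once
        (e.g. by remarking DSCP); otherwise return 'mouse'."""
        total = self.bytes_seen.get(flow_key, 0) + packet_len
        self.bytes_seen[flow_key] = total
        if total >= self.threshold and flow_key not in self.elephants:
            self.elephants.add(flow_key)
            return "elephant"
        return "mouse"


# Demo with a tiny threshold: ten full-size packets promote the flow.
tracker = FlowTracker(threshold=1500 * 10)
flow = ("10.0.0.1", "10.0.0.2", 6, 49152, 443)  # src, dst, proto, sport, dport
labels = [tracker.observe(flow, 1500) for _ in range(12)]
```

In this sketch the flow is labelled an elephant exactly once, on the packet that crosses the threshold; a production system would also age out counters and rate-limit the signalling, details omitted here.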
How do you do traffic engineering across the whole network though, if you're trapped in your world?
If you look at the management and the visibility of networking now, it's horrendous. So through network virtualization you actually improve the visibility because of our location in the hypervisor. As soon as everything went virtual, the physical network lost visibility because it wasn't in the right spot. The edge of the network has moved into the server, so you have to have a control point inside the vSwitch as a No. 1 starting point. And honestly, once you own that point, you have way more context about what's going on, what applications are being used, response time and everything else, compared to just looking at headers inside the physical network. When you're looking at a packet inside the network you don't have a lot of context.