As more and more servers are virtualized, connections between them are increasingly handled by virtual switches running on the servers themselves, raising the question: does the top-of-rack (ToR) data center switch ultimately get subsumed into the server?
Advocates say yes, especially given that servers today are packed with multicore processors, additional Layer 2 intelligence and dense optical connectors. Upstream core connectivity could then be provided by optical cross connects that just move traffic based on directional guidance from the server.
Pessimists say no, or not right away. Servers will continue to assume more switching duties between virtual machines, but the ToR will live on for some time to come.
"The short answer is no," says Alan Weckel, switching analyst at Dell'Oro Group, when asked if servers will eventually replace ToR switches. "At the end of the day, it will be rack servers connected to top of rack switches. That's 80% of the market now. That ToR isn't going anywhere."
Fiber Mountain is one company that disagrees. The startup makes software-controlled optical cross connects designed to avoid as much packet processing as possible by establishing what amounts to directly attached, point-to-point fiber links between data center server ports.
"We're getting rid of layers: layers of switches, layers of links between switches," says MH Raza, Fiber Mountain founder and CEO. "Switching as a function moves from being inside a box called a switch to a function that co-resides inside a box we call a server. If we put the switching function inside a server, it's the same logic as a rack front-ending a number of servers; it's the housing of a server with a switch in it front-ending a bunch of VMs. Why can't that decision be made at the server? It can be made at the server."
Raza says he knows of a vendor, which he wouldn't name, that offers an Intel multicore server motherboard with a Broadcom Trident II switch chip and a high-capacity fiber connector. The 1U device has a fiber port that supports up to 64 25Gbps lanes, for 800Gbps to 1.6Tbps of aggregate capacity, similar to that of the Intel and Corning MXC connector. With the MXC and similar silicon photonics, servers can communicate directly without any switch between them, Raza says.
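The quoted capacities follow from simple lane arithmetic, which can be checked with a few lines (the 25Gbps-per-lane figure and the 32- and 64-lane counts are the article's; the helper function is illustrative):

```python
# Back-of-the-envelope check of the connector capacities cited above.
GBPS_PER_LANE = 25  # per-lane rate quoted in the article

def aggregate_gbps(lanes: int) -> int:
    """Total raw capacity across all optical lanes, in Gbps."""
    return lanes * GBPS_PER_LANE

print(aggregate_gbps(32))  # 800  -> 800Gbps
print(aggregate_gbps(64))  # 1600 -> 1.6Tbps
```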
"The decision could be made by the server," he says. "I can assign packets going out the right lane. How many places does it need to go? Ten, 12, 40? Not a problem. When you have an MXC connector you could take them to 32 different destinations."
Raza says this is possible now, but no one is talking about it because of its disruptive potential; the industry is still wearing the blinders of traditional network thinking. "Nobody is talking about this because it is based on how fast silicon photonics will get adopted," Raza says. "But it can be done now. The timing depends on investments and shifts" in technology and markets.