While service providers such as AT&T, Sprint, and Verizon have long touted the capabilities of their networks, many of the large web scale providers are now openly disclosing their internally developed network designs.
Google, Amazon, Facebook, Microsoft and others have invested heavily in software-defined networking (SDN) both in their data centers and their wide area networks (WANs), and many have published details about their homegrown SDN software and white box switch implementations. In fact, achieving high performance, massive scale, and low latency is now an arms race for the hyperscale cloud providers.
What differentiates hyperscale SDN networks is their scale, performance, reliability, and their provisioning/management requirements. Web scale organizations are truly massive in the scale of their compute and storage capacity. Their data centers typically range in size from 250,000 to more than 500,000 square feet (larger than a typical Walmart store) and house 100,000 to 500,000+ physical servers.
Hyperscale data centers require huge amounts of network bandwidth between physical and virtual servers, so over the next several years Ethernet links will be migrating from 10Gbps to 100Gbps, with potential interim speeds of 25Gbps, 40Gbps and 50Gbps. Transmitting traffic between these huge data centers and/or to the Internet also requires creative thinking. In Facebook’s new data center network design, for example, a bank of edge switch pods provides 7.68Tbps of switching capacity up to the Internet or metro DCI. Optical suppliers, such as Infinera, Ciena, and ALU are moving to deliver dense, cost effective optical connections to support web scale needs for hundreds of 100Gbps optical links per data center location.
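To put the 7.68Tbps edge-pod figure in context, the short sketch below expresses that capacity in terms of the Ethernet link speeds discussed above. The port counts are illustrative arithmetic only, not Facebook's published port layout:

```python
# Illustrative arithmetic: how 7.68 Tbps of edge-pod capacity maps
# onto common Ethernet link speeds. These port counts are derived
# for illustration, not Facebook's actual configuration.
POD_CAPACITY_GBPS = 7680  # 7.68 Tbps expressed in Gbps

for link_gbps in (10, 40):
    links = POD_CAPACITY_GBPS // link_gbps
    print(f"7.68 Tbps = {links} x {link_gbps}GbE links")
# 7.68 Tbps = 768 x 10GbE links
# 7.68 Tbps = 192 x 40GbE links
```

The same arithmetic shows why the industry is pushing to 100GbE: fewer, denser links deliver the same aggregate capacity with far less cabling and fewer ports to manage.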
The growth in capacity of the hyperscale data centers makes provisioning network bandwidth a significant challenge for the IT organization. Facebook (and others) have said that their network is too large to physically provision and that automation (zero human touch) is a key goal for their SDN implementation.
Web scale providers market their hyperscale SDN networks
At recent trade shows such as the Open Network Summit (ONS) and Open Network User Group (ONUG), executives from Google, Facebook and Microsoft provided significant details on the scale, performance and architecture of their data center and wide area networks. Their unique network environments have incentivized them to develop their own SDN software and, in the case of Facebook, white box switch hardware. The benefits of these R&D efforts are now starting to trickle down-market as the details are publicly disclosed. The goal of the web scale suppliers is to commoditize the network at scale in the same way they have helped to commoditize the server and storage market.
Details on SDN deployments include:
- Google leveraged SDN principles to create a centralized software control stack that manages thousands of switches within the data center and treats them as one large fabric. It says its network allows 100,000 servers to exchange information at 10Gbps within a single data center - a capacity increase of 100x over the last several years.
- Facebook created FBOSS software which implements a hybrid of distributed and centralized control to manage its network. Facebook introduced a Linux-based, top-of-rack network switch called the Wedge that it plans to make available as an open-source hardware design through its Open Compute Project. Its “6-pack” platform is the core of its new fabric, and it uses “Wedge” as its basic building block.
- Microsoft Azure storage and compute usage is doubling every six months, and its cloud platform consists of 22 hyper-scale regions around the world with millions of hosts. The Azure network requires a self-provisioning, virtualized, partitioned and scale-out design, delivered via SDN on commodity servers. Microsoft uses internally designed virtual networks (VNets) and its Virtual Filtering Platform (VFP), which operates as a Hyper-V virtual network switch, to implement Azure's data plane.
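The common thread across these three deployments is the centralized-control model: one control program holds the desired forwarding state for an entire fleet of switches and pushes it out, rather than each switch computing state independently. The toy sketch below illustrates that idea only; all class and route names are hypothetical, and this is not any provider's actual software:

```python
# Toy sketch of centralized SDN control: a single controller holds the
# desired forwarding state for a fleet of switches and reconciles each
# switch toward it ("zero human touch" provisioning in miniature).
# All names here are hypothetical illustrations.

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}          # prefix -> next hop

    def apply(self, flows):
        # Replace local state wholesale with the controller's view.
        self.flow_table = dict(flows)

class Controller:
    def __init__(self):
        self.fleet = []
        self.desired_flows = {}       # the single source of truth

    def register(self, switch):
        self.fleet.append(switch)

    def set_route(self, prefix, next_hop):
        self.desired_flows[prefix] = next_hop

    def reconcile(self):
        # Every switch converges to the same desired state,
        # with no per-device manual configuration.
        for sw in self.fleet:
            sw.apply(self.desired_flows)

ctrl = Controller()
switches = [Switch(f"tor-{i}") for i in range(4)]
for sw in switches:
    ctrl.register(sw)

ctrl.set_route("10.0.0.0/8", "spine-1")
ctrl.reconcile()
print(all(sw.flow_table == {"10.0.0.0/8": "spine-1"} for sw in switches))
# prints True
```

Real implementations differ in where control sits (Google's stack is centralized; Facebook's FBOSS, as noted above, mixes distributed and centralized control), but the provisioning goal is the same: the network is too large to configure by hand, so automation drives every device from a shared intent.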