Facebook’s Open Compute Project helps competitors build hyperscale data centres together

Steven Max Patterson | March 15, 2016
Facebook’s open source hardware development and procurement strategy grows with new competitors and new industries.

The scale at which these data centers operate is so large that consumers and many enterprise IT professionals can't relate. For instance, Facebook is deploying 100Gbps optical Ethernet in some data centers later this year and will convert all of its data centers to 100Gbps in 2017, while it investigates within the OCP how to build the next step up to 400Gbps. And Intel and Facebook co-developed a next-generation data center storage architecture that pools 120 terabytes of high-speed NVM Express solid-state drives into a single 2U (19" x 3.5" x 24") enclosure.
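To put that storage figure in perspective, a quick back-of-the-envelope calculation shows what the 2U enclosure implies at rack scale (the 42U rack height and full-rack packing are assumptions for illustration, not details from the announcement):

```python
# Back-of-the-envelope density for the pooled NVMe enclosure described above.
# Assumptions (not from the article): a standard 42U rack, fully packed
# with storage enclosures.

TB_PER_ENCLOSURE = 120    # stated capacity of one 2U enclosure
ENCLOSURE_HEIGHT_U = 2
RACK_HEIGHT_U = 42        # assumed standard rack

enclosures_per_rack = RACK_HEIGHT_U // ENCLOSURE_HEIGHT_U
rack_capacity_tb = enclosures_per_rack * TB_PER_ENCLOSURE

print(enclosures_per_rack)  # 21
print(rack_capacity_tb)     # 2520 TB, i.e. roughly 2.5 petabytes per rack
```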

The influence of open source software on OCP hardware can't be overstated. It extends beyond and below the LAMP (Linux, Apache, MySQL and PHP) stack that software developers use. A good example is an open source project that layers the write-optimized RocksDB database on top of MySQL data stores, so that replication and other familiar MySQL tools can still be used.
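What makes RocksDB "write-optimized" is its log-structured merge (LSM) design: writes accumulate in an in-memory buffer and are flushed to disk as immutable sorted runs, turning random writes into sequential ones. A minimal, illustrative Python sketch of that idea follows; it is a toy, not RocksDB's actual API:

```python
# Toy log-structured merge (LSM) store: the write-optimized pattern behind
# RocksDB. Writes go to an in-memory "memtable"; when it fills, it is
# flushed as an immutable sorted run. Reads check newest data first.

class TinyLSM:
    def __init__(self, memtable_limit=4):
        self.memtable = {}              # in-memory writes (fast, no disk I/O)
        self.runs = []                  # flushed, immutable sorted runs
        self.memtable_limit = memtable_limit

    def put(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= self.memtable_limit:
            # Flush: sort once and append; sequential rather than random I/O
            self.runs.append(sorted(self.memtable.items()))
            self.memtable = {}

    def get(self, key):
        if key in self.memtable:
            return self.memtable[key]
        for run in reversed(self.runs):  # newest flushed run wins
            for k, v in run:
                if k == key:
                    return v
        return None

db = TinyLSM()
for i in range(6):
    db.put(f"k{i}", i)
print(db.get("k1"))  # 1 (served from a flushed run)
print(db.get("k5"))  # 5 (still in the memtable)
```

Real LSM engines also compact overlapping runs in the background; this sketch omits that step to stay short.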

Another is a set of open source libraries that Intel produced for programming its Arria 10 GX FPGA. FPGAs that accelerate functions like encryption and compression can be added to data center motherboards, and running this code in FPGA hardware increases its performance. Because FPGAs can be reprogrammed as algorithms improve, critical hardware functions can evolve along with the underlying open source projects specific to hyperscale applications.
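A common software pattern around such accelerators is to try the hardware path and fall back to a software implementation when no FPGA is present, so the same application runs everywhere. A hedged sketch of that pattern (the `fpga_accel` module is hypothetical, invented for illustration; Intel's actual libraries differ):

```python
import zlib

def _load_hw_compressor():
    """Try to load a hypothetical FPGA compression driver."""
    try:
        import fpga_accel               # hypothetical module, for illustration
        return fpga_accel.Compressor()
    except ImportError:
        return None                     # no accelerator: use software path

_HW = _load_hw_compressor()

def compress(data: bytes) -> bytes:
    # Offload to the FPGA when available; otherwise fall back to zlib.
    if _HW is not None:
        return _HW.compress(data)
    return zlib.compress(data)

def decompress(data: bytes) -> bytes:
    if _HW is not None:
        return _HW.decompress(data)
    return zlib.decompress(data)

payload = b"hyperscale " * 100
assert decompress(compress(payload)) == payload  # round-trip works either way
```

The calling code never needs to know which path ran, which is what lets the hardware function evolve independently of the applications above it.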

OCP’s progress in data centers became clear in light of statements from the new members from the telecom industry, who are just getting started with open hardware innovation and want the same results as they implement the next generation of 5G wireless.

Verizon Senior Vice President Mahmoud El-Assir said that networking up to now has been about speed and running different protocols for data, voice and video. Today’s networks are built from single boxes that each combine software, hardware and networking; to change the network, you need to change the box. Disaggregating those building blocks into hardware, software and networking running on a virtualized cloud computing platform will change how networks are built and provisioned. It will let telecom operators script the network to create virtual customer premises equipment (CPE), using automation capabilities that do not exist in today’s networks. With 5G and virtualization, networks will become flexible and agile in running applications, like Facebook’s and Google’s data centers.
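"Scripting the network" in this sense means instantiating network functions from templates instead of installing boxes. A simplified sketch of the idea; the function names and template below are invented for illustration, not Verizon's actual tooling:

```python
# Illustrative sketch of "scripting the network": a virtual CPE is assembled
# from software network functions rather than shipped as a hardware box.
# The function names and base template are invented for illustration.

VCPE_TEMPLATE = ["firewall", "nat", "dhcp"]   # base functions per customer

class VirtualCPE:
    def __init__(self, customer, functions):
        self.customer = customer
        self.functions = list(functions)

def provision_vcpe(customer, extras=()):
    """Create a per-customer vCPE from the base template plus add-ons."""
    return VirtualCPE(customer, VCPE_TEMPLATE + list(extras))

cpe = provision_vcpe("branch-office-7", extras=["vpn"])
print(cpe.functions)  # ['firewall', 'nat', 'dhcp', 'vpn']
```

Adding a VPN to a customer here is one function call, not a truck roll; that is the automation gap El-Assir is pointing at.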

Even though El-Assir says that 5G will be 50X the speed of 4G with single-digit-millisecond latency, the G stands for generation, not gigabits, and 5G networks will be much more valuable with cloud computing at the edge of the network. Representatives from AT&T, Deutsche Telekom and South Korea’s SK Telecom all shared similar perspectives on 5G.

SK Telecom’s Vice President of R&D Kangwon Lee also sees open hardware innovation as a critical technical and operational component in the transition to 5G, because many different types of traffic, such as IoT, connected cars, and virtual and augmented reality, will connect through the network. Without open hardware innovation, virtualization and a virtual white-box approach, it will be very difficult to set up networks that serve such different quality-of-service (QoS) requirements. SK Telecom’s current telco-engineering model of network deployment is very physically intensive; to scale the 5G network, Lee said, it will take a software engineering approach.
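The QoS problem Lee describes is commonly framed as network slicing: each traffic class gets a virtual slice of the network with its own latency and bandwidth targets. A toy sketch of that mapping (the profile numbers are illustrative, not SK Telecom's):

```python
# Toy network-slicing table: each traffic class maps to a virtual slice
# with its own QoS targets. The numbers are illustrative only.

SLICES = {
    "iot":           {"max_latency_ms": 100, "min_bandwidth_mbps": 1},
    "connected_car": {"max_latency_ms": 5,   "min_bandwidth_mbps": 10},
    "ar_vr":         {"max_latency_ms": 10,  "min_bandwidth_mbps": 100},
}

BEST_EFFORT = {"max_latency_ms": 200, "min_bandwidth_mbps": 0}

def slice_for(traffic_class):
    """Return the QoS profile for a traffic class, or a best-effort default."""
    return SLICES.get(traffic_class, BEST_EFFORT)

print(slice_for("connected_car")["max_latency_ms"])  # 5
print(slice_for("smart_meter"))                      # falls back to best effort
```

On fixed-function hardware each row of this table would be a separate physical build-out; on virtualized white-box infrastructure it is configuration.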

