
BLOG: SDN success in the real world

Andrew Warfield | Jan. 30, 2014
You've heard the SDN hype, but where's the benefit? According to Andrew Warfield of Coho Storage, it's in the applications, and he has an example to prove it.

As a result, in almost all situations, running a single fat pipe between the array and the network has been sufficient. The bottleneck has always been the disks.

Suddenly, flash is a problem
We've had flash hardware in storage for about 10 years. Early flash was expensive and unreliable. It was great at random access, but otherwise performed a lot like disks, and that's exactly how it was treated in storage systems. Vendors replaced some of the spinning disks with SSDs and generally used those SSDs as a cache. It was business as usual: the SSDs and disks shared a relatively slow SAS or SATA bus, whose aggregate performance still fit comfortably within a 10Gb connection.

Then everything changed. In 2010, we began to see the emergence of PCIe-based flash hardware. Flash devices moved off the storage bus altogether and now share the same high-speed interconnect the NIC lives on. Today, a single enterprise PCIe flash card can saturate a 10Gb interface.
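To put rough numbers on that shift, here is a back-of-the-envelope comparison. The figures vary by device generation and protocol overhead, so treat them as illustrative assumptions rather than measurements:

```python
# Back-of-the-envelope bandwidth comparison (illustrative figures only;
# real numbers depend on the specific devices and protocol overhead).

GBIT = 1e9 / 8  # bytes per second in one gigabit per second

links = {
    "SATA 3 bus (6 Gb/s)":                        6 * GBIT,
    "10GbE network link (10 Gb/s)":              10 * GBIT,
    "PCIe flash card (~3 GB/s, assumed figure)":  3e9,
}

for name, bytes_per_sec in links.items():
    print(f"{name:45s} ~{bytes_per_sec / 1e9:.2f} GB/s")

# The SSD that used to hide comfortably behind the SATA bus and the
# network link can now outrun a 10GbE port on its own.
```

The point is simply that the device that used to sit comfortably behind a single network link now outruns it.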

This is one of the founding observations we made a few years ago in starting our company: storage was about to change fundamentally, from a problem of aggregating low-performance disks in a single box into a challenge of exposing the performance of emerging solid-state memories as a naturally distributed system within enterprise networks. By placing individual PCIe flash devices as addressable entities connected directly to an SDN switch, our approach has been to promote much of the logic for presenting and addressing storage into the network itself.

How SDN solves storage problems
The initial customer environment we are trying to address is a virtualized, NFS-based one. VMware, for instance, is deployed across a bunch of hosts and is configured to use a single, shared NFS server. How can we take advantage of SDN to allow expensive PCIe flash to be shared across all of these servers without imposing a bottleneck on performance?

Problem 1: The single IP endpoint. We can't change the client software stack on a dominant piece of deployed software like ESX. Scalability and performance have to be solved in a way that supports legacy protocols. IP-based storage protocols like NFS bake in an assumption that the server lives at a single IP address. In the past, people have built special-purpose hardware to terminate and proxy NFS connections in order to cache or load-balance requests, but SDN allows us to go further.

The NFS server implementation in our system includes what is effectively a distributed TCP stack. When a new NFS connection is opened to the single configured IP address, an OpenFlow exception allows us to assign that connection to a lightly loaded node in our system. As the system runs, our stack is free to migrate that connection, interacting with the switch to redirect the flow across storage resources. As a result, we are able to offer the full width of connectivity through the switch as a path between storage clients and storage resources. This approach is similar to proposals to use OpenFlow as the basis of load balancing, with the difference that it is the application itself that is driving the placement and migration of connections in response to its own understanding of how those connections can best be served.
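To make the mechanism concrete, here is a minimal sketch of the connection-placement logic described above. It is not our actual implementation: the controller interface, the load metric, and the install_redirect() helper standing in for OpenFlow flow-mod messages are all assumptions made for illustration.

```python
# Sketch of SDN-based NFS connection placement: new flows addressed to the
# single client-facing IP are steered to the least-loaded flash node, and
# can be migrated later by rewriting the flow rules on the switch.
# install_redirect() is a hypothetical stand-in for real OpenFlow flow-mods.

from dataclasses import dataclass, field

VIRTUAL_NFS_IP = "10.0.0.100"   # the single IP address clients mount

@dataclass
class FlashNode:
    ip: str
    active_flows: int = 0        # stand-in load metric

@dataclass
class StorageController:
    nodes: list
    flow_table: dict = field(default_factory=dict)  # (client_ip, port) -> node

    def install_redirect(self, client, node):
        """Hypothetical flow-mod: rewrite dst VIRTUAL_NFS_IP -> node.ip for
        this client's TCP flow, and the reverse rewrite on the return path."""
        self.flow_table[client] = node
        print(f"flow {client} -> {node.ip}")

    def on_new_connection(self, client):
        """Called on the OpenFlow 'packet-in' exception for a TCP SYN sent
        to VIRTUAL_NFS_IP that matches no existing rule."""
        node = min(self.nodes, key=lambda n: n.active_flows)
        node.active_flows += 1
        self.install_redirect(client, node)

    def migrate(self, client, new_node):
        """Re-point an existing connection at a different node; the
        distributed TCP stack hands over connection state in tandem."""
        old_node = self.flow_table[client]
        old_node.active_flows -= 1
        new_node.active_flows += 1
        self.install_redirect(client, new_node)

# Usage: two hosts open NFS connections to the same virtual IP, and one
# connection is later migrated to rebalance load.
ctrl = StorageController(nodes=[FlashNode("10.0.1.1"), FlashNode("10.0.1.2")])
ctrl.on_new_connection(("10.0.2.11", 865))
ctrl.on_new_connection(("10.0.2.12", 866))
ctrl.migrate(("10.0.2.11", 865), ctrl.nodes[1])
```

The essential design choice is that the application itself, not a generic load balancer, decides where each flow should terminate, and it uses the switch's flow table to push that decision into the network data path.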

 
