Research community looks to SDN to help distribute data from the Large Hadron Collider

John Dix | May 27, 2015
Most advanced research and education networks have transitioned to 100 Gbps, but however fast the core networks progress, the capabilities at the network edge advance even faster.

ESnet's dynamic circuit service, which has been in production for quite a while, is called OSCARS. There is also an emerging standard promoted by the Open Grid Forum called NSI, and we'll integrate NSI with the application, so that's one upcoming milestone for the LHCONE part of this picture.
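For context, the Connection Service defined by NSI follows a reserve, commit and provision lifecycle for setting up a point-to-point circuit between two endpoints. The short Python sketch below illustrates that sequence; the NsiClient class, its method names and the endpoint URL are hypothetical stand-ins, since the actual OGF interface is SOAP/WSDL based and OSCARS exposes its own API.

# Hypothetical sketch of an NSI-style reserve/commit/provision sequence.
# The NsiClient class and its methods are illustrative placeholders, not
# the real OGF NSI CS bindings (which are SOAP/WSDL based).

class NsiClient:
    """Placeholder client for an NSI-capable network service agent (NSA)."""

    def __init__(self, provider_url):
        self.provider_url = provider_url      # hypothetical NSA endpoint

    def reserve(self, src_stp, dst_stp, capacity_mbps, start, end):
        # Ask the provider to hold resources for a point-to-point circuit
        # between two service termination points (STPs).
        print(f"reserve {capacity_mbps} Mbps: {src_stp} -> {dst_stp}")
        return "connection-id-1234"           # identifier chosen by the provider

    def commit(self, connection_id):
        # Confirm the held reservation.
        print(f"commit {connection_id}")

    def provision(self, connection_id):
        # Activate the data plane for the committed reservation.
        print(f"provision {connection_id}")


nsa = NsiClient("https://nsa.example.net/nsi")            # hypothetical URL
cid = nsa.reserve("urn:ogf:network:site-a", "urn:ogf:network:site-b",
                  capacity_mbps=100000,                   # a 100 Gbps circuit
                  start="2015-05-27T00:00Z", end="2015-05-28T00:00Z")
nsa.commit(cid)
nsa.provision(cid)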

One might ask, "What sets the feasible scale of worldwide LHC data operations?" Two factors stand out: the volume of data stored, which is hundreds of petabytes and growing at a rate that will soon reach an exabyte, and the ability to send that data across networks over continental and transoceanic distances.
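To make the network factor concrete, here is a quick back-of-the-envelope calculation of how long idealized transfers take at a given line rate, ignoring protocol overhead, storage limits and competing traffic.

# Back-of-the-envelope transfer times (idealized: no protocol overhead,
# no storage bottlenecks, no competing traffic).

def transfer_hours(volume_petabytes, rate_gbps):
    bits = volume_petabytes * 1e15 * 8        # decimal petabytes -> bits
    seconds = bits / (rate_gbps * 1e9)
    return seconds / 3600

for volume_pb, rate_gbps in [(1, 100), (100, 100), (1000, 1000)]:
    print(f"{volume_pb:>5} PB at {rate_gbps:>4} Gbps: "
          f"{transfer_hours(volume_pb, rate_gbps):8.1f} hours")

Even at a full 100 Gbps, a single petabyte takes roughly a day, which is why sustained throughput across many links matters as much as raw capacity.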

One venue to address the second factor, and to show the year-to-year progress in the ability to transfer many petabytes of data at moderate cost using successive generations of technology, is the Supercomputing Conference. It is a natural place to bring together our efforts on network transfer applications, state-of-the-art network switching and server systems, and software-defined network architectures in one intensive exercise spanning a week from setup to teardown.

Caltech and its partners, notably Michigan, Vanderbilt, Victoria, the US HEP labs (Fermilab and BNL), FIU, CERN, and other university and lab partners, along with the network partners mentioned above, have defined the state of the art nearly every year since 2002 as our explorations of high-speed data transfers climbed from 10 Gbps to 100 Gbps and, more recently, to hundreds of Gbps.

The Supercomputing 2014 event hosted the largest and most diverse exercise yet, defining the state of the art in several areas. We set up a Terabit/sec ring with a total of twenty-four 100 Gbps links among the Caltech, NITRD/iCAIR and Vanderbilt booths, using optical equipment from Padtec, a Brazilian company, and Layer 2 OpenFlow-capable switching equipment from Brocade (a fully populated MLXe16) and Extreme Networks.

We also connected to the Michigan booth over a dedicated 100 Gbps link, and we had four 100 Gbps wide area network links connecting to remote sites in the US, Europe and Latin America over ESnet, Internet2, and the Brazilian national and regional networks RNP and ANSP.
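Taken together, the links described above amount to a sizeable raw capacity. The sketch below tallies it from the counts in the text; the split of the 24 ring links among booth pairs is not specified, so the ring is recorded as a single entry, and raw link capacity should not be confused with achievable end-to-end throughput.

# Illustrative inventory of the SC14 demo links described above.
# Counts come from the text; the data structure itself is just a sketch.

links = [
    # (description, number_of_links, gbps_per_link)
    ("booth ring: Caltech / NITRD-iCAIR / Vanderbilt", 24, 100),
    ("dedicated link to the Michigan booth",            1, 100),
    ("WAN links over ESnet, Internet2, RNP and ANSP",   4, 100),
]

for name, count, rate in links:
    print(f"{name}: {count} x {rate} Gbps = {count * rate} Gbps")

total = sum(count * rate for _, count, rate in links)
print(f"total raw link capacity on and off the show floor: {total} Gbps")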

In addition to the networks, we constructed a compact data center capable of very high throughput, using state-of-the-art servers from Echostreams and Intel with many 40 Gbps interfaces and hundreds of SSDs from Seagate and Intel.
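A rough sizing exercise shows why it takes that many interfaces and drives; the per-device rates below are assumed, typical 2014-era values rather than figures from the demo hardware.

# Rough sizing sketch: how many 40 GbE interfaces and SSDs are needed to
# feed a target aggregate rate. Per-device figures are assumptions.

import math

TARGET_GBPS = 1000          # ~1 Tbps of sustained traffic
NIC_GBPS = 40               # line rate of one 40 GbE interface
SSD_READ_GBPS = 0.5 * 8     # assume ~500 MB/s sequential read per SSD

nics = math.ceil(TARGET_GBPS / NIC_GBPS)
ssds = math.ceil(TARGET_GBPS / SSD_READ_GBPS)

print(f"{nics} x 40 GbE interfaces and roughly {ssds} SSDs "
      f"to sustain ~{TARGET_GBPS} Gbps from storage")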

Then, apart from the high throughput and the dynamic changes at Layer 1, one of the main goals was to show software-defined networking control of large data flows. That's where our OpenFlow controller, written mainly by Julian, came in.
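The controller's interface is not described here, but the core idea can be sketched in a few lines: match a large transfer by its addresses and pin it to a chosen output port, which is conceptually what an OpenFlow flow-mod does. The Switch class and install_flow helper below are illustrative stand-ins, not the demo's actual code.

# Minimal, hypothetical sketch of SDN control of a large data flow: match the
# flow's addresses and pin it to a chosen output port. install_flow stands in
# for an OpenFlow FlowMod sent by a real controller such as OpenDaylight.

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = []

    def install_flow(self, match, out_port, priority=100):
        # Conceptually an OFPT_FLOW_MOD: match fields -> output action.
        self.flow_table.append({"match": match, "out_port": out_port,
                                "priority": priority})
        print(f"{self.name}: {match} -> port {out_port}")


def steer_transfer(switch, src_ip, dst_ip, out_port):
    """Pin a single large transfer (identified by src/dst) to one path."""
    switch.install_flow({"ipv4_src": src_ip, "ipv4_dst": dst_ip}, out_port)


edge = Switch("booth-switch")                    # hypothetical switch name
steer_transfer(edge, "10.0.1.10", "10.0.2.20", out_port=7)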

We demonstrated dynamic circuits across this complex network, intelligent network path selection using a variety of algorithms in Julian's OpenDaylight controller, and the ability of the controller to react to changes in the underlying optical network topology, which were driven by an SDN Layer 1 controller written by a Brazilian team from the University of Campinas.
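The path selection and re-routing behavior can likewise be illustrated: compute a least-cost path over the current topology and recompute when the optical layer reports a change. The toy graph, link weights and callback below are assumptions standing in for the OpenDaylight application, not a reproduction of it.

# Illustrative path selection over a changing topology. Lower edge cost is
# meant to represent more available capacity on that link.

import heapq

def shortest_path(graph, src, dst):
    """Dijkstra over a {node: {neighbor: cost}} adjacency map."""
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + w, nbr, path + [nbr]))
    return float("inf"), []

topology = {
    "caltech":    {"icair": 1, "vanderbilt": 2},
    "icair":      {"caltech": 1, "vanderbilt": 1},
    "vanderbilt": {"caltech": 2, "icair": 1},
}

def on_optical_change(failed_link):
    """React to a Layer 1 topology change by dropping the link and re-routing."""
    a, b = failed_link
    topology[a].pop(b, None)
    topology[b].pop(a, None)
    print("re-routed:", shortest_path(topology, "caltech", "vanderbilt"))

print("initial:", shortest_path(topology, "caltech", "vanderbilt"))
on_optical_change(("caltech", "vanderbilt"))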

 
