

Research community looks to SDN to help distribute data from the Large Hadron Collider

John Dix | May 27, 2015
Most advanced research and education networks have transitioned to 100 Gbps, but as fast as the core networks progress, the capabilities at the edge progress even faster.

We're also following the OpenDaylight releases. We worked with the Hydrogen release and then, after the SC14 conference, we tried some exercises with the Helium release. So we look at what's being developed and what features there are, and if any are important we adapt them. The next OpenDaylight release, called Lithium, is an enhancement of Helium, and we will use it when it becomes available in June.

Speaking of timing, what's the next step? How long will it take to see this vision through?

NEWMAN: It's progressing. We're starting to get it out in the field. Our test bed at Caltech has six switches of three different types, including Brocade MLXe and CER switch routers, and we're going to add a fourth type at Michigan. Julian is set up to try his flow rules, and we have our mechanism to integrate with the end application, which gives us lists of IP addresses that we can match to set up flow rules for those particular addresses. As soon as we have exercised that, we will start to do the same in the wide area.

Part of our team is at CERN in Geneva and we certainly will want to set up a switch there. That should happen in the next few months, and then the idea is to set up a preproduction operation starting with some of these managed flows and the application in my CMS experiment, so in the next year or two we'll be well on our way to production.
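To make the flow-rule mechanism concrete, here is a minimal sketch, assuming an OpenDaylight controller reachable over RESTCONF, of how an application could turn a list of transfer endpoints into per-IP forwarding rules. The controller address, datapath id, output port, and credentials are illustrative assumptions, not details of the Caltech test bed; the JSON layout follows OpenDaylight's flow-node-inventory model as used in the Helium release.

```python
# Hedged sketch: push per-destination-IP forwarding rules to an OpenDaylight
# controller over RESTCONF. Controller URL, node id, output port, and
# credentials are assumptions for illustration only.
import requests

ODL = "http://odl.example.org:8181"   # assumed controller address
NODE = "openflow:1"                   # assumed datapath id
AUTH = ("admin", "admin")             # default Helium credentials (assumed)

def push_flow(flow_id, dst_ip, out_port, priority=200):
    """Install a rule steering IPv4 traffic destined to dst_ip out of out_port."""
    flow = {
        "flow": [{
            "id": str(flow_id),
            "table_id": 0,
            "priority": priority,
            "match": {
                "ethernet-match": {"ethernet-type": {"type": 2048}},  # IPv4
                "ipv4-destination": f"{dst_ip}/32",
            },
            "instructions": {"instruction": [{
                "order": 0,
                "apply-actions": {"action": [{
                    "order": 0,
                    "output-action": {"output-node-connector": str(out_port)},
                }]},
            }]},
        }]
    }
    url = (f"{ODL}/restconf/config/opendaylight-inventory:nodes/"
           f"node/{NODE}/table/0/flow/{flow_id}")
    resp = requests.put(url, json=flow, auth=AUTH,
                        headers={"Content-Type": "application/json"})
    resp.raise_for_status()

# Example: the data-management application hands over a list of transfer
# endpoints; install one rule per destination address.
for i, ip in enumerate(["192.0.2.10", "192.0.2.11"], start=1):
    push_flow(flow_id=i, dst_ip=ip, out_port=2)
```

When a transfer completes, the same URL can be used with an HTTP DELETE to withdraw the rule, so the flow table only carries entries for active transfers.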

So this is predominately a wide area thing, but are there data center or campus implications as well?

NEWMAN: It depends. Campus, maybe. Brocade's ICX campus switches are OpenFlow 1.3 ready, so flow control can be done down to the workstation or server level. In the data center, for directing flows, I can see a lot of potential. The point is, where you have shifting loads and large data flows that you want to move efficiently, this could be very useful. It clearly is a big vision. We'll start to implement this and see how it goes. But I think it will have a big impact, with implications for research and education networks and the universities and labs they serve.

The scale you guys deal with is so different from the enterprise folks I typically talk to, so it's very interesting.

NEWMAN: Yes, I should give you some numbers. In 2012 during the last LHC run, about 200 petabytes of data were transferred. After that we stopped taking data and you'd think the level of activity would be less, but we still sent 100 petabytes. The next run of the LHC, which is a three-year run, will start in June (commissioning of the accelerator is going on right now), and we're expecting much larger data flows than before. (Since the interview, the next run of the LHC has started.)
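To put those volumes in perspective, here is a back-of-envelope conversion to sustained throughput, assuming the data were spread evenly over a year (which real transfer patterns are not, so peak demands are considerably higher):

```python
# Rough conversion of yearly transfer volumes to sustained average rates.
# Assumes perfectly even transfers over a calendar year, which understates
# the peak rates the network actually has to absorb.
SECONDS_PER_YEAR = 365 * 24 * 3600

def avg_gbps(petabytes_per_year):
    """Average throughput in gigabits per second for a yearly volume."""
    bits = petabytes_per_year * 1e15 * 8
    return bits / SECONDS_PER_YEAR / 1e9

print(f"200 PB/year ≈ {avg_gbps(200):.0f} Gbps sustained")  # about 51 Gbps
print(f"100 PB/year ≈ {avg_gbps(100):.0f} Gbps sustained")  # about 25 Gbps
```

Even averaged out, the 2012 run alone kept roughly half of a 100 Gbps link busy around the clock, which is why bursts from the next, larger run push well beyond single-link capacity.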
