
Guest View: A cloud fit for finance

James Walker | Dec. 16, 2013
As a business model, the cloud's massive resources and ubiquity offer unbeatable value – but it has evolved as a general-purpose solution, not one geared to the very special demands of financial services, says James Walker, President of the CloudEthernet Forum.

What is needed for the financial industry - as well as for many other large organisations affected by these issues - is a fundamental rethink of the cloud's priorities. How the data is delivered can be as vital as the delivery itself: in fact, it would be better to destroy and lose some data than to have it delivered via a route that breaks the law.

Mechanisms such as SDN (Software-Defined Networking) are currently being explored that could provide visibility into, and control over, the routes taken by network traffic. These controls are not innate to cloud culture; they must be actively insisted upon by the stakeholders who would benefit from them. The issue is high on the agenda of the CloudEthernet Forum (CEF) - an independent industry body recently created to develop open standards for large global datacenter deployments.
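
To make that concrete, here is a minimal sketch of such a control in Python. It assumes an SDN controller can report the jurisdictions a path traverses; the whitelist, the jurisdiction codes and the function names are all illustrative assumptions, not a real controller API:

```python
# Illustrative sketch only: refuse delivery when any hop on the path
# reported by an (assumed) SDN controller falls outside a whitelist
# of permitted jurisdictions.

PERMITTED_JURISDICTIONS = {"SG", "HK", "JP"}  # e.g. data must stay in-region

def path_is_lawful(hop_jurisdictions: list) -> bool:
    """True only if every hop on the path stays inside permitted territory."""
    return all(hop in PERMITTED_JURISDICTIONS for hop in hop_jurisdictions)

def deliver(payload: bytes, hop_jurisdictions: list) -> bool:
    # Better to lose the data than to deliver it via a route that
    # breaks the law: refuse outright rather than fall back silently.
    if not path_is_lawful(hop_jurisdictions):
        return False
    ...  # hand off to the transport
    return True

if __name__ == "__main__":
    print(deliver(b"trade", ["SG", "HK"]))        # True: compliant path
    print(deliver(b"trade", ["SG", "US", "HK"]))  # False: US hop not permitted
```

The design choice worth noting is the hard refusal: consistent with the principle above, a non-compliant path means the data is not sent at all, rather than being rerouted silently over whatever path happens to be available.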

If the cloud is to deliver its colossal benefits to the finance industry, its storage must be pinned to specified locations and its transport routes must be bounded. How this happens, and whether the solution is one that suits the financial industry, will depend on early commitment to the relevant forum working groups.

A time-sensitive cloud

Financial traders know all about the threat of latency - how a few microseconds can make or break a deal - and specialist providers have responded with dedicated services guaranteed to keep latency between sites to a minimum. Carrier Ethernet is playing an important role here, removing the need for translation between LAN and WAN protocols and providing fast connections between nodes.

However, in many financial applications the need is not so much "minimal latency" as "guaranteed latency". If you are running trading applications that rely on being, say, ten microseconds behind the market, then a provider who promises "latency below five microseconds 99% of the time" may deserve a pat on the back, but not your custom: what you need is "latency below ten microseconds 100% of the time". Financial data turns poisonous far faster than any foodstuff, so data past its "sell-by microsecond" is no longer actionable and is better trashed than used.
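
As a sketch of what that rule looks like in practice, the following Python fragment stamps each tick and trashes anything past its staleness budget. The Tick type, the field names and the ten-microsecond budget are assumptions for illustration; a real system would also need synchronised clocks (e.g. PTP) for the comparison to be meaningful:

```python
import time
from dataclasses import dataclass

STALENESS_BUDGET_NS = 10_000  # 10 microseconds, expressed in nanoseconds

@dataclass
class Tick:
    symbol: str
    price: float
    stamped_ns: int  # when the tick was stamped at the source

def actionable(tick: Tick, now_ns: int) -> bool:
    """Stale data is trashed rather than used: a tick is only
    actionable while it is inside the staleness budget."""
    return now_ns - tick.stamped_ns <= STALENESS_BUDGET_NS

if __name__ == "__main__":
    now = time.monotonic_ns()
    fresh = Tick("XYZ", 101.25, now - 4_000)   # 4 us old: act on it
    stale = Tick("XYZ", 101.10, now - 25_000)  # 25 us old: trash it
    for tick in (fresh, stale):
        verdict = "act" if actionable(tick, now) else "drop"
        print(tick.symbol, tick.price, verdict)
```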

Another angle on the importance of controlled latency is the "split brain" problem that can arise between sets of mirrored data. It makes sense to build in redundancy and have a secondary backup system that can step in as soon as a fault arises in the primary system. At the point of failover the secondary becomes primary, and timing must be very strictly controlled to stop data being updated independently on the two systems. Once the supposedly identical mirror sets are allowed to diverge, reconciling them can become a nightmare.
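
One standard defence against this is epoch (fencing-token) numbering, sketched below in Python: an arbiter hands out a strictly increasing epoch on every failover, and a replica refuses any write tagged with a stale epoch, so a demoted primary cannot keep updating the mirror independently. All class and method names here are assumptions for the sketch:

```python
class Replica:
    def __init__(self, name: str):
        self.name = name
        self.epoch = 0   # highest epoch this replica has seen
        self.data = {}

    def apply_write(self, key: str, value: str, epoch: int) -> bool:
        if epoch < self.epoch:
            return False  # stale primary: reject, so the mirrors cannot diverge
        self.epoch = epoch
        self.data[key] = value
        return True

class Arbiter:
    """Single source of truth for who is primary (e.g. a quorum service)."""
    def __init__(self):
        self.epoch = 0

    def promote(self) -> int:
        self.epoch += 1   # every failover gets a fresh, higher epoch
        return self.epoch

if __name__ == "__main__":
    arbiter = Arbiter()
    a, b = Replica("A"), Replica("B")

    epoch_a = arbiter.promote()           # A is primary in epoch 1
    a.apply_write("pos", "100", epoch_a)
    b.apply_write("pos", "100", epoch_a)  # mirrored to B

    epoch_b = arbiter.promote()           # failover: B becomes primary
    b.apply_write("pos", "150", epoch_b)

    # The old primary wakes up and tries to push a write with its stale epoch:
    assert not b.apply_write("pos", "120", epoch_a)  # refused
    print(b.data)  # {'pos': '150'}: no divergence to reconcile
```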
