
BLOG: How milliseconds can make or break a deal

John Hoffman, Head of Ethernet Product Management, Tata Communications | Dec. 5, 2012
Today's sophisticated applications are more latency-sensitive and need optimal network conditions to operate properly. As a result, companies are demanding networks with lower latency and less latency variation.

Global companies with mission-critical connectivity requirements need access to high-speed fibre-optic networks with minimal latency between the world's business capitals. The applications running on networks today are increasingly sophisticated and are often mission critical or revenue generators for the firm. Because these applications are latency-sensitive and need optimal conditions to operate properly, companies are demanding networks with lower latency and less latency variation.

Increasingly, this means adopting a low latency network or adding low latency links to an existing network. While firms in the banking and financial services industries are the primary users of low latency networks today, more and more businesses in other markets are starting to pay a premium for the improved performance delivered by low latency connectivity.

What is latency?

Latency is the time it takes a data frame to travel from the A end to the Z end and back again; it is typically measured in milliseconds as a "round trip" figure. Because signals propagate at a fixed speed, distance is the variable with the greatest impact on latency. Other factors also contribute, such as equipment processing time, frame size and network congestion. The key to minimising latency is therefore to take the shortest path, use equipment designed for low latency and avoid needless routing and switching.
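To see why distance dominates, the best-case round-trip figure can be worked out directly from route length: light travels through glass fibre at roughly two-thirds the speed it does in a vacuum, or about 200 km per millisecond. A minimal sketch (the ~10,000 km route length is an illustrative assumption, not a quoted figure):

```python
# Light in fibre covers roughly 200 km per millisecond
# (assumption: refractive index of glass ~1.5, so ~2/3 of c).
SPEED_IN_FIBRE_KM_PER_MS = 200

def round_trip_latency_ms(route_km: float) -> float:
    """Theoretical best-case round-trip propagation delay in milliseconds.

    Ignores equipment processing, serialisation and queuing delays,
    so real-world figures will always be somewhat higher.
    """
    return 2 * route_km / SPEED_IN_FIBRE_KM_PER_MS

# Illustrative example: a ~10,000 km fibre route
print(round_trip_latency_ms(10_000))  # 100.0 (milliseconds)
```

This floor set by physics is why shaving the route by even a few hundred kilometres matters: no amount of faster equipment can beat a shorter path.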

For example, switching at layer 2 or routing at layer 3 usually adds more latency than a layer 1 transmission system. This is because routers and switches take time to move each packet from the input interface to the output interface.
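The per-hop cost is easy to quantify for the serialisation component alone: a store-and-forward device must clock the whole frame in before it can send it on. A small sketch of that arithmetic (frame size and link rate are illustrative assumptions):

```python
def serialisation_delay_us(frame_bytes: int, link_bps: float) -> float:
    """Time to clock one frame onto a link, in microseconds.

    A store-and-forward router or switch incurs at least this delay
    per hop, on top of lookup and queuing time.
    """
    return frame_bytes * 8 / link_bps * 1e6

# A 1500-byte frame on a 1 Gbit/s link takes ~12 microseconds per hop
print(serialisation_delay_us(1500, 1e9))  # 12.0
```

Each routed or switched hop repeats this delay, which is why a pure layer 1 path, with no per-hop packet handling, stays closer to the propagation-delay floor.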

To keep latencies to their absolute lowest, most firms will purchase twice the bandwidth they need. For example, if a firm expects to send 100 Meg on a low latency link, it will purchase a 200 Meg link, which removes congestion from the equation. Another consideration is the resiliency a firm requires. Low latency links are often sold as unprotected links, so the customer will purchase multiple links and perform the protection switching themselves. This allows the customer to ensure the lowest latency link is protected by the second-lowest latency link in the market.
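That customer-side protection scheme amounts to ranking the purchased links by measured latency and failing over in order. A minimal sketch of the selection logic (the route names and latency figures are hypothetical):

```python
# Hypothetical measured round-trip latencies (ms) for three
# unprotected links bought from different carriers.
links_ms = {
    "route_a": 62.1,
    "route_b": 64.8,
    "route_c": 71.3,
}

# Rank routes from lowest to highest latency.
ranked = sorted(links_ms, key=links_ms.get)

# Primary traffic rides the fastest link; the second-fastest
# stands by as the protection path.
primary, backup = ranked[0], ranked[1]
print(primary, backup)  # route_a route_b
```

In practice the failover itself would be done by the firm's own routing or switching gear; the point is that the customer, not the carrier, controls which link protects which.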

Options for optimisation

Ethernet was first developed in the early 1970s and is one of the most prevalent protocols in networks today. It is a frame-based data transport protocol designed to deliver maximum performance for data frames. Ethernet is also very flexible, offering a variety of configurations that allow more flexibility and cost savings than traditional protocols.

Low latency networks differ in the cost, speed and level of security they provide. Businesses need to consider whether the data transmitted is highly sensitive and, if so, whether it will be secure on the service provider's network. The financial strength of the service provider is also important, especially if a critical part of the business runs on the network. No one can afford to wake up one day and find that their service provider has gone out of business and their network is permanently down.
