The vagaries of FTP: What to look for in a secure large file transfer alternative

Ian Hamilton, CTO, Signiant | April 4, 2016
When it comes to moving big files quickly, there are only a few options.

This vendor-written tech primer has been edited by Executive Networks Media to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

FTP turns 45 this year. And, while this original protocol for transferring files over the Internet is still widely used, many companies are looking for a more modern alternative. Initially, concerns about FTP centered on security. But, as IP technology became ubiquitous for global data exchange, FTP’s more fundamental performance limitations also became apparent.

Because FTP was originally designed without security features like data integrity and confidentiality, the first security concerns arose around the privacy of control channel data such as user IDs and passwords, and then spread to the actual data being transferred. “Secure” FTP (FTPS) was developed in response. FTPS is FTP with Transport Layer Security (TLS), which protects file content, user names, and passwords from eavesdropping and modification while in transit over the Internet.
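To make that concrete, here is a minimal sketch of an FTPS upload using Python's standard-library ftplib (the host, credentials, and file name are hypothetical placeholders); the prot_p() call is what extends TLS from the control channel to the file data itself:

# Minimal FTPS (FTP over TLS) upload sketch; host, credentials, and
# file name are placeholders, not a real configuration.
from ftplib import FTP_TLS

ftps = FTP_TLS("ftp.example.com")
ftps.login("user", "password")     # control channel secured via AUTH TLS
ftps.prot_p()                      # extend TLS protection to the data channel
with open("report.zip", "rb") as f:
    ftps.storbinary("STOR report.zip", f)
ftps.quit()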

However, FTPS doesn’t protect data at rest on servers, which are, by necessity, accessible from the Internet to allow FTP access for off-site business partners. To solve these issues, companies often built external security controls around their FTP infrastructure.
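One common control of this kind is to encrypt files on the client before they ever reach the FTP server, so the data at rest stays unreadable even if the server is breached. A minimal sketch using the third-party Python cryptography package (file names and key handling are illustrative only; a real deployment needs proper key management):

# Illustrative sketch: encrypt a file client-side before uploading it,
# so it is protected at rest on the FTP server. Requires the third-party
# "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, store and exchange this securely
cipher = Fernet(key)

with open("report.zip", "rb") as f:
    ciphertext = cipher.encrypt(f.read())

with open("report.zip.enc", "wb") as f:
    f.write(ciphertext)            # upload this file instead of the plaintext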

In addition to the need for security enhancements, FTP lacks many basic features and functions necessary for organizations to operate efficiently, from email notifications and file transfer tracking to automation and user and storage management. All of these require either writing scripts around FTP servers and clients or purchasing additional software.
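For example, something as simple as telling a recipient that a file has arrived typically means wrapping the FTP client in a custom glue script. A rough sketch of that pattern, with all hosts, addresses, and paths as hypothetical placeholders:

# Sketch of the glue scripting FTP deployments often require: upload a
# file, then send an email notification by hand. Hosts, addresses, and
# paths are placeholders.
import smtplib
from email.message import EmailMessage
from ftplib import FTP_TLS

def upload_and_notify(path: str) -> None:
    ftps = FTP_TLS("ftp.example.com")
    ftps.login("user", "password")
    ftps.prot_p()
    with open(path, "rb") as f:
        ftps.storbinary(f"STOR {path}", f)
    ftps.quit()

    msg = EmailMessage()
    msg["Subject"] = f"File delivered: {path}"
    msg["From"] = "transfers@example.com"
    msg["To"] = "partner@example.com"
    msg.set_content(f"{path} has been uploaded to ftp.example.com.")
    with smtplib.SMTP("mail.example.com") as smtp:
        smtp.send_message(msg)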

For most companies, the result is a system that requires heavy IT involvement and is very difficult to scale, update and manage. Basic tasks like adding users or partners to support new business initiatives require IT involvement, and those new users often have difficulty using archaic FTP interfaces.

However, the biggest vagary for FTP users is its painful slowness and tendency to fail when sending large files over wide area networks (WANs). And this is not something that can be solved by scripts or changes to the protocol.

FTP is just slow by nature. Why?

FTP almost always runs on top of the Transmission Control Protocol (TCP), which is also the underlying protocol for the Hypertext Transfer Protocol (HTTP). While TCP was originally built to ensure accuracy and reliability, performance on today’s high-bandwidth, long-distance networks wasn’t given much consideration.

To ensure reliability and prevent congestion, TCP uses an acknowledgment mechanism: a sender may transmit only a defined window of data, then must wait for a response from the other end before sending more. All of this handshaking is affected by latency, which delays both the delivery and the acknowledgment of data, and the longer the distance, the higher the latency. Also, in the event of a connection failure due to a network disruption, FTP transfers over TCP typically restart the entire file from the beginning.
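The ceiling this imposes is easy to quantify: a connection can carry at most one window of data per round trip, so throughput is bounded by window size divided by round-trip time. A back-of-the-envelope sketch in Python (the window size and latency figures are illustrative assumptions, not measurements):

# TCP throughput ceiling: at most one window of data can be in flight
# per round trip, no matter how fast the underlying link is.
window_bytes = 64 * 1024   # classic 64 KB TCP window, no window scaling
rtt_seconds = 0.150        # assumed ~150 ms round trip, e.g. trans-Pacific

max_throughput_mbps = window_bytes * 8 / rtt_seconds / 1e6
print(f"Ceiling: {max_throughput_mbps:.1f} Mbit/s")  # ~3.5 Mbit/s

At 150 ms of latency, a 64 KB window caps a single connection at roughly 3.5 Mbit/s no matter how much bandwidth has been purchased, which is why long-haul FTP transfers crawl on otherwise fast links.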

 
