Everything you need to know about NVMe

Jon L. Jacobi | April 6, 2015
As SSDs become more common, you'll also hear more about Non-Volatile Memory Express, a.k.a. NVM Express or, more commonly, NVMe. NVMe is a communications interface/protocol developed specifically for SSDs by a consortium of vendors including Intel, Samsung, SanDisk, Dell, and Seagate.

Unlike SCSI and SATA, NVMe is designed to take advantage of the unique properties of pipeline-rich, random-access, memory-based storage. The spec also reflects the improvements in latency-reduction techniques made since SATA and AHCI were introduced.

Advances include requiring only a single message for 4KB transfers as opposed to two, and the ability to process multiple queues instead of only one. By multiple, I mean a whopping 65,536 of them. That's going to speed things up a lot for servers processing lots of simultaneous disk I/O requests, though it'll be of less benefit to consumer PCs.
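The scale of that queueing change is easier to appreciate with a little arithmetic. The sketch below is purely illustrative (the function name is made up, not any real driver API); it compares the spec's limits against AHCI's single queue of 32 commands:

```python
# Toy comparison of command-queue capacity: NVMe allows up to 65,536
# queues, each up to 65,536 commands deep; AHCI offers one queue of 32.
NVME_MAX_QUEUES = 65_536
NVME_QUEUE_DEPTH = 65_536
AHCI_MAX_QUEUES = 1
AHCI_QUEUE_DEPTH = 32

def outstanding_capacity(queues: int, depth: int) -> int:
    """Maximum number of commands that can be in flight at once."""
    return queues * depth

nvme = outstanding_capacity(NVME_MAX_QUEUES, NVME_QUEUE_DEPTH)
ahci = outstanding_capacity(AHCI_MAX_QUEUES, AHCI_QUEUE_DEPTH)
print(nvme)  # 4,294,967,296 commands in flight, in theory
```

In practice, drivers create far fewer queues (typically one pair per CPU core), but the headroom is what lets heavily loaded servers keep every core issuing I/O without contending for a single queue.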

NVMe: Built for SSDs

If you've read our SSD coverage over the past couple of years, it shouldn't be news that solid state storage has run into a significant hurdle: legacy storage buses. Serial ATA and Serial Attached SCSI (SAS) offer plenty of bandwidth for hard drives, but for increasingly speedy SSDs, they've run out of steam.

Because of SATA's 600MBps ceiling, just about any top-flight SATA SSD will score the same in our testing these days — around 500MBps. Even 12Gbps SAS SSD performance stalls at around 1.5GBps. SSD technology is capable of much more.
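Those ceilings fall out of the link rate and the line encoding. As a back-of-envelope sketch (the helper function here is illustrative, not a real API), SATA III's 6Gbps line rate with 8b/10b encoding works out to 600MBps of usable payload bandwidth:

```python
def usable_mbps(line_rate_gbps: float, data_bits: int, total_bits: int) -> float:
    """Usable payload bandwidth in MB/s, after line-encoding overhead."""
    payload_bits_per_sec = line_rate_gbps * 1e9 * data_bits / total_bits
    return payload_bits_per_sec / 8 / 1e6  # bits -> bytes -> MB

# SATA III: 6 Gbps line rate, 8b/10b encoding (8 data bits per 10 on the wire)
print(usable_mbps(6, 8, 10))   # 600.0 MB/s
# 12 Gbps SAS, also 8b/10b
print(usable_mbps(12, 8, 10))  # 1200.0 MB/s per lane
```

The ~1.5GBps figure quoted for SAS SSDs presumably reflects configurations beyond a single lane; per lane, the encoding math caps 12Gbps SAS at 1.2GBps.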

The industry knew this impasse was coming from the get-go. SSDs have far more in common with fast system memory than with the slow hard drives they emulate. It was simply more convenient to use the existing PC storage infrastructure, putting SSDs on relatively slow (compared to memory) SATA and SAS. For a long time this was fine, as it took a while for SSDs to ramp up in speed. Those days are long gone.

Leveraging existing technology

Fortunately, a suitable high-bandwidth bus technology was already in place — PCI Express, or PCIe. PCIe is the underlying data transport layer for graphics and other add-in cards, as well as Thunderbolt. PCIe 2.x (Gen 2) offers approximately 500MBps per lane, and version 3.x (Gen 3), around 985MBps per lane. Put a card in a x4 (four-lane) slot and you've got 2GBps of bandwidth with Gen 2 and nearly 4GBps with Gen 3. That's a vast improvement, and in the latter case, a wide enough pipe for today's fastest SSDs.
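The per-lane figures come from each generation's transfer rate and encoding: Gen 2 runs at 5GT/s with 8b/10b encoding, while Gen 3 runs at 8GT/s with the far leaner 128b/130b encoding. A quick sketch of the arithmetic (the function name is illustrative, not any real API):

```python
def pcie_lane_mbps(gt_per_s: float, payload_bits: int, symbol_bits: int) -> float:
    """Per-lane payload bandwidth in MB/s from transfer rate and encoding."""
    return gt_per_s * 1e9 * payload_bits / symbol_bits / 8 / 1e6

gen2 = pcie_lane_mbps(5.0, 8, 10)     # 8b/10b  -> 500.0 MB/s per lane
gen3 = pcie_lane_mbps(8.0, 128, 130)  # 128b/130b -> ~984.6 MB/s per lane
print(gen2 * 4, gen3 * 4)  # x4 slot: 2000 MB/s vs ~3938 MB/s
```

The switch to 128b/130b encoding is why Gen 3 nearly doubles Gen 2's throughput despite the transfer rate rising only 60 percent: encoding overhead drops from 20 percent to about 1.5 percent.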

PCIe expansion card solutions such as OCZ's RevoDrive, Kingston's HyperX Predator M.2/PCIe, Plextor's M6e, and others have been available for some time now, but to date, they have relied on the SCSI or SATA protocols with their straight-line hard drive methodologies. Obviously, a new approach was required.

 
