
Flash your way to better VMware performance

Keith Schultz | Jan. 28, 2014
PernixData FVP clusters server-side flash to improve virtual machine performance and reduce SAN latency

You can create the flash cluster using any PCIe or SSD flash device found on VMware's hardware compatibility list. Best of all, the server-side flash can consist of a heterogeneous mix of devices. You don't have to install the same flash hardware, or even the same capacity, in each host. As of this 1.0 release, FVP only works with block-based network storage, such as iSCSI and Fibre Channel SANs. Support for file-based storage (that is, NFS) will be available in future versions, as will support for additional hypervisors beyond VMware vSphere.

The size of the flash cluster is based on the I/O activity of the running VMs, not the underlying storage footprint. Thus, even if all of your VMware VMs sit on a multiterabyte datastore, you won't have to break the bank and install multiterabyte flash devices; the flash cluster only needs to be large enough to hold the active working set. However, as with any caching system, PernixData FVP will suffer some read misses during VM startup and for a short initial period while the cache warms.
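To make that concrete, here's a rough, back-of-the-envelope sizing calculation in Python. The VM count, per-VM working set, replica count, and host count are all illustrative assumptions, not PernixData guidance:

# Back-of-the-envelope flash sizing based on working set, not datastore capacity.
# All figures are illustrative assumptions, not PernixData recommendations.

datastore_gb = 20 * 1024          # 20 TB of provisioned datastore capacity
vm_count = 100                    # running VMs accelerated by FVP
working_set_gb_per_vm = 15        # assumed "hot" data each VM touches regularly
replica_copies = 1                # extra write-back replicas (0, 1, or 2)
hosts = 4                         # hosts contributing flash to the cluster

flash_needed_gb = vm_count * working_set_gb_per_vm * (1 + replica_copies)
flash_per_host_gb = flash_needed_gb / hosts

print(f"Datastore capacity:       {datastore_gb:,} GB")
print(f"Flash across the cluster: {flash_needed_gb:,} GB")      # 3,000 GB vs. 20,480 GB
print(f"Flash per host:           {flash_per_host_gb:,.0f} GB")

Even with generous assumptions, the flash required is a small fraction of the datastore capacity.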

The write stuff
The flash clusters in PernixData FVP work in either write-through or write-back mode. A write-through flash cluster accelerates reads from the SAN but not writes. Because writes are committed to the server-side flash and the SAN simultaneously, write performance is still bound to the write latency of the SAN.

The write-back policy accelerates both reads and writes, with writes committed to the cache first, then copied to the SAN in the background. Keeping cached I/O data safe is essential because there's always a chance that uncommitted data will be lost in the event of a host failure. FVP guards against this by replicating writes to flash on other hosts in the cluster. Write-back mode allows zero, one, or two replicas to be stored across the cluster to help prevent data loss. You configure this on a per-VM basis, so you can make the cache fault tolerant for some VMs but not others.
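The difference between the two policies can be sketched in a few lines of Python. The cache, peer hosts, and SAN below are simplified in-memory stand-ins, and the class and method names are hypothetical; this illustrates the write-through and write-back semantics described above, not PernixData's actual implementation.

class Backend:
    """Stand-in for a storage target: local flash, a peer host's flash, or the SAN."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def write(self, lba, data):
        self.blocks[lba] = data

    def read(self, lba):
        return self.blocks.get(lba)


class FlashCache:
    def __init__(self, local_flash, san, peers=(), write_back=False):
        self.flash = local_flash
        self.san = san
        self.peers = list(peers)      # replica targets (zero, one, or two per VM)
        self.write_back = write_back
        self.dirty = set()            # blocks not yet destaged to the SAN

    def write(self, lba, data):
        self.flash.write(lba, data)
        if self.write_back:
            # Write-back: acknowledge after flash and replica writes;
            # the SAN is updated later in the background.
            for peer in self.peers:
                peer.write(lba, data)
            self.dirty.add(lba)
        else:
            # Write-through: commit to the SAN synchronously, so write
            # latency is still bound by the SAN.
            self.san.write(lba, data)

    def read(self, lba):
        data = self.flash.read(lba)          # cache hit: served from flash
        if data is None:
            data = self.san.read(lba)        # cache miss: fall back to the SAN
            if data is not None:
                self.flash.write(lba, data)  # populate the cache for next time
        return data

    def destage(self):
        # Background copy of dirty blocks from flash to the SAN (write-back only).
        for lba in sorted(self.dirty):
            self.san.write(lba, self.flash.read(lba))
        self.dirty.clear()


# Write-back with one replica on a peer host, configured per VM.
cache = FlashCache(Backend("local-flash"), Backend("san"),
                   peers=[Backend("peer-flash")], write_back=True)
cache.write(42, b"journal entry")
print(cache.read(42))   # served from local flash
cache.destage()         # uncommitted data eventually reaches the SAN

In the toy model, switching write_back to False is all it takes to fall back to write-through behavior, which mirrors how the policy is simply a per-VM setting in FVP.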

The PernixData dashboard provides excellent real-time views into how the flash cluster is performing. This VM IOPS chart shows an effective IOPS of nearly 60,000, with local flash contributing roughly 50,000 and the SAN only about 9,000.

Admins will have no trouble analyzing the health and performance of the flash cluster. The PernixData management console integrates into vCenter Server. Under the Performance section of the PernixData tab in vCenter, a wide range of customizable charts and graphs is available, including virtual machine IOPS, virtual machine latency, cache hit rate, and cache eviction rate.

The hit rate and eviction rate charts are the keys to ensuring the flash cluster is sized correctly for the number of running VMs. Hit rate measures the proportion of I/O operations served from the server-side flash as opposed to the SAN. A hit rate of 100 percent means the flash cluster is sized correctly for the running VMs in the environment; a real-world hit rate of around 85 percent is typical and reasonable.
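The arithmetic behind the hit rate is simple. The counter values below are made up for illustration and happen to work out to the 85 percent figure:

# Hypothetical hit-rate calculation from raw I/O counters (illustrative values).
flash_ios = 51_000   # I/O operations served from server-side flash
san_ios = 9_000      # I/O operations that fell through to the SAN

hit_rate = flash_ios / (flash_ios + san_ios)
print(f"Cache hit rate: {hit_rate:.1%}")   # -> 85.0%

# A sustained hit rate well below that mark, or a climbing eviction rate,
# suggests the flash cluster is undersized for the VMs' working set.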
