Sharing files using the Network File System (NFS), a common task for NAS devices, served as the focus for much of our performance testing.
We constructed a six-step scenario in which a client mounted an NFS drive on the ReadyNAS; created a directory; wrote a 10-kbyte file in the new directory; deleted the file; deleted the directory; and then unmounted the drive (see "How We Did It").
Even though that's only six steps for the user, a sample packet capture showed 118 unique NFS and related calls on the wire. Working with the Mu Test Suite from Mu Dynamics, we created a scenario that replayed this sequence, but substituted unique port numbers, file names and other attributes each time. That's a big improvement over simple capture/replay testing, and a much better predictor of NFS performance in production. (The capture we used can be downloaded from Mu's pcapr community site.)
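To illustrate the substitution idea, here is a minimal Python sketch of the file-operation portion of the scenario with a unique name generated on each pass. This is not the Mu Test Suite itself, just an assumption-laden stand-in: it runs against a local temporary directory rather than a real NFS mount, and the function and path names are hypothetical.

```python
import os
import tempfile
import uuid


def run_nfs_scenario(mount_point: str) -> None:
    """One iteration of the file-operation steps: create a directory,
    write a 10-kbyte file in it, delete the file, delete the directory.
    Mounting and unmounting the NFS share is assumed to happen outside
    this function. Unique names per iteration avoid cache effects that
    plague simple capture/replay testing."""
    dir_path = os.path.join(mount_point, f"testdir-{uuid.uuid4().hex}")
    os.mkdir(dir_path)
    file_path = os.path.join(dir_path, f"testfile-{uuid.uuid4().hex}")
    with open(file_path, "wb") as f:
        f.write(b"\0" * 10 * 1024)  # 10-kbyte payload
    os.remove(file_path)
    os.rmdir(dir_path)


# For illustration we use a local temporary directory rather than a
# real NFS mount such as /mnt/readynas (hypothetical path).
with tempfile.TemporaryDirectory() as tmp:
    for _ in range(3):
        run_nfs_scenario(tmp)
    print("leftover entries:", len(os.listdir(tmp)))  # each iteration cleans up
```

In a real test each iteration would also vary the client port number, which the sketch above cannot show at the filesystem level.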
The goal of the NFS tests was to plot response time against the number of concurrent sessions. Each "session" included all the steps named, and ran repeatedly for five minutes. At the end of each iteration, the Mu Test Suite reported response time statistics as well as the number of transactions and errors.
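The kind of per-iteration summary described above can be sketched in a few lines of Python. The field names and sample numbers here are illustrative assumptions, not the Mu Test Suite's actual report format.

```python
from statistics import mean


def summarize(response_times_ms, errors=0):
    """Aggregate per-iteration response times (in milliseconds) into a
    summary of the kind a load-test tool reports: average and maximum
    response time, transaction count, and error count."""
    return {
        "transactions": len(response_times_ms),
        "errors": errors,
        "avg_ms": round(mean(response_times_ms), 2),
        "max_ms": max(response_times_ms),
    }


# Hypothetical sample: response times from one five-minute run.
print(summarize([12.4, 15.1, 9.8, 18.6, 14.2]))
# → {'transactions': 5, 'errors': 0, 'avg_ms': 14.02, 'max_ms': 18.6}
```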
The ReadyNAS proved a capable performer for up to 128 concurrent NFS users. Average response times remained very low: less than 20 milliseconds for up to 16 concurrent sessions, and less than 200 milliseconds for up to 128 sessions. Maximum response times also scaled linearly, with worst-case times of less than 200 milliseconds for 32 sessions or fewer, and less than 500 milliseconds for 128 concurrent sessions.
Errors began to occur in tests with 256 and 512 concurrent sessions, meaning one or more NFS sequences were unable to complete successfully. Still, 128 concurrent sessions is a relatively large number, especially for small and mid-sized organizations whose NFS traffic isn't likely to be anywhere near as stressful. Even in an organization with thousands rather than hundreds of users, it's unlikely all users would concurrently exercise NFS as rigorously as the Mu Test Suite did here.
iSCSI support is a major new feature in this release, making the ReadyNAS a candidate for storing virtual machines created with VMware and other virtualization products. This ReadyNAS device carries VMware certification, and we verified that it works well when using VMware's vMotion to move virtual machines between hosts.
A central question when moving to any sort of shared storage is what performance penalty, if any, is involved. To answer that question, we used the open-source IOzone tool to compare disk I/O performance for Windows Server 2008 R2 and CentOS 5.5 virtual machines, measuring each first on a local datastore and then on the ReadyNAS datastore over iSCSI.
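IOzone is driven from the command line. As a sketch of how such a comparison might be invoked, the helper below builds an IOzone argument list for sequential write and read tests; the file sizes, record size, and paths are our assumptions for illustration, not the actual parameters used in this test.

```python
def iozone_args(test_file: str, file_size: str = "512m", record_size: str = "4k"):
    """Build an IOzone command line: -i 0 selects the write/rewrite
    test, -i 1 the read/reread test, -s sets the file size, -r the
    record size, and -f the test file path. Sizes are illustrative."""
    return [
        "iozone",
        "-i", "0",        # test 0: write/rewrite
        "-i", "1",        # test 1: read/reread
        "-s", file_size,
        "-r", record_size,
        "-f", test_file,
    ]


# Run once against a file on the local datastore, once against a file
# on the iSCSI-backed datastore (path below is hypothetical):
print(" ".join(iozone_args("/vmfs/local/testfile")))
```

Comparing the throughput numbers from the two runs gives the iSCSI penalty, if any, for each guest operating system.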