
VMware vSphere 5.5 Virtual Storage Adapter Performance

A number of you have shown an interest in the relative performance of the different virtual storage adapters available in vSphere 5.5. I haven't seen anything published by VMware, so I thought I'd do my own testing in my home lab. This brief article shares the results I found.

Please note: my testing is not real world. I used IOMeter to drive a high amount of IO through a VM for the sole purpose of measuring the relative performance of different adapters under the same conditions. This was not a test of how fast the VM itself could go, nor of the likely performance in a real-world scenario. You can't rely on these test results for capacity planning or as a basis for virtualizing one of your applications. Your results may vary, and you should verify and validate your own environment and understand your particular application workloads as part of the virtualization process. These results may, however, assist you when choosing between different virtual storage adapters.

To produce the test results in the image below I used IOMeter on a VM with 2 vCPU and 8GB RAM, with different storage adapters and a virtual disk connected to each adapter. I used a 100% random read workload at various IO sizes. To keep this article short I've included the data from the 8KB IO size test run; the other tests showed similar relative performance between the adapters. IOMeter was configured to use a single worker thread and run a different number of outstanding IOs (OIO) against a single VMDK for each test.

[Graph: VMware vSphere 5.5 Virtual Storage Adapter Performance – 8KB 100% random read results]

As you can clearly see from the graph above, PVSCSI shows the best relative performance, with the highest IOPS and lowest latency. It also had the lowest CPU usage: during the 32 OIO test, SATA showed 52% CPU utilization vs 45% for LSI Logic SAS and 33% for PVSCSI. For the 64 OIO test, CPU utilization was relatively the same across the adapters. If you are planning on using Windows Failover Clustering you are not able to use PVSCSI, as LSI Logic SAS is the only adapter supported. Hopefully VMware will allow PVSCSI to be used in cluster configurations in the future.
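For context on how the test knobs relate: with a single worker at a fixed queue depth, the outstanding IO count, the measured IOPS, and the average latency are tied together by Little's Law (OIO = IOPS × latency in seconds). The short Python sketch below is my own illustration, not part of the original test harness, and the numbers in the example are made up.

# Little's Law for a storage queue: outstanding IOs = IOPS x latency (s).
# Illustrative helper only; not part of the IOMeter test runs above.

def implied_latency_ms(iops: float, outstanding_io: int) -> float:
    """Average IO latency (ms) implied by a given IOPS at a fixed queue depth."""
    return outstanding_io / iops * 1000.0

# Made-up example: 32 outstanding IOs sustained at 40,000 IOPS
# implies an average latency of 0.8 ms per IO.
print(f"{implied_latency_ms(40_000, 32):.2f} ms")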

Final Word

Where possible I recommend using PVSCSI. Before choosing PVSCSI, please make sure you are on the latest patches. There were problems with some of the driver versions prior to vSphere 5.0 Update 2; VMware KB 2004578 has the details.
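If you decide to standardize on PVSCSI, it helps to know which VMs are still on other controller types. Below is a minimal pyVmomi sketch of my own (not from the original post) that lists each VM's virtual SCSI controller class; the vCenter hostname and credentials are placeholders, and the unverified SSL context is for lab use only.

# Inventory the virtual SCSI controller type of every VM in a vCenter.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",        # placeholder
                  user="administrator@vsphere.local",  # placeholder
                  pwd="changeme",                      # placeholder
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if not vm.config:
            continue  # skip VMs whose config is not accessible
        for dev in vm.config.hardware.device:
            # Matches ParaVirtualSCSIController, VirtualLsiLogicSASController, etc.
            if isinstance(dev, vim.vm.device.VirtualSCSIController):
                print(f"{vm.name}: {type(dev).__name__}")
finally:
    Disconnect(si)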

This post appeared on the Long White Virtual Clouds blog at longwhiteclouds.com by Michael Webster. Copyright © 2014 – IT Solutions 2000 Ltd and Michael Webster. All rights reserved. Not to be reproduced for commercial purposes without written permission.

  1. forbsy
    January 14, 2014 at 2:33 am | #1

    Hi Michael. There used to be issues where, if you used the PVSCSI adapter without considering the workload, performance could actually suffer. Has that been resolved, so we don't have to give the type of workload any thought – just use it?

    • vcdxnz001
      January 14, 2014 at 7:37 am | #2

      Yes, that was resolved in vSphere 4.1. See KB 1017652.

  2. Jeff Drury
    January 14, 2014 at 9:55 am | #3

    Michael,

    What underlying storage tier are you using for these VSA tests? The IOPS seem all over the board based on the software drivers. Wondering if everything was constant at the hardware level.

    • vcdxnz001
      January 14, 2014 at 1:16 pm | #4

      Single VMFS5 datastore on a single Fusion-io ioDrive2 1.2TB MLC PCIe flash card, using the SCSI driver. All VMDKs were on the same card, with only one VMDK active at a time. Performance results were consistent within an acceptable margin of error over multiple test runs.
