A number of you have shown an interest in the relative performance of the different virtual storage adapters that are available in vSphere 5.5. I haven’t seen anything published by VMware, so I thought I’d do my own testing in my home lab. This is a very brief article to share the results I found.

Please note: My testing is not real world. I used IOMeter to drive a high amount of IO through a VM for the sole purpose of measuring the relative performance of different adapters under the same conditions. This was not a test to see how fast the VM itself could go, nor what the likely performance might be in a real world scenario. You can’t rely on these test results for capacity planning or as a basis for virtualizing one of your applications. Your results may vary, and you should verify and validate your own environment and understand your particular application workloads as part of the virtualization process. These results may, however, assist you when choosing between different virtual storage adapters.
To produce the test results in the image below I used IOMeter on a VM with 2 vCPU and 8GB RAM, with different storage adapters and a virtual disk connected to each adapter. I used a 100% random read workload and various IO sizes. To keep this article short I’ve included the data from the 8KB IO size test run; the other tests showed similar relative performance between the different adapters. IOMeter was configured to use a single worker thread and to run a different number of outstanding IOs (OIO) against a single VMDK for each test.

As you can clearly see from the above graph, PVSCSI shows the best relative performance, with the highest IOPS and lowest latency. It also had the lowest CPU usage. During the 32 OIO test, SATA showed 52% CPU utilization vs 45% for LSI Logic SAS and 33% for PVSCSI. For the 64 OIO test CPU utilization was relatively the same. If you are planning on using Windows Failover Clustering you are not able to use PVSCSI, as LSI Logic SAS is the only adapter supported. Hopefully VMware will allow PVSCSI to be used in cluster configurations in the future.
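For readers who want to run a similar comparison themselves, the sketch below shows one way such a test loop could be scripted. This is not the harness used for the article: the IOMeter install path, the .icf file names, and the directory layout are all assumptions, and only IOMeter’s /c (config file) and /r (results file) command-line switches are used.

```python
# Minimal sketch of a harness that steps through outstanding IO (OIO) levels.
# Assumes one pre-built IOMeter .icf config per OIO level (file names are
# hypothetical) for the 8KB 100% random read access spec described above.
import csv
import subprocess
from pathlib import Path

IOMETER = r"C:\Program Files (x86)\Iometer\IOmeter.exe"  # adjust to your install
OIO_LEVELS = [1, 2, 4, 8, 16, 32, 64]                    # outstanding IOs per run

def run_tests(config_dir: Path, result_dir: Path) -> None:
    result_dir.mkdir(parents=True, exist_ok=True)
    for oio in OIO_LEVELS:
        config = config_dir / f"8k_random_read_{oio}oio.icf"   # hypothetical name
        results = result_dir / f"results_{oio}oio.csv"
        # Each run drives the 8KB random read spec at a fixed OIO level.
        subprocess.run([IOMETER, "/c", str(config), "/r", str(results)], check=True)

def summarize(result_dir: Path) -> None:
    # IOMeter writes its summary as CSV; the exact row/column layout varies by
    # version, so this parsing is illustrative only.
    for results in sorted(result_dir.glob("results_*oio.csv")):
        with results.open(newline="") as f:
            rows = list(csv.reader(f))
        print(results.name, f"{len(rows)} rows captured")

if __name__ == "__main__":
    run_tests(Path("configs"), Path("results"))
    summarize(Path("results"))
```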
Final Word
Where possible I recommend using PVSCSI. Before choosing PVSCSI please make sure you are on the latest patches. There have been problems with some of the driver versions in the past, prior to vSphere 5.0 Update 2. VMware KB 2004578 has the details.
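If you decide to move an existing VM to PVSCSI, the snippet below is a minimal pyVmomi sketch of adding a paravirtual SCSI controller to a VM. It is not from the original article: the vCenter host name, credentials, and VM name are placeholders, and you should make sure the guest has the PVSCSI driver (installed with VMware Tools) before moving any disks onto the new controller.

```python
# Minimal pyVmomi sketch: add a PVSCSI controller to an existing VM.
# Host, credentials, and VM name are placeholders for illustration only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_vm(content, name):
    # Walk the inventory for a VM with the given name.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    return next((vm for vm in view.view if vm.name == name), None)

def add_pvscsi_controller(vm, bus_number=1):
    controller = vim.vm.device.ParaVirtualSCSIController()
    controller.busNumber = bus_number
    controller.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.noSharing
    controller.key = -101  # temporary negative key; vCenter assigns the real one

    change = vim.vm.device.VirtualDeviceSpec()
    change.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    change.device = controller

    spec = vim.vm.ConfigSpec(deviceChange=[change])
    return vm.ReconfigVM_Task(spec=spec)  # returns a task; wait on it as needed

if __name__ == "__main__":
    ctx = ssl._create_unverified_context()  # lab use only; verify certs in production
    si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    try:
        vm = find_vm(si.RetrieveContent(), "iometer-test-vm")
        if vm is not None:
            add_pvscsi_controller(vm)
    finally:
        Disconnect(si)
```

Existing virtual disks can then be reattached to the new controller, which is typically done with the VM powered off.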
—
This post appeared on the Long White Virtual Clouds blog at longwhiteclouds.com, by Michael Webster. Copyright © 2014 – IT Solutions 2000 Ltd and Michael Webster. All rights reserved. Not to be reproduced for commercial purposes without written permission.
Hi Michael. There used to be issues where if you used the PVSCSI adapter without considering the workload, performance could actually suffer. Has that been resolved, so we don't have to give the type of workload any thought – just use it?
Yes, that was resolved in vSphere 4.1. See KB 1017652.
Michael,
What underlying storage tier are you using for these VSA tests? The IOPS seem all over the board based on the software drivers. Wondering if everything was constant at the hardware level.
Single VMFS5 datastore on a single Fusion-io ioDrive2 1.2TB MLC PCIe Flash card, using the SCSI driver. All VMDKs were on the same card, with only one VMDK active at a time. Performance results were consistent within an acceptable margin of error over multiple test runs.
[…] VMware vSphere 5.5 Virtual Storage Adapter Performance This is some fascinating research that Michael has done and it has a surprising finish to it as well. I just added a note to my todo list in my lab to check out pvscsi. See the full story here. […]
[…] E1000 driver is not always the best one to use with VMs. There has been some good work lately by Michael Webster to confirm that PVSCSI is a pretty good choice. So we use VMs with E1000 to check the status of […]
[…] Switching to the PVSCSI controller Steve has an interesting article on getting the PVSCSI controller working in a VM. He has trouble making it work in SLES but he figured it out. So good info. He also has some links to other material on PVSCSI. Why this interest in this controller? In a word – Performance. A good article on this can be found here. […]
[…] use this adapter by default in SAN and NAS. I think for home labs maybe not. Here is a good reason why I say […]
[…] Storage Controllers are a critical choice that needs to be correct as the different controllers, you can read more here: More here. […]
[…] for a virtual disk. Michael Webster has a great post demonstrating the performance differences here, and I have a how-to guide that you can use to retrofit your existing I/O-intensive virtual […]
[…] IO’s (OIO) per disk device. Other controllers may have other limits, some can be tuned (see this article), and some can’t. This means, in this example at least, that you can issue a maximum of 32 […]