As part of the development of Virtualizing SQL Server with VMware: Doing IT Right (VMware Press), which I co-authored with Michael Corey and Jeff Szastak, I needed to provide guidance around virtual networking. To do this I figured it would be a good idea to do some performance testing of the different virtual network adapters in VMware vSphere 5.5, as there wasn't much performance data around. In all I performed approximately 600 individual test runs. All of the important details and much more (including tuning advice to get optimal performance) can be found in the book, but I thought I'd share some of the highlights of the results with you.
For the test harness I used netperf, as it was easy to use and could do both request/response tests (to measure small transactions per second and latency) and TCP stream tests (to measure throughput). The VMs were configured with Virtual Hardware Version 10, 2 vCPU, 4 GB RAM, and Windows Server 2012. I used 4 VMs in total so that I could run tests between 2 VMs on the same host as well as across hosts. All VMs had 3 network adapters of different types (VMXNET3, E1000, E1000E), each on a different IP subnet. The hardware platform was a Nutanix NX-3450 with Intel Xeon Ivy Bridge processors (E5-2650 v2, 2.6 GHz). Each test run of each combination of options was 60 seconds long, and 3 test runs were executed per combination of configuration options (local host, remote host, 1500 MTU, 9000 MTU, driver tuning, etc.).

All of the tests with results shown were done with interrupt moderation disabled in the VMXNET3 driver. The default setting is interrupt moderation enabled, which is optimized for throughput and lower CPU consumption, whereas I wanted to push performance to the limit and reduce latency. The default setting would usually show more latency, but even lower CPU utilization. For any workloads that are sensitive to latency, interrupt moderation in the VMXNET3 driver should be disabled.
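To give a flavour of how such a test matrix can be automated, here is a minimal sketch of a runner that loops netperf's TCP_STREAM and TCP_RR tests over each adapter type for 60 seconds per run, three runs per combination. The adapter-to-IP mapping is a hypothetical placeholder, and the sketch assumes netperf is in the path and netserver is listening on the target VMs; it is not the exact harness used for the results in the book.

```python
# Minimal sketch of a netperf test matrix runner (placeholder IPs).
import subprocess

# One target IP per virtual adapter type, each on its own subnet (hypothetical).
TARGETS = {
    "vmxnet3": "192.168.10.12",
    "e1000":   "192.168.20.12",
    "e1000e":  "192.168.30.12",
}
TESTS = ["TCP_STREAM", "TCP_RR"]  # throughput and request/response
RUNS_PER_COMBINATION = 3          # three runs per combination of options
DURATION_SECONDS = 60             # each run lasts 60 seconds

def run_netperf(target: str, test: str) -> str:
    """Run a single netperf test against the netserver listening on the target VM."""
    cmd = ["netperf", "-H", target, "-l", str(DURATION_SECONDS), "-t", test]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    for adapter, ip in TARGETS.items():
        for test in TESTS:
            for run in range(1, RUNS_PER_COMBINATION + 1):
                print(f"--- {adapter} {test} run {run} ---")
                print(run_netperf(ip, test))
```

Interrupt moderation itself is a guest-side driver setting: on Windows Server 2012 it appears under the VMXNET3 adapter's advanced properties in Device Manager and can also be changed with the Set-NetAdapterAdvancedProperty PowerShell cmdlet.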
Note: This testing and the results are not based on real world application tests, are provided for informational purposes only, and your results may vary.
Standard 1500 MTU test between two hosts:
As you would expect, VMXNET3 is the clear winner. It offers the lowest CPU usage in this test combined with the highest throughput. This is important as you consolidate multiple high-performance VMs onto the same host.
Jumbo Frames 9000 MTU between two hosts:
In this test VMXNET3 is again the clear throughput leader, although it did use 5% more CPU cycles than E1000E.
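As an aside (not part of the original test runs), jumbo frames only help if every hop between the VMs honours the 9000-byte MTU: the guest adapter, the vSwitch or distributed switch, and any physical switches in between. A quick sanity check from a Windows guest is a don't-fragment ping with an 8972-byte payload (9000 bytes minus 20 bytes of IP header and 8 bytes of ICMP header); the sketch below wraps that in Python, with a placeholder target address.

```python
# Hypothetical end-to-end jumbo frame check from a Windows guest.
# ping -f sets the don't-fragment bit, -l sets the ICMP payload size, -n the count;
# 8972 bytes of payload + 28 bytes of IP/ICMP headers = a 9000-byte packet.
import subprocess

TARGET = "192.168.10.12"  # placeholder IP of the remote test VM

result = subprocess.run(
    ["ping", "-f", "-l", "8972", "-n", "4", TARGET],
    capture_output=True, text=True,
)
print(result.stdout)
if "Packet needs to be fragmented" in result.stdout:
    print("A hop in the path is not configured for a 9000-byte MTU.")
```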
Jumbo Frames 9000 MTU between VM’s on the same host:
In this test VMXNET3 is again the clear winner in terms of throughput. It used more CPU, but in return you got almost double the throughput of E1000 and E1000E. The effective throughput per CPU cycle is much better with VMXNET3.
Standard 1500 MTU request/response between hosts:
In the request/response test VMXNET3 is again the clear leader in terms of transactions per second, even though it used slightly more CPU in this test. This equates to being able to process each request with less latency.
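To make the link between transactions per second and latency concrete: netperf's TCP_RR test keeps a single transaction outstanding at a time by default, so the average round-trip time is roughly the inverse of the transaction rate. The number below is purely illustrative and is not one of the measured results.

```python
# Illustrative only: convert a single-stream TCP_RR transaction rate
# into an approximate average round-trip latency.
transactions_per_second = 20_000  # example value, not a measured result
latency_us = 1_000_000 / transactions_per_second
print(f"~{latency_us:.0f} microseconds average round-trip latency")  # ~50 us
```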
Standard 1500 MTU request/response between VMs on the same host:
VMXNET3 again leads the way, with a significant performance advantage over E1000E and E1000 for request/response between VMs on the same host.
Final Word
In all of the tests VMXNET3 comes out on top, which is why VMware made it a best practice to use VMXNET3. Even though you may have to adjust some settings to get optimal performance, it is worthwhile using VMXNET3 as the default. This raises the question: why is VMXNET3 not the default? I think the answer is probably that it requires VMware Tools to be installed, and its drivers aren't automatically included in all of the operating systems that support it. But once you have VMware Tools, you're good to go. If you want a lot more detail on VMware vSphere network performance, design considerations, and network virtualization with NSX, specifically related to SQL Server and business-critical apps, then check out Virtualizing SQL Server with VMware: Doing IT Right. If you're lucky enough to go to VMworld this year, or vForum Sydney, I'll be there with my co-authors signing copies of the book. As always, your comments and feedback are appreciated.
—
This post first appeared on the Long White Virtual Clouds blog at longwhiteclouds.com. By Michael Webster. Copyright © 2012–2014 IT Solutions 2000 Ltd and Michael Webster. All rights reserved. Not to be reproduced for commercial purposes without written permission.
Hi Michael, Excellent information. I just received my pre-release of your book and I look forward to reading it this weekend. We are planning on virtualizing all MS SQL servers in the coming weeks.
Great news! I hope you enjoy the book and get a lot out of it.
[…] VMware vSphere 5.5 Virtual Network Adapter Performance Michael has done a nice job proving what virtual network adapter we should use. If you just want to know the answer it is VMXNET3 but if you want to learn why and how that was determined check out Michael’s article. […]
Could you provide the same results for testing with RHEL 5 or 6? I see many Red Hat admins sticking to the old E1000 instead of VMXNET3.
[…] NIC choices are also an integral part, but if you choose the wrong one, you may be harming your throughput. Again Michael has done up another great post here. […]
[…] also check out some of Mike's SQL threads as well. And, if you need to hit Michael up on Twitter. VMware vSphere 5.5 Virtual Network Adapter Performance | Long White Virtual Clouds […]
Hi Mike
Thank you for the great post. I have used one of the images from the above post on my blog (it is just a free WordPress blog):
https://theamvj.wordpress.com/vmware/
Hope that is okay with you. I have mentioned the source of the image.
Thanks
amvj
No problem. Thanks for linking back to the source.
[…] to find the optimal combination. This could be seen as mundane. For example, in my article VMware vSphere 5.5 Virtual Network Adapter Performance I performed over 2,000 (two thousand) individual combinations of tests to get the results, each […]