VMware recently published a record-breaking network performance test in which a single ESXi host showed close to line-rate throughput over 8 x 10Gb/s NICs. The single host achieved close to 80Gb/s of throughput using the standard MTU size (1500 bytes), 16 VMs (each with 1 vCPU and 2GB RAM), and 8 vSwitches, on top of a Dell R820 physical platform. Not only that, the test also showed 7.5M packets per second (PPS), which is a very high packet rate for a single host. While not a realistic real-world workload, this demonstrates the power of vSphere and modern server hardware. This is a very impressive result in my opinion.
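As a rough sanity check (my own back-of-the-envelope arithmetic, not figures from the VMware article), you can relate the reported throughput and packet rate to an average packet size. At roughly 80Gb/s and 7.5M PPS the average packet works out to around 1,330 bytes, which is plausible for TCP traffic on a 1500-byte MTU once ACKs and partially filled segments are taken into account:

```python
# Back-of-the-envelope check relating throughput, packet rate and packet size.
# The throughput and PPS figures are from the article; the rest is my own estimate.

throughput_gbps = 80.0      # aggregate throughput reported (close to line rate)
packet_rate_mpps = 7.5      # reported packet rate in millions of packets per second

bits_per_second = throughput_gbps * 1e9
packets_per_second = packet_rate_mpps * 1e6

# Average bytes per packet implied by the reported numbers.
avg_packet_bytes = bits_per_second / packets_per_second / 8
print(f"Average packet size: {avg_packet_bytes:.0f} bytes")      # ~1333 bytes

# For comparison, the packet rate if every packet were a full 1500-byte frame.
full_mtu_pps = bits_per_second / (1500 * 8) / 1e6
print(f"PPS if all frames were full MTU: {full_mtu_pps:.1f} Mpps")  # ~6.7 Mpps
```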
So what did the test not show? The test results in the article made no comment about the maximum throughput of a single vSwitch or a single VM, or why 16 VMs were used across 8 vSwitches. Jumbo Frames were not used in the tests either, so we can't see what benefit, if any, they would have provided. We also don't get to see what, if any, performance improvements have been made for UDP traffic, as all of the traffic in the test was TCP based. Nor did the test cover the performance that could be expected when adding other solutions, such as NSX or vCNS App, on top of the host. As we move to ever more software-defined networks and network overlay technologies are adopted far more widely, covering the performance of these solutions will be vitally important. Hopefully this type of information will be made available in the future.
You will notice when you read the full article, titled Line-Rate Performance with 80GbE and vSphere 5.5, that the CPU usage on the host was very high. The host itself has 4 x 8-core E5-4650 CPUs at 2.7GHz. The 16 x 1 vCPU VMs would have taken up the equivalent of two sockets, leaving the remaining two sockets to process everything else on the host. From the graph at the bottom of the article you can see that roughly 90% of the host's CPU resources were used to process receive traffic, versus around 60% for send traffic. I would think that Jumbo Frames would have made a big difference to the CPU utilization on the host, if nothing else.
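To illustrate why I'd expect Jumbo Frames to help, here is a rough sketch (my own estimate, not anything measured in the article) of how the packet rate, and with it the per-packet processing overhead, drops when the MTU goes from 1500 to 9000 bytes at the same throughput. It assumes packets are close to full MTU size, which a real TCP mix won't be, so treat it as an upper bound on the reduction rather than a prediction:

```python
# Rough estimate of the packet-rate reduction from Jumbo Frames at constant throughput.
# Assumes full-MTU frames, so this is an upper bound rather than a measured result.

throughput_gbps = 80.0

def packets_per_second(mtu_bytes: int, gbps: float) -> float:
    """Packets per second needed to carry `gbps` of traffic using full-size frames."""
    return gbps * 1e9 / (mtu_bytes * 8)

standard = packets_per_second(1500, throughput_gbps)
jumbo = packets_per_second(9000, throughput_gbps)

print(f"MTU 1500: {standard / 1e6:.1f} Mpps")   # ~6.7 Mpps
print(f"MTU 9000: {jumbo / 1e6:.1f} Mpps")      # ~1.1 Mpps
print(f"Roughly {standard / jumbo:.0f}x fewer packets to process per second")
```

Fewer packets per second means fewer interrupts, fewer vSwitch forwarding decisions, and less per-packet TCP/IP processing for the same amount of data, which is where I'd expect the CPU savings to come from.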
Final Word
Hopefully we'll get to see some results with Jumbo Frames used in the future. But until then, at least you know you can push a host to 80Gb/s with 8 x 10Gb/s NICs, or probably with 2 x 40Gb/s NICs now that 40GbE is supported on vSphere 5.5. For those ultra-heavy, network-throughput-intensive applications that need a high packet rate, vSphere 5.5 will be a great platform.
—
This post first appeared on the Long White Virtual Clouds blog at longwhiteclouds.com, by Michael Webster +. Copyright © 2013 – IT Solutions 2000 Ltd and Michael Webster +. All rights reserved. Not to be reproduced for commercial purposes without written permission.
Hello,
Interesting …
It will also be interesting to do the same test with vShield Edge (NSX).
It will indeed be interesting to do the same test with Edge. Once I get my vCNS or NSX lab setup running I'll repeat the tests. I'll also repeat the vMotion Jumbo Frames tests on 5.5 once my full lab is upgraded.