VMware recently published a record-breaking network performance test in which a single ESXi host achieved close to line-rate throughput over 8 x 10Gb/s NICs. The host pushed close to 80Gb/s of throughput using the standard MTU size (1500 bytes), 16 VMs (each with 1 vCPU and 2GB RAM), and 8 vSwitches, on top of a Dell R820 physical platform. Not only that, the test showed 7.5M packets per second, which is a very high packet rate for a single host. While not a realistic real-world workload, this demonstrates the power of vSphere and modern server hardware. It is a very impressive result in my opinion.
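To put those numbers in perspective, here is a rough back-of-the-envelope check (a sketch only, assuming full-sized frames and ignoring protocol overhead and ACK traffic) of how ~80Gb/s at standard MTU translates into millions of packets per second:

```python
# Back-of-the-envelope sketch: how the reported ~80 Gb/s and ~7.5M packets
# per second relate to each other. Assumes full 1500-byte frames and ignores
# header overhead and ACK traffic, so treat the numbers as illustrative only.

THROUGHPUT_BPS = 80e9   # ~80 Gb/s aggregate throughput reported in the test
MTU_BYTES = 1500        # standard MTU used in the test

# Packets per second if every packet carried a full MTU-sized frame
pps_at_full_mtu = THROUGHPUT_BPS / (MTU_BYTES * 8)
print(f"PPS at full 1500B frames: {pps_at_full_mtu / 1e6:.1f} M")  # ~6.7M

# The reported 7.5M PPS implies an average packet a bit smaller than the MTU
avg_packet_bytes = THROUGHPUT_BPS / (7.5e6 * 8)
print(f"Implied average packet size at 7.5M PPS: {avg_packet_bytes:.0f} B")  # ~1333B
```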
So what did the test not show? The results in the article made no comment about the maximum throughput of a single vSwitch or a single VM, or why 16 VMs were spread across 8 vSwitches. Jumbo Frames were not used in the tests either, so we can't see what benefit, if any, they would have made. We also don't get to see what performance improvements, if any, have been made for UDP traffic, as all the traffic in the test was TCP based. Nor did the test cover the performance that could be expected when adding other solutions on top of the host, such as NSX or vCNS App. As we move to an ever more software-defined network and network overlay technologies are adopted far more widely, covering the performance of these solutions will be vitally important. Hopefully this type of information will be made available in the future.
You will notice when you read the full article, titled Line-Rate Performance with 80GbE and vSphere 5.5, that the CPU usage on the host was very high. The host itself has 4 x 8-core E5-4650 CPUs at 2.7GHz. With 16 x 1 vCPU VMs, the VMs would have taken up two sockets, leaving the remaining sockets to process everything else on the host. From the graph at the bottom of the article you can see that roughly 90% of the host's CPU resources were used to process receive traffic, versus around 60% for send traffic. I would think that Jumbo Frames would have made a big difference to the CPU utilization on the host, if nothing else.
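A rough illustration of why Jumbo Frames should help (a sketch only, assuming the same aggregate throughput, packets filled to the MTU, and per-packet processing cost dominating CPU usage): at MTU 9000 the host has roughly six times fewer packets to process per second for the same amount of data.

```python
# Illustrative sketch: packet rate at the same ~80 Gb/s throughput for
# standard vs jumbo MTU. Assumes packets are filled to the MTU; real traffic
# would differ, but the ratio shows why per-packet CPU cost drops sharply.

THROUGHPUT_BPS = 80e9

for mtu_bytes in (1500, 9000):
    pps = THROUGHPUT_BPS / (mtu_bytes * 8)
    print(f"MTU {mtu_bytes}B -> ~{pps / 1e6:.1f} M packets/sec")

# MTU 1500B -> ~6.7 M packets/sec
# MTU 9000B -> ~1.1 M packets/sec (about 6x fewer packets to process)
```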
Final Word
Hopefully we'll get to see some results with Jumbo Frames in the future. But until then, at least you know you can push a host to 80Gb/s with 8 x 10Gb/s NICs, or probably with 2 x 40Gb/s NICs now that 40GbE is supported on vSphere 5.5. For those ultra heavy network throughput applications that need a high packet rate, vSphere 5.5 will be a great platform.
—
This post first appeared on the Long White Virtual Clouds blog at longwhiteclouds.com.