Back in 2010 I was helping a large company troubleshoot their virtualized SAP environment, which was suffering instability and performance problems. One thing we noticed was that the buffers on the NICs were periodically overflowing due to the large number of small packets. This was on vSphere 4.0, with Windows Server 2003 64-bit guests using VMXNET3. Unfortunately, at that stage the VMXNET3 driver for Windows didn't support increasing the send or receive buffers, so we had to switch over to E1000 and increase the TX and RX buffers there, which (along with adding memory reservations to the VMs) resolved the problem. Since vSphere 4.1, however, it has been possible to modify the buffers on VMXNET3 to resolve these sorts of issues. I have been seeing the same symptoms in my home lab and have modified the buffers accordingly.
I thought this was just a quirk of my lab environment. But after reading Michael White's newsletter and VMware KB 2039495 (Large packet loss at the guest OS level on the VMXNET3 vNIC in ESXi 5.x / 4.x), it appears I'm not alone. Fortunately it is easy to make the necessary modifications to the buffers and resolve the majority of the packet loss issues, as follows:
- Click Start > Control Panel > Device Manager.
- Right-click vmxnet3 and click Properties.
- Click the Advanced tab.
- Click Small Rx Buffers and increase the value. The default value is 512 and the maximum is 8192.
- Click Rx Ring #1 Size and increase the value (repeat for Rx Ring #2). The default value is 1024 and the maximum is 4096.
In my environment I've also set Large Rx Buffers to 8192 and Tx Ring Size to 4096.
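If you have more than a handful of VMs to change, the same settings can be applied from inside the guest with PowerShell on newer Windows versions (Server 2012 and later). This is a minimal sketch, not a tested one-size-fits-all script: the adapter name "Ethernet0" is a placeholder, and the advanced-property display names are assumed to match what Device Manager shows, which can vary by driver version, so list them first.

```powershell
# Confirm the exact display names on your VMXNET3 driver version first;
# the names used below are taken from the Device Manager UI and may vary.
Get-NetAdapterAdvancedProperty -Name "Ethernet0" |
    Format-Table DisplayName, DisplayValue

# Check whether the guest is actually discarding received packets
# before and after the change.
Get-NetAdapterStatistics -Name "Ethernet0" |
    Select-Object ReceivedDiscardedPackets, OutboundDiscardedPackets

# Raise the buffers and ring sizes to their maximums.
# "Ethernet0" is a placeholder adapter name; adjust for your guest.
Set-NetAdapterAdvancedProperty -Name "Ethernet0" -DisplayName "Small Rx Buffers" -DisplayValue "8192"
Set-NetAdapterAdvancedProperty -Name "Ethernet0" -DisplayName "Rx Ring #1 Size" -DisplayValue "4096"
Set-NetAdapterAdvancedProperty -Name "Ethernet0" -DisplayName "Rx Ring #2 Size" -DisplayValue "4096"
Set-NetAdapterAdvancedProperty -Name "Ethernet0" -DisplayName "Large Rx Buffers" -DisplayValue "8192"
Set-NetAdapterAdvancedProperty -Name "Ethernet0" -DisplayName "Tx Ring Size" -DisplayValue "4096"
```

Bear in mind that larger buffers consume additional guest memory, which is another reason the memory reservation discussed below is worth considering.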
If you suspect that your virtual machines may be dropping packets, you should consider adjusting the RX and TX buffers. This may well improve performance and, more importantly, application stability. In addition to increasing the buffers, for a very important app you may also need to reserve the VM's memory to ensure it always gets the resources it needs.
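For the memory reservation piece, here is a minimal PowerCLI sketch. The vCenter address and VM name are placeholders for illustration; it reserves the VM's full configured memory.

```powershell
# Connect to vCenter; the server address is a placeholder.
Connect-VIServer -Server "vcenter.example.com"

# Reserve all of the VM's configured memory so ballooning and swapping
# can't take resources away from a latency-sensitive application.
$vm = Get-VM -Name "sap-app01"   # placeholder VM name
Get-VMResourceConfiguration -VM $vm |
    Set-VMResourceConfiguration -MemReservationMB $vm.MemoryMB
```

Reserving the full configured memory is the conservative choice for a critical app; a partial reservation is also possible if host memory is tight.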
Final Word
In most cases the default settings are fine. In some cases adjustments are needed, and if you are experiencing this problem, this is one of them. There is no patch as such to address this problem at this time, but hopefully VMware will make improvements to its drivers and IP stack in future versions of vSphere.
—
This post first appeared on the Long White Virtual Clouds blog at longwhiteclouds.com. By Michael Webster. Copyright © 2012 – 2014 IT Solutions 2000 Ltd and Michael Webster. All rights reserved. Not to be reproduced for commercial purposes without written permission.
[…] Packet loss at Guest OS Level in ESXi when using VMXNET3 Michael wrote about this recently in this. If you have a busy server and there are drops in the network you may need to make a change in […]
I ran into this issue when virtualizing our Exchange 2010 environment. Even adjusting the ring buffers with support reduced, but did not remove, the large packet loss. We had hoped the previous issues with the VMXNET3 adapter had been resolved, but alas that was not the case.
In the end we had to move back to an E1000 adapter.
Hi Karl, thanks for the feedback, that's good to know. Hopefully VMware fixes these issues they've been having in their network stack for some time in the next release and in upcoming bug fixes. It's quite frustrating when the networking in your VMs goes wrong, as it's quite hard to troubleshoot, especially when there are no signs that anything is obviously wrong.
Karl, we had the same problem in our environment with both the Exchange and Domain Controller VMs. Even with the buffer change we are still seeing problems. We were worried about moving back to the E1000 with all of the other problems that were happening during that time. http://kb.vmware.com/selfservice/microsites/searc…
Hi Michael, did you have the same problem with Linux guests (RHEL, CentOS)? Thanks
[…] out on top, this is why VMware made it a best practice to use VMXNET3. Even though you may have to adjust some settings to get optimal performance it is worthwhile using VMXNET3 as the default. This begs the question, why is VMXNET3 not the […]