Although my Monster VM panel was in the top 10 sessions of VMworld 2013, and we did a Monster VM and Business Critical Apps panel for TAM day this year, neither session will be included at VMworld in the USA or Europe. But not to worry. There is plenty of great content at VMworld for everyone to enjoy, and I’ll be there to talk about Monster VMs on vSphere as always, mostly at the Nutanix Booth #1535. This doesn’t mean you have to miss out on all the Monster VM goodness though; you can grab it all online right now, for free, thanks to VMware making some great sessions from the catalog available online through VMworld TV. So below I include two great panel discussions from VMworld 2013 that you can review now to get a better understanding of how to virtualize critical apps and Monster VMs.
The great thing about working for a company that is in the business of making the software-defined datacenter a reality (Nutanix) is that whenever we make a software improvement, every existing customer benefits. They get all the software benefits without changing hardware. This often includes performance improvements, but it also includes manageability and support improvements. Let’s briefly cover some of the great enhancements to one-click upgrade, and the support user experience of the revamped Nutanix customer support portal that was released tonight.
As part of the development of Virtualizing SQL Server with VMware: Doing IT Right (VMware Press), which I co-authored with Michael Corey and Jeff Szastak, I needed to provide guidance around virtual networking. To do this I figured it would be a good idea to do some performance testing of the various virtual network adapters in VMware vSphere 5.5, as there wasn’t much in the way of performance data around. In all, I performed approximately 600 individual test runs. All of the important details and much more (including tuning advice to get optimal performance) can be found in the book, but I thought I’d share some of the highlights of the results with you.
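To give a feel for how that volume of test runs can be driven without hand-running each one, here is a minimal sketch (my own illustration, not the book’s methodology) that repeats an iperf3 throughput test and summarises the results. It assumes iperf3 is installed in both test VMs, an iperf3 server is already listening at the hypothetical TARGET address, and the adapter type for the configuration under test has already been set on the VM.

```python
# A minimal sketch of automating repeated iperf3 throughput runs and
# summarising the results. TARGET and RUNS are placeholders; the real
# testing covered many adapter types and configurations.
import json
import statistics
import subprocess

TARGET = "10.0.0.10"   # hypothetical iperf3 server VM
RUNS = 5               # far fewer than the ~600 runs behind the book

throughputs_gbps = []
for _ in range(RUNS):
    # -J makes iperf3 emit JSON so the result can be parsed rather than scraped
    proc = subprocess.run(
        ["iperf3", "-c", TARGET, "-t", "30", "-J"],
        capture_output=True, text=True, check=True,
    )
    result = json.loads(proc.stdout)
    throughputs_gbps.append(result["end"]["sum_received"]["bits_per_second"] / 1e9)

print(f"mean {statistics.mean(throughputs_gbps):.2f} Gbps, "
      f"stdev {statistics.stdev(throughputs_gbps):.2f} Gbps over {RUNS} runs")
```

Averaging multiple runs per configuration, rather than trusting a single pass, is what keeps results like these comparable across adapter types.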
As more and more companies start to look at hyperconverged, web-scale (such as Nutanix, where I work) or just converged solutions for their virtualization platform, there is a need to make sure you know what you’re looking at and go in with eyes wide open. There are a number of different aspects to evaluate, and none of the options are the same. There are no right or wrong answers, as many different solutions may meet, or be best suited to, your requirements. The idea behind this article is to give you a list of questions to consider and to ask any potential vendor. The list is designed to be vendor neutral; it’s simply there so you understand, based on my experience, what you’re getting for some important aspects.
One of the little-known and infrequently used features of vSphere since version 4.1 is the ability to connect a USB device to an ESXi host and then mount that device to a VM, while still allowing vMotion to work without any problems. This is usually used for USB dongles required for software licensing, but it can be used with a number of other devices. More often these days, USB connectivity is used from a VMware Horizon View client to connect a USB device to a desktop. But if you’re 9000 miles away from your desktop, and you’ve been asked to connect a console cable to a physical network switch in that remote location and run some debug commands, how can you do that? Well, I figured it out, and it makes for a good story.
Back in 2010 I was helping a large company troubleshoot their virtualized SAP environment, which was experiencing instability and performance problems. One thing we noticed was that the buffers on the NICs were periodically overflowing due to the large number of small packets. This was on vSphere 4.0 at the time, with a Windows 2003 64-bit guest OS using VMXNET3. Unfortunately, at that stage the VMXNET3 driver for Windows didn’t support increasing the send or receive buffers, so we had to switch over to E1000 and increase the TX and RX buffers, which resolved the problem (in addition to adding memory reservations to the VMs). Since vSphere 4.1, however, it has been possible to modify the buffers in VMXNET3 to resolve these sorts of issues. I have been hitting the same problem in my home lab and have modified the buffers as a result, and it appears I may not be alone in experiencing this.
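As a rough illustration of what that buffer change looks like in a modern guest, here is a minimal sketch (not from the original Windows 2003 environment) that drives PowerShell from Python on a recent Windows Server guest. The adapter name and the advanced-property display names are assumptions and can vary between VMXNET3 driver versions, so check the adapter’s Advanced tab in Device Manager or Get-NetAdapterAdvancedProperty first.

```python
# A minimal sketch, assuming a recent Windows Server guest (2012 or later)
# where the Set-NetAdapterAdvancedProperty cmdlet is available. Driven from
# Python purely for illustration; the same two commands can be run directly
# in PowerShell, or the values changed in the adapter's Advanced properties.
import subprocess

ADAPTER = "Ethernet0"  # hypothetical adapter name - adjust for your guest

def set_vmxnet3_property(display_name: str, value: str) -> None:
    """Set a VMXNET3 advanced property via PowerShell, raising on failure."""
    command = (
        f'Set-NetAdapterAdvancedProperty -Name "{ADAPTER}" '
        f'-DisplayName "{display_name}" -DisplayValue "{value}"'
    )
    subprocess.run(["powershell", "-NoProfile", "-Command", command], check=True)

# A larger receive ring plus more small buffers helps absorb bursts of small
# packets; the display names below are typical for VMXNET3 but not guaranteed.
set_vmxnet3_property("Rx Ring #1 Size", "4096")
set_vmxnet3_property("Small Rx Buffers", "8192")
```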
Nutanix will once again be making WebScale Waves at VMworld 2014 at Booth #1535. This year we are including a big focus on enterprise applications, including SAP, Oracle, SQL Server, Exchange and Java. I will be spending time at the booth along with many other Nutanix experts and taking 1:1 meetings, in addition to the sessions we’ll be presenting at VMworld. My co-authors and I may also be signing copies of Virtualizing SQL Server with VMware: Doing IT Right at the Nutanix booth, so bring your copy along. To find out how you can learn more about greatly simplifying your IT infrastructure with Nutanix WebScale IT, and where to party at VMworld, read on.
The VMware ESXi hypervisor, like many systems that implement TCP/IP, uses TCP Delayed Acknowledgement to try to improve network efficiency. Essentially, the idea of Delayed Ack is that instead of acknowledging every segment immediately, a system can acknowledge every other full segment received, and must send an acknowledgement within a certain timeout. The actual timeout can vary and be up to 500 ms. While this works well for general communications, consistent streams of traffic, and cases where full TCP segments are being sent, it can have a negative performance impact on IP-based storage systems. The main impact can be increased latency, due to the Delayed Ack timeout, when small IOs are being sent and received. For iSCSI on ESXi this is documented in VMware KB 1002598; for NFS, however, it isn’t as straightforward. I’ll take you through how to disable Delayed Ack for NFS.
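For readers who prefer to script host settings rather than change them by hand, here is a minimal sketch of updating an ESXi advanced option with pyVmomi. The host name and credentials are placeholders, and the option key SunRPC.SetNoDelayedAck is my assumption for the NFS-side switch; the full post walks through the exact setting and procedure, and the iSCSI case is the per-adapter change covered in KB 1002598.

```python
# A minimal sketch of changing an ESXi advanced option with pyVmomi.
# Host name, credentials and the option key are assumptions for illustration;
# certificate verification is disabled for lab use only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi01.lab.local", user="root",
                  pwd="password", sslContext=ctx)

# Walk to the first host (fine for a standalone ESXi connection)
content = si.RetrieveContent()
datacenter = content.rootFolder.childEntity[0]
host = datacenter.hostFolder.childEntity[0].host[0]

# Assumed key: SunRPC.SetNoDelayedAck, with 1 meaning "disable Delayed Ack"
host.configManager.advancedOption.UpdateOptions(changedValue=[
    vim.option.OptionValue(key="SunRPC.SetNoDelayedAck", value=1)
])

Disconnect(si)
```

As with any hidden advanced setting, test the change in a lab and confirm the current value before and after, rather than applying it blindly to production hosts.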
Designing virtual desktop infrastructure can be complex: there are many moving parts, and it’s a business critical application, especially when done at scale. In a traditional infrastructure the consequences of miscalculations can be significant. Fortunately, Nutanix takes away the risk and the complications by providing a standard building-block approach, pay-as-you-grow economics, linear scalability and a known quantity of desktops for the infrastructure. Nutanix and Citrix have now announced the Citrix Validated Solution (CVS) for Hosted Virtual Desktops and Hosted Shared Desktops. This provides a complete end-to-end software and infrastructure solution that is fully supported, tested and validated by both Citrix and Nutanix. Nutanix is one of the few vendors in the world that currently has a validated solution. This article will give you some of the highlights and links to the full details.
I don’t like it when I get Purple Diagnostic Screens, a.k.a. the Purple Screen of Death, or PSOD for short. Fortunately these are fairly rare. However, there is one I came across just recently with a customer running vSphere 5.1 U1, and it is quite nasty. The PSOD was caused by TCP heap exhaustion on an ESXi 5.1 U1 host. The host had recent patches applied, and the usual search of the knowledge base didn’t turn up much. The customer is running NFS, although the symptoms may not be tied only to NFS; any host-based IP storage protocol (NFS or iSCSI) could be impacted. I’ll briefly tell you what we found out, the logs to watch out for, some KBs that will be helpful, and steps you can take to prevent this from happening.