With all of the excitement around vSphere 5.1 last year, most of us forgot to mention the little 2012 Christmas present that VMware left for customers still running vSphere 5.0. What I’m referring to is vSphere 5.0 Update 2. Now that Partner Exchange in Las Vegas is over, I’ve had time to revisit this release and its importance for customers running business critical apps.
Some time ago I wrote about the IO Blazing Datastore Performance with Fusion-io that I was able to achieve with a single VM connected to a single VMFS volume using thin provisioned VMDKs. Since then a new version of ESXi 5.0 has been released (U2), and new drivers and firmware have come out for Fusion-io. I was provided an additional Fusion-io ioDrive1 card to sit alongside the ioDrive2 (both MLC based), and Micron sent me one of their PCIe cards to test (SLC based). So I thought I’d reset the baseline benchmarks with a single VM utilizing all of the hardware I’ve got and see what performance I could get. I was suitably impressed with max IOPS > 200K and max throughput > 3GB/s in different tests (see graphs below). This baseline will feed into further testing of ioTurbine for Oracle and MS SQL Server, which I will write about once that testing is complete.
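As a rough sanity check on numbers like these, remember that IOPS and throughput are tied together by the IO block size each test uses, which is why the peak IOPS and peak throughput come from different tests. A quick back-of-the-envelope calculation (the block sizes below are illustrative, not my actual test parameters):

```python
# Relationship between IOPS, block size, and throughput:
#   throughput (bytes/s) = IOPS * block size (bytes)
# Block sizes here are illustrative only -- the actual tests used
# their own mix of block sizes and queue depths.

def throughput_gbps(iops, block_size_kb):
    """Throughput in GB/s for a given IOPS rate and block size."""
    return iops * block_size_kb * 1024 / 1e9

def iops_for(throughput_gb, block_size_kb):
    """IOPS needed to sustain a given throughput at a given block size."""
    return throughput_gb * 1e9 / (block_size_kb * 1024)

# 200K IOPS at a small 4KB block size is still under 1 GB/s ...
print(round(throughput_gbps(200_000, 4), 2))   # 0.82 GB/s
# ... while 3 GB/s at a large 512KB block size needs far fewer IOPS.
print(round(iops_for(3, 512)))                 # 5722 IOPS
```

This is why a single "IOPS" or "GB/s" headline number means little without the block size alongside it.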
Some time ago I wrote an article about EMC’s Blueprint for Successful Large Scale Oracle Virtualization on vSphere. Now Cisco IT has published a similar whitepaper and study after virtualizing a large number of their corporate Oracle databases on top of their Unified Computing System (UCS) platform. The results are quite impressive in my opinion, and you may be able to learn a lot from their effort. The difference here is that Cisco tested with NFS and Oracle Direct NFS (dNFS), not Fibre Channel (as in the EMC case study).
I recently noticed a VMware KB article describing how vSphere 5.x hosts may fail to mount an ATS-only VMFS datastore on some storage arrays.
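For context, the usual first step in this situation is to check whether the volume is flagged ATS-only, and the KB-style workaround is to clear that flag on the datastore’s head extent so the volume falls back to SCSI reservations. A sketch from memory of the relevant ESXi shell commands (the datastore name and device ID are placeholders; verify the exact syntax and procedure against the KB article itself before running anything):

```shell
# Check the lock mode of a VMFS volume -- an ATS-only volume reports
# "ATS-only" in the Mode line of the verbose output.
# "my-datastore" is a placeholder name.
vmkfstools -Ph -v1 /vmfs/volumes/my-datastore

# Workaround as I recall it from the KB: clear the ATS-only flag on the
# head extent so the volume mounts using SCSI reservations instead.
# Do this with no VMs running on the datastore; the naa device ID below
# is a placeholder.
vmkfstools --configATSOnly 0 /vmfs/devices/disks/naa.600000000000000000000001:1
```

Treat this purely as orientation; the KB documents the supported procedure and the array firmware levels it applies to.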
Hypervisor competition is really starting to heat up. VMware just released vSphere 5.1, and Microsoft has recently released Windows Server 2012 and the new version of Hyper-V. A significant new feature now available in Hyper-V / Windows Server 2012 is a new virtual disk format, VHDX, which has a maximum size of 64TB. With the new filesystem in Windows Server 2012 (ReFS), the maximum volume size increases to 256TB (NTFS was limited to 16TB at a 4K cluster size). So how do vSphere 5.0 and 5.1 compare, and what are the key considerations and gotchas? What are the implications for business critical applications? Read on to find out.
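The NTFS figure quoted above falls straight out of its 32-bit cluster addressing: the maximum volume size is the cluster count limit multiplied by the cluster size, so a bigger cluster size raises the ceiling. A quick sketch of that arithmetic:

```python
# NTFS addresses clusters with a 32-bit counter, so in round numbers:
#   max volume size = 2^32 clusters * cluster size
TB = 1024 ** 4
KB = 1024

def ntfs_max_volume_tb(cluster_size_kb):
    """Approximate NTFS maximum volume size in TB for a cluster size in KB."""
    return (2 ** 32) * cluster_size_kb * KB / TB

print(ntfs_max_volume_tb(4))    # 16.0  -> 16TB at the default 4K cluster size
print(ntfs_max_volume_tb(64))   # 256.0 -> 256TB at a 64K cluster size
```

ReFS removes the need for that cluster-size trade-off, which is how Windows Server 2012 gets to 256TB volumes without giving up small allocation units.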
In my previous article “The Good, The Great and the Gotcha with Multi-NIC vMotion in vSphere 5” I discussed an issue that could cause unicast port flooding. One of my large financial customers has come up with a workaround for this problem. The workaround is unsupported, but it might do the trick until the official fix is available.
Thanks to Simon Williams (@simwilli) from Fusion-io I’ve had the opportunity to try out a couple of the Fusion-io ioDrive2 1.2TB MLC cards over the past few weeks. I was also provided with ioTurbine software, which, combined with an in-guest driver, acts as a massive read cache while still supporting vMotion. ioTurbine’s objective is to let you consolidate many more systems on the same server without having to assign large amounts of RAM as IO cache to get acceptable performance. This article will focus on the raw IO performance when the Fusion-io ioDrive2 cards are used as a datastore. I will follow up with another article on ioTurbine used with Linux to test high performance Oracle databases.
One of the features many people may not be aware of that was released in vSphere 5 is Multi-NIC vMotion. This feature allows you to load balance one or more vMotion transmissions over multiple physical NICs. This is of significant benefit when you’ve got VMs and hosts with large amounts of memory, as vMotion migrations will complete significantly faster. So your business critical applications with large amounts of memory and many vCPUs can now migrate without disruption even faster. Below I’ll briefly cover the good and the great of this technology, along with a gotcha that you need to be aware of.
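To see why this matters so much for large VMs, a rough back-of-the-envelope model helps: the best-case memory pre-copy time is roughly the VM’s memory size divided by the aggregate vMotion bandwidth. This deliberately ignores page re-dirtying, protocol overhead, and other real-world factors, and the figures below are illustrative rather than measured:

```python
# Rough best-case time to copy a VM's memory over N vMotion-enabled NICs.
# Ignores dirty-page re-copies and protocol overhead -- illustrative only.

def vmotion_copy_seconds(memory_gb, nic_gbps, nic_count):
    """Best-case seconds to transfer memory_gb over nic_count links of nic_gbps."""
    gigabits = memory_gb * 8  # gigabits of memory to move
    return gigabits / (nic_gbps * nic_count)

# A hypothetical 128GB VM over one 10GbE link vs. two 10GbE links:
print(vmotion_copy_seconds(128, 10, 1))  # 102.4 seconds
print(vmotion_copy_seconds(128, 10, 2))  # 51.2 seconds
```

Even with all the caveats, the aggregate-bandwidth scaling is the point: doubling the NICs roughly halves the best-case copy time, which is exactly where monster VMs benefit.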
One of my fellow vExperts, Prasenjit Sarkar, has recently published a blog article titled Virtualizing BCA – What about application IO characteristics. I recommend taking a look at it, as it gives a good overview of many of the storage considerations for Business Critical Applications. There are a few things I feel are also important over and above what is mentioned in the article, and these may have a significant impact on your architecture design and application performance. Here I’ll cover some things you must consider to provide a solid storage design for your most critical systems.
The vSphere 5 Security Guide has been officially released. There are a number of changes and enhancements, and you should review each one for its applicability to your environment and compare it against the vSphere 4.1 Hardening Guide. There have also been some significant changes since the public draft that are worth taking the time to review.