Chris Wahl over at the WahlNetwork has just posted a great article on Virtualization Phase 2 – Adventures in VBCA (Virtualizing Business Critical Applications). It is definitely worth reading. One thing I find interesting is how different regions of the world are at different stages of their virtualization journey. Australia and New Zealand, along with a few other markets, have been in Virtualization Phase 2, VBCA, for a couple of years already (I’ve been focused on it since 2007), yet in much of the world it is only just getting started. Many other developed markets are now starting to virtualize the easier business critical applications, including Unix to VMware migrations. The opportunity for customers, partners and VMware is massive.
Some time ago I wrote about the IO Blazing Datastore Performance with Fusion-io that I was able to achieve with a single VM, connected to a single VMFS volume, using thin provisioned VMDKs. Since then a new update of ESXi 5.0 has been released (U2), along with new drivers and firmware for Fusion-io. I was provided an additional Fusion-io ioDrive1 card to go alongside the ioDrive2 (both MLC based), and Micron sent me one of their PCIe cards to test (SLC based). So I thought I’d reset the baseline benchmarks with a single VM utilizing all of the hardware I’ve got and see what performance I could get. I was suitably impressed: max IOPS > 200K and max throughput > 3GB/s in different tests (see graphs below). This baseline will feed into further testing of ioTurbine for Oracle and MS SQL Server, which I will write about once it’s done.
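It's worth remembering that peak IOPS and peak throughput come from different workload profiles, which is why the two records above were set in different tests. The quick arithmetic below illustrates the relationship; the block sizes are my illustrative assumptions, not the actual benchmark parameters.

```python
# Rough arithmetic relating IOPS, block size, and throughput.
# The block sizes here are illustrative assumptions only, not the
# parameters used in the actual benchmarks.

def throughput_gb_s(iops: float, block_kib: float) -> float:
    """Throughput in GB/s for a given IOPS rate and block size (KiB)."""
    return iops * block_kib * 1024 / 1e9

# ~200K IOPS at a small 4 KiB block size is well under 1 GB/s...
small_block = throughput_gb_s(200_000, 4)   # ~0.82 GB/s

# ...so a >3 GB/s result implies much larger blocks at a far lower
# IOPS rate, e.g. 256 KiB blocks need only ~11.4K IOPS to hit 3 GB/s.
iops_needed = 3e9 / (256 * 1024)            # ~11,444 IOPS
```

In other words, small-block random tests chase the IOPS record while large-block sequential tests chase the bandwidth record, and no single run maxes out both.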
The people in VMware Technical Marketing and Engineering have been busy as usual and have recently published an excellent, deep paper on the VMware vSphere 5.1 CPU Scheduler. It updates the previous papers written on the topic. Getting the most out of your CPUs and tuning your environment for peak CPU performance starts here.
Thanks to Simon Williams (@simwilli) from Fusion-io I’ve had the opportunity to try out a couple of Fusion-io ioDrive2 1.2TB MLC cards over the past few weeks. I was also provided with the ioTurbine software, which, combined with an in-guest driver, acts as a massive read cache while still supporting vMotion. ioTurbine’s objective is to let you consolidate many more systems on the same server without having to assign lots of RAM as IO cache to get acceptable performance. This article focuses on the raw IO performance when the Fusion-io ioDrive2 cards are used as a datastore. I will follow up with another article on ioTurbine with Linux, testing high-performance Oracle databases.
One of the features released in vSphere 5 that many people may not be aware of is Multiple-NIC vMotion. It allows you to load balance one or more vMotion transmissions over multiple physical NICs. This is of significant benefit when you’ve got VMs and hosts with large amounts of memory, as vMotion migrations will complete significantly faster. So your business critical applications with large amounts of memory and many CPUs can now migrate without disruption even faster. Below I’ll briefly cover the good and the great of this technology, and also a gotcha that you need to be aware of.
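To see why this matters for large-memory VMs, consider a back-of-the-envelope estimate of the memory copy phase. The sketch below assumes roughly 1.25 GB/s of usable bandwidth per 10GbE NIC and ignores page dirtying, protocol overhead, and host contention, so it is an illustration of the scaling rather than a prediction of real migration times.

```python
# Back-of-the-envelope estimate of the vMotion memory copy phase.
# Assumes ~1.25 GB/s usable per 10GbE NIC; ignores page dirtying,
# protocol overhead, and host contention -- illustration only.

def copy_seconds(memory_gib: float, nics: int, gb_per_nic: float = 1.25) -> float:
    """Time to copy a VM's memory once over `nics` load-balanced NICs."""
    memory_bytes = memory_gib * 1024**3
    return memory_bytes / (nics * gb_per_nic * 1e9)

one_nic = copy_seconds(512, 1)    # ~440 s for a 512 GiB VM on one NIC
two_nics = copy_seconds(512, 2)   # ~220 s -- roughly half with two NICs
```

The real win is bigger than the raw numbers suggest: a faster copy phase means fewer memory pages are dirtied during the transfer, so fewer re-copy passes are needed before the switchover.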
I’m not sure if I’m lucky or just a glutton for punishment, but I have picked up a third breakout session slot at VMworld, and I will be co-presenting with Mark Achtemichuk (@vmMarkA on Twitter), a fellow VCDX and performance specialist in the VMware Technical Marketing team. The session is APP-BCA1624 – Virtualizing Oracle: An Architectural and Performance Deep Dive. If you have an interest in virtualizing Oracle and getting the best performance, this is a session not to be missed. Here is the session abstract to whet your appetite.