I saw a tweet today (shown below) that reminded me of 27 August 2012. That was the day VMware published this article, which demonstrates how VMware vSphere (5.1 at that point) could achieve 1 million IOPS in a single VM. Things have undoubtedly gotten better in vSphere 5.5, and even more so with the recently released vSphere 6.0. Even though the test setup at the time (3 years ago) required two dedicated all-flash arrays for that one VM, it demonstrated clearly that the hypervisor is not a bottleneck to storage performance, and that even using VMFS and traversing multiple layers to the storage and back does not get in the way of high performance. vSphere itself adds so little overhead that it's a great platform for running any workload. This is important because, 3 years on from that test, we have all sorts of things running on top of vSphere as the platform's capabilities keep increasing, including high performance storage controllers, and not just in the variety of hyper-converged platforms, but also in mainline storage vendor arrays. This article, though, is more about demonstrating that vSphere is a great place to run databases, and I've used some results that I produced in my spare time last weekend.
Here is the tweet from Duncan Epping, Office of the CTO at VMware. It shows the path an IO operation might take through a virtual storage appliance. The appliance is just another form of high performance virtual machine which, as we've already covered, is quite at home running on VMware vSphere.
a wise man once said… pic.twitter.com/tMTdGLnTdc
— Duncan Epping (@DuncanYB) March 27, 2015
For my test environment I had a SQL Server 2012 template configured with 2 vCPUs, 32GB RAM and a few hundred GB of disk (4 VMDKs: OS, Data, TempDB, TLog). I decided to use the freely available Dell DVDStore to generate a workload on the databases; it simulates an online DVD store's transactions and is an OLTP-type workload. One of the developers of the Dell DVDStore benchmark is Todd Muirhead from the VMware Performance Engineering team, and I recommend you review his work, especially on databases such as SQL Server and Oracle. Dell DVDStore can be used to test many different types of databases and works with Windows and Linux (a rough sketch of driving a run appears below). Although this is a benchmark and the results aren't necessarily real world, the goal was to demonstrate the scalability of vSphere as a hypervisor.
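For anyone wanting to try something similar, here is a minimal sketch of how a single DVDStore run against a SQL Server VM might be launched and its reported orders per minute captured. The driver executable path, the parameter names (--target, --n_threads, --run_time, --db_size, --warmup_time) and the "opm=" output field are written from memory of the DS2 readme rather than taken from this article, so treat them as assumptions and check them against the documentation shipped with your copy of the kit.

```python
import re
import subprocess

# Assumed DS2 SQL Server driver location and flag names -- verify against
# the readme that ships with your copy of the Dell DVDStore kit.
DS2_DRIVER = r"C:\ds2\sqlserverds2\ds2sqlserverdriver.exe"

def run_dvdstore(target_vm: str, threads: int = 16, run_minutes: int = 10) -> int:
    """Run one DVDStore test against a SQL Server VM and return the last
    orders-per-minute (opm) figure the driver reported."""
    cmd = [
        DS2_DRIVER,
        f"--target={target_vm}",      # SQL Server VM to drive load against
        f"--n_threads={threads}",     # concurrent user threads
        f"--run_time={run_minutes}",  # steady-state run time in minutes
        "--db_size=100GB",            # matches the 100GB database used here
        "--warmup_time=1",            # warm-up minutes excluded from results
    ]
    proc = subprocess.run(cmd, capture_output=True, text=True, check=True)

    # The driver periodically prints a stats line containing "opm=<value>";
    # keep the last one as the steady-state result (format is an assumption).
    opm_values = re.findall(r"opm=\s*(\d+)", proc.stdout)
    return int(opm_values[-1]) if opm_values else 0

if __name__ == "__main__":
    # Hypothetical VM names -- one driver run per SQL Server VM under test.
    for vm in ["sqlvm01", "sqlvm02", "sqlvm03", "sqlvm04"]:
        print(vm, run_dvdstore(vm), "orders per minute")
```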
In this case it was also running on a Nutanix NX3450 Platform (which is what I have available as I work for Nutanix), but the same would be true on any suitable enterprise hardware that is supported on the VMware HCL. This platform includes 4 vSphere 5.5 U2 Hosts (Nodes), each with 256GB RAM, 2 x Intel E5-2650 CPU’s, 2 x 400GB SSD’s and 4 x 1TB HDD’s. This system was 80+% full at the time, and I was using inline compression on all of the datastores as I was running short on space. Compression ratio at the time on my SQL Server Datastore was 2:1 (saving approx 60%), but values vary widely based on workload. Here is an image of the datastore containing my SQL DB’s, this is from the Nutanix PRISM UI.
I used the 100GB Dell DVDStore database size and tested both the scale-up performance of a single VM and the scale-out performance of multiple VMs. I was time limited, so there are many more combinations and configurations that could have been tested, but I thought these were good enough. For each test, all VMs were provisioned fresh from the template and deleted once the test completed, so that a previous test couldn't impact the results of the next. This also allowed the use of VAAI-NAS, another great feature of VMware vSphere (a clone-from-template sketch follows below).
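As an illustration of the "fresh VMs from template for every test" step, here is a rough sketch using the pyVmomi Python bindings for the vSphere API. The vCenter address, credentials, template name, cluster name and VM names are all placeholders, and this is just one way the clones could be scripted, not the exact tooling used for the article; with the vendor VAAI-NAS plugin installed, the clone copy itself is offloaded to the storage.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

def find_obj(content, vimtype, name):
    """Look up a managed object (VM, cluster, ...) by name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

# Placeholder vCenter address, credentials and inventory names.
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    template = find_obj(content, vim.VirtualMachine, "sql2012-template")
    cluster = find_obj(content, vim.ClusterComputeResource, "Cluster01")

    # Clone spec: place the new VMs in the cluster's root resource pool and
    # power them on after cloning.
    spec = vim.vm.CloneSpec(
        location=vim.vm.RelocateSpec(pool=cluster.resourcePool),
        powerOn=True,
        template=False)

    # One fresh SQL Server VM per host for the 4-node scale-out test.
    for i in range(1, 5):
        task = template.Clone(folder=template.parent,
                              name=f"dvdstore-sql-{i:02d}",
                              spec=spec)
        WaitForTask(task)
finally:
    Disconnect(si)
```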
All output is measured in Orders Per Minute (OPM): how many online DVD orders from the DVDStore workload are completed each minute.
First, let's look at the results of a single VM per host (4 VMs total).
From the graph you can see that adding vCPUs to the VM increases orders per minute, and it scales almost completely linearly.
Now let's look at the test with multiple VMs per host, in this case from 1 to 4 VMs per host, each configured with 2 vCPUs.
From this graph you can see that adding VMs delivers almost the same aggregate performance as scaling up a single VM. This shows the efficiency of the VMware vSphere scheduler in giving multiple VMs fair access to resources (a small sanity-check sketch follows below).
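To make the "almost linear" observation concrete, here is a tiny sanity-check sketch that compares measured orders per minute against a perfect-linear baseline. The OPM numbers in it are invented placeholders purely for illustration, not the results shown in the graphs above.

```python
def scaling_efficiency(opm_results):
    """Compare measured orders per minute against perfect linear scaling
    from the smallest configuration (fewest vCPUs or VMs)."""
    base_units = min(opm_results)
    base_opm = opm_results[base_units]
    return {
        units: round(opm / (base_opm * units / base_units), 3)
        for units, opm in opm_results.items()
    }

# Placeholder numbers purely for illustration -- not the results in the graphs.
scale_up = {2: 10_000, 4: 19_500, 8: 38_000}                # vCPUs -> OPM, single VM
scale_out = {1: 10_000, 2: 19_800, 3: 29_000, 4: 38_500}    # VMs per host -> aggregate OPM

print(scaling_efficiency(scale_up))    # values near 1.0 indicate near-linear scaling
print(scaling_efficiency(scale_out))
```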
Final Word
You can see from the above that VMware vSphere is a great place to run enterprise database applications. In this case the tests were done on a Nutanix platform, which means a virtual storage controller was running on each vSphere host. The Controller VM, which creates a distributed file system, was using up to 4 vCPUs during the testing (although configured with 8) and was configured with 32GB RAM. The Controller VM uses VMDirectPathIO to bypass the normal hypervisor IO path and access the physical storage controller and disks directly, using the native storage controller drivers. The resources assigned to the Controller VM were not wasted: they were being used to produce the performance required to service the SQL database VMs, and to save 60% of the storage capacity through inline compression (the cluster was 80+% full, so I needed compression to save space). All of this came from a single 2U appliance containing the highly available, high performance storage, CPU and RAM required to run VMware vSphere and execute the tests. The only other components needed were a couple of standard 10G Ethernet switches. All in all, not bad. VMware customers can have confidence that they can run demanding enterprise databases on their virtualized platforms, so long as the underlying hardware is sufficient to meet their requirements. The hypervisor is not the bottleneck.
—
This post first appeared on the Long White Virtual Clouds blog at longwhiteclouds.com. By Michael Webster. Copyright © 2012 – 2015 IT Solutions 2000 Ltd and Michael Webster. All rights reserved. Not to be reproduced for commercial purposes without written permission.
Wow. This conversation on kernel mode vs VSA must have really hit a nerve at Nutanix :). Josh Odgers, Andre L and yourself have all written articles in the last few days that talk about it. Judging from the information and diagrams used, I think it's fair to say it's in relation to Frank Denneman's article 'Basic elements of the flash virtualization platform – Part 1' – http://frankdenneman.nl/2013/06/18/basic-elements…
I suppose the question I ask myself is this: if guys like Duncan and Frank (who have probably forgotten more about how the vSphere kernel operates than most know) believe that kernel integration introduces less of a performance penalty with respect to IOPS (and, probably more importantly, less latency), then I tend to believe that.
Satyam Vaghani (godfather of VMFS and various VMware storage innovations) probably has a good idea about kernel mode vs VSA for servicing IO. It doesn't go unnoticed that he chose kernel mode integration for his FVP product, rather than depending on VSAs.
I think your test works well in the small, controlled environment that you set up. I would like to see the results of hundreds of dissimilar workloads. With the Nutanix concept of locality (which is great), the local CVM would be servicing a greater number of transactions. It's difficult to believe that scheduling wouldn't start to pose some issues, enough that latency (perhaps going from microseconds to milliseconds, which can be material when dealing with flash resources) might become a problem.
Lastly, how many resources does each CVM take up? Wouldn't it be nicer to distribute those valuable CPU and RAM resources to the workloads in the cluster that could use them? By introducing VSAs I have to size my clusters to account for those resources, instead of leaving them for my various applications. Would it be fair to ask: if Nutanix had had the opportunity during development to use kernel integration instead of CVMs, would they still have chosen CVMs as the best option?
Hi Forbes, some good points you've raised. We actually find that apps running on the same node as the CVM can be more efficient, because their CPUs aren't waiting around as long for IO and consequently get more value from the local controllers. This is also true of other local caching technologies. It also adds more predictability: you know what you're going to get as the architecture scales. The real point is that the hypervisor isn't a bottleneck for IO at all, regardless of the underlying architecture. Even for a SAN device, which has a much longer IO path, the hypervisor isn't a bottleneck. So whether you have something in the kernel or in a VM, the key consideration isn't actually performance. Both approaches use local system resources to service IO, so you're not losing out on those resources, and DRS will still distribute your workloads so they get the resources they require. The platform is also distributed, so you get the benefit of all of those resources across all of the nodes, including not having a dedicated rack of disks that can't run VMs.
In answer to your question about how many resources the Nutanix Controller VM (CVM) takes up: it depends on how it's configured. Being a VM, you have the option to change its size if you desire, so you have control over it. The default configuration is 8 vCPUs and 16GB RAM. It will only use those resources if applications require the performance (proportional to the value it delivers), so under a high load it could use everything assigned. If you want to enable additional features, such as dedupe, you would increase the vRAM to 24GB. Features all have a resource cost, as does running any sort of hyperconverged environment. The important thing is that the benefits outweigh the costs and the overall efficiency is higher. In the experiment I ran for this article, to show how well the DB workloads scale up and out, my CVM was configured with 8 vCPUs and 32GB RAM. The workload is over 70% write and 100% random, so it doesn't benefit from read cache. I saved many TB of storage by using VAAI primitives, and many more TB by using compression, all while seeing great performance. The transactions-per-minute results also compared well with similar tests done on other hyperconverged platforms, demonstrating the value of the approach (while saving 60% of storage using compression and much more using VAAI).
My actual problem with this whole in-kernel debate is that it harms the credibility of the hypervisor as a place to run business critical applications, which is an area I've focused on for 10 years. Building customer confidence to run business critical apps on a hypervisor is not a trivial process, and performance is one area that still constantly comes up. If you can't get performance from a VM on the hypervisor (VSA or otherwise), how can you know you'll get performance for high IO business critical apps? So I think it was a big mistake for VMware to take this path with their messaging. I know through experience of virtualizing some of the largest systems on the planet that the hypervisor is a great place to run high IO apps, and I have confidence in ESXi as a platform for great performance of any app, which is one of the reasons I'm confident it's a great place to run a storage controller. That's in addition to all the reasons I outlined in http://longwhiteclouds.com/2015/02/27/in-kernel-o….