A lot of environments run MySQL and PostgreSQL to support their systems. My team at Nutanix and I have been getting a lot of enquiries about how to set up these databases for best performance, and customers have also been using them to benchmark and baseline different systems. One of the challenges with these databases is that they give only limited control over where data files and transaction logs can be placed, which makes increasing IO parallelism a bit of a challenge. Your database is just an extension of your storage, and all storage devices, even virtual ones, have a limited queue depth to work with. Unlike Oracle, SQL Server, Sybase, DB2, etc., you can't just create a whole bunch of mount points and spread your data files over them (which increases available queue depth and potential IO parallelism). But the solution to this problem is quite simple with Linux LVM (Logical Volume Manager). I'll take you through some of the steps I took to set up a test VM for MySQL testing with HammerDB and PostgreSQL testing with PGBench.
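As a rough sketch of the LVM approach: stripe one logical volume across several virtual disks so that IO to a single database directory fans out over multiple device queues. The device names (/dev/sdb through /dev/sde), volume group and LV names below are assumptions for illustration only; substitute your own, and ideally place each virtual disk on its own virtual SCSI controller.

```shell
# Assumed device names -- replace with the virtual disks in your VM.
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Group the four disks into one volume group for the database.
vgcreate dbvg /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Create a striped logical volume: -i 4 stripes writes across all four
# physical volumes, increasing the aggregate queue depth available to
# the database, with a 64KB stripe size (-I 64).
lvcreate -i 4 -I 64 -n datalv -l 100%FREE dbvg

# Filesystem and mount point for the MySQL data directory (the
# PostgreSQL data directory can be handled the same way).
mkfs.xfs /dev/dbvg/datalv
mkdir -p /var/lib/mysql
mount /dev/dbvg/datalv /var/lib/mysql
```

Because the striping happens below the filesystem, neither MySQL nor PostgreSQL needs to know anything about it; the database still sees a single data directory.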
There is a lot of FUD about data corruption, torn IO, write ordering and other aspects of using NFS as a datastore in VMware vSphere, even when the VMs are configured to use virtual disks. This is surprising, especially given that some very large VMware vSphere based clouds are built on NFS storage presented as datastores for use with VMs, and that for years numerous companies have been running business critical apps on NFS, presented as datastores or otherwise. Many of you may not know that VMware has actually patented the process for presenting NFS as a datastore to VMs that use virtual SCSI disks (US7865663), so that it emulates the SCSI protocol. You also may not know that not all storage systems, even when using block based storage such as FC, FCoE or iSCSI, honour all of the techniques to keep your data safe. A lot of it comes down to the individual storage system implementation. Enterprise storage systems that take data protection seriously and implement the appropriate IO protections are all suitable for running business critical apps, even when presenting NFS for use as a datastore to VMware vSphere. So what do you need to know?
Nutanix Web Scale NoSAN now meets NoDisk. I didn't know that the band Queen could predict the future of IT when I first listened to their song Flash Gordon. But the lyrics I've quoted above seem to suggest they could somewhat predict the future of the storage industry. Flash will undoubtedly have a big impact on IT, even if it is only just starting to penetrate the datacenter now (only a small percentage of total deployed storage is flash). So it is probably no surprise that the Nutanix Web Scale Converged Infrastructure platform would eventually include all-flash options. On top of that we add Metro Availability: metro storage cluster type availability that takes only a few clicks to set up and is significantly simpler to operate and test than traditional metro solutions. So you can have your all flash without compromising on any data services. Of course Metro Availability is just a software feature, so it is available on any of the Nutanix platforms; it will just take a software upgrade once the new version of the Nutanix OS is available (available from 4.1). So why all flash?
I’ve always enjoyed visiting Europe, and every year when I visit Barcelona for VMworld it is special. It might be a smaller event than VMworld in San Francisco, but it lacks nothing in substance, networking opportunities, or announcements. The excitement level in Barcelona appears to be higher than it was in San Francisco. This article presents my thoughts on the keynotes, which had close to 9,000 people in attendance.
I have written about the Oracle FUD when it comes to virtualized environments quite a bit before. Now it appears there is some new FUD circulating that might catch out unsuspecting customers. There is a new Phantom Menace from Oracle. This time it has to do with their interpretation of some new capabilities in VMware vSphere 5.1 and above. As with all the previous FUD it is very easy to combat. You simply and calmly ask your Oracle representative to show you the page in your contract, which is the legally binding and enforceable document that replaces all prior verbal and written agreements, where this new policy exists. It simply does not exist (unless you’ve been suckered into accepting some non-standard wording to your disadvantage). So what is this new FUD? Let’s take a look.
Nutanix has recently published a Best Practice Guide for Microsoft Exchange on VMware vSphere, and Josh Odgers explains some of its contents and the benefits of Exchange on Nutanix in his blog article here. If you are interested in virtualizing Exchange, and/or using Nutanix, you might want to get hold of the guide and have a read through it. It explains how simple it is to set up Exchange on Nutanix, the benefits of doing so, how it compares to a traditional physical JBOD approach, and much more. The paper introduces the capability of running Exchange on the Nutanix NX-8150 nodes, which have been specially designed to run large applications such as Exchange, SQL Server, Oracle and SAP. This is the node type Josh Odgers and I used as part of a design capable of hosting 1.4 million Exchange 2013 mailboxes, which demonstrates the building block architecture of Nutanix and the ability to scale to meet requirements for large environments. Let’s take a look at that design at a high level.
Nutanix is synonymous with Web Scale Converged Infrastructure, which brings a simpler, easier and much faster model for deploying virtual infrastructure from small scale to any scale. Web Scale is really about standardized hardware and simplified systems and operations that are designed to be always on, resilient to failure, and non-disruptively upgraded and maintained. This article will give you a real world example from one of Nutanix’s larger customers, which recently deployed a fairly large number of new Nutanix systems, and all the VMs they needed, in a very short space of time. In this case the customer deployed 240 Nutanix nodes (in groups of up to 32 hosts per cluster) on VMware vSphere, and 5,000 VMs, in just over 2 days. So what does this look like? Take a look at this tweet that was sent out, including photos of the customer’s datacenter.
I’ve recently been working with one of our large customers that has been virtualizing SQL Server and Oracle on Nutanix 6060 nodes. I thought others might like to know the sorts of enterprise scale business critical workloads that are being run on Nutanix. This particular customer still has a lot of room for growth in the environment and, like all Nutanix customers, benefits from non-disruptive upgrades and performance enhancements with each release. They are also by no means at the limits of the capability of the platform, but this is a good example of what can be done for enterprise applications on Nutanix Web Scale Converged Infrastructure, from a real customer that has done it.
As I type this I’m flying 36,000 feet above Australia on my way to Singapore. It’s great now that Singapore Airlines has Wi-Fi on their flights from New Zealand. I’m connected into my virtual desktop back in Auckland and have some performance tests running on my Nutanix 3450 system, using Oracle RAC. The same system also happens to host my virtual desktop and all the supporting VMs. But is my desktop session performance impacted while I’m running a high performance Oracle RAC database test? No. No longer is it necessary to have completely separate silos of resources to support different performance requirements in a virtualized environment. This is the same system on which, just days ago, I upgraded the storage controller firmware and system firmware with a single click, without any downtime at all, without a reboot, without even having to migrate a single virtual machine. This is a new way of operating, a much simpler way, for a new always on world. This is what we at Nutanix call Web-Scale. This new way is even suitable for business critical enterprise applications, such as Oracle databases, even Oracle RAC, which I have been very successfully virtualizing for a long time and at large scale. This is a much easier way to implement, manage and run the applications you need to support, without compromising SLAs, functionality, or performance. Now I’ll share with you a demonstration of this capability in action and also some of the best practices, along with where to get the complete best practices guide.
I had heard the murmurs through the ether that something might be up. But it was at that time an unsubstantiated rumour. I couldn’t really believe that a tier 1 storage company would have an array that required complete data migration / destruction and disruption in order to do a software/firmware upgrade between firmware versions. This isn’t the SDDC you’re looking for. I didn’t see the point in slinging mud over something that might be untrue, or be corrected in time for a GA release (customers are still hoping). Everyone who’s been in the IT game long enough knows that things do go wrong from time to time, despite the best efforts of everyone, but planned data destruction for an upgrade is kinda hard to take in this day and age. This is certainly not the always on, non-disruptive upgrade experience that we’ve all gotten used to, at least some of us. It appears however that the rumours are true, and they’ve been reported by Andrew Dauncey – The Odd Angry Shot XtremIO Gotcha, Chad Sakac – Virtual Geek on Disruptive Upgrade (transparency on this issue is good), El Reg – No Biggie: EMC XtremIO Firmware Upgrade Will Wipe Data, and IT News – Extreme upgrade pain for XtremIO Customers. Upgrading XtremIO from the 2.4 line of code to 3.0 will involve removing all of the data and putting it back after the upgrade completes. That’s right: anything left on the array during the upgrade will, in effect, be lost. Not to mention the required downtime. What’s my take?