Nutanix Web Scale NoSAN now meets NoDisk. I didn't know that the band Queen could predict the future of IT when I first listened to their song Flash Gordon, but the lyrics I've quoted above suggest they could somewhat predict the future of the storage industry. Flash will undoubtedly have a big impact on IT, even though it is only just starting to penetrate the datacenter (only a small percentage of total deployed storage is flash). So it is probably no surprise that the Nutanix Web Scale Converged Infrastructure platform would eventually include all-flash options.

On top of that we add Metro Availability: metro-storage-cluster-style availability that takes only a few clicks to set up and is significantly simpler to operate and test than traditional metro solutions. So you can go all flash without compromising on any data services. Because Metro Availability is purely a software feature, it is available on any Nutanix platform; it just takes a software upgrade once the new version of the Nutanix OS ships (available from NOS 4.1). So why all flash?
Scale-Out Storage Processing Power with Your Flash:
Flash requires storage processing power to drive IOPS and performance. Why put all of your flash behind just two controllers? Dual-controller architectures cannot sufficiently drive large amounts of flash: each controller has limited performance, and you need to run at only 50% utilization to ensure performance remains available during maintenance and failures. By spreading flash devices across many controllers, you can drive higher aggregate performance, and that performance keeps increasing as you scale out the number of controllers, with no technical upper limit. All of this within a single datastore, namespace, and management domain, and without any single point of failure.
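As a back-of-the-envelope illustration (all IOPS figures below are hypothetical placeholders, not Nutanix or any vendor's specifications), the usable-performance gap between a dual-controller array held at 50% utilization and a scale-out cluster keeping one node of headroom can be sketched like this:

```python
# Hypothetical model: usable IOPS of a dual-controller array vs. a
# scale-out cluster of storage controllers. Figures are illustrative only.

def dual_controller_usable_iops(iops_per_controller: int) -> int:
    """Two controllers, each held at ~50% utilization so the surviving
    controller can absorb the full load during maintenance or failure."""
    return int(2 * iops_per_controller * 0.5)

def scale_out_usable_iops(iops_per_node: int, nodes: int) -> int:
    """N+1 headroom: reserve one node's worth of performance, so the
    usable share of the cluster grows as the cluster grows."""
    return iops_per_node * (nodes - 1)

PER_NODE = 100_000  # hypothetical IOPS per controller/node

print(dual_controller_usable_iops(PER_NODE))      # fixed ceiling: 100000
for n in (4, 8, 16, 32):
    print(n, scale_out_usable_iops(PER_NODE, n))  # grows with node count
```

The dual-controller ceiling stays fixed no matter how much flash sits behind it, while the scale-out figure grows linearly with node count, which is the point being made above.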
Scale-Out trumps Rip and Replace:
Traditional storage vendors live by a three-year rip-and-replace lifecycle: storage controllers need to be swapped out to take advantage of advances in Intel x86 processor capabilities. With Nutanix's revolutionary file system, new and old storage controllers can co-exist in the same cluster, allowing you to adopt advances in Intel computing technology immediately. More importantly, you can increase performance without a disruptive and risky rip and replace. You don't have to wait three years; you can add a single node at a time, when needed, on demand, without any disruption, and get all the benefits straight away. Being software defined means a simple software upgrade also delivers new software enhancements and continued investment protection on the same hardware. The same hardware just keeps getting faster. This is as true for all-flash as it is for hybrid disk-and-flash systems.
Put flash next to your VMs with Nutanix’s data-locality:
Data locality matters. Moving data and its performance closer to your VMs greatly improves performance and isolates workloads from noisy neighbours. The virtual machine data locality built into the Nutanix Distributed File System keeps the majority of read and write storage I/O on local flash, so read I/O does not need to traverse the network. This improves read latency while also reducing network bandwidth consumption.
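A rough latency model (every figure here is an assumed placeholder, not a measured Nutanix number) shows why skipping the network hop on reads matters, and why replicated writes still land in the low-millisecond range:

```python
# Hypothetical latency model of data locality. All constants are
# illustrative assumptions, not measured figures.

FLASH_READ_US = 200    # assumed flash device read latency (microseconds)
FLASH_WRITE_US = 300   # assumed flash device write latency (microseconds)
NETWORK_RTT_US = 500   # assumed network round trip + stack overhead (microseconds)

def local_read_us() -> int:
    # With data locality, reads are served from local flash: no network hop.
    return FLASH_READ_US

def remote_read_us() -> int:
    # Without locality, a read pays the device latency plus a network round trip.
    return FLASH_READ_US + NETWORK_RTT_US

def replicated_write_us() -> int:
    # A write is acknowledged only after a replica lands on another node's
    # persistent storage, so it always includes one network round trip.
    return FLASH_WRITE_US + NETWORK_RTT_US + FLASH_WRITE_US

print(local_read_us(), remote_read_us(), replicated_write_us())
```

Under these assumed numbers, local reads stay well under 1ms while replicated writes sit in the low-ms range; the network round trip is pure overhead that locality removes from the read path but that data protection necessarily keeps on the write path.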
Density, Performance, and Scale:
Up to 32 nodes of the Nutanix 9040 per rack, with 288TB of enterprise-grade flash storage, 768 CPU cores, and 16TB of RAM. Each node contains up to 9.6TB of flash, 2 x Intel Xeon CPUs (10-core 3GHz or 12-core 2.7GHz), and 512GB of RAM. Get all the flash and compute you need to run all of your high-performance VMs. This platform is built for serious workloads that need consistently low-latency storage access and high throughput, especially where software licensing means you want to scale up performance on fewer systems (Oracle databases and application servers, for example). All with under 16kW of power consumption per rack!
Nutanix is constantly evaluating how to bring uncompromising simplicity and web scale converged infrastructure to more use cases and meet our customers' requirements. We are squarely focused on the future of the software-defined datacenter and new storage technologies, and we can bring these to market very quickly. This is the first all-flash platform, but I'm sure it won't be the last. There is no need to compromise on data services, such as Metro Availability, replication, snapshots, and DR, to go all flash. From NOS 4.1 you'll be able to have Metro Availability with a few clicks on any Nutanix platform.
This post first appeared on the Long White Virtual Clouds blog at longwhiteclouds.com. By Michael Webster. Copyright © 2012 – 2014 IT Solutions 2000 Ltd and Michael Webster. All rights reserved. Not to be reproduced for commercial purposes without written permission.
Hey Michael, nice article. I'm with you when it comes to scale out vs dual controllers. So how much exactly does data locality matter? When you say "Getting the performance and data closer to your VM’s greatly improves performance…" what does "greatly" mean?
Hi James, great question. Results will of course depend on individual circumstances and workload, but at a high level, accessing storage through a local storage controller on the host, through memory, to local flash and hard disks saves many CPU cycles and allows lower-latency access than going over a network. Even when you do go over a network, the latency at the storage devices, plus the CPU cycles spent in the network stack, is probably still higher than the actual transmission time over the wire. For reads this means < 1ms response times are achievable (depending on load); for writes, which have to be replicated to persistent storage for data protection, this means low-ms response times (2ms is achievable).

Performance isn't just how many IOPS or how low the latency is; it also includes power consumption and cooling, and how many storage devices are actually needed to achieve certain performance metrics. As an example, in the testing I did for the Oracle on Nutanix Best Practice Guide, I achieved about 30% less performance (in database transactions per second) than a high-end UCS setup connected via FCoE to a VMAX Cloud Edition with 146 disks (including 24 SSDs). But the Nutanix system I was testing was a mid-range platform with only 8 SSDs and 16 HDDs across 4 nodes in total, and it was significantly less complex, less costly, and consumed considerably less power.

In more quantitative terms, from the testing I've done, the difference can be a 20% – 50% improvement in IOPS and latency, again depending on workload pattern. In a recent PostgreSQL benchmark of a single database VM, data being local meant a 20% improvement in IOPS (10K vs 8K) and a 2ms improvement in latency (< 5ms vs 7ms).