Radically Simple High Performance Graphics Desktops and Increased Density Delivered Just in Time for VMware View 5.3
Today Nutanix (@nutanix) delivered major hardware updates to its radically simple, Google-like infrastructure platform for the masses. The updates include integration with new GPU and Teradici APEX encoding offload cards for the most graphics-intensive desktops, while at the same time providing increased VM density and lower TCO across a new range of hardware options. This announcement came just in time for the VMware Horizon View 5.3 GA release, which was also today. The new Nutanix 7110 platform breaks the final barrier to delivering all applications to all virtual desktop users and powers video and graphics-rich applications with workstation-level performance. This is all achieved while maintaining the simplicity customers have come to know and love, and without the availability and manageability trade-offs normally associated with the dedicated hardware required for workstation-class CAD/CAM, 3D desktops and other high-end graphics use cases. The Nutanix 7110 platform and VMware Horizon View 5.3 are a powerful combination for All Virtual Desktops, but wait, there's more.
As some of you may have seen in the Register article Nutanix building an elite squad of crack VMware designers, after almost 4 years working directly with VMware in various capacities I've decided to take on an exciting new challenge. I greatly enjoyed my time working with VMware and learned a lot in the process. It truly is one of the best places to work in the IT industry and I would highly recommend it to anyone who gets the opportunity. In this new phase I will be able to work with VMware and directly contribute to the R&D of the Nutanix Virtual Computing Platform. As I'll be a full-time employee of Nutanix I'll be stepping down from the day-to-day running of my consulting business, but rest assured this blog will continue and I'll keep bringing my unique perspective on all things virtualization, cloud and business critical apps.
I will be joining Josh Odgers, VCDX-090 (@josh_odgers), Jason Langone, VCDX-054 (@langonej) and Lane Leverett, VCDX-053 (@wolfbrthr) as the 4th VCDX at Nutanix. Nutanix now has more VCDXs on staff than any other company outside of VMware itself. I'll be joining an incredibly talented and innovative team helping to redefine the radically simple Nutanix Virtual Computing Platform for Business Critical Applications and Unix to VMware Migrations, an extension of the work I was doing at VMware. As part of the Solutions and Performance Engineering team, I'll be working closely with Josh to bring you some great performance papers, case studies and reference architectures.
Why Nutanix? Why now? Why does Nutanix see value in hiring VCDX Architects?
Those who are familiar with VMware technology will know vMotion well, and how reliably it works. VMware has worked hard on this for many years. But when talking to customers running traditional Unix systems for their Oracle databases, especially RAC, especially when under high load, and when the system is a monster, sometimes they are sceptical. To alleviate any concerns VMware teamed up with Cisco, EMC and Principled Technologies to produce a white paper demonstrating the vMotion of three highly utilized RAC nodes doing thousands of transactions per second without any client disruption. This article will very briefly discuss the test and give you a link to the white paper so you can download it and read it for yourself.
In my article vSphere 5.5 Jumbo VMDK Deep Dive I briefly covered the new PDL AutoRemove feature of vSphere 5.5, including its impact on vSphere Metro Storage Cluster configurations (vMSC). The reason I'm writing this article is to let you know that the default settings have some impacts on vMSC configurations, and in these types of environments you'll need to modify the defaults.
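For reference, the behaviour in question is controlled by the Disk.AutoremoveOnPDL advanced host setting, which defaults to enabled (1) in vSphere 5.5. The sketch below shows how you might check and change it with esxcli on each ESXi host; treat it as an illustration only and verify the recommended value for your vMSC environment against the current VMware KB guidance.

```shell
# Check the current value of the PDL AutoRemove setting (defaults to 1, enabled, in vSphere 5.5)
esxcli system settings advanced list -o /Disk/AutoremoveOnPDL

# Disable PDL AutoRemove, as recommended for vMSC stretched cluster configurations
esxcli system settings advanced set -o /Disk/AutoremoveOnPDL -i 0
```

Remember this is a per-host setting, so it needs to be applied consistently across all hosts in the stretched cluster.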
During the Monster VM Design Panel at VMworld in San Francisco and Barcelona our panel was asked about vNUMA and the performance impact of various settings, including modifying the number of cores per virtual socket. Mark Achtemichuk (Mark A for short) has written an article on the VMware vSphere Blog taking a look at this, with some great test data to go with it. I'll give you some highlights and then you can check out the actual article for yourself.
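For context, the cores-per-socket topology a VM presents is set in its .vmx configuration. The fragment below is a sketch only (the option names are the standard .vmx parameters, but the right topology for you depends on your hardware and licensing); the general guidance that came out of this testing is to leave cores per socket at 1 unless you have a specific reason to change it, so vNUMA can build the optimal virtual NUMA topology automatically.

```
# .vmx fragment: a 16-vCPU VM presented as 16 sockets x 1 core (the default),
# which lets vNUMA expose a virtual NUMA topology that matches the physical host
numvcpus = "16"
cpuid.coresPerSocket = "1"
```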
#vForum2013 in Sydney, Australia is coming up fast, and this year we will be Defying Convention Again. It’s at the Sydney Convention and Exhibition Centre in Darling Harbour on October 21st and 22nd and we’re expecting to see up to 7000 attendees this year. It is going to be big, and it’s going to be special. More importantly for my regular readers and followers I will be there and I’ll be bringing you lots of Monster VM and Virtualizing Business Critical Applications Deep Dive content from around the APJ region and around the world, in conjunction with some great co-speakers. You will learn from my sessions what amazing things VMware customers are already doing and how you can take away valuable insight and apply it to your environments. To find out more, read on.
If you’re upgrading from vSphere 5.1 to vSphere 5.5 and you ARE NOT using Custom CA SSL Certificates then you might run into an error. The error will be encountered during the upgrade of SSO, and specifically the Lookup Service, and only occurs in specific conditions, such as when using the default VMware Self-Signed Certificates. If you run into this problem your upgrade process will roll back, but leave behind some upgrade files that need to be cleaned up. This article will briefly touch on the recommended solution to this problem.
VMware recently published a record-breaking network performance test in which a single ESXi host showed close to line-rate throughput over 8 x 10Gb/s NICs. The single host achieved close to 80Gb/s throughput using the standard MTU size (1500B), 16 VMs (each with 1 vCPU and 2GB RAM), and 8 vSwitches, on top of a Dell R820 physical platform. On top of that, the test sustained 7.5M PPS, which is a very high rate of packet throughput for a single host. While not a realistic real-world workload, this demonstrates the power of vSphere and modern server hardware. This is a very impressive result in my opinion.
For those of you who follow me on twitter you’ll know that I recently sat the VMware VCAP5-DCA (VMware Certified Advanced Professional – Datacenter Administration) exam. I learned a few valuable things along the way that I think could help others prepare, and suffered a few glitches as well. So I thought I’d share this with you all. Hopefully this will help you successfully prepare for and pass the VCAP5-DCA Exam. For those of you who want to pursue VCDX this is one of the exams you will have to pass in addition to VCAP5-DCD (Datacenter Design) before you can submit a design for a defence.
During VMworld USA 2013, where vSphere 5.5 was launched, we heard all about the new enhancements. Some of them were less publicised than others. This article will fill you in on another great reason to consider moving to vSphere 5.5 when it is released. vSphere 5.5 brings with it huge enhancements to the support of Windows Failover Clustering (WFC), previously known as Microsoft Cluster Services (MSCS). This by itself could be a major reason customers choose vSphere 5.5 over previous releases. You may recall that clustering support in vSphere 5.1 was quite a complex matrix to consider, and I tried to explain the various options in my article The Status of Microsoft Failover Clustering Support on VMware vSphere 5.1, which was followed shortly thereafter by Windows Server 2012 Failover Clustering Now Supported By VMware With Some Caveats after the VMware KB (KB 1037959 Microsoft Clustering on VMware vSphere: Guidelines for Supported Configurations) was updated. The release of vSphere 5.5 once again rewrites the rulebook for Microsoft Failover Clustering. So let's dive into it a bit and see what's changed.