One of my colleagues ran into some trouble connecting a vCenter Virtual Appliance to Active Directory for authentication. He was getting a weird error saying that the FQDN (Fully Qualified Domain Name) was wrong. The actual error message was as follows:
Failed to execute '/usr/sbin/vpxd_servicecfg' 'ad' 'write' 'administrator@<domainfqdn>' CENSORED '<domainfqdn>'
VC_CFG_RESULT=302(Error: Enabling Active Directory failed.)
After quite a bit of research, the solution was found.
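Before digging deeper, it's worth checking the appliance itself. One common cause of VC_CFG_RESULT=302 (an assumption here, and the hostnames and domain below are hypothetical placeholders) is that the appliance's own hostname is not a fully qualified domain name, which a few quick checks from the console can confirm:

```shell
# On the vCenter appliance console, verify the appliance's own hostname
# is a fully qualified domain name -- a bare short name or "localhost"
# is a common trigger for the FQDN error (assumed domain: corp.example.com,
# assumed appliance name: vcva01).
hostname -f                          # expect something like vcva01.corp.example.com

# Confirm DNS resolves both the AD domain and the appliance itself
nslookup corp.example.com
nslookup vcva01.corp.example.com

# If the hostname is wrong, set the FQDN before retrying the AD join
hostname vcva01.corp.example.com
```

If `hostname -f` returns a short name or `localhost`, fix that first and retry the Active Directory join before troubleshooting anything else.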
There are a lot of environments running MySQL and PostgreSQL to support their systems. My team at Nutanix and I have been getting a lot of enquiries about how to set up these databases for best performance, and customers have also been using them to benchmark and baseline different systems. One of the challenges with these databases is that they give you only limited control over where data files and transaction logs are placed, which makes increasing IO parallelism a bit of a challenge. Your database is just an extension of your storage, and all storage devices, even virtual ones, have a limited queue depth to work with. Unlike with Oracle, SQL Server, Sybase, or DB2, you can't simply create a whole bunch of mount points and spread your data files over them (which increases available queue depth and potential IO parallelism). But the solution to this problem is made quite simple with Linux LVM (Logical Volume Manager). I'll take you through some of the steps I took to set up a test VM for MySQL testing with HammerDB and PostgreSQL testing with PGBench.
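The core idea can be sketched in a few commands. This is a minimal example, assuming the VM has four dedicated data disks (/dev/sdb through /dev/sde, ideally spread across separate virtual SCSI adapters) and uses MySQL's default data directory; adjust device names, sizes, and stripe settings for your own environment:

```shell
# Turn the four virtual disks into LVM physical volumes and one volume group
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
vgcreate vg_mysql /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Create a striped logical volume: -i 4 stripes across all four PVs,
# -I 1m uses a 1MiB stripe size, so IO is spread over four device
# queues instead of one even though MySQL sees a single mount point.
lvcreate -n lv_data -i 4 -I 1m -L 200G vg_mysql

# Filesystem and mount for the MySQL data directory
mkfs.ext4 /dev/vg_mysql/lv_data
mkdir -p /var/lib/mysql
mount -o noatime /dev/vg_mysql/lv_data /var/lib/mysql
```

The same pattern works for PostgreSQL by mounting the striped volume at the data directory (and, if you like, a second striped volume for the transaction logs).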
There is a lot of FUD about data corruption, torn IO, write ordering, and other concerns when using NFS as a datastore in VMware vSphere, even when the VMs are configured to use virtual disks. This is surprising, especially given that some very large VMware vSphere based clouds are built on NFS storage presented as datastores, and that for years numerous companies have been running business critical apps on NFS, presented as datastores or otherwise. Many of you may not know that VMware has actually patented the process of presenting NFS as a datastore to VMs that use virtual SCSI disks (US7865663), so that it emulates the SCSI protocol. You also may not know that not all storage systems, even block based ones using FC, FCoE or iSCSI, honour all of the techniques needed to keep your data safe. A lot of it comes down to the individual storage system implementation. Enterprise storage systems that take data protection seriously and implement the appropriate IO protections are all suitable for running business critical apps, even when presenting NFS for use as a datastore to VMware vSphere. So what do you need to know?
Cumulus Linux is the answer for companies that want to run software defined networking on a range of open, industry standard switches, without being locked into a single physical switch hardware vendor. But unlike network virtualization solutions such as NSX, Cumulus Linux is a Network OS (NOS) for the physical switches, rather than a virtualization layer on top. Cumulus is part of the NSX ecosystem and integrates with NSX, so essentially you can run Cumulus on the physical switches and integrate it with NSX to provide the network virtualization (VXLAN termination and switching/routing in hardware is also supported on some switches). Cumulus is Linux for network switches, so it's easy to manage and very easy to automate. I happen to be working on a project right now to build the best practices for Cumulus Linux with Nutanix and VMware vSphere, so I needed an easy way to get Cumulus installed on my lab switches from my MacBook Pro, which is what the remainder of this article is about.
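At a high level, the install can be done over the network using ONIE, the open install environment these switches boot into. A rough sketch, assuming the installer image filename and the MacBook's IP address shown here (both are placeholders for your own values):

```shell
# On the MacBook: serve the Cumulus installer image over HTTP from the
# directory it was downloaded to (Python 2's built-in web server).
cd ~/Downloads
python -m SimpleHTTPServer 8000 &

# On the switch, from the ONIE install/rescue prompt: fetch and install
# the NOS image from the MacBook (substitute your IP and image name).
onie-nos-install http://192.168.0.10:8000/cumulus-linux-amd64.bin
```

Once ONIE pulls the image, the switch installs Cumulus Linux and reboots into it; the rest is just normal Linux administration.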
My colleague Magnus Andersson (VCDX-56, a double VCDX in DCV and Cloud) has put together some short videos showing example solutions with Nutanix and vCloud Automation Center working together. vCloud Automation Center has recently been renamed vRealize Automation, also known as vRA (vee Raa! – intentionally not used in the title). I hope you enjoy these videos and that they give you some ideas of how you can integrate vCloud Automation Center into your solutions with Nutanix.
Another VMworld event is over, and it's hard to believe it's been a whole 12 months since the last one. Certainly during the keynotes there was a lot of coverage of what VMware has achieved over the last 12 months, and it is impressive, especially in the end user computing and hybrid cloud spaces. But overall I felt that VMworld USA 2014 lacked some of the sparkle of last year – although I guess it's hard to top last year, considering it was the 10th anniversary. This year seemed much more about building a solid foundation for the software defined datacenter, the software defined enterprise, and a hybrid cloud model integrating applications with infrastructure, providing agility and flexibility without compromise. Although attendance was flat or a little down on last year, the breakout sessions were packed, right up to the last session on Thursday. Instead of having our heads in the clouds this year, it was all about vCloud Air, and we vRealized the product naming is about to change. So let's dive into what I think are some of the highlights.
VMware has announced that it will turn off TPS in upcoming versions of its ESXi hypervisor and its vCloud Air hybrid cloud service. This is due to a security bug that is considered a very rare possibility, and only exploitable in very controlled and largely misconfigured environments. TPS, or Transparent Page Sharing, is a memory management technique that allows multiple VMs to share a read only copy of the same memory page; when a VM needs to update or write to a page, a new copy is created. The idea is that if there are many VMs with similar memory pages on the same physical host server, the pages are de-duplicated and only one copy is stored. The result is that you can run more VMs per physical server while still achieving very good performance.
TPS has long been used as a competitive advantage by VMware over all of the other hypervisors. But realistically it hasn't been in wide use by most customers for some time (since ESX 3.5), as the amount of RAM per host has increased, because of the use of large memory pages (2MB instead of 4KB) on Nehalem and later processors, and because most customers don't want to run their systems at 100% utilization, so that they can handle bursts of activity. When large pages are in use, TPS only kicks in once a host exceeds 96% memory utilization, at which point large pages are broken down into small pages that can be shared. However, TPS has remained a popular technique with service providers, in virtual desktop environments, and in some test and development environments, where over commitment of memory may be acceptable.
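For those who do rely on inter-VM sharing, the patched behaviour is controlled by a host advanced setting. A brief sketch of checking it, and of restoring the old behaviour if you accept the risk (run on the ESXi host; the setting name is the one VMware introduced with the salting change):

```shell
# Show the current salting setting that governs inter-VM page sharing
esxcli system settings advanced list -o /Mem/ShareForceSalting

# 0 = pre-patch behaviour (pages can be shared between all VMs);
# the post-patch default restricts sharing to VMs with a matching salt.
# Only set this back to 0 if you understand and accept the security trade-off.
esxcli system settings advanced set -o /Mem/ShareForceSalting -i 0
```

In other words, TPS isn't being removed outright, but inter-VM sharing is off by default unless you deliberately opt back in.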
I've always enjoyed visiting Europe, and every year when I visit Barcelona for VMworld it is special. It might be a smaller event than VMworld in San Francisco, but it lacks nothing in substance, networking opportunities, or announcements. The excitement level in Barcelona appears to be higher than it was in San Francisco. This article covers my thoughts on the keynotes, with close to 9000 people in attendance.
Nutanix has recently published a Best Practice Guide for Microsoft Exchange on VMware vSphere, and Josh Odgers explains some of its contents and the benefits of Exchange on Nutanix in his blog article here. If you are interested in virtualizing Exchange, and/or using Nutanix, you might want to get hold of the guide and have a read through it. It explains how simple it is to set up Exchange on Nutanix, the benefits of doing so, how it compares to a traditional physical JBOD approach, and much more. The paper introduces the capability of running Exchange on the Nutanix NX-8150 nodes, which have been specially designed to run large applications such as Exchange, SQL Server, Oracle and SAP. This is the node type Josh Odgers and I used as part of a design capable of hosting 1.4 million Exchange 2013 mailboxes, which demonstrates the building block architecture of Nutanix and its ability to scale to meet the requirements of large environments. Let's take a look at that design at a high level.
If you thought Ebola was deadly to humans, wait till you get a load of the latest security issue impacting the world wide web and almost everything connected to it, potentially including your phone, lights, servers and the list goes on (excluding Windows systems). If Heartbleed at the start of the year wasn't bad enough, the new Shellshock bug certainly is. It is what I would term the Mother of All Bugs (MOAB). It impacts almost all Unix, Linux and Mac systems and allows a remote attacker to execute arbitrary code and potentially steal your data, credit cards and other information. So how serious is this? Well, the NIST CVE rating on this is a 10 for severity, and a low for complexity to exploit (read: my 7 year old could exploit this bug). So basically the worst possible kind. Oh, but wait, there's more…
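You can check whether a system is affected with the widely circulated one-liner for the original CVE-2014-6271 issue:

```shell
# On a vulnerable bash, the function definition stored in the environment
# variable also executes the trailing command, so "vulnerable" is printed
# before "this is a test". A patched bash prints only "this is a test".
env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'
```

If you see "vulnerable" in the output, patch bash immediately; even on a patched system, keep in mind the follow-on CVEs meant updating more than once.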