A great way to learn Nutanix technology is by using the Nutanix Community Edition, which is a community-supported, free, but fully functional version of the Nutanix software and Acropolis Hypervisor. If you don’t have spare hardware lying around your house or lab, a great way to try it initially is nested on top of ESXi. Recently my 8-year-old son Sebastian passed the Nutanix Platform Professional certification, and I decided to give him his own home lab as a reward and also as a late birthday present. I had some Dell T710 servers left over from my VMware lab prior to Nutanix that I thought would make a great first home lab, and they’re connected to 10G switches. This article will cover how I put this environment together and give a high-level overview. It was super easy, and anyone can do it. You can build very powerful demo or learning environments this way.
Firstly, I would like to acknowledge three great resources that I used to help get this up and running. In no particular order:
Joep Piscaer of Virtual Lifestyle – #NextConf: running Nutanix Community Edition nested on Fusion
Albert Chen – How to install Nutanix CE into ESXi 6.0
William Lam – Emulating an SSD Virtual Disk in a VMware Environment
I took the work that Joep and Albert did and mixed in a bit of William Lam’s magic to fool Community Edition into thinking that a plain virtual disk was an SSD, so that I could get the environment up and running on VMDKs on a normal datastore. This was for functional testing only and just for a home lab for my son, so it’s not going to be breaking any performance records anyway. What I am about to describe assumes you are setting up a minimum of three Nutanix CE VMs. Here is the high-level process I went through:
- Start with a running ESXi 6.0 host that has some storage attached
- Create a port group on a virtual switch to attach the Nutanix Community Edition VMs to, and ensure that the security settings allow Promiscuous Mode (I called this NXCVM)
Note: If you wish to trunk multiple VLANs to the Nutanix CE VMs, you can use VLAN 4095 on a standard switch or VLAN trunking on a Distributed Switch. The default or native VLAN that Nutanix CE is connected to should have DHCP available on the network (or use static addresses).
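If you prefer the ESXi shell to the UI, the port group and its promiscuous-mode policy can be created with esxcli. The vSwitch name and the VLAN 4095 trunking line below are examples matching my setup; adjust them for your host:

```shell
# Create the NXCVM port group on vSwitch0
esxcli network vswitch standard portgroup add --portgroup-name=NXCVM --vswitch-name=vSwitch0
# Optional: trunk all VLANs to the nested VMs (the standard-switch trick from the note above)
esxcli network vswitch standard portgroup set --portgroup-name=NXCVM --vlan-id=4095
# Allow promiscuous mode at the port-group level so nested traffic flows
esxcli network vswitch standard portgroup policy security set --portgroup-name=NXCVM --allow-promiscuous=true
```

These are host configuration commands, so they must be run in an ESXi shell or SSH session on the host itself.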
- Download the Nutanix Community Edition Image File
- Unzip the image file and rename it <image file name>-flat.vmdk
- Upload the flat.vmdk file using the datastore browser to the datastore on the ESXi 6.0 host
- Upload the ce.vmdk descriptor file using the datastore browser to the ESXi 6.0 host (descriptor I used included below)
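The unzip-and-rename step can be scripted; the file name below assumes the 2015.06.08 beta build (the same one referenced in the descriptor later in this post), and the stub-creation guard is only there so the commands can be tried without the real download in place:

```shell
# Creates a placeholder image if the real CE download isn't present (demo only)
[ -f ce-2015.06.08-beta.img.gz ] || { echo stub > ce-2015.06.08-beta.img && gzip ce-2015.06.08-beta.img; }
# Unpack the CE image and rename it as the flat extent that ce.vmdk references
gunzip ce-2015.06.08-beta.img.gz
mv ce-2015.06.08-beta.img ce-2015.06.08-beta-flat.vmdk
```

After the rename, upload both the flat file and the ce.vmdk descriptor with the datastore browser as described above.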
- Using the vSphere Web Client via vCenter Create a new VM (You will see why it’s important to use the web client in a second)
- Configure Compatibility as ESXi 6.0 and later
- Configure the new VM as a CentOS Linux 4/5/6/7 64Bit VM
- Configure at least 4 vCPUs and expose hardware-assisted virtualization to the guest OS
- Configure at least 16GB RAM (this allows you to run VMs on top of the Acropolis Hypervisor as well as the Nutanix software, etc.)
- Change the Network Adapter to E1000
- Change the SCSI Controller to PVSCSI i.e. VMware Paravirtual (yes this works just fine), and delete the 16GB virtual disk that is listed by default
- Add an Existing Hard Disk and use the ce.vmdk image that you previously uploaded attached to SCSI 0:0
- Add a New Hard Disk, I used 500GB thin provisioned; this will act as the virtual SSD. Attach it to SCSI 0:1 on the PVSCSI Adapter
- Add a New Hard Disk, I used 500GB thin provisioned; this will act as the HDD. Attach it to SCSI 0:2 on the PVSCSI Adapter
Note: when using thin provisioned VMDKs they will not grow until real data is written to them. This means you can run lots of virtual Nutanix Community Edition clusters without taking up much, if any, storage space. All of the Nutanix data optimization and reduction features also work on top of this, reducing the real data footprint even further :). In my case I’m only running one CE node per ESXi host, but you could run many, for a training or demo environment for example.
- Click VM Options, expand Advanced, edit the Configuration Parameters, and add a new row named scsi0:1.virtualSSD with a value of 1 (this is William Lam’s trick from the article referenced above, which makes the guest see the 500GB disk on SCSI 0:1 as an SSD)
- Click Next and then Finish
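If you’d rather set the SSD-emulation parameter from a shell after the wizard finishes, it can be appended to the VM’s .vmx file directly. The parameter name comes from William Lam’s article referenced above; the path below is a local placeholder, as on ESXi the file lives under /vmfs/volumes/<datastore>/<vmname>/:

```shell
# Append William Lam's virtual-SSD hint so the guest sees the disk on SCSI 0:1
# as an SSD. Set VMX to your VM's .vmx path on the datastore.
VMX=${VMX:-nxce1.vmx}
cat >> "$VMX" <<'EOF'
scsi0:1.virtualSSD = "1"
EOF
grep 'virtualSSD' "$VMX"
```

Note the VM must be powered off when you edit its .vmx file by hand, or the change will be overwritten.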
- Clone the VM you’ve just created so that you have a template that can be used later
- Power on the Nutanix CE VM you’ve just created
Note: Depending on how powerful your hardware is, and whether or not you are using a real SSD, you may want to log in as root and use vi to modify the /home/install/phx_iso/phoenix/sysUtil.py file, changing SSD_rdIOPS_thresh and SSD_wrIOPS_thresh to 50. Then you can exit and run through the install process.
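The vi edit can also be done non-interactively with sed. The path is the one from the note above; the stub-file guard (and the 5000 placeholder values in it) exist only so the commands can be tried outside the CE VM:

```shell
# On the CE VM this file is /home/install/phx_iso/phoenix/sysUtil.py
F=${F:-sysUtil.py}
[ -f "$F" ] || printf 'SSD_rdIOPS_thresh = 5000\nSSD_wrIOPS_thresh = 5000\n' > "$F"  # demo stub off-box
# Drop both SSD IOPS detection thresholds to 50 so the installer accepts the emulated SSD
sed -i -e 's/^SSD_rdIOPS_thresh.*/SSD_rdIOPS_thresh = 50/' \
       -e 's/^SSD_wrIOPS_thresh.*/SSD_wrIOPS_thresh = 50/' "$F"
grep 'IOPS_thresh' "$F"
```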
- Deploy your additional Nutanix CE nodes by cloning from the template you created. No need to run through the above steps again.
Here is the ce.vmdk descriptor file that I used:
# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=4470c879
parentCID=ffffffff
createType="vmfs"

# Extent description
RW 14540800 VMFS "ce-2015.06.08-beta-flat.vmdk"

# The Disk Data Base
#DDB

ddb.adapterType = "lsilogic"
ddb.geometry.cylinders = "905"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
ddb.longContentID = "ac2a7619c0a01d87476fe8124470c879"
ddb.uuid = "60 00 C2 9b 69 2f c9 76-74 c4 07 9e 10 87 3b f9"
ddb.virtualHWVersion = "10"
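If you build your own descriptor for a different CE image, the RW value in the extent line is simply the flat file’s size in 512-byte sectors, which you can check with stat (the fallback constant is the value from the descriptor above):

```shell
# "RW 14540800 VMFS ..." means 14540800 sectors of 512 bytes each
FLAT=${FLAT:-ce-2015.06.08-beta-flat.vmdk}
if [ -f "$FLAT" ]; then
  sectors=$(( $(stat -c %s "$FLAT") / 512 ))   # GNU stat; use stat -f %z on BSD
else
  sectors=14540800   # value from the descriptor above
fi
echo "$sectors sectors = $(( sectors * 512 )) bytes"
```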
Now that you have deployed your Nutanix CE VMs (assuming more than one), you can create a Nutanix cluster. This is done simply by logging into the Nutanix CVM on one of the VMs using either the DHCP IP address or the static IP address you used, with the username nutanix and password nutanix/4u. Run cluster -s <cvmip>,<cvmip>,<cvmip> create and it will create your cluster. Add DNS servers to the cluster using ncli cluster add-to-name-servers servers=<dns server>,<dns server>.
You’re done. Log into the CVM IP address of any of the nodes on HTTPS port 9440, where you can update the Prism admin user password and begin to create VMs.
After creating a cluster, I recommend running a diagnostics test to ensure everything is working successfully. To do this, log out of the CVM and back in again as the nutanix user, then issue the following command:
diagnostics/diagnostics.py --display_latency_stats --run_iperf run; diagnostics/diagnostics.py cleanup
The output will include network performance, latency stats, and IO stats for random and sequential reads and writes. Your performance will be completely dependent on the hardware you deploy on; as this setup is meant for functional rather than performance testing, expect results lower than you would get from a real production / enterprise-class Nutanix system.
Once everything is complete you should have something similar to the following to use and learn from:
TIP: If you want better network performance and fewer dropped packets in your nested environment, you should install the VMware Fling Mac Learning Filter Driver as described in this article and further explained by William Lam in this article.
Nutanix Community Edition is a great tool to use to try out Nutanix software and become familiar with the interface and the power and simplicity of Nutanix solutions. It’s free, supported by the community, and runs on a wide range of hardware or nested on ESXi as I’ve shown here. It’s suitable for home labs, demos, API integration testing, and training environments. It is a bit easier to get running if you’re using bare metal, as you can just dump the image on a USB stick and boot into the installer and be up and running a bit quicker, but running nested gives you a lot more flexibility and the ability to over provision hardware for dev/test/training/demo environments. It’s meant for experimentation. So register to get Nutanix Community Edition today and try it out.
This post first appeared on the Long White Virtual Clouds blog at longwhiteclouds.com. By Michael Webster +. Copyright © 2012 – 2015 – IT Solutions 2000 Ltd and Michael Webster +. All rights reserved. Not to be reproduced for commercial purposes without written permission.