Below I describe the lab environment that I've set up for my own use. It is not a recommendation; it's just what I've done. This lab is located in my office at home. It's a pretty serious lab environment for pretty serious testing.
I wanted something supported and close to what a customer might have, so that I could reproduce problems and troubleshoot them. I chose hardware from a major vendor; the models are fully supported on the VMware HCL and have the embedded ESXi hypervisor. All of the equipment was purchased new over a few years. An EMC CLARiiON CX500 and a Cisco MDS 9120 FC switch were donated to my lab by a very kind soul, but they remain powered off due to the high energy costs.
Before I give you all the technical details, here is a photo of what this setup used to look like. I have since updated it a little.
My lab has gone through a bit of a transition, to make it more functional and user friendly and to reclaim some space for additional equipment. Some of the old equipment previously hidden behind the rack has been moved out to a rack in my garage. Below you can see the updated photos.
That is my middle minion, Bradley, hiding in the back. My kids love helping me work in my lab; they were most interested when I was pulling it apart and reorganising it. I got them to help out and showed them the insides of my servers while I was installing new cards and fixing the odd fault.
1 x Nutanix 3450 (4 nodes) - each node has 256GB RAM, 2 x 8-core E5-2650 v2 processors, 2 x 400GB SSD, 4 x 1TB SATA (3.2TB SSD, 16TB SATA total), and 2 x 10G SFP+ - NDFS cluster. This is my main work lab and it is a rocket. This is where I do all the solution and performance testing to bring Nutanix customers white papers and tech notes on things like Oracle and Oracle RAC. (The capacity totals are tallied in the quick sketch after the host list.)
4 x Dell T710, 72GB RAM each; three with 2 x X5650 CPUs and one with 2 x E5504 CPUs; 8 x 1Gb/s NIC ports (4 onboard, 4 on an add-on quad-port card); 2 x 10Gb/s NIC ports (dual-port card); 1 x dual-port Emulex LPe11002 4Gb/s FC HBA (the E5504 T710 is the primary management host)
2 x Dell R320, 32GB RAM, single-socket E5-2430 CPU, 2 x 1Gb/s onboard NIC ports, 2 x 10Gb/s NIC ports (dual-port card), 1 x dual-port Emulex LPe11002 4Gb/s FC HBA (beta testing hosts)
6 x vESXi (nested ESXi) hosts with 8GB RAM each (used for vShield, Lab Manager, vCloud Director, and Cisco Nexus 1000v testing)
All hosts running ESXi 5.5.
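Here's a quick back-of-the-envelope tally of what all of that adds up to. It's a minimal Python sketch using only the figures listed above; nothing here is measured.

```python
# Back-of-the-envelope tally of the raw capacity listed above.
# All figures come straight from the hardware list; nothing is measured.

nutanix_nodes = 4
node = {"ram_gb": 256, "ssd_gb": 2 * 400, "sata_gb": 4 * 1000}

print(f"NDFS cluster RAM:  {nutanix_nodes * node['ram_gb']} GB")             # 1024 GB
print(f"NDFS cluster SSD:  {nutanix_nodes * node['ssd_gb'] / 1000:.1f} TB")  # 3.2 TB
print(f"NDFS cluster SATA: {nutanix_nodes * node['sata_gb'] / 1000:.1f} TB") # 16.0 TB

# The vSphere side: 4 x T710 at 72GB plus 2 x R320 at 32GB.
print(f"vSphere host RAM:  {4 * 72 + 2 * 32} GB")                            # 352 GB
```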
vSS0 - vmkernel ports for management, iSCSI1, iSCSI2, and the N1KV VSMs (2 x 1Gb/s uplinks)
vDS0 - VM networking; multiple port groups and VLANs, including a promiscuous trunk VLAN for the vESXi servers, plus an AppSpeed port group (2 x 1Gb/s uplinks)
vDS1 - management vSwitch for vMotion, the two main iSCSI port groups, FT, and VM port groups (2 x 10Gb/s uplinks)
N1KV - VM networking; N1KV packet, control, and management (4 x 1Gb/s uplinks in two uplink port profiles). The per-host uplink allocation across all four switches is summarised in the sketch after this list.
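To make the port maths easy to eyeball, here is the same layout expressed as plain data. It's just a sketch of the layout, not a provisioning script, and the port counts are per T710 host; the names and numbers come straight from the list above.

```python
# The per-host uplink allocation described above, expressed as plain data so
# the port maths is easy to check. A sketch of the layout, not a provisioning
# script; port counts are per T710 host.

uplinks = {
    "vSS0": {"ports": 2, "gbps": 1},   # mgmt, iSCSI1/iSCSI2 vmkernels, N1KV VSMs
    "vDS0": {"ports": 2, "gbps": 1},   # VM networking, vESXi trunk VLAN, AppSpeed
    "vDS1": {"ports": 2, "gbps": 10},  # vMotion, main iSCSI, FT, VM port groups
    "N1KV": {"ports": 4, "gbps": 1},   # packet/control/management, 2 uplink port profiles
}

one_gb = sum(u["ports"] for u in uplinks.values() if u["gbps"] == 1)
ten_gb = sum(u["ports"] for u in uplinks.values() if u["gbps"] == 10)

# Should match the T710 spec above: 8 x 1Gb/s ports and 2 x 10Gb/s ports.
assert (one_gb, ten_gb) == (8, 2)
```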
The physical network has two main 48-port 1Gb/s Dell N2048 switches, stacked together at 42Gb/s, with a dual 1Gb/s port LAG split across each of the core 10G switches. The core is a 24-port Dell 8024 and a Dell 8132 (full layer 3, QoS, etc.), which also connect the hosts and the shared NAS storage. My main internet routers (two of them, load balanced over 2 x VDSL connections), my HAN (Home Area Network) access, presentation equipment, AV equipment, WiFi access points, etc. all connect to the 10G core at 1Gb/s, as I had spare ports.
8 x Fusion-io ioDrive2 1.2TB SSDs (up from 2 previously), plus one Micron PCIe SLC flash card (320GB)
Nutanix NDFS is the main horsepower in my lab and where I do all the testing for my work. The rest of my equipment is just for my own testing in my spare time.
Each server has 8 x 300GB 15K SAS disks locally, configured as a single RAID10 datastore. On top of each datastore I've placed an HP P4000 VSA (one per host), which consumes 80% of the local datastore; the rest is used for local appliances/VMs. The VSAs are in one management group. I have volumes configured and presented to the hosts as Network RAID-10 and Network RAID-5, all thin provisioned. Performance is OK, maxing out at about 300MB/s during performance tests. The VSAs connect to port groups alongside the software iSCSI initiators on the 10Gb/s vDS.
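The capacity maths behind that layout works out roughly as follows. This is a minimal sketch: I'm assuming one VSA on each of the four T710s, and typical LeftHand efficiency ratios (2-way mirror for Network RAID-10, one node's worth of parity for Network RAID-5); treat those ratios as my assumptions, not vendor-quoted figures.

```python
# Rough capacity maths for the P4000 VSA layer. Assumptions: one VSA on each
# of the four T710s, and typical LeftHand efficiency ratios (2-way mirror for
# Network RAID-10, one node of parity for Network RAID-5).

disks_per_host, disk_gb = 8, 300
local_raid10_gb = disks_per_host * disk_gb / 2    # local RAID10 usable: 1200 GB
vsa_share_gb = 0.8 * local_raid10_gb              # 80% given to the VSA: 960 GB

vsas = 4
pool_gb = vsas * vsa_share_gb                     # raw pool across the group: 3840 GB

net_raid10_gb = pool_gb / 2                       # 2-way mirror: 1920 GB usable
net_raid5_gb = pool_gb * (vsas - 1) / vsas        # single parity: 2880 GB usable

print(f"Pool {pool_gb:.0f} GB | Network RAID-10 {net_raid10_gb:.0f} GB "
      f"| Network RAID-5 {net_raid5_gb:.0f} GB")
```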
Tier 3: a QNAP 4-disk NAS and a QNAP 8-disk NAS serving out NFS (test VMs and templates) and iSCSI. This is my general file-storage dumping ground. The 8-disk unit has 4TB disks.
3 x vCenter servers (all running 5.5 U1a)
1 x vSyslog (FT Protected)
SQL DB for vCenter
VUM, VUMDS, View, and a View Security Server (with PCoIP)
vMA 5.5 for general management
vCOps Enterprise v5.8
Virtual Infrastructure Navigator
vCenter Configuration Manager
vSphere Web Client (2 instances load balanced by the F5 LTM/VE)
Nexus 1000v (2 x VSM's)
2 x Dell 1920W UPS - Core Servers, core switches, storage
3 x APC 1500 VA Smart-UPS - edge switches, presentation equipment, wireless network, management host, desktops
I'm sure many people could get away with a lot less, especially if you have access to a company lab or are just using it for functional testing. But I also wanted to be able to do performance testing and simulate real-world situations. I've used this setup to identify multiple bugs and design/config errors and then get them fixed for customers. At the time I built this I didn't have access to a company lab that was up to scratch, so I decided to invest significantly in building my own.
Some other home labs you should definitely check out are:
This post first appeared on the Long White Virtual Clouds blog at longwhiteclouds.com. By Michael Webster. Copyright © 2012 - 2014 - IT Solutions 2000 Ltd and Michael Webster. All rights reserved. Not to be reproduced for commercial purposes without written permission.