My Lab Environment
Below I describe the lab environment that I’ve set up for my own use. It’s not a recommendation, it’s just what I’ve done. This lab is located in my office at home. It’s a pretty serious lab environment for pretty serious testing.
I wanted something supported and close to what a customer might have, so that I could reproduce problems and troubleshoot them. I chose hardware from a major vendor; the models are fully supported on the VMware HCL and have the embedded ESXi hypervisor. All of the equipment was purchased new over a few years. I have recently added an EMC Clariion CX500 and a Cisco MDS9120 FC switch to my lab, which were donated by a very kind soul.
Before I give you all the technical details, here is a photo that I’ve recently taken to give you an idea of what this setup looks like.
1 x Dell PE1900, 16GB RAM, 1 x E5345 CPU, 7 x 1Gb/s NIC ports, 1 x Single Port QLogic 2312 2Gb/s FC HBA – Backup Management Host
4 x Dell T710, 72GB RAM, 3 x Dual Socket X5650 and 1 x Dual Socket E5504, 8 x 1Gb/s NIC ports (4 on board, 4 on an add-on quad port card), 2 x 10Gb/s NIC ports (dual port card), 1 x Dual Port Emulex LPe11002 4Gb/s FC HBA (the E5504 T710 is the primary management host)
2 x Dell R320, 32GB RAM, Single Socket E5-2430 CPU, 2 x 1Gb/s NIC Ports On board, 2 x 10Gb/s NIC ports (dual port card), 1 x Dual Port Emulex LPe11002 4Gb/s HBA
6 x vESXi hosts with 8GB RAM (used for vShield, Lab Manager, vCloud Director, and Cisco Nexus 1000v testing)
All hosts are running ESXi 5.
vSS0 – VMkernel ports for management, iSCSI1, iSCSI2, and the N1KV VSMs (2 x 1Gb/s Uplinks)
vDS0 – VM Networking, multiple port groups and VLANs including a promiscuous trunk VLAN for the vESXi servers, and an AppSpeed port group (2 x 1Gb/s Uplinks)
vDS1 – Management vSwitch for vMotion, main iSCSI port groups (2), FT, and VM port groups (2 x 10Gb/s Uplinks)
N1KV – VM Networking, N1KV packet, control, and management (4 x 1Gb/s Uplinks in two uplink port profiles) – see the port group sketch below
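To give an idea of how the standard vSwitch side of this hangs together, here is a minimal sketch of creating vSS0 and its iSCSI port groups from the ESXi 5 shell. The vSwitch, port group, and uplink names and the VLAN IDs are illustrative placeholders, not my actual configuration:

```
# Create the standard vSwitch and attach two 1Gb/s uplinks (example NIC names)
esxcli network vswitch standard add --vswitch-name=vSS0
esxcli network vswitch standard uplink add --vswitch-name=vSS0 --uplink-name=vmnic0
esxcli network vswitch standard uplink add --vswitch-name=vSS0 --uplink-name=vmnic1

# Add the two iSCSI port groups and tag them with example VLAN IDs
esxcli network vswitch standard portgroup add --vswitch-name=vSS0 --portgroup-name=iSCSI1
esxcli network vswitch standard portgroup set --portgroup-name=iSCSI1 --vlan-id=20
esxcli network vswitch standard portgroup add --vswitch-name=vSS0 --portgroup-name=iSCSI2
esxcli network vswitch standard portgroup set --portgroup-name=iSCSI2 --vlan-id=21
```

The vDS and N1KV port groups are managed from vCenter and the VSM respectively, so they don’t lend themselves to a one-liner like this.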
The physical network has two main 24 port 1Gb/s switches, which the 1Gb/s uplinks are split across, with a 24 port 10Gb/s Dell 8024 switch (full layer 3, QoS, etc.) as the core; the core also connects the hosts and the shared NAS storage. The management host is connected to a separate 24 port 1Gb/s switch with two uplinks to one of the edge switches (due to space constraints).
2 x Fusion-io ioDrive2 1.2TB SSD – on temporary loan for testing; I may make them permanent
Thanks to a very kind donation I have a much loved CX500 and a Cisco MDS9120 in my lab as my Tier 1 storage. The CX500 has 30 x 146GB 10K FC disks. I’ve configured them in two RAID groups of 12 disks each, excluding the first five disks (used for FLARE) and one hot spare. Two RAID 1/0 LUNs are configured, one on each RAID group. I’m using single target, single initiator zoning for my fabric, which is split into two VSANs to create the two fabrics you’d normally have.
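For anyone curious what that zoning looks like, below is a minimal sketch of a single target, single initiator zone on the MDS9120 for one of the two VSANs. The VSAN number, interface numbers, zone names, and WWPNs are all illustrative placeholders:

```
! Illustrative only – VSAN 10 stands in for Fabric A
vsan database
  vsan 10 name FABRIC-A
  vsan 10 interface fc1/1
  vsan 10 interface fc1/5

! One zone per initiator/target pair (placeholder WWPNs)
zone name ESX01_HBA0__CX500_SPA0 vsan 10
  member pwwn 10:00:00:00:c9:aa:bb:01
  member pwwn 50:06:01:60:00:00:00:01

zoneset name ZS_FABRIC_A vsan 10
  member ESX01_HBA0__CX500_SPA0

zoneset activate name ZS_FABRIC_A vsan 10
```

The second fabric is just the mirror image in the other VSAN, using the second HBA port and the other SP ports.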
Each server has 8 x 300GB 15K SAS disks locally, configured as a single RAID 5 datastore. On top of each datastore I’ve placed an HP P4000 VSA (one per host), which consumes 80% of the local datastore; the rest is used for local appliances and VMs. The VSAs are in one management group. I have volumes configured and presented to the hosts as Network RAID 5 and Network RAID 10, all of it thin provisioned. Performance is OK, maxing out at about 300MB/s during performance tests. The VSAs are connected to the port groups used by the software iSCSI initiators on the 10Gb/s vDS.
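For completeness, the software iSCSI side on each host is just standard port binding. A rough sketch from the ESXi 5 shell is below; the vmhba number, vmk numbers, and target address are placeholders for the software iSCSI adapter, the two iSCSI VMkernel ports on the 10Gb/s vDS, and the P4000 cluster virtual IP:

```
# Enable the software iSCSI initiator (adapter name is an example)
esxcli iscsi software set --enabled=true

# Bind the two iSCSI VMkernel ports on the 10Gb/s vDS to the software adapter
esxcli iscsi networkportal add --adapter=vmhba37 --nic=vmk2
esxcli iscsi networkportal add --adapter=vmhba37 --nic=vmk3

# Point the adapter at the P4000 cluster virtual IP, then rescan
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba37 --address=192.168.10.50:3260
esxcli storage core adapter rescan --adapter=vmhba37
```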
Tier 3: QNAP 4 disk NAS, serving out NFS (test VMs and templates) and iSCSI (currently unused, as it’s not T10 compliant and vSphere 5 generates log spam as a result)
Tier 4: Openfiler on a desktop, used only for archiving and vCloud Director
3 x vCenter servers (all at 5.0); I use SRM along with the HP P4000 VSAs, and SRM is now at v5
1 x vSyslog (FT Protected)
SQL DB for Protected Site vCenter, Oracle DB for Recovery Site vCenter
VUM, VUMDS, View, View Sec Server (with PCoIP)
vCenter Mobile Access (used with the iPad app)
vMA 4.0 for automated UPS shutdown of the environment (see the shutdown sketch after this list)
vMA 5.0 for general management
vCOps Enterprise v5
CapacityIQ (still there, but now included in vCOps)
Virtual Infrastructure Navigator
vCenter Configuration Manager
AppInsight (Part of the New Application Performance Management Suite)
vDR for backup
vSphere Web Client (2 instances load balanced by the F5 LTM/VE)
Nexus 1000v (2 x VSMs)
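The UPS shutdown mentioned above is driven from the 4.0 vMA. The real script is triggered by the UPS management software, but the core of it is roughly the sketch below; the host names are placeholders, and it assumes authentication (for example vi-fastpass) has already been set up for each host:

```
#!/bin/bash
# Rough sketch of a UPS-triggered shutdown run from vMA (placeholder host names)
# Assumes vi-fastpass or equivalent credentials are already configured per host.

for HOST in esx01.lab.local esx02.lab.local esx03.lab.local; do
  # --force shuts the host down even if it is not in maintenance mode
  vicfg-hostops --server "$HOST" --operation shutdown --force
done
```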
2 x Dell 1920W UPS – Core Servers, core switches, storage
3 x APC 1500 VA Smart-UPS – edge switches, presentation equipment, wireless network, management host, desktops
Notable omission – vCenter Heartbeat. I did have Heartbeat running, but due to lack of resources I have removed it temporarily.
I’m sure many people could get away with a lot less, especially if you have access to a company lab, or if you’re just using it for functional testing. But I also wanted to be able to do performance testing and simulate real world situations. I’ve used this setup to identify multiple bugs and design/config errors and then get them fixed for customers. I don’t have access to another company lab that is up to scratch, so I decided to invest significantly in building my own.
Some other home labs you should definitely check out are:
This post first appeared on the Long White Virtual Clouds blog at longwhiteclouds.com. By Michael Webster. Copyright © 2012 – IT Solutions 2000 Ltd and Michael Webster. All rights reserved. Not to be reproduced for commercial purposes without written permission.