
My Lab Environment

Below I describe the lab environment that I’ve set up for my own use. It’s not a recommendation, it’s just what I’ve done. This lab is located in my office at home. It’s a pretty serious lab environment for pretty serious testing.

I wanted something supported and close to what a customer might have, so that I could reproduce problems and troubleshoot them. I chose hardware from a major vendor; the models are fully supported on the VMware HCL and have the embedded ESXi hypervisor. All of the equipment was purchased new over a few years. I have recently added an EMC Clariion CX500 and a Cisco MDS 9120 FC switch to my lab, both donated by a very kind soul.

Before I give you all the technical details, here is a photo of what this setup used to look like. I have updated it a little since then.

[Photo: the original lab setup]

[Updated 08/06/2014]

My lab has gone through a bit of a transition in an attempt to make it more functional and user-friendly. I also needed to reclaim some space for additional equipment. Some of the old equipment previously hidden behind the rack has been moved out to a rack in my garage. Below you can see the updated photos.

[Photos: the updated lab]

That is my middle minion, Bradley, hiding in the back. My kids love helping me work in my lab. They were most interested when I was pulling it apart and reorganising it. I got them to help out and showed them the insides of my servers while I was installing new cards, and when I fix the odd fault.

Compute:

1 x Nutanix 3450 (4 Nodes) – Each node has 256GB RAM, 2 x 8-core E5-2650 v2 processors, 2 x 400GB SSD and 4 x 1TB SATA (3.2TB SSD, 16TB SATA total), and 2 x 10G SFP+ – NDFS cluster. This is my main work lab and it is a rocket. This is where I do all of the solution and performance testing that goes into the white papers and tech notes I produce for Nutanix customers on things like Oracle and Oracle RAC.

4 x Dell T710, 72GB RAM each, three with dual-socket X5650 CPUs and one with dual-socket E5504 CPUs, 8 x 1Gb/s NIC ports (4 onboard, 4 on an add-on quad-port card), 2 x 10Gb/s NIC ports (dual-port card), 1 x dual-port Emulex LPe11002 4Gb/s FC HBA (the E5504 T710 is the primary management host)

2 x Dell R320, 32GB RAM, single-socket E5-2430 CPU, 2 x 1Gb/s NIC ports onboard, 2 x 10Gb/s NIC ports (dual-port card), 1 x dual-port Emulex LPe11002 4Gb/s FC HBA (beta testing hosts)

6 x vESXi hosts with 8GB RAM (used for vShield, Lab Manager, vCloud Director, and Cisco Nexus 1000v testing)
All hosts running ESXi 5.5.
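
If you want to double-check that every host in a setup like this is on the same release, a few lines of pyVmomi will do it. This is just an illustrative sketch rather than anything from my actual toolkit, and the vCenter name and credentials are placeholders:

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder vCenter address and credentials - substitute your own.
# (Newer pyVmomi releases may need disableSslCertValidation=True for self-signed certs.)
si = SmartConnect(host="vcenter.lab.local", user="administrator", pwd="password")
content = si.RetrieveContent()

# Walk every HostSystem in the inventory and print its ESXi release and hardware.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    print("%s: %s, %d sockets, %d GB RAM" % (
        host.name,
        host.config.product.fullName,            # e.g. "VMware ESXi 5.5.0 build-nnnnnnn"
        host.hardware.cpuInfo.numCpuPackages,
        host.hardware.memorySize // (1024 ** 3)))

view.DestroyView()
Disconnect(si)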

Network:
vSS0 – VMkernel for management, iSCSI1, iSCSI2, N1KV VSMs (2 x 1Gb/s uplinks)
vDS0 – VM networking, multiple port groups and VLANs including a trunked promiscuous VLAN for vESXi servers, AppSpeed port group (2 x 1Gb/s uplinks)
vDS1 – Management vSwitch for vMotion, main iSCSI port groups (2), FT, and VM port groups (2 x 10Gb/s uplinks)
N1KV – VM networking, N1KV packet, control, management (4 x 1Gb/s uplinks in two uplink port profiles)

The physical network has two main 48-port 1Gb/s Dell N2048 switches, stacked together with 42Gb/s of stacking bandwidth, with a dual 1Gb/s port LAG split across the core 10G switches. The core is a 24-port Dell 8024 and a Dell 8132 (full layer 3, QoS, etc.), which also connect the hosts and the shared NAS storage. My main internet routers (two of them, load balanced over 2 x VDSL connections) and my HAN (Home Area Network) access, presentation equipment, AV equipment, WiFi access points, etc. all connect to the 10G core at 1Gb/s, as I had spare ports.
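
To sanity-check that the virtual networking above stays consistent across all of the hosts, something like the following pyVmomi sketch will dump the standard vSwitches, port groups, and distributed switches. Again this is just an illustration, not part of my lab tooling, and the vCenter name and credentials are placeholders:

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator", pwd="password")
content = si.RetrieveContent()

# Standard vSwitches (vSS0 in my case) and their port groups, per host.
hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in hosts.view:
    print(host.name)
    for vss in host.config.network.vswitch:
        # vss.pnic holds the physical NIC keys used as uplinks.
        print("  vSwitch %s, uplinks: %s" % (vss.name, list(vss.pnic)))
    for pg in host.config.network.portgroup:
        print("  port group %s, VLAN %d" % (pg.spec.name, pg.spec.vlanId))

# Distributed switches (vDS0, vDS1 and the N1KV) and their port groups.
switches = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
for dvs in switches.view:
    print("%s: %s" % (dvs.name, [pg.name for pg in dvs.portgroup]))

hosts.DestroyView()
switches.DestroyView()
Disconnect(si)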

Storage:

Dell Hosts

Tier 0:
8 x Fusion-io ioDrive2 1.2TB SSDs (previously 2), plus one Micron PCIe SLC flash card (320GB)
Tier 1:
Nutanix NDFS. This is the main horsepower in my lab and where I do all of the testing for my work. The rest of my equipment is just for my own testing in my spare time.
Tier 2:
Each server has 8 x 300GB 15K SAS disks locally, configured as a single RAID 10 datastore. On top of each datastore I’ve placed an HP P4000 VSA (one per host), which consumes 80% of the local datastore; the rest is used for local appliances and VMs. The VSAs are in one management group. I have volumes configured and presented to the hosts as Network RAID 5 and RAID 10, all of it thin provisioned. Performance is OK, maxing out at about 300MB/s during performance tests. The VSAs are connected to the port groups with the software iSCSI initiators on the 10Gb/s vDS (see the sketch after the storage tiers).
Tier 3: A QNAP 4-disk NAS and a QNAP 8-disk NAS serving out NFS (test VMs and templates) and iSCSI. This is my general file storage dumping ground. The 8-disk unit has 4TB disks.
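
To give an idea of the host-side plumbing behind Tiers 2 and 3, the rough pyVmomi sketch below enables the software iSCSI initiator, points it at a VSA management group VIP, and mounts a QNAP NFS export as a datastore. It’s a sketch of the general approach only, not my actual configuration; the host names, addresses, share path, and datastore name are all made up:

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect straight to a single ESXi host (placeholder name and credentials).
si = SmartConnect(host="esxi01.lab.local", user="root", pwd="password")
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]

storage = host.configManager.storageSystem
datastores = host.configManager.datastoreSystem

# Enable the software iSCSI initiator used to reach the P4000 VSA volumes.
storage.UpdateSoftwareInternetScsiEnabled(True)

# Add the VSA management group VIP (made-up address) as a send target on the software HBA.
for hba in storage.storageDeviceInfo.hostBusAdapter:
    if isinstance(hba, vim.host.InternetScsiHba) and hba.isSoftwareBased:
        target = vim.host.InternetScsiHba.SendTarget(address="192.168.10.50", port=3260)
        storage.AddInternetScsiSendTargets(iScsiHbaDevice=hba.device, targets=[target])
storage.RescanAllHba()

# Mount a QNAP NFS export (made-up host/path) as a datastore for templates and test VMs.
spec = vim.host.NasVolume.Specification(
    remoteHost="qnap8.lab.local",
    remotePath="/share/templates",
    localPath="qnap8-nfs",
    accessMode="readWrite")
datastores.CreateNasDatastore(spec)

Disconnect(si)

In practice I do the same thing through the vSphere and HP/QNAP management UIs; the code is only there to show how few moving parts there are on the ESXi side.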

Management:
3 x vCenter servers (3 x 5.5 U1a)
1 x vSyslog (FT Protected)
SQL DB for vCenter
VUM, VUMDS, View, View Sec Server (with PCoIP)
5.5 vMA for general management
vCOps Enterprise v5.8
Virtual Infrastructure Navigator
vCenter Configuration Manager
vCloud Director
vShield App
F5 LTM/VE
vSphere Web Client (2 instances load balanced by the F5 LTM/VE)
Nexus 1000v (2 x VSMs)

Power:
2 x Dell 1920W UPS – Core Servers, core switches, storage
3 x APC 1500 VA Smart-UPS – Edge switches, presentation equipment, wireless network, management host, desktops

I’m sure many people could get away with a lot less, especially if you have access to a company lab or are just using it for functional testing. But I also wanted to be able to do performance testing and to simulate real-world situations. I’ve used this setup to identify multiple bugs and design/config errors and then get them fixed for customers. At the time I built this I didn’t have access to another company lab that was up to scratch, so I decided to invest significantly in building my own.

Some other home labs you should definitely check out are:

Jason Boche’s Lab

David Klee’s Lab

This post first appeared on the Long White Virtual Clouds blog at longwhiteclouds.com. By Michael Webster. Copyright © 2012 – 2014 – IT Solutions 2000 Ltd and Michael Webster. All rights reserved. Not to be reproduced for commercial purposes without written permission.

  1. arielantigua
    May 14, 2012 at 5:39 pm | #1

    Can you post some pictures of this? Looks like you are having so much fun in that lab!!!

    Congrats!

  2. July 16, 2012 at 11:35 pm | #2

    Looking forward to comparing notes on the FusionIO benefits.

    Get some VMware view goodness into your lab!

    • July 16, 2012 at 11:36 pm | #3

      I've got VMware View 5.0. I use it for remote access and testing.

  3. July 17, 2012 at 7:53 am | #4

    AWESOME lab man

  4. Anuj Modi
    July 17, 2012 at 12:39 pm | #5

    A lot of hard work has been done in planning such a beautiful lab… hats off!!

  5. July 17, 2012 at 5:23 pm | #6

    Awesome…

  6. Joe
    July 20, 2012 at 11:31 am | #8

    Have fun replacing that every 3-4 years when the SAN and switches become obsolete.

    • July 21, 2012 at 12:03 am | #9

      There is a good chance it'll last for more than 5 years given I bought enterprise class equipment, but in any case I replace and enhance some of it each year and spread the investment over multiple years to avoid a massive single year hit.

  7. July 23, 2012 at 2:46 am | #10

    That is an amazing lab! Very nice!

  8. Gert
    July 24, 2012 at 3:20 pm | #11

    Awesome lab….

  9. August 19, 2012 at 11:39 am | #12

    Great lab! Thanks for sharing ..

    How's the noise level with all those 19" Dells and so many 15K disks? That's the one thing that bothers me most in my own lab.

    • August 19, 2012 at 12:09 pm | #13

      Actually not that bad since I put it in the rack. But if you're used to quiet then it is pretty noisy. The FC switch and CX500 array are very noisy though, so I don't have them turned on much. Plus they use heaps of power.

      • August 19, 2012 at 12:41 pm | #14

        Thanks for commenting!

        Mmmhhh, I'm thinking of moving the stuff over to a friend's place, which has plenty of space. But I'm not quite sure if I'm going to be happy working on the lab only from a remote location. Actually, that's something I've never seen much of on the net. Either it's a "home" lab or a DC (co)location.

        Putting it in a DC seems too much hassle (limitations) and too expensive (around $149 – 169 a month).

        A remote (friend's) location would only cost me an internet uplink, I guess around $39 a month, plus some electricity compensation. *

        * My power consumption is moderate, between 270 – 480 watts (@ 230 volts of course)

        I have 3 x HP ML110 G7s as physical hosts, 1 big Supermicro, 2 x passively cooled Cisco gigabit switches, and one QNAP 8xx (tier 4, like yours). Main storage is a Nexenta ZFS VSA (with pass-through storage), and as secondary 2 x HP P4000 VSAs (mirrored RAID 10).

        You're correct to think "that's not 100% HCL stuff…" although I've had not a single issue anywhere yet. At the time of starting the lab it was budget vs. HCL.

        Well, enough off-topic rambling for now I guess :)

  10. September 5, 2012 at 1:23 am | #15

    Fantastic lab! I'm super happy to see more people with great home setups. Great job!

  11. Alessandro
    November 23, 2012 at 11:01 pm | #16

    Hi mate,

    Do you know where I can get those vault disks for a Clariion CX500? I have a CX500 but no vault… Sad… :(

    Cheers

    • November 26, 2012 at 6:36 pm | #17

      Hi Alessandro, I'm not sure where you can get the vault disks or how to create them. Have you tried looking for the procedure on EMC PowerLink? Perhaps you could find some spare parts on eBay or a similar site.

  12. November 25, 2012 at 5:57 am | #18

    Michael. Thanks for all the hard work and effort to support the virtualization community. I reference your blogs at least once per month.

  13. February 12, 2013 at 2:57 am | #19

    Whoa! This is the mother of all home labs. You need to get it registered for a Guinness world record, mate! I'm fortunate to be able to use the VMware ASEAN lab as my den.

  14. March 11, 2013 at 12:57 pm | #20

    That's one serious Mutha of a Home Lab! This must cost a fortune to run!

  15. April 6, 2013 at 2:24 pm | #21

    Awesome, Michael Webster. I wish I had something like this.

  16. VNXDude
    May 29, 2013 at 12:47 am | #22

    Probably one of the most unreasonable home labs I have ever seen just from a hardware cost and power consumption point of view. Also, putting two Fusion-io 1.2 TB cards into a home lab a la "may make them permanent" is total BS. No one in their right mind puts two multi-thousand dollar PCIe SSDs into a home lab which will never ever see enough load to come even close to needing that type of performance. To me this looks like a sensationalist attention-grabbing post.

    • @vcdxnz001
      May 29, 2013 at 1:27 am | #23

      You may find, if you read the other posts on this blog, that I do put my Fusion-io MLC and Micron SLC cards to good use and under load, for example IO Blazing Single VM Storage Performance with Micron and Fusion-io. In fact I'll be testing some new flash-based virtualization technology in the near future. I'm very grateful to Fusion-io and Micron for supplying those cards for me to test. I use my lab often for performance testing to provide data to support articles on this blog, and also for presentations and to solve customer configuration and performance problems. It is for this reason that I also have 10G infrastructure and enterprise-grade servers.
