Below I describe the lab environment I’ve set up for my own use. It’s not a recommendation, just what I’ve done. The lab is located in my office at home, and it’s a pretty serious environment for pretty serious testing.
I wanted something supported and close to what a customer might have, so that I could reproduce problems and troubleshoot them. I chose hardware from a major vendor; the models are fully supported on the VMware HCL and have the embedded ESXi hypervisor. All of the equipment was purchased new over a few years. A very kind soul also donated an EMC Clariion CX500 array and a Cisco MDS 9120 FC switch to my lab, but they remain powered off due to the high energy costs.
Before I give you all the technical details, here is a photo of what this setup used to look like. I have updated it a little.
[Updated 08/06/2014]
My lab has gone through a bit of a transition in an attempt to make it more functional and user friendly. I also needed to reclaim some space for additional equipment. Some of the old equipment previously hidden behind the rack has been moved out to a rack in my garage. Below you can see the updated photos.
That is my middle minion, Bradley, hiding in the back. My kids love helping me work in my lab. They were most interested when I was pulling it apart and reorganising it. I got them to help out and showed them the insides of my servers while I was installing new cards, and when I fix the odd fault.
Compute:
1 x Nutanix 3450 (4 Nodes) – Each Node has 256GB RAM, 2 x 8 Core E5-2650 v2 processors, 2 x 400GB SSD, 4 x 1TB SATA (3.2TB SSD, 16TB SATA Total), 2 x 10G SFP+ – NDFS Cluster. This is my main work lab and is a rocket. This is where I do all the solution and performance testing to bring Nutanix customers white papers and tech notes on things like Oracle and Oracle RAC.
4 x Dell T710, 72GB RAM, 3 x Dual Socket X5650 and 1 x Dual Socket E5504, 8 x 1Gb/s NIC ports (4 on board, 4 on an add-on quad port card), 2 x 10Gb/s NIC ports (dual port card), 1 x Dual Port Emulex LPe11002 4Gb/s FC HBA (the E5504 T710 is the primary management host)
2 x Dell R320, 32GB RAM, Single Socket E5-2430 CPU, 2 x 1Gb/s NIC Ports On board, 2 x 10Gb/s NIC ports (dual port card), 1 x Dual Port Emulex LPe11002 4Gb/s HBA (Beta Testing Hosts)
6 x vESXi hosts with 8GB RAM (used for vShield, Lab Manager, vCloud Director, and Cisco Nexus 1000v testing)
All hosts running ESXi 5.5.
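If you ever want to script a quick check that every host in a lab like this reports the same ESXi build, a minimal pyVmomi sketch along these lines would do it. The vCenter address, credentials, and the unverified SSL context are placeholder assumptions for a lab, not my actual settings.

```python
# Hypothetical sketch: confirm every host in the inventory reports the same
# ESXi build. The vCenter address and credentials are placeholders, and
# certificate checking is disabled only because a lab uses self-signed certs.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        # e.g. "VMware ESXi 5.5.0 build-xxxxxxx"
        print("%s: %s" % (host.name, host.summary.config.product.fullName))
    view.Destroy()
finally:
    Disconnect(si)
```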
Network:
vSS0 – vmkernel for management, iSCSI1, iSCSI2, N1KV VSMs (2 x 1Gb/s Uplinks)
vDS0 – VM Networking, multiple port groups and VLANs including a trunk promiscuous VLAN for vESXi servers (see the sketch after this list), AppSpeed port group (2 x 1Gb/s Uplinks)
vDS1 – Management vSwitch for vMotion, main iSCSI port groups (2), FT, and VM port groups (2 x 10Gb/s Uplinks)
N1KV – VM Networking, N1KV packet, control, management (4 x 1Gb/s Uplinks in two uplink port profiles)
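For the promiscuous trunk port group used by the vESXi servers above, the important bits are a trunked VLAN range plus promiscuous mode and forged transmits allowed, otherwise the nested hosts see no traffic. Here's a rough pyVmomi sketch of building such a port group spec; the names, VLAN range, and port count are illustrative assumptions rather than my exact configuration.

```python
# Hypothetical sketch: a distributed port group spec suitable for nested ESXi.
# Port group name, port count, and VLAN range are assumptions for illustration.
from pyVmomi import vim

def make_nested_esxi_pg_spec(name="vESXi-Trunk", ports=16):
    # Nested ESXi needs promiscuous mode plus forged transmits/MAC changes allowed.
    security = vim.dvs.VmwareDistributedVirtualSwitch.SecurityPolicy(
        allowPromiscuous=vim.BoolPolicy(value=True),
        forgedTransmits=vim.BoolPolicy(value=True),
        macChanges=vim.BoolPolicy(value=True))
    # Trunk the full VLAN range through to the nested hosts.
    vlan = vim.dvs.VmwareDistributedVirtualSwitch.TrunkVlanSpec(
        vlanId=[vim.NumericRange(start=1, end=4094)])
    port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
        securityPolicy=security, vlan=vlan)
    return vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        name=name, numPorts=ports, type="earlyBinding",
        defaultPortConfig=port_config)

# Usage (assuming `dvs` is an already-retrieved vim.DistributedVirtualSwitch):
# task = dvs.AddDVPortgroup_Task([make_nested_esxi_pg_spec()])
```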
The physical network has two main 48-port 1Gb/s Dell N2048 switches, stacked together at 42Gb/s, with a dual 1Gb/s LAG split across each of the core 10G switches. A 24-port Dell 8024 and a Dell 8132 (full layer 3, QoS etc.) form the core, which also connects the hosts and shared NAS storage. My main internet routers (two of them, load balanced over 2 x VDSL connections), my HAN (Home Area Network) access, presentation equipment, AV equipment, WiFi access points etc. all connect to the 10G core at 1Gb/s as I had spare ports.
Storage:
Dell Hosts
Tier 0:
Fusion-io ioDrive2 1.2TB SSD x 8 (was previously 2), plus one Micron PCIe SLC flash card (320GB)
Tier 1:
Nutanix NDFS. This is the main horsepower in my lab and where I do all the testing for my work. The rest of my equipment is just for my own testing in my spare time.
Tier 2:
Each server has 8 x 300GB 15K SAS disks locally, configured as a single RAID 10 datastore. On top of each datastore I’ve placed an HP P4000 VSA (one per host), which consumes 80% of the local datastore; the rest is used for local appliances/VMs. The VSAs are in one management group. Volumes are configured and presented to the hosts as Network RAID 5 and RAID 10, all thin provisioned. Performance is OK and maxes out at about 300MB/s during performance tests. The VSAs are connected to the port groups with the software iSCSI initiators on the 10Gb/s vDS.
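For anyone scripting the iSCSI side of a setup like this, the piece that matters is binding the iSCSI vmkernel ports to the software iSCSI adapter on each host. The sketch below is a hypothetical pyVmomi version; the vmk names and the assumption of one software iSCSI HBA per host are illustrative, not my exact configuration.

```python
# Hypothetical sketch (not my exact config): bind iSCSI vmkernel ports to the
# software iSCSI adapter on a host, which is how the P4000 VSA volumes are
# reached over the 10Gb/s vDS. The vmk names below are illustrative.
from pyVmomi import vim

def bind_iscsi_vnics(host, vnics=("vmk2", "vmk3")):
    """Bind the given vmkernel NICs to the host's software iSCSI HBA."""
    storage = host.configManager.storageSystem
    sw_iscsi = next((hba for hba in storage.storageDeviceInfo.hostBusAdapter
                     if isinstance(hba, vim.host.InternetScsiHba)
                     and hba.isSoftwareBased), None)
    if sw_iscsi is None:
        raise RuntimeError("No software iSCSI adapter enabled on %s" % host.name)

    iscsi_mgr = host.configManager.iscsiManager
    bound = {b.vnicDevice
             for b in iscsi_mgr.QueryBoundVnics(iScsiHbaName=sw_iscsi.device)}
    for vnic in vnics:
        if vnic not in bound:                      # skip ports already bound
            iscsi_mgr.BindVnic(iScsiHbaName=sw_iscsi.device, vnicDevice=vnic)

# Usage: for each host object retrieved via pyVmomi, call bind_iscsi_vnics(host).
```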
Tier 3: A QNAP 4-disk NAS and a QNAP 8-disk NAS serving out NFS (test VMs and templates) and iSCSI. This is my general file storage dumping ground. The 8-disk unit has 4TB disks.
Management:
3 x vCenter servers (3 x 5.5 U1a)
1 x vSyslog (FT Protected)
SQL DB for vCenter
VUM, VUMDS, View, View Sec Server (with PCoIP)
5.5 vMA for general management
vCOps Enterprise v5.8
Virtual Infrastructure Navigator
vCenter Configuration Manager
vCloud Director
vShield App
F5 LTM/VE
vSphere Web Client (2 instances load balanced by the F5 LTM/VE; a quick health-check sketch follows this list)
Nexus 1000v (2 x VSM’s)
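As a quick aside on the load-balanced Web Client, a tiny Python check like the one below can confirm both instances respond before the F5 pools traffic to them. The hostnames are placeholders and 9443 is simply the vSphere 5.5 Web Client default port; adjust for your own setup.

```python
# Hypothetical sketch: check that both Web Client instances answer before the
# F5 LTM/VE pools traffic to them. Hostnames are placeholders.
import ssl
import urllib.request

WEB_CLIENTS = [
    "https://webclient01.lab.local:9443/vsphere-client/",
    "https://webclient02.lab.local:9443/vsphere-client/",
]

ctx = ssl._create_unverified_context()   # lab-only: self-signed certificates
for url in WEB_CLIENTS:
    try:
        with urllib.request.urlopen(url, timeout=5, context=ctx) as resp:
            print("%s -> HTTP %s" % (url, resp.status))
    except Exception as exc:             # an unreachable node should be pulled from the pool
        print("%s -> FAILED (%s)" % (url, exc))
```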
Power:
2 x Dell 1920W UPS – Core servers, core switches, storage
3 x APC 1500 VA Smart-UPS – Edge switches, presentation equipment, wireless network, management host, desktops
I’m sure many people could get away with a lot less, especially if they have access to a company lab or are just doing functional testing. But I also wanted to be able to do performance testing and simulate real-world situations. I’ve used this setup to identify multiple bugs and design/config errors and then get them fixed for customers. At the time I built this I didn’t have access to a company lab that was up to scratch, so I decided to invest significantly in building my own.
Some other home labs you should definitely check out are:
—
This post first appeared on the Long White Virtual Clouds blog at longwhiteclouds.com. By Michael Webster +. Copyright © 2012 – 2014 – IT Solutions 2000 Ltd and Michael Webster +. All rights reserved. Not to be reproduced for commercial purposes without written permission.
Can you post some pictures of this? Looks like you are having so much fun in that lab!!!
Congrats!
Looking forward to comparing notes on the FusionIO benefits.
Get some VMware view goodness into your lab!
I've got VMware View 5.0. I use it for remote access and testing.
AWESOME lab man
A lot of hard work has gone into planning such a beautiful lab… hats off!!
Awesome…
"Server room" is in your bed room.
Have fun replacing that every 3-4 years when the SAN and switches become obsolete.
There is a good chance it'll last for more than 5 years given I bought enterprise class equipment, but in any case I replace and enhance some of it each year and spread the investment over multiple years to avoid a massive single year hit.
That is an amazing lab! Very nice!
Awesome lab….
Great lab! Thanks for sharing ..
How's the noise level with all those 19" Dells and so many 15K disks? Because this is the one thing that bothers me most in my own lab.
Actually not that bad since I put it in the rack. But if you're used to quiet then it is pretty noisy. The FC switch and CX500 array are very noisy though, so I don't have those turned on much. Plus they use heaps of power.
Thanks for commenting!
Mmmhhh, I'm thinking of moving the stuff over to a friend's place, which has plenty of space. But I'm not quite sure if I'm gonna feel happy working on the lab only from a remote location. Actually that's something I've never seen much on the net. Either it's a "home" lab or a DC (co)location.
Putting it in a DC seems too much hassle (limitations) and too expensive (around $149–169 a month).
A remote (friend's) location would only cost me an internet uplink, I guess $39 a month, and some electricity compensation.*
* My power consumption is moderate, between 270–480 watts (@ 230 volts of course)
I have 3 HP ML110 G7s as physical hosts, 1 big Supermicro, 2 x passively cooled Cisco gigabit switches, and one QNAP 8xx (tier 4 like you). Main storage is a Nexenta ZFS VSA (with pass-through storage) and, as secondary, 2 x HP P4000 VSAs (mirrored R10).
You're correct to think "that's not 100% HCL stuff…" although I've not had a single issue anywhere yet. At the time of starting the lab it was budget vs HCL.
Well, enough off-topic rambling for now I guess 🙂
Fantastic lab! I'm super happy to see more people with great home setups. Great job!
Hi mate,
Do you know where I can get those vault disks for a Clariion CX500? I have a CX500 but no vault… Sad… 🙁
Cheers
Hi Allesandro, I'm not sure where you can get the vault disks or how to create them. Have you tried looking for the procedure on EMC PowerLink? Perhaps you could find some spare parts on eBay or a similar site.
Michael. Thanks for all the hard work and effort to support the virtualization community. I reference your blogs at least once per month.
Whoa! This is the mother of all home labs. You need to get it registered for the Guinness Book of Records mate! I'm fortunate to be able to use the VMware ASEAN lab as my den.
That's one serious Mutha of a Home Lab! This must cost a fortune to run!
Awesome Michael Webster, I wish I had one like this.
Probably one of the most unreasonable home labs I have ever seen just from a hardware cost and power consumption point of view. Also, putting two Fusion-io 1.2 TB cards into a home lab a la "may make them permanent" is total BS. No one in their right mind puts two multi-thousand dollar PCIe SSDs into a home lab which will never ever see enough load to come even close to needing that type of performance. To me this looks like a sensationalist attention-grabbing post.
You may find that if you read the other posts on this blog that I do put my Fusion-io MLC and Micron SLC cards to good use and under load, for example IO Blazing Single VM Storage Performance with Micron and Fusion-io. In fact I'll be testing some new flash based virtualization technology in the near future. I'm very grateful for Fusion-io and Micron for supplying those cards for me to test. I use my lab often for performance testing to provide data to support articles on this blog and also for presentations and to solve customer configuration and performance problems. It is for this reason that I also have 10G infrastructure and enterprise grade servers.
Do you have recurring licensing costs?
No recurring licensing costs, as I'm a member of the evangelist programs of all the relevant companies whose software I mostly use.