I just read a great article written by James Knapp at ViFX (one of VMware’s Premier Partners in New Zealand) on the best practices (and good practices) of utilizing a Management Cluster as part of your VMware vSphere architecture. I would recommend you read it. I’ll give you my very brief take on this.
James’ article is titled Best Practice for your Management Cluster. He outlines the reasons for having a Management Cluster as you start to grow your environment; as you start to utilize more technologies such as Virtual Desktop/End User Computing, vCloud Director, and Business Critical Applications, this will become even more important. One of the key points to consider is that having a Management Cluster doesn’t necessarily mean you need more servers, as the management workloads will still consume the same amount of resources. But it can add considerably to the availability of your management systems and to your ability to manage your environment.
Separation of management infrastructure is not a new concept. It has been around for as long as I’ve been in IT (a little while). Traditionally your management functions, including authentication, would be separated from all of the production workloads, especially your network and security management systems. Going back a bit, I remember that with Windows NT domain design we architected User Domains and Resource Domains. So the concept of management clusters and resource clusters for VMware vSphere should not be that foreign to those of us who have been in this game a while.
James nails the benefits and the separation considerations, including storage. But one benefit is often overlooked, especially by organizations that still insist on running vCenter on a physical/native OS (yes, there are some): having a management cluster gives you far higher availability, scalability, and flexibility, without adding any costs, compared to the traditional approach. You also have far greater control over performance and quality of service with the latest advances in VMware technology.
Even if your management cluster isn’t a cluster at all, but a single management host, you are still far better off having your management functions separated and virtualized to gain the benefits of hardware independence and quality of service. In a very small environment a single management vSphere host may be enough, and this scales easily as your environment grows far larger. Granted, this won’t give you the benefits of high availability, but it’s still better than having no management cluster at all, and you can easily add additional hosts as and when needed later. The important thing is to have the separation of resources in the first place.
I highly recommend that everyone have separation of management in their conceptual, logical, and physical designs. It will make your job that much easier when things are going right and when they go wrong. This is also an important topic if you have any plans to sit the VMware VCAP exams or to become a VCDX. It’s just these types of considerations that will help you become successful, not just in exams and qualifications, but also in the real world. Thanks James for bringing up this important topic.
This post first appeared on the Long White Virtual Clouds blog at longwhiteclouds.com, by Michael Webster +. Copyright © 2013 – IT Solutions 2000 Ltd and Michael Webster +. All rights reserved. Not to be reproduced for commercial purposes without written permission.
I agree 100%! In my environments I usually set up a two-node critical services/management cluster for AD, vCenter/SQL, and other mission-critical IT infrastructure services. This helps during cold power-ups, as you know your vCenter, AD, and SQL VMs are on one of two hosts.
A management cluster could really be recommended in every mid-size enterprise environment, or in environments with mixed virtualization technologies, like vCloud Director (but not for all the hosts) or View.
I would argue that you should have management separation in all environments. That might mean a management cluster of only a single host. This is still superior to any of the alternatives, including having a physical vCenter. A Management Cluster of 2 hosts is the ideal starting point.

This is especially important for View, as it gives clear separation of the management functions from the production functions of running the desktops. Otherwise your management functions will be taking important resources away from your desktops, and your overcommitment levels are likely to be very different between the two types of workloads. This is the reason that a Management Cluster is a key tenet of the View Pod Reference Architectures. Whether you have a management cluster or not, your management systems will still consume exactly the same amount of resources, so why not separate them? It makes troubleshooting and operations much simpler.

The smallest environment I've designed that had a management cluster was 4 hosts total: 3 hosts for production and one host for management. This cluster was for a financial application that needed to meet PCI compliance and have vCloud Networking and Security deployed. Without having logical and physical management separation the solution would not have been possible. This environment has since been expanded to include a second host in the management cluster as more management workloads have been added.
How would the licensing work? We currently have a single 4-node cluster under vSphere Enterprise Plus. If I scrounged up 2 additional hosts for a management cluster, would they need to be licensed the same as production? I have a feeling this might be difficult to justify to the check writers as I'd potentially need another 4 CPU licenses.
The management cluster doesn't have to be licensed with the same edition as production. But you might want to do that anyway for OPEX reasons.
If your management workloads are already virtual you won't need additional hosts. You'll just need to partition some of the existing hosts, as the management workloads are already consuming compute resources from your cluster.
You still need the same amount of net compute resources to run management workloads regardless. But by virtualizing them you can also consolidate. The business case should be the same as for other workloads, with the additional benefit of reduced risk from having the separate management cluster and more predictable, isolated resources for your other VMs.
Let me be the naysayer, although the VMware reference architectures tell me I'm wrong. If you split your assets (in this case four hosts) into a 1-host Mgmt cluster and a 3-host Prod cluster, I see some deficiencies.
1) my mgmt assets lose HA (or at least have less HA than in a single cluster, regardless of total host count)
2) if my Prod workloads spike to more than 3 hosts of demand, they can't use the fourth host.
3) if my Mgmt workloads spike to more than 1 host of demand, they are stuck too.
I agree it's easier to bring up a downed site if the mgmt assets are easily located, but that's relatively rare, and I can do that with DRS affinity rules.
If I can trust DRS, SIOC, NIOC, and Resource Pools, why not have the largest pool possible? I understand that a Mgmt spike could impede production workloads, but if it's an important (and resource-pool prioritized) workload, isn't that a correct operational decision?
Please help walk me to the light…
Anyone know what happened to the article? The link is dead. Please post it if you have it!
Here’s the updated link: http://www.vifx.co.nz/blog/best-practice-for-your-management-cluster