VMware’s vCloud Director is a very good way for organisations to start taking advantage of cloud computing, including private, public and hybrid models. Cloud computing can offer new efficiencies and cost savings for organisations that optimise its use. But where do you start? If you go searching for best practices, design considerations and references for designing and building a vCloud Director environment, you will find plenty for large scale deployments. But it can be difficult to find much in the way of design considerations for starting off with a small vCloud Director environment, such as for a proof of concept, small lab or pilot. In this article I hope to offer some advice that will allow you to dip your toes in the water and get up and running quickly without much complication, while still allowing you to scale up in the future.
Even when designing for a small environment, one resource I highly recommend you review is the vCloud Architecture Toolkit (vCAT), which is VMware's validated architecture for cloud computing, along with supporting tools. The vCAT is fully supported by VMware, so if you leverage the design considerations and guidance contained within it, you know you can get support.
This article focuses on Resource Allocation Models in vCloud Director. In future articles I will cover additional design considerations.
Pre-requisites and assumptions
- You have enough knowledge to install and configure a simple VMware vCloud Director environment.
- You have a minimum of 3 hosts available (1 for Management, 2 for vCloud consumption).
Allocation Models
vCloud Director allows resources to be allocated from a Provider Virtual Datacenter (PvDC) to Organisation Virtual Datacenters (OrgVDCs) for consumers to use. The allocation is done in accordance with one of three different models: Pay As You Go (PAYG), Allocation Pool, and Reservation Pool.
The advantage of PAYG is that a defined amount of resource is reserved per VM, and only while the VM is powered on, so powered-off VMs don't consume any resources from the OrgVDC or PvDC. The system admin defines the overcommitment and service levels for compute resources. New in vCloud Director 5.1, a PAYG Virtual Datacenter can have a limit set to prevent one OrgVDC from consuming all cloud resources.
With the Allocation Pool, the OrgVDC consumes its guaranteed portion from the PvDC regardless of whether any VMs are powered on, but this at least gives the tenant a defined amount of resources. Each VM is then given a reservation equal to the guaranteed percentage, and the deployed VMs can consume resources up to the defined limit of the allocation model. Powered-off VMs don't consume resources from the OrgVDC. The system admin defines the overcommitment and service levels for compute resources. New in vCloud Director 5.1, every vCPU has a defined limit set, which impacts how many VMs can be powered on within an OrgVDC. This vCPU limit is defined by the system administrators, as illustrated below.
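As a quick illustration of how that vCPU limit constrains an OrgVDC (a back-of-the-envelope sketch with made-up numbers, not output from vCloud Director itself):

```python
# vCD 5.1 Allocation Pool: each vCPU is limited to the configured vCPU speed,
# so the OrgVDC's CPU allocation caps how many vCPUs can be powered on.
# All numbers here are illustrative assumptions.
cpu_allocation_mhz = 10000   # OrgVDC CPU allocation from the PvDC
vcpu_speed_mhz = 1000        # per-vCPU limit set by the system admin
max_powered_on_vcpus = cpu_allocation_mhz // vcpu_speed_mhz
print(max_powered_on_vcpus)  # 10 vCPUs' worth of VMs can be powered on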
With the Reservation Pool, you set a defined limit and it's then up to the tenant of the OrgVDC to define the level of overcommitment and how much resource is reserved for each VM/vApp. You guarantee the resources from the PvDC to the OrgVDC, and the OrgVDC consumes those resources regardless of whether any vApps are powered on. VMs or vApps can have different reservations or no reservation at all; it's up to the tenant to choose. The catch is that the tenant or consumer can cause performance problems within their OrgVDC if they overcommit too aggressively, so they need to take some care with capacity planning around their usage.
In some ways an Allocation Pool with 100% of resources guaranteed is superior to a Reservation Pool, as the system admin/provider defines the SLA of the pool and its VMs to be 100% guaranteed, i.e. allocation = reservation. The tenant then can't get themselves into trouble as easily, and it's up to the system admin to manage capacity.
The Allocation Pool and Reservation Pool offer more predictable billing to the tenant, but are less efficient from a service provider perspective in terms of resource utilization. PAYG offers less predictable billing to the tenant, but potentially more efficient resource utilization and more flexibility in a small environment.
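To make the trade-offs concrete, here's a rough comparison in plain Python of how much memory each model reserves from the PvDC for one OrgVDC. The function names and figures are my own illustration, not anything from vCloud Director:

```python
# How much of the PvDC each model reserves for one OrgVDC (memory, in GB).
# All figures are made-up examples for illustration only.

def payg_reserved(powered_on_vm_mem_gb, guarantee_pct):
    """PAYG: resources are reserved per VM, and only while it is powered on."""
    return sum(mem * guarantee_pct for mem in powered_on_vm_mem_gb)

def allocation_pool_reserved(allocation_gb, guarantee_pct):
    """Allocation Pool: the guaranteed share of the OrgVDC allocation is
    reserved from the PvDC whether or not any VMs are powered on."""
    return allocation_gb * guarantee_pct

def reservation_pool_reserved(allocation_gb):
    """Reservation Pool: the full allocation is reserved from the PvDC;
    per-VM reservations within it are then up to the tenant."""
    return allocation_gb

# Example: a 100 GB OrgVDC, 50% guarantee, and 5 of 10 x 8 GB VMs powered on.
powered_on = [8] * 5
print(payg_reserved(powered_on, 0.5))        # 20.0 GB - only running VMs count
print(allocation_pool_reserved(100, 0.5))    # 50.0 GB - fixed, even with all VMs off
print(reservation_pool_reserved(100))        # 100 GB - fixed, even with all VMs off
```

Note that with a 100% guarantee the Allocation Pool result equals the Reservation Pool one, which is the equivalence described above.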
Recommendation
When designing a small environment I generally recommend you start with the PAYG resource model, especially where usage is unknown. This allows the best sharing of resources between OrgVDCs and Organisations. PAYG provides a dynamic, on-demand environment, but can still have an upper limit set per OrgVDC. The PAYG model supports an elastic VDC, which means it can grow across clusters, making it easy to expand the environment as and when needed. The Allocation Pool model also supports elastic VDC from v5.1 with some restrictions, but exactly which Allocation Pool functionality you get depends on which update release is being used.
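If you'd rather script the OrgVDC creation than click through the UI, the rough sketch below shows what creating a PAYG OrgVDC could look like against the vCloud Director 5.1 REST API using Python and the requests library. Treat it as a starting point only: the hostname, credentials, IDs and capacity figures are placeholders, and the payload (including the VdcStorageProfile element that 5.1 expects, omitted here for brevity) should be verified against the vCloud API reference for your exact version.

```python
# A rough sketch of creating a PAYG OrgVDC through the vCloud Director 5.1
# REST API. Hostname, credentials, IDs and capacity values are placeholders;
# verify the payload against the vCloud API reference for your version.
import requests

VCD = "https://vcd.example.com"            # placeholder cell address
ACCEPT = "application/*+xml;version=5.1"   # API version header

# Log in: vCD returns a session token in the x-vcloud-authorization header.
session = requests.post(VCD + "/api/sessions",
                        auth=("administrator@System", "password"),
                        headers={"Accept": ACCEPT},
                        verify=False)      # lab only; use proper certs in production
token = session.headers["x-vcloud-authorization"]

# "AllocationVApp" is the API's name for Pay As You Go; the other models are
# "AllocationPool" and "ReservationPool". A VdcStorageProfile element is also
# required in 5.1 but omitted here for brevity.
create_vdc = """<?xml version="1.0" encoding="UTF-8"?>
<CreateVdcParams name="small-payg-ovdc" xmlns="http://www.vmware.com/vcloud/v1.5">
  <AllocationModel>AllocationVApp</AllocationModel>
  <ComputeCapacity>
    <Cpu><Units>MHz</Units><Allocated>0</Allocated><Limit>10000</Limit></Cpu>
    <Memory><Units>MB</Units><Allocated>0</Allocated><Limit>16384</Limit></Memory>
  </ComputeCapacity>
  <ProviderVdcReference href="https://vcd.example.com/api/admin/providervdc/PVDC-ID-HERE"/>
</CreateVdcParams>"""

# POST to the org's 'add vdc' link (the vdcsparams URL on the admin org record).
resp = requests.post(VCD + "/api/admin/org/ORG-ID-HERE/vdcsparams",
                     data=create_vdc,
                     headers={"Accept": ACCEPT,
                              "x-vcloud-authorization": token,
                              "Content-Type":
                                  "application/vnd.vmware.admin.createVdcParams+xml"},
                     verify=False)
resp.raise_for_status()
```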
—
This post first appeared on the Long White Virtual Clouds blog at longwhiteclouds.com, by Michael Webster. Copyright © 2013 – IT Solutions 2000 Ltd and Michael Webster. All rights reserved. Not to be reproduced for commercial purposes without written permission.
Very interesting, thank you. One question: what do you think of the vCloud Integration Manager guidelines for a public cloud, released here? vmware.com/pdf/vCloudIntegrationManager10_Cloud_Guidelines.pdf
Do you think this design would require two separate VLAN connections if co-locating inside a datacenter, or would it be possible to have a single VLAN to the internet and still adhere to the design?
Hi George, I think the guidelines are appropriate, and I think you'd need to implement the VLANs as recommended to adhere to the design. When you say co-locating inside a datacenter, would your use case still be for public cloud? I guess you could reduce by one VLAN and have the hosts' vmkernel port for management connect to the management network, which is quite common in public cloud implementations. But this would be a variation from what is recommended in the implementation guidelines, and the decision would really be dependent on what your security requirements are. Given it's only the management vmkernel port that would be on the management VLAN, ordinarily I wouldn't have a problem with that. But you need to consider all the security requirements and constraints before making a decision.
Thank you Michael. Sorry for the confusion: yes, it would still be for the public cloud use case. My problem was that my colo provider could not offer two separate internet VLANs (I'm only referring to the internet connection VLANs from my colo provider, not my internal/private VLANs).
According to the diagram from the above PDF (snapshot here: http://www.picpaste.com/Screen_Shot_2013-02-02_at… ), the right part shows one connection needed for the customer's workloads, and one connection needed for my cloud management workloads (where my central firewall sits), unless I'm not interpreting the diagram correctly?
Keep up the great work!!
Hi George, in that case one VLAN for the Internet is fine. There is no need, as far as I can see, to separate them out, given they are both considered the Internet, i.e. the wild west. Provided your internal workloads and vCD cells etc. are still protected by a firewall, of course.
Great, thanks!! Then I'll just need to make sure my switches and firewall are properly configured.
(colo wild west/internet) → (firewall) → (pair of switches) → (4 ESXi hosts running the internal workload/management VMs and cloud consumption VMs)
I will just configure all of my internal VLAN IP ranges in the firewall, and then set up a firewall rule not to scan any traffic that originates from VLAN K (vCDNI), since this traffic will be secured by the automated vShield VM.
VLAN J and VLAN C will have the same gateway (the one assigned by my colo).
Have a great weekend!