When former VMware CTO Dr. Steve Herrod coined the term Software-Defined Datacenter and presented the concept at VMworld 2012, a lot of people in the IT industry immediately jumped on the bandwagon. The concept of Abstracting, Pooling, and Automating not just compute, but also network, storage, security and compliance, makes complete sense to many people. I think a lot of people will agree that the vision of the Software-Defined Datacenter is an excellent theoretical goal and roadmap for the future of IT. Since then just about every IT vendor has come out with their Software-Defined Something. But what about the reality?
Many of the technical building blocks are available today, or will be in the very near future, to allow you to build a Software-Defined Datacenter. Depending on your technology vendor selections you can get Software-Defined Storage (SDS) and Software-Defined Networking (SDN), and VMware has for some time been able to deliver Software-Defined Compute (SDC), security and compliance. In fact, the image below was my expression of what I thought the Software-Defined Datacenter would look like when you automate security and compliance with DR across primary and recovery locations, as I presented at VMworld 2012.
The idea here is that you can provision any workload and protect it for DR purposes along with the policies and service classifications that form part of its workload definition, have those policies applied automatically and replicated to the DR site, and test the applications together with their policies without disruption to production. This dramatically reduces the time to bring a new service online, ensures it's protected and in compliance with regulatory and security policy (correct firewall rules, hardening options, etc.), and gives you confidence that when it fails over to DR it will work as you expect. You will also have automated monitoring, performance management and capacity planning, so that operating and managing the environment is greatly simplified and much less complex. This is why VMware created the vCloud Suite as the foundation of the Software-Defined Datacenter.
The rationale for Software-Defined Everything is that resource Abstraction, Pooling and Automation allow much more efficient use of each resource while still providing quality of service. The same idea was behind virtualization originally: you can take multiple servers, pool them together into a cluster using VMware HA and VMware DRS, and make much more efficient use of the servers' compute and IO capacity, so that over-provisioning is unnecessary and capacity can be increased just in time. Environments where servers are virtualized have much higher average utilization and therefore lower costs. In enterprise environments today some of the most inefficiently utilized resources are the networks and storage, and this is where there are big gains to be made. In effect the Software-Defined Datacenter is all about breaking down silos of infrastructure and applications and having a completely converged environment, one that can be private, public or hybrid. Each application is provisioned where the resources of the Software-Defined Datacenter are best able to meet its policy and service definition.
This brings us nicely to the title of this article and to application licensing policies, which I believe are the biggest barrier to making a truly automated Software-Defined Datacenter a reality. The problem is not technical, it is commercial. Different application vendors have vastly different ways of licensing, and many of these are not compatible with the vision of an abstracted, pooled infrastructure. Many application vendors require that you license all of the CPU cores in a host for their application, which makes it uneconomic to run other applications on that server, or to license every server for every possible application, which is what would otherwise be required. The result of these physical-hardware, core-based licensing policies is silos of capacity, the exact thing the Software-Defined Datacenter aims to prevent, and this is inefficient from a provisioning, availability and operations perspective. Here is an image depicting an enterprise environment with multiple applications that the customer has chosen to license by physical processor and separate into individual silos.
In the environment above, every time a new application instance is deployed it needs to be placed in the appropriate cluster. Unfortunately many provisioning engines, including VMware's own provisioning capability in the vCloud Suite (vCloud Automation Center), lack software licensing placement and management awareness. So right now you still need humans managing software licensing policies and placement decisions in some cases. Having all of these silos prevents greater abstraction, pooling and automation. The best that can be done right now with vCloud Automation Center is to force certain application templates to provision to certain fixed cluster locations. Even though the tagging features of the vSphere 5.1 Web Client would allow some flexibility in this regard, the lack of a tagging API makes that impossible at this time.
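To make the missing capability concrete, here is a minimal sketch of what license-aware placement might look like: a provisioning engine routes each application template only to clusters already licensed for it, falling back to general-purpose capacity for unlicensed workloads. All cluster names, templates and capacity figures are hypothetical; this is not how vCloud Automation Center actually works.

```python
# Hypothetical license-aware placement: route templates only to clusters
# that hold the required application licenses. All data is illustrative.

LICENSED_CLUSTERS = {
    "oracle-db":  ["cluster-oracle-01"],                 # per-core licensed silo
    "sql-server": ["cluster-sql-01", "cluster-sql-02"],  # per-core licensed silos
    "general":    ["cluster-gp-01", "cluster-gp-02"],    # no core-based licenses
}

CLUSTER_FREE_VCPUS = {
    "cluster-oracle-01": 8,
    "cluster-sql-01": 0,
    "cluster-sql-02": 32,
    "cluster-gp-01": 64,
    "cluster-gp-02": 16,
}

def place(template: str, vcpus: int) -> str:
    """Return the first cluster licensed for this template with enough free vCPUs."""
    candidates = LICENSED_CLUSTERS.get(template, LICENSED_CLUSTERS["general"])
    for cluster in candidates:
        if CLUSTER_FREE_VCPUS.get(cluster, 0) >= vcpus:
            return cluster
    raise RuntimeError(f"no licensed cluster has capacity for {template}")

print(place("sql-server", 8))  # skips the full cluster-sql-01
```

A human performing the placement decisions described above is effectively doing this lookup by hand today; the point is that the logic is simple enough that it belongs in the platform itself.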
Of course licensing dedicated clusters that are right-sized for your applications isn't entirely a bad thing; in fact it can be very efficient in both licensing and economic terms, especially when you consider the higher cost of applications compared to infrastructure (in the case of things like Oracle, IBM and other enterprise software). Having dedicated, right-sized clusters allows you to maximize the investment you've made in your application licenses and also makes license compliance very simple. Once a cluster is licensed for an application you can run an unlimited number of instances of that application, provided you have sufficient physical resources and acceptable performance. This allows you to take advantage of vCPU over-provisioning or overcommitment (assuming performance objectives can be met and not all systems run 100% utilized) while keeping within SLAs, which would not be economically viable in many cases with per-vCPU licenses. Better licensing and system consolidation in this way can save an organization millions of dollars in capital purchases and operational expenses. My point is that dedicated clusters aren't bad from a licensing point of view at all; it's just that they go against the concept of the Software-Defined Datacenter, and the differing licensing methods and complicated license compliance make achieving a true Software-Defined Datacenter incredibly difficult in reality, especially without license awareness built into the SDDC platform itself.
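A back-of-the-envelope calculation shows why per-host core licensing plus overcommitment can beat per-vCPU licensing on a dedicated cluster. The prices, host counts and VM sizes below are purely hypothetical assumptions for illustration, not real vendor list prices.

```python
# Hypothetical comparison of the two licensing models discussed above.
# All prices and sizes are assumptions for illustration only.

PRICE_PER_CORE = 10_000        # assumed cost of one physical-core license
PRICE_PER_VCPU = 10_000        # assumed cost of one per-vCPU license
HOSTS, CORES_PER_HOST = 4, 16  # a dedicated, right-sized cluster: 64 cores
VMS, VCPUS_PER_VM = 40, 4      # 160 vCPUs on 64 cores = 2.5:1 overcommit

# License every physical core once; run unlimited instances on the cluster.
per_host_core_cost = HOSTS * CORES_PER_HOST * PRICE_PER_CORE

# License every assigned vCPU; overcommitment multiplies the license bill.
per_vcpu_cost = VMS * VCPUS_PER_VM * PRICE_PER_VCPU

overcommit_ratio = (VMS * VCPUS_PER_VM) / (HOSTS * CORES_PER_HOST)
print(per_host_core_cost, per_vcpu_cost, overcommit_ratio)
```

Under these assumptions the per-vCPU bill is 2.5x the per-host-core bill, which is exactly the overcommit ratio: with core-based licensing on a dedicated cluster, every extra overcommitted VM is licensing-free, while with per-vCPU licensing each one adds cost.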
There are some enlightened application vendors in the market right now that offer virtualization- and cloud-friendly licensing policies, where you can license based on the virtual resources assigned to a virtual machine or on the number of users, and move the licenses between private and public clouds. Two examples immediately come to mind: SAP and Microsoft. Because SAP is licensed based on named users, you can deploy as many instances as you need and deploy them wherever you want, including the Oracle DB if you licensed it from SAP. With Microsoft, provided you have licenses covered by Software Assurance, you can migrate the systems between hosts or move them to a hosted or cloud environment as you wish. With Microsoft's licensing for SQL Server, for example, you have the option of licensing by virtual CPU core (minimum of 4) or by physical CPU socket (minimum of 4 cores per socket). So with SAP and Microsoft it is incredibly easy to achieve a real Software-Defined Datacenter that is abstracted, pooled, and automated while remaining in compliance with their software license policies in a very simple way. You also get the flexibility of running your systems in a private, public or hybrid cloud or hosting environment.
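The two SQL Server options and their 4-core minimums can be sketched as a simple calculation. This is an illustration of the minimums described above, not licensing advice; always check the current Microsoft licensing terms.

```python
# Sketch of the SQL Server per-core minimums mentioned above: whether
# licensing by vCPU or by socket, at least 4 core licenses apply per unit.
# Illustrative only, not licensing advice.

def per_vcpu_core_licenses(vcpus: int) -> int:
    """Per-VM core licensing: one core license per vCPU, minimum of 4."""
    return max(vcpus, 4)

def per_socket_core_licenses(sockets: int, cores_per_socket: int) -> int:
    """Per-host core licensing: all cores licensed, minimum 4 per socket."""
    return sockets * max(cores_per_socket, 4)

print(per_vcpu_core_licenses(2))       # a 2-vCPU VM still needs 4 licenses
print(per_socket_core_licenses(2, 6))  # a 2-socket, 6-core-each host needs 12
```

The key property for the SDDC is that the per-vCPU option is tied to the virtual machine rather than the hardware, so the license moves with the VM wherever it is placed.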
I have also worked with IBM WebSphere Application Server and used it with sub-capacity licensing (by vCPU), which also makes it easier to run in a Software-Defined Datacenter (with the IBM License Metric Tool used for compliance management). However, because the licenses are based on Processor Value Units (PVUs), it is hard to use in a cloud environment where the customer may not know the hardware specifications, and therefore the PVU rating, of the underlying servers. This could potentially create a licensing compliance issue.
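A small calculation illustrates why the unknown hardware matters under sub-capacity licensing. The PVU-per-core rating below is an assumed value; in reality it comes from IBM's processor table and depends on the exact CPU model of the underlying host, which is precisely what a cloud tenant may not be able to see.

```python
# Hypothetical PVU sub-capacity calculation in the style described above.
# The PVU rating is an assumption; the real value depends on the host CPU
# model per IBM's processor table, which a cloud tenant may not know.

PVU_PER_CORE = 70   # assumed rating for the (unknown) underlying CPU
vm_vcpus = 8        # vCPUs assigned to the VM
host_cores = 16     # physical cores on the host

# Sub-capacity licensing caps the licensed cores at the lower of the
# VM's vCPUs and the host's physical cores.
licensed_cores = min(vm_vcpus, host_cores)
required_pvus = licensed_cores * PVU_PER_CORE
print(required_pvus)
```

If the host CPU were actually rated at 100 or 120 PVUs per core, the entitlement needed would change even though nothing about the VM did, which is the compliance risk the paragraph above describes.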
Due to the variance in licensing methods, the complexity around compliance, and the lack of tooling to manage and control licensing built into the base platform (VMware vCloud Suite), application licensing will remain a challenge to achieving a Software-Defined Datacenter reality. It is good to see that some enlightened application vendors already have licensing policies that work well in the new SDDC environment and are simple and easy to manage. These enlightened vendors give their customers flexibility and choice, and we should all support them. I hope more application software vendors take notice and start to follow their lead. I also hope that customers insist on licensing policies that support license portability between private, public and hybrid clouds, and that fully support the concepts of the Software-Defined Datacenter, when going through RFPs. The opportunities for economic savings from the Software-Defined Datacenter are massive, but it will take some time for the commercials and licensing policies to catch up. It's up to us all to educate our application vendors and encourage them to change their policies to fit more easily into this new paradigm of IT.
This post first appeared on the Long White Virtual Clouds blog at longwhiteclouds.com, by Michael Webster. Copyright © 2013 – IT Solutions 2000 Ltd and Michael Webster. All rights reserved. Not to be reproduced for commercial purposes without written permission.