During the Monster VM Design Panel at VMworld in San Francisco and Barcelona our panel was asked about vNUMA and the performance impact of various settings, including modifying the number of cores per virtual socket. Mark Achtemichuk (Mark A for short) has written an article on the VMware vSphere Blog taking a look at this, with some great test data to go with it. I’ll give you some highlights and then you can check out the actual article for yourself.
Mark’s article goes into the history of the Cores Per Socket setting (the number of vCPU cores per virtual CPU socket) in a bit of detail. It was really meant for licensing, not performance. As Mark shows in the article, for best performance you should configure your VMs to be wide and flat (1 vCPU core per virtual socket) and let vSphere and vNUMA do their thing to optimize performance, except in the case where you need to configure it differently for licensing reasons.
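If you want to see what that looks like in practice, here is a minimal sketch using pyVmomi (the Python vSphere SDK) that reconfigures a VM to 1 core per virtual socket. The vCenter address, credentials and VM name are placeholders rather than anything from a real environment, so treat it as an illustration, not a finished script.

```python
# Minimal pyVmomi sketch: present a VM's vCPUs as 1 core per virtual socket
# ("wide and flat"). The vCenter address, credentials and VM name below are
# placeholders only - adjust them for your own environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab use only; validate certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

# Find the VM by name using a container view of all VMs in the inventory.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "monster-vm01")

# 8 vCPUs presented as 8 sockets x 1 core, leaving the virtual NUMA topology
# to vSphere. The VM needs to be powered off for a CPU reconfiguration.
spec = vim.vm.ConfigSpec(numCPUs=8, numCoresPerSocket=1)
vm.ReconfigVM_Task(spec=spec)

Disconnect(si)
```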
Many times during the Monster VM Panel Discussions we were asked about NUMA and the benefits or penalties of different configuration options, especially around vNUMA and crossing NUMA boundaries. I would like to put your mind at ease. If your VM is right-sized, and it actually needs lots of vCPUs and memory, then crossing a NUMA boundary, regardless of the penalty, is much more beneficial for performance than not having the necessary resources at all. This also assumes that the hosts aren’t too aggressively overcommitted. The reason for this is that accessing other resources (such as network and disk) takes a lot longer than making a remote memory or processor call. Plus vSphere is very intelligent when it comes to scheduling your VMs and generally does a very good job of optimizing for performance without you needing to tweak anything. Aside from following some common sense best practices, such as sizing VMs so they divide evenly into your NUMA nodes, you don’t usually have to tweak too much.
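As a simple illustration of that sizing rule, here is a quick back-of-the-envelope check in Python. The host and VM figures are made up for the example; the point is just to compare a VM’s vCPU and memory footprint against a single NUMA node.

```python
# Back-of-the-envelope NUMA sizing check. The host and VM numbers here
# are purely illustrative, not from any particular environment.
def fits_in_numa_node(host_sockets, cores_per_socket, host_ram_gb,
                      vm_vcpus, vm_ram_gb):
    node_cores = cores_per_socket            # assuming one NUMA node per socket
    node_ram_gb = host_ram_gb / host_sockets
    return vm_vcpus <= node_cores and vm_ram_gb <= node_ram_gb

# Example: 2-socket, 8-core host with 256GB RAM and an 8 vCPU / 96GB VM
print(fits_in_numa_node(2, 8, 256, 8, 96))   # True - the VM fits in one node
```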
If you don’t know what I’m talking about, i.e. you’re not sure what NUMA is, check out the Wikipedia article on Non-Uniform Memory Access. NUMA is also often referred to as Non-Uniform Memory Architecture.
To get the full lowdown on Cores Per Socket and vNUMA, check out Mark A’s article – Does corespersocket Affect Performance?
Final Word
For those of you going to vForum in Sydney and Singapore you’ll have your opportunity to attend the Monster VM Panel in person and ask your panelists all of the toughest Monster VM related questions you’ve got. I hope to see a lot of you there. #vForumAU #vForumSG
—
This post first appeared on the Long White Virtual Clouds blog at longwhiteclouds.com, by Michael Webster +. Copyright © 2013 – IT Solutions 2000 Ltd and Michael Webster +. All rights reserved. Not to be reproduced for commercial purposes without written permission.
This is a very interesting recommendation. We have a large directory services implementation of Oracle Identity Manager that serves half a million students. At times we get bulk record update jobs (sent to databases on an Exadata) that max out the CPU on our virtual machines. We have been looking at possible performance issues throughout the stack. The VMs were all 2 socket / 1 core CPUs and we were asked to double the CPU on each VM. Because these are all RHEL VMs, to save on licensing (the hosts are licensed for 2-socket unlimited RHEL) we chose to go 2 socket / 2 core instead of 4 socket / 1 core. Our middleware folks told us that the jobs ran in the same amount of time despite the fact each VM had 2x the CPU power.
This makes me wonder if choosing multi-core sockets for our Red Hat virtual machines is part of the problem? Our Oracle DBAs tell us that the Exadata isn't the bottleneck. Network and storage are not the bottleneck, either. Of course, running this Oracle application on vSphere-based RHEL VMs is making our UNIX admins cry foul, since Oracle won't fully support the application unless it is running on Oracle products from top to bottom (physical Solaris, or OEL on Oracle VM).
Pretty frustrating.
Hi Mark, The good news is that your products are fully supported running on VMware vSphere, as Oracle certifies down to the OS level, and provided you're running a version of Red Hat that Oracle supports you'll have no trouble interacting with Oracle Support engineers. VMware does not change the OS at all, so it does not invalidate your Oracle support. Further, Oracle has specific support statements for VMware for most of its products, and these can be found in My Oracle Support (formerly Metalink). For example, the Oracle Database note is Doc ID 249212.1. Also, if there was ever any doubt, you could log a call with the VMware Oracle support team and they'd help you resolve it in the unlikely event you have any issues with Oracle Support directly. That said, my interactions with Oracle Support have always been very good.
The performance problem is pretty unlikely to be caused solely by the configuration of the vCPUs on the VMs. There is most likely some other factor limiting performance. It could well be the DB connection pool back to the Exadata from the OIM servers; I've seen this many times. There could also be something on the guest OS side limiting performance. Without analyzing the environment in detail it's hard to say. Do you have any reservations in place on your OIM VMs? Have you done any tuning at the OS level? Whatever is limiting performance, there will be something visible in the guest OS, vSphere host, or application logs. Do you have vCenter Operations and Log Insight to help you with the performance analysis?
Michael-
Thanks for the quick reply. The OIM VMs exist in a resource pool which reserves the entire memory footprint, so there is no need for memory reclamation. The version of Red Hat we are using is 6.4, but at Oracle OpenWorld we (not me) were told that would not be fully certified. I had my doubts. As for other tuning items, we told the middleware folks to limit Java garbage collection to 1 thread per vCPU core (in our case, 4), and they configured the Java heap size (8 GB) at half the RAM in each VM (16 GB). As the only layer I am responsible for is the hypervisor, it is difficult for me to do more than what I am already doing. I may need to do some due diligence or risk the environment being removed from vSphere altogether.
Unfortunately, we do not employ vCenter Operations or Log Insight. We are a fairly new Exadata shop, and recently lost our OCP-certified DBA. I will have to look into the DB connection pool issues to which you have referred. The main issue is that our current DBA is a huge fan of Solaris Zones on Sun Fires, and he was told this app needed to be implemented on VMware by our CTO, who has since moved on.
Thanks again for your insight.
Matt
Hi Mark,
GC threads need to be < the number of vCPUs. I would recommend starting with 2 on a 4 vCPU VM. Also check the resource pool shares and CPU ready time. What is the physical hardware running underneath vSphere?
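As a rough illustration of that rule of thumb, here is a small sketch; the 4 vCPUs and 8GB heap simply echo the figures mentioned in this thread, and -XX:ParallelGCThreads is the standard HotSpot flag for the parallel GC thread count.

```python
# Rough helper reflecting the rule of thumb above: keep parallel GC threads
# below the vCPU count. The 4 vCPUs and 8GB heap just echo the figures
# mentioned in this thread; -XX:ParallelGCThreads is the standard HotSpot flag.
def suggested_gc_threads(vcpus: int) -> int:
    return max(1, vcpus // 2)   # e.g. 2 GC threads on a 4 vCPU VM

vcpus = 4
print(f"java -Xmx8g -XX:ParallelGCThreads={suggested_gc_threads(vcpus)} ...")
```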
Thanks for the GC note. No contention on the hosts (running < 50% even during these batch operations). Underlying infrastructure: HP C7000 enclosures with Gen8 blades, 2x 8-core 2.7GHz CPUs, FlexFabric, and 3PAR T400 storage with ample spindles.
Matt
Hi
Check the BIOS power setting on your Gen8 blades; they are delivered with the balanced power setting, which does not give full performance.
Set it to optimal or OS Control (you can then use the vSphere power management setting to get full performance).
Hi Michael,
I also have a query about vNUMA.
If I have an ESX host with 2 sockets, 8 cores per socket, and 512GB RAM, each NUMA node has around 256GB RAM.
If I have 3 VMs, each with 8 vCPUs and 160GB RAM with a memory reservation (because they run JVMs, for which a memory reservation is recommended), do you think I should enable vNUMA for this? Two of the VMs can't fit in the same NUMA node's memory (two VMs need 320GB RAM, which is more than one NUMA node's ~256GB), so one of them will have to access remote node memory.
Another question: what is the drawback of vNUMA?
Thank you
Marco
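To put the arithmetic in that scenario into a quick check, here is a small sketch using the figures from the comment above (the per-node number is simply half the host's 512GB):

```python
# Scenario from the comment above: 2-socket host with 512GB RAM,
# three 8 vCPU / 160GB VMs, each with a full memory reservation.
node_ram_gb = 512 / 2          # 256GB of local memory per NUMA node
vm_ram_gb = 160

print(vm_ram_gb <= node_ram_gb)       # True: a single VM fits within one node
print(2 * vm_ram_gb <= node_ram_gb)   # False: two VMs exceed a node, so at
                                      # least one of them will use remote memory
```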