8 Responses

  1. Matt H

    This is a very interesting recommendation. We have a large directory services implementation of Oracle Identity Manager that serves half a million students. At times we get bulk record update jobs (sent to databases on an Exadata) that max out the CPU on our virtual machines. We have been looking at possible performance issues throughout the stack. The VMs were all 2-socket / 1-core CPUs, and we were asked to double the CPU on each VM. Because these are all RHEL VMs, to save on licensing (the hosts are licensed for 2-socket unlimited RHEL), we chose to go 2-socket / 2-core instead of 4-socket / 1-core. Our middleware folks told us that the jobs ran in the same amount of time despite each VM having twice the CPU power.

    This makes me wonder if choosing multi-core sockets for our Red Hat virtual machines is part of the problem. Our Oracle DBAs tell us that the Exadata isn't the bottleneck. Network and storage are not the bottleneck, either. Of course, running this Oracle application on vSphere-based RHEL VMs is making our UNIX admins cry foul, since Oracle won't fully support the application unless it is running on Oracle products from top to bottom (physical Solaris, or OEL on Oracle VM).

    Pretty frustrating.
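
    One thing worth double-checking is what vCPU topology each VM is actually presenting after the change. The snippet below is only a rough pyVmomi sketch of how to pull the vCPU count and cores-per-socket per VM; the vCenter hostname and credentials are placeholders, not values from this thread.

      # Rough sketch: list each VM's vCPU count and cores-per-socket so you can
      # see how a 2-socket/2-core vs. 4-socket/1-core choice actually landed.
      # Assumptions: pyVmomi is installed; the hostname and credentials below
      # are placeholders.
      import ssl
      from pyVim.connect import SmartConnect, Disconnect
      from pyVmomi import vim

      ctx = ssl._create_unverified_context()  # lab use only
      si = SmartConnect(host="vcenter.example.com",
                        user="administrator@vsphere.local",
                        pwd="changeme", sslContext=ctx)
      try:
          content = si.RetrieveContent()
          view = content.viewManager.CreateContainerView(
              content.rootFolder, [vim.VirtualMachine], True)
          for vm in view.view:
              hw = vm.config.hardware if vm.config else None
              if hw is None:
                  continue
              cores = hw.numCoresPerSocket or 1
              print(f"{vm.name}: {hw.numCPU} vCPU = "
                    f"{hw.numCPU // cores} socket(s) x {cores} core(s)")
          view.DestroyView()
      finally:
          Disconnect(si)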

  2. Sten-TAM

    Hi

    Check the BIOS settings on your Gen8 hosts; they are delivered with the Balanced power setting, which does not give full performance.

    Set it to maximum performance or OS control (you can then use the vSphere power management settings to get full performance).
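
    If you want to confirm what the hosts actually report after the BIOS change, you can list the current and available ESXi power policies. This is only a rough pyVmomi sketch; the vCenter hostname and credentials are placeholders.

      # Rough sketch: print the current and available power policies on each
      # ESXi host, to confirm whether the balanced policy is still in effect
      # after the BIOS has been switched to OS control.
      # Assumptions: pyVmomi is installed; hostname and credentials are placeholders.
      import ssl
      from pyVim.connect import SmartConnect, Disconnect
      from pyVmomi import vim

      ctx = ssl._create_unverified_context()  # lab use only
      si = SmartConnect(host="vcenter.example.com",
                        user="administrator@vsphere.local",
                        pwd="changeme", sslContext=ctx)
      try:
          content = si.RetrieveContent()
          view = content.viewManager.CreateContainerView(
              content.rootFolder, [vim.HostSystem], True)
          for host in view.view:
              power = host.configManager.powerSystem
              current = power.info.currentPolicy.shortName
              available = [p.shortName for p in power.capability.availablePolicy]
              print(f"{host.name}: current={current}, available={available}")
          view.DestroyView()
      finally:
          Disconnect(si)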

  3. Marco Law

    Hi Michael,
    I also have a couple of questions on vNUMA.
    If I have an ESXi box with 2 sockets, 8 cores per socket, and 512GB RAM, each NUMA node has about 256GB RAM.
    If I have 3 VMs, each with 8 vCPUs and 160GB RAM with a memory reservation (they run JVMs, which are recommended to have memory reservations), do you think I should enable vNUMA for this? Two of the VMs can't share the same NUMA node's memory (2 VMs already need 320GB RAM, which is more than the 256GB in one NUMA node), so one of them will have to use remote node memory.

    Another question: what are the drawbacks of vNUMA?
    Thank you

    Marco
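
    A quick back-of-the-envelope check of the numbers in this comment (2 sockets x 8 cores, 512GB host RAM, three 8-vCPU / 160GB VMs), as a sketch only:

      # Sketch using only the figures given in the comment above.
      host_sockets, cores_per_socket, host_ram_gb = 2, 8, 512
      vm_count, vm_vcpus, vm_ram_gb = 3, 8, 160

      node_ram_gb = host_ram_gb / host_sockets              # 256 GB per NUMA node
      vms_per_node_by_ram = int(node_ram_gb // vm_ram_gb)   # 1 VM fits per node by RAM

      print(f"RAM per NUMA node: {node_ram_gb:.0f} GB")
      print(f"Each VM's {vm_vcpus} vCPUs fit in one {cores_per_socket}-core node: "
            f"{vm_vcpus <= cores_per_socket}")
      print(f"VMs that fit entirely in one node by RAM: {vms_per_node_by_ram} per node")
      print(f"Reserved RAM in total: {vm_count * vm_ram_gb} GB of {host_ram_gb} GB")
      print(f"VMs forced to take some remote memory: at least "
            f"{vm_count - host_sockets * vms_per_node_by_ram}")

    With those figures, two of the VMs can each sit in their own node, but the third cannot fit in the roughly 96GB left on either side, so some remote memory access is unavoidable; exposing the NUMA topology to the guest (vNUMA) at least lets the OS and the JVM see that split.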

  4. What is vNUMA?

    […] Sizing Many Cores per Socket or Single-Core Socket Mystery Does corespersocket Affect Performance? Cores Per Socket and vNUMA in VMware vSphere Performance Best Practices for VMware vSphere® […]
