This is a brief article to let you all know that VMware has greatly increased the default and maximum limit of open VMDK storage per host in the latest patches to ESXi 5.0. Once you apply patch ESXi500-201303401-BG, the default and maximum amount of open VMDK storage per host will be 60TB on VMFS5, up from 25TB previously. The VMFS heap size is increased from a default of 128MB to a default of 640MB.
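For reference, you can check the current, default and maximum values of the VMFS heap setting on a host with esxcli from the ESXi Shell. This is a minimal sketch; the option path is the standard /VMFS3/MaxHeapSizeMB advanced setting, and the output columns may vary slightly between builds:

    # Show the current, default, min and max values for the VMFS heap setting
    esxcli system settings advanced list -o /VMFS3/MaxHeapSizeMB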
As some background, I originally wrote about this problem in my article The Case for Larger Than 2TB Virtual Disks and The Gotcha with VMFS, and Marcel van den Berg has recently followed up with an article titled A Small Adjustment and a New VMware Fix will Prevent Heaps of Issues on vSphere VMFS Heap. Once the latest patch is applied, customers can safely run up to 60TB of VMDKs per host. This will allow much larger VMs, in terms of storage footprint, to be run on each vSphere host. Note this patch is only for ESXi 5.0; ESXi 5.1 Update 1 contains the same change. The release notes for the patch also highlight another problem that was fixed, which impacted VMs with 18 or more VMDKs that were each configured with more than 256GB per disk (also addressed in 5.1 Update 1). Note that the original KB article regarding the VMFS heap size, KB 1004424 – An ESX/ESXi host reports VMFS heap warnings when hosting virtual machines that collectively use 4 TB or 20 TB of virtual disk storage, has been updated with the new limits.
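If you want to confirm whether a host is already running a build that includes the fix, check the version and build number from the ESXi Shell and compare it against the build listed in the ESXi500-201303401-BG (or 5.1 Update 1) release notes. A sketch; I haven't listed the specific build numbers here, so take them from the release notes:

    # Report the ESXi version and build number
    vmware -v
    # Or the same information via esxcli
    esxcli system version get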
—
This post first appeared on the Long White Virtual Clouds blog at longwhiteclouds.com, by Michael Webster. Copyright © 2013 – IT Solutions 2000 Ltd and Michael Webster. All rights reserved. Not to be reproduced for commercial purposes without written permission.
Yesterday I deployed the same scenario in our environment and was able to add 48TB of storage to an ESXi 5.1 server by following this VMware KB:
http://kb.vmware.com/kb/1004424
Regards
Tariq Shahzad
Hi Tariq,
Being able to add 48TB to a server is one thing, and this was always possible; the problem is that later you will find things aren't working properly. When you run backups, or when all of the storage on the server is accessed, you will run into out-of-heap errors in the VMkernel logs. In the worst case you could experience data corruption or data loss. So although you may think you can safely have 48TB on a vSphere 5.1 server, that is not yet the case until the correct patches come out for it. There is no hard limit; it's a soft limit, and one there is no warning about until you run into it. So I would advise that you take care when using that amount of storage on vSphere 5.1 right now, until the next lot of patches come out.
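For anyone who wants to check whether a host is already hitting this condition, the heap exhaustion shows up as warnings in the VMkernel log. A minimal sketch from the ESXi Shell; the exact wording of the warning varies by build, so search for "heap" generally rather than an exact message string:

    # Look for VMFS heap warnings in the VMkernel log
    grep -i "heap" /var/log/vmkernel.log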
Hi Michael, thanks for this info. Do you know if this applies only to VMDKs on VMFS, or also to NFS?
Hi Sebastien, this only applies to VMDKs on VMFS, not on NFS.
Just an FYI: if you upgrade from a previous version of ESXi, you inherit the previous heap size, which is 256MB maximum. You will then have to edit the Advanced Settings of the host and increase it accordingly. New installs get the 640MB by default, as Michael states.
Great info Mike & Cormac. About to add a VM with a large storage footprint, and I had to refer back to this article to check what the sizes were.
The advanced setting is VMFS3.MaxHeapSizeMB.
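For reference, the same setting can be changed from the ESXi Shell with esxcli. A sketch; 640 matches the new default Cormac mentions above, and per KB 1004424 a host reboot is typically required for the change to take effect:

    # Raise the maximum VMFS heap size to 640MB (reboot the host to apply)
    esxcli system settings advanced set -o /VMFS3/MaxHeapSizeMB -i 640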