This is a brief article to let you all know that VMware has greatly increased the default and maximum amount of open VMDK storage per host in the latest patches to ESXi 5.0. Once you apply patch ESXi500-201303401-BG, the maximum amount of open VMDK storage per host on VMFS5 rises to 60TB, up from 25TB previously. This is achieved by increasing the default VMFS heap size from 128MB to 640MB.
As background, I originally wrote about this problem in my article The Case for Larger Than 2TB Virtual Disks and The Gotcha with VMFS, and Marcel van den Berg recently followed up with an article titled A Small Adjustment and a New VMware Fix will Prevent Heaps of Issues on vSphere VMFS Heap. Once the latest patch is applied, customers can safely run up to 60TB of open VMDKs per host, which allows VMs with a much larger storage footprint to run on each vSphere host. Note that this patch is only for ESXi 5.0; ESXi 5.1 Update 1 contains the same change. The release notes for the patch also highlight another fix for a problem that impacted VMs with 18 or more VMDKs, each configured with more than 256GB per disk (also addressed in 5.1 Update 1). Note that the original KB article on the VMFS heap size, KB 1004424 – An ESX/ESXi host reports VMFS heap warnings when hosting virtual machines that collectively use 4 TB or 20 TB of virtual disk storage, has been updated with the new limits.
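For readers who want to inspect or tune the heap before patching, the setting involved is the VMFS3.MaxHeapSizeMB advanced option (which also governs VMFS5 volumes). The following is a minimal sketch using esxcli from the ESXi shell or an SSH session; the 256MB value shown reflects the pre-patch maximum, and exact limits and defaults on your host will depend on the build you are running, so treat this as illustrative rather than definitive.

```shell
# Show the current, default, and maximum values of the VMFS heap size option
esxcli system settings advanced list -o /VMFS3/MaxHeapSizeMB

# On a pre-patch host, raise the heap toward its then-maximum of 256MB
# (a host reboot is required for the new heap size to take effect)
esxcli system settings advanced set -o /VMFS3/MaxHeapSizeMB -i 256
```

After applying ESXi500-201303401-BG (or ESXi 5.1 Update 1), the larger 640MB default should make this manual adjustment unnecessary for most environments.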
This post first appeared on the Long White Virtual Clouds blog at longwhiteclouds.com, by Michael Webster +. Copyright © 2013 – IT Solutions 2000 Ltd and Michael Webster +. All rights reserved. Not to be reproduced for commercial purposes without written permission.