24 Responses

  1. The Case for Larger Than 2TB Virtual Disks and The Gotcha with VMFS | Jonathan Frappier's Blog

    […] The Case for Larger Than 2TB Virtual Disks and The Gotcha with VMFS. […]

  2. The Case for Larger Than 2TB Virtual Disks and The Gotcha with VMFS « Storage CH Blog

  3. Simon Williams

    Mike,

    Can you suggest any names / contacts for meetings with my CEO and me in Wellington on the 27th & 28th?

    So far we are meeting BNZ and Weta… Cheers.

    Simon

    Simon Williams

    Sales Director – Australia & New Zealand
    Fusion-io
    Ph. +61 488 488 328
    Twitter: @simwilli
    Email: swilliams@fusionio.com

  4. Virtualization.net

    Alternatives have worked all right for now, but my main concern is how this affects performance. Moreover, backups and restores take a ridiculously long time, which could be a problem depending on a company's RTO/RPO requirements.

  5. Top 5 Challenges for Virtual Server Data Protection « Jaime's Blog

    […] The Case for Larger Than 2TB Virtual Disks and The Gotcha with VMFS (longwhiteclouds.com) […]

  6. Iwan 'e1'

    Thanks Mike. I always enjoy reading your articles. Just adding some points, and do correct me if I'm wrong:

    – Doing it in-guest means the IP storage traffic is not visible to ESXi (and hence vCenter), so it can't be monitored using the standard (built-in) tools. vCenter Operations will also "miss" this data, as it won't classify it as Storage. For example, a high workload on this vmnic will not impact the Workload badge of the corresponding VM.

    – Doing it in-guest means the VM sees the storage network. This adds complexity, and security measures need to be incorporated to address it, because local admin on the VM is typically given to the VM owner (the sys admin of that VM). Personally, I'd like to keep the separation clean, so it's easier operationally.

    – I'm not 100% certain whether concatenation at the software level (be it the hypervisor or the guest OS) is a potential bottleneck. I thought it was always the physical spindles. An EMC Resident Engineer told me that was the bottleneck when we were discussing a storage issue at a large client.

    I also agree with Alastair. Well said mate 🙂

    1. @vcdxnz001

      Hi Iwan, you've raised some good and valid points that should also be considered when looking at guest storage design. The back-end storage isn't always the bottleneck, though; often the guest OS configuration is a bottleneck as well. The bottlenecks will vary greatly between different customers, workloads, and designs or configurations. For example, if you have SSDs backing the VM that can easily handle a queue depth of 255, and you're using a VM with a single virtual disk and a queue depth of 32, your guest VM config could be a major bottleneck. Even concatenation isn't a silver-bullet solution if all the IO is happening on a single virtual disk that makes up the larger volume. It all depends on the workload. Most solutions are never perfect, as there are always constraints and compromises that need to be made.

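    A minimal Python sketch of the queue-depth arithmetic in the reply above, assuming a single virtual SCSI adapter: the number of virtual disks, the per-disk queue depth, and the per-adapter queue depth cap how many IOs the guest can keep in flight, regardless of what the backing SSDs could absorb. The 32 and 255 figures are the ones quoted in the reply; the per-adapter limit of 128 is illustrative only and varies by virtual controller type.

        # Back-of-the-envelope check: can the guest keep the backing storage busy?
        # The per-adapter limit is illustrative; check your virtual SCSI controller
        # type (LSI Logic vs PVSCSI) for the real figure.

        def guest_outstanding_io(num_vdisks, per_vdisk_qd=32, per_adapter_qd=128):
            """Max IOs the guest can keep in flight on one virtual SCSI adapter."""
            return min(num_vdisks * per_vdisk_qd, per_adapter_qd)

        backend_qd = 255  # queue depth the SSD-backed device can handle

        for vdisks in (1, 2, 4, 8):
            guest_qd = guest_outstanding_io(vdisks)
            limiter = "guest config" if guest_qd < backend_qd else "back end"
            print(f"{vdisks} vdisk(s): {guest_qd:3d} outstanding IOs possible "
                  f"vs back end {backend_qd} -> bottleneck: {limiter}")

    With a single virtual disk the guest tops out at 32 outstanding IOs, which is exactly the scenario described in the reply; spreading the volume across more virtual disks (and adapters) is what closes the gap, and only then does the concatenation question matter.
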
  7. Jim Nickel

    I recently had to use in-guest disk managers to build a 20 TB file server. Then I also made two 20 TB Exchange mailbox servers.

    Both of these were for a fairly large client. While this works today, I can see potential problems with it in the future.

    I would very much like to see >2TB VMDK support soon.

    Jim

  8. Troy MacVay

    Very interesting post. We are a cloud provider and had a long-standing issue in our CommVault environment that was the result of heap size. We run CommVault on stand-alone ESXi hosts and use the HotAdd transport for backup. We started getting random HotAdd failures and spent way too much time troubleshooting without any real resolution, even though we had worked with VMware support. We finally found the heap size issue in some last-ditch troubleshooting, and for us it totally added up.

    We were limited to 8TB of active VMDKs per host. This was an issue for us, as we tend to HotAdd much more than that on each host as part of the backup process. We increased the value to the maximum and the HotAdd issues are gone.

    It does lead us to some questions around large-RAM hosts and capacity planning. Think of a host that has 1TB of RAM: I can bet it will need more than 25TB of attached VMDKs to support the VM workloads.

    Cheers,

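    A rough Python sketch of the arithmetic behind Troy's numbers, using only the figures cited in this thread: roughly 8TB of open VMDKs at the default 80MB VMFS heap on ESXi 5.0/5.1, and roughly 25TB at the 256MB maximum. The ~0.1 TB-per-MB ratio is derived from those two data points, so treat it as an approximation rather than a formula from VMware.

        # Approximate open-VMDK capacity per host as a function of VMFS heap size
        # on ESXi 5.0/5.1, derived from the 80MB -> ~8TB and 256MB -> ~25TB
        # figures discussed in this thread.

        TB_PER_HEAP_MB = 8.0 / 80.0   # ~0.1 TB of open VMDKs per MB of heap

        def addressable_open_vmdk_tb(heap_mb):
            """Approximate TB of concurrently open VMDKs the host can address."""
            return heap_mb * TB_PER_HEAP_MB

        for heap_mb in (80, 256):   # default vs maximum for /VMFS3/MaxHeapSizeMB
            print(f"heap {heap_mb:3d} MB -> ~{addressable_open_vmdk_tb(heap_mb):.1f} TB of open VMDKs")

        # To inspect or raise the advanced setting on the host (reboot required):
        #   esxcli system settings advanced list -o /VMFS3/MaxHeapSizeMB
        #   esxcli system settings advanced set  -o /VMFS3/MaxHeapSizeMB -i 256

    The follow-up articles linked further down this thread (the ESXi 5.0 patch and the vSphere 5.5 changes) alter the heap behaviour considerably, so the ratio above only describes the builds Troy was running.
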
  9. When 60 SCSI Devices Are Not Enough: Virtualizing Databases On vSphere 5.1 « vArchitect Musings

    […] Applications (BCAs), we are starting to see these limits tested.  Michael Webster has a great blog post arguing for an increase in the 2 TB limit for a virtual disk and presenting various options for how […]

  10. Eric Miller

    We are also a cloud provider and have design issues while trying to stay below the 256 LUN limit per host. When each customer has multiple datastores, the number of datastores (and thus LUNs or NFS mounts) can escalate pretty quickly.

    The 25TB limit on attached VMDKs seems a bit absurd. I certainly hope some of these scalability issues are taken seriously soon. It seems that ever since day one, whether it be ESX or vCenter, VMware hasn't thought this through carefully, and has instead let customers troubleshoot ridiculous issues while the support staff at VMware have little or no real-world knowledge of larger environments.

    Eric

  11. skyfx

    Great article! Quick question – you suggest the use of physical RDMs as one of the approaches to circumventing the 2TB limit. I understand that a virtual RDM does not offer the same advantage, but I don't fully understand why. Could you elaborate on the limitations of virtual RDMs?

    In our case, we have two disk arrays configured in a RAID 6 array comprising multiple TBs. If we were to take the RDM approach, my understanding is that we would separate the array into two partitions:

    1) A VMFS partition to host the guest OS .vmdk's as well as the RDM mapping .vmdk's

    2) A raw partition

    Assuming partition 2 is greater than 2TB, could we not use it as a virtual RDM?

    Thanks 🙂

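    On the virtual versus physical RDM question above: a virtual compatibility RDM is still presented through the same virtualization layer as a regular VMDK and, prior to vSphere 5.5, carries the same 2TB minus 512 byte size limit, whereas a physical compatibility RDM passes SCSI commands straight through and can be presented at up to 64TB on vSphere 5.x. A minimal sketch of the two vmkfstools mapping-file commands follows; the device identifier and datastore path are placeholders, not values from the post.

        # Build (and optionally run) the vmkfstools commands that create the two
        # RDM mapping-file types. The NAA ID and datastore path are placeholders.
        import subprocess

        device = "/vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx"   # hypothetical LUN
        vm_dir = "/vmfs/volumes/datastore1/bigvm"             # hypothetical VM folder

        # Physical compatibility RDM (-z): SCSI commands are passed through,
        # so the >2TB LUN can be presented whole (up to 64TB on vSphere 5.x).
        prdm_cmd = ["vmkfstools", "-z", device, vm_dir + "/bigdata_prdm.vmdk"]

        # Virtual compatibility RDM (-r): goes through the virtualization layer,
        # so before vSphere 5.5 it has the same 2TB minus 512B limit as a VMDK.
        vrdm_cmd = ["vmkfstools", "-r", device, vm_dir + "/bigdata_vrdm.vmdk"]

        for cmd in (prdm_cmd, vrdm_cmd):
            print(" ".join(cmd))
            # subprocess.check_call(cmd)   # uncomment to execute on the ESXi shell

    Note that an RDM maps a whole LUN rather than a partition, so the layout in the question would normally be two LUNs presented from the array: one formatted as VMFS to hold the guest OS and mapping .vmdk files, and one left raw to be mapped.
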
  12. » 5 Tips to Help Prevent 80% of Virtualization Problems Long White Virtual Clouds

    […] wrote an article a while ago titled The Case for Larger Than 2TB Virtual Disks and The Gotcha with VMFS about some of the limits within VMware vSphere, some of which are documented in KB’s but not […]

  13. A small adjustment and new VMware patch will prevent heaps of issues on vSphere VMFS heap | UP2V
  14. » Latest ESXi 5.0 Patch Improves VMFS Heap Size Limits Long White Virtual Clouds

    […] some background I originally wrote about this problem in my article The Case for Larger Than 2TB Virtual Disks and The Gotcha with VMFS and Marcel van den Berg has recently followed up with an article titled A Small Adjustment and a […]

  15. Monster VMs & ESX(i) Heap Size: Trouble In Storage Paradise » boche.net – VMware vEvangelist

    […] The Case for Larger Than 2TB Virtual Disks and The Gotcha with VMFS « Long White Virtual Cloud… […]

  16. Heads Up! New Patches for VMFS heap |

    […] Many of you in the storage field will be aware of a limitation with the maximum amount of open files on a VMFS volume. It has been discussed extensively, with blog articles on the vSphere blog by myself, but also articles by such luminaries as Jason Boche and Michael Webster. […]

  17. TinkerTry IT @ home | Wow, is that a 62TB drive in my home lab?

    […] also: The Case for Larger Than 2TB Virtual Disks and The Gotcha with VMFS by Michael Webster Sep 17 […]

  18. Paul Braren

    I've been having some fun testing 62TB virtual disks under ESXi 5.5; so far, so (very) good!

  19. Mike

    Paul, I'm concerned about the 256 SCSI devices per host you mentioned. My 5.0 hosts connect to 40 iSCSI VMFS datastores containing C: drive VMDKs and 150 pRDMs containing SQL and Exchange data, so the iSCSI software adapter sees 190 devices. So is 256 my limit if using pRDMs and datastores? If so, what do I do? Convert the pRDMs into VMDK files on additional, larger VMFS datastores and just use VMDKs in the future instead of multiple small pRDMs? Do you know if a future release will address this?

    1. Mike

      Sorry, this was a question for Mike, not Paul.

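    A quick Python tally of the layout Mike describes above, assuming the vSphere 5.x limit of 256 SCSI devices (LUNs) per host. Each pRDM is a LUN in its own right and counts against that limit, while VMDKs only consume the datastores they live on; the ten larger datastores used for the consolidated layout are purely illustrative.

        # Device-count comparison for the layout described in the comment above.
        HOST_DEVICE_LIMIT = 256          # SCSI devices (LUNs) per host, vSphere 5.x

        datastores, prdms = 40, 150      # figures from the comment
        current = datastores + prdms     # every pRDM is a LUN of its own
        print(f"current layout:      {current} devices, "
              f"{HOST_DEVICE_LIMIT - current} headroom")

        # Converting the 150 pRDMs to VMDKs spread across, say, 10 larger VMFS
        # datastores means only those datastores count against the device limit.
        consolidated = datastores + 10
        print(f"consolidated layout: {consolidated} devices, "
              f"{HOST_DEVICE_LIMIT - consolidated} headroom")
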
  20. » vSphere 5.5 Jumbo VMDK Deep Dive Long White Virtual Clouds

    […] of you may recall the article I wrote titled The Case for Larger Than 2TB Virtual Disks and The Gotcha with VMFS. In that article I put forward the pros and cons for larger than 2TB virtual disks, some solutions […]
