23 Responses

  1. Fatih Solen

    Great article Michael thanks.

  2. Michael

    Great article. I am also testing Nutanix. When I built two separate Nutanix clusters and Storage vMotioned a VM from Nutanix cluster one to Nutanix cluster two, it was slow. When I mounted the NFS datastore from Nutanix cluster one on cluster two, it was faster. If I understood it right, the first case uses fsdm and the second fs3dm. I don't know whether hardware offload is used.
    Frank Denneman's article was also very helpful: http://frankdenneman.nl/2012/11/06/vaai-hw-offloa
    Also helpful is Birk's comment: if you use multiple NetApp boxes in cluster mode, VAAI will offload the Storage vMotion task.
    http://dresxi.blogspot.de/2013/07/storage-vmotion

    •fsdm – This is the legacy 3.0 datamover, the most basic version and the slowest, as the data moves all the way up the stack and back down again.
    •fs3dm – This datamover was introduced with vSphere 4.0 and contains substantial optimizations so that data does not travel through the entire stack.
    •fs3dm – hardware offload – This leverages the VAAI Full Copy hardware offload and was introduced with vSphere 4.1. Maximum performance and minimal host CPU/memory overhead.
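
    A quick way to verify whether the hardware-offload path is even available on a host is to read the DataMover advanced settings. Below is a minimal sketch with pyVmomi; the hostname and credentials are placeholders.

    ```python
    # Sketch: query the advanced option that gates the fs3dm hardware-offload
    # (VAAI Full Copy) path. 1 = offload allowed, 0 = software datamovers only.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab use only: skip certificate checks
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)

    for host in hosts.view:
        adv = host.configManager.advancedOption
        for opt in adv.QueryOptions("DataMover.HardwareAcceleratedMove"):
            print(f"{host.name}: {opt.key} = {opt.value}")

    hosts.DestroyView()
    Disconnect(si)
    ```

    Even with the option enabled, the offload only happens when the array and the specific source/target combination support it, which is why the software fs3dm or fsdm paths still show up in practice.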

  3. Michael

    Your article could mean that we will maybe see a new datamover in version 5.5 Update 1 when VSAN is added? A good question is what happens when we build cluster one with 3 ESXi hosts and cluster two with 3 ESXi hosts, both with VSAN enabled, and then a VM is Storage vMotioned from cluster one to cluster two.
    For the old datamovers the problem still exists, which means VMware needs a new one in ESXi 5.5 Update 1.

  4. Sebastien Simon

    Thanks Michael. Your article is very interesting and I am keen to hear about VMware's next improvements to their data movers. Also, reading your post and the comments, it seems that FSDM is always used during NFS-to-NFS operations, but what if these volumes are on the same NAS? Won't hardware-accelerated FS3DM be used?

  5. -AM-

    Michael, thanks for the great article.

    Would you answer some questions to clarify some technical details, please?

    I would like to clearly understand the prerequisites needed to ensure that no zero bytes are copied if I move a huge, thin-provisioned VM from one physical storage array to a second physical storage array (not the same vendor).

    You wrote "[….] VMFS5 to VMFS5 and it used the FS3DM [….]"

    1. If FS3DM is used to move thin-provisioned VMs, then zero bytes are never copied/replicated – correct?

    2.1 Does VMware always use FS3DM in the case of VMFS5, regardless of the type and combination of storage?
    2.2 NFS to iSCSI ?
    2.3 NFS to NFS?
    2.4 iSCSI to iSCSI?

    3.1 Is VMFS5 the only prerequisite to avoid copying never-written blocks/bytes?
    3.2 Or does one of the storage arrays need to have VAAI support – or only the target?
    3.3 If VAAI support is required, which VAAI primitive exactly is needed to ensure that no zero bytes are copied? (Reference: http://kb.vmware.com/selfservice/microsites/searc… )

    Thanks
    -AM-
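
    On question 3.2, one way to see what each host reports for its block devices is to list their vStorage (VAAI) support status. A minimal sketch with pyVmomi follows; the hostname and credentials are placeholders, and NFS datastores will not appear here because NAS VAAI uses a separate vendor plugin.

    ```python
    # Sketch: print the VAAI/vStorage hardware-acceleration status each host
    # reports for its SCSI devices. Hostname/credentials are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)

    for host in hosts.view:
        for lun in host.config.storageDevice.scsiLun:
            # vStorageSupport is vStorageSupported, vStorageUnsupported or
            # vStorageUnknown, depending on what the device advertises.
            print(f"{host.name} {lun.canonicalName}: {lun.vStorageSupport}")

    hosts.DestroyView()
    Disconnect(si)
    ```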

  6. Rachit Srivastava

    Hi Michael,

    Excellent article, and very helpful.

    It's an old post, so it would be great if you could help.

    "FS3DM Hardware Accelerated is used if it is supported by the array."

    Considering the above statement, I have a few questions:

    1) What process (block replication, etc.) does an EMC VNX follow in the background when a VMDK is moved from one datastore to another? In this case both datastores are on LUNs coming from the same array.

    2) Are there any licensing requirements on the storage side for VAAI support?

  7. compendius

    Interesting post.
    I am running vCSA 5.5 and ESXi 5.5 u1 with shared iSCSI storage all VMFS5.

    If I Storage vMotion a thick provisioned 190GB vmdk it takes about 1 minute.
    If I Storage vMotion a thin-provisioned 6TB VMDK with only 190GB reported as used by ESXi, it takes forever… still waiting… I got bored after 1 hour.

    We are running a dedicated 10Gb storage network (end to end 10Gb with jumbo frames).

    Same issue?
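
    Before blaming the datamover, it can help to confirm how much data the thin VMDK actually holds. Below is a minimal sketch with pyVmomi; the VM name "bigvm" and the credentials are placeholders.

    ```python
    # Sketch: show committed vs. uncommitted space for one VM and flag thin disks.
    # A software datamover may still walk the full provisioned size of a thin disk.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()
    vms = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)

    GIB = 1024 ** 3
    for vm in vms.view:
        if vm.name != "bigvm":  # placeholder VM name
            continue
        s = vm.summary.storage
        print(f"{vm.name}: committed {s.committed / GIB:.1f} GiB, "
              f"uncommitted {s.uncommitted / GIB:.1f} GiB")
        for dev in vm.config.hardware.device:
            if isinstance(dev, vim.vm.device.VirtualDisk):
                thin = getattr(dev.backing, "thinProvisioned", None)
                print(f"  {dev.deviceInfo.label}: thinProvisioned={thin}")

    vms.DestroyView()
    Disconnect(si)
    ```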

  8. compendius

    I figured it out.

    We have SAN latency issues, which were the cause (too much I/O with several svMotions), so ignore this; svMotion on iSCSI with a thin VMDK works properly.

  9. How to shrink thin-provisioned disks | vcloudnine.de

    […] Really slow. If you have a monster VM, a vMotion can take a looooong time (worth reading: “VMware Storage vMotion, Data Movers, Thin Provisioning, Barriers to Monster VM’s” by Michael […]

  10. squebel

    Great article; this is exactly the info I was looking for. We’re finding “issues” with this as we move hundreds of terabytes of thin-provisioned VMDKs around our NetApp NFS volumes. Even though we have the VAAI NFS primitives working, there are limitations, and we’re seeing the need to read through ALL of those blocks even though they are completely empty. It’s really slowing the process down and putting unneeded stress on the filers. I wish there was a better way, but I guess not until NetApp and/or VMware improve the VAAI NFS primitives.

  11. Jason

    Just found this & was curious… Is this still an issue with vSphere 6+?

    To clarify your earlier comment re clones on Nutanix, would any clone (running VM or powered-off VM) done via VMware be instant (and with no bloat from empty data)?

