10 Responses

  1. crazycanuck

    Interesting. Definitely something to be concerned about with a hyperconverged solution, as a node failure now constitutes a loss of both compute and storage resources. Not something I'd have to worry about if storage were separated from compute (i.e., an all-flash or hybrid flash storage array).

    In the latter case I don't have to over-provision my storage to accommodate compute maintenance or failures.

    Definitely not knocking hyperconverged solutions. Just saying it’s a shift in thinking about properly sizing storage.
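
    To put numbers on that shift in sizing, here is a minimal back-of-the-envelope sketch in Python (the replication factor of 2, the one-node-failure budget, and all figures are illustrative assumptions, not details from the post or any particular product):

    def max_usable_tb(nodes: int, raw_per_node_tb: float, rf: int = 2) -> float:
        """Usable capacity if the cluster must still hold all data after one node fails."""
        surviving_raw = (nodes - 1) * raw_per_node_tb  # raw space left after a failure
        return surviving_raw / rf                      # each logical TB costs rf raw TB

    # A 4-node cluster with 20 TB raw per node:
    print(max_usable_tb(4, 20.0))  # 30.0 TB usable, vs 40.0 TB if node failure is ignored

    In other words, budgeting for a single node failure takes one node's worth of raw capacity off the top before the replication factor is even applied.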

  2. crazycanuck

    I'm not entirely sure I understand your comment about "you still have to over provision storage". Even with an active/passive storage architecture, you're still only right-sizing your storage. If a storage controller fails, the standby controller takes over the network and storage resources; the workloads housed on that storage don't even feel the impact of the controller failure. So I'm still confused about the mandatory over-provisioning of storage.
    Right-sizing a storage design should always take an acceptable level of growth and buffer into account, so you're not running at 80-90% full.
    Anyways, this is all getting away from my original point: hyperconverged solutions demand that you approach storage capacity planning in a different manner. Nobody is taking a swipe at Nutanix :). It's just nice to know that I now need to account for extra storage in my Nutanix design.
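
    Folding that growth buffer into the failure math above, a hedged sketch of what "right-sized plus failure headroom" could look like (the 25% buffer, node count, and capacities below are made-up inputs):

    def target_raw_tb(logical_need_tb: float, growth_buffer: float,
                      nodes: int, rf: int = 2) -> float:
        """Raw capacity to deploy so the logical need, plus growth, still fits after losing one node."""
        logical_with_buffer = logical_need_tb * (1 + growth_buffer)
        raw_needed_on_survivors = logical_with_buffer * rf    # replicas cost rf raw TB each
        return raw_needed_on_survivors * nodes / (nodes - 1)  # scale up for the lost node

    # 24 TB of logical need, 25% growth room, 4-node RF2 cluster:
    print(target_raw_tb(24.0, 0.25, 4))  # 80.0 TB raw to deploy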

  3. Sam

    Hi Michael,

    Very good idea to have a space reservation set! I am, however, interested to hear what happens if you actually don't keep free space to accommodate re-protection. Let's say you have a three-node cluster and you are using 80% of its capacity physically. Now one node fails. What's going to happen? Will all remaining space be filled up while re-protecting the data, or will the system leave some space for newly allocated blocks (say, from already-provisioned VMs that haven't filled all of their space/VMDKs yet)? Or does the system refuse to allocate new blocks because it knows it will eventually not be able to re-replicate all the existing blocks that don't yet have a copy on a second node? (The sketch below works through the raw numbers.)

    Sam
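
    For what it's worth, the raw arithmetic on this scenario already shows why the question matters. A quick sketch, assuming replication factor 2, evenly distributed data, and a made-up per-node size (what the system actually does at that point is exactly the vendor-specific part being asked about):

    raw_per_node_tb = 10.0
    nodes = 3
    utilization = 0.80

    total_raw = nodes * raw_per_node_tb                 # 30.0 TB raw across the cluster
    used = utilization * total_raw                      # 24.0 TB of replicas on disk
    lost_replicas = used / nodes                        # 8.0 TB of replicas were on the failed node
    surviving_capacity = (nodes - 1) * raw_per_node_tb  # 20.0 TB of raw space left
    surviving_used = used - lost_replicas               # 16.0 TB still on the two survivors
    free_after_failure = surviving_capacity - surviving_used  # 4.0 TB free

    print(f"Re-protection needs {lost_replicas:.1f} TB "
          f"but only {free_after_failure:.1f} TB is free")

    At 80% full, re-protection simply cannot complete on the two surviving nodes; whether the system fills up, throttles, or refuses new allocations is the open behavioral question.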

