10 Responses

  1. Mike Marseglia

    What’s the difference between running something like this in “user space” and a kernel module that can be enabled or disabled at boot?
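    To make the distinction concrete, here is a minimal sketch, assuming a Linux host; the module and service names are hypothetical, used only for illustration, not actual VMware components:

        # Minimal sketch, assuming a Linux host. "vsan_mod" and
        # "storage-vsa" are hypothetical names used for illustration.
        from pathlib import Path
        import subprocess

        def kernel_module_loaded(name: str) -> bool:
            # Loaded kernel modules are listed in /proc/modules, one per
            # line, module name first. A module shares the kernel's
            # address space: a fault can panic the whole host, and
            # updating it usually means updating the kernel/hypervisor.
            return any(line.split()[0] == name
                       for line in Path("/proc/modules").read_text().splitlines())

        def userspace_service_running(name: str) -> bool:
            # A user-space service is an ordinary process: isolated by
            # the process boundary, restartable and upgradable without
            # rebooting the host. pgrep -x exits 0 if such a process exists.
            return subprocess.run(["pgrep", "-x", name],
                                  capture_output=True).returncode == 0

        print("in kernel:", kernel_module_loaded("vsan_mod"))
        print("user space:", userspace_service_running("storage-vsa"))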

  2. Rob Turk

    There’s a typo in your article which takes away the main point you are trying to make. You write “By tying VSAN to the kernel you are not limiting the ability to update it without updating the entire hypervisor…”

    I’m pretty sure you want to replace ‘not’ with ‘now’.

  3. terafirma

    I would say that some of this is actually incorrect. If storage has an issue, it will take down compute with it anyway; the last time I saw an APD, the VMs died. While running storage in a VM does restrict the damage, this could be done in-kernel with Docker just as well, and that would also reduce the attack surface by loading only what is needed.

    One question not covered here is the data itself:

    Data is so important that it is the first word in “data center”; it is the sole reason DCs exist and the reason we have jobs. By running storage on compute, you are putting your most critical component, the one that is entirely about persistence, on top of a disposable compute layer.

    Would you also say that the correct place for a vSwitch is running in a VM? My view is that a hypervisor is an infrastructure virtualizer, providing IaaS to all consumers.

    Then, on scale and innovation: the same can be done with modern SANs, while an AFA will scale higher than a VSA.

  4. Keith Hooker

    Couldn’t you also argue that keeping the storage in the kernel allows you to keep the components “in sync”, so that upgrades are always done together? I now have one less thing to upgrade, since the storage is built right into the hypervisor.

  5. Newsletter: March 8, 2015 | Notes from MWhite

    […] or Not In Kernel – This is the Hyperconverged Question Michael has a very thoughtful article here on the idea of storage (or other things actually too) in the Kernel or not.  Even though I am a […]

  6. Daniel

    We all know the assumption that storage performance is just “how do I bring the I/O to disk”. The real performance is determined by how writes are committed in cache, which block size the solution uses in the background, and which block size my application produces. Are there mechanisms to reduce backend I/Os, and how effectively does the tiering algorithm of the solution work? The discussion of whether it runs in or out of the kernel is so high-level that there is no real-world advantage to it. All the other things have much more impact on performance; from my point of view, the “in or outside the kernel” performance discussion is the needle in the haystack.
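    To make the block-size point concrete, here is a minimal sketch; the numbers are illustrative assumptions, writes are assumed block-aligned, and caching and coalescing are ignored:

        # Minimal sketch: backend I/Os generated by one application write
        # when the application and backend block sizes differ. Illustrative
        # only; assumes block-aligned writes, ignores caching and coalescing.

        def backend_ios_per_write(app_write_bytes: int, backend_block_bytes: int) -> int:
            full_blocks, remainder = divmod(app_write_bytes, backend_block_bytes)
            # Each fully covered block is one backend write; a partial block
            # forces a read-modify-write (one read plus one write).
            return full_blocks + (2 if remainder else 0)

        # A 4 KiB application write landing on a 64 KiB backend block costs
        # two backend I/Os (read-modify-write) instead of one; that kind of
        # amplification dwarfs any in-kernel vs. user-space overhead.
        print(backend_ios_per_write(4 * 1024, 64 * 1024))    # -> 2
        print(backend_ios_per_write(64 * 1024, 64 * 1024))   # -> 1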

    It would be interesting to get insights into the differences between the solutions, compared on a level other than just “kernel”.

