25 Responses

  1. marco

    Do you know what effect the jumbo frames have on cisco switching buffers?

    1. @vcdxnz001

      Hi Marco, I haven't tested Jumbo Frames on Cisco switches for a while. The last time I tested Cisco equipment the performance was very good with Jumbo (and the difference between Jumbo and Non-Jumbo was bigger). But this is very equipment dependent: there are many options with Cisco switches, and not all will perform the same way. Some line cards I know of only have full buffers available if you distribute your connections to every 4th port, which renders the other ports unusable. But the assumption is that you won't run every port on the line card at full performance all of the time. A customer recently ran performance tests using JPerf on Windows 2008 R2 64-bit with ESXi 4.1 on Nexus 2K into 5K and was getting 14Gb/s using LACP when Jumbo was enabled. I'm not sure of the impact on buffers though.
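
      For anyone wanting to reproduce this kind of throughput test without JPerf, here is a minimal Python sketch of a point-to-point TCP test; the port number and the zero-filled payload are illustrative choices, not details from the test described above.

      ```python
      import socket
      import time

      CHUNK = 64 * 1024   # 64 KiB application writes
      DURATION = 10       # seconds to transmit

      def serve(host="0.0.0.0", port=5201):
          """Receive until the sender disconnects, then report throughput."""
          with socket.create_server((host, port)) as srv:
              conn, _ = srv.accept()
              with conn:
                  total, start = 0, time.monotonic()
                  while True:
                      data = conn.recv(CHUNK)
                      if not data:
                          break
                      total += len(data)
                  elapsed = time.monotonic() - start
                  print(f"{total * 8 / elapsed / 1e9:.2f} Gb/s over {elapsed:.1f}s")

      def send(host, port=5201):
          """Stream zero-filled buffers at the receiver for DURATION seconds."""
          payload = b"\x00" * CHUNK
          with socket.create_connection((host, port)) as conn:
              deadline = time.monotonic() + DURATION
              while time.monotonic() < deadline:
                  conn.sendall(payload)
      ```

      Run serve() on one VM and send("<receiver-ip>") on the other; any Jumbo versus Non-Jumbo difference shows up directly in the reported Gb/s.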

      1. marco

        I'm going to discuss this with my networking guy. We are currently using 2x 10Gb per ESX host, one active and the other passive, on Cisco 4900 switches. I remember we turned off jumbo frames as per Duncan's blog post "No Jumbo frames on your Management Network!" and some buffer issue with the Ciscos. I think it was something like: when you enable jumbo frames the Ciscos use fewer buffers for your normal traffic and a lot for the big packets. I will ask him to clarify and will get back about this.

        And now that I've browsed back to Duncan's post, I see he corrected it to: "Just received an email that all the cases where we thought vSphere HA issues were caused by Jumbo Frames being enabled were actually caused by the fact that it was not configured correctly end-to-end. Please validate Jumbo Frame configuration at all levels when configuring (physical switches, vSwitch, portgroup, VMkernel, etc.)."

      2. @vcdxnz001

        I was quite surprised when I first saw Duncan's post on that, as I've been running Jumbo for ages and had no problems with HA on vSphere 5. With your Cisco environment, do check things out carefully. If possible, test the implementation in an isolated environment. You may find that you need to upgrade to the latest Cisco software release to get the best performance from Jumbo. I've not done any testing with Jumbo on 4900 series switches, so if you do test it I'd love to hear your results.
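
        The end-to-end validation Duncan recommends is usually done with a full-size ping that has the don't-fragment bit set; on ESXi the classic form is vmkping -d -s 8972. Below is a small sketch of the same check from a Linux guest, with placeholder addresses; 8972 is 9000 minus the 20-byte IPv4 header and 8-byte ICMP header.

        ```python
        import subprocess

        MTU = 9000
        PAYLOAD = MTU - 20 - 8   # 8972: IPv4 + ICMP headers are not counted in -s

        def jumbo_path_ok(target: str) -> bool:
            """True if a full-MTU ping with the don't-fragment bit set succeeds."""
            result = subprocess.run(
                ["ping", "-M", "do", "-s", str(PAYLOAD), "-c", "3", target],
                capture_output=True, text=True,
            )
            return result.returncode == 0

        # Placeholder addresses: test every endpoint on the jumbo path.
        for host in ("192.168.1.10", "192.168.1.20"):
            print(host, "OK" if jumbo_path_ok(host) else "fragmenting or dropping")
        ```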

  2. Chris

    Very interesting results. I do recall the 1GbE test where the gains were minimal and in some cases negative. Thanks for performing these tests.

  3. Paul Kelly

    I know you have some pretty extensive lab kit. Would you believe I was going to pose this very question to you today? 😉

    Congratulations, you passed the mind reading test!

  4. mikidutzaaMihai

    Do you have a graph of CPU usage as well? I am curious what the exact impact on CPU was.

    Thanks

  5. @vcdxnz001

    I didn't keep the CPU graphs during the tests as my primary goal was measuring throughput differences. I will repeat some tests and include VM CPU usage. It was quite high, up to 70% if I recall correctly. Jumbo showed similar CPU usage but higher throughput, which means better CPU efficiency. Are you more interested in Host CPU or VM CPU?

    1. mikidutzaaMihai

      Thanks for the reply. I was curious whether there was a significant difference in host CPU usage (i.e. if it's worth enabling for CPU efficiency reasons).
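
      As a worked example of what "CPU efficiency" means here: divide the throughput achieved by the CPU consumed. The ~70% figure comes from the reply above; the throughput numbers below are purely hypothetical.

      ```python
      def gbps_per_cpu_pct(gbps: float, cpu_pct: float) -> float:
          """Throughput achieved per percentage point of CPU consumed."""
          return gbps / cpu_pct

      # Hypothetical figures: similar CPU load, higher throughput with Jumbo.
      standard = gbps_per_cpu_pct(gbps=15.0, cpu_pct=70.0)
      jumbo    = gbps_per_cpu_pct(gbps=17.5, cpu_pct=70.0)
      print(f"standard: {standard:.3f} Gb/s per CPU%")
      print(f"jumbo:    {jumbo:.3f} Gb/s per CPU%")   # more data moved per cycle
      ```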

  6. Jumbo Frames on vSphere 5 Update 1 « Long White Virtual Clouds

    […] previously posted an article regarding Jumbo Frames on vSphere 5 but was unable to test Jumbo Frames performance on Windows 2008 R2 because of a bug in the VMware […]

  7. Technology Short Take #21 - blog.scottlowe.org - The weblog of an IT pro specializing in virtualization, storage, and servers

    […] other posts also made it into my list of “things to mention”: this post on jumbo frames on vSphere 5 (with more results from vSphere 5 Update 1) and this post on CA SSL certificates and vCenter […]

  8. Tim Patterson

    If you are testing on a pure 10Gb network, how well would an MTU of 15500 (not a typo) compare against 1500 and 9000?

    1. @vcdxnz001

      Hi Tim, my switching equipment currently only goes up to 9216 or 10K, and vSphere only allows up to 9000, so it's not possible to test an MTU of that size. It may offer an incremental benefit if and when it's ever supported, but it's hard to say. Things may change with the adoption of 40G and 100G in the future. Most network equipment I regularly deal with uses 9216 as the maximum MTU, which is sufficient for VMs using 9000 plus the additional overhead bytes that may be needed.
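
      The likely size of that incremental benefit can be estimated with framing arithmetic: TCP payload per frame divided by total bytes on the wire, assuming IPv4 and TCP without options and counting the Ethernet preamble, header, FCS, and inter-frame gap.

      ```python
      ETH_OVERHEAD = 8 + 14 + 4 + 12   # preamble + header + FCS + inter-frame gap
      IP_TCP_HEADERS = 20 + 20         # IPv4 + TCP without options, inside the MTU

      for mtu in (1500, 9000, 15500):
          payload = mtu - IP_TCP_HEADERS
          efficiency = payload / (mtu + ETH_OVERHEAD)
          print(f"MTU {mtu:>5}: {efficiency:.1%} of wire bandwidth is payload")

      # MTU  1500: 94.9%   MTU  9000: 99.1%   MTU 15500: 99.5%
      # Most of the framing-efficiency win is already realised at 9000.
      ```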

  9. Glenn

    Interesting that there is no comment on flow control in this article. IMHO flow control & jumbo need to go hand in hand with 10G setups to prevent packet loss and big performance hits when a host or device is overloaded.

    1. @vcdxnz001

      Hi Glenn, you raise a good point. However, flow control isn't always a good thing. In some situations it can cause more problems than it solves (excessive pause frames). It also depends on how much buffer is available per switch port, whether all ports have full buffers, and whether links are being oversubscribed at L2. Flow control was active on my switch during the test, however. My customers are split between those where it is enabled and those where it has been disabled; it's definitely not a one-size-fits-all decision. When I test vMotion again I will add a scenario with flow control turned off and see what differences, if any, are observed.
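
      For anyone auditing pause frame settings before testing, here is a hedged sketch wrapping the Linux ethtool pause query; the interface name is a placeholder, and ESXi exposes the equivalent per-vmnic settings through its own CLI rather than ethtool.

      ```python
      import subprocess

      def pause_settings(iface: str) -> dict:
          """Parse `ethtool -a <iface>` into {"Autonegotiate": bool, "RX": bool, "TX": bool}."""
          out = subprocess.run(["ethtool", "-a", iface],
                               capture_output=True, text=True, check=True).stdout
          settings = {}
          for line in out.splitlines():
              key, sep, value = line.partition(":")
              if sep and key.strip() in ("Autonegotiate", "RX", "TX"):
                  settings[key.strip()] = value.strip() == "on"
          return settings

      print(pause_settings("eth0"))   # placeholder interface name
      ```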

  10. The Good, The Great, and the Gotcha with Multi-NIC vMotion in vSphere 5 « Long White Virtual Clouds

    […] of Multi-NIC vMotion with 2 x 10Gb/s NIC’s in my home lab and got almost 18Gb/s when using Jumbo Frames on vSphere 5. Hosts go into maintenance mode so fast you better not blink! I haven’t retested Multi-NIC […]

  11. -AM-

    @vcdxnz001 wrote:

    > Are you more interested in Host CPU or VM CPU?

    I really would be interested in both, but if I had to decide I would choose Host CPU.

    There are no real-life measurements of Host CPU load with iSCSI in 10Gb environments available on the web (at least nothing more recent than 2012).

    Any plans to re-test…?

    1. @vcdxnz001

      Yes, I do plan to retest. I'm going to be retesting this when the next release of vSphere is GA'd later this year. When I retest I will include CPU usage in the calculations. With modern 10G cards and modern CPUs, the performance of iSCSI and NFS is on par with 8G Fibre Channel when architected in a similar manner.
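
      The "on par" comparison follows from line-rate arithmetic: 8G Fibre Channel signals at 8.5 GBaud with 8b/10b encoding, while 10GbE uses the leaner 64b/66b encoding.

      ```python
      fc8_gbps = 8.5 * (8 / 10)        # 8G FC: 6.8 Gb/s of data (~850 MB/s)
      ten_gbe  = 10.3125 * (64 / 66)   # 10GbE: ~10.0 Gb/s before protocol overhead
      print(f"8G FC usable line rate : {fc8_gbps:.1f} Gb/s (~{fc8_gbps / 8 * 1000:.0f} MB/s)")
      print(f"10GbE usable line rate : {ten_gbe:.1f} Gb/s (~{ten_gbe / 8 * 1000:.0f} MB/s)")
      # iSCSI/NFS protocol overhead narrows that gap, hence "on par" when the
      # designs are otherwise similar.
      ```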

      1. -AM-

        Thanks Michael, looking forward to it!

  12. » Vote for the Top Virtualization Blogs of 2012 Long White Virtual Clouds

    […] Jumbo Frames on vSphere 5 […]

  13. » The Great Jumbo Frames Debate Long White Virtual Clouds

    […] various test results showing at least a 10% benefit in performance, including my previous articles Jumbo Frames on vSphere 5 and Jumbo Frames on vSphere 5 Update 1. However my previous testing was not for storage access, […]

  14. Back To Basics: Configuring Standard vSwitch (Part Two of Three) « Mike Laverick…

    […] as well. Generally, performance does improve – and Michael Websters (VCDX) blogpost “Jumbo Frames on vSphere 5” is good starting point in understanding the benefits. For instance MTU could be enabled on […]

  15. Jumbo frames performance with 10GbE iSCSI | vStorage
