12 Responses

  1. douglas o'storm

    This is not a new issue, is it?

  2. Jonas Nagel

    I find it funny, if not bizarre, that you recommend that "every customer" conduct QA and burn-in tests on hardware that has been qualified by VMware and/or HP. Isn't it VMware's task to make sure that they in fact DID run burn-in tests before qualifying such hardware with vSphere? I do know customers who can afford to keep newly bought hardware out of the production cycle for months just to make sure it all works as expected, but this is certainly not for everyone; time is money, and they usually buy VMware-qualified hardware for a reason.

  3. Stan

    I have been dealing with HP engineers on the Emulex OneConnect-based NICs since Apr 2012. Neither HP nor VMware is responsible for writing the firmware and drivers for them; Emulex is. And so far, everything Emulex released up until Nov 2012 had stability issues. The Dec 2012 and the more recent Feb 12th releases are more stable (at least on HP infrastructure).

    Everyone who picked Emulex as a supplier has been burned by this. IBM, Dell, and HP all use them. At least with HP, their Gen8 lines no longer force you to take Emulex, and you can actually choose to go back to Broadcom.

    The problem is worse if you use these Emulex NICs for IP-based storage.
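
    For anyone who wants to confirm which Emulex driver and firmware a host is actually running, a quick check from the ESXi shell looks roughly like this (assuming vmnic0 is one of the OneConnect ports):

        esxcli network nic list            # enumerate the physical NICs
        esxcli network nic get -n vmnic0   # shows Driver, Version and Firmware Version fields

    The Firmware Version reported here is what the driver is actually using, which is what matters when checking against Emulex's release notes.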

  4. Jhonny Nemonic

    Hi, IHAC (I have a customer) with this problem; as a matter of fact, they are scared to move from 5.0 to 5.1. Is a burn-in test recommended in this scenario? Or could this happen again, I mean the PSOD?
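
    For what it's worth, a burn-in pass here could be as simple as pushing sustained traffic through the suspect NICs for a day or so while watching for drops or a PSOD. A minimal sketch with iperf, assuming a test VM on the new host and a peer at a hypothetical address:

        iperf -s                              # on the peer: run a listener
        iperf -c 192.168.1.10 -t 86400 -P 4   # from the test VM: 24 hours, 4 parallel streams

    Any sustained-load tool would do; the point is to exercise the NICs continuously before the host carries production workloads.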

  5. Allen Crawford

    Good post, though that HP advisory is a bit of a joke. They still reference an ancient ESXi version (5.0.601) of the nx_nic driver/firmware combo. They are now up to version 5.0.626 on VMware's site, and it is still not at all stable. We have the NC522SFP NICs that frequently just go "offline", for lack of a better word. Sometimes that means really high latency, sometimes many dropped packets, and sometimes a complete loss of connectivity (though we still have a physical link). Only a reboot resolves it. We've got an escalated case open with HP and VMware trying to make some progress, but I'm 100% of the opinion that the issue is with the nx_nic driver, written by QLogic, because the problem also occurs with the integrated NICs (the NC375i) on our HP DL580 G7 servers. This is the fourth version of the driver we've used, and they have all been awful. We're running ESXi 5.0 U2 here; not sure whether it works better with 5.1.
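
    In case anyone wants to compare notes, the installed nx_nic driver package and the running driver/firmware for a given port can be checked from the ESXi shell (vmnic0 here is just a placeholder for one of the affected ports):

        esxcli software vib list | grep -i nx   # which nx_nic driver VIB is installed
        esxcli network nic get -n vmnic0        # driver version and running firmware for that port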

  6. Allen Crawford

    Yep, we're running the latest SPP from HP, though the firmware is included with the VMware driver and loaded at runtime. So the firmware you see during POST will not match the running firmware if you are using the 5.0.626 driver, as it includes a newer version. HP is also spot-checking some servers for the "bad batch" issue, but we're still in the middle of that process.
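
    To see that mismatch for yourself, compare the firmware reported during POST against what the loaded driver reports (again assuming vmnic0 is one of the affected ports):

        esxcli network nic get -n vmnic0 | grep -i firmware   # running firmware, as pushed by the driver

    The value shown here comes from the image bundled with the 5.0.626 driver, not the flash image reported at POST.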
