I had a very interesting problem this past week that I thought I would share and even seek comment on. We have quite a few vSphere ESX 4.0 and ESX 4.1 hosts running in our network. This past week we installed two HP DL-380 servers with vSphere ESXi 4.1 Update 1 and immediately noticed an issue with NIC teaming on the management interface (vmk0). We were connecting the ESXi 4.1 hosts to a pair of Cisco Nexus 7010 switches over a virtual port-channel (vPC). I should mention that this was our first ESXi deployment; all previous VMware deployments had been ESX.
Loss of connectivity
In the past we’ve taken different approaches to configuring the core/distribution network switches when connecting them to ESX servers for NIC teaming. In some instances we’ve created SMLTs or MLTs (Nortel/Avaya), or vPCs or port-channels (Cisco), depending on the location and equipment. In this instance, no matter how we configured the switch port, we would lose network connectivity to the management interface the moment we brought up the second NIC. We had VMware ESX 4.1 hosts running from the exact same Cisco Nexus 7010s, so I was at a loss to immediately explain the problem.
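For context, the switch side of this setup looked roughly like the sketch below. The interface and VLAN numbers are hypothetical, but the key detail is real: the "Route based on ip hash" teaming policy on ESX/ESXi 4.x requires a static EtherChannel (channel-group mode on), not LACP, so the vPC member ports are bundled unconditionally on both Nexus 7010s.

```
! Illustrative NX-OS config on each Nexus 7010 (interface/VLAN/vPC
! numbers are made up for this example).
interface port-channel10
  switchport
  switchport mode access
  switchport access vlan 100
  vpc 10

interface Ethernet1/1
  switchport
  switchport mode access
  switchport access vlan 100
  ! "mode on" = static EtherChannel; ESX(i) 4.x IP-hash teaming
  ! does not negotiate LACP, so "mode active/passive" will not bundle.
  channel-group 10 mode on
```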
Solution – VMware KB Article
Thankfully I’ve become a master (as have many other technical folks) in the use of Google, and pretty quickly stumbled across VMware KB article 1022751, entitled “NIC teaming using EtherChannel leads to intermittent network connectivity in ESXi”.
When trying to team NICs using EtherChannel, the network connectivity is disrupted on an ESXi host. This issue occurs because NIC teaming properties do not propagate to the Management Network portgroup in ESXi. When you configure the ESXi host for NIC teaming by setting the Load Balancing to Route based on ip hash, this configuration is not propagated to Management Network portgroup.
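To see why the portgroup-level policy matters, it helps to understand what “Route based on ip hash” does: each frame is pinned to an uplink based on a hash of its source and destination IP addresses, so the switch-side EtherChannel and the host agree on which physical NIC a given conversation uses. The snippet below is a simplified illustration of that idea; it is an assumption for teaching purposes, not VMware’s exact hashing algorithm.

```python
# Simplified sketch of IP-hash uplink selection (illustrative only;
# not VMware's actual implementation).
import ipaddress


def ip_hash_uplink(src_ip: str, dst_ip: str, num_uplinks: int) -> int:
    """Return the uplink index a given src/dst IP pair maps to."""
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    # XOR the two addresses, then fold modulo the number of uplinks,
    # so every packet of a flow lands on the same physical NIC.
    return (src ^ dst) % num_uplinks


# The same conversation always hashes to the same uplink:
a = ip_hash_uplink("10.0.0.5", "10.0.1.20", 2)
b = ip_hash_uplink("10.0.0.5", "10.0.1.20", 2)
print(a == b)  # True
```

The point of the KB article is that this policy was set on the vSwitch but never propagated to the Management Network portgroup, so vmk0 and the switch disagreed about uplink selection, and management traffic went dark the moment the second NIC came up.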
I’ll be the first to admit that I haven’t spent too much time with ESXi, but seeing that VMware has already commented that ESX will be going away, I probably need to do some catching up.