HP's Virtual Connect supports a feature called Smart Link: a network with Smart Link enabled will automatically drop link to the server ports if all of its uplink ports lose link. This feature is very similar to Uplink Failure Detection (UFD), which is available on the HP GbE2, GbE2c and most ProCurve switches. I believe Cisco switches have a similar feature called Link State Tracking.
You might be asking, so what? Well, a reader recently mentioned Smart Link in conjunction with my post concerning HP Virtual Connect & vSphere 4, so I thought I'd share some of my thoughts about Smart Link. It seems redundant to me given the presence of LACP, but let me explain.
From a Nortel perspective you can connect two NICs from a single server to a Nortel switch cluster (two ERS 5500 or ERS 8600 switches) using LACP and get an "802.3ad Dynamic with Fault Tolerance" configuration. This essentially provides an active/active solution that utilizes both NICs to their fullest, with LACP used to detect network path failures.
From a Virtual Connect perspective the same applies as above. The Virtual Connect Ethernet interconnect modules act as a single switch fabric, allowing you to create an "802.3ad Dynamic with Fault Tolerance" configuration that provides an active/active solution to the servers. While you can do this on the server downlinks, you can't span external uplinks across interconnect modules out of the enclosure in an active/active configuration.
In the old days you'd only get a "Transmit Load Balancing with Fault Tolerance" or "Network Fault Tolerance" configuration when your server NICs spanned two switches, which essentially provided an active/standby solution. Network Fault Tolerance uses only link status to decide whether the network is functional, so in order to detect a failure the server would need to see a link loss on the primary NIC before failing over to the standby NIC. Smart Link provides the ability to shut down the server-facing switch ports if all the uplink switch ports go down, so the NIC teaming software can detect the link status change on the server and fail over to the standby NIC, which would be cabled to a different network switch.
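For reference, here's roughly what enabling Smart Link looks like if you script it through the Virtual Connect Manager CLI. The uplink set, network name, port and VLAN numbers below are made up and the syntax is from memory, so treat it as a sketch and verify against the VC CLI guide for your firmware release:

  add uplinkset SUS_VC1
  add uplinkport enc0:1:X1 UplinkSet=SUS_VC1 Speed=Auto
  add network Prod_VLAN10 UplinkSet=SUS_VC1 VLanID=10
  set network Prod_VLAN10 SmartLink=Enabled

The first three commands just build a shared uplink set with one external uplink and hang a VLAN-backed network off it; the last one is the Smart Link piece, telling VC to drop link on any server downlinks mapped to that network if every uplink behind it goes down.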
In this case it would appear to me that LACP has really replaced what I would describe as a legacy feature, Smart Link. You can have multiple external uplinks out of an enclosure spread across multiple interconnects; while only the external uplinks on a single interconnect will be active, any remaining uplinks on the other interconnects will sit in standby mode.
I'm telling it as I see it; I'm no expert on Virtual Connect by any means, so please tell me if I'm wrong!
Thoughts?
References:
http://blog.michaelfmcnamara.com/2009/01/hp-nic-teaming-with-nortel-switches/
http://www.michaelfmcnamara.com/files/TeamingWP.pdf
gnijs says
Hi Michael,
We are just PoC-ing VC and from what the HP specialist told me, I understand the following: Smart Link would be interesting because it provides redundancy between different "Shared Uplink Sets". Within one uplink set spanning two VC modules, all uplinks on VC1 remain Active while those on VC2 are Standby. If the active uplinks fail, traffic from your server NIC is forwarded across the interconnect to VC2 and the standby uplinks transition to Active. Your server, however, hasn't noticed a thing and is still active on the same NIC, so you don't need Smart Link in this scenario. However, I don't like leaving expensive 10GbE uplinks in standby, so why not create two Shared Uplink Sets, one on VC1 and one on VC2? Both will be active (and have no interfaces in standby). To make redundancy work in this scenario, though, the server NIC must go down when VC1's uplinks go down. The server's NIC teaming will then switch to the interface on VC2 and use VC2's uplinks... haven't tested this yet though.
Another limitation the HP guy mentioned: if you have a 'trunked' downstream port (multiple VLANs), ALL the VLANs must go down before the downstream interface goes down. This is no problem if all the VLANs are on the same uplink set (most cases); if the uplink fails, all the VLANs fail with it. However, if you have spread VLAN traffic across multiple different uplinks, this could be a problem.
Stefan Jagger (@StefanJagger) says
Hi Gnijs
It is best practice to create two separate Shared Uplink Sets, or four if you have four interconnects.
Interesting info about the VLANs.
With regard to VMware… we like to keep Smart Link disabled. If the uplinks fail we would rather not have VMware HA go nuts with failovers, and would rather the VM comms within the chassis remain active.
Physical blades running Windows / RHEL etc. = Smart Link enabled.
Stefan
Ryan says
Hi Michael,
We are looking at implementing LACP between our 8600 cluster (two switches) and Virtual Connect (HP c7000 enclosure). My understanding is that VLACP is required since the LACP trunk is split across the two cluster members.
My questions/concerns are:
1) Do you know of any known issues with code release 5.0.0.1 and VLACP?
2) The 8600s are in production and I can't seem to get a clear answer on whether enabling and configuring VLACP would cause the network to flap while making the changes.
Could you provide your thoughts?
Many thanks
Michael McNamara says
Hi Ryan,
You absolutely DO NOT need VLACP if you want to set up an LACP link aggregation group (LAG).
Have a look at this blog post for a quick explanation of VLACP.
Let's say you really, really wanted to use VLACP… you can't. VLACP is a Nortel proprietary protocol and is only supported by Nortel switches. You don't need VLACP; just configure the ports for LACP with the same admin key and you'll be good.
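As a rough sketch (the port number and key below are placeholders, and the exact syntax varies by software release, so verify it against the documentation for your 8600 code), the port-side LACP configuration on the 8600 looks something like this:

  config ethernet 1/1 lacp key 10
  config ethernet 1/1 lacp aggregation true
  config ethernet 1/1 lacp enable

Do the same on the corresponding port of the other cluster member using the same key, and let LACP negotiate the aggregation with the Virtual Connect uplinks. I've left out the IST/SMLT side of the cluster configuration here since that's presumably already in place.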
If you enable VLACP on a switch port without VLACP being enabled on the remote switch at the other end, the local switch will take down the port and log a 'VLACP down' alarm. You should only use VLACP between Nortel switches.
Good Luck!
Jorge says
Hi Michael
I hope you can help me. At the moment we have a c7000 enclosure with 8 BL685c G6 blades; these servers were installed with VMware ESX 3.5 Update 5 with no problem.
The problem is that they were installed before the Virtual Connect Flex-10 was configured, and we could only see two 10Gb NICs.
After configuring the Virtual Connect I thought it would present the 16 NICs, but it did not. When I run "lspci" I can actually see the NICs, but when I run "esxcfg-nics -l" none of them show up.
I was told that this can be solved by reinstalling ESX.
Any comments or suggestions?
Michael McNamara says
Hi Jorge,
This is really more of a VMware question. You probably just need to reconfigure your vSwitch: delete vmnic1 and vmnic2 and add in the new NICs, vmnic3, vmnic4, etc.
Do you still have access to the ESX server via the GUI?
Have a look here for some additional VMware service console commands (in case you don’t have access to the GUI),
http://symbolik.wordpress.com/2008/11/12/some-tips-on-vmware-esx/
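To give you an idea, something along these lines from the service console should swap the uplinks over (vSwitch0 and the vmnic numbers are only examples here, so check what esxcfg-nics and esxcfg-vswitch actually report first):

  esxcfg-nics -l                      # list the physical NICs ESX currently sees
  esxcfg-vswitch -l                   # list the vSwitches and their current uplinks
  esxcfg-vswitch -U vmnic1 vSwitch0   # unlink the old/missing uplink from vSwitch0
  esxcfg-vswitch -L vmnic3 vSwitch0   # link one of the new Flex-10 NICs to vSwitch0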
Assuming you have a SAN, you should be able to just shut down the virtual guests, unmount the SAN LUN and re-install the VMware ESX server software.
Good Luck!