
Lacp rate fast across VPC Peer-Link


We have been testing the port-channel failover on a pair of port-channel links to a HP Server.


                  HP Server
                 /         \
  Po54 (vPC54)  /           \  Po54 (vPC54)
               /             \
          N7K1 --------------- N7K2   (NX-OS 5.1.5)
                vPC peer-link


When we fail the link on the Nexus port, the traffic instantly switches over to the alternative connection and back again. However, when we fail the link via the HP utility GUI there is a delay of around 20 seconds before the traffic fails over. It has been suggested that the slow failover could be resolved by using LACP rate fast on the physical interfaces of the port-channel.
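For context, the fast rate is enabled per physical member interface on NX-OS. A minimal sketch of the suggested change, assuming Eth8/19 (taken from the output below) and a hypothetical second member port Eth8/20:

    interface Ethernet8/19
      lacp rate fast
    interface Ethernet8/20
      lacp rate fast

With rate fast, the switch asks its partner to send LACPDUs every 1 second instead of every 30, so under the standard 802.3ad short-timeout rules a silent partner is declared down after roughly 3 seconds rather than up to 90.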

show lacp interface ethernet 8/19

Interface Ethernet8/19 is up

  Channel group is 54 port channel is Po54

  PDUs sent: 125756

  PDUs rcvd: 104715

  Markers sent: 0

  Markers rcvd: 0

  Marker response sent: 0

  Marker response rcvd: 0

  Unknown packets rcvd: 0

  Illegal packets rcvd: 0

Lag Id: [ [(1, 0-23-4-ee-be-14, 8036, 8000, 813), (8000, 80-c1-6e-6e-c8-f2, b00, 0, 1900)] ]

Operational as aggregated link since Wed Oct 17 14:23:00 2012

Local Port: Eth8/19   MAC Address= 0-24-f7-1d-ca-c1

  System Identifier=0x8000,0-24-f7-1d-ca-c1

  Port Identifier=0x8000,0x813

  Operational key=32822


  LACP_Timeout=Short Timeout (1s)




  Partner information refresh timeout=Long Timeout (90s)

Actor Admin State=191

Actor Oper State=191

Neighbor: 0x1900

  MAC Address= 80-c1-6e-6e-c8-f2

  System Identifier=0x8000,80-c1-6e-6e-c8-f2

  Port Identifier=0x0,0x1900

  Operational key=2816


  LACP_Timeout=Long Timeout (30s)




Partner Admin State=61

Partner Oper State=61

Aggregate or Individual(True=1)= 1

I have seen this configured in a test lab, but there the vPC peer-link was also configured with the lacp rate fast command. We have lots of other vPC port-channels going over the vPC peer-link.

I have two questions:

1) Do I need to configure lacp rate fast on the peer-link to get this working, or only on the uplinks to the server?

2) If I do, will I then have to change all my other port-channels to use LACP rate fast?

thanks again,


5 Replies

Reza Sharifi
Hall of Fame Expert

Are you using VMware with ESXi on your servers? Have you tried just trunking the 2 ports from the server to the 7Ks without using vPC and testing the failover?


Thanks for the response. We aren't running VMware on this device. I don't want to just trunk to the N7K without vPC, as that would leave me with active/standby. We didn't buy N7Ks to run that type of configuration. I don't believe the issue is vPC; it seems more related to LACP.

I need someone to test this or confirm what they have in their setup.


Hi Alan,

One thing I see is that your N7K port Eth8/19 is set to the fast timeout (1s) while your HP device is set to the default (30s). By failing the link at the Nexus end, I assume you mean you issued a shut command; that brings the physical link down, so both LACP and link state go down. However, using the HP utility to drop a link does not drop the physical Layer 1 link, so you will observe that the link remains up on the Nexus end. LACP will, however, stop working on the HP end, and your link will unbundle based on the configured timeout.
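The timer mismatch can be read straight out of the show command already posted. The output lines below are copied from the listing above, the | include filter is standard NX-OS output filtering, and the N7K1# prompt is illustrative:

    N7K1# show lacp interface ethernet 8/19 | include Timeout
      LACP_Timeout=Short Timeout (1s)
      Partner information refresh timeout=Long Timeout (90s)
      LACP_Timeout=Long Timeout (30s)

The first LACP_Timeout line is the local (actor) setting and the last is the partner's: the Nexus end is already on the short timeout while the HP end is still on the long one.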

Can you set the HP end to fast-timeout as well?




I think you've got the idea; however, there is no parameter within the HP GUI to change or modify the LACP settings. The only end I can change is the Nexus end.


Cisco have advised me that LACP rate fast isn't required over the vPC peer-link, so we went back to testing. We set up the port-channels and then applied LACP rate fast, but the links never remained stable, so we have decided to stop trying to use the feature.

It was also highly recommended not to use the feature as a long-term solution.
