
VPC Port-Channel Issue

robert.horrigan
Level 2

Hi,

I have ~30 pairs of 5Ks that have a VPC with a pair of 7Ks.  All are working fine except for one pair that I just tried to bring up.  Debugging port-channel errors on the 7K, I get the following.

  %ETH_PORT_CHANNEL-5-PORT_SUSPENDED: Ethernet7/5: Ethernet7/5 is suspended

846360 eth_port_channel: pcm_proc_response(389): Ethernet7/5 (0x1a304000): Setting response status to 0x402b000c (port not compatible) for MTS_OPC_ETHPM_PORT_BRINGUP (61442) response

The switches are all configured (or at least appear to be) the same as the other pairs.  The SFPs are the same as those in the rest of the pairs.  The port-channel is in LACP active mode on both ends.  Has anyone seen this before?
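
For what it's worth, the generic way to compare the two ends when NX-OS reports "port not compatible" is the built-in compatibility/consistency checks (the port-channel number below is just a placeholder):

  show port-channel compatibility-parameters
  show vpc consistency-parameters interface port-channel 10
  show interface ethernet 7/5 capabilities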

/r

Rob

8 Replies

thejman85
Level 1

I'm seeing the same error between a pair of 5020's and a pair of 6500's. Did you ever come across a fix?

Hi All,

On the Nexus 5000 vPC pair, can you run the following command: "show system internal mts buffer"?

What version are you running?

Thanks,

Andrew

Hi!

I had the same issue until I upgraded both Nexus 5000s to the latest software. Then it just started to work.

thejman85
Level 1

Forgot I commented here. For the record, the fix I came up with was to light up the mgmt0 interface on each of our 5020s. We were doing in-band management, but the port-channels (part of a vPC) carrying the management traffic wouldn't come online. Once we lit those up and changed the vPC keepalive link to be the mgmt0 interface, the vPCs came up fine. I'm guessing it was a chicken-and-egg deal: vPCs can't come up without the keepalive link, and the keepalive link was waiting on itself.

We were (and are) running 5.0(2)N1(1). I'd do more testing, but this needed to be production-ready, so I can't break it at this point; if a config would help, though, I can pass it along to anyone who wants it.

--Jeremy
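
In case it helps anyone following along, a minimal sketch of a keepalive over mgmt0 (addresses and domain number are placeholders, not Jeremy's actual config) would look something like this on one switch, with the two IP addresses swapped on the peer:

  interface mgmt0
    ip address 192.168.10.1/24

  vpc domain 10
    peer-keepalive destination 192.168.10.2 source 192.168.10.1 vrf management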

We were doing in-band management, but the port-channels (part of a vPC) carrying the management traffic wouldn't come online.

If you want to run vPC, you need to connect the mgmt ports together for the peer-keepalive "heartbeat" to run through.  Without this link, the vPC will stay down.
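
Once that link is cabled and addressed, the keepalive state can be checked with the standard status commands, for example:

  show vpc peer-keepalive
  show vpc brief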

Hi,

maybe you hit bug CSCub44141.

So if you used an SVI to form your peer keepalive link, and it stays down because of the bug, you won't be able to form the vPC.

regards,

    Dirk (Please rate if helpful)

Hi Guys,

I have come across this problem twice now, once following an upgrade to 5.1(3)N2(1b) in a live environment.

In the live environment we had L3 routing modules installed in the Nexus 5Ks. We had the peer keepalive configured as a port-channel with an SVI on each switch. To get around the issue of the SVI not coming up because the keepalive link was down, I configured the IP address directly on the port-channel as a routed interface instead of using SVIs.

I then came up against the same problem in the lab, again running 5.1(3)N2(1b), as this is currently the Cisco recommended release. This time I didn't have L3 routing modules, so I couldn't use the same solution.

To resolve the problem this time, I configured the command "dual-active exclude interface-vlan 901", with VLAN 901 being my keepalive SVI.
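
For reference, a rough sketch of the two workarounds described above (interface, VLAN, and address numbers are placeholders, not the exact production config):

  ! Workaround 1: routed port-channel keepalive (requires the L3 module)
  interface port-channel 50
    no switchport
    ip address 10.255.255.1/30

  vpc domain 10
    peer-keepalive destination 10.255.255.2 source 10.255.255.1 vrf default

  ! Workaround 2: keep the SVI-based keepalive, but exclude the keepalive SVI
  vpc domain 10
    dual-active exclude interface-vlan 901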

Hope this helps someone else.

cjh
Level 1

One thing i will throw out there that bit me today but may not have much to do with the original post is that you can see ports getting suspended and all sorts of log noise about their duplex status changing and a raucous pile of fail if you try to interconnect ports on machines that are members of separate VPC domains if you are lazy and used the same VPC domain ID number (i.e., 1) on the two VPCs you are interconnecting, like i did.  Once i changed the VPC domain ID (to 2) on one of the pair, it is smooth sailing, and i now have a pair of VPC-ed N7k's (vpc domain ID 1) hooked to a pair of VPC-ed N5k's (vpc domain ID 2).  Now i just need to figure out what i've accomplished ...
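
If it helps anyone else: as I understand it, the vPC domain ID is used to derive the vPC system MAC, which in turn becomes the LACP system ID, so two back-to-back vPC pairs sharing the same domain ID look like one switch to LACP. The fix is simply distinct IDs on the two pairs, e.g.:

  ! N7K pair
  vpc domain 1

  ! N5K pair
  vpc domain 2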