10G RSTP

rbermel83
Level 1

I have a pair of Nexus 3548 switches running 6.0(2)A6(4) acting as cores, connected to a stack of two c3850 switches running 03.02.03.SE over 10G uplinks.

The physical configuration has Nexus 3548 core switch 1 connected to c3850 stack chassis 1, and 3548 core switch 2 connected to stack chassis 2, via 10G twinax.

For some reason core switch 2 is becoming the root bridge for all VLANs, and core switch 1 is showing the interface to the 3850 stack as "Dispute P2p" for all VLANs. Additionally, when shutting down the uplink from core switch 2 to the stack, the stack becomes the root bridge for all of the VLANs.

To make this issue even more confusing, if I switch the uplinks to 1G instead of 10G, RSTP operates as expected, choosing the correct interface toward the root bridge and setting the other path to alternate.

Is there some fundamental difference in the use of 10G vs. 1G for the uplinks that is causing the switches to behave differently?

1 Reply

Peter Paluch
Cisco Employee

Hello,

RSTP encodes the role and state of the sending port in the BPDUs it sends out. This is used in RSTP as a fault detection mechanism: if a port receives a BPDU that is inferior to its own BPDUs, yet that inferior BPDU indicates that its originating port is in the Designated role, it is a clear sign that the other device does not receive or does not understand the superior BPDUs. In such a situation, the superior port declares a Dispute condition and is blocked to prevent a switching loop.

The occurrence of the Dispute condition does seem to be related to the fact that your core switch 2 becomes the root switch even though it is not supposed to. For some reason, core switch 2 does not receive, or does not accept, the BPDUs arriving on its ports, and so it claims the root switch role for itself and treats its ports as designated ports. Clearly, the other switches know better, and so they declare the Dispute condition.
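
As a side note, you can check which ports are affected and in what role/state directly from the CLI; the VLAN number and interface name below are only placeholders for your environment:

show spanning-tree vlan 100
show spanning-tree interface ethernet 1/1

A disputed port shows up with "Dispute" noted in the Type column, which matches the "Dispute P2p" you are already seeing on core switch 1.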

Your observation that the link speed has an impact on the RSTP operation indicates that this might be related to the link cost computation method. For historical reasons, Cisco switches implement two modes of link cost computation - the so-called short pathcost method and the long pathcost method. The short pathcost method uses the recommended link cost values from 802.1D-1998, which are only 16 bits wide. The long pathcost method uses the recommended link costs from 802.1D-2004, which are 32 bits wide. Obviously, for all switches to evaluate link costs in the same way, they all have to use the same method.
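
For a concrete sense of the difference, these are the standard recommended default costs for the two uplink speeds you mentioned (your switches may of course have manually configured costs that override these defaults):

Link speed   Short method (802.1D-1998)   Long method (802.1D-2004)
1 Gb/s       4                            20,000
10 Gb/s      2                            2,000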

Now, the pathcost computation method by itself should not affect how a root bridge is elected. However (and this is just speculation, but speculation that could explain your observation), if your switches use different pathcost computation methods, it is possible that some of them reject received BPDUs as invalid because they contain what those switches consider illegal, out-of-bounds values in the Root Path Cost field.

So my first recommendation is to make sure that your switches are using the same pathcost computation method - and my strong recommendation is to use the long method. Please note that the default is the short method, so migrating to the long method may be tedious.

On both Catalyst and Nexus switches, the command is

spanning-tree pathcost method long

in the global configuration mode. Also, at least on Catalyst switches, the current setting can be verified using the show spanning-tree summary command.
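
As a minimal sketch of the change and a quick verification on one switch (the VLAN number is just a placeholder; as noted above, the summary command is confirmed on the Catalyst side):

configure terminal
 spanning-tree pathcost method long
end
show spanning-tree summary
show spanning-tree vlan 100

With the long method active, the 10G uplinks should show a port cost of 2000 (instead of 2) in the show spanning-tree vlan output. Apply the change consistently on all switches in the Layer 2 domain, and expect a brief topology recomputation while the costs change.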

If having the same (preferably long) pathcost method on all switches does not help, then I would recommend running RSTP debugs on core switch 2 to see whether there is any message suggesting why this switch does not receive or accept the incoming BPDUs.
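
With the caveat that the debug keywords differ between IOS and NX-OS, and that the interface name below is just a placeholder for your 10G uplink, a useful first check on core switch 2 is whether BPDUs from the 3850 stack are arriving at all:

show spanning-tree interface ethernet 1/1 detail

This output should include per-port BPDU sent/received counters. If the received counter is not incrementing, the BPDUs are not making it to the switch at all; if it is incrementing while the port still claims the designated role, the debugs (on the Catalyst side, for example, debug spanning-tree events and debug spanning-tree bpdu) should show why the received BPDUs are being ignored.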

Please keep us posted!

Best regards,
Peter
