05-26-2012 01:55 AM - edited 03-07-2019 06:55 AM
I realize that this is normally caused by a native VLAN ID mismatch, but the switches are using the default (VLAN 1) as the native VLAN.
The strange thing is that VLANs crossing Port-channel 1 go into STP blocking state and then come back up a few minutes later.
May 22 17:32:30.541: %SPANTREE-2-RECV_PVID_ERR: Received BPDU with inconsistent peer vlan id 21 on Port-channel1 VLAN1.
May 22 17:32:30.541: %SPANTREE-2-BLOCK_PVID_PEER: Blocking Port-channel1 on VLAN0021. Inconsistent peer vlan.
May 22 17:32:30.541: %SPANTREE-2-BLOCK_PVID_LOCAL: Blocking Port-channel1 on VLAN0001. Inconsistent local vlan.
May 22 17:32:30.541: %SPANTREE-2-BLOCK_PVID_PEER: Blocking Port-channel1 on VLAN0051. Inconsistent peer vlan.
May 22 17:32:30.541: %SPANTREE-2-BLOCK_PVID_PEER: Blocking Port-channel1 on VLAN0071. Inconsistent peer vlan.
May 22 17:32:30.541: %SPANTREE-2-BLOCK_PVID_PEER: Blocking Port-channel1 on VLAN0081. Inconsistent peer vlan.
May 22 17:32:30.541: %SPANTREE-2-BLOCK_PVID_PEER: Blocking Port-channel1 on VLAN0082. Inconsistent peer vlan.
May 22 17:32:30.541: %SPANTREE-2-BLOCK_PVID_PEER: Blocking Port-channel1 on VLAN0091. Inconsistent peer vlan.
May 22 17:32:45.548: %SPANTREE-2-UNBLOCK_CONSIST_PORT: Unblocking Port-channel1 on VLAN0021. Port consistency restored.
May 22 17:32:45.548: %SPANTREE-2-UNBLOCK_CONSIST_PORT: Unblocking Port-channel1 on VLAN0001. Port consistency restored.
May 22 17:32:45.548: %SPANTREE-2-UNBLOCK_CONSIST_PORT: Unblocking Port-channel1 on VLAN0051. Port consistency restored.
May 22 17:32:45.548: %SPANTREE-2-UNBLOCK_CONSIST_PORT: Unblocking Port-channel1 on VLAN0071. Port consistency restored.
May 22 17:32:45.548: %SPANTREE-2-UNBLOCK_CONSIST_PORT: Unblocking Port-channel1 on VLAN0081. Port consistency restored.
May 22 17:32:45.548: %SPANTREE-2-UNBLOCK_CONSIST_PORT: Unblocking Port-channel1 on VLAN0082. Port consistency restored.
May 22 17:32:45.548: %SPANTREE-2-UNBLOCK_CONSIST_PORT: Unblocking Port-channel1 on VLAN0091. Port consistency restored.
May 22 18:04:36.398: %SPANTREE-2-RECV_PVID_ERR: Received BPDU with inconsistent peer vlan id 21 on Port-channel1 VLAN1.
May 22 18:04:36.398: %SPANTREE-2-BLOCK_PVID_PEER: Blocking Port-channel1 on VLAN0021. Inconsistent peer vlan.
May 22 18:04:36.398: %SPANTREE-2-BLOCK_PVID_LOCAL: Blocking Port-channel1 on VLAN0001. Inconsistent local vlan.
May 22 18:04:36.406: %SPANTREE-2-BLOCK_PVID_PEER: Blocking Port-channel1 on VLAN0051. Inconsistent peer vlan.
May 22 18:04:36.406: %SPANTREE-2-BLOCK_PVID_PEER: Blocking Port-channel1 on VLAN0071. Inconsistent peer vlan.
May 22 18:04:36.406: %SPANTREE-2-BLOCK_PVID_PEER: Blocking Port-channel1 on VLAN0081. Inconsistent peer vlan.
May 22 18:04:36.406: %SPANTREE-2-BLOCK_PVID_PEER: Blocking Port-channel1 on VLAN0082. Inconsistent peer vlan.
May 22 18:04:36.406: %SPANTREE-2-BLOCK_PVID_PEER: Blocking Port-channel1 on VLAN0091. Inconsistent peer vlan.
May 22 18:04:51.405: %SPANTREE-2-UNBLOCK_CONSIST_PORT: Unblocking Port-channel1 on VLAN0021. Port consistency restored.
May 22 18:04:51.413: %SPANTREE-2-UNBLOCK_CONSIST_PORT: Unblocking Port-channel1 on VLAN0051. Port consistency restored.
May 22 18:04:51.413: %SPANTREE-2-UNBLOCK_CONSIST_PORT: Unblocking Port-channel1 on VLAN0001. Port consistency restored.
May 22 18:04:51.413: %SPANTREE-2-UNBLOCK_CONSIST_PORT: Unblocking Port-channel1 on VLAN0071. Port consistency restored.
May 22 18:04:51.413: %SPANTREE-2-UNBLOCK_CONSIST_PORT: Unblocking Port-channel1 on VLAN0081. Port consistency restored.
May 22 18:04:51.413: %SPANTREE-2-UNBLOCK_CONSIST_PORT: Unblocking Port-channel1 on VLAN0082. Port consistency restored.
May 22 18:04:51.413: %SPANTREE-2-UNBLOCK_CONSIST_PORT: Unblocking Port-channel1 on VLAN0091. Port consistency restored.
I am wondering if the hardware of the remote switch is failing. Any ideas?
Thanks in advance.
05-26-2012 04:08 AM
Hello Marius,
I have seen this error in the past in customer switches.
If I remember correctly, a possible cause is an access port in VLAN 1 connected to another access port in VLAN 21 somewhere in the network, not only on the involved trunk port (a port-channel trunk in your case).
The trigger is this event:
May 22 17:32:30.541: %SPANTREE-2-RECV_PVID_ERR: Received BPDU with inconsistent peer vlan id 21 on Port-channel1 VLAN1.
Cisco PVST+ uses a proprietary encapsulation for BPDUs on trunk ports, and the frame format includes an internal field carrying the VLAN ID, i.e. the STP instance to which the BPDU belongs.
This internal field is compared with the external 802.1Q tag, or in this case with the native VLAN encapsulation.
This check is not performed on access ports, which makes it possible to join broadcast domains by simply connecting an access port in VLAN x to an access port in VLAN y.
When the error is detected, the trunk port is put in an inconsistent state for all VLANs. This is a timed state, and after a few seconds the port is restored.
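While the port is held in that state, you can list exactly which VLAN instances are affected; a quick check (the interface in the second command is whichever trunk triggered the error):

```
! List VLANs currently held in an inconsistent state
show spanning-tree inconsistentports

! Per-VLAN STP state of the trunk itself
show spanning-tree interface port-channel 1
```

Watching this output while the error recurs shows whether the same VLANs are hit each time or whether the set changes.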
Hope to help
Giuseppe
05-28-2012 11:02 AM
I tried labbing this but couldn't reproduce the results you mentioned. In any case, I do not think this is the cause. On the core switch the port-channels and physical interfaces are flapping up and down; I think this might be causing the errors seen on the access switch. What is causing the interfaces on the core to flap, I do not know yet. I am leaning towards either a hardware failure or perhaps a loop.
05-28-2012 03:08 PM
Marius, Giuseppe,
Please allow me to join the thread.
To the best of my knowledge, if a switch complains about a PVID inconsistency, it indeed indicates that a PVST+ BPDU was received on a different VLAN than the one in which it originated. What is interesting in your case is that so many peer VLANs are being reported as inconsistent. It looks as if Port-channel1 were operating in access mode on the access switch and in trunk mode on the core switch, or as if some device removed all 802.1Q tags from PVST+ BPDUs as they arrived at the access switch. A native VLAN mismatch is possible, but it would produce complaints about a pair of VLANs, not about so many VLANs at once.
Can you please make sure that the Port-channel interfaces, including the bundled physical ports, are configured for an identical mode of operation (access or trunk) on both ends? Also, would it be possible to provide a detailed picture of your network topology?
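For example, comparing the following output taken on both the core and the access switch (assuming the bundle is Port-channel1 on both ends) would quickly reveal a mode or native VLAN mismatch:

```
show interfaces Port-channel1 switchport
show interfaces trunk
show run interface Port-channel1
```

The "Administrative Mode", "Operational Mode", and "Trunking Native Mode VLAN" fields of the first command should match on both sides.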
Best regards,
Peter
EDIT: This link may be useful for initial information and troubleshooting of PVID Inconsistencies:
http://www.cisco.com/en/US/tech/tk389/tk621/technologies_tech_note09186a00801d11a0.shtml
05-28-2012 11:45 PM
Hi guys,
I am on my way to the client site now. It is curious that so many VLANs, and portchannels, are being affected. It is possible that there was a power outage and that some config wasn't saved. However I am starting to suspect HW failure due to the amount of VLANs and portchannels affected. I will know more when I get on-site.
The odd thing is that the message reports a PVID inconsistency, but shortly afterwards the same port recovers.
05-29-2012 05:34 AM
Hello Marius,
First of all, you are right: my attempted explanation is partly wrong, as BPDUs are generated per port, so the scenario I described is not feasible.
The fact that the problem happens on multiple ports/port-channels might also be explained by a software bug in handling specific ASICs in the linecards.
The port flaps on the core switch are likely related to the inconsistency detections on the access switch side; comparing timestamps on the two devices should show the relationship.
We saw a case of a C6500 with newly installed WS-6708 10GE linecards where the access switch, a C4500, complained from time to time about not receiving BPDUs on some VLANs, and because of loop guard the port was placed in an inconsistent state. We solved it with an IOS upgrade on the C6500.
Peter: my understanding is that when this VLAN inconsistency is detected, the whole port is placed in an inconsistent state, not only the affected VLANs, as IOS sees the potential for a loop and tries to prevent it by disabling the port. Log messages are generated for each VLAN permitted on the trunk, as we can see in the original post, but this doesn't mean the inconsistency actually happened in all of them at the same time.
The same happened in our case, with the slight difference that the access layer switch was complaining about missing BPDUs from the designated port on the segment, and loop guard didn't allow the port to be promoted to designated port on the segment.
Hope to help
Giuseppe
05-29-2012 07:22 AM
Hello Giuseppe,
Peter: my understanding is that when this vlan inconsistency is detected the whole port is placed in inconsistent state not only the affected Vlans, as IOS sees the potential for a loop and tries to prevent it by disabling the port.
I was not sure about this, so I tested it in our lab using two Catalyst 2960 switches running 12.2(58)SE1. I created 20 VLANs on both switches, connected them with a trunk, and caused a native VLAN mismatch on this trunk: one switch uses native VLAN 10, the other native VLAN 11.
My experiment resulted in only VLANs 10 and 11 being disabled; for the other VLANs, STP continued working fine. After activating the trunk port, these messages were displayed on the switch that uses native VLAN 11:
*Mar 1 04:24:27.840: %LINK-3-UPDOWN: Interface FastEthernet0/1, changed state to up
*Mar 1 04:24:31.623: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/1, changed state to up
*Mar 1 04:24:32.613: %SPANTREE-2-RECV_PVID_ERR: Received BPDU with inconsistent peer vlan id 10 on FastEthernet0/1 VLAN11.
*Mar 1 04:24:32.613: %SPANTREE-2-BLOCK_PVID_PEER: Blocking FastEthernet0/1 on VLAN0010. Inconsistent peer vlan.
*Mar 1 04:24:32.613: %SPANTREE-2-BLOCK_PVID_LOCAL: Blocking FastEthernet0/1 on VLAN0011. Inconsistent local vlan.
Switch(config-if)#do show int trunk
Port Mode Encapsulation Status Native vlan
Fa0/1 on 802.1q trunking 11
Port Vlans allowed on trunk
Fa0/1 1-4094
Port Vlans allowed and active in management domain
Fa0/1 1-20
Port Vlans in spanning tree forwarding state and not pruned
Fa0/1 1-9,12-20
Note that this is the switch that uses VLAN 11 as the native VLAN. This VLAN was reported as the locally inconsistent PVID; VLAN 10, used as the native VLAN on the opposite switch, was reported as the peer inconsistent PVID. You can also see from the show int trunk output that Fa0/1 has VLANs 1-9,12-20 in the forwarding state. Obviously, the port is not blocked for all VLANs. This is also corroborated by the show spanning-tree inconsistentports command:
Switch(config-if)#do show span inconsist
Name Interface Inconsistency
-------------------- ------------------------ ------------------
VLAN0010 FastEthernet0/1 Port VLAN ID Mismatch
VLAN0011 FastEthernet0/1 Port VLAN ID Mismatch
Number of inconsistent ports (segments) in the system : 2
Considering this, however, note that Marius reported many peer inconsistent PVIDs against a single local inconsistent PVID of 1. That would suggest that the PVST+ BPDUs arriving at the offending switch were all untagged - or processed as untagged - and were therefore put into VLAN 1, creating multiple reports of PVID inconsistency. I am not sure what could cause so many PVST+ BPDUs to arrive untagged.
Best regards,
Peter
05-29-2012 07:56 AM
Hello Peter,
Thanks for having tested this; rated as it deserves.
A packet capture would probably be needed to understand which device is misbehaving: a local SPAN session with the appropriate configuration to capture BPDUs as well.
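A minimal local SPAN configuration for this might look as follows, assuming Gi0/10 is a free port with a capture station attached; "encapsulation replicate" preserves the tagging and also copies Layer 2 control frames such as BPDUs to the destination:

```
monitor session 1 source interface Port-channel1 both
monitor session 1 destination interface GigabitEthernet0/10 encapsulation replicate
```

Inspecting the captured PVST+ BPDUs would then show whether they arrive tagged or untagged, and in which VLAN.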
Best Regards
Giuseppe
08-21-2015 10:18 AM
Hi guys,
Greetings..
I am facing the same issue. Did you get any resolution for this?
08-21-2015 02:27 PM
In the case I had, it turned out to be faulty fiber between the switches. My client replaced the fiber and the issue was resolved.
05-23-2020 03:37 PM
If all other answers come up short, verify that the native VLAN for the port-channel or trunk is defined on both sides of the trunk link. For example:
vlan 999
name native
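And, assuming the trunk is Port-channel1, the native VLAN then also has to be set identically on both ends of the link:

```
interface Port-channel1
 switchport trunk native vlan 999
```

If the VLAN is defined on one side only, or the native VLAN settings differ, the PVID inconsistency messages can appear.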
05-29-2012 10:42 AM
Well, I checked the config at the client and everything looks correct. The ports did not show as inconsistent. I hardcoded the native VLAN on the port-channel and the error went away; however, the interfaces on the core switch are still flapping. I suggested trying an IOS upgrade on the core and, if that fails, pulling down switches one by one until we find the one causing the issue.
Thanks for your ideas!