06-04-2020 11:21 AM - edited 06-10-2020 02:27 PM
Please read the full issue before responding. I did a big upgrade to three offices (all now running 15.0(2)SE12) and all three are now affected, but only on the 3560-E model. Yes, it's EoS, but it's still listed under compatible hardware for 15.0(2)SE12.
I'm at a loss at this point. One of these sites has two other switches that are behaving normally and passing traffic, but they hit the same issue when it comes to passing traffic to the 3560-E on vlan 1; anything destined for vlan 2 always makes it. Anything destined to the vlan 1 SVI itself works (and it passes traffic on to the other vlan), but the vlan 1 SVI does not seem to want to pass traffic to anything in its own vlan.
For those wondering, I never found the root cause or a fix, but I did implement a band-aid by creating a new subnet on vlan 3 and moving users from vlan 1 (the default) to vlan 3. I left all the routing on SVI 1 (as it was originally) and it seems to be working. I don't believe this was a native vlan issue, because the ports on this switch were all default access ports and there were no trunks to/from it.
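For anyone who wants the shape of the band-aid, it boiled down to something like the lines below. The vlan number is real, but the subnet, interface range, and name are placeholders, not our actual addressing:
config t
!
vlan 3
 name USER-WORKAROUND
!
interface Vlan3
 ! placeholder subnet for the new user vlan
 ip address 192.168.3.1 255.255.255.0
!
interface range GigabitEthernet0/2 - 4
 ! example range - the real user-facing access ports were the ones moved
 switchport mode access
 switchport access vlan 3
!
end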
06-04-2020 11:31 AM
This was an old issue. If I remember correctly, you need to execute the command below in global configuration mode to fix it.
Try the command below. If this is a production environment, do it in a maintenance window. Test and let me know.
config t
!
no macro auto monitor
!
end
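Once that's applied, you should be able to confirm it took effect with something along these lines (the exact line shown in the output can vary a bit by release):
show running-config | include macro auto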
06-04-2020 11:43 AM - edited 06-04-2020 11:50 AM
Are you referring to the CSCud83248 bug? I literally just read that before I saw your response, but what's strange is that I have an STP instance in vlan 1 and I do not use vlan 99 for STP. It would still make sense in my case, though, since the native vlan is the one being affected.
Do you know what impact this command has? I don't have a problem running it during the day, but my only access is remote (SSH), so if there's any chance I could lose access permanently, I would like to plan for that.
Try the command below. If this is a production environment, do it in a maintenance window. Test and let me know.
It is a production environment, but due to the issue it's a full-on outage anyway, so no maintenance window is needed.
I ran it and it did not fix the issue (unless it requires a reboot). I read some more about this bug and it does not apply to my office; we are already using vlan 1 as the native vlan. I also noticed that the command you suggested references dot1x, which is 802.1X port authentication that we don't use.
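For anyone checking the same thing on their own gear, commands along these lines will show whether dot1x or a non-default native vlan is actually configured (the output will obviously differ per switch):
show running-config | include dot1x
show dot1x all
show interfaces trunk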
06-04-2020 02:06 PM
The issue causes a complete outage, so I ran the command anyway, but it didn't fix anything. I noticed the bug related to your post mentions this being an issue when vlan 1 is not the native vlan; in my case it is. It also references dot1x, which we don't utilize (although I ran the command to turn it off anyway). I did not observe any difference in functionality.
06-04-2020 03:01 PM
So that part did not work.
Can you post the complete configuration, before and after the migration (with sensitive information removed)?
Can you also post the spanning-tree information for the VLAN?
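Something along these lines should capture what I am after (adjust the vlan numbers to match your setup):
show spanning-tree vlan 1
show spanning-tree vlan 2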
06-08-2020 02:04 PM - edited 06-08-2020 02:17 PM
The pre- and post-migration configurations are the same, and I cannot provide them. Unless the IOS added something in one of the updates, these switches are pretty bare.
More Findings
I can also verify that the upstream device (a Velocloud) is directly connected to the switch. When I ping the VLAN0001 SVI on the switch from the upstream device, the packets are received by the switch and processed. When I ping a host on VLAN0001 connected to the switch from the upstream device, the packets are never seen from the switch's perspective.
I verified this through an inbound ACL on the SVI as well as an embedded packet capture. I can also verify the packets are sent from the upstream device via an embedded packet capture on that device. So somewhere across that 1' cable, something gets lost or mishandled.
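For reference, the ACL check was along these lines; 192.0.2.1 is a placeholder for the upstream device's address, not the real one:
config t
!
ip access-list extended VLAN1-ICMP-TEST
 ! count ICMP sourced from the upstream device
 permit icmp host 192.0.2.1 any
 ! leave everything else untouched
 permit ip any any
!
interface Vlan1
 ip access-group VLAN1-ICMP-TEST in
!
end
!
show access-lists VLAN1-ICMP-TEST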
Spanning-tree
VLAN0001
  Spanning tree enabled protocol rstp
  Root ID    Priority    49153
             Address     d0d0.fdc1.a880
             This bridge is the root
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec

  Bridge ID  Priority    49153  (priority 49152 sys-id-ext 1)
             Address     d0d0.fdc1.a880
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
             Aging Time  300 sec
  UplinkFast enabled but inactive in rapid-pvst mode

Interface           Role Sts Cost      Prio.Nbr Type
------------------- ---- --- --------- -------- --------------------------------
Gi0/1               Desg FWD 3004      128.1    P2p Edge
Gi0/2               Desg FWD 3004      128.2    P2p Edge
Gi0/3               Desg FWD 3004      128.3    P2p Edge
Gi0/4               Desg FWD 3004      128.4    P2p Edge
Gi0/6               Desg FWD 3004      128.6    P2p Edge
Gi0/16              Desg FWD 3004      128.16   P2p Edge
Gi0/21              Desg FWD 3004      128.21   P2p Edge

VLAN0002
  Spanning tree enabled protocol rstp
  Root ID    Priority    49154
             Address     d0d0.fdc1.a880
             This bridge is the root
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec

  Bridge ID  Priority    49154  (priority 49152 sys-id-ext 2)
             Address     d0d0.fdc1.a880
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
             Aging Time  300 sec
  UplinkFast enabled but inactive in rapid-pvst mode

Interface           Role Sts Cost      Prio.Nbr Type
------------------- ---- --- --------- -------- --------------------------------
Gi0/2               Desg FWD 3004      128.2    P2p Edge
Gi0/3               Desg FWD 3004      128.3    P2p Edge
Gi0/4               Desg FWD 3004      128.4    P2p Edge
Gi0/6               Desg FWD 3004      128.6    P2p Edge
Gi0/21              Desg FWD 3004      128.21   P2p Edge
06-04-2020 11:48 PM
Hello,
I read through some posts related to this issue; one suggestion is to physically (not remotely) shut down the switch (power it off) and restart it after about 10 minutes. Can you give that a try?
06-08-2020 02:07 PM
Hi!
Hear me out on this one. Long story short, we tried that, but not for 10 minutes.
In one office, this model is daisy-chained to two other switches. When this switch was upgraded, I was only able to reach the vlan 1 SVI (with the same issue described above). After a soft reboot, the switch was no longer accessible at all. We made some physical adjustments and had someone on site pull the power on the switch. After we supplied power again, the switch was reachable again, but only via the vlan 1 SVI, like it was before.
I'll see what I can do about leaving it unplugged for 10 minutes.
06-10-2020 07:59 AM
We physically pulled the power cable out of the switch this morning for 15 minutes and that did not resolve the issue.