
Protecting from access switch network loops radiating back to 6509 cores and causing a core DoS-type impact

pcweber
Level 1

We have a dual 6509 core running VSS, and many access switches with dual uplinks in port channels. The access switches are a mix of HP, Avaya, etc. across multiple buildings of 3-5 floors each, with two access switch stacks on each floor. Two or three times a year someone, usually in a conference room that has a small 4-8 port switch under or on the table, will loop the network (meaning they take a single cable and end up plugging both ends of it into that same 4-8 port switch). That loops the network back toward the closet access switch stack and then on to the cores. The result is packet loss, database disconnects, etc.; basically an overall DoS event.

We had a scheduled downtime window this past weekend and inserted a network loop to watch the cores. The CPU peaked at 20%, so it is not a CPU-pegged scenario.

So I am asking if anyone has a very open workplace that allows users to dynamically add laptops and other devices in conference rooms, etc., and has been challenged with this. It is a human-error issue, period. We are 100% VoIP, so the Polycom phones have a built-in switch that can allow it too. It does me no good to be advised to run port-based security, etc. We are a large campus, very open, with a younger workforce that pushes for the ability to operate dynamically, especially in conference room facilities. We tried running 802.1X 9-10 years ago, and the CIO came into my office and told us we are not doing that, after some challenges getting meetings started caused impact. Anyhow, my question is focused on protecting the core in a looped-network scenario, and on any config settings I can try on the cores toward that goal.

5 Replies

Reza Sharifi
Hall of Fame

If you are not able to block people from bringing their own hubs, plugging them into the network, or putting them on conference room tables and causing outages, you may want to deploy storm control on every single user port except printer and camera ports:

! global command: count multicast toward the broadcast suppression level
storm-control broadcast include multicast
! per user port: suppress broadcast above 1% of link bandwidth
storm-control broadcast level 1.00
! send an SNMP trap instead of err-disabling the port
storm-control action trap

This way, the broadcast traffic will not bring the switch to its knees.
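Once it is in place, you can verify the suppression levels and discard counters per port; for example (the per-interface form and interface name here are just an example; the full-table form appears further down this thread):

6509_core# show interfaces GigabitEthernet1/1/15 counters storm-control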

HTH

I will give it a try. I had it set at 80% but have now reduced it. The problem is I can't test until a scheduled downtime window, as a loop brings the entire network to its knees and creates packet loss across all VLANs, etc. Then, when we remove the loop, it takes 3-6 minutes to recover. Our previous cores were Avaya 8600s and they healed instantly.

 

Below are the settings I used on the interfaces and on the port channel they are members of.

 

storm-control broadcast level 3.00
storm-control multicast level 3.00
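
For context, a minimal sketch of how those two lines sit under a member interface and its port channel (Po311 matches the output below; the Gigabit interface name here is hypothetical):

interface GigabitEthernet1/1/20
 storm-control broadcast level 3.00
 storm-control multicast level 3.00
!
interface Port-channel311
 storm-control broadcast level 3.00
 storm-control multicast level 3.00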

As an FYI to anyone reading this thread, here is the command to show the storm-control status of all ports and port channels. I cut a lot out, as we have over 300 interfaces and 56 port channels. Note: if you have no storm-control settings on a port, it will show 100 100 100.

 

6509_core# show interfaces counters storm-control

Port       UcastSupp %   McastSupp %   BcastSupp %   TotalSuppDiscards
Gi1/1/15         90.00         90.00         90.00                   0
Gi1/1/16        100.00        100.00        100.00                   0
Po311           100.00          3.00          3.00                   0

kapslock
Level 1

You might want to look at configuring spanning-tree BPDU guard, which will err-disable any port on which the switch receives a BPDU. For example, if you connect a hub to both gi1/0/1 and gi1/0/2, the switch's own BPDU gets looped back through the hub; the switch receives its own BPDU, knows there is a loop, and err-disables the port. Enable this on all end-user access ports with

interface gix/y/z
spanning-tree bpduguard enable

 

Or enable the feature globally on all PortFast-enabled access ports with

spanning-tree portfast bpduguard default  

 

Storm control is also nice to have on access ports to protect against broadcast storms; it can be configured like this:

interface gix/y/z
! rising / falling thresholds in packets per second
storm-control broadcast level pps 1.5k 1k
storm-control multicast level pps 1.5k 1k
storm-control action shutdown

 

Also, to automate re-enabling interfaces after a violation, you can use errdisable recovery to "no shut" interfaces after 300 seconds or a custom value:

errdisable recovery cause bpduguard
errdisable recovery cause storm-control
errdisable recovery interval 300
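
You can confirm which recovery causes and which timer are active with the standard show command:

show errdisable recovery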

 

/K.

I don't have PortFast enabled on uplink ports or on dual-linked port channel group members. The engineer who installed them said you don't want PortFast on upstream switch uplinks, so spanning tree can fully vet the connection.
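
So the split we aim for looks roughly like this (interface names hypothetical): PortFast and BPDU guard on user-facing edge ports only, and neither on the uplink or port-channel members:

! user-facing edge ports
interface range GigabitEthernet1/0/1 - 44
 spanning-tree portfast
 spanning-tree bpduguard enable
!
! uplink / port-channel members toward the core: no portfast,
! so spanning tree vets the link normally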

 

What about the spanning-tree loop guard command? Has anyone tried it in this situation of upstream access switch loops?
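
For reference, loop guard is enabled globally or per interface as below. Note it targets a different failure mode (a blocking port that stops receiving BPDUs, e.g. over a unidirectional link, being allowed into forwarding), so it may not catch a hub loop on an edge port; Po311 is used here only as an example:

! globally, on all point-to-point links
spanning-tree loopguard default
!
! or per interface
interface Port-channel311
 spanning-tree guard loop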
