How to trace the broadcast source in N7K

Maivoko
Level 1

I found that CPU utilization on the access switch is high (97%), in the FED process.

Then I found that the busiest CPU queue is the one called "broadcast".

Then I found that the port involved is the one connected to the distribution layer switch (N7K).

What should I do next?

 

TX
19958467910 unicast packets 2086888451 multicast packets 2348303959 broadcast packets
24393660320 output packets 13073304291187 bytes
0 jumbo packets
0 output error 0 collision 0 deferred 0 late collision
0 lost carrier 0 no carrier 0 babble 44682 output discard
0 Tx pause
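
For what it is worth, in the TX counters above broadcast accounts for roughly 2.35 billion of 24.4 billion output packets, i.e. about 9-10% of everything sent on this port since the counters were last cleared. What matters for a storm, though, is the current rate rather than the cumulative share. A minimal sketch of how to check that, assuming a Catalyst access switch and that the uplink is Gi1/1/2 (the interface identified later in this thread); the commands and filter are illustrative only:

! reset the counters so the next reading reflects the current rate
clear counters GigabitEthernet1/1/2
! wait a minute or two, then see how fast the broadcast counters grow
show interfaces GigabitEthernet1/1/2 counters
show interfaces GigabitEthernet1/1/2 | include rate|broadcast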

7 Replies

Reza Sharifi
Hall of Fame

If you disconnect the access switch from the distribution layer switch, does the CPU utilization change?

Can you post the output of "sh process cpu | ex 0.00"?

HTH

c0101as01#sh process cpu | ex 0.00
Core 0: CPU utilization for five seconds: 96%; one minute: 93%; five minutes: 93%
Core 1: CPU utilization for five seconds: 8%; one minute: 7%; five minutes: 9%
Core 2: CPU utilization for five seconds: 4%; one minute: 6%; five minutes: 7%
Core 3: CPU utilization for five seconds: 96%; one minute: 96%; five minutes: 92%
PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process
5609 2495038 100445884 153 0.05 0.01 0.03 0 system_mgr
5880 3958811 248839796 547 25.8 25.2 25.2 1088 fed
5881 3818334 240690573 13 0.14 0.11 0.10 0 platform_mgr
5882 2041507 165975042 37 0.10 0.05 0.05 0 stack-mgr
6364 3788612 100357393 80 0.05 0.02 0.04 0 idope.py
6429 3667322 106209980 74 0.05 0.02 0.02 0 console_relay
6434 2391615 233137985 1605 23.1 22.7 22.8 0 pdsd
6445 2837286 105392451 882 0.05 0.05 0.05 0 cpumemd
6466 2353173 536461240 44 0.05 0.01 0.16 0 snmp_subagent
8687 775926 383714802 153 1.89 2.14 2.12 0 iosd
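
A possible next step at this point (hedged, since the exact syntax depends on platform and release) is to drill one level deeper into the fed process on this 3650/3850-class switch and confirm that the CPU time is going to punted traffic rather than something else; Cisco's high-CPU troubleshooting guides for these switches use a command along these lines:

show processes cpu detailed process fed sorted | exclude 0.0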

 

The counter for CPU queue 12 is the largest:

show platform punt statistics port-asic 0 cpuq -1 direction rx | in RXQ 12
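
Since these punt counters are cumulative, it can help to confirm that queue 12 is still being hit at a high rate right now, not just that it has the largest historical count. A minimal sketch, simply reusing the command above twice and comparing the deltas:

show platform punt statistics port-asic 0 cpuq -1 direction rx | in RXQ 12
! wait about 10 seconds, run it again, and compare how much the counters moved
show platform punt statistics port-asic 0 cpuq -1 direction rx | in RXQ 12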

 

show pds tag all | in BROADCAST

c0101as01#show pds tag all | in cast
65547 6976704 Punt Rx Mcast Data 0 0 0 0 0
65542 7001312 Punt Rx Mcast Snoop 188510299 188510299 0 188510299 1478910358
65541 7395040 Punt Rx Broadcast 3296139495 3296139495 0 -998827801 -1748326279

debug pds switch 1 pktbuf-last
show pds client 7395040 packet last sink
no debug pds switch 1 pktbuf-last

 

show platform port-asic ifm iif-id 0x0106c10000000040

 

show platform port-asic ifm iif-id 0x0106c10000000040
Interface Table

Interface IIF-ID : 0x0106c10000000040
Interface Name : Gi1/1/2
Interface Block Pointer : 0x56a46fe8
Interface State : READY
Interface Stauts : IFM-ADD-RCVD, FFM-ADD-RCVD
Interface Ref-Cnt : 7
Interface Epoch : 0
Interface Type : ETHER
Port Type : SWITCH PORT
Port Location : LOCAL
Slot : 1
Unit : 2
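
The iif-id decode above points at Gi1/1/2, i.e. the uplink toward the N7K, so the broadcasts are arriving from upstream and the next step is to follow them hop by hop. A hedged sketch of what that can look like, using the source MAC address taken from the captured punt packet ("show pds client ... packet last sink" above); the MAC aaaa.bbbb.cccc and the N7K interface are placeholders, not values from this thread:

! on the access switch: confirm the source MAC is learned on the uplink
show mac address-table address aaaa.bbbb.cccc
! on the N7K (NX-OS): find which port and VLAN that MAC lives behind
show mac address-table address aaaa.bbbb.cccc
! check the broadcast counters on that port, then repeat on the next switch down
show interface ethernet 1/1 counters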

 

 

One out of four CPU cores is running very high. This may be a bug in the software version you are running. Open a ticket with Cisco and send them the data for analysis.

HTH

Before opening a ticket, is it possible to trace the source of the broadcast traffic on the N7K at the distribution layer?

Usually a broadcast storm is caused by a loop in the network, when STP is not working correctly. So I would say look at the STP topology and verify that all ports are forwarding as expected (a sketch of the relevant checks is below).

HTH
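
A minimal sketch of the kind of STP checks meant here, run on both the access switch and the N7K (the syntax below is the IOS form; the NX-OS equivalents are similar):

! overall STP health and mode
show spanning-tree summary
! topology change counters and the port the last change came from --
! constantly incrementing changes are a common sign of a loop or a flapping port
show spanning-tree detail | include ieee|occurr|from|is exec
! verify that the ports you expect to be blocking really are blocking
show spanning-tree blockedports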

Is the number large enough to be considered a broadcast storm?

 

How large does it have to be to be considered a broadcast storm?

 

If we have satellite TV streaming on the network, could the broadcast traffic come from that?

 

If the broadcast traffic comes from the uplink, e.g. from the distribution layer, do we need to care about it?

 

Because when I searched Cisco's official website, the suggested solution was only to shut down the offending access port on the access switch.
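
That recommendation usually maps to storm control on the access ports, so that a broadcast storm is rate-limited or the offending port is err-disabled automatically instead of having to be shut down by hand. A minimal sketch, assuming IOS access ports; the interface name and the 1% threshold are illustrative only and should be tuned to your baseline:

interface GigabitEthernet1/0/10
 ! drop broadcast traffic above 1% of the port bandwidth
 storm-control broadcast level 1.00
 ! err-disable the port when the threshold is hit (use "trap" to only alert)
 storm-control action shutdown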

 

 

If there are four CPU cores and only one of them is over 90%, do we still need to open a ticket?