CSCvk32439 - IPv6 - SISF main thread consumes high CPU DHCPv6/ICMPv6 SOLICIT - 2

kcrane2
Level 1

I believe that this bug, or a variation of it, is present in 16.3.7 as well. We have been fighting a high CPU issue with the SISF Switcher thread since upgrading to 16.3.7, and initially applied the filter to just the trunk links as described. After a number of attempts to resolve the issue failed, including splitting the stack into two smaller stacks, we put the IPv6 filter on the user interfaces and this stopped the issue immediately. We could not identify any clear signs that there was a "flood" of traffic going on.

 

Prior to applying the filter to the switch ports:

 

switch#sh proc cpu sorted
CPU utilization for five seconds: 95%/38%; one minute: 94%; five minutes: 94%
 PID Runtime(ms)     Invoked      uSecs   5Sec   1Min   5Min TTY Process
 299    16210718      412251      39322 48.85% 45.78% 45.60%   0 SISF Switcher Th
 225      949559      115216       8241  2.06%  1.73%  1.73%   0 Spanning Tree

This was applied to each bank of ports:

switch(config)#int range g2/0/1-48
switch(config-if-range)#ipv6 traffic-filter DENY-IPV6 in
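
The post does not show the body of the DENY-IPV6 access list. Assuming it is a simple deny-all filter, a minimal sketch would look like the lines below; note that an explicit deny ipv6 any any also overrides the implicit neighbor-discovery permits that IOS IPv6 ACLs normally carry, so ND is blocked on these ports as well.

! Sketch only - the ACL body is assumed, not taken from the post
ipv6 access-list DENY-IPV6
 deny ipv6 any any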

CPU usage afterwards:

switch#sh proc cpu sorted
CPU utilization for five seconds: 18%/5%; one minute: 34%; five minutes: 73%
 PID Runtime(ms)     Invoked      uSecs   5Sec   1Min   5Min TTY Process 
 225      959669      116139       8263  1.91%  1.76%  1.76%   0 Spanning Tree    
  74     1581721      536829       2946  1.75%  3.13%  3.14%   0 IOSD ipc task    
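
Not shown in the original post, but since there was no obvious flood, the ACL match counters are one way to gauge how much IPv6 traffic the filter is actually absorbing. A sketch, assuming the DENY-IPV6 name used above: clear the counters, wait a few minutes, then check the per-entry match counts.

switch#clear ipv6 access-list DENY-IPV6
switch#show ipv6 access-list DENY-IPV6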

 

8 Replies

kcrane2
Level 1

I should add that this is on a 3650 series switch.

I have a 3650 on Everest 16.6.4a (the recommended release), also with high CPU due to:

CPU utilization for five seconds: 92%/25%; one minute: 55%; five minutes: 55%
 PID Runtime(ms)     Invoked      uSecs   5Sec   1Min   5Min TTY Process
 478  1071381667   153333886       6987 37.51% 19.57% 19.45%   0 SISF Switcher Th
 479   763884804   142905654       5345 24.55% 14.02% 13.90%   0 SISF Main Thread
 206   132428380    71692306       1847  1.75%  1.72%  1.71%   0 Spanning Tree    
 114    61385239   269843847        227  1.35%  0.87%  1.02%   0 IOSXE-RP Punt Se
 132    65391731    17426494       3752  0.87%  0.58%  0.57%   0 PLFM-MGR IPC pro

I believe we have the same issue.

All I need to do to raise the CPU is add the command ip dhcp snooping vlan 10, even though DHCP snooping is globally disabled.

Switch1#sho ip dhcp snooping 
Switch DHCP snooping is disabled
Switch DHCP gleaning is disabled
DHCP snooping is configured on following VLANs:
10
DHCP snooping is operational on following VLANs:
10
DHCP snooping is configured on the following L3 Interfaces:

Insertion of option 82 is disabled
   circuit-id default format: vlan-mod-port
   remote-id: 1234.1234.1234 (MAC)
Option 82 on untrusted port is not allowed
Verification of hwaddr field is enabled
Verification of giaddr field is enabled
DHCP snooping trust/rate is configured on the following Interfaces:

Interface                  Trusted    Allow option    Rate limit (pps)
-----------------------    -------    ------------    ----------------   
GigabitEthernet1/1/1             yes        yes             unlimited
  Custom circuit-ids:
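
To confirm the correlation, a sketch (not from the original post) of reverting the only snooping statement present and re-checking, the expectation being that the CPU drops back once the per-VLAN line is gone:

Switch1(config)#no ip dhcp snooping vlan 10
Switch1(config)#end
Switch1#sho ip dhcp snooping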

 

The fact that Cisco is not interested in this is not OK ("No release planned to fix this bug").

Do I need to contact TAC?

Is there something I can do?

Disabling snooping is not a solution, and neither is an IPv6 ACL!

I can confirm this for C3650 with 16.3.5, 16.3.6 and 16.3.7

Bug CSCvg26012 also seems related, as the workaround there also helps decrease CPU usage.

I saw this on our 3560CX switches and after upgrading to 15.6.2(E2) I haven't noticed it coming back.  I don't believe we've noticed it on the latest 16.3 code on our 3650 and 3850 switches.

We did not see it on all of our switches, or even several. In fact, only a single stack of 4 out of hundreds experienced the problem. I'm guessing some set of rare traffic or circumstances must be present in that building to tickle this bug. Unfortunately, there was no real stand-out traffic that was an obvious trigger.

We had two out of six CAT 3650 switches on IOS XE 16.6.5 with the high CPU SISF bug. We had to hard reboot them by power cycling and remove all ip dhcp snooping commands right after the switches came up again. The high CPU appeared on one switch after about two weeks of running and on the other after about two days. This happened after we had configured:

 

ip dhcp snooping vlan 3-7
ip dhcp snooping database flash:/dhcp-snoop
ip dhcp snooping
ip dhcp snooping information option format remote-id hostname

 

The high SISF CPU still came up on the second switch even after removing the ip dhcp snooping command, leaving only:

 

ip dhcp snooping vlan 3-7
ip dhcp snooping database flash:/dhcp-snoop
ip dhcp snooping information option format remote-id hostname

 

In other words, DHCP snooping was off!

 

Since we also use this type of switch for routing in small offices, the network was completely unusable. We were not able to connect by telnet/SSH or even over the serial line.

This is a very bad situation for us...

RAINER ADAM
Level 1

Same on C9200-24P with 17.06.04 

In the meantime the IPv6 filter was engaged, with no change; the one and only AP on this switch rebooted, also with no change. The IPv6 filter is only on user ports, not on the uplink ports (23+24).

 

CPU utilization for five seconds: 98%/1%; one minute: 96%; five minutes: 92%
 PID Runtime(ms)     Invoked      uSecs   5Sec   1Min   5Min TTY Process
 444  2595902252   132755824      19554 92.47% 88.45% 84.76%   0 SISF Switcher Th
  52    91922098   131264502        700  3.27%  3.12%  2.98%   0 ARP Snoop
 114    17680411     2410039       7336  0.31%  0.50%  0.59%   0 Crimson flush tr
 125    20691840   162499424        127  0.31%  0.59%  0.68%   0 IOSXE-RP Punt Se
 264    12971598   113218114        114  0.23%  0.33%  0.41%   0 DAI Packet Proce
 445     8330218    30620695        272  0.23%  0.18%  0.26%   0 SISF Main Thread
 221     1429970   124112810         11  0.15%  0.06%  0.05%   0 IP ARP Retry Age
 439    23779896    52464237        453  0.15%  0.70%  0.53%   0 SNMP ENGINE
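
Not in the original post, but on IOS XE the SISF data sits behind the device-tracking CLI, so these commands (a sketch; the VLAN number is just an example) can show what the Switcher thread is chewing on and whether a single VLAN or host dominates:

switch#show device-tracking database
switch#show device-tracking counters vlan 10
switch#show device-tracking policies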

 

 

RAINER ADAM
Level 1

Disabling IP device tracking also solves this issue. Do this on uplink ports.

CAT9000: device-tracking attach-policy DT_trunk_policy

CAT 2960X: ip device tracking maximum 0

 

Both on uplinks and AP ports only.
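
The post only shows the attach command, not the policy body. A minimal sketch of the kind of trunk policy usually used here (trusted port, switch device-role, so SISF stops gleaning on the uplink); the policy name is taken from the post, the interface is just an example.

! Sketch - the policy body is assumed, only the attach command appears in the post
device-tracking policy DT_trunk_policy
 trusted-port
 device-role switch
!
interface GigabitEthernet1/0/23
 device-tracking attach-policy DT_trunk_policy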

 
