CSCvk32439 - IPv6 - SISF main thread consumes high CPU DHCPv6/ICMPv6 SOLICIT - 2

I believe this bug, or a variation of it, is present in 16.3.7 as well. We have been fighting high CPU from the SISF Switcher thread since upgrading to 16.3.7 and initially applied the workaround only to the trunk links, as described in the bug notes. After a number of attempts to resolve the issue failed, including splitting the stack into two smaller stacks, we applied the IPv6 filter to the user-facing interfaces and that stopped the issue immediately. We could not identify any clear sign of a "flood" of traffic going on.

 

Prior to applying the filter to the switch ports:

 

switch#sh proc cpu sorted 
CPU utilization for five seconds: 95%/38%; one minute: 94%; five minutes: 94%
 PID Runtime(ms)     Invoked      uSecs   5Sec   1Min   5Min TTY Process 
 299    16210718      412251      39322 48.85% 45.78% 45.60%   0 SISF Switcher Th 
 225      949559      115216       8241  2.06%  1.73%  1.73%   0 Spanning Tree 

This was applied to each bank of ports:

switch(config)#int range g2/0/1-48
switch(config-if-range)#ipv6 traffic-filter DENY-IPV6 in 
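
The DENY-IPV6 access list itself isn't shown above. A minimal sketch of such a filter, one that simply drops all IPv6 on those ports (the entries here are an assumption; only the name comes from our config), would be:

switch(config)#ipv6 access-list DENY-IPV6
switch(config-ipv6-acl)#deny ipv6 any any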

CPU usage afterwards:

switch#sh proc cpu sorted
CPU utilization for five seconds: 18%/5%; one minute: 34%; five minutes: 73%
 PID Runtime(ms)     Invoked      uSecs   5Sec   1Min   5Min TTY Process 
 225      959669      116139       8263  1.91%  1.76%  1.76%   0 Spanning Tree    
  74     1581721      536829       2946  1.75%  3.13%  3.14%   0 IOSD ipc task    

 

6 Replies

I should add that this is on a 3650 series switch.


I have a 3650 on Everest 16.6.4a (the recommended release), also with high CPU due to:

CPU utilization for five seconds: 92%/25%; one minute: 55%; five minutes: 55%
 PID Runtime(ms)     Invoked      uSecs   5Sec   1Min   5Min TTY Process
 478  1071381667   153333886       6987 37.51% 19.57% 19.45%   0 SISF Switcher Th
 479   763884804   142905654       5345 24.55% 14.02% 13.90%   0 SISF Main Thread
 206   132428380    71692306       1847  1.75%  1.72%  1.71%   0 Spanning Tree    
 114    61385239   269843847        227  1.35%  0.87%  1.02%   0 IOSXE-RP Punt Se
 132    65391731    17426494       3752  0.87%  0.58%  0.57%   0 PLFM-MGR IPC pro

I believe we have the same issue.

All I need to do to raise the CPU is add this command: ip dhcp snooping vlan 10, even though DHCP snooping itself is disabled.

Switch1#sho ip dhcp snooping 
Switch DHCP snooping is disabled
Switch DHCP gleaning is disabled
DHCP snooping is configured on following VLANs:
10
DHCP snooping is operational on following VLANs:
10
DHCP snooping is configured on the following L3 Interfaces:

Insertion of option 82 is disabled
   circuit-id default format: vlan-mod-port
   remote-id: 1234.1234.1234 (MAC)
Option 82 on untrusted port is not allowed
Verification of hwaddr field is enabled
Verification of giaddr field is enabled
DHCP snooping trust/rate is configured on the following Interfaces:

Interface                  Trusted    Allow option    Rate limit (pps)
-----------------------    -------    ------------    ----------------   
GigabitEthernet1/1/1             yes        yes             unlimited
  Custom circuit-ids:
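
To make the repro concrete, this is just a sketch of what I described above, entered from global configuration mode; the "no" form presumably brings the CPU back down, but that is exactly the workaround I want to avoid:

Switch1(config)#ip dhcp snooping vlan 10
! SISF Switcher/Main thread CPU climbs shortly afterwards
Switch1(config)#no ip dhcp snooping vlan 10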

 

The fact that Cisco is not interested in this is not OK ("No release planned to fix this bug").

Do I need to contact TAC?

Is there something I can do?

Disabling snooping is not a solution, and neither is an IPv6 ACL!


I can confirm this for the C3650 with 16.3.5, 16.3.6 and 16.3.7.

Bug CSCvg26012 also seems related, as the solution there also helps reduce CPU usage.


I saw this on our 3560CX switches and after upgrading to 15.6.2(E2) I haven't noticed it coming back.  I don't believe we've noticed it on the latest 16.3 code on our 3650 and 3850 switches.


We did not see it on all of our switches, or even on several. In fact, only a single 4-member stack out of hundreds experienced the problem. I'm guessing some rare combination of traffic or circumstances in that building must be present to tickle this bug. Unfortunately, there was no stand-out traffic that was an obvious trigger.


We had two out of six Catalyst 3650s on IOS XE 16.6.5 hit the high CPU SISF bug. We had to hard reboot them with a power cycle and remove all ip dhcp snooping commands right after the switches came back up. The high CPU appeared on one switch after about two weeks of uptime and on the other after about two days. This happened after we had configured:

 

ip dhcp snooping vlan 3-7
ip dhcp snooping database flash:/dhcp-snoop
ip dhcp snooping
ip dhcp snooping information option format remote-id hostname

 

The high SISF CPU still came back on the second switch even after removing the global ip dhcp snooping command, leaving this configuration:

 

ip dhcp snooping vlan 3-7
ip dhcp snooping database flash:/dhcp-snoop
ip dhcp snooping information option format remote-id hostname

 

In other words, DHCP snooping was off!
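
If it helps anyone, this is a sketch (global configuration mode, using the same values as above) of removing the remaining snooping lines explicitly as well:

no ip dhcp snooping vlan 3-7
no ip dhcp snooping database
no ip dhcp snooping information option format remote-id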

 

Since we also use this type of switch for routing in small offices, the network was completely unusable; we were not able to connect via telnet/SSH or via the serial line.

This is a very bad situation for us...