Cisco 6880-X-LE slow response time (CPU problem?)

06-23-2014 11:58 AM - edited 03-07-2019 07:49 PM
Hello,
Just recently I refreshed a complete network where old 3550s and a 6500 (SUP2) were replaced with 2960X access switches and a 6880-X-LE core. Ever since the refresh we are seeing really weird response times, and users report the network as slow at times. The topology is pretty basic: 2x 6880-X-LE core (VSS) and 2960X access (single switches up to stacks of 3) with dual 10G EtherChannel uplinks towards the core.
The logical setup is a little different because we are preparing a migration from a conventionally routed core to MPLS VPN. The MPLS configuration is already completed on the device, but the routes are leaked locally into the old IPv4 core until all sites have core equipment that supports MPLS VPN.
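To illustrate the staging described above, here is a minimal sketch of a VRF prepared for MPLS VPN but leaked into the global table in the meantime. All names, RDs, route-targets, interfaces and addresses are hypothetical, not our actual configuration:

! Hypothetical VRF, ready for MPLS VPN import/export
ip vrf CORP
 rd 65000:10
 route-target export 65000:10
 route-target import 65000:10
!
! Per-VRF OSPF instance (VRF-lite towards the legacy core)
router ospf 10 vrf CORP
 network 10.10.0.0 0.0.255.255 area 0
!
! Static leaking between the VRF and the global table
! until all sites run MPLS VPN
ip route vrf CORP 0.0.0.0 0.0.0.0 192.0.2.1 global
ip route 10.10.0.0 255.255.0.0 TenGigabitEthernet1/1 10.10.0.2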
Looking at the CPU of the 6880-X-LE, I find the slcp process (VSS) running really high, plus there are spikes every now and then:
[show processes cpu history - graph alignment lost in the forum rendering; readings summarized below]
CPU% per second (last 60 seconds): 16-31%, cycling with brief peaks at 25-31%
CPU% per minute (last 60 minutes): average ~20%, maximum ~30% most minutes, one spike to ~56%
CPU% per hour (last 72 hours): average ~20%, maximum 50-80% every hour
Interrupt processing is 0%, so all of the load is control-plane processing. A total of 8 OSPF instances are running (the old VRF-lite infrastructure), plus BGP for the new MPLS VPN that has been prepared:
CPU utilization for five seconds: 27%/0%; one minute: 21%; five minutes: 21%
PID   Runtime(ms)  Invoked    uSecs  5Sec    1Min    5Min    TTY  Process
108   120571588    177074071    680  15.75%  12.55%  12.65%    0  slcp process
603   22238784       3375191   6588   6.23%   3.08%   2.88%    0  Lif stats RP tas
602   22289352       1620456  13754   4.00%   2.79%   2.81%    0  Lif stats hw rea
143   3863428         827054   4671   0.31%   0.39%   0.41%    0  OIR Process
924   2185836       58781876     37   0.31%   0.21%   0.22%    0  Port manager per
907   2682572        2521666   1063   0.23%   0.33%   0.33%    0  Env Poll
1013  84384            63259   1333   0.15%   0.02%   0.00%    0  l2_fwd_mac_oob_s
.
.
Pinging the 6880-X-LE, or any device attached to it, returns an average of 3 ms locally and 15-20 ms from remote sites.
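For anyone comparing against their own box, these standard IOS commands help separate process-driven load from interrupt-driven (punted-traffic) load; the output above came from the first two, and exact syntax can differ per release:

show processes cpu sorted 5min         ! top processes by 5-minute average
show processes cpu history             ! the graphs summarized above
show platform hardware capacity cpu    ! hardware vs. control-plane CPU view
show ibc                               ! inband channel: traffic punted to the CPU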
Any ideas?

06-27-2014 03:33 AM
I am running the latest IOS, 15.1(2)SY2. Still seeing the same weird behavior after 2 weeks of uptime:
[show processes cpu history - graph alignment lost in the forum rendering]
CPU% per hour (last 72 hours): average ~20%, maximum ~60% nearly every hour with frequent peaks to ~70%
Pinging from 2 hops away:
Packets: Sent = 12, Received = 12, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 1ms, Maximum = 63ms, Average = 10ms
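If ad-hoc pings keep coming back this inconsistent, a small IP SLA probe gives a continuous latency record to correlate with the CPU spikes. A minimal sketch; the target address and source interface are hypothetical:

! Hypothetical ICMP probe, one echo every 30 seconds
ip sla 10
 icmp-echo 192.0.2.10 source-interface Vlan100
 frequency 30
ip sla schedule 10 life forever start-time now
!
show ip sla statistics 10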
Anyone? (guess I should open a TAC case...)
08-29-2014 10:05 AM
Jhonny,
I am seeing the same high CPU utilization issue on a pair of 6880-X's that I just deployed as a VSS cluster. Have you found a resolution for this problem?
08-29-2014 10:46 AM
Hello,
Yes, I have. The spikes are caused by a process called NIST RNG, which is mainly used by other processes to generate random numbers. This problem is fixed in IOS 15.1(2)SY3.
The SLCP process running at 15%+ is normal behaviour according to Cisco TAC.
So my recommendation would be to upgrade. I did it using ISSU, and lost a total of 5 pings.
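For reference, ISSU on a VSS pair follows the load/run/accept/commit phases sketched below. This is only an outline: the image name is the one mentioned later in this thread, and the slot/image arguments to loadversion depend on the chassis, so check the command reference for your release:

! Confirm the pair is SSO-redundant and ISSU-capable first
show redundancy states
show issu state detail
!
! Four-phase upgrade; <slots> stands in for the chassis-specific arguments
issu loadversion <slots> disk0:c6880x-adventerprisek9-mz.SPA.151-2.SY3.bin
issu runversion
issu acceptversion
issu commitversion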
Good Luck.
08-29-2014 04:08 PM
Jhonny,
Thanks for the information. I have upgraded to 15.1(2)SY3 and will monitor the CPU utilization.
Just out of curiosity, do you have UDLD enabled on any of your interfaces on your 6880's? I think the new code may be causing an issue with UDLD on my switches. After I upgraded to 15.1(2)SY3, my 6880's started having weird issues with UDLD on some of the interfaces: each 6880 shuts certain interfaces down, due to a supposed unidirectional link, right after it finishes booting up. We did not have this issue with 15.1(2)SY2.
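In case it helps anyone else hitting this, the standard checks plus a possible stop-gap are below; the 300-second interval is just an example value:

show udld neighbors                    ! per-port bidirectional state
show interfaces status err-disabled    ! ports taken down and the cause
!
! Stop-gap: let the switch retry UDLD-errdisabled ports automatically
errdisable recovery cause udld
errdisable recovery interval 300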
02-25-2015 12:47 PM
Hey guys, any update on this? Have you fixed the issue?
09-11-2015 08:48 AM
Cisco Bug: CSCus02698 - UDLD errdisable occurs after 6880-X VSS switchover
Check that out if you are getting this behavior.
09-11-2015 10:03 AM
Shane,
That bug was created as a result of a TAC case that I have had open with Cisco for several months. I have been working with Cisco developers, but they still haven't been able to fix the issue. Are you experiencing the UDLD errdisable behavior on the 6880-X's as well?
Jason
09-11-2015 10:08 AM
Jason,
No, I'm not having issues currently, but I am getting ready to implement some of these, so I was looking around for known issues and ran across this post. I had also seen the bug post, which was updated just yesterday. I just wanted to post it here in case anyone else was looking like I was.
12-03-2014 11:49 AM
I just installed the 6880-X
code: c6880x-adventerprisek9-mz.SPA.151-2.SY3.bin
And I have high utilization:
CORSHE1-LAMR-SW1-G1E#sho processes cpu history
[show processes cpu history - graph alignment lost in the forum rendering; readings summarized below]
CPU% per second (last 60 seconds): 24-38%, cycling with regular peaks near 40%
CPU% per minute (last 60 minutes): average ~30%, maximum ~40%, two brief spikes to ~50%
CPU% per hour (last 72 hours): average 30-40%, maximum mostly 50-60%, spikes to 90-100%
CORSHE1-LAMR-SW1-G1E#
The process causing this:
CPU utilization for five seconds: 29%/0%; one minute: 32%; five minutes: 32%
PID   Runtime(ms)  Invoked    uSecs  5Sec    1Min    5Min    TTY  Process
108   60435144     20641935    2927  24.95%  29.96%  30.10%    0  slcp process
Is this normal? I have 3 FEX stacks attached.
Any resolution?
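Since slcp is tied to VSS (as noted in the first post), a reasonable first comparison against the ~15% that TAC called normal earlier in this thread is to check the process directly and the health of the virtual switch link. These are standard VSS commands, though output varies by release:

show processes cpu sorted 5min | include slcp
show switch virtual role    ! both chassis in the expected active/standby roles
show switch virtual link    ! VSL status and the ports carrying it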
