Cisco 4500 VSS - High CPU Utilization

Hi all,

I implemented a new Cisco VSS composed of two 4507R+E chassis in a quad-supervisor architecture. After migrating from the old core switch to the new one, the CPU usage of the VSS rose to 100%. Below is the output of show processes cpu sorted detailed:

it-zo-dc-sw-1#show processes cpu sorted detailed
Core 0: CPU utilization for five seconds: 6%; one minute: 0%; five minutes: 0%
Core 1: CPU utilization for five seconds: 7%; one minute: 2%; five minutes: 3%
Core 2: CPU utilization for five seconds: 4%; one minute: 1%; five minutes: 1%
Core 3: CPU utilization for five seconds: 99%; one minute: 99%; five minutes: 99%
PID    T C  TID    Runtime(ms) Invoked uSecs  5Sec      1Min     5Min     TTY   Process
                                               (%)       (%)      (%)
5660   L           2766413     6303760 706    24.90     24.96   24.95   0     iosd
5660   L 3  5660   1031545     3008453 0      24.88     24.89   24.87   0     iosd
5660   L 0  8208   1604771     3276874 0      0.02      0.07    0.08    0     iosd.fastpath
5660   L 1  8209   114114      612752  0      0.00      0.00    0.00    0     CMI Thread
5660   L 0  8210   15542       1214724 0      0.00      0.00    0.00    0     iosd.monitor
5660   L 0  8211   438         15766   0      0.00      0.00    0.00    34816 iosd.aux
28     I           3139959     2149185 0      70.00     69.99   69.99   0       EPC WS Pkt Send pro
87     I           656818      2950965 0      20.88     20.77   20.77   0       Cat4k Mgmt HiPri
88     I           929920      1786982 0      4.99      4.66    4.55    0       Cat4k Mgmt LoPri
206    I           1354664     4662999 0      1.00      0.99    1.00    0       IP Input
29     I           1091788     3005950 0      0.88      0.66    0.66    0       ARP Input
68     I           134076      1062015 0      0.44      0.44    0.44    0       IDB Work
197    I           1632232     2487529 0      0.22      0.22    0.22    0       CDP Protocol
215    I           2864292     2299232 0      0.22      0.22    0.22    0       Spanning Tree
346    I           314452      854814  0      0.22      0.11    0.11    0       IP SNMP
251    I           1455852     3756467 0      0.00      0.00    0.00    0       LLDP Protocol
170    I           2916504     1213398 0      0.00      0.11    0.11    0       UDLD
180    I           922740      1879001 0      0.00      0.00    0.00    0       VRRS Main thread
156    I           791612      1887873 0      0.00      0.11    0.00    0       Ethernet Msec Timer
84     I           37560       508473  0      0.00      0.00    0.00    0       cpf_msg_rcvq_proces
80     I           355260      269110  0      0.00      0.00    0.00    0       Compute load avgs
181    I           1572008     2532304 0      0.00      0.00    0.00    0       Tunnel IOSd shim DB
230    I           572000      1333650 0      0.00      0.11    0.11    0       ADJ background
18     I           8           7       0      0.00      0.00    0.00    0       IOSD chasfs task
19     I           384         54      0      0.00      0.00    0.00    0       GaliosQuack_helper
17     I           153656      274486  0      0.00      0.00    0.00    0       IOSD ipc task
21     I           415892      702961  0      0.00      0.00    0.00    0       CMI IOSd task
22     I           28          114     0      0.00      0.00    0.00    0       CMI IOSd task
20     I           1684        80886   0      0.00      0.00    0.00    0       IOSD taskmonitor
23     I           4           17      0      0.00      0.00    0.00    0       CMI IOSd task
25     I           4           5       0      0.00      0.00    0.00    0       CMI IOSd task
26     I           44916       1217786 0      0.00      0.00    0.00    0       EPC SM Liaison Upda
27     I           23652       1207133 0      0.00      0.00    0.00    0       EPC WS Test Pkt Sen
10     I           0           1       0      0.00      0.00    0.00    0       DiscardQ Background
9      I           5160        20228   0      0.00      0.00    0.00    0       Pool Manager
24     I           4           11      0      0.00      0.00    0.00    0       CMI IOSd task
16     I           4           11      0      0.00      0.00    0.00    0       ifIndex Receive Pro
32     I           0           1       0      0.00      0.00    0.00    0       AAA_SERVER_DEADTIME
33     I           0           1       0      0.00      0.00    0.00    0       Policy Manager
34     I           20          48      0      0.00      0.00    0.00    0       Entity MIB API
35     I           0           1       0      0.00      0.00    0.00    0       IFS Agent Manager
36     I           5228        241711  0      0.00      0.00    0.00    0       IPC Event Notifier
37     I           30908       1182832 0      0.00      0.00    0.00    0       IPC Mcast Pending G
38     I           0           2       0      0.00      0.00    0.00    0       Galios IPC Bootstra
39     I           348         20226   0      0.00      0.00    0.00    0       IPC Dynamic Cache
40     I           0           6       0      0.00      0.00    0.00    0       IPC Session Service
41     I           0           3       0      0.00      0.00    0.00    0       GaliosQuack_sudiRea
42     I           75976       1192743 0      0.00      0.00    0.00    0       IPC Periodic Timer
43     I           28168       1182832 0      0.00      0.00    0.00    0       IPC Deferred Port C
30     I           53284       1268854 0      0.00      0.00    0.00    0       ARP Background
15     I           256         191     0      0.00      0.00    0.00    0       RF Slave Main Threa

The IOS-XE version is:

Cisco IOS Software, IOS-XE Software, Catalyst 4500 L3 Switch  Software (cat4500es8-UNIVERSAL-M), Version 03.08.02.E RELEASE SOFTWARE (fc2)
Technical Support: http://www.cisco.com/techsupport
Copyright (c) 1986-2016 by Cisco Systems, Inc.
Compiled Mon 27-Jun-16 16:37 by prod_rel_team

Could someone help me troubleshoot this issue?

Many thanks,

Maurizio

9 Replies

dperezoquendo
Level 1

Hello,

Perhaps it is the following bug: https://bst.cloudapps.cisco.com/bugsearch/bug/CSCvc08452/?referring_site=bugquickviewredir

However, the only resolution appears to be to reload the switch. 

Hello,

the bug you posted refers to version 15.2(3)E3. My VSS is running the following software:

cat4500es8-universal.SPA.03.08.02.E.152-4.E2.bin

Running the command show platform cpu packet statistics all, I noticed that the main type of dropped packet is the Sa Miss.

it-zo-dc-sw-1#show platfor cpu packet statistics all | i Miss
Sa Miss                              0         0         0         0          0
PVMapping Miss                       0         0         0         0          0
Sa Miss                      673513558      2522      2241      2012       2731

Can someone help me find the cause?
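For background: on the Catalyst 4500, the Sa Miss counter counts packets punted to the CPU because their source MAC address has not been learned in hardware; a counter incrementing continuously at this rate often points to MAC address flapping (for example a loop, or hosts whose MACs keep being aged out or moved). One possible next step is to capture what is actually hitting the CPU. This is a sketch based on the platform's CPU-packet debug facility; the exact syntax can vary by release:

```
! Buffer packets punted to the CPU (run briefly, this is a debug)
it-zo-dc-sw-1#debug platform packet all receive buffer

! After a few seconds, inspect the buffered packets
it-zo-dc-sw-1#show platform cpu packet buffered

! Stop the capture
it-zo-dc-sw-1#undebug all
```

The source MACs and VLANs seen in the buffer should point to the offending hosts or ports; show mac address-table can then confirm whether those MACs are moving between interfaces.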

What is the uptime of the two chassis? 

Post the complete output of the command "sh int | i line protocol is|Total output drops". Put the output in a text file and attach the text file to your response.

Hi Leo,

here the uptime:

it-zo-dc-sw-1 uptime is 2 weeks, 1 day, 21 hours, 53 minutes

I attached the output of the suggested command.

Many thanks,

Maurizio

Thanks for the output.  It's very useful. 

Can you please explain what downstream clients are connected to the following physical interfaces:

  1. GigabitEthernet2/2/23; and
  2. GigabitEthernet2/5/40

Is GigabitEthernet2/5/40 a member of Port-channel3? 

And what client is connected to Port-channel6?

On interface GigabitEthernet2/2/23 (port-channel 6) there is a server used for backup.

On interface GigabitEthernet2/5/40 (port-channel 3) there is an access switch that connects a VMware environment.

The switch has an uptime of over 2 weeks, and during those 2 weeks the two interfaces I've just mentioned have been exhibiting a sizeable number of Total Output Drops.

Total Output Drops occur when the switchport is trying to send more data downstream than the downstream client can accept. When this happens, the excess traffic overflows the egress queue, is dropped, and the counter increments.

The high CPU utilization is caused by a process called "iosd". This process usually spikes when the switchport(s) are overwhelmed.
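To confirm the drops are ongoing rather than historical, one simple check (a sketch, using the interface names mentioned in the thread) is to clear the counters and re-check after a few minutes:

```
it-zo-dc-sw-1#clear counters GigabitEthernet2/2/23
it-zo-dc-sw-1#clear counters GigabitEthernet2/5/40

! Wait a few minutes, then see whether the drop counters are incrementing again
it-zo-dc-sw-1#show interfaces GigabitEthernet2/2/23 | include line protocol|output drops
it-zo-dc-sw-1#show interfaces GigabitEthernet2/5/40 | include line protocol|output drops
```

If the counters climb again immediately, the congestion is current and the downstream devices on those ports are the place to look.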

Hi Leo,

thanks for your answer. What about the output below?

it-zo-dc-sw-1#show platfor cpu packet statistics all | i Miss
Sa Miss                              0         0         0         0          0
PVMapping Miss                       0         0         0         0          0
Sa Miss                      673513558      2522      2241      2012       2731

We migrated the infrastructure from a stack of Cisco 3750s to the VSS. On the previous architecture there was no such problem.

What can I do to fix this issue?

On the previous architecture there was no such problem.

What can I do to fix this issue?

Maybe there was a QoS or traffic-shaping policy enabled on the old stack.
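If the old 3750 stack was indeed shaping or policing this traffic, one option to try on the 4500 (Sup7/8 IOS-XE supports MQC class-based shaping) is to pace egress traffic toward the congested ports so bursts are smoothed instead of tail-dropped. The policy name and rate below are illustrative assumptions, not values from the thread:

```
! Hypothetical egress shaping policy; set the rate to what the
! downstream device can actually absorb
policy-map SHAPE-EGRESS
 class class-default
  shape average 500000000
!
interface GigabitEthernet2/2/23
 service-policy output SHAPE-EGRESS
```

Alternatives include increasing the egress queue depth or enabling flow control on the downstream devices; whether either helps depends on the traffic pattern.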