07-06-2012 12:05 AM - edited 03-07-2019 07:37 AM
Hi,
We are seeing high CPU utilization caused by the Cat4k Mgmt LoPri process on both core switches installed at the customer site.
HTAINHYD03XXXCS0001#sh processes cpu sorted
CPU utilization for five seconds: 89%/2%; one minute: 68%; five minutes: 59%
PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process
52 41015708283160588800 0 68.02% 43.64% 33.26% 0 Cat4k Mgmt LoPri
51 1020058096 667843543 1527 7.83% 11.70% 12.78% 0 Cat4k Mgmt HiPri
111 18148967922560591317 0 7.43% 7.41% 7.42% 0 Spanning Tree
HTAINHYD03XXXCS0002#sh processes cpu sorted
CPU utilization for five seconds: 66%/3%; one minute: 51%; five minutes: 48%
PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process
52 3078834762717954896 0 37.51% 20.72% 18.28% 0 Cat4k Mgmt LoPri
51 18107779121821980917 993 8.63% 13.57% 14.40% 0 Cat4k Mgmt HiPri
105 3375271842297524860 0 8.07% 4.48% 3.80% 0 IP Input
111 1901379976 80949735 23488 6.63% 6.75% 6.75% 0 Spanning Tree
223 91174320 296086319 307 1.43% 1.46% 1.45% 0 HSRP Common
I am also attaching the output of show version, show processes cpu sorted, show logging, and show interface for both switches.
Please advise on this issue and confirm whether there is a bug impact or some other specific cause.
Regards,
07-06-2012 12:51 AM
Hello Ash,
the processes
Cat4k Mgmt LoPri and Cat4k Mgmt HiPri
are specific to the Catalyst 4500 architecture. To troubleshoot your issue further, you should read the following document:
http://www.cisco.com/en/US/products/hw/switches/ps663/products_tech_note09186a00804cef15.shtml
These processes are actually containers for multiple platform processes, so you need to use
show platform health
to continue the investigation.
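As a side note, the show platform health output is long, so it can help to rank the platform processes by their actual %CPU mechanically. The sketch below assumes each row starts with the process name followed by the Target%CPU / Actual%CPU pair; the exact column layout varies by IOS release, so treat this only as an illustration:

```python
import re

# Row pattern: process name, then the Target%CPU / Actual%CPU pair.
# The column layout of 'show platform health' varies by IOS release,
# so this layout is an assumption used to illustrate the approach.
ROW = re.compile(r"^(\S.*?)\s+(\d+\.\d+)\s+(\d+\.\d+)\s")

def top_platform_processes(health_output, n=5):
    """Return the n platform processes with the highest actual %CPU."""
    rows = []
    for line in health_output.splitlines():
        m = ROW.match(line)
        if m:
            rows.append((m.group(1), float(m.group(2)), float(m.group(3))))
    # Sort by actual %CPU (third field), highest first
    return sorted(rows, key=lambda r: r[2], reverse=True)[:n]

# Hypothetical sample rows, shaped like typical 'show platform health' output
sample = """\
Lj-poll                 1.00   0.01      2     0    100  500   0:00
GalChassisVp-review     3.00   0.29     10    16    100  500  69:15
K2CpuMan Review        30.00  27.39     30    53    100  500 730:03
"""
for name, target, actual in top_platform_processes(sample, 3):
    print(f"{name:22s} target {target:6.2f}%  actual {actual:6.2f}%")
```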
Edit:
From the log buffer of the first switch, there are some messages that are quite important:
068722: May 8 11:47:15.797 IST: %C4K_HWACLMAN-4-ACLHWPROGERR: Input Security: Vlan522-ADWEA - hardware TCAM limit, some packet processing will be software switched.
068723: May 8 11:47:15.797 IST: %C4K_HWACLMAN-4-ACLHWPROGERRREASON: Input(null, 46/Normal) Security: Vlan522-ADWEA - insufficient hardware TCAM masks.
The same kind of messages are reported for other VLANs:
CLMAN-4-ACLHWPROGERR: Input Security: Vlan424-Armstrong - hardware TCAM limit, some packet processing will be software switched.
068743: May 14 12:59:41.618 IST: %C4K_HWACLMAN-4-ACLHWPROGERRREASON: Input(null, 16/Normal) Security: Vlan424-Armstrong - insufficient hardware TCAM masks.
CLMAN-4-ACLHWPROGERR: Input Security: Vlan427-Laureate - hardware TCAM limit, some packet processing will be software switched.
068747: May 14 13:01:37.008 IST: %C4K_HWACLMAN-4-ACLHWPROGERRREASON: Input(null, 30/Normal) Security: Vlan427-Laureate - insufficient hardware TCAM masks.
This means the device is falling back to process switching because the portion of the TCAM table dedicated to implementing ACLs is full. You should review the ACLs applied to SVIs Vlan522, Vlan440, Vlan424, and so on to understand why this happens. This may explain the high CPU usage on the device.
On the first device there is also an IP address conflict with an external host:
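To get a quick inventory of which SVIs are hitting the TCAM limit, the syslog messages can be tallied with a short script. This is only a sketch; the message format is taken from the log lines quoted above:

```python
import re
from collections import Counter

# Matches the VLAN interface name in C4K_HWACLMAN-4-ACLHWPROGERR /
# ACLHWPROGERRREASON messages, e.g.
# '... Input Security: Vlan522-ADWEA - hardware TCAM limit ...'
TCAM_ERR = re.compile(r"%C4K_HWACLMAN-4-ACLHWPROGERR\w*:.*?Security:\s+(\S+)\s+-")

def vlans_hitting_tcam_limit(log_text):
    """Count TCAM-exhaustion messages per VLAN interface."""
    return Counter(TCAM_ERR.search(line).group(1)
                   for line in log_text.splitlines()
                   if TCAM_ERR.search(line))

# Sample lines copied from the log buffer quoted in this thread
log = """\
068722: May  8 11:47:15.797 IST: %C4K_HWACLMAN-4-ACLHWPROGERR: Input Security: Vlan522-ADWEA - hardware TCAM limit, some packet processing will be software switched.
068743: May 14 12:59:41.618 IST: %C4K_HWACLMAN-4-ACLHWPROGERRREASON: Input(null, 16/Normal) Security: Vlan424-Armstrong - insufficient hardware TCAM masks.
"""
print(vlans_hitting_tcam_limit(log))
```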
069004: Jun 14 12:37:59.602 IST: %IP-4-DUPADDR: Duplicate address 10.119.81.161 on Vlan525, sourced by 4487.fc8a.2929
OUI search for 4487fc provides the following result:
44-87-FC (hex)    ELITEGROUP COMPUTER SYSTEM CO., LTD.
4487FC (base 16)  ELITEGROUP COMPUTER SYSTEM CO., LTD.
                  NO. 239, Sec. 2, Ti Ding Blvd., Taipei 11493, TAIWAN, REPUBLIC OF CHINA
Edit2:
the same kind of error appears in the log buffer of the second switch:
069846: Jul 4 16:30:55.682 IST: %C4K_HWACLMAN-4-ACLHWPROGERR: Input Security: Vlan480-Oracle - hardware TCAM limit, some packet processing will be software switched.
069847: Jul 4 16:30:55.682 IST: %C4K_HWACLMAN-4-ACLHWPROGERRREASON: Input(null, 47/Normal) Security: Vlan480-Oracle - insufficient hardware TCAM masks.
Hope to help
Giuseppe
07-06-2012 02:45 AM
Hello Ashutosh,
the issue is with the switch configuration: the ACLs or the policy maps used for QoS have consumed the entire portion of TCAM dedicated to this purpose, and this causes a fallback to process switching in several client VLANs, with an impact on CPU usage.
Edit:
you may want to open a TAC service request to get further help on this.
Hope to help
Giuseppe
07-06-2012 02:57 AM
Hi,
I agree with the reason explained here for this cause.
Please also confirm whether some other reason can be found with show platform cpu packet statistics all and show platform health. I hope that may help us.
Regards,
Ashutosh
07-06-2012 03:59 AM
Hello Ashutosh,
as far as I can see in the latest log files, there are no other reasons for the high CPU usage
Packets Received by Packet Queue
Queue Total 5 sec avg 1 min avg 5 min avg 1 hour avg
---------------------- --------------- --------- --------- --------- ----------
[output omitted ]
ACL sw processing 650584504 1338 1096 906 785
Then you see many items related to ACLs or QoS used at 100%,
like
Rkios QoS PolicyMaps 356.32 0% 0.03 100%
AclClassifierIdToCla 48.00 0% 0.07 100%
Rkios QoS ClassMaps 896.00 0% 0.16 100%
AclToIosFilterMapLis 384.00 0% 0.07 100%
and
AclOp 2048.00 0% 0.17 100%
AclOpAceSet 2559.96 0% 3.10 100%
AclClassifier 1280.00 0% 3.71 100%
AclFeature 2570.04 0% 9.35 100%
Acl 1408.00 0% 4.08 100%
Ace24 9215.85 3% 287.78 100%
Also to be noted, some entries related to PIM are at 100%:
PimPhyports 1054.68 24% 261.56 100%
PimPorts 906.25 31% 282.75 100%
PimModules 162.00 1% 3.16 100%
PimSlots 6.00 2% 0.16 100%
PimChassis 38.25 6% 2.39 100%
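The resource rows above can also be filtered mechanically so that only the fully used entries are listed. A sketch, assuming (as in the output quoted above) that the last field of each resource row is the utilization percentage:

```python
def fully_used_resources(output):
    """Return the names of resources whose last column reads '100%'.

    Assumes rows of the form: '<name ...> <value> <pct> <value> <pct>',
    where the resource name may contain spaces and the first numeric
    field marks the end of the name.
    """
    names = []
    for line in output.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[-1] == "100%":
            # The resource name is everything before the first numeric field
            name_tokens = []
            for tok in parts:
                try:
                    float(tok)
                    break
                except ValueError:
                    name_tokens.append(tok)
            names.append(" ".join(name_tokens))
    return names

# Sample rows shaped like the output quoted in this thread;
# 'FreeResource' is a made-up row to show a non-exhausted entry
sample = """\
Rkios QoS PolicyMaps    356.32   0%     0.03  100%
PimSlots                  6.00   2%     0.16  100%
FreeResource             10.00   0%     0.01   40%
"""
print(fully_used_resources(sample))
```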
Hope to help
Giuseppe
03-22-2018 12:22 AM
Kir-CS# sh platform cpu pack stat all
Packets Dropped In Hardware By CPU Subport (txQueueNotAvail)
CPU Subport TxQueue 0 TxQueue 1 TxQueue 2 TxQueue 3
------------ --------------- --------------- --------------- ---------------
0 0 46542 0 2714142
1 0 0 932904 0
2 0 0 0 0
3 0 0 0 0
4 0 0 0 0
5 0 0 0 0
6 0 0 0 14801
7 0 0 0 0
Packets Enqueued Overall
Total 5 sec avg 1 min avg 5 min avg 1 hour avg
-------------------- --------- --------- --------- ----------
0 0 0 0 0
Packets Not Enqueued Overall
Total 5 sec avg 1 min avg 5 min avg 1 hour avg
-------------------- --------- --------- --------- ----------
0 0 0 0 0
Packets Dropped In Processing Overall
Total 5 sec avg 1 min avg 5 min avg 1 hour avg
-------------------- --------- --------- --------- ----------
122984 1 1 1 1
Packets Not Enqueued by CPU event
Event Total 5 sec avg 1 min avg 5 min avg 1 hour avg
----------------- -------------------- --------- --------- --------- ----------
Control Packet 0 0 0 0 0
Input Acl 0 0 0 0 0
RPF Fail 0 0 0 0 0
Adjacency Same If 0 0 0 0 0
L3 Forward 0 0 0 0 0
NFL Copy To CPU 0 0 0 0 0
Output Acl 0 0 0 0 0
MTU Check Fail 0 0 0 0 0
SA Miss 0 0 0 0 0
L2 Forward 0 0 0 0 0
SPAN 0 0 0 0 0
CPU Generated 0 0 0 0 0
Unknown 0 0 0 0 0
Packets Dropped In Processing by CPU event
Event Total 5 sec avg 1 min avg 5 min avg 1 hour avg
----------------- -------------------- --------- --------- --------- ----------
Control Packet 122848 1 1 1 1
Input Acl 0 0 0 0 0
RPF Fail 0 0 0 0 0
Adjacency Same If 0 0 0 0 0
L3 Forward 0 0 0 0 0
NFL Copy To CPU 0 0 0 0 0
Output Acl 0 0 0 0 0
MTU Check Fail 0 0 0 0 0
SA Miss 136 0 0 0 0
L2 Forward 0 0 0 0 0
SPAN 0 0 0 0 0
CPU Generated 0 0 0 0 0
Unknown 0 0 0 0 0
Packets Not Enqueued by Priority
Priority Total 5 sec avg 1 min avg 5 min avg 1 hour avg
----------------- -------------------- --------- --------- --------- ----------
Unknown 0 0 0 0 0
Normal 0 0 0 0 0
Medium 0 0 0 0 0
High 0 0 0 0 0
Crucial 0 0 0 0 0
Super Crucial 0 0 0 0 0
Packets Dropped In Processing by Priority
Priority Total 5 sec avg 1 min avg 5 min avg 1 hour avg
----------------- -------------------- --------- --------- --------- ----------
Unknown 0 0 0 0 0
Normal 0 0 0 0 0
Medium 136 0 0 0 0
High 0 0 0 0 0
Crucial 122848 1 1 1 1
Super Crucial 0 0 0 0 0
Packets Not Enqueued by Reason
Reason Total 5 sec avg 1 min avg 5 min avg 1 hour avg
---------------- --------------------- --------- --------- --------- ----------
RxQueueNotAvail 0 0 0 0 0
RxNoBuffersAvail 0 0 0 0 0
RxBadCpuEvent 0 0 0 0 0
Testing 0 0 0 0 0
Packets Dropped In Processing by Reason
Reason Total 5 sec avg 1 min avg 5 min avg 1 hour avg
------------------ -------------------- --------- --------- --------- ----------
BadPddType 0 0 0 0 0
VlanZeroBadCrc 0 0 0 0 0
VlanZeroGoodCrc 0 0 0 0 0
ReplVlan0GoodCrc 0 0 0 0 0
NoPimPhyportMap 0 0 0 0 0
NoPimPhyport 0 0 0 0 0
UnimplementedEvt 0 0 0 0 0
SrcAddrTableFilt 120 0 0 0 0
NoL2Vlan 114319 1 1 1 1
STPDrop 10 0 0 0 0
UnknownLyr-Bridge 0 0 0 0 0
UnknownLyr-OutAcl 0 0 0 0 0
L2DstDrop 17 0 0 0 0
L2DstDropInAcl 8518 0 0 0 0
L2DstDropOutAcl 0 0 0 0 0
AclNoVlan 0 0 0 0 0
AclActionDrop 0 0 0 0 0
AclActionRedirect 0 0 0 0 0
AclActionUnsupptd 0 0 0 0 0
UnknownBridgeAct 0 0 0 0 0
NoTxInfo 0 0 0 0 0
NoDstPorts 0 0 0 0 0
NoFloodPorts 0 0 0 0 0
ControlNoL2DstEnt 0 0 0 0 0
InvalidAclAction 0 0 0 0 0
AclInputErr 0 0 0 0 0
OutAclEvntOnInAcl 0 0 0 0 0
AclOutputErr 0 0 0 0 0
ReservedPort 0 0 0 0 0
EsmpNoTxInfo 0 0 0 0 0
UnknownSource 0 0 0 0 0
NoDriverTxPackets 0 0 0 0 0
NoPimPort 0 0 0 0 0
RouteLookupFailed 0 0 0 0 0
FragPktAllocFailed 0 0 0 0 0
IpHdrException 0 0 0 0 0
SpanEvent 0 0 0 0 0
Total packet queues 16
Packets Received by Packet Queue
Queue Total 5 sec avg 1 min avg 5 min avg 1 hour avg
---------------------- --------------- --------- --------- --------- ----------
Esmp 4975318 65 65 64 64
Control 366871 4 4 4 4
Host Learning 86567 3 3 2 2
L3 Fwd High 169 0 0 0 0
L3 Fwd Medium 120 0 0 0 0
L3 Fwd Low 443753939 2919 2555 1725 2538
L2 Fwd High 0 0 0 0 0
L2 Fwd Medium 46543 0 0 0 0
L2 Fwd Low 2784694 73 66 63 60
L3 Rx High 0 0 0 0 0
L3 Rx Low 171033 5 3 3 2
RPF Failure 0 0 0 0 0
ACL fwd(snooping) 1038682 29 37 38 36
ACL log, unreach 313 0 0 0 0
ACL sw processing 0 0 0 0 0
MTU Fail/Invalid 0 0 0 0 0
Packets Dropped by Packet Queue
Queue Total 5 sec avg 1 min avg 5 min avg 1 hour avg
---------------------- --------------- --------- --------- --------- ----------
Esmp 0 0 0 0 0
Control 0 0 0 0 0
Host Learning 0 0 0 0 0
L3 Fwd High 0 0 0 0 0
L3 Fwd Medium 0 0 0 0 0
L3 Fwd Low 154934996 112 90 33 153
L2 Fwd High 0 0 0 0 0
L2 Fwd Medium 0 0 0 0 0
L2 Fwd Low 0 0 0 0 0
L3 Rx High 0 0 0 0 0
L3 Rx Low 0 0 0 0 0
RPF Failure 0 0 0 0 0
ACL fwd(snooping) 0 0 0 0 0
ACL log, unreach 0 0 0 0 0
ACL sw processing 0 0 0 0 0
MTU Fail/Invalid 0 0 0 0 0
09-27-2013 04:25 AM
Hi HCL,
Have you resolved this issue? If so, how did you resolve it? I have the same problem too. :-(
Best regards,
Alcides Miguel
10-14-2015 07:32 PM
I used "storm-control broadcast level 2M" on all interfaces to find out which interface was the problem.
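For anyone trying the same approach, storm control is configured per interface, and the threshold can be given as a percentage of bandwidth (some platforms and releases also accept a pps form, which the "2M" above suggests). A hypothetical example on one access port; the interface name and the 2.00% threshold are illustrative only:

```
interface GigabitEthernet1/1
 ! Suppress broadcast traffic above 2% of the port bandwidth
 storm-control broadcast level 2.00
 ! Send an SNMP trap when the threshold is exceeded,
 ! which helps identify the offending interface
 storm-control action trap
```

Checking the counters afterwards with show storm-control shows which interfaces are exceeding the threshold.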
06-22-2016 08:52 AM
We had enabled some VMware servers, and these produced the high CPU on the 4500.
While reviewing how they operate, we found this document and delivered it to the server administrator:
https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2006129
Regards.