Catalyst 3560 "IP Input" Process CPU Utilization

jordan.bean
Level 1

We just replaced our core network with two 3560Gs.  CORE1 is version 04; CORE2 is version 01.  Both are running about 10 VLANs with HSRP and have 12 EIGRP peers.  Both are running auto QoS.  We're passing about 50 Mbps through CORE1 and 3 Mbps through CORE2.

CORE1's CPU ranges from 8-12%.  CORE2's CPU was showing about 7% this weekend when network utilization was virtually nothing.  Now, with 3 Mbps through it, the CPU is at about 25-30%.  The IP Input process is at about 10-20%:

CPU utilization for five seconds: 17%/3%; one minute: 21%; five minutes: 26%
PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process
190     7080239  17633742        401  7.02% 10.04% 13.88%   0 IP Input

CORE2#show ip int | i CEF

Shows the following for each interface:

  IP CEF switching is enabled
  IP CEF switching turbo vector
  IP route-cache flags are Fast, CEF

There is no EIGRP reconvergence occurring - the queues are 0.

Could this difference be due to the different versions of the switches and is this cause for concern?  I find it odd that CORE1's CPU utilization is minimal at ~8% under peak load and CORE2's (which is essentially idle) CPU utilization is ~20-30%.


Yea, it looks like I'll have to get a TAC case opened.  I did a packet capture and confirmed there's nothing out of the ordinary with the packets.  No IP options, TTL=254, etc.  I'll update the thread once I know what's going on.  Thanks for your help.

You didn't find anything strange in the "show ip traffic"?

Sorry for not clarifying.  I connected a sniffer and analyzed the packets.  The UDP packets look normal from what I can see with only DSCP set.  'show ip traffic' is below.  I tried 'no ip options' on the switch and the Opts:alerts continued to increase and the same problem persists w/CPU % and queue depth.

CORE2#show ip traffic
IP statistics:
  Rcvd:  75419215 total, 3252246 local destination
         0 format errors, 0 checksum errors, 982 bad hop count
         0 unknown protocol, 4 not a gateway
         0 security failures, 0 bad options, 207902 with options
  Opts:  0 end, 0 nop, 0 basic security, 0 loose source route
         0 timestamp, 0 extended security, 0 record route
         0 stream ID, 0 strict source route, 207902 alert, 0 cipso, 0 ump
         0 other
  Frags: 2 reassembled, 0 timeouts, 0 couldn't reassemble
         73 fragmented, 0 couldn't fragment
  Bcast: 3939 received, 0 sent
  Mcast: 2192941 received, 731233 sent
  Sent:  1950863 generated, 244626854 forwarded
  Drop:  216628 encapsulation failed, 0 unresolved, 0 no adjacency
         0 no route, 0 unicast RPF, 0 forced drop
         20 options denied, 0 source IP address zero

ICMP statistics:
  Rcvd: 0 format errors, 13 checksum errors, 0 redirects, 37293 unreachable
        1145 echo, 44 echo reply, 0 mask requests, 0 mask replies, 0 quench
        0 parameter, 0 timestamp, 0 info request, 0 other
        0 irdp solicitations, 0 irdp advertisements
  Sent: 0 redirects, 0 unreachable, 65 echo, 1145 echo reply
        0 mask requests, 0 mask replies, 0 quench, 0 timestamp
        0 info reply, 148 time exceeded, 0 parameter problem
        0 irdp solicitations, 0 irdp advertisements

TCP statistics:
  Rcvd: 35013 total, 0 checksum errors, 3643 no port
  Sent: 29202 total

UDP statistics:
  Rcvd: 1747188 total, 0 checksum errors, 172 no port
  Sent: 635192 total, 7554 forwarded broadcasts

BGP statistics:
  Rcvd: 0 total, 0 opens, 0 notifications, 0 updates
        0 keepalives, 0 route-refresh, 0 unrecognized
  Sent: 0 total, 0 opens, 0 notifications, 0 updates
        0 keepalives, 0 route-refresh

EIGRP-IPv4 statistics:
  Rcvd: 894904 total
  Sent: 0 total

PIMv2 statistics: Sent/Received
  Total: 155085/459531, 0 checksum errors, 0 format errors
  Registers: 0/0 (0 non-rp, 0 non-sm-group), Register Stops: 0/0,  Hellos: 91218/204397
  Join/Prunes: 0/171888, Asserts: 63391/82740, grafts: 0/506
  Bootstraps: 0/0, Candidate_RP_Advertisements: 0/0
  State-Refresh: 0/0

IGMP statistics: Sent/Received
  Total: 26604/77112, Format errors: 0/0, Checksum errors: 0/0
  Host Queries: 23155/25805, Host Reports: 3449/46887, Host Leaves: 0/4420
  DVMRP: 0/0, PIM: 0/0
OSPF statistics:
  Rcvd: 0 total, 0 checksum errors
        0 hello, 0 database desc, 0 link state req
        0 link state updates, 0 link state acks

  Sent: 0 total
        0 hello, 0 database desc, 0 link state req
        0 link state updates, 0 link state acks

ARP statistics:
  Rcvd: 128502 requests, 787 replies, 4502 reverse, 0 other
  Sent: 216725 requests, 2003 replies (7 proxy), 0 reverse
  Drop due to input queue full: 0

It looks like you're sending a lot of ARP requests but not getting a lot of replies.  This usually indicates that the switch is looking for a device that should be directly connected but isn't.  When a packet reaches the switch, it tries to forward it in hardware.  If there's no hardware adjacency programmed, it sends the packet to the CPU to ARP.  If there's no ARP reply, there will never be an adjacency in the hardware table, and all packets for that destination will be sent to the CPU.

This is the most likely cause of the high CPU.
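If you want to confirm that theory, you can check whether a given destination has a complete hardware adjacency.  A sketch of the commands (the IP address and interface name here are just examples, not from this network):

```
! Look up the CEF entry for a suspect destination; a "glean" adjacency
! means the switch still has to ARP for the host
CORE2# show ip cef 10.1.1.100 detail

! Incomplete ARP entries show up without a MAC address
CORE2# show ip arp 10.1.1.100

! Inspect the adjacency table for the connected interface
CORE2# show adjacency GigabitEthernet0/1 detail
```

A destination stuck in "glean" with an incomplete ARP entry would match the punt behavior Dan describes.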


Dan

If you clear those counters:

100_#clear ip traffic
Clear "show ip traffic" counters [confirm]
..

Which ones do you see going up at a high rate that would correspond to the rate of increase for process switched inbound of the "show int stat" output?

Remember that the 'sh ip traffic' output is the aggregate of all interfaces.
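One way to do that correlation (the interface name below is just an example):

```
! Zero the aggregate IP counters, then wait a minute or two
CORE2# clear ip traffic
Clear "show ip traffic" counters [confirm]

! Watch which counters climb fastest
CORE2# show ip traffic | include options|alert|forwarded

! Compare against the process-switched input count per interface
CORE2# show interfaces GigabitEthernet0/1 stats
```

Whichever counter grows at roughly the same rate as the "Processor" switched-input column in the stats output points at the traffic being punted.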

I hadn't noticed the 207K packets with IP options!

I believe the IP Option alerts are created by IGMP.  We're running multicast and IGMP.  I checked our CORE1 and there are many ARP replies there.  Could the fact that CORE2 is the HSRP standby for all VLANs cause the ARP numbers?  "clear ip traffic" isn't an available option and I'm running the newest 3560 IOS.  :/

I've looked at the UDP packets and also confirmed the CEF entries are present, so this really doesn't make sense that they're being punted.  I think at this point I'm going to get the TAC case opened and see what we can figure out.  Hopefully TAC will take a closer look at the setup and something will stand out.

Just enter the command "clear ip traffic" and hit Enter. No idea why that one was hidden.

HSRP should not drive the ARP numbers really.

If all the traffic coming through the core sees one of the routers as the best path toward the destination subnet, it would ARP.  But remember, ARPs are only for directly connected IPs.

Did you confirm at what rate the 'sh ip traffic' counters were going up, and which ones were increasing the most, correlating to the higher CPU?

I agree...get someone from TAC on the box to take a closer look with you.

Post the resolution back to the thread once it's figured out.

Got TAC involved yesterday.  They looked through everything and didn't see a reason why packets were being process switched.  But another engineer got involved and said he recently saw something similar.  The fix was to change the SDM template from default to routing.  I figured that running as the default would be fine for us since we were under all limits.  But sure enough, changing the SDM template and rebooting the switch fixed the problem.
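For anyone hitting the same symptom, the change is along these lines on a 3560 (check your own template usage first; the routing template shifts TCAM resources toward unicast routes):

```
! Check which template is active and the resource allocation
CORE2# show sdm prefer

! Switch to the routing-optimized template
CORE2# configure terminal
CORE2(config)# sdm prefer routing
CORE2(config)# end

! The new template only takes effect after a reload
CORE2# copy running-config startup-config
CORE2# reload
```

Note the reload is mandatory; the switch keeps running the old template until then.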

Ok, thanks for the update.  I didn't see anything in the frames, the route, the CEF entries, or the feature configuration that I thought should cause it.  I suspected it was something platform specific.  I wish we had been able to pin down what exactly was causing the punts, though.  We don't know whether it was a hardware programming issue that got fixed by the reload or whether there truly is an SDM need there.

danrya
Level 1

Take a look at:

sh interface switching
sh ip interfaces
sh ip traffic

The last one might give insight into "the traffic type" that is being punted, like TTL=1.

Dan
